Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pci-v6.6-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:
- Add locking to read/modify/write PCIe Capability Register accessors
for Link Control and Root Control
- Use pci_dev_id() when possible instead of manually composing ID
from dev->bus->number and dev->devfn

Resource management:
- Move prototypes for __weak sysfs resource files to linux/pci.h to
fix 'no previous prototype' warnings
- Make more I/O port accesses depend on HAS_IOPORT
- Use devm_platform_get_and_ioremap_resource() instead of open-coding
platform_get_resource() followed by devm_ioremap_resource()

Power management:
- Ensure devices are powered up while accessing VPD
- If device is powered-up, keep it that way while polling for PME
- Only read PCI_PM_CTRL register when available, to avoid reading the
wrong register and corrupting dev->current_state

Virtualization:
- Avoid Secondary Bus Reset on NVIDIA T4 GPUs

Error handling:
- Remove unused pci_disable_pcie_error_reporting()
- Unexport pci_enable_pcie_error_reporting(), used only by aer.c
- Unexport pcie_port_bus_type, used only by PCI core

VGA:
- Simplify and clean up typos in VGA arbiter

Apple PCIe controller driver:
- Initialize pcie->nvecs (number of available MSIs) before use

Broadcom iProc PCIe controller driver:
- Use of_property_read_bool() instead of low-level accessors for
boolean properties

Broadcom STB PCIe controller driver:
- Assert PERST# when probing BCM2711 because some bootloaders don't
do it

Freescale i.MX6 PCIe controller driver:
- Add .host_deinit() callback so we can clean up things like
regulators on probe failure or driver unload

Freescale Layerscape PCIe controller driver:
- Add support for link-down notification so the endpoint driver can
process LINK_DOWN events
- Add suspend/resume support, including manual
PME_Turn_off/PME_TO_Ack handshake
- Save Link Capabilities during probe so they can be restored when
handling a link-up event, since the controller loses the Link Width
and Link Speed values during reset

Intel VMD host bridge driver:
- Fix disable of bridge windows during domain reset; previously we
cleared the base/limit registers, which actually left the windows
enabled

Marvell MVEBU PCIe controller driver:
- Remove unused busn member

Microchip PolarFire PCIe controller driver:
- Fix interrupt bit definitions so the SEC and DED interrupt handlers
work correctly
- Make driver buildable as a module
- Read FPGA MSI configuration parameters from hardware instead of
hard-coding them

Microsoft Hyper-V host bridge driver:
- To avoid a NULL pointer dereference, skip MSI restore after
hibernate if MSI/MSI-X hasn't been enabled

NVIDIA Tegra194 PCIe controller driver:
- Revert 'PCI: tegra194: Enable support for 256 Byte payload' because
Linux doesn't know how to reduce MPS from 256 to 128 bytes for
endpoints below a switch (because other devices below the switch
might already be operating), which leads to 'Malformed TLP' errors

Qualcomm PCIe controller driver:
- Add DT and driver support for interconnect bandwidth voting for
'pcie-mem' and 'cpu-pcie' interconnects
- Fix broken SDX65 'compatible' DT property
- Configure controller so MHI bus master clock will be switched off
while in ASPM L1.x states
- Use alignment restriction from EPF core in EPF MHI driver
- Add Endpoint eDMA support
- Add MHI eDMA support
- Add Snapdragon SM8450 support to the EPF MHI driver
- Use iATU for EPF MHI transfers smaller than 4K to avoid eDMA setup
latency
- Add sa8775p DT binding and driver support

Rockchip PCIe controller driver:
- Use 64-bit mask on MSI 64-bit PCI address to avoid zeroing out the
upper 32 bits

SiFive FU740 PCIe controller driver:
- Set the supported number of MSI vectors so we can use all available
MSI interrupts

Synopsys DesignWare PCIe controller driver:
- Add generic dwc suspend/resume APIs (dw_pcie_suspend_noirq() and
dw_pcie_resume_noirq()) to be called by controller driver
suspend/resume ops, and a controller callback to send PME_Turn_Off

MicroSemi Switchtec management driver:
- Add support for PCIe Gen5 devices

Miscellaneous:
- Reorder and compress to reduce size of struct pci_dev
- Fix race in DOE destroy_work_on_stack()
- Add stubs to avoid casts between incompatible function types
- Explicitly include correct DT includes to untangle headers"

* tag 'pci-v6.6-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (96 commits)
PCI: qcom-ep: Add ICC bandwidth voting support
dt-bindings: PCI: qcom: ep: Add interconnects path
PCI: qcom-ep: Treat unknown IRQ events as an error
dt-bindings: PCI: qcom: Fix SDX65 compatible
PCI: endpoint: Add kernel-doc for pci_epc_mem_init() API
PCI: epf-mhi: Use iATU for small transfers
PCI: epf-mhi: Add support for SM8450
PCI: epf-mhi: Add eDMA support
PCI: qcom-ep: Add eDMA support
PCI: epf-mhi: Make use of the alignment restriction from EPF core
PCI/PM: Only read PCI_PM_CTRL register when available
PCI: qcom: Add support for sa8775p SoC
dt-bindings: PCI: qcom: Add sa8775p compatible
PCI: qcom-ep: Pass alignment restriction to the EPF core
PCI: Simplify pcie_capability_clear_and_set_word() control flow
PCI: Tidy config space save/restore messages
PCI: Fix code formatting inconsistencies
PCI: Fix typos in docs and comments
PCI: Fix pci_bus_resetable(), pci_slot_resetable() name typos
PCI: Simplify pci_dev_driver()
...

+1622 -870
+6 -6
Documentation/PCI/pci-error-recovery.rst
··· 17 17 and the PCI-host bridges found on IBM Power4, Power5 and Power6-based 18 18 pSeries boxes. A typical action taken is to disconnect the affected device, 19 19 halting all I/O to it. The goal of a disconnection is to avoid system 20 - corruption; for example, to halt system memory corruption due to DMA's 20 + corruption; for example, to halt system memory corruption due to DMAs 21 21 to "wild" addresses. Typically, a reconnection mechanism is also 22 22 offered, so that the affected PCI device(s) are reset and put back 23 23 into working condition. The reset phase requires coordination ··· 178 178 complex and not worth implementing. 179 179 180 180 The current powerpc implementation doesn't much care if the device 181 - attempts I/O at this point, or not. I/O's will fail, returning 181 + attempts I/O at this point, or not. I/Os will fail, returning 182 182 a value of 0xff on read, and writes will be dropped. If more than 183 - EEH_MAX_FAILS I/O's are attempted to a frozen adapter, EEH 183 + EEH_MAX_FAILS I/Os are attempted to a frozen adapter, EEH 184 184 assumes that the device driver has gone into an infinite loop 185 185 and prints an error to syslog. A reboot is then required to 186 186 get the device working again. ··· 204 204 .. note:: 205 205 206 206 The following is proposed; no platform implements this yet: 207 - Proposal: All I/O's should be done _synchronously_ from within 207 + Proposal: All I/Os should be done _synchronously_ from within 208 208 this callback, errors triggered by them will be returned via 209 209 the normal pci_check_whatever() API, no new error_detected() 210 210 callback will be issued due to an error happening here. However, ··· 258 258 soft reset(default) and fundamental(optional) reset. 
259 259 260 260 Powerpc soft reset consists of asserting the adapter #RST line and then 261 - restoring the PCI BAR's and PCI configuration header to a state 261 + restoring the PCI BARs and PCI configuration header to a state 262 262 that is equivalent to what it would be after a fresh system 263 263 power-on followed by power-on BIOS/system firmware initialization. 264 264 Soft reset is also known as hot-reset. ··· 362 362 the operator will probably want to remove and replace the device. 363 363 Note, however, not all failures are truly "permanent". Some are 364 364 caused by over-heating, some by a poorly seated card. Many 365 - PCI error events are caused by software bugs, e.g. DMA's to 365 + PCI error events are caused by software bugs, e.g. DMAs to 366 366 wild addresses or bogus split transactions due to programming 367 367 errors. See the discussion in Documentation/powerpc/eeh-pci-error-recovery.rst 368 368 for additional detail on real-life experience of the causes of
+9 -5
Documentation/PCI/pciebus-howto.rst
··· 213 213 -------------------- 214 214 215 215 Each service driver runs its PCI config operations on its own 216 - capability structure except the PCI Express capability structure, in 217 - which Root Control register and Device Control register are shared 218 - between PME and AER. This patch assumes that all service drivers 219 - will be well behaved and not overwrite other service driver's 220 - configuration settings. 216 + capability structure except the PCI Express capability structure, 217 + that is shared between many drivers including the service drivers. 218 + RMW Capability accessors (pcie_capability_clear_and_set_word(), 219 + pcie_capability_set_word(), and pcie_capability_clear_word()) protect 220 + a selected set of PCI Express Capability Registers (Link Control 221 + Register and Root Control Register). Any change to those registers 222 + should be performed using RMW accessors to avoid problems due to 223 + concurrent updates. For the up-to-date list of protected registers, 224 + see pcie_capability_clear_and_set_word().
+22 -5
Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml
··· 11 11 12 12 properties: 13 13 compatible: 14 - enum: 15 - - qcom,sdx55-pcie-ep 16 - - qcom,sdx65-pcie-ep 17 - - qcom,sm8450-pcie-ep 14 + oneOf: 15 + - enum: 16 + - qcom,sdx55-pcie-ep 17 + - qcom,sm8450-pcie-ep 18 + - items: 19 + - const: qcom,sdx65-pcie-ep 20 + - const: qcom,sdx55-pcie-ep 18 21 19 22 reg: 20 23 items: ··· 74 71 description: GPIO used as WAKE# output signal 75 72 maxItems: 1 76 73 74 + interconnects: 75 + maxItems: 2 76 + 77 + interconnect-names: 78 + items: 79 + - const: pcie-mem 80 + - const: cpu-pcie 81 + 77 82 resets: 78 83 maxItems: 1 79 84 ··· 109 98 - interrupts 110 99 - interrupt-names 111 100 - reset-gpios 101 + - interconnects 102 + - interconnect-names 112 103 - resets 113 104 - reset-names 114 105 - power-domains ··· 123 110 contains: 124 111 enum: 125 112 - qcom,sdx55-pcie-ep 126 - - qcom,sdx65-pcie-ep 127 113 then: 128 114 properties: 129 115 clocks: ··· 179 167 - | 180 168 #include <dt-bindings/clock/qcom,gcc-sdx55.h> 181 169 #include <dt-bindings/gpio/gpio.h> 170 + #include <dt-bindings/interconnect/qcom,sdx55.h> 182 171 #include <dt-bindings/interrupt-controller/arm-gic.h> 172 + 183 173 pcie_ep: pcie-ep@1c00000 { 184 174 compatible = "qcom,sdx55-pcie-ep"; 185 175 reg = <0x01c00000 0x3000>, ··· 208 194 interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>, 209 195 <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>; 210 196 interrupt-names = "global", "doorbell"; 197 + interconnects = <&system_noc MASTER_PCIE &mc_virt SLAVE_EBI_CH0>, 198 + <&mem_noc MASTER_AMPSS_M0 &system_noc SLAVE_PCIE_0>; 199 + interconnect-names = "pcie-mem", "cpu-pcie"; 211 200 reset-gpios = <&tlmm 57 GPIO_ACTIVE_LOW>; 212 201 wake-gpios = <&tlmm 53 GPIO_ACTIVE_LOW>; 213 202 resets = <&gcc GCC_PCIE_BCR>;
+28
Documentation/devicetree/bindings/pci/qcom,pcie.yaml
··· 29 29 - qcom,pcie-msm8996 30 30 - qcom,pcie-qcs404 31 31 - qcom,pcie-sa8540p 32 + - qcom,pcie-sa8775p 32 33 - qcom,pcie-sc7280 33 34 - qcom,pcie-sc8180x 34 35 - qcom,pcie-sc8280xp ··· 212 211 compatible: 213 212 contains: 214 213 enum: 214 + - qcom,pcie-sa8775p 215 215 - qcom,pcie-sc7280 216 216 - qcom,pcie-sc8180x 217 217 - qcom,pcie-sc8280xp ··· 750 748 compatible: 751 749 contains: 752 750 enum: 751 + - qcom,pcie-sa8775p 752 + then: 753 + properties: 754 + clocks: 755 + minItems: 5 756 + maxItems: 5 757 + clock-names: 758 + items: 759 + - const: aux # Auxiliary clock 760 + - const: cfg # Configuration clock 761 + - const: bus_master # Master AXI clock 762 + - const: bus_slave # Slave AXI clock 763 + - const: slave_q2a # Slave Q2A clock 764 + resets: 765 + maxItems: 1 766 + reset-names: 767 + items: 768 + - const: pci # PCIe core reset 769 + 770 + - if: 771 + properties: 772 + compatible: 773 + contains: 774 + enum: 753 775 - qcom,pcie-sa8540p 776 + - qcom,pcie-sa8775p 754 777 - qcom,pcie-sc8280xp 755 778 then: 756 779 required: ··· 817 790 contains: 818 791 enum: 819 792 - qcom,pcie-msm8996 793 + - qcom,pcie-sa8775p 820 794 - qcom,pcie-sc7280 821 795 - qcom,pcie-sc8180x 822 796 - qcom,pcie-sdm845
-3
arch/alpha/include/asm/pci.h
··· 88 88 enum pci_mmap_state mmap_type); 89 89 #define HAVE_PCI_LEGACY 1 90 90 91 - extern int pci_create_resource_files(struct pci_dev *dev); 92 - extern void pci_remove_resource_files(struct pci_dev *dev); 93 - 94 91 #endif /* __ALPHA_PCI_H */
+2 -2
arch/x86/pci/irq.c
··· 136 136 if (ir->signature != IRT_SIGNATURE || !ir->used || ir->size < ir->used) 137 137 return NULL; 138 138 139 - size = sizeof(*ir) + ir->used * sizeof(ir->slots[0]); 139 + size = struct_size(ir, slots, ir->used); 140 140 if (size > limit - addr) 141 141 return NULL; 142 142 143 143 DBG(KERN_DEBUG "PCI: $IRT Interrupt Routing Table found at 0x%lx\n", 144 144 __pa(ir)); 145 145 146 - size = sizeof(*rt) + ir->used * sizeof(rt->slots[0]); 146 + size = struct_size(rt, slots, ir->used); 147 147 rt = kzalloc(size, GFP_KERNEL); 148 148 if (!rt) 149 149 return NULL;
+10 -26
drivers/gpu/drm/amd/amdgpu/cik.c
··· 1574 1574 u16 bridge_cfg2, gpu_cfg2; 1575 1575 u32 max_lw, current_lw, tmp; 1576 1576 1577 - pcie_capability_read_word(root, PCI_EXP_LNKCTL, 1578 - &bridge_cfg); 1579 - pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL, 1580 - &gpu_cfg); 1581 - 1582 - tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD; 1583 - pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16); 1584 - 1585 - tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD; 1586 - pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL, 1587 - tmp16); 1577 + pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); 1578 + pcie_capability_set_word(adev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); 1588 1579 1589 1580 tmp = RREG32_PCIE(ixPCIE_LC_STATUS1); 1590 1581 max_lw = (tmp & PCIE_LC_STATUS1__LC_DETECTED_LINK_WIDTH_MASK) >> ··· 1628 1637 msleep(100); 1629 1638 1630 1639 /* linkctl */ 1631 - pcie_capability_read_word(root, PCI_EXP_LNKCTL, 1632 - &tmp16); 1633 - tmp16 &= ~PCI_EXP_LNKCTL_HAWD; 1634 - tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD); 1635 - pcie_capability_write_word(root, PCI_EXP_LNKCTL, 1636 - tmp16); 1637 - 1638 - pcie_capability_read_word(adev->pdev, 1639 - PCI_EXP_LNKCTL, 1640 - &tmp16); 1641 - tmp16 &= ~PCI_EXP_LNKCTL_HAWD; 1642 - tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD); 1643 - pcie_capability_write_word(adev->pdev, 1644 - PCI_EXP_LNKCTL, 1645 - tmp16); 1640 + pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL, 1641 + PCI_EXP_LNKCTL_HAWD, 1642 + bridge_cfg & 1643 + PCI_EXP_LNKCTL_HAWD); 1644 + pcie_capability_clear_and_set_word(adev->pdev, PCI_EXP_LNKCTL, 1645 + PCI_EXP_LNKCTL_HAWD, 1646 + gpu_cfg & 1647 + PCI_EXP_LNKCTL_HAWD); 1646 1648 1647 1649 /* linkctl2 */ 1648 1650 pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
+10 -26
drivers/gpu/drm/amd/amdgpu/si.c
··· 2276 2276 u16 bridge_cfg2, gpu_cfg2; 2277 2277 u32 max_lw, current_lw, tmp; 2278 2278 2279 - pcie_capability_read_word(root, PCI_EXP_LNKCTL, 2280 - &bridge_cfg); 2281 - pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL, 2282 - &gpu_cfg); 2283 - 2284 - tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD; 2285 - pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16); 2286 - 2287 - tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD; 2288 - pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL, 2289 - tmp16); 2279 + pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); 2280 + pcie_capability_set_word(adev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); 2290 2281 2291 2282 tmp = RREG32_PCIE(PCIE_LC_STATUS1); 2292 2283 max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT; ··· 2322 2331 2323 2332 mdelay(100); 2324 2333 2325 - pcie_capability_read_word(root, PCI_EXP_LNKCTL, 2326 - &tmp16); 2327 - tmp16 &= ~PCI_EXP_LNKCTL_HAWD; 2328 - tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD); 2329 - pcie_capability_write_word(root, PCI_EXP_LNKCTL, 2330 - tmp16); 2331 - 2332 - pcie_capability_read_word(adev->pdev, 2333 - PCI_EXP_LNKCTL, 2334 - &tmp16); 2335 - tmp16 &= ~PCI_EXP_LNKCTL_HAWD; 2336 - tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD); 2337 - pcie_capability_write_word(adev->pdev, 2338 - PCI_EXP_LNKCTL, 2339 - tmp16); 2334 + pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL, 2335 + PCI_EXP_LNKCTL_HAWD, 2336 + bridge_cfg & 2337 + PCI_EXP_LNKCTL_HAWD); 2338 + pcie_capability_clear_and_set_word(adev->pdev, PCI_EXP_LNKCTL, 2339 + PCI_EXP_LNKCTL_HAWD, 2340 + gpu_cfg & 2341 + PCI_EXP_LNKCTL_HAWD); 2340 2342 2341 2343 pcie_capability_read_word(root, PCI_EXP_LNKCTL2, 2342 2344 &tmp16);
+10 -26
drivers/gpu/drm/radeon/cik.c
··· 9534 9534 u16 bridge_cfg2, gpu_cfg2; 9535 9535 u32 max_lw, current_lw, tmp; 9536 9536 9537 - pcie_capability_read_word(root, PCI_EXP_LNKCTL, 9538 - &bridge_cfg); 9539 - pcie_capability_read_word(rdev->pdev, PCI_EXP_LNKCTL, 9540 - &gpu_cfg); 9541 - 9542 - tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD; 9543 - pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16); 9544 - 9545 - tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD; 9546 - pcie_capability_write_word(rdev->pdev, PCI_EXP_LNKCTL, 9547 - tmp16); 9537 + pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); 9538 + pcie_capability_set_word(rdev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); 9548 9539 9549 9540 tmp = RREG32_PCIE_PORT(PCIE_LC_STATUS1); 9550 9541 max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT; ··· 9582 9591 msleep(100); 9583 9592 9584 9593 /* linkctl */ 9585 - pcie_capability_read_word(root, PCI_EXP_LNKCTL, 9586 - &tmp16); 9587 - tmp16 &= ~PCI_EXP_LNKCTL_HAWD; 9588 - tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD); 9589 - pcie_capability_write_word(root, PCI_EXP_LNKCTL, 9590 - tmp16); 9591 - 9592 - pcie_capability_read_word(rdev->pdev, 9593 - PCI_EXP_LNKCTL, 9594 - &tmp16); 9595 - tmp16 &= ~PCI_EXP_LNKCTL_HAWD; 9596 - tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD); 9597 - pcie_capability_write_word(rdev->pdev, 9598 - PCI_EXP_LNKCTL, 9599 - tmp16); 9594 + pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL, 9595 + PCI_EXP_LNKCTL_HAWD, 9596 + bridge_cfg & 9597 + PCI_EXP_LNKCTL_HAWD); 9598 + pcie_capability_clear_and_set_word(rdev->pdev, PCI_EXP_LNKCTL, 9599 + PCI_EXP_LNKCTL_HAWD, 9600 + gpu_cfg & 9601 + PCI_EXP_LNKCTL_HAWD); 9600 9602 9601 9603 /* linkctl2 */ 9602 9604 pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
+10 -27
drivers/gpu/drm/radeon/si.c
··· 7131 7131 u16 bridge_cfg2, gpu_cfg2; 7132 7132 u32 max_lw, current_lw, tmp; 7133 7133 7134 - pcie_capability_read_word(root, PCI_EXP_LNKCTL, 7135 - &bridge_cfg); 7136 - pcie_capability_read_word(rdev->pdev, PCI_EXP_LNKCTL, 7137 - &gpu_cfg); 7138 - 7139 - tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD; 7140 - pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16); 7141 - 7142 - tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD; 7143 - pcie_capability_write_word(rdev->pdev, PCI_EXP_LNKCTL, 7144 - tmp16); 7134 + pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); 7135 + pcie_capability_set_word(rdev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); 7145 7136 7146 7137 tmp = RREG32_PCIE(PCIE_LC_STATUS1); 7147 7138 max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT; ··· 7179 7188 msleep(100); 7180 7189 7181 7190 /* linkctl */ 7182 - pcie_capability_read_word(root, PCI_EXP_LNKCTL, 7183 - &tmp16); 7184 - tmp16 &= ~PCI_EXP_LNKCTL_HAWD; 7185 - tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD); 7186 - pcie_capability_write_word(root, 7187 - PCI_EXP_LNKCTL, 7188 - tmp16); 7189 - 7190 - pcie_capability_read_word(rdev->pdev, 7191 - PCI_EXP_LNKCTL, 7192 - &tmp16); 7193 - tmp16 &= ~PCI_EXP_LNKCTL_HAWD; 7194 - tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD); 7195 - pcie_capability_write_word(rdev->pdev, 7196 - PCI_EXP_LNKCTL, 7197 - tmp16); 7191 + pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL, 7192 + PCI_EXP_LNKCTL_HAWD, 7193 + bridge_cfg & 7194 + PCI_EXP_LNKCTL_HAWD); 7195 + pcie_capability_clear_and_set_word(rdev->pdev, PCI_EXP_LNKCTL, 7196 + PCI_EXP_LNKCTL_HAWD, 7197 + gpu_cfg & 7198 + PCI_EXP_LNKCTL_HAWD); 7198 7199 7199 7200 /* linkctl2 */ 7200 7201 pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
+8 -13
drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
··· 338 338 list_for_each_entry(sdev, &bridge_bus->devices, bus_list) { 339 339 err = pci_read_config_word(sdev, PCI_DEVICE_ID, &sdev_id); 340 340 if (err) 341 - return err; 341 + return pcibios_err_to_errno(err); 342 342 if (sdev_id != dev_id) { 343 343 mlx5_core_warn(dev, "unrecognized dev_id (0x%x)\n", sdev_id); 344 344 return -EPERM; ··· 398 398 399 399 err = pci_read_config_word(dev->pdev, PCI_DEVICE_ID, &dev_id); 400 400 if (err) 401 - return err; 401 + return pcibios_err_to_errno(err); 402 402 err = mlx5_check_dev_ids(dev, dev_id); 403 403 if (err) 404 404 return err; ··· 411 411 pci_cfg_access_lock(sdev); 412 412 } 413 413 /* PCI link toggle */ 414 - err = pci_read_config_word(bridge, cap + PCI_EXP_LNKCTL, &reg16); 414 + err = pcie_capability_set_word(bridge, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_LD); 415 415 if (err) 416 - return err; 417 - reg16 |= PCI_EXP_LNKCTL_LD; 418 - err = pci_write_config_word(bridge, cap + PCI_EXP_LNKCTL, reg16); 419 - if (err) 420 - return err; 416 + return pcibios_err_to_errno(err); 421 417 msleep(500); 422 - reg16 &= ~PCI_EXP_LNKCTL_LD; 423 - err = pci_write_config_word(bridge, cap + PCI_EXP_LNKCTL, reg16); 418 + err = pcie_capability_clear_word(bridge, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_LD); 424 419 if (err) 425 - return err; 420 + return pcibios_err_to_errno(err); 426 421 427 422 /* Check link */ 428 423 if (!bridge->link_active_reporting) { ··· 430 435 do { 431 436 err = pci_read_config_word(bridge, cap + PCI_EXP_LNKSTA, &reg16); 432 437 if (err) 433 - return err; 438 + return pcibios_err_to_errno(err); 434 439 if (reg16 & PCI_EXP_LNKSTA_DLLLA) 435 440 break; 436 441 msleep(20); ··· 448 453 do { 449 454 err = pci_read_config_word(dev->pdev, PCI_DEVICE_ID, &reg16); 450 455 if (err) 451 - return err; 456 + return pcibios_err_to_errno(err); 452 457 if (reg16 == dev_id) 453 458 break; 454 459 msleep(20);
+5 -4
drivers/net/wireless/ath/ath10k/pci.c
··· 1963 1963 ath10k_pci_irq_enable(ar); 1964 1964 ath10k_pci_rx_post(ar); 1965 1965 1966 - pcie_capability_write_word(ar_pci->pdev, PCI_EXP_LNKCTL, 1967 - ar_pci->link_ctl); 1966 + pcie_capability_clear_and_set_word(ar_pci->pdev, PCI_EXP_LNKCTL, 1967 + PCI_EXP_LNKCTL_ASPMC, 1968 + ar_pci->link_ctl & PCI_EXP_LNKCTL_ASPMC); 1968 1969 1969 1970 return 0; 1970 1971 } ··· 2822 2821 2823 2822 pcie_capability_read_word(ar_pci->pdev, PCI_EXP_LNKCTL, 2824 2823 &ar_pci->link_ctl); 2825 - pcie_capability_write_word(ar_pci->pdev, PCI_EXP_LNKCTL, 2826 - ar_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC); 2824 + pcie_capability_clear_word(ar_pci->pdev, PCI_EXP_LNKCTL, 2825 + PCI_EXP_LNKCTL_ASPMC); 2827 2826 2828 2827 /* 2829 2828 * Bring the target up cleanly.
+6 -4
drivers/net/wireless/ath/ath11k/pci.c
··· 582 582 u16_get_bits(ab_pci->link_ctl, PCI_EXP_LNKCTL_ASPM_L1)); 583 583 584 584 /* disable L0s and L1 */ 585 - pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL, 586 - ab_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC); 585 + pcie_capability_clear_word(ab_pci->pdev, PCI_EXP_LNKCTL, 586 + PCI_EXP_LNKCTL_ASPMC); 587 587 588 588 set_bit(ATH11K_PCI_ASPM_RESTORE, &ab_pci->flags); 589 589 } ··· 591 591 static void ath11k_pci_aspm_restore(struct ath11k_pci *ab_pci) 592 592 { 593 593 if (test_and_clear_bit(ATH11K_PCI_ASPM_RESTORE, &ab_pci->flags)) 594 - pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL, 595 - ab_pci->link_ctl); 594 + pcie_capability_clear_and_set_word(ab_pci->pdev, PCI_EXP_LNKCTL, 595 + PCI_EXP_LNKCTL_ASPMC, 596 + ab_pci->link_ctl & 597 + PCI_EXP_LNKCTL_ASPMC); 596 598 } 597 599 598 600 static int ath11k_pci_power_up(struct ath11k_base *ab)
+6 -4
drivers/net/wireless/ath/ath12k/pci.c
··· 794 794 u16_get_bits(ab_pci->link_ctl, PCI_EXP_LNKCTL_ASPM_L1)); 795 795 796 796 /* disable L0s and L1 */ 797 - pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL, 798 - ab_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC); 797 + pcie_capability_clear_word(ab_pci->pdev, PCI_EXP_LNKCTL, 798 + PCI_EXP_LNKCTL_ASPMC); 799 799 800 800 set_bit(ATH12K_PCI_ASPM_RESTORE, &ab_pci->flags); 801 801 } ··· 803 803 static void ath12k_pci_aspm_restore(struct ath12k_pci *ab_pci) 804 804 { 805 805 if (test_and_clear_bit(ATH12K_PCI_ASPM_RESTORE, &ab_pci->flags)) 806 - pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL, 807 - ab_pci->link_ctl); 806 + pcie_capability_clear_and_set_word(ab_pci->pdev, PCI_EXP_LNKCTL, 807 + PCI_EXP_LNKCTL_ASPMC, 808 + ab_pci->link_ctl & 809 + PCI_EXP_LNKCTL_ASPMC); 808 810 } 809 811 810 812 static void ath12k_pci_kill_tasklets(struct ath12k_base *ab)
+26 -14
drivers/pci/access.c
··· 497 497 } 498 498 EXPORT_SYMBOL(pcie_capability_write_dword); 499 499 500 - int pcie_capability_clear_and_set_word(struct pci_dev *dev, int pos, 501 - u16 clear, u16 set) 500 + int pcie_capability_clear_and_set_word_unlocked(struct pci_dev *dev, int pos, 501 + u16 clear, u16 set) 502 502 { 503 503 int ret; 504 504 u16 val; 505 505 506 506 ret = pcie_capability_read_word(dev, pos, &val); 507 - if (!ret) { 508 - val &= ~clear; 509 - val |= set; 510 - ret = pcie_capability_write_word(dev, pos, val); 511 - } 507 + if (ret) 508 + return ret; 509 + 510 + val &= ~clear; 511 + val |= set; 512 + return pcie_capability_write_word(dev, pos, val); 513 + } 514 + EXPORT_SYMBOL(pcie_capability_clear_and_set_word_unlocked); 515 + 516 + int pcie_capability_clear_and_set_word_locked(struct pci_dev *dev, int pos, 517 + u16 clear, u16 set) 518 + { 519 + unsigned long flags; 520 + int ret; 521 + 522 + spin_lock_irqsave(&dev->pcie_cap_lock, flags); 523 + ret = pcie_capability_clear_and_set_word_unlocked(dev, pos, clear, set); 524 + spin_unlock_irqrestore(&dev->pcie_cap_lock, flags); 512 525 513 526 return ret; 514 527 } 515 - EXPORT_SYMBOL(pcie_capability_clear_and_set_word); 528 + EXPORT_SYMBOL(pcie_capability_clear_and_set_word_locked); 516 529 517 530 int pcie_capability_clear_and_set_dword(struct pci_dev *dev, int pos, 518 531 u32 clear, u32 set) ··· 534 521 u32 val; 535 522 536 523 ret = pcie_capability_read_dword(dev, pos, &val); 537 - if (!ret) { 538 - val &= ~clear; 539 - val |= set; 540 - ret = pcie_capability_write_dword(dev, pos, val); 541 - } 524 + if (ret) 525 + return ret; 542 526 543 - return ret; 527 + val &= ~clear; 528 + val |= set; 529 + return pcie_capability_write_dword(dev, pos, val); 544 530 } 545 531 EXPORT_SYMBOL(pcie_capability_clear_and_set_dword); 546 532
+1 -1
drivers/pci/controller/Kconfig
··· 216 216 This selects a driver for the MediaTek MT7621 PCIe Controller. 217 217 218 218 config PCIE_MICROCHIP_HOST 219 - bool "Microchip AXI PCIe controller" 219 + tristate "Microchip AXI PCIe controller" 220 220 depends on PCI_MSI && OF 221 221 select PCI_HOST_COMMON 222 222 help
+1 -1
drivers/pci/controller/cadence/pci-j721e.c
··· 14 14 #include <linux/irqdomain.h> 15 15 #include <linux/mfd/syscon.h> 16 16 #include <linux/of.h> 17 - #include <linux/of_device.h> 18 17 #include <linux/pci.h> 18 + #include <linux/platform_device.h> 19 19 #include <linux/pm_runtime.h> 20 20 #include <linux/regmap.h> 21 21
+1 -2
drivers/pci/controller/cadence/pcie-cadence-plat.c
··· 6 6 * Author: Tom Joseph <tjoseph@cadence.com> 7 7 */ 8 8 #include <linux/kernel.h> 9 - #include <linux/of_address.h> 9 + #include <linux/of.h> 10 10 #include <linux/of_pci.h> 11 11 #include <linux/platform_device.h> 12 12 #include <linux/pm_runtime.h> 13 - #include <linux/of_device.h> 14 13 #include "pcie-cadence.h" 15 14 16 15 #define CDNS_PLAT_CPU_TO_BUS_ADDR 0x0FFFFFFF
+1
drivers/pci/controller/cadence/pcie-cadence.c
··· 4 4 // Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com> 5 5 6 6 #include <linux/kernel.h> 7 + #include <linux/of.h> 7 8 8 9 #include "pcie-cadence.h" 9 10
+1 -1
drivers/pci/controller/cadence/pcie-cadence.h
··· 32 32 #define CDNS_PCIE_LM_ID_SUBSYS(sub) \ 33 33 (((sub) << CDNS_PCIE_LM_ID_SUBSYS_SHIFT) & CDNS_PCIE_LM_ID_SUBSYS_MASK) 34 34 35 - /* Root Port Requestor ID Register */ 35 + /* Root Port Requester ID Register */ 36 36 #define CDNS_PCIE_LM_RP_RID (CDNS_PCIE_LM_BASE + 0x0228) 37 37 #define CDNS_PCIE_LM_RP_RID_MASK GENMASK(15, 0) 38 38 #define CDNS_PCIE_LM_RP_RID_SHIFT 0
+1 -1
drivers/pci/controller/dwc/pci-dra7xx.c
··· 16 16 #include <linux/irqdomain.h> 17 17 #include <linux/kernel.h> 18 18 #include <linux/module.h> 19 - #include <linux/of_device.h> 19 + #include <linux/of.h> 20 20 #include <linux/of_gpio.h> 21 21 #include <linux/of_pci.h> 22 22 #include <linux/pci.h>
+1 -1
drivers/pci/controller/dwc/pci-exynos.c
··· 14 14 #include <linux/interrupt.h> 15 15 #include <linux/kernel.h> 16 16 #include <linux/init.h> 17 - #include <linux/of_device.h> 18 17 #include <linux/pci.h> 19 18 #include <linux/platform_device.h> 20 19 #include <linux/phy/phy.h> 21 20 #include <linux/regulator/consumer.h> 21 + #include <linux/mod_devicetable.h> 22 22 #include <linux/module.h> 23 23 24 24 #include "pcie-designware.h"
+3 -3
drivers/pci/controller/dwc/pci-imx6.c
··· 17 17 #include <linux/mfd/syscon/imx6q-iomuxc-gpr.h> 18 18 #include <linux/mfd/syscon/imx7-iomuxc-gpr.h> 19 19 #include <linux/module.h> 20 + #include <linux/of.h> 20 21 #include <linux/of_gpio.h> 21 - #include <linux/of_device.h> 22 22 #include <linux/of_address.h> 23 23 #include <linux/pci.h> 24 24 #include <linux/platform_device.h> ··· 1040 1040 1041 1041 static const struct dw_pcie_host_ops imx6_pcie_host_ops = { 1042 1042 .host_init = imx6_pcie_host_init, 1043 + .host_deinit = imx6_pcie_host_exit, 1043 1044 }; 1044 1045 1045 1046 static const struct dw_pcie_ops dw_pcie_ops = { ··· 1283 1282 return PTR_ERR(imx6_pcie->phy_base); 1284 1283 } 1285 1284 1286 - dbi_base = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1287 - pci->dbi_base = devm_ioremap_resource(dev, dbi_base); 1285 + pci->dbi_base = devm_platform_get_and_ioremap_resource(pdev, 0, &dbi_base); 1288 1286 if (IS_ERR(pci->dbi_base)) 1289 1287 return PTR_ERR(pci->dbi_base); 1290 1288
-1
drivers/pci/controller/dwc/pci-keystone.c
··· 19 19 #include <linux/mfd/syscon.h> 20 20 #include <linux/msi.h> 21 21 #include <linux/of.h> 22 - #include <linux/of_device.h> 23 22 #include <linux/of_irq.h> 24 23 #include <linux/of_pci.h> 25 24 #include <linux/phy/phy.h>
+20
drivers/pci/controller/dwc/pci-layerscape-ep.c
··· 45 45 struct pci_epc_features *ls_epc; 46 46 const struct ls_pcie_ep_drvdata *drvdata; 47 47 int irq; 48 + u32 lnkcap; 48 49 bool big_endian; 49 50 }; 50 51 ··· 74 73 struct ls_pcie_ep *pcie = dev_id; 75 74 struct dw_pcie *pci = pcie->pci; 76 75 u32 val, cfg; 76 + u8 offset; 77 77 78 78 val = ls_lut_readl(pcie, PEX_PF0_PME_MES_DR); 79 79 ls_lut_writel(pcie, PEX_PF0_PME_MES_DR, val); ··· 83 81 return IRQ_NONE; 84 82 85 83 if (val & PEX_PF0_PME_MES_DR_LUD) { 84 + 85 + offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 86 + 87 + /* 88 + * The values of the Maximum Link Width and Supported Link 89 + * Speed from the Link Capabilities Register will be lost 90 + * during link down or hot reset. Restore initial value 91 + * that configured by the Reset Configuration Word (RCW). 92 + */ 93 + dw_pcie_dbi_ro_wr_en(pci); 94 + dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, pcie->lnkcap); 95 + dw_pcie_dbi_ro_wr_dis(pci); 96 + 86 97 cfg = ls_lut_readl(pcie, PEX_PF0_CONFIG); 87 98 cfg |= PEX_PF0_CFG_READY; 88 99 ls_lut_writel(pcie, PEX_PF0_CONFIG, cfg); ··· 104 89 dev_dbg(pci->dev, "Link up\n"); 105 90 } else if (val & PEX_PF0_PME_MES_DR_LDD) { 106 91 dev_dbg(pci->dev, "Link down\n"); 92 + pci_epc_linkdown(pci->ep.epc); 107 93 } else if (val & PEX_PF0_PME_MES_DR_HRD) { 108 94 dev_dbg(pci->dev, "Hot reset\n"); 109 95 } ··· 231 215 struct ls_pcie_ep *pcie; 232 216 struct pci_epc_features *ls_epc; 233 217 struct resource *dbi_base; 218 + u8 offset; 234 219 int ret; 235 220 236 221 pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); ··· 267 250 pcie->big_endian = of_property_read_bool(dev->of_node, "big-endian"); 268 251 269 252 platform_set_drvdata(pdev, pcie); 253 + 254 + offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 255 + pcie->lnkcap = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP); 270 256 271 257 ret = dw_pcie_ep_init(&pci->ep); 272 258 if (ret)
+131 -9
drivers/pci/controller/dwc/pci-layerscape.c
··· 8 8 * Author: Minghuan Lian <Minghuan.Lian@freescale.com> 9 9 */ 10 10 11 + #include <linux/delay.h> 11 12 #include <linux/kernel.h> 12 13 #include <linux/interrupt.h> 13 14 #include <linux/init.h> 15 + #include <linux/iopoll.h> 14 16 #include <linux/of_pci.h> 15 17 #include <linux/of_platform.h> 16 18 #include <linux/of_address.h> ··· 22 20 #include <linux/mfd/syscon.h> 23 21 #include <linux/regmap.h> 24 22 23 + #include "../../pci.h" 25 24 #include "pcie-designware.h" 26 25 27 26 /* PEX Internal Configuration Registers */ ··· 30 27 #define PCIE_ABSERR 0x8d0 /* Bridge Slave Error Response Register */ 31 28 #define PCIE_ABSERR_SETTING 0x9401 /* Forward error of non-posted request */ 32 29 30 + /* PF Message Command Register */ 31 + #define LS_PCIE_PF_MCR 0x2c 32 + #define PF_MCR_PTOMR BIT(0) 33 + #define PF_MCR_EXL2S BIT(1) 34 + 33 35 #define PCIE_IATU_NUM 6 36 + 37 + struct ls_pcie_drvdata { 38 + const u32 pf_off; 39 + bool pm_support; 40 + }; 34 41 35 42 struct ls_pcie { 36 43 struct dw_pcie *pci; 44 + const struct ls_pcie_drvdata *drvdata; 45 + void __iomem *pf_base; 46 + bool big_endian; 37 47 }; 38 48 49 + #define ls_pcie_pf_readl_addr(addr) ls_pcie_pf_readl(pcie, addr) 39 50 #define to_ls_pcie(x) dev_get_drvdata((x)->dev) 40 51 41 52 static bool ls_pcie_is_bridge(struct ls_pcie *pcie) ··· 90 73 iowrite32(PCIE_ABSERR_SETTING, pci->dbi_base + PCIE_ABSERR); 91 74 } 92 75 76 + static u32 ls_pcie_pf_readl(struct ls_pcie *pcie, u32 off) 77 + { 78 + if (pcie->big_endian) 79 + return ioread32be(pcie->pf_base + off); 80 + 81 + return ioread32(pcie->pf_base + off); 82 + } 83 + 84 + static void ls_pcie_pf_writel(struct ls_pcie *pcie, u32 off, u32 val) 85 + { 86 + if (pcie->big_endian) 87 + iowrite32be(val, pcie->pf_base + off); 88 + else 89 + iowrite32(val, pcie->pf_base + off); 90 + } 91 + 92 + static void ls_pcie_send_turnoff_msg(struct dw_pcie_rp *pp) 93 + { 94 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 95 + struct ls_pcie *pcie = to_ls_pcie(pci); 96 + u32 val; 97 + int ret; 98 + 99 + val = ls_pcie_pf_readl(pcie, LS_PCIE_PF_MCR); 100 + val |= PF_MCR_PTOMR; 101 + ls_pcie_pf_writel(pcie, LS_PCIE_PF_MCR, val); 102 + 103 + ret = readx_poll_timeout(ls_pcie_pf_readl_addr, LS_PCIE_PF_MCR, 104 + val, !(val & PF_MCR_PTOMR), 105 + PCIE_PME_TO_L2_TIMEOUT_US/10, 106 + PCIE_PME_TO_L2_TIMEOUT_US); 107 + if (ret) 108 + dev_err(pcie->pci->dev, "PME_Turn_off timeout\n"); 109 + } 110 + 111 + static void ls_pcie_exit_from_l2(struct dw_pcie_rp *pp) 112 + { 113 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 114 + struct ls_pcie *pcie = to_ls_pcie(pci); 115 + u32 val; 116 + int ret; 117 + 118 + /* 119 + * Set PF_MCR_EXL2S bit in LS_PCIE_PF_MCR register for the link 120 + * to exit L2 state. 121 + */ 122 + val = ls_pcie_pf_readl(pcie, LS_PCIE_PF_MCR); 123 + val |= PF_MCR_EXL2S; 124 + ls_pcie_pf_writel(pcie, LS_PCIE_PF_MCR, val); 125 + 126 + /* 127 + * L2 exit timeout of 10ms is not defined in the specifications, 128 + * it was chosen based on empirical observations. 129 + */ 130 + ret = readx_poll_timeout(ls_pcie_pf_readl_addr, LS_PCIE_PF_MCR, 131 + val, !(val & PF_MCR_EXL2S), 132 + 1000, 133 + 10000); 134 + if (ret) 135 + dev_err(pcie->pci->dev, "L2 exit timeout\n"); 136 + } 137 + 93 138 static int ls_pcie_host_init(struct dw_pcie_rp *pp) 94 139 { 95 140 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); ··· 170 91 171 92 static const struct dw_pcie_host_ops ls_pcie_host_ops = { 172 93 .host_init = ls_pcie_host_init, 94 + .pme_turn_off = ls_pcie_send_turnoff_msg, 95 + }; 96 + 97 + static const struct ls_pcie_drvdata ls1021a_drvdata = { 98 + .pm_support = false, 99 + }; 100 + 101 + static const struct ls_pcie_drvdata layerscape_drvdata = { 102 + .pf_off = 0xc0000, 103 + .pm_support = true, 173 104 }; 174 105 175 106 static const struct of_device_id ls_pcie_of_match[] = { 176 - { .compatible = "fsl,ls1012a-pcie", }, 177 - { .compatible = "fsl,ls1021a-pcie", }, 178 - { .compatible = "fsl,ls1028a-pcie", }, 179 - { .compatible = "fsl,ls1043a-pcie", }, 180 - { .compatible = "fsl,ls1046a-pcie", }, 181 - { .compatible = "fsl,ls2080a-pcie", }, 182 - { .compatible = "fsl,ls2085a-pcie", }, 183 - { .compatible = "fsl,ls2088a-pcie", }, 184 - { .compatible = "fsl,ls1088a-pcie", }, 107 + { .compatible = "fsl,ls1012a-pcie", .data = &layerscape_drvdata }, 108 + { .compatible = "fsl,ls1021a-pcie", .data = &ls1021a_drvdata }, 109 + { .compatible = "fsl,ls1028a-pcie", .data = &layerscape_drvdata }, 110 + { .compatible = "fsl,ls1043a-pcie", .data = &ls1021a_drvdata }, 111 + { .compatible = "fsl,ls1046a-pcie", .data = &layerscape_drvdata }, 112 + { .compatible = "fsl,ls2080a-pcie", .data = &layerscape_drvdata }, 113 + { .compatible = "fsl,ls2085a-pcie", .data = &layerscape_drvdata }, 114 + { .compatible = "fsl,ls2088a-pcie", .data = &layerscape_drvdata }, 115 + { .compatible = "fsl,ls1088a-pcie", .data = &layerscape_drvdata }, 185 116 { }, 186 117 }; 187 118 ··· 210 121 if (!pci) 211 122 return -ENOMEM; 212 123 124 + pcie->drvdata = of_device_get_match_data(dev); 125 + 213 126 pci->dev = dev; 214 127 pci->pp.ops = &ls_pcie_host_ops; 215 128 ··· 222 131 if (IS_ERR(pci->dbi_base)) 223 132 return PTR_ERR(pci->dbi_base); 224 133 134 + pcie->big_endian = of_property_read_bool(dev->of_node, "big-endian"); 135 + 136 + pcie->pf_base = pci->dbi_base + pcie->drvdata->pf_off; 137 + 225 138 if (!ls_pcie_is_bridge(pcie)) 226 139 return -ENODEV; 227 140 ··· 234 139 return dw_pcie_host_init(&pci->pp); 235 140 } 236 141 142 + static int ls_pcie_suspend_noirq(struct device *dev) 143 + { 144 + struct ls_pcie *pcie = dev_get_drvdata(dev); 145 + 146 + if (!pcie->drvdata->pm_support) 147 + return 0; 148 + 149 + return dw_pcie_suspend_noirq(pcie->pci); 150 + } 151 + 152 + static int ls_pcie_resume_noirq(struct device *dev) 153 + { 154 + struct ls_pcie *pcie = dev_get_drvdata(dev); 155 + 156 + if (!pcie->drvdata->pm_support) 157 + return 0; 158 + 159 + ls_pcie_exit_from_l2(&pcie->pci->pp); 160 + 161 + return dw_pcie_resume_noirq(pcie->pci); 162 + } 163 + 164 + static const struct dev_pm_ops ls_pcie_pm_ops = { 165 + NOIRQ_SYSTEM_SLEEP_PM_OPS(ls_pcie_suspend_noirq, ls_pcie_resume_noirq) 166 + }; 167 + 237 168 static struct platform_driver ls_pcie_driver = { 238 169 .probe = ls_pcie_probe, 239 170 .driver = { 240 171 .name = "layerscape-pcie", 241 172 .of_match_table = ls_pcie_of_match, 242 173 .suppress_bind_attrs = true, 174 + .pm = &ls_pcie_pm_ops, 243 175 }, 244 176 }; 245 177 builtin_platform_driver(ls_pcie_driver);
+9 -4
drivers/pci/controller/dwc/pci-meson.c
··· 9 9 #include <linux/clk.h> 10 10 #include <linux/delay.h> 11 11 #include <linux/gpio/consumer.h> 12 - #include <linux/of_device.h> 13 12 #include <linux/of_gpio.h> 14 13 #include <linux/pci.h> 15 14 #include <linux/platform_device.h> ··· 16 17 #include <linux/resource.h> 17 18 #include <linux/types.h> 18 19 #include <linux/phy/phy.h> 20 + #include <linux/mod_devicetable.h> 19 21 #include <linux/module.h> 20 22 21 23 #include "pcie-designware.h" ··· 163 163 return 0; 164 164 } 165 165 166 + static inline void meson_pcie_disable_clock(void *data) 167 + { 168 + struct clk *clk = data; 169 + 170 + clk_disable_unprepare(clk); 171 + } 172 + 166 173 static inline struct clk *meson_pcie_probe_clock(struct device *dev, 167 174 const char *id, u64 rate) 168 175 { ··· 194 187 return ERR_PTR(ret); 195 188 } 196 189 197 - devm_add_action_or_reset(dev, 198 - (void (*) (void *))clk_disable_unprepare, 199 - clk); 190 + devm_add_action_or_reset(dev, meson_pcie_disable_clock, clk); 200 191 201 192 return clk; 202 193 }
+1 -1
drivers/pci/controller/dwc/pcie-artpec6.c
··· 10 10 #include <linux/delay.h> 11 11 #include <linux/kernel.h> 12 12 #include <linux/init.h> 13 - #include <linux/of_device.h> 13 + #include <linux/of.h> 14 14 #include <linux/pci.h> 15 15 #include <linux/platform_device.h> 16 16 #include <linux/resource.h>
+71
drivers/pci/controller/dwc/pcie-designware-host.c
··· 8 8 * Author: Jingoo Han <jg1.han@samsung.com> 9 9 */ 10 10 11 + #include <linux/iopoll.h> 11 12 #include <linux/irqchip/chained_irq.h> 12 13 #include <linux/irqdomain.h> 13 14 #include <linux/msi.h> ··· 17 16 #include <linux/pci_regs.h> 18 17 #include <linux/platform_device.h> 19 18 19 + #include "../../pci.h" 20 20 #include "pcie-designware.h" 21 21 22 22 static struct pci_ops dw_pcie_ops; ··· 809 807 return 0; 810 808 } 811 809 EXPORT_SYMBOL_GPL(dw_pcie_setup_rc); 810 + 811 + int dw_pcie_suspend_noirq(struct dw_pcie *pci) 812 + { 813 + u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 814 + u32 val; 815 + int ret; 816 + 817 + /* 818 + * If L1SS is supported, then do not put the link into L2 as some 819 + * devices such as NVMe expect low resume latency. 820 + */ 821 + if (dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKCTL) & PCI_EXP_LNKCTL_ASPM_L1) 822 + return 0; 823 + 824 + if (dw_pcie_get_ltssm(pci) <= DW_PCIE_LTSSM_DETECT_ACT) 825 + return 0; 826 + 827 + if (!pci->pp.ops->pme_turn_off) 828 + return 0; 829 + 830 + pci->pp.ops->pme_turn_off(&pci->pp); 831 + 832 + ret = read_poll_timeout(dw_pcie_get_ltssm, val, val == DW_PCIE_LTSSM_L2_IDLE, 833 + PCIE_PME_TO_L2_TIMEOUT_US/10, 834 + PCIE_PME_TO_L2_TIMEOUT_US, false, pci); 835 + if (ret) { 836 + dev_err(pci->dev, "Timeout waiting for L2 entry! LTSSM: 0x%x\n", val); 837 + return ret; 838 + } 839 + 840 + if (pci->pp.ops->host_deinit) 841 + pci->pp.ops->host_deinit(&pci->pp); 842 + 843 + pci->suspended = true; 844 + 845 + return ret; 846 + } 847 + EXPORT_SYMBOL_GPL(dw_pcie_suspend_noirq); 848 + 849 + int dw_pcie_resume_noirq(struct dw_pcie *pci) 850 + { 851 + int ret; 852 + 853 + if (!pci->suspended) 854 + return 0; 855 + 856 + pci->suspended = false; 857 + 858 + if (pci->pp.ops->host_init) { 859 + ret = pci->pp.ops->host_init(&pci->pp); 860 + if (ret) { 861 + dev_err(pci->dev, "Host init failed: %d\n", ret); 862 + return ret; 863 + } 864 + } 865 + 866 + dw_pcie_setup_rc(&pci->pp); 867 + 868 + ret = dw_pcie_start_link(pci); 869 + if (ret) 870 + return ret; 871 + 872 + ret = dw_pcie_wait_for_link(pci); 873 + if (ret) 874 + return ret; 875 + 876 + return ret; 877 + } 878 + EXPORT_SYMBOL_GPL(dw_pcie_resume_noirq);
+1 -1
drivers/pci/controller/dwc/pcie-designware-plat.c
··· 12 12 #include <linux/interrupt.h> 13 13 #include <linux/kernel.h> 14 14 #include <linux/init.h> 15 - #include <linux/of_device.h> 15 + #include <linux/of.h> 16 16 #include <linux/pci.h> 17 17 #include <linux/platform_device.h> 18 18 #include <linux/resource.h>
+1 -1
drivers/pci/controller/dwc/pcie-designware.c
··· 16 16 #include <linux/gpio/consumer.h> 17 17 #include <linux/ioport.h> 18 18 #include <linux/of.h> 19 - #include <linux/of_platform.h> 19 + #include <linux/platform_device.h> 20 20 #include <linux/sizes.h> 21 21 #include <linux/types.h> 22 22
+28
drivers/pci/controller/dwc/pcie-designware.h
··· 288 288 DW_PCIE_NUM_CORE_RSTS 289 289 }; 290 290 291 + enum dw_pcie_ltssm { 292 + /* Need to align with PCIE_PORT_DEBUG0 bits 0:5 */ 293 + DW_PCIE_LTSSM_DETECT_QUIET = 0x0, 294 + DW_PCIE_LTSSM_DETECT_ACT = 0x1, 295 + DW_PCIE_LTSSM_L0 = 0x11, 296 + DW_PCIE_LTSSM_L2_IDLE = 0x15, 297 + 298 + DW_PCIE_LTSSM_UNKNOWN = 0xFFFFFFFF, 299 + }; 300 + 291 301 struct dw_pcie_host_ops { 292 302 int (*host_init)(struct dw_pcie_rp *pp); 293 303 void (*host_deinit)(struct dw_pcie_rp *pp); 294 304 int (*msi_host_init)(struct dw_pcie_rp *pp); 305 + void (*pme_turn_off)(struct dw_pcie_rp *pp); 295 306 }; 296 307 297 308 struct dw_pcie_rp { ··· 375 364 void (*write_dbi2)(struct dw_pcie *pcie, void __iomem *base, u32 reg, 376 365 size_t size, u32 val); 377 366 int (*link_up)(struct dw_pcie *pcie); 367 + enum dw_pcie_ltssm (*get_ltssm)(struct dw_pcie *pcie); 378 368 int (*start_link)(struct dw_pcie *pcie); 379 369 void (*stop_link)(struct dw_pcie *pcie); 380 370 }; ··· 405 393 struct reset_control_bulk_data app_rsts[DW_PCIE_NUM_APP_RSTS]; 406 394 struct reset_control_bulk_data core_rsts[DW_PCIE_NUM_CORE_RSTS]; 407 395 struct gpio_desc *pe_rst; 396 + bool suspended; 408 397 }; 409 398 410 399 #define to_dw_pcie_from_pp(port) container_of((port), struct dw_pcie, pp) ··· 442 429 void dw_pcie_iatu_detect(struct dw_pcie *pci); 443 430 int dw_pcie_edma_detect(struct dw_pcie *pci); 444 431 void dw_pcie_edma_remove(struct dw_pcie *pci); 432 + 433 + int dw_pcie_suspend_noirq(struct dw_pcie *pci); 434 + int dw_pcie_resume_noirq(struct dw_pcie *pci); 445 435 446 436 static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val) 447 437 { ··· 515 499 { 516 500 if (pci->ops && pci->ops->stop_link) 517 501 pci->ops->stop_link(pci); 502 + } 503 + 504 + static inline enum dw_pcie_ltssm dw_pcie_get_ltssm(struct dw_pcie *pci) 505 + { 506 + u32 val; 507 + 508 + if (pci->ops && pci->ops->get_ltssm) 509 + return pci->ops->get_ltssm(pci); 510 + 511 + val = dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG0); 512 + 513 + return (enum dw_pcie_ltssm)FIELD_GET(PORT_LOGIC_LTSSM_STATE_MASK, val); 518 514 } 519 515 520 516 #ifdef CONFIG_PCIE_DW_HOST
+1 -1
drivers/pci/controller/dwc/pcie-dw-rockchip.c
··· 14 14 #include <linux/irqdomain.h> 15 15 #include <linux/mfd/syscon.h> 16 16 #include <linux/module.h> 17 - #include <linux/of_device.h> 17 + #include <linux/of.h> 18 18 #include <linux/of_irq.h> 19 19 #include <linux/phy/phy.h> 20 20 #include <linux/platform_device.h>
+1
drivers/pci/controller/dwc/pcie-fu740.c
··· 299 299 pci->dev = dev; 300 300 pci->ops = &dw_pcie_ops; 301 301 pci->pp.ops = &fu740_pcie_host_ops; 302 + pci->pp.num_vectors = MAX_MSI_IRQS; 302 303 303 304 /* SiFive specific region: mgmt */ 304 305 afp->mgmt_base = devm_platform_ioremap_resource_byname(pdev, "mgmt");
+2
drivers/pci/controller/dwc/pcie-intel-gw.c
··· 9 9 #include <linux/clk.h> 10 10 #include <linux/gpio/consumer.h> 11 11 #include <linux/iopoll.h> 12 + #include <linux/mod_devicetable.h> 12 13 #include <linux/pci_regs.h> 13 14 #include <linux/phy/phy.h> 14 15 #include <linux/platform_device.h> 16 + #include <linux/property.h> 15 17 #include <linux/reset.h> 16 18 17 19 #include "../../pci.h"
+8 -3
drivers/pci/controller/dwc/pcie-keembay.c
··· 148 148 .stop_link = keembay_pcie_stop_link, 149 149 }; 150 150 151 + static inline void keembay_pcie_disable_clock(void *data) 152 + { 153 + struct clk *clk = data; 154 + 155 + clk_disable_unprepare(clk); 156 + } 157 + 151 158 static inline struct clk *keembay_pcie_probe_clock(struct device *dev, 152 159 const char *id, u64 rate) 153 160 { ··· 175 168 if (ret) 176 169 return ERR_PTR(ret); 177 170 178 - ret = devm_add_action_or_reset(dev, 179 - (void(*)(void *))clk_disable_unprepare, 180 - clk); 171 + ret = devm_add_action_or_reset(dev, keembay_pcie_disable_clock, clk); 181 172 if (ret) 182 173 return ERR_PTR(ret); 183 174
+1 -2
drivers/pci/controller/dwc/pcie-kirin.c
··· 16 16 #include <linux/gpio/consumer.h> 17 17 #include <linux/interrupt.h> 18 18 #include <linux/mfd/syscon.h> 19 - #include <linux/of_address.h> 20 - #include <linux/of_device.h> 19 + #include <linux/of.h> 21 20 #include <linux/of_gpio.h> 22 21 #include <linux/of_pci.h> 23 22 #include <linux/phy/phy.h>
+78 -3
drivers/pci/controller/dwc/pcie-qcom-ep.c
··· 13 13 #include <linux/debugfs.h> 14 14 #include <linux/delay.h> 15 15 #include <linux/gpio/consumer.h> 16 + #include <linux/interconnect.h> 16 17 #include <linux/mfd/syscon.h> 17 18 #include <linux/phy/pcie.h> 18 19 #include <linux/phy/phy.h> ··· 75 74 #define PARF_INT_ALL_PLS_ERR BIT(15) 76 75 #define PARF_INT_ALL_PME_LEGACY BIT(16) 77 76 #define PARF_INT_ALL_PLS_PME BIT(17) 77 + #define PARF_INT_ALL_EDMA BIT(22) 78 78 79 79 /* PARF_BDF_TO_SID_CFG register fields */ 80 80 #define PARF_BDF_TO_SID_BYPASS BIT(0) ··· 135 133 #define CORE_RESET_TIME_US_MAX 1005 136 134 #define WAKE_DELAY_US 2000 /* 2 ms */ 137 135 136 + #define PCIE_GEN1_BW_MBPS 250 137 + #define PCIE_GEN2_BW_MBPS 500 138 + #define PCIE_GEN3_BW_MBPS 985 139 + #define PCIE_GEN4_BW_MBPS 1969 140 + 138 141 #define to_pcie_ep(x) dev_get_drvdata((x)->dev) 139 142 140 143 enum qcom_pcie_ep_link_status { ··· 162 155 * @wake: WAKE# GPIO 163 156 * @phy: PHY controller block 164 157 * @debugfs: PCIe Endpoint Debugfs directory 158 + * @icc_mem: Handle to an interconnect path between PCIe and MEM 165 159 * @clks: PCIe clocks 166 160 * @num_clks: PCIe clocks count 167 161 * @perst_en: Flag for PERST enable ··· 185 177 struct gpio_desc *wake; 186 178 struct phy *phy; 187 179 struct dentry *debugfs; 180 + 181 + struct icc_path *icc_mem; 188 182 189 183 struct clk_bulk_data *clks; 190 184 int num_clks; ··· 263 253 disable_irq(pcie_ep->perst_irq); 264 254 } 265 255 256 + static void qcom_pcie_ep_icc_update(struct qcom_pcie_ep *pcie_ep) 257 + { 258 + struct dw_pcie *pci = &pcie_ep->pci; 259 + u32 offset, status, bw; 260 + int speed, width; 261 + int ret; 262 + 263 + if (!pcie_ep->icc_mem) 264 + return; 265 + 266 + offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 267 + status = readw(pci->dbi_base + offset + PCI_EXP_LNKSTA); 268 + 269 + speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status); 270 + width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status); 271 + 272 + switch (speed) { 273 + case 1: 274 + bw = MBps_to_icc(PCIE_GEN1_BW_MBPS); 275 + break; 276 + case 2: 277 + bw = MBps_to_icc(PCIE_GEN2_BW_MBPS); 278 + break; 279 + case 3: 280 + bw = MBps_to_icc(PCIE_GEN3_BW_MBPS); 281 + break; 282 + default: 283 + dev_warn(pci->dev, "using default GEN4 bandwidth\n"); 284 + fallthrough; 285 + case 4: 286 + bw = MBps_to_icc(PCIE_GEN4_BW_MBPS); 287 + break; 288 + } 289 + 290 + ret = icc_set_bw(pcie_ep->icc_mem, 0, width * bw); 291 + if (ret) 292 + dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n", 293 + ret); 294 + } 295 + 266 296 static int qcom_pcie_enable_resources(struct qcom_pcie_ep *pcie_ep) 267 297 { 298 + struct dw_pcie *pci = &pcie_ep->pci; 268 299 int ret; 269 300 270 301 ret = clk_bulk_prepare_enable(pcie_ep->num_clks, pcie_ep->clks); ··· 328 277 if (ret) 329 278 goto err_phy_exit; 330 279 280 + /* 281 + * Some Qualcomm platforms require interconnect bandwidth constraints 282 + * to be set before enabling interconnect clocks. 283 + * 284 + * Set an initial peak bandwidth corresponding to single-lane Gen 1 285 + * for the pcie-mem path. 286 + */ 287 + ret = icc_set_bw(pcie_ep->icc_mem, 0, MBps_to_icc(PCIE_GEN1_BW_MBPS)); 288 + if (ret) { 289 + dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n", 290 + ret); 291 + goto err_phy_off; 292 + } 293 + 331 294 return 0; 332 295 296 + err_phy_off: 297 + phy_power_off(pcie_ep->phy); 333 298 err_phy_exit: 334 299 phy_exit(pcie_ep->phy); 335 300 err_disable_clk: ··· 356 289 357 290 static void qcom_pcie_disable_resources(struct qcom_pcie_ep *pcie_ep) 358 291 { 292 + icc_set_bw(pcie_ep->icc_mem, 0, 0); 359 293 phy_power_off(pcie_ep->phy); 360 294 phy_exit(pcie_ep->phy); 361 295 clk_bulk_disable_unprepare(pcie_ep->num_clks, pcie_ep->clks); ··· 463 395 writel_relaxed(0, pcie_ep->parf + PARF_INT_ALL_MASK); 464 396 val = PARF_INT_ALL_LINK_DOWN | PARF_INT_ALL_BME | 465 397 PARF_INT_ALL_PM_TURNOFF | PARF_INT_ALL_DSTATE_CHANGE | 466 - PARF_INT_ALL_LINK_UP; 398 + PARF_INT_ALL_LINK_UP | PARF_INT_ALL_EDMA; 467 399 writel_relaxed(val, pcie_ep->parf + PARF_INT_ALL_MASK); 468 400 469 401 ret = dw_pcie_ep_init_complete(&pcie_ep->pci.ep); ··· 483 415 /* Gate Master AXI clock to MHI bus during L1SS */ 484 416 val = readl_relaxed(pcie_ep->parf + PARF_MHI_CLOCK_RESET_CTRL); 485 417 val &= ~PARF_MSTR_AXI_CLK_EN; 486 - val = readl_relaxed(pcie_ep->parf + PARF_MHI_CLOCK_RESET_CTRL); 418 + writel_relaxed(val, pcie_ep->parf + PARF_MHI_CLOCK_RESET_CTRL); 487 419 488 420 dw_pcie_ep_init_notify(&pcie_ep->pci.ep); 489 421 ··· 618 550 if (IS_ERR(pcie_ep->phy)) 619 551 ret = PTR_ERR(pcie_ep->phy); 620 552 553 + pcie_ep->icc_mem = devm_of_icc_get(dev, "pcie-mem"); 554 + if (IS_ERR(pcie_ep->icc_mem)) 555 + ret = PTR_ERR(pcie_ep->icc_mem); 556 + 621 557 return ret; 622 558 } 623 559 ··· 645 573 } else if (FIELD_GET(PARF_INT_ALL_BME, status)) { 646 574 dev_dbg(dev, "Received BME event. Link is enabled!\n"); 647 575 pcie_ep->link_status = QCOM_PCIE_EP_LINK_ENABLED; 576 + qcom_pcie_ep_icc_update(pcie_ep); 648 577 pci_epc_bme_notify(pci->ep.epc); 649 578 } else if (FIELD_GET(PARF_INT_ALL_PM_TURNOFF, status)) { 650 579 dev_dbg(dev, "Received PM Turn-off event! Entering L23\n"); ··· 666 593 dw_pcie_ep_linkup(&pci->ep); 667 594 pcie_ep->link_status = QCOM_PCIE_EP_LINK_UP; 668 595 } else { 669 - dev_dbg(dev, "Received unknown event: %d\n", status); 596 + dev_err(dev, "Received unknown event: %d\n", status); 670 597 } 671 598 672 599 return IRQ_HANDLED; ··· 779 706 .core_init_notifier = true, 780 707 .msi_capable = true, 781 708 .msix_capable = false, 709 + .align = SZ_4K, 782 710 }; 783 711 784 712 static const struct pci_epc_features * ··· 817 743 pcie_ep->pci.dev = dev; 818 744 pcie_ep->pci.ops = &pci_ops; 819 745 pcie_ep->pci.ep.ops = &pci_ep_ops; 746 + pcie_ep->pci.edma.nr_irqs = 1; 820 747 platform_set_drvdata(pdev, pcie_ep); 821 748 822 749 ret = qcom_pcie_ep_get_resources(pdev, pcie_ep);
+2 -1
drivers/pci/controller/dwc/pcie-qcom.c
··· 19 19 #include <linux/iopoll.h> 20 20 #include <linux/kernel.h> 21 21 #include <linux/init.h> 22 - #include <linux/of_device.h> 22 + #include <linux/of.h> 23 23 #include <linux/of_gpio.h> 24 24 #include <linux/pci.h> 25 25 #include <linux/pm_runtime.h> ··· 1613 1613 { .compatible = "qcom,pcie-msm8996", .data = &cfg_2_3_2 }, 1614 1614 { .compatible = "qcom,pcie-qcs404", .data = &cfg_2_4_0 }, 1615 1615 { .compatible = "qcom,pcie-sa8540p", .data = &cfg_1_9_0 }, 1616 + { .compatible = "qcom,pcie-sa8775p", .data = &cfg_1_9_0}, 1616 1617 { .compatible = "qcom,pcie-sc7280", .data = &cfg_1_9_0 }, 1617 1618 { .compatible = "qcom,pcie-sc8180x", .data = &cfg_1_9_0 }, 1618 1619 { .compatible = "qcom,pcie-sc8280xp", .data = &cfg_1_9_0 },
-11
drivers/pci/controller/dwc/pcie-tegra194.c
··· 20 20 #include <linux/kernel.h> 21 21 #include <linux/module.h> 22 22 #include <linux/of.h> 23 - #include <linux/of_device.h> 24 23 #include <linux/of_gpio.h> 25 24 #include <linux/of_pci.h> 26 25 #include <linux/pci.h> ··· 898 899 if (!pcie->pcie_cap_base) 899 900 pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci, 900 901 PCI_CAP_ID_EXP); 901 - 902 - val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL); 903 - val_16 &= ~PCI_EXP_DEVCTL_PAYLOAD; 904 - val_16 |= PCI_EXP_DEVCTL_PAYLOAD_256B; 905 - dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL, val_16); 906 902 907 903 val = dw_pcie_readl_dbi(pci, PCI_IO_BASE); 908 904 val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8); ··· 1880 1886 1881 1887 pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci, 1882 1888 PCI_CAP_ID_EXP); 1883 - 1884 - val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL); 1885 - val_16 &= ~PCI_EXP_DEVCTL_PAYLOAD; 1886 - val_16 |= PCI_EXP_DEVCTL_PAYLOAD_256B; 1887 - dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL, val_16); 1888 1889 1889 1890 /* Clear Slot Clock Configuration bit if SRNS configuration */ 1890 1891 if (pcie->enable_srns) {
+1 -1
drivers/pci/controller/dwc/pcie-uniphier-ep.c
··· 11 11 #include <linux/delay.h> 12 12 #include <linux/init.h> 13 13 #include <linux/iopoll.h> 14 - #include <linux/of_device.h> 14 + #include <linux/of.h> 15 15 #include <linux/pci.h> 16 16 #include <linux/phy/phy.h> 17 17 #include <linux/platform_device.h>
-3
drivers/pci/controller/mobiveil/pcie-mobiveil-host.c
··· 17 17 #include <linux/kernel.h> 18 18 #include <linux/module.h> 19 19 #include <linux/msi.h> 20 - #include <linux/of_address.h> 21 - #include <linux/of_irq.h> 22 - #include <linux/of_platform.h> 23 20 #include <linux/of_pci.h> 24 21 #include <linux/pci.h> 25 22 #include <linux/platform_device.h>
+1 -2
drivers/pci/controller/pci-ftpci100.c
··· 15 15 #include <linux/interrupt.h> 16 16 #include <linux/io.h> 17 17 #include <linux/kernel.h> 18 - #include <linux/of_address.h> 19 - #include <linux/of_device.h> 18 + #include <linux/of.h> 20 19 #include <linux/of_irq.h> 21 20 #include <linux/of_pci.h> 22 21 #include <linux/pci.h>
+1 -1
drivers/pci/controller/pci-host-common.c
··· 9 9 10 10 #include <linux/kernel.h> 11 11 #include <linux/module.h> 12 + #include <linux/of.h> 12 13 #include <linux/of_address.h> 13 - #include <linux/of_device.h> 14 14 #include <linux/of_pci.h> 15 15 #include <linux/pci-ecam.h> 16 16 #include <linux/platform_device.h>
+3
drivers/pci/controller/pci-hyperv.c
··· 3983 3983 struct msi_desc *entry; 3984 3984 int ret = 0; 3985 3985 3986 + if (!pdev->msi_enabled && !pdev->msix_enabled) 3987 + return 0; 3988 + 3986 3989 msi_lock_descs(&pdev->dev); 3987 3990 msi_for_each_desc(entry, &pdev->dev, MSI_DESC_ASSOCIATED) { 3988 3991 irq_data = irq_get_irq_data(entry->irq);
+1 -2
drivers/pci/controller/pci-ixp4xx.c
··· 19 19 #include <linux/init.h> 20 20 #include <linux/io.h> 21 21 #include <linux/kernel.h> 22 - #include <linux/of_address.h> 23 - #include <linux/of_device.h> 22 + #include <linux/of.h> 24 23 #include <linux/of_pci.h> 25 24 #include <linux/pci.h> 26 25 #include <linux/platform_device.h>
+1 -1
drivers/pci/controller/pci-loongson.c
··· 5 5 * Copyright (C) 2020 Jiaxun Yang <jiaxun.yang@flygoat.com> 6 6 */ 7 7 8 - #include <linux/of_device.h> 8 + #include <linux/of.h> 9 9 #include <linux/of_pci.h> 10 10 #include <linux/pci.h> 11 11 #include <linux/pci_ids.h>
-1
drivers/pci/controller/pci-mvebu.c
··· 87 87 struct resource io; 88 88 struct resource realio; 89 89 struct resource mem; 90 - struct resource busn; 91 90 int nports; 92 91 }; 93 92
+1 -2
drivers/pci/controller/pci-rcar-gen2.c
··· 290 290 priv = pci_host_bridge_priv(bridge); 291 291 bridge->sysdata = priv; 292 292 293 - cfg_res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 294 - reg = devm_ioremap_resource(dev, cfg_res); 293 + reg = devm_platform_get_and_ioremap_resource(pdev, 0, &cfg_res); 295 294 if (IS_ERR(reg)) 296 295 return PTR_ERR(reg); 297 296
+2 -4
drivers/pci/controller/pci-v3-semi.c
··· 20 20 #include <linux/interrupt.h> 21 21 #include <linux/io.h> 22 22 #include <linux/kernel.h> 23 - #include <linux/of_address.h> 24 - #include <linux/of_device.h> 23 + #include <linux/of.h> 25 24 #include <linux/of_pci.h> 26 25 #include <linux/pci.h> 27 26 #include <linux/platform_device.h> ··· 735 736 return ret; 736 737 } 737 738 738 - regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 739 - v3->base = devm_ioremap_resource(dev, regs); 739 + v3->base = devm_platform_get_and_ioremap_resource(pdev, 0, &regs); 740 740 if (IS_ERR(v3->base)) 741 741 return PTR_ERR(v3->base); 742 742 /*
+1 -2
drivers/pci/controller/pci-xgene-msi.c
··· 441 441 442 442 platform_set_drvdata(pdev, xgene_msi); 443 443 444 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 445 - xgene_msi->msi_regs = devm_ioremap_resource(&pdev->dev, res); 444 + xgene_msi->msi_regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res); 446 445 if (IS_ERR(xgene_msi->msi_regs)) { 447 446 rc = PTR_ERR(xgene_msi->msi_regs); 448 447 goto error;
+2 -3
drivers/pci/controller/pcie-altera.c
··· 9 9 #include <linux/delay.h> 10 10 #include <linux/interrupt.h> 11 11 #include <linux/irqchip/chained_irq.h> 12 + #include <linux/irqdomain.h> 12 13 #include <linux/init.h> 13 14 #include <linux/module.h> 14 - #include <linux/of_address.h> 15 - #include <linux/of_device.h> 16 - #include <linux/of_irq.h> 15 + #include <linux/of.h> 17 16 #include <linux/of_pci.h> 18 17 #include <linux/pci.h> 19 18 #include <linux/platform_device.h>
+7 -3
drivers/pci/controller/pcie-apple.c
··· 670 670 static int apple_pcie_add_device(struct apple_pcie_port *port, 671 671 struct pci_dev *pdev) 672 672 { 673 - u32 sid, rid = PCI_DEVID(pdev->bus->number, pdev->devfn); 673 + u32 sid, rid = pci_dev_id(pdev); 674 674 int idx, err; 675 675 676 676 dev_dbg(&pdev->dev, "added to bus %s, index %d\n", ··· 701 701 static void apple_pcie_release_device(struct apple_pcie_port *port, 702 702 struct pci_dev *pdev) 703 703 { 704 - u32 rid = PCI_DEVID(pdev->bus->number, pdev->devfn); 704 + u32 rid = pci_dev_id(pdev); 705 705 int idx; 706 706 707 707 mutex_lock(&port->pcie->lock); ··· 783 783 cfg->priv = pcie; 784 784 INIT_LIST_HEAD(&pcie->ports); 785 785 786 + ret = apple_msi_init(pcie); 787 + if (ret) 788 + return ret; 789 + 786 790 for_each_child_of_node(dev->of_node, of_port) { 787 791 ret = apple_pcie_setup_port(pcie, of_port); 788 792 if (ret) { ··· 796 792 } 797 793 } 798 794 799 - return apple_msi_init(pcie); 795 + return 0; 800 796 } 801 797 802 798 static int apple_pcie_probe(struct platform_device *pdev)
+5 -1
drivers/pci/controller/pcie-brcmstb.c
··· 439 439 }; 440 440 441 441 static struct msi_domain_info brcm_msi_domain_info = { 442 - /* Multi MSI is supported by the controller, but not by this driver */ 443 442 .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | 444 443 MSI_FLAG_MULTI_PCI_MSI), 445 444 .chip = &brcm_msi_irq_chip, ··· 873 874 874 875 /* Reset the bridge */ 875 876 pcie->bridge_sw_init_set(pcie, 1); 877 + 878 + /* Ensure that PERST# is asserted; some bootloaders may deassert it. */ 879 + if (pcie->type == BCM2711) 880 + pcie->perst_set(pcie, 1); 881 + 876 882 usleep_range(100, 200); 877 883 878 884 /* Take the bridge out of reset */
+2 -3
drivers/pci/controller/pcie-iproc-msi.c
··· 525 525 if (!of_device_is_compatible(node, "brcm,iproc-msi")) 526 526 return -ENODEV; 527 527 528 - if (!of_find_property(node, "msi-controller", NULL)) 528 + if (!of_property_read_bool(node, "msi-controller")) 529 529 return -ENODEV; 530 530 531 531 if (pcie->msi) ··· 585 585 return -EINVAL; 586 586 } 587 587 588 - if (of_find_property(node, "brcm,pcie-msi-inten", NULL)) 589 - msi->has_inten_reg = true; 588 + msi->has_inten_reg = of_property_read_bool(node, "brcm,pcie-msi-inten"); 590 589 591 590 msi->nr_msi_vecs = msi->nr_irqs * EQ_LEN; 592 591 msi->bitmap = devm_bitmap_zalloc(pcie->dev, msi->nr_msi_vecs,
+242 -165
drivers/pci/controller/pcie-microchip-host.c
··· 7 7 * Author: Daire McNamara <daire.mcnamara@microchip.com> 8 8 */ 9 9 10 + #include <linux/bitfield.h> 10 11 #include <linux/clk.h> 11 12 #include <linux/irqchip/chained_irq.h> 12 13 #include <linux/irqdomain.h> ··· 21 20 #include "../pci.h" 22 21 23 22 /* Number of MSI IRQs */ 24 - #define MC_NUM_MSI_IRQS 32 25 - #define MC_NUM_MSI_IRQS_CODED 5 23 + #define MC_MAX_NUM_MSI_IRQS 32 26 24 27 25 /* PCIe Bridge Phy and Controller Phy offsets */ 28 26 #define MC_PCIE1_BRIDGE_ADDR 0x00008000u ··· 30 30 #define MC_PCIE_BRIDGE_ADDR (MC_PCIE1_BRIDGE_ADDR) 31 31 #define MC_PCIE_CTRL_ADDR (MC_PCIE1_CTRL_ADDR) 32 32 33 - /* PCIe Controller Phy Regs */ 34 - #define SEC_ERROR_CNT 0x20 35 - #define DED_ERROR_CNT 0x24 36 - #define SEC_ERROR_INT 0x28 37 - #define SEC_ERROR_INT_TX_RAM_SEC_ERR_INT GENMASK(3, 0) 38 - #define SEC_ERROR_INT_RX_RAM_SEC_ERR_INT GENMASK(7, 4) 39 - #define SEC_ERROR_INT_PCIE2AXI_RAM_SEC_ERR_INT GENMASK(11, 8) 40 - #define SEC_ERROR_INT_AXI2PCIE_RAM_SEC_ERR_INT GENMASK(15, 12) 41 - #define NUM_SEC_ERROR_INTS (4) 42 - #define SEC_ERROR_INT_MASK 0x2c 43 - #define DED_ERROR_INT 0x30 44 - #define DED_ERROR_INT_TX_RAM_DED_ERR_INT GENMASK(3, 0) 45 - #define DED_ERROR_INT_RX_RAM_DED_ERR_INT GENMASK(7, 4) 46 - #define DED_ERROR_INT_PCIE2AXI_RAM_DED_ERR_INT GENMASK(11, 8) 47 - #define DED_ERROR_INT_AXI2PCIE_RAM_DED_ERR_INT GENMASK(15, 12) 48 - #define NUM_DED_ERROR_INTS (4) 49 - #define DED_ERROR_INT_MASK 0x34 50 - #define ECC_CONTROL 0x38 51 - #define ECC_CONTROL_TX_RAM_INJ_ERROR_0 BIT(0) 52 - #define ECC_CONTROL_TX_RAM_INJ_ERROR_1 BIT(1) 53 - #define ECC_CONTROL_TX_RAM_INJ_ERROR_2 BIT(2) 54 - #define ECC_CONTROL_TX_RAM_INJ_ERROR_3 BIT(3) 55 - #define ECC_CONTROL_RX_RAM_INJ_ERROR_0 BIT(4) 56 - #define ECC_CONTROL_RX_RAM_INJ_ERROR_1 BIT(5) 57 - #define ECC_CONTROL_RX_RAM_INJ_ERROR_2 BIT(6) 58 - #define ECC_CONTROL_RX_RAM_INJ_ERROR_3 BIT(7) 59 - #define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_0 BIT(8) 60 - #define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_1 BIT(9) 61 - #define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_2 BIT(10) 62 - #define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_3 BIT(11) 63 - #define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_0 BIT(12) 64 - #define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_1 BIT(13) 65 - #define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_2 BIT(14) 66 - #define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_3 BIT(15) 67 - #define ECC_CONTROL_TX_RAM_ECC_BYPASS BIT(24) 68 - #define ECC_CONTROL_RX_RAM_ECC_BYPASS BIT(25) 69 - #define ECC_CONTROL_PCIE2AXI_RAM_ECC_BYPASS BIT(26) 70 - #define ECC_CONTROL_AXI2PCIE_RAM_ECC_BYPASS BIT(27) 71 - #define LTSSM_STATE 0x5c 72 - #define LTSSM_L0_STATE 0x10 73 - #define PCIE_EVENT_INT 0x14c 74 - #define PCIE_EVENT_INT_L2_EXIT_INT BIT(0) 75 - #define PCIE_EVENT_INT_HOTRST_EXIT_INT BIT(1) 76 - #define PCIE_EVENT_INT_DLUP_EXIT_INT BIT(2) 77 - #define PCIE_EVENT_INT_MASK GENMASK(2, 0) 78 - #define PCIE_EVENT_INT_L2_EXIT_INT_MASK BIT(16) 79 - #define PCIE_EVENT_INT_HOTRST_EXIT_INT_MASK BIT(17) 80 - #define PCIE_EVENT_INT_DLUP_EXIT_INT_MASK BIT(18) 81 - #define PCIE_EVENT_INT_ENB_MASK GENMASK(18, 16) 82 - #define PCIE_EVENT_INT_ENB_SHIFT 16 83 - #define NUM_PCIE_EVENTS (3) 84 - 85 33 /* PCIe Bridge Phy Regs */ 86 - #define PCIE_PCI_IDS_DW1 0x9c 87 - 88 - /* PCIe Config space MSI capability structure */ 89 - #define MC_MSI_CAP_CTRL_OFFSET 0xe0u 90 - #define MC_MSI_MAX_Q_AVAIL (MC_NUM_MSI_IRQS_CODED << 1) 91 - #define MC_MSI_Q_SIZE (MC_NUM_MSI_IRQS_CODED << 4) 34 + #define PCIE_PCI_IRQ_DW0 0xa8 35 + #define MSIX_CAP_MASK BIT(31) 36 + #define NUM_MSI_MSGS_MASK GENMASK(6, 4) 37 + #define NUM_MSI_MSGS_SHIFT 4 92 38 93 39 #define IMASK_LOCAL 0x180 94 40 #define DMA_END_ENGINE_0_MASK 0x00000000u ··· 83 137 #define ISTATUS_LOCAL 0x184 84 138 #define IMASK_HOST 0x188 85 139 #define ISTATUS_HOST 0x18c 86 - #define MSI_ADDR 0x190 140 + #define IMSI_ADDR 0x190 87 141 #define ISTATUS_MSI 0x194 88 142 89 143 /* PCIe Master table init defines */ ··· 108 162 109 163 #define ATR_ENTRY_SIZE 32 110 164 165 + /* PCIe Controller Phy Regs */ 166 + #define SEC_ERROR_EVENT_CNT 0x20 167 + #define DED_ERROR_EVENT_CNT 0x24 168 + #define SEC_ERROR_INT 0x28 169 + #define SEC_ERROR_INT_TX_RAM_SEC_ERR_INT GENMASK(3, 0) 170 + #define SEC_ERROR_INT_RX_RAM_SEC_ERR_INT GENMASK(7, 4) 171 + #define SEC_ERROR_INT_PCIE2AXI_RAM_SEC_ERR_INT GENMASK(11, 8) 172 + #define SEC_ERROR_INT_AXI2PCIE_RAM_SEC_ERR_INT GENMASK(15, 12) 173 + #define SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT GENMASK(15, 0) 174 + #define NUM_SEC_ERROR_INTS (4) 175 + #define SEC_ERROR_INT_MASK 0x2c 176 + #define DED_ERROR_INT 0x30 177 + #define DED_ERROR_INT_TX_RAM_DED_ERR_INT GENMASK(3, 0) 178 + #define DED_ERROR_INT_RX_RAM_DED_ERR_INT GENMASK(7, 4) 179 + #define DED_ERROR_INT_PCIE2AXI_RAM_DED_ERR_INT GENMASK(11, 8) 180 + #define DED_ERROR_INT_AXI2PCIE_RAM_DED_ERR_INT GENMASK(15, 12) 181 + #define DED_ERROR_INT_ALL_RAM_DED_ERR_INT GENMASK(15, 0) 182 + #define NUM_DED_ERROR_INTS (4) 183 + #define DED_ERROR_INT_MASK 0x34 184 + #define ECC_CONTROL 0x38 185 + #define ECC_CONTROL_TX_RAM_INJ_ERROR_0 BIT(0) 186 + #define ECC_CONTROL_TX_RAM_INJ_ERROR_1 BIT(1) 187 + #define ECC_CONTROL_TX_RAM_INJ_ERROR_2 BIT(2) 188 + #define ECC_CONTROL_TX_RAM_INJ_ERROR_3 BIT(3) 189 + #define ECC_CONTROL_RX_RAM_INJ_ERROR_0 BIT(4) 190 + #define ECC_CONTROL_RX_RAM_INJ_ERROR_1 BIT(5) 191 + #define ECC_CONTROL_RX_RAM_INJ_ERROR_2 BIT(6) 192 + #define ECC_CONTROL_RX_RAM_INJ_ERROR_3 BIT(7) 193 + #define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_0 BIT(8) 194 + #define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_1 BIT(9) 195 + #define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_2 BIT(10) 196 + #define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_3 BIT(11) 197 + #define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_0 BIT(12) 198 + #define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_1 BIT(13) 199 + #define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_2 BIT(14) 200 + #define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_3 BIT(15) 201 + #define ECC_CONTROL_TX_RAM_ECC_BYPASS BIT(24) 202 + #define ECC_CONTROL_RX_RAM_ECC_BYPASS BIT(25) 203 + #define ECC_CONTROL_PCIE2AXI_RAM_ECC_BYPASS BIT(26) 204 + #define ECC_CONTROL_AXI2PCIE_RAM_ECC_BYPASS BIT(27) 205 + #define PCIE_EVENT_INT 0x14c 206 + #define PCIE_EVENT_INT_L2_EXIT_INT BIT(0) 207 + #define PCIE_EVENT_INT_HOTRST_EXIT_INT BIT(1) 208 + #define PCIE_EVENT_INT_DLUP_EXIT_INT BIT(2) 209 + #define PCIE_EVENT_INT_MASK GENMASK(2, 0) 210 + #define PCIE_EVENT_INT_L2_EXIT_INT_MASK BIT(16) 211 + #define PCIE_EVENT_INT_HOTRST_EXIT_INT_MASK BIT(17) 212 + #define PCIE_EVENT_INT_DLUP_EXIT_INT_MASK BIT(18) 213 + #define PCIE_EVENT_INT_ENB_MASK GENMASK(18, 16) 214 + #define PCIE_EVENT_INT_ENB_SHIFT 16 215 + #define NUM_PCIE_EVENTS (3) 216 + 217 + /* PCIe Config space MSI capability structure */ 218 + #define MC_MSI_CAP_CTRL_OFFSET 0xe0u 219 + 220 + /* Events */ 111 221 #define EVENT_PCIE_L2_EXIT 0 112 222 #define EVENT_PCIE_HOTRST_EXIT 1 113 223 #define EVENT_PCIE_DLUP_EXIT 2 114 224 #define EVENT_SEC_TX_RAM_SEC_ERR 3 115 225 #define EVENT_SEC_RX_RAM_SEC_ERR 4 116 - #define EVENT_SEC_AXI2PCIE_RAM_SEC_ERR 5 117 - #define EVENT_SEC_PCIE2AXI_RAM_SEC_ERR 6 226 + #define EVENT_SEC_PCIE2AXI_RAM_SEC_ERR 5 227 + #define EVENT_SEC_AXI2PCIE_RAM_SEC_ERR 6 118 228 #define EVENT_DED_TX_RAM_DED_ERR 7 119 229 #define EVENT_DED_RX_RAM_DED_ERR 8 120 - #define EVENT_DED_AXI2PCIE_RAM_DED_ERR 9 121 - #define EVENT_DED_PCIE2AXI_RAM_DED_ERR 10 230 + #define EVENT_DED_PCIE2AXI_RAM_DED_ERR 9 231 + #define EVENT_DED_AXI2PCIE_RAM_DED_ERR 10 122 232 #define EVENT_LOCAL_DMA_END_ENGINE_0 11 123 233 #define EVENT_LOCAL_DMA_END_ENGINE_1 12 124 234 #define EVENT_LOCAL_DMA_ERROR_ENGINE_0 13 ··· 261 259 struct irq_domain *dev_domain; 262 260 u32 num_vectors; 263 261 u64 vector_phy; 264 - DECLARE_BITMAP(used, MC_NUM_MSI_IRQS); 262 + DECLARE_BITMAP(used, MC_MAX_NUM_MSI_IRQS); 265 263 }; 266 265 267 265 struct mc_pcie { ··· 384 382 385 383 static char poss_clks[][5] = { "fic0", "fic1", "fic2", "fic3" }; 386 384 387 - static void mc_pcie_enable_msi(struct mc_pcie *port, void __iomem *base) 385 + static struct
mc_pcie *port; 386 + 387 + static void mc_pcie_enable_msi(struct mc_pcie *port, void __iomem *ecam) 388 388 { 389 389 struct mc_msi *msi = &port->msi; 390 - u32 cap_offset = MC_MSI_CAP_CTRL_OFFSET; 391 - u16 msg_ctrl = readw_relaxed(base + cap_offset + PCI_MSI_FLAGS); 390 + u16 reg; 391 + u8 queue_size; 392 392 393 - msg_ctrl |= PCI_MSI_FLAGS_ENABLE; 394 - msg_ctrl &= ~PCI_MSI_FLAGS_QMASK; 395 - msg_ctrl |= MC_MSI_MAX_Q_AVAIL; 396 - msg_ctrl &= ~PCI_MSI_FLAGS_QSIZE; 397 - msg_ctrl |= MC_MSI_Q_SIZE; 398 - msg_ctrl |= PCI_MSI_FLAGS_64BIT; 393 + /* Fixup MSI enable flag */ 394 + reg = readw_relaxed(ecam + MC_MSI_CAP_CTRL_OFFSET + PCI_MSI_FLAGS); 395 + reg |= PCI_MSI_FLAGS_ENABLE; 396 + writew_relaxed(reg, ecam + MC_MSI_CAP_CTRL_OFFSET + PCI_MSI_FLAGS); 399 397 400 - writew_relaxed(msg_ctrl, base + cap_offset + PCI_MSI_FLAGS); 398 + /* Fixup PCI MSI queue flags */ 399 + queue_size = FIELD_GET(PCI_MSI_FLAGS_QMASK, reg); 400 + reg |= FIELD_PREP(PCI_MSI_FLAGS_QSIZE, queue_size); 401 + writew_relaxed(reg, ecam + MC_MSI_CAP_CTRL_OFFSET + PCI_MSI_FLAGS); 401 402 403 + /* Fixup MSI addr fields */ 402 404 writel_relaxed(lower_32_bits(msi->vector_phy), 403 - base + cap_offset + PCI_MSI_ADDRESS_LO); 405 + ecam + MC_MSI_CAP_CTRL_OFFSET + PCI_MSI_ADDRESS_LO); 404 406 writel_relaxed(upper_32_bits(msi->vector_phy), 405 - base + cap_offset + PCI_MSI_ADDRESS_HI); 407 + ecam + MC_MSI_CAP_CTRL_OFFSET + PCI_MSI_ADDRESS_HI); 406 408 } 407 409 408 410 static void mc_handle_msi(struct irq_desc *desc) ··· 479 473 { 480 474 struct mc_pcie *port = domain->host_data; 481 475 struct mc_msi *msi = &port->msi; 482 - void __iomem *bridge_base_addr = 483 - port->axi_base_addr + MC_PCIE_BRIDGE_ADDR; 484 476 unsigned long bit; 485 - u32 val; 486 477 487 478 mutex_lock(&msi->lock); 488 479 bit = find_first_zero_bit(msi->used, msi->num_vectors); ··· 492 489 493 490 irq_domain_set_info(domain, virq, bit, &mc_msi_bottom_irq_chip, 494 491 domain->host_data, handle_edge_irq, NULL, NULL); 495 - 496 - /* 
Enable MSI interrupts */ 497 - val = readl_relaxed(bridge_base_addr + IMASK_LOCAL); 498 - val |= PM_MSI_INT_MSI_MASK; 499 - writel_relaxed(val, bridge_base_addr + IMASK_LOCAL); 500 492 501 493 mutex_unlock(&msi->lock); 502 494 ··· 654 656 return (reg & field.reg_mask) ? BIT(field.event_bit) : 0; 655 657 } 656 658 657 - static u32 pcie_events(void __iomem *addr) 659 + static u32 pcie_events(struct mc_pcie *port) 658 660 { 659 - u32 reg = readl_relaxed(addr); 661 + void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR; 662 + u32 reg = readl_relaxed(ctrl_base_addr + PCIE_EVENT_INT); 660 663 u32 val = 0; 661 664 int i; 662 665 ··· 667 668 return val; 668 669 } 669 670 670 - static u32 sec_errors(void __iomem *addr) 671 + static u32 sec_errors(struct mc_pcie *port) 671 672 { 672 - u32 reg = readl_relaxed(addr); 673 + void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR; 674 + u32 reg = readl_relaxed(ctrl_base_addr + SEC_ERROR_INT); 673 675 u32 val = 0; 674 676 int i; 675 677 ··· 680 680 return val; 681 681 } 682 682 683 - static u32 ded_errors(void __iomem *addr) 683 + static u32 ded_errors(struct mc_pcie *port) 684 684 { 685 - u32 reg = readl_relaxed(addr); 685 + void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR; 686 + u32 reg = readl_relaxed(ctrl_base_addr + DED_ERROR_INT); 686 687 u32 val = 0; 687 688 int i; 688 689 ··· 693 692 return val; 694 693 } 695 694 696 - static u32 local_events(void __iomem *addr) 695 + static u32 local_events(struct mc_pcie *port) 697 696 { 698 - u32 reg = readl_relaxed(addr); 697 + void __iomem *bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR; 698 + u32 reg = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL); 699 699 u32 val = 0; 700 700 int i; 701 701 ··· 708 706 709 707 static u32 get_events(struct mc_pcie *port) 710 708 { 711 - void __iomem *bridge_base_addr = 712 - port->axi_base_addr + MC_PCIE_BRIDGE_ADDR; 713 - void __iomem *ctrl_base_addr = port->axi_base_addr + 
MC_PCIE_CTRL_ADDR; 714 709 u32 events = 0; 715 710 716 - events |= pcie_events(ctrl_base_addr + PCIE_EVENT_INT); 717 - events |= sec_errors(ctrl_base_addr + SEC_ERROR_INT); 718 - events |= ded_errors(ctrl_base_addr + DED_ERROR_INT); 719 - events |= local_events(bridge_base_addr + ISTATUS_LOCAL); 711 + events |= pcie_events(port); 712 + events |= sec_errors(port); 713 + events |= ded_errors(port); 714 + events |= local_events(port); 720 715 721 716 return events; 722 717 } ··· 847 848 .map = mc_pcie_event_map, 848 849 }; 849 850 851 + static inline void mc_pcie_deinit_clk(void *data) 852 + { 853 + struct clk *clk = data; 854 + 855 + clk_disable_unprepare(clk); 856 + } 857 + 850 858 static inline struct clk *mc_pcie_init_clk(struct device *dev, const char *id) 851 859 { 852 860 struct clk *clk; ··· 869 863 if (ret) 870 864 return ERR_PTR(ret); 871 865 872 - devm_add_action_or_reset(dev, (void (*) (void *))clk_disable_unprepare, 873 - clk); 866 + devm_add_action_or_reset(dev, mc_pcie_deinit_clk, clk); 874 867 875 868 return clk; 876 869 } ··· 992 987 return 0; 993 988 } 994 989 995 - static int mc_platform_init(struct pci_config_window *cfg) 990 + static inline void mc_clear_secs(struct mc_pcie *port) 996 991 { 997 - struct device *dev = cfg->parent; 998 - struct platform_device *pdev = to_platform_device(dev); 999 - struct mc_pcie *port; 1000 - void __iomem *bridge_base_addr; 1001 - void __iomem *ctrl_base_addr; 1002 - int ret; 992 + void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR; 993 + 994 + writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT, ctrl_base_addr + 995 + SEC_ERROR_INT); 996 + writel_relaxed(0, ctrl_base_addr + SEC_ERROR_EVENT_CNT); 997 + } 998 + 999 + static inline void mc_clear_deds(struct mc_pcie *port) 1000 + { 1001 + void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR; 1002 + 1003 + writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT, ctrl_base_addr + 1004 + DED_ERROR_INT); 1005 + writel_relaxed(0, ctrl_base_addr 
+ DED_ERROR_EVENT_CNT); 1006 + } 1007 + 1008 + static void mc_disable_interrupts(struct mc_pcie *port) 1009 + { 1010 + void __iomem *bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR; 1011 + void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR; 1012 + u32 val; 1013 + 1014 + /* Ensure ECC bypass is enabled */ 1015 + val = ECC_CONTROL_TX_RAM_ECC_BYPASS | 1016 + ECC_CONTROL_RX_RAM_ECC_BYPASS | 1017 + ECC_CONTROL_PCIE2AXI_RAM_ECC_BYPASS | 1018 + ECC_CONTROL_AXI2PCIE_RAM_ECC_BYPASS; 1019 + writel_relaxed(val, ctrl_base_addr + ECC_CONTROL); 1020 + 1021 + /* Disable SEC errors and clear any outstanding */ 1022 + writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT, ctrl_base_addr + 1023 + SEC_ERROR_INT_MASK); 1024 + mc_clear_secs(port); 1025 + 1026 + /* Disable DED errors and clear any outstanding */ 1027 + writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT, ctrl_base_addr + 1028 + DED_ERROR_INT_MASK); 1029 + mc_clear_deds(port); 1030 + 1031 + /* Disable local interrupts and clear any outstanding */ 1032 + writel_relaxed(0, bridge_base_addr + IMASK_LOCAL); 1033 + writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_LOCAL); 1034 + writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_MSI); 1035 + 1036 + /* Disable PCIe events and clear any outstanding */ 1037 + val = PCIE_EVENT_INT_L2_EXIT_INT | 1038 + PCIE_EVENT_INT_HOTRST_EXIT_INT | 1039 + PCIE_EVENT_INT_DLUP_EXIT_INT | 1040 + PCIE_EVENT_INT_L2_EXIT_INT_MASK | 1041 + PCIE_EVENT_INT_HOTRST_EXIT_INT_MASK | 1042 + PCIE_EVENT_INT_DLUP_EXIT_INT_MASK; 1043 + writel_relaxed(val, ctrl_base_addr + PCIE_EVENT_INT); 1044 + 1045 + /* Disable host interrupts and clear any outstanding */ 1046 + writel_relaxed(0, bridge_base_addr + IMASK_HOST); 1047 + writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_HOST); 1048 + } 1049 + 1050 + static int mc_init_interrupts(struct platform_device *pdev, struct mc_pcie *port) 1051 + { 1052 + struct device *dev = &pdev->dev; 1003 1053 int irq; 1004 1054 int i, 
intx_irq, msi_irq, event_irq; 1005 - u32 val; 1006 - int err; 1055 + int ret; 1007 1056 1008 - port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL); 1009 - if (!port) 1010 - return -ENOMEM; 1011 - port->dev = dev; 1012 - 1013 - ret = mc_pcie_init_clks(dev); 1014 - if (ret) { 1015 - dev_err(dev, "failed to get clock resources, error %d\n", ret); 1016 - return -ENODEV; 1017 - } 1018 - 1019 - port->axi_base_addr = devm_platform_ioremap_resource(pdev, 1); 1020 - if (IS_ERR(port->axi_base_addr)) 1021 - return PTR_ERR(port->axi_base_addr); 1022 - 1023 - bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR; 1024 - ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR; 1025 - 1026 - port->msi.vector_phy = MSI_ADDR; 1027 - port->msi.num_vectors = MC_NUM_MSI_IRQS; 1028 1057 ret = mc_pcie_init_irq_domains(port); 1029 1058 if (ret) { 1030 1059 dev_err(dev, "failed creating IRQ domains\n"); ··· 1076 1037 return -ENXIO; 1077 1038 } 1078 1039 1079 - err = devm_request_irq(dev, event_irq, mc_event_handler, 1040 + ret = devm_request_irq(dev, event_irq, mc_event_handler, 1080 1041 0, event_cause[i].sym, port); 1081 - if (err) { 1042 + if (ret) { 1082 1043 dev_err(dev, "failed to request IRQ %d\n", event_irq); 1083 - return err; 1044 + return ret; 1084 1045 } 1085 1046 } 1086 1047 ··· 1105 1066 /* Plug the main event chained handler */ 1106 1067 irq_set_chained_handler_and_data(irq, mc_handle_event, port); 1107 1068 1108 - /* Hardware doesn't setup MSI by default */ 1069 + return 0; 1070 + } 1071 + 1072 + static int mc_platform_init(struct pci_config_window *cfg) 1073 + { 1074 + struct device *dev = cfg->parent; 1075 + struct platform_device *pdev = to_platform_device(dev); 1076 + void __iomem *bridge_base_addr = 1077 + port->axi_base_addr + MC_PCIE_BRIDGE_ADDR; 1078 + int ret; 1079 + 1080 + /* Configure address translation table 0 for PCIe config space */ 1081 + mc_pcie_setup_window(bridge_base_addr, 0, cfg->res.start, 1082 + cfg->res.start, 1083 + 
resource_size(&cfg->res)); 1084 + 1085 + /* Need some fixups in config space */ 1109 1086 mc_pcie_enable_msi(port, cfg->win); 1110 1087 1111 - val = readl_relaxed(bridge_base_addr + IMASK_LOCAL); 1112 - val |= PM_MSI_INT_INTX_MASK; 1113 - writel_relaxed(val, bridge_base_addr + IMASK_LOCAL); 1088 + /* Configure non-config space outbound ranges */ 1089 + ret = mc_pcie_setup_windows(pdev, port); 1090 + if (ret) 1091 + return ret; 1114 1092 1115 - writel_relaxed(val, ctrl_base_addr + ECC_CONTROL); 1093 + /* Address translation is up; safe to enable interrupts */ 1094 + ret = mc_init_interrupts(pdev, port); 1095 + if (ret) 1096 + return ret; 1116 1097 1117 - val = PCIE_EVENT_INT_L2_EXIT_INT | 1118 - PCIE_EVENT_INT_HOTRST_EXIT_INT | 1119 - PCIE_EVENT_INT_DLUP_EXIT_INT; 1120 - writel_relaxed(val, ctrl_base_addr + PCIE_EVENT_INT); 1098 + return 0; 1099 + } 1121 1100 1122 - val = SEC_ERROR_INT_TX_RAM_SEC_ERR_INT | 1123 - SEC_ERROR_INT_RX_RAM_SEC_ERR_INT | 1124 - SEC_ERROR_INT_PCIE2AXI_RAM_SEC_ERR_INT | 1125 - SEC_ERROR_INT_AXI2PCIE_RAM_SEC_ERR_INT; 1126 - writel_relaxed(val, ctrl_base_addr + SEC_ERROR_INT); 1127 - writel_relaxed(0, ctrl_base_addr + SEC_ERROR_INT_MASK); 1128 - writel_relaxed(0, ctrl_base_addr + SEC_ERROR_CNT); 1101 + static int mc_host_probe(struct platform_device *pdev) 1102 + { 1103 + struct device *dev = &pdev->dev; 1104 + void __iomem *bridge_base_addr; 1105 + int ret; 1106 + u32 val; 1129 1107 1130 - val = DED_ERROR_INT_TX_RAM_DED_ERR_INT | 1131 - DED_ERROR_INT_RX_RAM_DED_ERR_INT | 1132 - DED_ERROR_INT_PCIE2AXI_RAM_DED_ERR_INT | 1133 - DED_ERROR_INT_AXI2PCIE_RAM_DED_ERR_INT; 1134 - writel_relaxed(val, ctrl_base_addr + DED_ERROR_INT); 1135 - writel_relaxed(0, ctrl_base_addr + DED_ERROR_INT_MASK); 1136 - writel_relaxed(0, ctrl_base_addr + DED_ERROR_CNT); 1108 + port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL); 1109 + if (!port) 1110 + return -ENOMEM; 1137 1111 1138 - writel_relaxed(0, bridge_base_addr + IMASK_HOST); 1139 - writel_relaxed(GENMASK(31, 
0), bridge_base_addr + ISTATUS_HOST); 1112 + port->dev = dev; 1140 1113 1141 - /* Configure Address Translation Table 0 for PCIe config space */ 1142 - mc_pcie_setup_window(bridge_base_addr, 0, cfg->res.start & 0xffffffff, 1143 - cfg->res.start, resource_size(&cfg->res)); 1114 + port->axi_base_addr = devm_platform_ioremap_resource(pdev, 1); 1115 + if (IS_ERR(port->axi_base_addr)) 1116 + return PTR_ERR(port->axi_base_addr); 1144 1117 1145 - return mc_pcie_setup_windows(pdev, port); 1118 + mc_disable_interrupts(port); 1119 + 1120 + bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR; 1121 + 1122 + /* Allow enabling MSI by disabling MSI-X */ 1123 + val = readl(bridge_base_addr + PCIE_PCI_IRQ_DW0); 1124 + val &= ~MSIX_CAP_MASK; 1125 + writel(val, bridge_base_addr + PCIE_PCI_IRQ_DW0); 1126 + 1127 + /* Pick num vectors from bitfile programmed onto FPGA fabric */ 1128 + val = readl(bridge_base_addr + PCIE_PCI_IRQ_DW0); 1129 + val &= NUM_MSI_MSGS_MASK; 1130 + val >>= NUM_MSI_MSGS_SHIFT; 1131 + 1132 + port->msi.num_vectors = 1 << val; 1133 + 1134 + /* Pick vector address from design */ 1135 + port->msi.vector_phy = readl_relaxed(bridge_base_addr + IMSI_ADDR); 1136 + 1137 + ret = mc_pcie_init_clks(dev); 1138 + if (ret) { 1139 + dev_err(dev, "failed to get clock resources, error %d\n", ret); 1140 + return -ENODEV; 1141 + } 1142 + 1143 + return pci_host_common_probe(pdev); 1146 1144 } 1147 1145 1148 1146 static const struct pci_ecam_ops mc_ecam_ops = { ··· 1202 1126 MODULE_DEVICE_TABLE(of, mc_pcie_of_match); 1203 1127 1204 1128 static struct platform_driver mc_pcie_driver = { 1205 - .probe = pci_host_common_probe, 1129 + .probe = mc_host_probe, 1206 1130 .driver = { 1207 1131 .name = "microchip-pcie", 1208 1132 .of_match_table = mc_pcie_of_match, ··· 1211 1135 }; 1212 1136 1213 1137 builtin_platform_driver(mc_pcie_driver); 1138 + MODULE_LICENSE("GPL"); 1214 1139 MODULE_DESCRIPTION("Microchip PCIe host controller driver"); 1215 1140 MODULE_AUTHOR("Daire McNamara 
<daire.mcnamara@microchip.com>");
+1 -3
drivers/pci/controller/pcie-rockchip-host.c
···
 #include <linux/kernel.h>
 #include <linux/mfd/syscon.h>
 #include <linux/module.h>
-#include <linux/of_address.h>
-#include <linux/of_device.h>
+#include <linux/of.h>
 #include <linux/of_pci.h>
-#include <linux/of_platform.h>
 #include <linux/pci.h>
 #include <linux/pci_ids.h>
 #include <linux/phy/phy.h>
+1
drivers/pci/controller/pcie-rockchip.c
···
 #include <linux/delay.h>
 #include <linux/gpio/consumer.h>
 #include <linux/iopoll.h>
+#include <linux/of.h>
 #include <linux/of_pci.h>
 #include <linux/phy/phy.h>
 #include <linux/platform_device.h>
+3 -3
drivers/pci/controller/pcie-rockchip.h
···
 #define PCIE_RC_CONFIG_THP_CAP (PCIE_RC_CONFIG_BASE + 0x274)
 #define PCIE_RC_CONFIG_THP_CAP_NEXT_MASK GENMASK(31, 20)

-#define PCIE_ADDR_MASK 0xffffff00
+#define MAX_AXI_IB_ROOTPORT_REGION_NUM 3
+#define MIN_AXI_ADDR_BITS_PASSED 8
+#define PCIE_ADDR_MASK GENMASK_ULL(63, MIN_AXI_ADDR_BITS_PASSED)
 #define PCIE_CORE_AXI_CONF_BASE 0xc00000
 #define PCIE_CORE_OB_REGION_ADDR0 (PCIE_CORE_AXI_CONF_BASE + 0x0)
 #define PCIE_CORE_OB_REGION_ADDR0_NUM_BITS 0x3f
···
 #define AXI_WRAPPER_TYPE1_CFG 0xb
 #define AXI_WRAPPER_NOR_MSG 0xc

-#define MAX_AXI_IB_ROOTPORT_REGION_NUM 3
-#define MIN_AXI_ADDR_BITS_PASSED 8
 #define PCIE_RC_SEND_PME_OFF 0x11960
 #define ROCKCHIP_VENDOR_ID 0x1d87
 #define PCIE_LINK_IS_L2(x) \
+17 -2
drivers/pci/controller/vmd.c
···
                                     PCI_CLASS_BRIDGE_PCI))
                        continue;

-               memset_io(base + PCI_IO_BASE, 0,
-                         PCI_ROM_ADDRESS1 - PCI_IO_BASE);
+               /*
+                * Temporarily disable the I/O range before updating
+                * PCI_IO_BASE.
+                */
+               writel(0x0000ffff, base + PCI_IO_BASE_UPPER16);
+               /* Update lower 16 bits of I/O base/limit */
+               writew(0x00f0, base + PCI_IO_BASE);
+               /* Update upper 16 bits of I/O base/limit */
+               writel(0, base + PCI_IO_BASE_UPPER16);
+
+               /* MMIO Base/Limit */
+               writel(0x0000fff0, base + PCI_MEMORY_BASE);
+
+               /* Prefetchable MMIO Base/Limit */
+               writel(0, base + PCI_PREF_LIMIT_UPPER32);
+               writel(0x0000fff0, base + PCI_PREF_MEMORY_BASE);
+               writel(0xffffffff, base + PCI_PREF_BASE_UPPER32);
        }
 }
 }
+1 -1
drivers/pci/doe.c
···
 static void signal_task_complete(struct pci_doe_task *task, int rv)
 {
        task->rv = rv;
-       task->complete(task);
        destroy_work_on_stack(&task->work);
+       task->complete(task);
 }

 static void signal_task_abort(struct pci_doe_task *task, int rv)
+270 -16
drivers/pci/endpoint/functions/pci-epf-mhi.c
···
 * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
 */

+#include <linux/dmaengine.h>
 #include <linux/mhi_ep.h>
 #include <linux/module.h>
+#include <linux/of_dma.h>
 #include <linux/platform_device.h>
 #include <linux/pci-epc.h>
 #include <linux/pci-epf.h>
···

 #define to_epf_mhi(cntrl) container_of(cntrl, struct pci_epf_mhi, cntrl)

+/* Platform specific flags */
+#define MHI_EPF_USE_DMA BIT(0)
+
 struct pci_epf_mhi_ep_info {
        const struct mhi_ep_cntrl_config *config;
        struct pci_epf_header *epf_header;
···
        u32 epf_flags;
        u32 msi_count;
        u32 mru;
+       u32 flags;
 };

 #define MHI_EP_CHANNEL_CONFIG(ch_num, ch_name, direction) \
···
        .mru = 0x8000,
 };

+static struct pci_epf_header sm8450_header = {
+       .vendorid = PCI_VENDOR_ID_QCOM,
+       .deviceid = 0x0306,
+       .baseclass_code = PCI_CLASS_OTHERS,
+       .interrupt_pin = PCI_INTERRUPT_INTA,
+};
+
+static const struct pci_epf_mhi_ep_info sm8450_info = {
+       .config = &mhi_v1_config,
+       .epf_header = &sm8450_header,
+       .bar_num = BAR_0,
+       .epf_flags = PCI_BASE_ADDRESS_MEM_TYPE_32,
+       .msi_count = 32,
+       .mru = 0x8000,
+       .flags = MHI_EPF_USE_DMA,
+};
+
 struct pci_epf_mhi {
+       const struct pci_epc_features *epc_features;
        const struct pci_epf_mhi_ep_info *info;
        struct mhi_ep_cntrl mhi_cntrl;
        struct pci_epf *epf;
        struct mutex lock;
        void __iomem *mmio;
        resource_size_t mmio_phys;
+       struct dma_chan *dma_chan_tx;
+       struct dma_chan *dma_chan_rx;
        u32 mmio_size;
        int irq;
 };
+
+static size_t get_align_offset(struct pci_epf_mhi *epf_mhi, u64 addr)
+{
+       return addr & (epf_mhi->epc_features->align - 1);
+}

 static int __pci_epf_mhi_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr,
                                   phys_addr_t *paddr, void __iomem **vaddr,
···
                                   size_t size)
 {
        struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
-       struct pci_epc *epc = epf_mhi->epf->epc;
-       size_t offset = pci_addr & (epc->mem->window.page_size - 1);
+       size_t offset = get_align_offset(epf_mhi, pci_addr);

        return __pci_epf_mhi_alloc_map(mhi_cntrl, pci_addr, paddr, vaddr,
                                       offset, size);
···
                                 size_t size)
 {
        struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
-       struct pci_epf *epf = epf_mhi->epf;
-       struct pci_epc *epc = epf->epc;
-       size_t offset = pci_addr & (epc->mem->window.page_size - 1);
+       size_t offset = get_align_offset(epf_mhi, pci_addr);

        __pci_epf_mhi_unmap_free(mhi_cntrl, pci_addr, paddr, vaddr, offset,
                                 size);
···
                          vector + 1);
 }

-static int pci_epf_mhi_read_from_host(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
-                                     void *to, size_t size)
+static int pci_epf_mhi_iatu_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
+                                void *to, size_t size)
 {
        struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
-       size_t offset = from % SZ_4K;
+       size_t offset = get_align_offset(epf_mhi, from);
        void __iomem *tre_buf;
        phys_addr_t tre_phys;
        int ret;
···
        return 0;
 }

-static int pci_epf_mhi_write_to_host(struct mhi_ep_cntrl *mhi_cntrl,
-                                    void *from, u64 to, size_t size)
+static int pci_epf_mhi_iatu_write(struct mhi_ep_cntrl *mhi_cntrl,
+                                 void *from, u64 to, size_t size)
 {
        struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
-       size_t offset = to % SZ_4K;
+       size_t offset = get_align_offset(epf_mhi, to);
        void __iomem *tre_buf;
        phys_addr_t tre_phys;
        int ret;
···
        mutex_unlock(&epf_mhi->lock);

        return 0;
+}
+
+static void pci_epf_mhi_dma_callback(void *param)
+{
+       complete(param);
+}
+
+static int pci_epf_mhi_edma_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
+                                void *to, size_t size)
+{
+       struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+       struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
+       struct dma_chan *chan = epf_mhi->dma_chan_rx;
+       struct device *dev = &epf_mhi->epf->dev;
+       DECLARE_COMPLETION_ONSTACK(complete);
+       struct dma_async_tx_descriptor *desc;
+       struct dma_slave_config config = {};
+       dma_cookie_t cookie;
+       dma_addr_t dst_addr;
+       int ret;
+
+       if (size < SZ_4K)
+               return pci_epf_mhi_iatu_read(mhi_cntrl, from, to, size);
+
+       mutex_lock(&epf_mhi->lock);
+
+       config.direction = DMA_DEV_TO_MEM;
+       config.src_addr = from;
+
+       ret = dmaengine_slave_config(chan, &config);
+       if (ret) {
+               dev_err(dev, "Failed to configure DMA channel\n");
+               goto err_unlock;
+       }
+
+       dst_addr = dma_map_single(dma_dev, to, size, DMA_FROM_DEVICE);
+       ret = dma_mapping_error(dma_dev, dst_addr);
+       if (ret) {
+               dev_err(dev, "Failed to map remote memory\n");
+               goto err_unlock;
+       }
+
+       desc = dmaengine_prep_slave_single(chan, dst_addr, size, DMA_DEV_TO_MEM,
+                                          DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
+       if (!desc) {
+               dev_err(dev, "Failed to prepare DMA\n");
+               ret = -EIO;
+               goto err_unmap;
+       }
+
+       desc->callback = pci_epf_mhi_dma_callback;
+       desc->callback_param = &complete;
+
+       cookie = dmaengine_submit(desc);
+       ret = dma_submit_error(cookie);
+       if (ret) {
+               dev_err(dev, "Failed to do DMA submit\n");
+               goto err_unmap;
+       }
+
+       dma_async_issue_pending(chan);
+       ret = wait_for_completion_timeout(&complete, msecs_to_jiffies(1000));
+       if (!ret) {
+               dev_err(dev, "DMA transfer timeout\n");
+               dmaengine_terminate_sync(chan);
+               ret = -ETIMEDOUT;
+       }
+
+err_unmap:
+       dma_unmap_single(dma_dev, dst_addr, size, DMA_FROM_DEVICE);
+err_unlock:
+       mutex_unlock(&epf_mhi->lock);
+
+       return ret;
+}
+
+static int pci_epf_mhi_edma_write(struct mhi_ep_cntrl *mhi_cntrl, void *from,
+                                 u64 to, size_t size)
+{
+       struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+       struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
+       struct dma_chan *chan = epf_mhi->dma_chan_tx;
+       struct device *dev = &epf_mhi->epf->dev;
+       DECLARE_COMPLETION_ONSTACK(complete);
+       struct dma_async_tx_descriptor *desc;
+       struct dma_slave_config config = {};
+       dma_cookie_t cookie;
+       dma_addr_t src_addr;
+       int ret;
+
+       if (size < SZ_4K)
+               return pci_epf_mhi_iatu_write(mhi_cntrl, from, to, size);
+
+       mutex_lock(&epf_mhi->lock);
+
+       config.direction = DMA_MEM_TO_DEV;
+       config.dst_addr = to;
+
+       ret = dmaengine_slave_config(chan, &config);
+       if (ret) {
+               dev_err(dev, "Failed to configure DMA channel\n");
+               goto err_unlock;
+       }
+
+       src_addr = dma_map_single(dma_dev, from, size, DMA_TO_DEVICE);
+       ret = dma_mapping_error(dma_dev, src_addr);
+       if (ret) {
+               dev_err(dev, "Failed to map remote memory\n");
+               goto err_unlock;
+       }
+
+       desc = dmaengine_prep_slave_single(chan, src_addr, size, DMA_MEM_TO_DEV,
+                                          DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
+       if (!desc) {
+               dev_err(dev, "Failed to prepare DMA\n");
+               ret = -EIO;
+               goto err_unmap;
+       }
+
+       desc->callback = pci_epf_mhi_dma_callback;
+       desc->callback_param = &complete;
+
+       cookie = dmaengine_submit(desc);
+       ret = dma_submit_error(cookie);
+       if (ret) {
+               dev_err(dev, "Failed to do DMA submit\n");
+               goto err_unmap;
+       }
+
+       dma_async_issue_pending(chan);
+       ret = wait_for_completion_timeout(&complete, msecs_to_jiffies(1000));
+       if (!ret) {
+               dev_err(dev, "DMA transfer timeout\n");
+               dmaengine_terminate_sync(chan);
+               ret = -ETIMEDOUT;
+       }
+
+err_unmap:
+       dma_unmap_single(dma_dev, src_addr, size, DMA_TO_DEVICE);
+err_unlock:
+       mutex_unlock(&epf_mhi->lock);
+
+       return ret;
+}
+
+struct epf_dma_filter {
+       struct device *dev;
+       u32 dma_mask;
+};
+
+static bool pci_epf_mhi_filter(struct dma_chan *chan, void *node)
+{
+       struct epf_dma_filter *filter = node;
+       struct dma_slave_caps caps;
+
+       memset(&caps, 0, sizeof(caps));
+       dma_get_slave_caps(chan, &caps);
+
+       return chan->device->dev == filter->dev && filter->dma_mask &
+              caps.directions;
+}
+
+static int pci_epf_mhi_dma_init(struct pci_epf_mhi *epf_mhi)
+{
+       struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
+       struct device *dev = &epf_mhi->epf->dev;
+       struct epf_dma_filter filter;
+       dma_cap_mask_t mask;
+
+       dma_cap_zero(mask);
+       dma_cap_set(DMA_SLAVE, mask);
+
+       filter.dev = dma_dev;
+       filter.dma_mask = BIT(DMA_MEM_TO_DEV);
+       epf_mhi->dma_chan_tx = dma_request_channel(mask, pci_epf_mhi_filter,
+                                                  &filter);
+       if (IS_ERR_OR_NULL(epf_mhi->dma_chan_tx)) {
+               dev_err(dev, "Failed to request tx channel\n");
+               return -ENODEV;
+       }
+
+       filter.dma_mask = BIT(DMA_DEV_TO_MEM);
+       epf_mhi->dma_chan_rx = dma_request_channel(mask, pci_epf_mhi_filter,
+                                                  &filter);
+       if (IS_ERR_OR_NULL(epf_mhi->dma_chan_rx)) {
+               dev_err(dev, "Failed to request rx channel\n");
+               dma_release_channel(epf_mhi->dma_chan_tx);
+               epf_mhi->dma_chan_tx = NULL;
+               return -ENODEV;
+       }
+
+       return 0;
+}
+
+static void pci_epf_mhi_dma_deinit(struct pci_epf_mhi *epf_mhi)
+{
+       dma_release_channel(epf_mhi->dma_chan_tx);
+       dma_release_channel(epf_mhi->dma_chan_rx);
+       epf_mhi->dma_chan_tx = NULL;
+       epf_mhi->dma_chan_rx = NULL;
 }

 static int pci_epf_mhi_core_init(struct pci_epf *epf)
···
                return ret;
        }

+       epf_mhi->epc_features = pci_epc_get_features(epc, epf->func_no, epf->vfunc_no);
+       if (!epf_mhi->epc_features)
+               return -ENODATA;
+
        return 0;
 }
···
        struct device *dev = &epf->dev;
        int ret;

+       if (info->flags & MHI_EPF_USE_DMA) {
+               ret = pci_epf_mhi_dma_init(epf_mhi);
+               if (ret) {
+                       dev_err(dev, "Failed to initialize DMA: %d\n", ret);
+                       return ret;
+               }
+       }
+
        mhi_cntrl->mmio = epf_mhi->mmio;
        mhi_cntrl->irq = epf_mhi->irq;
        mhi_cntrl->mru = info->mru;
···
        mhi_cntrl->raise_irq = pci_epf_mhi_raise_irq;
        mhi_cntrl->alloc_map = pci_epf_mhi_alloc_map;
        mhi_cntrl->unmap_free = pci_epf_mhi_unmap_free;
-       mhi_cntrl->read_from_host = pci_epf_mhi_read_from_host;
-       mhi_cntrl->write_to_host = pci_epf_mhi_write_to_host;
+       if (info->flags & MHI_EPF_USE_DMA) {
+               mhi_cntrl->read_from_host = pci_epf_mhi_edma_read;
+               mhi_cntrl->write_to_host = pci_epf_mhi_edma_write;
+       } else {
+               mhi_cntrl->read_from_host = pci_epf_mhi_iatu_read;
+               mhi_cntrl->write_to_host = pci_epf_mhi_iatu_write;
+       }

        /* Register the MHI EP controller */
        ret = mhi_ep_register_controller(mhi_cntrl, info->config);
        if (ret) {
                dev_err(dev, "Failed to register MHI EP controller: %d\n", ret);
+               if (info->flags & MHI_EPF_USE_DMA)
+                       pci_epf_mhi_dma_deinit(epf_mhi);
                return ret;
        }
···
 static int pci_epf_mhi_link_down(struct pci_epf *epf)
 {
        struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
+       const struct pci_epf_mhi_ep_info *info = epf_mhi->info;
        struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl;

        if (mhi_cntrl->mhi_dev) {
                mhi_ep_power_down(mhi_cntrl);
+               if (info->flags & MHI_EPF_USE_DMA)
+                       pci_epf_mhi_dma_deinit(epf_mhi);
                mhi_ep_unregister_controller(mhi_cntrl);
        }
···
 static int pci_epf_mhi_bme(struct pci_epf *epf)
 {
        struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
+       const struct pci_epf_mhi_ep_info *info = epf_mhi->info;
        struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl;
        struct device *dev = &epf->dev;
        int ret;
···
        ret = mhi_ep_power_up(mhi_cntrl);
        if (ret) {
                dev_err(dev, "Failed to power up MHI EP: %d\n", ret);
+               if (info->flags & MHI_EPF_USE_DMA)
+                       pci_epf_mhi_dma_deinit(epf_mhi);
                mhi_ep_unregister_controller(mhi_cntrl);
        }
 }
···
         */
        if (mhi_cntrl->mhi_dev) {
                mhi_ep_power_down(mhi_cntrl);
+               if (info->flags & MHI_EPF_USE_DMA)
+                       pci_epf_mhi_dma_deinit(epf_mhi);
                mhi_ep_unregister_controller(mhi_cntrl);
        }
···
 }

 static const struct pci_epf_device_id pci_epf_mhi_ids[] = {
-       {
-               .name = "sdx55", .driver_data = (kernel_ulong_t)&sdx55_info,
-       },
+       { .name = "sdx55", .driver_data = (kernel_ulong_t)&sdx55_info },
+       { .name = "sm8450", .driver_data = (kernel_ulong_t)&sm8450_info },
        {},
 };
+16 -16
drivers/pci/endpoint/functions/pci-epf-vntb.c
··· 986 986 /*==== virtual PCI bus driver, which only load virtual NTB PCI driver ====*/ 987 987 988 988 static u32 pci_space[] = { 989 - 0xffffffff, /*DeviceID, Vendor ID*/ 990 - 0, /*Status, Command*/ 991 - 0xffffffff, /*Class code, subclass, prog if, revision id*/ 992 - 0x40, /*bist, header type, latency Timer, cache line size*/ 993 - 0, /*BAR 0*/ 994 - 0, /*BAR 1*/ 995 - 0, /*BAR 2*/ 996 - 0, /*BAR 3*/ 997 - 0, /*BAR 4*/ 998 - 0, /*BAR 5*/ 999 - 0, /*Cardbus cis point*/ 1000 - 0, /*Subsystem ID Subystem vendor id*/ 1001 - 0, /*ROM Base Address*/ 1002 - 0, /*Reserved, Cap. Point*/ 1003 - 0, /*Reserved,*/ 1004 - 0, /*Max Lat, Min Gnt, interrupt pin, interrupt line*/ 989 + 0xffffffff, /* Device ID, Vendor ID */ 990 + 0, /* Status, Command */ 991 + 0xffffffff, /* Base Class, Subclass, Prog Intf, Revision ID */ 992 + 0x40, /* BIST, Header Type, Latency Timer, Cache Line Size */ 993 + 0, /* BAR 0 */ 994 + 0, /* BAR 1 */ 995 + 0, /* BAR 2 */ 996 + 0, /* BAR 3 */ 997 + 0, /* BAR 4 */ 998 + 0, /* BAR 5 */ 999 + 0, /* Cardbus CIS Pointer */ 1000 + 0, /* Subsystem ID, Subsystem Vendor ID */ 1001 + 0, /* ROM Base Address */ 1002 + 0, /* Reserved, Capabilities Pointer */ 1003 + 0, /* Reserved */ 1004 + 0, /* Max_Lat, Min_Gnt, Interrupt Pin, Interrupt Line */ 1005 1005 }; 1006 1006 1007 1007 static int pci_read(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 *val)
-1
drivers/pci/endpoint/pci-epc-core.c
··· 9 9 #include <linux/device.h> 10 10 #include <linux/slab.h> 11 11 #include <linux/module.h> 12 - #include <linux/of_device.h> 13 12 14 13 #include <linux/pci-epc.h> 15 14 #include <linux/pci-epf.h>
+10
drivers/pci/endpoint/pci-epc-mem.c
··· 115 115 } 116 116 EXPORT_SYMBOL_GPL(pci_epc_multi_mem_init); 117 117 118 + /** 119 + * pci_epc_mem_init() - Initialize the pci_epc_mem structure 120 + * @epc: the EPC device that invoked pci_epc_mem_init 121 + * @base: Physical address of the window region 122 + * @size: Total Size of the window region 123 + * @page_size: Page size of the window region 124 + * 125 + * Invoke to initialize a single pci_epc_mem structure used by the 126 + * endpoint functions to allocate memory for mapping the PCI host memory 127 + */ 118 128 int pci_epc_mem_init(struct pci_epc *epc, phys_addr_t base, 119 129 size_t size, size_t page_size) 120 130 {
-1
drivers/pci/hotplug/acpiphp.h
··· 178 178 int acpiphp_enable_slot(struct acpiphp_slot *slot); 179 179 int acpiphp_disable_slot(struct acpiphp_slot *slot); 180 180 u8 acpiphp_get_power_status(struct acpiphp_slot *slot); 181 - u8 acpiphp_get_attention_status(struct acpiphp_slot *slot); 182 181 u8 acpiphp_get_latch_status(struct acpiphp_slot *slot); 183 182 u8 acpiphp_get_adapter_status(struct acpiphp_slot *slot); 184 183
-2
drivers/pci/hotplug/cpci_hotplug.h
··· 83 83 * board/chassis drivers. 84 84 */ 85 85 u8 cpci_get_attention_status(struct slot *slot); 86 - u8 cpci_get_latch_status(struct slot *slot); 87 - u8 cpci_get_adapter_status(struct slot *slot); 88 86 u16 cpci_get_hs_csr(struct slot *slot); 89 87 int cpci_set_attention_status(struct slot *slot, int status); 90 88 int cpci_check_and_clear_ins(struct slot *slot);
-2
drivers/pci/hotplug/ibmphp.h
··· 264 264 void ibmphp_free_ebda_hpc_queue(void); 265 265 int ibmphp_access_ebda(void); 266 266 struct slot *ibmphp_get_slot_from_physical_num(u8); 267 - int ibmphp_get_total_hp_slots(void); 268 - void ibmphp_free_ibm_slot(struct slot *); 269 267 void ibmphp_free_bus_info_queue(void); 270 268 void ibmphp_free_ebda_pci_rsrc_queue(void); 271 269 struct bus_info *ibmphp_find_same_bus_num(u32);
+5 -5
drivers/pci/hotplug/ibmphp_pci.c
··· 329 329 static int configure_device(struct pci_func *func) 330 330 { 331 331 u32 bar[6]; 332 - u32 address[] = { 332 + static const u32 address[] = { 333 333 PCI_BASE_ADDRESS_0, 334 334 PCI_BASE_ADDRESS_1, 335 335 PCI_BASE_ADDRESS_2, ··· 564 564 struct resource_node *pfmem = NULL; 565 565 struct resource_node *bus_pfmem[2] = {NULL, NULL}; 566 566 struct bus_node *bus; 567 - u32 address[] = { 567 + static const u32 address[] = { 568 568 PCI_BASE_ADDRESS_0, 569 569 PCI_BASE_ADDRESS_1, 570 570 0 ··· 1053 1053 int howmany = 0; /*this is to see if there are any devices behind the bridge */ 1054 1054 1055 1055 u32 bar[6], class; 1056 - u32 address[] = { 1056 + static const u32 address[] = { 1057 1057 PCI_BASE_ADDRESS_0, 1058 1058 PCI_BASE_ADDRESS_1, 1059 1059 PCI_BASE_ADDRESS_2, ··· 1182 1182 static int unconfigure_boot_device(u8 busno, u8 device, u8 function) 1183 1183 { 1184 1184 u32 start_address; 1185 - u32 address[] = { 1185 + static const u32 address[] = { 1186 1186 PCI_BASE_ADDRESS_0, 1187 1187 PCI_BASE_ADDRESS_1, 1188 1188 PCI_BASE_ADDRESS_2, ··· 1310 1310 struct resource_node *mem = NULL; 1311 1311 struct resource_node *pfmem = NULL; 1312 1312 struct bus_node *bus; 1313 - u32 address[] = { 1313 + static const u32 address[] = { 1314 1314 PCI_BASE_ADDRESS_0, 1315 1315 PCI_BASE_ADDRESS_1, 1316 1316 0
+3 -9
drivers/pci/hotplug/pciehp_hpc.c
··· 332 332 static int __pciehp_link_set(struct controller *ctrl, bool enable) 333 333 { 334 334 struct pci_dev *pdev = ctrl_dev(ctrl); 335 - u16 lnk_ctrl; 336 335 337 - pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnk_ctrl); 336 + pcie_capability_clear_and_set_word(pdev, PCI_EXP_LNKCTL, 337 + PCI_EXP_LNKCTL_LD, 338 + enable ? 0 : PCI_EXP_LNKCTL_LD); 338 339 339 - if (enable) 340 - lnk_ctrl &= ~PCI_EXP_LNKCTL_LD; 341 - else 342 - lnk_ctrl |= PCI_EXP_LNKCTL_LD; 343 - 344 - pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnk_ctrl); 345 - ctrl_dbg(ctrl, "%s: lnk_ctrl = %x\n", __func__, lnk_ctrl); 346 340 return 0; 347 341 } 348 342
+1 -2
drivers/pci/iov.c
··· 41 41 return -EINVAL; 42 42 43 43 pf = pci_physfn(dev); 44 - return (((dev->bus->number << 8) + dev->devfn) - 45 - ((pf->bus->number << 8) + pf->devfn + pf->sriov->offset)) / 44 + return (pci_dev_id(dev) - (pci_dev_id(pf) + pf->sriov->offset)) / 46 45 pf->sriov->stride; 47 46 } 48 47 EXPORT_SYMBOL_GPL(pci_iov_vf_id);
+2 -2
drivers/pci/msi/irqdomain.c
··· 336 336 if (!irq_domain_is_msi_parent(domain)) { 337 337 /* 338 338 * For "global" PCI/MSI interrupt domains the associated 339 - * msi_domain_info::flags is the authoritive source of 339 + * msi_domain_info::flags is the authoritative source of 340 340 * information. 341 341 */ 342 342 info = domain->host_data; ··· 344 344 } else { 345 345 /* 346 346 * For MSI parent domains the supported feature set 347 - * is avaliable in the parent ops. This makes checks 347 + * is available in the parent ops. This makes checks 348 348 * possible before actually instantiating the 349 349 * per device domain because the parent is never 350 350 * expanding the PCI/MSI functionality.
+2 -3
drivers/pci/p2pdma.c
··· 435 435 /* Intel Xeon E7 v3/Xeon E5 v3/Core i7 */ 436 436 {PCI_VENDOR_ID_INTEL, 0x2f00, REQ_SAME_HOST_BRIDGE}, 437 437 {PCI_VENDOR_ID_INTEL, 0x2f01, REQ_SAME_HOST_BRIDGE}, 438 - /* Intel SkyLake-E */ 438 + /* Intel Skylake-E */ 439 439 {PCI_VENDOR_ID_INTEL, 0x2030, 0}, 440 440 {PCI_VENDOR_ID_INTEL, 0x2031, 0}, 441 441 {PCI_VENDOR_ID_INTEL, 0x2032, 0}, ··· 532 532 533 533 static unsigned long map_types_idx(struct pci_dev *client) 534 534 { 535 - return (pci_domain_nr(client->bus) << 16) | 536 - (client->bus->number << 8) | client->devfn; 535 + return (pci_domain_nr(client->bus) << 16) | pci_dev_id(client); 537 536 } 538 537 539 538 /*
+9 -9
drivers/pci/pci-driver.c
··· 193 193 u32 vendor, device, subvendor = PCI_ANY_ID, 194 194 subdevice = PCI_ANY_ID, class = 0, class_mask = 0; 195 195 unsigned long driver_data = 0; 196 - int fields = 0; 196 + int fields; 197 197 int retval = 0; 198 198 199 199 fields = sscanf(buf, "%x %x %x %x %x %x %lx", ··· 260 260 struct pci_driver *pdrv = to_pci_driver(driver); 261 261 u32 vendor, device, subvendor = PCI_ANY_ID, 262 262 subdevice = PCI_ANY_ID, class = 0, class_mask = 0; 263 - int fields = 0; 263 + int fields; 264 264 size_t retval = -ENODEV; 265 265 266 266 fields = sscanf(buf, "%x %x %x %x %x %x", ··· 1474 1474 */ 1475 1475 struct pci_driver *pci_dev_driver(const struct pci_dev *dev) 1476 1476 { 1477 + int i; 1478 + 1477 1479 if (dev->driver) 1478 1480 return dev->driver; 1479 - else { 1480 - int i; 1481 - for (i = 0; i <= PCI_ROM_RESOURCE; i++) 1482 - if (dev->resource[i].flags & IORESOURCE_BUSY) 1483 - return &pci_compat_driver; 1484 - } 1481 + 1482 + for (i = 0; i <= PCI_ROM_RESOURCE; i++) 1483 + if (dev->resource[i].flags & IORESOURCE_BUSY) 1484 + return &pci_compat_driver; 1485 + 1485 1486 return NULL; 1486 1487 } 1487 1488 EXPORT_SYMBOL(pci_dev_driver); ··· 1706 1705 .name = "pci_express", 1707 1706 .match = pcie_port_bus_match, 1708 1707 }; 1709 - EXPORT_SYMBOL_GPL(pcie_port_bus_type); 1710 1708 #endif 1711 1709 1712 1710 static int __init pci_driver_init(void)
+4
drivers/pci/pci-sysfs.c
··· 1083 1083 struct bin_attribute *attr, char *buf, 1084 1084 loff_t off, size_t count, bool write) 1085 1085 { 1086 + #ifdef CONFIG_HAS_IOPORT 1086 1087 struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); 1087 1088 int bar = (unsigned long)attr->private; 1088 1089 unsigned long port = off; ··· 1117 1116 return 4; 1118 1117 } 1119 1118 return -EINVAL; 1119 + #else 1120 + return -ENXIO; 1121 + #endif 1120 1122 } 1121 1123 1122 1124 static ssize_t pci_read_resource_io(struct file *filp, struct kobject *kobj,
+39 -33
drivers/pci/pci.c
··· 1226 1226 * 1227 1227 * On success, return 0 or 1, depending on whether or not it is necessary to 1228 1228 * restore the device's BARs subsequently (1 is returned in that case). 1229 + * 1230 + * On failure, return a negative error code. Always return failure if @dev 1231 + * lacks a Power Management Capability, even if the platform was able to 1232 + * put the device in D0 via non-PCI means. 1229 1233 */ 1230 1234 int pci_power_up(struct pci_dev *dev) 1231 1235 { ··· 1245 1241 dev->current_state = PCI_D0; 1246 1242 else 1247 1243 dev->current_state = state; 1248 - 1249 - if (state == PCI_D0) 1250 - return 0; 1251 1244 1252 1245 return -EIO; 1253 1246 } ··· 1291 1290 * 1292 1291 * Call pci_power_up() to put @dev into D0, read from its PCI_PM_CTRL register 1293 1292 * to confirm the state change, restore its BARs if they might be lost and 1294 - * reconfigure ASPM in acordance with the new power state. 1293 + * reconfigure ASPM in accordance with the new power state. 1295 1294 * 1296 1295 * If pci_restore_state() is going to be called right after a power state change 1297 1296 * to D0, it is more efficient to use pci_power_up() directly instead of this ··· 1303 1302 int ret; 1304 1303 1305 1304 ret = pci_power_up(dev); 1306 - if (ret < 0) 1305 + if (ret < 0) { 1306 + if (dev->current_state == PCI_D0) 1307 + return 0; 1308 + 1307 1309 return ret; 1310 + } 1308 1311 1309 1312 pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr); 1310 1313 dev->current_state = pmcsr & PCI_PM_CTRL_STATE_MASK; ··· 1686 1681 /* XXX: 100% dword access ok here? 
*/ 1687 1682 for (i = 0; i < 16; i++) { 1688 1683 pci_read_config_dword(dev, i * 4, &dev->saved_config_space[i]); 1689 - pci_dbg(dev, "saving config space at offset %#x (reading %#x)\n", 1684 + pci_dbg(dev, "save config %#04x: %#010x\n", 1690 1685 i * 4, dev->saved_config_space[i]); 1691 1686 } 1692 1687 dev->state_saved = true; ··· 1717 1712 return; 1718 1713 1719 1714 for (;;) { 1720 - pci_dbg(pdev, "restoring config space at offset %#x (was %#x, writing %#x)\n", 1715 + pci_dbg(pdev, "restore config %#04x: %#010x -> %#010x\n", 1721 1716 offset, val, saved_val); 1722 1717 pci_write_config_dword(pdev, offset, saved_val); 1723 1718 if (retry-- <= 0) ··· 2420 2415 2421 2416 mutex_lock(&pci_pme_list_mutex); 2422 2417 list_for_each_entry_safe(pme_dev, n, &pci_pme_list, list) { 2423 - if (pme_dev->dev->pme_poll) { 2424 - struct pci_dev *bridge; 2418 + struct pci_dev *pdev = pme_dev->dev; 2425 2419 2426 - bridge = pme_dev->dev->bus->self; 2420 + if (pdev->pme_poll) { 2421 + struct pci_dev *bridge = pdev->bus->self; 2422 + struct device *dev = &pdev->dev; 2423 + int pm_status; 2424 + 2427 2425 /* 2428 2426 * If bridge is in low power state, the 2429 2427 * configuration space of subordinate devices ··· 2434 2426 */ 2435 2427 if (bridge && bridge->current_state != PCI_D0) 2436 2428 continue; 2429 + 2437 2430 /* 2438 - * If the device is in D3cold it should not be 2439 - * polled either. 2431 + * If the device is in a low power state it 2432 + * should not be polled either. 
2440 2433 */ 2441 - if (pme_dev->dev->current_state == PCI_D3cold) 2434 + pm_status = pm_runtime_get_if_active(dev, true); 2435 + if (!pm_status) 2442 2436 continue; 2443 2437 2444 - pci_pme_wakeup(pme_dev->dev, NULL); 2438 + if (pdev->current_state != PCI_D3cold) 2439 + pci_pme_wakeup(pdev, NULL); 2440 + 2441 + if (pm_status > 0) 2442 + pm_runtime_put(dev); 2445 2443 } else { 2446 2444 list_del(&pme_dev->list); 2447 2445 kfree(pme_dev); ··· 4205 4191 4206 4192 phys_addr_t pci_pio_to_address(unsigned long pio) 4207 4193 { 4208 - phys_addr_t address = (phys_addr_t)OF_BAD_ADDR; 4209 - 4210 4194 #ifdef PCI_IOBASE 4211 - if (pio >= MMIO_UPPER_LIMIT) 4212 - return address; 4213 - 4214 - address = logic_pio_to_hwaddr(pio); 4195 + if (pio < MMIO_UPPER_LIMIT) 4196 + return logic_pio_to_hwaddr(pio); 4215 4197 #endif 4216 4198 4217 - return address; 4199 + return (phys_addr_t) OF_BAD_ADDR; 4218 4200 } 4219 4201 EXPORT_SYMBOL_GPL(pci_pio_to_address); 4220 4202 ··· 4937 4927 int pcie_retrain_link(struct pci_dev *pdev, bool use_lt) 4938 4928 { 4939 4929 int rc; 4940 - u16 lnkctl; 4941 4930 4942 4931 /* 4943 4932 * Ensure the updated LNKCTL parameters are used during link ··· 4948 4939 if (rc) 4949 4940 return rc; 4950 4941 4951 - pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnkctl); 4952 - lnkctl |= PCI_EXP_LNKCTL_RL; 4953 - pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnkctl); 4942 + pcie_capability_set_word(pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_RL); 4954 4943 if (pdev->clear_retrain_link) { 4955 4944 /* 4956 4945 * Due to an erratum in some devices the Retrain Link bit 4957 4946 * needs to be cleared again manually to allow the link 4958 4947 * training to succeed. 
4959 4948 */ 4960 - lnkctl &= ~PCI_EXP_LNKCTL_RL; 4961 - pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnkctl); 4949 + pcie_capability_clear_word(pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_RL); 4962 4950 } 4963 4951 4964 4952 return pcie_wait_for_link_status(pdev, use_lt, !use_lt); ··· 5637 5631 EXPORT_SYMBOL_GPL(pci_try_reset_function); 5638 5632 5639 5633 /* Do any devices on or below this bus prevent a bus reset? */ 5640 - static bool pci_bus_resetable(struct pci_bus *bus) 5634 + static bool pci_bus_resettable(struct pci_bus *bus) 5641 5635 { 5642 5636 struct pci_dev *dev; 5643 5637 ··· 5647 5641 5648 5642 list_for_each_entry(dev, &bus->devices, bus_list) { 5649 5643 if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET || 5650 - (dev->subordinate && !pci_bus_resetable(dev->subordinate))) 5644 + (dev->subordinate && !pci_bus_resettable(dev->subordinate))) 5651 5645 return false; 5652 5646 } 5653 5647 ··· 5705 5699 } 5706 5700 5707 5701 /* Do any devices on or below this slot prevent a bus reset? */ 5708 - static bool pci_slot_resetable(struct pci_slot *slot) 5702 + static bool pci_slot_resettable(struct pci_slot *slot) 5709 5703 { 5710 5704 struct pci_dev *dev; 5711 5705 ··· 5717 5711 if (!dev->slot || dev->slot != slot) 5718 5712 continue; 5719 5713 if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET || 5720 - (dev->subordinate && !pci_bus_resetable(dev->subordinate))) 5714 + (dev->subordinate && !pci_bus_resettable(dev->subordinate))) 5721 5715 return false; 5722 5716 } 5723 5717 ··· 5853 5847 { 5854 5848 int rc; 5855 5849 5856 - if (!slot || !pci_slot_resetable(slot)) 5850 + if (!slot || !pci_slot_resettable(slot)) 5857 5851 return -ENOTTY; 5858 5852 5859 5853 if (!probe) ··· 5920 5914 { 5921 5915 int ret; 5922 5916 5923 - if (!bus->self || !pci_bus_resetable(bus)) 5917 + if (!bus->self || !pci_bus_resettable(bus)) 5924 5918 return -ENOTTY; 5925 5919 5926 5920 if (probe)
+20 -21
drivers/pci/pci.h
··· 13 13 14 14 #define PCIE_LINK_RETRAIN_TIMEOUT_MS 1000 15 15 16 + /* 17 + * PCIe r6.0, sec 5.3.3.2.1 <PME Synchronization> 18 + * Recommends 1ms to 10ms timeout to check L2 ready. 19 + */ 20 + #define PCIE_PME_TO_L2_TIMEOUT_US 10000 21 + 16 22 extern const unsigned char pcie_link_speed[]; 17 23 extern bool pci_early_dump; 18 24 ··· 153 147 void pci_create_legacy_files(struct pci_bus *bus); 154 148 void pci_remove_legacy_files(struct pci_bus *bus); 155 149 #else 156 - static inline void pci_create_legacy_files(struct pci_bus *bus) { return; } 157 - static inline void pci_remove_legacy_files(struct pci_bus *bus) { return; } 150 + static inline void pci_create_legacy_files(struct pci_bus *bus) { } 151 + static inline void pci_remove_legacy_files(struct pci_bus *bus) { } 158 152 #endif 159 153 160 154 /* Lock for read/write access to pci device and bus lists */ ··· 428 422 pci_ers_result_t dpc_reset_link(struct pci_dev *pdev); 429 423 bool pci_dpc_recovered(struct pci_dev *pdev); 430 424 #else 431 - static inline void pci_save_dpc_state(struct pci_dev *dev) {} 432 - static inline void pci_restore_dpc_state(struct pci_dev *dev) {} 433 - static inline void pci_dpc_init(struct pci_dev *pdev) {} 425 + static inline void pci_save_dpc_state(struct pci_dev *dev) { } 426 + static inline void pci_restore_dpc_state(struct pci_dev *dev) { } 427 + static inline void pci_dpc_init(struct pci_dev *pdev) { } 434 428 static inline bool pci_dpc_recovered(struct pci_dev *pdev) { return false; } 435 429 #endif 436 430 ··· 442 436 int (*cb)(struct pci_dev *, void *), 443 437 void *userdata); 444 438 #else 445 - static inline void pci_rcec_init(struct pci_dev *dev) {} 446 - static inline void pci_rcec_exit(struct pci_dev *dev) {} 447 - static inline void pcie_link_rcec(struct pci_dev *rcec) {} 439 + static inline void pci_rcec_init(struct pci_dev *dev) { } 440 + static inline void pci_rcec_exit(struct pci_dev *dev) { } 441 + static inline void pcie_link_rcec(struct pci_dev *rcec) { } 448 
442 static inline void pcie_walk_rcec(struct pci_dev *rcec, 449 443 int (*cb)(struct pci_dev *, void *), 450 - void *userdata) {} 444 + void *userdata) { } 451 445 #endif 452 446 453 447 #ifdef CONFIG_PCI_ATS ··· 490 484 { 491 485 return -ENODEV; 492 486 } 493 - static inline void pci_iov_release(struct pci_dev *dev) 494 - 495 - { 496 - } 497 - static inline void pci_iov_remove(struct pci_dev *dev) 498 - { 499 - } 500 - static inline void pci_restore_iov_state(struct pci_dev *dev) 501 - { 502 - } 487 + static inline void pci_iov_release(struct pci_dev *dev) { } 488 + static inline void pci_iov_remove(struct pci_dev *dev) { } 489 + static inline void pci_restore_iov_state(struct pci_dev *dev) { } 503 490 static inline int pci_iov_bus_range(struct pci_bus *bus) 504 491 { 505 492 return 0; ··· 729 730 { 730 731 return -ENOTTY; 731 732 } 732 - static inline void pci_set_acpi_fwnode(struct pci_dev *dev) {} 733 + static inline void pci_set_acpi_fwnode(struct pci_dev *dev) { } 733 734 static inline int pci_acpi_program_hp_params(struct pci_dev *dev) 734 735 { 735 736 return -ENODEV; ··· 750 751 { 751 752 return PCI_UNKNOWN; 752 753 } 753 - static inline void acpi_pci_refresh_power_state(struct pci_dev *dev) {} 754 + static inline void acpi_pci_refresh_power_state(struct pci_dev *dev) { } 754 755 static inline int acpi_pci_wakeup(struct pci_dev *dev, bool enable) 755 756 { 756 757 return -ENODEV;
+4 -18
drivers/pci/pcie/aer.c
··· 230 230 return pcie_ports_native || host->native_aer; 231 231 } 232 232 233 - int pci_enable_pcie_error_reporting(struct pci_dev *dev) 233 + static int pci_enable_pcie_error_reporting(struct pci_dev *dev) 234 234 { 235 235 int rc; 236 236 ··· 240 240 rc = pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_AER_FLAGS); 241 241 return pcibios_err_to_errno(rc); 242 242 } 243 - EXPORT_SYMBOL_GPL(pci_enable_pcie_error_reporting); 244 - 245 - int pci_disable_pcie_error_reporting(struct pci_dev *dev) 246 - { 247 - int rc; 248 - 249 - if (!pcie_aer_is_native(dev)) 250 - return -EIO; 251 - 252 - rc = pcie_capability_clear_word(dev, PCI_EXP_DEVCTL, PCI_EXP_AER_FLAGS); 253 - return pcibios_err_to_errno(rc); 254 - } 255 - EXPORT_SYMBOL_GPL(pci_disable_pcie_error_reporting); 256 243 257 244 int pci_aer_clear_nonfatal_status(struct pci_dev *dev) 258 245 { ··· 699 712 void aer_print_error(struct pci_dev *dev, struct aer_err_info *info) 700 713 { 701 714 int layer, agent; 702 - int id = ((dev->bus->number << 8) | dev->devfn); 715 + int id = pci_dev_id(dev); 703 716 const char *level; 704 717 705 718 if (!info->status) { ··· 834 847 if ((PCI_BUS_NUM(e_info->id) != 0) && 835 848 !(dev->bus->bus_flags & PCI_BUS_FLAGS_NO_AERSID)) { 836 849 /* Device ID match? */ 837 - if (e_info->id == ((dev->bus->number << 8) | dev->devfn)) 850 + if (e_info->id == pci_dev_id(dev)) 838 851 return true; 839 852 840 853 /* Continue id comparing if there is no multiple error */ ··· 968 981 969 982 #ifdef CONFIG_ACPI_APEI_PCIEAER 970 983 971 - #define AER_RECOVER_RING_ORDER 4 972 - #define AER_RECOVER_RING_SIZE (1 << AER_RECOVER_RING_ORDER) 984 + #define AER_RECOVER_RING_SIZE 16 973 985 974 986 struct aer_recover_entry { 975 987 u8 bus;
+13 -17
drivers/pci/pcie/aspm.c
··· 199 199 static void pcie_aspm_configure_common_clock(struct pcie_link_state *link) 200 200 { 201 201 int same_clock = 1; 202 - u16 reg16, parent_reg, child_reg[8]; 202 + u16 reg16, ccc, parent_old_ccc, child_old_ccc[8]; 203 203 struct pci_dev *child, *parent = link->pdev; 204 204 struct pci_bus *linkbus = parent->subordinate; 205 205 /* ··· 221 221 222 222 /* Port might be already in common clock mode */ 223 223 pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16); 224 + parent_old_ccc = reg16 & PCI_EXP_LNKCTL_CCC; 224 225 if (same_clock && (reg16 & PCI_EXP_LNKCTL_CCC)) { 225 226 bool consistent = true; 226 227 ··· 238 237 pci_info(parent, "ASPM: current common clock configuration is inconsistent, reconfiguring\n"); 239 238 } 240 239 240 + ccc = same_clock ? PCI_EXP_LNKCTL_CCC : 0; 241 241 /* Configure downstream component, all functions */ 242 242 list_for_each_entry(child, &linkbus->devices, bus_list) { 243 243 pcie_capability_read_word(child, PCI_EXP_LNKCTL, &reg16); 244 - child_reg[PCI_FUNC(child->devfn)] = reg16; 245 - if (same_clock) 246 - reg16 |= PCI_EXP_LNKCTL_CCC; 247 - else 248 - reg16 &= ~PCI_EXP_LNKCTL_CCC; 249 - pcie_capability_write_word(child, PCI_EXP_LNKCTL, reg16); 244 + child_old_ccc[PCI_FUNC(child->devfn)] = reg16 & PCI_EXP_LNKCTL_CCC; 245 + pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL, 246 + PCI_EXP_LNKCTL_CCC, ccc); 250 247 } 251 248 252 249 /* Configure upstream component */ 253 - pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16); 254 - parent_reg = reg16; 255 - if (same_clock) 256 - reg16 |= PCI_EXP_LNKCTL_CCC; 257 - else 258 - reg16 &= ~PCI_EXP_LNKCTL_CCC; 259 - pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16); 250 + pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL, 251 + PCI_EXP_LNKCTL_CCC, ccc); 260 252 261 253 if (pcie_retrain_link(link->pdev, true)) { 262 254 263 255 /* Training failed. 
Restore common clock configurations */ 264 256 pci_err(parent, "ASPM: Could not configure common clock\n"); 265 257 list_for_each_entry(child, &linkbus->devices, bus_list) 266 - pcie_capability_write_word(child, PCI_EXP_LNKCTL, 267 - child_reg[PCI_FUNC(child->devfn)]); 268 - pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg); 258 + pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL, 259 + PCI_EXP_LNKCTL_CCC, 260 + child_old_ccc[PCI_FUNC(child->devfn)]); 261 + pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL, 262 + PCI_EXP_LNKCTL_CCC, parent_old_ccc); 269 263 } 270 264 } 271 265
+2 -2
drivers/pci/probe.c
··· 8 8 #include <linux/init.h> 9 9 #include <linux/pci.h> 10 10 #include <linux/msi.h> 11 - #include <linux/of_device.h> 12 11 #include <linux/of_pci.h> 13 12 #include <linux/pci_hotplug.h> 14 13 #include <linux/slab.h> ··· 2136 2137 { 2137 2138 struct pci_dev *root; 2138 2139 2139 - /* PCI_EXP_DEVICE_RELAX_EN is RsvdP in VFs */ 2140 + /* PCI_EXP_DEVCTL_RELAX_EN is RsvdP in VFs */ 2140 2141 if (dev->is_virtfn) 2141 2142 return; 2142 2143 ··· 2323 2324 .end = -1, 2324 2325 }; 2325 2326 2327 + spin_lock_init(&dev->pcie_cap_lock); 2326 2328 #ifdef CONFIG_PCI_MSI 2327 2329 raw_spin_lock_init(&dev->msi_lock); 2328 2330 #endif
+43 -5
drivers/pci/quirks.c
··· 361 361 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_3, quirk_isa_dma_hangs); 362 362 #endif 363 363 364 + #ifdef CONFIG_HAS_IOPORT 364 365 /* 365 - * Intel NM10 "TigerPoint" LPC PM1a_STS.BM_STS must be clear 366 + * Intel NM10 "Tiger Point" LPC PM1a_STS.BM_STS must be clear 366 367 * for some HT machines to use C4 w/o hanging. 367 368 */ 368 369 static void quirk_tigerpoint_bm_sts(struct pci_dev *dev) ··· 376 375 pm1a = inw(pmbase); 377 376 378 377 if (pm1a & 0x10) { 379 - pci_info(dev, FW_BUG "TigerPoint LPC.BM_STS cleared\n"); 378 + pci_info(dev, FW_BUG "Tiger Point LPC.BM_STS cleared\n"); 380 379 outw(0x10, pmbase); 381 380 } 382 381 } 383 382 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_TGP_LPC, quirk_tigerpoint_bm_sts); 383 + #endif 384 384 385 385 /* Chipsets where PCI->PCI transfers vanish or hang */ 386 386 static void quirk_nopcipci(struct pci_dev *dev) ··· 3075 3073 3076 3074 /* 3077 3075 * HT MSI mapping should be disabled on devices that are below 3078 - * a non-Hypertransport host bridge. Locate the host bridge... 3076 + * a non-HyperTransport host bridge. Locate the host bridge. 3079 3077 */ 3080 3078 host_bridge = pci_get_domain_bus_and_slot(pci_domain_nr(dev->bus), 0, 3081 3079 PCI_DEVFN(0, 0)); ··· 3726 3724 */ 3727 3725 static void quirk_nvidia_no_bus_reset(struct pci_dev *dev) 3728 3726 { 3729 - if ((dev->device & 0xffc0) == 0x2340) 3727 + if ((dev->device & 0xffc0) == 0x2340 || dev->device == 0x1eb8) 3730 3728 quirk_no_bus_reset(dev); 3731 3729 } 3732 3730 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, ··· 5731 5729 /* 5732 5730 * Microsemi Switchtec NTB uses devfn proxy IDs to move TLPs between 5733 5731 * NT endpoints via the internal switch fabric. These IDs replace the 5734 - * originating requestor ID TLPs which access host memory on peer NTB 5732 + * originating Requester ID TLPs which access host memory on peer NTB 5735 5733 * ports. 
Therefore, all proxy IDs must be aliased to the NTB device 5736 5734 * to permit access when the IOMMU is turned on. 5737 5735 */ ··· 5869 5867 SWITCHTEC_QUIRK(0x4552); /* PAXA 52XG4 */ 5870 5868 SWITCHTEC_QUIRK(0x4536); /* PAXA 36XG4 */ 5871 5869 SWITCHTEC_QUIRK(0x4528); /* PAXA 28XG4 */ 5870 + SWITCHTEC_QUIRK(0x5000); /* PFX 100XG5 */ 5871 + SWITCHTEC_QUIRK(0x5084); /* PFX 84XG5 */ 5872 + SWITCHTEC_QUIRK(0x5068); /* PFX 68XG5 */ 5873 + SWITCHTEC_QUIRK(0x5052); /* PFX 52XG5 */ 5874 + SWITCHTEC_QUIRK(0x5036); /* PFX 36XG5 */ 5875 + SWITCHTEC_QUIRK(0x5028); /* PFX 28XG5 */ 5876 + SWITCHTEC_QUIRK(0x5100); /* PSX 100XG5 */ 5877 + SWITCHTEC_QUIRK(0x5184); /* PSX 84XG5 */ 5878 + SWITCHTEC_QUIRK(0x5168); /* PSX 68XG5 */ 5879 + SWITCHTEC_QUIRK(0x5152); /* PSX 52XG5 */ 5880 + SWITCHTEC_QUIRK(0x5136); /* PSX 36XG5 */ 5881 + SWITCHTEC_QUIRK(0x5128); /* PSX 28XG5 */ 5882 + SWITCHTEC_QUIRK(0x5200); /* PAX 100XG5 */ 5883 + SWITCHTEC_QUIRK(0x5284); /* PAX 84XG5 */ 5884 + SWITCHTEC_QUIRK(0x5268); /* PAX 68XG5 */ 5885 + SWITCHTEC_QUIRK(0x5252); /* PAX 52XG5 */ 5886 + SWITCHTEC_QUIRK(0x5236); /* PAX 36XG5 */ 5887 + SWITCHTEC_QUIRK(0x5228); /* PAX 28XG5 */ 5888 + SWITCHTEC_QUIRK(0x5300); /* PFXA 100XG5 */ 5889 + SWITCHTEC_QUIRK(0x5384); /* PFXA 84XG5 */ 5890 + SWITCHTEC_QUIRK(0x5368); /* PFXA 68XG5 */ 5891 + SWITCHTEC_QUIRK(0x5352); /* PFXA 52XG5 */ 5892 + SWITCHTEC_QUIRK(0x5336); /* PFXA 36XG5 */ 5893 + SWITCHTEC_QUIRK(0x5328); /* PFXA 28XG5 */ 5894 + SWITCHTEC_QUIRK(0x5400); /* PSXA 100XG5 */ 5895 + SWITCHTEC_QUIRK(0x5484); /* PSXA 84XG5 */ 5896 + SWITCHTEC_QUIRK(0x5468); /* PSXA 68XG5 */ 5897 + SWITCHTEC_QUIRK(0x5452); /* PSXA 52XG5 */ 5898 + SWITCHTEC_QUIRK(0x5436); /* PSXA 36XG5 */ 5899 + SWITCHTEC_QUIRK(0x5428); /* PSXA 28XG5 */ 5900 + SWITCHTEC_QUIRK(0x5500); /* PAXA 100XG5 */ 5901 + SWITCHTEC_QUIRK(0x5584); /* PAXA 84XG5 */ 5902 + SWITCHTEC_QUIRK(0x5568); /* PAXA 68XG5 */ 5903 + SWITCHTEC_QUIRK(0x5552); /* PAXA 52XG5 */ 5904 + SWITCHTEC_QUIRK(0x5536); /* PAXA 36XG5 */ 5905 + 
SWITCHTEC_QUIRK(0x5528); /* PAXA 28XG5 */ 5872 5906 5873 5907 /* 5874 5908 * The PLX NTB uses devfn proxy IDs to move TLPs between NT endpoints.
+1 -1
drivers/pci/setup-bus.c
··· 1799 1799 * Make sure prefetchable memory is reduced from 1800 1800 * the correct resource. Specifically we put 32-bit 1801 1801 * prefetchable memory in non-prefetchable window 1802 - * if there is an 64-bit pretchable window. 1802 + * if there is a 64-bit prefetchable window. 1803 1803 * 1804 1804 * See comments in __pci_bus_size_bridges() for 1805 1805 * more information.
+2 -2
drivers/pci/setup-res.c
··· 104 104 pci_read_config_dword(dev, reg, &check); 105 105 106 106 if ((new ^ check) & mask) { 107 - pci_err(dev, "BAR %d: error updating (%#08x != %#08x)\n", 107 + pci_err(dev, "BAR %d: error updating (%#010x != %#010x)\n", 108 108 resno, new, check); 109 109 } 110 110 ··· 113 113 pci_write_config_dword(dev, reg + 4, new); 114 114 pci_read_config_dword(dev, reg + 4, &check); 115 115 if (check != new) { 116 - pci_err(dev, "BAR %d: error updating (high %#08x != %#08x)\n", 116 + pci_err(dev, "BAR %d: error updating (high %#010x != %#010x)\n", 117 117 resno, new, check); 118 118 } 119 119 }
+97 -61
drivers/pci/switch/switchtec.c
···
 	if (stdev->gen == SWITCHTEC_GEN3) \
 		return io_string_show(buf, &si->gen3.field, \
 				sizeof(si->gen3.field)); \
-	else if (stdev->gen == SWITCHTEC_GEN4) \
+	else if (stdev->gen >= SWITCHTEC_GEN4) \
 		return io_string_show(buf, &si->gen4.field, \
 				sizeof(si->gen4.field)); \
 	else \
···
 	if (stdev->gen == SWITCHTEC_GEN3) {
 		info.flash_length = ioread32(&fi->gen3.flash_length);
 		info.num_partitions = SWITCHTEC_NUM_PARTITIONS_GEN3;
-	} else if (stdev->gen == SWITCHTEC_GEN4) {
+	} else if (stdev->gen >= SWITCHTEC_GEN4) {
 		info.flash_length = ioread32(&fi->gen4.flash_length);
 		info.num_partitions = SWITCHTEC_NUM_PARTITIONS_GEN4;
 	} else {
···
 		ret = flash_part_info_gen3(stdev, &info);
 		if (ret)
 			return ret;
-	} else if (stdev->gen == SWITCHTEC_GEN4) {
+	} else if (stdev->gen >= SWITCHTEC_GEN4) {
 		ret = flash_part_info_gen4(stdev, &info);
 		if (ret)
 			return ret;
···
 	if (stdev->gen == SWITCHTEC_GEN3)
 		part_id = &stdev->mmio_sys_info->gen3.partition_id;
-	else if (stdev->gen == SWITCHTEC_GEN4)
+	else if (stdev->gen >= SWITCHTEC_GEN4)
 		part_id = &stdev->mmio_sys_info->gen4.partition_id;
 	else
 		return -EOPNOTSUPP;
···
 }
 
 static const struct pci_device_id switchtec_pci_tbl[] = {
-	SWITCHTEC_PCI_DEVICE(0x8531, SWITCHTEC_GEN3), //PFX 24xG3
-	SWITCHTEC_PCI_DEVICE(0x8532, SWITCHTEC_GEN3), //PFX 32xG3
-	SWITCHTEC_PCI_DEVICE(0x8533, SWITCHTEC_GEN3), //PFX 48xG3
-	SWITCHTEC_PCI_DEVICE(0x8534, SWITCHTEC_GEN3), //PFX 64xG3
-	SWITCHTEC_PCI_DEVICE(0x8535, SWITCHTEC_GEN3), //PFX 80xG3
-	SWITCHTEC_PCI_DEVICE(0x8536, SWITCHTEC_GEN3), //PFX 96xG3
-	SWITCHTEC_PCI_DEVICE(0x8541, SWITCHTEC_GEN3), //PSX 24xG3
-	SWITCHTEC_PCI_DEVICE(0x8542, SWITCHTEC_GEN3), //PSX 32xG3
-	SWITCHTEC_PCI_DEVICE(0x8543, SWITCHTEC_GEN3), //PSX 48xG3
-	SWITCHTEC_PCI_DEVICE(0x8544, SWITCHTEC_GEN3), //PSX 64xG3
-	SWITCHTEC_PCI_DEVICE(0x8545, SWITCHTEC_GEN3), //PSX 80xG3
-	SWITCHTEC_PCI_DEVICE(0x8546, SWITCHTEC_GEN3), //PSX 96xG3
-	SWITCHTEC_PCI_DEVICE(0x8551, SWITCHTEC_GEN3), //PAX 24XG3
-	SWITCHTEC_PCI_DEVICE(0x8552, SWITCHTEC_GEN3), //PAX 32XG3
-	SWITCHTEC_PCI_DEVICE(0x8553, SWITCHTEC_GEN3), //PAX 48XG3
-	SWITCHTEC_PCI_DEVICE(0x8554, SWITCHTEC_GEN3), //PAX 64XG3
-	SWITCHTEC_PCI_DEVICE(0x8555, SWITCHTEC_GEN3), //PAX 80XG3
-	SWITCHTEC_PCI_DEVICE(0x8556, SWITCHTEC_GEN3), //PAX 96XG3
-	SWITCHTEC_PCI_DEVICE(0x8561, SWITCHTEC_GEN3), //PFXL 24XG3
-	SWITCHTEC_PCI_DEVICE(0x8562, SWITCHTEC_GEN3), //PFXL 32XG3
-	SWITCHTEC_PCI_DEVICE(0x8563, SWITCHTEC_GEN3), //PFXL 48XG3
-	SWITCHTEC_PCI_DEVICE(0x8564, SWITCHTEC_GEN3), //PFXL 64XG3
-	SWITCHTEC_PCI_DEVICE(0x8565, SWITCHTEC_GEN3), //PFXL 80XG3
-	SWITCHTEC_PCI_DEVICE(0x8566, SWITCHTEC_GEN3), //PFXL 96XG3
-	SWITCHTEC_PCI_DEVICE(0x8571, SWITCHTEC_GEN3), //PFXI 24XG3
-	SWITCHTEC_PCI_DEVICE(0x8572, SWITCHTEC_GEN3), //PFXI 32XG3
-	SWITCHTEC_PCI_DEVICE(0x8573, SWITCHTEC_GEN3), //PFXI 48XG3
-	SWITCHTEC_PCI_DEVICE(0x8574, SWITCHTEC_GEN3), //PFXI 64XG3
-	SWITCHTEC_PCI_DEVICE(0x8575, SWITCHTEC_GEN3), //PFXI 80XG3
-	SWITCHTEC_PCI_DEVICE(0x8576, SWITCHTEC_GEN3), //PFXI 96XG3
-	SWITCHTEC_PCI_DEVICE(0x4000, SWITCHTEC_GEN4), //PFX 100XG4
-	SWITCHTEC_PCI_DEVICE(0x4084, SWITCHTEC_GEN4), //PFX 84XG4
-	SWITCHTEC_PCI_DEVICE(0x4068, SWITCHTEC_GEN4), //PFX 68XG4
-	SWITCHTEC_PCI_DEVICE(0x4052, SWITCHTEC_GEN4), //PFX 52XG4
-	SWITCHTEC_PCI_DEVICE(0x4036, SWITCHTEC_GEN4), //PFX 36XG4
-	SWITCHTEC_PCI_DEVICE(0x4028, SWITCHTEC_GEN4), //PFX 28XG4
-	SWITCHTEC_PCI_DEVICE(0x4100, SWITCHTEC_GEN4), //PSX 100XG4
-	SWITCHTEC_PCI_DEVICE(0x4184, SWITCHTEC_GEN4), //PSX 84XG4
-	SWITCHTEC_PCI_DEVICE(0x4168, SWITCHTEC_GEN4), //PSX 68XG4
-	SWITCHTEC_PCI_DEVICE(0x4152, SWITCHTEC_GEN4), //PSX 52XG4
-	SWITCHTEC_PCI_DEVICE(0x4136, SWITCHTEC_GEN4), //PSX 36XG4
-	SWITCHTEC_PCI_DEVICE(0x4128, SWITCHTEC_GEN4), //PSX 28XG4
-	SWITCHTEC_PCI_DEVICE(0x4200, SWITCHTEC_GEN4), //PAX 100XG4
-	SWITCHTEC_PCI_DEVICE(0x4284, SWITCHTEC_GEN4), //PAX 84XG4
-	SWITCHTEC_PCI_DEVICE(0x4268, SWITCHTEC_GEN4), //PAX 68XG4
-	SWITCHTEC_PCI_DEVICE(0x4252, SWITCHTEC_GEN4), //PAX 52XG4
-	SWITCHTEC_PCI_DEVICE(0x4236, SWITCHTEC_GEN4), //PAX 36XG4
-	SWITCHTEC_PCI_DEVICE(0x4228, SWITCHTEC_GEN4), //PAX 28XG4
-	SWITCHTEC_PCI_DEVICE(0x4352, SWITCHTEC_GEN4), //PFXA 52XG4
-	SWITCHTEC_PCI_DEVICE(0x4336, SWITCHTEC_GEN4), //PFXA 36XG4
-	SWITCHTEC_PCI_DEVICE(0x4328, SWITCHTEC_GEN4), //PFXA 28XG4
-	SWITCHTEC_PCI_DEVICE(0x4452, SWITCHTEC_GEN4), //PSXA 52XG4
-	SWITCHTEC_PCI_DEVICE(0x4436, SWITCHTEC_GEN4), //PSXA 36XG4
-	SWITCHTEC_PCI_DEVICE(0x4428, SWITCHTEC_GEN4), //PSXA 28XG4
-	SWITCHTEC_PCI_DEVICE(0x4552, SWITCHTEC_GEN4), //PAXA 52XG4
-	SWITCHTEC_PCI_DEVICE(0x4536, SWITCHTEC_GEN4), //PAXA 36XG4
-	SWITCHTEC_PCI_DEVICE(0x4528, SWITCHTEC_GEN4), //PAXA 28XG4
+	SWITCHTEC_PCI_DEVICE(0x8531, SWITCHTEC_GEN3), /* PFX 24xG3 */
+	SWITCHTEC_PCI_DEVICE(0x8532, SWITCHTEC_GEN3), /* PFX 32xG3 */
+	SWITCHTEC_PCI_DEVICE(0x8533, SWITCHTEC_GEN3), /* PFX 48xG3 */
+	SWITCHTEC_PCI_DEVICE(0x8534, SWITCHTEC_GEN3), /* PFX 64xG3 */
+	SWITCHTEC_PCI_DEVICE(0x8535, SWITCHTEC_GEN3), /* PFX 80xG3 */
+	SWITCHTEC_PCI_DEVICE(0x8536, SWITCHTEC_GEN3), /* PFX 96xG3 */
+	SWITCHTEC_PCI_DEVICE(0x8541, SWITCHTEC_GEN3), /* PSX 24xG3 */
+	SWITCHTEC_PCI_DEVICE(0x8542, SWITCHTEC_GEN3), /* PSX 32xG3 */
+	SWITCHTEC_PCI_DEVICE(0x8543, SWITCHTEC_GEN3), /* PSX 48xG3 */
+	SWITCHTEC_PCI_DEVICE(0x8544, SWITCHTEC_GEN3), /* PSX 64xG3 */
+	SWITCHTEC_PCI_DEVICE(0x8545, SWITCHTEC_GEN3), /* PSX 80xG3 */
+	SWITCHTEC_PCI_DEVICE(0x8546, SWITCHTEC_GEN3), /* PSX 96xG3 */
+	SWITCHTEC_PCI_DEVICE(0x8551, SWITCHTEC_GEN3), /* PAX 24XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8552, SWITCHTEC_GEN3), /* PAX 32XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8553, SWITCHTEC_GEN3), /* PAX 48XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8554, SWITCHTEC_GEN3), /* PAX 64XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8555, SWITCHTEC_GEN3), /* PAX 80XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8556, SWITCHTEC_GEN3), /* PAX 96XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8561, SWITCHTEC_GEN3), /* PFXL 24XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8562, SWITCHTEC_GEN3), /* PFXL 32XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8563, SWITCHTEC_GEN3), /* PFXL 48XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8564, SWITCHTEC_GEN3), /* PFXL 64XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8565, SWITCHTEC_GEN3), /* PFXL 80XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8566, SWITCHTEC_GEN3), /* PFXL 96XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8571, SWITCHTEC_GEN3), /* PFXI 24XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8572, SWITCHTEC_GEN3), /* PFXI 32XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8573, SWITCHTEC_GEN3), /* PFXI 48XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8574, SWITCHTEC_GEN3), /* PFXI 64XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8575, SWITCHTEC_GEN3), /* PFXI 80XG3 */
+	SWITCHTEC_PCI_DEVICE(0x8576, SWITCHTEC_GEN3), /* PFXI 96XG3 */
+	SWITCHTEC_PCI_DEVICE(0x4000, SWITCHTEC_GEN4), /* PFX 100XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4084, SWITCHTEC_GEN4), /* PFX 84XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4068, SWITCHTEC_GEN4), /* PFX 68XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4052, SWITCHTEC_GEN4), /* PFX 52XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4036, SWITCHTEC_GEN4), /* PFX 36XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4028, SWITCHTEC_GEN4), /* PFX 28XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4100, SWITCHTEC_GEN4), /* PSX 100XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4184, SWITCHTEC_GEN4), /* PSX 84XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4168, SWITCHTEC_GEN4), /* PSX 68XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4152, SWITCHTEC_GEN4), /* PSX 52XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4136, SWITCHTEC_GEN4), /* PSX 36XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4128, SWITCHTEC_GEN4), /* PSX 28XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4200, SWITCHTEC_GEN4), /* PAX 100XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4284, SWITCHTEC_GEN4), /* PAX 84XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4268, SWITCHTEC_GEN4), /* PAX 68XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4252, SWITCHTEC_GEN4), /* PAX 52XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4236, SWITCHTEC_GEN4), /* PAX 36XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4228, SWITCHTEC_GEN4), /* PAX 28XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4352, SWITCHTEC_GEN4), /* PFXA 52XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4336, SWITCHTEC_GEN4), /* PFXA 36XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4328, SWITCHTEC_GEN4), /* PFXA 28XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4452, SWITCHTEC_GEN4), /* PSXA 52XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4436, SWITCHTEC_GEN4), /* PSXA 36XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4428, SWITCHTEC_GEN4), /* PSXA 28XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4552, SWITCHTEC_GEN4), /* PAXA 52XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4536, SWITCHTEC_GEN4), /* PAXA 36XG4 */
+	SWITCHTEC_PCI_DEVICE(0x4528, SWITCHTEC_GEN4), /* PAXA 28XG4 */
+	SWITCHTEC_PCI_DEVICE(0x5000, SWITCHTEC_GEN5), /* PFX 100XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5084, SWITCHTEC_GEN5), /* PFX 84XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5068, SWITCHTEC_GEN5), /* PFX 68XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5052, SWITCHTEC_GEN5), /* PFX 52XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5036, SWITCHTEC_GEN5), /* PFX 36XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5028, SWITCHTEC_GEN5), /* PFX 28XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5100, SWITCHTEC_GEN5), /* PSX 100XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5184, SWITCHTEC_GEN5), /* PSX 84XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5168, SWITCHTEC_GEN5), /* PSX 68XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5152, SWITCHTEC_GEN5), /* PSX 52XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5136, SWITCHTEC_GEN5), /* PSX 36XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5128, SWITCHTEC_GEN5), /* PSX 28XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5200, SWITCHTEC_GEN5), /* PAX 100XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5284, SWITCHTEC_GEN5), /* PAX 84XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5268, SWITCHTEC_GEN5), /* PAX 68XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5252, SWITCHTEC_GEN5), /* PAX 52XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5236, SWITCHTEC_GEN5), /* PAX 36XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5228, SWITCHTEC_GEN5), /* PAX 28XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5300, SWITCHTEC_GEN5), /* PFXA 100XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5384, SWITCHTEC_GEN5), /* PFXA 84XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5368, SWITCHTEC_GEN5), /* PFXA 68XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5352, SWITCHTEC_GEN5), /* PFXA 52XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5336, SWITCHTEC_GEN5), /* PFXA 36XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5328, SWITCHTEC_GEN5), /* PFXA 28XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5400, SWITCHTEC_GEN5), /* PSXA 100XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5484, SWITCHTEC_GEN5), /* PSXA 84XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5468, SWITCHTEC_GEN5), /* PSXA 68XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5452, SWITCHTEC_GEN5), /* PSXA 52XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5436, SWITCHTEC_GEN5), /* PSXA 36XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5428, SWITCHTEC_GEN5), /* PSXA 28XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5500, SWITCHTEC_GEN5), /* PAXA 100XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5584, SWITCHTEC_GEN5), /* PAXA 84XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5568, SWITCHTEC_GEN5), /* PAXA 68XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5552, SWITCHTEC_GEN5), /* PAXA 52XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5536, SWITCHTEC_GEN5), /* PAXA 36XG5 */
+	SWITCHTEC_PCI_DEVICE(0x5528, SWITCHTEC_GEN5), /* PAXA 28XG5 */
 	{0}
 };
 MODULE_DEVICE_TABLE(pci, switchtec_pci_tbl);
drivers/pci/syscall.c (+6 -6)
···
 
 	switch (len) {
 	case 1:
-		err = put_user(byte, (unsigned char __user *)buf);
+		err = put_user(byte, (u8 __user *)buf);
 		break;
 	case 2:
-		err = put_user(word, (unsigned short __user *)buf);
+		err = put_user(word, (u16 __user *)buf);
 		break;
 	case 4:
-		err = put_user(dword, (unsigned int __user *)buf);
+		err = put_user(dword, (u32 __user *)buf);
 		break;
 	}
 	pci_dev_put(dev);
···
 	   they get instead of a machine check on x86. */
 	switch (len) {
 	case 1:
-		put_user(-1, (unsigned char __user *)buf);
+		put_user(-1, (u8 __user *)buf);
 		break;
 	case 2:
-		put_user(-1, (unsigned short __user *)buf);
+		put_user(-1, (u16 __user *)buf);
 		break;
 	case 4:
-		put_user(-1, (unsigned int __user *)buf);
+		put_user(-1, (u32 __user *)buf);
 		break;
 	}
 	pci_dev_put(dev);
drivers/pci/vgaarb.c (+180 -178)
···
 // SPDX-License-Identifier: MIT
 /*
- * vgaarb.c: Implements the VGA arbitration. For details refer to
+ * vgaarb.c: Implements VGA arbitration. For details refer to
  * Documentation/gpu/vgaarbiter.rst
  *
  * (C) Copyright 2005 Benjamin Herrenschmidt <benh@kernel.crashing.org>
···
 #include <linux/vt.h>
 #include <linux/console.h>
 #include <linux/acpi.h>
-
 #include <linux/uaccess.h>
-
 #include <linux/vgaarb.h>
 
 static void vga_arbiter_notify_clients(void);
+
 /*
- * We keep a list of all vga devices in the system to speed
+ * We keep a list of all VGA devices in the system to speed
  * up the various operations of the arbiter
  */
 struct vga_device {
 	struct list_head list;
 	struct pci_dev *pdev;
-	unsigned int decodes;	/* what does it decodes */
-	unsigned int owns;	/* what does it owns */
-	unsigned int locks;	/* what does it locks */
+	unsigned int decodes;	/* what it decodes */
+	unsigned int owns;	/* what it owns */
+	unsigned int locks;	/* what it locks */
 	unsigned int io_lock_cnt;	/* legacy IO lock count */
 	unsigned int mem_lock_cnt;	/* legacy MEM lock count */
 	unsigned int io_norm_cnt;	/* normal IO count */
···
 static bool vga_arbiter_used;
 static DEFINE_SPINLOCK(vga_lock);
 static DECLARE_WAIT_QUEUE_HEAD(vga_wait_queue);
-
 
 static const char *vga_iostate_to_str(unsigned int iostate)
 {
···
 	return "none";
 }
 
-static int vga_str_to_iostate(char *buf, int str_size, int *io_state)
+static int vga_str_to_iostate(char *buf, int str_size, unsigned int *io_state)
 {
-	/* we could in theory hand out locks on IO and mem
-	 * separately to userspace but it can cause deadlocks */
+	/*
+	 * In theory, we could hand out locks on IO and MEM separately to
+	 * userspace, but this can cause deadlocks.
+	 */
 	if (strncmp(buf, "none", 4) == 0) {
 		*io_state = VGA_RSRC_NONE;
 		return 1;
 	}
 
-	/* XXX We're not chekcing the str_size! */
+	/* XXX We're not checking the str_size! */
 	if (strncmp(buf, "io+mem", 6) == 0)
 		goto both;
 	else if (strncmp(buf, "io", 2) == 0)
···
 	return 1;
 }
 
-/* this is only used a cookie - it should not be dereferenced */
+/* This is only used as a cookie, it should not be dereferenced */
 static struct pci_dev *vga_default;
 
 /* Find somebody in our list */
···
 /**
  * vga_default_device - return the default VGA device, for vgacon
  *
- * This can be defined by the platform. The default implementation
- * is rather dumb and will probably only work properly on single
- * vga card setups and/or x86 platforms.
+ * This can be defined by the platform. The default implementation is
+ * rather dumb and will probably only work properly on single VGA card
+ * setups and/or x86 platforms.
  *
- * If your VGA default device is not PCI, you'll have to return
- * NULL here. In this case, I assume it will not conflict with
- * any PCI card. If this is not true, I'll have to define two archs
- * hooks for enabling/disabling the VGA default device if that is
- * possible. This may be a problem with real _ISA_ VGA cards, in
- * addition to a PCI one. I don't know at this point how to deal
- * with that card. Can theirs IOs be disabled at all ? If not, then
- * I suppose it's a matter of having the proper arch hook telling
- * us about it, so we basically never allow anybody to succeed a
- * vga_get()...
+ * If your VGA default device is not PCI, you'll have to return NULL here.
+ * In this case, I assume it will not conflict with any PCI card. If this
+ * is not true, I'll have to define two arch hooks for enabling/disabling
+ * the VGA default device if that is possible. This may be a problem with
+ * real _ISA_ VGA cards, in addition to a PCI one. I don't know at this
+ * point how to deal with that card. Can their IOs be disabled at all? If
+ * not, then I suppose it's a matter of having the proper arch hook telling
+ * us about it, so we basically never allow anybody to succeed a vga_get().
  */
 struct pci_dev *vga_default_device(void)
 {
···
 }
 
 /**
- * vga_remove_vgacon - deactivete vga console
+ * vga_remove_vgacon - deactivate VGA console
  *
- * Unbind and unregister vgacon in case pdev is the default vga
- * device. Can be called by gpu drivers on initialization to make
- * sure vga register access done by vgacon will not disturb the
- * device.
+ * Unbind and unregister vgacon in case pdev is the default VGA device.
+ * Can be called by GPU drivers on initialization to make sure VGA register
+ * access done by vgacon will not disturb the device.
  *
- * @pdev: pci device.
+ * @pdev: PCI device.
  */
 #if !defined(CONFIG_VGA_CONSOLE)
 int vga_remove_vgacon(struct pci_dev *pdev)
···
 #endif
 EXPORT_SYMBOL(vga_remove_vgacon);
 
-/* If we don't ever use VGA arb we should avoid
-   turning off anything anywhere due to old X servers getting
-   confused about the boot device not being VGA */
+/*
+ * If we don't ever use VGA arbitration, we should avoid turning off
+ * anything anywhere due to old X servers getting confused about the boot
+ * device not being VGA.
+ */
 static void vga_check_first_use(void)
 {
-	/* we should inform all GPUs in the system that
-	 * VGA arb has occurred and to try and disable resources
-	 * if they can */
+	/*
+	 * Inform all GPUs in the system that VGA arbitration has occurred
+	 * so they can disable resources if possible.
+	 */
 	if (!vga_arbiter_used) {
 		vga_arbiter_used = true;
 		vga_arbiter_notify_clients();
···
 	unsigned int pci_bits;
 	u32 flags = 0;
 
-	/* Account for "normal" resources to lock. If we decode the legacy,
+	/*
+	 * Account for "normal" resources to lock. If we decode the legacy,
 	 * counterpart, we need to request it as well
 	 */
 	if ((rsrc & VGA_RSRC_NORMAL_IO) &&
···
 	if (wants == 0)
 		goto lock_them;
 
-	/* We don't need to request a legacy resource, we just enable
-	 * appropriate decoding and go
+	/*
+	 * We don't need to request a legacy resource, we just enable
+	 * appropriate decoding and go.
 	 */
 	legacy_wants = wants & VGA_RSRC_LEGACY_MASK;
 	if (legacy_wants == 0)
 		goto enable_them;
 
-	/* Ok, we don't, let's find out how we need to kick off */
+	/* Ok, we don't, let's find out who we need to kick off */
 	list_for_each_entry(conflict, &vga_list, list) {
 		unsigned int lwants = legacy_wants;
 		unsigned int change_bridge = 0;
···
 		if (vgadev == conflict)
 			continue;
 
-		/* We have a possible conflict. before we go further, we must
+		/*
+		 * We have a possible conflict. Before we go further, we must
 		 * check if we sit on the same bus as the conflicting device.
-		 * if we don't, then we must tie both IO and MEM resources
+		 * If we don't, then we must tie both IO and MEM resources
 		 * together since there is only a single bit controlling
-		 * VGA forwarding on P2P bridges
+		 * VGA forwarding on P2P bridges.
 		 */
 		if (vgadev->pdev->bus != conflict->pdev->bus) {
 			change_bridge = 1;
 			lwants = VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM;
 		}
 
-		/* Check if the guy has a lock on the resource. If he does,
-		 * return the conflicting entry
+		/*
+		 * Check if the guy has a lock on the resource. If he does,
+		 * return the conflicting entry.
 		 */
 		if (conflict->locks & lwants)
 			return conflict;
 
-		/* Ok, now check if it owns the resource we want. We can
-		 * lock resources that are not decoded, therefore a device
+		/*
+		 * Ok, now check if it owns the resource we want. We can
+		 * lock resources that are not decoded; therefore a device
 		 * can own resources it doesn't decode.
 		 */
 		match = lwants & conflict->owns;
 		if (!match)
 			continue;
 
-		/* looks like he doesn't have a lock, we can steal
-		 * them from him
+		/*
+		 * Looks like he doesn't have a lock, we can steal them
+		 * from him.
 		 */
 
 		flags = 0;
 		pci_bits = 0;
 
-		/* If we can't control legacy resources via the bridge, we
+		/*
+		 * If we can't control legacy resources via the bridge, we
 		 * also need to disable normal decoding.
 		 */
 		if (!conflict->bridge_has_one_vga) {
···
 	}
 
 enable_them:
-	/* ok dude, we got it, everybody conflicting has been disabled, let's
+	/*
+	 * Ok, we got it, everybody conflicting has been disabled, let's
 	 * enable us. Mark any bits in "owns" regardless of whether we
 	 * decoded them. We can lock resources we don't decode, therefore
 	 * we must track them via "owns".
···
 
 	vgaarb_dbg(dev, "%s\n", __func__);
 
-	/* Update our counters, and account for equivalent legacy resources
-	 * if we decode them
+	/*
+	 * Update our counters and account for equivalent legacy resources
+	 * if we decode them.
 	 */
 	if ((rsrc & VGA_RSRC_NORMAL_IO) && vgadev->io_norm_cnt > 0) {
 		vgadev->io_norm_cnt--;
···
 	if ((rsrc & VGA_RSRC_LEGACY_MEM) && vgadev->mem_lock_cnt > 0)
 		vgadev->mem_lock_cnt--;
 
-	/* Just clear lock bits, we do lazy operations so we don't really
-	 * have to bother about anything else at this point
+	/*
+	 * Just clear lock bits, we do lazy operations so we don't really
+	 * have to bother about anything else at this point.
 	 */
 	if (vgadev->io_lock_cnt == 0)
 		vgadev->locks &= ~VGA_RSRC_LEGACY_IO;
 	if (vgadev->mem_lock_cnt == 0)
 		vgadev->locks &= ~VGA_RSRC_LEGACY_MEM;
 
-	/* Kick the wait queue in case somebody was waiting if we actually
-	 * released something
+	/*
+	 * Kick the wait queue in case somebody was waiting if we actually
+	 * released something.
 	 */
 	if (old_locks != vgadev->locks)
 		wake_up_all(&vga_wait_queue);
 }
 
 /**
- * vga_get - acquire & locks VGA resources
- * @pdev: pci device of the VGA card or NULL for the system default
+ * vga_get - acquire & lock VGA resources
+ * @pdev: PCI device of the VGA card or NULL for the system default
  * @rsrc: bit mask of resources to acquire and lock
  * @interruptible: blocking should be interruptible by signals ?
  *
- * This function acquires VGA resources for the given card and mark those
- * resources locked. If the resource requested are "normal" (and not legacy)
+ * Acquire VGA resources for the given card and mark those resources
+ * locked. If the resources requested are "normal" (and not legacy)
  * resources, the arbiter will first check whether the card is doing legacy
- * decoding for that type of resource. If yes, the lock is "converted" into a
- * legacy resource lock.
+ * decoding for that type of resource. If yes, the lock is "converted" into
+ * a legacy resource lock.
  *
  * The arbiter will first look for all VGA cards that might conflict and disable
  * their IOs and/or Memory access, including VGA forwarding on P2P bridges if
···
 	int rc = 0;
 
 	vga_check_first_use();
-	/* The one who calls us should check for this, but lets be sure... */
+	/* The caller should check for this, but let's be sure */
 	if (pdev == NULL)
 		pdev = vga_default_device();
 	if (pdev == NULL)
···
 		if (conflict == NULL)
 			break;
 
-
-		/* We have a conflict, we wait until somebody kicks the
+		/*
+		 * We have a conflict; we wait until somebody kicks the
 		 * work queue. Currently we have one work queue that we
 		 * kick each time some resources are released, but it would
-		 * be fairly easy to have a per device one so that we only
-		 * need to attach to the conflicting device
+		 * be fairly easy to have a per-device one so that we only
+		 * need to attach to the conflicting device.
 		 */
 		init_waitqueue_entry(&wait, current);
 		add_wait_queue(&vga_wait_queue, &wait);
···
 
 /**
  * vga_tryget - try to acquire & lock legacy VGA resources
- * @pdev: pci devivce of VGA card or NULL for system default
+ * @pdev: PCI device of VGA card or NULL for system default
  * @rsrc: bit mask of resources to acquire and lock
  *
- * This function performs the same operation as vga_get(), but will return an
- * error (-EBUSY) instead of blocking if the resources are already locked by
- * another card. It can be called in any context
+ * Perform the same operation as vga_get(), but return an error (-EBUSY)
+ * instead of blocking if the resources are already locked by another card.
+ * Can be called in any context.
  *
  * On success, release the VGA resource again with vga_put().
  *
···
 
 	vga_check_first_use();
 
-	/* The one who calls us should check for this, but lets be sure... */
+	/* The caller should check for this, but let's be sure */
 	if (pdev == NULL)
 		pdev = vga_default_device();
 	if (pdev == NULL)
···
 
 /**
  * vga_put - release lock on legacy VGA resources
- * @pdev: pci device of VGA card or NULL for system default
- * @rsrc: but mask of resource to release
+ * @pdev: PCI device of VGA card or NULL for system default
+ * @rsrc: bit mask of resource to release
  *
- * This fuction releases resources previously locked by vga_get() or
- * vga_tryget(). The resources aren't disabled right away, so that a subsequence
- * vga_get() on the same card will succeed immediately. Resources have a
- * counter, so locks are only released if the counter reaches 0.
+ * Release resources previously locked by vga_get() or vga_tryget(). The
+ * resources aren't disabled right away, so that a subsequent vga_get() on
+ * the same card will succeed immediately. Resources have a counter, so
+ * locks are only released if the counter reaches 0.
  */
 void vga_put(struct pci_dev *pdev, unsigned int rsrc)
 {
 	struct vga_device *vgadev;
 	unsigned long flags;
 
-	/* The one who calls us should check for this, but lets be sure... */
+	/* The caller should check for this, but let's be sure */
 	if (pdev == NULL)
 		pdev = vga_default_device();
 	if (pdev == NULL)
···
 	}
 
 	/*
-	 * vgadev has neither IO nor MEM enabled. If we haven't found any
+	 * Vgadev has neither IO nor MEM enabled. If we haven't found any
 	 * other VGA devices, it is the best candidate so far.
 	 */
 	if (!boot_vga)
···
 		return;
 	}
 
-	/* okay iterate the new devices bridge hierarachy */
+	/* Iterate the new device's bridge hierarchy */
 	new_bus = vgadev->pdev->bus;
 	while (new_bus) {
 		new_bridge = new_bus->self;
 
-		/* go through list of devices already registered */
+		/* Go through list of devices already registered */
 		list_for_each_entry(same_bridge_vgadev, &vga_list, list) {
 			bus = same_bridge_vgadev->pdev->bus;
 			bridge = bus->self;
 
-			/* see if the share a bridge with this device */
+			/* See if it shares a bridge with this device */
 			if (new_bridge == bridge) {
 				/*
-				 * If their direct parent bridge is the same
+				 * If its direct parent bridge is the same
 				 * as any bridge of this device then it can't
 				 * be used for that device.
 				 */
···
 			}
 
 			/*
-			 * Now iterate the previous devices bridge hierarchy.
-			 * If the new devices parent bridge is in the other
-			 * devices hierarchy then we can't use it to control
-			 * this device
+			 * Now iterate the previous device's bridge hierarchy.
+			 * If the new device's parent bridge is in the other
+			 * device's hierarchy, we can't use it to control this
+			 * device.
 			 */
 			while (bus) {
 				bridge = bus->self;
···
 }
 
 /*
- * Currently, we assume that the "initial" setup of the system is
- * not sane, that is we come up with conflicting devices and let
- * the arbiter's client decides if devices decodes or not legacy
- * things.
+ * Currently, we assume that the "initial" setup of the system is not sane,
+ * that is, we come up with conflicting devices and let the arbiter's
+ * client decide if devices decodes legacy things or not.
 */
 static bool vga_arbiter_add_pci_device(struct pci_dev *pdev)
 {
···
 	if (vgadev == NULL) {
 		vgaarb_err(&pdev->dev, "failed to allocate VGA arbiter data\n");
 		/*
-		 * What to do on allocation failure ? For now, let's just do
+		 * What to do on allocation failure? For now, let's just do
 		 * nothing, I'm not sure there is anything saner to be done.
 		 */
 		return false;
···
 	vgadev->decodes = VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM |
 			  VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM;
 
-	/* by default mark it as decoding */
+	/* By default, mark it as decoding */
 	vga_decode_count++;
-	/* Mark that we "own" resources based on our enables, we will
-	 * clear that below if the bridge isn't forwarding
+
+	/*
+	 * Mark that we "own" resources based on our enables, we will
+	 * clear that below if the bridge isn't forwarding.
 	 */
 	pci_read_config_word(pdev, PCI_COMMAND, &cmd);
 	if (cmd & PCI_COMMAND_IO)
···
 	return ret;
 }
 
-/* this is called with the lock */
-static inline void vga_update_device_decodes(struct vga_device *vgadev,
-					     int new_decodes)
+/* Called with the lock */
+static void vga_update_device_decodes(struct vga_device *vgadev,
+				      unsigned int new_decodes)
 {
 	struct device *dev = &vgadev->pdev->dev;
-	int old_decodes, decodes_removed, decodes_unlocked;
+	unsigned int old_decodes = vgadev->decodes;
+	unsigned int decodes_removed = ~new_decodes & old_decodes;
+	unsigned int decodes_unlocked = vgadev->locks & decodes_removed;
 
-	old_decodes = vgadev->decodes;
-	decodes_removed = ~new_decodes & old_decodes;
-	decodes_unlocked = vgadev->locks & decodes_removed;
 	vgadev->decodes = new_decodes;
 
-	vgaarb_info(dev, "changed VGA decodes: olddecodes=%s,decodes=%s:owns=%s\n",
-		vga_iostate_to_str(old_decodes),
-		vga_iostate_to_str(vgadev->decodes),
-		vga_iostate_to_str(vgadev->owns));
+	vgaarb_info(dev, "VGA decodes changed: olddecodes=%s,decodes=%s:owns=%s\n",
+		    vga_iostate_to_str(old_decodes),
+		    vga_iostate_to_str(vgadev->decodes),
+		    vga_iostate_to_str(vgadev->owns));
 
-	/* if we removed locked decodes, lock count goes to zero, and release */
+	/* If we removed locked decodes, lock count goes to zero, and release */
 	if (decodes_unlocked) {
 		if (decodes_unlocked & VGA_RSRC_LEGACY_IO)
 			vgadev->io_lock_cnt = 0;
···
 		__vga_put(vgadev, decodes_unlocked);
 	}
 
-	/* change decodes counter */
+	/* Change decodes counter */
 	if (old_decodes & VGA_RSRC_LEGACY_MASK &&
 	    !(new_decodes & VGA_RSRC_LEGACY_MASK))
 		vga_decode_count--;
···
 	if (vgadev == NULL)
 		goto bail;
 
-	/* don't let userspace futz with kernel driver decodes */
+	/* Don't let userspace futz with kernel driver decodes */
 	if (userspace && vgadev->set_decode)
 		goto bail;
 
-	/* update the device decodes + counter */
+	/* Update the device decodes + counter */
 	vga_update_device_decodes(vgadev, decodes);
 
-	/* XXX if somebody is going from "doesn't decode" to "decodes" state
-	 * here, additional care must be taken as we may have pending owner
-	 * ship of non-legacy region ...
+	/*
+	 * XXX If somebody is going from "doesn't decode" to "decodes"
+	 * state here, additional care must be taken as we may have pending
+	 * ownership of non-legacy region.
 	 */
 bail:
 	spin_unlock_irqrestore(&vga_lock, flags);
···
 
 /**
  * vga_set_legacy_decoding
- * @pdev: pci device of the VGA card
+ * @pdev: PCI device of the VGA card
  * @decodes: bit mask of what legacy regions the card decodes
  *
- * Indicates to the arbiter if the card decodes legacy VGA IOs, legacy VGA
+ * Indicate to the arbiter if the card decodes legacy VGA IOs, legacy VGA
  * Memory, both, or none. All cards default to both, the card driver (fbdev for
  * example) should tell the arbiter if it has disabled legacy decoding, so the
  * card can be left out of the arbitration process (and can be safe to take
···
 
 /**
  * vga_client_register - register or unregister a VGA arbitration client
- * @pdev: pci device of the VGA client
- * @set_decode: vga decode change callback
+ * @pdev: PCI device of the VGA client
+ * @set_decode: VGA decode change callback
  *
  * Clients have two callback mechanisms they can use.
  *
  * @set_decode callback: If a client can disable its GPU VGA resource, it
  * will get a callback from this to set the encode/decode state.
  *
- * Rationale: we cannot disable VGA decode resources unconditionally some single
- * GPU laptops seem to require ACPI or BIOS access to the VGA registers to
- * control things like backlights etc. Hopefully newer multi-GPU laptops do
- * something saner, and desktops won't have any special ACPI for this. The
- * driver will get a callback when VGA arbitration is first used by userspace
- * since some older X servers have issues.
+ * Rationale: we cannot disable VGA decode resources unconditionally
+ * because some single GPU laptops seem to require ACPI or BIOS access to
+ * the VGA registers to control things like backlights etc. Hopefully newer
+ * multi-GPU laptops do something saner, and desktops won't have any
+ * special ACPI for this. The driver will get a callback when VGA
+ * arbitration is first used by userspace since some older X servers have
+ * issues.
  *
- * This function does not check whether a client for @pdev has been registered
- * already.
+ * Does not check whether a client for @pdev has been registered already.
  *
- * To unregister just call vga_client_unregister().
+ * To unregister, call vga_client_unregister().
  *
- * Returns: 0 on success, -1 on failure
+ * Returns: 0 on success, -ENODEV on failure
  */
 int vga_client_register(struct pci_dev *pdev,
 		unsigned int (*set_decode)(struct pci_dev *pdev, bool decode))
 {
-	int ret = -ENODEV;
-	struct vga_device *vgadev;
 	unsigned long flags;
+	struct vga_device *vgadev;
 
 	spin_lock_irqsave(&vga_lock, flags);
 	vgadev = vgadev_find(pdev);
-	if (!vgadev)
-		goto bail;
-
-	vgadev->set_decode = set_decode;
-	ret = 0;
-
-bail:
+	if (vgadev)
+		vgadev->set_decode = set_decode;
 	spin_unlock_irqrestore(&vga_lock, flags);
-	return ret;
-
+	if (!vgadev)
+		return -ENODEV;
+	return 0;
 }
 EXPORT_SYMBOL(vga_client_register);
···
 *
 * Semantics is:
 *
- * open    : open user instance of the arbitrer. by default, it's
+ * open    : Open user instance of the arbiter. By default, it's
 *           attached to the default VGA device of the system.
 *
- * close   : close user instance, release locks
+ * close   : Close user instance, release locks
 *
- * read    : return a string indicating the status of the target.
- *           an IO state string is of the form {io,mem,io+mem,none},
+ * read    : Return a string indicating the status of the target.
+ *           An IO state string is of the form {io,mem,io+mem,none},
 *           mc and ic are respectively mem and io lock counts (for
 *           debugging/diagnostic only). "decodes" indicate what the
 *           card currently decodes, "owns" indicates what is currently
···
 * write   : write a command to the arbiter.
List of commands is: 1025 1018 * 1026 1019 * target <card_ID> : switch target to card <card_ID> (see below) 1027 - * lock <io_state> : acquires locks on target ("none" is invalid io_state) 1020 + * lock <io_state> : acquire locks on target ("none" is invalid io_state) 1028 1021 * trylock <io_state> : non-blocking acquire locks on target 1029 1022 * unlock <io_state> : release locks on target 1030 1023 * unlock all : release all locks on target held by this user ··· 1041 1034 * Note about locks: 1042 1035 * 1043 1036 * The driver keeps track of which user has what locks on which card. It 1044 - * supports stacking, like the kernel one. This complexifies the implementation 1037 + * supports stacking, like the kernel one. This complicates the implementation 1045 1038 * a bit, but makes the arbiter more tolerant to userspace problems and able 1046 1039 * to properly cleanup in all cases when a process dies. 1047 1040 * Currently, a max of 16 cards simultaneously can have locks issued from 1048 1041 * userspace for a given user (file descriptor instance) of the arbiter. 1049 1042 * 1050 1043 * If the device is hot-unplugged, there is a hook inside the module to notify 1051 - * they being added/removed in the system and automatically added/removed in 1044 + * it being added/removed in the system and automatically added/removed in 1052 1045 * the arbiter. 1053 1046 */ 1054 1047 1055 1048 #define MAX_USER_CARDS CONFIG_VGA_ARB_MAX_GPUS 1056 1049 #define PCI_INVALID_CARD ((struct pci_dev *)-1UL) 1057 1050 1058 - /* 1059 - * Each user has an array of these, tracking which cards have locks 1060 - */ 1051 + /* Each user has an array of these, tracking which cards have locks */ 1061 1052 struct vga_arb_user_card { 1062 1053 struct pci_dev *pdev; 1063 1054 unsigned int mem_cnt; ··· 1074 1069 1075 1070 1076 1071 /* 1077 - * This function gets a string in the format: "PCI:domain:bus:dev.fn" and 1078 - * returns the respective values. 
If the string is not in this format, 1079 - * it returns 0. 1072 + * Take a string in the format: "PCI:domain:bus:dev.fn" and return the 1073 + * respective values. If the string is not in this format, return 0. 1080 1074 */ 1081 1075 static int vga_pci_str_to_vars(char *buf, int count, unsigned int *domain, 1082 1076 unsigned int *bus, unsigned int *devfn) 1083 1077 { 1084 1078 int n; 1085 1079 unsigned int slot, func; 1086 - 1087 1080 1088 1081 n = sscanf(buf, "PCI:%x:%x:%x.%x", domain, bus, &slot, &func); 1089 1082 if (n != 4) ··· 1107 1104 if (lbuf == NULL) 1108 1105 return -ENOMEM; 1109 1106 1110 - /* Protects vga_list */ 1107 + /* Protect vga_list */ 1111 1108 spin_lock_irqsave(&vga_lock, flags); 1112 1109 1113 1110 /* If we are targeting the default, use it */ ··· 1121 1118 /* Find card vgadev structure */ 1122 1119 vgadev = vgadev_find(pdev); 1123 1120 if (vgadev == NULL) { 1124 - /* Wow, it's not in the list, that shouldn't happen, 1125 - * let's fix us up and return invalid card 1121 + /* 1122 + * Wow, it's not in the list, that shouldn't happen, let's 1123 + * fix us up and return invalid card. 1126 1124 */ 1127 1125 spin_unlock_irqrestore(&vga_lock, flags); 1128 1126 len = sprintf(lbuf, "invalid"); 1129 1127 goto done; 1130 1128 } 1131 1129 1132 - /* Fill the buffer with infos */ 1130 + /* Fill the buffer with info */ 1133 1131 len = snprintf(lbuf, 1024, 1134 1132 "count:%d,PCI:%s,decodes=%s,owns=%s,locks=%s(%u:%u)\n", 1135 1133 vga_decode_count, pci_name(pdev), ··· 1176 1172 if (copy_from_user(kbuf, buf, count)) 1177 1173 return -EFAULT; 1178 1174 curr_pos = kbuf; 1179 - kbuf[count] = '\0'; /* Just to make sure... */ 1175 + kbuf[count] = '\0'; 1180 1176 1181 1177 if (strncmp(curr_pos, "lock ", 5) == 0) { 1182 1178 curr_pos += 5; ··· 1201 1197 1202 1198 vga_get_uninterruptible(pdev, io_state); 1203 1199 1204 - /* Update the client's locks lists... 
*/ 1200 + /* Update the client's locks lists */ 1205 1201 for (i = 0; i < MAX_USER_CARDS; i++) { 1206 1202 if (priv->cards[i].pdev == pdev) { 1207 1203 if (io_state & VGA_RSRC_LEGACY_IO) ··· 1318 1314 curr_pos += 7; 1319 1315 remaining -= 7; 1320 1316 pr_debug("client 0x%p called 'target'\n", priv); 1321 - /* if target is default */ 1317 + /* If target is default */ 1322 1318 if (!strncmp(curr_pos, "default", 7)) 1323 1319 pdev = pci_dev_get(vga_default_device()); 1324 1320 else { ··· 1368 1364 vgaarb_dbg(&pdev->dev, "maximum user cards (%d) number reached, ignoring this one!\n", 1369 1365 MAX_USER_CARDS); 1370 1366 pci_dev_put(pdev); 1371 - /* XXX: which value to return? */ 1367 + /* XXX: Which value to return? */ 1372 1368 ret_val = -ENOMEM; 1373 1369 goto done; 1374 1370 } ··· 1429 1425 list_add(&priv->list, &vga_user_list); 1430 1426 spin_unlock_irqrestore(&vga_user_lock, flags); 1431 1427 1432 - /* Set the client' lists of locks */ 1428 + /* Set the client's lists of locks */ 1433 1429 priv->target = vga_default_device(); /* Maybe this is still null! */ 1434 1430 priv->cards[0].pdev = priv->target; 1435 1431 priv->cards[0].io_cnt = 0; 1436 1432 priv->cards[0].mem_cnt = 0; 1437 - 1438 1433 1439 1434 return 0; 1440 1435 } ··· 1468 1465 } 1469 1466 1470 1467 /* 1471 - * callback any registered clients to let them know we have a 1472 - * change in VGA cards 1468 + * Callback any registered clients to let them know we have a change in VGA 1469 + * cards. 1473 1470 */ 1474 1471 static void vga_arbiter_notify_clients(void) 1475 1472 { 1476 1473 struct vga_device *vgadev; 1477 1474 unsigned long flags; 1478 - uint32_t new_decodes; 1475 + unsigned int new_decodes; 1479 1476 bool new_state; 1480 1477 1481 1478 if (!vga_arbiter_used) 1482 1479 return; 1483 1480 1481 + new_state = (vga_count > 1) ? 
false : true; 1482 + 1484 1483 spin_lock_irqsave(&vga_lock, flags); 1485 1484 list_for_each_entry(vgadev, &vga_list, list) { 1486 - if (vga_count > 1) 1487 - new_state = false; 1488 - else 1489 - new_state = true; 1490 1485 if (vgadev->set_decode) { 1491 1486 new_decodes = vgadev->set_decode(vgadev->pdev, 1492 1487 new_state); ··· 1503 1502 1504 1503 vgaarb_dbg(dev, "%s\n", __func__); 1505 1504 1506 - /* For now we're only intereted in devices added and removed. I didn't 1507 - * test this thing here, so someone needs to double check for the 1508 - * cases of hotplugable vga cards. */ 1505 + /* 1506 + * For now, we're only interested in devices added and removed. 1507 + * I didn't test this thing here, so someone needs to double check 1508 + * for the cases of hot-pluggable VGA cards. 1509 + */ 1509 1510 if (action == BUS_NOTIFY_ADD_DEVICE) 1510 1511 notify = vga_arbiter_add_pci_device(pdev); 1511 1512 else if (action == BUS_NOTIFY_DEL_DEVICE) ··· 1546 1543 1547 1544 bus_register_notifier(&pci_bus_type, &pci_notifier); 1548 1545 1549 - /* We add all PCI devices satisfying VGA class in the arbiter by 1550 - * default */ 1546 + /* Add all VGA class PCI devices by default */ 1551 1547 pdev = NULL; 1552 1548 while ((pdev = 1553 1549 pci_get_subsys(PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
+32 -2
drivers/pci/vpd.c
···
275 275 size_t count)
276 276 {
277 277 struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj));
278 + struct pci_dev *vpd_dev = dev;
279 + ssize_t ret;
278 280
279 - return pci_read_vpd(dev, off, count, buf);
281 + if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) {
282 + vpd_dev = pci_get_func0_dev(dev);
283 + if (!vpd_dev)
284 + return -ENODEV;
285 + }
286 +
287 + pci_config_pm_runtime_get(vpd_dev);
288 + ret = pci_read_vpd(vpd_dev, off, count, buf);
289 + pci_config_pm_runtime_put(vpd_dev);
290 +
291 + if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0)
292 + pci_dev_put(vpd_dev);
293 +
294 + return ret;
280 295 }
281 296
282 297 static ssize_t vpd_write(struct file *filp, struct kobject *kobj,
···
299 284 size_t count)
300 285 {
301 286 struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj));
287 + struct pci_dev *vpd_dev = dev;
288 + ssize_t ret;
302 289
303 - return pci_write_vpd(dev, off, count, buf);
290 + if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) {
291 + vpd_dev = pci_get_func0_dev(dev);
292 + if (!vpd_dev)
293 + return -ENODEV;
294 + }
295 +
296 + pci_config_pm_runtime_get(vpd_dev);
297 + ret = pci_write_vpd(vpd_dev, off, count, buf);
298 + pci_config_pm_runtime_put(vpd_dev);
299 +
300 + if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0)
301 + pci_dev_put(vpd_dev);
302 +
303 + return ret;
304 304 }
305 305 static BIN_ATTR(vpd, 0600, vpd_read, vpd_write, 0);
306 306
-11
include/linux/aer.h
···
41 41 };
42 42
43 43 #if defined(CONFIG_PCIEAER)
44 - /* PCIe port driver needs this function to enable AER */
45 - int pci_enable_pcie_error_reporting(struct pci_dev *dev);
46 - int pci_disable_pcie_error_reporting(struct pci_dev *dev);
47 44 int pci_aer_clear_nonfatal_status(struct pci_dev *dev);
48 45 #else
49 - static inline int pci_enable_pcie_error_reporting(struct pci_dev *dev)
50 - {
51 - return -EINVAL;
52 - }
53 - static inline int pci_disable_pcie_error_reporting(struct pci_dev *dev)
54 - {
55 - return -EINVAL;
56 - }
57 46 static inline int pci_aer_clear_nonfatal_status(struct pci_dev *dev)
58 47 {
59 48 return -EINVAL;
+40 -6
include/linux/pci.h
···
366 366 pci_power_t current_state; /* Current operating state. In ACPI,
367 367 this is D0-D3, D0 being fully
368 368 functional, and D3 being off. */
369 - unsigned int imm_ready:1; /* Supports Immediate Readiness */
370 369 u8 pm_cap; /* PM capability offset */
370 + unsigned int imm_ready:1; /* Supports Immediate Readiness */
371 371 unsigned int pme_support:5; /* Bitmask of states from which PME#
372 372 can be generated */
373 373 unsigned int pme_poll:1; /* Poll device's PME status bit */
···
392 392
393 393 #ifdef CONFIG_PCIEASPM
394 394 struct pcie_link_state *link_state; /* ASPM link state */
395 + u16 l1ss; /* L1SS Capability pointer */
395 396 unsigned int ltr_path:1; /* Latency Tolerance Reporting
396 397 supported from root to here */
397 - u16 l1ss; /* L1SS Capability pointer */
398 398 #endif
399 399 unsigned int pasid_no_tlp:1; /* PASID works without TLP Prefix */
400 400 unsigned int eetlp_prefix_path:1; /* End-to-End TLP Prefix */
···
464 464 unsigned int no_vf_scan:1; /* Don't scan for VFs after IOV enablement */
465 465 unsigned int no_command_memory:1; /* No PCI_COMMAND_MEMORY */
466 466 unsigned int rom_bar_overlap:1; /* ROM BAR disable broken */
467 + unsigned int rom_attr_enabled:1; /* Display of ROM attribute enabled? */
467 468 pci_dev_flags_t dev_flags;
468 469 atomic_t enable_cnt; /* pci_enable_device has been called */
469 470
471 + spinlock_t pcie_cap_lock; /* Protects RMW ops in capability accessors */
470 472 u32 saved_config_space[16]; /* Config space saved at suspend time */
471 473 struct hlist_head saved_cap_space;
472 - int rom_attr_enabled; /* Display of ROM attribute enabled? */
473 474 struct bin_attribute *res_attr[DEVICE_COUNT_RESOURCE]; /* sysfs file for resources */
474 475 struct bin_attribute *res_attr_wc[DEVICE_COUNT_RESOURCE]; /* sysfs file for WC mapping of resources */
475 476
···
1218 1217 int pcie_capability_read_dword(struct pci_dev *dev, int pos, u32 *val);
1219 1218 int pcie_capability_write_word(struct pci_dev *dev, int pos, u16 val);
1220 1219 int pcie_capability_write_dword(struct pci_dev *dev, int pos, u32 val);
1221 - int pcie_capability_clear_and_set_word(struct pci_dev *dev, int pos,
1222 - u16 clear, u16 set);
1220 + int pcie_capability_clear_and_set_word_unlocked(struct pci_dev *dev, int pos,
1221 + u16 clear, u16 set);
1222 + int pcie_capability_clear_and_set_word_locked(struct pci_dev *dev, int pos,
1223 + u16 clear, u16 set);
1223 1224 int pcie_capability_clear_and_set_dword(struct pci_dev *dev, int pos,
1224 1225 u32 clear, u32 set);
1226 +
1227 + /**
1228 + * pcie_capability_clear_and_set_word - RMW accessor for PCI Express Capability Registers
1229 + * @dev: PCI device structure of the PCI Express device
1230 + * @pos: PCI Express Capability Register
1231 + * @clear: Clear bitmask
1232 + * @set: Set bitmask
1233 + *
1234 + * Perform a Read-Modify-Write (RMW) operation using @clear and @set
1235 + * bitmasks on PCI Express Capability Register at @pos. Certain PCI Express
1236 + * Capability Registers are accessed concurrently in RMW fashion, hence
1237 + * require locking which is handled transparently to the caller.
1238 + */
1239 + static inline int pcie_capability_clear_and_set_word(struct pci_dev *dev,
1240 + int pos,
1241 + u16 clear, u16 set)
1242 + {
1243 + switch (pos) {
1244 + case PCI_EXP_LNKCTL:
1245 + case PCI_EXP_RTCTL:
1246 + return pcie_capability_clear_and_set_word_locked(dev, pos,
1247 + clear, set);
1248 + default:
1249 + return pcie_capability_clear_and_set_word_unlocked(dev, pos,
1250 + clear, set);
1251 + }
1252 + }
1225 1253
1226 1254 static inline int pcie_capability_set_word(struct pci_dev *dev, int pos,
1227 1255 u16 set)
···
1433 1403 void pci_assign_unassigned_bus_resources(struct pci_bus *bus);
1434 1404 void pci_assign_unassigned_root_bus_resources(struct pci_bus *bus);
1435 1405 int pci_reassign_bridge_resources(struct pci_dev *bridge, unsigned long type);
1436 - void pdev_enable_device(struct pci_dev *);
1437 1406 int pci_enable_resources(struct pci_dev *, int mask);
1438 1407 void pci_assign_irq(struct pci_dev *dev);
1439 1408 struct resource *pci_find_resource(struct pci_dev *dev, struct resource *res);
···
2288 2259 int pcibios_alloc_irq(struct pci_dev *dev);
2289 2260 void pcibios_free_irq(struct pci_dev *dev);
2290 2261 resource_size_t pcibios_default_alignment(void);
2262 +
2263 + #if !defined(HAVE_PCI_MMAP) && !defined(ARCH_GENERIC_PCI_MMAP_RESOURCE)
2264 + extern int pci_create_resource_files(struct pci_dev *dev);
2265 + extern void pci_remove_resource_files(struct pci_dev *dev);
2266 + #endif
2291 2267
2292 2268 #if defined(CONFIG_PCI_MMCONFIG) || defined(CONFIG_ACPI_MCFG)
2293 2269 void __init pci_mmcfg_early_init(void);
+1
include/linux/switchtec.h
···
41 41 enum switchtec_gen {
42 42 SWITCHTEC_GEN3,
43 43 SWITCHTEC_GEN4,
44 + SWITCHTEC_GEN5,
44 45 };
45 46
46 47 struct mrpc_regs {
+4 -23
include/linux/vgaarb.h
···
1 + /* SPDX-License-Identifier: MIT */
2 +
1 3 /*
2 4 * The VGA aribiter manages VGA space routing and VGA resource decode to
3 5 * allow multiple VGA devices to be used in a system in a safe way.
···
7 5 * (C) Copyright 2005 Benjamin Herrenschmidt <benh@kernel.crashing.org>
8 6 * (C) Copyright 2007 Paulo R. Zanoni <przanoni@gmail.com>
9 7 * (C) Copyright 2007, 2009 Tiago Vignatti <vignatti@freedesktop.org>
10 - *
11 - * Permission is hereby granted, free of charge, to any person obtaining a
12 - * copy of this software and associated documentation files (the "Software"),
13 - * to deal in the Software without restriction, including without limitation
14 - * the rights to use, copy, modify, merge, publish, distribute, sublicense,
15 - * and/or sell copies of the Software, and to permit persons to whom the
16 - * Software is furnished to do so, subject to the following conditions:
17 - *
18 - * The above copyright notice and this permission notice (including the next
19 - * paragraph) shall be included in all copies or substantial portions of the
20 - * Software.
21 - *
22 - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
23 - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
24 - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
25 - * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
26 - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
27 - * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
28 - * DEALINGS
29 - * IN THE SOFTWARE.
30 - *
31 8 */
32 9
33 10 #ifndef LINUX_VGA_H
···
77 96 static inline int vga_get_interruptible(struct pci_dev *pdev,
78 97 unsigned int rsrc)
79 98 {
80 - return vga_get(pdev, rsrc, 1);
99 + return vga_get(pdev, rsrc, 1);
81 100 }
82 101
···
92 111 static inline int vga_get_uninterruptible(struct pci_dev *pdev,
93 112 unsigned int rsrc)
94 113 {
95 - return vga_get(pdev, rsrc, 0);
114 + return vga_get(pdev, rsrc, 0);
96 115 }
97 116
98 117 static inline void vga_client_unregister(struct pci_dev *pdev)