
Merge tag 'pci-v6.10-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:

- Skip E820 checks for MCFG ECAM regions for new (2016+) machines,
since there's no requirement to describe them in E820 and some
platforms require ECAM to work (Bjorn Helgaas)

- Rename PCI_IRQ_LEGACY to PCI_IRQ_INTX to be more specific (Damien
Le Moal)

- Remove the last user of pci_enable_device_io(), then the interface
itself (Heiner Kallweit)

- Wait for Link Training==0 to avoid possible race (Ilpo Järvinen)

- Skip waiting for devices that have been disconnected while
suspended (Ilpo Järvinen)

- Clear Secondary Status errors after enumeration since Master Aborts
and Unsupported Request errors are an expected part of enumeration
(Vidya Sagar)

MSI:

- Remove unused IMS (Interrupt Message Store) support (Bjorn Helgaas)

Error handling:

- Mask Genesys GL975x SD host controller Replay Timer Timeout
correctable errors caused by a hardware defect; the errors cause
interrupts that prevent system suspend (Kai-Heng Feng)

- Fix EDR-related _DSM support, which previously evaluated revision 5
but assumed revision 6 behavior (Kuppuswamy Sathyanarayanan)

ASPM:

- Simplify link state definitions and mask calculation (Ilpo
Järvinen)

Power management:

- Avoid D3cold for HP Pavilion 17 PC/1972 PCIe Ports, where BIOS
apparently doesn't know how to put them back in D0 (Mario
Limonciello)

CXL:

- Support resetting CXL devices; special handling required because
CXL Ports mask Secondary Bus Reset by default (Dave Jiang)

DOE:

- Support DOE Discovery Version 2 (Alexey Kardashevskiy)

Endpoint framework:

- Set endpoint BAR to be 64-bit if the driver says that's all the
device supports, in addition to doing so if the size is >2GB
(Niklas Cassel)

- Simplify endpoint BAR allocation and setting interfaces (Niklas
Cassel)

Cadence PCIe controller driver:

- Drop DT binding redundant msi-parent and pci-bus.yaml (Krzysztof
Kozlowski)

Cadence PCIe endpoint driver:

- Configure endpoint BARs to be 64-bit based on the BAR type, not the
BAR value (Niklas Cassel)

Freescale Layerscape PCIe controller driver:

- Convert DT binding to YAML (Frank Li)

MediaTek MT7621 PCIe controller driver:

- Add DT binding missing 'reg' property for child Root Ports
(Krzysztof Kozlowski)

- Fix theoretical string truncation in PHY name (Sergio Paracuellos)

NVIDIA Tegra194 PCIe controller driver:

- Return success for endpoint probe instead of falling through to the
failure path (Vidya Sagar)

Renesas R-Car PCIe controller driver:

- Add DT binding missing IOMMU properties (Geert Uytterhoeven)

- Add DT binding R-Car V4H compatible for host and endpoint mode
(Yoshihiro Shimoda)

Rockchip PCIe controller driver:

- Configure endpoint BARs to be 64-bit based on the BAR type, not the
BAR value (Niklas Cassel)

- Add DT binding missing maxItems to ep-gpios (Krzysztof Kozlowski)

- Set the Subsystem Vendor ID, which was previously zero because it
was masked incorrectly (Rick Wertenbroek)

Synopsys DesignWare PCIe controller driver:

- Restructure DBI register access to accommodate devices where this
requires Refclk to be active (Manivannan Sadhasivam)

- Remove the deinit() callback, which was only needed by the
pcie-rcar-gen4 driver, and do that work directly in the driver
(Manivannan Sadhasivam)

- Add dw_pcie_ep_cleanup() so drivers that support PERST# can clean
up things like eDMA (Manivannan Sadhasivam)

- Rename dw_pcie_ep_exit() to dw_pcie_ep_deinit() to make it parallel
to dw_pcie_ep_init() (Manivannan Sadhasivam)

- Rename dw_pcie_ep_init_complete() to dw_pcie_ep_init_registers() to
reflect the actual functionality (Manivannan Sadhasivam)

- Call dw_pcie_ep_init_registers() directly from all the glue
drivers, not just those that require active Refclk from the host
(Manivannan Sadhasivam)

- Remove the "core_init_notifier" flag, which was an obscure way for
glue drivers to indicate that they depend on Refclk from the host
(Manivannan Sadhasivam)

TI J721E PCIe driver:

- Add DT binding J784S4 SoC Device ID (Siddharth Vadapalli)

- Add DT binding J722S SoC support (Siddharth Vadapalli)

TI Keystone PCIe controller driver:

- Add DT binding missing num-viewport, phys and phy-name properties
(Jan Kiszka)

Miscellaneous:

- Constify and annotate with __ro_after_init (Heiner Kallweit)

- Convert DT bindings to YAML (Krzysztof Kozlowski)

- Check for kcalloc() failure in of_pci_prop_intr_map() (Duoming
Zhou)"

* tag 'pci-v6.10-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (97 commits)
PCI: Do not wait for disconnected devices when resuming
x86/pci: Skip early E820 check for ECAM region
PCI: Remove unused pci_enable_device_io()
ata: pata_cs5520: Remove unnecessary call to pci_enable_device_io()
PCI: Update pci_find_capability() stub return types
PCI: Remove PCI_IRQ_LEGACY
scsi: vmw_pvscsi: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
scsi: pmcraid: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
scsi: mpt3sas: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
scsi: megaraid_sas: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
scsi: ipr: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
scsi: hpsa: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
scsi: arcmsr: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
wifi: rtw89: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
dt-bindings: PCI: rockchip,rk3399-pcie: Add missing maxItems to ep-gpios
Revert "genirq/msi: Provide constants for PCI/IMS support"
Revert "x86/apic/msi: Enable PCI/IMS"
Revert "iommu/vt-d: Enable PCI/IMS"
Revert "iommu/amd: Enable PCI/IMS"
Revert "PCI/MSI: Provide IMS (Interrupt Message Store) support"
...

+1232 -761
+1 -1
Documentation/PCI/msi-howto.rst
@@ -103,7 +103,7 @@
 if it can't meet the minimum number of vectors.
 
 The flags argument is used to specify which type of interrupt can be used
-by the device and the driver (PCI_IRQ_LEGACY, PCI_IRQ_MSI, PCI_IRQ_MSIX).
+by the device and the driver (PCI_IRQ_INTX, PCI_IRQ_MSI, PCI_IRQ_MSIX).
 A convenient short-hand (PCI_IRQ_ALL_TYPES) is also available to ask for
 any possible kind of interrupt. If the PCI_IRQ_AFFINITY flag is set,
 pci_alloc_irq_vectors() will spread the interrupts around the available CPUs.
+1 -1
Documentation/PCI/pci.rst
@@ -335,7 +335,7 @@
 capability registers. Many architectures, chip-sets, or BIOSes do NOT
 support MSI or MSI-X and a call to pci_alloc_irq_vectors with just
 the PCI_IRQ_MSI and PCI_IRQ_MSIX flags will fail, so try to always
-specify PCI_IRQ_LEGACY as well.
+specify PCI_IRQ_INTX as well.
 
 Drivers that have different interrupt handlers for MSI/MSI-X and
 legacy INTx should chose the right one based on the msi_enabled
+1 -1
Documentation/PCI/pcieaer-howto.rst
@@ -241,7 +241,7 @@
 Then, you need a user space tool named aer-inject, which can be gotten
 from:
 
-    https://git.kernel.org/cgit/linux/kernel/git/gong.chen/aer-inject.git/
+    https://github.com/intel/aer-inject.git
 
 More information about aer-inject can be found in the document in
 its source code.
+1 -1
Documentation/devicetree/bindings/pci/amlogic,axg-pcie.yaml
@@ -13,7 +13,7 @@
   Amlogic Meson PCIe host controller is based on the Synopsys DesignWare PCI core.
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
   - $ref: /schemas/pci/snps,dw-pcie-common.yaml#
 
 # We need a select here so we don't match all nodes with 'snps,dw-pcie'
+1 -1
Documentation/devicetree/bindings/pci/apple,pcie.yaml
@@ -85,7 +85,7 @@
 unevaluatedProperties: false
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
   - $ref: /schemas/interrupt-controller/msi-controller.yaml#
   - if:
       properties:
+1 -1
Documentation/devicetree/bindings/pci/brcm,iproc-pcie.yaml
@@ -11,7 +11,7 @@
   - Scott Branden <scott.branden@broadcom.com>
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
 
 properties:
   compatible:
+1 -1
Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
@@ -108,7 +108,7 @@
   - msi-controller
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
   - $ref: /schemas/interrupt-controller/msi-controller.yaml#
   - if:
       properties:
-3
Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
@@ -10,7 +10,6 @@
   - Tom Joseph <tjoseph@cadence.com>
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
   - $ref: cdns-pcie-host.yaml#
 
 properties:
@@ -23,8 +24,6 @@
       items:
         - const: reg
         - const: cfg
-
-  msi-parent: true
 
 required:
   - reg
+1 -1
Documentation/devicetree/bindings/pci/cdns-pcie-host.yaml
@@ -10,7 +10,7 @@
   - Tom Joseph <tjoseph@cadence.com>
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
   - $ref: cdns-pcie.yaml#
 
 properties:
+1 -1
Documentation/devicetree/bindings/pci/faraday,ftpci100.yaml
@@ -51,7 +51,7 @@
              <0x6000 0 0 4 &pci_intc 2>;
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
 
 properties:
   compatible:
+102
Documentation/devicetree/bindings/pci/fsl,layerscape-pcie-ep.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/fsl,layerscape-pcie-ep.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Freescale Layerscape PCIe Endpoint(EP) controller 8 + 9 + maintainers: 10 + - Frank Li <Frank.Li@nxp.com> 11 + 12 + description: 13 + This PCIe EP controller is based on the Synopsys DesignWare PCIe IP. 14 + 15 + This controller derives its clocks from the Reset Configuration Word (RCW) 16 + which is used to describe the PLL settings at the time of chip-reset. 17 + 18 + Also as per the available Reference Manuals, there is no specific 'version' 19 + register available in the Freescale PCIe controller register set, 20 + which can allow determining the underlying DesignWare PCIe controller version 21 + information. 22 + 23 + properties: 24 + compatible: 25 + enum: 26 + - fsl,ls2088a-pcie-ep 27 + - fsl,ls1088a-pcie-ep 28 + - fsl,ls1046a-pcie-ep 29 + - fsl,ls1028a-pcie-ep 30 + - fsl,lx2160ar2-pcie-ep 31 + 32 + reg: 33 + maxItems: 2 34 + 35 + reg-names: 36 + items: 37 + - const: regs 38 + - const: addr_space 39 + 40 + fsl,pcie-scfg: 41 + $ref: /schemas/types.yaml#/definitions/phandle 42 + description: A phandle to the SCFG device node. The second entry is the 43 + physical PCIe controller index starting from '0'. This is used to get 44 + SCFG PEXN registers. 45 + 46 + big-endian: 47 + $ref: /schemas/types.yaml#/definitions/flag 48 + description: If the PEX_LUT and PF register block is in big-endian, specify 49 + this property. 
50 + 51 + dma-coherent: true 52 + 53 + interrupts: 54 + minItems: 1 55 + maxItems: 2 56 + 57 + interrupt-names: 58 + minItems: 1 59 + maxItems: 2 60 + 61 + required: 62 + - compatible 63 + - reg 64 + - reg-names 65 + 66 + allOf: 67 + - if: 68 + properties: 69 + compatible: 70 + enum: 71 + - fsl,ls1028a-pcie-ep 72 + - fsl,ls1046a-pcie-ep 73 + - fsl,ls1088a-pcie-ep 74 + then: 75 + properties: 76 + interrupt-names: 77 + items: 78 + - const: pme 79 + 80 + unevaluatedProperties: false 81 + 82 + examples: 83 + - | 84 + #include <dt-bindings/interrupt-controller/arm-gic.h> 85 + 86 + soc { 87 + #address-cells = <2>; 88 + #size-cells = <2>; 89 + 90 + pcie_ep1: pcie-ep@3400000 { 91 + compatible = "fsl,ls1028a-pcie-ep"; 92 + reg = <0x00 0x03400000 0x0 0x00100000 93 + 0x80 0x00000000 0x8 0x00000000>; 94 + reg-names = "regs", "addr_space"; 95 + interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>; /* PME interrupt */ 96 + interrupt-names = "pme"; 97 + num-ib-windows = <6>; 98 + num-ob-windows = <8>; 99 + status = "disabled"; 100 + }; 101 + }; 102 + ...
+167
Documentation/devicetree/bindings/pci/fsl,layerscape-pcie.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/fsl,layerscape-pcie.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Freescale Layerscape PCIe Root Complex(RC) controller 8 + 9 + maintainers: 10 + - Frank Li <Frank.Li@nxp.com> 11 + 12 + description: 13 + This PCIe RC controller is based on the Synopsys DesignWare PCIe IP 14 + 15 + This controller derives its clocks from the Reset Configuration Word (RCW) 16 + which is used to describe the PLL settings at the time of chip-reset. 17 + 18 + Also as per the available Reference Manuals, there is no specific 'version' 19 + register available in the Freescale PCIe controller register set, 20 + which can allow determining the underlying DesignWare PCIe controller version 21 + information. 22 + 23 + properties: 24 + compatible: 25 + enum: 26 + - fsl,ls1021a-pcie 27 + - fsl,ls2080a-pcie 28 + - fsl,ls2085a-pcie 29 + - fsl,ls2088a-pcie 30 + - fsl,ls1088a-pcie 31 + - fsl,ls1046a-pcie 32 + - fsl,ls1043a-pcie 33 + - fsl,ls1012a-pcie 34 + - fsl,ls1028a-pcie 35 + - fsl,lx2160a-pcie 36 + 37 + reg: 38 + maxItems: 2 39 + 40 + reg-names: 41 + items: 42 + - const: regs 43 + - const: config 44 + 45 + fsl,pcie-scfg: 46 + $ref: /schemas/types.yaml#/definitions/phandle 47 + description: A phandle to the SCFG device node. The second entry is the 48 + physical PCIe controller index starting from '0'. This is used to get 49 + SCFG PEXN registers. 50 + 51 + big-endian: 52 + $ref: /schemas/types.yaml#/definitions/flag 53 + description: If the PEX_LUT and PF register block is in big-endian, specify 54 + this property. 
55 + 56 + dma-coherent: true 57 + 58 + msi-parent: true 59 + 60 + iommu-map: true 61 + 62 + interrupts: 63 + minItems: 1 64 + maxItems: 2 65 + 66 + interrupt-names: 67 + minItems: 1 68 + maxItems: 2 69 + 70 + required: 71 + - compatible 72 + - reg 73 + - reg-names 74 + - "#address-cells" 75 + - "#size-cells" 76 + - device_type 77 + - bus-range 78 + - ranges 79 + - interrupts 80 + - interrupt-names 81 + - "#interrupt-cells" 82 + - interrupt-map-mask 83 + - interrupt-map 84 + 85 + allOf: 86 + - $ref: /schemas/pci/pci-bus.yaml# 87 + 88 + - if: 89 + properties: 90 + compatible: 91 + enum: 92 + - fsl,ls1028a-pcie 93 + - fsl,ls1046a-pcie 94 + - fsl,ls1043a-pcie 95 + - fsl,ls1012a-pcie 96 + then: 97 + properties: 98 + interrupts: 99 + maxItems: 2 100 + interrupt-names: 101 + items: 102 + - const: pme 103 + - const: aer 104 + 105 + - if: 106 + properties: 107 + compatible: 108 + enum: 109 + - fsl,ls2080a-pcie 110 + - fsl,ls2085a-pcie 111 + - fsl,ls2088a-pcie 112 + then: 113 + properties: 114 + interrupts: 115 + maxItems: 1 116 + interrupt-names: 117 + items: 118 + - const: intr 119 + 120 + - if: 121 + properties: 122 + compatible: 123 + enum: 124 + - fsl,ls1088a-pcie 125 + then: 126 + properties: 127 + interrupts: 128 + maxItems: 1 129 + interrupt-names: 130 + items: 131 + - const: aer 132 + 133 + unevaluatedProperties: false 134 + 135 + examples: 136 + - | 137 + #include <dt-bindings/interrupt-controller/arm-gic.h> 138 + 139 + soc { 140 + #address-cells = <2>; 141 + #size-cells = <2>; 142 + 143 + pcie@3400000 { 144 + compatible = "fsl,ls1088a-pcie"; 145 + reg = <0x00 0x03400000 0x0 0x00100000>, /* controller registers */ 146 + <0x20 0x00000000 0x0 0x00002000>; /* configuration space */ 147 + reg-names = "regs", "config"; 148 + interrupts = <0 108 IRQ_TYPE_LEVEL_HIGH>; /* aer interrupt */ 149 + interrupt-names = "aer"; 150 + #address-cells = <3>; 151 + #size-cells = <2>; 152 + dma-coherent; 153 + device_type = "pci"; 154 + bus-range = <0x0 0xff>; 155 + ranges = <0x81000000 
0x0 0x00000000 0x20 0x00010000 0x0 0x00010000 /* downstream I/O */ 156 + 0x82000000 0x0 0x40000000 0x20 0x40000000 0x0 0x40000000>; /* non-prefetchable memory */ 157 + msi-parent = <&its>; 158 + #interrupt-cells = <1>; 159 + interrupt-map-mask = <0 0 0 7>; 160 + interrupt-map = <0000 0 0 1 &gic 0 0 0 109 IRQ_TYPE_LEVEL_HIGH>, 161 + <0000 0 0 2 &gic 0 0 0 110 IRQ_TYPE_LEVEL_HIGH>, 162 + <0000 0 0 3 &gic 0 0 0 111 IRQ_TYPE_LEVEL_HIGH>, 163 + <0000 0 0 4 &gic 0 0 0 112 IRQ_TYPE_LEVEL_HIGH>; 164 + iommu-map = <0 &smmu 0 1>; /* Fixed-up by bootloader */ 165 + }; 166 + }; 167 + ...
+1 -1
Documentation/devicetree/bindings/pci/host-generic-pci.yaml
@@ -116,7 +116,7 @@
   - ranges
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
   - if:
       properties:
         compatible:
+1 -1
Documentation/devicetree/bindings/pci/intel,ixp4xx-pci.yaml
@@ -12,7 +12,7 @@
 description: PCI host controller found in the Intel IXP4xx SoC series.
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
 
 properties:
   compatible:
+1 -1
Documentation/devicetree/bindings/pci/intel,keembay-pcie.yaml
@@ -11,7 +11,7 @@
   - Srikanth Thokala <srikanth.thokala@intel.com>
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
 
 properties:
   compatible:
-79
Documentation/devicetree/bindings/pci/layerscape-pci.txt
··· 1 - Freescale Layerscape PCIe controller 2 - 3 - This PCIe host controller is based on the Synopsys DesignWare PCIe IP 4 - and thus inherits all the common properties defined in snps,dw-pcie.yaml. 5 - 6 - This controller derives its clocks from the Reset Configuration Word (RCW) 7 - which is used to describe the PLL settings at the time of chip-reset. 8 - 9 - Also as per the available Reference Manuals, there is no specific 'version' 10 - register available in the Freescale PCIe controller register set, 11 - which can allow determining the underlying DesignWare PCIe controller version 12 - information. 13 - 14 - Required properties: 15 - - compatible: should contain the platform identifier such as: 16 - RC mode: 17 - "fsl,ls1021a-pcie" 18 - "fsl,ls2080a-pcie", "fsl,ls2085a-pcie" 19 - "fsl,ls2088a-pcie" 20 - "fsl,ls1088a-pcie" 21 - "fsl,ls1046a-pcie" 22 - "fsl,ls1043a-pcie" 23 - "fsl,ls1012a-pcie" 24 - "fsl,ls1028a-pcie" 25 - EP mode: 26 - "fsl,ls1028a-pcie-ep", "fsl,ls-pcie-ep" 27 - "fsl,ls1046a-pcie-ep", "fsl,ls-pcie-ep" 28 - "fsl,ls1088a-pcie-ep", "fsl,ls-pcie-ep" 29 - "fsl,ls2088a-pcie-ep", "fsl,ls-pcie-ep" 30 - "fsl,lx2160ar2-pcie-ep", "fsl,ls-pcie-ep" 31 - - reg: base addresses and lengths of the PCIe controller register blocks. 32 - - interrupts: A list of interrupt outputs of the controller. Must contain an 33 - entry for each entry in the interrupt-names property. 34 - - interrupt-names: It could include the following entries: 35 - "aer": Used for interrupt line which reports AER events when 36 - non MSI/MSI-X/INTx mode is used 37 - "pme": Used for interrupt line which reports PME events when 38 - non MSI/MSI-X/INTx mode is used 39 - "intr": Used for SoCs(like ls2080a, lx2160a, ls2080a, ls2088a, ls1088a) 40 - which has a single interrupt line for miscellaneous controller 41 - events(could include AER and PME events). 42 - - fsl,pcie-scfg: Must include two entries. 
43 - The first entry must be a link to the SCFG device node 44 - The second entry is the physical PCIe controller index starting from '0'. 45 - This is used to get SCFG PEXN registers 46 - - dma-coherent: Indicates that the hardware IP block can ensure the coherency 47 - of the data transferred from/to the IP block. This can avoid the software 48 - cache flush/invalid actions, and improve the performance significantly. 49 - 50 - Optional properties: 51 - - big-endian: If the PEX_LUT and PF register block is in big-endian, specify 52 - this property. 53 - 54 - Example: 55 - 56 - pcie@3400000 { 57 - compatible = "fsl,ls1088a-pcie"; 58 - reg = <0x00 0x03400000 0x0 0x00100000>, /* controller registers */ 59 - <0x20 0x00000000 0x0 0x00002000>; /* configuration space */ 60 - reg-names = "regs", "config"; 61 - interrupts = <0 108 IRQ_TYPE_LEVEL_HIGH>; /* aer interrupt */ 62 - interrupt-names = "aer"; 63 - #address-cells = <3>; 64 - #size-cells = <2>; 65 - device_type = "pci"; 66 - dma-coherent; 67 - num-viewport = <256>; 68 - bus-range = <0x0 0xff>; 69 - ranges = <0x81000000 0x0 0x00000000 0x20 0x00010000 0x0 0x00010000 /* downstream I/O */ 70 - 0x82000000 0x0 0x40000000 0x20 0x40000000 0x0 0x40000000>; /* non-prefetchable memory */ 71 - msi-parent = <&its>; 72 - #interrupt-cells = <1>; 73 - interrupt-map-mask = <0 0 0 7>; 74 - interrupt-map = <0000 0 0 1 &gic 0 0 0 109 IRQ_TYPE_LEVEL_HIGH>, 75 - <0000 0 0 2 &gic 0 0 0 110 IRQ_TYPE_LEVEL_HIGH>, 76 - <0000 0 0 3 &gic 0 0 0 111 IRQ_TYPE_LEVEL_HIGH>, 77 - <0000 0 0 4 &gic 0 0 0 112 IRQ_TYPE_LEVEL_HIGH>; 78 - iommu-map = <0 &smmu 0 1>; /* Fixed-up by bootloader */ 79 - };
+1 -1
Documentation/devicetree/bindings/pci/loongson.yaml
@@ -13,7 +13,7 @@
   PCI host controller found on Loongson PCHs and SoCs.
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
 
 properties:
   compatible:
+5 -2
Documentation/devicetree/bindings/pci/mediatek,mt7621-pcie.yaml
@@ -14,7 +14,7 @@
   with 3 Root Ports. Each Root Port supports a Gen1 1-lane Link
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
 
 properties:
   compatible:
@@ -33,9 +33,12 @@
 patternProperties:
   '^pcie@[0-2],0$':
     type: object
-    $ref: /schemas/pci/pci-bus.yaml#
+    $ref: /schemas/pci/pci-pci-bridge.yaml#
 
     properties:
+      reg:
+        maxItems: 1
+
       resets:
         maxItems: 1
+1 -1
Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml
@@ -140,7 +140,7 @@
   - interrupt-controller
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
   - if:
       properties:
         compatible:
+1 -1
Documentation/devicetree/bindings/pci/microchip,pcie-host.yaml
@@ -10,7 +10,7 @@
   - Daire McNamara <daire.mcnamara@microchip.com>
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
   - $ref: /schemas/interrupt-controller/msi-controller.yaml#
 
 properties:
+1 -1
Documentation/devicetree/bindings/pci/qcom,pcie-common.yaml
@@ -95,6 +95,6 @@
   - msi-map
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
 
 additionalProperties: true
+1 -1
Documentation/devicetree/bindings/pci/qcom,pcie.yaml
@@ -130,7 +130,7 @@
   - msi-map
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
   - if:
       properties:
         compatible:
+3 -1
Documentation/devicetree/bindings/pci/rcar-gen4-pci-ep.yaml
@@ -16,7 +16,9 @@
 properties:
   compatible:
     items:
-      - const: renesas,r8a779f0-pcie-ep # R-Car S4-8
+      - enum:
+          - renesas,r8a779f0-pcie-ep # R-Car S4-8
+          - renesas,r8a779g0-pcie-ep # R-Car V4H
       - const: renesas,rcar-gen4-pcie-ep # R-Car Gen4
 
   reg:
+3 -1
Documentation/devicetree/bindings/pci/rcar-gen4-pci-host.yaml
@@ -16,7 +16,9 @@
 properties:
   compatible:
     items:
-      - const: renesas,r8a779f0-pcie # R-Car S4-8
+      - enum:
+          - renesas,r8a779f0-pcie # R-Car S4-8
+          - renesas,r8a779g0-pcie # R-Car V4H
       - const: renesas,rcar-gen4-pcie # R-Car Gen4
 
   reg:
+4 -1
Documentation/devicetree/bindings/pci/rcar-pci-host.yaml
@@ -12,7 +12,7 @@
   - Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
 
 allOf:
-  - $ref: pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
 
 properties:
   compatible:
@@ -76,6 +76,9 @@
 
   vpcie12v-supply:
     description: The 12v regulator to use for PCIe.
+
+  iommu-map: true
+  iommu-map-mask: true
 
 required:
   - compatible
+1 -1
Documentation/devicetree/bindings/pci/renesas,pci-rcar-gen2.yaml
@@ -110,7 +110,7 @@
   - "#interrupt-cells"
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
 
   - if:
       properties:
+2 -1
Documentation/devicetree/bindings/pci/rockchip,rk3399-pcie.yaml
@@ -10,7 +10,7 @@
   - Shawn Lin <shawn.lin@rock-chips.com>
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
   - $ref: rockchip,rk3399-pcie-common.yaml#
 
 properties:
@@ -37,6 +37,7 @@
     description: This property is needed if using 24MHz OSC for RC's PHY.
 
   ep-gpios:
+    maxItems: 1
     description: pre-reset GPIO
 
   vpcie12v-supply:
+1 -1
Documentation/devicetree/bindings/pci/snps,dw-pcie.yaml
@@ -23,7 +23,7 @@
   - compatible
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
   - $ref: /schemas/pci/snps,dw-pcie-common.yaml#
   - if:
       not:
+21 -1
Documentation/devicetree/bindings/pci/ti,am65-pci-host.yaml
@@ -11,7 +11,7 @@
   - Kishon Vijay Abraham I <kishon@ti.com>
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
 
 properties:
   compatible:
@@ -55,6 +55,20 @@
 
   dma-coherent: true
 
+  num-viewport:
+    $ref: /schemas/types.yaml#/definitions/uint32
+
+  phys:
+    description: per-lane PHYs
+    minItems: 1
+    maxItems: 2
+
+  phy-names:
+    minItems: 1
+    maxItems: 2
+    items:
+      pattern: '^pcie-phy[0-1]$'
+
 required:
   - compatible
   - reg
@@ -74,6 +88,7 @@
   - dma-coherent
   - power-domains
   - msi-map
+  - num-viewport
 
 unevaluatedProperties: false
 
@@ -81,6 +96,7 @@
   - |
     #include <dt-bindings/interrupt-controller/arm-gic.h>
     #include <dt-bindings/interrupt-controller/irq.h>
+    #include <dt-bindings/phy/phy.h>
     #include <dt-bindings/soc/ti,sci_pm_domain.h>
 
     pcie0_rc: pcie@5500000 {
@@ -98,12 +114,17 @@
         ti,syscon-pcie-id = <&scm_conf 0x0210>;
         ti,syscon-pcie-mode = <&scm_conf 0x4060>;
         bus-range = <0x0 0xff>;
+        num-viewport = <16>;
         max-link-speed = <2>;
         dma-coherent;
         interrupts = <GIC_SPI 340 IRQ_TYPE_EDGE_RISING>;
         msi-map = <0x0 &gic_its 0x0 0x10000>;
         device_type = "pci";
+        num-lanes = <1>;
+        phys = <&serdes0 PHY_TYPE_PCIE 0>;
+        phy-names = "pcie-phy0";
     };
+5
Documentation/devicetree/bindings/pci/ti,j721e-pci-host.yaml
@@ -23,6 +23,10 @@
       items:
         - const: ti,j7200-pcie-host
         - const: ti,j721e-pcie-host
+    - description: PCIe controller in J722S
+      items:
+        - const: ti,j722s-pcie-host
+        - const: ti,j721e-pcie-host
 
   reg:
     maxItems: 4
@@ -68,6 +72,7 @@
           - 0xb00d
           - 0xb00f
           - 0xb010
+          - 0xb012
           - 0xb013
 
   msi-map: true
+1 -1
Documentation/devicetree/bindings/pci/versatile.yaml
@@ -13,7 +13,7 @@
   PCI host controller found on the ARM Versatile PB board's FPGA.
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
 
 properties:
   compatible:
+1 -1
Documentation/devicetree/bindings/pci/xilinx-versal-cpm.yaml
@@ -10,7 +10,7 @@
   - Bharat Kumar Gogada <bharat.kumar.gogada@amd.com>
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
 
 properties:
   compatible:
+1 -1
Documentation/devicetree/bindings/pci/xlnx,axi-pcie-host.yaml
@@ -10,7 +10,7 @@
   - Thippeswamy Havalige <thippeswamy.havalige@amd.com>
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
 
 properties:
   compatible:
+1 -1
Documentation/devicetree/bindings/pci/xlnx,nwl-pcie.yaml
@@ -10,7 +10,7 @@
   - Thippeswamy Havalige <thippeswamy.havalige@amd.com>
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
   - $ref: /schemas/interrupt-controller/msi-controller.yaml#
 
 properties:
+1 -1
Documentation/devicetree/bindings/pci/xlnx,xdma-host.yaml
@@ -10,7 +10,7 @@
   - Thippeswamy Havalige <thippeswamy.havalige@amd.com>
 
 allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
 
 properties:
   compatible:
+1 -1
Documentation/translations/zh_CN/PCI/msi-howto.rst
@@ -88,7 +88,7 @@
 如果设备对最小数量的向量有要求,驱动程序可以传递一个min_vecs参数,设置为这个限制,
 如果PCI核不能满足最小数量的向量,将返回-ENOSPC。
 
-flags参数用来指定设备和驱动程序可以使用哪种类型的中断(PCI_IRQ_LEGACY, PCI_IRQ_MSI,
+flags参数用来指定设备和驱动程序可以使用哪种类型的中断(PCI_IRQ_INTX, PCI_IRQ_MSI,
 PCI_IRQ_MSIX)。一个方便的短语(PCI_IRQ_ALL_TYPES)也可以用来要求任何可能的中断类型。
 如果PCI_IRQ_AFFINITY标志被设置,pci_alloc_irq_vectors()将把中断分散到可用的CPU上。
 
+1 -1
Documentation/translations/zh_CN/PCI/pci.rst
@@ -304,7 +304,7 @@
 的PCI_IRQ_MSI和/或PCI_IRQ_MSIX标志来启用MSI功能。这将导致PCI支持将CPU向量数
 据编程到PCI设备功能寄存器中。许多架构、芯片组或BIOS不支持MSI或MSI-X,调用
 ``pci_alloc_irq_vectors`` 时只使用PCI_IRQ_MSI和PCI_IRQ_MSIX标志会失败,
-所以尽量也要指定 ``PCI_IRQ_LEGACY`` 。
+所以尽量也要指定 ``PCI_IRQ_INTX`` 。
 
 对MSI/MSI-X和传统INTx有不同中断处理程序的驱动程序应该在调用
 ``pci_alloc_irq_vectors`` 后根据 ``pci_dev``结构体中的 ``msi_enabled``
-5
arch/x86/kernel/apic/msi.c
@@ -184,7 +184,6 @@
 		alloc->type = X86_IRQ_ALLOC_TYPE_PCI_MSI;
 		return 0;
 	case DOMAIN_BUS_PCI_DEVICE_MSIX:
-	case DOMAIN_BUS_PCI_DEVICE_IMS:
 		alloc->type = X86_IRQ_ALLOC_TYPE_PCI_MSIX;
 		return 0;
 	default:
@@ -227,10 +228,6 @@
 	switch(info->bus_token) {
 	case DOMAIN_BUS_PCI_DEVICE_MSI:
 	case DOMAIN_BUS_PCI_DEVICE_MSIX:
-		break;
-	case DOMAIN_BUS_PCI_DEVICE_IMS:
-		if (!(pops->supported_flags & MSI_FLAG_PCI_IMS))
-			return false;
 		break;
 	default:
 		WARN_ON_ONCE(1);
+29 -11
arch/x86/pci/mmconfig-shared.c
@@ -518,7 +518,34 @@
 {
 	struct resource *conflict;
 
-	if (!early && !acpi_disabled) {
+	if (early) {
+
+		/*
+		 * Don't try to do this check unless configuration type 1
+		 * is available. How about type 2?
+		 */
+
+		/*
+		 * 946f2ee5c731 ("Check that MCFG points to an e820
+		 * reserved area") added this E820 check in 2006 to work
+		 * around BIOS defects.
+		 *
+		 * Per PCI Firmware r3.3, sec 4.1.2, ECAM space must be
+		 * reserved by a PNP0C02 resource, but it need not be
+		 * mentioned in E820. Before the ACPI interpreter is
+		 * available, we can't check for PNP0C02 resources, so
+		 * there's no reliable way to verify the region in this
+		 * early check. Keep it only for the old machines that
+		 * motivated 946f2ee5c731.
+		 */
+		if (dmi_get_bios_year() < 2016 && raw_pci_ops)
+			return is_mmconf_reserved(e820__mapped_all, cfg, dev,
+						  "E820 entry");
+
+		return true;
+	}
+
+	if (!acpi_disabled) {
 		if (is_mmconf_reserved(is_acpi_reserved, cfg, dev,
 				       "ACPI motherboard resource"))
 			return true;
@@ -551,16 +578,7 @@
 	 * For MCFG information constructed from hotpluggable host bridge's
 	 * _CBA method, just assume it's reserved.
 	 */
-	if (pci_mmcfg_running_state)
-		return true;
-
-	/* Don't try to do this check unless configuration
-	   type 1 is available. how about type 2 ?*/
-	if (raw_pci_ops)
-		return is_mmconf_reserved(e820__mapped_all, cfg, dev,
-					  "E820 entry");
-
-	return false;
+	return pci_mmcfg_running_state;
 }
 
 static void __init pci_mmcfg_reject_broken(int early)
-3
arch/x86/pci/olpc.c
@@ -154,9 +154,6 @@
 	0x0, 0x40, 0x0, 0x40a,		/* CapPtr INT-D, IRQA */
 	0xc8020001, 0x0, 0x0, 0x0,	/* Capabilities - 40 is R/O, 44 is
 					   mask 8103 (power control) */
-#if 0
-	0x1, 0x40080000, 0x0, 0x0,	/* EECP - see EHCI spec section 2.1.7 */
-#endif
 	0x01000001, 0x0, 0x0, 0x0,	/* EECP - see EHCI spec section 2.1.7 */
 	0x2020, 0x0, 0x0, 0x0,		/* (EHCI page 8) 60 SBRN (R/O),
 					   61 FLADJ (R/W), PORTWAKECAP */
-6
drivers/ata/pata_cs5520.c
@@ -151,12 +151,6 @@
 	if (!host)
 		return -ENOMEM;
 
-	/* Perform set up for DMA */
-	if (pci_enable_device_io(pdev)) {
-		dev_err(&pdev->dev, "unable to configure BAR2.\n");
-		return -ENODEV;
-	}
-
 	if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
 		dev_err(&pdev->dev, "unable to configure DMA mask.\n");
 		return -ENODEV;
+32 -3
drivers/cxl/core/pci.c
··· 525 525 __le32 response[2]; 526 526 int rc; 527 527 528 - rc = pci_doe(doe_mb, PCI_DVSEC_VENDOR_ID_CXL, 528 + rc = pci_doe(doe_mb, PCI_VENDOR_ID_CXL, 529 529 CXL_DOE_PROTOCOL_TABLE_ACCESS, 530 530 &request, sizeof(request), 531 531 &response, sizeof(response)); ··· 555 555 __le32 request = CDAT_DOE_REQ(entry_handle); 556 556 int rc; 557 557 558 - rc = pci_doe(doe_mb, PCI_DVSEC_VENDOR_ID_CXL, 558 + rc = pci_doe(doe_mb, PCI_VENDOR_ID_CXL, 559 559 CXL_DOE_PROTOCOL_TABLE_ACCESS, 560 560 &request, sizeof(request), 561 561 rsp, sizeof(*rsp) + remaining); ··· 640 640 if (!pdev) 641 641 return; 642 642 643 - doe_mb = pci_find_doe_mailbox(pdev, PCI_DVSEC_VENDOR_ID_CXL, 643 + doe_mb = pci_find_doe_mailbox(pdev, PCI_VENDOR_ID_CXL, 644 644 CXL_DOE_PROTOCOL_TABLE_ACCESS); 645 645 if (!doe_mb) { 646 646 dev_dbg(dev, "No CDAT mailbox\n"); ··· 1045 1045 1046 1046 return cxl_flit_size(pdev) * MEGA / bw; 1047 1047 } 1048 + 1049 + static int __cxl_endpoint_decoder_reset_detected(struct device *dev, void *data) 1050 + { 1051 + struct cxl_port *port = data; 1052 + struct cxl_decoder *cxld; 1053 + struct cxl_hdm *cxlhdm; 1054 + void __iomem *hdm; 1055 + u32 ctrl; 1056 + 1057 + if (!is_endpoint_decoder(dev)) 1058 + return 0; 1059 + 1060 + cxld = to_cxl_decoder(dev); 1061 + if ((cxld->flags & CXL_DECODER_F_ENABLE) == 0) 1062 + return 0; 1063 + 1064 + cxlhdm = dev_get_drvdata(&port->dev); 1065 + hdm = cxlhdm->regs.hdm_decoder; 1066 + ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(cxld->id)); 1067 + 1068 + return !FIELD_GET(CXL_HDM_DECODER0_CTRL_COMMITTED, ctrl); 1069 + } 1070 + 1071 + bool cxl_endpoint_decoder_reset_detected(struct cxl_port *port) 1072 + { 1073 + return device_for_each_child(&port->dev, port, 1074 + __cxl_endpoint_decoder_reset_detected); 1075 + } 1076 + EXPORT_SYMBOL_NS_GPL(cxl_endpoint_decoder_reset_detected, CXL);
+1 -1
drivers/cxl/core/regs.c
··· 314 314 .resource = CXL_RESOURCE_NONE, 315 315 }; 316 316 317 - regloc = pci_find_dvsec_capability(pdev, PCI_DVSEC_VENDOR_ID_CXL, 317 + regloc = pci_find_dvsec_capability(pdev, PCI_VENDOR_ID_CXL, 318 318 CXL_DVSEC_REG_LOCATOR); 319 319 if (!regloc) 320 320 return -ENXIO;
+2
drivers/cxl/cxl.h
··· 898 898 struct access_coordinate *c1, 899 899 struct access_coordinate *c2); 900 900 901 + bool cxl_endpoint_decoder_reset_detected(struct cxl_port *port); 902 + 901 903 /* 902 904 * Unit test builds overrides this to __weak, find the 'strong' version 903 905 * of these symbols in tools/testing/cxl/.
-1
drivers/cxl/cxlpci.h
··· 13 13 * "DVSEC" redundancies removed. When obvious, abbreviations may be used. 14 14 */ 15 15 #define PCI_DVSEC_HEADER1_LENGTH_MASK GENMASK(31, 20) 16 - #define PCI_DVSEC_VENDOR_ID_CXL 0x1E98 17 16 18 17 /* CXL 2.0 8.1.3: PCIe DVSEC for CXL Device */ 19 18 #define CXL_DVSEC_PCIE_DEVICE 0
+23 -1
drivers/cxl/pci.c
··· 817 817 cxlds->rcd = is_cxl_restricted(pdev); 818 818 cxlds->serial = pci_get_dsn(pdev); 819 819 cxlds->cxl_dvsec = pci_find_dvsec_capability( 820 - pdev, PCI_DVSEC_VENDOR_ID_CXL, CXL_DVSEC_PCIE_DEVICE); 820 + pdev, PCI_VENDOR_ID_CXL, CXL_DVSEC_PCIE_DEVICE); 821 821 if (!cxlds->cxl_dvsec) 822 822 dev_warn(&pdev->dev, 823 823 "Device DVSEC not present, skip CXL.mem init\n"); ··· 957 957 dev->driver ? "successful" : "failed"); 958 958 } 959 959 960 + static void cxl_reset_done(struct pci_dev *pdev) 961 + { 962 + struct cxl_dev_state *cxlds = pci_get_drvdata(pdev); 963 + struct cxl_memdev *cxlmd = cxlds->cxlmd; 964 + struct device *dev = &pdev->dev; 965 + 966 + /* 967 + * FLR does not expect to touch the HDM decoders and related 968 + * registers. SBR, however, will wipe all device configurations. 969 + * Issue a warning if there was an active decoder before the reset 970 + * that no longer exists. 971 + */ 972 + guard(device)(&cxlmd->dev); 973 + if (cxlmd->endpoint && 974 + cxl_endpoint_decoder_reset_detected(cxlmd->endpoint)) { 975 + dev_crit(dev, "SBR happened without memory regions removal.\n"); 976 + dev_crit(dev, "System may be unstable if regions hosted system memory.\n"); 977 + add_taint(TAINT_USER, LOCKDEP_STILL_OK); 978 + } 979 + } 980 + 960 981 static const struct pci_error_handlers cxl_error_handlers = { 961 982 .error_detected = cxl_error_detected, 962 983 .slot_reset = cxl_slot_reset, 963 984 .resume = cxl_error_resume, 964 985 .cor_error_detected = cxl_cor_error_detected, 986 + .reset_done = cxl_reset_done, 965 987 }; 966 988 967 989 static struct pci_driver cxl_pci_driver = {
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
··· 279 279 adev->irq.msi_enabled = false; 280 280 281 281 if (!amdgpu_msi_ok(adev)) 282 - flags = PCI_IRQ_LEGACY; 282 + flags = PCI_IRQ_INTX; 283 283 else 284 284 flags = PCI_IRQ_ALL_TYPES; 285 285
+1 -1
drivers/infiniband/hw/qib/qib_iba7220.c
··· 3281 3281 3282 3282 qib_free_irq(dd); 3283 3283 dd->msi_lo = 0; 3284 - if (pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_LEGACY) < 0) 3284 + if (pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_INTX) < 0) 3285 3285 qib_dev_err(dd, "Failed to enable INTx\n"); 3286 3286 qib_setup_7220_interrupt(dd); 3287 3287 return 1;
+2 -3
drivers/infiniband/hw/qib/qib_iba7322.c
··· 3471 3471 pci_irq_vector(dd->pcidev, msixnum), 3472 3472 ret); 3473 3473 qib_7322_free_irq(dd); 3474 - pci_alloc_irq_vectors(dd->pcidev, 1, 1, 3475 - PCI_IRQ_LEGACY); 3474 + pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_INTX); 3476 3475 goto try_intx; 3477 3476 } 3478 3477 dd->cspec->msix_entries[msixnum].arg = arg; ··· 5142 5143 qib_devinfo(dd->pcidev, 5143 5144 "MSIx interrupt not detected, trying INTx interrupts\n"); 5144 5145 qib_7322_free_irq(dd); 5145 - if (pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_LEGACY) < 0) 5146 + if (pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_INTX) < 0) 5146 5147 qib_dev_err(dd, "Failed to enable INTx\n"); 5147 5148 qib_setup_7322_interrupt(dd, 0); 5148 5149 return 1;
+1 -1
drivers/infiniband/hw/qib/qib_pcie.c
··· 210 210 } 211 211 212 212 if (dd->flags & QIB_HAS_INTX) 213 - flags |= PCI_IRQ_LEGACY; 213 + flags |= PCI_IRQ_INTX; 214 214 maxvec = (nent && *nent) ? *nent : 1; 215 215 nvec = pci_alloc_irq_vectors(dd->pcidev, 1, maxvec, flags); 216 216 if (nvec < 0)
+1 -1
drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
··· 531 531 PCI_IRQ_MSIX); 532 532 if (ret < 0) { 533 533 ret = pci_alloc_irq_vectors(pdev, 1, 1, 534 - PCI_IRQ_MSI | PCI_IRQ_LEGACY); 534 + PCI_IRQ_MSI | PCI_IRQ_INTX); 535 535 if (ret < 0) 536 536 return ret; 537 537 }
+2 -15
drivers/iommu/amd/iommu.c
··· 3784 3784 }; 3785 3785 3786 3786 static const struct msi_parent_ops amdvi_msi_parent_ops = { 3787 - .supported_flags = X86_VECTOR_MSI_FLAGS_SUPPORTED | 3788 - MSI_FLAG_MULTI_PCI_MSI | 3789 - MSI_FLAG_PCI_IMS, 3787 + .supported_flags = X86_VECTOR_MSI_FLAGS_SUPPORTED | MSI_FLAG_MULTI_PCI_MSI, 3790 3788 .prefix = "IR-", 3791 - .init_dev_msi_info = msi_parent_init_dev_msi_info, 3792 - }; 3793 - 3794 - static const struct msi_parent_ops virt_amdvi_msi_parent_ops = { 3795 - .supported_flags = X86_VECTOR_MSI_FLAGS_SUPPORTED | 3796 - MSI_FLAG_MULTI_PCI_MSI, 3797 - .prefix = "vIR-", 3798 3789 .init_dev_msi_info = msi_parent_init_dev_msi_info, 3799 3790 }; 3800 3791 ··· 3806 3815 irq_domain_update_bus_token(iommu->ir_domain, DOMAIN_BUS_AMDVI); 3807 3816 iommu->ir_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT | 3808 3817 IRQ_DOMAIN_FLAG_ISOLATED_MSI; 3809 - 3810 - if (amd_iommu_np_cache) 3811 - iommu->ir_domain->msi_parent_ops = &virt_amdvi_msi_parent_ops; 3812 - else 3813 - iommu->ir_domain->msi_parent_ops = &amdvi_msi_parent_ops; 3818 + iommu->ir_domain->msi_parent_ops = &amdvi_msi_parent_ops; 3814 3819 3815 3820 return 0; 3816 3821 }
+3 -16
drivers/iommu/intel/irq_remapping.c
··· 85 85 86 86 static void iommu_disable_irq_remapping(struct intel_iommu *iommu); 87 87 static int __init parse_ioapics_under_ir(void); 88 - static const struct msi_parent_ops dmar_msi_parent_ops, virt_dmar_msi_parent_ops; 88 + static const struct msi_parent_ops dmar_msi_parent_ops; 89 89 90 90 static bool ir_pre_enabled(struct intel_iommu *iommu) 91 91 { ··· 570 570 irq_domain_update_bus_token(iommu->ir_domain, DOMAIN_BUS_DMAR); 571 571 iommu->ir_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT | 572 572 IRQ_DOMAIN_FLAG_ISOLATED_MSI; 573 - 574 - if (cap_caching_mode(iommu->cap)) 575 - iommu->ir_domain->msi_parent_ops = &virt_dmar_msi_parent_ops; 576 - else 577 - iommu->ir_domain->msi_parent_ops = &dmar_msi_parent_ops; 573 + iommu->ir_domain->msi_parent_ops = &dmar_msi_parent_ops; 578 574 579 575 ir_table->base = ir_table_base; 580 576 ir_table->bitmap = bitmap; ··· 1522 1526 }; 1523 1527 1524 1528 static const struct msi_parent_ops dmar_msi_parent_ops = { 1525 - .supported_flags = X86_VECTOR_MSI_FLAGS_SUPPORTED | 1526 - MSI_FLAG_MULTI_PCI_MSI | 1527 - MSI_FLAG_PCI_IMS, 1529 + .supported_flags = X86_VECTOR_MSI_FLAGS_SUPPORTED | MSI_FLAG_MULTI_PCI_MSI, 1528 1530 .prefix = "IR-", 1529 - .init_dev_msi_info = msi_parent_init_dev_msi_info, 1530 - }; 1531 - 1532 - static const struct msi_parent_ops virt_dmar_msi_parent_ops = { 1533 - .supported_flags = X86_VECTOR_MSI_FLAGS_SUPPORTED | 1534 - MSI_FLAG_MULTI_PCI_MSI, 1535 - .prefix = "vIR-", 1536 1531 .init_dev_msi_info = msi_parent_init_dev_msi_info, 1537 1532 }; 1538 1533
+1 -1
drivers/mfd/intel-lpss-pci.c
··· 54 54 if (ret) 55 55 return ret; 56 56 57 - ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_LEGACY); 57 + ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_INTX); 58 58 if (ret < 0) 59 59 return ret; 60 60
+1 -2
drivers/misc/vmw_vmci/vmci_guest.c
··· 787 787 error = pci_alloc_irq_vectors(pdev, num_irq_vectors, num_irq_vectors, 788 788 PCI_IRQ_MSIX); 789 789 if (error < 0) { 790 - error = pci_alloc_irq_vectors(pdev, 1, 1, 791 - PCI_IRQ_MSIX | PCI_IRQ_MSI | PCI_IRQ_LEGACY); 790 + error = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES); 792 791 if (error < 0) 793 792 goto err_unsubscribe_event; 794 793 } else {
+1 -1
drivers/net/ethernet/amd/xgbe/xgbe-pci.c
··· 170 170 goto out; 171 171 172 172 ret = pci_alloc_irq_vectors(pdata->pcidev, 1, 1, 173 - PCI_IRQ_LEGACY | PCI_IRQ_MSI); 173 + PCI_IRQ_INTX | PCI_IRQ_MSI); 174 174 if (ret < 0) { 175 175 dev_info(pdata->dev, "single IRQ enablement failed\n"); 176 176 return ret;
+1 -1
drivers/net/ethernet/aquantia/atlantic/aq_cfg.h
··· 17 17 18 18 #define AQ_CFG_IS_POLLING_DEF 0U 19 19 20 - #define AQ_CFG_FORCE_LEGACY_INT 0U 20 + #define AQ_CFG_FORCE_INTX 0U 21 21 22 22 #define AQ_CFG_INTERRUPT_MODERATION_OFF 0 23 23 #define AQ_CFG_INTERRUPT_MODERATION_ON 1
+1 -1
drivers/net/ethernet/aquantia/atlantic/aq_hw.h
··· 104 104 }; 105 105 106 106 #define AQ_HW_IRQ_INVALID 0U 107 - #define AQ_HW_IRQ_LEGACY 1U 107 + #define AQ_HW_IRQ_INTX 1U 108 108 #define AQ_HW_IRQ_MSI 2U 109 109 #define AQ_HW_IRQ_MSIX 3U 110 110
+1 -1
drivers/net/ethernet/aquantia/atlantic/aq_nic.c
··· 127 127 128 128 cfg->irq_type = aq_pci_func_get_irq_type(self); 129 129 130 - if ((cfg->irq_type == AQ_HW_IRQ_LEGACY) || 130 + if ((cfg->irq_type == AQ_HW_IRQ_INTX) || 131 131 (cfg->aq_hw_caps->vecs == 1U) || 132 132 (cfg->vecs == 1U)) { 133 133 cfg->is_rss = 0U;
+3 -6
drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
··· 200 200 if (self->pdev->msi_enabled) 201 201 return AQ_HW_IRQ_MSI; 202 202 203 - return AQ_HW_IRQ_LEGACY; 203 + return AQ_HW_IRQ_INTX; 204 204 } 205 205 206 206 static void aq_pci_free_irq_vectors(struct aq_nic_s *self) ··· 298 298 299 299 numvecs += AQ_HW_SERVICE_IRQS; 300 300 /*enable interrupts */ 301 - #if !AQ_CFG_FORCE_LEGACY_INT 302 - err = pci_alloc_irq_vectors(self->pdev, 1, numvecs, 303 - PCI_IRQ_MSIX | PCI_IRQ_MSI | 304 - PCI_IRQ_LEGACY); 305 - 301 + #if !AQ_CFG_FORCE_INTX 302 + err = pci_alloc_irq_vectors(self->pdev, 1, numvecs, PCI_IRQ_ALL_TYPES); 306 303 if (err < 0) 307 304 goto err_hwinit; 308 305 numvecs = err;
+1 -1
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
··· 352 352 { 353 353 static u32 aq_hw_atl_igcr_table_[4][2] = { 354 354 [AQ_HW_IRQ_INVALID] = { 0x20000000U, 0x20000000U }, 355 - [AQ_HW_IRQ_LEGACY] = { 0x20000080U, 0x20000080U }, 355 + [AQ_HW_IRQ_INTX] = { 0x20000080U, 0x20000080U }, 356 356 [AQ_HW_IRQ_MSI] = { 0x20000021U, 0x20000025U }, 357 357 [AQ_HW_IRQ_MSIX] = { 0x20000022U, 0x20000026U }, 358 358 };
+1 -1
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
··· 562 562 { 563 563 static u32 aq_hw_atl_igcr_table_[4][2] = { 564 564 [AQ_HW_IRQ_INVALID] = { 0x20000000U, 0x20000000U }, 565 - [AQ_HW_IRQ_LEGACY] = { 0x20000080U, 0x20000080U }, 565 + [AQ_HW_IRQ_INTX] = { 0x20000080U, 0x20000080U }, 566 566 [AQ_HW_IRQ_MSI] = { 0x20000021U, 0x20000025U }, 567 567 [AQ_HW_IRQ_MSIX] = { 0x20000022U, 0x20000026U }, 568 568 };
+1 -1
drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2.c
··· 534 534 { 535 535 static u32 aq_hw_atl2_igcr_table_[4][2] = { 536 536 [AQ_HW_IRQ_INVALID] = { 0x20000000U, 0x20000000U }, 537 - [AQ_HW_IRQ_LEGACY] = { 0x20000080U, 0x20000080U }, 537 + [AQ_HW_IRQ_INTX] = { 0x20000080U, 0x20000080U }, 538 538 [AQ_HW_IRQ_MSI] = { 0x20000021U, 0x20000025U }, 539 539 [AQ_HW_IRQ_MSIX] = { 0x20000022U, 0x20000026U }, 540 540 };
+1 -1
drivers/net/ethernet/atheros/alx/main.c
··· 901 901 int ret; 902 902 903 903 ret = pci_alloc_irq_vectors(alx->hw.pdev, 1, 1, 904 - PCI_IRQ_MSI | PCI_IRQ_LEGACY); 904 + PCI_IRQ_MSI | PCI_IRQ_INTX); 905 905 if (ret < 0) 906 906 return ret; 907 907
+1 -1
drivers/net/ethernet/realtek/r8169_main.c
··· 5106 5106 rtl_lock_config_regs(tp); 5107 5107 fallthrough; 5108 5108 case RTL_GIGA_MAC_VER_07 ... RTL_GIGA_MAC_VER_17: 5109 - flags = PCI_IRQ_LEGACY; 5109 + flags = PCI_IRQ_INTX; 5110 5110 break; 5111 5111 default: 5112 5112 flags = PCI_IRQ_ALL_TYPES;
+4 -4
drivers/net/ethernet/wangxun/libwx/wx_lib.c
··· 1674 1674 /* minmum one for queue, one for misc*/ 1675 1675 nvecs = 1; 1676 1676 nvecs = pci_alloc_irq_vectors(pdev, nvecs, 1677 - nvecs, PCI_IRQ_MSI | PCI_IRQ_LEGACY); 1677 + nvecs, PCI_IRQ_MSI | PCI_IRQ_INTX); 1678 1678 if (nvecs == 1) { 1679 1679 if (pdev->msi_enabled) 1680 1680 wx_err(wx, "Fallback to MSI.\n"); 1681 1681 else 1682 - wx_err(wx, "Fallback to LEGACY.\n"); 1682 + wx_err(wx, "Fallback to INTx.\n"); 1683 1683 } else { 1684 - wx_err(wx, "Failed to allocate MSI/LEGACY interrupts. Error: %d\n", nvecs); 1684 + wx_err(wx, "Failed to allocate MSI/INTx interrupts. Error: %d\n", nvecs); 1685 1685 return nvecs; 1686 1686 } 1687 1687 ··· 2127 2127 * wx_configure_vectors - Configure vectors for hardware 2128 2128 * @wx: board private structure 2129 2129 * 2130 - * wx_configure_vectors sets up the hardware to properly generate MSI-X/MSI/LEGACY 2130 + * wx_configure_vectors sets up the hardware to properly generate MSI-X/MSI/INTx 2131 2131 * interrupts. 2132 2132 **/ 2133 2133 void wx_configure_vectors(struct wx *wx)
+9 -9
drivers/net/wireless/ath/ath10k/ahb.c
··· 394 394 if (!ath10k_pci_irq_pending(ar)) 395 395 return IRQ_NONE; 396 396 397 - ath10k_pci_disable_and_clear_legacy_irq(ar); 397 + ath10k_pci_disable_and_clear_intx_irq(ar); 398 398 ath10k_pci_irq_msi_fw_mask(ar); 399 399 napi_schedule(&ar->napi); 400 400 401 401 return IRQ_HANDLED; 402 402 } 403 403 404 - static int ath10k_ahb_request_irq_legacy(struct ath10k *ar) 404 + static int ath10k_ahb_request_irq_intx(struct ath10k *ar) 405 405 { 406 406 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 407 407 struct ath10k_ahb *ar_ahb = ath10k_ahb_priv(ar); ··· 415 415 ar_ahb->irq, ret); 416 416 return ret; 417 417 } 418 - ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_LEGACY; 418 + ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_INTX; 419 419 420 420 return 0; 421 421 } 422 422 423 - static void ath10k_ahb_release_irq_legacy(struct ath10k *ar) 423 + static void ath10k_ahb_release_irq_intx(struct ath10k *ar) 424 424 { 425 425 struct ath10k_ahb *ar_ahb = ath10k_ahb_priv(ar); 426 426 ··· 430 430 static void ath10k_ahb_irq_disable(struct ath10k *ar) 431 431 { 432 432 ath10k_ce_disable_interrupts(ar); 433 - ath10k_pci_disable_and_clear_legacy_irq(ar); 433 + ath10k_pci_disable_and_clear_intx_irq(ar); 434 434 } 435 435 436 436 static int ath10k_ahb_resource_init(struct ath10k *ar) ··· 621 621 622 622 ath10k_core_napi_enable(ar); 623 623 ath10k_ce_enable_interrupts(ar); 624 - ath10k_pci_enable_legacy_irq(ar); 624 + ath10k_pci_enable_intx_irq(ar); 625 625 626 626 ath10k_pci_rx_post(ar); 627 627 ··· 775 775 776 776 ath10k_pci_init_napi(ar); 777 777 778 - ret = ath10k_ahb_request_irq_legacy(ar); 778 + ret = ath10k_ahb_request_irq_intx(ar); 779 779 if (ret) 780 780 goto err_free_pipes; 781 781 ··· 806 806 ath10k_ahb_clock_disable(ar); 807 807 808 808 err_free_irq: 809 - ath10k_ahb_release_irq_legacy(ar); 809 + ath10k_ahb_release_irq_intx(ar); 810 810 811 811 err_free_pipes: 812 812 ath10k_pci_release_resource(ar); ··· 828 828 829 829 ath10k_core_unregister(ar); 830 830 ath10k_ahb_irq_disable(ar); 831 - ath10k_ahb_release_irq_legacy(ar); 831 + ath10k_ahb_release_irq_intx(ar); 832 832 ath10k_pci_release_resource(ar); 833 833 ath10k_ahb_halt_chip(ar); 834 834 ath10k_ahb_clock_disable(ar);
+18 -18
drivers/net/wireless/ath/ath10k/pci.c
··· 721 721 return false; 722 722 } 723 723 724 - void ath10k_pci_disable_and_clear_legacy_irq(struct ath10k *ar) 724 + void ath10k_pci_disable_and_clear_intx_irq(struct ath10k *ar) 725 725 { 726 726 /* IMPORTANT: INTR_CLR register has to be set after 727 727 * INTR_ENABLE is set to 0, otherwise interrupt can not be ··· 739 739 PCIE_INTR_ENABLE_ADDRESS); 740 740 } 741 741 742 - void ath10k_pci_enable_legacy_irq(struct ath10k *ar) 742 + void ath10k_pci_enable_intx_irq(struct ath10k *ar) 743 743 { 744 744 ath10k_pci_write32(ar, SOC_CORE_BASE_ADDRESS + 745 745 PCIE_INTR_ENABLE_ADDRESS, ··· 1935 1935 static void ath10k_pci_irq_disable(struct ath10k *ar) 1936 1936 { 1937 1937 ath10k_ce_disable_interrupts(ar); 1938 - ath10k_pci_disable_and_clear_legacy_irq(ar); 1938 + ath10k_pci_disable_and_clear_intx_irq(ar); 1939 1939 ath10k_pci_irq_msi_fw_mask(ar); 1940 1940 } 1941 1941 ··· 1949 1949 static void ath10k_pci_irq_enable(struct ath10k *ar) 1950 1950 { 1951 1951 ath10k_ce_enable_interrupts(ar); 1952 - ath10k_pci_enable_legacy_irq(ar); 1952 + ath10k_pci_enable_intx_irq(ar); 1953 1953 ath10k_pci_irq_msi_fw_unmask(ar); 1954 1954 } 1955 1955 ··· 3111 3111 return IRQ_NONE; 3112 3112 } 3113 3113 3114 - if ((ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_LEGACY) && 3114 + if ((ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_INTX) && 3115 3115 !ath10k_pci_irq_pending(ar)) 3116 3116 return IRQ_NONE; 3117 3117 3118 - ath10k_pci_disable_and_clear_legacy_irq(ar); 3118 + ath10k_pci_disable_and_clear_intx_irq(ar); 3119 3119 ath10k_pci_irq_msi_fw_mask(ar); 3120 3120 napi_schedule(&ar->napi); 3121 3121 ··· 3152 3152 napi_schedule(ctx); 3153 3153 goto out; 3154 3154 } 3155 - ath10k_pci_enable_legacy_irq(ar); 3155 + ath10k_pci_enable_intx_irq(ar); 3156 3156 ath10k_pci_irq_msi_fw_unmask(ar); 3157 3157 } 3158 3158 ··· 3177 3177 return 0; 3178 3178 } 3179 3179 3180 - static int ath10k_pci_request_irq_legacy(struct ath10k *ar) 3180 + static int ath10k_pci_request_irq_intx(struct ath10k *ar) 3181 3181 { 3182 3182 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 3183 3183 int ret; ··· 3199 3199 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 3200 3200 3201 3201 switch (ar_pci->oper_irq_mode) { 3202 - case ATH10K_PCI_IRQ_LEGACY: 3203 - return ath10k_pci_request_irq_legacy(ar); 3202 + case ATH10K_PCI_IRQ_INTX: 3203 + return ath10k_pci_request_irq_intx(ar); 3204 3204 case ATH10K_PCI_IRQ_MSI: 3205 3205 return ath10k_pci_request_irq_msi(ar); 3206 3206 default: ··· 3232 3232 ath10k_pci_irq_mode); 3233 3233 3234 3234 /* Try MSI */ 3235 - if (ath10k_pci_irq_mode != ATH10K_PCI_IRQ_LEGACY) { 3235 + if (ath10k_pci_irq_mode != ATH10K_PCI_IRQ_INTX) { 3236 3236 ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_MSI; 3237 3237 ret = pci_enable_msi(ar_pci->pdev); 3238 3238 if (ret == 0) ··· 3250 3250 * For now, fix the race by repeating the write in below 3251 3251 * synchronization checking. 3252 3252 */ 3253 - ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_LEGACY; 3253 + ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_INTX; 3254 3254 3255 3255 ath10k_pci_write32(ar, SOC_CORE_BASE_ADDRESS + PCIE_INTR_ENABLE_ADDRESS, 3256 3256 PCIE_INTR_FIRMWARE_MASK | PCIE_INTR_CE_MASK_ALL); ··· 3258 3258 return 0; 3259 3259 } 3260 3260 3261 - static void ath10k_pci_deinit_irq_legacy(struct ath10k *ar) 3261 + static void ath10k_pci_deinit_irq_intx(struct ath10k *ar) 3262 3262 { 3263 3263 ath10k_pci_write32(ar, SOC_CORE_BASE_ADDRESS + PCIE_INTR_ENABLE_ADDRESS, 3264 3264 0); ··· 3269 3269 struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 3270 3270 3271 3271 switch (ar_pci->oper_irq_mode) { 3272 - case ATH10K_PCI_IRQ_LEGACY: 3273 - ath10k_pci_deinit_irq_legacy(ar); 3272 + case ATH10K_PCI_IRQ_INTX: 3273 + ath10k_pci_deinit_irq_intx(ar); 3274 3274 break; 3275 3275 default: 3276 3276 pci_disable_msi(ar_pci->pdev); ··· 3307 3307 if (val & FW_IND_INITIALIZED) 3308 3308 break; 3309 3309 3310 - if (ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_LEGACY) 3310 + if (ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_INTX) 3311 3311 /* Fix potential race by repeating CORE_BASE writes */ 3312 - ath10k_pci_enable_legacy_irq(ar); 3312 + ath10k_pci_enable_intx_irq(ar); 3313 3313 3314 3314 mdelay(10); 3315 3315 } while (time_before(jiffies, timeout)); 3316 3316 3317 - ath10k_pci_disable_and_clear_legacy_irq(ar); 3317 + ath10k_pci_disable_and_clear_intx_irq(ar); 3318 3318 ath10k_pci_irq_msi_fw_mask(ar); 3319 3319 3320 3320 if (val == 0xffffffff) {
+3 -3
drivers/net/wireless/ath/ath10k/pci.h
··· 101 101 102 102 enum ath10k_pci_irq_mode { 103 103 ATH10K_PCI_IRQ_AUTO = 0, 104 - ATH10K_PCI_IRQ_LEGACY = 1, 104 + ATH10K_PCI_IRQ_INTX = 1, 105 105 ATH10K_PCI_IRQ_MSI = 2, 106 106 }; 107 107 ··· 243 243 int ath10k_pci_init_config(struct ath10k *ar); 244 244 void ath10k_pci_rx_post(struct ath10k *ar); 245 245 void ath10k_pci_flush(struct ath10k *ar); 246 - void ath10k_pci_enable_legacy_irq(struct ath10k *ar); 246 + void ath10k_pci_enable_intx_irq(struct ath10k *ar); 247 247 bool ath10k_pci_irq_pending(struct ath10k *ar); 248 - void ath10k_pci_disable_and_clear_legacy_irq(struct ath10k *ar); 248 + void ath10k_pci_disable_and_clear_intx_irq(struct ath10k *ar); 249 249 void ath10k_pci_irq_msi_fw_mask(struct ath10k *ar); 250 250 int ath10k_pci_wait_for_target_init(struct ath10k *ar); 251 251 int ath10k_pci_setup_resource(struct ath10k *ar);
+1 -1
drivers/net/wireless/realtek/rtw88/pci.c
··· 1613 1613 1614 1614 static int rtw_pci_request_irq(struct rtw_dev *rtwdev, struct pci_dev *pdev) 1615 1615 { 1616 - unsigned int flags = PCI_IRQ_LEGACY; 1616 + unsigned int flags = PCI_IRQ_INTX; 1617 1617 int ret; 1618 1618 1619 1619 if (!rtw_disable_msi)
+1 -1
drivers/net/wireless/realtek/rtw89/pci.c
··· 3637 3637 unsigned long flags = 0; 3638 3638 int ret; 3639 3639 3640 - flags |= PCI_IRQ_LEGACY | PCI_IRQ_MSI; 3640 + flags |= PCI_IRQ_INTX | PCI_IRQ_MSI; 3641 3641 ret = pci_alloc_irq_vectors(pdev, 1, 1, flags); 3642 3642 if (ret < 0) { 3643 3643 rtw89_err(rtwdev, "failed to alloc irq vectors, ret %d\n", ret);
+1 -1
drivers/ntb/hw/idt/ntb_hw_idt.c
··· 2129 2129 int ret; 2130 2130 2131 2131 /* Allocate just one interrupt vector for the ISR */ 2132 - ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI | PCI_IRQ_LEGACY); 2132 + ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI | PCI_IRQ_INTX); 2133 2133 if (ret != 1) { 2134 2134 dev_err(&pdev->dev, "Failed to allocate IRQ vector"); 2135 2135 return ret;
+31 -13
drivers/pci/access.c
··· 36 36 int noinline pci_bus_read_config_##size \ 37 37 (struct pci_bus *bus, unsigned int devfn, int pos, type *value) \ 38 38 { \ 39 - int res; \ 40 39 unsigned long flags; \ 41 40 u32 data = 0; \ 42 - if (PCI_##size##_BAD) return PCIBIOS_BAD_REGISTER_NUMBER; \ 41 + int res; \ 42 + \ 43 + if (PCI_##size##_BAD) \ 44 + return PCIBIOS_BAD_REGISTER_NUMBER; \ 45 + \ 43 46 pci_lock_config(flags); \ 44 47 res = bus->ops->read(bus, devfn, pos, len, &data); \ 45 48 if (res) \ ··· 50 47 else \ 51 48 *value = (type)data; \ 52 49 pci_unlock_config(flags); \ 50 + \ 53 51 return res; \ 54 52 } 55 53 ··· 58 54 int noinline pci_bus_write_config_##size \ 59 55 (struct pci_bus *bus, unsigned int devfn, int pos, type value) \ 60 56 { \ 61 - int res; \ 62 57 unsigned long flags; \ 63 - if (PCI_##size##_BAD) return PCIBIOS_BAD_REGISTER_NUMBER; \ 58 + int res; \ 59 + \ 60 + if (PCI_##size##_BAD) \ 61 + return PCIBIOS_BAD_REGISTER_NUMBER; \ 62 + \ 64 63 pci_lock_config(flags); \ 65 64 res = bus->ops->write(bus, devfn, pos, len, value); \ 66 65 pci_unlock_config(flags); \ 66 + \ 67 67 return res; \ 68 68 } 69 69 ··· 224 216 } 225 217 226 218 /* Returns 0 on success, negative values indicate error. */ 227 - #define PCI_USER_READ_CONFIG(size, type) \ 219 + #define PCI_USER_READ_CONFIG(size, type) \ 228 220 int pci_user_read_config_##size \ 229 221 (struct pci_dev *dev, int pos, type *val) \ 230 222 { \ 231 - int ret = PCIBIOS_SUCCESSFUL; \ 232 223 u32 data = -1; \ 224 + int ret; \ 225 + \ 233 226 if (PCI_##size##_BAD) \ 234 227 return -EINVAL; \ 235 - raw_spin_lock_irq(&pci_lock); \ 228 + \ 229 + raw_spin_lock_irq(&pci_lock); \ 236 230 if (unlikely(dev->block_cfg_access)) \ 237 231 pci_wait_cfg(dev); \ 238 232 ret = dev->bus->ops->read(dev->bus, dev->devfn, \ 239 - pos, sizeof(type), &data); \ 240 - raw_spin_unlock_irq(&pci_lock); \ 233 + pos, sizeof(type), &data); \ 234 + raw_spin_unlock_irq(&pci_lock); \ 241 235 if (ret) \ 242 236 PCI_SET_ERROR_RESPONSE(val); \ 243 237 else \ 244 238 *val = (type)data; \ 239 + \ 245 240 return pcibios_err_to_errno(ret); \ 246 241 } \ 247 242 EXPORT_SYMBOL_GPL(pci_user_read_config_##size); ··· 254 243 int pci_user_write_config_##size \ 255 244 (struct pci_dev *dev, int pos, type val) \ 256 245 { \ 257 - int ret = PCIBIOS_SUCCESSFUL; \ 246 + int ret; \ 247 + \ 258 248 if (PCI_##size##_BAD) \ 259 249 return -EINVAL; \ 260 - raw_spin_lock_irq(&pci_lock); \ 250 + \ 251 + raw_spin_lock_irq(&pci_lock); \ 261 252 if (unlikely(dev->block_cfg_access)) \ 262 253 pci_wait_cfg(dev); \ 263 254 ret = dev->bus->ops->write(dev->bus, dev->devfn, \ 264 - pos, sizeof(type), val); \ 265 - raw_spin_unlock_irq(&pci_lock); \ 255 + pos, sizeof(type), val); \ 256 + raw_spin_unlock_irq(&pci_lock); \ 257 + \ 266 258 return pcibios_err_to_errno(ret); \ 267 259 } \ 268 260 EXPORT_SYMBOL_GPL(pci_user_write_config_##size); ··· 288 274 void pci_cfg_access_lock(struct pci_dev *dev) 289 275 { 290 276 might_sleep(); 277 + 278 + lock_map_acquire(&dev->cfg_access_lock); 291 279 292 280 raw_spin_lock_irq(&pci_lock); 293 281 if (dev->block_cfg_access) ··· 345 329 raw_spin_unlock_irqrestore(&pci_lock, flags); 346 330 347 331 wake_up_all(&pci_cfg_wait); 332 + 333 + lock_map_release(&dev->cfg_access_lock); 348 334 } 349 335 EXPORT_SYMBOL_GPL(pci_cfg_access_unlock); 350 336
+3 -4
drivers/pci/controller/cadence/pcie-cadence-ep.c
··· 99 99 ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS; 100 100 } else { 101 101 bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH); 102 - bool is_64bits = sz > SZ_2G; 102 + bool is_64bits = !!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64); 103 103 104 104 if (is_64bits && (bar & 1)) 105 105 return -EINVAL; 106 - 107 - if (is_64bits && !(flags & PCI_BASE_ADDRESS_MEM_TYPE_64)) 108 - epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64; 109 106 110 107 if (is_64bits && is_prefetch) 111 108 ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS; ··· 742 745 cdns_pcie_detect_quiet_min_delay_set(&ep->pcie); 743 746 744 747 spin_lock_init(&ep->lock); 748 + 749 + pci_epc_init_notify(epc); 745 750 746 751 return 0; 747 752
+9
drivers/pci/controller/dwc/pci-dra7xx.c
··· 467 467 return ret; 468 468 } 469 469 470 + ret = dw_pcie_ep_init_registers(ep); 471 + if (ret) { 472 + dev_err(dev, "Failed to initialize DWC endpoint registers\n"); 473 + dw_pcie_ep_deinit(ep); 474 + return ret; 475 + } 476 + 477 + dw_pcie_ep_init_notify(ep); 478 + 470 479 return 0; 471 480 } 472 481
+10
drivers/pci/controller/dwc/pci-imx6.c
··· 1123 1123 dev_err(dev, "failed to initialize endpoint\n"); 1124 1124 return ret; 1125 1125 } 1126 + 1127 + ret = dw_pcie_ep_init_registers(ep); 1128 + if (ret) { 1129 + dev_err(dev, "Failed to initialize DWC endpoint registers\n"); 1130 + dw_pcie_ep_deinit(ep); 1131 + return ret; 1132 + } 1133 + 1134 + dw_pcie_ep_init_notify(ep); 1135 + 1126 1136 /* Start LTSSM. */ 1127 1137 imx6_pcie_ltssm_enable(dev); 1128 1138
+11
drivers/pci/controller/dwc/pci-keystone.c
··· 1286 1286 ret = dw_pcie_ep_init(&pci->ep); 1287 1287 if (ret < 0) 1288 1288 goto err_get_sync; 1289 + 1290 + ret = dw_pcie_ep_init_registers(&pci->ep); 1291 + if (ret) { 1292 + dev_err(dev, "Failed to initialize DWC endpoint registers\n"); 1293 + goto err_ep_init; 1294 + } 1295 + 1296 + dw_pcie_ep_init_notify(&pci->ep); 1297 + 1289 1298 break; 1290 1299 default: 1291 1300 dev_err(dev, "INVALID device type %d\n", mode); ··· 1304 1295 1305 1296 return 0; 1306 1297 1298 + err_ep_init: 1299 + dw_pcie_ep_deinit(&pci->ep); 1307 1300 err_get_sync: 1308 1301 pm_runtime_put(dev); 1309 1302 pm_runtime_disable(dev);
+9
drivers/pci/controller/dwc/pci-layerscape-ep.c
··· 279 279 if (ret) 280 280 return ret; 281 281 282 + ret = dw_pcie_ep_init_registers(&pci->ep); 283 + if (ret) { 284 + dev_err(dev, "Failed to initialize DWC endpoint registers\n"); 285 + dw_pcie_ep_deinit(&pci->ep); 286 + return ret; 287 + } 288 + 289 + dw_pcie_ep_init_notify(&pci->ep); 290 + 282 291 return ls_pcie_ep_interrupt_init(pcie, pdev); 283 292 } 284 293
+14 -1
drivers/pci/controller/dwc/pcie-artpec6.c
··· 441 441 442 442 pci->ep.ops = &pcie_ep_ops; 443 443 444 - return dw_pcie_ep_init(&pci->ep); 444 + ret = dw_pcie_ep_init(&pci->ep); 445 + if (ret) 446 + return ret; 447 + 448 + ret = dw_pcie_ep_init_registers(&pci->ep); 449 + if (ret) { 450 + dev_err(dev, "Failed to initialize DWC endpoint registers\n"); 451 + dw_pcie_ep_deinit(&pci->ep); 452 + return ret; 453 + } 454 + 455 + dw_pcie_ep_init_notify(&pci->ep); 456 + 457 + break; 445 458 default: 446 459 dev_err(dev, "INVALID device type %d\n", artpec6_pcie->mode); 447 460 }
+163 -77
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 15 15 #include <linux/pci-epc.h> 16 16 #include <linux/pci-epf.h> 17 17 18 + /** 19 + * dw_pcie_ep_linkup - Notify EPF drivers about Link Up event 20 + * @ep: DWC EP device 21 + */ 18 22 void dw_pcie_ep_linkup(struct dw_pcie_ep *ep) 19 23 { 20 24 struct pci_epc *epc = ep->epc; ··· 27 23 } 28 24 EXPORT_SYMBOL_GPL(dw_pcie_ep_linkup); 29 25 26 + /** 27 + * dw_pcie_ep_init_notify - Notify EPF drivers about EPC initialization complete 28 + * @ep: DWC EP device 29 + */ 30 30 void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep) 31 31 { 32 32 struct pci_epc *epc = ep->epc; ··· 39 31 } 40 32 EXPORT_SYMBOL_GPL(dw_pcie_ep_init_notify); 41 33 34 + /** 35 + * dw_pcie_ep_get_func_from_ep - Get the struct dw_pcie_ep_func corresponding to 36 + * the endpoint function 37 + * @ep: DWC EP device 38 + * @func_no: Function number of the endpoint device 39 + * 40 + * Return: struct dw_pcie_ep_func if success, NULL otherwise. 41 + */ 42 42 struct dw_pcie_ep_func * 43 43 dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no) 44 44 { ··· 77 61 dw_pcie_dbi_ro_wr_dis(pci); 78 62 } 79 63 64 + /** 65 + * dw_pcie_ep_reset_bar - Reset endpoint BAR 66 + * @pci: DWC PCI device 67 + * @bar: BAR number of the endpoint 68 + */ 80 69 void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar) 81 70 { 82 71 u8 func_no, funcs; ··· 461 440 .get_features = dw_pcie_ep_get_features, 462 441 }; 463 442 443 + /** 444 + * dw_pcie_ep_raise_intx_irq - Raise INTx IRQ to the host 445 + * @ep: DWC EP device 446 + * @func_no: Function number of the endpoint 447 + * 448 + * Return: 0 if success, errono otherwise. 
449 + */ 464 450 int dw_pcie_ep_raise_intx_irq(struct dw_pcie_ep *ep, u8 func_no) 465 451 { 466 452 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); ··· 479 451 } 480 452 EXPORT_SYMBOL_GPL(dw_pcie_ep_raise_intx_irq); 481 453 454 + /** 455 + * dw_pcie_ep_raise_msi_irq - Raise MSI IRQ to the host 456 + * @ep: DWC EP device 457 + * @func_no: Function number of the endpoint 458 + * @interrupt_num: Interrupt number to be raised 459 + * 460 + * Return: 0 if success, errono otherwise. 461 + */ 482 462 int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no, 483 463 u8 interrupt_num) 484 464 { ··· 536 500 } 537 501 EXPORT_SYMBOL_GPL(dw_pcie_ep_raise_msi_irq); 538 502 503 + /** 504 + * dw_pcie_ep_raise_msix_irq_doorbell - Raise MSI-X to the host using Doorbell 505 + * method 506 + * @ep: DWC EP device 507 + * @func_no: Function number of the endpoint device 508 + * @interrupt_num: Interrupt number to be raised 509 + * 510 + * Return: 0 if success, errno otherwise. 511 + */ 539 512 int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, u8 func_no, 540 513 u16 interrupt_num) 541 514 { ··· 564 519 return 0; 565 520 } 566 521 522 + /** 523 + * dw_pcie_ep_raise_msix_irq - Raise MSI-X to the host 524 + * @ep: DWC EP device 525 + * @func_no: Function number of the endpoint device 526 + * @interrupt_num: Interrupt number to be raised 527 + * 528 + * Return: 0 if success, errno otherwise. 529 + */ 567 530 int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no, 568 531 u16 interrupt_num) 569 532 { ··· 619 566 return 0; 620 567 } 621 568 622 - void dw_pcie_ep_exit(struct dw_pcie_ep *ep) 569 + /** 570 + * dw_pcie_ep_cleanup - Cleanup DWC EP resources after fundamental reset 571 + * @ep: DWC EP device 572 + * 573 + * Cleans up the DWC EP specific resources like eDMA etc... after fundamental 574 + * reset like PERST#. Note that this API is only applicable for drivers 575 + * supporting PERST# or any other methods of fundamental reset. 
576 + */ 577 + void dw_pcie_ep_cleanup(struct dw_pcie_ep *ep) 623 578 { 624 579 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 625 - struct pci_epc *epc = ep->epc; 626 580 627 581 dw_pcie_edma_remove(pci); 582 + ep->epc->init_complete = false; 583 + } 584 + EXPORT_SYMBOL_GPL(dw_pcie_ep_cleanup); 585 + 586 + /** 587 + * dw_pcie_ep_deinit - Deinitialize the endpoint device 588 + * @ep: DWC EP device 589 + * 590 + * Deinitialize the endpoint device. EPC device is not destroyed since that will 591 + * be taken care of by devres. 592 + */ 593 + void dw_pcie_ep_deinit(struct dw_pcie_ep *ep) 594 + { 595 + struct pci_epc *epc = ep->epc; 596 + 597 + dw_pcie_ep_cleanup(ep); 628 598 629 599 pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem, 630 600 epc->mem->window.page_size); 631 601 632 602 pci_epc_mem_exit(epc); 633 - 634 - if (ep->ops->deinit) 635 - ep->ops->deinit(ep); 636 603 } 637 - EXPORT_SYMBOL_GPL(dw_pcie_ep_exit); 604 + EXPORT_SYMBOL_GPL(dw_pcie_ep_deinit); 638 605 639 606 static unsigned int dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap) 640 607 { ··· 674 601 return 0; 675 602 } 676 603 677 - int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep) 604 + /** 605 + * dw_pcie_ep_init_registers - Initialize DWC EP specific registers 606 + * @ep: DWC EP device 607 + * 608 + * Initialize the registers (CSRs) specific to DWC EP. This API should be called 609 + * only when the endpoint receives an active refclk (either from host or 610 + * generated locally). 
611 + */ 612 + int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep) 678 613 { 679 614 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 615 + struct dw_pcie_ep_func *ep_func; 616 + struct device *dev = pci->dev; 617 + struct pci_epc *epc = ep->epc; 680 618 unsigned int offset, ptm_cap_base; 681 619 unsigned int nbars; 682 620 u8 hdr_type; 621 + u8 func_no; 622 + int i, ret; 623 + void *addr; 683 624 u32 reg; 684 - int i; 685 625 686 626 hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE) & 687 627 PCI_HEADER_TYPE_MASK; ··· 704 618 hdr_type); 705 619 return -EIO; 706 620 } 621 + 622 + dw_pcie_version_detect(pci); 623 + 624 + dw_pcie_iatu_detect(pci); 625 + 626 + ret = dw_pcie_edma_detect(pci); 627 + if (ret) 628 + return ret; 629 + 630 + if (!ep->ib_window_map) { 631 + ep->ib_window_map = devm_bitmap_zalloc(dev, pci->num_ib_windows, 632 + GFP_KERNEL); 633 + if (!ep->ib_window_map) { 634 + ret = -ENOMEM; 635 + goto err_remove_edma; 636 + } 637 + } 638 + 639 + if (!ep->ob_window_map) { 640 + ep->ob_window_map = devm_bitmap_zalloc(dev, pci->num_ob_windows, 641 + GFP_KERNEL); 642 + if (!ep->ob_window_map) { 643 + ret = -ENOMEM; 644 + goto err_remove_edma; 645 + } 646 + } 647 + 648 + if (!ep->outbound_addr) { 649 + addr = devm_kcalloc(dev, pci->num_ob_windows, sizeof(phys_addr_t), 650 + GFP_KERNEL); 651 + if (!addr) { 652 + ret = -ENOMEM; 653 + goto err_remove_edma; 654 + } 655 + ep->outbound_addr = addr; 656 + } 657 + 658 + for (func_no = 0; func_no < epc->max_functions; func_no++) { 659 + ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no); 660 + if (ep_func) 661 + continue; 662 + 663 + ep_func = devm_kzalloc(dev, sizeof(*ep_func), GFP_KERNEL); 664 + if (!ep_func) { 665 + ret = -ENOMEM; 666 + goto err_remove_edma; 667 + } 668 + 669 + ep_func->func_no = func_no; 670 + ep_func->msi_cap = dw_pcie_ep_find_capability(ep, func_no, 671 + PCI_CAP_ID_MSI); 672 + ep_func->msix_cap = dw_pcie_ep_find_capability(ep, func_no, 673 + PCI_CAP_ID_MSIX); 674 + 675 + list_add_tail(&ep_func->list, &ep->func_list); 676 + } 677 + 678 + if (ep->ops->init) 679 + ep->ops->init(ep); 707 680 708 681 offset = 
dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR); 709 675 ptm_cap_base = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_PTM); ··· 796 658 dw_pcie_dbi_ro_wr_dis(pci); 797 659 798 660 return 0; 799 - } 800 - EXPORT_SYMBOL_GPL(dw_pcie_ep_init_complete); 801 661 662 + err_remove_edma: 663 + dw_pcie_edma_remove(pci); 664 + 665 + return ret; 666 + } 667 + EXPORT_SYMBOL_GPL(dw_pcie_ep_init_registers); 668 + 669 + /** 670 + * dw_pcie_ep_init - Initialize the endpoint device 671 + * @ep: DWC EP device 672 + * 673 + * Initialize the endpoint device. Allocate resources and create the EPC 674 + * device with the endpoint framework. 675 + * 676 + * Return: 0 if success, errno otherwise. 677 + */ 802 678 int dw_pcie_ep_init(struct dw_pcie_ep *ep) 803 679 { 804 680 int ret; 805 - void *addr; 806 - u8 func_no; 807 681 struct resource *res; 808 682 struct pci_epc *epc; 809 683 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 810 684 struct device *dev = pci->dev; 811 685 struct platform_device *pdev = to_platform_device(dev); 812 686 struct device_node *np = dev->of_node; 813 - const struct pci_epc_features *epc_features; 814 - struct dw_pcie_ep_func *ep_func; 815 687 816 688 INIT_LIST_HEAD(&ep->func_list); 817 689 ··· 839 691 if (ep->ops->pre_init) 840 692 ep->ops->pre_init(ep); 841 693 842 - dw_pcie_version_detect(pci); 843 - 844 - dw_pcie_iatu_detect(pci); 845 - 846 - ep->ib_window_map = devm_bitmap_zalloc(dev, pci->num_ib_windows, 847 - GFP_KERNEL); 848 - if (!ep->ib_window_map) 849 - return -ENOMEM; 850 - 851 - ep->ob_window_map = devm_bitmap_zalloc(dev, pci->num_ob_windows, 852 - GFP_KERNEL); 853 - if (!ep->ob_window_map) 854 - return -ENOMEM; 855 - 856 - addr = devm_kcalloc(dev, pci->num_ob_windows, sizeof(phys_addr_t), 857 - GFP_KERNEL); 858 - if (!addr) 859 - return -ENOMEM; 860 - ep->outbound_addr = addr; 861 - 862 694 epc = devm_pci_epc_create(dev, &epc_ops); 863 695 if (IS_ERR(epc)) { 864 696 dev_err(dev, "Failed to create epc device\n"); ··· 852 724 if (ret < 
0) 853 725 epc->max_functions = 1; 854 726 855 - for (func_no = 0; func_no < epc->max_functions; func_no++) { 856 - ep_func = devm_kzalloc(dev, sizeof(*ep_func), GFP_KERNEL); 857 - if (!ep_func) 858 - return -ENOMEM; 859 - 860 - ep_func->func_no = func_no; 861 - ep_func->msi_cap = dw_pcie_ep_find_capability(ep, func_no, 862 - PCI_CAP_ID_MSI); 863 - ep_func->msix_cap = dw_pcie_ep_find_capability(ep, func_no, 864 - PCI_CAP_ID_MSIX); 865 - 866 - list_add_tail(&ep_func->list, &ep->func_list); 867 - } 868 - 869 - if (ep->ops->init) 870 - ep->ops->init(ep); 871 - 872 727 ret = pci_epc_mem_init(epc, ep->phys_base, ep->addr_size, 873 728 ep->page_size); 874 729 if (ret < 0) { 875 730 dev_err(dev, "Failed to initialize address space\n"); 876 - goto err_ep_deinit; 731 + return ret; 877 732 } 878 733 879 734 ep->msi_mem = pci_epc_mem_alloc_addr(epc, &ep->msi_mem_phys, ··· 867 756 goto err_exit_epc_mem; 868 757 } 869 758 870 - ret = dw_pcie_edma_detect(pci); 871 - if (ret) 872 - goto err_free_epc_mem; 873 - 874 - if (ep->ops->get_features) { 875 - epc_features = ep->ops->get_features(ep); 876 - if (epc_features->core_init_notifier) 877 - return 0; 878 - } 879 - 880 - ret = dw_pcie_ep_init_complete(ep); 881 - if (ret) 882 - goto err_remove_edma; 883 - 884 759 return 0; 885 - 886 - err_remove_edma: 887 - dw_pcie_edma_remove(pci); 888 - 889 - err_free_epc_mem: 890 - pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem, 891 - epc->mem->window.page_size); 892 760 893 761 err_exit_epc_mem: 894 762 pci_epc_mem_exit(epc); 895 - 896 - err_ep_deinit: 897 - if (ep->ops->deinit) 898 - ep->ops->deinit(ep); 899 763 900 764 return ret; 901 765 }
+11
drivers/pci/controller/dwc/pcie-designware-plat.c
··· 145 145 146 146 pci->ep.ops = &pcie_ep_ops; 147 147 ret = dw_pcie_ep_init(&pci->ep); 148 + if (ret) 149 + return ret; 150 + 151 + ret = dw_pcie_ep_init_registers(&pci->ep); 152 + if (ret) { 153 + dev_err(dev, "Failed to initialize DWC endpoint registers\n"); 154 + dw_pcie_ep_deinit(&pci->ep); 155 + return ret; 156 + } 157 + 158 + dw_pcie_ep_init_notify(&pci->ep); 159 + 148 160 break; 149 161 default: 150 162 dev_err(dev, "INVALID device type %d\n", dw_plat_pcie->mode);
+9 -5
drivers/pci/controller/dwc/pcie-designware.h
··· 333 333 struct dw_pcie_ep_ops { 334 334 void (*pre_init)(struct dw_pcie_ep *ep); 335 335 void (*init)(struct dw_pcie_ep *ep); 336 - void (*deinit)(struct dw_pcie_ep *ep); 337 336 int (*raise_irq)(struct dw_pcie_ep *ep, u8 func_no, 338 337 unsigned int type, u16 interrupt_num); 339 338 const struct pci_epc_features* (*get_features)(struct dw_pcie_ep *ep); ··· 669 670 #ifdef CONFIG_PCIE_DW_EP 670 671 void dw_pcie_ep_linkup(struct dw_pcie_ep *ep); 671 672 int dw_pcie_ep_init(struct dw_pcie_ep *ep); 672 - int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep); 673 + int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep); 673 674 void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep); 674 - void dw_pcie_ep_exit(struct dw_pcie_ep *ep); 675 + void dw_pcie_ep_deinit(struct dw_pcie_ep *ep); 676 + void dw_pcie_ep_cleanup(struct dw_pcie_ep *ep); 675 677 int dw_pcie_ep_raise_intx_irq(struct dw_pcie_ep *ep, u8 func_no); 676 678 int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no, 677 679 u8 interrupt_num); ··· 693 693 return 0; 694 694 } 695 695 696 - static inline int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep) 696 + static inline int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep) 697 697 { 698 698 return 0; 699 699 } ··· 702 702 { 703 703 } 704 704 705 - static inline void dw_pcie_ep_exit(struct dw_pcie_ep *ep) 705 + static inline void dw_pcie_ep_deinit(struct dw_pcie_ep *ep) 706 + { 707 + } 708 + 709 + static inline void dw_pcie_ep_cleanup(struct dw_pcie_ep *ep) 706 710 { 707 711 } 708 712
+17 -1
drivers/pci/controller/dwc/pcie-keembay.c
··· 396 396 struct keembay_pcie *pcie; 397 397 struct dw_pcie *pci; 398 398 enum dw_pcie_device_mode mode; 399 + int ret; 399 400 400 401 data = device_get_match_data(dev); 401 402 if (!data) ··· 431 430 return -ENODEV; 432 431 433 432 pci->ep.ops = &keembay_pcie_ep_ops; 434 - return dw_pcie_ep_init(&pci->ep); 433 + ret = dw_pcie_ep_init(&pci->ep); 434 + if (ret) 435 + return ret; 436 + 437 + ret = dw_pcie_ep_init_registers(&pci->ep); 438 + if (ret) { 439 + dev_err(dev, "Failed to initialize DWC endpoint registers\n"); 440 + dw_pcie_ep_deinit(&pci->ep); 441 + return ret; 442 + } 443 + 444 + dw_pcie_ep_init_notify(&pci->ep); 445 + 446 + break; 435 447 default: 436 448 dev_err(dev, "Invalid device type %d\n", pcie->mode); 437 449 return -ENODEV; 438 450 } 451 + 452 + return 0; 439 453 } 440 454 441 455 static const struct keembay_pcie_of_data keembay_pcie_rc_of_data = {
+2 -2
drivers/pci/controller/dwc/pcie-qcom-ep.c
··· 463 463 PARF_INT_ALL_LINK_UP | PARF_INT_ALL_EDMA; 464 464 writel_relaxed(val, pcie_ep->parf + PARF_INT_ALL_MASK); 465 465 466 - ret = dw_pcie_ep_init_complete(&pcie_ep->pci.ep); 466 + ret = dw_pcie_ep_init_registers(&pcie_ep->pci.ep); 467 467 if (ret) { 468 468 dev_err(dev, "Failed to complete initialization: %d\n", ret); 469 469 goto err_disable_resources; ··· 507 507 return; 508 508 } 509 509 510 + dw_pcie_ep_cleanup(&pci->ep); 510 511 qcom_pcie_disable_resources(pcie_ep); 511 512 pcie_ep->link_status = QCOM_PCIE_EP_LINK_DISABLED; 512 513 } ··· 775 774 776 775 static const struct pci_epc_features qcom_pcie_epc_features = { 777 776 .linkup_notifier = true, 778 - .core_init_notifier = true, 779 777 .msi_capable = true, 780 778 .msix_capable = false, 781 779 .align = SZ_4K,
+21 -7
drivers/pci/controller/dwc/pcie-rcar-gen4.c
··· 352 352 dw_pcie_ep_reset_bar(pci, bar); 353 353 } 354 354 355 - static void rcar_gen4_pcie_ep_deinit(struct dw_pcie_ep *ep) 355 + static void rcar_gen4_pcie_ep_deinit(struct rcar_gen4_pcie *rcar) 356 356 { 357 - struct dw_pcie *dw = to_dw_pcie_from_ep(ep); 358 - struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw); 359 - 360 357 writel(0, rcar->base + PCIEDMAINTSTSEN); 361 358 rcar_gen4_pcie_common_deinit(rcar); 362 359 } ··· 407 410 static const struct dw_pcie_ep_ops pcie_ep_ops = { 408 411 .pre_init = rcar_gen4_pcie_ep_pre_init, 409 412 .init = rcar_gen4_pcie_ep_init, 410 - .deinit = rcar_gen4_pcie_ep_deinit, 411 413 .raise_irq = rcar_gen4_pcie_ep_raise_irq, 412 414 .get_features = rcar_gen4_pcie_ep_get_features, 413 415 .get_dbi_offset = rcar_gen4_pcie_ep_get_dbi_offset, ··· 416 420 static int rcar_gen4_add_dw_pcie_ep(struct rcar_gen4_pcie *rcar) 417 421 { 418 422 struct dw_pcie_ep *ep = &rcar->dw.ep; 423 + struct device *dev = rcar->dw.dev; 424 + int ret; 419 425 420 426 if (!IS_ENABLED(CONFIG_PCIE_RCAR_GEN4_EP)) 421 427 return -ENODEV; 422 428 423 429 ep->ops = &pcie_ep_ops; 424 430 425 431 ret = dw_pcie_ep_init(ep); 432 + if (ret) { 433 + rcar_gen4_pcie_ep_deinit(rcar); 434 + return ret; 435 + } 436 + 437 + ret = dw_pcie_ep_init_registers(ep); 438 + if (ret) { 439 + dev_err(dev, "Failed to initialize DWC endpoint registers\n"); 440 + dw_pcie_ep_deinit(ep); 441 + rcar_gen4_pcie_ep_deinit(rcar); 442 + return ret; 443 + } 444 + 445 + dw_pcie_ep_init_notify(ep); 446 + 447 + return 0; 426 448 } 427 449 428 450 static void rcar_gen4_remove_dw_pcie_ep(struct rcar_gen4_pcie *rcar) 429 451 { 430 - dw_pcie_ep_exit(&rcar->dw.ep); 452 + dw_pcie_ep_deinit(&rcar->dw.ep); 453 + rcar_gen4_pcie_ep_deinit(rcar); 431 454 } 432 455 433 456 /* Common */
+6 -2
drivers/pci/controller/dwc/pcie-tegra194.c
··· 1715 1715 if (ret) 1716 1716 dev_err(pcie->dev, "Failed to go Detect state: %d\n", ret); 1717 1717 1718 + dw_pcie_ep_cleanup(&pcie->pci.ep); 1719 + 1718 1720 reset_control_assert(pcie->core_rst); 1719 1721 1720 1722 tegra_pcie_disable_phy(pcie); ··· 1897 1895 val = (upper_32_bits(ep->msi_mem_phys) & MSIX_ADDR_MATCH_HIGH_OFF_MASK); 1898 1896 dw_pcie_writel_dbi(pci, MSIX_ADDR_MATCH_HIGH_OFF, val); 1899 1897 1900 - ret = dw_pcie_ep_init_complete(ep); 1898 + ret = dw_pcie_ep_init_registers(ep); 1901 1899 if (ret) { 1902 1900 dev_err(dev, "Failed to complete initialization: %d\n", ret); 1903 1901 goto fail_init_complete; ··· 2006 2004 2007 2005 static const struct pci_epc_features tegra_pcie_epc_features = { 2008 2006 .linkup_notifier = true, 2009 - .core_init_notifier = true, 2010 2007 .msi_capable = false, 2011 2008 .msix_capable = false, 2012 2009 .bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M, ··· 2274 2273 ret = tegra_pcie_config_ep(pcie, pdev); 2275 2274 if (ret < 0) 2276 2275 goto fail; 2276 + else 2277 + return 0; 2277 2278 break; 2278 2279 2279 2280 default: 2280 2281 dev_err(dev, "Invalid PCIe device type %d\n", 2281 2282 pcie->of_data->mode); 2283 + ret = -EINVAL; 2282 2284 } 2283 2285 2284 2286 fail:
+14 -1
drivers/pci/controller/dwc/pcie-uniphier-ep.c
··· 399 399 return ret; 400 400 401 401 priv->pci.ep.ops = &uniphier_pcie_ep_ops; 402 - return dw_pcie_ep_init(&priv->pci.ep); 402 + ret = dw_pcie_ep_init(&priv->pci.ep); 403 + if (ret) 404 + return ret; 405 + 406 + ret = dw_pcie_ep_init_registers(&priv->pci.ep); 407 + if (ret) { 408 + dev_err(dev, "Failed to initialize DWC endpoint registers\n"); 409 + dw_pcie_ep_deinit(&priv->pci.ep); 410 + return ret; 411 + } 412 + 413 + dw_pcie_ep_init_notify(&priv->pci.ep); 414 + 415 + return 0; 403 416 } 404 417 405 418 static const struct uniphier_pcie_ep_soc_data uniphier_pro5_data = {
+1 -1
drivers/pci/controller/pcie-mt7621.c
··· 202 202 struct mt7621_pcie_port *port; 203 203 struct device *dev = pcie->dev; 204 204 struct platform_device *pdev = to_platform_device(dev); 205 - char name[10]; 205 + char name[11]; 206 206 int err; 207 207 208 208 port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
+2
drivers/pci/controller/pcie-rcar-ep.c
··· 542 542 goto err_pm_put; 543 543 } 544 544 545 + pci_epc_init_notify(epc); 546 + 545 547 return 0; 546 548 547 549 err_pm_put:
+5 -5
drivers/pci/controller/pcie-rockchip-ep.c
··· 98 98 99 99 /* All functions share the same vendor ID with function 0 */ 100 100 if (fn == 0) { 101 - u32 vid_regs = (hdr->vendorid & GENMASK(15, 0)) | 102 - (hdr->subsys_vendor_id & GENMASK(31, 16)) << 16; 103 - 104 - rockchip_pcie_write(rockchip, vid_regs, 101 + rockchip_pcie_write(rockchip, 102 + hdr->vendorid | hdr->subsys_vendor_id << 16, 105 103 PCIE_CORE_CONFIG_VENDOR); 106 104 } 107 105 ··· 151 153 ctrl = ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_IO_32BITS; 152 154 } else { 153 155 bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH); 154 - bool is_64bits = sz > SZ_2G; 156 + bool is_64bits = !!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64); 155 157 156 158 if (is_64bits && (bar & 1)) 157 159 return -EINVAL; ··· 606 608 607 609 rockchip_pcie_write(rockchip, PCIE_CLIENT_CONF_ENABLE, 608 610 PCIE_CLIENT_CONFIG); 611 + 612 + pci_epc_init_notify(epc); 609 613 610 614 return 0; 611 615 err_epc_mem_exit:
+9 -3
drivers/pci/doe.c
··· 383 383 complete(task->private); 384 384 } 385 385 386 - static int pci_doe_discovery(struct pci_doe_mb *doe_mb, u8 *index, u16 *vid, 386 + static int pci_doe_discovery(struct pci_doe_mb *doe_mb, u8 capver, u8 *index, u16 *vid, 387 387 u8 *protocol) 388 388 { 389 389 u32 request_pl = FIELD_PREP(PCI_DOE_DATA_OBJECT_DISC_REQ_3_INDEX, 390 - *index); 390 + *index) | 391 + FIELD_PREP(PCI_DOE_DATA_OBJECT_DISC_REQ_3_VER, 392 + (capver >= 2) ? 2 : 0); 391 393 __le32 request_pl_le = cpu_to_le32(request_pl); 392 394 __le32 response_pl_le; 393 395 u32 response_pl; ··· 423 421 { 424 422 u8 index = 0; 425 423 u8 xa_idx = 0; 424 + u32 hdr = 0; 425 + 426 + pci_read_config_dword(doe_mb->pdev, doe_mb->cap_offset, &hdr); 426 427 427 428 do { 428 429 int rc; 429 430 u16 vid; 430 431 u8 prot; 431 432 432 - rc = pci_doe_discovery(doe_mb, &index, &vid, &prot); 433 + rc = pci_doe_discovery(doe_mb, PCI_EXT_CAP_VER(hdr), &index, 434 + &vid, &prot); 433 435 if (rc) 434 436 return rc; 435 437
+20 -60
drivers/pci/endpoint/functions/pci-epf-test.c
··· 690 690 { 691 691 struct pci_epf_test *epf_test = epf_get_drvdata(epf); 692 692 struct pci_epc *epc = epf->epc; 693 - struct pci_epf_bar *epf_bar; 694 693 int bar; 695 694 696 695 cancel_delayed_work(&epf_test->cmd_handler); 697 696 pci_epf_test_clean_dma_chan(epf_test); 698 697 for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { 699 - epf_bar = &epf->bar[bar]; 698 + if (!epf_test->reg[bar]) 699 + continue; 700 700 701 - if (epf_test->reg[bar]) { 702 - pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no, 703 - epf_bar); 704 - pci_epf_free_space(epf, epf_test->reg[bar], bar, 705 - PRIMARY_INTERFACE); 706 - } 701 + pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no, 702 + &epf->bar[bar]); 703 + pci_epf_free_space(epf, epf_test->reg[bar], bar, 704 + PRIMARY_INTERFACE); 707 705 } 708 706 } 709 707 710 708 static int pci_epf_test_set_bar(struct pci_epf *epf) 711 709 { 712 - int bar, add; 713 - int ret; 714 - struct pci_epf_bar *epf_bar; 710 + int bar, ret; 715 711 struct pci_epc *epc = epf->epc; 716 712 struct device *dev = &epf->dev; 717 713 struct pci_epf_test *epf_test = epf_get_drvdata(epf); 718 714 enum pci_barno test_reg_bar = epf_test->test_reg_bar; 719 - const struct pci_epc_features *epc_features; 720 715 721 - epc_features = epf_test->epc_features; 722 - 723 - for (bar = 0; bar < PCI_STD_NUM_BARS; bar += add) { 724 - epf_bar = &epf->bar[bar]; 725 - /* 726 - * pci_epc_set_bar() sets PCI_BASE_ADDRESS_MEM_TYPE_64 727 - * if the specific implementation required a 64-bit BAR, 728 - * even if we only requested a 32-bit BAR. 729 - */ 730 - add = (epf_bar->flags & PCI_BASE_ADDRESS_MEM_TYPE_64) ? 
2 : 1; 731 - 732 - if (epc_features->bar[bar].type == BAR_RESERVED) 716 + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { 717 + if (!epf_test->reg[bar]) 733 718 continue; 734 719 735 720 ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, 736 - epf_bar); 721 + &epf->bar[bar]); 737 722 if (ret) { 738 723 pci_epf_free_space(epf, epf_test->reg[bar], bar, 739 724 PRIMARY_INTERFACE); ··· 738 753 const struct pci_epc_features *epc_features; 739 754 struct pci_epc *epc = epf->epc; 740 755 struct device *dev = &epf->dev; 756 + bool linkup_notifier = false; 741 757 bool msix_capable = false; 742 758 bool msi_capable = true; 743 759 int ret; ··· 781 795 } 782 796 } 783 797 798 + linkup_notifier = epc_features->linkup_notifier; 799 + if (!linkup_notifier) 800 + queue_work(kpcitest_workqueue, &epf_test->cmd_handler.work); 801 + 784 802 return 0; 785 803 } 786 804 ··· 807 817 { 808 818 struct pci_epf_test *epf_test = epf_get_drvdata(epf); 809 819 struct device *dev = &epf->dev; 810 - struct pci_epf_bar *epf_bar; 811 820 size_t msix_table_size = 0; 812 821 size_t test_reg_bar_size; 813 822 size_t pba_size = 0; 814 823 bool msix_capable; 815 824 void *base; 816 - int bar, add; 817 825 enum pci_barno test_reg_bar = epf_test->test_reg_bar; 826 + enum pci_barno bar; 818 827 const struct pci_epc_features *epc_features; 819 828 size_t test_reg_size; 820 829 ··· 838 849 } 839 850 epf_test->reg[test_reg_bar] = base; 840 851 841 - for (bar = 0; bar < PCI_STD_NUM_BARS; bar += add) { 842 - epf_bar = &epf->bar[bar]; 843 - add = (epf_bar->flags & PCI_BASE_ADDRESS_MEM_TYPE_64) ? 
2 : 1; 852 + for (bar = BAR_0; bar < PCI_STD_NUM_BARS; bar++) { 853 + bar = pci_epc_get_next_free_bar(epc_features, bar); 854 + if (bar == NO_BAR) 855 + break; 844 856 845 857 if (bar == test_reg_bar) 846 - continue; 847 - 848 - if (epc_features->bar[bar].type == BAR_RESERVED) 849 858 continue; 850 859 851 860 base = pci_epf_alloc_space(epf, bar_size[bar], bar, ··· 857 870 return 0; 858 871 } 859 872 860 - static void pci_epf_configure_bar(struct pci_epf *epf, 861 - const struct pci_epc_features *epc_features) 862 - { 863 - struct pci_epf_bar *epf_bar; 864 - int i; 865 - 866 - for (i = 0; i < PCI_STD_NUM_BARS; i++) { 867 - epf_bar = &epf->bar[i]; 868 - if (epc_features->bar[i].only_64bit) 869 - epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64; 870 - } 871 - } 872 - 873 873 static int pci_epf_test_bind(struct pci_epf *epf) 874 874 { 875 875 int ret; ··· 864 890 const struct pci_epc_features *epc_features; 865 891 enum pci_barno test_reg_bar = BAR_0; 866 892 struct pci_epc *epc = epf->epc; 867 - bool linkup_notifier = false; 868 - bool core_init_notifier = false; 869 893 870 894 if (WARN_ON_ONCE(!epc)) 871 895 return -EINVAL; ··· 874 902 return -EOPNOTSUPP; 875 903 } 876 904 877 - linkup_notifier = epc_features->linkup_notifier; 878 - core_init_notifier = epc_features->core_init_notifier; 879 905 test_reg_bar = pci_epc_get_first_free_bar(epc_features); 880 906 if (test_reg_bar < 0) 881 907 return -EINVAL; 882 - pci_epf_configure_bar(epf, epc_features); 883 908 884 909 epf_test->test_reg_bar = test_reg_bar; 885 910 epf_test->epc_features = epc_features; ··· 885 916 if (ret) 886 917 return ret; 887 918 888 - if (!core_init_notifier) { 889 - ret = pci_epf_test_core_init(epf); 890 - if (ret) 891 - return ret; 892 - } 893 - 894 919 epf_test->dma_supported = true; 895 920 896 921 ret = pci_epf_test_init_dma_chan(epf_test); 897 922 if (ret) 898 923 epf_test->dma_supported = false; 899 - 900 - if (!linkup_notifier && !core_init_notifier) 901 - queue_work(kpcitest_workqueue, 
&epf_test->cmd_handler.work); 902 924 903 925 return 0; 904 926 }
+9
drivers/pci/endpoint/pci-ep-cfs.c
··· 64 64 return ret; 65 65 } 66 66 67 + /* Send any pending EPC initialization complete to the EPF driver */ 68 + pci_epc_notify_pending_init(epc, epf); 69 + 67 70 return 0; 68 71 } 69 72 ··· 127 124 pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE); 128 125 return ret; 129 126 } 127 + 128 + /* Send any pending EPC initialization complete to the EPF driver */ 129 + pci_epc_notify_pending_init(epc, epf); 130 130 131 131 return 0; 132 132 } ··· 235 229 pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE); 236 230 return ret; 237 231 } 232 + 233 + /* Send any pending EPC initialization complete to the EPF driver */ 234 + pci_epc_notify_pending_init(epc, epf); 238 235 239 236 return 0; 240 237 }
+22
drivers/pci/endpoint/pci-epc-core.c
··· 748 748 epf->event_ops->core_init(epf); 749 749 mutex_unlock(&epf->lock); 750 750 } 751 + epc->init_complete = true; 751 752 mutex_unlock(&epc->list_lock); 752 753 } 753 754 EXPORT_SYMBOL_GPL(pci_epc_init_notify); 755 + 756 + /** 757 + * pci_epc_notify_pending_init() - Notify the pending EPC device initialization 758 + * complete to the EPF device 759 + * @epc: the EPC device whose core initialization is pending to be notified 760 + * @epf: the EPF device to be notified 761 + * 762 + * Invoke to notify the pending EPC device initialization complete to the EPF 763 + * device. This is used to deliver the notification if the EPC initialization 764 + * got completed before the EPF driver bind. 765 + */ 766 + void pci_epc_notify_pending_init(struct pci_epc *epc, struct pci_epf *epf) 767 + { 768 + if (epc->init_complete) { 769 + mutex_lock(&epf->lock); 770 + if (epf->event_ops && epf->event_ops->core_init) 771 + epf->event_ops->core_init(epf); 772 + mutex_unlock(&epf->lock); 773 + } 774 + } 775 + EXPORT_SYMBOL_GPL(pci_epc_notify_pending_init); 754 776 755 777 /** 756 778 * pci_epc_bme_notify() - Notify the EPF device that the EPC device has received
+6 -3
drivers/pci/endpoint/pci-epf-core.c
··· 255 255 * @type: Identifies if the allocation is for primary EPC or secondary EPC 256 256 * 257 257 * Invoke to allocate memory for the PCI EPF register space. 258 + * Flag PCI_BASE_ADDRESS_MEM_TYPE_64 will automatically get set if the BAR 259 + * can only be a 64-bit BAR, or if the requested size is larger than 2 GB. 258 260 */ 259 261 void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar, 260 262 const struct pci_epc_features *epc_features, ··· 306 304 epf_bar[bar].addr = space; 307 305 epf_bar[bar].size = size; 308 306 epf_bar[bar].barno = bar; 309 - epf_bar[bar].flags |= upper_32_bits(size) ? 310 - PCI_BASE_ADDRESS_MEM_TYPE_64 : 311 - PCI_BASE_ADDRESS_MEM_TYPE_32; 307 + if (upper_32_bits(size) || epc_features->bar[bar].only_64bit) 308 + epf_bar[bar].flags |= PCI_BASE_ADDRESS_MEM_TYPE_64; 309 + else 310 + epf_bar[bar].flags |= PCI_BASE_ADDRESS_MEM_TYPE_32; 312 311 313 312 return space; 314 313 }
+5 -7
drivers/pci/hotplug/TODO
··· 6 6 ->set_power callbacks in struct cpci_hp_controller_ops. Why were they 7 7 introduced? Can they be removed from the struct? 8 8 9 + * Returned code from pci_hp_add_bridge() is not checked. 10 + 9 11 cpqphp: 10 12 11 13 * The driver spawns a kthread cpqhp_event_thread() which is woken by the ··· 17 15 18 16 * A large portion of cpqphp_ctrl.c and cpqphp_pci.c concerns resource 19 17 management. Doesn't this duplicate functionality in the core? 18 + 19 + * Returned code from pci_hp_add_bridge() is not checked. 20 20 21 21 ibmphp: 22 22 ··· 47 43 * A large portion of ibmphp_res.c and ibmphp_pci.c concerns resource 48 44 management. Doesn't this duplicate functionality in the core? 49 45 50 - sgi_hotplug: 51 - 52 - * Several functions access the pci_slot member in struct hotplug_slot even 53 - though pci_hotplug.h declares it private. See sn_hp_destroy() for an 54 - example. Either the pci_slot member should no longer be declared private 55 - or sgi_hotplug should store a pointer to it in struct slot. Probably the 56 - former. 46 + * Returned code from pci_hp_add_bridge() is not checked. 57 47 58 48 shpchp: 59 49
+4 -54
drivers/pci/msi/api.c
··· 213 213 * * %PCI_IRQ_MSIX Allow trying MSI-X vector allocations 214 214 * * %PCI_IRQ_MSI Allow trying MSI vector allocations 215 215 * 216 - * * %PCI_IRQ_LEGACY Allow trying legacy INTx interrupts, if 217 - * and only if @min_vecs == 1 216 + * * %PCI_IRQ_INTX Allow trying INTx interrupts, if and 217 + * only if @min_vecs == 1 218 218 * 219 219 * * %PCI_IRQ_AFFINITY Auto-manage IRQs affinity by spreading 220 220 * the vectors around available CPUs ··· 279 279 return nvecs; 280 280 } 281 281 282 - /* use legacy IRQ if allowed */ 283 - if (flags & PCI_IRQ_LEGACY) { 282 + /* use INTx IRQ if allowed */ 283 + if (flags & PCI_IRQ_INTX) { 284 284 if (min_vecs == 1 && dev->irq) { 285 285 /* 286 286 * Invoke the affinity spreading logic to ensure that ··· 364 364 return &desc->affinity[idx].mask; 365 365 } 366 366 EXPORT_SYMBOL(pci_irq_get_affinity); 367 - 368 - /** 369 - * pci_ims_alloc_irq - Allocate an interrupt on a PCI/IMS interrupt domain 370 - * @dev: The PCI device to operate on 371 - * @icookie: Pointer to an IMS implementation specific cookie for this 372 - * IMS instance (PASID, queue ID, pointer...). 373 - * The cookie content is copied into the MSI descriptor for the 374 - * interrupt chip callbacks or domain specific setup functions. 375 - * @affdesc: Optional pointer to an interrupt affinity descriptor 376 - * 377 - * There is no index for IMS allocations as IMS is an implementation 378 - * specific storage and does not have any direct associations between 379 - * index, which might be a pure software construct, and device 380 - * functionality. This association is established by the driver either via 381 - * the index - if there is a hardware table - or in case of purely software 382 - * managed IMS implementation the association happens via the 383 - * irq_write_msi_msg() callback of the implementation specific interrupt 384 - * chip, which utilizes the provided @icookie to store the MSI message in 385 - * the appropriate place. 
386 - * 387 - * Return: A struct msi_map 388 - * 389 - * On success msi_map::index contains the allocated index (>= 0) and 390 - * msi_map::virq the allocated Linux interrupt number (> 0). 391 - * 392 - * On fail msi_map::index contains the error code and msi_map::virq 393 - * is set to 0. 394 - */ 395 - struct msi_map pci_ims_alloc_irq(struct pci_dev *dev, union msi_instance_cookie *icookie, 396 - const struct irq_affinity_desc *affdesc) 397 - { 398 - return msi_domain_alloc_irq_at(&dev->dev, MSI_SECONDARY_DOMAIN, MSI_ANY_INDEX, 399 - affdesc, icookie); 400 - } 401 - EXPORT_SYMBOL_GPL(pci_ims_alloc_irq); 402 - 403 - /** 404 - * pci_ims_free_irq - Allocate an interrupt on a PCI/IMS interrupt domain 405 - * which was allocated via pci_ims_alloc_irq() 406 - * @dev: The PCI device to operate on 407 - * @map: A struct msi_map describing the interrupt to free as 408 - * returned from pci_ims_alloc_irq() 409 - */ 410 - void pci_ims_free_irq(struct pci_dev *dev, struct msi_map map) 411 - { 412 - if (WARN_ON_ONCE(map.index < 0 || map.virq <= 0)) 413 - return; 414 - msi_domain_free_irqs_range(&dev->dev, MSI_SECONDARY_DOMAIN, map.index, map.index); 415 - } 416 - EXPORT_SYMBOL_GPL(pci_ims_free_irq); 417 367 418 368 /** 419 369 * pci_free_irq_vectors() - Free previously allocated IRQs for a device
-59
drivers/pci/msi/irqdomain.c
··· 355 355 return (supported & feature_mask) == feature_mask; 356 356 } 357 357 358 - /** 359 - * pci_create_ims_domain - Create a secondary IMS domain for a PCI device 360 - * @pdev: The PCI device to operate on 361 - * @template: The MSI info template which describes the domain 362 - * @hwsize: The size of the hardware entry table or 0 if the domain 363 - * is purely software managed 364 - * @data: Optional pointer to domain specific data to be stored 365 - * in msi_domain_info::data 366 - * 367 - * Return: True on success, false otherwise 368 - * 369 - * An IMS domain is expected to have the following constraints: 370 - * - The index space is managed by the core code 371 - * 372 - * - There is no requirement for consecutive index ranges 373 - * 374 - * - The interrupt chip must provide the following callbacks: 375 - * - irq_mask() 376 - * - irq_unmask() 377 - * - irq_write_msi_msg() 378 - * 379 - * - The interrupt chip must provide the following optional callbacks 380 - * when the irq_mask(), irq_unmask() and irq_write_msi_msg() callbacks 381 - * cannot operate directly on hardware, e.g. in the case that the 382 - * interrupt message store is in queue memory: 383 - * - irq_bus_lock() 384 - * - irq_bus_unlock() 385 - * 386 - * These callbacks are invoked from preemptible task context and are 387 - * allowed to sleep. In this case the mandatory callbacks above just 388 - * store the information. The irq_bus_unlock() callback is supposed 389 - * to make the change effective before returning. 390 - * 391 - * - Interrupt affinity setting is handled by the underlying parent 392 - * interrupt domain and communicated to the IMS domain via 393 - * irq_write_msi_msg(). 394 - * 395 - * The domain is automatically destroyed when the PCI device is removed. 
396 - */ 397 - bool pci_create_ims_domain(struct pci_dev *pdev, const struct msi_domain_template *template, 398 - unsigned int hwsize, void *data) 399 - { 400 - struct irq_domain *domain = dev_get_msi_domain(&pdev->dev); 401 - 402 - if (!domain || !irq_domain_is_msi_parent(domain)) 403 - return false; 404 - 405 - if (template->info.bus_token != DOMAIN_BUS_PCI_DEVICE_IMS || 406 - !(template->info.flags & MSI_FLAG_ALLOC_SIMPLE_MSI_DESCS) || 407 - !(template->info.flags & MSI_FLAG_FREE_MSI_DESCS) || 408 - !template->chip.irq_mask || !template->chip.irq_unmask || 409 - !template->chip.irq_write_msi_msg || template->chip.irq_set_affinity) 410 - return false; 411 - 412 - return msi_create_device_irq_domain(&pdev->dev, MSI_SECONDARY_DOMAIN, template, 413 - hwsize, data, NULL); 414 - } 415 - EXPORT_SYMBOL_GPL(pci_create_ims_domain); 416 - 417 358 /* 418 359 * Users of the generic MSI infrastructure expect a device to have a single ID, 419 360 * so with DMA aliases we have to pick the least-worst compromise. Devices with
+9 -6
drivers/pci/msi/msi.c
··· 86 86 return 0; 87 87 88 88 ret = devm_add_action(&dev->dev, pcim_msi_release, dev); 89 - if (!ret) 90 - dev->is_msi_managed = true; 91 - return ret; 89 + if (ret) 90 + return ret; 91 + 92 + dev->is_msi_managed = true; 93 + return 0; 92 94 } 93 95 94 96 /* ··· 101 99 { 102 100 int ret = msi_setup_device_data(&dev->dev); 103 101 104 - if (!ret) 105 - ret = pcim_setup_msi_release(dev); 106 - return ret; 102 + if (ret) 103 + return ret; 104 + 105 + return pcim_setup_msi_release(dev); 107 106 } 108 107 109 108 /*
+2
drivers/pci/of_property.c
··· 238 238 return 0; 239 239 240 240 int_map = kcalloc(map_sz, sizeof(u32), GFP_KERNEL); 241 + if (!int_map) 242 + return -ENOMEM; 241 243 mapp = int_map; 242 244 243 245 list_for_each_entry(child, &pdev->subordinate->devices, bus_list) {
+121 -22
drivers/pci/pci.c
··· 142 142 * the dfl or actual value as it sees fit. Don't forget this is 143 143 * measured in 32-bit words, not bytes. 144 144 */ 145 - u8 pci_dfl_cache_line_size = L1_CACHE_BYTES >> 2; 146 - u8 pci_cache_line_size; 145 + u8 pci_dfl_cache_line_size __ro_after_init = L1_CACHE_BYTES >> 2; 146 + u8 pci_cache_line_size __ro_after_init ; 147 147 148 148 /* 149 149 * If we set up a device for bus mastering, we need to check the latency ··· 1277 1277 for (;;) { 1278 1278 u32 id; 1279 1279 1280 + if (pci_dev_is_disconnected(dev)) { 1281 + pci_dbg(dev, "disconnected; not waiting\n"); 1282 + return -ENOTTY; 1283 + } 1284 + 1280 1285 pci_read_config_dword(dev, PCI_COMMAND, &id); 1281 1286 if (!PCI_POSSIBLE_ERROR(id)) 1282 1287 break; ··· 2114 2109 atomic_dec(&dev->enable_cnt); 2115 2110 return err; 2116 2111 } 2117 - 2118 - /** 2119 - * pci_enable_device_io - Initialize a device for use with IO space 2120 - * @dev: PCI device to be initialized 2121 - * 2122 - * Initialize device before it's used by a driver. Ask low-level code 2123 - * to enable I/O resources. Wake up the device if it was suspended. 2124 - * Beware, this function can fail. 
2125 - */ 2126 - int pci_enable_device_io(struct pci_dev *dev) 2127 - { 2128 - return pci_enable_device_flags(dev, IORESOURCE_IO); 2129 - } 2130 - EXPORT_SYMBOL(pci_enable_device_io); 2131 2112 2132 2113 /** 2133 2114 * pci_enable_device_mem - Initialize a device for use with Memory space ··· 2951 2960 DMI_MATCH(DMI_BOARD_VENDOR, "Elo Touch Solutions"), 2952 2961 DMI_MATCH(DMI_BOARD_NAME, "Geminilake"), 2953 2962 DMI_MATCH(DMI_BOARD_VERSION, "Continental Z2"), 2963 + }, 2964 + }, 2965 + { 2966 + /* 2967 + * Changing power state of the root port the dGPU is connected to fails 2968 + * https://gitlab.freedesktop.org/drm/amd/-/issues/3229 2969 + */ 2970 + .ident = "Hewlett-Packard HP Pavilion 17 Notebook PC/1972", 2971 + .matches = { 2972 + DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"), 2973 + DMI_MATCH(DMI_BOARD_NAME, "1972"), 2974 + DMI_MATCH(DMI_BOARD_VERSION, "95.33"), 2954 2975 }, 2955 2976 }, 2956 2977 #endif ··· 4628 4625 4629 4626 /* 4630 4627 * Ensure the updated LNKCTL parameters are used during link 4631 - * training by checking that there is no ongoing link training to 4632 - * avoid LTSSM race as recommended in Implementation Note at the 4633 - * end of PCIe r6.0.1 sec 7.5.3.7. 4628 + * training by checking that there is no ongoing link training that 4629 + * may have started before link parameters were changed, so as to 4630 + * avoid LTSSM race as recommended in Implementation Note at the end 4631 + * of PCIe r6.1 sec 7.5.3.7.
4634 4632 */ 4635 - rc = pcie_wait_for_link_status(pdev, use_lt, !use_lt); 4633 + rc = pcie_wait_for_link_status(pdev, true, false); 4636 4634 if (rc) 4637 4635 return rc; 4638 4636 ··· 4883 4879 */ 4884 4880 int pci_bridge_secondary_bus_reset(struct pci_dev *dev) 4885 4881 { 4882 + lock_map_assert_held(&dev->cfg_access_lock); 4886 4883 pcibios_reset_secondary_bus(dev); 4887 4884 4888 4885 return pci_bridge_wait_for_secondary_bus(dev, "bus reset"); ··· 4932 4927 return pci_reset_hotplug_slot(dev->slot->hotplug, probe); 4933 4928 } 4934 4929 4930 + static u16 cxl_port_dvsec(struct pci_dev *dev) 4931 + { 4932 + return pci_find_dvsec_capability(dev, PCI_VENDOR_ID_CXL, 4933 + PCI_DVSEC_CXL_PORT); 4934 + } 4935 + 4936 + static bool cxl_sbr_masked(struct pci_dev *dev) 4937 + { 4938 + u16 dvsec, reg; 4939 + int rc; 4940 + 4941 + dvsec = cxl_port_dvsec(dev); 4942 + if (!dvsec) 4943 + return false; 4944 + 4945 + rc = pci_read_config_word(dev, dvsec + PCI_DVSEC_CXL_PORT_CTL, &reg); 4946 + if (rc || PCI_POSSIBLE_ERROR(reg)) 4947 + return false; 4948 + 4949 + /* 4950 + * Per CXL spec r3.1, sec 8.1.5.2, when "Unmask SBR" is 0, the SBR 4951 + * bit in Bridge Control has no effect. When 1, the Port generates 4952 + * hot reset when the SBR bit is set to 1. 4953 + */ 4954 + if (reg & PCI_DVSEC_CXL_PORT_CTL_UNMASK_SBR) 4955 + return false; 4956 + 4957 + return true; 4958 + } 4959 + 4935 4960 static int pci_reset_bus_function(struct pci_dev *dev, bool probe) 4936 4961 { 4962 + struct pci_dev *bridge = pci_upstream_bridge(dev); 4937 4963 int rc; 4964 + 4965 + /* 4966 + * If "dev" is below a CXL port that has SBR control masked, SBR 4967 + * won't do anything, so return error. 
4968 + */ 4969 + if (bridge && cxl_sbr_masked(bridge)) { 4970 + if (probe) 4971 + return 0; 4972 + 4973 + return -ENOTTY; 4974 + } 4938 4975 4939 4976 rc = pci_dev_reset_slot_function(dev, probe); 4940 4977 if (rc != -ENOTTY) 4941 4978 return rc; 4942 4979 return pci_parent_bus_reset(dev, probe); 4980 + } 4981 + 4982 + static int cxl_reset_bus_function(struct pci_dev *dev, bool probe) 4983 + { 4984 + struct pci_dev *bridge; 4985 + u16 dvsec, reg, val; 4986 + int rc; 4987 + 4988 + bridge = pci_upstream_bridge(dev); 4989 + if (!bridge) 4990 + return -ENOTTY; 4991 + 4992 + dvsec = cxl_port_dvsec(bridge); 4993 + if (!dvsec) 4994 + return -ENOTTY; 4995 + 4996 + if (probe) 4997 + return 0; 4998 + 4999 + rc = pci_read_config_word(bridge, dvsec + PCI_DVSEC_CXL_PORT_CTL, &reg); 5000 + if (rc) 5001 + return -ENOTTY; 5002 + 5003 + if (reg & PCI_DVSEC_CXL_PORT_CTL_UNMASK_SBR) { 5004 + val = reg; 5005 + } else { 5006 + val = reg | PCI_DVSEC_CXL_PORT_CTL_UNMASK_SBR; 5007 + pci_write_config_word(bridge, dvsec + PCI_DVSEC_CXL_PORT_CTL, 5008 + val); 5009 + } 5010 + 5011 + rc = pci_reset_bus_function(dev, probe); 5012 + 5013 + if (reg != val) 5014 + pci_write_config_word(bridge, dvsec + PCI_DVSEC_CXL_PORT_CTL, 5015 + reg); 5016 + 5017 + return rc; 4943 5018 } 4944 5019 4945 5020 void pci_dev_lock(struct pci_dev *dev) ··· 5106 5021 { pci_af_flr, .name = "af_flr" }, 5107 5022 { pci_pm_reset, .name = "pm" }, 5108 5023 { pci_reset_bus_function, .name = "bus" }, 5024 + { cxl_reset_bus_function, .name = "cxl_bus" }, 5109 5025 }; 5110 5026 5111 5027 static ssize_t reset_method_show(struct device *dev, ··· 5331 5245 */ 5332 5246 int pci_reset_function(struct pci_dev *dev) 5333 5247 { 5248 + struct pci_dev *bridge; 5334 5249 int rc; 5335 5250 5336 5251 if (!pci_reset_supported(dev)) 5337 5252 return -ENOTTY; 5253 + 5254 + /* 5255 + * If there's no upstream bridge, no locking is needed since there is 5256 + * no upstream bridge configuration to hold consistent. 
5257 + */ 5258 + bridge = pci_upstream_bridge(dev); 5259 + if (bridge) 5260 + pci_dev_lock(bridge); 5338 5261 5339 5262 pci_dev_lock(dev); 5340 5263 pci_dev_save_and_disable(dev); ··· 5352 5257 5353 5258 pci_dev_restore(dev); 5354 5259 pci_dev_unlock(dev); 5260 + 5261 + if (bridge) 5262 + pci_dev_unlock(bridge); 5355 5263 5356 5264 return rc; 5357 5265 } ··· 6163 6065 * and width, multiplying them, and applying encoding overhead. The result 6164 6066 * is in Mb/s, i.e., megabits/second of raw bandwidth. 6165 6067 */ 6166 - u32 pcie_bandwidth_capable(struct pci_dev *dev, enum pci_bus_speed *speed, 6167 - enum pcie_link_width *width) 6068 + static u32 pcie_bandwidth_capable(struct pci_dev *dev, 6069 + enum pci_bus_speed *speed, 6070 + enum pcie_link_width *width) 6168 6071 { 6169 6072 *speed = pcie_get_speed_cap(dev); 6170 6073 *width = pcie_get_width_cap(dev);
-2
drivers/pci/pci.h
··· 293 293 const char *pci_speed_string(enum pci_bus_speed speed); 294 294 enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev); 295 295 enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev); 296 - u32 pcie_bandwidth_capable(struct pci_dev *dev, enum pci_bus_speed *speed, 297 - enum pcie_link_width *width); 298 296 void __pcie_print_link_status(struct pci_dev *dev, bool verbose); 299 297 void pcie_report_downtraining(struct pci_dev *dev); 300 298 void pcie_update_link_speed(struct pci_bus *bus, u16 link_status);
+1 -1
drivers/pci/pcie/Kconfig
··· 47 47 error injection can fake almost all kinds of errors with the 48 48 help of a user space helper tool aer-inject, which can be 49 49 gotten from: 50 - https://git.kernel.org/cgit/linux/kernel/git/gong.chen/aer-inject.git/ 50 + https://github.com/intel/aer-inject.git 51 51 52 52 config PCIEAER_CXL 53 53 bool "PCI Express CXL RAS support"
+1 -1
drivers/pci/pcie/aer_inject.c
··· 6 6 * trigger various real hardware errors. Software based error 7 7 * injection can fake almost all kinds of errors with the help of a 8 8 * user space helper tool aer-inject, which can be gotten from: 9 - * https://git.kernel.org/cgit/linux/kernel/git/gong.chen/aer-inject.git/ 9 + * https://github.com/intel/aer-inject.git 10 10 * 11 11 * Copyright 2009 Intel Corporation. 12 12 * Huang Ying <ying.huang@intel.com>
+89 -93
drivers/pci/pcie/aspm.c
··· 8 8 */ 9 9 10 10 #include <linux/bitfield.h> 11 + #include <linux/bits.h> 12 + #include <linux/build_bug.h> 11 13 #include <linux/kernel.h> 12 14 #include <linux/limits.h> 13 15 #include <linux/math.h> ··· 191 189 #endif 192 190 #define MODULE_PARAM_PREFIX "pcie_aspm." 193 191 194 - /* Note: those are not register definitions */ 195 - #define ASPM_STATE_L0S_UP (1) /* Upstream direction L0s state */ 196 - #define ASPM_STATE_L0S_DW (2) /* Downstream direction L0s state */ 197 - #define ASPM_STATE_L1 (4) /* L1 state */ 198 - #define ASPM_STATE_L1_1 (8) /* ASPM L1.1 state */ 199 - #define ASPM_STATE_L1_2 (0x10) /* ASPM L1.2 state */ 200 - #define ASPM_STATE_L1_1_PCIPM (0x20) /* PCI PM L1.1 state */ 201 - #define ASPM_STATE_L1_2_PCIPM (0x40) /* PCI PM L1.2 state */ 202 - #define ASPM_STATE_L1_SS_PCIPM (ASPM_STATE_L1_1_PCIPM | ASPM_STATE_L1_2_PCIPM) 203 - #define ASPM_STATE_L1_2_MASK (ASPM_STATE_L1_2 | ASPM_STATE_L1_2_PCIPM) 204 - #define ASPM_STATE_L1SS (ASPM_STATE_L1_1 | ASPM_STATE_L1_1_PCIPM |\ 205 - ASPM_STATE_L1_2_MASK) 206 - #define ASPM_STATE_L0S (ASPM_STATE_L0S_UP | ASPM_STATE_L0S_DW) 207 - #define ASPM_STATE_ALL (ASPM_STATE_L0S | ASPM_STATE_L1 | \ 208 - ASPM_STATE_L1SS) 192 + /* Note: these are not register definitions */ 193 + #define PCIE_LINK_STATE_L0S_UP BIT(0) /* Upstream direction L0s state */ 194 + #define PCIE_LINK_STATE_L0S_DW BIT(1) /* Downstream direction L0s state */ 195 + static_assert(PCIE_LINK_STATE_L0S == (PCIE_LINK_STATE_L0S_UP | PCIE_LINK_STATE_L0S_DW)); 196 + 197 + #define PCIE_LINK_STATE_L1_SS_PCIPM (PCIE_LINK_STATE_L1_1_PCIPM |\ 198 + PCIE_LINK_STATE_L1_2_PCIPM) 199 + #define PCIE_LINK_STATE_L1_2_MASK (PCIE_LINK_STATE_L1_2 |\ 200 + PCIE_LINK_STATE_L1_2_PCIPM) 201 + #define PCIE_LINK_STATE_L1SS (PCIE_LINK_STATE_L1_1 |\ 202 + PCIE_LINK_STATE_L1_1_PCIPM |\ 203 + PCIE_LINK_STATE_L1_2_MASK) 209 204 210 205 struct pcie_link_state { 211 206 struct pci_dev *pdev; /* Upstream component of the Link */ ··· 274 275 return 0; 275 276 case 
POLICY_POWERSAVE: 276 277 /* Enable ASPM L0s/L1 */ 277 - return (ASPM_STATE_L0S | ASPM_STATE_L1); 278 + return PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1; 278 279 case POLICY_POWER_SUPERSAVE: 279 280 /* Enable Everything */ 280 - return ASPM_STATE_ALL; 281 + return PCIE_LINK_STATE_ASPM_ALL; 281 282 case POLICY_DEFAULT: 282 283 return link->aspm_default; 283 284 } ··· 580 581 latency_dw_l1 = calc_l1_latency(lnkcap_dw); 581 582 582 583 /* Check upstream direction L0s latency */ 583 - if ((link->aspm_capable & ASPM_STATE_L0S_UP) && 584 + if ((link->aspm_capable & PCIE_LINK_STATE_L0S_UP) && 584 585 (latency_up_l0s > acceptable_l0s)) 585 - link->aspm_capable &= ~ASPM_STATE_L0S_UP; 586 + link->aspm_capable &= ~PCIE_LINK_STATE_L0S_UP; 586 587 587 588 /* Check downstream direction L0s latency */ 588 - if ((link->aspm_capable & ASPM_STATE_L0S_DW) && 589 + if ((link->aspm_capable & PCIE_LINK_STATE_L0S_DW) && 589 590 (latency_dw_l0s > acceptable_l0s)) 590 - link->aspm_capable &= ~ASPM_STATE_L0S_DW; 591 + link->aspm_capable &= ~PCIE_LINK_STATE_L0S_DW; 591 592 /* 592 593 * Check L1 latency. 593 594 * Every switch on the path to root complex need 1 ··· 602 603 * substate latencies (and hence do not do any check). 
603 604 */ 604 605 latency = max_t(u32, latency_up_l1, latency_dw_l1); 605 - if ((link->aspm_capable & ASPM_STATE_L1) && 606 + if ((link->aspm_capable & PCIE_LINK_STATE_L1) && 606 607 (latency + l1_switch_latency > acceptable_l1)) 607 - link->aspm_capable &= ~ASPM_STATE_L1; 608 + link->aspm_capable &= ~PCIE_LINK_STATE_L1; 608 609 l1_switch_latency += NSEC_PER_USEC; 609 610 610 611 link = link->parent; ··· 740 741 child_l1ss_cap &= ~PCI_L1SS_CAP_ASPM_L1_2; 741 742 742 743 if (parent_l1ss_cap & child_l1ss_cap & PCI_L1SS_CAP_ASPM_L1_1) 743 - link->aspm_support |= ASPM_STATE_L1_1; 744 + link->aspm_support |= PCIE_LINK_STATE_L1_1; 744 745 if (parent_l1ss_cap & child_l1ss_cap & PCI_L1SS_CAP_ASPM_L1_2) 745 - link->aspm_support |= ASPM_STATE_L1_2; 746 + link->aspm_support |= PCIE_LINK_STATE_L1_2; 746 747 if (parent_l1ss_cap & child_l1ss_cap & PCI_L1SS_CAP_PCIPM_L1_1) 747 - link->aspm_support |= ASPM_STATE_L1_1_PCIPM; 748 + link->aspm_support |= PCIE_LINK_STATE_L1_1_PCIPM; 748 749 if (parent_l1ss_cap & child_l1ss_cap & PCI_L1SS_CAP_PCIPM_L1_2) 749 - link->aspm_support |= ASPM_STATE_L1_2_PCIPM; 750 + link->aspm_support |= PCIE_LINK_STATE_L1_2_PCIPM; 750 751 751 752 if (parent_l1ss_cap) 752 753 pci_read_config_dword(parent, parent->l1ss + PCI_L1SS_CTL1, ··· 756 757 &child_l1ss_ctl1); 757 758 758 759 if (parent_l1ss_ctl1 & child_l1ss_ctl1 & PCI_L1SS_CTL1_ASPM_L1_1) 759 - link->aspm_enabled |= ASPM_STATE_L1_1; 760 + link->aspm_enabled |= PCIE_LINK_STATE_L1_1; 760 761 if (parent_l1ss_ctl1 & child_l1ss_ctl1 & PCI_L1SS_CTL1_ASPM_L1_2) 761 - link->aspm_enabled |= ASPM_STATE_L1_2; 762 + link->aspm_enabled |= PCIE_LINK_STATE_L1_2; 762 763 if (parent_l1ss_ctl1 & child_l1ss_ctl1 & PCI_L1SS_CTL1_PCIPM_L1_1) 763 - link->aspm_enabled |= ASPM_STATE_L1_1_PCIPM; 764 + link->aspm_enabled |= PCIE_LINK_STATE_L1_1_PCIPM; 764 765 if (parent_l1ss_ctl1 & child_l1ss_ctl1 & PCI_L1SS_CTL1_PCIPM_L1_2) 765 - link->aspm_enabled |= ASPM_STATE_L1_2_PCIPM; 766 + link->aspm_enabled |= 
PCIE_LINK_STATE_L1_2_PCIPM; 766 767 767 - if (link->aspm_support & ASPM_STATE_L1_2_MASK) 768 + if (link->aspm_support & PCIE_LINK_STATE_L1_2_MASK) 768 769 aspm_calc_l12_info(link, parent_l1ss_cap, child_l1ss_cap); 769 770 } 770 771 ··· 777 778 778 779 if (blacklist) { 779 780 /* Set enabled/disable so that we will disable ASPM later */ 780 - link->aspm_enabled = ASPM_STATE_ALL; 781 - link->aspm_disable = ASPM_STATE_ALL; 781 + link->aspm_enabled = PCIE_LINK_STATE_ASPM_ALL; 782 + link->aspm_disable = PCIE_LINK_STATE_ASPM_ALL; 782 783 return; 783 784 } 784 785 ··· 813 814 * support L0s. 814 815 */ 815 816 if (parent_lnkcap & child_lnkcap & PCI_EXP_LNKCAP_ASPM_L0S) 816 - link->aspm_support |= ASPM_STATE_L0S; 817 + link->aspm_support |= PCIE_LINK_STATE_L0S; 817 818 818 819 if (child_lnkctl & PCI_EXP_LNKCTL_ASPM_L0S) 819 - link->aspm_enabled |= ASPM_STATE_L0S_UP; 820 + link->aspm_enabled |= PCIE_LINK_STATE_L0S_UP; 820 821 if (parent_lnkctl & PCI_EXP_LNKCTL_ASPM_L0S) 821 - link->aspm_enabled |= ASPM_STATE_L0S_DW; 822 + link->aspm_enabled |= PCIE_LINK_STATE_L0S_DW; 822 823 823 824 /* Setup L1 state */ 824 825 if (parent_lnkcap & child_lnkcap & PCI_EXP_LNKCAP_ASPM_L1) 825 - link->aspm_support |= ASPM_STATE_L1; 826 + link->aspm_support |= PCIE_LINK_STATE_L1; 826 827 827 828 if (parent_lnkctl & child_lnkctl & PCI_EXP_LNKCTL_ASPM_L1) 828 - link->aspm_enabled |= ASPM_STATE_L1; 829 + link->aspm_enabled |= PCIE_LINK_STATE_L1; 829 830 830 831 aspm_l1ss_init(link); 831 832 ··· 875 876 * If needed, disable L1, and it gets enabled later 876 877 * in pcie_config_aspm_link(). 
877 878 */ 878 - if (enable_req & (ASPM_STATE_L1_1 | ASPM_STATE_L1_2)) { 879 + if (enable_req & (PCIE_LINK_STATE_L1_1 | PCIE_LINK_STATE_L1_2)) { 879 880 pcie_capability_clear_word(child, PCI_EXP_LNKCTL, 880 881 PCI_EXP_LNKCTL_ASPM_L1); 881 882 pcie_capability_clear_word(parent, PCI_EXP_LNKCTL, ··· 883 884 } 884 885 885 886 val = 0; 886 - if (state & ASPM_STATE_L1_1) 887 + if (state & PCIE_LINK_STATE_L1_1) 887 888 val |= PCI_L1SS_CTL1_ASPM_L1_1; 888 - if (state & ASPM_STATE_L1_2) 889 + if (state & PCIE_LINK_STATE_L1_2) 889 890 val |= PCI_L1SS_CTL1_ASPM_L1_2; 890 - if (state & ASPM_STATE_L1_1_PCIPM) 891 + if (state & PCIE_LINK_STATE_L1_1_PCIPM) 891 892 val |= PCI_L1SS_CTL1_PCIPM_L1_1; 892 - if (state & ASPM_STATE_L1_2_PCIPM) 893 + if (state & PCIE_LINK_STATE_L1_2_PCIPM) 893 894 val |= PCI_L1SS_CTL1_PCIPM_L1_2; 894 895 895 896 /* Enable what we need to enable */ ··· 915 916 state &= (link->aspm_capable & ~link->aspm_disable); 916 917 917 918 /* Can't enable any substates if L1 is not enabled */ 918 - if (!(state & ASPM_STATE_L1)) 919 - state &= ~ASPM_STATE_L1SS; 919 + if (!(state & PCIE_LINK_STATE_L1)) 920 + state &= ~PCIE_LINK_STATE_L1SS; 920 921 921 922 /* Spec says both ports must be in D0 before enabling PCI PM substates*/ 922 923 if (parent->current_state != PCI_D0 || child->current_state != PCI_D0) { 923 - state &= ~ASPM_STATE_L1_SS_PCIPM; 924 - state |= (link->aspm_enabled & ASPM_STATE_L1_SS_PCIPM); 924 + state &= ~PCIE_LINK_STATE_L1_SS_PCIPM; 925 + state |= (link->aspm_enabled & PCIE_LINK_STATE_L1_SS_PCIPM); 925 926 } 926 927 927 928 /* Nothing to do if the link is already in the requested state */ 928 929 if (link->aspm_enabled == state) 929 930 return; 930 931 /* Convert ASPM state to upstream/downstream ASPM register state */ 931 - if (state & ASPM_STATE_L0S_UP) 932 + if (state & PCIE_LINK_STATE_L0S_UP) 932 933 dwstream |= PCI_EXP_LNKCTL_ASPM_L0S; 933 - if (state & ASPM_STATE_L0S_DW) 934 + if (state & PCIE_LINK_STATE_L0S_DW) 934 935 upstream |= 
PCI_EXP_LNKCTL_ASPM_L0S; 935 - if (state & ASPM_STATE_L1) { 936 + if (state & PCIE_LINK_STATE_L1) { 936 937 upstream |= PCI_EXP_LNKCTL_ASPM_L1; 937 938 dwstream |= PCI_EXP_LNKCTL_ASPM_L1; 938 939 } 939 940 940 - if (link->aspm_capable & ASPM_STATE_L1SS) 941 + if (link->aspm_capable & PCIE_LINK_STATE_L1SS) 941 942 pcie_config_aspm_l1ss(link, state); 942 943 943 944 /* ··· 946 947 * upstream component first and then downstream, and vice 947 948 * versa for disabling ASPM L1. Spec doesn't mention L0S. 948 949 */ 949 - if (state & ASPM_STATE_L1) 950 + if (state & PCIE_LINK_STATE_L1) 950 951 pcie_config_aspm_dev(parent, upstream); 951 952 list_for_each_entry(child, &linkbus->devices, bus_list) 952 953 pcie_config_aspm_dev(child, dwstream); 953 - if (!(state & ASPM_STATE_L1)) 954 + if (!(state & PCIE_LINK_STATE_L1)) 954 955 pcie_config_aspm_dev(parent, upstream); 955 956 956 957 link->aspm_enabled = state; ··· 1323 1324 return bridge->link_state; 1324 1325 } 1325 1326 1327 + static u8 pci_calc_aspm_disable_mask(int state) 1328 + { 1329 + state &= ~PCIE_LINK_STATE_CLKPM; 1330 + 1331 + /* L1 PM substates require L1 */ 1332 + if (state & PCIE_LINK_STATE_L1) 1333 + state |= PCIE_LINK_STATE_L1SS; 1334 + 1335 + return state; 1336 + } 1337 + 1338 + static u8 pci_calc_aspm_enable_mask(int state) 1339 + { 1340 + state &= ~PCIE_LINK_STATE_CLKPM; 1341 + 1342 + /* L1 PM substates require L1 */ 1343 + if (state & PCIE_LINK_STATE_L1SS) 1344 + state |= PCIE_LINK_STATE_L1; 1345 + 1346 + return state; 1347 + } 1348 + 1326 1349 static int __pci_disable_link_state(struct pci_dev *pdev, int state, bool locked) 1327 1350 { 1328 1351 struct pcie_link_state *link = pcie_aspm_get_link(pdev); ··· 1367 1346 if (!locked) 1368 1347 down_read(&pci_bus_sem); 1369 1348 mutex_lock(&aspm_lock); 1370 - if (state & PCIE_LINK_STATE_L0S) 1371 - link->aspm_disable |= ASPM_STATE_L0S; 1372 - if (state & PCIE_LINK_STATE_L1) 1373 - /* L1 PM substates require L1 */ 1374 - link->aspm_disable |= ASPM_STATE_L1 | 
ASPM_STATE_L1SS; 1375 - if (state & PCIE_LINK_STATE_L1_1) 1376 - link->aspm_disable |= ASPM_STATE_L1_1; 1377 - if (state & PCIE_LINK_STATE_L1_2) 1378 - link->aspm_disable |= ASPM_STATE_L1_2; 1379 - if (state & PCIE_LINK_STATE_L1_1_PCIPM) 1380 - link->aspm_disable |= ASPM_STATE_L1_1_PCIPM; 1381 - if (state & PCIE_LINK_STATE_L1_2_PCIPM) 1382 - link->aspm_disable |= ASPM_STATE_L1_2_PCIPM; 1349 + link->aspm_disable |= pci_calc_aspm_disable_mask(state); 1383 1350 pcie_config_aspm_link(link, policy_to_aspm_state(link)); 1384 1351 1385 1352 if (state & PCIE_LINK_STATE_CLKPM) ··· 1423 1414 if (!locked) 1424 1415 down_read(&pci_bus_sem); 1425 1416 mutex_lock(&aspm_lock); 1426 - link->aspm_default = 0; 1427 - if (state & PCIE_LINK_STATE_L0S) 1428 - link->aspm_default |= ASPM_STATE_L0S; 1429 - if (state & PCIE_LINK_STATE_L1) 1430 - link->aspm_default |= ASPM_STATE_L1; 1431 - /* L1 PM substates require L1 */ 1432 - if (state & PCIE_LINK_STATE_L1_1) 1433 - link->aspm_default |= ASPM_STATE_L1_1 | ASPM_STATE_L1; 1434 - if (state & PCIE_LINK_STATE_L1_2) 1435 - link->aspm_default |= ASPM_STATE_L1_2 | ASPM_STATE_L1; 1436 - if (state & PCIE_LINK_STATE_L1_1_PCIPM) 1437 - link->aspm_default |= ASPM_STATE_L1_1_PCIPM | ASPM_STATE_L1; 1438 - if (state & PCIE_LINK_STATE_L1_2_PCIPM) 1439 - link->aspm_default |= ASPM_STATE_L1_2_PCIPM | ASPM_STATE_L1; 1417 + link->aspm_default = pci_calc_aspm_enable_mask(state); 1440 1418 pcie_config_aspm_link(link, policy_to_aspm_state(link)); 1441 1419 1442 1420 link->clkpm_default = (state & PCIE_LINK_STATE_CLKPM) ? 
1 : 0; ··· 1559 1563 if (state_enable) { 1560 1564 link->aspm_disable &= ~state; 1561 1565 /* need to enable L1 for substates */ 1562 - if (state & ASPM_STATE_L1SS) 1563 - link->aspm_disable &= ~ASPM_STATE_L1; 1566 + if (state & PCIE_LINK_STATE_L1SS) 1567 + link->aspm_disable &= ~PCIE_LINK_STATE_L1; 1564 1568 } else { 1565 1569 link->aspm_disable |= state; 1566 - if (state & ASPM_STATE_L1) 1567 - link->aspm_disable |= ASPM_STATE_L1SS; 1570 + if (state & PCIE_LINK_STATE_L1) 1571 + link->aspm_disable |= PCIE_LINK_STATE_L1SS; 1568 1572 } 1569 1573 1570 1574 pcie_config_aspm_link(link, policy_to_aspm_state(link)); ··· 1578 1582 #define ASPM_ATTR(_f, _s) \ 1579 1583 static ssize_t _f##_show(struct device *dev, \ 1580 1584 struct device_attribute *attr, char *buf) \ 1581 - { return aspm_attr_show_common(dev, attr, buf, ASPM_STATE_##_s); } \ 1585 + { return aspm_attr_show_common(dev, attr, buf, PCIE_LINK_STATE_##_s); } \ 1582 1586 \ 1583 1587 static ssize_t _f##_store(struct device *dev, \ 1584 1588 struct device_attribute *attr, \ 1585 1589 const char *buf, size_t len) \ 1586 - { return aspm_attr_store_common(dev, attr, buf, len, ASPM_STATE_##_s); } 1590 + { return aspm_attr_store_common(dev, attr, buf, len, PCIE_LINK_STATE_##_s); } 1587 1591 1588 1592 ASPM_ATTR(l0s_aspm, L0S) 1589 1593 ASPM_ATTR(l1_aspm, L1) ··· 1650 1654 struct pci_dev *pdev = to_pci_dev(dev); 1651 1655 struct pcie_link_state *link = pcie_aspm_get_link(pdev); 1652 1656 static const u8 aspm_state_map[] = { 1653 - ASPM_STATE_L0S, 1654 - ASPM_STATE_L1, 1655 - ASPM_STATE_L1_1, 1656 - ASPM_STATE_L1_2, 1657 - ASPM_STATE_L1_1_PCIPM, 1658 - ASPM_STATE_L1_2_PCIPM, 1657 + PCIE_LINK_STATE_L0S, 1658 + PCIE_LINK_STATE_L1, 1659 + PCIE_LINK_STATE_L1_1, 1660 + PCIE_LINK_STATE_L1_2, 1661 + PCIE_LINK_STATE_L1_1_PCIPM, 1662 + PCIE_LINK_STATE_L1_2_PCIPM, 1659 1663 }; 1660 1664 1661 1665 if (aspm_disabled || !link)
+17 -11
drivers/pci/pcie/edr.c
··· 32 32 int status = 0; 33 33 34 34 /* 35 - * Behavior when calling unsupported _DSM functions is undefined, 36 - * so check whether EDR_PORT_DPC_ENABLE_DSM is supported. 35 + * Per PCI Firmware r3.3, sec 4.6.12, EDR_PORT_DPC_ENABLE_DSM is 36 + * optional. Return success if it's not implemented. 37 37 */ 38 - if (!acpi_check_dsm(adev->handle, &pci_acpi_dsm_guid, 5, 38 + if (!acpi_check_dsm(adev->handle, &pci_acpi_dsm_guid, 6, 39 39 1ULL << EDR_PORT_DPC_ENABLE_DSM)) 40 40 return 0; 41 41 ··· 46 46 argv4.package.count = 1; 47 47 argv4.package.elements = &req; 48 48 49 - /* 50 - * Per Downstream Port Containment Related Enhancements ECN to PCI 51 - * Firmware Specification r3.2, sec 4.6.12, EDR_PORT_DPC_ENABLE_DSM is 52 - * optional. Return success if it's not implemented. 53 - */ 54 - obj = acpi_evaluate_dsm(adev->handle, &pci_acpi_dsm_guid, 5, 49 + obj = acpi_evaluate_dsm(adev->handle, &pci_acpi_dsm_guid, 6, 55 50 EDR_PORT_DPC_ENABLE_DSM, &argv4); 56 51 if (!obj) 57 52 return 0; ··· 80 85 u16 port; 81 86 82 87 /* 83 - * Behavior when calling unsupported _DSM functions is undefined, 84 - * so check whether EDR_PORT_DPC_ENABLE_DSM is supported. 88 + * If EDR_PORT_LOCATE_DSM is not implemented under the target of 89 + * EDR, the target is the port that experienced the containment 90 + * event (PCI Firmware r3.3, sec 4.6.13). 85 91 */ 86 92 if (!acpi_check_dsm(adev->handle, &pci_acpi_dsm_guid, 5, 87 93 1ULL << EDR_PORT_LOCATE_DSM)) ··· 96 100 if (obj->type != ACPI_TYPE_INTEGER) { 97 101 ACPI_FREE(obj); 98 102 pci_err(pdev, FW_BUG "Locate Port _DSM returned non integer\n"); 103 + return NULL; 104 + } 105 + 106 + /* 107 + * Bit 31 represents the success/failure of the operation. If bit 108 + * 31 is set, the operation failed. 109 + */ 110 + if (obj->integer.value & BIT(31)) { 111 + ACPI_FREE(obj); 112 + pci_err(pdev, "Locate Port _DSM failed\n"); 99 113 return NULL; 100 114 } 101 115
+3 -9
drivers/pci/pcie/err.c
··· 116 116 117 117 device_lock(&dev->dev); 118 118 pdrv = dev->driver; 119 - if (!pdrv || 120 - !pdrv->err_handler || 121 - !pdrv->err_handler->mmio_enabled) 119 + if (!pdrv || !pdrv->err_handler || !pdrv->err_handler->mmio_enabled) 122 120 goto out; 123 121 124 122 err_handler = pdrv->err_handler; ··· 135 137 136 138 device_lock(&dev->dev); 137 139 pdrv = dev->driver; 138 - if (!pdrv || 139 - !pdrv->err_handler || 140 - !pdrv->err_handler->slot_reset) 140 + if (!pdrv || !pdrv->err_handler || !pdrv->err_handler->slot_reset) 141 141 goto out; 142 142 143 143 err_handler = pdrv->err_handler; ··· 154 158 device_lock(&dev->dev); 155 159 pdrv = dev->driver; 156 160 if (!pci_dev_set_io_state(dev, pci_channel_io_normal) || 157 - !pdrv || 158 - !pdrv->err_handler || 159 - !pdrv->err_handler->resume) 161 + !pdrv || !pdrv->err_handler || !pdrv->err_handler->resume) 160 162 goto out; 161 163 162 164 err_handler = pdrv->err_handler;
+4 -4
drivers/pci/pcie/portdrv.c
··· 187 187 * interrupt. 188 188 */ 189 189 if ((mask & PCIE_PORT_SERVICE_PME) && pcie_pme_no_msi()) 190 - goto legacy_irq; 190 + goto intx_irq; 191 191 192 192 /* Try to use MSI-X or MSI if supported */ 193 193 if (pcie_port_enable_irq_vec(dev, irqs, mask) == 0) 194 194 return 0; 195 195 196 - legacy_irq: 197 - /* fall back to legacy IRQ */ 198 - ret = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY); 196 + intx_irq: 197 + /* fall back to INTX IRQ */ 198 + ret = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_INTX); 199 199 if (ret < 0) 200 200 return -ENODEV; 201 201
+7 -1
drivers/pci/probe.c
··· 95 95 kfree(pci_bus); 96 96 } 97 97 98 - static struct class pcibus_class = { 98 + static const struct class pcibus_class = { 99 99 .name = "pci_bus", 100 100 .dev_release = &release_pcibus_dev, 101 101 .dev_groups = pcibus_groups, ··· 1482 1482 } 1483 1483 1484 1484 out: 1485 + /* Clear errors in the Secondary Status Register */ 1486 + pci_write_config_word(dev, PCI_SEC_STATUS, 0xffff); 1487 + 1485 1488 pci_write_config_word(dev, PCI_BRIDGE_CONTROL, bctl); 1486 1489 1487 1490 pm_runtime_put(&dev->dev); ··· 2546 2543 dev->dev.dma_mask = &dev->dma_mask; 2547 2544 dev->dev.dma_parms = &dev->dma_parms; 2548 2545 dev->dev.coherent_dma_mask = 0xffffffffull; 2546 + lockdep_register_key(&dev->cfg_access_key); 2547 + lockdep_init_map(&dev->cfg_access_lock, dev_name(&dev->dev), 2548 + &dev->cfg_access_key, 0); 2549 2549 2550 2550 dma_set_max_seg_size(&dev->dev, 65536); 2551 2551 dma_set_seg_boundary(&dev->dev, 0xffffffff);
+20
drivers/pci/quirks.c
··· 6253 6253 pdev->d3cold_delay = 1000; 6254 6254 } 6255 6255 DECLARE_PCI_FIXUP_FINAL(0x5555, 0x0004, pci_fixup_d3cold_delay_1sec); 6256 + 6257 + #ifdef CONFIG_PCIEAER 6258 + static void pci_mask_replay_timer_timeout(struct pci_dev *pdev) 6259 + { 6260 + struct pci_dev *parent = pci_upstream_bridge(pdev); 6261 + u32 val; 6262 + 6263 + if (!parent || !parent->aer_cap) 6264 + return; 6265 + 6266 + pci_info(parent, "mask Replay Timer Timeout Correctable Errors due to %s hardware defect", 6267 + pci_name(pdev)); 6268 + 6269 + pci_read_config_dword(parent, parent->aer_cap + PCI_ERR_COR_MASK, &val); 6270 + val |= PCI_ERR_COR_REP_TIMER; 6271 + pci_write_config_dword(parent, parent->aer_cap + PCI_ERR_COR_MASK, val); 6272 + } 6273 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_GLI, 0x9750, pci_mask_replay_timer_timeout); 6274 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_GLI, 0x9755, pci_mask_replay_timer_timeout); 6275 + #endif
+1 -1
drivers/perf/cxl_pmu.c
··· 345 345 346 346 /* For CXL spec defined events */ 347 347 #define CXL_PMU_EVENT_CXL_ATTR(_name, _gid, _msk) \ 348 - CXL_PMU_EVENT_ATTR(_name, PCI_DVSEC_VENDOR_ID_CXL, _gid, _msk) 348 + CXL_PMU_EVENT_ATTR(_name, PCI_VENDOR_ID_CXL, _gid, _msk) 349 349 350 350 static struct attribute *cxl_pmu_event_attrs[] = { 351 351 CXL_PMU_EVENT_CXL_ATTR(clock_ticks, CXL_PMU_GID_CLOCK_TICKS, BIT(0)),
+1 -1
drivers/platform/x86/intel_ips.c
··· 1505 1505 * IRQ handler for ME interaction 1506 1506 * Note: don't use MSI here as the PCH has bugs. 1507 1507 */ 1508 - ret = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY); 1508 + ret = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_INTX); 1509 1509 if (ret < 0) 1510 1510 return ret; 1511 1511
+1 -1
drivers/scsi/arcmsr/arcmsr_hba.c
··· 1007 1007 goto msi_int1; 1008 1008 } 1009 1009 } 1010 - nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_LEGACY); 1010 + nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_INTX); 1011 1011 if (nvec < 1) 1012 1012 return FAILED; 1013 1013 msi_int1:
+1 -1
drivers/scsi/hpsa.c
··· 7509 7509 */ 7510 7510 static int hpsa_interrupt_mode(struct ctlr_info *h) 7511 7511 { 7512 - unsigned int flags = PCI_IRQ_LEGACY; 7512 + unsigned int flags = PCI_IRQ_INTX; 7513 7513 int ret; 7514 7514 7515 7515 /* Some boards advertise MSI but don't really support it */
+1 -1
drivers/scsi/ipr.c
··· 9465 9465 ipr_number_of_msix = IPR_MAX_MSIX_VECTORS; 9466 9466 } 9467 9467 9468 - irq_flag = PCI_IRQ_LEGACY; 9468 + irq_flag = PCI_IRQ_INTX; 9469 9469 if (ioa_cfg->ipr_chip->has_msi) 9470 9470 irq_flag |= PCI_IRQ_MSI | PCI_IRQ_MSIX; 9471 9471 rc = pci_alloc_irq_vectors(pdev, 1, ipr_number_of_msix, irq_flag);
+2 -2
drivers/scsi/megaraid/megaraid_sas_base.c
··· 6305 6305 } 6306 6306 6307 6307 if (!instance->msix_vectors) { 6308 - i = pci_alloc_irq_vectors(instance->pdev, 1, 1, PCI_IRQ_LEGACY); 6308 + i = pci_alloc_irq_vectors(instance->pdev, 1, 1, PCI_IRQ_INTX); 6309 6309 if (i < 0) 6310 6310 goto fail_init_adapter; 6311 6311 } ··· 7844 7844 7845 7845 if (!instance->msix_vectors) { 7846 7846 rval = pci_alloc_irq_vectors(instance->pdev, 1, 1, 7847 - PCI_IRQ_LEGACY); 7847 + PCI_IRQ_INTX); 7848 7848 if (rval < 0) 7849 7849 goto fail_reenable_msix; 7850 7850 }
+1 -1
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 3515 3515 ioc_info(ioc, "High IOPs queues : disabled\n"); 3516 3516 ioc->reply_queue_count = 1; 3517 3517 ioc->iopoll_q_start_index = ioc->reply_queue_count - 0; 3518 - r = pci_alloc_irq_vectors(ioc->pdev, 1, 1, PCI_IRQ_LEGACY); 3518 + r = pci_alloc_irq_vectors(ioc->pdev, 1, 1, PCI_IRQ_INTX); 3519 3519 if (r < 0) { 3520 3520 dfailprintk(ioc, 3521 3521 ioc_info(ioc, "pci_alloc_irq_vector(legacy) failed (r=%d) !!!\n",
+1 -1
drivers/scsi/pmcraid.c
@@ -4036,7 +4036,7 @@
 pmcraid_register_interrupt_handler(struct pmcraid_instance *pinstance)
 {
 	struct pci_dev *pdev = pinstance->pdev;
-	unsigned int irq_flag = PCI_IRQ_LEGACY, flag;
+	unsigned int irq_flag = PCI_IRQ_INTX, flag;
 	int num_hrrq, rc, i;
 	irq_handler_t isr;
 
drivers/scsi/vmw_pvscsi.c
@@ -1346,7 +1346,7 @@
 
 static int pvscsi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
-	unsigned int irq_flag = PCI_IRQ_MSIX | PCI_IRQ_MSI | PCI_IRQ_LEGACY;
+	unsigned int irq_flag = PCI_IRQ_ALL_TYPES;
 	struct pvscsi_adapter *adapter;
 	struct pvscsi_adapter adapter_temp;
 	struct Scsi_Host *host = NULL;
drivers/tty/serial/8250/8250_pci.c
@@ -4108,7 +4108,7 @@
 		rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_ALL_TYPES);
 	} else {
 		pci_dbg(dev, "Using legacy interrupts\n");
-		rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY);
+		rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_INTX);
 	}
 	if (rc < 0) {
 		kfree(priv);
drivers/usb/core/hcd-pci.c
@@ -189,7 +189,8 @@
 	 * make sure irq setup is not touched for xhci in generic hcd code
 	 */
 	if ((driver->flags & HCD_MASK) < HCD_USB3) {
-		retval = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY | PCI_IRQ_MSI);
+		retval = pci_alloc_irq_vectors(dev, 1, 1,
+					       PCI_IRQ_INTX | PCI_IRQ_MSI);
 		if (retval < 0) {
 			dev_err(&dev->dev,
 				"Found HC with no IRQ. Check BIOS/PCI %s setup!\n",
include/linux/irqdomain_defs.h
@@ -25,7 +25,6 @@
 	DOMAIN_BUS_PCI_DEVICE_MSIX,
 	DOMAIN_BUS_DMAR,
 	DOMAIN_BUS_AMDVI,
-	DOMAIN_BUS_PCI_DEVICE_IMS,
 	DOMAIN_BUS_DEVICE_MSI,
 	DOMAIN_BUS_WIRED_TO_MSI,
 };
include/linux/lockdep.h
@@ -297,6 +297,9 @@
 	.wait_type_inner = _wait_type,	\
 	.lock_type = LD_LOCK_WAIT_OVERRIDE, }
 
+#define lock_map_assert_held(l)	\
+	lockdep_assert(lock_is_held(l) != LOCK_STATE_NOT_HELD)
+
 #else /* !CONFIG_LOCKDEP */
 
 static inline void lockdep_init_task(struct task_struct *task)
@@ -387,6 +390,8 @@
 
 #define DEFINE_WAIT_OVERRIDE_MAP(_name, _wait_type)	\
 	struct lockdep_map __maybe_unused _name = {}
+
+#define lock_map_assert_held(l)		do { (void)(l); } while (0)
 
 #endif /* !LOCKDEP */
 
include/linux/msi.h
@@ -573,8 +573,6 @@
 	MSI_FLAG_MSIX_CONTIGUOUS	= (1 << 19),
 	/* PCI/MSI-X vectors can be dynamically allocated/freed post MSI-X enable */
 	MSI_FLAG_PCI_MSIX_ALLOC_DYN	= (1 << 20),
-	/* Support for PCI/IMS */
-	MSI_FLAG_PCI_IMS		= (1 << 21),
 };
 
 /**
include/linux/msi_api.h
@@ -15,7 +15,6 @@
  */
 enum msi_domain_ids {
 	MSI_DEFAULT_DOMAIN,
-	MSI_SECONDARY_DOMAIN,
 	MSI_MAX_DEVICE_IRQDOMAINS,
 };
 
include/linux/pci-epc.h
@@ -128,6 +128,8 @@
  * @group: configfs group representing the PCI EPC device
  * @lock: mutex to protect pci_epc ops
  * @function_num_map: bitmap to manage physical function number
+ * @init_complete: flag to indicate whether the EPC initialization is complete
+ *		   or not
  */
 struct pci_epc {
 	struct device			dev;
@@ -143,6 +145,7 @@
 	/* mutex to protect against concurrent access of EP controller */
 	struct mutex			lock;
 	unsigned long			function_num_map;
+	bool				init_complete;
 };
 
 /**
@@ -179,8 +182,6 @@
 /**
  * struct pci_epc_features - features supported by a EPC device per function
  * @linkup_notifier: indicate if the EPC device can notify EPF driver on link up
- * @core_init_notifier: indicate cores that can notify about their availability
- *		       for initialization
  * @msi_capable: indicate if the endpoint function has MSI capability
 * @msix_capable: indicate if the endpoint function has MSI-X capability
  * @bar: array specifying the hardware description for each BAR
@@ -188,7 +189,6 @@
  */
 struct pci_epc_features {
 	unsigned int	linkup_notifier : 1;
-	unsigned int	core_init_notifier : 1;
 	unsigned int	msi_capable : 1;
 	unsigned int	msix_capable : 1;
 	struct pci_epc_bar_desc	bar[PCI_STD_NUM_BARS];
@@ -225,6 +225,7 @@
 void pci_epc_linkup(struct pci_epc *epc);
 void pci_epc_linkdown(struct pci_epc *epc);
 void pci_epc_init_notify(struct pci_epc *epc);
+void pci_epc_notify_pending_init(struct pci_epc *epc, struct pci_epf *epf);
 void pci_epc_bme_notify(struct pci_epc *epc);
 void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
 			enum pci_epc_interface_type type);
include/linux/pci.h
@@ -51,7 +51,7 @@
 				 PCI_STATUS_PARITY)
 
 /* Number of reset methods used in pci_reset_fn_methods array in pci.c */
-#define PCI_NUM_RESET_METHODS 7
+#define PCI_NUM_RESET_METHODS 8
 
 #define PCI_RESET_PROBE		true
 #define PCI_RESET_DO_RESET	false
@@ -413,6 +413,8 @@
 	struct resource driver_exclusive_resource;	 /* driver exclusive resource ranges */
 
 	bool		match_driver;		/* Skip attaching driver */
+	struct lock_class_key cfg_access_key;
+	struct lockdep_map cfg_access_lock;
 
 	unsigned int	transparent:1;		/* Subtractive decode bridge */
 	unsigned int	io_window:1;		/* Bridge has I/O window */
@@ -1077,7 +1079,5 @@
 #define PCI_IRQ_MSIX		(1 << 2) /* Allow MSI-X interrupts */
 #define PCI_IRQ_AFFINITY	(1 << 3) /* Auto-assign affinity */
 
-#define PCI_IRQ_LEGACY		PCI_IRQ_INTX /* Deprecated! Use PCI_IRQ_INTX */
-
 /* These external functions are only available when PCI support is enabled */
 #ifdef CONFIG_PCI
@@ -1315,7 +1315,6 @@
 int pci_user_write_config_dword(struct pci_dev *dev, int where, u32 val);
 
 int __must_check pci_enable_device(struct pci_dev *dev);
-int __must_check pci_enable_device_io(struct pci_dev *dev);
 int __must_check pci_enable_device_mem(struct pci_dev *dev);
 int __must_check pci_reenable_device(struct pci_dev *);
 int __must_check pcim_enable_device(struct pci_dev *pdev);
@@ -1648,7 +1647,6 @@
  */
 #define PCI_IRQ_VIRTUAL		(1 << 4)
 
-#define PCI_IRQ_ALL_TYPES \
-	(PCI_IRQ_LEGACY | PCI_IRQ_MSI | PCI_IRQ_MSIX)
+#define PCI_IRQ_ALL_TYPES	(PCI_IRQ_INTX | PCI_IRQ_MSI | PCI_IRQ_MSIX)
 
 #include <linux/dmapool.h>
@@ -1657,8 +1655,6 @@
 	u32	vector;	/* Kernel uses to write allocated vector */
 	u16	entry;	/* Driver uses to specify entry, OS writes */
 };
-
-struct msi_domain_template;
 
 #ifdef CONFIG_PCI_MSI
 int pci_msi_vec_count(struct pci_dev *dev);
@@ -1692,11 +1688,6 @@
 void pci_free_irq_vectors(struct pci_dev *dev);
 int pci_irq_vector(struct pci_dev *dev, unsigned int nr);
 const struct cpumask *pci_irq_get_affinity(struct pci_dev *pdev, int vec);
-bool pci_create_ims_domain(struct pci_dev *pdev, const struct msi_domain_template *template,
-			   unsigned int hwsize, void *data);
-struct msi_map pci_ims_alloc_irq(struct pci_dev *pdev, union msi_instance_cookie *icookie,
-				 const struct irq_affinity_desc *affdesc);
-void pci_ims_free_irq(struct pci_dev *pdev, struct msi_map map);
 
 #else
 static inline int pci_msi_vec_count(struct pci_dev *dev) { return -ENOSYS; }
@@ -1719,7 +1710,7 @@
 				      unsigned int max_vecs, unsigned int flags,
 				      struct irq_affinity *aff_desc)
 {
-	if ((flags & PCI_IRQ_LEGACY) && min_vecs == 1 && dev->irq)
+	if ((flags & PCI_IRQ_INTX) && min_vecs == 1 && dev->irq)
 		return 1;
 	return -ENOSPC;
 }
@@ -1760,25 +1751,6 @@
 {
 	return cpu_possible_mask;
 }
-
-static inline bool pci_create_ims_domain(struct pci_dev *pdev,
-					 const struct msi_domain_template *template,
-					 unsigned int hwsize, void *data)
-{ return false; }
-
-static inline struct msi_map pci_ims_alloc_irq(struct pci_dev *pdev,
-					       union msi_instance_cookie *icookie,
-					       const struct irq_affinity_desc *affdesc)
-{
-	struct msi_map map = { .index = -ENOSYS, };
-
-	return map;
-}
-
-static inline void pci_ims_free_irq(struct pci_dev *pdev, struct msi_map map)
-{
-}
-
 #endif
 
 /**
@@ -1821,17 +1793,21 @@
 #define pcie_ports_native	false
 #endif
 
-#define PCIE_LINK_STATE_L0S		BIT(0)
-#define PCIE_LINK_STATE_L1		BIT(1)
-#define PCIE_LINK_STATE_CLKPM		BIT(2)
-#define PCIE_LINK_STATE_L1_1		BIT(3)
-#define PCIE_LINK_STATE_L1_2		BIT(4)
-#define PCIE_LINK_STATE_L1_1_PCIPM	BIT(5)
-#define PCIE_LINK_STATE_L1_2_PCIPM	BIT(6)
-#define PCIE_LINK_STATE_ALL		(PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1 |\
-					 PCIE_LINK_STATE_CLKPM | PCIE_LINK_STATE_L1_1 |\
-					 PCIE_LINK_STATE_L1_2 | PCIE_LINK_STATE_L1_1_PCIPM |\
+#define PCIE_LINK_STATE_L0S		(BIT(0) | BIT(1)) /* Upstr/dwnstr L0s */
+#define PCIE_LINK_STATE_L1		BIT(2)	/* L1 state */
+#define PCIE_LINK_STATE_L1_1		BIT(3)	/* ASPM L1.1 state */
+#define PCIE_LINK_STATE_L1_2		BIT(4)	/* ASPM L1.2 state */
+#define PCIE_LINK_STATE_L1_1_PCIPM	BIT(5)	/* PCI-PM L1.1 state */
+#define PCIE_LINK_STATE_L1_2_PCIPM	BIT(6)	/* PCI-PM L1.2 state */
+#define PCIE_LINK_STATE_ASPM_ALL	(PCIE_LINK_STATE_L0S	|\
+					 PCIE_LINK_STATE_L1	|\
+					 PCIE_LINK_STATE_L1_1	|\
+					 PCIE_LINK_STATE_L1_2	|\
+					 PCIE_LINK_STATE_L1_1_PCIPM |\
 					 PCIE_LINK_STATE_L1_2_PCIPM)
+#define PCIE_LINK_STATE_CLKPM		BIT(7)
+#define PCIE_LINK_STATE_ALL		(PCIE_LINK_STATE_ASPM_ALL |\
+					 PCIE_LINK_STATE_CLKPM)
 
 #ifdef CONFIG_PCIEASPM
 int pci_disable_link_state(struct pci_dev *pdev, int state);
@@ -2014,10 +1990,9 @@
 static inline void pci_unregister_driver(struct pci_driver *drv) { }
 static inline u8 pci_find_capability(struct pci_dev *dev, int cap)
 { return 0; }
-static inline int pci_find_next_capability(struct pci_dev *dev, u8 post,
-					   int cap)
+static inline u8 pci_find_next_capability(struct pci_dev *dev, u8 post, int cap)
 { return 0; }
-static inline int pci_find_ext_capability(struct pci_dev *dev, int cap)
+static inline u16 pci_find_ext_capability(struct pci_dev *dev, int cap)
 { return 0; }
 
 static inline u64 pci_get_dsn(struct pci_dev *dev)
@@ -2519,7 +2494,12 @@
 
 static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
 {
-	return dev->error_state == pci_channel_io_perm_failure;
+	/*
+	 * error_state is set in pci_dev_set_io_state() using xchg/cmpxchg()
+	 * and read w/o common lock. READ_ONCE() ensures compiler cannot cache
+	 * the value (e.g. inside the loop in pci_dev_wait()).
+	 */
+	return READ_ONCE(dev->error_state) == pci_channel_io_perm_failure;
 }
 
 void pci_request_acs(void);
include/linux/pci_ids.h
@@ -2608,6 +2608,8 @@
 
 #define PCI_VENDOR_ID_ALIBABA		0x1ded
 
+#define PCI_VENDOR_ID_CXL		0x1e98
+
 #define PCI_VENDOR_ID_TEHUTI		0x1fc9
 #define PCI_DEVICE_ID_TEHUTI_3009	0x3009
 #define PCI_DEVICE_ID_TEHUTI_3010	0x3010
include/uapi/linux/pci_regs.h
@@ -1144,8 +1144,14 @@
 #define PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH		0x0003ffff
 
 #define PCI_DOE_DATA_OBJECT_DISC_REQ_3_INDEX		0x000000ff
+#define PCI_DOE_DATA_OBJECT_DISC_REQ_3_VER		0x0000ff00
 #define PCI_DOE_DATA_OBJECT_DISC_RSP_3_VID		0x0000ffff
 #define PCI_DOE_DATA_OBJECT_DISC_RSP_3_PROTOCOL		0x00ff0000
 #define PCI_DOE_DATA_OBJECT_DISC_RSP_3_NEXT_INDEX	0xff000000
+
+/* Compute Express Link (CXL r3.1, sec 8.1.5) */
+#define PCI_DVSEC_CXL_PORT			3
+#define PCI_DVSEC_CXL_PORT_CTL			0x0c
+#define PCI_DVSEC_CXL_PORT_CTL_UNMASK_SBR	0x00000001
 
 #endif /* LINUX_PCI_REGS_H */
sound/soc/intel/avs/core.c
@@ -339,7 +339,7 @@
 	int ret;
 
 	/* request one and check that we only got one interrupt */
-	ret = pci_alloc_irq_vectors(pci, 1, 1, PCI_IRQ_MSI | PCI_IRQ_LEGACY);
+	ret = pci_alloc_irq_vectors(pci, 1, 1, PCI_IRQ_MSI | PCI_IRQ_INTX);
 	if (ret != 1) {
 		dev_err(adev->dev, "Failed to allocate IRQ vector: %d\n", ret);
 		return ret;