
Merge tag 'pci-v6.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:

- Use acpi_evaluate_dsm_typed() instead of open-coding _DSM
evaluation to learn device characteristics (Andy Shevchenko)

- Tidy multi-function header checks using new PCI_HEADER_TYPE_MASK
definition (Ilpo Järvinen)

- Simplify config access error checking in various drivers (Ilpo
Järvinen)

- Use pcie_capability_clear_word() (not
pcie_capability_clear_and_set_word()) when only clearing (Ilpo
Järvinen)

- Add pci_get_base_class() to simplify finding devices using base
class only (ignoring subclass and programming interface) (Sui
Jingfeng)

- Add pci_is_vga(), which includes ancient PCI_CLASS_NOT_DEFINED_VGA
devices from before the Class Code was added to PCI (Sui Jingfeng)

- Use pci_is_vga() for vgaarb, sysfs "boot_vga", virtio, qxl to
include ancient VGA devices (Sui Jingfeng)

Resource management:

- Make pci_assign_unassigned_resources() non-init because sparc uses
it after init (Randy Dunlap)

Driver binding:

- Retain .remove() and .probe() callbacks (previously __init) because
sysfs may cause them to be called later (Uwe Kleine-König)

- Prevent xHCI driver from claiming AMD VanGogh USB3 DRD device, so
it can be claimed by dwc3 instead (Vicki Pfau)

PCI device hotplug:

- Add Ampere Altra Attention Indicator extension driver for acpiphp
(D Scott Phillips)

Power management:

- Quirk VideoPropulsion Torrent QN16e with longer delay after reset
(Lukas Wunner)

- Prevent users from overriding drivers that say we shouldn't use
D3cold (Lukas Wunner)

- Avoid PME from D3hot/D3cold for AMD Rembrandt and Phoenix USB4
because wakeup interrupts from those states don't work if amd-pmc
has put the platform in a hardware sleep state (Mario Limonciello)

IOMMU:

- Disable ATS for Intel IPU E2000 devices with invalidation message
endianness erratum (Bartosz Pawlowski)

Error handling:

- Factor out interrupt enable/disable into helpers (Kai-Heng Feng)

Peer-to-peer DMA:

- Fix flexible-array usage in struct pci_p2pdma_pagemap in case we
ever use pagemaps with multiple entries (Gustavo A. R. Silva)

ASPM:

- Revert a change that broke when drivers disabled L1 and users later
enabled an L1.x substate via sysfs, and fix a similar issue when
users disabled L1 via sysfs (Heiner Kallweit)

Endpoint framework:

- Fix double free in __pci_epc_create() (Dan Carpenter)

- Use IS_ERR_OR_NULL() to simplify endpoint core (Ruan Jinjie)

Cadence PCIe controller driver:

- Drop unused "is_rc" member (Li Chen)

Freescale Layerscape PCIe controller driver:

- Enable 64-bit addressing in endpoint mode (Guanhua Gao)

Intel VMD host bridge driver:

- Fix multi-function header check (Ilpo Järvinen)

Microsoft Hyper-V host bridge driver:

- Annotate struct hv_dr_state with __counted_by (Kees Cook)

NVIDIA Tegra194 PCIe controller driver:

- Drop setting of LNKCAP_MLW (max link width) since dw_pcie_setup()
already does this via dw_pcie_link_set_max_link_width() (Yoshihiro
Shimoda)

Qualcomm PCIe controller driver:

- Use PCIE_SPEED2MBS_ENC() to simplify encoding of link speed
(Manivannan Sadhasivam)

- Add a .write_dbi2() callback so DBI2 register writes, e.g., for
setting the BAR size, work correctly (Manivannan Sadhasivam)

- Enable ASPM for platforms that use 1.9.0 ops, because the PCI core
doesn't enable ASPM states that haven't been enabled by the
firmware (Manivannan Sadhasivam)

Renesas R-Car Gen4 PCIe controller driver:

- Add DesignWare core support (set max link width, EDMA_UNROLL flag,
.pre_init(), .deinit(), etc) for use by R-Car Gen4 driver
(Yoshihiro Shimoda)

- Add driver and DT schema for DesignWare-based Renesas R-Car Gen4
controller in both host and endpoint mode (Yoshihiro Shimoda)

Xilinx NWL PCIe controller driver:

- Update ECAM size to support 256 buses (Thippeswamy Havalige)

- Stop setting bridge primary/secondary/subordinate bus numbers,
since PCI core does this (Thippeswamy Havalige)

Xilinx XDMA controller driver:

- Add driver and DT schema for Zynq UltraScale+ MPSoCs devices with
Xilinx XDMA Soft IP (Thippeswamy Havalige)

Miscellaneous:

- Use FIELD_GET()/FIELD_PREP() to simplify and reduce use of _SHIFT
macros (Ilpo Järvinen, Bjorn Helgaas)

- Remove logic_outb(), _outw(), _outl() duplicate declarations (John
Sanpe)

- Replace unnecessary UTF-8 in Kconfig help text because menuconfig
doesn't render it correctly (Liu Song)"

* tag 'pci-v6.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (102 commits)
PCI: qcom-ep: Add dedicated callback for writing to DBI2 registers
PCI: Simplify pcie_capability_clear_and_set_word() to ..._clear_word()
PCI: endpoint: Fix double free in __pci_epc_create()
PCI: xilinx-xdma: Add Xilinx XDMA Root Port driver
dt-bindings: PCI: xilinx-xdma: Add schemas for Xilinx XDMA PCIe Root Port Bridge
PCI: xilinx-cpm: Move IRQ definitions to a common header
PCI: xilinx-nwl: Modify ECAM size to enable support for 256 buses
PCI: xilinx-nwl: Rename the NWL_ECAM_VALUE_DEFAULT macro
dt-bindings: PCI: xilinx-nwl: Modify ECAM size in the DT example
PCI: xilinx-nwl: Remove redundant code that sets Type 1 header fields
PCI: hotplug: Add Ampere Altra Attention Indicator extension driver
PCI/AER: Factor out interrupt toggling into helpers
PCI: acpiphp: Allow built-in drivers for Attention Indicators
PCI/portdrv: Use FIELD_GET()
PCI/VC: Use FIELD_GET()
PCI/PTM: Use FIELD_GET()
PCI/PME: Use FIELD_GET()
PCI/ATS: Use FIELD_GET()
PCI/ATS: Show PASID Capability register width in bitmasks
PCI/ASPM: Fix L1 substate handling in aspm_attr_store_common()
...

+2613 -511
+115
Documentation/devicetree/bindings/pci/rcar-gen4-pci-ep.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + # Copyright (C) 2022-2023 Renesas Electronics Corp. 3 + %YAML 1.2 4 + --- 5 + $id: http://devicetree.org/schemas/pci/rcar-gen4-pci-ep.yaml# 6 + $schema: http://devicetree.org/meta-schemas/core.yaml# 7 + 8 + title: Renesas R-Car Gen4 PCIe Endpoint 9 + 10 + maintainers: 11 + - Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> 12 + 13 + allOf: 14 + - $ref: snps,dw-pcie-ep.yaml# 15 + 16 + properties: 17 + compatible: 18 + items: 19 + - const: renesas,r8a779f0-pcie-ep # R-Car S4-8 20 + - const: renesas,rcar-gen4-pcie-ep # R-Car Gen4 21 + 22 + reg: 23 + maxItems: 7 24 + 25 + reg-names: 26 + items: 27 + - const: dbi 28 + - const: dbi2 29 + - const: atu 30 + - const: dma 31 + - const: app 32 + - const: phy 33 + - const: addr_space 34 + 35 + interrupts: 36 + maxItems: 3 37 + 38 + interrupt-names: 39 + items: 40 + - const: dma 41 + - const: sft_ce 42 + - const: app 43 + 44 + clocks: 45 + maxItems: 2 46 + 47 + clock-names: 48 + items: 49 + - const: core 50 + - const: ref 51 + 52 + power-domains: 53 + maxItems: 1 54 + 55 + resets: 56 + maxItems: 1 57 + 58 + reset-names: 59 + items: 60 + - const: pwr 61 + 62 + max-link-speed: 63 + maximum: 4 64 + 65 + num-lanes: 66 + maximum: 4 67 + 68 + max-functions: 69 + maximum: 2 70 + 71 + required: 72 + - compatible 73 + - reg 74 + - reg-names 75 + - interrupts 76 + - interrupt-names 77 + - clocks 78 + - clock-names 79 + - power-domains 80 + - resets 81 + - reset-names 82 + 83 + unevaluatedProperties: false 84 + 85 + examples: 86 + - | 87 + #include <dt-bindings/clock/r8a779f0-cpg-mssr.h> 88 + #include <dt-bindings/interrupt-controller/arm-gic.h> 89 + #include <dt-bindings/power/r8a779f0-sysc.h> 90 + 91 + soc { 92 + #address-cells = <2>; 93 + #size-cells = <2>; 94 + 95 + pcie0_ep: pcie-ep@e65d0000 { 96 + compatible = "renesas,r8a779f0-pcie-ep", "renesas,rcar-gen4-pcie-ep"; 97 + reg = <0 0xe65d0000 0 0x2000>, <0 0xe65d2000 0 0x1000>, 98 + <0 0xe65d3000 0 0x2000>, <0 0xe65d5000 0 0x1200>, 99 + <0 0xe65d6200 0 0x0e00>, <0 0xe65d7000 0 0x0400>, 100 + <0 0xfe000000 0 0x400000>; 101 + reg-names = "dbi", "dbi2", "atu", "dma", "app", "phy", "addr_space"; 102 + interrupts = <GIC_SPI 417 IRQ_TYPE_LEVEL_HIGH>, 103 + <GIC_SPI 418 IRQ_TYPE_LEVEL_HIGH>, 104 + <GIC_SPI 422 IRQ_TYPE_LEVEL_HIGH>; 105 + interrupt-names = "dma", "sft_ce", "app"; 106 + clocks = <&cpg CPG_MOD 624>, <&pcie0_clkref>; 107 + clock-names = "core", "ref"; 108 + power-domains = <&sysc R8A779F0_PD_ALWAYS_ON>; 109 + resets = <&cpg 624>; 110 + reset-names = "pwr"; 111 + max-link-speed = <4>; 112 + num-lanes = <2>; 113 + max-functions = /bits/ 8 <2>; 114 + }; 115 + };
+127
Documentation/devicetree/bindings/pci/rcar-gen4-pci-host.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + # Copyright (C) 2022-2023 Renesas Electronics Corp. 3 + %YAML 1.2 4 + --- 5 + $id: http://devicetree.org/schemas/pci/rcar-gen4-pci-host.yaml# 6 + $schema: http://devicetree.org/meta-schemas/core.yaml# 7 + 8 + title: Renesas R-Car Gen4 PCIe Host 9 + 10 + maintainers: 11 + - Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> 12 + 13 + allOf: 14 + - $ref: snps,dw-pcie.yaml# 15 + 16 + properties: 17 + compatible: 18 + items: 19 + - const: renesas,r8a779f0-pcie # R-Car S4-8 20 + - const: renesas,rcar-gen4-pcie # R-Car Gen4 21 + 22 + reg: 23 + maxItems: 7 24 + 25 + reg-names: 26 + items: 27 + - const: dbi 28 + - const: dbi2 29 + - const: atu 30 + - const: dma 31 + - const: app 32 + - const: phy 33 + - const: config 34 + 35 + interrupts: 36 + maxItems: 4 37 + 38 + interrupt-names: 39 + items: 40 + - const: msi 41 + - const: dma 42 + - const: sft_ce 43 + - const: app 44 + 45 + clocks: 46 + maxItems: 2 47 + 48 + clock-names: 49 + items: 50 + - const: core 51 + - const: ref 52 + 53 + power-domains: 54 + maxItems: 1 55 + 56 + resets: 57 + maxItems: 1 58 + 59 + reset-names: 60 + items: 61 + - const: pwr 62 + 63 + max-link-speed: 64 + maximum: 4 65 + 66 + num-lanes: 67 + maximum: 4 68 + 69 + required: 70 + - compatible 71 + - reg 72 + - reg-names 73 + - interrupts 74 + - interrupt-names 75 + - clocks 76 + - clock-names 77 + - power-domains 78 + - resets 79 + - reset-names 80 + 81 + unevaluatedProperties: false 82 + 83 + examples: 84 + - | 85 + #include <dt-bindings/clock/r8a779f0-cpg-mssr.h> 86 + #include <dt-bindings/interrupt-controller/arm-gic.h> 87 + #include <dt-bindings/power/r8a779f0-sysc.h> 88 + 89 + soc { 90 + #address-cells = <2>; 91 + #size-cells = <2>; 92 + 93 + pcie: pcie@e65d0000 { 94 + compatible = "renesas,r8a779f0-pcie", "renesas,rcar-gen4-pcie"; 95 + reg = <0 0xe65d0000 0 0x1000>, <0 0xe65d2000 0 0x0800>, 96 + <0 0xe65d3000 0 0x2000>, <0 0xe65d5000 0 0x1200>, 97 + <0 0xe65d6200 0 0x0e00>, <0 0xe65d7000 0 0x0400>, 98 + <0 0xfe000000 0 0x400000>; 99 + reg-names = "dbi", "dbi2", "atu", "dma", "app", "phy", "config"; 100 + interrupts = <GIC_SPI 416 IRQ_TYPE_LEVEL_HIGH>, 101 + <GIC_SPI 417 IRQ_TYPE_LEVEL_HIGH>, 102 + <GIC_SPI 418 IRQ_TYPE_LEVEL_HIGH>, 103 + <GIC_SPI 422 IRQ_TYPE_LEVEL_HIGH>; 104 + interrupt-names = "msi", "dma", "sft_ce", "app"; 105 + clocks = <&cpg CPG_MOD 624>, <&pcie0_clkref>; 106 + clock-names = "core", "ref"; 107 + power-domains = <&sysc R8A779F0_PD_ALWAYS_ON>; 108 + resets = <&cpg 624>; 109 + reset-names = "pwr"; 110 + max-link-speed = <4>; 111 + num-lanes = <2>; 112 + #address-cells = <3>; 113 + #size-cells = <2>; 114 + bus-range = <0x00 0xff>; 115 + device_type = "pci"; 116 + ranges = <0x01000000 0 0x00000000 0 0xfe000000 0 0x00400000>, 117 + <0x02000000 0 0x30000000 0 0x30000000 0 0x10000000>; 118 + dma-ranges = <0x42000000 0 0x00000000 0 0x00000000 1 0x00000000>; 119 + #interrupt-cells = <1>; 120 + interrupt-map-mask = <0 0 0 7>; 121 + interrupt-map = <0 0 0 1 &gic GIC_SPI 416 IRQ_TYPE_LEVEL_HIGH>, 122 + <0 0 0 2 &gic GIC_SPI 416 IRQ_TYPE_LEVEL_HIGH>, 123 + <0 0 0 3 &gic GIC_SPI 416 IRQ_TYPE_LEVEL_HIGH>, 124 + <0 0 0 4 &gic GIC_SPI 416 IRQ_TYPE_LEVEL_HIGH>; 125 + snps,enable-cdm-check; 126 + }; 127 + };
+2 -2
Documentation/devicetree/bindings/pci/snps,dw-pcie-common.yaml
··· 33 33 specific for each activated function, while the rest of the sub-spaces 34 34 are common for all of them (if there are more than one). 35 35 minItems: 2 36 - maxItems: 6 36 + maxItems: 7 37 37 38 38 reg-names: 39 39 minItems: 2 40 - maxItems: 6 40 + maxItems: 7 41 41 42 42 interrupts: 43 43 description:
+2 -2
Documentation/devicetree/bindings/pci/snps,dw-pcie-ep.yaml
··· 33 33 normal controller functioning. iATU memory IO region is also required 34 34 if the space is unrolled (IP-core version >= 4.80a). 35 35 minItems: 2 36 - maxItems: 5 36 + maxItems: 7 37 37 38 38 reg-names: 39 39 minItems: 2 40 - maxItems: 5 40 + maxItems: 7 41 41 items: 42 42 oneOf: 43 43 - description:
+2 -2
Documentation/devicetree/bindings/pci/snps,dw-pcie.yaml
··· 42 42 are required for the normal controller work. iATU memory IO region is 43 43 also required if the space is unrolled (IP-core version >= 4.80a). 44 44 minItems: 2 45 - maxItems: 5 45 + maxItems: 7 46 46 47 47 reg-names: 48 48 minItems: 2 49 - maxItems: 5 49 + maxItems: 7 50 50 items: 51 51 oneOf: 52 52 - description:
+1 -1
Documentation/devicetree/bindings/pci/xlnx,nwl-pcie.yaml
··· 118 118 compatible = "xlnx,nwl-pcie-2.11"; 119 119 reg = <0x0 0xfd0e0000 0x0 0x1000>, 120 120 <0x0 0xfd480000 0x0 0x1000>, 121 - <0x80 0x00000000 0x0 0x1000000>; 121 + <0x80 0x00000000 0x0 0x10000000>; 122 122 reg-names = "breg", "pcireg", "cfg"; 123 123 ranges = <0x02000000 0x0 0xe0000000 0x0 0xe0000000 0x0 0x10000000>, 124 124 <0x43000000 0x00000006 0x0 0x00000006 0x0 0x00000002 0x0>;
+114
Documentation/devicetree/bindings/pci/xlnx,xdma-host.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/xlnx,xdma-host.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Xilinx XDMA PL PCIe Root Port Bridge 8 + 9 + maintainers: 10 + - Thippeswamy Havalige <thippeswamy.havalige@amd.com> 11 + 12 + allOf: 13 + - $ref: /schemas/pci/pci-bus.yaml# 14 + 15 + properties: 16 + compatible: 17 + const: xlnx,xdma-host-3.00 18 + 19 + reg: 20 + maxItems: 1 21 + 22 + ranges: 23 + maxItems: 2 24 + 25 + interrupts: 26 + items: 27 + - description: interrupt asserted when miscellaneous interrupt is received. 28 + - description: msi0 interrupt asserted when an MSI is received. 29 + - description: msi1 interrupt asserted when an MSI is received. 30 + 31 + interrupt-names: 32 + items: 33 + - const: misc 34 + - const: msi0 35 + - const: msi1 36 + 37 + interrupt-map-mask: 38 + items: 39 + - const: 0 40 + - const: 0 41 + - const: 0 42 + - const: 7 43 + 44 + interrupt-map: 45 + maxItems: 4 46 + 47 + "#interrupt-cells": 48 + const: 1 49 + 50 + interrupt-controller: 51 + description: identifies the node as an interrupt controller 52 + type: object 53 + properties: 54 + interrupt-controller: true 55 + 56 + "#address-cells": 57 + const: 0 58 + 59 + "#interrupt-cells": 60 + const: 1 61 + 62 + required: 63 + - interrupt-controller 64 + - "#address-cells" 65 + - "#interrupt-cells" 66 + 67 + additionalProperties: false 68 + 69 + required: 70 + - compatible 71 + - reg 72 + - ranges 73 + - interrupts 74 + - interrupt-map 75 + - interrupt-map-mask 76 + - "#interrupt-cells" 77 + - interrupt-controller 78 + 79 + unevaluatedProperties: false 80 + 81 + examples: 82 + 83 + - | 84 + #include <dt-bindings/interrupt-controller/arm-gic.h> 85 + #include <dt-bindings/interrupt-controller/irq.h> 86 + 87 + soc { 88 + #address-cells = <2>; 89 + #size-cells = <2>; 90 + pcie@a0000000 { 91 + compatible = "xlnx,xdma-host-3.00"; 92 + reg = <0x0 0xa0000000 0x0 0x10000000>; 93 + ranges = <0x2000000 0x0 0xb0000000 0x0 0xb0000000 0x0 0x1000000>, 94 + <0x43000000 0x5 0x0 0x5 0x0 0x0 0x1000000>; 95 + #address-cells = <3>; 96 + #size-cells = <2>; 97 + #interrupt-cells = <1>; 98 + device_type = "pci"; 99 + interrupt-parent = <&gic>; 100 + interrupts = <GIC_SPI 89 IRQ_TYPE_LEVEL_HIGH>, <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>, 101 + <GIC_SPI 91 IRQ_TYPE_LEVEL_HIGH>; 102 + interrupt-names = "misc", "msi0", "msi1"; 103 + interrupt-map-mask = <0x0 0x0 0x0 0x7>; 104 + interrupt-map = <0 0 0 1 &pcie_intc_0 0>, 105 + <0 0 0 2 &pcie_intc_0 1>, 106 + <0 0 0 3 &pcie_intc_0 2>, 107 + <0 0 0 4 &pcie_intc_0 3>; 108 + pcie_intc_0: interrupt-controller { 109 + #address-cells = <0>; 110 + #interrupt-cells = <1>; 111 + interrupt-controller; 112 + }; 113 + }; 114 + };
+1
MAINTAINERS
··· 16526 16526 S: Maintained 16527 16527 F: Documentation/devicetree/bindings/pci/*rcar* 16528 16528 F: drivers/pci/controller/*rcar* 16529 + F: drivers/pci/controller/dwc/*rcar* 16529 16530 16530 16531 PCI DRIVER FOR SAMSUNG EXYNOS 16531 16532 M: Jingoo Han <jingoohan1@gmail.com>
+9 -8
arch/alpha/kernel/sys_miata.c
··· 183 183 the 2nd 8259 controller. So we have to check for it first. */ 184 184 185 185 if((slot == 7) && (PCI_FUNC(dev->devfn) == 3)) { 186 - u8 irq=0; 187 186 struct pci_dev *pdev = pci_get_slot(dev->bus, dev->devfn & ~7); 188 - if(pdev == NULL || pci_read_config_byte(pdev, 0x40,&irq) != PCIBIOS_SUCCESSFUL) { 189 - pci_dev_put(pdev); 187 + u8 irq = 0; 188 + int ret; 189 + 190 + if (!pdev) 190 191 return -1; 191 - } 192 - else { 193 - pci_dev_put(pdev); 194 - return irq; 195 - } 192 + 193 + ret = pci_read_config_byte(pdev, 0x40, &irq); 194 + pci_dev_put(pdev); 195 + 196 + return ret == PCIBIOS_SUCCESSFUL ? irq : -1; 196 197 } 197 198 198 199 return COMMON_TABLE_LOOKUP;
+6 -5
arch/sh/drivers/pci/common.c
··· 50 50 int top_bus, int current_bus) 51 51 { 52 52 u32 pci_devfn; 53 - unsigned short vid; 53 + u16 vid; 54 54 int cap66 = -1; 55 55 u16 stat; 56 + int ret; 56 57 57 58 pr_info("PCI: Checking 66MHz capabilities...\n"); 58 59 59 60 for (pci_devfn = 0; pci_devfn < 0xff; pci_devfn++) { 60 61 if (PCI_FUNC(pci_devfn)) 61 62 continue; 62 - if (early_read_config_word(hose, top_bus, current_bus, 63 - pci_devfn, PCI_VENDOR_ID, &vid) != 64 - PCIBIOS_SUCCESSFUL) 63 + ret = early_read_config_word(hose, top_bus, current_bus, 64 + pci_devfn, PCI_VENDOR_ID, &vid); 65 + if (ret != PCIBIOS_SUCCESSFUL) 65 66 continue; 66 - if (vid == 0xffff) 67 + if (PCI_POSSIBLE_ERROR(vid)) 67 68 continue; 68 69 69 70 /* check 66MHz capability */
+59
arch/x86/pci/fixup.c
··· 3 3 * Exceptions for specific devices. Usually work-arounds for fatal design flaws. 4 4 */ 5 5 6 + #include <linux/bitfield.h> 6 7 #include <linux/delay.h> 7 8 #include <linux/dmi.h> 8 9 #include <linux/pci.h> 10 + #include <linux/suspend.h> 9 11 #include <linux/vgaarb.h> 10 12 #include <asm/amd_nb.h> 11 13 #include <asm/hpet.h> ··· 906 904 } 907 905 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x5ad6, chromeos_save_apl_pci_l1ss_capability); 908 906 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x5ad6, chromeos_fixup_apl_pci_l1ss_capability); 907 + 908 + #ifdef CONFIG_SUSPEND 909 + /* 910 + * Root Ports on some AMD SoCs advertise PME_Support for D3hot and D3cold, but 911 + * if the SoC is put into a hardware sleep state by the amd-pmc driver, the 912 + * Root Ports don't generate wakeup interrupts for USB devices. 913 + * 914 + * When suspending, remove D3hot and D3cold from the PME_Support advertised 915 + * by the Root Port so we don't use those states if we're expecting wakeup 916 + * interrupts. Restore the advertised PME_Support when resuming. 917 + */ 918 + static void amd_rp_pme_suspend(struct pci_dev *dev) 919 + { 920 + struct pci_dev *rp; 921 + 922 + /* 923 + * PM_SUSPEND_ON means we're doing runtime suspend, which means 924 + * amd-pmc will not be involved so PMEs during D3 work as advertised. 925 + * 926 + * The PMEs *do* work if amd-pmc doesn't put the SoC in the hardware 927 + * sleep state, but we assume amd-pmc is always present. 928 + */ 929 + if (pm_suspend_target_state == PM_SUSPEND_ON) 930 + return; 931 + 932 + rp = pcie_find_root_port(dev); 933 + if (!rp->pm_cap) 934 + return; 935 + 936 + rp->pme_support &= ~((PCI_PM_CAP_PME_D3hot|PCI_PM_CAP_PME_D3cold) >> 937 + PCI_PM_CAP_PME_SHIFT); 938 + dev_info_once(&rp->dev, "quirk: disabling D3cold for suspend\n"); 939 + } 940 + 941 + static void amd_rp_pme_resume(struct pci_dev *dev) 942 + { 943 + struct pci_dev *rp; 944 + u16 pmc; 945 + 946 + rp = pcie_find_root_port(dev); 947 + if (!rp->pm_cap) 948 + return; 949 + 950 + pci_read_config_word(rp, rp->pm_cap + PCI_PM_PMC, &pmc); 951 + rp->pme_support = FIELD_GET(PCI_PM_CAP_PME_MASK, pmc); 952 + } 953 + /* Rembrandt (yellow_carp) */ 954 + DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x162e, amd_rp_pme_suspend); 955 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x162e, amd_rp_pme_resume); 956 + DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x162f, amd_rp_pme_suspend); 957 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x162f, amd_rp_pme_resume); 958 + /* Phoenix (pink_sardine) */ 959 + DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x1668, amd_rp_pme_suspend); 960 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x1668, amd_rp_pme_resume); 961 + DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x1669, amd_rp_pme_suspend); 962 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x1669, amd_rp_pme_resume); 963 + #endif /* CONFIG_SUSPEND */
+11 -9
drivers/atm/iphase.c
··· 2291 2291 static int reset_sar(struct atm_dev *dev) 2292 2292 { 2293 2293 IADEV *iadev; 2294 - int i, error = 1; 2294 + int i, error; 2295 2295 unsigned int pci[64]; 2296 2296 2297 2297 iadev = INPH_IA_DEV(dev); 2298 - for(i=0; i<64; i++) 2299 - if ((error = pci_read_config_dword(iadev->pci, 2300 - i*4, &pci[i])) != PCIBIOS_SUCCESSFUL) 2301 - return error; 2298 + for (i = 0; i < 64; i++) { 2299 + error = pci_read_config_dword(iadev->pci, i * 4, &pci[i]); 2300 + if (error != PCIBIOS_SUCCESSFUL) 2301 + return error; 2302 + } 2302 2303 writel(0, iadev->reg+IPHASE5575_EXT_RESET); 2303 - for(i=0; i<64; i++) 2304 - if ((error = pci_write_config_dword(iadev->pci, 2305 - i*4, pci[i])) != PCIBIOS_SUCCESSFUL) 2306 - return error; 2304 + for (i = 0; i < 64; i++) { 2305 + error = pci_write_config_dword(iadev->pci, i * 4, pci[i]); 2306 + if (error != PCIBIOS_SUCCESSFUL) 2307 + return error; 2308 + } 2307 2309 udelay(5); 2308 2310 return 0; 2309 2311 }
+4 -7
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
··· 1393 1393 struct pci_dev *pdev = NULL; 1394 1394 int ret; 1395 1395 1396 - while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) { 1397 - if (!atif->handle) 1398 - amdgpu_atif_pci_probe_handle(pdev); 1399 - if (!atcs->handle) 1400 - amdgpu_atcs_pci_probe_handle(pdev); 1401 - } 1396 + while ((pdev = pci_get_base_class(PCI_BASE_CLASS_DISPLAY, pdev))) { 1397 + if ((pdev->class != PCI_CLASS_DISPLAY_VGA << 8) && 1398 + (pdev->class != PCI_CLASS_DISPLAY_OTHER << 8)) 1399 + continue; 1402 1400 1403 - while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) { 1404 1401 if (!atif->handle) 1405 1402 amdgpu_atif_pci_probe_handle(pdev); 1406 1403 if (!atcs->handle)
+5 -15
drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
··· 287 287 if (adev->flags & AMD_IS_APU) 288 288 return false; 289 289 290 - while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) { 290 + while ((pdev = pci_get_base_class(PCI_BASE_CLASS_DISPLAY, pdev))) { 291 + if ((pdev->class != PCI_CLASS_DISPLAY_VGA << 8) && 292 + (pdev->class != PCI_CLASS_DISPLAY_OTHER << 8)) 293 + continue; 294 + 291 295 dhandle = ACPI_HANDLE(&pdev->dev); 292 296 if (!dhandle) 293 297 continue; ··· 300 296 if (ACPI_SUCCESS(status)) { 301 297 found = true; 302 298 break; 303 - } 304 - } 305 - 306 - if (!found) { 307 - while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) { 308 - dhandle = ACPI_HANDLE(&pdev->dev); 309 - if (!dhandle) 310 - continue; 311 - 312 - status = acpi_get_handle(dhandle, "ATRM", &atrm_handle); 313 - if (ACPI_SUCCESS(status)) { 314 - found = true; 315 - break; 316 - } 317 299 } 318 300 } 319 301
+4 -7
drivers/gpu/drm/nouveau/nouveau_acpi.c
··· 284 284 printk("MXM: GUID detected in BIOS\n"); 285 285 286 286 /* now do DSM detection */ 287 - while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) { 288 - vga_count++; 287 + while ((pdev = pci_get_base_class(PCI_BASE_CLASS_DISPLAY, pdev))) { 288 + if ((pdev->class != PCI_CLASS_DISPLAY_VGA << 8) && 289 + (pdev->class != PCI_CLASS_DISPLAY_3D << 8)) 290 + continue; 289 291 290 - nouveau_dsm_pci_probe(pdev, &dhandle, &has_mux, &has_optimus, 291 - &has_optimus_flags, &has_power_resources); 292 - } 293 - 294 - while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_3D << 8, pdev)) != NULL) { 295 292 vga_count++; 296 293 297 294 nouveau_dsm_pci_probe(pdev, &dhandle, &has_mux, &has_optimus,
+3 -8
drivers/gpu/drm/qxl/qxl_drv.c
··· 68 68 static struct drm_driver qxl_driver; 69 69 static struct pci_driver qxl_pci_driver; 70 70 71 - static bool is_vga(struct pci_dev *pdev) 72 - { 73 - return pdev->class == PCI_CLASS_DISPLAY_VGA << 8; 74 - } 75 - 76 71 static int 77 72 qxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent) 78 73 { ··· 95 100 if (ret) 96 101 goto disable_pci; 97 102 98 - if (is_vga(pdev) && pdev->revision < 5) { 103 + if (pci_is_vga(pdev) && pdev->revision < 5) { 99 104 ret = vga_get_interruptible(pdev, VGA_RSRC_LEGACY_IO); 100 105 if (ret) { 101 106 DRM_ERROR("can't get legacy vga ioports\n"); ··· 126 131 unload: 127 132 qxl_device_fini(qdev); 128 133 put_vga: 129 - if (is_vga(pdev) && pdev->revision < 5) 134 + if (pci_is_vga(pdev) && pdev->revision < 5) 130 135 vga_put(pdev, VGA_RSRC_LEGACY_IO); 131 136 disable_pci: 132 137 pci_disable_device(pdev); ··· 154 159 155 160 drm_dev_unregister(dev); 156 161 drm_atomic_helper_shutdown(dev); 157 - if (is_vga(pdev) && pdev->revision < 5) 162 + if (pci_is_vga(pdev) && pdev->revision < 5) 158 163 vga_put(pdev, VGA_RSRC_LEGACY_IO); 159 164 } 160 165
+5 -15
drivers/gpu/drm/radeon/radeon_bios.c
··· 199 199 if (rdev->flags & RADEON_IS_IGP) 200 200 return false; 201 201 202 - while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) { 202 + while ((pdev = pci_get_base_class(PCI_BASE_CLASS_DISPLAY, pdev))) { 203 + if ((pdev->class != PCI_CLASS_DISPLAY_VGA << 8) && 204 + (pdev->class != PCI_CLASS_DISPLAY_OTHER << 8)) 205 + continue; 206 + 203 207 dhandle = ACPI_HANDLE(&pdev->dev); 204 208 if (!dhandle) 205 209 continue; ··· 212 208 if (ACPI_SUCCESS(status)) { 213 209 found = true; 214 210 break; 215 - } 216 - } 217 - 218 - if (!found) { 219 - while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) { 220 - dhandle = ACPI_HANDLE(&pdev->dev); 221 - if (!dhandle) 222 - continue; 223 - 224 - status = acpi_get_handle(dhandle, "ATRM", &atrm_handle); 225 - if (ACPI_SUCCESS(status)) { 226 - found = true; 227 - break; 228 - } 229 211 } 230 212 } 231 213
+1 -1
drivers/gpu/drm/virtio/virtgpu_drv.c
··· 51 51 { 52 52 struct pci_dev *pdev = to_pci_dev(dev->dev); 53 53 const char *pname = dev_name(&pdev->dev); 54 - bool vga = (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA; 54 + bool vga = pci_is_vga(pdev); 55 55 int ret; 56 56 57 57 DRM_INFO("pci: %s detected at %s\n",
+4
drivers/misc/pci_endpoint_test.c
··· 81 81 #define PCI_DEVICE_ID_RENESAS_R8A774B1 0x002b 82 82 #define PCI_DEVICE_ID_RENESAS_R8A774C0 0x002d 83 83 #define PCI_DEVICE_ID_RENESAS_R8A774E1 0x0025 84 + #define PCI_DEVICE_ID_RENESAS_R8A779F0 0x0031 84 85 85 86 static DEFINE_IDA(pci_endpoint_test_ida); 86 87 ··· 991 990 { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774B1),}, 992 991 { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774C0),}, 993 992 { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774E1),}, 993 + { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A779F0), 994 + .driver_data = (kernel_ulong_t)&default_data, 995 + }, 994 996 { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721E), 995 997 .driver_data = (kernel_ulong_t)&j721e_data, 996 998 },
+1 -1
drivers/pci/Kconfig
··· 170 170 select GENERIC_ALLOCATOR 171 171 select NEED_SG_DMA_FLAGS 172 172 help 173 - Enableѕ drivers to do PCI peer-to-peer transactions to and from 173 + Enables drivers to do PCI peer-to-peer transactions to and from 174 174 BARs that are exposed in other devices that are the part of 175 175 the hierarchy where peer-to-peer DMA is guaranteed by the PCI 176 176 specification to work (ie. anything below a single PCI bridge).
+2 -5
drivers/pci/ats.c
··· 9 9 * Copyright (C) 2011 Advanced Micro Devices, 10 10 */ 11 11 12 + #include <linux/bitfield.h> 12 13 #include <linux/export.h> 13 14 #include <linux/pci-ats.h> 14 15 #include <linux/pci.h> ··· 481 480 } 482 481 EXPORT_SYMBOL_GPL(pci_pasid_features); 483 482 484 - #define PASID_NUMBER_SHIFT 8 485 - #define PASID_NUMBER_MASK (0x1f << PASID_NUMBER_SHIFT) 486 483 /** 487 484 * pci_max_pasids - Get maximum number of PASIDs supported by device 488 485 * @pdev: PCI device structure ··· 502 503 503 504 pci_read_config_word(pdev, pasid + PCI_PASID_CAP, &supported); 504 505 505 - supported = (supported & PASID_NUMBER_MASK) >> PASID_NUMBER_SHIFT; 506 - 507 - return (1 << supported); 506 + return (1 << FIELD_GET(PCI_PASID_CAP_WIDTH, supported)); 508 507 } 509 508 EXPORT_SYMBOL_GPL(pci_max_pasids); 510 509 #endif /* CONFIG_PCI_PASID */
+11
drivers/pci/controller/Kconfig
··· 324 324 Say 'Y' here if you want kernel to support the Xilinx AXI PCIe 325 325 Host Bridge driver. 326 326 327 + config PCIE_XILINX_DMA_PL 328 + bool "Xilinx DMA PL PCIe host bridge support" 329 + depends on ARCH_ZYNQMP || COMPILE_TEST 330 + depends on PCI_MSI 331 + select PCI_HOST_COMMON 332 + help 333 + Say 'Y' here if you want kernel support for the Xilinx PL DMA 334 + PCIe host bridge. The controller is a Soft IP which can act as 335 + Root Port. If your system provides Xilinx PCIe host controller 336 + bridge DMA as Soft IP say 'Y'; if you are not sure, say 'N'. 337 + 327 338 config PCIE_XILINX_NWL 328 339 bool "Xilinx NWL PCIe controller" 329 340 depends on ARCH_ZYNQMP || COMPILE_TEST
+1
drivers/pci/controller/Makefile
··· 17 17 obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o 18 18 obj-$(CONFIG_PCIE_XILINX_NWL) += pcie-xilinx-nwl.o 19 19 obj-$(CONFIG_PCIE_XILINX_CPM) += pcie-xilinx-cpm.o 20 + obj-$(CONFIG_PCIE_XILINX_DMA_PL) += pcie-xilinx-dma-pl.o 20 21 obj-$(CONFIG_PCI_V3_SEMI) += pci-v3-semi.o 21 22 obj-$(CONFIG_PCI_XGENE) += pci-xgene.o 22 23 obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o
+5 -4
drivers/pci/controller/cadence/pcie-cadence-ep.c
··· 3 3 // Cadence PCIe endpoint controller driver. 4 4 // Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com> 5 5 6 + #include <linux/bitfield.h> 6 7 #include <linux/delay.h> 7 8 #include <linux/kernel.h> 8 9 #include <linux/of.h> ··· 263 262 * Get the Multiple Message Enable bitfield from the Message Control 264 263 * register. 265 264 */ 266 - mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4; 265 + mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags); 267 266 268 267 return mme; 269 268 } ··· 395 394 return -EINVAL; 396 395 397 396 /* Get the number of enabled MSIs */ 398 - mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4; 397 + mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags); 399 398 msi_count = 1 << mme; 400 399 if (!interrupt_num || interrupt_num > msi_count) 401 400 return -EINVAL; ··· 450 449 return -EINVAL; 451 450 452 451 /* Get the number of enabled MSIs */ 453 - mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4; 452 + mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags); 454 453 msi_count = 1 << mme; 455 454 if (!interrupt_num || interrupt_num > msi_count) 456 455 return -EINVAL; ··· 507 506 508 507 reg = cap + PCI_MSIX_TABLE; 509 508 tbl_offset = cdns_pcie_ep_fn_readl(pcie, fn, reg); 510 - bir = tbl_offset & PCI_MSIX_TABLE_BIR; 509 + bir = FIELD_GET(PCI_MSIX_TABLE_BIR, tbl_offset); 511 510 tbl_offset &= PCI_MSIX_TABLE_OFFSET; 512 511 513 512 msix_tbl = epf->epf_bar[bir]->addr + tbl_offset;
-5
drivers/pci/controller/cadence/pcie-cadence-plat.c
··· 17 17 /** 18 18 * struct cdns_plat_pcie - private data for this PCIe platform driver 19 19 * @pcie: Cadence PCIe controller 20 - * @is_rc: Set to 1 indicates the PCIe controller mode is Root Complex, 21 - * if 0 it is in Endpoint mode. 22 20 */ 23 21 struct cdns_plat_pcie { 24 22 struct cdns_pcie *pcie; 25 - bool is_rc; 26 23 }; 27 24 28 25 struct cdns_plat_pcie_of_data { ··· 73 76 rc->pcie.dev = dev; 74 77 rc->pcie.ops = &cdns_plat_ops; 75 78 cdns_plat_pcie->pcie = &rc->pcie; 76 - cdns_plat_pcie->is_rc = is_rc; 77 79 78 80 ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie); 79 81 if (ret) { ··· 100 104 ep->pcie.dev = dev; 101 105 ep->pcie.ops = &cdns_plat_ops; 102 106 cdns_plat_pcie->pcie = &ep->pcie; 103 - cdns_plat_pcie->is_rc = is_rc; 104 107 105 108 ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie); 106 109 if (ret) {
+25
drivers/pci/controller/dwc/Kconfig
··· 286 286 to work in endpoint mode. The PCIe controller uses the DesignWare core 287 287 plus Qualcomm-specific hardware wrappers. 288 288 289 + config PCIE_RCAR_GEN4 290 + tristate 291 + 292 + config PCIE_RCAR_GEN4_HOST 293 + tristate "Renesas R-Car Gen4 PCIe controller (host mode)" 294 + depends on ARCH_RENESAS || COMPILE_TEST 295 + depends on PCI_MSI 296 + select PCIE_DW_HOST 297 + select PCIE_RCAR_GEN4 298 + help 299 + Say Y here if you want PCIe controller (host mode) on R-Car Gen4 SoCs. 300 + To compile this driver as a module, choose M here: the module will be 301 + called pcie-rcar-gen4.ko. This uses the DesignWare core. 302 + 303 + config PCIE_RCAR_GEN4_EP 304 + tristate "Renesas R-Car Gen4 PCIe controller (endpoint mode)" 305 + depends on ARCH_RENESAS || COMPILE_TEST 306 + depends on PCI_ENDPOINT 307 + select PCIE_DW_EP 308 + select PCIE_RCAR_GEN4 309 + help 310 + Say Y here if you want PCIe controller (endpoint mode) on R-Car Gen4 311 + SoCs. To compile this driver as a module, choose M here: the module 312 + will be called pcie-rcar-gen4.ko. This uses the DesignWare core. 313 + 289 314 config PCIE_ROCKCHIP_DW_HOST 290 315 bool "Rockchip DesignWare PCIe controller" 291 316 select PCIE_DW
+1
drivers/pci/controller/dwc/Makefile
··· 26 26 obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o 27 27 obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o 28 28 obj-$(CONFIG_PCIE_VISCONTI_HOST) += pcie-visconti.o 29 + obj-$(CONFIG_PCIE_RCAR_GEN4) += pcie-rcar-gen4.o 29 30 30 31 # The following drivers are for devices that use the generic ACPI 31 32 # pci_root.c driver but don't support standard ECAM config access.
+2 -2
drivers/pci/controller/dwc/pci-exynos.c
··· 375 375 return ret; 376 376 } 377 377 378 - static int __exit exynos_pcie_remove(struct platform_device *pdev) 378 + static int exynos_pcie_remove(struct platform_device *pdev) 379 379 { 380 380 struct exynos_pcie *ep = platform_get_drvdata(pdev); 381 381 ··· 431 431 432 432 static struct platform_driver exynos_pcie_driver = { 433 433 .probe = exynos_pcie_probe, 434 - .remove = __exit_p(exynos_pcie_remove), 434 + .remove = exynos_pcie_remove, 435 435 .driver = { 436 436 .name = "exynos-pcie", 437 437 .of_match_table = exynos_pcie_of_match,
+4 -4
drivers/pci/controller/dwc/pci-keystone.c
··· 1100 1100 { }, 1101 1101 }; 1102 1102 1103 - static int __init ks_pcie_probe(struct platform_device *pdev) 1103 + static int ks_pcie_probe(struct platform_device *pdev) 1104 1104 { 1105 1105 const struct dw_pcie_host_ops *host_ops; 1106 1106 const struct dw_pcie_ep_ops *ep_ops; ··· 1302 1302 return ret; 1303 1303 } 1304 1304 1305 - static int __exit ks_pcie_remove(struct platform_device *pdev) 1305 + static int ks_pcie_remove(struct platform_device *pdev) 1306 1306 { 1307 1307 struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev); 1308 1308 struct device_link **link = ks_pcie->link; ··· 1318 1318 return 0; 1319 1319 } 1320 1320 1321 - static struct platform_driver ks_pcie_driver __refdata = { 1321 + static struct platform_driver ks_pcie_driver = { 1322 1322 .probe = ks_pcie_probe, 1323 - .remove = __exit_p(ks_pcie_remove), 1323 + .remove = ks_pcie_remove, 1324 1324 .driver = { 1325 1325 .name = "keystone-pcie", 1326 1326 .of_match_table = ks_pcie_of_match,
+2
drivers/pci/controller/dwc/pci-layerscape-ep.c
··· 266 266 267 267 pcie->big_endian = of_property_read_bool(dev->of_node, "big-endian"); 268 268 269 + dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)); 270 + 269 271 platform_set_drvdata(pdev, pcie); 270 272 271 273 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
+1 -1
drivers/pci/controller/dwc/pci-layerscape.c
··· 58 58 u32 header_type; 59 59 60 60 header_type = ioread8(pci->dbi_base + PCI_HEADER_TYPE); 61 - header_type &= 0x7f; 61 + header_type &= PCI_HEADER_TYPE_MASK; 62 62 63 63 return header_type == PCI_HEADER_TYPE_BRIDGE; 64 64 }
+40 -12
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 6 6 * Author: Kishon Vijay Abraham I <kishon@ti.com> 7 7 */ 8 8 9 + #include <linux/bitfield.h> 9 10 #include <linux/of.h> 10 11 #include <linux/platform_device.h> 11 12 ··· 53 52 return func_offset; 54 53 } 55 54 55 + static unsigned int dw_pcie_ep_get_dbi2_offset(struct dw_pcie_ep *ep, u8 func_no) 56 + { 57 + unsigned int dbi2_offset = 0; 58 + 59 + if (ep->ops->get_dbi2_offset) 60 + dbi2_offset = ep->ops->get_dbi2_offset(ep, func_no); 61 + else if (ep->ops->func_conf_select) /* for backward compatibility */ 62 + dbi2_offset = ep->ops->func_conf_select(ep, func_no); 63 + 64 + return dbi2_offset; 65 + } 66 + 56 67 static void __dw_pcie_ep_reset_bar(struct dw_pcie *pci, u8 func_no, 57 68 enum pci_barno bar, int flags) 58 69 { 59 - u32 reg; 60 - unsigned int func_offset = 0; 70 + unsigned int func_offset, dbi2_offset; 61 71 struct dw_pcie_ep *ep = &pci->ep; 72 + u32 reg, reg_dbi2; 62 73 63 74 func_offset = dw_pcie_ep_func_select(ep, func_no); 75 + dbi2_offset = dw_pcie_ep_get_dbi2_offset(ep, func_no); 64 76 65 77 reg = func_offset + PCI_BASE_ADDRESS_0 + (4 * bar); 78 + reg_dbi2 = dbi2_offset + PCI_BASE_ADDRESS_0 + (4 * bar); 66 79 dw_pcie_dbi_ro_wr_en(pci); 67 - dw_pcie_writel_dbi2(pci, reg, 0x0); 80 + dw_pcie_writel_dbi2(pci, reg_dbi2, 0x0); 68 81 dw_pcie_writel_dbi(pci, reg, 0x0); 69 82 if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64) { 70 - dw_pcie_writel_dbi2(pci, reg + 4, 0x0); 83 + dw_pcie_writel_dbi2(pci, reg_dbi2 + 4, 0x0); 71 84 dw_pcie_writel_dbi(pci, reg + 4, 0x0); 72 85 } 73 86 dw_pcie_dbi_ro_wr_dis(pci); ··· 243 228 { 244 229 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 245 230 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 231 + unsigned int func_offset, dbi2_offset; 246 232 enum pci_barno bar = epf_bar->barno; 247 233 size_t size = epf_bar->size; 248 234 int flags = epf_bar->flags; 249 - unsigned int func_offset = 0; 235 + u32 reg, reg_dbi2; 250 236 int ret, type; 251 - u32 reg; 252 237 253 238 func_offset = dw_pcie_ep_func_select(ep, func_no); 239 + 
dbi2_offset = dw_pcie_ep_get_dbi2_offset(ep, func_no); 254 240 255 241 reg = PCI_BASE_ADDRESS_0 + (4 * bar) + func_offset; 242 + reg_dbi2 = PCI_BASE_ADDRESS_0 + (4 * bar) + dbi2_offset; 256 243 257 244 if (!(flags & PCI_BASE_ADDRESS_SPACE)) 258 245 type = PCIE_ATU_TYPE_MEM; ··· 270 253 271 254 dw_pcie_dbi_ro_wr_en(pci); 272 255 273 - dw_pcie_writel_dbi2(pci, reg, lower_32_bits(size - 1)); 256 + dw_pcie_writel_dbi2(pci, reg_dbi2, lower_32_bits(size - 1)); 274 257 dw_pcie_writel_dbi(pci, reg, flags); 275 258 276 259 if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64) { 277 - dw_pcie_writel_dbi2(pci, reg + 4, upper_32_bits(size - 1)); 260 + dw_pcie_writel_dbi2(pci, reg_dbi2 + 4, upper_32_bits(size - 1)); 278 261 dw_pcie_writel_dbi(pci, reg + 4, 0); 279 262 } 280 263 ··· 351 334 if (!(val & PCI_MSI_FLAGS_ENABLE)) 352 335 return -EINVAL; 353 336 354 - val = (val & PCI_MSI_FLAGS_QSIZE) >> 4; 337 + val = FIELD_GET(PCI_MSI_FLAGS_QSIZE, val); 355 338 356 339 return val; 357 340 } ··· 374 357 reg = ep_func->msi_cap + func_offset + PCI_MSI_FLAGS; 375 358 val = dw_pcie_readw_dbi(pci, reg); 376 359 val &= ~PCI_MSI_FLAGS_QMASK; 377 - val |= (interrupts << 1) & PCI_MSI_FLAGS_QMASK; 360 + val |= FIELD_PREP(PCI_MSI_FLAGS_QMASK, interrupts); 378 361 dw_pcie_dbi_ro_wr_en(pci); 379 362 dw_pcie_writew_dbi(pci, reg, val); 380 363 dw_pcie_dbi_ro_wr_dis(pci); ··· 601 584 602 585 reg = ep_func->msix_cap + func_offset + PCI_MSIX_TABLE; 603 586 tbl_offset = dw_pcie_readl_dbi(pci, reg); 604 - bir = (tbl_offset & PCI_MSIX_TABLE_BIR); 587 + bir = FIELD_GET(PCI_MSIX_TABLE_BIR, tbl_offset); 605 588 tbl_offset &= PCI_MSIX_TABLE_OFFSET; 606 589 607 590 msix_tbl = ep->epf_bar[bir]->addr + tbl_offset; ··· 638 621 epc->mem->window.page_size); 639 622 640 623 pci_epc_mem_exit(epc); 624 + 625 + if (ep->ops->deinit) 626 + ep->ops->deinit(ep); 641 627 } 628 + EXPORT_SYMBOL_GPL(dw_pcie_ep_exit); 642 629 643 630 static unsigned int dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap) 644 631 { ··· 744 723 
ep->phys_base = res->start; 745 724 ep->addr_size = resource_size(res); 746 725 726 + if (ep->ops->pre_init) 727 + ep->ops->pre_init(ep); 728 + 747 729 dw_pcie_version_detect(pci); 748 730 749 731 dw_pcie_iatu_detect(pci); ··· 801 777 ep->page_size); 802 778 if (ret < 0) { 803 779 dev_err(dev, "Failed to initialize address space\n"); 804 - return ret; 780 + goto err_ep_deinit; 805 781 } 806 782 807 783 ep->msi_mem = pci_epc_mem_alloc_addr(epc, &ep->msi_mem_phys, ··· 837 813 838 814 err_exit_epc_mem: 839 815 pci_epc_mem_exit(epc); 816 + 817 + err_ep_deinit: 818 + if (ep->ops->deinit) 819 + ep->ops->deinit(ep); 840 820 841 821 return ret; 842 822 }
+3
drivers/pci/controller/dwc/pcie-designware-host.c
··· 502 502 if (ret) 503 503 goto err_stop_link; 504 504 505 + if (pp->ops->host_post_init) 506 + pp->ops->host_post_init(pp); 507 + 505 508 return 0; 506 509 507 510 err_stop_link:
+56 -46
drivers/pci/controller/dwc/pcie-designware.c
··· 365 365 if (ret) 366 366 dev_err(pci->dev, "write DBI address failed\n"); 367 367 } 368 + EXPORT_SYMBOL_GPL(dw_pcie_write_dbi2); 368 369 369 370 static inline void __iomem *dw_pcie_select_atu(struct dw_pcie *pci, u32 dir, 370 371 u32 index) ··· 733 732 734 733 } 735 734 735 + static void dw_pcie_link_set_max_link_width(struct dw_pcie *pci, u32 num_lanes) 736 + { 737 + u32 lnkcap, lwsc, plc; 738 + u8 cap; 739 + 740 + if (!num_lanes) 741 + return; 742 + 743 + /* Set the number of lanes */ 744 + plc = dw_pcie_readl_dbi(pci, PCIE_PORT_LINK_CONTROL); 745 + plc &= ~PORT_LINK_FAST_LINK_MODE; 746 + plc &= ~PORT_LINK_MODE_MASK; 747 + 748 + /* Set link width speed control register */ 749 + lwsc = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL); 750 + lwsc &= ~PORT_LOGIC_LINK_WIDTH_MASK; 751 + switch (num_lanes) { 752 + case 1: 753 + plc |= PORT_LINK_MODE_1_LANES; 754 + lwsc |= PORT_LOGIC_LINK_WIDTH_1_LANES; 755 + break; 756 + case 2: 757 + plc |= PORT_LINK_MODE_2_LANES; 758 + lwsc |= PORT_LOGIC_LINK_WIDTH_2_LANES; 759 + break; 760 + case 4: 761 + plc |= PORT_LINK_MODE_4_LANES; 762 + lwsc |= PORT_LOGIC_LINK_WIDTH_4_LANES; 763 + break; 764 + case 8: 765 + plc |= PORT_LINK_MODE_8_LANES; 766 + lwsc |= PORT_LOGIC_LINK_WIDTH_8_LANES; 767 + break; 768 + default: 769 + dev_err(pci->dev, "num-lanes %u: invalid value\n", num_lanes); 770 + return; 771 + } 772 + dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, plc); 773 + dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, lwsc); 774 + 775 + cap = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 776 + lnkcap = dw_pcie_readl_dbi(pci, cap + PCI_EXP_LNKCAP); 777 + lnkcap &= ~PCI_EXP_LNKCAP_MLW; 778 + lnkcap |= FIELD_PREP(PCI_EXP_LNKCAP_MLW, num_lanes); 779 + dw_pcie_writel_dbi(pci, cap + PCI_EXP_LNKCAP, lnkcap); 780 + } 781 + 736 782 void dw_pcie_iatu_detect(struct dw_pcie *pci) 737 783 { 738 784 int max_region, ob, ib; ··· 888 840 * Indirect eDMA CSRs access has been completely removed since v5.40a 889 841 * thus no space is now 
reserved for the eDMA channels viewport and 890 842 * former DMA CTRL register is no longer fixed to FFs. 843 + * 844 + * Note that Renesas R-Car S4-8's PCIe controllers for unknown reason 845 + * have zeros in the eDMA CTRL register even though the HW-manual 846 + * explicitly states there must be FFs if the unrolled mapping is enabled. 847 + * For such cases the low-level drivers are supposed to manually 848 + * activate the unrolled mapping to bypass the auto-detection procedure. 891 849 */ 892 - if (dw_pcie_ver_is_ge(pci, 540A)) 850 + if (dw_pcie_ver_is_ge(pci, 540A) || dw_pcie_cap_is(pci, EDMA_UNROLL)) 893 851 val = 0xFFFFFFFF; 894 852 else 895 853 val = dw_pcie_readl_dbi(pci, PCIE_DMA_VIEWPORT_BASE + PCIE_DMA_CTRL); ··· 1067 1013 val |= PORT_LINK_DLL_LINK_EN; 1068 1014 dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val); 1069 1015 1070 - if (!pci->num_lanes) { 1071 - dev_dbg(pci->dev, "Using h/w default number of lanes\n"); 1072 - return; 1073 - } 1074 - 1075 - /* Set the number of lanes */ 1076 - val &= ~PORT_LINK_FAST_LINK_MODE; 1077 - val &= ~PORT_LINK_MODE_MASK; 1078 - switch (pci->num_lanes) { 1079 - case 1: 1080 - val |= PORT_LINK_MODE_1_LANES; 1081 - break; 1082 - case 2: 1083 - val |= PORT_LINK_MODE_2_LANES; 1084 - break; 1085 - case 4: 1086 - val |= PORT_LINK_MODE_4_LANES; 1087 - break; 1088 - case 8: 1089 - val |= PORT_LINK_MODE_8_LANES; 1090 - break; 1091 - default: 1092 - dev_err(pci->dev, "num-lanes %u: invalid value\n", pci->num_lanes); 1093 - return; 1094 - } 1095 - dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val); 1096 - 1097 - /* Set link width speed control register */ 1098 - val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL); 1099 - val &= ~PORT_LOGIC_LINK_WIDTH_MASK; 1100 - switch (pci->num_lanes) { 1101 - case 1: 1102 - val |= PORT_LOGIC_LINK_WIDTH_1_LANES; 1103 - break; 1104 - case 2: 1105 - val |= PORT_LOGIC_LINK_WIDTH_2_LANES; 1106 - break; 1107 - case 4: 1108 - val |= PORT_LOGIC_LINK_WIDTH_4_LANES; 1109 - break; 1110 - case
8: 1111 - val |= PORT_LOGIC_LINK_WIDTH_8_LANES; 1112 - break; 1113 - } 1114 - dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val); 1016 + dw_pcie_link_set_max_link_width(pci, pci->num_lanes); 1115 1017 }
+7 -2
drivers/pci/controller/dwc/pcie-designware.h
··· 51 51 52 52 /* DWC PCIe controller capabilities */ 53 53 #define DW_PCIE_CAP_REQ_RES 0 54 - #define DW_PCIE_CAP_IATU_UNROLL 1 55 - #define DW_PCIE_CAP_CDM_CHECK 2 54 + #define DW_PCIE_CAP_EDMA_UNROLL 1 55 + #define DW_PCIE_CAP_IATU_UNROLL 2 56 + #define DW_PCIE_CAP_CDM_CHECK 3 56 57 57 58 #define dw_pcie_cap_is(_pci, _cap) \ 58 59 test_bit(DW_PCIE_CAP_ ## _cap, &(_pci)->caps) ··· 302 301 struct dw_pcie_host_ops { 303 302 int (*host_init)(struct dw_pcie_rp *pp); 304 303 void (*host_deinit)(struct dw_pcie_rp *pp); 304 + void (*host_post_init)(struct dw_pcie_rp *pp); 305 305 int (*msi_host_init)(struct dw_pcie_rp *pp); 306 306 void (*pme_turn_off)(struct dw_pcie_rp *pp); 307 307 }; ··· 331 329 }; 332 330 333 331 struct dw_pcie_ep_ops { 332 + void (*pre_init)(struct dw_pcie_ep *ep); 334 333 void (*ep_init)(struct dw_pcie_ep *ep); 334 + void (*deinit)(struct dw_pcie_ep *ep); 335 335 int (*raise_irq)(struct dw_pcie_ep *ep, u8 func_no, 336 336 enum pci_epc_irq_type type, u16 interrupt_num); 337 337 const struct pci_epc_features* (*get_features)(struct dw_pcie_ep *ep); ··· 345 341 * driver. 346 342 */ 347 343 unsigned int (*func_conf_select)(struct dw_pcie_ep *ep, u8 func_no); 344 + unsigned int (*get_dbi2_offset)(struct dw_pcie_ep *ep, u8 func_no); 348 345 }; 349 346 350 347 struct dw_pcie_ep_func {
+2 -2
drivers/pci/controller/dwc/pcie-kirin.c
··· 741 741 return ret; 742 742 } 743 743 744 - static int __exit kirin_pcie_remove(struct platform_device *pdev) 744 + static int kirin_pcie_remove(struct platform_device *pdev) 745 745 { 746 746 struct kirin_pcie *kirin_pcie = platform_get_drvdata(pdev); 747 747 ··· 818 818 819 819 static struct platform_driver kirin_pcie_driver = { 820 820 .probe = kirin_pcie_probe, 821 - .remove = __exit_p(kirin_pcie_remove), 821 + .remove = kirin_pcie_remove, 822 822 .driver = { 823 823 .name = "kirin-pcie", 824 824 .of_match_table = kirin_pcie_match,
+23 -25
drivers/pci/controller/dwc/pcie-qcom-ep.c
··· 23 23 #include <linux/reset.h> 24 24 #include <linux/module.h> 25 25 26 + #include "../../pci.h" 26 27 #include "pcie-designware.h" 27 28 28 29 /* PARF registers */ ··· 124 123 125 124 /* ELBI registers */ 126 125 #define ELBI_SYS_STTS 0x08 126 + #define ELBI_CS2_ENABLE 0xa4 127 127 128 128 /* DBI registers */ 129 129 #define DBI_CON_STATUS 0x44 ··· 137 135 #define CORE_RESET_TIME_US_MAX 1005 138 136 #define WAKE_DELAY_US 2000 /* 2 ms */ 139 137 140 - #define PCIE_GEN1_BW_MBPS 250 141 - #define PCIE_GEN2_BW_MBPS 500 142 - #define PCIE_GEN3_BW_MBPS 985 143 - #define PCIE_GEN4_BW_MBPS 1969 138 + #define QCOM_PCIE_LINK_SPEED_TO_BW(speed) \ 139 + Mbps_to_icc(PCIE_SPEED2MBS_ENC(pcie_link_speed[speed])) 144 140 145 141 #define to_pcie_ep(x) dev_get_drvdata((x)->dev) 146 142 ··· 263 263 disable_irq(pcie_ep->perst_irq); 264 264 } 265 265 266 + static void qcom_pcie_dw_write_dbi2(struct dw_pcie *pci, void __iomem *base, 267 + u32 reg, size_t size, u32 val) 268 + { 269 + struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci); 270 + int ret; 271 + 272 + writel(1, pcie_ep->elbi + ELBI_CS2_ENABLE); 273 + 274 + ret = dw_pcie_write(pci->dbi_base2 + reg, size, val); 275 + if (ret) 276 + dev_err(pci->dev, "Failed to write DBI2 register (0x%x): %d\n", reg, ret); 277 + 278 + writel(0, pcie_ep->elbi + ELBI_CS2_ENABLE); 279 + } 280 + 266 281 static void qcom_pcie_ep_icc_update(struct qcom_pcie_ep *pcie_ep) 267 282 { 268 283 struct dw_pcie *pci = &pcie_ep->pci; 269 - u32 offset, status, bw; 284 + u32 offset, status; 270 285 int speed, width; 271 286 int ret; 272 287 ··· 294 279 speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status); 295 280 width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status); 296 281 297 - switch (speed) { 298 - case 1: 299 - bw = MBps_to_icc(PCIE_GEN1_BW_MBPS); 300 - break; 301 - case 2: 302 - bw = MBps_to_icc(PCIE_GEN2_BW_MBPS); 303 - break; 304 - case 3: 305 - bw = MBps_to_icc(PCIE_GEN3_BW_MBPS); 306 - break; 307 - default: 308 - dev_warn(pci->dev, "using default GEN4 bandwidth\n"); 
309 - fallthrough; 310 - case 4: 311 - bw = MBps_to_icc(PCIE_GEN4_BW_MBPS); 312 - break; 313 - } 314 - 315 - ret = icc_set_bw(pcie_ep->icc_mem, 0, width * bw); 282 + ret = icc_set_bw(pcie_ep->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed)); 316 283 if (ret) 317 284 dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n", 318 285 ret); ··· 332 335 * Set an initial peak bandwidth corresponding to single-lane Gen 1 333 336 * for the pcie-mem path. 334 337 */ 335 - ret = icc_set_bw(pcie_ep->icc_mem, 0, MBps_to_icc(PCIE_GEN1_BW_MBPS)); 338 + ret = icc_set_bw(pcie_ep->icc_mem, 0, QCOM_PCIE_LINK_SPEED_TO_BW(1)); 336 339 if (ret) { 337 340 dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n", 338 341 ret); ··· 516 519 .link_up = qcom_pcie_dw_link_up, 517 520 .start_link = qcom_pcie_dw_start_link, 518 521 .stop_link = qcom_pcie_dw_stop_link, 522 + .write_dbi2 = qcom_pcie_dw_write_dbi2, 519 523 }; 520 524 521 525 static int qcom_pcie_ep_get_io_resources(struct platform_device *pdev,
+34 -18
drivers/pci/controller/dwc/pcie-qcom.c
··· 147 147 148 148 #define QCOM_PCIE_CRC8_POLYNOMIAL (BIT(2) | BIT(1) | BIT(0)) 149 149 150 + #define QCOM_PCIE_LINK_SPEED_TO_BW(speed) \ 151 + Mbps_to_icc(PCIE_SPEED2MBS_ENC(pcie_link_speed[speed])) 152 + 150 153 #define QCOM_PCIE_1_0_0_MAX_CLOCKS 4 151 154 struct qcom_pcie_resources_1_0_0 { 152 155 struct clk_bulk_data clks[QCOM_PCIE_1_0_0_MAX_CLOCKS]; ··· 221 218 int (*get_resources)(struct qcom_pcie *pcie); 222 219 int (*init)(struct qcom_pcie *pcie); 223 220 int (*post_init)(struct qcom_pcie *pcie); 221 + void (*host_post_init)(struct qcom_pcie *pcie); 224 222 void (*deinit)(struct qcom_pcie *pcie); 225 223 void (*ltssm_enable)(struct qcom_pcie *pcie); 226 224 int (*config_sid)(struct qcom_pcie *pcie); ··· 966 962 return 0; 967 963 } 968 964 965 + static int qcom_pcie_enable_aspm(struct pci_dev *pdev, void *userdata) 966 + { 967 + /* Downstream devices need to be in D0 state before enabling PCI PM substates */ 968 + pci_set_power_state(pdev, PCI_D0); 969 + pci_enable_link_state(pdev, PCIE_LINK_STATE_ALL); 970 + 971 + return 0; 972 + } 973 + 974 + static void qcom_pcie_host_post_init_2_7_0(struct qcom_pcie *pcie) 975 + { 976 + struct dw_pcie_rp *pp = &pcie->pci->pp; 977 + 978 + pci_walk_bus(pp->bridge->bus, qcom_pcie_enable_aspm, NULL); 979 + } 980 + 969 981 static void qcom_pcie_deinit_2_7_0(struct qcom_pcie *pcie) 970 982 { 971 983 struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0; ··· 1234 1214 pcie->cfg->ops->deinit(pcie); 1235 1215 } 1236 1216 1217 + static void qcom_pcie_host_post_init(struct dw_pcie_rp *pp) 1218 + { 1219 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 1220 + struct qcom_pcie *pcie = to_qcom_pcie(pci); 1221 + 1222 + if (pcie->cfg->ops->host_post_init) 1223 + pcie->cfg->ops->host_post_init(pcie); 1224 + } 1225 + 1237 1226 static const struct dw_pcie_host_ops qcom_pcie_dw_ops = { 1238 1227 .host_init = qcom_pcie_host_init, 1239 1228 .host_deinit = qcom_pcie_host_deinit, 1229 + .host_post_init = qcom_pcie_host_post_init, 1240 1230 
}; 1241 1231 1242 1232 /* Qcom IP rev.: 2.1.0 Synopsys IP rev.: 4.01a */ ··· 1308 1278 .get_resources = qcom_pcie_get_resources_2_7_0, 1309 1279 .init = qcom_pcie_init_2_7_0, 1310 1280 .post_init = qcom_pcie_post_init_2_7_0, 1281 + .host_post_init = qcom_pcie_host_post_init_2_7_0, 1311 1282 .deinit = qcom_pcie_deinit_2_7_0, 1312 1283 .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, 1313 1284 .config_sid = qcom_pcie_config_sid_1_9_0, ··· 1376 1345 * Set an initial peak bandwidth corresponding to single-lane Gen 1 1377 1346 * for the pcie-mem path. 1378 1347 */ 1379 - ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250)); 1348 + ret = icc_set_bw(pcie->icc_mem, 0, QCOM_PCIE_LINK_SPEED_TO_BW(1)); 1380 1349 if (ret) { 1381 1350 dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n", 1382 1351 ret); ··· 1389 1358 static void qcom_pcie_icc_update(struct qcom_pcie *pcie) 1390 1359 { 1391 1360 struct dw_pcie *pci = pcie->pci; 1392 - u32 offset, status, bw; 1361 + u32 offset, status; 1393 1362 int speed, width; 1394 1363 int ret; 1395 1364 ··· 1406 1375 speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status); 1407 1376 width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status); 1408 1377 1409 - switch (speed) { 1410 - case 1: 1411 - bw = MBps_to_icc(250); 1412 - break; 1413 - case 2: 1414 - bw = MBps_to_icc(500); 1415 - break; 1416 - default: 1417 - WARN_ON_ONCE(1); 1418 - fallthrough; 1419 - case 3: 1420 - bw = MBps_to_icc(985); 1421 - break; 1422 - } 1423 - 1424 - ret = icc_set_bw(pcie->icc_mem, 0, width * bw); 1378 + ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed)); 1425 1379 if (ret) { 1426 1380 dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n", 1427 1381 ret);
+527
drivers/pci/controller/dwc/pcie-rcar-gen4.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * PCIe controller driver for Renesas R-Car Gen4 Series SoCs 4 + * Copyright (C) 2022-2023 Renesas Electronics Corporation 5 + */ 6 + 7 + #include <linux/delay.h> 8 + #include <linux/interrupt.h> 9 + #include <linux/io.h> 10 + #include <linux/module.h> 11 + #include <linux/of_device.h> 12 + #include <linux/pci.h> 13 + #include <linux/platform_device.h> 14 + #include <linux/pm_runtime.h> 15 + #include <linux/reset.h> 16 + 17 + #include "../../pci.h" 18 + #include "pcie-designware.h" 19 + 20 + /* Renesas-specific */ 21 + /* PCIe Mode Setting Register 0 */ 22 + #define PCIEMSR0 0x0000 23 + #define BIFUR_MOD_SET_ON BIT(0) 24 + #define DEVICE_TYPE_EP 0 25 + #define DEVICE_TYPE_RC BIT(4) 26 + 27 + /* PCIe Interrupt Status 0 */ 28 + #define PCIEINTSTS0 0x0084 29 + 30 + /* PCIe Interrupt Status 0 Enable */ 31 + #define PCIEINTSTS0EN 0x0310 32 + #define MSI_CTRL_INT BIT(26) 33 + #define SMLH_LINK_UP BIT(7) 34 + #define RDLH_LINK_UP BIT(6) 35 + 36 + /* PCIe DMA Interrupt Status Enable */ 37 + #define PCIEDMAINTSTSEN 0x0314 38 + #define PCIEDMAINTSTSEN_INIT GENMASK(15, 0) 39 + 40 + /* PCIe Reset Control Register 1 */ 41 + #define PCIERSTCTRL1 0x0014 42 + #define APP_HOLD_PHY_RST BIT(16) 43 + #define APP_LTSSM_ENABLE BIT(0) 44 + 45 + #define RCAR_NUM_SPEED_CHANGE_RETRIES 10 46 + #define RCAR_MAX_LINK_SPEED 4 47 + 48 + #define RCAR_GEN4_PCIE_EP_FUNC_DBI_OFFSET 0x1000 49 + #define RCAR_GEN4_PCIE_EP_FUNC_DBI2_OFFSET 0x800 50 + 51 + struct rcar_gen4_pcie { 52 + struct dw_pcie dw; 53 + void __iomem *base; 54 + struct platform_device *pdev; 55 + enum dw_pcie_device_mode mode; 56 + }; 57 + #define to_rcar_gen4_pcie(_dw) container_of(_dw, struct rcar_gen4_pcie, dw) 58 + 59 + /* Common */ 60 + static void rcar_gen4_pcie_ltssm_enable(struct rcar_gen4_pcie *rcar, 61 + bool enable) 62 + { 63 + u32 val; 64 + 65 + val = readl(rcar->base + PCIERSTCTRL1); 66 + if (enable) { 67 + val |= APP_LTSSM_ENABLE; 68 + val &= ~APP_HOLD_PHY_RST; 
69 + } else { 70 + /* 71 + * Since the datasheet of R-Car doesn't mention how to assert 72 + * the APP_HOLD_PHY_RST, don't assert it again. Otherwise, 73 + * a hang-up issue happened in dw_edma_core_off() when 74 + * the controller didn't detect a PCI device. 75 + */ 76 + val &= ~APP_LTSSM_ENABLE; 77 + } 78 + writel(val, rcar->base + PCIERSTCTRL1); 79 + } 80 + 81 + static int rcar_gen4_pcie_link_up(struct dw_pcie *dw) 82 + { 83 + struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw); 84 + u32 val, mask; 85 + 86 + val = readl(rcar->base + PCIEINTSTS0); 87 + mask = RDLH_LINK_UP | SMLH_LINK_UP; 88 + 89 + return (val & mask) == mask; 90 + } 91 + 92 + /* 93 + * Manually initiate the speed change. Return 0 if change succeeded; otherwise 94 + * -ETIMEDOUT. 95 + */ 96 + static int rcar_gen4_pcie_speed_change(struct dw_pcie *dw) 97 + { 98 + u32 val; 99 + int i; 100 + 101 + val = dw_pcie_readl_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL); 102 + val &= ~PORT_LOGIC_SPEED_CHANGE; 103 + dw_pcie_writel_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL, val); 104 + 105 + val = dw_pcie_readl_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL); 106 + val |= PORT_LOGIC_SPEED_CHANGE; 107 + dw_pcie_writel_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL, val); 108 + 109 + for (i = 0; i < RCAR_NUM_SPEED_CHANGE_RETRIES; i++) { 110 + val = dw_pcie_readl_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL); 111 + if (!(val & PORT_LOGIC_SPEED_CHANGE)) 112 + return 0; 113 + usleep_range(10000, 11000); 114 + } 115 + 116 + return -ETIMEDOUT; 117 + } 118 + 119 + /* 120 + * Enable LTSSM of this controller and manually initiate the speed change. 121 + * Always return 0. 122 + */ 123 + static int rcar_gen4_pcie_start_link(struct dw_pcie *dw) 124 + { 125 + struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw); 126 + int i, changes; 127 + 128 + rcar_gen4_pcie_ltssm_enable(rcar, true); 129 + 130 + /* 131 + * Require direct speed change with retrying here if the link_gen is 132 + * PCIe Gen2 or higher. 
133 + */ 134 + changes = min_not_zero(dw->link_gen, RCAR_MAX_LINK_SPEED) - 1; 135 + 136 + /* 137 + * Since dw_pcie_setup_rc() sets it once, PCIe Gen2 will be trained. 138 + * So, this needs remaining times for up to PCIe Gen4 if RC mode. 139 + */ 140 + if (changes && rcar->mode == DW_PCIE_RC_TYPE) 141 + changes--; 142 + 143 + for (i = 0; i < changes; i++) { 144 + /* It may not be connected in EP mode yet. So, break the loop */ 145 + if (rcar_gen4_pcie_speed_change(dw)) 146 + break; 147 + } 148 + 149 + return 0; 150 + } 151 + 152 + static void rcar_gen4_pcie_stop_link(struct dw_pcie *dw) 153 + { 154 + struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw); 155 + 156 + rcar_gen4_pcie_ltssm_enable(rcar, false); 157 + } 158 + 159 + static int rcar_gen4_pcie_common_init(struct rcar_gen4_pcie *rcar) 160 + { 161 + struct dw_pcie *dw = &rcar->dw; 162 + u32 val; 163 + int ret; 164 + 165 + ret = clk_bulk_prepare_enable(DW_PCIE_NUM_CORE_CLKS, dw->core_clks); 166 + if (ret) { 167 + dev_err(dw->dev, "Enabling core clocks failed\n"); 168 + return ret; 169 + } 170 + 171 + if (!reset_control_status(dw->core_rsts[DW_PCIE_PWR_RST].rstc)) 172 + reset_control_assert(dw->core_rsts[DW_PCIE_PWR_RST].rstc); 173 + 174 + val = readl(rcar->base + PCIEMSR0); 175 + if (rcar->mode == DW_PCIE_RC_TYPE) { 176 + val |= DEVICE_TYPE_RC; 177 + } else if (rcar->mode == DW_PCIE_EP_TYPE) { 178 + val |= DEVICE_TYPE_EP; 179 + } else { 180 + ret = -EINVAL; 181 + goto err_unprepare; 182 + } 183 + 184 + if (dw->num_lanes < 4) 185 + val |= BIFUR_MOD_SET_ON; 186 + 187 + writel(val, rcar->base + PCIEMSR0); 188 + 189 + ret = reset_control_deassert(dw->core_rsts[DW_PCIE_PWR_RST].rstc); 190 + if (ret) 191 + goto err_unprepare; 192 + 193 + return 0; 194 + 195 + err_unprepare: 196 + clk_bulk_disable_unprepare(DW_PCIE_NUM_CORE_CLKS, dw->core_clks); 197 + 198 + return ret; 199 + } 200 + 201 + static void rcar_gen4_pcie_common_deinit(struct rcar_gen4_pcie *rcar) 202 + { 203 + struct dw_pcie *dw = &rcar->dw; 204 + 205 + 
reset_control_assert(dw->core_rsts[DW_PCIE_PWR_RST].rstc); 206 + clk_bulk_disable_unprepare(DW_PCIE_NUM_CORE_CLKS, dw->core_clks); 207 + } 208 + 209 + static int rcar_gen4_pcie_prepare(struct rcar_gen4_pcie *rcar) 210 + { 211 + struct device *dev = rcar->dw.dev; 212 + int err; 213 + 214 + pm_runtime_enable(dev); 215 + err = pm_runtime_resume_and_get(dev); 216 + if (err < 0) { 217 + dev_err(dev, "Runtime resume failed\n"); 218 + pm_runtime_disable(dev); 219 + } 220 + 221 + return err; 222 + } 223 + 224 + static void rcar_gen4_pcie_unprepare(struct rcar_gen4_pcie *rcar) 225 + { 226 + struct device *dev = rcar->dw.dev; 227 + 228 + pm_runtime_put(dev); 229 + pm_runtime_disable(dev); 230 + } 231 + 232 + static int rcar_gen4_pcie_get_resources(struct rcar_gen4_pcie *rcar) 233 + { 234 + /* Renesas-specific registers */ 235 + rcar->base = devm_platform_ioremap_resource_byname(rcar->pdev, "app"); 236 + 237 + return PTR_ERR_OR_ZERO(rcar->base); 238 + } 239 + 240 + static const struct dw_pcie_ops dw_pcie_ops = { 241 + .start_link = rcar_gen4_pcie_start_link, 242 + .stop_link = rcar_gen4_pcie_stop_link, 243 + .link_up = rcar_gen4_pcie_link_up, 244 + }; 245 + 246 + static struct rcar_gen4_pcie *rcar_gen4_pcie_alloc(struct platform_device *pdev) 247 + { 248 + struct device *dev = &pdev->dev; 249 + struct rcar_gen4_pcie *rcar; 250 + 251 + rcar = devm_kzalloc(dev, sizeof(*rcar), GFP_KERNEL); 252 + if (!rcar) 253 + return ERR_PTR(-ENOMEM); 254 + 255 + rcar->dw.ops = &dw_pcie_ops; 256 + rcar->dw.dev = dev; 257 + rcar->pdev = pdev; 258 + dw_pcie_cap_set(&rcar->dw, EDMA_UNROLL); 259 + dw_pcie_cap_set(&rcar->dw, REQ_RES); 260 + platform_set_drvdata(pdev, rcar); 261 + 262 + return rcar; 263 + } 264 + 265 + /* Host mode */ 266 + static int rcar_gen4_pcie_host_init(struct dw_pcie_rp *pp) 267 + { 268 + struct dw_pcie *dw = to_dw_pcie_from_pp(pp); 269 + struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw); 270 + int ret; 271 + u32 val; 272 + 273 + gpiod_set_value_cansleep(dw->pe_rst, 1); 274 
+ 275 + ret = rcar_gen4_pcie_common_init(rcar); 276 + if (ret) 277 + return ret; 278 + 279 + /* 280 + * According to the section 3.5.7.2 "RC Mode" in DWC PCIe Dual Mode 281 + * Rev.5.20a and 3.5.6.1 "RC mode" in DWC PCIe RC databook v5.20a, we 282 + * should disable two BARs to avoid unnecessary memory assignment 283 + * during device enumeration. 284 + */ 285 + dw_pcie_writel_dbi2(dw, PCI_BASE_ADDRESS_0, 0x0); 286 + dw_pcie_writel_dbi2(dw, PCI_BASE_ADDRESS_1, 0x0); 287 + 288 + /* Enable MSI interrupt signal */ 289 + val = readl(rcar->base + PCIEINTSTS0EN); 290 + val |= MSI_CTRL_INT; 291 + writel(val, rcar->base + PCIEINTSTS0EN); 292 + 293 + msleep(PCIE_T_PVPERL_MS); /* pe_rst requires 100msec delay */ 294 + 295 + gpiod_set_value_cansleep(dw->pe_rst, 0); 296 + 297 + return 0; 298 + } 299 + 300 + static void rcar_gen4_pcie_host_deinit(struct dw_pcie_rp *pp) 301 + { 302 + struct dw_pcie *dw = to_dw_pcie_from_pp(pp); 303 + struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw); 304 + 305 + gpiod_set_value_cansleep(dw->pe_rst, 1); 306 + rcar_gen4_pcie_common_deinit(rcar); 307 + } 308 + 309 + static const struct dw_pcie_host_ops rcar_gen4_pcie_host_ops = { 310 + .host_init = rcar_gen4_pcie_host_init, 311 + .host_deinit = rcar_gen4_pcie_host_deinit, 312 + }; 313 + 314 + static int rcar_gen4_add_dw_pcie_rp(struct rcar_gen4_pcie *rcar) 315 + { 316 + struct dw_pcie_rp *pp = &rcar->dw.pp; 317 + 318 + if (!IS_ENABLED(CONFIG_PCIE_RCAR_GEN4_HOST)) 319 + return -ENODEV; 320 + 321 + pp->num_vectors = MAX_MSI_IRQS; 322 + pp->ops = &rcar_gen4_pcie_host_ops; 323 + 324 + return dw_pcie_host_init(pp); 325 + } 326 + 327 + static void rcar_gen4_remove_dw_pcie_rp(struct rcar_gen4_pcie *rcar) 328 + { 329 + dw_pcie_host_deinit(&rcar->dw.pp); 330 + } 331 + 332 + /* Endpoint mode */ 333 + static void rcar_gen4_pcie_ep_pre_init(struct dw_pcie_ep *ep) 334 + { 335 + struct dw_pcie *dw = to_dw_pcie_from_ep(ep); 336 + struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw); 337 + int ret; 338 + 339 + 
ret = rcar_gen4_pcie_common_init(rcar); 340 + if (ret) 341 + return; 342 + 343 + writel(PCIEDMAINTSTSEN_INIT, rcar->base + PCIEDMAINTSTSEN); 344 + } 345 + 346 + static void rcar_gen4_pcie_ep_init(struct dw_pcie_ep *ep) 347 + { 348 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 349 + enum pci_barno bar; 350 + 351 + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 352 + dw_pcie_ep_reset_bar(pci, bar); 353 + } 354 + 355 + static void rcar_gen4_pcie_ep_deinit(struct dw_pcie_ep *ep) 356 + { 357 + struct dw_pcie *dw = to_dw_pcie_from_ep(ep); 358 + struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw); 359 + 360 + writel(0, rcar->base + PCIEDMAINTSTSEN); 361 + rcar_gen4_pcie_common_deinit(rcar); 362 + } 363 + 364 + static int rcar_gen4_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no, 365 + enum pci_epc_irq_type type, 366 + u16 interrupt_num) 367 + { 368 + struct dw_pcie *dw = to_dw_pcie_from_ep(ep); 369 + 370 + switch (type) { 371 + case PCI_EPC_IRQ_LEGACY: 372 + return dw_pcie_ep_raise_legacy_irq(ep, func_no); 373 + case PCI_EPC_IRQ_MSI: 374 + return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num); 375 + default: 376 + dev_err(dw->dev, "Unknown IRQ type\n"); 377 + return -EINVAL; 378 + } 379 + 380 + return 0; 381 + } 382 + 383 + static const struct pci_epc_features rcar_gen4_pcie_epc_features = { 384 + .linkup_notifier = false, 385 + .msi_capable = true, 386 + .msix_capable = false, 387 + .reserved_bar = 1 << BAR_1 | 1 << BAR_3 | 1 << BAR_5, 388 + .align = SZ_1M, 389 + }; 390 + 391 + static const struct pci_epc_features* 392 + rcar_gen4_pcie_ep_get_features(struct dw_pcie_ep *ep) 393 + { 394 + return &rcar_gen4_pcie_epc_features; 395 + } 396 + 397 + static unsigned int rcar_gen4_pcie_ep_func_conf_select(struct dw_pcie_ep *ep, 398 + u8 func_no) 399 + { 400 + return func_no * RCAR_GEN4_PCIE_EP_FUNC_DBI_OFFSET; 401 + } 402 + 403 + static unsigned int rcar_gen4_pcie_ep_get_dbi2_offset(struct dw_pcie_ep *ep, 404 + u8 func_no) 405 + { 406 + return func_no * 
RCAR_GEN4_PCIE_EP_FUNC_DBI2_OFFSET; 407 + } 408 + 409 + static const struct dw_pcie_ep_ops pcie_ep_ops = { 410 + .pre_init = rcar_gen4_pcie_ep_pre_init, 411 + .ep_init = rcar_gen4_pcie_ep_init, 412 + .deinit = rcar_gen4_pcie_ep_deinit, 413 + .raise_irq = rcar_gen4_pcie_ep_raise_irq, 414 + .get_features = rcar_gen4_pcie_ep_get_features, 415 + .func_conf_select = rcar_gen4_pcie_ep_func_conf_select, 416 + .get_dbi2_offset = rcar_gen4_pcie_ep_get_dbi2_offset, 417 + }; 418 + 419 + static int rcar_gen4_add_dw_pcie_ep(struct rcar_gen4_pcie *rcar) 420 + { 421 + struct dw_pcie_ep *ep = &rcar->dw.ep; 422 + 423 + if (!IS_ENABLED(CONFIG_PCIE_RCAR_GEN4_EP)) 424 + return -ENODEV; 425 + 426 + ep->ops = &pcie_ep_ops; 427 + 428 + return dw_pcie_ep_init(ep); 429 + } 430 + 431 + static void rcar_gen4_remove_dw_pcie_ep(struct rcar_gen4_pcie *rcar) 432 + { 433 + dw_pcie_ep_exit(&rcar->dw.ep); 434 + } 435 + 436 + /* Common */ 437 + static int rcar_gen4_add_dw_pcie(struct rcar_gen4_pcie *rcar) 438 + { 439 + rcar->mode = (enum dw_pcie_device_mode)of_device_get_match_data(&rcar->pdev->dev); 440 + 441 + switch (rcar->mode) { 442 + case DW_PCIE_RC_TYPE: 443 + return rcar_gen4_add_dw_pcie_rp(rcar); 444 + case DW_PCIE_EP_TYPE: 445 + return rcar_gen4_add_dw_pcie_ep(rcar); 446 + default: 447 + return -EINVAL; 448 + } 449 + } 450 + 451 + static int rcar_gen4_pcie_probe(struct platform_device *pdev) 452 + { 453 + struct rcar_gen4_pcie *rcar; 454 + int err; 455 + 456 + rcar = rcar_gen4_pcie_alloc(pdev); 457 + if (IS_ERR(rcar)) 458 + return PTR_ERR(rcar); 459 + 460 + err = rcar_gen4_pcie_get_resources(rcar); 461 + if (err) 462 + return err; 463 + 464 + err = rcar_gen4_pcie_prepare(rcar); 465 + if (err) 466 + return err; 467 + 468 + err = rcar_gen4_add_dw_pcie(rcar); 469 + if (err) 470 + goto err_unprepare; 471 + 472 + return 0; 473 + 474 + err_unprepare: 475 + rcar_gen4_pcie_unprepare(rcar); 476 + 477 + return err; 478 + } 479 + 480 + static void rcar_gen4_remove_dw_pcie(struct rcar_gen4_pcie *rcar) 
481 + { 482 + switch (rcar->mode) { 483 + case DW_PCIE_RC_TYPE: 484 + rcar_gen4_remove_dw_pcie_rp(rcar); 485 + break; 486 + case DW_PCIE_EP_TYPE: 487 + rcar_gen4_remove_dw_pcie_ep(rcar); 488 + break; 489 + default: 490 + break; 491 + } 492 + } 493 + 494 + static void rcar_gen4_pcie_remove(struct platform_device *pdev) 495 + { 496 + struct rcar_gen4_pcie *rcar = platform_get_drvdata(pdev); 497 + 498 + rcar_gen4_remove_dw_pcie(rcar); 499 + rcar_gen4_pcie_unprepare(rcar); 500 + } 501 + 502 + static const struct of_device_id rcar_gen4_pcie_of_match[] = { 503 + { 504 + .compatible = "renesas,rcar-gen4-pcie", 505 + .data = (void *)DW_PCIE_RC_TYPE, 506 + }, 507 + { 508 + .compatible = "renesas,rcar-gen4-pcie-ep", 509 + .data = (void *)DW_PCIE_EP_TYPE, 510 + }, 511 + {}, 512 + }; 513 + MODULE_DEVICE_TABLE(of, rcar_gen4_pcie_of_match); 514 + 515 + static struct platform_driver rcar_gen4_pcie_driver = { 516 + .driver = { 517 + .name = "pcie-rcar-gen4", 518 + .of_match_table = rcar_gen4_pcie_of_match, 519 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 520 + }, 521 + .probe = rcar_gen4_pcie_probe, 522 + .remove_new = rcar_gen4_pcie_remove, 523 + }; 524 + module_platform_driver(rcar_gen4_pcie_driver); 525 + 526 + MODULE_DESCRIPTION("Renesas R-Car Gen4 PCIe controller driver"); 527 + MODULE_LICENSE("GPL");
+12 -15
drivers/pci/controller/dwc/pcie-tegra194.c
··· 9 9 * Author: Vidya Sagar <vidyas@nvidia.com>
10 10 */
11 11
12 + #include <linux/bitfield.h>
12 13 #include <linux/clk.h>
13 14 #include <linux/debugfs.h>
14 15 #include <linux/delay.h>
··· 126 125
127 126 #define APPL_LTR_MSG_1 0xC4
128 127 #define LTR_MSG_REQ BIT(15)
129 - #define LTR_MST_NO_SNOOP_SHIFT 16
128 + #define LTR_NOSNOOP_MSG_REQ BIT(31)
130 129
131 130 #define APPL_LTR_MSG_2 0xC8
132 131 #define APPL_LTR_MSG_2_LTR_MSG_REQ_STATE BIT(3)
··· 322 321 speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, val);
323 322 width = FIELD_GET(PCI_EXP_LNKSTA_NLW, val);
324 323
325 - val = width * (PCIE_SPEED2MBS_ENC(pcie_link_speed[speed]) / BITS_PER_BYTE);
324 + val = width * PCIE_SPEED2MBS_ENC(pcie_link_speed[speed]);
326 325
327 - if (icc_set_bw(pcie->icc_path, MBps_to_icc(val), 0))
326 + if (icc_set_bw(pcie->icc_path, Mbps_to_icc(val), 0))
328 327 dev_err(pcie->dev, "can't set bw[%u]\n", val);
329 328
330 329 if (speed >= ARRAY_SIZE(pcie_gen_freq))
··· 347 346 */
348 347 val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
349 348 if (val & PCI_EXP_LNKSTA_LBMS) {
350 - current_link_width = (val & PCI_EXP_LNKSTA_NLW) >>
351 - PCI_EXP_LNKSTA_NLW_SHIFT;
349 + current_link_width = FIELD_GET(PCI_EXP_LNKSTA_NLW, val);
352 350 if (pcie->init_link_width > current_link_width) {
353 351 dev_warn(pci->dev, "PCIe link is bad, width reduced\n");
354 352 val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
··· 496 496 ktime_t timeout;
497 497
498 498 /* 110us for both snoop and no-snoop */
499 - val = 110 | (2 << PCI_LTR_SCALE_SHIFT) | LTR_MSG_REQ;
500 - val |= (val << LTR_MST_NO_SNOOP_SHIFT);
499 + val = FIELD_PREP(PCI_LTR_VALUE_MASK, 110) |
500 + FIELD_PREP(PCI_LTR_SCALE_MASK, 2) |
501 + LTR_MSG_REQ |
502 + FIELD_PREP(PCI_LTR_NOSNOOP_VALUE, 110) |
503 + FIELD_PREP(PCI_LTR_NOSNOOP_SCALE, 2) |
504 + LTR_NOSNOOP_MSG_REQ;
501 505 appl_writel(pcie, val, APPL_LTR_MSG_1);
502 506
503 507 /* Send LTR upstream */
··· 764 760
765 761 val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
766 762 PCI_EXP_LNKSTA);
767 - pcie->init_link_width = (val_w & PCI_EXP_LNKSTA_NLW) >>
768 - PCI_EXP_LNKSTA_NLW_SHIFT;
763 + pcie->init_link_width = FIELD_GET(PCI_EXP_LNKSTA_NLW, val_w);
769 764
770 765 val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
771 766 PCI_EXP_LNKCTL);
··· 919 916 val |= (AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001 <<
920 917 AMBA_ERROR_RESPONSE_CRS_SHIFT);
921 918 dw_pcie_writel_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT, val);
922 -
923 - /* Configure Max lane width from DT */
924 - val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP);
925 - val &= ~PCI_EXP_LNKCAP_MLW;
926 - val |= (pcie->num_lanes << PCI_EXP_LNKSTA_NLW_SHIFT);
927 - dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, val);
928 919
929 920 /* Clear Slot Clock Configuration bit if SRNS configuration */
930 921 if (pcie->enable_srns) {
+1 -1
drivers/pci/controller/mobiveil/pcie-mobiveil-host.c
··· 539 539 u32 header_type;
540 540
541 541 header_type = mobiveil_csr_readb(pcie, PCI_HEADER_TYPE);
542 - header_type &= 0x7f;
542 + header_type &= PCI_HEADER_TYPE_MASK;
543 543
544 544 return header_type == PCI_HEADER_TYPE_BRIDGE;
545 545 }
+1 -1
drivers/pci/controller/pci-hyperv.c
··· 545 545 struct hv_dr_state {
546 546 struct list_head list_entry;
547 547 u32 device_count;
548 - struct hv_pcidev_description func[];
548 + struct hv_pcidev_description func[] __counted_by(device_count);
549 549 };
550 550
551 551 struct hv_pci_dev {
+1 -1
drivers/pci/controller/pci-mvebu.c
··· 264 264 */
265 265 lnkcap = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP);
266 266 lnkcap &= ~PCI_EXP_LNKCAP_MLW;
267 - lnkcap |= (port->is_x4 ? 4 : 1) << 4;
267 + lnkcap |= FIELD_PREP(PCI_EXP_LNKCAP_MLW, port->is_x4 ? 4 : 1);
268 268 mvebu_writel(port, lnkcap, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP);
269 269
270 270 /* Disable Root Bridge I/O space, memory space and bus mastering. */
+4 -3
drivers/pci/controller/pci-xgene.c
··· 163 163 int where, int size, u32 *val)
164 164 {
165 165 struct xgene_pcie *port = pcie_bus_to_port(bus);
166 + int ret;
166 167
167 - if (pci_generic_config_read32(bus, devfn, where & ~0x3, 4, val) !=
168 - PCIBIOS_SUCCESSFUL)
169 - return PCIBIOS_DEVICE_NOT_FOUND;
168 + ret = pci_generic_config_read32(bus, devfn, where & ~0x3, 4, val);
169 + if (ret != PCIBIOS_SUCCESSFUL)
170 + return ret;
170 171
171 172 /*
172 173 * The v1 controller has a bug in its Configuration Request Retry
+1 -1
drivers/pci/controller/pcie-iproc.c
··· 783 783
784 784 /* make sure we are not in EP mode */
785 785 iproc_pci_raw_config_read32(pcie, 0, PCI_HEADER_TYPE, 1, &hdr_type);
786 - if ((hdr_type & 0x7f) != PCI_HEADER_TYPE_BRIDGE) {
786 + if ((hdr_type & PCI_HEADER_TYPE_MASK) != PCI_HEADER_TYPE_BRIDGE) {
787 787 dev_err(dev, "in EP mode, hdr=%#02x\n", hdr_type);
788 788 return -EFAULT;
789 789 }
+1 -1
drivers/pci/controller/pcie-rcar-ep.c
··· 43 43 rcar_rmw32(pcie, REXPCAP(0), 0xff, PCI_CAP_ID_EXP);
44 44 rcar_rmw32(pcie, REXPCAP(PCI_EXP_FLAGS),
45 45 PCI_EXP_FLAGS_TYPE, PCI_EXP_TYPE_ENDPOINT << 4);
46 - rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), 0x7f,
46 + rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), PCI_HEADER_TYPE_MASK,
47 47 PCI_HEADER_TYPE_NORMAL);
48 48
49 49 /* Write out the physical slot number = 0 */
+1 -1
drivers/pci/controller/pcie-rcar-host.c
··· 460 460 rcar_rmw32(pcie, REXPCAP(0), 0xff, PCI_CAP_ID_EXP);
461 461 rcar_rmw32(pcie, REXPCAP(PCI_EXP_FLAGS),
462 462 PCI_EXP_FLAGS_TYPE, PCI_EXP_TYPE_ROOT_PORT << 4);
463 - rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), 0x7f,
463 + rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), PCI_HEADER_TYPE_MASK,
464 464 PCI_HEADER_TYPE_BRIDGE);
465 465
466 466 /* Enable data link layer active state reporting */
+31
drivers/pci/controller/pcie-xilinx-common.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */
2 + /*
3 + * (C) Copyright 2023, Xilinx, Inc.
4 + */
5 +
6 + #include <linux/pci.h>
7 + #include <linux/pci-ecam.h>
8 + #include <linux/platform_device.h>
9 +
10 + /* Interrupt registers definitions */
11 + #define XILINX_PCIE_INTR_LINK_DOWN 0
12 + #define XILINX_PCIE_INTR_HOT_RESET 3
13 + #define XILINX_PCIE_INTR_CFG_PCIE_TIMEOUT 4
14 + #define XILINX_PCIE_INTR_CFG_TIMEOUT 8
15 + #define XILINX_PCIE_INTR_CORRECTABLE 9
16 + #define XILINX_PCIE_INTR_NONFATAL 10
17 + #define XILINX_PCIE_INTR_FATAL 11
18 + #define XILINX_PCIE_INTR_CFG_ERR_POISON 12
19 + #define XILINX_PCIE_INTR_PME_TO_ACK_RCVD 15
20 + #define XILINX_PCIE_INTR_INTX 16
21 + #define XILINX_PCIE_INTR_PM_PME_RCVD 17
22 + #define XILINX_PCIE_INTR_MSI 17
23 + #define XILINX_PCIE_INTR_SLV_UNSUPP 20
24 + #define XILINX_PCIE_INTR_SLV_UNEXP 21
25 + #define XILINX_PCIE_INTR_SLV_COMPL 22
26 + #define XILINX_PCIE_INTR_SLV_ERRP 23
27 + #define XILINX_PCIE_INTR_SLV_CMPABT 24
28 + #define XILINX_PCIE_INTR_SLV_ILLBUR 25
29 + #define XILINX_PCIE_INTR_MST_DECERR 26
30 + #define XILINX_PCIE_INTR_MST_SLVERR 27
31 + #define XILINX_PCIE_INTR_SLV_PCIE_TIMEOUT 28
+7 -31
drivers/pci/controller/pcie-xilinx-cpm.c
··· 16 16 #include <linux/of_address.h>
17 17 #include <linux/of_pci.h>
18 18 #include <linux/of_platform.h>
19 - #include <linux/pci.h>
20 - #include <linux/platform_device.h>
21 - #include <linux/pci-ecam.h>
22 19
23 20 #include "../pci.h"
21 + #include "pcie-xilinx-common.h"
24 22
25 23 /* Register definitions */
26 24 #define XILINX_CPM_PCIE_REG_IDR 0x00000E10
··· 36 38 #define XILINX_CPM_PCIE_IR_ENABLE 0x000002A8
37 39 #define XILINX_CPM_PCIE_IR_LOCAL BIT(0)
38 40
39 - /* Interrupt registers definitions */
40 - #define XILINX_CPM_PCIE_INTR_LINK_DOWN 0
41 - #define XILINX_CPM_PCIE_INTR_HOT_RESET 3
42 - #define XILINX_CPM_PCIE_INTR_CFG_PCIE_TIMEOUT 4
43 - #define XILINX_CPM_PCIE_INTR_CFG_TIMEOUT 8
44 - #define XILINX_CPM_PCIE_INTR_CORRECTABLE 9
45 - #define XILINX_CPM_PCIE_INTR_NONFATAL 10
46 - #define XILINX_CPM_PCIE_INTR_FATAL 11
47 - #define XILINX_CPM_PCIE_INTR_CFG_ERR_POISON 12
48 - #define XILINX_CPM_PCIE_INTR_PME_TO_ACK_RCVD 15
49 - #define XILINX_CPM_PCIE_INTR_INTX 16
50 - #define XILINX_CPM_PCIE_INTR_PM_PME_RCVD 17
51 - #define XILINX_CPM_PCIE_INTR_SLV_UNSUPP 20
52 - #define XILINX_CPM_PCIE_INTR_SLV_UNEXP 21
53 - #define XILINX_CPM_PCIE_INTR_SLV_COMPL 22
54 - #define XILINX_CPM_PCIE_INTR_SLV_ERRP 23
55 - #define XILINX_CPM_PCIE_INTR_SLV_CMPABT 24
56 - #define XILINX_CPM_PCIE_INTR_SLV_ILLBUR 25
57 - #define XILINX_CPM_PCIE_INTR_MST_DECERR 26
58 - #define XILINX_CPM_PCIE_INTR_MST_SLVERR 27
59 - #define XILINX_CPM_PCIE_INTR_SLV_PCIE_TIMEOUT 28
60 -
61 - #define IMR(x) BIT(XILINX_CPM_PCIE_INTR_ ##x)
41 + #define IMR(x) BIT(XILINX_PCIE_INTR_ ##x)
62 42
63 43 #define XILINX_CPM_PCIE_IMR_ALL_MASK \
64 44 ( \
··· 299 323 }
300 324
301 325 #define _IC(x, s) \
302 - [XILINX_CPM_PCIE_INTR_ ## x] = { __stringify(x), s }
326 + [XILINX_PCIE_INTR_ ## x] = { __stringify(x), s }
303 327
304 328 static const struct {
305 329 const char *sym;
··· 335 359 d = irq_domain_get_irq_data(port->cpm_domain, irq);
336 360
337 361 switch (d->hwirq) {
338 - case XILINX_CPM_PCIE_INTR_CORRECTABLE:
339 - case XILINX_CPM_PCIE_INTR_NONFATAL:
340 - case XILINX_CPM_PCIE_INTR_FATAL:
362 + case XILINX_PCIE_INTR_CORRECTABLE:
363 + case XILINX_PCIE_INTR_NONFATAL:
364 + case XILINX_PCIE_INTR_FATAL:
341 365 cpm_pcie_clear_err_interrupts(port);
342 366 fallthrough;
343 367
··· 442 466 }
443 467
444 468 port->intx_irq = irq_create_mapping(port->cpm_domain,
445 - XILINX_CPM_PCIE_INTR_INTX);
469 + XILINX_PCIE_INTR_INTX);
446 470 if (!port->intx_irq) {
447 471 dev_err(dev, "Failed to map INTx interrupt\n");
448 472 return -ENXIO;
+814
drivers/pci/controller/pcie-xilinx-dma-pl.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * PCIe host controller driver for Xilinx XDMA PCIe Bridge 4 + * 5 + * Copyright (C) 2023 Xilinx, Inc. All rights reserved. 6 + */ 7 + #include <linux/bitfield.h> 8 + #include <linux/interrupt.h> 9 + #include <linux/irq.h> 10 + #include <linux/irqdomain.h> 11 + #include <linux/kernel.h> 12 + #include <linux/module.h> 13 + #include <linux/msi.h> 14 + #include <linux/of_address.h> 15 + #include <linux/of_pci.h> 16 + 17 + #include "../pci.h" 18 + #include "pcie-xilinx-common.h" 19 + 20 + /* Register definitions */ 21 + #define XILINX_PCIE_DMA_REG_IDR 0x00000138 22 + #define XILINX_PCIE_DMA_REG_IMR 0x0000013c 23 + #define XILINX_PCIE_DMA_REG_PSCR 0x00000144 24 + #define XILINX_PCIE_DMA_REG_RPSC 0x00000148 25 + #define XILINX_PCIE_DMA_REG_MSIBASE1 0x0000014c 26 + #define XILINX_PCIE_DMA_REG_MSIBASE2 0x00000150 27 + #define XILINX_PCIE_DMA_REG_RPEFR 0x00000154 28 + #define XILINX_PCIE_DMA_REG_IDRN 0x00000160 29 + #define XILINX_PCIE_DMA_REG_IDRN_MASK 0x00000164 30 + #define XILINX_PCIE_DMA_REG_MSI_LOW 0x00000170 31 + #define XILINX_PCIE_DMA_REG_MSI_HI 0x00000174 32 + #define XILINX_PCIE_DMA_REG_MSI_LOW_MASK 0x00000178 33 + #define XILINX_PCIE_DMA_REG_MSI_HI_MASK 0x0000017c 34 + 35 + #define IMR(x) BIT(XILINX_PCIE_INTR_ ##x) 36 + 37 + #define XILINX_PCIE_INTR_IMR_ALL_MASK \ 38 + ( \ 39 + IMR(LINK_DOWN) | \ 40 + IMR(HOT_RESET) | \ 41 + IMR(CFG_TIMEOUT) | \ 42 + IMR(CORRECTABLE) | \ 43 + IMR(NONFATAL) | \ 44 + IMR(FATAL) | \ 45 + IMR(INTX) | \ 46 + IMR(MSI) | \ 47 + IMR(SLV_UNSUPP) | \ 48 + IMR(SLV_UNEXP) | \ 49 + IMR(SLV_COMPL) | \ 50 + IMR(SLV_ERRP) | \ 51 + IMR(SLV_CMPABT) | \ 52 + IMR(SLV_ILLBUR) | \ 53 + IMR(MST_DECERR) | \ 54 + IMR(MST_SLVERR) | \ 55 + ) 56 + 57 + #define XILINX_PCIE_DMA_IMR_ALL_MASK 0x0ff30fe9 58 + #define XILINX_PCIE_DMA_IDR_ALL_MASK 0xffffffff 59 + #define XILINX_PCIE_DMA_IDRN_MASK GENMASK(19, 16) 60 + 61 + /* Root Port Error Register definitions */ 62 + #define 
XILINX_PCIE_DMA_RPEFR_ERR_VALID BIT(18) 63 + #define XILINX_PCIE_DMA_RPEFR_REQ_ID GENMASK(15, 0) 64 + #define XILINX_PCIE_DMA_RPEFR_ALL_MASK 0xffffffff 65 + 66 + /* Root Port Interrupt Register definitions */ 67 + #define XILINX_PCIE_DMA_IDRN_SHIFT 16 68 + 69 + /* Root Port Status/control Register definitions */ 70 + #define XILINX_PCIE_DMA_REG_RPSC_BEN BIT(0) 71 + 72 + /* Phy Status/Control Register definitions */ 73 + #define XILINX_PCIE_DMA_REG_PSCR_LNKUP BIT(11) 74 + 75 + /* Number of MSI IRQs */ 76 + #define XILINX_NUM_MSI_IRQS 64 77 + 78 + struct xilinx_msi { 79 + struct irq_domain *msi_domain; 80 + unsigned long *bitmap; 81 + struct irq_domain *dev_domain; 82 + struct mutex lock; /* Protect bitmap variable */ 83 + int irq_msi0; 84 + int irq_msi1; 85 + }; 86 + 87 + /** 88 + * struct pl_dma_pcie - PCIe port information 89 + * @dev: Device pointer 90 + * @reg_base: IO Mapped Register Base 91 + * @irq: Interrupt number 92 + * @cfg: Holds mappings of config space window 93 + * @phys_reg_base: Physical address of reg base 94 + * @intx_domain: Legacy IRQ domain pointer 95 + * @pldma_domain: PL DMA IRQ domain pointer 96 + * @resources: Bus Resources 97 + * @msi: MSI information 98 + * @intx_irq: INTx error interrupt number 99 + * @lock: Lock protecting shared register access 100 + */ 101 + struct pl_dma_pcie { 102 + struct device *dev; 103 + void __iomem *reg_base; 104 + int irq; 105 + struct pci_config_window *cfg; 106 + phys_addr_t phys_reg_base; 107 + struct irq_domain *intx_domain; 108 + struct irq_domain *pldma_domain; 109 + struct list_head resources; 110 + struct xilinx_msi msi; 111 + int intx_irq; 112 + raw_spinlock_t lock; 113 + }; 114 + 115 + static inline u32 pcie_read(struct pl_dma_pcie *port, u32 reg) 116 + { 117 + return readl(port->reg_base + reg); 118 + } 119 + 120 + static inline void pcie_write(struct pl_dma_pcie *port, u32 val, u32 reg) 121 + { 122 + writel(val, port->reg_base + reg); 123 + } 124 + 125 + static inline bool 
xilinx_pl_dma_pcie_link_up(struct pl_dma_pcie *port) 126 + { 127 + return (pcie_read(port, XILINX_PCIE_DMA_REG_PSCR) & 128 + XILINX_PCIE_DMA_REG_PSCR_LNKUP) ? true : false; 129 + } 130 + 131 + static void xilinx_pl_dma_pcie_clear_err_interrupts(struct pl_dma_pcie *port) 132 + { 133 + unsigned long val = pcie_read(port, XILINX_PCIE_DMA_REG_RPEFR); 134 + 135 + if (val & XILINX_PCIE_DMA_RPEFR_ERR_VALID) { 136 + dev_dbg(port->dev, "Requester ID %lu\n", 137 + val & XILINX_PCIE_DMA_RPEFR_REQ_ID); 138 + pcie_write(port, XILINX_PCIE_DMA_RPEFR_ALL_MASK, 139 + XILINX_PCIE_DMA_REG_RPEFR); 140 + } 141 + } 142 + 143 + static bool xilinx_pl_dma_pcie_valid_device(struct pci_bus *bus, 144 + unsigned int devfn) 145 + { 146 + struct pl_dma_pcie *port = bus->sysdata; 147 + 148 + if (!pci_is_root_bus(bus)) { 149 + /* 150 + * Checking whether the link is up is the last line of 151 + * defense, and this check is inherently racy by definition. 152 + * Sending a PIO request to a downstream device when the link is 153 + * down causes an unrecoverable error, and a reset of the entire 154 + * PCIe controller will be needed. We can reduce the likelihood 155 + * of that unrecoverable error by checking whether the link is 156 + * up, but we can't completely prevent it because the link may 157 + * go down between the link-up check and the PIO request. 
158 + */ 159 + if (!xilinx_pl_dma_pcie_link_up(port)) 160 + return false; 161 + } else if (devfn > 0) 162 + /* Only one device down on each root port */ 163 + return false; 164 + 165 + return true; 166 + } 167 + 168 + static void __iomem *xilinx_pl_dma_pcie_map_bus(struct pci_bus *bus, 169 + unsigned int devfn, int where) 170 + { 171 + struct pl_dma_pcie *port = bus->sysdata; 172 + 173 + if (!xilinx_pl_dma_pcie_valid_device(bus, devfn)) 174 + return NULL; 175 + 176 + return port->reg_base + PCIE_ECAM_OFFSET(bus->number, devfn, where); 177 + } 178 + 179 + /* PCIe operations */ 180 + static struct pci_ecam_ops xilinx_pl_dma_pcie_ops = { 181 + .pci_ops = { 182 + .map_bus = xilinx_pl_dma_pcie_map_bus, 183 + .read = pci_generic_config_read, 184 + .write = pci_generic_config_write, 185 + } 186 + }; 187 + 188 + static void xilinx_pl_dma_pcie_enable_msi(struct pl_dma_pcie *port) 189 + { 190 + phys_addr_t msi_addr = port->phys_reg_base; 191 + 192 + pcie_write(port, upper_32_bits(msi_addr), XILINX_PCIE_DMA_REG_MSIBASE1); 193 + pcie_write(port, lower_32_bits(msi_addr), XILINX_PCIE_DMA_REG_MSIBASE2); 194 + } 195 + 196 + static void xilinx_mask_intx_irq(struct irq_data *data) 197 + { 198 + struct pl_dma_pcie *port = irq_data_get_irq_chip_data(data); 199 + unsigned long flags; 200 + u32 mask, val; 201 + 202 + mask = BIT(data->hwirq + XILINX_PCIE_DMA_IDRN_SHIFT); 203 + raw_spin_lock_irqsave(&port->lock, flags); 204 + val = pcie_read(port, XILINX_PCIE_DMA_REG_IDRN_MASK); 205 + pcie_write(port, (val & (~mask)), XILINX_PCIE_DMA_REG_IDRN_MASK); 206 + raw_spin_unlock_irqrestore(&port->lock, flags); 207 + } 208 + 209 + static void xilinx_unmask_intx_irq(struct irq_data *data) 210 + { 211 + struct pl_dma_pcie *port = irq_data_get_irq_chip_data(data); 212 + unsigned long flags; 213 + u32 mask, val; 214 + 215 + mask = BIT(data->hwirq + XILINX_PCIE_DMA_IDRN_SHIFT); 216 + raw_spin_lock_irqsave(&port->lock, flags); 217 + val = pcie_read(port, XILINX_PCIE_DMA_REG_IDRN_MASK); 218 + 
pcie_write(port, (val | mask), XILINX_PCIE_DMA_REG_IDRN_MASK); 219 + raw_spin_unlock_irqrestore(&port->lock, flags); 220 + } 221 + 222 + static struct irq_chip xilinx_leg_irq_chip = { 223 + .name = "pl_dma:INTx", 224 + .irq_mask = xilinx_mask_intx_irq, 225 + .irq_unmask = xilinx_unmask_intx_irq, 226 + }; 227 + 228 + static int xilinx_pl_dma_pcie_intx_map(struct irq_domain *domain, 229 + unsigned int irq, irq_hw_number_t hwirq) 230 + { 231 + irq_set_chip_and_handler(irq, &xilinx_leg_irq_chip, handle_level_irq); 232 + irq_set_chip_data(irq, domain->host_data); 233 + irq_set_status_flags(irq, IRQ_LEVEL); 234 + 235 + return 0; 236 + } 237 + 238 + /* INTx IRQ Domain operations */ 239 + static const struct irq_domain_ops intx_domain_ops = { 240 + .map = xilinx_pl_dma_pcie_intx_map, 241 + }; 242 + 243 + static irqreturn_t xilinx_pl_dma_pcie_msi_handler_high(int irq, void *args) 244 + { 245 + struct xilinx_msi *msi; 246 + unsigned long status; 247 + u32 bit, virq; 248 + struct pl_dma_pcie *port = args; 249 + 250 + msi = &port->msi; 251 + 252 + while ((status = pcie_read(port, XILINX_PCIE_DMA_REG_MSI_HI)) != 0) { 253 + for_each_set_bit(bit, &status, 32) { 254 + pcie_write(port, 1 << bit, XILINX_PCIE_DMA_REG_MSI_HI); 255 + bit = bit + 32; 256 + virq = irq_find_mapping(msi->dev_domain, bit); 257 + if (virq) 258 + generic_handle_irq(virq); 259 + } 260 + } 261 + 262 + return IRQ_HANDLED; 263 + } 264 + 265 + static irqreturn_t xilinx_pl_dma_pcie_msi_handler_low(int irq, void *args) 266 + { 267 + struct pl_dma_pcie *port = args; 268 + struct xilinx_msi *msi; 269 + unsigned long status; 270 + u32 bit, virq; 271 + 272 + msi = &port->msi; 273 + 274 + while ((status = pcie_read(port, XILINX_PCIE_DMA_REG_MSI_LOW)) != 0) { 275 + for_each_set_bit(bit, &status, 32) { 276 + pcie_write(port, 1 << bit, XILINX_PCIE_DMA_REG_MSI_LOW); 277 + virq = irq_find_mapping(msi->dev_domain, bit); 278 + if (virq) 279 + generic_handle_irq(virq); 280 + } 281 + } 282 + 283 + return IRQ_HANDLED; 284 + } 285 
+ 286 + static irqreturn_t xilinx_pl_dma_pcie_event_flow(int irq, void *args) 287 + { 288 + struct pl_dma_pcie *port = args; 289 + unsigned long val; 290 + int i; 291 + 292 + val = pcie_read(port, XILINX_PCIE_DMA_REG_IDR); 293 + val &= pcie_read(port, XILINX_PCIE_DMA_REG_IMR); 294 + for_each_set_bit(i, &val, 32) 295 + generic_handle_domain_irq(port->pldma_domain, i); 296 + 297 + pcie_write(port, val, XILINX_PCIE_DMA_REG_IDR); 298 + 299 + return IRQ_HANDLED; 300 + } 301 + 302 + #define _IC(x, s) \ 303 + [XILINX_PCIE_INTR_ ## x] = { __stringify(x), s } 304 + 305 + static const struct { 306 + const char *sym; 307 + const char *str; 308 + } intr_cause[32] = { 309 + _IC(LINK_DOWN, "Link Down"), 310 + _IC(HOT_RESET, "Hot reset"), 311 + _IC(CFG_TIMEOUT, "ECAM access timeout"), 312 + _IC(CORRECTABLE, "Correctable error message"), 313 + _IC(NONFATAL, "Non fatal error message"), 314 + _IC(FATAL, "Fatal error message"), 315 + _IC(SLV_UNSUPP, "Slave unsupported request"), 316 + _IC(SLV_UNEXP, "Slave unexpected completion"), 317 + _IC(SLV_COMPL, "Slave completion timeout"), 318 + _IC(SLV_ERRP, "Slave Error Poison"), 319 + _IC(SLV_CMPABT, "Slave Completer Abort"), 320 + _IC(SLV_ILLBUR, "Slave Illegal Burst"), 321 + _IC(MST_DECERR, "Master decode error"), 322 + _IC(MST_SLVERR, "Master slave error"), 323 + }; 324 + 325 + static irqreturn_t xilinx_pl_dma_pcie_intr_handler(int irq, void *dev_id) 326 + { 327 + struct pl_dma_pcie *port = (struct pl_dma_pcie *)dev_id; 328 + struct device *dev = port->dev; 329 + struct irq_data *d; 330 + 331 + d = irq_domain_get_irq_data(port->pldma_domain, irq); 332 + switch (d->hwirq) { 333 + case XILINX_PCIE_INTR_CORRECTABLE: 334 + case XILINX_PCIE_INTR_NONFATAL: 335 + case XILINX_PCIE_INTR_FATAL: 336 + xilinx_pl_dma_pcie_clear_err_interrupts(port); 337 + fallthrough; 338 + 339 + default: 340 + if (intr_cause[d->hwirq].str) 341 + dev_warn(dev, "%s\n", intr_cause[d->hwirq].str); 342 + else 343 + dev_warn(dev, "Unknown IRQ %ld\n", d->hwirq); 344 + } 
345 + 346 + return IRQ_HANDLED; 347 + } 348 + 349 + static struct irq_chip xilinx_msi_irq_chip = { 350 + .name = "pl_dma:PCIe MSI", 351 + .irq_enable = pci_msi_unmask_irq, 352 + .irq_disable = pci_msi_mask_irq, 353 + .irq_mask = pci_msi_mask_irq, 354 + .irq_unmask = pci_msi_unmask_irq, 355 + }; 356 + 357 + static struct msi_domain_info xilinx_msi_domain_info = { 358 + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | 359 + MSI_FLAG_MULTI_PCI_MSI), 360 + .chip = &xilinx_msi_irq_chip, 361 + }; 362 + 363 + static void xilinx_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) 364 + { 365 + struct pl_dma_pcie *pcie = irq_data_get_irq_chip_data(data); 366 + phys_addr_t msi_addr = pcie->phys_reg_base; 367 + 368 + msg->address_lo = lower_32_bits(msi_addr); 369 + msg->address_hi = upper_32_bits(msi_addr); 370 + msg->data = data->hwirq; 371 + } 372 + 373 + static int xilinx_msi_set_affinity(struct irq_data *irq_data, 374 + const struct cpumask *mask, bool force) 375 + { 376 + return -EINVAL; 377 + } 378 + 379 + static struct irq_chip xilinx_irq_chip = { 380 + .name = "pl_dma:MSI", 381 + .irq_compose_msi_msg = xilinx_compose_msi_msg, 382 + .irq_set_affinity = xilinx_msi_set_affinity, 383 + }; 384 + 385 + static int xilinx_irq_domain_alloc(struct irq_domain *domain, unsigned int virq, 386 + unsigned int nr_irqs, void *args) 387 + { 388 + struct pl_dma_pcie *pcie = domain->host_data; 389 + struct xilinx_msi *msi = &pcie->msi; 390 + int bit, i; 391 + 392 + mutex_lock(&msi->lock); 393 + bit = bitmap_find_free_region(msi->bitmap, XILINX_NUM_MSI_IRQS, 394 + get_count_order(nr_irqs)); 395 + if (bit < 0) { 396 + mutex_unlock(&msi->lock); 397 + return -ENOSPC; 398 + } 399 + 400 + for (i = 0; i < nr_irqs; i++) { 401 + irq_domain_set_info(domain, virq + i, bit + i, &xilinx_irq_chip, 402 + domain->host_data, handle_simple_irq, 403 + NULL, NULL); 404 + } 405 + mutex_unlock(&msi->lock); 406 + 407 + return 0; 408 + } 409 + 410 + static void 
xilinx_irq_domain_free(struct irq_domain *domain, unsigned int virq, 411 + unsigned int nr_irqs) 412 + { 413 + struct irq_data *data = irq_domain_get_irq_data(domain, virq); 414 + struct pl_dma_pcie *pcie = irq_data_get_irq_chip_data(data); 415 + struct xilinx_msi *msi = &pcie->msi; 416 + 417 + mutex_lock(&msi->lock); 418 + bitmap_release_region(msi->bitmap, data->hwirq, 419 + get_count_order(nr_irqs)); 420 + mutex_unlock(&msi->lock); 421 + } 422 + 423 + static const struct irq_domain_ops dev_msi_domain_ops = { 424 + .alloc = xilinx_irq_domain_alloc, 425 + .free = xilinx_irq_domain_free, 426 + }; 427 + 428 + static void xilinx_pl_dma_pcie_free_irq_domains(struct pl_dma_pcie *port) 429 + { 430 + struct xilinx_msi *msi = &port->msi; 431 + 432 + if (port->intx_domain) { 433 + irq_domain_remove(port->intx_domain); 434 + port->intx_domain = NULL; 435 + } 436 + 437 + if (msi->dev_domain) { 438 + irq_domain_remove(msi->dev_domain); 439 + msi->dev_domain = NULL; 440 + } 441 + 442 + if (msi->msi_domain) { 443 + irq_domain_remove(msi->msi_domain); 444 + msi->msi_domain = NULL; 445 + } 446 + } 447 + 448 + static int xilinx_pl_dma_pcie_init_msi_irq_domain(struct pl_dma_pcie *port) 449 + { 450 + struct device *dev = port->dev; 451 + struct xilinx_msi *msi = &port->msi; 452 + int size = BITS_TO_LONGS(XILINX_NUM_MSI_IRQS) * sizeof(long); 453 + struct fwnode_handle *fwnode = of_node_to_fwnode(port->dev->of_node); 454 + 455 + msi->dev_domain = irq_domain_add_linear(NULL, XILINX_NUM_MSI_IRQS, 456 + &dev_msi_domain_ops, port); 457 + if (!msi->dev_domain) 458 + goto out; 459 + 460 + msi->msi_domain = pci_msi_create_irq_domain(fwnode, 461 + &xilinx_msi_domain_info, 462 + msi->dev_domain); 463 + if (!msi->msi_domain) 464 + goto out; 465 + 466 + mutex_init(&msi->lock); 467 + msi->bitmap = kzalloc(size, GFP_KERNEL); 468 + if (!msi->bitmap) 469 + goto out; 470 + 471 + raw_spin_lock_init(&port->lock); 472 + xilinx_pl_dma_pcie_enable_msi(port); 473 + 474 + return 0; 475 + 476 + out: 477 + 
xilinx_pl_dma_pcie_free_irq_domains(port); 478 + dev_err(dev, "Failed to allocate MSI IRQ domains\n"); 479 + 480 + return -ENOMEM; 481 + } 482 + 483 + /* 484 + * INTx error interrupts are Xilinx controller specific interrupt, used to 485 + * notify user about errors such as cfg timeout, slave unsupported requests, 486 + * fatal and non fatal error etc. 487 + */ 488 + 489 + static irqreturn_t xilinx_pl_dma_pcie_intx_flow(int irq, void *args) 490 + { 491 + unsigned long val; 492 + int i; 493 + struct pl_dma_pcie *port = args; 494 + 495 + val = FIELD_GET(XILINX_PCIE_DMA_IDRN_MASK, 496 + pcie_read(port, XILINX_PCIE_DMA_REG_IDRN)); 497 + 498 + for_each_set_bit(i, &val, PCI_NUM_INTX) 499 + generic_handle_domain_irq(port->intx_domain, i); 500 + return IRQ_HANDLED; 501 + } 502 + 503 + static void xilinx_pl_dma_pcie_mask_event_irq(struct irq_data *d) 504 + { 505 + struct pl_dma_pcie *port = irq_data_get_irq_chip_data(d); 506 + u32 val; 507 + 508 + raw_spin_lock(&port->lock); 509 + val = pcie_read(port, XILINX_PCIE_DMA_REG_IMR); 510 + val &= ~BIT(d->hwirq); 511 + pcie_write(port, val, XILINX_PCIE_DMA_REG_IMR); 512 + raw_spin_unlock(&port->lock); 513 + } 514 + 515 + static void xilinx_pl_dma_pcie_unmask_event_irq(struct irq_data *d) 516 + { 517 + struct pl_dma_pcie *port = irq_data_get_irq_chip_data(d); 518 + u32 val; 519 + 520 + raw_spin_lock(&port->lock); 521 + val = pcie_read(port, XILINX_PCIE_DMA_REG_IMR); 522 + val |= BIT(d->hwirq); 523 + pcie_write(port, val, XILINX_PCIE_DMA_REG_IMR); 524 + raw_spin_unlock(&port->lock); 525 + } 526 + 527 + static struct irq_chip xilinx_pl_dma_pcie_event_irq_chip = { 528 + .name = "pl_dma:RC-Event", 529 + .irq_mask = xilinx_pl_dma_pcie_mask_event_irq, 530 + .irq_unmask = xilinx_pl_dma_pcie_unmask_event_irq, 531 + }; 532 + 533 + static int xilinx_pl_dma_pcie_event_map(struct irq_domain *domain, 534 + unsigned int irq, irq_hw_number_t hwirq) 535 + { 536 + irq_set_chip_and_handler(irq, &xilinx_pl_dma_pcie_event_irq_chip, 537 + 
handle_level_irq); 538 + irq_set_chip_data(irq, domain->host_data); 539 + irq_set_status_flags(irq, IRQ_LEVEL); 540 + 541 + return 0; 542 + } 543 + 544 + static const struct irq_domain_ops event_domain_ops = { 545 + .map = xilinx_pl_dma_pcie_event_map, 546 + }; 547 + 548 + /** 549 + * xilinx_pl_dma_pcie_init_irq_domain - Initialize IRQ domain 550 + * @port: PCIe port information 551 + * 552 + * Return: '0' on success and error value on failure. 553 + */ 554 + static int xilinx_pl_dma_pcie_init_irq_domain(struct pl_dma_pcie *port) 555 + { 556 + struct device *dev = port->dev; 557 + struct device_node *node = dev->of_node; 558 + struct device_node *pcie_intc_node; 559 + int ret; 560 + 561 + /* Setup INTx */ 562 + pcie_intc_node = of_get_child_by_name(node, "interrupt-controller"); 563 + if (!pcie_intc_node) { 564 + dev_err(dev, "No PCIe Intc node found\n"); 565 + return -EINVAL; 566 + } 567 + 568 + port->pldma_domain = irq_domain_add_linear(pcie_intc_node, 32, 569 + &event_domain_ops, port); 570 + if (!port->pldma_domain) 571 + return -ENOMEM; 572 + 573 + irq_domain_update_bus_token(port->pldma_domain, DOMAIN_BUS_NEXUS); 574 + 575 + port->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX, 576 + &intx_domain_ops, port); 577 + if (!port->intx_domain) { 578 + dev_err(dev, "Failed to get a INTx IRQ domain\n"); 579 + return PTR_ERR(port->intx_domain); 580 + } 581 + 582 + irq_domain_update_bus_token(port->intx_domain, DOMAIN_BUS_WIRED); 583 + 584 + ret = xilinx_pl_dma_pcie_init_msi_irq_domain(port); 585 + if (ret != 0) { 586 + irq_domain_remove(port->intx_domain); 587 + return -ENOMEM; 588 + } 589 + 590 + of_node_put(pcie_intc_node); 591 + raw_spin_lock_init(&port->lock); 592 + 593 + return 0; 594 + } 595 + 596 + static int xilinx_pl_dma_pcie_setup_irq(struct pl_dma_pcie *port) 597 + { 598 + struct device *dev = port->dev; 599 + struct platform_device *pdev = to_platform_device(dev); 600 + int i, irq, err; 601 + 602 + port->irq = platform_get_irq(pdev, 0); 
603 + if (port->irq < 0) 604 + return port->irq; 605 + 606 + for (i = 0; i < ARRAY_SIZE(intr_cause); i++) { 607 + int err; 608 + 609 + if (!intr_cause[i].str) 610 + continue; 611 + 612 + irq = irq_create_mapping(port->pldma_domain, i); 613 + if (!irq) { 614 + dev_err(dev, "Failed to map interrupt\n"); 615 + return -ENXIO; 616 + } 617 + 618 + err = devm_request_irq(dev, irq, 619 + xilinx_pl_dma_pcie_intr_handler, 620 + IRQF_SHARED | IRQF_NO_THREAD, 621 + intr_cause[i].sym, port); 622 + if (err) { 623 + dev_err(dev, "Failed to request IRQ %d\n", irq); 624 + return err; 625 + } 626 + } 627 + 628 + port->intx_irq = irq_create_mapping(port->pldma_domain, 629 + XILINX_PCIE_INTR_INTX); 630 + if (!port->intx_irq) { 631 + dev_err(dev, "Failed to map INTx interrupt\n"); 632 + return -ENXIO; 633 + } 634 + 635 + err = devm_request_irq(dev, port->intx_irq, xilinx_pl_dma_pcie_intx_flow, 636 + IRQF_SHARED | IRQF_NO_THREAD, NULL, port); 637 + if (err) { 638 + dev_err(dev, "Failed to request INTx IRQ %d\n", irq); 639 + return err; 640 + } 641 + 642 + err = devm_request_irq(dev, port->irq, xilinx_pl_dma_pcie_event_flow, 643 + IRQF_SHARED | IRQF_NO_THREAD, NULL, port); 644 + if (err) { 645 + dev_err(dev, "Failed to request event IRQ %d\n", irq); 646 + return err; 647 + } 648 + 649 + return 0; 650 + } 651 + 652 + static void xilinx_pl_dma_pcie_init_port(struct pl_dma_pcie *port) 653 + { 654 + if (xilinx_pl_dma_pcie_link_up(port)) 655 + dev_info(port->dev, "PCIe Link is UP\n"); 656 + else 657 + dev_info(port->dev, "PCIe Link is DOWN\n"); 658 + 659 + /* Disable all interrupts */ 660 + pcie_write(port, ~XILINX_PCIE_DMA_IDR_ALL_MASK, 661 + XILINX_PCIE_DMA_REG_IMR); 662 + 663 + /* Clear pending interrupts */ 664 + pcie_write(port, pcie_read(port, XILINX_PCIE_DMA_REG_IDR) & 665 + XILINX_PCIE_DMA_IMR_ALL_MASK, 666 + XILINX_PCIE_DMA_REG_IDR); 667 + 668 + /* Needed for MSI DECODE MODE */ 669 + pcie_write(port, XILINX_PCIE_DMA_IDR_ALL_MASK, 670 + XILINX_PCIE_DMA_REG_MSI_LOW_MASK); 671 + 
pcie_write(port, XILINX_PCIE_DMA_IDR_ALL_MASK, 672 + XILINX_PCIE_DMA_REG_MSI_HI_MASK); 673 + 674 + /* Set the Bridge enable bit */ 675 + pcie_write(port, pcie_read(port, XILINX_PCIE_DMA_REG_RPSC) | 676 + XILINX_PCIE_DMA_REG_RPSC_BEN, 677 + XILINX_PCIE_DMA_REG_RPSC); 678 + } 679 + 680 + static int xilinx_request_msi_irq(struct pl_dma_pcie *port) 681 + { 682 + struct device *dev = port->dev; 683 + struct platform_device *pdev = to_platform_device(dev); 684 + int ret; 685 + 686 + port->msi.irq_msi0 = platform_get_irq_byname(pdev, "msi0"); 687 + if (port->msi.irq_msi0 <= 0) { 688 + dev_err(dev, "Unable to find msi0 IRQ line\n"); 689 + return port->msi.irq_msi0; 690 + } 691 + 692 + ret = devm_request_irq(dev, port->msi.irq_msi0, xilinx_pl_dma_pcie_msi_handler_low, 693 + IRQF_SHARED | IRQF_NO_THREAD, "xlnx-pcie-dma-pl", 694 + port); 695 + if (ret) { 696 + dev_err(dev, "Failed to register interrupt\n"); 697 + return ret; 698 + } 699 + 700 + port->msi.irq_msi1 = platform_get_irq_byname(pdev, "msi1"); 701 + if (port->msi.irq_msi1 <= 0) { 702 + dev_err(dev, "Unable to find msi1 IRQ line\n"); 703 + return port->msi.irq_msi1; 704 + } 705 + 706 + ret = devm_request_irq(dev, port->msi.irq_msi1, xilinx_pl_dma_pcie_msi_handler_high, 707 + IRQF_SHARED | IRQF_NO_THREAD, "xlnx-pcie-dma-pl", 708 + port); 709 + if (ret) { 710 + dev_err(dev, "Failed to register interrupt\n"); 711 + return ret; 712 + } 713 + 714 + return 0; 715 + } 716 + 717 + static int xilinx_pl_dma_pcie_parse_dt(struct pl_dma_pcie *port, 718 + struct resource *bus_range) 719 + { 720 + struct device *dev = port->dev; 721 + struct platform_device *pdev = to_platform_device(dev); 722 + struct resource *res; 723 + int err; 724 + 725 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 726 + if (!res) { 727 + dev_err(dev, "Missing \"reg\" property\n"); 728 + return -ENXIO; 729 + } 730 + port->phys_reg_base = res->start; 731 + 732 + port->cfg = pci_ecam_create(dev, res, bus_range, &xilinx_pl_dma_pcie_ops); 733 + if 
(IS_ERR(port->cfg)) 734 + return PTR_ERR(port->cfg); 735 + 736 + port->reg_base = port->cfg->win; 737 + 738 + err = xilinx_request_msi_irq(port); 739 + if (err) { 740 + pci_ecam_free(port->cfg); 741 + return err; 742 + } 743 + 744 + return 0; 745 + } 746 + 747 + static int xilinx_pl_dma_pcie_probe(struct platform_device *pdev) 748 + { 749 + struct device *dev = &pdev->dev; 750 + struct pl_dma_pcie *port; 751 + struct pci_host_bridge *bridge; 752 + struct resource_entry *bus; 753 + int err; 754 + 755 + bridge = devm_pci_alloc_host_bridge(dev, sizeof(*port)); 756 + if (!bridge) 757 + return -ENODEV; 758 + 759 + port = pci_host_bridge_priv(bridge); 760 + 761 + port->dev = dev; 762 + 763 + bus = resource_list_first_type(&bridge->windows, IORESOURCE_BUS); 764 + if (!bus) 765 + return -ENODEV; 766 + 767 + err = xilinx_pl_dma_pcie_parse_dt(port, bus->res); 768 + if (err) { 769 + dev_err(dev, "Parsing DT failed\n"); 770 + return err; 771 + } 772 + 773 + xilinx_pl_dma_pcie_init_port(port); 774 + 775 + err = xilinx_pl_dma_pcie_init_irq_domain(port); 776 + if (err) 777 + goto err_irq_domain; 778 + 779 + err = xilinx_pl_dma_pcie_setup_irq(port); 780 + 781 + bridge->sysdata = port; 782 + bridge->ops = &xilinx_pl_dma_pcie_ops.pci_ops; 783 + 784 + err = pci_host_probe(bridge); 785 + if (err < 0) 786 + goto err_host_bridge; 787 + 788 + return 0; 789 + 790 + err_host_bridge: 791 + xilinx_pl_dma_pcie_free_irq_domains(port); 792 + 793 + err_irq_domain: 794 + pci_ecam_free(port->cfg); 795 + return err; 796 + } 797 + 798 + static const struct of_device_id xilinx_pl_dma_pcie_of_match[] = { 799 + { 800 + .compatible = "xlnx,xdma-host-3.00", 801 + }, 802 + {} 803 + }; 804 + 805 + static struct platform_driver xilinx_pl_dma_pcie_driver = { 806 + .driver = { 807 + .name = "xilinx-xdma-pcie", 808 + .of_match_table = xilinx_pl_dma_pcie_of_match, 809 + .suppress_bind_attrs = true, 810 + }, 811 + .probe = xilinx_pl_dma_pcie_probe, 812 + }; 813 + 814 + 
builtin_platform_driver(xilinx_pl_dma_pcie_driver);
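The INTx flow handler in the driver above reads the latched decode register (XILINX_PCIE_DMA_REG_IDRN) and walks its set bits, dispatching one virtual IRQ per asserted INTx line. A minimal user-space sketch of that demux pattern, with a plain loop standing in for the kernel's `for_each_set_bit()` and a callback standing in for `generic_handle_domain_irq()`:

```c
#define PCI_NUM_INTX 4

/* Dispatch each set bit of a latched interrupt-decode value as one
 * INTx line (bit 0 = INTA ... bit 3 = INTD); returns how many fired. */
static int demux_intx(unsigned long idrn, void (*handle)(int line))
{
    int handled = 0;
    int i;

    for (i = 0; i < PCI_NUM_INTX; i++) {
        if (idrn & (1UL << i)) {
            handle(i);      /* stands in for generic_handle_domain_irq() */
            handled++;
        }
    }
    return handled;
}
```

The kernel version additionally masks the raw register through XILINX_PCIE_DMA_IDRN_MASK with FIELD_GET() before the bit walk.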
+3 -15
drivers/pci/controller/pcie-xilinx-nwl.c
··· 126 126 #define E_ECAM_CR_ENABLE BIT(0) 127 127 #define E_ECAM_SIZE_LOC GENMASK(20, 16) 128 128 #define E_ECAM_SIZE_SHIFT 16 129 - #define NWL_ECAM_VALUE_DEFAULT 12 129 + #define NWL_ECAM_MAX_SIZE 16 130 130 131 131 #define CFG_DMA_REG_BAR GENMASK(2, 0) 132 132 #define CFG_PCIE_CACHE GENMASK(7, 0) ··· 165 165 u32 ecam_size; 166 166 int irq_intx; 167 167 int irq_misc; 168 - u32 ecam_value; 169 - u8 last_busno; 170 168 struct nwl_msi msi; 171 169 struct irq_domain *legacy_irq_domain; 172 170 struct clk *clk; ··· 623 625 { 624 626 struct device *dev = pcie->dev; 625 627 struct platform_device *pdev = to_platform_device(dev); 626 - u32 breg_val, ecam_val, first_busno = 0; 628 + u32 breg_val, ecam_val; 627 629 int err; 628 630 629 631 breg_val = nwl_bridge_readl(pcie, E_BREG_CAPABILITIES) & BREG_PRESENT; ··· 673 675 E_ECAM_CR_ENABLE, E_ECAM_CONTROL); 674 676 675 677 nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, E_ECAM_CONTROL) | 676 - (pcie->ecam_value << E_ECAM_SIZE_SHIFT), 678 + (NWL_ECAM_MAX_SIZE << E_ECAM_SIZE_SHIFT), 677 679 E_ECAM_CONTROL); 678 680 679 681 nwl_bridge_writel(pcie, lower_32_bits(pcie->phys_ecam_base), 680 682 E_ECAM_BASE_LO); 681 683 nwl_bridge_writel(pcie, upper_32_bits(pcie->phys_ecam_base), 682 684 E_ECAM_BASE_HI); 683 - 684 - /* Get bus range */ 685 - ecam_val = nwl_bridge_readl(pcie, E_ECAM_CONTROL); 686 - pcie->last_busno = (ecam_val & E_ECAM_SIZE_LOC) >> E_ECAM_SIZE_SHIFT; 687 - /* Write primary, secondary and subordinate bus numbers */ 688 - ecam_val = first_busno; 689 - ecam_val |= (first_busno + 1) << 8; 690 - ecam_val |= (pcie->last_busno << E_ECAM_SIZE_SHIFT); 691 - writel(ecam_val, (pcie->ecam_base + PCI_PRIMARY_BUS)); 692 685 693 686 if (nwl_pcie_link_up(pcie)) 694 687 dev_info(dev, "Link is UP\n"); ··· 781 792 pcie = pci_host_bridge_priv(bridge); 782 793 783 794 pcie->dev = dev; 784 - pcie->ecam_value = NWL_ECAM_VALUE_DEFAULT; 785 795 786 796 err = nwl_pcie_parse_dt(pcie, pdev); 787 797 if (err) {
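Both this nwl hunk and the pl-dma driver above program an ECAM aperture that `pci_ecam_create()` then maps. The flat ECAM layout gives each function 4 KiB of config space, indexed by bus, devfn, and register offset; a sketch of the address arithmetic (illustrative, not the kernel's `PCIE_ECAM_OFFSET()` macro itself):

```c
/* ECAM offset: bus in bits 27:20, devfn in bits 19:12, register in
 * bits 11:0, i.e. 4 KiB of config space per function. */
static unsigned long ecam_offset(unsigned int bus, unsigned int devfn,
                                 unsigned int reg)
{
    return ((unsigned long)bus << 20) | ((unsigned long)devfn << 12) |
           (reg & 0xfffUL);
}
```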
+3 -7
drivers/pci/controller/vmd.c
··· 525 525 base = vmd->cfgbar + PCIE_ECAM_OFFSET(bus, 526 526 PCI_DEVFN(dev, 0), 0); 527 527 528 - hdr_type = readb(base + PCI_HEADER_TYPE) & 529 - PCI_HEADER_TYPE_MASK; 528 + hdr_type = readb(base + PCI_HEADER_TYPE); 530 529 531 - functions = (hdr_type & 0x80) ? 8 : 1; 530 + functions = (hdr_type & PCI_HEADER_TYPE_MFD) ? 8 : 1; 532 531 for (fn = 0; fn < functions; fn++) { 533 532 base = vmd->cfgbar + PCIE_ECAM_OFFSET(bus, 534 533 PCI_DEVFN(dev, fn), 0); ··· 1077 1078 struct vmd_dev *vmd = pci_get_drvdata(pdev); 1078 1079 int err, i; 1079 1080 1080 - if (vmd->irq_domain) 1081 - vmd_set_msi_remapping(vmd, true); 1082 - else 1083 - vmd_set_msi_remapping(vmd, false); 1081 + vmd_set_msi_remapping(vmd, !!vmd->irq_domain); 1084 1082 1085 1083 for (i = 0; i < vmd->msix_count; i++) { 1086 1084 err = devm_request_irq(dev, vmd->irqs[i].virq,
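The vmd hunk (like the cpqphp ones further down) replaces the magic `0x80`/`0x7F` header-type tests with named constants. Bit 7 of the Header Type register flags a multi-function device and bits 6:0 select the layout; a compact model of the two checks, with values matching `include/uapi/linux/pci_regs.h`:

```c
#include <stdint.h>

#define PCI_HEADER_TYPE_MASK    0x7f
#define PCI_HEADER_TYPE_NORMAL  0x00
#define PCI_HEADER_TYPE_BRIDGE  0x01
#define PCI_HEADER_TYPE_MFD     0x80

/* How many functions to probe behind this header type. */
static int pci_num_functions(uint8_t hdr_type)
{
    return (hdr_type & PCI_HEADER_TYPE_MFD) ? 8 : 1;
}

/* Layout test: mask off the MFD bit before comparing. */
static int pci_is_bridge_hdr(uint8_t hdr_type)
{
    return (hdr_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE;
}
```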
+6 -7
drivers/pci/endpoint/pci-epc-core.c
··· 38 38 */ 39 39 void pci_epc_put(struct pci_epc *epc) 40 40 { 41 - if (!epc || IS_ERR(epc)) 41 + if (IS_ERR_OR_NULL(epc)) 42 42 return; 43 43 44 44 module_put(epc->ops->owner); ··· 660 660 struct list_head *list; 661 661 u32 func_no = 0; 662 662 663 - if (!epc || IS_ERR(epc) || !epf) 663 + if (IS_ERR_OR_NULL(epc) || !epf) 664 664 return; 665 665 666 666 if (type == PRIMARY_INTERFACE) { ··· 691 691 { 692 692 struct pci_epf *epf; 693 693 694 - if (!epc || IS_ERR(epc)) 694 + if (IS_ERR_OR_NULL(epc)) 695 695 return; 696 696 697 697 mutex_lock(&epc->list_lock); ··· 717 717 { 718 718 struct pci_epf *epf; 719 719 720 - if (!epc || IS_ERR(epc)) 720 + if (IS_ERR_OR_NULL(epc)) 721 721 return; 722 722 723 723 mutex_lock(&epc->list_lock); ··· 743 743 { 744 744 struct pci_epf *epf; 745 745 746 - if (!epc || IS_ERR(epc)) 746 + if (IS_ERR_OR_NULL(epc)) 747 747 return; 748 748 749 749 mutex_lock(&epc->list_lock); ··· 769 769 { 770 770 struct pci_epf *epf; 771 771 772 - if (!epc || IS_ERR(epc)) 772 + if (IS_ERR_OR_NULL(epc)) 773 773 return; 774 774 775 775 mutex_lock(&epc->list_lock); ··· 869 869 870 870 put_dev: 871 871 put_device(&epc->dev); 872 - kfree(epc); 873 872 874 873 err_ret: 875 874 return ERR_PTR(ret);
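The epc-core hunks collapse the open-coded `!epc || IS_ERR(epc)` pair into `IS_ERR_OR_NULL()`. The convention that makes this work: the top 4095 pointer values encode `-errno`, so one range check detects an encoded error and the macro adds the NULL case. A user-space model of the idiom:

```c
#define MAX_ERRNO 4095

/* Encode an errno value in a pointer, as the kernel's ERR_PTR() does. */
static void *ERR_PTR(long error) { return (void *)error; }
static long PTR_ERR(const void *ptr) { return (long)ptr; }

/* An error pointer lives in the top MAX_ERRNO addresses. */
static int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static int IS_ERR_OR_NULL(const void *ptr)
{
    return !ptr || IS_ERR(ptr);
}
```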
+12
drivers/pci/hotplug/Kconfig
··· 61 61 62 62 When in doubt, say N. 63 63 64 + config HOTPLUG_PCI_ACPI_AMPERE_ALTRA 65 + tristate "ACPI PCI Hotplug driver Ampere Altra extensions" 66 + depends on HOTPLUG_PCI_ACPI 67 + depends on HAVE_ARM_SMCCC_DISCOVERY 68 + help 69 + Say Y here if you have an Ampere Altra system. 70 + 71 + To compile this driver as a module, choose M here: the 72 + module will be called acpiphp_ampere_altra. 73 + 74 + When in doubt, say N. 75 + 64 76 config HOTPLUG_PCI_ACPI_IBM 65 77 tristate "ACPI PCI Hotplug driver IBM extensions" 66 78 depends on HOTPLUG_PCI_ACPI
+1
drivers/pci/hotplug/Makefile
··· 23 23 24 24 # acpiphp_ibm extends acpiphp, so should be linked afterwards. 25 25 26 + obj-$(CONFIG_HOTPLUG_PCI_ACPI_AMPERE_ALTRA) += acpiphp_ampere_altra.o 26 27 obj-$(CONFIG_HOTPLUG_PCI_ACPI_IBM) += acpiphp_ibm.o 27 28 28 29 pci_hotplug-objs := pci_hotplug_core.o
+127
drivers/pci/hotplug/acpiphp_ampere_altra.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * ACPI PCI Hot Plug Extension for Ampere Altra. Allows control of 4 + * attention LEDs via requests to system firmware. 5 + * 6 + * Copyright (C) 2023 Ampere Computing LLC 7 + */ 8 + 9 + #define pr_fmt(fmt) "acpiphp_ampere_altra: " fmt 10 + 11 + #include <linux/init.h> 12 + #include <linux/module.h> 13 + #include <linux/pci.h> 14 + #include <linux/pci_hotplug.h> 15 + #include <linux/platform_device.h> 16 + 17 + #include "acpiphp.h" 18 + 19 + #define HANDLE_OPEN 0xb0200000 20 + #define HANDLE_CLOSE 0xb0300000 21 + #define REQUEST 0xf0700000 22 + #define LED_CMD 0x00000004 23 + #define LED_ATTENTION 0x00000002 24 + #define LED_SET_ON 0x00000001 25 + #define LED_SET_OFF 0x00000002 26 + #define LED_SET_BLINK 0x00000003 27 + 28 + static u32 led_service_id[4]; 29 + 30 + static int led_status(u8 status) 31 + { 32 + switch (status) { 33 + case 1: return LED_SET_ON; 34 + case 2: return LED_SET_BLINK; 35 + default: return LED_SET_OFF; 36 + } 37 + } 38 + 39 + static int set_attention_status(struct hotplug_slot *slot, u8 status) 40 + { 41 + struct arm_smccc_res res; 42 + struct pci_bus *bus; 43 + struct pci_dev *root_port; 44 + unsigned long flags; 45 + u32 handle; 46 + int ret = 0; 47 + 48 + bus = slot->pci_slot->bus; 49 + root_port = pcie_find_root_port(bus->self); 50 + if (!root_port) 51 + return -ENODEV; 52 + 53 + local_irq_save(flags); 54 + arm_smccc_smc(HANDLE_OPEN, led_service_id[0], led_service_id[1], 55 + led_service_id[2], led_service_id[3], 0, 0, 0, &res); 56 + if (res.a0) { 57 + ret = -ENODEV; 58 + goto out; 59 + } 60 + handle = res.a1 & 0xffff0000; 61 + 62 + arm_smccc_smc(REQUEST, LED_CMD, led_status(status), LED_ATTENTION, 63 + (PCI_SLOT(root_port->devfn) << 4) | (pci_domain_nr(bus) & 0xf), 64 + 0, 0, handle, &res); 65 + if (res.a0) 66 + ret = -ENODEV; 67 + 68 + arm_smccc_smc(HANDLE_CLOSE, handle, 0, 0, 0, 0, 0, 0, &res); 69 + 70 + out: 71 + local_irq_restore(flags); 72 + return ret; 73 + } 74 + 75 + static 
int get_attention_status(struct hotplug_slot *slot, u8 *status) 76 + { 77 + return -EINVAL; 78 + } 79 + 80 + static struct acpiphp_attention_info ampere_altra_attn = { 81 + .set_attn = set_attention_status, 82 + .get_attn = get_attention_status, 83 + .owner = THIS_MODULE, 84 + }; 85 + 86 + static int altra_led_probe(struct platform_device *pdev) 87 + { 88 + struct fwnode_handle *fwnode = dev_fwnode(&pdev->dev); 89 + int ret; 90 + 91 + ret = fwnode_property_read_u32_array(fwnode, "uuid", led_service_id, 4); 92 + if (ret) { 93 + dev_err(&pdev->dev, "can't find uuid\n"); 94 + return ret; 95 + } 96 + 97 + ret = acpiphp_register_attention(&ampere_altra_attn); 98 + if (ret) { 99 + dev_err(&pdev->dev, "can't register driver\n"); 100 + return ret; 101 + } 102 + return 0; 103 + } 104 + 105 + static void altra_led_remove(struct platform_device *pdev) 106 + { 107 + acpiphp_unregister_attention(&ampere_altra_attn); 108 + } 109 + 110 + static const struct acpi_device_id altra_led_ids[] = { 111 + { "AMPC0008", 0 }, 112 + { } 113 + }; 114 + MODULE_DEVICE_TABLE(acpi, altra_led_ids); 115 + 116 + static struct platform_driver altra_led_driver = { 117 + .driver = { 118 + .name = "ampere-altra-leds", 119 + .acpi_match_table = altra_led_ids, 120 + }, 121 + .probe = altra_led_probe, 122 + .remove_new = altra_led_remove, 123 + }; 124 + module_platform_driver(altra_led_driver); 125 + 126 + MODULE_AUTHOR("D Scott Phillips <scott@os.amperecomputing.com>"); 127 + MODULE_LICENSE("GPL");
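In `set_attention_status()` above, the fourth SMCCC argument identifies the slot to firmware by packing the root port's device number into bits 7:4 and the PCI domain into bits 3:0. `PCI_SLOT()` is the standard devfn decode from `include/uapi/linux/pci.h`; the packing itself is specific to this firmware interface, and the helper name below is illustrative:

```c
#define PCI_SLOT(devfn) (((devfn) >> 3) & 0x1f)

/* Pack (root-port device number, PCI domain) into the firmware's
 * slot-target word, as the set_attention_status() call does. */
static unsigned int altra_led_target(unsigned int devfn, int domain)
{
    return (PCI_SLOT(devfn) << 4) | (domain & 0xf);
}
```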
+1 -2
drivers/pci/hotplug/acpiphp_core.c
··· 78 78 { 79 79 int retval = -EINVAL; 80 80 81 - if (info && info->owner && info->set_attn && 82 - info->get_attn && !attention_info) { 81 + if (info && info->set_attn && info->get_attn && !attention_info) { 83 82 retval = 0; 84 83 attention_info = info; 85 84 }
+3 -3
drivers/pci/hotplug/cpqphp_ctrl.c
··· 2059 2059 return rc; 2060 2060 2061 2061 /* If it's a bridge, check the VGA Enable bit */ 2062 - if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) { 2062 + if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) { 2063 2063 rc = pci_bus_read_config_byte(pci_bus, devfn, PCI_BRIDGE_CONTROL, &BCR); 2064 2064 if (rc) 2065 2065 return rc; ··· 2342 2342 if (rc) 2343 2343 return rc; 2344 2344 2345 - if ((temp_byte & 0x7F) == PCI_HEADER_TYPE_BRIDGE) { 2345 + if ((temp_byte & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) { 2346 2346 /* set Primary bus */ 2347 2347 dbg("set Primary bus = %d\n", func->bus); 2348 2348 rc = pci_bus_write_config_byte(pci_bus, devfn, PCI_PRIMARY_BUS, func->bus); ··· 2739 2739 * PCI_BRIDGE_CTL_SERR | 2740 2740 * PCI_BRIDGE_CTL_NO_ISA */ 2741 2741 rc = pci_bus_write_config_word(pci_bus, devfn, PCI_BRIDGE_CONTROL, command); 2742 - } else if ((temp_byte & 0x7F) == PCI_HEADER_TYPE_NORMAL) { 2742 + } else if ((temp_byte & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_NORMAL) { 2743 2743 /* Standard device */ 2744 2744 rc = pci_bus_read_config_byte(pci_bus, devfn, 0x0B, &class_code); 2745 2745
+11 -11
drivers/pci/hotplug/cpqphp_pci.c
··· 363 363 return rc; 364 364 365 365 /* If multi-function device, set max_functions to 8 */ 366 - if (header_type & 0x80) 366 + if (header_type & PCI_HEADER_TYPE_MFD) 367 367 max_functions = 8; 368 368 else 369 369 max_functions = 1; ··· 372 372 373 373 do { 374 374 DevError = 0; 375 - if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) { 375 + if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) { 376 376 /* Recurse the subordinate bus 377 377 * get the subordinate bus number 378 378 */ ··· 487 487 pci_bus_read_config_byte(ctrl->pci_bus, PCI_DEVFN(new_slot->device, 0), 0x0B, &class_code); 488 488 pci_bus_read_config_byte(ctrl->pci_bus, PCI_DEVFN(new_slot->device, 0), PCI_HEADER_TYPE, &header_type); 489 489 490 - if (header_type & 0x80) /* Multi-function device */ 490 + if (header_type & PCI_HEADER_TYPE_MFD) 491 491 max_functions = 8; 492 492 else 493 493 max_functions = 1; 494 494 495 495 while (function < max_functions) { 496 - if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) { 496 + if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) { 497 497 /* Recurse the subordinate bus */ 498 498 pci_bus_read_config_byte(ctrl->pci_bus, PCI_DEVFN(new_slot->device, function), PCI_SECONDARY_BUS, &secondary_bus); 499 499 ··· 571 571 /* Check for Bridge */ 572 572 pci_bus_read_config_byte(pci_bus, devfn, PCI_HEADER_TYPE, &header_type); 573 573 574 - if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) { 574 + if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) { 575 575 pci_bus_read_config_byte(pci_bus, devfn, PCI_SECONDARY_BUS, &secondary_bus); 576 576 577 577 sub_bus = (int) secondary_bus; ··· 625 625 626 626 } /* End of base register loop */ 627 627 628 - } else if ((header_type & 0x7F) == 0x00) { 628 + } else if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_NORMAL) { 629 629 /* Figure out IO and memory base lengths */ 630 630 for (cloop = 0x10; cloop <= 0x24; cloop += 4) { 631 631 temp_register = 0xFFFFFFFF; ··· 
723 723 /* Check for Bridge */ 724 724 pci_bus_read_config_byte(pci_bus, devfn, PCI_HEADER_TYPE, &header_type); 725 725 726 - if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) { 726 + if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) { 727 727 /* Clear Bridge Control Register */ 728 728 command = 0x00; 729 729 pci_bus_write_config_word(pci_bus, devfn, PCI_BRIDGE_CONTROL, command); ··· 858 858 } 859 859 } /* End of base register loop */ 860 860 /* Standard header */ 861 - } else if ((header_type & 0x7F) == 0x00) { 861 + } else if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_NORMAL) { 862 862 /* Figure out IO and memory base lengths */ 863 863 for (cloop = 0x10; cloop <= 0x24; cloop += 4) { 864 864 pci_bus_read_config_dword(pci_bus, devfn, cloop, &save_base); ··· 975 975 pci_bus_read_config_byte(pci_bus, devfn, PCI_HEADER_TYPE, &header_type); 976 976 977 977 /* If this is a bridge device, restore subordinate devices */ 978 - if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) { 978 + if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) { 979 979 pci_bus_read_config_byte(pci_bus, devfn, PCI_SECONDARY_BUS, &secondary_bus); 980 980 981 981 sub_bus = (int) secondary_bus; ··· 1067 1067 /* Check for Bridge */ 1068 1068 pci_bus_read_config_byte(pci_bus, devfn, PCI_HEADER_TYPE, &header_type); 1069 1069 1070 - if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) { 1070 + if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) { 1071 1071 /* In order to continue checking, we must program the 1072 1072 * bus registers in the bridge to respond to accesses 1073 1073 * for its subordinate bus(es) ··· 1090 1090 1091 1091 } 1092 1092 /* Check to see if it is a standard config header */ 1093 - else if ((header_type & 0x7F) == PCI_HEADER_TYPE_NORMAL) { 1093 + else if ((header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_NORMAL) { 1094 1094 /* Check subsystem vendor and ID */ 1095 1095 pci_bus_read_config_dword(pci_bus, 
devfn, PCI_SUBSYSTEM_VENDOR_ID, &temp_register); 1096 1096
+3 -2
drivers/pci/hotplug/ibmphp.h
··· 17 17 */ 18 18 19 19 #include <linux/pci_hotplug.h> 20 + #include <linux/pci_regs.h> 20 21 21 22 extern int ibmphp_debug; 22 23 ··· 287 286 288 287 /* pci specific defines */ 289 288 #define PCI_VENDOR_ID_NOTVALID 0xFFFF 290 - #define PCI_HEADER_TYPE_MULTIDEVICE 0x80 291 - #define PCI_HEADER_TYPE_MULTIBRIDGE 0x81 289 + #define PCI_HEADER_TYPE_MULTIDEVICE (PCI_HEADER_TYPE_MFD|PCI_HEADER_TYPE_NORMAL) 290 + #define PCI_HEADER_TYPE_MULTIBRIDGE (PCI_HEADER_TYPE_MFD|PCI_HEADER_TYPE_BRIDGE) 292 291 293 292 #define LATENCY 0x64 294 293 #define CACHE 64
+1 -1
drivers/pci/hotplug/ibmphp_pci.c
··· 1087 1087 pci_bus_read_config_dword(ibmphp_pci_bus, devfn, PCI_CLASS_REVISION, &class); 1088 1088 1089 1089 debug("hdr_type behind the bridge is %x\n", hdr_type); 1090 - if ((hdr_type & 0x7f) == PCI_HEADER_TYPE_BRIDGE) { 1090 + if ((hdr_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) { 1091 1091 err("embedded bridges not supported for hot-plugging.\n"); 1092 1092 amount->not_correct = 1; 1093 1093 return amount;
+2 -1
drivers/pci/hotplug/pciehp_core.c
··· 20 20 #define pr_fmt(fmt) "pciehp: " fmt 21 21 #define dev_fmt pr_fmt 22 22 23 + #include <linux/bitfield.h> 23 24 #include <linux/moduleparam.h> 24 25 #include <linux/kernel.h> 25 26 #include <linux/slab.h> ··· 104 103 struct pci_dev *pdev = ctrl->pcie->port; 105 104 106 105 if (status) 107 - status <<= PCI_EXP_SLTCTL_ATTN_IND_SHIFT; 106 + status = FIELD_PREP(PCI_EXP_SLTCTL_AIC, status); 108 107 else 109 108 status = PCI_EXP_SLTCTL_ATTN_IND_OFF; 110 109
+3 -2
drivers/pci/hotplug/pciehp_hpc.c
··· 14 14 15 15 #define dev_fmt(fmt) "pciehp: " fmt 16 16 17 + #include <linux/bitfield.h> 17 18 #include <linux/dmi.h> 18 19 #include <linux/kernel.h> 19 20 #include <linux/types.h> ··· 485 484 struct pci_dev *pdev = ctrl_dev(ctrl); 486 485 487 486 pci_config_pm_runtime_get(pdev); 488 - pcie_write_cmd_nowait(ctrl, status << 6, 487 + pcie_write_cmd_nowait(ctrl, FIELD_PREP(PCI_EXP_SLTCTL_AIC, status), 489 488 PCI_EXP_SLTCTL_AIC | PCI_EXP_SLTCTL_PIC); 490 489 pci_config_pm_runtime_put(pdev); 491 490 return 0; ··· 1029 1028 PCI_EXP_SLTSTA_DLLSC | PCI_EXP_SLTSTA_PDC); 1030 1029 1031 1030 ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c IbPresDis%c LLActRep%c%s\n", 1032 - (slot_cap & PCI_EXP_SLTCAP_PSN) >> 19, 1031 + FIELD_GET(PCI_EXP_SLTCAP_PSN, slot_cap), 1033 1032 FLAG(slot_cap, PCI_EXP_SLTCAP_ABP), 1034 1033 FLAG(slot_cap, PCI_EXP_SLTCAP_PCP), 1035 1034 FLAG(slot_cap, PCI_EXP_SLTCAP_MRLSP),
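These pciehp hunks (and many below in msi.c and pci.c) replace open-coded shifts with `FIELD_GET()`/`FIELD_PREP()` from `<linux/bitfield.h>`: the mask alone names the field, and the shift is derived from the mask's lowest set bit. A minimal model of the two macros (the kernel versions add compile-time sanity checks this sketch omits):

```c
/* Derive the field shift from the mask's lowest set bit. */
#define FIELD_GET(mask, reg)  (((reg) & (mask)) >> __builtin_ctz(mask))
#define FIELD_PREP(mask, val) (((val) << __builtin_ctz(mask)) & (mask))

#define PCI_EXP_SLTCTL_AIC 0x00c0     /* Attention Indicator Control */
#define PCI_EXP_SLTCAP_PSN 0xfff80000 /* Physical Slot Number */
```

With these, `FIELD_PREP(PCI_EXP_SLTCTL_AIC, status)` replaces `status << 6` and `FIELD_GET(PCI_EXP_SLTCAP_PSN, slot_cap)` replaces `(slot_cap & PCI_EXP_SLTCAP_PSN) >> 19`, with no hand-maintained shift constant.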
+2 -1
drivers/pci/hotplug/pnv_php.c
··· 5 5 * Copyright Gavin Shan, IBM Corporation 2016. 6 6 */ 7 7 8 + #include <linux/bitfield.h> 8 9 #include <linux/libfdt.h> 9 10 #include <linux/module.h> 10 11 #include <linux/pci.h> ··· 732 731 733 732 /* Check hotplug MSIx entry is in range */ 734 733 pcie_capability_read_word(pdev, PCI_EXP_FLAGS, &pcie_flag); 735 - entry.entry = (pcie_flag & PCI_EXP_FLAGS_IRQ) >> 9; 734 + entry.entry = FIELD_GET(PCI_EXP_FLAGS_IRQ, pcie_flag); 736 735 if (entry.entry >= nr_entries) 737 736 return -ERANGE; 738 737
+6 -4
drivers/pci/msi/msi.c
··· 6 6 * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com) 7 7 * Copyright (C) 2016 Christoph Hellwig. 8 8 */ 9 + #include <linux/bitfield.h> 9 10 #include <linux/err.h> 10 11 #include <linux/export.h> 11 12 #include <linux/irq.h> ··· 189 188 190 189 pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &msgctl); 191 190 msgctl &= ~PCI_MSI_FLAGS_QSIZE; 192 - msgctl |= desc->pci.msi_attrib.multiple << 4; 191 + msgctl |= FIELD_PREP(PCI_MSI_FLAGS_QSIZE, desc->pci.msi_attrib.multiple); 193 192 pci_write_config_word(dev, pos + PCI_MSI_FLAGS, msgctl); 194 193 195 194 pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_LO, msg->address_lo); ··· 300 299 desc.pci.msi_attrib.is_64 = !!(control & PCI_MSI_FLAGS_64BIT); 301 300 desc.pci.msi_attrib.can_mask = !!(control & PCI_MSI_FLAGS_MASKBIT); 302 301 desc.pci.msi_attrib.default_irq = dev->irq; 303 - desc.pci.msi_attrib.multi_cap = (control & PCI_MSI_FLAGS_QMASK) >> 1; 302 + desc.pci.msi_attrib.multi_cap = FIELD_GET(PCI_MSI_FLAGS_QMASK, control); 304 303 desc.pci.msi_attrib.multiple = ilog2(__roundup_pow_of_two(nvec)); 305 304 desc.affinity = masks; 306 305 ··· 479 478 return -EINVAL; 480 479 481 480 pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl); 482 - ret = 1 << ((msgctl & PCI_MSI_FLAGS_QMASK) >> 1); 481 + ret = 1 << FIELD_GET(PCI_MSI_FLAGS_QMASK, msgctl); 483 482 484 483 return ret; 485 484 } ··· 512 511 pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); 513 512 pci_msi_update_mask(entry, 0, 0); 514 513 control &= ~PCI_MSI_FLAGS_QSIZE; 515 - control |= (entry->pci.msi_attrib.multiple << 4) | PCI_MSI_FLAGS_ENABLE; 514 + control |= PCI_MSI_FLAGS_ENABLE | 515 + FIELD_PREP(PCI_MSI_FLAGS_QSIZE, entry->pci.msi_attrib.multiple); 516 516 pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control); 517 517 } 518 518
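The QSIZE/QMASK hunks above rely on multi-MSI's log2 encoding: a device advertises 2^QMASK vectors, and the driver writes log2 of the (power-of-two rounded) count it enabled into QSIZE. A plain loop standing in for `ilog2(__roundup_pow_of_two(nvec))`, for `nvec >= 1`:

```c
/* Smallest m with 2^m >= nvec, i.e. the value written into the
 * MSI "multiple message enable" (QSIZE) field. */
static unsigned int msi_multiple(unsigned int nvec)
{
    unsigned int m = 0;

    while ((1u << m) < nvec)
        m++;
    return m;
}
```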
+1 -2
drivers/pci/p2pdma.c
··· 28 28 }; 29 29 30 30 struct pci_p2pdma_pagemap { 31 - struct dev_pagemap pgmap; 32 31 struct pci_dev *provider; 33 32 u64 bus_offset; 33 + struct dev_pagemap pgmap; 34 34 }; 35 35 36 36 static struct pci_p2pdma_pagemap *to_p2p_pgmap(struct dev_pagemap *pgmap) ··· 837 837 if (unlikely(!percpu_ref_tryget_live_rcu(ref))) { 838 838 gen_pool_free(p2pdma->pool, (unsigned long) ret, size); 839 839 ret = NULL; 840 - goto out; 841 840 } 842 841 out: 843 842 rcu_read_unlock();
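The p2pdma hunk moves the embedded `pgmap` to the end of `struct pci_p2pdma_pagemap` because `struct dev_pagemap` ends in a flexible array, which may only be embedded as the last member. `to_p2p_pgmap()` keeps working regardless of where the member sits, since `container_of()` just subtracts the member's offset. A sketch with simplified stand-in struct names:

```c
#include <stddef.h>

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct pagemap { int nr; };             /* stand-in for dev_pagemap */

struct p2p_pagemap {                    /* stand-in for pci_p2pdma_pagemap */
    void *provider;
    unsigned long bus_offset;
    struct pagemap pgmap;               /* now last, as in the patch */
};

/* Recover the outer struct from a pointer to the embedded member. */
static struct p2p_pagemap *to_p2p(struct pagemap *pgmap)
{
    return container_of(pgmap, struct p2p_pagemap, pgmap);
}
```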
+8 -7
drivers/pci/pci-acpi.c
··· 911 911 { 912 912 int acpi_state, d_max; 913 913 914 - if (pdev->no_d3cold) 914 + if (pdev->no_d3cold || !pdev->d3cold_allowed) 915 915 d_max = ACPI_STATE_D3_HOT; 916 916 else 917 917 d_max = ACPI_STATE_D3_COLD; ··· 1215 1215 if (!pci_is_root_bus(bus)) 1216 1216 return; 1217 1217 1218 - obj = acpi_evaluate_dsm(ACPI_HANDLE(bus->bridge), &pci_acpi_dsm_guid, 3, 1219 - DSM_PCI_POWER_ON_RESET_DELAY, NULL); 1218 + obj = acpi_evaluate_dsm_typed(ACPI_HANDLE(bus->bridge), &pci_acpi_dsm_guid, 3, 1219 + DSM_PCI_POWER_ON_RESET_DELAY, NULL, ACPI_TYPE_INTEGER); 1220 1220 if (!obj) 1221 1221 return; 1222 1222 1223 - if (obj->type == ACPI_TYPE_INTEGER && obj->integer.value == 1) { 1223 + if (obj->integer.value == 1) { 1224 1224 bridge = pci_find_host_bridge(bus); 1225 1225 bridge->ignore_reset_delay = 1; 1226 1226 } ··· 1376 1376 if (bridge->ignore_reset_delay) 1377 1377 pdev->d3cold_delay = 0; 1378 1378 1379 - obj = acpi_evaluate_dsm(handle, &pci_acpi_dsm_guid, 3, 1380 - DSM_PCI_DEVICE_READINESS_DURATIONS, NULL); 1379 + obj = acpi_evaluate_dsm_typed(handle, &pci_acpi_dsm_guid, 3, 1380 + DSM_PCI_DEVICE_READINESS_DURATIONS, NULL, 1381 + ACPI_TYPE_PACKAGE); 1381 1382 if (!obj) 1382 1383 return; 1383 1384 1384 - if (obj->type == ACPI_TYPE_PACKAGE && obj->package.count == 5) { 1385 + if (obj->package.count == 5) { 1385 1386 elements = obj->package.elements; 1386 1387 if (elements[0].type == ACPI_TYPE_INTEGER) { 1387 1388 value = (int)elements[0].integer.value / 1000;
+6 -11
drivers/pci/pci-sysfs.c
··· 12 12 * Modeled after usb's driverfs.c 13 13 */ 14 14 15 - 15 + #include <linux/bitfield.h> 16 16 #include <linux/kernel.h> 17 17 #include <linux/sched.h> 18 18 #include <linux/pci.h> ··· 230 230 if (err) 231 231 return -EINVAL; 232 232 233 - return sysfs_emit(buf, "%u\n", 234 - (linkstat & PCI_EXP_LNKSTA_NLW) >> PCI_EXP_LNKSTA_NLW_SHIFT); 233 + return sysfs_emit(buf, "%u\n", FIELD_GET(PCI_EXP_LNKSTA_NLW, linkstat)); 235 234 } 236 235 static DEVICE_ATTR_RO(current_link_width); 237 236 ··· 529 530 return -EINVAL; 530 531 531 532 pdev->d3cold_allowed = !!val; 532 - if (pdev->d3cold_allowed) 533 - pci_d3cold_enable(pdev); 534 - else 535 - pci_d3cold_disable(pdev); 533 + pci_bridge_d3_update(pdev); 536 534 537 535 pm_runtime_resume(dev); 538 536 ··· 1547 1551 struct device *dev = kobj_to_dev(kobj); 1548 1552 struct pci_dev *pdev = to_pci_dev(dev); 1549 1553 1550 - if (a == &dev_attr_boot_vga.attr) 1551 - if ((pdev->class >> 8) != PCI_CLASS_DISPLAY_VGA) 1552 - return 0; 1554 + if (a == &dev_attr_boot_vga.attr && pci_is_vga(pdev)) 1555 + return a->mode; 1553 1556 1554 - return a->mode; 1557 + return 0; 1555 1558 } 1556 1559 1557 1560 static struct attribute *pci_dev_hp_attrs[] = {
+35 -34
drivers/pci/pci.c
··· 534 534 535 535 pci_bus_read_config_byte(bus, devfn, PCI_HEADER_TYPE, &hdr_type); 536 536 537 - pos = __pci_bus_find_cap_start(bus, devfn, hdr_type & 0x7f); 537 + pos = __pci_bus_find_cap_start(bus, devfn, hdr_type & PCI_HEADER_TYPE_MASK); 538 538 if (pos) 539 539 pos = __pci_find_next_cap(bus, devfn, pos, cap); 540 540 ··· 732 732 { 733 733 u16 vsec = 0; 734 734 u32 header; 735 + int ret; 735 736 736 737 if (vendor != dev->vendor) 737 738 return 0; 738 739 739 740 while ((vsec = pci_find_next_ext_capability(dev, vsec, 740 741 PCI_EXT_CAP_ID_VNDR))) { 741 - if (pci_read_config_dword(dev, vsec + PCI_VNDR_HEADER, 742 - &header) == PCIBIOS_SUCCESSFUL && 743 - PCI_VNDR_HEADER_ID(header) == cap) 742 + ret = pci_read_config_dword(dev, vsec + PCI_VNDR_HEADER, &header); 743 + if (ret != PCIBIOS_SUCCESSFUL) 744 + continue; 745 + 746 + if (PCI_VNDR_HEADER_ID(header) == cap) 744 747 return vsec; 745 748 } 746 749 ··· 1778 1775 return; 1779 1776 1780 1777 pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl); 1781 - nbars = (ctrl & PCI_REBAR_CTRL_NBAR_MASK) >> 1782 - PCI_REBAR_CTRL_NBAR_SHIFT; 1778 + nbars = FIELD_GET(PCI_REBAR_CTRL_NBAR_MASK, ctrl); 1783 1779 1784 1780 for (i = 0; i < nbars; i++, pos += 8) { 1785 1781 struct resource *res; ··· 1789 1787 res = pdev->resource + bar_idx; 1790 1788 size = pci_rebar_bytes_to_size(resource_size(res)); 1791 1789 ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE; 1792 - ctrl |= size << PCI_REBAR_CTRL_BAR_SHIFT; 1790 + ctrl |= FIELD_PREP(PCI_REBAR_CTRL_BAR_SIZE, size); 1793 1791 pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl); 1794 1792 } 1795 1793 } ··· 3230 3228 (pmc & PCI_PM_CAP_PME_D2) ? " D2" : "", 3231 3229 (pmc & PCI_PM_CAP_PME_D3hot) ? " D3hot" : "", 3232 3230 (pmc & PCI_PM_CAP_PME_D3cold) ? 
" D3cold" : ""); 3233 - dev->pme_support = pmc >> PCI_PM_CAP_PME_SHIFT; 3231 + dev->pme_support = FIELD_GET(PCI_PM_CAP_PME_MASK, pmc); 3234 3232 dev->pme_poll = true; 3235 3233 /* 3236 3234 * Make device's PM flags reflect the wake-up capability, but ··· 3301 3299 ent_offset += 4; 3302 3300 3303 3301 /* Entry size field indicates DWORDs after 1st */ 3304 - ent_size = ((dw0 & PCI_EA_ES) + 1) << 2; 3302 + ent_size = (FIELD_GET(PCI_EA_ES, dw0) + 1) << 2; 3305 3303 3306 3304 if (!(dw0 & PCI_EA_ENABLE)) /* Entry not enabled */ 3307 3305 goto out; 3308 3306 3309 - bei = (dw0 & PCI_EA_BEI) >> 4; 3310 - prop = (dw0 & PCI_EA_PP) >> 8; 3307 + bei = FIELD_GET(PCI_EA_BEI, dw0); 3308 + prop = FIELD_GET(PCI_EA_PP, dw0); 3311 3309 3312 3310 /* 3313 3311 * If the Property is in the reserved range, try the Secondary 3314 3312 * Property instead. 3315 3313 */ 3316 3314 if (prop > PCI_EA_P_BRIDGE_IO && prop < PCI_EA_P_MEM_RESERVED) 3317 - prop = (dw0 & PCI_EA_SP) >> 16; 3315 + prop = FIELD_GET(PCI_EA_SP, dw0); 3318 3316 if (prop > PCI_EA_P_BRIDGE_IO) 3319 3317 goto out; 3320 3318 ··· 3721 3719 return -ENOTSUPP; 3722 3720 3723 3721 pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl); 3724 - nbars = (ctrl & PCI_REBAR_CTRL_NBAR_MASK) >> 3725 - PCI_REBAR_CTRL_NBAR_SHIFT; 3722 + nbars = FIELD_GET(PCI_REBAR_CTRL_NBAR_MASK, ctrl); 3726 3723 3727 3724 for (i = 0; i < nbars; i++, pos += 8) { 3728 3725 int bar_idx; 3729 3726 3730 3727 pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl); 3731 - bar_idx = ctrl & PCI_REBAR_CTRL_BAR_IDX; 3728 + bar_idx = FIELD_GET(PCI_REBAR_CTRL_BAR_IDX, ctrl); 3732 3729 if (bar_idx == bar) 3733 3730 return pos; 3734 3731 } ··· 3753 3752 return 0; 3754 3753 3755 3754 pci_read_config_dword(pdev, pos + PCI_REBAR_CAP, &cap); 3756 - cap &= PCI_REBAR_CAP_SIZES; 3755 + cap = FIELD_GET(PCI_REBAR_CAP_SIZES, cap); 3757 3756 3758 3757 /* Sapphire RX 5600 XT Pulse has an invalid cap dword for BAR 0 */ 3759 3758 if (pdev->vendor == PCI_VENDOR_ID_ATI && pdev->device 
== 0x731f && 3760 - bar == 0 && cap == 0x7000) 3761 - cap = 0x3f000; 3759 + bar == 0 && cap == 0x700) 3760 + return 0x3f00; 3762 3761 3763 - return cap >> 4; 3762 + return cap; 3764 3763 } 3765 3764 EXPORT_SYMBOL(pci_rebar_get_possible_sizes); 3766 3765 ··· 3782 3781 return pos; 3783 3782 3784 3783 pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl); 3785 - return (ctrl & PCI_REBAR_CTRL_BAR_SIZE) >> PCI_REBAR_CTRL_BAR_SHIFT; 3784 + return FIELD_GET(PCI_REBAR_CTRL_BAR_SIZE, ctrl); 3786 3785 } 3787 3786 3788 3787 /** ··· 3805 3804 3806 3805 pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl); 3807 3806 ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE; 3808 - ctrl |= size << PCI_REBAR_CTRL_BAR_SHIFT; 3807 + ctrl |= FIELD_PREP(PCI_REBAR_CTRL_BAR_SIZE, size); 3809 3808 pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl); 3810 3809 return 0; 3811 3810 } ··· 6043 6042 if (pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat)) 6044 6043 return -EINVAL; 6045 6044 6046 - return 512 << ((stat & PCI_X_STATUS_MAX_READ) >> 21); 6045 + return 512 << FIELD_GET(PCI_X_STATUS_MAX_READ, stat); 6047 6046 } 6048 6047 EXPORT_SYMBOL(pcix_get_max_mmrbc); 6049 6048 ··· 6066 6065 if (pci_read_config_word(dev, cap + PCI_X_CMD, &cmd)) 6067 6066 return -EINVAL; 6068 6067 6069 - return 512 << ((cmd & PCI_X_CMD_MAX_READ) >> 2); 6068 + return 512 << FIELD_GET(PCI_X_CMD_MAX_READ, cmd); 6070 6069 } 6071 6070 EXPORT_SYMBOL(pcix_get_mmrbc); 6072 6071 ··· 6097 6096 if (pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat)) 6098 6097 return -EINVAL; 6099 6098 6100 - if (v > (stat & PCI_X_STATUS_MAX_READ) >> 21) 6099 + if (v > FIELD_GET(PCI_X_STATUS_MAX_READ, stat)) 6101 6100 return -E2BIG; 6102 6101 6103 6102 if (pci_read_config_word(dev, cap + PCI_X_CMD, &cmd)) 6104 6103 return -EINVAL; 6105 6104 6106 - o = (cmd & PCI_X_CMD_MAX_READ) >> 2; 6105 + o = FIELD_GET(PCI_X_CMD_MAX_READ, cmd); 6107 6106 if (o != v) { 6108 6107 if (v > o && (dev->bus->bus_flags & PCI_BUS_FLAGS_NO_MMRBC)) 6109 6108 return -EIO; 
6110 6109 6111 6110 cmd &= ~PCI_X_CMD_MAX_READ; 6112 - cmd |= v << 2; 6111 + cmd |= FIELD_PREP(PCI_X_CMD_MAX_READ, v); 6113 6112 if (pci_write_config_word(dev, cap + PCI_X_CMD, cmd)) 6114 6113 return -EIO; 6115 6114 } ··· 6129 6128 6130 6129 pcie_capability_read_word(dev, PCI_EXP_DEVCTL, &ctl); 6131 6130 6132 - return 128 << ((ctl & PCI_EXP_DEVCTL_READRQ) >> 12); 6131 + return 128 << FIELD_GET(PCI_EXP_DEVCTL_READRQ, ctl); 6133 6132 } 6134 6133 EXPORT_SYMBOL(pcie_get_readrq); 6135 6134 ··· 6162 6161 rq = mps; 6163 6162 } 6164 6163 6165 - v = (ffs(rq) - 8) << 12; 6164 + v = FIELD_PREP(PCI_EXP_DEVCTL_READRQ, ffs(rq) - 8); 6166 6165 6167 6166 if (bridge->no_inc_mrrs) { 6168 6167 int max_mrrs = pcie_get_readrq(dev); ··· 6192 6191 6193 6192 pcie_capability_read_word(dev, PCI_EXP_DEVCTL, &ctl); 6194 6193 6195 - return 128 << ((ctl & PCI_EXP_DEVCTL_PAYLOAD) >> 5); 6194 + return 128 << FIELD_GET(PCI_EXP_DEVCTL_PAYLOAD, ctl); 6196 6195 } 6197 6196 EXPORT_SYMBOL(pcie_get_mps); 6198 6197 ··· 6215 6214 v = ffs(mps) - 8; 6216 6215 if (v > dev->pcie_mpss) 6217 6216 return -EINVAL; 6218 - v <<= 5; 6217 + v = FIELD_PREP(PCI_EXP_DEVCTL_PAYLOAD, v); 6219 6218 6220 6219 ret = pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL, 6221 6220 PCI_EXP_DEVCTL_PAYLOAD, v); ··· 6257 6256 while (dev) { 6258 6257 pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta); 6259 6258 6260 - next_speed = pcie_link_speed[lnksta & PCI_EXP_LNKSTA_CLS]; 6261 - next_width = (lnksta & PCI_EXP_LNKSTA_NLW) >> 6262 - PCI_EXP_LNKSTA_NLW_SHIFT; 6259 + next_speed = pcie_link_speed[FIELD_GET(PCI_EXP_LNKSTA_CLS, 6260 + lnksta)]; 6261 + next_width = FIELD_GET(PCI_EXP_LNKSTA_NLW, lnksta); 6263 6262 6264 6263 next_bw = next_width * PCIE_SPEED2MBS_ENC(next_speed); 6265 6264 ··· 6331 6330 6332 6331 pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap); 6333 6332 if (lnkcap) 6334 - return (lnkcap & PCI_EXP_LNKCAP_MLW) >> 4; 6333 + return FIELD_GET(PCI_EXP_LNKCAP_MLW, lnkcap); 6335 6334 6336 6335 return 
PCIE_LNK_WIDTH_UNKNOWN; 6337 6336 }
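The bulk of the pci.c churn above is mechanical conversion from open-coded shift-and-mask to the FIELD_GET()/FIELD_PREP() helpers from <linux/bitfield.h>, which derive the shift from the mask itself. A userspace sketch of the semantics — field_get()/field_prep() here are illustrative stand-ins, not the kernel macros, which additionally do compile-time mask checking:

```c
#include <stdint.h>

/* Stand-ins for the kernel's FIELD_GET()/FIELD_PREP(). They work for
 * any contiguous (GENMASK-style) mask: the shift is derived from the
 * mask's lowest set bit instead of being hard-coded at each call site. */
static inline uint32_t field_get(uint32_t mask, uint32_t reg)
{
	return (reg & mask) >> __builtin_ctz(mask);
}

static inline uint32_t field_prep(uint32_t mask, uint32_t val)
{
	return (val << __builtin_ctz(mask)) & mask;
}

/* Example: PCI_EXP_DEVCTL_READRQ is bits 14:12 (mask 0x7000). The old
 * form was "128 << ((ctl & PCI_EXP_DEVCTL_READRQ) >> 12)"; the converted
 * form is "128 << FIELD_GET(PCI_EXP_DEVCTL_READRQ, ctl)". */
static inline uint32_t readrq_bytes(uint32_t devctl)
{
	return 128u << field_get(0x7000, devctl);
}
```

The win is that the magic shift counts (>> 12, >> 21, << 5, ...) disappear from call sites and live only in the mask definitions.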
+3
drivers/pci/pci.h
··· 13 13 14 14 #define PCIE_LINK_RETRAIN_TIMEOUT_MS 1000 15 15 16 + /* Power stable to PERST# inactive from PCIe card Electromechanical Spec */ 17 + #define PCIE_T_PVPERL_MS 100 18 + 16 19 /* 17 20 * PCIe r6.0, sec 5.3.3.2.1 <PME Synchronization> 18 21 * Recommends 1ms to 10ms timeout to check L2 ready.
+27 -18
drivers/pci/pcie/aer.c
··· 1224 1224 return IRQ_WAKE_THREAD; 1225 1225 } 1226 1226 1227 + static void aer_enable_irq(struct pci_dev *pdev) 1228 + { 1229 + int aer = pdev->aer_cap; 1230 + u32 reg32; 1231 + 1232 + /* Enable Root Port's interrupt in response to error messages */ 1233 + pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, &reg32); 1234 + reg32 |= ROOT_PORT_INTR_ON_MESG_MASK; 1235 + pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32); 1236 + } 1237 + 1238 + static void aer_disable_irq(struct pci_dev *pdev) 1239 + { 1240 + int aer = pdev->aer_cap; 1241 + u32 reg32; 1242 + 1243 + /* Disable Root's interrupt in response to error messages */ 1244 + pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, &reg32); 1245 + reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK; 1246 + pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32); 1247 + } 1248 + 1227 1249 /** 1228 1250 * aer_enable_rootport - enable Root Port's interrupts when receiving messages 1229 1251 * @rpc: pointer to a Root Port data structure ··· 1275 1253 pci_read_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, &reg32); 1276 1254 pci_write_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, reg32); 1277 1255 1278 - /* Enable Root Port's interrupt in response to error messages */ 1279 - pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, &reg32); 1280 - reg32 |= ROOT_PORT_INTR_ON_MESG_MASK; 1281 - pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32); 1256 + aer_enable_irq(pdev); 1282 1257 } 1283 1258 1284 1259 /** ··· 1290 1271 int aer = pdev->aer_cap; 1291 1272 u32 reg32; 1292 1273 1293 - /* Disable Root's interrupt in response to error messages */ 1294 - pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, &reg32); 1295 - reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK; 1296 - pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32); 1274 + aer_disable_irq(pdev); 1297 1275 1298 1276 /* Clear Root's error status reg */ 1299 1277 pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, &reg32); ··· 
1385 1369 */ 1386 1370 aer = root ? root->aer_cap : 0; 1387 1371 1388 - if ((host->native_aer || pcie_ports_native) && aer) { 1389 - /* Disable Root's interrupt in response to error messages */ 1390 - pci_read_config_dword(root, aer + PCI_ERR_ROOT_COMMAND, &reg32); 1391 - reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK; 1392 - pci_write_config_dword(root, aer + PCI_ERR_ROOT_COMMAND, reg32); 1393 - } 1372 + if ((host->native_aer || pcie_ports_native) && aer) 1373 + aer_disable_irq(root); 1394 1374 1395 1375 if (type == PCI_EXP_TYPE_RC_EC || type == PCI_EXP_TYPE_RC_END) { 1396 1376 rc = pcie_reset_flr(dev, PCI_RESET_DO_RESET); ··· 1405 1393 pci_read_config_dword(root, aer + PCI_ERR_ROOT_STATUS, &reg32); 1406 1394 pci_write_config_dword(root, aer + PCI_ERR_ROOT_STATUS, reg32); 1407 1395 1408 - /* Enable Root Port's interrupt in response to error messages */ 1409 - pci_read_config_dword(root, aer + PCI_ERR_ROOT_COMMAND, &reg32); 1410 - reg32 |= ROOT_PORT_INTR_ON_MESG_MASK; 1411 - pci_write_config_dword(root, aer + PCI_ERR_ROOT_COMMAND, reg32); 1396 + aer_enable_irq(root); 1412 1397 } 1413 1398 1414 1399 return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;
+47 -37
drivers/pci/pcie/aspm.c
··· 7 7 * Copyright (C) Shaohua Li (shaohua.li@intel.com) 8 8 */ 9 9 10 + #include <linux/bitfield.h> 10 11 #include <linux/kernel.h> 12 + #include <linux/limits.h> 11 13 #include <linux/math.h> 12 14 #include <linux/module.h> 13 15 #include <linux/moduleparam.h> ··· 18 16 #include <linux/errno.h> 19 17 #include <linux/pm.h> 20 18 #include <linux/init.h> 19 + #include <linux/printk.h> 21 20 #include <linux/slab.h> 22 - #include <linux/jiffies.h> 23 - #include <linux/delay.h> 21 + #include <linux/time.h> 22 + 24 23 #include "../pci.h" 25 24 26 25 #ifdef MODULE_PARAM_PREFIX ··· 270 267 /* Convert L0s latency encoding to ns */ 271 268 static u32 calc_l0s_latency(u32 lnkcap) 272 269 { 273 - u32 encoding = (lnkcap & PCI_EXP_LNKCAP_L0SEL) >> 12; 270 + u32 encoding = FIELD_GET(PCI_EXP_LNKCAP_L0SEL, lnkcap); 274 271 275 272 if (encoding == 0x7) 276 - return (5 * 1000); /* > 4us */ 273 + return 5 * NSEC_PER_USEC; /* > 4us */ 277 274 return (64 << encoding); 278 275 } 279 276 ··· 281 278 static u32 calc_l0s_acceptable(u32 encoding) 282 279 { 283 280 if (encoding == 0x7) 284 - return -1U; 281 + return U32_MAX; 285 282 return (64 << encoding); 286 283 } 287 284 288 285 /* Convert L1 latency encoding to ns */ 289 286 static u32 calc_l1_latency(u32 lnkcap) 290 287 { 291 - u32 encoding = (lnkcap & PCI_EXP_LNKCAP_L1EL) >> 15; 288 + u32 encoding = FIELD_GET(PCI_EXP_LNKCAP_L1EL, lnkcap); 292 289 293 290 if (encoding == 0x7) 294 - return (65 * 1000); /* > 64us */ 295 - return (1000 << encoding); 291 + return 65 * NSEC_PER_USEC; /* > 64us */ 292 + return NSEC_PER_USEC << encoding; 296 293 } 297 294 298 295 /* Convert L1 acceptable latency encoding to ns */ 299 296 static u32 calc_l1_acceptable(u32 encoding) 300 297 { 301 298 if (encoding == 0x7) 302 - return -1U; 303 - return (1000 << encoding); 299 + return U32_MAX; 300 + return NSEC_PER_USEC << encoding; 304 301 } 305 302 306 303 /* Convert L1SS T_pwr encoding to usec */ ··· 328 325 */ 329 326 static void encode_l12_threshold(u32 
threshold_us, u32 *scale, u32 *value) 330 327 { 331 - u64 threshold_ns = (u64) threshold_us * 1000; 328 + u64 threshold_ns = (u64)threshold_us * NSEC_PER_USEC; 332 329 333 330 /* 334 331 * LTR_L1.2_THRESHOLD_Value ("value") is a 10-bit field with max 335 332 * value of 0x3ff. 336 333 */ 337 - if (threshold_ns <= 0x3ff * 1) { 334 + if (threshold_ns <= 1 * FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE)) { 338 335 *scale = 0; /* Value times 1ns */ 339 336 *value = threshold_ns; 340 - } else if (threshold_ns <= 0x3ff * 32) { 337 + } else if (threshold_ns <= 32 * FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE)) { 341 338 *scale = 1; /* Value times 32ns */ 342 339 *value = roundup(threshold_ns, 32) / 32; 343 - } else if (threshold_ns <= 0x3ff * 1024) { 340 + } else if (threshold_ns <= 1024 * FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE)) { 344 341 *scale = 2; /* Value times 1024ns */ 345 342 *value = roundup(threshold_ns, 1024) / 1024; 346 - } else if (threshold_ns <= 0x3ff * 32768) { 343 + } else if (threshold_ns <= 32768 * FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE)) { 347 344 *scale = 3; /* Value times 32768ns */ 348 345 *value = roundup(threshold_ns, 32768) / 32768; 349 - } else if (threshold_ns <= 0x3ff * 1048576) { 346 + } else if (threshold_ns <= 1048576 * FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE)) { 350 347 *scale = 4; /* Value times 1048576ns */ 351 348 *value = roundup(threshold_ns, 1048576) / 1048576; 352 - } else if (threshold_ns <= 0x3ff * (u64) 33554432) { 349 + } else if (threshold_ns <= (u64)33554432 * FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE)) { 353 350 *scale = 5; /* Value times 33554432ns */ 354 351 *value = roundup(threshold_ns, 33554432) / 33554432; 355 352 } else { 356 353 *scale = 5; 357 - *value = 0x3ff; /* Max representable value */ 354 + *value = FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE); 358 355 } 359 356 } 360 357 ··· 374 371 link = endpoint->bus->self->link_state; 375 372 376 373 /* Calculate endpoint L0s acceptable latency */ 377 - encoding = 
(endpoint->devcap & PCI_EXP_DEVCAP_L0S) >> 6; 374 + encoding = FIELD_GET(PCI_EXP_DEVCAP_L0S, endpoint->devcap); 378 375 acceptable_l0s = calc_l0s_acceptable(encoding); 379 376 380 377 /* Calculate endpoint L1 acceptable latency */ 381 - encoding = (endpoint->devcap & PCI_EXP_DEVCAP_L1) >> 9; 378 + encoding = FIELD_GET(PCI_EXP_DEVCAP_L1, endpoint->devcap); 382 379 acceptable_l1 = calc_l1_acceptable(encoding); 383 380 384 381 while (link) { ··· 420 417 if ((link->aspm_capable & ASPM_STATE_L1) && 421 418 (latency + l1_switch_latency > acceptable_l1)) 422 419 link->aspm_capable &= ~ASPM_STATE_L1; 423 - l1_switch_latency += 1000; 420 + l1_switch_latency += NSEC_PER_USEC; 424 421 425 422 link = link->parent; 426 423 } ··· 449 446 u32 pl1_2_enables, cl1_2_enables; 450 447 451 448 /* Choose the greater of the two Port Common_Mode_Restore_Times */ 452 - val1 = (parent_l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8; 453 - val2 = (child_l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8; 449 + val1 = FIELD_GET(PCI_L1SS_CAP_CM_RESTORE_TIME, parent_l1ss_cap); 450 + val2 = FIELD_GET(PCI_L1SS_CAP_CM_RESTORE_TIME, child_l1ss_cap); 454 451 t_common_mode = max(val1, val2); 455 452 456 453 /* Choose the greater of the two Port T_POWER_ON times */ 457 - val1 = (parent_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_VALUE) >> 19; 458 - scale1 = (parent_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_SCALE) >> 16; 459 - val2 = (child_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_VALUE) >> 19; 460 - scale2 = (child_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_SCALE) >> 16; 454 + val1 = FIELD_GET(PCI_L1SS_CAP_P_PWR_ON_VALUE, parent_l1ss_cap); 455 + scale1 = FIELD_GET(PCI_L1SS_CAP_P_PWR_ON_SCALE, parent_l1ss_cap); 456 + val2 = FIELD_GET(PCI_L1SS_CAP_P_PWR_ON_VALUE, child_l1ss_cap); 457 + scale2 = FIELD_GET(PCI_L1SS_CAP_P_PWR_ON_SCALE, child_l1ss_cap); 461 458 462 459 if (calc_l12_pwron(parent, scale1, val1) > 463 460 calc_l12_pwron(child, scale2, val2)) { 464 - ctl2 |= scale1 | (val1 << 3); 461 + ctl2 |= FIELD_PREP(PCI_L1SS_CTL2_T_PWR_ON_SCALE, 
scale1) | 462 + FIELD_PREP(PCI_L1SS_CTL2_T_PWR_ON_VALUE, val1); 465 463 t_power_on = calc_l12_pwron(parent, scale1, val1); 466 464 } else { 467 - ctl2 |= scale2 | (val2 << 3); 465 + ctl2 |= FIELD_PREP(PCI_L1SS_CTL2_T_PWR_ON_SCALE, scale2) | 466 + FIELD_PREP(PCI_L1SS_CTL2_T_PWR_ON_VALUE, val2); 468 467 t_power_on = calc_l12_pwron(child, scale2, val2); 469 468 } 470 469 ··· 482 477 */ 483 478 l1_2_threshold = 2 + 4 + t_common_mode + t_power_on; 484 479 encode_l12_threshold(l1_2_threshold, &scale, &value); 485 - ctl1 |= t_common_mode << 8 | scale << 29 | value << 16; 480 + ctl1 |= FIELD_PREP(PCI_L1SS_CTL1_CM_RESTORE_TIME, t_common_mode) | 481 + FIELD_PREP(PCI_L1SS_CTL1_LTR_L12_TH_VALUE, value) | 482 + FIELD_PREP(PCI_L1SS_CTL1_LTR_L12_TH_SCALE, scale); 486 483 487 484 /* Some broken devices only support dword access to L1 SS */ 488 485 pci_read_config_dword(parent, parent->l1ss + PCI_L1SS_CTL1, &pctl1); ··· 696 689 * in pcie_config_aspm_link(). 697 690 */ 698 691 if (enable_req & (ASPM_STATE_L1_1 | ASPM_STATE_L1_2)) { 699 - pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL, 700 - PCI_EXP_LNKCTL_ASPM_L1, 0); 701 - pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL, 702 - PCI_EXP_LNKCTL_ASPM_L1, 0); 692 + pcie_capability_clear_word(child, PCI_EXP_LNKCTL, 693 + PCI_EXP_LNKCTL_ASPM_L1); 694 + pcie_capability_clear_word(parent, PCI_EXP_LNKCTL, 695 + PCI_EXP_LNKCTL_ASPM_L1); 703 696 } 704 697 705 698 val = 0; ··· 1066 1059 if (state & PCIE_LINK_STATE_L0S) 1067 1060 link->aspm_disable |= ASPM_STATE_L0S; 1068 1061 if (state & PCIE_LINK_STATE_L1) 1069 - link->aspm_disable |= ASPM_STATE_L1; 1062 + /* L1 PM substates require L1 */ 1063 + link->aspm_disable |= ASPM_STATE_L1 | ASPM_STATE_L1SS; 1070 1064 if (state & PCIE_LINK_STATE_L1_1) 1071 1065 link->aspm_disable |= ASPM_STATE_L1_1; 1072 1066 if (state & PCIE_LINK_STATE_L1_2) ··· 1255 1247 link->aspm_disable &= ~ASPM_STATE_L1; 1256 1248 } else { 1257 1249 link->aspm_disable |= state; 1250 + if (state & 
ASPM_STATE_L1) 1251 + link->aspm_disable |= ASPM_STATE_L1SS; 1258 1252 } 1259 1253 1260 1254 pcie_config_aspm_link(link, policy_to_aspm_state(link)); ··· 1371 1361 aspm_policy = POLICY_DEFAULT; 1372 1362 aspm_disabled = 1; 1373 1363 aspm_support_enabled = false; 1374 - printk(KERN_INFO "PCIe ASPM is disabled\n"); 1364 + pr_info("PCIe ASPM is disabled\n"); 1375 1365 } else if (!strcmp(str, "force")) { 1376 1366 aspm_force = 1; 1377 - printk(KERN_INFO "PCIe ASPM is forcibly enabled\n"); 1367 + pr_info("PCIe ASPM is forcibly enabled\n"); 1378 1368 } 1379 1369 return 1; 1380 1370 }
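The encode_l12_threshold() conversion above swaps the 0x3ff literals for FIELD_MAX(PCI_L1SS_CTL1_LTR_L12_TH_VALUE) but keeps the algorithm: pick the smallest scale (each step a 32x larger unit) whose 10-bit value field can represent the threshold, rounding up, and saturate at scale 5. A userspace sketch under that reading — helper names are illustrative, not kernel API:

```c
#include <stdint.h>

#define L12_TH_MAX_VALUE 0x3ff	/* 10-bit LTR_L1.2_THRESHOLD_Value */

/* Encode a microsecond threshold as (scale, value), where the encoded
 * threshold is value * 32^scale ns and scale runs 0..5, mirroring the
 * loop structure of the kernel's encode_l12_threshold(). */
static void encode_l12(uint64_t threshold_us, uint32_t *scale, uint32_t *value)
{
	uint64_t ns = threshold_us * 1000;
	uint64_t unit = 1;
	uint32_t s;

	for (s = 0; s <= 5; s++, unit *= 32) {
		if (ns <= (uint64_t)L12_TH_MAX_VALUE * unit) {
			*scale = s;
			/* round up to a whole multiple of the unit */
			*value = (uint32_t)((ns + unit - 1) / unit);
			return;
		}
	}
	*scale = 5;			/* saturate at the max representable */
	*value = L12_TH_MAX_VALUE;
}

/* Pack for easy checking: scale in bits 17:16, value in bits 9:0. */
static inline uint32_t encode_l12_packed(uint64_t threshold_us)
{
	uint32_t s, v;

	encode_l12(threshold_us, &s, &v);
	return (s << 16) | v;
}
```

For example, 100 us = 100000 ns does not fit in 1023 units of 1 ns or 32 ns, but does in 1024 ns units, giving scale 2 and value 98 (98 * 1024 = 100352 ns >= 100000 ns).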
+27 -15
drivers/pci/pcie/dpc.c
··· 9 9 #define dev_fmt(fmt) "DPC: " fmt 10 10 11 11 #include <linux/aer.h> 12 + #include <linux/bitfield.h> 12 13 #include <linux/delay.h> 13 14 #include <linux/interrupt.h> 14 15 #include <linux/init.h> ··· 17 16 18 17 #include "portdrv.h" 19 18 #include "../pci.h" 19 + 20 + #define PCI_EXP_DPC_CTL_EN_MASK (PCI_EXP_DPC_CTL_EN_FATAL | \ 21 + PCI_EXP_DPC_CTL_EN_NONFATAL) 20 22 21 23 static const char * const rp_pio_error_string[] = { 22 24 "Configuration Request received UR Completion", /* Bit Position 0 */ ··· 206 202 207 203 /* Get First Error Pointer */ 208 204 pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &dpc_status); 209 - first_error = (dpc_status & 0x1f00) >> 8; 205 + first_error = FIELD_GET(PCI_EXP_DPC_RP_PIO_FEP, dpc_status); 210 206 211 207 for (i = 0; i < ARRAY_SIZE(rp_pio_error_string); i++) { 212 208 if ((status & ~mask) & (1 << i)) ··· 274 270 pci_info(pdev, "containment event, status:%#06x source:%#06x\n", 275 271 status, source); 276 272 277 - reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN) >> 1; 278 - ext_reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT) >> 5; 273 + reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN; 274 + ext_reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT; 279 275 pci_warn(pdev, "%s detected\n", 280 - (reason == 0) ? "unmasked uncorrectable error" : 281 - (reason == 1) ? "ERR_NONFATAL" : 282 - (reason == 2) ? "ERR_FATAL" : 283 - (ext_reason == 0) ? "RP PIO error" : 284 - (ext_reason == 1) ? "software trigger" : 285 - "reserved error"); 276 + (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR) ? 277 + "unmasked uncorrectable error" : 278 + (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE) ? 279 + "ERR_NONFATAL" : 280 + (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE) ? 281 + "ERR_FATAL" : 282 + (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO) ? 283 + "RP PIO error" : 284 + (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_SW_TRIGGER) ? 
285 + "software trigger" : 286 + "reserved error"); 286 287 287 288 /* show RP PIO error detail information */ 288 - if (pdev->dpc_rp_extensions && reason == 3 && ext_reason == 0) 289 + if (pdev->dpc_rp_extensions && 290 + reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT && 291 + ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO) 289 292 dpc_process_rp_pio_error(pdev); 290 - else if (reason == 0 && 293 + else if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR && 291 294 dpc_get_aer_uncorrect_severity(pdev, &info) && 292 295 aer_get_device_error_info(pdev, &info)) { 293 296 aer_print_error(pdev, &info); ··· 349 338 /* Quirks may set dpc_rp_log_size if device or firmware is buggy */ 350 339 if (!pdev->dpc_rp_log_size) { 351 340 pdev->dpc_rp_log_size = 352 - (cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8; 341 + FIELD_GET(PCI_EXP_DPC_RP_PIO_LOG_SIZE, cap); 353 342 if (pdev->dpc_rp_log_size < 4 || pdev->dpc_rp_log_size > 9) { 354 343 pci_err(pdev, "RP PIO log size %u is invalid\n", 355 344 pdev->dpc_rp_log_size); ··· 379 368 } 380 369 381 370 pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CAP, &cap); 371 + 382 372 pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, &ctl); 383 - 384 - ctl = (ctl & 0xfff4) | PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN; 373 + ctl &= ~PCI_EXP_DPC_CTL_EN_MASK; 374 + ctl |= PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN; 385 375 pci_write_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, ctl); 386 - pci_info(pdev, "enabled with IRQ %d\n", dev->irq); 387 376 377 + pci_info(pdev, "enabled with IRQ %d\n", dev->irq); 388 378 pci_info(pdev, "error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n", 389 379 cap & PCI_EXP_DPC_IRQ, FLAG(cap, PCI_EXP_DPC_CAP_RP_EXT), 390 380 FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP),
+3 -1
drivers/pci/pcie/pme.c
··· 9 9 10 10 #define dev_fmt(fmt) "PME: " fmt 11 11 12 + #include <linux/bitfield.h> 12 13 #include <linux/pci.h> 13 14 #include <linux/kernel.h> 14 15 #include <linux/errno.h> ··· 236 235 pcie_clear_root_pme_status(port); 237 236 238 237 spin_unlock_irq(&data->lock); 239 - pcie_pme_handle_request(port, rtsta & 0xffff); 238 + pcie_pme_handle_request(port, 239 + FIELD_GET(PCI_EXP_RTSTA_PME_RQ_ID, rtsta)); 240 240 spin_lock_irq(&data->lock); 241 241 242 242 continue;
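The pme.c change gives the magic 0xffff a name: PCI_EXP_RTSTA_PME_RQ_ID is the low 16 bits of the Root Status register, the Requester ID of the last PME message. A Requester ID decomposes as bus (bits 15:8), device (7:3), function (2:0); a quick sketch with illustrative helper names:

```c
#include <stdint.h>

/* Split a 16-bit PCIe Requester ID into bus/device/function — the way
 * the PME service ultimately interprets the ID extracted from Root
 * Status with FIELD_GET(PCI_EXP_RTSTA_PME_RQ_ID, rtsta). */
static inline uint8_t rq_bus(uint16_t id) { return id >> 8; }
static inline uint8_t rq_dev(uint16_t id) { return (id >> 3) & 0x1f; }
static inline uint8_t rq_fn(uint16_t id)  { return id & 0x7; }
```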
+4 -3
drivers/pci/pcie/portdrv.c
··· 6 6 * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com) 7 7 */ 8 8 9 + #include <linux/bitfield.h> 9 10 #include <linux/dmi.h> 10 11 #include <linux/init.h> 11 12 #include <linux/module.h> ··· 70 69 if (mask & (PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP | 71 70 PCIE_PORT_SERVICE_BWNOTIF)) { 72 71 pcie_capability_read_word(dev, PCI_EXP_FLAGS, &reg16); 73 - *pme = (reg16 & PCI_EXP_FLAGS_IRQ) >> 9; 72 + *pme = FIELD_GET(PCI_EXP_FLAGS_IRQ, reg16); 74 73 nvec = *pme + 1; 75 74 } 76 75 ··· 82 81 if (pos) { 83 82 pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, 84 83 &reg32); 85 - *aer = (reg32 & PCI_ERR_ROOT_AER_IRQ) >> 27; 84 + *aer = FIELD_GET(PCI_ERR_ROOT_AER_IRQ, reg32); 86 85 nvec = max(nvec, *aer + 1); 87 86 } 88 87 } ··· 93 92 if (pos) { 94 93 pci_read_config_word(dev, pos + PCI_EXP_DPC_CAP, 95 94 &reg16); 96 - *dpc = reg16 & PCI_EXP_DPC_IRQ; 95 + *dpc = FIELD_GET(PCI_EXP_DPC_IRQ, reg16); 97 96 nvec = max(nvec, *dpc + 1); 98 97 } 99 98 }
+3 -2
drivers/pci/pcie/ptm.c
··· 4 4 * Copyright (c) 2016, Intel Corporation. 5 5 */ 6 6 7 + #include <linux/bitfield.h> 7 8 #include <linux/module.h> 8 9 #include <linux/init.h> 9 10 #include <linux/pci.h> ··· 54 53 pci_add_ext_cap_save_buffer(dev, PCI_EXT_CAP_ID_PTM, sizeof(u32)); 55 54 56 55 pci_read_config_dword(dev, ptm + PCI_PTM_CAP, &cap); 57 - dev->ptm_granularity = (cap & PCI_PTM_GRANULARITY_MASK) >> 8; 56 + dev->ptm_granularity = FIELD_GET(PCI_PTM_GRANULARITY_MASK, cap); 58 57 59 58 /* 60 59 * Per the spec recommendation (PCIe r6.0, sec 7.9.15.3), select the ··· 147 146 148 147 ctrl |= PCI_PTM_CTRL_ENABLE; 149 148 ctrl &= ~PCI_PTM_GRANULARITY_MASK; 150 - ctrl |= dev->ptm_granularity << 8; 149 + ctrl |= FIELD_PREP(PCI_PTM_GRANULARITY_MASK, dev->ptm_granularity); 151 150 if (dev->ptm_root) 152 151 ctrl |= PCI_PTM_CTRL_ROOT; 153 152
+7 -7
drivers/pci/probe.c
··· 807 807 } 808 808 809 809 bus->max_bus_speed = max; 810 - bus->cur_bus_speed = pcix_bus_speed[ 811 - (status & PCI_X_SSTATUS_FREQ) >> 6]; 810 + bus->cur_bus_speed = 811 + pcix_bus_speed[FIELD_GET(PCI_X_SSTATUS_FREQ, status)]; 812 812 813 813 return; 814 814 } ··· 1217 1217 1218 1218 offset = ea + PCI_EA_FIRST_ENT; 1219 1219 pci_read_config_dword(dev, offset, &dw); 1220 - ea_sec = dw & PCI_EA_SEC_BUS_MASK; 1221 - ea_sub = (dw & PCI_EA_SUB_BUS_MASK) >> PCI_EA_SUB_BUS_SHIFT; 1220 + ea_sec = FIELD_GET(PCI_EA_SEC_BUS_MASK, dw); 1221 + ea_sub = FIELD_GET(PCI_EA_SUB_BUS_MASK, dw); 1222 1222 if (ea_sec == 0 || ea_sub < ea_sec) 1223 1223 return false; 1224 1224 ··· 1652 1652 static bool pci_ext_cfg_is_aliased(struct pci_dev *dev) 1653 1653 { 1654 1654 #ifdef CONFIG_PCI_QUIRKS 1655 - int pos; 1655 + int pos, ret; 1656 1656 u32 header, tmp; 1657 1657 1658 1658 pci_read_config_dword(dev, PCI_VENDOR_ID, &header); 1659 1659 1660 1660 for (pos = PCI_CFG_SPACE_SIZE; 1661 1661 pos < PCI_CFG_SPACE_EXP_SIZE; pos += PCI_CFG_SPACE_SIZE) { 1662 - if (pci_read_config_dword(dev, pos, &tmp) != PCIBIOS_SUCCESSFUL 1663 - || header != tmp) 1662 + ret = pci_read_config_dword(dev, pos, &tmp); 1663 + if ((ret != PCIBIOS_SUCCESSFUL) || (header != tmp)) 1664 1664 return false; 1665 1665 } 1666 1666
+55 -20
drivers/pci/quirks.c
··· 690 690 /* 691 691 * In the AMD NL platform, this device ([1022:7912]) has a class code of 692 692 * PCI_CLASS_SERIAL_USB_XHCI (0x0c0330), which means the xhci driver will 693 - * claim it. 693 + * claim it. The same applies on the VanGogh platform device ([1022:163a]). 694 694 * 695 695 * But the dwc3 driver is a more specific driver for this device, and we'd 696 696 * prefer to use it instead of xhci. To prevent xhci from claiming the ··· 698 698 * defines as "USB device (not host controller)". The dwc3 driver can then 699 699 * claim it based on its Vendor and Device ID. 700 700 */ 701 - static void quirk_amd_nl_class(struct pci_dev *pdev) 701 + static void quirk_amd_dwc_class(struct pci_dev *pdev) 702 702 { 703 703 u32 class = pdev->class; 704 704 ··· 708 708 class, pdev->class); 709 709 } 710 710 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_NL_USB, 711 - quirk_amd_nl_class); 711 + quirk_amd_dwc_class); 712 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_VANGOGH_USB, 713 + quirk_amd_dwc_class); 712 714 713 715 /* 714 716 * Synopsys USB 3.x host HAPS platform has a class code of ··· 1846 1844 1847 1845 /* Update pdev accordingly */ 1848 1846 pci_read_config_byte(pdev, PCI_HEADER_TYPE, &hdr); 1849 - pdev->hdr_type = hdr & 0x7f; 1850 - pdev->multifunction = !!(hdr & 0x80); 1847 + pdev->hdr_type = hdr & PCI_HEADER_TYPE_MASK; 1848 + pdev->multifunction = FIELD_GET(PCI_HEADER_TYPE_MFD, hdr); 1851 1849 1852 1850 pci_read_config_dword(pdev, PCI_CLASS_REVISION, &class); 1853 1851 pdev->class = class >> 8; ··· 4555 4553 4556 4554 pci_info(root_port, "Disabling No Snoop/Relaxed Ordering Attributes to avoid PCIe Completion erratum in %s\n", 4557 4555 dev_name(&pdev->dev)); 4558 - pcie_capability_clear_and_set_word(root_port, PCI_EXP_DEVCTL, 4559 - PCI_EXP_DEVCTL_RELAX_EN | 4560 - PCI_EXP_DEVCTL_NOSNOOP_EN, 0); 4556 + pcie_capability_clear_word(root_port, PCI_EXP_DEVCTL, 4557 + PCI_EXP_DEVCTL_RELAX_EN | 4558 + PCI_EXP_DEVCTL_NOSNOOP_EN); 
4561 4559 } 4562 4560 4563 4561 /* ··· 5385 5383 */ 5386 5384 static void quirk_intel_qat_vf_cap(struct pci_dev *pdev) 5387 5385 { 5388 - int pos, i = 0; 5386 + int pos, i = 0, ret; 5389 5387 u8 next_cap; 5390 5388 u16 reg16, *cap; 5391 5389 struct pci_cap_saved_state *state; ··· 5431 5429 pdev->pcie_mpss = reg16 & PCI_EXP_DEVCAP_PAYLOAD; 5432 5430 5433 5431 pdev->cfg_size = PCI_CFG_SPACE_EXP_SIZE; 5434 - if (pci_read_config_dword(pdev, PCI_CFG_SPACE_SIZE, &status) != 5435 - PCIBIOS_SUCCESSFUL || (status == 0xffffffff)) 5432 + ret = pci_read_config_dword(pdev, PCI_CFG_SPACE_SIZE, &status); 5433 + if ((ret != PCIBIOS_SUCCESSFUL) || (PCI_POSSIBLE_ERROR(status))) 5436 5434 pdev->cfg_size = PCI_CFG_SPACE_SIZE; 5437 5435 5438 5436 if (pci_find_saved_cap(pdev, PCI_CAP_ID_EXP)) ··· 5509 5507 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0422, quirk_no_ext_tags); 5510 5508 5511 5509 #ifdef CONFIG_PCI_ATS 5510 + static void quirk_no_ats(struct pci_dev *pdev) 5511 + { 5512 + pci_info(pdev, "disabling ATS\n"); 5513 + pdev->ats_cap = 0; 5514 + } 5515 + 5512 5516 /* 5513 5517 * Some devices require additional driver setup to enable ATS. 
Don't use 5514 5518 * ATS for those devices as ATS will be enabled before the driver has had a ··· 5528 5520 (pdev->subsystem_device == 0xce19 || 5529 5521 pdev->subsystem_device == 0xcc10 || 5530 5522 pdev->subsystem_device == 0xcc08)) 5531 - goto no_ats; 5532 - else 5533 - return; 5523 + quirk_no_ats(pdev); 5524 + } else { 5525 + quirk_no_ats(pdev); 5534 5526 } 5535 - 5536 - no_ats: 5537 - pci_info(pdev, "disabling ATS\n"); 5538 - pdev->ats_cap = 0; 5539 5527 } 5540 5528 5541 5529 /* AMD Stoney platform GPU */ ··· 5554 5550 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x734f, quirk_amd_harvest_no_ats); 5555 5551 /* AMD Raven platform iGPU */ 5556 5552 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x15d8, quirk_amd_harvest_no_ats); 5553 + 5554 + /* 5555 + * Intel IPU E2000 revisions before C0 implement incorrect endianness 5556 + * in ATS Invalidate Request message body. Disable ATS for those devices. 5557 + */ 5558 + static void quirk_intel_e2000_no_ats(struct pci_dev *pdev) 5559 + { 5560 + if (pdev->revision < 0x20) 5561 + quirk_no_ats(pdev); 5562 + } 5563 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1451, quirk_intel_e2000_no_ats); 5564 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1452, quirk_intel_e2000_no_ats); 5565 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1453, quirk_intel_e2000_no_ats); 5566 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1454, quirk_intel_e2000_no_ats); 5567 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1455, quirk_intel_e2000_no_ats); 5568 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1457, quirk_intel_e2000_no_ats); 5569 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1459, quirk_intel_e2000_no_ats); 5570 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x145a, quirk_intel_e2000_no_ats); 5571 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x145c, quirk_intel_e2000_no_ats); 5557 5572 #endif /* CONFIG_PCI_ATS */ 5558 5573 5559 5574 /* Freescale PCIe doesn't support MSI in RC mode */ ··· 5689 5666 5690 5667 /* The GPU 
becomes a multi-function device when the HDA is enabled */ 5691 5668 pci_read_config_byte(gpu, PCI_HEADER_TYPE, &hdr_type); 5692 - gpu->multifunction = !!(hdr_type & 0x80); 5669 + gpu->multifunction = FIELD_GET(PCI_HEADER_TYPE_MFD, hdr_type); 5693 5670 } 5694 5671 DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, 5695 5672 PCI_BASE_CLASS_DISPLAY, 16, quirk_nvidia_hda); ··· 6177 6154 if (!(val & PCI_EXP_DPC_CAP_RP_EXT)) 6178 6155 return; 6179 6156 6180 - if (!((val & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8)) { 6157 + if (FIELD_GET(PCI_EXP_DPC_RP_PIO_LOG_SIZE, val) == 0) { 6181 6158 pci_info(dev, "Overriding RP PIO Log Size to 4\n"); 6182 6159 dev->dpc_rp_log_size = 4; 6183 6160 } ··· 6211 6188 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5020, of_pci_make_dev_node); 6212 6189 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5021, of_pci_make_dev_node); 6213 6190 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_REDHAT, 0x0005, of_pci_make_dev_node); 6191 + 6192 + /* 6193 + * Devices known to require a longer delay before first config space access 6194 + * after reset recovery or resume from D3cold: 6195 + * 6196 + * VideoPropulsion (aka Genroco) Torrent QN16e MPEG QAM Modulator 6197 + */ 6198 + static void pci_fixup_d3cold_delay_1sec(struct pci_dev *pdev) 6199 + { 6200 + pdev->d3cold_delay = 1000; 6201 + } 6202 + DECLARE_PCI_FIXUP_FINAL(0x5555, 0x0004, pci_fixup_d3cold_delay_1sec);
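Several quirks above drop the bare 0x7f/0x80 literals in favor of PCI_HEADER_TYPE_MASK and PCI_HEADER_TYPE_MFD. The Header Type register packs the layout type in bits 6:0 and the multi-function flag in bit 7; a sketch of the split, with the constants restated here for illustration:

```c
#include <stdint.h>

#define HEADER_TYPE_MASK 0x7f	/* cf. PCI_HEADER_TYPE_MASK */
#define HEADER_TYPE_MFD  0x80	/* cf. PCI_HEADER_TYPE_MFD: multi-function */

/* Layout type: 0 = normal, 1 = PCI-to-PCI bridge, 2 = CardBus bridge */
static inline uint8_t header_layout(uint8_t hdr)
{
	return hdr & HEADER_TYPE_MASK;
}

static inline int header_is_multifunction(uint8_t hdr)
{
	return !!(hdr & HEADER_TYPE_MFD);
}
```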
+31
drivers/pci/search.c
··· 364 364 EXPORT_SYMBOL(pci_get_class); 365 365 366 366 /** 367 + * pci_get_base_class - search for a PCI device by matching against the base class code only 368 + * @class: search for a PCI device with this base class code 369 + * @from: Previous PCI device found in search, or %NULL for new search. 370 + * 371 + * Iterates through the list of known PCI devices. If a PCI device is found 372 + * with a matching base class code, the reference count to the device is 373 + * incremented. See pci_match_one_device() to figure out how this works. 374 + * A new search is initiated by passing %NULL as the @from argument. 375 + * Otherwise if @from is not %NULL, searches continue from the next device on 376 + * the global list. The reference count for @from is always decremented if it 377 + * is not %NULL. 378 + * 379 + * Returns: 380 + * A pointer to a matched PCI device, %NULL otherwise. 381 + */ 382 + struct pci_dev *pci_get_base_class(unsigned int class, struct pci_dev *from) 383 + { 384 + struct pci_device_id id = { 385 + .vendor = PCI_ANY_ID, 386 + .device = PCI_ANY_ID, 387 + .subvendor = PCI_ANY_ID, 388 + .subdevice = PCI_ANY_ID, 389 + .class_mask = 0xFF0000, 390 + .class = class << 16, 391 + }; 392 + 393 + return pci_get_dev_by_id(&id, from); 394 + } 395 + EXPORT_SYMBOL(pci_get_base_class); 396 + 397 + /** 367 398 * pci_dev_present - Returns 1 if device matching the device list is present, 0 if not. 368 399 * @ids: A pointer to a null terminated list of struct pci_device_id structures 369 400 * that describe the type of PCI device the caller is trying to find.
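pci_get_base_class() works because the 24-bit Class Code is base class << 16 | subclass << 8 | programming interface, so class_mask = 0xFF0000 compares the base class alone. A sketch of the comparison, mirroring how the PCI core's ID matching treats class/class_mask (the helper name is illustrative):

```c
#include <stdint.h>

/* Match a device's 24-bit Class Code against an id's class/class_mask
 * pair. With mask 0xFF0000 only the base class participates, so VGA
 * (0x030000) and "other display" (0x038000) both match base class 0x03. */
static inline int class_matches(uint32_t dev_class, uint32_t id_class,
				uint32_t id_mask)
{
	return (dev_class & id_mask) == (id_class & id_mask);
}
```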
+1 -1
drivers/pci/setup-bus.c
··· 2129 2129 pci_bus_dump_resources(bus); 2130 2130 } 2131 2131 2132 - void __init pci_assign_unassigned_resources(void) 2132 + void pci_assign_unassigned_resources(void) 2133 2133 { 2134 2134 struct pci_bus *root_bus; 2135 2135
drivers/pci/vc.c (+5 -4)
···
  * Author: Alex Williamson <alex.williamson@redhat.com>
  */
 
+#include <linux/bitfield.h>
 #include <linux/device.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
···
 	/* Extended VC Count (not counting VC0) */
 	evcc = cap1 & PCI_VC_CAP1_EVCC;
 	/* Low Priority Extended VC Count (not counting VC0) */
-	lpevcc = (cap1 & PCI_VC_CAP1_LPEVCC) >> 4;
+	lpevcc = FIELD_GET(PCI_VC_CAP1_LPEVCC, cap1);
 	/* Port Arbitration Table Entry Size (bits) */
-	parb_size = 1 << ((cap1 & PCI_VC_CAP1_ARB_SIZE) >> 10);
+	parb_size = 1 << FIELD_GET(PCI_VC_CAP1_ARB_SIZE, cap1);
 
 	/*
 	 * Port VC Control Register contains VC Arbitration Select, which
···
 	int vcarb_offset;
 
 	pci_read_config_dword(dev, pos + PCI_VC_PORT_CAP2, &cap2);
-	vcarb_offset = ((cap2 & PCI_VC_CAP2_ARB_OFF) >> 24) * 16;
+	vcarb_offset = FIELD_GET(PCI_VC_CAP2_ARB_OFF, cap2) * 16;
 
 	if (vcarb_offset) {
 		int size, vcarb_phases = 0;
···
 
 	pci_read_config_dword(dev, pos + PCI_VC_RES_CAP +
 			      (i * PCI_CAP_VC_PER_VC_SIZEOF), &cap);
-	parb_offset = ((cap & PCI_VC_RES_CAP_ARB_OFF) >> 24) * 16;
+	parb_offset = FIELD_GET(PCI_VC_RES_CAP_ARB_OFF, cap) * 16;
 	if (parb_offset) {
 		int size, parb_phases = 0;
 
drivers/pci/vgaarb.c (+8 -6)
···
 	struct pci_dev *bridge;
 	u16 cmd;
 
-	/* Only deal with VGA class devices */
-	if ((pdev->class >> 8) != PCI_CLASS_DISPLAY_VGA)
-		return false;
-
 	/* Allocate structure */
 	vgadev = kzalloc(sizeof(struct vga_device), GFP_KERNEL);
 	if (vgadev == NULL) {
···
 
 	vgaarb_dbg(dev, "%s\n", __func__);
 
+	/* Only deal with VGA class devices */
+	if (!pci_is_vga(pdev))
+		return 0;
+
 	/*
 	 * For now, we're only interested in devices added and removed.
 	 * I didn't test this thing here, so someone needs to double check
···
 	pdev = NULL;
 	while ((pdev =
 		pci_get_subsys(PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
-			       PCI_ANY_ID, pdev)) != NULL)
-		vga_arbiter_add_pci_device(pdev);
+			       PCI_ANY_ID, pdev)) != NULL) {
+		if (pci_is_vga(pdev))
+			vga_arbiter_add_pci_device(pdev);
+	}
 
 	pr_info("loaded\n");
 	return rc;
drivers/scsi/ipr.c (+8 -4)
···
 static int ipr_save_pcix_cmd_reg(struct ipr_ioa_cfg *ioa_cfg)
 {
 	int pcix_cmd_reg = pci_find_capability(ioa_cfg->pdev, PCI_CAP_ID_PCIX);
+	int rc;
 
 	if (pcix_cmd_reg == 0)
 		return 0;
 
-	if (pci_read_config_word(ioa_cfg->pdev, pcix_cmd_reg + PCI_X_CMD,
-				 &ioa_cfg->saved_pcix_cmd_reg) != PCIBIOS_SUCCESSFUL) {
+	rc = pci_read_config_word(ioa_cfg->pdev, pcix_cmd_reg + PCI_X_CMD,
+				  &ioa_cfg->saved_pcix_cmd_reg);
+	if (rc != PCIBIOS_SUCCESSFUL) {
 		dev_err(&ioa_cfg->pdev->dev, "Failed to save PCI-X command register\n");
 		return -EIO;
 	}
···
 static int ipr_set_pcix_cmd_reg(struct ipr_ioa_cfg *ioa_cfg)
 {
 	int pcix_cmd_reg = pci_find_capability(ioa_cfg->pdev, PCI_CAP_ID_PCIX);
+	int rc;
 
 	if (pcix_cmd_reg) {
-		if (pci_write_config_word(ioa_cfg->pdev, pcix_cmd_reg + PCI_X_CMD,
-					  ioa_cfg->saved_pcix_cmd_reg) != PCIBIOS_SUCCESSFUL) {
+		rc = pci_write_config_word(ioa_cfg->pdev, pcix_cmd_reg + PCI_X_CMD,
+					   ioa_cfg->saved_pcix_cmd_reg);
+		if (rc != PCIBIOS_SUCCESSFUL) {
 			dev_err(&ioa_cfg->pdev->dev, "Failed to setup PCI-X command register\n");
 			return -EIO;
 		}
include/linux/logic_pio.h (-3)
···
 
 #ifdef CONFIG_INDIRECT_PIO
 u8 logic_inb(unsigned long addr);
-void logic_outb(u8 value, unsigned long addr);
-void logic_outw(u16 value, unsigned long addr);
-void logic_outl(u32 value, unsigned long addr);
 u16 logic_inw(unsigned long addr);
 u32 logic_inl(unsigned long addr);
 void logic_outb(u8 value, unsigned long addr);
include/linux/pci.h (+29)
···
 		dev->hdr_type == PCI_HEADER_TYPE_CARDBUS;
 }
 
+/**
+ * pci_is_vga - check if the PCI device is a VGA device
+ * @pdev: the PCI device
+ *
+ * The PCI Code and ID Assignment spec, r1.15, secs 1.4 and 1.1, define
+ * VGA Base Class and Sub-Classes:
+ *
+ *   03 00  PCI_CLASS_DISPLAY_VGA      VGA-compatible or 8514-compatible
+ *   00 01  PCI_CLASS_NOT_DEFINED_VGA  VGA-compatible (before Class Code)
+ *
+ * Return true if the PCI device is a VGA device and uses the legacy VGA
+ * resources ([mem 0xa0000-0xbffff], [io 0x3b0-0x3bb], [io 0x3c0-0x3df]
+ * and aliases).
+ */
+static inline bool pci_is_vga(struct pci_dev *pdev)
+{
+	if ((pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA)
+		return true;
+
+	if ((pdev->class >> 8) == PCI_CLASS_NOT_DEFINED_VGA)
+		return true;
+
+	return false;
+}
+
 #define for_each_pci_bridge(dev, bus) \
 	list_for_each_entry(dev, &bus->devices, bus_list) \
 		if (!pci_is_bridge(dev)) {} else
···
 struct pci_dev *pci_get_domain_bus_and_slot(int domain, unsigned int bus,
 					    unsigned int devfn);
 struct pci_dev *pci_get_class(unsigned int class, struct pci_dev *from);
+struct pci_dev *pci_get_base_class(unsigned int class, struct pci_dev *from);
+
 int pci_dev_present(const struct pci_device_id *ids);
 
 int pci_bus_read_config_byte(struct pci_bus *bus, unsigned int devfn,
···
 						struct pci_dev *from)
 { return NULL; }
 
+static inline struct pci_dev *pci_get_base_class(unsigned int class,
+						 struct pci_dev *from)
+{ return NULL; }
 
 static inline int pci_dev_present(const struct pci_device_id *ids)
 { return 0; }
include/linux/pci_ids.h (+1)
···
 #define PCI_DEVICE_ID_AMD_1AH_M20H_DF_F3 0x16fb
 #define PCI_DEVICE_ID_AMD_MI200_DF_F3	0x14d3
 #define PCI_DEVICE_ID_AMD_MI300_DF_F3	0x152b
+#define PCI_DEVICE_ID_AMD_VANGOGH_USB	0x163a
 #define PCI_DEVICE_ID_AMD_CNB17H_F3	0x1703
 #define PCI_DEVICE_ID_AMD_LANCE		0x2000
 #define PCI_DEVICE_ID_AMD_LANCE_HOME	0x2001
include/uapi/linux/pci_regs.h (+19 -5)
···
 #define  PCI_HEADER_TYPE_NORMAL		0
 #define  PCI_HEADER_TYPE_BRIDGE	1
 #define  PCI_HEADER_TYPE_CARDBUS	2
+#define  PCI_HEADER_TYPE_MFD		0x80	/* Multi-Function Device (possible) */
 
 #define PCI_BIST		0x0f	/* 8 bits */
 #define  PCI_BIST_CODE_MASK	0x0f	/* Return result */
···
 #define PCI_EXP_RTCAP		0x1e	/* Root Capabilities */
 #define  PCI_EXP_RTCAP_CRSVIS	0x0001	/* CRS Software Visibility capability */
 #define PCI_EXP_RTSTA		0x20	/* Root Status */
+#define  PCI_EXP_RTSTA_PME_RQ_ID 0x0000ffff /* PME Requester ID */
 #define  PCI_EXP_RTSTA_PME	0x00010000 /* PME status */
 #define  PCI_EXP_RTSTA_PENDING	0x00020000 /* PME pending */
 /*
···
 /* Process Address Space ID */
 #define PCI_PASID_CAP		0x04    /* PASID feature register */
-#define  PCI_PASID_CAP_EXEC	0x02	/* Exec permissions Supported */
-#define  PCI_PASID_CAP_PRIV	0x04	/* Privilege Mode Supported */
+#define  PCI_PASID_CAP_EXEC	0x0002	/* Exec permissions Supported */
+#define  PCI_PASID_CAP_PRIV	0x0004	/* Privilege Mode Supported */
+#define  PCI_PASID_CAP_WIDTH	0x1f00
 #define PCI_PASID_CTRL		0x06    /* PASID control register */
-#define  PCI_PASID_CTRL_ENABLE	0x01	/* Enable bit */
-#define  PCI_PASID_CTRL_EXEC	0x02	/* Exec permissions Enable */
-#define  PCI_PASID_CTRL_PRIV	0x04	/* Privilege Mode Enable */
+#define  PCI_PASID_CTRL_ENABLE	0x0001	/* Enable bit */
+#define  PCI_PASID_CTRL_EXEC	0x0002	/* Exec permissions Enable */
+#define  PCI_PASID_CTRL_PRIV	0x0004	/* Privilege Mode Enable */
 #define PCI_EXT_CAP_PASID_SIZEOF	8
···
 #define  PCI_LTR_VALUE_MASK	0x000003ff
 #define  PCI_LTR_SCALE_MASK	0x00001c00
 #define  PCI_LTR_SCALE_SHIFT	10
+#define  PCI_LTR_NOSNOOP_VALUE	0x03ff0000 /* Max No-Snoop Latency Value */
+#define  PCI_LTR_NOSNOOP_SCALE	0x1c000000 /* Scale for Max Value */
 #define PCI_EXT_CAP_LTR_SIZEOF	8
 
 /* Access Control Service */
···
 #define PCI_EXP_DPC_STATUS		0x08	/* DPC Status */
 #define  PCI_EXP_DPC_STATUS_TRIGGER	    0x0001 /* Trigger Status */
 #define  PCI_EXP_DPC_STATUS_TRIGGER_RSN	    0x0006 /* Trigger Reason */
+#define   PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR	0x0000	/* Uncorrectable error */
+#define   PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE	0x0002	/* Rcvd ERR_NONFATAL */
+#define   PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE	0x0004	/* Rcvd ERR_FATAL */
+#define   PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT	0x0006	/* Reason in Trig Reason Extension field */
 #define  PCI_EXP_DPC_STATUS_INTERRUPT	    0x0008 /* Interrupt Status */
 #define  PCI_EXP_DPC_RP_BUSY		    0x0010 /* Root Port Busy */
 #define  PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT 0x0060 /* Trig Reason Extension */
+#define   PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO		0x0000	/* RP PIO error */
+#define   PCI_EXP_DPC_STATUS_TRIGGER_RSN_SW_TRIGGER	0x0020	/* DPC SW Trigger bit */
+#define  PCI_EXP_DPC_RP_PIO_FEP		    0x1f00 /* RP PIO First Err Ptr */
 
 #define PCI_EXP_DPC_SOURCE_ID		 0x0A	/* DPC Source Identifier */
···
 #define  PCI_L1SS_CTL1_LTR_L12_TH_VALUE	0x03ff0000  /* LTR_L1.2_THRESHOLD_Value */
 #define  PCI_L1SS_CTL1_LTR_L12_TH_SCALE	0xe0000000  /* LTR_L1.2_THRESHOLD_Scale */
 #define PCI_L1SS_CTL2			0x0c	/* Control 2 Register */
+#define  PCI_L1SS_CTL2_T_PWR_ON_SCALE	0x00000003  /* T_POWER_ON Scale */
+#define  PCI_L1SS_CTL2_T_PWR_ON_VALUE	0x000000f8  /* T_POWER_ON Value */
 
 /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */
 #define PCI_DVSEC_HEADER1		0x4 /* Designated Vendor-Specific Header1 */
sound/pci/hda/hda_intel.c (+5 -11)
···
 	acpi_handle dhandle, atpx_handle;
 	acpi_status status;
 
-	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
-		dhandle = ACPI_HANDLE(&pdev->dev);
-		if (dhandle) {
-			status = acpi_get_handle(dhandle, "ATPX", &atpx_handle);
-			if (ACPI_SUCCESS(status)) {
-				pci_dev_put(pdev);
-				return true;
-			}
-		}
-	}
-	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) {
+	while ((pdev = pci_get_base_class(PCI_BASE_CLASS_DISPLAY, pdev))) {
+		if ((pdev->class != PCI_CLASS_DISPLAY_VGA << 8) &&
+		    (pdev->class != PCI_CLASS_DISPLAY_OTHER << 8))
+			continue;
+
 		dhandle = ACPI_HANDLE(&pdev->dev);
 		if (dhandle) {
 			status = acpi_get_handle(dhandle, "ATPX", &atpx_handle);