
Merge tag 'pci-v5.13-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:
- Release OF node when pci_scan_device() fails (Dmitry Baryshkov)
- Add pci_disable_parity() (Bjorn Helgaas)
- Disable Mellanox Tavor parity reporting (Heiner Kallweit)
- Disable N2100 r8169 parity reporting (Heiner Kallweit)
- Fix RCiEP device to RCEC association (Qiuxu Zhuo)
- Convert sysfs "config", "rom", "reset", "label", "index",
"acpi_index" to static attributes to help fix races in device
enumeration (Krzysztof Wilczyński)
- Convert sysfs "vpd" to static attribute (Heiner Kallweit, Krzysztof
Wilczyński)
- Use sysfs_emit() in "show" functions (Krzysztof Wilczyński)
- Remove unused alloc_pci_root_info() return value (Krzysztof
Wilczyński)

PCI device hotplug:
- Fix acpiphp reference count leak (Feilong Lin)

Power management:
- Fix acpi_pci_set_power_state() debug message (Rafael J. Wysocki)
- Fix runtime PM imbalance (Dinghao Liu)

Virtualization:
- Increase delay after FLR to work around Intel DC P4510 NVMe erratum
(Raphael Norwitz)

MSI:
- Convert rcar, tegra, xilinx to MSI domains (Marc Zyngier)
- For rcar, xilinx, use controller address as MSI doorbell (Marc
Zyngier)
- Remove unused hv msi_controller struct (Marc Zyngier)
- Remove unused PCI core msi_controller support (Marc Zyngier)
- Remove struct msi_controller altogether (Marc Zyngier)
- Remove unused default_teardown_msi_irqs() (Marc Zyngier)
- Let host bridges declare their reliance on MSI domains (Marc
Zyngier)
- Make pci_host_common_probe() declare its reliance on MSI domains
(Marc Zyngier)
- Advertise mediatek lack of built-in MSI handling (Thomas Gleixner)
- Document ways of ending up with NO_MSI (Marc Zyngier)
- Refactor HT advertising of NO_MSI flag (Marc Zyngier)

VPD:
- Remove obsolete Broadcom NIC VPD length-limiting quirk (Heiner
Kallweit)
- Remove sysfs VPD size checking dead code (Heiner Kallweit)
- Convert VPD sysfs file to static attribute (Heiner Kallweit)
- Remove unnecessary pci_set_vpd_size() (Heiner Kallweit)
- Tone down "missing VPD" message (Heiner Kallweit)

Endpoint framework:
- Fix NULL pointer dereference when epc_features not implemented
(Shradha Todi)
- Add missing destroy_workqueue() in endpoint test (Yang Yingliang)

Amazon Annapurna Labs PCIe controller driver:
- Fix compile testing without CONFIG_PCI_ECAM (Arnd Bergmann)
- Fix "no symbols" warnings when compile testing with
CONFIG_TRIM_UNUSED_KSYMS (Arnd Bergmann)

APM X-Gene PCIe controller driver:
- Fix cfg resource mapping regression (Dejin Zheng)

Broadcom iProc PCIe controller driver:
- Return zero for success of iproc_msi_irq_domain_alloc() (Pali
Rohár)

Broadcom STB PCIe controller driver:
- Add reset_control_rearm() stub for !CONFIG_RESET_CONTROLLER (Jim
Quinlan)
- Fix use of BCM7216 reset controller (Jim Quinlan)
- Use reset/rearm for Broadcom STB pulse reset instead of
deassert/assert (Jim Quinlan)
- Fix brcm_pcie_probe() error return for unsupported revision (Wei
Yongjun)

Cavium ThunderX PCIe controller driver:
- Fix compile testing (Arnd Bergmann)
- Fix "no symbols" warnings when compile testing with
CONFIG_TRIM_UNUSED_KSYMS (Arnd Bergmann)

Freescale Layerscape PCIe controller driver:
- Fix ls_pcie_ep_probe() syntax error (comma for semicolon)
(Krzysztof Wilczyński)
- Remove layerscape-gen4 dependencies on OF and ARM64, add dependency
on ARCH_LAYERSCAPE (Geert Uytterhoeven)

HiSilicon HIP PCIe controller driver:
- Remove obsolete HiSilicon PCIe DT description (Dongdong Liu)

Intel Gateway PCIe controller driver:
- Remove unused pcie_app_rd() (Jiapeng Chong)

Intel VMD host bridge driver:
- Program IRTE with Requester ID of VMD endpoint, not child device
(Jon Derrick)
- Disable VMD MSI-X remapping when possible so children can use more
MSI-X vectors (Jon Derrick)

MediaTek PCIe controller driver:
- Configure FC and FTS for functions other than 0 (Ryder Lee)
- Add YAML schema for MediaTek (Jianjun Wang)
- Export pci_pio_to_address() for module use (Jianjun Wang)
- Add MediaTek MT8192 PCIe controller driver (Jianjun Wang)
- Add MediaTek MT8192 INTx support (Jianjun Wang)
- Add MediaTek MT8192 MSI support (Jianjun Wang)
- Add MediaTek MT8192 system power management support (Jianjun Wang)
- Add missing MODULE_DEVICE_TABLE (Qiheng Lin)

Microchip PolarFire PCIe controller driver:
- Make several symbols static (Wei Yongjun)

NVIDIA Tegra PCIe controller driver:
- Add MCFG quirks for Tegra194 ECAM errata (Vidya Sagar)
- Make several symbols const (Rikard Falkeborn)
- Fix Kconfig host/endpoint typo (Wesley Sheng)

SiFive FU740 PCIe controller driver:
- Add pcie_aux clock to prci driver (Greentime Hu)
- Use reset-simple in prci driver for PCIe (Greentime Hu)
- Add SiFive FU740 PCIe host controller driver and DT binding (Paul
Walmsley, Greentime Hu)

Synopsys DesignWare PCIe controller driver:
- Move MSI Receiver init to dw_pcie_host_init() so it is
re-initialized along with the RC in resume (Jisheng Zhang)
- Move iATU detection earlier to fix regression (Hou Zhiqiang)

TI J721E PCIe driver:
- Add DT binding and TI j721e support for refclk to PCIe connector
(Kishon Vijay Abraham I)
- Add host mode and endpoint mode DT bindings for TI AM64 SoC (Kishon
Vijay Abraham I)

TI Keystone PCIe controller driver:
- Use generic config accessors for TI AM65x (K3) to fix regression
(Kishon Vijay Abraham I)

Xilinx NWL PCIe controller driver:
- Add support for coherent PCIe DMA traffic using CCI (Bharat Kumar
Gogada)
- Add optional "dma-coherent" DT property (Bharat Kumar Gogada)

Miscellaneous:
- Fix kernel-doc warnings (Krzysztof Wilczyński)
- Remove unused MicroGate SyncLink device IDs (Jiri Slaby)
- Remove redundant dev_err() for devm_ioremap_resource() failure
(Chen Hui)
- Remove redundant initialization (Colin Ian King)
- Drop redundant dev_err() for platform_get_irq() errors (Krzysztof
Wilczyński)"

* tag 'pci-v5.13-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (98 commits)
riscv: dts: Add PCIe support for the SiFive FU740-C000 SoC
PCI: fu740: Add SiFive FU740 PCIe host controller driver
dt-bindings: PCI: Add SiFive FU740 PCIe host controller
MAINTAINERS: Add maintainers for SiFive FU740 PCIe driver
clk: sifive: Use reset-simple in prci driver for PCIe driver
clk: sifive: Add pcie_aux clock in prci driver for PCIe driver
PCI: brcmstb: Use reset/rearm instead of deassert/assert
ata: ahci_brcm: Fix use of BCM7216 reset controller
reset: add missing empty function reset_control_rearm()
PCI: Allow VPD access for QLogic ISP2722
PCI/VPD: Add helper pci_get_func0_dev()
PCI/VPD: Remove pci_vpd_find_tag() SRDT handling
PCI/VPD: Remove pci_vpd_find_tag() 'offset' argument
PCI/VPD: Change pci_vpd_init() return type to void
PCI/VPD: Make missing VPD message less alarming
PCI/VPD: Remove pci_set_vpd_size()
x86/PCI: Remove unused alloc_pci_root_info() return value
MAINTAINERS: Add Jianjun Wang as MediaTek PCI co-maintainer
PCI: mediatek-gen3: Add system PM support
PCI: mediatek-gen3: Add MSI support
...

+2965 -1304
-43
Documentation/devicetree/bindings/pci/hisilicon-pcie.txt
- HiSilicon Hip05 and Hip06 PCIe host bridge DT description
-
- HiSilicon PCIe host controller is based on the Synopsys DesignWare PCI core.
- It shares common functions with the PCIe DesignWare core driver and inherits
- common properties defined in
- Documentation/devicetree/bindings/pci/designware-pcie.txt.
-
- Additional properties are described here:
-
- Required properties
- - compatible: Should contain "hisilicon,hip05-pcie" or "hisilicon,hip06-pcie".
- - reg: Should contain rc_dbi, config registers location and length.
- - reg-names: Must include the following entries:
-   "rc_dbi": controller configuration registers;
-   "config": PCIe configuration space registers.
- - msi-parent: Should be its_pcie which is an ITS receiving MSI interrupts.
- - port-id: Should be 0, 1, 2 or 3.
-
- Optional properties:
- - status: Either "ok" or "disabled".
- - dma-coherent: Present if DMA operations are coherent.
-
- Hip05 Example (note that Hip06 is the same except compatible):
- 	pcie@b0080000 {
- 		compatible = "hisilicon,hip05-pcie", "snps,dw-pcie";
- 		reg = <0 0xb0080000 0 0x10000>, <0x220 0x00000000 0 0x2000>;
- 		reg-names = "rc_dbi", "config";
- 		bus-range = <0 15>;
- 		msi-parent = <&its_pcie>;
- 		#address-cells = <3>;
- 		#size-cells = <2>;
- 		device_type = "pci";
- 		dma-coherent;
- 		ranges = <0x82000000 0 0x00000000 0x220 0x00000000 0 0x10000000>;
- 		num-lanes = <8>;
- 		port-id = <1>;
- 		#interrupt-cells = <1>;
- 		interrupt-map-mask = <0xf800 0 0 7>;
- 		interrupt-map = <0x0 0 0 1 &mbigen_pcie 1 10
- 				 0x0 0 0 2 &mbigen_pcie 2 11
- 				 0x0 0 0 3 &mbigen_pcie 3 12
- 				 0x0 0 0 4 &mbigen_pcie 4 13>;
- 	};
+181
Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml
+ # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/pci/mediatek-pcie-gen3.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Gen3 PCIe controller on MediaTek SoCs
+
+ maintainers:
+   - Jianjun Wang <jianjun.wang@mediatek.com>
+
+ description: |+
+   PCIe Gen3 MAC controller for MediaTek SoCs, it supports Gen3 speed
+   and compatible with Gen2, Gen1 speed.
+
+   This PCIe controller supports up to 256 MSI vectors, the MSI hardware
+   block diagram is as follows:
+
+                   +-----+
+                   | GIC |
+                   +-----+
+                      ^
+                      |
+                  port->irq
+                      |
+              +-+-+-+-+-+-+-+-+
+              |0|1|2|3|4|5|6|7|  (PCIe intc)
+              +-+-+-+-+-+-+-+-+
+               ^ ^           ^
+               | |    ...    |
+       +-------+ +------+    +-----------+
+       |                |                |
+  +-+-+---+--+--+  +-+-+---+--+--+  +-+-+---+--+--+
+  |0|1|...|30|31|  |0|1|...|30|31|  |0|1|...|30|31|  (MSI sets)
+  +-+-+---+--+--+  +-+-+---+--+--+  +-+-+---+--+--+
+   ^ ^      ^  ^    ^ ^      ^  ^    ^ ^      ^  ^
+   | |      |  |    | |      |  |    | |      |  |   (MSI vectors)
+   | |      |  |    | |      |  |    | |      |  |
+
+   (MSI SET0)       (MSI SET1)  ...   (MSI SET7)
+
+   With 256 MSI vectors supported, the MSI vectors are composed of 8 sets,
+   each set has its own address for MSI message, and supports 32 MSI vectors
+   to generate interrupt.
+
+ allOf:
+   - $ref: /schemas/pci/pci-bus.yaml#
+
+ properties:
+   compatible:
+     const: mediatek,mt8192-pcie
+
+   reg:
+     maxItems: 1
+
+   reg-names:
+     items:
+       - const: pcie-mac
+
+   interrupts:
+     maxItems: 1
+
+   ranges:
+     minItems: 1
+     maxItems: 8
+
+   resets:
+     minItems: 1
+     maxItems: 2
+
+   reset-names:
+     minItems: 1
+     maxItems: 2
+     items:
+       - const: phy
+       - const: mac
+
+   clocks:
+     maxItems: 6
+
+   clock-names:
+     items:
+       - const: pl_250m
+       - const: tl_26m
+       - const: tl_96m
+       - const: tl_32k
+       - const: peri_26m
+       - const: top_133m
+
+   assigned-clocks:
+     maxItems: 1
+
+   assigned-clock-parents:
+     maxItems: 1
+
+   phys:
+     maxItems: 1
+
+   '#interrupt-cells':
+     const: 1
+
+   interrupt-controller:
+     description: Interrupt controller node for handling legacy PCI interrupts.
+     type: object
+     properties:
+       '#address-cells':
+         const: 0
+       '#interrupt-cells':
+         const: 1
+       interrupt-controller: true
+     required:
+       - '#address-cells'
+       - '#interrupt-cells'
+       - interrupt-controller
+     additionalProperties: false
+
+ required:
+   - compatible
+   - reg
+   - reg-names
+   - interrupts
+   - ranges
+   - clocks
+   - '#interrupt-cells'
+   - interrupt-controller
+
+ unevaluatedProperties: false
+
+ examples:
+   - |
+     #include <dt-bindings/interrupt-controller/arm-gic.h>
+     #include <dt-bindings/interrupt-controller/irq.h>
+
+     bus {
+         #address-cells = <2>;
+         #size-cells = <2>;
+
+         pcie: pcie@11230000 {
+             compatible = "mediatek,mt8192-pcie";
+             device_type = "pci";
+             #address-cells = <3>;
+             #size-cells = <2>;
+             reg = <0x00 0x11230000 0x00 0x4000>;
+             reg-names = "pcie-mac";
+             interrupts = <GIC_SPI 251 IRQ_TYPE_LEVEL_HIGH 0>;
+             bus-range = <0x00 0xff>;
+             ranges = <0x82000000 0x00 0x12000000 0x00
+                       0x12000000 0x00 0x1000000>;
+             clocks = <&infracfg 44>,
+                      <&infracfg 40>,
+                      <&infracfg 43>,
+                      <&infracfg 97>,
+                      <&infracfg 99>,
+                      <&infracfg 111>;
+             clock-names = "pl_250m", "tl_26m", "tl_96m",
+                           "tl_32k", "peri_26m", "top_133m";
+             assigned-clocks = <&topckgen 50>;
+             assigned-clock-parents = <&topckgen 91>;
+
+             phys = <&pciephy>;
+             phy-names = "pcie-phy";
+
+             resets = <&infracfg_rst 2>,
+                      <&infracfg_rst 3>;
+             reset-names = "phy", "mac";
+
+             #interrupt-cells = <1>;
+             interrupt-map-mask = <0 0 0 0x7>;
+             interrupt-map = <0 0 0 1 &pcie_intc 0>,
+                             <0 0 0 2 &pcie_intc 1>,
+                             <0 0 0 3 &pcie_intc 2>,
+                             <0 0 0 4 &pcie_intc 3>;
+             pcie_intc: interrupt-controller {
+                 #address-cells = <0>;
+                 #interrupt-cells = <1>;
+                 interrupt-controller;
+             };
+         };
+     };
+113
Documentation/devicetree/bindings/pci/sifive,fu740-pcie.yaml
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/pci/sifive,fu740-pcie.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: SiFive FU740 PCIe host controller
+
+ description: |+
+   SiFive FU740 PCIe host controller is based on the Synopsys DesignWare
+   PCI core. It shares common features with the PCIe DesignWare core and
+   inherits common properties defined in
+   Documentation/devicetree/bindings/pci/designware-pcie.txt.
+
+ maintainers:
+   - Paul Walmsley <paul.walmsley@sifive.com>
+   - Greentime Hu <greentime.hu@sifive.com>
+
+ allOf:
+   - $ref: /schemas/pci/pci-bus.yaml#
+
+ properties:
+   compatible:
+     const: sifive,fu740-pcie
+
+   reg:
+     maxItems: 3
+
+   reg-names:
+     items:
+       - const: dbi
+       - const: config
+       - const: mgmt
+
+   num-lanes:
+     const: 8
+
+   msi-parent: true
+
+   interrupt-names:
+     items:
+       - const: msi
+       - const: inta
+       - const: intb
+       - const: intc
+       - const: intd
+
+   resets:
+     description: A phandle to the PCIe power up reset line.
+     maxItems: 1
+
+   pwren-gpios:
+     description: Should specify the GPIO for controlling the PCI bus device power on.
+     maxItems: 1
+
+   reset-gpios:
+     maxItems: 1
+
+ required:
+   - dma-coherent
+   - num-lanes
+   - interrupts
+   - interrupt-names
+   - interrupt-parent
+   - interrupt-map-mask
+   - interrupt-map
+   - clock-names
+   - clocks
+   - resets
+   - pwren-gpios
+   - reset-gpios
+
+ unevaluatedProperties: false
+
+ examples:
+   - |
+     bus {
+         #address-cells = <2>;
+         #size-cells = <2>;
+         #include <dt-bindings/clock/sifive-fu740-prci.h>
+
+         pcie@e00000000 {
+             compatible = "sifive,fu740-pcie";
+             #address-cells = <3>;
+             #size-cells = <2>;
+             #interrupt-cells = <1>;
+             reg = <0xe 0x00000000 0x0 0x80000000>,
+                   <0xd 0xf0000000 0x0 0x10000000>,
+                   <0x0 0x100d0000 0x0 0x1000>;
+             reg-names = "dbi", "config", "mgmt";
+             device_type = "pci";
+             dma-coherent;
+             bus-range = <0x0 0xff>;
+             ranges = <0x81000000 0x0 0x60080000 0x0 0x60080000 0x0 0x10000>,        /* I/O */
+                      <0x82000000 0x0 0x60090000 0x0 0x60090000 0x0 0xff70000>,      /* mem */
+                      <0x82000000 0x0 0x70000000 0x0 0x70000000 0x0 0x1000000>,      /* mem */
+                      <0xc3000000 0x20 0x00000000 0x20 0x00000000 0x20 0x00000000>;  /* mem prefetchable */
+             num-lanes = <0x8>;
+             interrupts = <56>, <57>, <58>, <59>, <60>, <61>, <62>, <63>, <64>;
+             interrupt-names = "msi", "inta", "intb", "intc", "intd";
+             interrupt-parent = <&plic0>;
+             interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+             interrupt-map = <0x0 0x0 0x0 0x1 &plic0 57>,
+                             <0x0 0x0 0x0 0x2 &plic0 58>,
+                             <0x0 0x0 0x0 0x3 &plic0 59>,
+                             <0x0 0x0 0x0 0x4 &plic0 60>;
+             clock-names = "pcie_aux";
+             clocks = <&prci PRCI_CLK_PCIE_AUX>;
+             resets = <&prci 4>;
+             pwren-gpios = <&gpio 5 0>;
+             reset-gpios = <&gpio 8 0>;
+         };
+     };
+5 -4
Documentation/devicetree/bindings/pci/ti,j721e-pci-ep.yaml
···
  properties:
    compatible:
      oneOf:
+       - const: ti,j721e-pcie-ep
+       - description: PCIe EP controller in AM64
+         items:
+           - const: ti,am64-pcie-ep
+           - const: ti,j721e-pcie-ep
        - description: PCIe EP controller in J7200
          items:
            - const: ti,j7200-pcie-ep
-           - const: ti,j721e-pcie-ep
-       - description: PCIe EP controller in J721E
-         items:
            - const: ti,j721e-pcie-ep

    reg:
···
    - power-domains
    - clocks
    - clock-names
-   - dma-coherent
    - max-functions
    - phys
    - phy-names
+14 -6
Documentation/devicetree/bindings/pci/ti,j721e-pci-host.yaml
···
  properties:
    compatible:
      oneOf:
+       - const: ti,j721e-pcie-host
+       - description: PCIe controller in AM64
+         items:
+           - const: ti,am64-pcie-host
+           - const: ti,j721e-pcie-host
        - description: PCIe controller in J7200
          items:
            - const: ti,j7200-pcie-host
-           - const: ti,j721e-pcie-host
-       - description: PCIe controller in J721E
-         items:
            - const: ti,j721e-pcie-host

    reg:
···
      maxItems: 1

    clocks:
-     maxItems: 1
-     description: clock-specifier to represent input to the PCIe
+     minItems: 1
+     maxItems: 2
+     description: |+
+       clock-specifier to represent input to the PCIe for 1 item.
+       2nd item if present represents reference clock to the connector.

    clock-names:
+     minItems: 1
      items:
        - const: fck
+       - const: pcie_refclk

    vendor-id:
      const: 0x104c
···
          - const: 0xb00d
      - items:
          - const: 0xb00f
+     - items:
+         - const: 0xb010

    msi-map: true
···
    - vendor-id
    - device-id
    - msi-map
-   - dma-coherent
    - dma-ranges
    - ranges
    - reset-gpios
+2
Documentation/devicetree/bindings/pci/xilinx-nwl-pcie.txt
···
  - #address-cells: specifies the number of cells needed to encode an
  	address. The value must be 0.

+ Optional properties:
+ - dma-coherent: present if DMA operations are coherent

  Example:
  ++++++++
+9 -1
MAINTAINERS
···
  F:	Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt
  F:	drivers/pci/controller/dwc/*imx6*

+ PCI DRIVER FOR FU740
+ M:	Paul Walmsley <paul.walmsley@sifive.com>
+ M:	Greentime Hu <greentime.hu@sifive.com>
+ L:	linux-pci@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/devicetree/bindings/pci/sifive,fu740-pcie.yaml
+ F:	drivers/pci/controller/dwc/pcie-fu740.c
+
  PCI DRIVER FOR INTEL VOLUME MANAGEMENT DEVICE (VMD)
  M:	Jonathan Derrick <jonathan.derrick@intel.com>
  L:	linux-pci@vger.kernel.org
···
  M:	Zhou Wang <wangzhou1@hisilicon.com>
  L:	linux-pci@vger.kernel.org
  S:	Maintained
- F:	Documentation/devicetree/bindings/pci/hisilicon-pcie.txt
  F:	drivers/pci/controller/dwc/pcie-hisi.c

  PCIE DRIVER FOR HISILICON KIRIN
···
  PCIE DRIVER FOR MEDIATEK
  M:	Ryder Lee <ryder.lee@mediatek.com>
+ M:	Jianjun Wang <jianjun.wang@mediatek.com>
  L:	linux-pci@vger.kernel.org
  L:	linux-mediatek@lists.infradead.org
  S:	Supported
+4 -4
arch/arm/mach-iop32x/n2100.c
···
  };

  /*
-  * Both r8169 chips on the n2100 exhibit PCI parity problems.  Set
-  * the ->broken_parity_status flag for both ports so that the r8169
-  * driver knows it should ignore error interrupts.
+  * Both r8169 chips on the n2100 exhibit PCI parity problems.  Turn
+  * off parity reporting for both ports so we don't get error interrupts
+  * for them.
   */
  static void n2100_fixup_r8169(struct pci_dev *dev)
  {
  	if (dev->bus->number == 0 &&
  	    (dev->devfn == PCI_DEVFN(1, 0) ||
  	     dev->devfn == PCI_DEVFN(2, 0)))
- 		dev->broken_parity_status = 1;
+ 		pci_disable_parity(dev);
  }
  DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_REALTEK, PCI_ANY_ID, n2100_fixup_r8169);
+33
arch/riscv/boot/dts/sifive/fu740-c000.dtsi
···
  		reg = <0x0 0x10000000 0x0 0x1000>;
  		clocks = <&hfclk>, <&rtcclk>;
  		#clock-cells = <1>;
+ 		#reset-cells = <1>;
  	};
  	uart0: serial@10010000 {
  		compatible = "sifive,fu740-c000-uart", "sifive,uart0";
···
  		#interrupt-cells = <2>;
  		clocks = <&prci PRCI_CLK_PCLK>;
  		status = "disabled";
+ 	};
+ 	pcie@e00000000 {
+ 		compatible = "sifive,fu740-pcie";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+ 		#interrupt-cells = <1>;
+ 		reg = <0xe 0x00000000 0x0 0x80000000>,
+ 		      <0xd 0xf0000000 0x0 0x10000000>,
+ 		      <0x0 0x100d0000 0x0 0x1000>;
+ 		reg-names = "dbi", "config", "mgmt";
+ 		device_type = "pci";
+ 		dma-coherent;
+ 		bus-range = <0x0 0xff>;
+ 		ranges = <0x81000000 0x0 0x60080000 0x0 0x60080000 0x0 0x10000>,        /* I/O */
+ 			 <0x82000000 0x0 0x60090000 0x0 0x60090000 0x0 0xff70000>,      /* mem */
+ 			 <0x82000000 0x0 0x70000000 0x0 0x70000000 0x0 0x1000000>,      /* mem */
+ 			 <0xc3000000 0x20 0x00000000 0x20 0x00000000 0x20 0x00000000>;  /* mem prefetchable */
+ 		num-lanes = <0x8>;
+ 		interrupts = <56>, <57>, <58>, <59>, <60>, <61>, <62>, <63>, <64>;
+ 		interrupt-names = "msi", "inta", "intb", "intc", "intd";
+ 		interrupt-parent = <&plic0>;
+ 		interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+ 		interrupt-map = <0x0 0x0 0x0 0x1 &plic0 57>,
+ 				<0x0 0x0 0x0 0x2 &plic0 58>,
+ 				<0x0 0x0 0x0 0x3 &plic0 59>,
+ 				<0x0 0x0 0x0 0x4 &plic0 60>;
+ 		clock-names = "pcie_aux";
+ 		clocks = <&prci PRCI_CLK_PCIE_AUX>;
+ 		pwren-gpios = <&gpio 5 0>;
+ 		reset-gpios = <&gpio 8 0>;
+ 		resets = <&prci 4>;
+ 		status = "okay";
  	};
  };
  };
+1 -1
arch/x86/pci/amd_bus.c
···
  	node = (reg >> 4) & 0x07;
  	link = (reg >> 8) & 0x03;

- 	info = alloc_pci_root_info(min_bus, max_bus, node, link);
+ 	alloc_pci_root_info(min_bus, max_bus, node, link);
  }

  /*
+7
drivers/acpi/pci_mcfg.c
···
  	THUNDER_ECAM_QUIRK(2, 12),
  	THUNDER_ECAM_QUIRK(2, 13),

+ 	{ "NVIDIA", "TEGRA194", 1, 0, MCFG_BUS_ANY, &tegra194_pcie_ops},
+ 	{ "NVIDIA", "TEGRA194", 1, 1, MCFG_BUS_ANY, &tegra194_pcie_ops},
+ 	{ "NVIDIA", "TEGRA194", 1, 2, MCFG_BUS_ANY, &tegra194_pcie_ops},
+ 	{ "NVIDIA", "TEGRA194", 1, 3, MCFG_BUS_ANY, &tegra194_pcie_ops},
+ 	{ "NVIDIA", "TEGRA194", 1, 4, MCFG_BUS_ANY, &tegra194_pcie_ops},
+ 	{ "NVIDIA", "TEGRA194", 1, 5, MCFG_BUS_ANY, &tegra194_pcie_ops},
+
  #define XGENE_V1_ECAM_MCFG(rev, seg) \
  	{"APM ", "XGENE ", rev, seg, MCFG_BUS_ANY, \
  	 &xgene_v1_pcie_ecam_ops }
+23 -23
drivers/ata/ahci_brcm.c
···
  	u32 port_mask;
  	u32 quirks;
  	enum brcm_ahci_version version;
- 	struct reset_control *rcdev;
+ 	struct reset_control *rcdev_rescal;
+ 	struct reset_control *rcdev_ahci;
  };

  static inline u32 brcm_sata_readreg(void __iomem *addr)
···
  	else
  		ret = 0;

- 	if (priv->version != BRCM_SATA_BCM7216)
- 		reset_control_assert(priv->rcdev);
+ 	reset_control_assert(priv->rcdev_ahci);
+ 	reset_control_rearm(priv->rcdev_rescal);

  	return ret;
  }
···
  	struct brcm_ahci_priv *priv = hpriv->plat_data;
  	int ret = 0;

- 	if (priv->version == BRCM_SATA_BCM7216)
- 		ret = reset_control_reset(priv->rcdev);
- 	else
- 		ret = reset_control_deassert(priv->rcdev);
+ 	ret = reset_control_deassert(priv->rcdev_ahci);
+ 	if (ret)
+ 		return ret;
+ 	ret = reset_control_reset(priv->rcdev_rescal);
  	if (ret)
  		return ret;
···
  {
  	const struct of_device_id *of_id;
  	struct device *dev = &pdev->dev;
- 	const char *reset_name = NULL;
  	struct brcm_ahci_priv *priv;
  	struct ahci_host_priv *hpriv;
  	struct resource *res;
···
  	if (IS_ERR(priv->top_ctrl))
  		return PTR_ERR(priv->top_ctrl);

- 	/* Reset is optional depending on platform and named differently */
- 	if (priv->version == BRCM_SATA_BCM7216)
- 		reset_name = "rescal";
- 	else
- 		reset_name = "ahci";
-
- 	priv->rcdev = devm_reset_control_get_optional(&pdev->dev, reset_name);
- 	if (IS_ERR(priv->rcdev))
- 		return PTR_ERR(priv->rcdev);
+ 	if (priv->version == BRCM_SATA_BCM7216) {
+ 		priv->rcdev_rescal = devm_reset_control_get_optional_shared(
+ 			&pdev->dev, "rescal");
+ 		if (IS_ERR(priv->rcdev_rescal))
+ 			return PTR_ERR(priv->rcdev_rescal);
+ 	}
+ 	priv->rcdev_ahci = devm_reset_control_get_optional(&pdev->dev, "ahci");
+ 	if (IS_ERR(priv->rcdev_ahci))
+ 		return PTR_ERR(priv->rcdev_ahci);

  	hpriv = ahci_platform_get_resources(pdev, 0);
  	if (IS_ERR(hpriv))
···
  		break;
  	}

- 	if (priv->version == BRCM_SATA_BCM7216)
- 		ret = reset_control_reset(priv->rcdev);
- 	else
- 		ret = reset_control_deassert(priv->rcdev);
+ 	ret = reset_control_reset(priv->rcdev_rescal);
+ 	if (ret)
+ 		return ret;
+ 	ret = reset_control_deassert(priv->rcdev_ahci);
  	if (ret)
  		return ret;
···
  out_disable_clks:
  	ahci_platform_disable_clks(hpriv);
  out_reset:
- 	if (priv->version != BRCM_SATA_BCM7216)
- 		reset_control_assert(priv->rcdev);
+ 	reset_control_assert(priv->rcdev_ahci);
+ 	reset_control_rearm(priv->rcdev_rescal);
  	return ret;
  }
+2
drivers/clk/sifive/Kconfig
···
  config CLK_SIFIVE_PRCI
  	bool "PRCI driver for SiFive SoCs"
+ 	select RESET_CONTROLLER
+ 	select RESET_SIMPLE
  	select CLK_ANALOGBITS_WRPLL_CLN28HPC
  	help
  	  Supports the Power Reset Clock interface (PRCI) IP block found in
+11
drivers/clk/sifive/fu740-prci.c
···
  	.recalc_rate = sifive_prci_hfpclkplldiv_recalc_rate,
  };

+ static const struct clk_ops sifive_fu740_prci_pcie_aux_clk_ops = {
+ 	.enable = sifive_prci_pcie_aux_clock_enable,
+ 	.disable = sifive_prci_pcie_aux_clock_disable,
+ 	.is_enabled = sifive_prci_pcie_aux_clock_is_enabled,
+ };
+
  /* List of clock controls provided by the PRCI */
  struct __prci_clock __prci_init_clocks_fu740[] = {
  	[PRCI_CLK_COREPLL] = {
···
  		.name = "pclk",
  		.parent_name = "hfpclkpll",
  		.ops = &sifive_fu740_prci_hfpclkplldiv_clk_ops,
+ 	},
+ 	[PRCI_CLK_PCIE_AUX] = {
+ 		.name = "pcie_aux",
+ 		.parent_name = "hfclk",
+ 		.ops = &sifive_fu740_prci_pcie_aux_clk_ops,
  	},
  };
+1 -1
drivers/clk/sifive/fu740-prci.h
···
  #include "sifive-prci.h"

- #define NUM_CLOCK_FU740		8
+ #define NUM_CLOCK_FU740		9

  extern struct __prci_clock __prci_init_clocks_fu740[NUM_CLOCK_FU740];
+54
drivers/clk/sifive/sifive-prci.c
···
  	r = __prci_readl(pd, PRCI_HFPCLKPLLSEL_OFFSET); /* barrier */
  }

+ /* PCIE AUX clock APIs for enable, disable. */
+ int sifive_prci_pcie_aux_clock_is_enabled(struct clk_hw *hw)
+ {
+ 	struct __prci_clock *pc = clk_hw_to_prci_clock(hw);
+ 	struct __prci_data *pd = pc->pd;
+ 	u32 r;
+
+ 	r = __prci_readl(pd, PRCI_PCIE_AUX_OFFSET);
+
+ 	if (r & PRCI_PCIE_AUX_EN_MASK)
+ 		return 1;
+ 	else
+ 		return 0;
+ }
+
+ int sifive_prci_pcie_aux_clock_enable(struct clk_hw *hw)
+ {
+ 	struct __prci_clock *pc = clk_hw_to_prci_clock(hw);
+ 	struct __prci_data *pd = pc->pd;
+ 	u32 r __maybe_unused;
+
+ 	if (sifive_prci_pcie_aux_clock_is_enabled(hw))
+ 		return 0;
+
+ 	__prci_writel(1, PRCI_PCIE_AUX_OFFSET, pd);
+ 	r = __prci_readl(pd, PRCI_PCIE_AUX_OFFSET); /* barrier */
+
+ 	return 0;
+ }
+
+ void sifive_prci_pcie_aux_clock_disable(struct clk_hw *hw)
+ {
+ 	struct __prci_clock *pc = clk_hw_to_prci_clock(hw);
+ 	struct __prci_data *pd = pc->pd;
+ 	u32 r __maybe_unused;
+
+ 	__prci_writel(0, PRCI_PCIE_AUX_OFFSET, pd);
+ 	r = __prci_readl(pd, PRCI_PCIE_AUX_OFFSET); /* barrier */
+
+ }
+
  /**
   * __prci_register_clocks() - register clock controls in the PRCI
   * @dev: Linux struct device
···
  	if (IS_ERR(pd->va))
  		return PTR_ERR(pd->va);

+ 	pd->reset.rcdev.owner = THIS_MODULE;
+ 	pd->reset.rcdev.nr_resets = PRCI_RST_NR;
+ 	pd->reset.rcdev.ops = &reset_simple_ops;
+ 	pd->reset.rcdev.of_node = pdev->dev.of_node;
+ 	pd->reset.active_low = true;
+ 	pd->reset.membase = pd->va + PRCI_DEVICESRESETREG_OFFSET;
+ 	spin_lock_init(&pd->reset.lock);
+
+ 	r = devm_reset_controller_register(&pdev->dev, &pd->reset.rcdev);
+ 	if (r) {
+ 		dev_err(dev, "could not register reset controller: %d\n", r);
+ 		return r;
+ 	}
  	r = __prci_register_clocks(dev, pd, desc);
  	if (r) {
  		dev_err(dev, "could not register clocks: %d\n", r);
+13
drivers/clk/sifive/sifive-prci.h
···
  #include <linux/clk/analogbits-wrpll-cln28hpc.h>
  #include <linux/clk-provider.h>
+ #include <linux/reset/reset-simple.h>
  #include <linux/platform_device.h>

  /*
···
  #define PRCI_DDRPLLCFG1_CKE_SHIFT	31
  #define PRCI_DDRPLLCFG1_CKE_MASK	(0x1 << PRCI_DDRPLLCFG1_CKE_SHIFT)

+ /* PCIEAUX */
+ #define PRCI_PCIE_AUX_OFFSET		0x14
+ #define PRCI_PCIE_AUX_EN_SHIFT		0
+ #define PRCI_PCIE_AUX_EN_MASK		(0x1 << PRCI_PCIE_AUX_EN_SHIFT)
+
  /* GEMGXLPLLCFG0 */
  #define PRCI_GEMGXLPLLCFG0_OFFSET	0x1c
  #define PRCI_GEMGXLPLLCFG0_DIVR_SHIFT	0
···
  #define PRCI_DEVICESRESETREG_CHIPLINK_RST_N_SHIFT	6
  #define PRCI_DEVICESRESETREG_CHIPLINK_RST_N_MASK	\
  	(0x1 << PRCI_DEVICESRESETREG_CHIPLINK_RST_N_SHIFT)
+
+ #define PRCI_RST_NR	7

  /* CLKMUXSTATUSREG */
  #define PRCI_CLKMUXSTATUSREG_OFFSET	0x2c
···
   */
  struct __prci_data {
  	void __iomem *va;
+ 	struct reset_simple_data reset;
  	struct clk_hw_onecell_data hw_clks;
  };
···
  				       unsigned long parent_rate);
  unsigned long sifive_prci_hfpclkplldiv_recalc_rate(struct clk_hw *hw,
  						   unsigned long parent_rate);
+
+ int sifive_prci_pcie_aux_clock_is_enabled(struct clk_hw *hw);
+ int sifive_prci_pcie_aux_clock_enable(struct clk_hw *hw);
+ void sifive_prci_pcie_aux_clock_disable(struct clk_hw *hw);

  #endif /* __SIFIVE_CLK_SIFIVE_PRCI_H */
+2 -1
drivers/iommu/intel/irq_remapping.c
···
  		break;
  	case X86_IRQ_ALLOC_TYPE_PCI_MSI:
  	case X86_IRQ_ALLOC_TYPE_PCI_MSIX:
- 		set_msi_sid(irte, msi_desc_to_pci_dev(info->desc));
+ 		set_msi_sid(irte,
+ 			    pci_real_dma_dev(msi_desc_to_pci_dev(info->desc)));
  		break;
  	default:
  		BUG_ON(1);
+1 -1
drivers/net/ethernet/broadcom/bnx2.c
···
  		data[i + 3] = data[i + BNX2_VPD_LEN];
  	}

- 	i = pci_vpd_find_tag(data, 0, BNX2_VPD_LEN, PCI_VPD_LRDT_RO_DATA);
+ 	i = pci_vpd_find_tag(data, BNX2_VPD_LEN, PCI_VPD_LRDT_RO_DATA);
  	if (i < 0)
  		goto vpd_done;
+1 -2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
···
  	/* VPD RO tag should be first tag after identifier string, hence
  	 * we should be able to find it in first BNX2X_VPD_LEN chars
  	 */
- 	i = pci_vpd_find_tag(vpd_start, 0, BNX2X_VPD_LEN,
- 			     PCI_VPD_LRDT_RO_DATA);
+ 	i = pci_vpd_find_tag(vpd_start, BNX2X_VPD_LEN, PCI_VPD_LRDT_RO_DATA);
  	if (i < 0)
  		goto out_not_found;
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 12794 12794 goto exit; 12795 12795 } 12796 12796 12797 - i = pci_vpd_find_tag(vpd_data, 0, vpd_size, PCI_VPD_LRDT_RO_DATA); 12797 + i = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA); 12798 12798 if (i < 0) { 12799 12799 netdev_err(bp->dev, "VPD READ-Only not found\n"); 12800 12800 goto exit;
+2 -2
drivers/net/ethernet/broadcom/tg3.c
··· 13016 13016 if (!buf) 13017 13017 return -ENOMEM; 13018 13018 13019 - i = pci_vpd_find_tag((u8 *)buf, 0, len, PCI_VPD_LRDT_RO_DATA); 13019 + i = pci_vpd_find_tag((u8 *)buf, len, PCI_VPD_LRDT_RO_DATA); 13020 13020 if (i > 0) { 13021 13021 j = pci_vpd_lrdt_size(&((u8 *)buf)[i]); 13022 13022 if (j < 0) ··· 15629 15629 if (!vpd_data) 15630 15630 goto out_no_vpd; 15631 15631 15632 - i = pci_vpd_find_tag(vpd_data, 0, vpdlen, PCI_VPD_LRDT_RO_DATA); 15632 + i = pci_vpd_find_tag(vpd_data, vpdlen, PCI_VPD_LRDT_RO_DATA); 15633 15633 if (i < 0) 15634 15634 goto out_not_found; 15635 15635
+1 -1
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 2775 2775 if (id_len > ID_LEN) 2776 2776 id_len = ID_LEN; 2777 2777 2778 - i = pci_vpd_find_tag(vpd, 0, VPD_LEN, PCI_VPD_LRDT_RO_DATA); 2778 + i = pci_vpd_find_tag(vpd, VPD_LEN, PCI_VPD_LRDT_RO_DATA); 2779 2779 if (i < 0) { 2780 2780 dev_err(adapter->pdev_dev, "missing VPD-R section\n"); 2781 2781 ret = -EINVAL;
-14
drivers/net/ethernet/realtek/r8169_main.c
··· 4398 4398 if (net_ratelimit()) 4399 4399 netdev_err(dev, "PCI error (cmd = 0x%04x, status_errs = 0x%04x)\n", 4400 4400 pci_cmd, pci_status_errs); 4401 - /* 4402 - * The recovery sequence below admits a very elaborated explanation: 4403 - * - it seems to work; 4404 - * - I did not see what else could be done; 4405 - * - it makes iop3xx happy. 4406 - * 4407 - * Feel free to adjust to your needs. 4408 - */ 4409 - if (pdev->broken_parity_status) 4410 - pci_cmd &= ~PCI_COMMAND_PARITY; 4411 - else 4412 - pci_cmd |= PCI_COMMAND_SERR | PCI_COMMAND_PARITY; 4413 - 4414 - pci_write_config_word(pdev, PCI_COMMAND, pci_cmd); 4415 4401 4416 4402 rtl_schedule_task(tp, RTL_FLAG_TASK_RESET_PENDING); 4417 4403 }
+1 -1
drivers/net/ethernet/sfc/efx.c
··· 920 920 } 921 921 922 922 /* Get the Read only section */ 923 - ro_start = pci_vpd_find_tag(vpd_data, 0, vpd_size, PCI_VPD_LRDT_RO_DATA); 923 + ro_start = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA); 924 924 if (ro_start < 0) { 925 925 netif_err(efx, drv, efx->net_dev, "VPD Read-only not found\n"); 926 926 return;
+1 -1
drivers/net/ethernet/sfc/falcon/efx.c
··· 2800 2800 } 2801 2801 2802 2802 /* Get the Read only section */ 2803 - ro_start = pci_vpd_find_tag(vpd_data, 0, vpd_size, PCI_VPD_LRDT_RO_DATA); 2803 + ro_start = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA); 2804 2804 if (ro_start < 0) { 2805 2805 netif_err(efx, drv, efx->net_dev, "VPD Read-only not found\n"); 2806 2806 return;
+1 -1
drivers/pci/ats.c
··· 480 480 #define PASID_NUMBER_SHIFT 8 481 481 #define PASID_NUMBER_MASK (0x1f << PASID_NUMBER_SHIFT) 482 482 /** 483 - * pci_max_pasid - Get maximum number of PASIDs supported by device 483 + * pci_max_pasids - Get maximum number of PASIDs supported by device 484 484 * @pdev: PCI device structure 485 485 * 486 486 * Returns negative value when PASID capability is not present.
+14 -3
drivers/pci/controller/Kconfig
··· 41 41 bool "NVIDIA Tegra PCIe controller" 42 42 depends on ARCH_TEGRA || COMPILE_TEST 43 43 depends on PCI_MSI_IRQ_DOMAIN 44 - select PCI_MSI_ARCH_FALLBACKS 45 44 help 46 45 Say Y here if you want support for the PCIe host controller found 47 46 on NVIDIA Tegra SoCs. ··· 58 59 bool "Renesas R-Car PCIe host controller" 59 60 depends on ARCH_RENESAS || COMPILE_TEST 60 61 depends on PCI_MSI_IRQ_DOMAIN 61 - select PCI_MSI_ARCH_FALLBACKS 62 62 help 63 63 Say Y here if you want PCIe controller support on R-Car SoCs in host 64 64 mode. ··· 86 88 config PCIE_XILINX 87 89 bool "Xilinx AXI PCIe host bridge support" 88 90 depends on OF || COMPILE_TEST 89 - select PCI_MSI_ARCH_FALLBACKS 91 + depends on PCI_MSI_IRQ_DOMAIN 90 92 help 91 93 Say 'Y' here if you want kernel to support the Xilinx AXI PCIe 92 94 Host Bridge driver. ··· 229 231 depends on PCI_MSI_IRQ_DOMAIN 230 232 help 231 233 Say Y here if you want to enable PCIe controller support on 234 + MediaTek SoCs. 235 + 236 + config PCIE_MEDIATEK_GEN3 237 + tristate "MediaTek Gen3 PCIe controller" 238 + depends on ARCH_MEDIATEK || COMPILE_TEST 239 + depends on PCI_MSI_IRQ_DOMAIN 240 + help 241 + Adds support for PCIe Gen3 MAC controller for MediaTek SoCs. 242 + This PCIe controller is compatible with Gen3, Gen2 and Gen1 speed, 243 + and support up to 256 MSI interrupt numbers for 244 + multi-function devices. 245 + 246 + Say Y here if you want to enable Gen3 PCIe controller support on 232 247 MediaTek SoCs. 233 248 234 249 config VMD
+7 -1
drivers/pci/controller/Makefile
··· 11 11 obj-$(CONFIG_PCIE_RCAR_EP) += pcie-rcar.o pcie-rcar-ep.o 12 12 obj-$(CONFIG_PCI_HOST_COMMON) += pci-host-common.o 13 13 obj-$(CONFIG_PCI_HOST_GENERIC) += pci-host-generic.o 14 + obj-$(CONFIG_PCI_HOST_THUNDER_ECAM) += pci-thunder-ecam.o 15 + obj-$(CONFIG_PCI_HOST_THUNDER_PEM) += pci-thunder-pem.o 14 16 obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o 15 17 obj-$(CONFIG_PCIE_XILINX_NWL) += pcie-xilinx-nwl.o 16 18 obj-$(CONFIG_PCIE_XILINX_CPM) += pcie-xilinx-cpm.o 17 19 obj-$(CONFIG_PCI_V3_SEMI) += pci-v3-semi.o 20 + obj-$(CONFIG_PCI_XGENE) += pci-xgene.o 18 21 obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o 19 22 obj-$(CONFIG_PCI_VERSATILE) += pci-versatile.o 20 23 obj-$(CONFIG_PCIE_IPROC) += pcie-iproc.o ··· 30 27 obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o 31 28 obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o 32 29 obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o 30 + obj-$(CONFIG_PCIE_MEDIATEK_GEN3) += pcie-mediatek-gen3.o 33 31 obj-$(CONFIG_PCIE_MICROCHIP_HOST) += pcie-microchip-host.o 34 32 obj-$(CONFIG_VMD) += vmd.o 35 33 obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o ··· 51 47 # ARM64 and use internal ifdefs to only build the pieces we need 52 48 # depending on whether ACPI, the DT driver, or both are enabled. 53 49 54 - ifdef CONFIG_PCI 50 + ifdef CONFIG_ACPI 51 + ifdef CONFIG_PCI_QUIRKS 55 52 obj-$(CONFIG_ARM64) += pci-thunder-ecam.o 56 53 obj-$(CONFIG_ARM64) += pci-thunder-pem.o 57 54 obj-$(CONFIG_ARM64) += pci-xgene.o 55 + endif 58 56 endif
+22 -2
drivers/pci/controller/cadence/pci-j721e.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - /** 2 + /* 3 3 * pci-j721e - PCIe controller driver for TI's J721E SoCs 4 4 * 5 5 * Copyright (C) 2020 Texas Instruments Incorporated - http://www.ti.com 6 6 * Author: Kishon Vijay Abraham I <kishon@ti.com> 7 7 */ 8 8 9 + #include <linux/clk.h> 9 10 #include <linux/delay.h> 10 11 #include <linux/gpio/consumer.h> 11 12 #include <linux/io.h> ··· 51 50 52 51 struct j721e_pcie { 53 52 struct device *dev; 53 + struct clk *refclk; 54 54 u32 mode; 55 55 u32 num_lanes; 56 56 struct cdns_pcie *cdns_pcie; ··· 314 312 struct cdns_pcie_ep *ep; 315 313 struct gpio_desc *gpiod; 316 314 void __iomem *base; 315 + struct clk *clk; 317 316 u32 num_lanes; 318 317 u32 mode; 319 318 int ret; ··· 414 411 goto err_get_sync; 415 412 } 416 413 414 + clk = devm_clk_get_optional(dev, "pcie_refclk"); 415 + if (IS_ERR(clk)) { 416 + ret = PTR_ERR(clk); 417 + dev_err(dev, "failed to get pcie_refclk\n"); 418 + goto err_pcie_setup; 419 + } 420 + 421 + ret = clk_prepare_enable(clk); 422 + if (ret) { 423 + dev_err(dev, "failed to enable pcie_refclk\n"); 424 + goto err_get_sync; 425 + } 426 + pcie->refclk = clk; 427 + 417 428 /* 418 429 * "Power Sequencing and Reset Signal Timings" table in 419 430 * PCI EXPRESS CARD ELECTROMECHANICAL SPECIFICATION, REV. 3.0 ··· 442 425 } 443 426 444 427 ret = cdns_pcie_host_setup(rc); 445 - if (ret < 0) 428 + if (ret < 0) { 429 + clk_disable_unprepare(pcie->refclk); 446 430 goto err_pcie_setup; 431 + } 447 432 448 433 break; 449 434 case PCI_MODE_EP: ··· 498 479 struct cdns_pcie *cdns_pcie = pcie->cdns_pcie; 499 480 struct device *dev = &pdev->dev; 500 481 482 + clk_disable_unprepare(pcie->refclk); 501 483 cdns_pcie_disable_phy(cdns_pcie); 502 484 pm_runtime_put(dev); 503 485 pm_runtime_disable(dev);
+11 -1
drivers/pci/controller/dwc/Kconfig
··· 280 280 select PCIE_TEGRA194 281 281 help 282 282 Enables support for the PCIe controller in the NVIDIA Tegra194 SoC to 283 - work in host mode. There are two instances of PCIe controllers in 283 + work in endpoint mode. There are two instances of PCIe controllers in 284 284 Tegra194. This controller can work either as EP or RC. In order to 285 285 enable host-specific features PCIE_TEGRA194_HOST must be selected and 286 286 in order to enable device-specific features PCIE_TEGRA194_EP must be ··· 311 311 depends on OF && (ARM64 || COMPILE_TEST) 312 312 depends on PCI_MSI_IRQ_DOMAIN 313 313 select PCIE_DW_HOST 314 + select PCI_ECAM 314 315 help 315 316 Say Y here to enable support of the Amazon's Annapurna Labs PCIe 316 317 controller IP on Amazon SoCs. The PCIe controller uses the DesignWare 317 318 core plus Annapurna Labs proprietary hardware wrappers. This is 318 319 required only for DT-based platforms. ACPI platforms with the 319 320 Annapurna Labs PCIe controller don't need to enable this. 321 + 322 + config PCIE_FU740 323 + bool "SiFive FU740 PCIe host controller" 324 + depends on PCI_MSI_IRQ_DOMAIN 325 + depends on SOC_SIFIVE || COMPILE_TEST 326 + select PCIE_DW_HOST 327 + help 328 + Say Y here if you want PCIe controller support for the SiFive 329 + FU740. 320 330 321 331 endmenu
+8 -2
drivers/pci/controller/dwc/Makefile
··· 5 5 obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o 6 6 obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o 7 7 obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o 8 + obj-$(CONFIG_PCIE_FU740) += pcie-fu740.o 8 9 obj-$(CONFIG_PCI_IMX6) += pci-imx6.o 9 10 obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o 10 11 obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o ··· 18 17 obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o 19 18 obj-$(CONFIG_PCIE_HISI_STB) += pcie-histb.o 20 19 obj-$(CONFIG_PCI_MESON) += pci-meson.o 21 - obj-$(CONFIG_PCIE_TEGRA194) += pcie-tegra194.o 22 20 obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o 23 21 obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o 24 22 ··· 31 31 # ARM64 and use internal ifdefs to only build the pieces we need 32 32 # depending on whether ACPI, the DT driver, or both are enabled. 33 33 34 - ifdef CONFIG_PCI 34 + obj-$(CONFIG_PCIE_AL) += pcie-al.o 35 + obj-$(CONFIG_PCI_HISI) += pcie-hisi.o 36 + 37 + ifdef CONFIG_ACPI 38 + ifdef CONFIG_PCI_QUIRKS 35 39 obj-$(CONFIG_ARM64) += pcie-al.o 36 40 obj-$(CONFIG_ARM64) += pcie-hisi.o 41 + obj-$(CONFIG_ARM64) += pcie-tegra194.o 42 + endif 37 43 endif
+10 -4
drivers/pci/controller/dwc/pci-keystone.c
··· 346 346 }; 347 347 348 348 /** 349 - * ks_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask 350 - * registers 349 + * ks_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask registers 350 + * @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone 351 + * PCIe host controller driver information. 351 352 * 352 353 * Since modification of dbi_cs2 involves different clock domain, read the 353 354 * status back to ensure the transition is complete. ··· 368 367 369 368 /** 370 369 * ks_pcie_clear_dbi_mode() - Disable DBI mode 370 + * @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone 371 + * PCIe host controller driver information. 371 372 * 372 373 * Since modification of dbi_cs2 involves different clock domain, read the 373 374 * status back to ensure the transition is complete. ··· 452 449 453 450 /** 454 451 * ks_pcie_v3_65_add_bus() - keystone add_bus post initialization 452 + * @bus: A pointer to the PCI bus structure. 455 453 * 456 454 * This sets BAR0 to enable inbound access for MSI_IRQ register 457 455 */ ··· 492 488 493 489 /** 494 490 * ks_pcie_link_up() - Check if link up 491 + * @pci: A pointer to the dw_pcie structure which holds the DesignWare PCIe host 492 + * controller driver information. 495 493 */ 496 494 static int ks_pcie_link_up(struct dw_pcie *pci) 497 495 { ··· 611 605 612 606 /** 613 607 * ks_pcie_legacy_irq_handler() - Handle legacy interrupt 614 - * @irq: IRQ line for legacy interrupts 615 608 * @desc: Pointer to irq descriptor 616 609 * 617 610 * Traverse through pending legacy interrupts and invoke handler for each. Also ··· 803 798 int ret; 804 799 805 800 pp->bridge->ops = &ks_pcie_ops; 806 - pp->bridge->child_ops = &ks_child_pcie_ops; 801 + if (!ks_pcie->is_am6) 802 + pp->bridge->child_ops = &ks_child_pcie_ops; 807 803 808 804 ret = ks_pcie_config_legacy_irq(ks_pcie); 809 805 if (ret)
+1 -1
drivers/pci/controller/dwc/pci-layerscape-ep.c
··· 154 154 pci->dev = dev; 155 155 pci->ops = pcie->drvdata->dw_pcie_ops; 156 156 157 - ls_epc->bar_fixed_64bit = (1 << BAR_2) | (1 << BAR_4), 157 + ls_epc->bar_fixed_64bit = (1 << BAR_2) | (1 << BAR_4); 158 158 159 159 pcie->pci = pci; 160 160 pcie->ls_epc = ls_epc;
+2
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 705 705 } 706 706 } 707 707 708 + dw_pcie_iatu_detect(pci); 709 + 708 710 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space"); 709 711 if (!res) 710 712 return -EINVAL;
+3 -1
drivers/pci/controller/dwc/pcie-designware-host.c
··· 398 398 if (ret) 399 399 goto err_free_msi; 400 400 } 401 + dw_pcie_iatu_detect(pci); 401 402 402 403 dw_pcie_setup_rc(pp); 403 - dw_pcie_msi_init(pp); 404 404 405 405 if (!dw_pcie_link_up(pci) && pci->ops && pci->ops->start_link) { 406 406 ret = pci->ops->start_link(pci); ··· 550 550 ~0); 551 551 } 552 552 } 553 + 554 + dw_pcie_msi_init(pp); 553 555 554 556 /* Setup RC BARs */ 555 557 dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0x00000004);
+8 -3
drivers/pci/controller/dwc/pcie-designware.c
··· 660 660 pci->num_ob_windows = ob; 661 661 } 662 662 663 - void dw_pcie_setup(struct dw_pcie *pci) 663 + void dw_pcie_iatu_detect(struct dw_pcie *pci) 664 664 { 665 - u32 val; 666 665 struct device *dev = pci->dev; 667 - struct device_node *np = dev->of_node; 668 666 struct platform_device *pdev = to_platform_device(dev); 669 667 670 668 if (pci->version >= 0x480A || (!pci->version && ··· 691 693 692 694 dev_info(pci->dev, "Detected iATU regions: %u outbound, %u inbound", 693 695 pci->num_ob_windows, pci->num_ib_windows); 696 + } 697 + 698 + void dw_pcie_setup(struct dw_pcie *pci) 699 + { 700 + u32 val; 701 + struct device *dev = pci->dev; 702 + struct device_node *np = dev->of_node; 694 703 695 704 if (pci->link_gen > 0) 696 705 dw_pcie_link_set_max_speed(pci, pci->link_gen);
+1
drivers/pci/controller/dwc/pcie-designware.h
··· 306 306 void dw_pcie_disable_atu(struct dw_pcie *pci, int index, 307 307 enum dw_pcie_region_type type); 308 308 void dw_pcie_setup(struct dw_pcie *pci); 309 + void dw_pcie_iatu_detect(struct dw_pcie *pci); 309 310 310 311 static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val) 311 312 {
+309
drivers/pci/controller/dwc/pcie-fu740.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * FU740 DesignWare PCIe Controller integration 4 + * Copyright (C) 2019-2021 SiFive, Inc. 5 + * Paul Walmsley 6 + * Greentime Hu 7 + * 8 + * Based in part on the i.MX6 PCIe host controller shim which is: 9 + * 10 + * Copyright (C) 2013 Kosagi 11 + * https://www.kosagi.com 12 + */ 13 + 14 + #include <linux/clk.h> 15 + #include <linux/delay.h> 16 + #include <linux/gpio.h> 17 + #include <linux/gpio/consumer.h> 18 + #include <linux/kernel.h> 19 + #include <linux/mfd/syscon.h> 20 + #include <linux/module.h> 21 + #include <linux/pci.h> 22 + #include <linux/platform_device.h> 23 + #include <linux/regulator/consumer.h> 24 + #include <linux/resource.h> 25 + #include <linux/types.h> 26 + #include <linux/interrupt.h> 27 + #include <linux/iopoll.h> 28 + #include <linux/reset.h> 29 + 30 + #include "pcie-designware.h" 31 + 32 + #define to_fu740_pcie(x) dev_get_drvdata((x)->dev) 33 + 34 + struct fu740_pcie { 35 + struct dw_pcie pci; 36 + void __iomem *mgmt_base; 37 + struct gpio_desc *reset; 38 + struct gpio_desc *pwren; 39 + struct clk *pcie_aux; 40 + struct reset_control *rst; 41 + }; 42 + 43 + #define SIFIVE_DEVICESRESETREG 0x28 44 + 45 + #define PCIEX8MGMT_PERST_N 0x0 46 + #define PCIEX8MGMT_APP_LTSSM_ENABLE 0x10 47 + #define PCIEX8MGMT_APP_HOLD_PHY_RST 0x18 48 + #define PCIEX8MGMT_DEVICE_TYPE 0x708 49 + #define PCIEX8MGMT_PHY0_CR_PARA_ADDR 0x860 50 + #define PCIEX8MGMT_PHY0_CR_PARA_RD_EN 0x870 51 + #define PCIEX8MGMT_PHY0_CR_PARA_RD_DATA 0x878 52 + #define PCIEX8MGMT_PHY0_CR_PARA_SEL 0x880 53 + #define PCIEX8MGMT_PHY0_CR_PARA_WR_DATA 0x888 54 + #define PCIEX8MGMT_PHY0_CR_PARA_WR_EN 0x890 55 + #define PCIEX8MGMT_PHY0_CR_PARA_ACK 0x898 56 + #define PCIEX8MGMT_PHY1_CR_PARA_ADDR 0x8a0 57 + #define PCIEX8MGMT_PHY1_CR_PARA_RD_EN 0x8b0 58 + #define PCIEX8MGMT_PHY1_CR_PARA_RD_DATA 0x8b8 59 + #define PCIEX8MGMT_PHY1_CR_PARA_SEL 0x8c0 60 + #define PCIEX8MGMT_PHY1_CR_PARA_WR_DATA 0x8c8 61 + #define PCIEX8MGMT_PHY1_CR_PARA_WR_EN 0x8d0 62 + #define PCIEX8MGMT_PHY1_CR_PARA_ACK 0x8d8 63 + 64 + #define PCIEX8MGMT_PHY_CDR_TRACK_EN BIT(0) 65 + #define PCIEX8MGMT_PHY_LOS_THRSHLD BIT(5) 66 + #define PCIEX8MGMT_PHY_TERM_EN BIT(9) 67 + #define PCIEX8MGMT_PHY_TERM_ACDC BIT(10) 68 + #define PCIEX8MGMT_PHY_EN BIT(11) 69 + #define PCIEX8MGMT_PHY_INIT_VAL (PCIEX8MGMT_PHY_CDR_TRACK_EN|\ 70 + PCIEX8MGMT_PHY_LOS_THRSHLD|\ 71 + PCIEX8MGMT_PHY_TERM_EN|\ 72 + PCIEX8MGMT_PHY_TERM_ACDC|\ 73 + PCIEX8MGMT_PHY_EN) 74 + 75 + #define PCIEX8MGMT_PHY_LANEN_DIG_ASIC_RX_OVRD_IN_3 0x1008 76 + #define PCIEX8MGMT_PHY_LANE_OFF 0x100 77 + #define PCIEX8MGMT_PHY_LANE0_BASE (PCIEX8MGMT_PHY_LANEN_DIG_ASIC_RX_OVRD_IN_3 + 0x100 * 0) 78 + #define PCIEX8MGMT_PHY_LANE1_BASE (PCIEX8MGMT_PHY_LANEN_DIG_ASIC_RX_OVRD_IN_3 + 0x100 * 1) 79 + #define PCIEX8MGMT_PHY_LANE2_BASE (PCIEX8MGMT_PHY_LANEN_DIG_ASIC_RX_OVRD_IN_3 + 0x100 * 2) 80 + #define PCIEX8MGMT_PHY_LANE3_BASE (PCIEX8MGMT_PHY_LANEN_DIG_ASIC_RX_OVRD_IN_3 + 0x100 * 3) 81 + 82 + static void fu740_pcie_assert_reset(struct fu740_pcie *afp) 83 + { 84 + /* Assert PERST_N GPIO */ 85 + gpiod_set_value_cansleep(afp->reset, 0); 86 + /* Assert controller PERST_N */ 87 + writel_relaxed(0x0, afp->mgmt_base + PCIEX8MGMT_PERST_N); 88 + } 89 + 90 + static void fu740_pcie_deassert_reset(struct fu740_pcie *afp) 91 + { 92 + /* Deassert controller PERST_N */ 93 + writel_relaxed(0x1, afp->mgmt_base + PCIEX8MGMT_PERST_N); 94 + /* Deassert PERST_N GPIO */ 95 + gpiod_set_value_cansleep(afp->reset, 1); 96 + } 97 + 98 + static void fu740_pcie_power_on(struct fu740_pcie *afp) 99 + { 100 + gpiod_set_value_cansleep(afp->pwren, 1); 101 + /* 102 + * Ensure that PERST has been asserted for at least 100 ms. 103 + * Section 2.2 of PCI Express Card Electromechanical Specification 104 + * Revision 3.0 105 + */ 106 + msleep(100); 107 + } 108 + 109 + static void fu740_pcie_drive_reset(struct fu740_pcie *afp) 110 + { 111 + fu740_pcie_assert_reset(afp); 112 + fu740_pcie_power_on(afp); 113 + fu740_pcie_deassert_reset(afp); 114 + } 115 + 116 + static void fu740_phyregwrite(const uint8_t phy, const uint16_t addr, 117 + const uint16_t wrdata, struct fu740_pcie *afp) 118 + { 119 + struct device *dev = afp->pci.dev; 120 + void __iomem *phy_cr_para_addr; 121 + void __iomem *phy_cr_para_wr_data; 122 + void __iomem *phy_cr_para_wr_en; 123 + void __iomem *phy_cr_para_ack; 124 + int ret, val; 125 + 126 + /* Setup */ 127 + if (phy) { 128 + phy_cr_para_addr = afp->mgmt_base + PCIEX8MGMT_PHY1_CR_PARA_ADDR; 129 + phy_cr_para_wr_data = afp->mgmt_base + PCIEX8MGMT_PHY1_CR_PARA_WR_DATA; 130 + phy_cr_para_wr_en = afp->mgmt_base + PCIEX8MGMT_PHY1_CR_PARA_WR_EN; 131 + phy_cr_para_ack = afp->mgmt_base + PCIEX8MGMT_PHY1_CR_PARA_ACK; 132 + } else { 133 + phy_cr_para_addr = afp->mgmt_base + PCIEX8MGMT_PHY0_CR_PARA_ADDR; 134 + phy_cr_para_wr_data = afp->mgmt_base + PCIEX8MGMT_PHY0_CR_PARA_WR_DATA; 135 + phy_cr_para_wr_en = afp->mgmt_base + PCIEX8MGMT_PHY0_CR_PARA_WR_EN; 136 + phy_cr_para_ack = afp->mgmt_base + PCIEX8MGMT_PHY0_CR_PARA_ACK; 137 + } 138 + 139 + writel_relaxed(addr, phy_cr_para_addr); 140 + writel_relaxed(wrdata, phy_cr_para_wr_data); 141 + writel_relaxed(1, phy_cr_para_wr_en); 142 + 143 + /* Wait for wait_idle */ 144 + ret = readl_poll_timeout(phy_cr_para_ack, val, val, 10, 5000); 145 + if (ret) 146 + dev_warn(dev, "Wait for wait_idle state failed!\n"); 147 + 148 + /* Clear */ 149 + writel_relaxed(0, phy_cr_para_wr_en); 150 + 151 + /* Wait for ~wait_idle */ 152 + ret = readl_poll_timeout(phy_cr_para_ack, val, !val, 10, 5000); 153 + if (ret) 154 + dev_warn(dev, "Wait for !wait_idle state failed!\n"); 155 + } 156 + 157 + static void fu740_pcie_init_phy(struct fu740_pcie *afp) 158 + { 159 + /* Enable phy cr_para_sel interfaces */ 160 + writel_relaxed(0x1, afp->mgmt_base + PCIEX8MGMT_PHY0_CR_PARA_SEL); 161 + writel_relaxed(0x1, afp->mgmt_base + PCIEX8MGMT_PHY1_CR_PARA_SEL); 162 + 163 + /* 164 + * Wait 10 cr_para cycles to guarantee that the registers are ready 165 + * to be edited. 166 + */ 167 + ndelay(10); 168 + 169 + /* Set PHY AC termination mode */ 170 + fu740_phyregwrite(0, PCIEX8MGMT_PHY_LANE0_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp); 171 + fu740_phyregwrite(0, PCIEX8MGMT_PHY_LANE1_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp); 172 + fu740_phyregwrite(0, PCIEX8MGMT_PHY_LANE2_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp); 173 + fu740_phyregwrite(0, PCIEX8MGMT_PHY_LANE3_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp); 174 + fu740_phyregwrite(1, PCIEX8MGMT_PHY_LANE0_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp); 175 + fu740_phyregwrite(1, PCIEX8MGMT_PHY_LANE1_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp); 176 + fu740_phyregwrite(1, PCIEX8MGMT_PHY_LANE2_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp); 177 + fu740_phyregwrite(1, PCIEX8MGMT_PHY_LANE3_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp); 178 + } 179 + 180 + static int fu740_pcie_start_link(struct dw_pcie *pci) 181 + { 182 + struct device *dev = pci->dev; 183 + struct fu740_pcie *afp = dev_get_drvdata(dev); 184 + 185 + /* Enable LTSSM */ 186 + writel_relaxed(0x1, afp->mgmt_base + PCIEX8MGMT_APP_LTSSM_ENABLE); 187 + return 0; 188 + } 189 + 190 + static int fu740_pcie_host_init(struct pcie_port *pp) 191 + { 192 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 193 + struct fu740_pcie *afp = to_fu740_pcie(pci); 194 + struct device *dev = pci->dev; 195 + int ret; 196 + 197 + /* Power on reset */ 198 + fu740_pcie_drive_reset(afp); 199 + 200 + /* Enable pcieauxclk */ 201 + ret = clk_prepare_enable(afp->pcie_aux); 202 + if (ret) { 203 + dev_err(dev, "unable to enable pcie_aux clock\n"); 204 + return ret; 205 + } 206 + 207 + /* 208 + * Assert hold_phy_rst (hold the controller LTSSM in reset after 209 + * power_up_rst_n for register programming with cr_para) 210 + */ 211 + writel_relaxed(0x1, afp->mgmt_base + PCIEX8MGMT_APP_HOLD_PHY_RST); 212 + 213 + /* Deassert power_up_rst_n */ 214 + ret = reset_control_deassert(afp->rst); 215 + if (ret) { 216 + dev_err(dev, "unable to deassert pcie_power_up_rst_n\n"); 217 + return ret; 218 + } 219 + 220 + fu740_pcie_init_phy(afp); 221 + 222 + /* Disable pcieauxclk */ 223 + clk_disable_unprepare(afp->pcie_aux); 224 + /* Clear hold_phy_rst */ 225 + writel_relaxed(0x0, afp->mgmt_base + PCIEX8MGMT_APP_HOLD_PHY_RST); 226 + /* Enable pcieauxclk */ 227 + ret = clk_prepare_enable(afp->pcie_aux); 228 + /* Set RC mode */ 229 + writel_relaxed(0x4, afp->mgmt_base + PCIEX8MGMT_DEVICE_TYPE); 230 + 231 + return 0; 232 + } 233 + 234 + static const struct dw_pcie_host_ops fu740_pcie_host_ops = { 235 + .host_init = fu740_pcie_host_init, 236 + }; 237 + 238 + static const struct dw_pcie_ops dw_pcie_ops = { 239 + .start_link = fu740_pcie_start_link, 240 + }; 241 + 242 + static int fu740_pcie_probe(struct platform_device *pdev) 243 + { 244 + struct device *dev = &pdev->dev; 245 + struct dw_pcie *pci; 246 + struct fu740_pcie *afp; 247 + 248 + afp = devm_kzalloc(dev, sizeof(*afp), GFP_KERNEL); 249 + if (!afp) 250 + return -ENOMEM; 251 + pci = &afp->pci; 252 + pci->dev = dev; 253 + pci->ops = &dw_pcie_ops; 254 + pci->pp.ops = &fu740_pcie_host_ops; 255 + 256 + /* SiFive specific region: mgmt */ 257 + afp->mgmt_base = devm_platform_ioremap_resource_byname(pdev, "mgmt"); 258 + if (IS_ERR(afp->mgmt_base)) 259 + return PTR_ERR(afp->mgmt_base); 260 + 261 + /* Fetch GPIOs */ 262 + afp->reset = devm_gpiod_get_optional(dev, "reset-gpios", GPIOD_OUT_LOW); 263 + if (IS_ERR(afp->reset)) 264 + return dev_err_probe(dev, PTR_ERR(afp->reset), "unable to get reset-gpios\n"); 265 + 266 + afp->pwren = devm_gpiod_get_optional(dev, "pwren-gpios", GPIOD_OUT_LOW); 267 + if (IS_ERR(afp->pwren)) 268 + return dev_err_probe(dev, PTR_ERR(afp->pwren), "unable to get pwren-gpios\n"); 269 + 270 + /* Fetch clocks */ 271 + afp->pcie_aux = devm_clk_get(dev, "pcie_aux"); 272 + if (IS_ERR(afp->pcie_aux)) 273 + return dev_err_probe(dev, PTR_ERR(afp->pcie_aux), 274 + "pcie_aux clock source missing or invalid\n"); 275 + 276 + /* Fetch reset */ 277 + afp->rst = devm_reset_control_get_exclusive(dev, NULL); 278 + if (IS_ERR(afp->rst)) 279 + return dev_err_probe(dev, PTR_ERR(afp->rst), "unable to get reset\n"); 280 + 281 + platform_set_drvdata(pdev, afp); 282 + 283 + return dw_pcie_host_init(&pci->pp); 284 + } 285 + 286 + static void fu740_pcie_shutdown(struct platform_device *pdev) 287 + { 288 + struct fu740_pcie *afp = platform_get_drvdata(pdev); 289 + 290 + /* Bring down link, so bootloader gets clean state in case of reboot */ 291 + fu740_pcie_assert_reset(afp); 292 + } 293 + 294 + static const struct of_device_id fu740_pcie_of_match[] = { 295 + { .compatible = "sifive,fu740-pcie", }, 296 + {}, 297 + }; 298 + 299 + static struct platform_driver fu740_pcie_driver = { 300 + .driver = { 301 + .name = "fu740-pcie", 302 + .of_match_table = fu740_pcie_of_match, 303 + .suppress_bind_attrs = true, 304 + }, 305 + .probe = fu740_pcie_probe, 306 + .shutdown = fu740_pcie_shutdown, 307 + }; 308 + 309 + builtin_platform_driver(fu740_pcie_driver);
-5
drivers/pci/controller/dwc/pcie-intel-gw.c
··· 81 81 writel(val, base + ofs); 82 82 } 83 83 84 - static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs) 85 - { 86 - return readl(lpp->app_base + ofs); 87 - } 88 - 89 84 static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 ofs, u32 val) 90 85 { 91 86 writel(val, lpp->app_base + ofs);
+105 -3
drivers/pci/controller/dwc/pcie-tegra194.c
··· 22 22 #include <linux/of_irq.h> 23 23 #include <linux/of_pci.h> 24 24 #include <linux/pci.h> 25 + #include <linux/pci-acpi.h> 26 + #include <linux/pci-ecam.h> 25 27 #include <linux/phy/phy.h> 26 28 #include <linux/pinctrl/consumer.h> 27 29 #include <linux/platform_device.h> ··· 312 310 struct tegra_pcie_dw_of_data { 313 311 enum dw_pcie_device_mode mode; 314 312 }; 313 + 314 + #if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) 315 + struct tegra194_pcie_ecam { 316 + void __iomem *config_base; 317 + void __iomem *iatu_base; 318 + void __iomem *dbi_base; 319 + }; 320 + 321 + static int tegra194_acpi_init(struct pci_config_window *cfg) 322 + { 323 + struct device *dev = cfg->parent; 324 + struct tegra194_pcie_ecam *pcie_ecam; 325 + 326 + pcie_ecam = devm_kzalloc(dev, sizeof(*pcie_ecam), GFP_KERNEL); 327 + if (!pcie_ecam) 328 + return -ENOMEM; 329 + 330 + pcie_ecam->config_base = cfg->win; 331 + pcie_ecam->iatu_base = cfg->win + SZ_256K; 332 + pcie_ecam->dbi_base = cfg->win + SZ_512K; 333 + cfg->priv = pcie_ecam; 334 + 335 + return 0; 336 + } 337 + 338 + static void atu_reg_write(struct tegra194_pcie_ecam *pcie_ecam, int index, 339 + u32 val, u32 reg) 340 + { 341 + u32 offset = PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(index); 342 + 343 + writel(val, pcie_ecam->iatu_base + offset + reg); 344 + } 345 + 346 + static void program_outbound_atu(struct tegra194_pcie_ecam *pcie_ecam, 347 + int index, int type, u64 cpu_addr, 348 + u64 pci_addr, u64 size) 349 + { 350 + atu_reg_write(pcie_ecam, index, lower_32_bits(cpu_addr), 351 + PCIE_ATU_LOWER_BASE); 352 + atu_reg_write(pcie_ecam, index, upper_32_bits(cpu_addr), 353 + PCIE_ATU_UPPER_BASE); 354 + atu_reg_write(pcie_ecam, index, lower_32_bits(pci_addr), 355 + PCIE_ATU_LOWER_TARGET); 356 + atu_reg_write(pcie_ecam, index, lower_32_bits(cpu_addr + size - 1), 357 + PCIE_ATU_LIMIT); 358 + atu_reg_write(pcie_ecam, index, upper_32_bits(pci_addr), 359 + PCIE_ATU_UPPER_TARGET); 360 + atu_reg_write(pcie_ecam, index, type, PCIE_ATU_CR1); 361 + atu_reg_write(pcie_ecam, index, PCIE_ATU_ENABLE, PCIE_ATU_CR2); 362 + } 363 + 364 + static void __iomem *tegra194_map_bus(struct pci_bus *bus, 365 + unsigned int devfn, int where) 366 + { 367 + struct pci_config_window *cfg = bus->sysdata; 368 + struct tegra194_pcie_ecam *pcie_ecam = cfg->priv; 369 + u32 busdev; 370 + int type; 371 + 372 + if (bus->number < cfg->busr.start || bus->number > cfg->busr.end) 373 + return NULL; 374 + 375 + if (bus->number == cfg->busr.start) { 376 + if (PCI_SLOT(devfn) == 0) 377 + return pcie_ecam->dbi_base + where; 378 + else 379 + return NULL; 380 + } 381 + 382 + busdev = PCIE_ATU_BUS(bus->number) | PCIE_ATU_DEV(PCI_SLOT(devfn)) | 383 + PCIE_ATU_FUNC(PCI_FUNC(devfn)); 384 + 385 + if (bus->parent->number == cfg->busr.start) { 386 + if (PCI_SLOT(devfn) == 0) 387 + type = PCIE_ATU_TYPE_CFG0; 388 + else 389 + return NULL; 390 + } else { 391 + type = PCIE_ATU_TYPE_CFG1; 392 + } 393 + 394 + program_outbound_atu(pcie_ecam, 0, type, cfg->res.start, busdev, 395 + SZ_256K); 396 + 397 + return pcie_ecam->config_base + where; 398 + } 399 + 400 + const struct pci_ecam_ops tegra194_pcie_ops = { 401 + .init = tegra194_acpi_init, 402 + .pci_ops = { 403 + .map_bus = tegra194_map_bus, 404 + .read = pci_generic_config_read, 405 + .write = pci_generic_config_write, 406 + } 407 + }; 408 + #endif /* defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) */ 409 + 410 + #ifdef CONFIG_PCIE_TEGRA194 315 411 316 412 static inline struct tegra_pcie_dw *to_tegra_pcie(struct dw_pcie *pci) 317 413 { ··· 1119 1019 .stop_link = tegra_pcie_dw_stop_link, 1120 1020 }; 1121 1021 1122 - static struct dw_pcie_host_ops tegra_pcie_dw_host_ops = { 1022 + static const struct dw_pcie_host_ops tegra_pcie_dw_host_ops = { 1123 1023 .host_init = tegra_pcie_dw_host_init, 1124 1024 }; 1125 1025 ··· 1745 1645 if (pcie->ep_state == EP_STATE_ENABLED) 1746 1646 return; 1747 1647 1748 - ret = pm_runtime_get_sync(dev); 1648 + ret = pm_runtime_resume_and_get(dev); 1749 1649 if (ret < 0) { 1750 1650 dev_err(dev, "Failed to get runtime sync for PCIe dev: %d\n", 1751 1651 ret); ··· 1981 1881 return &tegra_pcie_epc_features; 1982 1882 } 1983 1883 1984 - static struct dw_pcie_ep_ops pcie_ep_ops = { 1884 + static const struct dw_pcie_ep_ops pcie_ep_ops = { 1985 1885 .raise_irq = tegra_pcie_ep_raise_irq, 1986 1886 .get_features = tegra_pcie_ep_get_features, 1987 1887 }; ··· 2411 2311 MODULE_AUTHOR("Vidya Sagar <vidyas@nvidia.com>"); 2412 2312 MODULE_DESCRIPTION("NVIDIA PCIe host controller driver"); 2413 2313 MODULE_LICENSE("GPL v2"); 2314 + 2315 + #endif /* CONFIG_PCIE_TEGRA194 */
+1 -2
drivers/pci/controller/mobiveil/Kconfig
··· 24 24 25 25 config PCIE_LAYERSCAPE_GEN4 26 26 bool "Freescale Layerscape PCIe Gen4 controller" 27 - depends on PCI 28 - depends on OF && (ARM64 || ARCH_LAYERSCAPE) 27 + depends on ARCH_LAYERSCAPE || COMPILE_TEST 29 28 depends on PCI_MSI_IRQ_DOMAIN 30 29 select PCIE_MOBIVEIL_HOST 31 30 help
+1
drivers/pci/controller/pci-host-common.c
··· 79 79 80 80 bridge->sysdata = cfg; 81 81 bridge->ops = (struct pci_ops *)&ops->pci_ops; 82 + bridge->msi_domain = true; 82 83 83 84 return pci_host_probe(bridge); 84 85 }
-4
drivers/pci/controller/pci-hyperv.c
··· 473 473 struct list_head dr_list; 474 474 475 475 struct msi_domain_info msi_info; 476 - struct msi_controller msi_chip; 477 476 struct irq_domain *irq_domain; 478 477 479 478 spinlock_t retarget_msi_interrupt_lock; ··· 1864 1865 &hbus->resources_for_children); 1865 1866 if (!hbus->pci_bus) 1866 1867 return -ENODEV; 1867 - 1868 - hbus->pci_bus->msi = &hbus->msi_chip; 1869 - hbus->pci_bus->msi->dev = &hbus->hdev->device; 1870 1868 1871 1869 pci_lock_rescan_remove(); 1872 1870 pci_scan_child_bus(hbus->pci_bus);
+198 -165
drivers/pci/controller/pci-tegra.c
··· 21 21 #include <linux/interrupt.h> 22 22 #include <linux/iopoll.h> 23 23 #include <linux/irq.h> 24 + #include <linux/irqchip/chained_irq.h> 24 25 #include <linux/irqdomain.h> 25 26 #include <linux/kernel.h> 26 27 #include <linux/init.h> ··· 79 78 #define AFI_MSI_FPCI_BAR_ST 0x64 80 79 #define AFI_MSI_AXI_BAR_ST 0x68 81 80 82 - #define AFI_MSI_VEC0 0x6c 83 - #define AFI_MSI_VEC1 0x70 84 - #define AFI_MSI_VEC2 0x74 85 - #define AFI_MSI_VEC3 0x78 86 - #define AFI_MSI_VEC4 0x7c 87 - #define AFI_MSI_VEC5 0x80 88 - #define AFI_MSI_VEC6 0x84 89 - #define AFI_MSI_VEC7 0x88 90 - 91 - #define AFI_MSI_EN_VEC0 0x8c 92 - #define AFI_MSI_EN_VEC1 0x90 93 - #define AFI_MSI_EN_VEC2 0x94 94 - #define AFI_MSI_EN_VEC3 0x98 95 - #define AFI_MSI_EN_VEC4 0x9c 96 - #define AFI_MSI_EN_VEC5 0xa0 97 - #define AFI_MSI_EN_VEC6 0xa4 98 - #define AFI_MSI_EN_VEC7 0xa8 81 + #define AFI_MSI_VEC(x) (0x6c + ((x) * 4)) 82 + #define AFI_MSI_EN_VEC(x) (0x8c + ((x) * 4)) 99 83 100 84 #define AFI_CONFIGURATION 0xac 101 85 #define AFI_CONFIGURATION_EN_FPCI (1 << 0) ··· 266 280 #define LINK_RETRAIN_TIMEOUT 100000 /* in usec */ 267 281 268 282 struct tegra_msi { 269 - struct msi_controller chip; 270 283 DECLARE_BITMAP(used, INT_PCI_MSI_NR); 271 284 struct irq_domain *domain; 272 - struct mutex lock; 285 + struct mutex map_lock; 286 + spinlock_t mask_lock; 273 287 void *virt; 274 288 dma_addr_t phys; 275 289 int irq; ··· 319 333 } ectl; 320 334 }; 321 335 322 - static inline struct tegra_msi *to_tegra_msi(struct msi_controller *chip) 323 - { 324 - return container_of(chip, struct tegra_msi, chip); 325 - } 326 - 327 336 struct tegra_pcie { 328 337 struct device *dev; 329 338 ··· 352 371 const struct tegra_pcie_soc *soc; 353 372 struct dentry *debugfs; 354 373 }; 374 + 375 + static inline struct tegra_pcie *msi_to_pcie(struct tegra_msi *msi) 376 + { 377 + return container_of(msi, struct tegra_pcie, msi); 378 + } 355 379 356 380 struct tegra_pcie_port { 357 381 struct tegra_pcie *pcie; ··· 1418 1432 } 1419 
1433 } 1420 1434 1421 - 1422 1435 static int tegra_pcie_get_resources(struct tegra_pcie *pcie) 1423 1436 { 1424 1437 struct device *dev = pcie->dev; ··· 1494 1509 phys_put: 1495 1510 if (soc->program_uphy) 1496 1511 tegra_pcie_phys_put(pcie); 1512 + 1497 1513 return err; 1498 1514 } 1499 1515 ··· 1537 1551 afi_writel(pcie, val, AFI_PCIE_PME); 1538 1552 } 1539 1553 1540 - static int tegra_msi_alloc(struct tegra_msi *chip) 1554 + static void tegra_pcie_msi_irq(struct irq_desc *desc) 1541 1555 { 1542 - int msi; 1543 - 1544 - mutex_lock(&chip->lock); 1545 - 1546 - msi = find_first_zero_bit(chip->used, INT_PCI_MSI_NR); 1547 - if (msi < INT_PCI_MSI_NR) 1548 - set_bit(msi, chip->used); 1549 - else 1550 - msi = -ENOSPC; 1551 - 1552 - mutex_unlock(&chip->lock); 1553 - 1554 - return msi; 1555 - } 1556 - 1557 - static void tegra_msi_free(struct tegra_msi *chip, unsigned long irq) 1558 - { 1559 - struct device *dev = chip->chip.dev; 1560 - 1561 - mutex_lock(&chip->lock); 1562 - 1563 - if (!test_bit(irq, chip->used)) 1564 - dev_err(dev, "trying to free unused MSI#%lu\n", irq); 1565 - else 1566 - clear_bit(irq, chip->used); 1567 - 1568 - mutex_unlock(&chip->lock); 1569 - } 1570 - 1571 - static irqreturn_t tegra_pcie_msi_irq(int irq, void *data) 1572 - { 1573 - struct tegra_pcie *pcie = data; 1574 - struct device *dev = pcie->dev; 1556 + struct tegra_pcie *pcie = irq_desc_get_handler_data(desc); 1557 + struct irq_chip *chip = irq_desc_get_chip(desc); 1575 1558 struct tegra_msi *msi = &pcie->msi; 1576 - unsigned int i, processed = 0; 1559 + struct device *dev = pcie->dev; 1560 + unsigned int i; 1561 + 1562 + chained_irq_enter(chip, desc); 1577 1563 1578 1564 for (i = 0; i < 8; i++) { 1579 - unsigned long reg = afi_readl(pcie, AFI_MSI_VEC0 + i * 4); 1565 + unsigned long reg = afi_readl(pcie, AFI_MSI_VEC(i)); 1580 1566 1581 1567 while (reg) { 1582 1568 unsigned int offset = find_first_bit(&reg, 32); 1583 1569 unsigned int index = i * 32 + offset; 1584 1570 unsigned int irq; 1585 
1571 1586 - /* clear the interrupt */ 1587 - afi_writel(pcie, 1 << offset, AFI_MSI_VEC0 + i * 4); 1588 - 1589 - irq = irq_find_mapping(msi->domain, index); 1572 + irq = irq_find_mapping(msi->domain->parent, index); 1590 1573 if (irq) { 1591 - if (test_bit(index, msi->used)) 1592 - generic_handle_irq(irq); 1593 - else 1594 - dev_info(dev, "unhandled MSI\n"); 1574 + generic_handle_irq(irq); 1595 1575 } else { 1596 1576 /* 1597 1577 * that's weird who triggered this? 1598 1578 * just clear it 1599 1579 */ 1600 1580 dev_info(dev, "unexpected MSI\n"); 1581 + afi_writel(pcie, BIT(index % 32), AFI_MSI_VEC(index)); 1601 1582 } 1602 1583 1603 1584 /* see if there's any more pending in this vector */ 1604 - reg = afi_readl(pcie, AFI_MSI_VEC0 + i * 4); 1605 - 1606 - processed++; 1585 + reg = afi_readl(pcie, AFI_MSI_VEC(i)); 1607 1586 } 1608 1587 } 1609 1588 1610 - return processed > 0 ? IRQ_HANDLED : IRQ_NONE; 1589 + chained_irq_exit(chip, desc); 1611 1590 } 1612 1591 1613 - static int tegra_msi_setup_irq(struct msi_controller *chip, 1614 - struct pci_dev *pdev, struct msi_desc *desc) 1592 + static void tegra_msi_top_irq_ack(struct irq_data *d) 1615 1593 { 1616 - struct tegra_msi *msi = to_tegra_msi(chip); 1617 - struct msi_msg msg; 1618 - unsigned int irq; 1619 - int hwirq; 1620 - 1621 - hwirq = tegra_msi_alloc(msi); 1622 - if (hwirq < 0) 1623 - return hwirq; 1624 - 1625 - irq = irq_create_mapping(msi->domain, hwirq); 1626 - if (!irq) { 1627 - tegra_msi_free(msi, hwirq); 1628 - return -EINVAL; 1629 - } 1630 - 1631 - irq_set_msi_desc(irq, desc); 1632 - 1633 - msg.address_lo = lower_32_bits(msi->phys); 1634 - msg.address_hi = upper_32_bits(msi->phys); 1635 - msg.data = hwirq; 1636 - 1637 - pci_write_msi_msg(irq, &msg); 1638 - 1639 - return 0; 1594 + irq_chip_ack_parent(d); 1640 1595 } 1641 1596 1642 - static void tegra_msi_teardown_irq(struct msi_controller *chip, 1643 - unsigned int irq) 1597 + static void tegra_msi_top_irq_mask(struct irq_data *d) 1644 1598 { 1645 - struct 
tegra_msi *msi = to_tegra_msi(chip); 1646 - struct irq_data *d = irq_get_irq_data(irq); 1647 - irq_hw_number_t hwirq = irqd_to_hwirq(d); 1648 - 1649 - irq_dispose_mapping(irq); 1650 - tegra_msi_free(msi, hwirq); 1599 + pci_msi_mask_irq(d); 1600 + irq_chip_mask_parent(d); 1651 1601 } 1652 1602 1653 - static struct irq_chip tegra_msi_irq_chip = { 1654 - .name = "Tegra PCIe MSI", 1655 - .irq_enable = pci_msi_unmask_irq, 1656 - .irq_disable = pci_msi_mask_irq, 1657 - .irq_mask = pci_msi_mask_irq, 1658 - .irq_unmask = pci_msi_unmask_irq, 1603 + static void tegra_msi_top_irq_unmask(struct irq_data *d) 1604 + { 1605 + pci_msi_unmask_irq(d); 1606 + irq_chip_unmask_parent(d); 1607 + } 1608 + 1609 + static struct irq_chip tegra_msi_top_chip = { 1610 + .name = "Tegra PCIe MSI", 1611 + .irq_ack = tegra_msi_top_irq_ack, 1612 + .irq_mask = tegra_msi_top_irq_mask, 1613 + .irq_unmask = tegra_msi_top_irq_unmask, 1659 1614 }; 1660 1615 1661 - static int tegra_msi_map(struct irq_domain *domain, unsigned int irq, 1662 - irq_hw_number_t hwirq) 1616 + static void tegra_msi_irq_ack(struct irq_data *d) 1663 1617 { 1664 - irq_set_chip_and_handler(irq, &tegra_msi_irq_chip, handle_simple_irq); 1665 - irq_set_chip_data(irq, domain->host_data); 1618 + struct tegra_msi *msi = irq_data_get_irq_chip_data(d); 1619 + struct tegra_pcie *pcie = msi_to_pcie(msi); 1620 + unsigned int index = d->hwirq / 32; 1621 + 1622 + /* clear the interrupt */ 1623 + afi_writel(pcie, BIT(d->hwirq % 32), AFI_MSI_VEC(index)); 1624 + } 1625 + 1626 + static void tegra_msi_irq_mask(struct irq_data *d) 1627 + { 1628 + struct tegra_msi *msi = irq_data_get_irq_chip_data(d); 1629 + struct tegra_pcie *pcie = msi_to_pcie(msi); 1630 + unsigned int index = d->hwirq / 32; 1631 + unsigned long flags; 1632 + u32 value; 1633 + 1634 + spin_lock_irqsave(&msi->mask_lock, flags); 1635 + value = afi_readl(pcie, AFI_MSI_EN_VEC(index)); 1636 + value &= ~BIT(d->hwirq % 32); 1637 + afi_writel(pcie, value, AFI_MSI_EN_VEC(index)); 1638 + 
spin_unlock_irqrestore(&msi->mask_lock, flags); 1639 + } 1640 + 1641 + static void tegra_msi_irq_unmask(struct irq_data *d) 1642 + { 1643 + struct tegra_msi *msi = irq_data_get_irq_chip_data(d); 1644 + struct tegra_pcie *pcie = msi_to_pcie(msi); 1645 + unsigned int index = d->hwirq / 32; 1646 + unsigned long flags; 1647 + u32 value; 1648 + 1649 + spin_lock_irqsave(&msi->mask_lock, flags); 1650 + value = afi_readl(pcie, AFI_MSI_EN_VEC(index)); 1651 + value |= BIT(d->hwirq % 32); 1652 + afi_writel(pcie, value, AFI_MSI_EN_VEC(index)); 1653 + spin_unlock_irqrestore(&msi->mask_lock, flags); 1654 + } 1655 + 1656 + static int tegra_msi_set_affinity(struct irq_data *d, const struct cpumask *mask, bool force) 1657 + { 1658 + return -EINVAL; 1659 + } 1660 + 1661 + static void tegra_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) 1662 + { 1663 + struct tegra_msi *msi = irq_data_get_irq_chip_data(data); 1664 + 1665 + msg->address_lo = lower_32_bits(msi->phys); 1666 + msg->address_hi = upper_32_bits(msi->phys); 1667 + msg->data = data->hwirq; 1668 + } 1669 + 1670 + static struct irq_chip tegra_msi_bottom_chip = { 1671 + .name = "Tegra MSI", 1672 + .irq_ack = tegra_msi_irq_ack, 1673 + .irq_mask = tegra_msi_irq_mask, 1674 + .irq_unmask = tegra_msi_irq_unmask, 1675 + .irq_set_affinity = tegra_msi_set_affinity, 1676 + .irq_compose_msi_msg = tegra_compose_msi_msg, 1677 + }; 1678 + 1679 + static int tegra_msi_domain_alloc(struct irq_domain *domain, unsigned int virq, 1680 + unsigned int nr_irqs, void *args) 1681 + { 1682 + struct tegra_msi *msi = domain->host_data; 1683 + unsigned int i; 1684 + int hwirq; 1685 + 1686 + mutex_lock(&msi->map_lock); 1687 + 1688 + hwirq = bitmap_find_free_region(msi->used, INT_PCI_MSI_NR, order_base_2(nr_irqs)); 1689 + 1690 + mutex_unlock(&msi->map_lock); 1691 + 1692 + if (hwirq < 0) 1693 + return -ENOSPC; 1694 + 1695 + for (i = 0; i < nr_irqs; i++) 1696 + irq_domain_set_info(domain, virq + i, hwirq + i, 1697 + &tegra_msi_bottom_chip, 
domain->host_data, 1698 + handle_edge_irq, NULL, NULL); 1666 1699 1667 1700 tegra_cpuidle_pcie_irqs_in_use(); 1668 1701 1669 1702 return 0; 1670 1703 } 1671 1704 1672 - static const struct irq_domain_ops msi_domain_ops = { 1673 - .map = tegra_msi_map, 1705 + static void tegra_msi_domain_free(struct irq_domain *domain, unsigned int virq, 1706 + unsigned int nr_irqs) 1707 + { 1708 + struct irq_data *d = irq_domain_get_irq_data(domain, virq); 1709 + struct tegra_msi *msi = domain->host_data; 1710 + 1711 + mutex_lock(&msi->map_lock); 1712 + 1713 + bitmap_release_region(msi->used, d->hwirq, order_base_2(nr_irqs)); 1714 + 1715 + mutex_unlock(&msi->map_lock); 1716 + } 1717 + 1718 + static const struct irq_domain_ops tegra_msi_domain_ops = { 1719 + .alloc = tegra_msi_domain_alloc, 1720 + .free = tegra_msi_domain_free, 1674 1721 }; 1722 + 1723 + static struct msi_domain_info tegra_msi_info = { 1724 + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | 1725 + MSI_FLAG_PCI_MSIX), 1726 + .chip = &tegra_msi_top_chip, 1727 + }; 1728 + 1729 + static int tegra_allocate_domains(struct tegra_msi *msi) 1730 + { 1731 + struct tegra_pcie *pcie = msi_to_pcie(msi); 1732 + struct fwnode_handle *fwnode = dev_fwnode(pcie->dev); 1733 + struct irq_domain *parent; 1734 + 1735 + parent = irq_domain_create_linear(fwnode, INT_PCI_MSI_NR, 1736 + &tegra_msi_domain_ops, msi); 1737 + if (!parent) { 1738 + dev_err(pcie->dev, "failed to create IRQ domain\n"); 1739 + return -ENOMEM; 1740 + } 1741 + irq_domain_update_bus_token(parent, DOMAIN_BUS_NEXUS); 1742 + 1743 + msi->domain = pci_msi_create_irq_domain(fwnode, &tegra_msi_info, parent); 1744 + if (!msi->domain) { 1745 + dev_err(pcie->dev, "failed to create MSI domain\n"); 1746 + irq_domain_remove(parent); 1747 + return -ENOMEM; 1748 + } 1749 + 1750 + return 0; 1751 + } 1752 + 1753 + static void tegra_free_domains(struct tegra_msi *msi) 1754 + { 1755 + struct irq_domain *parent = msi->domain->parent; 1756 + 1757 + 
irq_domain_remove(msi->domain); 1758 + irq_domain_remove(parent); 1759 + } 1675 1760 1676 1761 static int tegra_pcie_msi_setup(struct tegra_pcie *pcie) 1677 1762 { 1678 - struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); 1679 1763 struct platform_device *pdev = to_platform_device(pcie->dev); 1680 1764 struct tegra_msi *msi = &pcie->msi; 1681 1765 struct device *dev = pcie->dev; 1682 1766 int err; 1683 1767 1684 - mutex_init(&msi->lock); 1768 + mutex_init(&msi->map_lock); 1769 + spin_lock_init(&msi->mask_lock); 1685 1770 1686 - msi->chip.dev = dev; 1687 - msi->chip.setup_irq = tegra_msi_setup_irq; 1688 - msi->chip.teardown_irq = tegra_msi_teardown_irq; 1689 - 1690 - msi->domain = irq_domain_add_linear(dev->of_node, INT_PCI_MSI_NR, 1691 - &msi_domain_ops, &msi->chip); 1692 - if (!msi->domain) { 1693 - dev_err(dev, "failed to create IRQ domain\n"); 1694 - return -ENOMEM; 1771 + if (IS_ENABLED(CONFIG_PCI_MSI)) { 1772 + err = tegra_allocate_domains(msi); 1773 + if (err) 1774 + return err; 1695 1775 } 1696 1776 1697 1777 err = platform_get_irq_byname(pdev, "msi"); ··· 1766 1714 1767 1715 msi->irq = err; 1768 1716 1769 - err = request_irq(msi->irq, tegra_pcie_msi_irq, IRQF_NO_THREAD, 1770 - tegra_msi_irq_chip.name, pcie); 1771 - if (err < 0) { 1772 - dev_err(dev, "failed to request IRQ: %d\n", err); 1773 - goto free_irq_domain; 1774 - } 1717 + irq_set_chained_handler_and_data(msi->irq, tegra_pcie_msi_irq, pcie); 1775 1718 1776 1719 /* Though the PCIe controller can address >32-bit address space, to 1777 1720 * facilitate endpoints that support only 32-bit MSI target address, ··· 1787 1740 goto free_irq; 1788 1741 } 1789 1742 1790 - host->msi = &msi->chip; 1791 - 1792 1743 return 0; 1793 1744 1794 1745 free_irq: 1795 - free_irq(msi->irq, pcie); 1746 + irq_set_chained_handler_and_data(msi->irq, NULL, NULL); 1796 1747 free_irq_domain: 1797 - irq_domain_remove(msi->domain); 1748 + if (IS_ENABLED(CONFIG_PCI_MSI)) 1749 + tegra_free_domains(msi); 1750 + 1798 1751 
return err; 1799 1752 } 1800 1753 ··· 1802 1755 { 1803 1756 const struct tegra_pcie_soc *soc = pcie->soc; 1804 1757 struct tegra_msi *msi = &pcie->msi; 1805 - u32 reg; 1758 + u32 reg, msi_state[INT_PCI_MSI_NR / 32]; 1759 + int i; 1806 1760 1807 1761 afi_writel(pcie, msi->phys >> soc->msi_base_shift, AFI_MSI_FPCI_BAR_ST); 1808 1762 afi_writel(pcie, msi->phys, AFI_MSI_AXI_BAR_ST); 1809 1763 /* this register is in 4K increments */ 1810 1764 afi_writel(pcie, 1, AFI_MSI_BAR_SZ); 1811 1765 1812 - /* enable all MSI vectors */ 1813 - afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC0); 1814 - afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC1); 1815 - afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC2); 1816 - afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC3); 1817 - afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC4); 1818 - afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC5); 1819 - afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC6); 1820 - afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC7); 1766 + /* Restore the MSI allocation state */ 1767 + bitmap_to_arr32(msi_state, msi->used, INT_PCI_MSI_NR); 1768 + for (i = 0; i < ARRAY_SIZE(msi_state); i++) 1769 + afi_writel(pcie, msi_state[i], AFI_MSI_EN_VEC(i)); 1821 1770 1822 1771 /* and unmask the MSI interrupt */ 1823 1772 reg = afi_readl(pcie, AFI_INTR_MASK); ··· 1829 1786 dma_free_attrs(pcie->dev, PAGE_SIZE, msi->virt, msi->phys, 1830 1787 DMA_ATTR_NO_KERNEL_MAPPING); 1831 1788 1832 - if (msi->irq > 0) 1833 - free_irq(msi->irq, pcie); 1834 - 1835 1789 for (i = 0; i < INT_PCI_MSI_NR; i++) { 1836 1790 irq = irq_find_mapping(msi->domain, i); 1837 1791 if (irq > 0) 1838 - irq_dispose_mapping(irq); 1792 + irq_domain_free_irqs(irq, 1); 1839 1793 } 1840 1794 1841 - irq_domain_remove(msi->domain); 1795 + irq_set_chained_handler_and_data(msi->irq, NULL, NULL); 1796 + 1797 + if (IS_ENABLED(CONFIG_PCI_MSI)) 1798 + tegra_free_domains(msi); 1842 1799 } 1843 1800 1844 1801 static int tegra_pcie_disable_msi(struct tegra_pcie *pcie) ··· 1849 1806 value = afi_readl(pcie, 
AFI_INTR_MASK); 1850 1807 value &= ~AFI_INTR_MASK_MSI_MASK; 1851 1808 afi_writel(pcie, value, AFI_INTR_MASK); 1852 - 1853 - /* disable all MSI vectors */ 1854 - afi_writel(pcie, 0, AFI_MSI_EN_VEC0); 1855 - afi_writel(pcie, 0, AFI_MSI_EN_VEC1); 1856 - afi_writel(pcie, 0, AFI_MSI_EN_VEC2); 1857 - afi_writel(pcie, 0, AFI_MSI_EN_VEC3); 1858 - afi_writel(pcie, 0, AFI_MSI_EN_VEC4); 1859 - afi_writel(pcie, 0, AFI_MSI_EN_VEC5); 1860 - afi_writel(pcie, 0, AFI_MSI_EN_VEC6); 1861 - afi_writel(pcie, 0, AFI_MSI_EN_VEC7); 1862 1809 1863 1810 return 0; 1864 1811 }
+1 -1
drivers/pci/controller/pci-thunder-ecam.c
··· 116 116 * the config space access window. Since we are working with 117 117 * the high-order 32 bits, shift everything down by 32 bits. 118 118 */ 119 - node_bits = (cfg->res.start >> 32) & (1 << 12); 119 + node_bits = upper_32_bits(cfg->res.start) & (1 << 12); 120 120 121 121 v |= node_bits; 122 122 set_val(v, where, size, val);
+7 -6
drivers/pci/controller/pci-thunder-pem.c
··· 12 12 #include <linux/pci-acpi.h> 13 13 #include <linux/pci-ecam.h> 14 14 #include <linux/platform_device.h> 15 + #include <linux/io-64-nonatomic-lo-hi.h> 15 16 #include "../pci.h" 16 17 17 18 #if defined(CONFIG_PCI_HOST_THUNDER_PEM) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)) ··· 325 324 * structure here for the BAR. 326 325 */ 327 326 bar4_start = res_pem->start + 0xf00000; 328 - pem_pci->ea_entry[0] = (u32)bar4_start | 2; 329 - pem_pci->ea_entry[1] = (u32)(res_pem->end - bar4_start) & ~3u; 330 - pem_pci->ea_entry[2] = (u32)(bar4_start >> 32); 327 + pem_pci->ea_entry[0] = lower_32_bits(bar4_start) | 2; 328 + pem_pci->ea_entry[1] = lower_32_bits(res_pem->end - bar4_start) & ~3u; 329 + pem_pci->ea_entry[2] = upper_32_bits(bar4_start); 331 330 332 331 cfg->priv = pem_pci; 333 332 return 0; ··· 335 334 336 335 #if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) 337 336 338 - #define PEM_RES_BASE 0x87e0c0000000UL 339 - #define PEM_NODE_MASK GENMASK(45, 44) 340 - #define PEM_INDX_MASK GENMASK(26, 24) 337 + #define PEM_RES_BASE 0x87e0c0000000ULL 338 + #define PEM_NODE_MASK GENMASK_ULL(45, 44) 339 + #define PEM_INDX_MASK GENMASK_ULL(26, 24) 341 340 #define PEM_MIN_DOM_IN_NODE 4 342 341 #define PEM_MAX_DOM_IN_NODE 10 343 342
+2 -1
drivers/pci/controller/pci-xgene.c
··· 354 354 if (IS_ERR(port->csr_base)) 355 355 return PTR_ERR(port->csr_base); 356 356 357 - port->cfg_base = devm_platform_ioremap_resource_byname(pdev, "cfg"); 357 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg"); 358 + port->cfg_base = devm_ioremap_resource(dev, res); 358 359 if (IS_ERR(port->cfg_base)) 359 360 return PTR_ERR(port->cfg_base); 360 361 port->cfg_addr = res->start;
+1 -3
drivers/pci/controller/pcie-altera-msi.c
··· 236 236 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 237 237 "vector_slave"); 238 238 msi->vector_base = devm_ioremap_resource(&pdev->dev, res); 239 - if (IS_ERR(msi->vector_base)) { 240 - dev_err(&pdev->dev, "failed to map vector_slave memory\n"); 239 + if (IS_ERR(msi->vector_base)) 241 240 return PTR_ERR(msi->vector_base); 242 - } 243 241 244 242 msi->vector_phy = res->start; 245 243
+14 -6
drivers/pci/controller/pcie-brcmstb.c
··· 1148 1148 1149 1149 brcm_pcie_turn_off(pcie); 1150 1150 ret = brcm_phy_stop(pcie); 1151 + reset_control_rearm(pcie->rescal); 1151 1152 clk_disable_unprepare(pcie->clk); 1152 1153 1153 1154 return ret; ··· 1164 1163 base = pcie->base; 1165 1164 clk_prepare_enable(pcie->clk); 1166 1165 1166 + ret = reset_control_reset(pcie->rescal); 1167 + if (ret) 1168 + goto err_disable_clk; 1169 + 1167 1170 ret = brcm_phy_start(pcie); 1168 1171 if (ret) 1169 - goto err; 1172 + goto err_reset; 1170 1173 1171 1174 /* Take bridge out of reset so we can access the SERDES reg */ 1172 1175 pcie->bridge_sw_init_set(pcie, 0); ··· 1185 1180 1186 1181 ret = brcm_pcie_setup(pcie); 1187 1182 if (ret) 1188 - goto err; 1183 + goto err_reset; 1189 1184 1190 1185 if (pcie->msi) 1191 1186 brcm_msi_set_regs(pcie->msi); 1192 1187 1193 1188 return 0; 1194 1189 1195 - err: 1190 + err_reset: 1191 + reset_control_rearm(pcie->rescal); 1192 + err_disable_clk: 1196 1193 clk_disable_unprepare(pcie->clk); 1197 1194 return ret; 1198 1195 } ··· 1204 1197 brcm_msi_remove(pcie); 1205 1198 brcm_pcie_turn_off(pcie); 1206 1199 brcm_phy_stop(pcie); 1207 - reset_control_assert(pcie->rescal); 1200 + reset_control_rearm(pcie->rescal); 1208 1201 clk_disable_unprepare(pcie->clk); 1209 1202 } 1210 1203 ··· 1285 1278 return PTR_ERR(pcie->perst_reset); 1286 1279 } 1287 1280 1288 - ret = reset_control_deassert(pcie->rescal); 1281 + ret = reset_control_reset(pcie->rescal); 1289 1282 if (ret) 1290 1283 dev_err(&pdev->dev, "failed to deassert 'rescal'\n"); 1291 1284 1292 1285 ret = brcm_phy_start(pcie); 1293 1286 if (ret) { 1294 - reset_control_assert(pcie->rescal); 1287 + reset_control_rearm(pcie->rescal); 1295 1288 clk_disable_unprepare(pcie->clk); 1296 1289 return ret; 1297 1290 } ··· 1303 1296 pcie->hw_rev = readl(pcie->base + PCIE_MISC_REVISION); 1304 1297 if (pcie->type == BCM4908 && pcie->hw_rev >= BRCM_PCIE_HW_REV_3_20) { 1305 1298 dev_err(pcie->dev, "hardware revision with unsupported PERST# setup\n"); 1299 + ret = 
-ENODEV; 1306 1300 goto fail; 1307 1301 } 1308 1302
+1 -1
drivers/pci/controller/pcie-iproc-msi.c
··· 271 271 NULL, NULL); 272 272 } 273 273 274 - return hwirq; 274 + return 0; 275 275 } 276 276 277 277 static void iproc_msi_irq_domain_free(struct irq_domain *domain,
+1027
drivers/pci/controller/pcie-mediatek-gen3.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * MediaTek PCIe host controller driver. 4 + * 5 + * Copyright (c) 2020 MediaTek Inc. 6 + * Author: Jianjun Wang <jianjun.wang@mediatek.com> 7 + */ 8 + 9 + #include <linux/clk.h> 10 + #include <linux/delay.h> 11 + #include <linux/iopoll.h> 12 + #include <linux/irq.h> 13 + #include <linux/irqchip/chained_irq.h> 14 + #include <linux/irqdomain.h> 15 + #include <linux/kernel.h> 16 + #include <linux/module.h> 17 + #include <linux/msi.h> 18 + #include <linux/pci.h> 19 + #include <linux/phy/phy.h> 20 + #include <linux/platform_device.h> 21 + #include <linux/pm_domain.h> 22 + #include <linux/pm_runtime.h> 23 + #include <linux/reset.h> 24 + 25 + #include "../pci.h" 26 + 27 + #define PCIE_SETTING_REG 0x80 28 + #define PCIE_PCI_IDS_1 0x9c 29 + #define PCI_CLASS(class) (class << 8) 30 + #define PCIE_RC_MODE BIT(0) 31 + 32 + #define PCIE_CFGNUM_REG 0x140 33 + #define PCIE_CFG_DEVFN(devfn) ((devfn) & GENMASK(7, 0)) 34 + #define PCIE_CFG_BUS(bus) (((bus) << 8) & GENMASK(15, 8)) 35 + #define PCIE_CFG_BYTE_EN(bytes) (((bytes) << 16) & GENMASK(19, 16)) 36 + #define PCIE_CFG_FORCE_BYTE_EN BIT(20) 37 + #define PCIE_CFG_OFFSET_ADDR 0x1000 38 + #define PCIE_CFG_HEADER(bus, devfn) \ 39 + (PCIE_CFG_BUS(bus) | PCIE_CFG_DEVFN(devfn)) 40 + 41 + #define PCIE_RST_CTRL_REG 0x148 42 + #define PCIE_MAC_RSTB BIT(0) 43 + #define PCIE_PHY_RSTB BIT(1) 44 + #define PCIE_BRG_RSTB BIT(2) 45 + #define PCIE_PE_RSTB BIT(3) 46 + 47 + #define PCIE_LTSSM_STATUS_REG 0x150 48 + #define PCIE_LTSSM_STATE_MASK GENMASK(28, 24) 49 + #define PCIE_LTSSM_STATE(val) ((val & PCIE_LTSSM_STATE_MASK) >> 24) 50 + #define PCIE_LTSSM_STATE_L2_IDLE 0x14 51 + 52 + #define PCIE_LINK_STATUS_REG 0x154 53 + #define PCIE_PORT_LINKUP BIT(8) 54 + 55 + #define PCIE_MSI_SET_NUM 8 56 + #define PCIE_MSI_IRQS_PER_SET 32 57 + #define PCIE_MSI_IRQS_NUM \ 58 + (PCIE_MSI_IRQS_PER_SET * PCIE_MSI_SET_NUM) 59 + 60 + #define PCIE_INT_ENABLE_REG 0x180 61 + #define PCIE_MSI_ENABLE 
GENMASK(PCIE_MSI_SET_NUM + 8 - 1, 8) 62 + #define PCIE_MSI_SHIFT 8 63 + #define PCIE_INTX_SHIFT 24 64 + #define PCIE_INTX_ENABLE \ 65 + GENMASK(PCIE_INTX_SHIFT + PCI_NUM_INTX - 1, PCIE_INTX_SHIFT) 66 + 67 + #define PCIE_INT_STATUS_REG 0x184 68 + #define PCIE_MSI_SET_ENABLE_REG 0x190 69 + #define PCIE_MSI_SET_ENABLE GENMASK(PCIE_MSI_SET_NUM - 1, 0) 70 + 71 + #define PCIE_MSI_SET_BASE_REG 0xc00 72 + #define PCIE_MSI_SET_OFFSET 0x10 73 + #define PCIE_MSI_SET_STATUS_OFFSET 0x04 74 + #define PCIE_MSI_SET_ENABLE_OFFSET 0x08 75 + 76 + #define PCIE_MSI_SET_ADDR_HI_BASE 0xc80 77 + #define PCIE_MSI_SET_ADDR_HI_OFFSET 0x04 78 + 79 + #define PCIE_ICMD_PM_REG 0x198 80 + #define PCIE_TURN_OFF_LINK BIT(4) 81 + 82 + #define PCIE_TRANS_TABLE_BASE_REG 0x800 83 + #define PCIE_ATR_SRC_ADDR_MSB_OFFSET 0x4 84 + #define PCIE_ATR_TRSL_ADDR_LSB_OFFSET 0x8 85 + #define PCIE_ATR_TRSL_ADDR_MSB_OFFSET 0xc 86 + #define PCIE_ATR_TRSL_PARAM_OFFSET 0x10 87 + #define PCIE_ATR_TLB_SET_OFFSET 0x20 88 + 89 + #define PCIE_MAX_TRANS_TABLES 8 90 + #define PCIE_ATR_EN BIT(0) 91 + #define PCIE_ATR_SIZE(size) \ 92 + (((((size) - 1) << 1) & GENMASK(6, 1)) | PCIE_ATR_EN) 93 + #define PCIE_ATR_ID(id) ((id) & GENMASK(3, 0)) 94 + #define PCIE_ATR_TYPE_MEM PCIE_ATR_ID(0) 95 + #define PCIE_ATR_TYPE_IO PCIE_ATR_ID(1) 96 + #define PCIE_ATR_TLP_TYPE(type) (((type) << 16) & GENMASK(18, 16)) 97 + #define PCIE_ATR_TLP_TYPE_MEM PCIE_ATR_TLP_TYPE(0) 98 + #define PCIE_ATR_TLP_TYPE_IO PCIE_ATR_TLP_TYPE(2) 99 + 100 + /** 101 + * struct mtk_msi_set - MSI information for each set 102 + * @base: IO mapped register base 103 + * @msg_addr: MSI message address 104 + * @saved_irq_state: IRQ enable state saved at suspend time 105 + */ 106 + struct mtk_msi_set { 107 + void __iomem *base; 108 + phys_addr_t msg_addr; 109 + u32 saved_irq_state; 110 + }; 111 + 112 + /** 113 + * struct mtk_pcie_port - PCIe port information 114 + * @dev: pointer to PCIe device 115 + * @base: IO mapped register base 116 + * @reg_base: physical register base 
117 + * @mac_reset: MAC reset control 118 + * @phy_reset: PHY reset control 119 + * @phy: PHY controller block 120 + * @clks: PCIe clocks 121 + * @num_clks: PCIe clocks count for this port 122 + * @irq: PCIe controller interrupt number 123 + * @saved_irq_state: IRQ enable state saved at suspend time 124 + * @irq_lock: lock protecting IRQ register access 125 + * @intx_domain: legacy INTx IRQ domain 126 + * @msi_domain: MSI IRQ domain 127 + * @msi_bottom_domain: MSI IRQ bottom domain 128 + * @msi_sets: MSI sets information 129 + * @lock: lock protecting IRQ bit map 130 + * @msi_irq_in_use: bit map for assigned MSI IRQ 131 + */ 132 + struct mtk_pcie_port { 133 + struct device *dev; 134 + void __iomem *base; 135 + phys_addr_t reg_base; 136 + struct reset_control *mac_reset; 137 + struct reset_control *phy_reset; 138 + struct phy *phy; 139 + struct clk_bulk_data *clks; 140 + int num_clks; 141 + 142 + int irq; 143 + u32 saved_irq_state; 144 + raw_spinlock_t irq_lock; 145 + struct irq_domain *intx_domain; 146 + struct irq_domain *msi_domain; 147 + struct irq_domain *msi_bottom_domain; 148 + struct mtk_msi_set msi_sets[PCIE_MSI_SET_NUM]; 149 + struct mutex lock; 150 + DECLARE_BITMAP(msi_irq_in_use, PCIE_MSI_IRQS_NUM); 151 + }; 152 + 153 + /** 154 + * mtk_pcie_config_tlp_header() - Configure a configuration TLP header 155 + * @bus: PCI bus to query 156 + * @devfn: device/function number 157 + * @where: offset in config space 158 + * @size: data size in TLP header 159 + * 160 + * Set byte enable field and device information in configuration TLP header. 
161 + */ 162 + static void mtk_pcie_config_tlp_header(struct pci_bus *bus, unsigned int devfn, 163 + int where, int size) 164 + { 165 + struct mtk_pcie_port *port = bus->sysdata; 166 + int bytes; 167 + u32 val; 168 + 169 + bytes = (GENMASK(size - 1, 0) & 0xf) << (where & 0x3); 170 + 171 + val = PCIE_CFG_FORCE_BYTE_EN | PCIE_CFG_BYTE_EN(bytes) | 172 + PCIE_CFG_HEADER(bus->number, devfn); 173 + 174 + writel_relaxed(val, port->base + PCIE_CFGNUM_REG); 175 + } 176 + 177 + static void __iomem *mtk_pcie_map_bus(struct pci_bus *bus, unsigned int devfn, 178 + int where) 179 + { 180 + struct mtk_pcie_port *port = bus->sysdata; 181 + 182 + return port->base + PCIE_CFG_OFFSET_ADDR + where; 183 + } 184 + 185 + static int mtk_pcie_config_read(struct pci_bus *bus, unsigned int devfn, 186 + int where, int size, u32 *val) 187 + { 188 + mtk_pcie_config_tlp_header(bus, devfn, where, size); 189 + 190 + return pci_generic_config_read32(bus, devfn, where, size, val); 191 + } 192 + 193 + static int mtk_pcie_config_write(struct pci_bus *bus, unsigned int devfn, 194 + int where, int size, u32 val) 195 + { 196 + mtk_pcie_config_tlp_header(bus, devfn, where, size); 197 + 198 + if (size <= 2) 199 + val <<= (where & 0x3) * 8; 200 + 201 + return pci_generic_config_write32(bus, devfn, where, 4, val); 202 + } 203 + 204 + static struct pci_ops mtk_pcie_ops = { 205 + .map_bus = mtk_pcie_map_bus, 206 + .read = mtk_pcie_config_read, 207 + .write = mtk_pcie_config_write, 208 + }; 209 + 210 + static int mtk_pcie_set_trans_table(struct mtk_pcie_port *port, 211 + resource_size_t cpu_addr, 212 + resource_size_t pci_addr, 213 + resource_size_t size, 214 + unsigned long type, int num) 215 + { 216 + void __iomem *table; 217 + u32 val; 218 + 219 + if (num >= PCIE_MAX_TRANS_TABLES) { 220 + dev_err(port->dev, "not enough translate table for addr: %#llx, limited to [%d]\n", 221 + (unsigned long long)cpu_addr, PCIE_MAX_TRANS_TABLES); 222 + return -ENODEV; 223 + } 224 + 225 + table = port->base + 
PCIE_TRANS_TABLE_BASE_REG + 226 + num * PCIE_ATR_TLB_SET_OFFSET; 227 + 228 + writel_relaxed(lower_32_bits(cpu_addr) | PCIE_ATR_SIZE(fls(size) - 1), 229 + table); 230 + writel_relaxed(upper_32_bits(cpu_addr), 231 + table + PCIE_ATR_SRC_ADDR_MSB_OFFSET); 232 + writel_relaxed(lower_32_bits(pci_addr), 233 + table + PCIE_ATR_TRSL_ADDR_LSB_OFFSET); 234 + writel_relaxed(upper_32_bits(pci_addr), 235 + table + PCIE_ATR_TRSL_ADDR_MSB_OFFSET); 236 + 237 + if (type == IORESOURCE_IO) 238 + val = PCIE_ATR_TYPE_IO | PCIE_ATR_TLP_TYPE_IO; 239 + else 240 + val = PCIE_ATR_TYPE_MEM | PCIE_ATR_TLP_TYPE_MEM; 241 + 242 + writel_relaxed(val, table + PCIE_ATR_TRSL_PARAM_OFFSET); 243 + 244 + return 0; 245 + } 246 + 247 + static void mtk_pcie_enable_msi(struct mtk_pcie_port *port) 248 + { 249 + int i; 250 + u32 val; 251 + 252 + for (i = 0; i < PCIE_MSI_SET_NUM; i++) { 253 + struct mtk_msi_set *msi_set = &port->msi_sets[i]; 254 + 255 + msi_set->base = port->base + PCIE_MSI_SET_BASE_REG + 256 + i * PCIE_MSI_SET_OFFSET; 257 + msi_set->msg_addr = port->reg_base + PCIE_MSI_SET_BASE_REG + 258 + i * PCIE_MSI_SET_OFFSET; 259 + 260 + /* Configure the MSI capture address */ 261 + writel_relaxed(lower_32_bits(msi_set->msg_addr), msi_set->base); 262 + writel_relaxed(upper_32_bits(msi_set->msg_addr), 263 + port->base + PCIE_MSI_SET_ADDR_HI_BASE + 264 + i * PCIE_MSI_SET_ADDR_HI_OFFSET); 265 + } 266 + 267 + val = readl_relaxed(port->base + PCIE_MSI_SET_ENABLE_REG); 268 + val |= PCIE_MSI_SET_ENABLE; 269 + writel_relaxed(val, port->base + PCIE_MSI_SET_ENABLE_REG); 270 + 271 + val = readl_relaxed(port->base + PCIE_INT_ENABLE_REG); 272 + val |= PCIE_MSI_ENABLE; 273 + writel_relaxed(val, port->base + PCIE_INT_ENABLE_REG); 274 + } 275 + 276 + static int mtk_pcie_startup_port(struct mtk_pcie_port *port) 277 + { 278 + struct resource_entry *entry; 279 + struct pci_host_bridge *host = pci_host_bridge_from_priv(port); 280 + unsigned int table_index = 0; 281 + int err; 282 + u32 val; 283 + 284 + /* Set as RC mode */ 
285 + val = readl_relaxed(port->base + PCIE_SETTING_REG); 286 + val |= PCIE_RC_MODE; 287 + writel_relaxed(val, port->base + PCIE_SETTING_REG); 288 + 289 + /* Set class code */ 290 + val = readl_relaxed(port->base + PCIE_PCI_IDS_1); 291 + val &= ~GENMASK(31, 8); 292 + val |= PCI_CLASS(PCI_CLASS_BRIDGE_PCI << 8); 293 + writel_relaxed(val, port->base + PCIE_PCI_IDS_1); 294 + 295 + /* Mask all INTx interrupts */ 296 + val = readl_relaxed(port->base + PCIE_INT_ENABLE_REG); 297 + val &= ~PCIE_INTX_ENABLE; 298 + writel_relaxed(val, port->base + PCIE_INT_ENABLE_REG); 299 + 300 + /* Assert all reset signals */ 301 + val = readl_relaxed(port->base + PCIE_RST_CTRL_REG); 302 + val |= PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | PCIE_PE_RSTB; 303 + writel_relaxed(val, port->base + PCIE_RST_CTRL_REG); 304 + 305 + /* 306 + * Described in PCIe CEM specification sections 2.2 (PERST# Signal) 307 + * and 2.2.1 (Initial Power-Up (G3 to S0)). 308 + * The deassertion of PERST# should be delayed 100ms (TPVPERL) 309 + * for the power and clock to become stable. 
310 + */ 311 + msleep(100); 312 + 313 + /* De-assert reset signals */ 314 + val &= ~(PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | PCIE_PE_RSTB); 315 + writel_relaxed(val, port->base + PCIE_RST_CTRL_REG); 316 + 317 + /* Check if the link is up or not */ 318 + err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_REG, val, 319 + !!(val & PCIE_PORT_LINKUP), 20, 320 + PCI_PM_D3COLD_WAIT * USEC_PER_MSEC); 321 + if (err) { 322 + val = readl_relaxed(port->base + PCIE_LTSSM_STATUS_REG); 323 + dev_err(port->dev, "PCIe link down, ltssm reg val: %#x\n", val); 324 + return err; 325 + } 326 + 327 + mtk_pcie_enable_msi(port); 328 + 329 + /* Set PCIe translation windows */ 330 + resource_list_for_each_entry(entry, &host->windows) { 331 + struct resource *res = entry->res; 332 + unsigned long type = resource_type(res); 333 + resource_size_t cpu_addr; 334 + resource_size_t pci_addr; 335 + resource_size_t size; 336 + const char *range_type; 337 + 338 + if (type == IORESOURCE_IO) { 339 + cpu_addr = pci_pio_to_address(res->start); 340 + range_type = "IO"; 341 + } else if (type == IORESOURCE_MEM) { 342 + cpu_addr = res->start; 343 + range_type = "MEM"; 344 + } else { 345 + continue; 346 + } 347 + 348 + pci_addr = res->start - entry->offset; 349 + size = resource_size(res); 350 + err = mtk_pcie_set_trans_table(port, cpu_addr, pci_addr, size, 351 + type, table_index); 352 + if (err) 353 + return err; 354 + 355 + dev_dbg(port->dev, "set %s trans window[%d]: cpu_addr = %#llx, pci_addr = %#llx, size = %#llx\n", 356 + range_type, table_index, (unsigned long long)cpu_addr, 357 + (unsigned long long)pci_addr, (unsigned long long)size); 358 + 359 + table_index++; 360 + } 361 + 362 + return 0; 363 + } 364 + 365 + static int mtk_pcie_set_affinity(struct irq_data *data, 366 + const struct cpumask *mask, bool force) 367 + { 368 + return -EINVAL; 369 + } 370 + 371 + static void mtk_pcie_msi_irq_mask(struct irq_data *data) 372 + { 373 + pci_msi_mask_irq(data); 374 + irq_chip_mask_parent(data); 
375 + } 376 + 377 + static void mtk_pcie_msi_irq_unmask(struct irq_data *data) 378 + { 379 + pci_msi_unmask_irq(data); 380 + irq_chip_unmask_parent(data); 381 + } 382 + 383 + static struct irq_chip mtk_msi_irq_chip = { 384 + .irq_ack = irq_chip_ack_parent, 385 + .irq_mask = mtk_pcie_msi_irq_mask, 386 + .irq_unmask = mtk_pcie_msi_irq_unmask, 387 + .name = "MSI", 388 + }; 389 + 390 + static struct msi_domain_info mtk_msi_domain_info = { 391 + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | 392 + MSI_FLAG_PCI_MSIX | MSI_FLAG_MULTI_PCI_MSI), 393 + .chip = &mtk_msi_irq_chip, 394 + }; 395 + 396 + static void mtk_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) 397 + { 398 + struct mtk_msi_set *msi_set = irq_data_get_irq_chip_data(data); 399 + struct mtk_pcie_port *port = data->domain->host_data; 400 + unsigned long hwirq; 401 + 402 + hwirq = data->hwirq % PCIE_MSI_IRQS_PER_SET; 403 + 404 + msg->address_hi = upper_32_bits(msi_set->msg_addr); 405 + msg->address_lo = lower_32_bits(msi_set->msg_addr); 406 + msg->data = hwirq; 407 + dev_dbg(port->dev, "msi#%#lx address_hi %#x address_lo %#x data %d\n", 408 + hwirq, msg->address_hi, msg->address_lo, msg->data); 409 + } 410 + 411 + static void mtk_msi_bottom_irq_ack(struct irq_data *data) 412 + { 413 + struct mtk_msi_set *msi_set = irq_data_get_irq_chip_data(data); 414 + unsigned long hwirq; 415 + 416 + hwirq = data->hwirq % PCIE_MSI_IRQS_PER_SET; 417 + 418 + writel_relaxed(BIT(hwirq), msi_set->base + PCIE_MSI_SET_STATUS_OFFSET); 419 + } 420 + 421 + static void mtk_msi_bottom_irq_mask(struct irq_data *data) 422 + { 423 + struct mtk_msi_set *msi_set = irq_data_get_irq_chip_data(data); 424 + struct mtk_pcie_port *port = data->domain->host_data; 425 + unsigned long hwirq, flags; 426 + u32 val; 427 + 428 + hwirq = data->hwirq % PCIE_MSI_IRQS_PER_SET; 429 + 430 + raw_spin_lock_irqsave(&port->irq_lock, flags); 431 + val = readl_relaxed(msi_set->base + PCIE_MSI_SET_ENABLE_OFFSET); 432 + val &= ~BIT(hwirq); 
433 + writel_relaxed(val, msi_set->base + PCIE_MSI_SET_ENABLE_OFFSET); 434 + raw_spin_unlock_irqrestore(&port->irq_lock, flags); 435 + } 436 + 437 + static void mtk_msi_bottom_irq_unmask(struct irq_data *data) 438 + { 439 + struct mtk_msi_set *msi_set = irq_data_get_irq_chip_data(data); 440 + struct mtk_pcie_port *port = data->domain->host_data; 441 + unsigned long hwirq, flags; 442 + u32 val; 443 + 444 + hwirq = data->hwirq % PCIE_MSI_IRQS_PER_SET; 445 + 446 + raw_spin_lock_irqsave(&port->irq_lock, flags); 447 + val = readl_relaxed(msi_set->base + PCIE_MSI_SET_ENABLE_OFFSET); 448 + val |= BIT(hwirq); 449 + writel_relaxed(val, msi_set->base + PCIE_MSI_SET_ENABLE_OFFSET); 450 + raw_spin_unlock_irqrestore(&port->irq_lock, flags); 451 + } 452 + 453 + static struct irq_chip mtk_msi_bottom_irq_chip = { 454 + .irq_ack = mtk_msi_bottom_irq_ack, 455 + .irq_mask = mtk_msi_bottom_irq_mask, 456 + .irq_unmask = mtk_msi_bottom_irq_unmask, 457 + .irq_compose_msi_msg = mtk_compose_msi_msg, 458 + .irq_set_affinity = mtk_pcie_set_affinity, 459 + .name = "MSI", 460 + }; 461 + 462 + static int mtk_msi_bottom_domain_alloc(struct irq_domain *domain, 463 + unsigned int virq, unsigned int nr_irqs, 464 + void *arg) 465 + { 466 + struct mtk_pcie_port *port = domain->host_data; 467 + struct mtk_msi_set *msi_set; 468 + int i, hwirq, set_idx; 469 + 470 + mutex_lock(&port->lock); 471 + 472 + hwirq = bitmap_find_free_region(port->msi_irq_in_use, PCIE_MSI_IRQS_NUM, 473 + order_base_2(nr_irqs)); 474 + 475 + mutex_unlock(&port->lock); 476 + 477 + if (hwirq < 0) 478 + return -ENOSPC; 479 + 480 + set_idx = hwirq / PCIE_MSI_IRQS_PER_SET; 481 + msi_set = &port->msi_sets[set_idx]; 482 + 483 + for (i = 0; i < nr_irqs; i++) 484 + irq_domain_set_info(domain, virq + i, hwirq + i, 485 + &mtk_msi_bottom_irq_chip, msi_set, 486 + handle_edge_irq, NULL, NULL); 487 + 488 + return 0; 489 + } 490 + 491 + static void mtk_msi_bottom_domain_free(struct irq_domain *domain, 492 + unsigned int virq, unsigned int 
nr_irqs) 493 + { 494 + struct mtk_pcie_port *port = domain->host_data; 495 + struct irq_data *data = irq_domain_get_irq_data(domain, virq); 496 + 497 + mutex_lock(&port->lock); 498 + 499 + bitmap_release_region(port->msi_irq_in_use, data->hwirq, 500 + order_base_2(nr_irqs)); 501 + 502 + mutex_unlock(&port->lock); 503 + 504 + irq_domain_free_irqs_common(domain, virq, nr_irqs); 505 + } 506 + 507 + static const struct irq_domain_ops mtk_msi_bottom_domain_ops = { 508 + .alloc = mtk_msi_bottom_domain_alloc, 509 + .free = mtk_msi_bottom_domain_free, 510 + }; 511 + 512 + static void mtk_intx_mask(struct irq_data *data) 513 + { 514 + struct mtk_pcie_port *port = irq_data_get_irq_chip_data(data); 515 + unsigned long flags; 516 + u32 val; 517 + 518 + raw_spin_lock_irqsave(&port->irq_lock, flags); 519 + val = readl_relaxed(port->base + PCIE_INT_ENABLE_REG); 520 + val &= ~BIT(data->hwirq + PCIE_INTX_SHIFT); 521 + writel_relaxed(val, port->base + PCIE_INT_ENABLE_REG); 522 + raw_spin_unlock_irqrestore(&port->irq_lock, flags); 523 + } 524 + 525 + static void mtk_intx_unmask(struct irq_data *data) 526 + { 527 + struct mtk_pcie_port *port = irq_data_get_irq_chip_data(data); 528 + unsigned long flags; 529 + u32 val; 530 + 531 + raw_spin_lock_irqsave(&port->irq_lock, flags); 532 + val = readl_relaxed(port->base + PCIE_INT_ENABLE_REG); 533 + val |= BIT(data->hwirq + PCIE_INTX_SHIFT); 534 + writel_relaxed(val, port->base + PCIE_INT_ENABLE_REG); 535 + raw_spin_unlock_irqrestore(&port->irq_lock, flags); 536 + } 537 + 538 + /** 539 + * mtk_intx_eoi() - Clear INTx IRQ status at the end of interrupt 540 + * @data: pointer to chip specific data 541 + * 542 + * As an emulated level IRQ, its interrupt status will remain 543 + * until the corresponding de-assert message is received; hence that 544 + * the status can only be cleared when the interrupt has been serviced. 
545 + */ 546 + static void mtk_intx_eoi(struct irq_data *data) 547 + { 548 + struct mtk_pcie_port *port = irq_data_get_irq_chip_data(data); 549 + unsigned long hwirq; 550 + 551 + hwirq = data->hwirq + PCIE_INTX_SHIFT; 552 + writel_relaxed(BIT(hwirq), port->base + PCIE_INT_STATUS_REG); 553 + } 554 + 555 + static struct irq_chip mtk_intx_irq_chip = { 556 + .irq_mask = mtk_intx_mask, 557 + .irq_unmask = mtk_intx_unmask, 558 + .irq_eoi = mtk_intx_eoi, 559 + .irq_set_affinity = mtk_pcie_set_affinity, 560 + .name = "INTx", 561 + }; 562 + 563 + static int mtk_pcie_intx_map(struct irq_domain *domain, unsigned int irq, 564 + irq_hw_number_t hwirq) 565 + { 566 + irq_set_chip_data(irq, domain->host_data); 567 + irq_set_chip_and_handler_name(irq, &mtk_intx_irq_chip, 568 + handle_fasteoi_irq, "INTx"); 569 + return 0; 570 + } 571 + 572 + static const struct irq_domain_ops intx_domain_ops = { 573 + .map = mtk_pcie_intx_map, 574 + }; 575 + 576 + static int mtk_pcie_init_irq_domains(struct mtk_pcie_port *port) 577 + { 578 + struct device *dev = port->dev; 579 + struct device_node *intc_node, *node = dev->of_node; 580 + int ret; 581 + 582 + raw_spin_lock_init(&port->irq_lock); 583 + 584 + /* Setup INTx */ 585 + intc_node = of_get_child_by_name(node, "interrupt-controller"); 586 + if (!intc_node) { 587 + dev_err(dev, "missing interrupt-controller node\n"); 588 + return -ENODEV; 589 + } 590 + 591 + port->intx_domain = irq_domain_add_linear(intc_node, PCI_NUM_INTX, 592 + &intx_domain_ops, port); 593 + if (!port->intx_domain) { 594 + dev_err(dev, "failed to create INTx IRQ domain\n"); 595 + return -ENODEV; 596 + } 597 + 598 + /* Setup MSI */ 599 + mutex_init(&port->lock); 600 + 601 + port->msi_bottom_domain = irq_domain_add_linear(node, PCIE_MSI_IRQS_NUM, 602 + &mtk_msi_bottom_domain_ops, port); 603 + if (!port->msi_bottom_domain) { 604 + dev_err(dev, "failed to create MSI bottom domain\n"); 605 + ret = -ENODEV; 606 + goto err_msi_bottom_domain; 607 + } 608 + 609 + port->msi_domain = 
pci_msi_create_irq_domain(dev->fwnode, 610 + &mtk_msi_domain_info, 611 + port->msi_bottom_domain); 612 + if (!port->msi_domain) { 613 + dev_err(dev, "failed to create MSI domain\n"); 614 + ret = -ENODEV; 615 + goto err_msi_domain; 616 + } 617 + 618 + return 0; 619 + 620 + err_msi_domain: 621 + irq_domain_remove(port->msi_bottom_domain); 622 + err_msi_bottom_domain: 623 + irq_domain_remove(port->intx_domain); 624 + 625 + return ret; 626 + } 627 + 628 + static void mtk_pcie_irq_teardown(struct mtk_pcie_port *port) 629 + { 630 + irq_set_chained_handler_and_data(port->irq, NULL, NULL); 631 + 632 + if (port->intx_domain) 633 + irq_domain_remove(port->intx_domain); 634 + 635 + if (port->msi_domain) 636 + irq_domain_remove(port->msi_domain); 637 + 638 + if (port->msi_bottom_domain) 639 + irq_domain_remove(port->msi_bottom_domain); 640 + 641 + irq_dispose_mapping(port->irq); 642 + } 643 + 644 + static void mtk_pcie_msi_handler(struct mtk_pcie_port *port, int set_idx) 645 + { 646 + struct mtk_msi_set *msi_set = &port->msi_sets[set_idx]; 647 + unsigned long msi_enable, msi_status; 648 + unsigned int virq; 649 + irq_hw_number_t bit, hwirq; 650 + 651 + msi_enable = readl_relaxed(msi_set->base + PCIE_MSI_SET_ENABLE_OFFSET); 652 + 653 + do { 654 + msi_status = readl_relaxed(msi_set->base + 655 + PCIE_MSI_SET_STATUS_OFFSET); 656 + msi_status &= msi_enable; 657 + if (!msi_status) 658 + break; 659 + 660 + for_each_set_bit(bit, &msi_status, PCIE_MSI_IRQS_PER_SET) { 661 + hwirq = bit + set_idx * PCIE_MSI_IRQS_PER_SET; 662 + virq = irq_find_mapping(port->msi_bottom_domain, hwirq); 663 + generic_handle_irq(virq); 664 + } 665 + } while (true); 666 + } 667 + 668 + static void mtk_pcie_irq_handler(struct irq_desc *desc) 669 + { 670 + struct mtk_pcie_port *port = irq_desc_get_handler_data(desc); 671 + struct irq_chip *irqchip = irq_desc_get_chip(desc); 672 + unsigned long status; 673 + unsigned int virq; 674 + irq_hw_number_t irq_bit = PCIE_INTX_SHIFT; 675 + 676 + 
chained_irq_enter(irqchip, desc); 677 + 678 + status = readl_relaxed(port->base + PCIE_INT_STATUS_REG); 679 + for_each_set_bit_from(irq_bit, &status, PCI_NUM_INTX + 680 + PCIE_INTX_SHIFT) { 681 + virq = irq_find_mapping(port->intx_domain, 682 + irq_bit - PCIE_INTX_SHIFT); 683 + generic_handle_irq(virq); 684 + } 685 + 686 + irq_bit = PCIE_MSI_SHIFT; 687 + for_each_set_bit_from(irq_bit, &status, PCIE_MSI_SET_NUM + 688 + PCIE_MSI_SHIFT) { 689 + mtk_pcie_msi_handler(port, irq_bit - PCIE_MSI_SHIFT); 690 + 691 + writel_relaxed(BIT(irq_bit), port->base + PCIE_INT_STATUS_REG); 692 + } 693 + 694 + chained_irq_exit(irqchip, desc); 695 + } 696 + 697 + static int mtk_pcie_setup_irq(struct mtk_pcie_port *port) 698 + { 699 + struct device *dev = port->dev; 700 + struct platform_device *pdev = to_platform_device(dev); 701 + int err; 702 + 703 + err = mtk_pcie_init_irq_domains(port); 704 + if (err) 705 + return err; 706 + 707 + port->irq = platform_get_irq(pdev, 0); 708 + if (port->irq < 0) 709 + return port->irq; 710 + 711 + irq_set_chained_handler_and_data(port->irq, mtk_pcie_irq_handler, port); 712 + 713 + return 0; 714 + } 715 + 716 + static int mtk_pcie_parse_port(struct mtk_pcie_port *port) 717 + { 718 + struct device *dev = port->dev; 719 + struct platform_device *pdev = to_platform_device(dev); 720 + struct resource *regs; 721 + int ret; 722 + 723 + regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "pcie-mac"); 724 + if (!regs) 725 + return -EINVAL; 726 + port->base = devm_ioremap_resource(dev, regs); 727 + if (IS_ERR(port->base)) { 728 + dev_err(dev, "failed to map register base\n"); 729 + return PTR_ERR(port->base); 730 + } 731 + 732 + port->reg_base = regs->start; 733 + 734 + port->phy_reset = devm_reset_control_get_optional_exclusive(dev, "phy"); 735 + if (IS_ERR(port->phy_reset)) { 736 + ret = PTR_ERR(port->phy_reset); 737 + if (ret != -EPROBE_DEFER) 738 + dev_err(dev, "failed to get PHY reset\n"); 739 + 740 + return ret; 741 + } 742 + 743 + port->mac_reset = 
devm_reset_control_get_optional_exclusive(dev, "mac"); 744 + if (IS_ERR(port->mac_reset)) { 745 + ret = PTR_ERR(port->mac_reset); 746 + if (ret != -EPROBE_DEFER) 747 + dev_err(dev, "failed to get MAC reset\n"); 748 + 749 + return ret; 750 + } 751 + 752 + port->phy = devm_phy_optional_get(dev, "pcie-phy"); 753 + if (IS_ERR(port->phy)) { 754 + ret = PTR_ERR(port->phy); 755 + if (ret != -EPROBE_DEFER) 756 + dev_err(dev, "failed to get PHY\n"); 757 + 758 + return ret; 759 + } 760 + 761 + port->num_clks = devm_clk_bulk_get_all(dev, &port->clks); 762 + if (port->num_clks < 0) { 763 + dev_err(dev, "failed to get clocks\n"); 764 + return port->num_clks; 765 + } 766 + 767 + return 0; 768 + } 769 + 770 + static int mtk_pcie_power_up(struct mtk_pcie_port *port) 771 + { 772 + struct device *dev = port->dev; 773 + int err; 774 + 775 + /* PHY power on and enable pipe clock */ 776 + reset_control_deassert(port->phy_reset); 777 + 778 + err = phy_init(port->phy); 779 + if (err) { 780 + dev_err(dev, "failed to initialize PHY\n"); 781 + goto err_phy_init; 782 + } 783 + 784 + err = phy_power_on(port->phy); 785 + if (err) { 786 + dev_err(dev, "failed to power on PHY\n"); 787 + goto err_phy_on; 788 + } 789 + 790 + /* MAC power on and enable transaction layer clocks */ 791 + reset_control_deassert(port->mac_reset); 792 + 793 + pm_runtime_enable(dev); 794 + pm_runtime_get_sync(dev); 795 + 796 + err = clk_bulk_prepare_enable(port->num_clks, port->clks); 797 + if (err) { 798 + dev_err(dev, "failed to enable clocks\n"); 799 + goto err_clk_init; 800 + } 801 + 802 + return 0; 803 + 804 + err_clk_init: 805 + pm_runtime_put_sync(dev); 806 + pm_runtime_disable(dev); 807 + reset_control_assert(port->mac_reset); 808 + phy_power_off(port->phy); 809 + err_phy_on: 810 + phy_exit(port->phy); 811 + err_phy_init: 812 + reset_control_assert(port->phy_reset); 813 + 814 + return err; 815 + } 816 + 817 + static void mtk_pcie_power_down(struct mtk_pcie_port *port) 818 + { 819 + 
clk_bulk_disable_unprepare(port->num_clks, port->clks); 820 + 821 + pm_runtime_put_sync(port->dev); 822 + pm_runtime_disable(port->dev); 823 + reset_control_assert(port->mac_reset); 824 + 825 + phy_power_off(port->phy); 826 + phy_exit(port->phy); 827 + reset_control_assert(port->phy_reset); 828 + } 829 + 830 + static int mtk_pcie_setup(struct mtk_pcie_port *port) 831 + { 832 + int err; 833 + 834 + err = mtk_pcie_parse_port(port); 835 + if (err) 836 + return err; 837 + 838 + /* Don't touch the hardware registers before power up */ 839 + err = mtk_pcie_power_up(port); 840 + if (err) 841 + return err; 842 + 843 + /* Try link up */ 844 + err = mtk_pcie_startup_port(port); 845 + if (err) 846 + goto err_setup; 847 + 848 + err = mtk_pcie_setup_irq(port); 849 + if (err) 850 + goto err_setup; 851 + 852 + return 0; 853 + 854 + err_setup: 855 + mtk_pcie_power_down(port); 856 + 857 + return err; 858 + } 859 + 860 + static int mtk_pcie_probe(struct platform_device *pdev) 861 + { 862 + struct device *dev = &pdev->dev; 863 + struct mtk_pcie_port *port; 864 + struct pci_host_bridge *host; 865 + int err; 866 + 867 + host = devm_pci_alloc_host_bridge(dev, sizeof(*port)); 868 + if (!host) 869 + return -ENOMEM; 870 + 871 + port = pci_host_bridge_priv(host); 872 + 873 + port->dev = dev; 874 + platform_set_drvdata(pdev, port); 875 + 876 + err = mtk_pcie_setup(port); 877 + if (err) 878 + return err; 879 + 880 + host->ops = &mtk_pcie_ops; 881 + host->sysdata = port; 882 + 883 + err = pci_host_probe(host); 884 + if (err) { 885 + mtk_pcie_irq_teardown(port); 886 + mtk_pcie_power_down(port); 887 + return err; 888 + } 889 + 890 + return 0; 891 + } 892 + 893 + static int mtk_pcie_remove(struct platform_device *pdev) 894 + { 895 + struct mtk_pcie_port *port = platform_get_drvdata(pdev); 896 + struct pci_host_bridge *host = pci_host_bridge_from_priv(port); 897 + 898 + pci_lock_rescan_remove(); 899 + pci_stop_root_bus(host->bus); 900 + pci_remove_root_bus(host->bus); 901 + 
pci_unlock_rescan_remove(); 902 + 903 + mtk_pcie_irq_teardown(port); 904 + mtk_pcie_power_down(port); 905 + 906 + return 0; 907 + } 908 + 909 + static void __maybe_unused mtk_pcie_irq_save(struct mtk_pcie_port *port) 910 + { 911 + int i; 912 + 913 + raw_spin_lock(&port->irq_lock); 914 + 915 + port->saved_irq_state = readl_relaxed(port->base + PCIE_INT_ENABLE_REG); 916 + 917 + for (i = 0; i < PCIE_MSI_SET_NUM; i++) { 918 + struct mtk_msi_set *msi_set = &port->msi_sets[i]; 919 + 920 + msi_set->saved_irq_state = readl_relaxed(msi_set->base + 921 + PCIE_MSI_SET_ENABLE_OFFSET); 922 + } 923 + 924 + raw_spin_unlock(&port->irq_lock); 925 + } 926 + 927 + static void __maybe_unused mtk_pcie_irq_restore(struct mtk_pcie_port *port) 928 + { 929 + int i; 930 + 931 + raw_spin_lock(&port->irq_lock); 932 + 933 + writel_relaxed(port->saved_irq_state, port->base + PCIE_INT_ENABLE_REG); 934 + 935 + for (i = 0; i < PCIE_MSI_SET_NUM; i++) { 936 + struct mtk_msi_set *msi_set = &port->msi_sets[i]; 937 + 938 + writel_relaxed(msi_set->saved_irq_state, 939 + msi_set->base + PCIE_MSI_SET_ENABLE_OFFSET); 940 + } 941 + 942 + raw_spin_unlock(&port->irq_lock); 943 + } 944 + 945 + static int __maybe_unused mtk_pcie_turn_off_link(struct mtk_pcie_port *port) 946 + { 947 + u32 val; 948 + 949 + val = readl_relaxed(port->base + PCIE_ICMD_PM_REG); 950 + val |= PCIE_TURN_OFF_LINK; 951 + writel_relaxed(val, port->base + PCIE_ICMD_PM_REG); 952 + 953 + /* Check the link is L2 */ 954 + return readl_poll_timeout(port->base + PCIE_LTSSM_STATUS_REG, val, 955 + (PCIE_LTSSM_STATE(val) == 956 + PCIE_LTSSM_STATE_L2_IDLE), 20, 957 + 50 * USEC_PER_MSEC); 958 + } 959 + 960 + static int __maybe_unused mtk_pcie_suspend_noirq(struct device *dev) 961 + { 962 + struct mtk_pcie_port *port = dev_get_drvdata(dev); 963 + int err; 964 + u32 val; 965 + 966 + /* Trigger link to L2 state */ 967 + err = mtk_pcie_turn_off_link(port); 968 + if (err) { 969 + dev_err(port->dev, "cannot enter L2 state\n"); 970 + return err; 971 + } 972 
+ 973 + /* Pull down the PERST# pin */ 974 + val = readl_relaxed(port->base + PCIE_RST_CTRL_REG); 975 + val |= PCIE_PE_RSTB; 976 + writel_relaxed(val, port->base + PCIE_RST_CTRL_REG); 977 + 978 + dev_dbg(port->dev, "entered L2 states successfully"); 979 + 980 + mtk_pcie_irq_save(port); 981 + mtk_pcie_power_down(port); 982 + 983 + return 0; 984 + } 985 + 986 + static int __maybe_unused mtk_pcie_resume_noirq(struct device *dev) 987 + { 988 + struct mtk_pcie_port *port = dev_get_drvdata(dev); 989 + int err; 990 + 991 + err = mtk_pcie_power_up(port); 992 + if (err) 993 + return err; 994 + 995 + err = mtk_pcie_startup_port(port); 996 + if (err) { 997 + mtk_pcie_power_down(port); 998 + return err; 999 + } 1000 + 1001 + mtk_pcie_irq_restore(port); 1002 + 1003 + return 0; 1004 + } 1005 + 1006 + static const struct dev_pm_ops mtk_pcie_pm_ops = { 1007 + SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mtk_pcie_suspend_noirq, 1008 + mtk_pcie_resume_noirq) 1009 + }; 1010 + 1011 + static const struct of_device_id mtk_pcie_of_match[] = { 1012 + { .compatible = "mediatek,mt8192-pcie" }, 1013 + {}, 1014 + }; 1015 + 1016 + static struct platform_driver mtk_pcie_driver = { 1017 + .probe = mtk_pcie_probe, 1018 + .remove = mtk_pcie_remove, 1019 + .driver = { 1020 + .name = "mtk-pcie", 1021 + .of_match_table = mtk_pcie_of_match, 1022 + .pm = &mtk_pcie_pm_ops, 1023 + }, 1024 + }; 1025 + 1026 + module_platform_driver(mtk_pcie_driver); 1027 + MODULE_LICENSE("GPL v2");
drivers/pci/controller/pcie-mediatek.c (+6 -1)

···
  * struct mtk_pcie_soc - differentiate between host generations
  * @need_fix_class_id: whether this host's class ID needed to be fixed or not
  * @need_fix_device_id: whether this host's device ID needed to be fixed or not
+ * @no_msi: Bridge has no MSI support, and relies on an external block
  * @device_id: device ID which this host need to be fixed
  * @ops: pointer to configuration access functions
  * @startup: pointer to controller setting functions
···
 struct mtk_pcie_soc {
        bool need_fix_class_id;
        bool need_fix_device_id;
+       bool no_msi;
        unsigned int device_id;
        struct pci_ops *ops;
        int (*startup)(struct mtk_pcie_port *port);
···
 static int mtk_pcie_startup_port(struct mtk_pcie_port *port)
 {
        struct mtk_pcie *pcie = port->pcie;
-       u32 func = PCI_FUNC(port->slot << 3);
+       u32 func = PCI_FUNC(port->slot);
        u32 slot = PCI_SLOT(port->slot << 3);
        u32 val;
        int err;
···
        host->ops = pcie->soc->ops;
        host->sysdata = pcie;
+       host->msi_domain = pcie->soc->no_msi;

        err = pci_host_probe(host);
        if (err)
···
 static const struct mtk_pcie_soc mtk_pcie_soc_v1 = {
+       .no_msi = true,
        .ops = &mtk_pcie_ops,
        .startup = mtk_pcie_startup_port,
 };
···
        { .compatible = "mediatek,mt7629-pcie", .data = &mtk_pcie_soc_mt7629 },
        {},
 };
+MODULE_DEVICE_TABLE(of, mtk_pcie_ids);

 static struct platform_driver mtk_pcie_driver = {
        .probe = mtk_pcie_probe,
drivers/pci/controller/pcie-microchip-host.c (+5 -7)

···
        LOCAL_EVENT_CAUSE(PM_MSI_INT_SYS_ERR, "system error"),
 };

-struct event_map pcie_event_to_event[] = {
+static struct event_map pcie_event_to_event[] = {
        PCIE_EVENT_TO_EVENT_MAP(L2_EXIT),
        PCIE_EVENT_TO_EVENT_MAP(HOTRST_EXIT),
        PCIE_EVENT_TO_EVENT_MAP(DLUP_EXIT),
 };

-struct event_map sec_error_to_event[] = {
+static struct event_map sec_error_to_event[] = {
        SEC_ERROR_TO_EVENT_MAP(TX_RAM_SEC_ERR),
        SEC_ERROR_TO_EVENT_MAP(RX_RAM_SEC_ERR),
        SEC_ERROR_TO_EVENT_MAP(PCIE2AXI_RAM_SEC_ERR),
        SEC_ERROR_TO_EVENT_MAP(AXI2PCIE_RAM_SEC_ERR),
 };

-struct event_map ded_error_to_event[] = {
+static struct event_map ded_error_to_event[] = {
        DED_ERROR_TO_EVENT_MAP(TX_RAM_DED_ERR),
        DED_ERROR_TO_EVENT_MAP(RX_RAM_DED_ERR),
        DED_ERROR_TO_EVENT_MAP(PCIE2AXI_RAM_DED_ERR),
        DED_ERROR_TO_EVENT_MAP(AXI2PCIE_RAM_DED_ERR),
 };

-struct event_map local_status_to_event[] = {
+static struct event_map local_status_to_event[] = {
        LOCAL_STATUS_TO_EVENT_MAP(DMA_END_ENGINE_0),
        LOCAL_STATUS_TO_EVENT_MAP(DMA_END_ENGINE_1),
        LOCAL_STATUS_TO_EVENT_MAP(DMA_ERROR_ENGINE_0),
···
        }

        irq = platform_get_irq(pdev, 0);
-       if (irq < 0) {
-               dev_err(dev, "unable to request IRQ%d\n", irq);
+       if (irq < 0)
                return -ENODEV;
-       }

        for (i = 0; i < NUM_EVENTS; i++) {
                event_irq = irq_create_mapping(port->event_domain, i);
drivers/pci/controller/pcie-rcar-host.c (+178 -195)

···
 struct rcar_msi {
        DECLARE_BITMAP(used, INT_PCI_MSI_NR);
        struct irq_domain *domain;
-       struct msi_controller chip;
-       unsigned long pages;
-       struct mutex lock;
+       struct mutex map_lock;
+       spinlock_t mask_lock;
        int irq1;
        int irq2;
 };
-
-static inline struct rcar_msi *to_rcar_msi(struct msi_controller *chip)
-{
-       return container_of(chip, struct rcar_msi, chip);
-}

 /* Structure representing the PCIe interface */
 struct rcar_pcie_host {
···
        struct rcar_msi         msi;
        int                     (*phy_init_fn)(struct rcar_pcie_host *host);
 };
+
+static struct rcar_pcie_host *msi_to_host(struct rcar_msi *msi)
+{
+       return container_of(msi, struct rcar_pcie_host, msi);
+}

 static u32 rcar_read_conf(struct rcar_pcie *pcie, int where)
 {
···
        bridge->sysdata = host;
        bridge->ops = &rcar_pcie_ops;
-       if (IS_ENABLED(CONFIG_PCI_MSI))
-               bridge->msi = &host->msi.chip;

        return pci_host_probe(bridge);
 }
···
        return err;
 }

-static int rcar_msi_alloc(struct rcar_msi *chip)
-{
-       int msi;
-
-       mutex_lock(&chip->lock);
-
-       msi = find_first_zero_bit(chip->used, INT_PCI_MSI_NR);
-       if (msi < INT_PCI_MSI_NR)
-               set_bit(msi, chip->used);
-       else
-               msi = -ENOSPC;
-
-       mutex_unlock(&chip->lock);
-
-       return msi;
-}
-
-static int rcar_msi_alloc_region(struct rcar_msi *chip, int no_irqs)
-{
-       int msi;
-
-       mutex_lock(&chip->lock);
-       msi = bitmap_find_free_region(chip->used, INT_PCI_MSI_NR,
-                                     order_base_2(no_irqs));
-       mutex_unlock(&chip->lock);
-
-       return msi;
-}
-
-static void rcar_msi_free(struct rcar_msi *chip, unsigned long irq)
-{
-       mutex_lock(&chip->lock);
-       clear_bit(irq, chip->used);
-       mutex_unlock(&chip->lock);
-}
-
 static irqreturn_t rcar_pcie_msi_irq(int irq, void *data)
 {
        struct rcar_pcie_host *host = data;
···
                unsigned int index = find_first_bit(&reg, 32);
                unsigned int msi_irq;

-               /* clear the interrupt */
-               rcar_pci_write_reg(pcie, 1 << index, PCIEMSIFR);
-
-               msi_irq = irq_find_mapping(msi->domain, index);
+               msi_irq = irq_find_mapping(msi->domain->parent, index);
                if (msi_irq) {
-                       if (test_bit(index, msi->used))
-                               generic_handle_irq(msi_irq);
-                       else
-                               dev_info(dev, "unhandled MSI\n");
+                       generic_handle_irq(msi_irq);
                } else {
                        /* Unknown MSI, just clear it */
                        dev_dbg(dev, "unexpected MSI\n");
+                       rcar_pci_write_reg(pcie, BIT(index), PCIEMSIFR);
                }

                /* see if there's any more pending in this vector */
···
        return IRQ_HANDLED;
 }

-static int rcar_msi_setup_irq(struct msi_controller *chip, struct pci_dev *pdev,
-                             struct msi_desc *desc)
+static void rcar_msi_top_irq_ack(struct irq_data *d)
 {
-       struct rcar_msi *msi = to_rcar_msi(chip);
-       struct rcar_pcie_host *host = container_of(chip, struct rcar_pcie_host,
-                                                  msi.chip);
-       struct rcar_pcie *pcie = &host->pcie;
-       struct msi_msg msg;
-       unsigned int irq;
-       int hwirq;
-
-       hwirq = rcar_msi_alloc(msi);
-       if (hwirq < 0)
-               return hwirq;
-
-       irq = irq_find_mapping(msi->domain, hwirq);
-       if (!irq) {
-               rcar_msi_free(msi, hwirq);
-               return -EINVAL;
-       }
-
-       irq_set_msi_desc(irq, desc);
-
-       msg.address_lo = rcar_pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE;
-       msg.address_hi = rcar_pci_read_reg(pcie, PCIEMSIAUR);
-       msg.data = hwirq;
-
-       pci_write_msi_msg(irq, &msg);
-
-       return 0;
+       irq_chip_ack_parent(d);
 }

-static int rcar_msi_setup_irqs(struct msi_controller *chip,
-                              struct pci_dev *pdev, int nvec, int type)
+static void rcar_msi_top_irq_mask(struct irq_data *d)
 {
-       struct rcar_msi *msi = to_rcar_msi(chip);
-       struct rcar_pcie_host *host = container_of(chip, struct rcar_pcie_host,
-                                                  msi.chip);
-       struct rcar_pcie *pcie = &host->pcie;
-       struct msi_desc *desc;
-       struct msi_msg msg;
-       unsigned int irq;
+       pci_msi_mask_irq(d);
+       irq_chip_mask_parent(d);
+}
+
+static void rcar_msi_top_irq_unmask(struct irq_data *d)
+{
+       pci_msi_unmask_irq(d);
+       irq_chip_unmask_parent(d);
+}
+
+static struct irq_chip rcar_msi_top_chip = {
+       .name           = "PCIe MSI",
+       .irq_ack        = rcar_msi_top_irq_ack,
+       .irq_mask       = rcar_msi_top_irq_mask,
+       .irq_unmask     = rcar_msi_top_irq_unmask,
+};
+
+static void rcar_msi_irq_ack(struct irq_data *d)
+{
+       struct rcar_msi *msi = irq_data_get_irq_chip_data(d);
+       struct rcar_pcie *pcie = &msi_to_host(msi)->pcie;
+
+       /* clear the interrupt */
+       rcar_pci_write_reg(pcie, BIT(d->hwirq), PCIEMSIFR);
+}
+
+static void rcar_msi_irq_mask(struct irq_data *d)
+{
+       struct rcar_msi *msi = irq_data_get_irq_chip_data(d);
+       struct rcar_pcie *pcie = &msi_to_host(msi)->pcie;
+       unsigned long flags;
+       u32 value;
+
+       spin_lock_irqsave(&msi->mask_lock, flags);
+       value = rcar_pci_read_reg(pcie, PCIEMSIIER);
+       value &= ~BIT(d->hwirq);
+       rcar_pci_write_reg(pcie, value, PCIEMSIIER);
+       spin_unlock_irqrestore(&msi->mask_lock, flags);
+}
+
+static void rcar_msi_irq_unmask(struct irq_data *d)
+{
+       struct rcar_msi *msi = irq_data_get_irq_chip_data(d);
+       struct rcar_pcie *pcie = &msi_to_host(msi)->pcie;
+       unsigned long flags;
+       u32 value;
+
+       spin_lock_irqsave(&msi->mask_lock, flags);
+       value = rcar_pci_read_reg(pcie, PCIEMSIIER);
+       value |= BIT(d->hwirq);
+       rcar_pci_write_reg(pcie, value, PCIEMSIIER);
+       spin_unlock_irqrestore(&msi->mask_lock, flags);
+}
+
+static int rcar_msi_set_affinity(struct irq_data *d, const struct cpumask *mask, bool force)
+{
+       return -EINVAL;
+}
+
+static void rcar_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+{
+       struct rcar_msi *msi = irq_data_get_irq_chip_data(data);
+       struct rcar_pcie *pcie = &msi_to_host(msi)->pcie;
+
+       msg->address_lo = rcar_pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE;
+       msg->address_hi = rcar_pci_read_reg(pcie, PCIEMSIAUR);
+       msg->data = data->hwirq;
+}
+
+static struct irq_chip rcar_msi_bottom_chip = {
+       .name                   = "Rcar MSI",
+       .irq_ack                = rcar_msi_irq_ack,
+       .irq_mask               = rcar_msi_irq_mask,
+       .irq_unmask             = rcar_msi_irq_unmask,
+       .irq_set_affinity       = rcar_msi_set_affinity,
+       .irq_compose_msi_msg    = rcar_compose_msi_msg,
+};
+
+static int rcar_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
+                                unsigned int nr_irqs, void *args)
+{
+       struct rcar_msi *msi = domain->host_data;
+       unsigned int i;
        int hwirq;
-       int i;

-       /* MSI-X interrupts are not supported */
-       if (type == PCI_CAP_ID_MSIX)
-               return -EINVAL;
+       mutex_lock(&msi->map_lock);

-       WARN_ON(!list_is_singular(&pdev->dev.msi_list));
-       desc = list_entry(pdev->dev.msi_list.next, struct msi_desc, list);
+       hwirq = bitmap_find_free_region(msi->used, INT_PCI_MSI_NR, order_base_2(nr_irqs));

-       hwirq = rcar_msi_alloc_region(msi, nvec);
+       mutex_unlock(&msi->map_lock);
+
        if (hwirq < 0)
                return -ENOSPC;

-       irq = irq_find_mapping(msi->domain, hwirq);
-       if (!irq)
-               return -ENOSPC;
-
-       for (i = 0; i < nvec; i++) {
-               /*
-                * irq_create_mapping() called from rcar_pcie_probe() pre-
-                * allocates descs, so there is no need to allocate descs here.
-                * We can therefore assume that if irq_find_mapping() above
-                * returns non-zero, then the descs are also successfully
-                * allocated.
-                */
-               if (irq_set_msi_desc_off(irq, i, desc)) {
-                       /* TODO: clear */
-                       return -EINVAL;
-               }
-       }
-
-       desc->nvec_used = nvec;
-       desc->msi_attrib.multiple = order_base_2(nvec);
-
-       msg.address_lo = rcar_pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE;
-       msg.address_hi = rcar_pci_read_reg(pcie, PCIEMSIAUR);
-       msg.data = hwirq;
-
-       pci_write_msi_msg(irq, &msg);
+       for (i = 0; i < nr_irqs; i++)
+               irq_domain_set_info(domain, virq + i, hwirq + i,
+                                   &rcar_msi_bottom_chip, domain->host_data,
+                                   handle_edge_irq, NULL, NULL);

        return 0;
 }

-static void rcar_msi_teardown_irq(struct msi_controller *chip, unsigned int irq)
+static void rcar_msi_domain_free(struct irq_domain *domain, unsigned int virq,
+                                unsigned int nr_irqs)
 {
-       struct rcar_msi *msi = to_rcar_msi(chip);
-       struct irq_data *d = irq_get_irq_data(irq);
+       struct irq_data *d = irq_domain_get_irq_data(domain, virq);
+       struct rcar_msi *msi = domain->host_data;

-       rcar_msi_free(msi, d->hwirq);
+       mutex_lock(&msi->map_lock);
+
+       bitmap_release_region(msi->used, d->hwirq, order_base_2(nr_irqs));
+
+       mutex_unlock(&msi->map_lock);
 }

-static struct irq_chip rcar_msi_irq_chip = {
-       .name = "R-Car PCIe MSI",
-       .irq_enable = pci_msi_unmask_irq,
-       .irq_disable = pci_msi_mask_irq,
-       .irq_mask = pci_msi_mask_irq,
-       .irq_unmask = pci_msi_unmask_irq,
+static const struct irq_domain_ops rcar_msi_domain_ops = {
+       .alloc  = rcar_msi_domain_alloc,
+       .free   = rcar_msi_domain_free,
 };

-static int rcar_msi_map(struct irq_domain *domain, unsigned int irq,
-                       irq_hw_number_t hwirq)
+static struct msi_domain_info rcar_msi_info = {
+       .flags  = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
+                  MSI_FLAG_MULTI_PCI_MSI),
+       .chip   = &rcar_msi_top_chip,
+};
+
+static int rcar_allocate_domains(struct rcar_msi *msi)
 {
-       irq_set_chip_and_handler(irq, &rcar_msi_irq_chip, handle_simple_irq);
-       irq_set_chip_data(irq, domain->host_data);
+       struct rcar_pcie *pcie = &msi_to_host(msi)->pcie;
+       struct fwnode_handle *fwnode = dev_fwnode(pcie->dev);
+       struct irq_domain *parent;
+
+       parent = irq_domain_create_linear(fwnode, INT_PCI_MSI_NR,
+                                         &rcar_msi_domain_ops, msi);
+       if (!parent) {
+               dev_err(pcie->dev, "failed to create IRQ domain\n");
+               return -ENOMEM;
+       }
+       irq_domain_update_bus_token(parent, DOMAIN_BUS_NEXUS);
+
+       msi->domain = pci_msi_create_irq_domain(fwnode, &rcar_msi_info, parent);
+       if (!msi->domain) {
+               dev_err(pcie->dev, "failed to create MSI domain\n");
+               irq_domain_remove(parent);
+               return -ENOMEM;
+       }

        return 0;
 }

-static const struct irq_domain_ops msi_domain_ops = {
-       .map = rcar_msi_map,
-};
-
-static void rcar_pcie_unmap_msi(struct rcar_pcie_host *host)
+static void rcar_free_domains(struct rcar_msi *msi)
 {
-       struct rcar_msi *msi = &host->msi;
-       int i, irq;
-
-       for (i = 0; i < INT_PCI_MSI_NR; i++) {
-               irq = irq_find_mapping(msi->domain, i);
-               if (irq > 0)
-                       irq_dispose_mapping(irq);
-       }
+       struct irq_domain *parent = msi->domain->parent;

        irq_domain_remove(msi->domain);
-}
-
-static void rcar_pcie_hw_enable_msi(struct rcar_pcie_host *host)
-{
-       struct rcar_pcie *pcie = &host->pcie;
-       struct rcar_msi *msi = &host->msi;
-       unsigned long base;
-
-       /* setup MSI data target */
-       base = virt_to_phys((void *)msi->pages);
-
-       rcar_pci_write_reg(pcie, lower_32_bits(base) | MSIFE, PCIEMSIALR);
-       rcar_pci_write_reg(pcie,
upper_32_bits(base), PCIEMSIAUR); 647 - 648 - /* enable all MSI interrupts */ 649 - rcar_pci_write_reg(pcie, 0xffffffff, PCIEMSIIER); 713 + irq_domain_remove(parent); 650 714 } 651 715 652 716 static int rcar_pcie_enable_msi(struct rcar_pcie_host *host) ··· 674 698 struct rcar_pcie *pcie = &host->pcie; 675 699 struct device *dev = pcie->dev; 676 700 struct rcar_msi *msi = &host->msi; 677 - int err, i; 701 + struct resource res; 702 + int err; 678 703 679 - mutex_init(&msi->lock); 704 + mutex_init(&msi->map_lock); 705 + spin_lock_init(&msi->mask_lock); 680 706 681 - msi->chip.dev = dev; 682 - msi->chip.setup_irq = rcar_msi_setup_irq; 683 - msi->chip.setup_irqs = rcar_msi_setup_irqs; 684 - msi->chip.teardown_irq = rcar_msi_teardown_irq; 707 + err = of_address_to_resource(dev->of_node, 0, &res); 708 + if (err) 709 + return err; 685 710 686 - msi->domain = irq_domain_add_linear(dev->of_node, INT_PCI_MSI_NR, 687 - &msi_domain_ops, &msi->chip); 688 - if (!msi->domain) { 689 - dev_err(dev, "failed to create IRQ domain\n"); 690 - return -ENOMEM; 691 - } 692 - 693 - for (i = 0; i < INT_PCI_MSI_NR; i++) 694 - irq_create_mapping(msi->domain, i); 711 + err = rcar_allocate_domains(msi); 712 + if (err) 713 + return err; 695 714 696 715 /* Two irqs are for MSI, but they are also used for non-MSI irqs */ 697 716 err = devm_request_irq(dev, msi->irq1, rcar_pcie_msi_irq, 698 717 IRQF_SHARED | IRQF_NO_THREAD, 699 - rcar_msi_irq_chip.name, host); 718 + rcar_msi_bottom_chip.name, host); 700 719 if (err < 0) { 701 720 dev_err(dev, "failed to request IRQ: %d\n", err); 702 721 goto err; ··· 699 728 700 729 err = devm_request_irq(dev, msi->irq2, rcar_pcie_msi_irq, 701 730 IRQF_SHARED | IRQF_NO_THREAD, 702 - rcar_msi_irq_chip.name, host); 731 + rcar_msi_bottom_chip.name, host); 703 732 if (err < 0) { 704 733 dev_err(dev, "failed to request IRQ: %d\n", err); 705 734 goto err; 706 735 } 707 736 708 - /* setup MSI data target */ 709 - msi->pages = __get_free_pages(GFP_KERNEL | GFP_DMA32, 0); 
710 - rcar_pcie_hw_enable_msi(host); 737 + /* disable all MSIs */ 738 + rcar_pci_write_reg(pcie, 0, PCIEMSIIER); 739 + 740 + /* 741 + * Setup MSI data target using RC base address address, which 742 + * is guaranteed to be in the low 32bit range on any RCar HW. 743 + */ 744 + rcar_pci_write_reg(pcie, lower_32_bits(res.start) | MSIFE, PCIEMSIALR); 745 + rcar_pci_write_reg(pcie, upper_32_bits(res.start), PCIEMSIAUR); 711 746 712 747 return 0; 713 748 714 749 err: 715 - rcar_pcie_unmap_msi(host); 750 + rcar_free_domains(msi); 716 751 return err; 717 752 } 718 753 719 754 static void rcar_pcie_teardown_msi(struct rcar_pcie_host *host) 720 755 { 721 756 struct rcar_pcie *pcie = &host->pcie; 722 - struct rcar_msi *msi = &host->msi; 723 757 724 758 /* Disable all MSI interrupts */ 725 759 rcar_pci_write_reg(pcie, 0, PCIEMSIIER); ··· 732 756 /* Disable address decoding of the MSI interrupt, MSIFE */ 733 757 rcar_pci_write_reg(pcie, 0, PCIEMSIALR); 734 758 735 - free_pages(msi->pages, 0); 736 - 737 - rcar_pcie_unmap_msi(host); 759 + rcar_free_domains(&host->msi); 738 760 } 739 761 740 762 static int rcar_pcie_get_resources(struct rcar_pcie_host *host) ··· 985 1011 dev_info(dev, "PCIe x%d: link up\n", (data >> 20) & 0x3f); 986 1012 987 1013 /* Enable MSI */ 988 - if (IS_ENABLED(CONFIG_PCI_MSI)) 989 - rcar_pcie_hw_enable_msi(host); 1014 + if (IS_ENABLED(CONFIG_PCI_MSI)) { 1015 + struct resource res; 1016 + u32 val; 1017 + 1018 + of_address_to_resource(dev->of_node, 0, &res); 1019 + rcar_pci_write_reg(pcie, upper_32_bits(res.start), PCIEMSIAUR); 1020 + rcar_pci_write_reg(pcie, lower_32_bits(res.start) | MSIFE, PCIEMSIALR); 1021 + 1022 + bitmap_to_arr32(&val, host->msi.used, INT_PCI_MSI_NR); 1023 + rcar_pci_write_reg(pcie, val, PCIEMSIIER); 1024 + } 990 1025 991 1026 rcar_pcie_hw_enable(host); 992 1027
+7
drivers/pci/controller/pcie-xilinx-nwl.c
···
 
 /* Bridge core config registers */
 #define BRCFG_PCIE_RX0			0x00000000
+#define BRCFG_PCIE_RX1			0x00000004
 #define BRCFG_INTERRUPT			0x00000010
 #define BRCFG_PCIE_RX_MSG_FILTER	0x00000020
···
 #define NWL_ECAM_VALUE_DEFAULT		12
 
 #define CFG_DMA_REG_BAR			GENMASK(2, 0)
+#define CFG_PCIE_CACHE			GENMASK(7, 0)
 
 #define INT_PCI_MSI_NR			(2 * 32)
···
 	/* Enable msg filtering details */
 	nwl_bridge_writel(pcie, CFG_ENABLE_MSG_FILTER_MASK,
 			  BRCFG_PCIE_RX_MSG_FILTER);
+
+	/* This routes the PCIe DMA traffic to go through CCI path */
+	if (of_dma_is_coherent(dev->of_node))
+		nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, BRCFG_PCIE_RX1) |
+				  CFG_PCIE_CACHE, BRCFG_PCIE_RX1);
 
 	err = nwl_wait_for_link(pcie);
 	if (err)
+115 -153
drivers/pci/controller/pcie-xilinx.c
···
 /**
  * struct xilinx_pcie_port - PCIe port information
  * @reg_base: IO Mapped Register Base
- * @irq: Interrupt number
- * @msi_pages: MSI pages
  * @dev: Device pointer
+ * @msi_map: Bitmap of allocated MSIs
+ * @map_lock: Mutex protecting the MSI allocation
  * @msi_domain: MSI IRQ domain pointer
  * @leg_domain: Legacy IRQ domain pointer
  * @resources: Bus Resources
  */
 struct xilinx_pcie_port {
 	void __iomem *reg_base;
-	u32 irq;
-	unsigned long msi_pages;
 	struct device *dev;
+	unsigned long msi_map[BITS_TO_LONGS(XILINX_NUM_MSI_IRQS)];
+	struct mutex map_lock;
 	struct irq_domain *msi_domain;
 	struct irq_domain *leg_domain;
 	struct list_head resources;
 };
-
-static DECLARE_BITMAP(msi_irq_in_use, XILINX_NUM_MSI_IRQS);
 
 static inline u32 pcie_read(struct xilinx_pcie_port *port, u32 reg)
 {
···
 
 /* MSI functions */
 
-/**
- * xilinx_pcie_destroy_msi - Free MSI number
- * @irq: IRQ to be freed
- */
-static void xilinx_pcie_destroy_msi(unsigned int irq)
+static void xilinx_msi_top_irq_ack(struct irq_data *d)
 {
-	struct msi_desc *msi;
-	struct xilinx_pcie_port *port;
-	struct irq_data *d = irq_get_irq_data(irq);
-	irq_hw_number_t hwirq = irqd_to_hwirq(d);
-
-	if (!test_bit(hwirq, msi_irq_in_use)) {
-		msi = irq_get_msi_desc(irq);
-		port = msi_desc_to_pci_sysdata(msi);
-		dev_err(port->dev, "Trying to free unused MSI#%d\n", irq);
-	} else {
-		clear_bit(hwirq, msi_irq_in_use);
-	}
+	/*
+	 * xilinx_pcie_intr_handler() will have performed the Ack.
+	 * Eventually, this should be fixed and the Ack be moved in
+	 * the respective callbacks for INTx and MSI.
+	 */
 }
 
-/**
- * xilinx_pcie_assign_msi - Allocate MSI number
- *
- * Return: A valid IRQ on success and error value on failure.
- */
-static int xilinx_pcie_assign_msi(void)
-{
-	int pos;
+static struct irq_chip xilinx_msi_top_chip = {
+	.name		= "PCIe MSI",
+	.irq_ack	= xilinx_msi_top_irq_ack,
+};
 
-	pos = find_first_zero_bit(msi_irq_in_use, XILINX_NUM_MSI_IRQS);
-	if (pos < XILINX_NUM_MSI_IRQS)
-		set_bit(pos, msi_irq_in_use);
-	else
+static int xilinx_msi_set_affinity(struct irq_data *d, const struct cpumask *mask, bool force)
+{
+	return -EINVAL;
+}
+
+static void xilinx_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+{
+	struct xilinx_pcie_port *pcie = irq_data_get_irq_chip_data(data);
+	phys_addr_t pa = ALIGN_DOWN(virt_to_phys(pcie), SZ_4K);
+
+	msg->address_lo = lower_32_bits(pa);
+	msg->address_hi = upper_32_bits(pa);
+	msg->data = data->hwirq;
+}
+
+static struct irq_chip xilinx_msi_bottom_chip = {
+	.name			= "Xilinx MSI",
+	.irq_set_affinity	= xilinx_msi_set_affinity,
+	.irq_compose_msi_msg	= xilinx_compose_msi_msg,
+};
+
+static int xilinx_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
+				   unsigned int nr_irqs, void *args)
+{
+	struct xilinx_pcie_port *port = domain->host_data;
+	int hwirq, i;
+
+	mutex_lock(&port->map_lock);
+
+	hwirq = bitmap_find_free_region(port->msi_map, XILINX_NUM_MSI_IRQS, order_base_2(nr_irqs));
+
+	mutex_unlock(&port->map_lock);
+
+	if (hwirq < 0)
 		return -ENOSPC;
 
-	return pos;
-}
-
-/**
- * xilinx_msi_teardown_irq - Destroy the MSI
- * @chip: MSI Chip descriptor
- * @irq: MSI IRQ to destroy
- */
-static void xilinx_msi_teardown_irq(struct msi_controller *chip,
-				    unsigned int irq)
-{
-	xilinx_pcie_destroy_msi(irq);
-	irq_dispose_mapping(irq);
-}
-
-/**
- * xilinx_pcie_msi_setup_irq - Setup MSI request
- * @chip: MSI chip pointer
- * @pdev: PCIe device pointer
- * @desc: MSI descriptor pointer
- *
- * Return: '0' on success and error value on failure
- */
-static int xilinx_pcie_msi_setup_irq(struct msi_controller *chip,
-				     struct pci_dev *pdev,
-				     struct msi_desc *desc)
-{
-	struct xilinx_pcie_port *port = pdev->bus->sysdata;
-	unsigned int irq;
-	int hwirq;
-	struct msi_msg msg;
-	phys_addr_t msg_addr;
-
-	hwirq = xilinx_pcie_assign_msi();
-	if (hwirq < 0)
-		return hwirq;
-
-	irq = irq_create_mapping(port->msi_domain, hwirq);
-	if (!irq)
-		return -EINVAL;
-
-	irq_set_msi_desc(irq, desc);
-
-	msg_addr = virt_to_phys((void *)port->msi_pages);
-
-	msg.address_hi = 0;
-	msg.address_lo = msg_addr;
-	msg.data = irq;
-
-	pci_write_msi_msg(irq, &msg);
+	for (i = 0; i < nr_irqs; i++)
+		irq_domain_set_info(domain, virq + i, hwirq + i,
+				    &xilinx_msi_bottom_chip, domain->host_data,
+				    handle_edge_irq, NULL, NULL);
 
 	return 0;
 }
 
-/* MSI Chip Descriptor */
-static struct msi_controller xilinx_pcie_msi_chip = {
-	.setup_irq = xilinx_pcie_msi_setup_irq,
-	.teardown_irq = xilinx_msi_teardown_irq,
-};
-
-/* HW Interrupt Chip Descriptor */
-static struct irq_chip xilinx_msi_irq_chip = {
-	.name = "Xilinx PCIe MSI",
-	.irq_enable = pci_msi_unmask_irq,
-	.irq_disable = pci_msi_mask_irq,
-	.irq_mask = pci_msi_mask_irq,
-	.irq_unmask = pci_msi_unmask_irq,
-};
-
-/**
- * xilinx_pcie_msi_map - Set the handler for the MSI and mark IRQ as valid
- * @domain: IRQ domain
- * @irq: Virtual IRQ number
- * @hwirq: HW interrupt number
- *
- * Return: Always returns 0.
- */
-static int xilinx_pcie_msi_map(struct irq_domain *domain, unsigned int irq,
-			       irq_hw_number_t hwirq)
+static void xilinx_msi_domain_free(struct irq_domain *domain, unsigned int virq,
+				   unsigned int nr_irqs)
 {
-	irq_set_chip_and_handler(irq, &xilinx_msi_irq_chip, handle_simple_irq);
-	irq_set_chip_data(irq, domain->host_data);
+	struct irq_data *d = irq_domain_get_irq_data(domain, virq);
+	struct xilinx_pcie_port *port = domain->host_data;
 
-	return 0;
+	mutex_lock(&port->map_lock);
+
+	bitmap_release_region(port->msi_map, d->hwirq, order_base_2(nr_irqs));
+
+	mutex_unlock(&port->map_lock);
 }
 
-/* IRQ Domain operations */
-static const struct irq_domain_ops msi_domain_ops = {
-	.map = xilinx_pcie_msi_map,
+static const struct irq_domain_ops xilinx_msi_domain_ops = {
+	.alloc	= xilinx_msi_domain_alloc,
+	.free	= xilinx_msi_domain_free,
 };
 
-/**
- * xilinx_pcie_enable_msi - Enable MSI support
- * @port: PCIe port information
- */
-static int xilinx_pcie_enable_msi(struct xilinx_pcie_port *port)
-{
-	phys_addr_t msg_addr;
+static struct msi_domain_info xilinx_msi_info = {
+	.flags	= (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS),
+	.chip	= &xilinx_msi_top_chip,
+};
 
-	port->msi_pages = __get_free_pages(GFP_KERNEL, 0);
-	if (!port->msi_pages)
+static int xilinx_allocate_msi_domains(struct xilinx_pcie_port *pcie)
+{
+	struct fwnode_handle *fwnode = dev_fwnode(pcie->dev);
+	struct irq_domain *parent;
+
+	parent = irq_domain_create_linear(fwnode, XILINX_NUM_MSI_IRQS,
+					  &xilinx_msi_domain_ops, pcie);
+	if (!parent) {
+		dev_err(pcie->dev, "failed to create IRQ domain\n");
 		return -ENOMEM;
+	}
+	irq_domain_update_bus_token(parent, DOMAIN_BUS_NEXUS);
 
-	msg_addr = virt_to_phys((void *)port->msi_pages);
-	pcie_write(port, 0x0, XILINX_PCIE_REG_MSIBASE1);
-	pcie_write(port, msg_addr, XILINX_PCIE_REG_MSIBASE2);
+	pcie->msi_domain = pci_msi_create_irq_domain(fwnode, &xilinx_msi_info, parent);
+	if (!pcie->msi_domain) {
+		dev_err(pcie->dev, "failed to create MSI domain\n");
+		irq_domain_remove(parent);
+		return -ENOMEM;
+	}
 
 	return 0;
+}
+
+static void xilinx_free_msi_domains(struct xilinx_pcie_port *pcie)
+{
+	struct irq_domain *parent = pcie->msi_domain->parent;
+
+	irq_domain_remove(pcie->msi_domain);
+	irq_domain_remove(parent);
 }
 
 /* INTx Functions */
···
 	}
 
 	if (status & (XILINX_PCIE_INTR_INTX | XILINX_PCIE_INTR_MSI)) {
+		unsigned int irq;
+
 		val = pcie_read(port, XILINX_PCIE_REG_RPIFR1);
 
 		/* Check whether interrupt valid */
···
 		if (val & XILINX_PCIE_RPIFR1_MSI_INTR) {
 			val = pcie_read(port, XILINX_PCIE_REG_RPIFR2) &
 			      XILINX_PCIE_RPIFR2_MSG_DATA;
+			irq = irq_find_mapping(port->msi_domain->parent, val);
 		} else {
 			val = (val & XILINX_PCIE_RPIFR1_INTR_MASK) >>
 			      XILINX_PCIE_RPIFR1_INTR_SHIFT;
-			val = irq_find_mapping(port->leg_domain, val);
+			irq = irq_find_mapping(port->leg_domain, val);
 		}
 
 		/* Clear interrupt FIFO register 1 */
 		pcie_write(port, XILINX_PCIE_RPIFR1_ALL_MASK,
 			   XILINX_PCIE_REG_RPIFR1);
 
-		/* Handle the interrupt */
-		if (IS_ENABLED(CONFIG_PCI_MSI) ||
-		    !(val & XILINX_PCIE_RPIFR1_MSI_INTR))
-			generic_handle_irq(val);
+		if (irq)
+			generic_handle_irq(irq);
 	}
 
 	if (status & XILINX_PCIE_INTR_SLV_UNSUPP)
···
 static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
 {
 	struct device *dev = port->dev;
-	struct device_node *node = dev->of_node;
 	struct device_node *pcie_intc_node;
 	int ret;
 
 	/* Setup INTx */
-	pcie_intc_node = of_get_next_child(node, NULL);
+	pcie_intc_node = of_get_next_child(dev->of_node, NULL);
 	if (!pcie_intc_node) {
 		dev_err(dev, "No PCIe Intc node found\n");
 		return -ENODEV;
···
 
 	/* Setup MSI */
 	if (IS_ENABLED(CONFIG_PCI_MSI)) {
-		port->msi_domain = irq_domain_add_linear(node,
-							 XILINX_NUM_MSI_IRQS,
-							 &msi_domain_ops,
-							 &xilinx_pcie_msi_chip);
-		if (!port->msi_domain) {
-			dev_err(dev, "Failed to get a MSI IRQ domain\n");
-			return -ENODEV;
-		}
+		phys_addr_t pa = ALIGN_DOWN(virt_to_phys(port), SZ_4K);
 
-		ret = xilinx_pcie_enable_msi(port);
+		ret = xilinx_allocate_msi_domains(port);
 		if (ret)
 			return ret;
+
+		pcie_write(port, upper_32_bits(pa), XILINX_PCIE_REG_MSIBASE1);
+		pcie_write(port, lower_32_bits(pa), XILINX_PCIE_REG_MSIBASE2);
 	}
 
 	return 0;
···
 	struct device *dev = port->dev;
 	struct device_node *node = dev->of_node;
 	struct resource regs;
+	unsigned int irq;
 	int err;
 
 	err = of_address_to_resource(node, 0, &regs);
···
 	if (IS_ERR(port->reg_base))
 		return PTR_ERR(port->reg_base);
 
-	port->irq = irq_of_parse_and_map(node, 0);
-	err = devm_request_irq(dev, port->irq, xilinx_pcie_intr_handler,
+	irq = irq_of_parse_and_map(node, 0);
+	err = devm_request_irq(dev, irq, xilinx_pcie_intr_handler,
 			       IRQF_SHARED | IRQF_NO_THREAD,
 			       "xilinx-pcie", port);
 	if (err) {
-		dev_err(dev, "unable to request irq %d\n", port->irq);
+		dev_err(dev, "unable to request irq %d\n", irq);
 		return err;
 	}
···
 		return -ENODEV;
 
 	port = pci_host_bridge_priv(bridge);
-
+	mutex_init(&port->map_lock);
 	port->dev = dev;
 
 	err = xilinx_pcie_parse_dt(port);
···
 	bridge->sysdata = port;
 	bridge->ops = &xilinx_pcie_ops;
 
-#ifdef CONFIG_PCI_MSI
-	xilinx_pcie_msi_chip.dev = dev;
-	bridge->msi = &xilinx_pcie_msi_chip;
-#endif
-	return pci_host_probe(bridge);
+	err = pci_host_probe(bridge);
+	if (err)
+		xilinx_free_msi_domains(port);
+
+	return err;
 }
 
 static const struct of_device_id xilinx_pcie_of_match[] = {
+51 -12
drivers/pci/controller/vmd.c
···
 #define BUS_RESTRICT_CAP(vmcap)	(vmcap & 0x1)
 #define PCI_REG_VMCONFIG	0x44
 #define BUS_RESTRICT_CFG(vmcfg)	((vmcfg >> 8) & 0x3)
+#define VMCONFIG_MSI_REMAP	0x2
 #define PCI_REG_VMLOCK		0x70
 #define MB2_SHADOW_EN(vmlock)	(vmlock & 0x2)
···
 	 * be used for MSI remapping
 	 */
 	VMD_FEAT_OFFSET_FIRST_VECTOR		= (1 << 3),
+
+	/*
+	 * Device can bypass remapping MSI-X transactions into its MSI-X table,
+	 * avoiding the requirement of a VMD MSI domain for child device
+	 * interrupt handling.
+	 */
+	VMD_FEAT_CAN_BYPASS_MSI_REMAP		= (1 << 4),
 };
 
 /*
···
 	.chip		= &vmd_msi_controller,
 };
 
+static void vmd_set_msi_remapping(struct vmd_dev *vmd, bool enable)
+{
+	u16 reg;
+
+	pci_read_config_word(vmd->dev, PCI_REG_VMCONFIG, &reg);
+	reg = enable ? (reg & ~VMCONFIG_MSI_REMAP) :
+		       (reg | VMCONFIG_MSI_REMAP);
+	pci_write_config_word(vmd->dev, PCI_REG_VMCONFIG, reg);
+}
+
 static int vmd_create_irq_domain(struct vmd_dev *vmd)
 {
 	struct fwnode_handle *fn;
···
 
 static void vmd_remove_irq_domain(struct vmd_dev *vmd)
 {
+	/*
+	 * Some production BIOS won't enable remapping between soft reboots.
+	 * Ensure remapping is restored before unloading the driver.
+	 */
+	if (!vmd->msix_count)
+		vmd_set_msi_remapping(vmd, true);
+
 	if (vmd->irq_domain) {
 		struct fwnode_handle *fn = vmd->irq_domain->fwnode;
···
 
 	sd->node = pcibus_to_node(vmd->dev->bus);
 
-	ret = vmd_create_irq_domain(vmd);
-	if (ret)
-		return ret;
-
 	/*
-	 * Override the irq domain bus token so the domain can be distinguished
-	 * from a regular PCI/MSI domain.
+	 * Currently MSI remapping must be enabled in guest passthrough mode
+	 * due to some missing interrupt remapping plumbing. This is probably
+	 * acceptable because the guest is usually CPU-limited and MSI
+	 * remapping doesn't become a performance bottleneck.
 	 */
-	irq_domain_update_bus_token(vmd->irq_domain, DOMAIN_BUS_VMD_MSI);
+	if (!(features & VMD_FEAT_CAN_BYPASS_MSI_REMAP) ||
+	    offset[0] || offset[1]) {
+		ret = vmd_alloc_irqs(vmd);
+		if (ret)
+			return ret;
+
+		vmd_set_msi_remapping(vmd, true);
+
+		ret = vmd_create_irq_domain(vmd);
+		if (ret)
+			return ret;
+
+		/*
+		 * Override the IRQ domain bus token so the domain can be
+		 * distinguished from a regular PCI/MSI domain.
+		 */
+		irq_domain_update_bus_token(vmd->irq_domain, DOMAIN_BUS_VMD_MSI);
+	} else {
+		vmd_set_msi_remapping(vmd, false);
+	}
 
 	pci_add_resource(&resources, &vmd->resources[0]);
 	pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
···
 
 	if (features & VMD_FEAT_OFFSET_FIRST_VECTOR)
 		vmd->first_vec = 1;
-
-	err = vmd_alloc_irqs(vmd);
-	if (err)
-		return err;
 
 	spin_lock_init(&vmd->cfg_lock);
 	pci_set_drvdata(dev, vmd);
···
 		.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP,},
 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_28C0),
 		.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW |
-				VMD_FEAT_HAS_BUS_RESTRICTIONS,},
+				VMD_FEAT_HAS_BUS_RESTRICTIONS |
+				VMD_FEAT_CAN_BYPASS_MSI_REMAP,},
 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x467f),
 		.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
 			VMD_FEAT_HAS_BUS_RESTRICTIONS |
+11 -5
drivers/pci/endpoint/functions/pci-epf-ntb.c
···
 // SPDX-License-Identifier: GPL-2.0
-/**
+/*
  * Endpoint Function Driver to implement Non-Transparent Bridge functionality
  *
  * Copyright (C) 2020 Texas Instruments
···
 
 /**
  * epf_ntb_peer_spad_bar_clear() - Clear Peer Scratchpad BAR
- * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @ntb_epc: EPC associated with one of the HOST which holds peer's outbound
+ *	     address.
  *
  *+-----------------+------->+------------------+            +-----------------+
  *|       BAR0      |        |  CONFIG REGION   |            |       BAR0      |
···
 /**
  * epf_ntb_peer_spad_bar_set() - Set peer scratchpad BAR
  * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
  *
  *+-----------------+------->+------------------+            +-----------------+
  *|       BAR0      |        |  CONFIG REGION   |            |       BAR0      |
···
 
 /**
  * epf_ntb_config_sspad_bar_clear() - Clear Config + Self scratchpad BAR
- * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @ntb_epc: EPC associated with one of the HOST which holds peer's outbound
+ *	     address.
  *
  * +-----------------+------->+------------------+            +-----------------+
  * |       BAR0      |        |  CONFIG REGION   |            |       BAR0      |
···
 /**
  * epf_ntb_config_sspad_bar_set() - Set Config + Self scratchpad BAR
- * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @ntb_epc: EPC associated with one of the HOST which holds peer's outbound
+ *	     address.
  *
  * +-----------------+------->+------------------+            +-----------------+
  * |       BAR0      |        |  CONFIG REGION   |            |       BAR0      |
···
 
 /**
  * epf_ntb_alloc_peer_mem() - Allocate memory in peer's outbound address space
+ * @dev: The PCI device.
  * @ntb_epc: EPC associated with one of the HOST whose BAR holds peer's outbound
  *	     address
  * @bar: BAR of @ntb_epc in for which memory has to be allocated (could be
···
  * epf_ntb_init_epc_bar() - Identify BARs to be used for each of the NTB
  *			    constructs (scratchpad region, doorbell, memorywindow)
  * @ntb: NTB device that facilitates communication between HOST1 and HOST2
- * @type: PRIMARY interface or SECONDARY interface
  *
  * Wrapper to epf_ntb_init_epc_bar_interface() to identify the free BARs
  * to be used for each of BAR_CONFIG, BAR_PEER_SPAD, BAR_DB_MW1, BAR_MW2,
···
 /**
  * epf_ntb_add_cfs() - Add configfs directory specific to NTB
  * @epf: NTB endpoint function device
+ * @group: A pointer to the config_group structure referencing a group of
+ *	   config_items of a specific type that belong to a specific sub-system.
  *
  * Add configfs directory specific to NTB. This directory will hold
  * NTB specific properties like db_count, spad_count, num_mws etc.,
+14 -8
drivers/pci/endpoint/functions/pci-epf-test.c
···
 // SPDX-License-Identifier: GPL-2.0
-/**
+/*
  * Test driver to test endpoint functionality
  *
  * Copyright (C) 2017 Texas Instruments
···
 		return -EINVAL;
 
 	epc_features = pci_epc_get_features(epc, epf->func_no);
-	if (epc_features) {
-		linkup_notifier = epc_features->linkup_notifier;
-		core_init_notifier = epc_features->core_init_notifier;
-		test_reg_bar = pci_epc_get_first_free_bar(epc_features);
-		if (test_reg_bar < 0)
-			return -EINVAL;
-		pci_epf_configure_bar(epf, epc_features);
+	if (!epc_features) {
+		dev_err(&epf->dev, "epc_features not implemented\n");
+		return -EOPNOTSUPP;
 	}
+
+	linkup_notifier = epc_features->linkup_notifier;
+	core_init_notifier = epc_features->core_init_notifier;
+	test_reg_bar = pci_epc_get_first_free_bar(epc_features);
+	if (test_reg_bar < 0)
+		return -EINVAL;
+	pci_epf_configure_bar(epf, epc_features);
 
 	epf_test->test_reg_bar = test_reg_bar;
 	epf_test->epc_features = epc_features;
···
 
 	ret = pci_epf_register_driver(&test_driver);
 	if (ret) {
+		destroy_workqueue(kpcitest_workqueue);
 		pr_err("Failed to register pci epf test driver --> %d\n", ret);
 		return ret;
 	}
···
 
 static void __exit pci_epf_test_exit(void)
 {
+	if (kpcitest_workqueue)
+		destroy_workqueue(kpcitest_workqueue);
 	pci_epf_unregister_driver(&test_driver);
 }
 module_exit(pci_epf_test_exit);
+2
drivers/pci/endpoint/pci-epc-core.c
···
  * pci_epc_remove_epf() - remove PCI endpoint function from endpoint controller
  * @epc: the EPC device from which the endpoint function should be removed
  * @epf: the endpoint function to be removed
+ * @type: identifies if the EPC is connected to the primary or secondary
+ *	  interface of EPF
  *
  * Invoke to remove PCI endpoint function from the endpoint controller.
  */
+1 -1
drivers/pci/endpoint/pci-epf-core.c
···
 void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar,
 			enum pci_epc_interface_type type)
 {
-	struct device *dev = epf->epc->dev.parent;
+	struct device *dev;
 	struct pci_epf_bar *epf_bar;
 	struct pci_epc *epc;
+1 -1
drivers/pci/hotplug/acpi_pcihp.c
···
 }
 
 /**
- * acpi_pcihp_check_ejectable - check if handle is ejectable ACPI PCI slot
+ * acpi_pci_check_ejectable - check if handle is ejectable ACPI PCI slot
  * @pbus: the PCI bus of the PCI slot corresponding to 'handle'
  * @handle: ACPI handle to check
  *
+1 -2
drivers/pci/hotplug/acpiphp.h
···
  * ACPI has no generic method of setting/getting attention status
  * this allows for device specific driver registration
  */
-struct acpiphp_attention_info
-{
+struct acpiphp_attention_info {
 	int (*set_attn)(struct hotplug_slot *slot, u8 status);
 	int (*get_attn)(struct hotplug_slot *slot, u8 *status);
 	struct module *owner;
+1
drivers/pci/hotplug/acpiphp_glue.c
···
 			slot->flags &= ~SLOT_ENABLED;
 			continue;
 		}
+		pci_dev_put(dev);
 	}
 }
+1 -4
drivers/pci/hotplug/cpqphp_nvram.c
···
 static void __iomem *compaq_int15_entry_point;
 
 /* lock for ordering int15_bios_call() */
-static spinlock_t int15_lock;
+static DEFINE_SPINLOCK(int15_lock);
 
 
 /* This is a series of function that deals with
···
 	compaq_int15_entry_point = (rom_start + ROM_INT15_PHY_ADDR - ROM_PHY_ADDR);
 
 	dbg("int15 entry = %p\n", compaq_int15_entry_point);
-
-	/* initialize our int15 lock */
-	spin_lock_init(&int15_lock);
 }
-5
drivers/pci/hotplug/shpchp_hpc.c
···
 	return readb(ctrl->creg + reg);
 }
 
-static inline void shpc_writeb(struct controller *ctrl, int reg, u8 val)
-{
-	writeb(val, ctrl->creg + reg);
-}
-
 static inline u16 shpc_readw(struct controller *ctrl, int reg)
 {
 	return readw(ctrl->creg + reg);
+11 -34
drivers/pci/msi.c
···
 /* Arch hooks */
 int __weak arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *desc)
 {
-	struct msi_controller *chip = dev->bus->msi;
-	int err;
-
-	if (!chip || !chip->setup_irq)
-		return -EINVAL;
-
-	err = chip->setup_irq(chip, dev, desc);
-	if (err < 0)
-		return err;
-
-	irq_set_chip_data(desc->irq, chip);
-
-	return 0;
+	return -EINVAL;
 }
 
 void __weak arch_teardown_msi_irq(unsigned int irq)
 {
-	struct msi_controller *chip = irq_get_chip_data(irq);
-
-	if (!chip || !chip->teardown_irq)
-		return;
-
-	chip->teardown_irq(chip, irq);
 }
 
 int __weak arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 {
-	struct msi_controller *chip = dev->bus->msi;
 	struct msi_desc *entry;
 	int ret;
 
-	if (chip && chip->setup_irqs)
-		return chip->setup_irqs(chip, dev, nvec, type);
 	/*
 	 * If an architecture wants to support multiple MSI, it needs to
 	 * override arch_setup_msi_irqs()
···
 	return 0;
 }
 
-/*
- * We have a default implementation available as a separate non-weak
- * function, as it is used by the Xen x86 PCI code
- */
-void default_teardown_msi_irqs(struct pci_dev *dev)
+void __weak arch_teardown_msi_irqs(struct pci_dev *dev)
 {
 	int i;
 	struct msi_desc *entry;
···
 		if (entry->irq)
 			for (i = 0; i < entry->nvec_used; i++)
 				arch_teardown_msi_irq(entry->irq + i);
-}
-
-void __weak arch_teardown_msi_irqs(struct pci_dev *dev)
-{
-	return default_teardown_msi_irqs(dev);
 }
 #endif /* CONFIG_PCI_MSI_ARCH_FALLBACKS */
···
 	 * Any bridge which does NOT route MSI transactions from its
 	 * secondary bus to its primary bus must set NO_MSI flag on
 	 * the secondary pci_bus.
-	 * We expect only arch-specific PCI host bus controller driver
-	 * or quirks for specific PCI bridges to be setting NO_MSI.
+	 *
+	 * The NO_MSI flag can either be set directly by:
+	 * - arch-specific PCI host bus controller drivers (deprecated)
+	 * - quirks for specific PCI bridges
+	 *
+	 * or indirectly by platform-specific PCI host bridge drivers by
+	 * advertising the 'msi_domain' property, which results in
+	 * the NO_MSI flag when no MSI domain is found for this bridge
+	 * at probe time.
 	 */
 	for (bus = dev->bus; bus; bus = bus->parent)
 		if (bus->bus_flags & PCI_BUS_FLAGS_NO_MSI)
+18 -4
drivers/pci/of.c
··· 190 190 EXPORT_SYMBOL_GPL(of_pci_parse_bus_range); 191 191 192 192 /** 193 - * This function will try to obtain the host bridge domain number by 194 - * finding a property called "linux,pci-domain" of the given device node. 193 + * of_get_pci_domain_nr - Find the host bridge domain number 194 + * of the given device node. 195 + * @node: Device tree node with the domain information. 195 196 * 196 - * @node: device tree node with the domain information 197 + * This function will try to obtain the host bridge domain number by finding 198 + * a property called "linux,pci-domain" of the given device node. 199 + * 200 + * Return: 201 + * * > 0 - On success, an associated domain number. 202 + * * -EINVAL - The property "linux,pci-domain" does not exist. 203 + * * -ENODATA - The "linux,pci-domain" property does not have a value. 204 + * * -EOVERFLOW - Invalid "linux,pci-domain" property value. 197 205 * 198 206 * Returns the associated domain number from DT in the range [0-0xffff], or 199 207 * a negative value if the required property is not found. ··· 593 585 #endif /* CONFIG_PCI */ 594 586 595 587 /** 588 + * of_pci_get_max_link_speed - Find the maximum link speed of the given device node. 589 + * @node: Device tree node with the maximum link speed information. 590 + * 596 591 * This function will try to find the limitation of link speed by finding 597 592 * a property called "max-link-speed" of the given device node. 598 593 * 599 - * @node: device tree node with the max link speed information 594 + * Return: 595 + * * > 0 - On success, a maximum link speed. 596 + * * -EINVAL - Invalid "max-link-speed" property value, or failure to access 597 + * the property of the device tree node. 600 598 * 601 599 * Returns the associated max link speed from DT, or a negative value if the 602 600 * required property is not found or is invalid.
+1 -1
drivers/pci/pci-acpi.c
··· 1021 1021 1022 1022 if (!error) 1023 1023 pci_dbg(dev, "power state changed by ACPI to %s\n", 1024 - acpi_power_state_string(state_conv[state])); 1024 + acpi_power_state_string(adev->power.state)); 1025 1025 1026 1026 return error; 1027 1027 }
+75 -157
drivers/pci/pci-label.c
··· 33 33 #include <linux/pci-acpi.h> 34 34 #include "pci.h" 35 35 36 + static bool device_has_acpi_name(struct device *dev) 37 + { 38 + #ifdef CONFIG_ACPI 39 + acpi_handle handle = ACPI_HANDLE(dev); 40 + 41 + if (!handle) 42 + return false; 43 + 44 + return acpi_check_dsm(handle, &pci_acpi_dsm_guid, 0x2, 45 + 1 << DSM_PCI_DEVICE_NAME); 46 + #else 47 + return false; 48 + #endif 49 + } 50 + 36 51 #ifdef CONFIG_DMI 37 52 enum smbios_attr_enum { 38 53 SMBIOS_ATTR_NONE = 0, ··· 60 45 { 61 46 const struct dmi_device *dmi; 62 47 struct dmi_dev_onboard *donboard; 63 - int domain_nr; 64 - int bus; 65 - int devfn; 66 - 67 - domain_nr = pci_domain_nr(pdev->bus); 68 - bus = pdev->bus->number; 69 - devfn = pdev->devfn; 48 + int domain_nr = pci_domain_nr(pdev->bus); 49 + int bus = pdev->bus->number; 50 + int devfn = pdev->devfn; 70 51 71 52 dmi = NULL; 72 53 while ((dmi = dmi_find_device(DMI_DEV_TYPE_DEV_ONBOARD, ··· 73 62 donboard->devfn == devfn) { 74 63 if (buf) { 75 64 if (attribute == SMBIOS_ATTR_INSTANCE_SHOW) 76 - return scnprintf(buf, PAGE_SIZE, 77 - "%d\n", 78 - donboard->instance); 65 + return sysfs_emit(buf, "%d\n", 66 + donboard->instance); 79 67 else if (attribute == SMBIOS_ATTR_LABEL_SHOW) 80 - return scnprintf(buf, PAGE_SIZE, 81 - "%s\n", 82 - dmi->name); 68 + return sysfs_emit(buf, "%s\n", 69 + dmi->name); 83 70 } 84 71 return strlen(dmi->name); 85 72 } ··· 85 76 return 0; 86 77 } 87 78 88 - static umode_t smbios_instance_string_exist(struct kobject *kobj, 89 - struct attribute *attr, int n) 79 + static ssize_t smbios_label_show(struct device *dev, 80 + struct device_attribute *attr, char *buf) 90 81 { 91 - struct device *dev; 92 - struct pci_dev *pdev; 93 - 94 - dev = kobj_to_dev(kobj); 95 - pdev = to_pci_dev(dev); 96 - 97 - return find_smbios_instance_string(pdev, NULL, SMBIOS_ATTR_NONE) ? 
98 - S_IRUGO : 0; 99 - } 100 - 101 - static ssize_t smbioslabel_show(struct device *dev, 102 - struct device_attribute *attr, char *buf) 103 - { 104 - struct pci_dev *pdev; 105 - pdev = to_pci_dev(dev); 82 + struct pci_dev *pdev = to_pci_dev(dev); 106 83 107 84 return find_smbios_instance_string(pdev, buf, 108 85 SMBIOS_ATTR_LABEL_SHOW); 109 86 } 87 + static struct device_attribute dev_attr_smbios_label = __ATTR(label, 0444, 88 + smbios_label_show, NULL); 110 89 111 - static ssize_t smbiosinstance_show(struct device *dev, 112 - struct device_attribute *attr, char *buf) 90 + static ssize_t index_show(struct device *dev, struct device_attribute *attr, 91 + char *buf) 113 92 { 114 - struct pci_dev *pdev; 115 - pdev = to_pci_dev(dev); 93 + struct pci_dev *pdev = to_pci_dev(dev); 116 94 117 95 return find_smbios_instance_string(pdev, buf, 118 96 SMBIOS_ATTR_INSTANCE_SHOW); 119 97 } 98 + static DEVICE_ATTR_RO(index); 120 99 121 - static struct device_attribute smbios_attr_label = { 122 - .attr = {.name = "label", .mode = 0444}, 123 - .show = smbioslabel_show, 124 - }; 125 - 126 - static struct device_attribute smbios_attr_instance = { 127 - .attr = {.name = "index", .mode = 0444}, 128 - .show = smbiosinstance_show, 129 - }; 130 - 131 - static struct attribute *smbios_attributes[] = { 132 - &smbios_attr_label.attr, 133 - &smbios_attr_instance.attr, 100 + static struct attribute *smbios_attrs[] = { 101 + &dev_attr_smbios_label.attr, 102 + &dev_attr_index.attr, 134 103 NULL, 135 104 }; 136 105 137 - static const struct attribute_group smbios_attr_group = { 138 - .attrs = smbios_attributes, 139 - .is_visible = smbios_instance_string_exist, 106 + static umode_t smbios_attr_is_visible(struct kobject *kobj, struct attribute *a, 107 + int n) 108 + { 109 + struct device *dev = kobj_to_dev(kobj); 110 + struct pci_dev *pdev = to_pci_dev(dev); 111 + 112 + if (device_has_acpi_name(dev)) 113 + return 0; 114 + 115 + if (!find_smbios_instance_string(pdev, NULL, SMBIOS_ATTR_NONE)) 116 + 
return 0; 117 + 118 + return a->mode; 119 + } 120 + 121 + const struct attribute_group pci_dev_smbios_attr_group = { 122 + .attrs = smbios_attrs, 123 + .is_visible = smbios_attr_is_visible, 140 124 }; 141 - 142 - static int pci_create_smbiosname_file(struct pci_dev *pdev) 143 - { 144 - return sysfs_create_group(&pdev->dev.kobj, &smbios_attr_group); 145 - } 146 - 147 - static void pci_remove_smbiosname_file(struct pci_dev *pdev) 148 - { 149 - sysfs_remove_group(&pdev->dev.kobj, &smbios_attr_group); 150 - } 151 - #else 152 - static inline int pci_create_smbiosname_file(struct pci_dev *pdev) 153 - { 154 - return -1; 155 - } 156 - 157 - static inline void pci_remove_smbiosname_file(struct pci_dev *pdev) 158 - { 159 - } 160 125 #endif 161 126 162 127 #ifdef CONFIG_ACPI ··· 152 169 static int dsm_get_label(struct device *dev, char *buf, 153 170 enum acpi_attr_enum attr) 154 171 { 155 - acpi_handle handle; 172 + acpi_handle handle = ACPI_HANDLE(dev); 156 173 union acpi_object *obj, *tmp; 157 174 int len = -1; 158 175 159 - handle = ACPI_HANDLE(dev); 160 176 if (!handle) 161 177 return -1; 162 178 ··· 191 209 return len; 192 210 } 193 211 194 - static bool device_has_dsm(struct device *dev) 195 - { 196 - acpi_handle handle; 197 - 198 - handle = ACPI_HANDLE(dev); 199 - if (!handle) 200 - return false; 201 - 202 - return !!acpi_check_dsm(handle, &pci_acpi_dsm_guid, 0x2, 203 - 1 << DSM_PCI_DEVICE_NAME); 204 - } 205 - 206 - static umode_t acpi_index_string_exist(struct kobject *kobj, 207 - struct attribute *attr, int n) 208 - { 209 - struct device *dev; 210 - 211 - dev = kobj_to_dev(kobj); 212 - 213 - if (device_has_dsm(dev)) 214 - return S_IRUGO; 215 - 216 - return 0; 217 - } 218 - 219 - static ssize_t acpilabel_show(struct device *dev, 220 - struct device_attribute *attr, char *buf) 212 + static ssize_t label_show(struct device *dev, struct device_attribute *attr, 213 + char *buf) 221 214 { 222 215 return dsm_get_label(dev, buf, ACPI_ATTR_LABEL_SHOW); 223 216 } 217 + static 
DEVICE_ATTR_RO(label); 224 218 225 - static ssize_t acpiindex_show(struct device *dev, 219 + static ssize_t acpi_index_show(struct device *dev, 226 220 struct device_attribute *attr, char *buf) 227 221 { 228 222 return dsm_get_label(dev, buf, ACPI_ATTR_INDEX_SHOW); 229 223 } 224 + static DEVICE_ATTR_RO(acpi_index); 230 225 231 - static struct device_attribute acpi_attr_label = { 232 - .attr = {.name = "label", .mode = 0444}, 233 - .show = acpilabel_show, 234 - }; 235 - 236 - static struct device_attribute acpi_attr_index = { 237 - .attr = {.name = "acpi_index", .mode = 0444}, 238 - .show = acpiindex_show, 239 - }; 240 - 241 - static struct attribute *acpi_attributes[] = { 242 - &acpi_attr_label.attr, 243 - &acpi_attr_index.attr, 226 + static struct attribute *acpi_attrs[] = { 227 + &dev_attr_label.attr, 228 + &dev_attr_acpi_index.attr, 244 229 NULL, 245 230 }; 246 231 247 - static const struct attribute_group acpi_attr_group = { 248 - .attrs = acpi_attributes, 249 - .is_visible = acpi_index_string_exist, 232 + static umode_t acpi_attr_is_visible(struct kobject *kobj, struct attribute *a, 233 + int n) 234 + { 235 + struct device *dev = kobj_to_dev(kobj); 236 + 237 + if (!device_has_acpi_name(dev)) 238 + return 0; 239 + 240 + return a->mode; 241 + } 242 + 243 + const struct attribute_group pci_dev_acpi_attr_group = { 244 + .attrs = acpi_attrs, 245 + .is_visible = acpi_attr_is_visible, 250 246 }; 251 - 252 - static int pci_create_acpi_index_label_files(struct pci_dev *pdev) 253 - { 254 - return sysfs_create_group(&pdev->dev.kobj, &acpi_attr_group); 255 - } 256 - 257 - static int pci_remove_acpi_index_label_files(struct pci_dev *pdev) 258 - { 259 - sysfs_remove_group(&pdev->dev.kobj, &acpi_attr_group); 260 - return 0; 261 - } 262 - #else 263 - static inline int pci_create_acpi_index_label_files(struct pci_dev *pdev) 264 - { 265 - return -1; 266 - } 267 - 268 - static inline int pci_remove_acpi_index_label_files(struct pci_dev *pdev) 269 - { 270 - return -1; 271 - } 272 
- 273 - static inline bool device_has_dsm(struct device *dev) 274 - { 275 - return false; 276 - } 277 247 #endif 278 - 279 - void pci_create_firmware_label_files(struct pci_dev *pdev) 280 - { 281 - if (device_has_dsm(&pdev->dev)) 282 - pci_create_acpi_index_label_files(pdev); 283 - else 284 - pci_create_smbiosname_file(pdev); 285 - } 286 - 287 - void pci_remove_firmware_label_files(struct pci_dev *pdev) 288 - { 289 - if (device_has_dsm(&pdev->dev)) 290 - pci_remove_acpi_index_label_files(pdev); 291 - else 292 - pci_remove_smbiosname_file(pdev); 293 - }
+109 -151
drivers/pci/pci-sysfs.c
··· 39 39 struct pci_dev *pdev; \ 40 40 \ 41 41 pdev = to_pci_dev(dev); \ 42 - return sprintf(buf, format_string, pdev->field); \ 42 + return sysfs_emit(buf, format_string, pdev->field); \ 43 43 } \ 44 44 static DEVICE_ATTR_RO(field) 45 45 ··· 56 56 char *buf) 57 57 { 58 58 struct pci_dev *pdev = to_pci_dev(dev); 59 - return sprintf(buf, "%u\n", pdev->broken_parity_status); 59 + return sysfs_emit(buf, "%u\n", pdev->broken_parity_status); 60 60 } 61 61 62 62 static ssize_t broken_parity_status_store(struct device *dev, ··· 129 129 { 130 130 struct pci_dev *pdev = to_pci_dev(dev); 131 131 132 - return sprintf(buf, "%s\n", pci_power_name(pdev->current_state)); 132 + return sysfs_emit(buf, "%s\n", pci_power_name(pdev->current_state)); 133 133 } 134 134 static DEVICE_ATTR_RO(power_state); 135 135 ··· 138 138 char *buf) 139 139 { 140 140 struct pci_dev *pci_dev = to_pci_dev(dev); 141 - char *str = buf; 142 141 int i; 143 142 int max; 144 143 resource_size_t start, end; 144 + size_t len = 0; 145 145 146 146 if (pci_dev->subordinate) 147 147 max = DEVICE_COUNT_RESOURCE; ··· 151 151 for (i = 0; i < max; i++) { 152 152 struct resource *res = &pci_dev->resource[i]; 153 153 pci_resource_to_user(pci_dev, i, res, &start, &end); 154 - str += sprintf(str, "0x%016llx 0x%016llx 0x%016llx\n", 155 - (unsigned long long)start, 156 - (unsigned long long)end, 157 - (unsigned long long)res->flags); 154 + len += sysfs_emit_at(buf, len, "0x%016llx 0x%016llx 0x%016llx\n", 155 + (unsigned long long)start, 156 + (unsigned long long)end, 157 + (unsigned long long)res->flags); 158 158 } 159 - return (str - buf); 159 + return len; 160 160 } 161 161 static DEVICE_ATTR_RO(resource); 162 162 ··· 165 165 { 166 166 struct pci_dev *pdev = to_pci_dev(dev); 167 167 168 - return sprintf(buf, "%s\n", 169 - pci_speed_string(pcie_get_speed_cap(pdev))); 168 + return sysfs_emit(buf, "%s\n", 169 + pci_speed_string(pcie_get_speed_cap(pdev))); 170 170 } 171 171 static DEVICE_ATTR_RO(max_link_speed); 172 172 ··· 
175 175 { 176 176 struct pci_dev *pdev = to_pci_dev(dev); 177 177 178 - return sprintf(buf, "%u\n", pcie_get_width_cap(pdev)); 178 + return sysfs_emit(buf, "%u\n", pcie_get_width_cap(pdev)); 179 179 } 180 180 static DEVICE_ATTR_RO(max_link_width); 181 181 ··· 193 193 194 194 speed = pcie_link_speed[linkstat & PCI_EXP_LNKSTA_CLS]; 195 195 196 - return sprintf(buf, "%s\n", pci_speed_string(speed)); 196 + return sysfs_emit(buf, "%s\n", pci_speed_string(speed)); 197 197 } 198 198 static DEVICE_ATTR_RO(current_link_speed); 199 199 ··· 208 208 if (err) 209 209 return -EINVAL; 210 210 211 - return sprintf(buf, "%u\n", 211 + return sysfs_emit(buf, "%u\n", 212 212 (linkstat & PCI_EXP_LNKSTA_NLW) >> PCI_EXP_LNKSTA_NLW_SHIFT); 213 213 } 214 214 static DEVICE_ATTR_RO(current_link_width); ··· 225 225 if (err) 226 226 return -EINVAL; 227 227 228 - return sprintf(buf, "%u\n", sec_bus); 228 + return sysfs_emit(buf, "%u\n", sec_bus); 229 229 } 230 230 static DEVICE_ATTR_RO(secondary_bus_number); 231 231 ··· 241 241 if (err) 242 242 return -EINVAL; 243 243 244 - return sprintf(buf, "%u\n", sub_bus); 244 + return sysfs_emit(buf, "%u\n", sub_bus); 245 245 } 246 246 static DEVICE_ATTR_RO(subordinate_bus_number); 247 247 ··· 251 251 { 252 252 struct pci_dev *pci_dev = to_pci_dev(dev); 253 253 254 - return sprintf(buf, "%u\n", pci_ari_enabled(pci_dev->bus)); 254 + return sysfs_emit(buf, "%u\n", pci_ari_enabled(pci_dev->bus)); 255 255 } 256 256 static DEVICE_ATTR_RO(ari_enabled); 257 257 ··· 260 260 { 261 261 struct pci_dev *pci_dev = to_pci_dev(dev); 262 262 263 - return sprintf(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02X\n", 264 - pci_dev->vendor, pci_dev->device, 265 - pci_dev->subsystem_vendor, pci_dev->subsystem_device, 266 - (u8)(pci_dev->class >> 16), (u8)(pci_dev->class >> 8), 267 - (u8)(pci_dev->class)); 263 + return sysfs_emit(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02X\n", 264 + pci_dev->vendor, pci_dev->device, 265 + pci_dev->subsystem_vendor, 
pci_dev->subsystem_device, 266 + (u8)(pci_dev->class >> 16), (u8)(pci_dev->class >> 8), 267 + (u8)(pci_dev->class)); 268 268 } 269 269 static DEVICE_ATTR_RO(modalias); 270 270 ··· 302 302 struct pci_dev *pdev; 303 303 304 304 pdev = to_pci_dev(dev); 305 - return sprintf(buf, "%u\n", atomic_read(&pdev->enable_cnt)); 305 + return sysfs_emit(buf, "%u\n", atomic_read(&pdev->enable_cnt)); 306 306 } 307 307 static DEVICE_ATTR_RW(enable); 308 308 ··· 338 338 static ssize_t numa_node_show(struct device *dev, struct device_attribute *attr, 339 339 char *buf) 340 340 { 341 - return sprintf(buf, "%d\n", dev->numa_node); 341 + return sysfs_emit(buf, "%d\n", dev->numa_node); 342 342 } 343 343 static DEVICE_ATTR_RW(numa_node); 344 344 #endif ··· 348 348 { 349 349 struct pci_dev *pdev = to_pci_dev(dev); 350 350 351 - return sprintf(buf, "%d\n", fls64(pdev->dma_mask)); 351 + return sysfs_emit(buf, "%d\n", fls64(pdev->dma_mask)); 352 352 } 353 353 static DEVICE_ATTR_RO(dma_mask_bits); 354 354 ··· 356 356 struct device_attribute *attr, 357 357 char *buf) 358 358 { 359 - return sprintf(buf, "%d\n", fls64(dev->coherent_dma_mask)); 359 + return sysfs_emit(buf, "%d\n", fls64(dev->coherent_dma_mask)); 360 360 } 361 361 static DEVICE_ATTR_RO(consistent_dma_mask_bits); 362 362 ··· 366 366 struct pci_dev *pdev = to_pci_dev(dev); 367 367 struct pci_bus *subordinate = pdev->subordinate; 368 368 369 - return sprintf(buf, "%u\n", subordinate ? 370 - !(subordinate->bus_flags & PCI_BUS_FLAGS_NO_MSI) 371 - : !pdev->no_msi); 369 + return sysfs_emit(buf, "%u\n", subordinate ? 
370 + !(subordinate->bus_flags & PCI_BUS_FLAGS_NO_MSI) 371 + : !pdev->no_msi); 372 372 } 373 373 374 374 static ssize_t msi_bus_store(struct device *dev, struct device_attribute *attr, ··· 523 523 struct device_attribute *attr, char *buf) 524 524 { 525 525 struct pci_dev *pdev = to_pci_dev(dev); 526 - return sprintf(buf, "%u\n", pdev->d3cold_allowed); 526 + return sysfs_emit(buf, "%u\n", pdev->d3cold_allowed); 527 527 } 528 528 static DEVICE_ATTR_RW(d3cold_allowed); 529 529 #endif ··· 537 537 538 538 if (np == NULL) 539 539 return 0; 540 - return sprintf(buf, "%pOF", np); 540 + return sysfs_emit(buf, "%pOF", np); 541 541 } 542 542 static DEVICE_ATTR_RO(devspec); 543 543 #endif ··· 583 583 ssize_t len; 584 584 585 585 device_lock(dev); 586 - len = scnprintf(buf, PAGE_SIZE, "%s\n", pdev->driver_override); 586 + len = sysfs_emit(buf, "%s\n", pdev->driver_override); 587 587 device_unlock(dev); 588 588 return len; 589 589 } ··· 658 658 struct pci_dev *vga_dev = vga_default_device(); 659 659 660 660 if (vga_dev) 661 - return sprintf(buf, "%u\n", (pdev == vga_dev)); 661 + return sysfs_emit(buf, "%u\n", (pdev == vga_dev)); 662 662 663 - return sprintf(buf, "%u\n", 664 - !!(pdev->resource[PCI_ROM_RESOURCE].flags & 665 - IORESOURCE_ROM_SHADOW)); 663 + return sysfs_emit(buf, "%u\n", 664 + !!(pdev->resource[PCI_ROM_RESOURCE].flags & 665 + IORESOURCE_ROM_SHADOW)); 666 666 } 667 667 static DEVICE_ATTR_RO(boot_vga); 668 668 ··· 808 808 809 809 return count; 810 810 } 811 + static BIN_ATTR(config, 0644, pci_read_config, pci_write_config, 0); 812 + 813 + static struct bin_attribute *pci_dev_config_attrs[] = { 814 + &bin_attr_config, 815 + NULL, 816 + }; 817 + 818 + static umode_t pci_dev_config_attr_is_visible(struct kobject *kobj, 819 + struct bin_attribute *a, int n) 820 + { 821 + struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); 822 + 823 + a->size = PCI_CFG_SPACE_SIZE; 824 + if (pdev->cfg_size > PCI_CFG_SPACE_SIZE) 825 + a->size = PCI_CFG_SPACE_EXP_SIZE; 826 + 827 + return 
a->attr.mode; 828 + } 829 + 830 + static const struct attribute_group pci_dev_config_attr_group = { 831 + .bin_attrs = pci_dev_config_attrs, 832 + .is_bin_visible = pci_dev_config_attr_is_visible, 833 + }; 811 834 812 835 #ifdef HAVE_PCI_LEGACY 813 836 /** ··· 1306 1283 1307 1284 return count; 1308 1285 } 1286 + static BIN_ATTR(rom, 0600, pci_read_rom, pci_write_rom, 0); 1309 1287 1310 - static const struct bin_attribute pci_config_attr = { 1311 - .attr = { 1312 - .name = "config", 1313 - .mode = 0644, 1314 - }, 1315 - .size = PCI_CFG_SPACE_SIZE, 1316 - .read = pci_read_config, 1317 - .write = pci_write_config, 1288 + static struct bin_attribute *pci_dev_rom_attrs[] = { 1289 + &bin_attr_rom, 1290 + NULL, 1318 1291 }; 1319 1292 1320 - static const struct bin_attribute pcie_config_attr = { 1321 - .attr = { 1322 - .name = "config", 1323 - .mode = 0644, 1324 - }, 1325 - .size = PCI_CFG_SPACE_EXP_SIZE, 1326 - .read = pci_read_config, 1327 - .write = pci_write_config, 1293 + static umode_t pci_dev_rom_attr_is_visible(struct kobject *kobj, 1294 + struct bin_attribute *a, int n) 1295 + { 1296 + struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); 1297 + size_t rom_size; 1298 + 1299 + /* If the device has a ROM, try to expose it in sysfs. 
*/ 1300 + rom_size = pci_resource_len(pdev, PCI_ROM_RESOURCE); 1301 + if (!rom_size) 1302 + return 0; 1303 + 1304 + a->size = rom_size; 1305 + 1306 + return a->attr.mode; 1307 + } 1308 + 1309 + static const struct attribute_group pci_dev_rom_attr_group = { 1310 + .bin_attrs = pci_dev_rom_attrs, 1311 + .is_bin_visible = pci_dev_rom_attr_is_visible, 1328 1312 }; 1329 1313 1330 1314 static ssize_t reset_store(struct device *dev, struct device_attribute *attr, ··· 1355 1325 1356 1326 return count; 1357 1327 } 1328 + static DEVICE_ATTR_WO(reset); 1358 1329 1359 - static DEVICE_ATTR(reset, 0200, NULL, reset_store); 1330 + static struct attribute *pci_dev_reset_attrs[] = { 1331 + &dev_attr_reset.attr, 1332 + NULL, 1333 + }; 1360 1334 1361 - static int pci_create_capabilities_sysfs(struct pci_dev *dev) 1335 + static umode_t pci_dev_reset_attr_is_visible(struct kobject *kobj, 1336 + struct attribute *a, int n) 1362 1337 { 1363 - int retval; 1338 + struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); 1364 1339 1365 - pcie_vpd_create_sysfs_dev_files(dev); 1340 + if (!pdev->reset_fn) 1341 + return 0; 1366 1342 1367 - if (dev->reset_fn) { 1368 - retval = device_create_file(&dev->dev, &dev_attr_reset); 1369 - if (retval) 1370 - goto error; 1371 - } 1372 - return 0; 1373 - 1374 - error: 1375 - pcie_vpd_remove_sysfs_dev_files(dev); 1376 - return retval; 1343 + return a->mode; 1377 1344 } 1345 + 1346 + static const struct attribute_group pci_dev_reset_attr_group = { 1347 + .attrs = pci_dev_reset_attrs, 1348 + .is_visible = pci_dev_reset_attr_is_visible, 1349 + }; 1378 1350 1379 1351 int __must_check pci_create_sysfs_dev_files(struct pci_dev *pdev) 1380 1352 { 1381 - int retval; 1382 - int rom_size; 1383 - struct bin_attribute *attr; 1384 - 1385 1353 if (!sysfs_initialized) 1386 1354 return -EACCES; 1387 1355 1388 - if (pdev->cfg_size > PCI_CFG_SPACE_SIZE) 1389 - retval = sysfs_create_bin_file(&pdev->dev.kobj, &pcie_config_attr); 1390 - else 1391 - retval = 
sysfs_create_bin_file(&pdev->dev.kobj, &pci_config_attr); 1392 - if (retval) 1393 - goto err; 1394 - 1395 - retval = pci_create_resource_files(pdev); 1396 - if (retval) 1397 - goto err_config_file; 1398 - 1399 - /* If the device has a ROM, try to expose it in sysfs. */ 1400 - rom_size = pci_resource_len(pdev, PCI_ROM_RESOURCE); 1401 - if (rom_size) { 1402 - attr = kzalloc(sizeof(*attr), GFP_ATOMIC); 1403 - if (!attr) { 1404 - retval = -ENOMEM; 1405 - goto err_resource_files; 1406 - } 1407 - sysfs_bin_attr_init(attr); 1408 - attr->size = rom_size; 1409 - attr->attr.name = "rom"; 1410 - attr->attr.mode = 0600; 1411 - attr->read = pci_read_rom; 1412 - attr->write = pci_write_rom; 1413 - retval = sysfs_create_bin_file(&pdev->dev.kobj, attr); 1414 - if (retval) { 1415 - kfree(attr); 1416 - goto err_resource_files; 1417 - } 1418 - pdev->rom_attr = attr; 1419 - } 1420 - 1421 - /* add sysfs entries for various capabilities */ 1422 - retval = pci_create_capabilities_sysfs(pdev); 1423 - if (retval) 1424 - goto err_rom_file; 1425 - 1426 - pci_create_firmware_label_files(pdev); 1427 - 1428 - return 0; 1429 - 1430 - err_rom_file: 1431 - if (pdev->rom_attr) { 1432 - sysfs_remove_bin_file(&pdev->dev.kobj, pdev->rom_attr); 1433 - kfree(pdev->rom_attr); 1434 - pdev->rom_attr = NULL; 1435 - } 1436 - err_resource_files: 1437 - pci_remove_resource_files(pdev); 1438 - err_config_file: 1439 - if (pdev->cfg_size > PCI_CFG_SPACE_SIZE) 1440 - sysfs_remove_bin_file(&pdev->dev.kobj, &pcie_config_attr); 1441 - else 1442 - sysfs_remove_bin_file(&pdev->dev.kobj, &pci_config_attr); 1443 - err: 1444 - return retval; 1445 - } 1446 - 1447 - static void pci_remove_capabilities_sysfs(struct pci_dev *dev) 1448 - { 1449 - pcie_vpd_remove_sysfs_dev_files(dev); 1450 - if (dev->reset_fn) { 1451 - device_remove_file(&dev->dev, &dev_attr_reset); 1452 - dev->reset_fn = 0; 1453 - } 1356 + return pci_create_resource_files(pdev); 1454 1357 } 1455 1358 1456 1359 /** ··· 1397 1434 if (!sysfs_initialized) 1398 
1435 return; 1399 1436 1400 - pci_remove_capabilities_sysfs(pdev); 1401 - 1402 - if (pdev->cfg_size > PCI_CFG_SPACE_SIZE) 1403 - sysfs_remove_bin_file(&pdev->dev.kobj, &pcie_config_attr); 1404 - else 1405 - sysfs_remove_bin_file(&pdev->dev.kobj, &pci_config_attr); 1406 - 1407 1437 pci_remove_resource_files(pdev); 1408 - 1409 - if (pdev->rom_attr) { 1410 - sysfs_remove_bin_file(&pdev->dev.kobj, pdev->rom_attr); 1411 - kfree(pdev->rom_attr); 1412 - pdev->rom_attr = NULL; 1413 - } 1414 - 1415 - pci_remove_firmware_label_files(pdev); 1416 1438 } 1417 1439 1418 1440 static int __init pci_sysfs_init(void) ··· 1488 1540 1489 1541 const struct attribute_group *pci_dev_groups[] = { 1490 1542 &pci_dev_group, 1543 + &pci_dev_config_attr_group, 1544 + &pci_dev_rom_attr_group, 1545 + &pci_dev_reset_attr_group, 1546 + &pci_dev_vpd_attr_group, 1547 + #ifdef CONFIG_DMI 1548 + &pci_dev_smbios_attr_group, 1549 + #endif 1550 + #ifdef CONFIG_ACPI 1551 + &pci_dev_acpi_attr_group, 1552 + #endif 1491 1553 NULL, 1492 1554 }; 1493 1555
+18
drivers/pci/pci.c
··· 4072 4072 4073 4073 return address; 4074 4074 } 4075 + EXPORT_SYMBOL_GPL(pci_pio_to_address); 4075 4076 4076 4077 unsigned long __weak pci_address_to_pio(phys_addr_t address) 4077 4078 { ··· 4473 4472 #endif 4474 4473 } 4475 4474 EXPORT_SYMBOL(pci_clear_mwi); 4475 + 4476 + /** 4477 + * pci_disable_parity - disable parity checking for device 4478 + * @dev: the PCI device to operate on 4479 + * 4480 + * Disable parity checking for device @dev 4481 + */ 4482 + void pci_disable_parity(struct pci_dev *dev) 4483 + { 4484 + u16 cmd; 4485 + 4486 + pci_read_config_word(dev, PCI_COMMAND, &cmd); 4487 + if (cmd & PCI_COMMAND_PARITY) { 4488 + cmd &= ~PCI_COMMAND_PARITY; 4489 + pci_write_config_word(dev, PCI_COMMAND, cmd); 4490 + } 4491 + } 4476 4492 4477 4493 /** 4478 4494 * pci_intx - enables/disables PCI INTx for device dev
+12 -12
drivers/pci/pci.h
··· 21 21 22 22 int pci_create_sysfs_dev_files(struct pci_dev *pdev); 23 23 void pci_remove_sysfs_dev_files(struct pci_dev *pdev); 24 - #if !defined(CONFIG_DMI) && !defined(CONFIG_ACPI) 25 - static inline void pci_create_firmware_label_files(struct pci_dev *pdev) 26 - { return; } 27 - static inline void pci_remove_firmware_label_files(struct pci_dev *pdev) 28 - { return; } 29 - #else 30 - void pci_create_firmware_label_files(struct pci_dev *pdev); 31 - void pci_remove_firmware_label_files(struct pci_dev *pdev); 32 - #endif 33 24 void pci_cleanup_rom(struct pci_dev *dev); 25 + #ifdef CONFIG_DMI 26 + extern const struct attribute_group pci_dev_smbios_attr_group; 27 + #endif 34 28 35 29 enum pci_mmap_api { 36 30 PCI_MMAP_SYSFS, /* mmap on /sys/bus/pci/devices/<BDF>/resource<N> */ ··· 135 141 type == PCI_EXP_TYPE_PCIE_BRIDGE; 136 142 } 137 143 138 - int pci_vpd_init(struct pci_dev *dev); 144 + void pci_vpd_init(struct pci_dev *dev); 139 145 void pci_vpd_release(struct pci_dev *dev); 140 - void pcie_vpd_create_sysfs_dev_files(struct pci_dev *dev); 141 - void pcie_vpd_remove_sysfs_dev_files(struct pci_dev *dev); 146 + extern const struct attribute_group pci_dev_vpd_attr_group; 142 147 143 148 /* PCI Virtual Channel */ 144 149 int pci_save_vc_state(struct pci_dev *dev); ··· 618 625 #if defined(CONFIG_PCI_QUIRKS) && defined(CONFIG_ARM64) 619 626 int acpi_get_rc_resources(struct device *dev, const char *hid, u16 segment, 620 627 struct resource *res); 628 + #else 629 + static inline int acpi_get_rc_resources(struct device *dev, const char *hid, 630 + u16 segment, struct resource *res) 631 + { 632 + return -ENODEV; 633 + } 621 634 #endif 622 635 623 636 int pci_rebar_get_current_size(struct pci_dev *pdev, int bar); ··· 696 697 697 698 #ifdef CONFIG_ACPI 698 699 int pci_acpi_program_hp_params(struct pci_dev *dev); 700 + extern const struct attribute_group pci_dev_acpi_attr_group; 699 701 #else 700 702 static inline int pci_acpi_program_hp_params(struct pci_dev *dev) 701 703 {
+3 -3
drivers/pci/pcie/aer.c
··· 129 129 }; 130 130 131 131 /** 132 - * enable_ercr_checking - enable PCIe ECRC checking for a device 132 + * enable_ecrc_checking - enable PCIe ECRC checking for a device 133 133 * @dev: the PCI device 134 134 * 135 135 * Returns 0 on success, or negative on failure. ··· 153 153 } 154 154 155 155 /** 156 - * disable_ercr_checking - disables PCIe ECRC checking for a device 156 + * disable_ecrc_checking - disables PCIe ECRC checking for a device 157 157 * @dev: the PCI device 158 158 * 159 159 * Returns 0 on success, or negative on failure. ··· 1442 1442 }; 1443 1443 1444 1444 /** 1445 - * aer_service_init - register AER root service driver 1445 + * pcie_aer_init - register AER root service driver 1446 1446 * 1447 1447 * Invoked when AER root service driver is loaded. 1448 1448 */
+1 -1
drivers/pci/pcie/pme.c
··· 463 463 }; 464 464 465 465 /** 466 - * pcie_pme_service_init - Register the PCIe PME service driver. 466 + * pcie_pme_init - Register the PCIe PME service driver. 467 467 */ 468 468 int __init pcie_pme_init(void) 469 469 {
+1 -1
drivers/pci/pcie/rcec.c
··· 32 32 33 33 /* Same bus, so check bitmap */ 34 34 for_each_set_bit(devn, &bitmap, 32) 35 - if (devn == rciep->devfn) 35 + if (devn == PCI_SLOT(rciep->devfn)) 36 36 return true; 37 37 38 38 return false;
+3 -2
drivers/pci/probe.c
··· 895 895 /* Temporarily move resources off the list */ 896 896 list_splice_init(&bridge->windows, &resources); 897 897 bus->sysdata = bridge->sysdata; 898 - bus->msi = bridge->msi; 899 898 bus->ops = bridge->ops; 900 899 bus->number = bus->busn_res.start = bridge->busnr; 901 900 #ifdef CONFIG_PCI_DOMAINS_GENERIC ··· 925 926 device_enable_async_suspend(bus->bridge); 926 927 pci_set_bus_of_node(bus); 927 928 pci_set_bus_msi_domain(bus); 929 + if (bridge->msi_domain && !dev_get_msi_domain(&bus->dev)) 930 + bus->bus_flags |= PCI_BUS_FLAGS_NO_MSI; 928 931 929 932 if (!parent) 930 933 set_dev_node(bus->bridge, pcibus_to_node(bus)); ··· 1054 1053 return NULL; 1055 1054 1056 1055 child->parent = parent; 1057 - child->msi = parent->msi; 1058 1056 child->sysdata = parent->sysdata; 1059 1057 child->bus_flags = parent->bus_flags; 1060 1058 ··· 2353 2353 pci_set_of_node(dev); 2354 2354 2355 2355 if (pci_setup_device(dev)) { 2356 + pci_release_of_node(dev); 2356 2357 pci_bus_put(dev->bus); 2357 2358 kfree(dev); 2358 2359 return NULL;
+9 -20
drivers/pci/quirks.c
··· 206 206
 			PCI_CLASS_BRIDGE_HOST, 8, quirk_mmio_always_on);
 
 /*
- * The Mellanox Tavor device gives false positive parity errors. Mark this
- * device with a broken_parity_status to allow PCI scanning code to "skip"
- * this now blacklisted device.
+ * The Mellanox Tavor device gives false positive parity errors. Disable
+ * parity error reporting.
  */
-static void quirk_mellanox_tavor(struct pci_dev *dev)
-{
-	dev->broken_parity_status = 1;	/* This device gives false positives */
-}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_TAVOR, quirk_mellanox_tavor);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_TAVOR_BRIDGE, quirk_mellanox_tavor);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_TAVOR, pci_disable_parity);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_TAVOR_BRIDGE, pci_disable_parity);
 
 /*
  * Deal with broken BIOSes that neglect to enable passive release,
··· 2580 2585
 /* Check the HyperTransport MSI mapping to know whether MSI is enabled or not */
 static void quirk_msi_ht_cap(struct pci_dev *dev)
 {
-	if (dev->subordinate && !msi_ht_cap_enabled(dev)) {
-		pci_warn(dev, "MSI quirk detected; subordinate MSI disabled\n");
-		dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
-	}
+	if (!msi_ht_cap_enabled(dev))
+		quirk_disable_msi(dev);
 }
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT2000_PCIE,
 			quirk_msi_ht_cap);
··· 2594 2601
 {
 	struct pci_dev *pdev;
 
-	if (!dev->subordinate)
-		return;
-
 	/*
 	 * Check HT MSI cap on this chipset and the root one. A single one
 	 * having MSI is enough to be sure that MSI is supported.
··· 2601 2611
 	pdev = pci_get_slot(dev->bus, 0);
 	if (!pdev)
 		return;
-	if (!msi_ht_cap_enabled(dev) && !msi_ht_cap_enabled(pdev)) {
-		pci_warn(dev, "MSI quirk detected; subordinate MSI disabled\n");
-		dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
-	}
+	if (!msi_ht_cap_enabled(pdev))
+		quirk_msi_ht_cap(dev);
 	pci_dev_put(pdev);
 }
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_CK804_PCIE,
··· 3910 3922
 		reset_ivb_igd },
 	{ PCI_VENDOR_ID_SAMSUNG, 0xa804, nvme_disable_and_flr },
 	{ PCI_VENDOR_ID_INTEL, 0x0953, delay_250ms_after_flr },
+	{ PCI_VENDOR_ID_INTEL, 0x0a54, delay_250ms_after_flr },
 	{ PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
 		reset_chelsio_generic_dev },
 	{ 0 }
+2
drivers/pci/remove.c
··· 19 19
 	pci_pme_active(dev, false);
 
 	if (pci_dev_is_added(dev)) {
+		dev->reset_fn = 0;
+
 		device_release_driver(&dev->dev);
 		pci_proc_detach_device(dev);
 		pci_remove_sysfs_dev_files(dev);
+57 -177
drivers/pci/vpd.c
··· 16 16
 struct pci_vpd_ops {
 	ssize_t (*read)(struct pci_dev *dev, loff_t pos, size_t count, void *buf);
 	ssize_t (*write)(struct pci_dev *dev, loff_t pos, size_t count, const void *buf);
-	int (*set_size)(struct pci_dev *dev, size_t len);
 };
 
 struct pci_vpd {
 	const struct pci_vpd_ops *ops;
-	struct bin_attribute *attr;	/* Descriptor for sysfs VPD entry */
 	struct mutex lock;
 	unsigned int len;
 	u16 flag;
··· 27 29
 	unsigned int busy:1;
 	unsigned int valid:1;
 };
+
+static struct pci_dev *pci_get_func0_dev(struct pci_dev *dev)
+{
+	return pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
+}
 
 /**
  * pci_read_vpd - Read one entry from Vital Product Data
··· 63 60
 }
 EXPORT_SYMBOL(pci_write_vpd);
 
-/**
- * pci_set_vpd_size - Set size of Vital Product Data space
- * @dev: pci device struct
- * @len: size of vpd space
- */
-int pci_set_vpd_size(struct pci_dev *dev, size_t len)
-{
-	if (!dev->vpd || !dev->vpd->ops)
-		return -ENODEV;
-	return dev->vpd->ops->set_size(dev, len);
-}
-EXPORT_SYMBOL(pci_set_vpd_size);
-
 #define PCI_VPD_MAX_SIZE (PCI_VPD_ADDR_MASK + 1)
 
 /**
··· 75 85
 	size_t off = 0;
 	unsigned char header[1+2];	/* 1 byte tag, 2 bytes length */
 
-	while (off < old_size &&
-	       pci_read_vpd(dev, off, 1, header) == 1) {
+	while (off < old_size && pci_read_vpd(dev, off, 1, header) == 1) {
 		unsigned char tag;
+
+		if (!header[0] && !off) {
+			pci_info(dev, "Invalid VPD tag 00, assume missing optional VPD EPROM\n");
+			return 0;
+		}
 
 		if (header[0] & PCI_VPD_LRDT) {
 			/* Large Resource Data Type Tag */
··· 291 297
 	return ret ? ret : count;
 }
 
-static int pci_vpd_set_size(struct pci_dev *dev, size_t len)
-{
-	struct pci_vpd *vpd = dev->vpd;
-
-	if (len == 0 || len > PCI_VPD_MAX_SIZE)
-		return -EIO;
-
-	vpd->valid = 1;
-	vpd->len = len;
-
-	return 0;
-}
-
 static const struct pci_vpd_ops pci_vpd_ops = {
 	.read = pci_vpd_read,
 	.write = pci_vpd_write,
-	.set_size = pci_vpd_set_size,
 };
 
 static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count,
 			       void *arg)
 {
-	struct pci_dev *tdev = pci_get_slot(dev->bus,
-					    PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
+	struct pci_dev *tdev = pci_get_func0_dev(dev);
 	ssize_t ret;
 
 	if (!tdev)
··· 313 334
 static ssize_t pci_vpd_f0_write(struct pci_dev *dev, loff_t pos, size_t count,
 				const void *arg)
 {
-	struct pci_dev *tdev = pci_get_slot(dev->bus,
-					    PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
+	struct pci_dev *tdev = pci_get_func0_dev(dev);
 	ssize_t ret;
 
 	if (!tdev)
··· 324 346
 	return ret;
 }
 
-static int pci_vpd_f0_set_size(struct pci_dev *dev, size_t len)
-{
-	struct pci_dev *tdev = pci_get_slot(dev->bus,
-					    PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
-	int ret;
-
-	if (!tdev)
-		return -ENODEV;
-
-	ret = pci_set_vpd_size(tdev, len);
-	pci_dev_put(tdev);
-	return ret;
-}
-
 static const struct pci_vpd_ops pci_vpd_f0_ops = {
 	.read = pci_vpd_f0_read,
 	.write = pci_vpd_f0_write,
-	.set_size = pci_vpd_f0_set_size,
 };
 
-int pci_vpd_init(struct pci_dev *dev)
+void pci_vpd_init(struct pci_dev *dev)
 {
 	struct pci_vpd *vpd;
 	u8 cap;
 
 	cap = pci_find_capability(dev, PCI_CAP_ID_VPD);
 	if (!cap)
-		return -ENODEV;
+		return;
 
 	vpd = kzalloc(sizeof(*vpd), GFP_ATOMIC);
 	if (!vpd)
-		return -ENOMEM;
+		return;
 
 	vpd->len = PCI_VPD_MAX_SIZE;
 	if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0)
··· 352 389
 	vpd->busy = 0;
 	vpd->valid = 0;
 	dev->vpd = vpd;
-	return 0;
 }
 
 void pci_vpd_release(struct pci_dev *dev)
··· 359 397
 	kfree(dev->vpd);
 }
 
-static ssize_t read_vpd_attr(struct file *filp, struct kobject *kobj,
-			     struct bin_attribute *bin_attr, char *buf,
-			     loff_t off, size_t count)
+static ssize_t vpd_read(struct file *filp, struct kobject *kobj,
+			struct bin_attribute *bin_attr, char *buf, loff_t off,
+			size_t count)
 {
 	struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj));
-
-	if (bin_attr->size > 0) {
-		if (off > bin_attr->size)
-			count = 0;
-		else if (count > bin_attr->size - off)
-			count = bin_attr->size - off;
-	}
 
 	return pci_read_vpd(dev, off, count, buf);
 }
 
-static ssize_t write_vpd_attr(struct file *filp, struct kobject *kobj,
-			      struct bin_attribute *bin_attr, char *buf,
-			      loff_t off, size_t count)
+static ssize_t vpd_write(struct file *filp, struct kobject *kobj,
+			 struct bin_attribute *bin_attr, char *buf, loff_t off,
+			 size_t count)
 {
 	struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj));
 
-	if (bin_attr->size > 0) {
-		if (off > bin_attr->size)
-			count = 0;
-		else if (count > bin_attr->size - off)
-			count = bin_attr->size - off;
-	}
-
 	return pci_write_vpd(dev, off, count, buf);
 }
+static BIN_ATTR(vpd, 0600, vpd_read, vpd_write, 0);
 
-void pcie_vpd_create_sysfs_dev_files(struct pci_dev *dev)
+static struct bin_attribute *vpd_attrs[] = {
+	&bin_attr_vpd,
+	NULL,
+};
+
+static umode_t vpd_attr_is_visible(struct kobject *kobj,
+				   struct bin_attribute *a, int n)
 {
-	int retval;
-	struct bin_attribute *attr;
+	struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
 
-	if (!dev->vpd)
-		return;
+	if (!pdev->vpd)
+		return 0;
 
-	attr = kzalloc(sizeof(*attr), GFP_ATOMIC);
-	if (!attr)
-		return;
-
-	sysfs_bin_attr_init(attr);
-	attr->size = 0;
-	attr->attr.name = "vpd";
-	attr->attr.mode = S_IRUSR | S_IWUSR;
-	attr->read = read_vpd_attr;
-	attr->write = write_vpd_attr;
-	retval = sysfs_create_bin_file(&dev->dev.kobj, attr);
-	if (retval) {
-		kfree(attr);
-		return;
-	}
-
-	dev->vpd->attr = attr;
+	return a->attr.mode;
 }
 
-void pcie_vpd_remove_sysfs_dev_files(struct pci_dev *dev)
+const struct attribute_group pci_dev_vpd_attr_group = {
+	.bin_attrs = vpd_attrs,
+	.is_bin_visible = vpd_attr_is_visible,
+};
+
+int pci_vpd_find_tag(const u8 *buf, unsigned int len, u8 rdt)
 {
-	if (dev->vpd && dev->vpd->attr) {
-		sysfs_remove_bin_file(&dev->dev.kobj, dev->vpd->attr);
-		kfree(dev->vpd->attr);
-	}
-}
+	int i = 0;
 
-int pci_vpd_find_tag(const u8 *buf, unsigned int off, unsigned int len, u8 rdt)
-{
-	int i;
+	/* look for LRDT tags only, end tag is the only SRDT tag */
+	while (i + PCI_VPD_LRDT_TAG_SIZE <= len && buf[i] & PCI_VPD_LRDT) {
+		if (buf[i] == rdt)
+			return i;
 
-	for (i = off; i < len; ) {
-		u8 val = buf[i];
-
-		if (val & PCI_VPD_LRDT) {
-			/* Don't return success of the tag isn't complete */
-			if (i + PCI_VPD_LRDT_TAG_SIZE > len)
-				break;
-
-			if (val == rdt)
-				return i;
-
-			i += PCI_VPD_LRDT_TAG_SIZE +
-			     pci_vpd_lrdt_size(&buf[i]);
-		} else {
-			u8 tag = val & ~PCI_VPD_SRDT_LEN_MASK;
-
-			if (tag == rdt)
-				return i;
-
-			if (tag == PCI_VPD_SRDT_END)
-				break;
-
-			i += PCI_VPD_SRDT_TAG_SIZE +
-			     pci_vpd_srdt_size(&buf[i]);
-		}
+		i += PCI_VPD_LRDT_TAG_SIZE + pci_vpd_lrdt_size(buf + i);
 	}
 
 	return -ENOENT;
··· 446 530
 	if (!PCI_FUNC(dev->devfn))
 		return;
 
-	f0 = pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
+	f0 = pci_get_func0_dev(dev);
 	if (!f0)
 		return;
··· 486 570
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005f, quirk_blacklist_vpd);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID,
 			quirk_blacklist_vpd);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_QLOGIC, 0x2261, quirk_blacklist_vpd);
 /*
  * The Amazon Annapurna Labs 0x0031 device id is reused for other non Root Port
  * device types, so the quirk is registered for the PCI_CLASS_BRIDGE_PCI class.
··· 493 578
 DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031,
 			      PCI_CLASS_BRIDGE_PCI, 8, quirk_blacklist_vpd);
 
-/*
- * For Broadcom 5706, 5708, 5709 rev. A nics, any read beyond the
- * VPD end tag will hang the device. This problem was initially
- * observed when a vpd entry was created in sysfs
- * ('/sys/bus/pci/devices/<id>/vpd'). A read to this sysfs entry
- * will dump 32k of data. Reading a full 32k will cause an access
- * beyond the VPD end tag causing the device to hang. Once the device
- * is hung, the bnx2 driver will not be able to reset the device.
- * We believe that it is legal to read beyond the end tag and
- * therefore the solution is to limit the read/write length.
- */
-static void quirk_brcm_570x_limit_vpd(struct pci_dev *dev)
+static void pci_vpd_set_size(struct pci_dev *dev, size_t len)
 {
-	/*
-	 * Only disable the VPD capability for 5706, 5706S, 5708,
-	 * 5708S and 5709 rev. A
-	 */
-	if ((dev->device == PCI_DEVICE_ID_NX2_5706) ||
-	    (dev->device == PCI_DEVICE_ID_NX2_5706S) ||
-	    (dev->device == PCI_DEVICE_ID_NX2_5708) ||
-	    (dev->device == PCI_DEVICE_ID_NX2_5708S) ||
-	    ((dev->device == PCI_DEVICE_ID_NX2_5709) &&
-	     (dev->revision & 0xf0) == 0x0)) {
-		if (dev->vpd)
-			dev->vpd->len = 0x80;
-	}
+	struct pci_vpd *vpd = dev->vpd;
+
+	if (!vpd || len == 0 || len > PCI_VPD_MAX_SIZE)
+		return;
+
+	vpd->valid = 1;
+	vpd->len = len;
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM,
-			PCI_DEVICE_ID_NX2_5706,
-			quirk_brcm_570x_limit_vpd);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM,
-			PCI_DEVICE_ID_NX2_5706S,
-			quirk_brcm_570x_limit_vpd);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM,
-			PCI_DEVICE_ID_NX2_5708,
-			quirk_brcm_570x_limit_vpd);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM,
-			PCI_DEVICE_ID_NX2_5708S,
-			quirk_brcm_570x_limit_vpd);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM,
-			PCI_DEVICE_ID_NX2_5709,
-			quirk_brcm_570x_limit_vpd);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM,
-			PCI_DEVICE_ID_NX2_5709S,
-			quirk_brcm_570x_limit_vpd);
 
 static void quirk_chelsio_extend_vpd(struct pci_dev *dev)
 {
··· 522 642
 	 * limits.
 	 */
 	if (chip == 0x0 && prod >= 0x20)
-		pci_set_vpd_size(dev, 8192);
+		pci_vpd_set_size(dev, 8192);
 	else if (chip >= 0x4 && func < 0x8)
-		pci_set_vpd_size(dev, 2048);
+		pci_vpd_set_size(dev, 2048);
 }
 
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
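The rewritten pci_vpd_find_tag() above drops the `off` argument and walks Large Resource Data Type (LRDT) tags only, since the end tag is the only Small Resource tag that matters. A userspace sketch of the same scan, assuming the standard VPD encoding (1 tag byte with bit 7 set for LRDT, followed by a little-endian 16-bit length); the function names here are illustrative stand-ins for the kernel helpers:

```c
#include <stddef.h>

/* Values mirroring the kernel's VPD definitions in include/linux/pci.h */
#define PCI_VPD_LRDT            0x80    /* Large Resource Data Type flag bit */
#define PCI_VPD_LRDT_TAG_SIZE   3       /* 1 tag byte + 2 length bytes */
#define PCI_VPD_LRDT_RO_DATA    (PCI_VPD_LRDT | 0x10)   /* 0x90 */

/* Payload length of an LRDT item: little-endian 16-bit after the tag byte */
static unsigned int vpd_lrdt_size(const unsigned char *lrdt)
{
	return (unsigned int)lrdt[1] + ((unsigned int)lrdt[2] << 8);
}

/*
 * Userspace re-implementation of the simplified scan: step from one
 * complete LRDT header to the next; the first byte without the LRDT
 * flag (e.g. the small-resource end tag) terminates the walk.
 */
static int vpd_find_tag(const unsigned char *buf, unsigned int len,
			unsigned char rdt)
{
	unsigned int i = 0;

	while (i + PCI_VPD_LRDT_TAG_SIZE <= len && (buf[i] & PCI_VPD_LRDT)) {
		if (buf[i] == rdt)
			return (int)i;
		i += PCI_VPD_LRDT_TAG_SIZE + vpd_lrdt_size(buf + i);
	}
	return -1;	/* the kernel version returns -ENOENT */
}
```

Callers such as the cxlflash change further down simply pass the buffer, its length, and PCI_VPD_LRDT_RO_DATA, matching the new two-size-argument-free signature.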
+1
drivers/reset/Kconfig
··· 197 197
 	  - RCC reset controller in STM32 MCUs
 	  - Allwinner SoCs
 	  - ZTE's zx2967 family
+	  - SiFive FU740 SoCs
 
 config RESET_STM32MP157
 	bool "STM32MP157 Reset Driver" if COMPILE_TEST
+1 -2
drivers/scsi/cxlflash/main.c
··· 1649 1649
 	}
 
 	/* Get the read only section offset */
-	ro_start = pci_vpd_find_tag(vpd_data, 0, vpd_size,
-				    PCI_VPD_LRDT_RO_DATA);
+	ro_start = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA);
 	if (unlikely(ro_start < 0)) {
 		dev_err(dev, "%s: VPD Read-only data not found\n", __func__);
 		rc = -ENODEV;
+1
include/dt-bindings/clock/sifive-fu740-prci.h
··· 19 19
 #define PRCI_CLK_CLTXPLL	5
 #define PRCI_CLK_TLCLK		6
 #define PRCI_CLK_PCLK		7
+#define PRCI_CLK_PCIE_AUX	8
 
 #endif	/* __DT_BINDINGS_CLOCK_SIFIVE_FU740_PRCI_H */
+1 -16
include/linux/msi.h
··· 240 240
 /*
  * The arch hooks to setup up msi irqs. Default functions are implemented
  * as weak symbols so that they /can/ be overriden by architecture specific
- * code if needed. These hooks must be enabled by the architecture or by
- * drivers which depend on them via msi_controller based MSI handling.
+ * code if needed. These hooks can only be enabled by the architecture.
  *
  * If CONFIG_PCI_MSI_ARCH_FALLBACKS is not selected they are replaced by
  * stubs with warnings.
··· 250 251
 void arch_teardown_msi_irq(unsigned int irq);
 int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
 void arch_teardown_msi_irqs(struct pci_dev *dev);
-void default_teardown_msi_irqs(struct pci_dev *dev);
 #else
 static inline int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 {
··· 269 271
  */
 void arch_restore_msi_irqs(struct pci_dev *dev);
 void default_restore_msi_irqs(struct pci_dev *dev);
-
-struct msi_controller {
-	struct module *owner;
-	struct device *dev;
-	struct device_node *of_node;
-	struct list_head list;
-
-	int (*setup_irq)(struct msi_controller *chip, struct pci_dev *dev,
-			 struct msi_desc *desc);
-	int (*setup_irqs)(struct msi_controller *chip, struct pci_dev *dev,
-			  int nvec, int type);
-	void (*teardown_irq)(struct msi_controller *chip, unsigned int irq);
-};
 
 #ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
 
+1
include/linux/pci-ecam.h
··· 85 85
 extern const struct pci_ecam_ops xgene_v1_pcie_ecam_ops; /* APM X-Gene PCIe v1 */
 extern const struct pci_ecam_ops xgene_v2_pcie_ecam_ops; /* APM X-Gene PCIe v2.x */
 extern const struct pci_ecam_ops al_pcie_ops;	/* Amazon Annapurna Labs PCIe */
+extern const struct pci_ecam_ops tegra194_pcie_ops; /* Tegra194 PCIe */
 #endif
 
 #if IS_ENABLED(CONFIG_PCI_HOST_COMMON)
+3 -6
include/linux/pci.h
··· 458 458
 
 	u32		saved_config_space[16]; /* Config space saved at suspend time */
 	struct hlist_head saved_cap_space;
-	struct bin_attribute *rom_attr;		/* Attribute descriptor for sysfs ROM entry */
 	int		rom_attr_enabled;	/* Display of ROM attribute enabled? */
 	struct bin_attribute *res_attr[DEVICE_COUNT_RESOURCE]; /* sysfs file for resources */
 	struct bin_attribute *res_attr_wc[DEVICE_COUNT_RESOURCE]; /* sysfs file for WC mapping of resources */
··· 539 540
 	int (*map_irq)(const struct pci_dev *, u8, u8);
 	void (*release_fn)(struct pci_host_bridge *);
 	void		*release_data;
-	struct msi_controller *msi;
 	unsigned int	ignore_reset_delay:1;	/* For entire hierarchy */
 	unsigned int	no_ext_tags:1;		/* No Extended Tags */
 	unsigned int	native_aer:1;		/* OS may use PCIe AER */
··· 549 551
 	unsigned int	native_dpc:1;		/* OS may use PCIe DPC */
 	unsigned int	preserve_config:1;	/* Preserve FW resource setup */
 	unsigned int	size_windows:1;		/* Enable root bus sizing */
+	unsigned int	msi_domain:1;		/* Bridge wants MSI domain */
 
 	/* Resource alignment requirements */
 	resource_size_t (*align_resource)(struct pci_dev *dev,
··· 620 621
 	struct resource busn_res;	/* Bus numbers routed to this bus */
 
 	struct pci_ops	*ops;		/* Configuration access functions */
-	struct msi_controller *msi;	/* MSI controller */
 	void		*sysdata;	/* Hook for sys-specific extension */
 	struct proc_dir_entry *procdir;	/* Directory entry in /proc/bus/pci */
··· 1208 1210
 int __must_check pcim_set_mwi(struct pci_dev *dev);
 int pci_try_set_mwi(struct pci_dev *dev);
 void pci_clear_mwi(struct pci_dev *dev);
+void pci_disable_parity(struct pci_dev *dev);
 void pci_intx(struct pci_dev *dev, int enable);
 bool pci_check_and_mask_intx(struct pci_dev *dev);
 bool pci_check_and_unmask_intx(struct pci_dev *dev);
··· 1310 1311
 /* Vital Product Data routines */
 ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf);
 ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void *buf);
-int pci_set_vpd_size(struct pci_dev *dev, size_t len);
 
 /* Helper functions for low-level code (drivers/pci/setup-[bus,res].c) */
 resource_size_t pcibios_retrieve_fw_addr(struct pci_dev *dev, int idx);
··· 2318 2320
 /**
  * pci_vpd_find_tag - Locates the Resource Data Type tag provided
  * @buf: Pointer to buffered vpd data
- * @off: The offset into the buffer at which to begin the search
  * @len: The length of the vpd buffer
  * @rdt: The Resource Data Type to search for
  *
  * Returns the index where the Resource Data Type was found or
  * -ENOENT otherwise.
  */
-int pci_vpd_find_tag(const u8 *buf, unsigned int off, unsigned int len, u8 rdt);
+int pci_vpd_find_tag(const u8 *buf, unsigned int len, u8 rdt);
 
 /**
  * pci_vpd_find_info_keyword - Locates an information field keyword in the VPD
+5
include/linux/reset.h
··· 76 76
 	return 0;
 }
 
+static inline int reset_control_rearm(struct reset_control *rstc)
+{
+	return 0;
+}
+
 static inline int reset_control_assert(struct reset_control *rstc)
 {
 	return 0;