Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pci-v5.15-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:
- Convert controller drivers to generic_handle_domain_irq() (Marc
Zyngier)
- Simplify VPD (Vital Product Data) access and search (Heiner
Kallweit)
- Update bnx2, bnx2x, bnxt, cxgb4, cxlflash, sfc, tg3 drivers to use
simplified VPD interfaces (Heiner Kallweit)
- Run Max Payload Size quirks before configuring MPS; work around
ASMedia ASM1062 SATA MPS issue (Marek Behún)

Resource management:
- Refactor pci_ioremap_bar() and pci_ioremap_wc_bar() (Krzysztof
Wilczyński)
- Optimize pci_resource_len() to reduce kernel size (Zhen Lei)

PCI device hotplug:
- Fix a double unmap in ibmphp (Vishal Aslot)

PCIe port driver:
- Enable Bandwidth Notification only if port supports it (Stuart
Hayes)

Sysfs/proc/syscalls:
- Add schedule point in proc_bus_pci_read() (Krzysztof Wilczyński)
- Return ~0 data on pciconfig_read() CAP_SYS_ADMIN failure (Krzysztof
Wilczyński)
- Return "int" from pciconfig_read() syscall (Krzysztof Wilczyński)

Virtualization:
- Extend "pci=noats" to also turn on Translation Blocking to protect
against some DMA attacks (Alex Williamson)
- Add sysfs mechanism to control the type of reset used between
device assignments to VMs (Amey Narkhede)
- Add support for ACPI _RST reset method (Shanker Donthineni)
- Add ACS quirks for Cavium multi-function devices (George Cherian)
- Add ACS quirks for NXP LX2xx0 and LX2xx2 platforms (Wasim Khan)
- Allow HiSilicon AMBA devices that appear as fake PCI devices to use
PASID and SVA (Zhangfei Gao)

Endpoint framework:
- Add support for SR-IOV Endpoint devices (Kishon Vijay Abraham I)
- Zero-initialize endpoint test tool parameters so we don't use
random parameters (Shunyong Yang)

APM X-Gene PCIe controller driver:
- Remove redundant dev_err() call in xgene_msi_probe() (ErKun Yang)

Broadcom iProc PCIe controller driver:
- Don't fail devm_pci_alloc_host_bridge() on missing 'ranges' because
it's optional on BCMA devices (Rob Herring)
- Fix BCMA probe resource handling (Rob Herring)

Cadence PCIe driver:
- Work around J7200 Link training electrical issue by increasing
delays in LTSSM (Nadeem Athani)

Intel IXP4xx PCI controller driver:
- Depend on ARCH_IXP4XX to avoid useless config questions (Geert
Uytterhoeven)

Intel Keembay PCIe controller driver:
- Add Intel Keem Bay PCIe controller (Srikanth Thokala)

Marvell Aardvark PCIe controller driver:
- Work around config space completion handling issues (Evan Wang)
- Increase timeout for config access completions (Pali Rohár)
- Emulate CRS Software Visibility bit (Pali Rohár)
- Configure resources from DT 'ranges' property to fix I/O space
access (Pali Rohár)
- Serialize INTx mask/unmask (Pali Rohár)

MediaTek PCIe controller driver:
- Add MT7629 support in DT (Chuanjia Liu)
- Fix an MSI issue (Chuanjia Liu)
- Get syscon regmap ("mediatek,generic-pciecfg"), IRQ number
("pci_irq"), PCI domain ("linux,pci-domain") from DT properties if
present (Chuanjia Liu)

Microsoft Hyper-V host bridge driver:
- Add ARM64 support (Boqun Feng)
- Support "Create Interrupt v3" message (Sunil Muthuswamy)

NVIDIA Tegra PCIe controller driver:
- Use seq_puts(), move err_msg from stack to static, fix OF node leak
(Christophe JAILLET)

NVIDIA Tegra194 PCIe driver:
- Disable suspend when in Endpoint mode (Om Prakash Singh)
- Fix MSI-X address programming error (Om Prakash Singh)
- Disable interrupts during suspend to avoid spurious AER link down
(Om Prakash Singh)

Renesas R-Car PCIe controller driver:
- Work around hardware issue that prevents Link L1->L0 transition
(Marek Vasut)
- Fix runtime PM refcount leak (Dinghao Liu)

Rockchip DesignWare PCIe controller driver:
- Add Rockchip RK356X host controller driver (Simon Xue)

TI J721E PCIe driver:
- Add support for J7200 and AM64 (Kishon Vijay Abraham I)

Toshiba Visconti PCIe controller driver:
- Add Toshiba Visconti PCIe host controller driver (Nobuhiro
Iwamatsu)

Xilinx NWL PCIe controller driver:
- Enable PCIe reference clock via CCF (Hyun Kwon)

Miscellaneous:
- Convert sta2x11 from 'pci_' to 'dma_' API (Christophe JAILLET)
- Fix pci_dev_str_match_path() alloc while atomic bug (used for
kernel parameters that specify devices) (Dan Carpenter)
- Remove pointless Precision Time Management warning when PTM is
present but not enabled (Jakub Kicinski)
- Remove surplus "break" statements (Krzysztof Wilczyński)"

* tag 'pci-v5.15-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (132 commits)
PCI: ibmphp: Fix double unmap of io_mem
x86/PCI: sta2x11: switch from 'pci_' to 'dma_' API
PCI/VPD: Use unaligned access helpers
PCI/VPD: Clean up public VPD defines and inline functions
cxgb4: Use pci_vpd_find_id_string() to find VPD ID string
PCI/VPD: Add pci_vpd_find_id_string()
PCI/VPD: Include post-processing in pci_vpd_find_tag()
PCI/VPD: Stop exporting pci_vpd_find_info_keyword()
PCI/VPD: Stop exporting pci_vpd_find_tag()
PCI: Set dma-can-stall for HiSilicon chips
PCI: rockchip-dwc: Add Rockchip RK356X host controller driver
PCI: dwc: Remove surplus break statement after return
PCI: artpec6: Remove local code block from switch statement
PCI: artpec6: Remove surplus break statement after return
MAINTAINERS: Add entries for Toshiba Visconti PCIe controller
PCI: visconti: Add Toshiba Visconti PCIe host controller driver
PCI/portdrv: Enable Bandwidth Notification only if port supports it
PCI: Allow PASID on fake PCIe devices without TLP prefixes
PCI: mediatek: Use PCI domain to handle ports detection
PCI: mediatek: Add new method to get irq number
...

+3879 -1574
+17
Documentation/ABI/testing/sysfs-bus-pci
···
 		child buses, and re-discover devices removed earlier
 		from this part of the device tree.
 
+What:		/sys/bus/pci/devices/.../reset_method
+Date:		August 2021
+Contact:	Amey Narkhede <ameynarkhede03@gmail.com>
+Description:
+		Some devices allow an individual function to be reset
+		without affecting other functions in the same slot.
+
+		For devices that have this support, a file named
+		reset_method is present in sysfs. Reading this file
+		gives names of the supported and enabled reset methods and
+		their ordering. Writing a space-separated list of names of
+		reset methods sets the reset methods and ordering to be
+		used when resetting the device. Writing an empty string
+		disables the ability to reset the device. Writing
+		"default" enables all supported reset methods in the
+		default ordering.
+
 What:		/sys/bus/pci/devices/.../reset
 Date:		July 2009
 Contact:	Michael S. Tsirkin <mst@redhat.com>
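The reset_method file is plain text, so the read/write protocol is easy to sketch from the shell. In this runnable mock, a scratch directory stands in for the real sysfs node (which needs hardware and a v5.15+ kernel), and the method list "flr bus" is illustrative:

```shell
# Scratch directory stands in for /sys/bus/pci/devices/<addr>;
# on real hardware the kernel populates reset_method itself.
dev=$(mktemp -d)
echo "flr bus" > "$dev/reset_method"   # kernel lists supported methods in order
cat "$dev/reset_method"
echo "flr" > "$dev/reset_method"       # restrict resets to FLR only
cat "$dev/reset_method"
```

On a real device, writing "default" would restore all supported methods in the default ordering, per the ABI text above.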
+11 -1
Documentation/PCI/endpoint/pci-endpoint-cfs.rst
···
 .. <EPF Driver1>/
 	... <EPF Device 11>/
 	... <EPF Device 21>/
+	... <EPF Device 31>/
 .. <EPF Driver2>/
 	... <EPF Device 12>/
 	... <EPF Device 22>/
···
 	... subsys_vendor_id
 	... subsys_id
 	... interrupt_pin
+	... <Symlink EPF Device 31>/
 	... primary/
 		... <Symlink EPC Device1>/
 	... secondary/
···
 interface should be added in 'primary' directory and symlink of endpoint
 controller connected to secondary interface should be added in 'secondary'
 directory.
+
+The <EPF Device> directory can have a list of symbolic links
+(<Symlink EPF Device 31>) to other <EPF Device>. These symbolic links should
+be created by the user to represent the virtual functions that are bound to
+the physical function. In the above directory structure <EPF Device 11> is a
+physical function and <EPF Device 31> is a virtual function. An EPF device once
+it's linked to another EPF device, cannot be linked to a EPC device.
 
 EPC Device
 ==========
···
 The <EPC Device> directory will have a list of symbolic links to
 <EPF Device>. These symbolic links should be created by the user to
-represent the functions present in the endpoint device.
+represent the functions present in the endpoint device. Only <EPF Device>
+that represents a physical function can be linked to a EPC device.
 
 The <EPC Device> directory will also have a *start* field. Once
 "1" is written to this field, the endpoint device will be ready to
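The virtual-to-physical function binding described above is expressed with ordinary symlinks in configfs. This runnable sketch uses a scratch directory in place of the configfs mount at /sys/kernel/config/pci_ep (which needs an endpoint controller); the function names func1/func3 are illustrative, not from the patch:

```shell
# Scratch directory stands in for the configfs pci_ep mount.
cfs=$(mktemp -d)
mkdir -p "$cfs/functions/pci_epf_test/func1"   # physical function (EPF Device 11)
mkdir -p "$cfs/functions/pci_epf_test/func3"   # virtual function (EPF Device 31)
# Bind the virtual function to the physical function with a symlink:
ln -s "$cfs/functions/pci_epf_test/func3" "$cfs/functions/pci_epf_test/func1/func3"
readlink "$cfs/functions/pci_epf_test/func1/func3"
```

Per the documentation change, only the physical function (func1 here) may then be linked under an EPC device; the linked virtual function may not.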
+69
Documentation/devicetree/bindings/pci/intel,keembay-pcie-ep.yaml
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/pci/intel,keembay-pcie-ep.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Intel Keem Bay PCIe controller Endpoint mode
+
+maintainers:
+  - Wan Ahmad Zainie <wan.ahmad.zainie.wan.mohamad@intel.com>
+  - Srikanth Thokala <srikanth.thokala@intel.com>
+
+properties:
+  compatible:
+    const: intel,keembay-pcie-ep
+
+  reg:
+    maxItems: 5
+
+  reg-names:
+    items:
+      - const: dbi
+      - const: dbi2
+      - const: atu
+      - const: addr_space
+      - const: apb
+
+  interrupts:
+    maxItems: 4
+
+  interrupt-names:
+    items:
+      - const: pcie
+      - const: pcie_ev
+      - const: pcie_err
+      - const: pcie_mem_access
+
+  num-lanes:
+    description: Number of lanes to use.
+    enum: [ 1, 2 ]
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - interrupts
+  - interrupt-names
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/interrupt-controller/irq.h>
+    pcie-ep@37000000 {
+        compatible = "intel,keembay-pcie-ep";
+        reg = <0x37000000 0x00001000>,
+              <0x37100000 0x00001000>,
+              <0x37300000 0x00001000>,
+              <0x36000000 0x01000000>,
+              <0x37800000 0x00000200>;
+        reg-names = "dbi", "dbi2", "atu", "addr_space", "apb";
+        interrupts = <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 108 IRQ_TYPE_EDGE_RISING>,
+                     <GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-names = "pcie", "pcie_ev", "pcie_err", "pcie_mem_access";
+        num-lanes = <2>;
+    };
+97
Documentation/devicetree/bindings/pci/intel,keembay-pcie.yaml
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/pci/intel,keembay-pcie.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Intel Keem Bay PCIe controller Root Complex mode
+
+maintainers:
+  - Wan Ahmad Zainie <wan.ahmad.zainie.wan.mohamad@intel.com>
+  - Srikanth Thokala <srikanth.thokala@intel.com>
+
+allOf:
+  - $ref: /schemas/pci/pci-bus.yaml#
+
+properties:
+  compatible:
+    const: intel,keembay-pcie
+
+  ranges:
+    maxItems: 1
+
+  reset-gpios:
+    maxItems: 1
+
+  reg:
+    maxItems: 4
+
+  reg-names:
+    items:
+      - const: dbi
+      - const: atu
+      - const: config
+      - const: apb
+
+  clocks:
+    maxItems: 2
+
+  clock-names:
+    items:
+      - const: master
+      - const: aux
+
+  interrupts:
+    maxItems: 3
+
+  interrupt-names:
+    items:
+      - const: pcie
+      - const: pcie_ev
+      - const: pcie_err
+
+  num-lanes:
+    description: Number of lanes to use.
+    enum: [ 1, 2 ]
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - ranges
+  - clocks
+  - clock-names
+  - interrupts
+  - interrupt-names
+  - reset-gpios
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/interrupt-controller/irq.h>
+    #include <dt-bindings/gpio/gpio.h>
+    #define KEEM_BAY_A53_PCIE
+    #define KEEM_BAY_A53_AUX_PCIE
+    pcie@37000000 {
+        compatible = "intel,keembay-pcie";
+        reg = <0x37000000 0x00001000>,
+              <0x37300000 0x00001000>,
+              <0x36e00000 0x00200000>,
+              <0x37800000 0x00000200>;
+        reg-names = "dbi", "atu", "config", "apb";
+        #address-cells = <3>;
+        #size-cells = <2>;
+        device_type = "pci";
+        ranges = <0x02000000 0 0x36000000 0x36000000 0 0x00e00000>;
+        interrupts = <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-names = "pcie", "pcie_ev", "pcie_err";
+        clocks = <&scmi_clk KEEM_BAY_A53_PCIE>,
+                 <&scmi_clk KEEM_BAY_A53_AUX_PCIE>;
+        clock-names = "master", "aux";
+        reset-gpios = <&pca2 9 GPIO_ACTIVE_LOW>;
+        num-lanes = <2>;
+    };
+39
Documentation/devicetree/bindings/pci/mediatek-pcie-cfg.yaml
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/mediatek-pcie-cfg.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: MediaTek PCIECFG controller
+
+maintainers:
+  - Chuanjia Liu <chuanjia.liu@mediatek.com>
+  - Jianjun Wang <jianjun.wang@mediatek.com>
+
+description: |
+  The MediaTek PCIECFG controller controls some feature about
+  LTSSM, ASPM and so on.
+
+properties:
+  compatible:
+    items:
+      - enum:
+          - mediatek,generic-pciecfg
+      - const: syscon
+
+  reg:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    pciecfg: pciecfg@1a140000 {
+        compatible = "mediatek,generic-pciecfg", "syscon";
+        reg = <0x1a140000 0x1000>;
+    };
+...
+111 -95
Documentation/devicetree/bindings/pci/mediatek-pcie.txt
··· 8 8 "mediatek,mt7623-pcie" 9 9 "mediatek,mt7629-pcie" 10 10 - device_type: Must be "pci" 11 - - reg: Base addresses and lengths of the PCIe subsys and root ports. 11 + - reg: Base addresses and lengths of the root ports. 12 12 - reg-names: Names of the above areas to use during resource lookup. 13 13 - #address-cells: Address representation for root ports (must be 3) 14 14 - #size-cells: Size representation for root ports (must be 2) ··· 47 47 - reset-names: Must be "pcie-rst0", "pcie-rst1", "pcie-rstN".. based on the 48 48 number of root ports. 49 49 50 - Required properties for MT2712/MT7622: 50 + Required properties for MT2712/MT7622/MT7629: 51 51 -interrupts: A list of interrupt outputs of the controller, must have one 52 52 entry for each PCIe port 53 + - interrupt-names: Must include the following entries: 54 + - "pcie_irq": The interrupt that is asserted when an MSI/INTX is received 55 + - linux,pci-domain: PCI domain ID. Should be unique for each host controller 53 56 54 57 In addition, the device tree node must have sub-nodes describing each 55 58 PCIe port interface, having the following mandatory properties: ··· 146 143 147 144 Examples for MT2712: 148 145 149 - pcie: pcie@11700000 { 146 + pcie1: pcie@112ff000 { 150 147 compatible = "mediatek,mt2712-pcie"; 151 148 device_type = "pci"; 152 - reg = <0 0x11700000 0 0x1000>, 153 - <0 0x112ff000 0 0x1000>; 154 - reg-names = "port0", "port1"; 149 + reg = <0 0x112ff000 0 0x1000>; 150 + reg-names = "port1"; 151 + linux,pci-domain = <1>; 155 152 #address-cells = <3>; 156 153 #size-cells = <2>; 157 - interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>, 158 - <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>; 159 - clocks = <&topckgen CLK_TOP_PE2_MAC_P0_SEL>, 160 - <&topckgen CLK_TOP_PE2_MAC_P1_SEL>, 161 - <&pericfg CLK_PERI_PCIE0>, 154 + interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>; 155 + interrupt-names = "pcie_irq"; 156 + clocks = <&topckgen CLK_TOP_PE2_MAC_P1_SEL>, 162 157 <&pericfg CLK_PERI_PCIE1>; 163 - clock-names = 
"sys_ck0", "sys_ck1", "ahb_ck0", "ahb_ck1"; 164 - phys = <&pcie0_phy PHY_TYPE_PCIE>, <&pcie1_phy PHY_TYPE_PCIE>; 165 - phy-names = "pcie-phy0", "pcie-phy1"; 158 + clock-names = "sys_ck1", "ahb_ck1"; 159 + phys = <&u3port1 PHY_TYPE_PCIE>; 160 + phy-names = "pcie-phy1"; 166 161 bus-range = <0x00 0xff>; 167 - ranges = <0x82000000 0 0x20000000 0x0 0x20000000 0 0x10000000>; 162 + ranges = <0x82000000 0 0x11400000 0x0 0x11400000 0 0x300000>; 163 + status = "disabled"; 168 164 169 - pcie0: pcie@0,0 { 170 - reg = <0x0000 0 0 0 0>; 171 - #address-cells = <3>; 172 - #size-cells = <2>; 165 + #interrupt-cells = <1>; 166 + interrupt-map-mask = <0 0 0 7>; 167 + interrupt-map = <0 0 0 1 &pcie_intc1 0>, 168 + <0 0 0 2 &pcie_intc1 1>, 169 + <0 0 0 3 &pcie_intc1 2>, 170 + <0 0 0 4 &pcie_intc1 3>; 171 + pcie_intc1: interrupt-controller { 172 + interrupt-controller; 173 + #address-cells = <0>; 173 174 #interrupt-cells = <1>; 174 - ranges; 175 - interrupt-map-mask = <0 0 0 7>; 176 - interrupt-map = <0 0 0 1 &pcie_intc0 0>, 177 - <0 0 0 2 &pcie_intc0 1>, 178 - <0 0 0 3 &pcie_intc0 2>, 179 - <0 0 0 4 &pcie_intc0 3>; 180 - pcie_intc0: interrupt-controller { 181 - interrupt-controller; 182 - #address-cells = <0>; 183 - #interrupt-cells = <1>; 184 - }; 185 175 }; 176 + }; 186 177 187 - pcie1: pcie@1,0 { 188 - reg = <0x0800 0 0 0 0>; 189 - #address-cells = <3>; 190 - #size-cells = <2>; 178 + pcie0: pcie@11700000 { 179 + compatible = "mediatek,mt2712-pcie"; 180 + device_type = "pci"; 181 + reg = <0 0x11700000 0 0x1000>; 182 + reg-names = "port0"; 183 + linux,pci-domain = <0>; 184 + #address-cells = <3>; 185 + #size-cells = <2>; 186 + interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>; 187 + interrupt-names = "pcie_irq"; 188 + clocks = <&topckgen CLK_TOP_PE2_MAC_P0_SEL>, 189 + <&pericfg CLK_PERI_PCIE0>; 190 + clock-names = "sys_ck0", "ahb_ck0"; 191 + phys = <&u3port0 PHY_TYPE_PCIE>; 192 + phy-names = "pcie-phy0"; 193 + bus-range = <0x00 0xff>; 194 + ranges = <0x82000000 0 0x20000000 0x0 0x20000000 
0 0x10000000>; 195 + status = "disabled"; 196 + 197 + #interrupt-cells = <1>; 198 + interrupt-map-mask = <0 0 0 7>; 199 + interrupt-map = <0 0 0 1 &pcie_intc0 0>, 200 + <0 0 0 2 &pcie_intc0 1>, 201 + <0 0 0 3 &pcie_intc0 2>, 202 + <0 0 0 4 &pcie_intc0 3>; 203 + pcie_intc0: interrupt-controller { 204 + interrupt-controller; 205 + #address-cells = <0>; 191 206 #interrupt-cells = <1>; 192 - ranges; 193 - interrupt-map-mask = <0 0 0 7>; 194 - interrupt-map = <0 0 0 1 &pcie_intc1 0>, 195 - <0 0 0 2 &pcie_intc1 1>, 196 - <0 0 0 3 &pcie_intc1 2>, 197 - <0 0 0 4 &pcie_intc1 3>; 198 - pcie_intc1: interrupt-controller { 199 - interrupt-controller; 200 - #address-cells = <0>; 201 - #interrupt-cells = <1>; 202 - }; 203 207 }; 204 208 }; 205 209 206 210 Examples for MT7622: 207 211 208 - pcie: pcie@1a140000 { 212 + pcie0: pcie@1a143000 { 209 213 compatible = "mediatek,mt7622-pcie"; 210 214 device_type = "pci"; 211 - reg = <0 0x1a140000 0 0x1000>, 212 - <0 0x1a143000 0 0x1000>, 213 - <0 0x1a145000 0 0x1000>; 214 - reg-names = "subsys", "port0", "port1"; 215 + reg = <0 0x1a143000 0 0x1000>; 216 + reg-names = "port0"; 217 + linux,pci-domain = <0>; 215 218 #address-cells = <3>; 216 219 #size-cells = <2>; 217 - interrupts = <GIC_SPI 228 IRQ_TYPE_LEVEL_LOW>, 218 - <GIC_SPI 229 IRQ_TYPE_LEVEL_LOW>; 220 + interrupts = <GIC_SPI 228 IRQ_TYPE_LEVEL_LOW>; 221 + interrupt-names = "pcie_irq"; 219 222 clocks = <&pciesys CLK_PCIE_P0_MAC_EN>, 220 - <&pciesys CLK_PCIE_P1_MAC_EN>, 221 223 <&pciesys CLK_PCIE_P0_AHB_EN>, 222 - <&pciesys CLK_PCIE_P1_AHB_EN>, 223 224 <&pciesys CLK_PCIE_P0_AUX_EN>, 224 - <&pciesys CLK_PCIE_P1_AUX_EN>, 225 225 <&pciesys CLK_PCIE_P0_AXI_EN>, 226 - <&pciesys CLK_PCIE_P1_AXI_EN>, 227 226 <&pciesys CLK_PCIE_P0_OBFF_EN>, 228 - <&pciesys CLK_PCIE_P1_OBFF_EN>, 229 - <&pciesys CLK_PCIE_P0_PIPE_EN>, 230 - <&pciesys CLK_PCIE_P1_PIPE_EN>; 231 - clock-names = "sys_ck0", "sys_ck1", "ahb_ck0", "ahb_ck1", 232 - "aux_ck0", "aux_ck1", "axi_ck0", "axi_ck1", 233 - "obff_ck0", "obff_ck1", 
"pipe_ck0", "pipe_ck1"; 234 - phys = <&pcie0_phy PHY_TYPE_PCIE>, <&pcie1_phy PHY_TYPE_PCIE>; 235 - phy-names = "pcie-phy0", "pcie-phy1"; 227 + <&pciesys CLK_PCIE_P0_PIPE_EN>; 228 + clock-names = "sys_ck0", "ahb_ck0", "aux_ck0", 229 + "axi_ck0", "obff_ck0", "pipe_ck0"; 230 + 236 231 power-domains = <&scpsys MT7622_POWER_DOMAIN_HIF0>; 237 232 bus-range = <0x00 0xff>; 238 - ranges = <0x82000000 0 0x20000000 0x0 0x20000000 0 0x10000000>; 233 + ranges = <0x82000000 0 0x20000000 0x0 0x20000000 0 0x8000000>; 234 + status = "disabled"; 239 235 240 - pcie0: pcie@0,0 { 241 - reg = <0x0000 0 0 0 0>; 242 - #address-cells = <3>; 243 - #size-cells = <2>; 236 + #interrupt-cells = <1>; 237 + interrupt-map-mask = <0 0 0 7>; 238 + interrupt-map = <0 0 0 1 &pcie_intc0 0>, 239 + <0 0 0 2 &pcie_intc0 1>, 240 + <0 0 0 3 &pcie_intc0 2>, 241 + <0 0 0 4 &pcie_intc0 3>; 242 + pcie_intc0: interrupt-controller { 243 + interrupt-controller; 244 + #address-cells = <0>; 244 245 #interrupt-cells = <1>; 245 - ranges; 246 - interrupt-map-mask = <0 0 0 7>; 247 - interrupt-map = <0 0 0 1 &pcie_intc0 0>, 248 - <0 0 0 2 &pcie_intc0 1>, 249 - <0 0 0 3 &pcie_intc0 2>, 250 - <0 0 0 4 &pcie_intc0 3>; 251 - pcie_intc0: interrupt-controller { 252 - interrupt-controller; 253 - #address-cells = <0>; 254 - #interrupt-cells = <1>; 255 - }; 256 246 }; 247 + }; 257 248 258 - pcie1: pcie@1,0 { 259 - reg = <0x0800 0 0 0 0>; 260 - #address-cells = <3>; 261 - #size-cells = <2>; 249 + pcie1: pcie@1a145000 { 250 + compatible = "mediatek,mt7622-pcie"; 251 + device_type = "pci"; 252 + reg = <0 0x1a145000 0 0x1000>; 253 + reg-names = "port1"; 254 + linux,pci-domain = <1>; 255 + #address-cells = <3>; 256 + #size-cells = <2>; 257 + interrupts = <GIC_SPI 229 IRQ_TYPE_LEVEL_LOW>; 258 + interrupt-names = "pcie_irq"; 259 + clocks = <&pciesys CLK_PCIE_P1_MAC_EN>, 260 + /* designer has connect RC1 with p0_ahb clock */ 261 + <&pciesys CLK_PCIE_P0_AHB_EN>, 262 + <&pciesys CLK_PCIE_P1_AUX_EN>, 263 + <&pciesys CLK_PCIE_P1_AXI_EN>, 264 
+ <&pciesys CLK_PCIE_P1_OBFF_EN>, 265 + <&pciesys CLK_PCIE_P1_PIPE_EN>; 266 + clock-names = "sys_ck1", "ahb_ck1", "aux_ck1", 267 + "axi_ck1", "obff_ck1", "pipe_ck1"; 268 + 269 + power-domains = <&scpsys MT7622_POWER_DOMAIN_HIF0>; 270 + bus-range = <0x00 0xff>; 271 + ranges = <0x82000000 0 0x28000000 0x0 0x28000000 0 0x8000000>; 272 + status = "disabled"; 273 + 274 + #interrupt-cells = <1>; 275 + interrupt-map-mask = <0 0 0 7>; 276 + interrupt-map = <0 0 0 1 &pcie_intc1 0>, 277 + <0 0 0 2 &pcie_intc1 1>, 278 + <0 0 0 3 &pcie_intc1 2>, 279 + <0 0 0 4 &pcie_intc1 3>; 280 + pcie_intc1: interrupt-controller { 281 + interrupt-controller; 282 + #address-cells = <0>; 262 283 #interrupt-cells = <1>; 263 - ranges; 264 - interrupt-map-mask = <0 0 0 7>; 265 - interrupt-map = <0 0 0 1 &pcie_intc1 0>, 266 - <0 0 0 2 &pcie_intc1 1>, 267 - <0 0 0 3 &pcie_intc1 2>, 268 - <0 0 0 4 &pcie_intc1 3>; 269 - pcie_intc1: interrupt-controller { 270 - interrupt-controller; 271 - #address-cells = <0>; 272 - #interrupt-cells = <1>; 273 - }; 274 284 }; 275 285 };
+7
Documentation/devicetree/bindings/pci/pci-ep.yaml
···
     default: 1
     maximum: 255
 
+  max-virtual-functions:
+    description: Array representing the number of virtual functions corresponding to each physical
+      function
+    $ref: /schemas/types.yaml#/definitions/uint8-array
+    minItems: 1
+    maxItems: 255
+
   max-link-speed:
     $ref: /schemas/types.yaml#/definitions/uint32
     enum: [ 1, 2, 3, 4 ]
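As a sketch, a hypothetical endpoint-controller node using the new property might look like this (node name, unit address, compatible string, and the function counts are all illustrative, not from this series; `/bits/ 8` is the standard DT syntax for uint8 cells):

```dts
	pcie-ep@d0000000 {
		compatible = "vendor,example-pcie-ep";
		/* two physical functions ... */
		max-functions = /bits/ 8 <2>;
		/* ... each exposing up to four virtual functions */
		max-virtual-functions = /bits/ 8 <4 4>;
	};
```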
+1
Documentation/devicetree/bindings/pci/xilinx-nwl-pcie.txt
···
 
 Optional properties:
 - dma-coherent: present if DMA operations are coherent
+- clocks: Input clock specifier. Refer to common clock bindings
 
 Example:
 ++++++++
+9
MAINTAINERS
···
 F:	Documentation/devicetree/bindings/arm/toshiba.yaml
 F:	Documentation/devicetree/bindings/net/toshiba,visconti-dwmac.yaml
 F:	Documentation/devicetree/bindings/gpio/toshiba,gpio-visconti.yaml
+F:	Documentation/devicetree/bindings/pci/toshiba,visconti-pcie.yaml
 F:	Documentation/devicetree/bindings/pinctrl/toshiba,tmpv7700-pinctrl.yaml
 F:	Documentation/devicetree/bindings/watchdog/toshiba,visconti-wdt.yaml
 F:	arch/arm64/boot/dts/toshiba/
 F:	drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c
+F:	drivers/pci/controller/dwc/pcie-visconti.c
 F:	drivers/gpio/gpio-visconti.c
 F:	drivers/pinctrl/visconti/
 F:	drivers/watchdog/visconti_wdt.c
 N:	visconti
···
 S:	Maintained
 F:	Documentation/devicetree/bindings/pci/hisilicon-histb-pcie.txt
 F:	drivers/pci/controller/dwc/pcie-histb.c
+
+PCIE DRIVER FOR INTEL KEEM BAY
+M:	Srikanth Thokala <srikanth.thokala@intel.com>
+L:	linux-pci@vger.kernel.org
+S:	Supported
+F:	Documentation/devicetree/bindings/pci/intel,keembay-pcie*
+F:	drivers/pci/controller/dwc/pcie-keembay.c
 
 PCIE DRIVER FOR INTEL LGM GW SOC
 M:	Rahul Tanwar <rtanwar@maxlinear.com>
+22 -7
arch/arm64/kernel/pci.c
···
 
 int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
 {
-	if (!acpi_disabled) {
-		struct pci_config_window *cfg = bridge->bus->sysdata;
-		struct acpi_device *adev = to_acpi_device(cfg->parent);
-		struct device *bus_dev = &bridge->bus->dev;
+	struct pci_config_window *cfg;
+	struct acpi_device *adev;
+	struct device *bus_dev;
 
-		ACPI_COMPANION_SET(&bridge->dev, adev);
-		set_dev_node(bus_dev, acpi_get_node(acpi_device_handle(adev)));
-	}
+	if (acpi_disabled)
+		return 0;
+
+	cfg = bridge->bus->sysdata;
+
+	/*
+	 * On Hyper-V there is no corresponding ACPI device for a root bridge,
+	 * therefore ->parent is set as NULL by the driver. And set 'adev' as
+	 * NULL in this case because there is no proper ACPI device.
+	 */
+	if (!cfg->parent)
+		adev = NULL;
+	else
+		adev = to_acpi_device(cfg->parent);
+
+	bus_dev = &bridge->bus->dev;
+
+	ACPI_COMPANION_SET(&bridge->dev, adev);
+	set_dev_node(bus_dev, acpi_get_node(acpi_device_handle(adev)));
 
 	return 0;
 }
+1
arch/x86/pci/numachip.c
···
 
 #include <linux/pci.h>
 #include <asm/pci_x86.h>
+#include <asm/numachip/numachip.h>
 
 static u8 limit __read_mostly;
 
+1 -2
arch/x86/pci/sta2x11-fixup.c
···
 		dev_err(dev, "sta2x11: could not set DMA offset\n");
 
 	dev->bus_dma_limit = max_amba_addr;
-	pci_set_consistent_dma_mask(pdev, max_amba_addr);
-	pci_set_dma_mask(pdev, max_amba_addr);
+	dma_set_mask_and_coherent(&pdev->dev, max_amba_addr);
 
 	/* Configure AHB mapping */
 	pci_write_config_dword(pdev, AHB_PEXLBASE(0), 0);
+1 -3
drivers/crypto/cavium/nitrox/nitrox_main.c
···
 		return -ENOMEM;
 	}
 
-	/* check flr support */
-	if (pcie_has_flr(pdev))
-		pcie_flr(pdev);
+	pcie_reset_flr(pdev, PCI_RESET_DO_RESET);
 
 	pci_restore_state(pdev);
 
+9
drivers/misc/pci_endpoint_test.c
···
 #define FLAG_USE_DMA				BIT(0)
 
 #define PCI_DEVICE_ID_TI_AM654			0xb00c
+#define PCI_DEVICE_ID_TI_J7200			0xb00f
+#define PCI_DEVICE_ID_TI_AM64			0xb010
 #define PCI_DEVICE_ID_LS1088A			0x80c0
 
 #define is_am654_pci_dev(pdev)		\
···
 	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721E),
 	  .driver_data = (kernel_ulong_t)&j721e_data,
 	},
+	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J7200),
+	  .driver_data = (kernel_ulong_t)&j721e_data,
+	},
+	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM64),
+	  .driver_data = (kernel_ulong_t)&j721e_data,
+	},
 	{ }
 };
 MODULE_DEVICE_TABLE(pci, pci_endpoint_test_tbl);
···
 	.id_table	= pci_endpoint_test_tbl,
 	.probe		= pci_endpoint_test_probe,
 	.remove		= pci_endpoint_test_remove,
+	.sriov_configure = pci_sriov_configure_simple,
 };
 module_pci_driver(pci_endpoint_test_driver);
 
+8 -25
drivers/net/ethernet/broadcom/bnx2.c
···
 static void
 bnx2_read_vpd_fw_ver(struct bnx2 *bp)
 {
+	unsigned int len;
 	int rc, i, j;
 	u8 *data;
-	unsigned int block_end, rosize, len;
 
 #define BNX2_VPD_NVRAM_OFFSET	0x300
 #define BNX2_VPD_LEN		128
···
 	for (i = 0; i < BNX2_VPD_LEN; i += 4)
 		swab32s((u32 *)&data[i]);
 
-	i = pci_vpd_find_tag(data, BNX2_VPD_LEN, PCI_VPD_LRDT_RO_DATA);
-	if (i < 0)
-		goto vpd_done;
-
-	rosize = pci_vpd_lrdt_size(&data[i]);
-	i += PCI_VPD_LRDT_TAG_SIZE;
-	block_end = i + rosize;
-
-	if (block_end > BNX2_VPD_LEN)
-		goto vpd_done;
-
-	j = pci_vpd_find_info_keyword(data, i, rosize,
-				      PCI_VPD_RO_KEYWORD_MFR_ID);
+	j = pci_vpd_find_ro_info_keyword(data, BNX2_VPD_LEN,
+					 PCI_VPD_RO_KEYWORD_MFR_ID, &len);
 	if (j < 0)
 		goto vpd_done;
 
-	len = pci_vpd_info_field_size(&data[j]);
-
-	j += PCI_VPD_INFO_FLD_HDR_SIZE;
-	if (j + len > block_end || len != 4 ||
-	    memcmp(&data[j], "1028", 4))
+	if (len != 4 || memcmp(&data[j], "1028", 4))
 		goto vpd_done;
 
-	j = pci_vpd_find_info_keyword(data, i, rosize,
-				      PCI_VPD_RO_KEYWORD_VENDOR0);
+	j = pci_vpd_find_ro_info_keyword(data, BNX2_VPD_LEN,
+					 PCI_VPD_RO_KEYWORD_VENDOR0,
+					 &len);
 	if (j < 0)
 		goto vpd_done;
 
-	len = pci_vpd_info_field_size(&data[j]);
-
-	j += PCI_VPD_INFO_FLD_HDR_SIZE;
-	if (j + len > block_end || len > BNX2_MAX_VER_SLEN)
+	if (len > BNX2_MAX_VER_SLEN)
 		goto vpd_done;
 
 	memcpy(bp->fw_version, &data[j], len);
-1
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
···
 #define ETH_MAX_RX_CLIENTS_E2		ETH_MAX_RX_CLIENTS_E1H
 #endif
 
-#define BNX2X_VPD_LEN			128
 #define VENDOR_ID_LEN			4
 
 #define VF_ACQUIRE_THRESH		3
+20 -71
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
···
 
 static void bnx2x_read_fwinfo(struct bnx2x *bp)
 {
-	int cnt, i, block_end, rodi;
-	char vpd_start[BNX2X_VPD_LEN+1];
-	char str_id_reg[VENDOR_ID_LEN+1];
-	char str_id_cap[VENDOR_ID_LEN+1];
-	char *vpd_data;
-	char *vpd_extended_data = NULL;
-	u8 len;
+	char str_id[VENDOR_ID_LEN + 1];
+	unsigned int vpd_len, kw_len;
+	u8 *vpd_data;
+	int rodi;
 
-	cnt = pci_read_vpd(bp->pdev, 0, BNX2X_VPD_LEN, vpd_start);
 	memset(bp->fw_ver, 0, sizeof(bp->fw_ver));
 
-	if (cnt < BNX2X_VPD_LEN)
+	vpd_data = pci_vpd_alloc(bp->pdev, &vpd_len);
+	if (IS_ERR(vpd_data))
+		return;
+
+	rodi = pci_vpd_find_ro_info_keyword(vpd_data, vpd_len,
+					    PCI_VPD_RO_KEYWORD_MFR_ID, &kw_len);
+	if (rodi < 0 || kw_len != VENDOR_ID_LEN)
 		goto out_not_found;
-
-	/* VPD RO tag should be first tag after identifier string, hence
-	 * we should be able to find it in first BNX2X_VPD_LEN chars
-	 */
-	i = pci_vpd_find_tag(vpd_start, BNX2X_VPD_LEN, PCI_VPD_LRDT_RO_DATA);
-	if (i < 0)
-		goto out_not_found;
-
-	block_end = i + PCI_VPD_LRDT_TAG_SIZE +
-		    pci_vpd_lrdt_size(&vpd_start[i]);
-
-	i += PCI_VPD_LRDT_TAG_SIZE;
-
-	if (block_end > BNX2X_VPD_LEN) {
-		vpd_extended_data = kmalloc(block_end, GFP_KERNEL);
-		if (vpd_extended_data == NULL)
-			goto out_not_found;
-
-		/* read rest of vpd image into vpd_extended_data */
-		memcpy(vpd_extended_data, vpd_start, BNX2X_VPD_LEN);
-		cnt = pci_read_vpd(bp->pdev, BNX2X_VPD_LEN,
-				   block_end - BNX2X_VPD_LEN,
-				   vpd_extended_data + BNX2X_VPD_LEN);
-		if (cnt < (block_end - BNX2X_VPD_LEN))
-			goto out_not_found;
-		vpd_data = vpd_extended_data;
-	} else
-		vpd_data = vpd_start;
-
-	/* now vpd_data holds full vpd content in both cases */
-
-	rodi = pci_vpd_find_info_keyword(vpd_data, i, block_end,
-					 PCI_VPD_RO_KEYWORD_MFR_ID);
-	if (rodi < 0)
-		goto out_not_found;
-
-	len = pci_vpd_info_field_size(&vpd_data[rodi]);
-
-	if (len != VENDOR_ID_LEN)
-		goto out_not_found;
-
-	rodi += PCI_VPD_INFO_FLD_HDR_SIZE;
 
 	/* vendor specific info */
-	snprintf(str_id_reg, VENDOR_ID_LEN + 1, "%04x", PCI_VENDOR_ID_DELL);
-	snprintf(str_id_cap, VENDOR_ID_LEN + 1, "%04X", PCI_VENDOR_ID_DELL);
-	if (!strncmp(str_id_reg, &vpd_data[rodi], VENDOR_ID_LEN) ||
-	    !strncmp(str_id_cap, &vpd_data[rodi], VENDOR_ID_LEN)) {
-
-		rodi = pci_vpd_find_info_keyword(vpd_data, i, block_end,
-						 PCI_VPD_RO_KEYWORD_VENDOR0);
-		if (rodi >= 0) {
-			len = pci_vpd_info_field_size(&vpd_data[rodi]);
-
-			rodi += PCI_VPD_INFO_FLD_HDR_SIZE;
-
-			if (len < 32 && (len + rodi) <= BNX2X_VPD_LEN) {
-				memcpy(bp->fw_ver, &vpd_data[rodi], len);
-				bp->fw_ver[len] = ' ';
-			}
+	snprintf(str_id, VENDOR_ID_LEN + 1, "%04x", PCI_VENDOR_ID_DELL);
+	if (!strncasecmp(str_id, &vpd_data[rodi], VENDOR_ID_LEN)) {
+		rodi = pci_vpd_find_ro_info_keyword(vpd_data, vpd_len,
+						    PCI_VPD_RO_KEYWORD_VENDOR0,
+						    &kw_len);
+		if (rodi >= 0 && kw_len < sizeof(bp->fw_ver)) {
+			memcpy(bp->fw_ver, &vpd_data[rodi], kw_len);
+			bp->fw_ver[kw_len] = ' ';
 		}
-		kfree(vpd_extended_data);
-		return;
 	}
 out_not_found:
-	kfree(vpd_extended_data);
-	return;
+	kfree(vpd_data);
 }
 
 static void bnx2x_set_modes_bitmap(struct bnx2x *bp)
+12 -43
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
 	return rc;
 }

-#define BNXT_VPD_LEN	512
 static void bnxt_vpd_read_info(struct bnxt *bp)
 {
 	struct pci_dev *pdev = bp->pdev;
-	int i, len, pos, ro_size, size;
-	ssize_t vpd_size;
+	unsigned int vpd_size, kw_len;
+	int pos, size;
 	u8 *vpd_data;

-	vpd_data = kmalloc(BNXT_VPD_LEN, GFP_KERNEL);
-	if (!vpd_data)
+	vpd_data = pci_vpd_alloc(pdev, &vpd_size);
+	if (IS_ERR(vpd_data)) {
+		pci_warn(pdev, "Unable to read VPD\n");
 		return;
-
-	vpd_size = pci_read_vpd(pdev, 0, BNXT_VPD_LEN, vpd_data);
-	if (vpd_size <= 0) {
-		netdev_err(bp->dev, "Unable to read VPD\n");
-		goto exit;
 	}

-	i = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA);
-	if (i < 0) {
-		netdev_err(bp->dev, "VPD READ-Only not found\n");
-		goto exit;
-	}
-
-	i = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA);
-	if (i < 0) {
-		netdev_err(bp->dev, "VPD READ-Only not found\n");
-		goto exit;
-	}
-
-	ro_size = pci_vpd_lrdt_size(&vpd_data[i]);
-	i += PCI_VPD_LRDT_TAG_SIZE;
-	if (i + ro_size > vpd_size)
-		goto exit;
-
-	pos = pci_vpd_find_info_keyword(vpd_data, i, ro_size,
-					PCI_VPD_RO_KEYWORD_PARTNO);
+	pos = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
+					   PCI_VPD_RO_KEYWORD_PARTNO, &kw_len);
 	if (pos < 0)
 		goto read_sn;

-	len = pci_vpd_info_field_size(&vpd_data[pos]);
-	pos += PCI_VPD_INFO_FLD_HDR_SIZE;
-	if (len + pos > vpd_size)
-		goto read_sn;
-
-	size = min(len, BNXT_VPD_FLD_LEN - 1);
+	size = min_t(int, kw_len, BNXT_VPD_FLD_LEN - 1);
 	memcpy(bp->board_partno, &vpd_data[pos], size);

 read_sn:
-	pos = pci_vpd_find_info_keyword(vpd_data, i, ro_size,
-					PCI_VPD_RO_KEYWORD_SERIALNO);
+	pos = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
+					   PCI_VPD_RO_KEYWORD_SERIALNO,
+					   &kw_len);
 	if (pos < 0)
 		goto exit;

-	len = pci_vpd_info_field_size(&vpd_data[pos]);
-	pos += PCI_VPD_INFO_FLD_HDR_SIZE;
-	if (len + pos > vpd_size)
-		goto exit;
-
-	size = min(len, BNXT_VPD_FLD_LEN - 1);
+	size = min_t(int, kw_len, BNXT_VPD_FLD_LEN - 1);
 	memcpy(bp->board_serialno, &vpd_data[pos], size);
 exit:
 	kfree(vpd_data);
+29 -86
drivers/net/ethernet/broadcom/tg3.c
···
 	memset(tmp_stats, 0, sizeof(struct tg3_ethtool_stats));
 }

-static __be32 *tg3_vpd_readblock(struct tg3 *tp, u32 *vpdlen)
+static __be32 *tg3_vpd_readblock(struct tg3 *tp, unsigned int *vpdlen)
 {
 	int i;
 	__be32 *buf;
···
 			offset = TG3_NVM_VPD_OFF;
 			len = TG3_NVM_VPD_LEN;
 		}
-	} else {
-		len = TG3_NVM_PCI_VPD_MAX_LEN;
-	}

-	buf = kmalloc(len, GFP_KERNEL);
-	if (buf == NULL)
-		return NULL;
+		buf = kmalloc(len, GFP_KERNEL);
+		if (!buf)
+			return NULL;

-	if (magic == TG3_EEPROM_MAGIC) {
 		for (i = 0; i < len; i += 4) {
 			/* The data is in little-endian format in NVRAM.
 			 * Use the big-endian read routines to preserve
···
 		}
 		*vpdlen = len;
 	} else {
-		ssize_t cnt;
-
-		cnt = pci_read_vpd(tp->pdev, 0, len, (u8 *)buf);
-		if (cnt < 0)
-			goto error;
-		*vpdlen = cnt;
+		buf = pci_vpd_alloc(tp->pdev, vpdlen);
+		if (IS_ERR(buf))
+			return NULL;
 	}

 	return buf;
···

 static int tg3_test_nvram(struct tg3 *tp)
 {
-	u32 csum, magic, len;
+	u32 csum, magic;
 	__be32 *buf;
 	int i, j, k, err = 0, size;
+	unsigned int len;

 	if (tg3_flag(tp, NO_NVRAM))
 		return 0;
···
 	if (!buf)
 		return -ENOMEM;

-	i = pci_vpd_find_tag((u8 *)buf, len, PCI_VPD_LRDT_RO_DATA);
-	if (i > 0) {
-		j = pci_vpd_lrdt_size(&((u8 *)buf)[i]);
-		if (j < 0)
-			goto out;
-
-		if (i + PCI_VPD_LRDT_TAG_SIZE + j > len)
-			goto out;
-
-		i += PCI_VPD_LRDT_TAG_SIZE;
-		j = pci_vpd_find_info_keyword((u8 *)buf, i, j,
-					      PCI_VPD_RO_KEYWORD_CHKSUM);
-		if (j > 0) {
-			u8 csum8 = 0;
-
-			j += PCI_VPD_INFO_FLD_HDR_SIZE;
-
-			for (i = 0; i <= j; i++)
-				csum8 += ((u8 *)buf)[i];
-
-			if (csum8)
-				goto out;
-		}
-	}
-
-	err = 0;
-
+	err = pci_vpd_check_csum(buf, len);
+	/* go on if no checksum found */
+	if (err == 1)
+		err = 0;
 out:
 	kfree(buf);
 	return err;
···
 static void tg3_read_vpd(struct tg3 *tp)
 {
 	u8 *vpd_data;
-	unsigned int block_end, rosize, len;
-	u32 vpdlen;
-	int j, i = 0;
+	unsigned int len, vpdlen;
+	int i;

 	vpd_data = (u8 *)tg3_vpd_readblock(tp, &vpdlen);
 	if (!vpd_data)
 		goto out_no_vpd;

-	i = pci_vpd_find_tag(vpd_data, vpdlen, PCI_VPD_LRDT_RO_DATA);
+	i = pci_vpd_find_ro_info_keyword(vpd_data, vpdlen,
+					 PCI_VPD_RO_KEYWORD_MFR_ID, &len);
 	if (i < 0)
-		goto out_not_found;
+		goto partno;

-	rosize = pci_vpd_lrdt_size(&vpd_data[i]);
-	block_end = i + PCI_VPD_LRDT_TAG_SIZE + rosize;
-	i += PCI_VPD_LRDT_TAG_SIZE;
+	if (len != 4 || memcmp(vpd_data + i, "1028", 4))
+		goto partno;

-	if (block_end > vpdlen)
-		goto out_not_found;
+	i = pci_vpd_find_ro_info_keyword(vpd_data, vpdlen,
+					 PCI_VPD_RO_KEYWORD_VENDOR0, &len);
+	if (i < 0)
+		goto partno;

-	j = pci_vpd_find_info_keyword(vpd_data, i, rosize,
-				      PCI_VPD_RO_KEYWORD_MFR_ID);
-	if (j > 0) {
-		len = pci_vpd_info_field_size(&vpd_data[j]);
-
-		j += PCI_VPD_INFO_FLD_HDR_SIZE;
-		if (j + len > block_end || len != 4 ||
-		    memcmp(&vpd_data[j], "1028", 4))
-			goto partno;
-
-		j = pci_vpd_find_info_keyword(vpd_data, i, rosize,
-					      PCI_VPD_RO_KEYWORD_VENDOR0);
-		if (j < 0)
-			goto partno;
-
-		len = pci_vpd_info_field_size(&vpd_data[j]);
-
-		j += PCI_VPD_INFO_FLD_HDR_SIZE;
-		if (j + len > block_end)
-			goto partno;
-
-		if (len >= sizeof(tp->fw_ver))
-			len = sizeof(tp->fw_ver) - 1;
-		memset(tp->fw_ver, 0, sizeof(tp->fw_ver));
-		snprintf(tp->fw_ver, sizeof(tp->fw_ver), "%.*s bc ", len,
-			 &vpd_data[j]);
-	}
+	memset(tp->fw_ver, 0, sizeof(tp->fw_ver));
+	snprintf(tp->fw_ver, sizeof(tp->fw_ver), "%.*s bc ", len, vpd_data + i);

 partno:
-	i = pci_vpd_find_info_keyword(vpd_data, i, rosize,
-				      PCI_VPD_RO_KEYWORD_PARTNO);
+	i = pci_vpd_find_ro_info_keyword(vpd_data, vpdlen,
+					 PCI_VPD_RO_KEYWORD_PARTNO, &len);
 	if (i < 0)
 		goto out_not_found;

-	len = pci_vpd_info_field_size(&vpd_data[i]);
-
-	i += PCI_VPD_INFO_FLD_HDR_SIZE;
-	if (len > TG3_BPN_SIZE ||
-	    (len + i) > vpdlen)
+	if (len > TG3_BPN_SIZE)
 		goto out_not_found;

 	memcpy(tp->board_part_number, &vpd_data[i], len);
-1
drivers/net/ethernet/broadcom/tg3.h
···
 /* Hardware Legacy NVRAM layout */
 #define TG3_NVM_VPD_OFF		0x100
 #define TG3_NVM_VPD_LEN		256
-#define TG3_NVM_PCI_VPD_MAX_LEN	512

 /* Hardware Selfboot NVRAM layout */
 #define TG3_NVM_HWSB_CFG1	0x00000004
+1 -1
drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
···
 		oct->irq_name_storage = NULL;
 	}
 	/* Soft reset the octeon device before exiting */
-	if (oct->pci_dev->reset_fn)
+	if (!pcie_reset_flr(oct->pci_dev, PCI_RESET_PROBE))
 		octeon_pci_flr(oct);
 	else
 		cn23xx_vf_ask_pf_to_do_flr(oct);
-2
drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
···
 enum {
 	MAX_NPORTS	= 4,	/* max # of ports */
 	SERNUM_LEN	= 24,	/* Serial # length */
-	EC_LEN		= 16,	/* E/C length */
 	ID_LEN		= 16,	/* ID length */
 	PN_LEN		= 16,	/* Part Number length */
 	MACADDR_LEN	= 12,	/* MAC Address length */
···

 struct vpd_params {
 	unsigned int cclk;
-	u8 ec[EC_LEN + 1];
 	u8 sn[SERNUM_LEN + 1];
 	u8 id[ID_LEN + 1];
 	u8 pn[PN_LEN + 1];
+32 -55
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
···
  */
 int t4_get_raw_vpd_params(struct adapter *adapter, struct vpd_params *p)
 {
-	int i, ret = 0, addr;
-	int ec, sn, pn, na;
-	u8 *vpd, csum, base_val = 0;
-	unsigned int vpdr_len, kw_offset, id_len;
+	unsigned int id_len, pn_len, sn_len, na_len;
+	int id, sn, pn, na, addr, ret = 0;
+	u8 *vpd, base_val = 0;

 	vpd = vmalloc(VPD_LEN);
 	if (!vpd)
···
 	if (ret < 0)
 		goto out;

-	if (vpd[0] != PCI_VPD_LRDT_ID_STRING) {
-		dev_err(adapter->pdev_dev, "missing VPD ID string\n");
+	ret = pci_vpd_find_id_string(vpd, VPD_LEN, &id_len);
+	if (ret < 0)
+		goto out;
+	id = ret;
+
+	ret = pci_vpd_check_csum(vpd, VPD_LEN);
+	if (ret) {
+		dev_err(adapter->pdev_dev, "VPD checksum incorrect or missing\n");
 		ret = -EINVAL;
 		goto out;
 	}

-	id_len = pci_vpd_lrdt_size(vpd);
-	if (id_len > ID_LEN)
-		id_len = ID_LEN;
-
-	i = pci_vpd_find_tag(vpd, VPD_LEN, PCI_VPD_LRDT_RO_DATA);
-	if (i < 0) {
-		dev_err(adapter->pdev_dev, "missing VPD-R section\n");
-		ret = -EINVAL;
+	ret = pci_vpd_find_ro_info_keyword(vpd, VPD_LEN,
+					   PCI_VPD_RO_KEYWORD_SERIALNO, &sn_len);
+	if (ret < 0)
 		goto out;
-	}
+	sn = ret;

-	vpdr_len = pci_vpd_lrdt_size(&vpd[i]);
-	kw_offset = i + PCI_VPD_LRDT_TAG_SIZE;
-	if (vpdr_len + kw_offset > VPD_LEN) {
-		dev_err(adapter->pdev_dev, "bad VPD-R length %u\n", vpdr_len);
-		ret = -EINVAL;
+	ret = pci_vpd_find_ro_info_keyword(vpd, VPD_LEN,
+					   PCI_VPD_RO_KEYWORD_PARTNO, &pn_len);
+	if (ret < 0)
 		goto out;
-	}
+	pn = ret;

-#define FIND_VPD_KW(var, name) do { \
-	var = pci_vpd_find_info_keyword(vpd, kw_offset, vpdr_len, name); \
-	if (var < 0) { \
-		dev_err(adapter->pdev_dev, "missing VPD keyword " name "\n"); \
-		ret = -EINVAL; \
-		goto out; \
-	} \
-	var += PCI_VPD_INFO_FLD_HDR_SIZE; \
-} while (0)
-
-	FIND_VPD_KW(i, "RV");
-	for (csum = 0; i >= 0; i--)
-		csum += vpd[i];
-
-	if (csum) {
-		dev_err(adapter->pdev_dev,
-			"corrupted VPD EEPROM, actual csum %u\n", csum);
-		ret = -EINVAL;
+	ret = pci_vpd_find_ro_info_keyword(vpd, VPD_LEN, "NA", &na_len);
+	if (ret < 0)
 		goto out;
-	}
+	na = ret;

-	FIND_VPD_KW(ec, "EC");
-	FIND_VPD_KW(sn, "SN");
-	FIND_VPD_KW(pn, "PN");
-	FIND_VPD_KW(na, "NA");
-#undef FIND_VPD_KW
-
-	memcpy(p->id, vpd + PCI_VPD_LRDT_TAG_SIZE, id_len);
+	memcpy(p->id, vpd + id, min_t(int, id_len, ID_LEN));
 	strim(p->id);
-	memcpy(p->ec, vpd + ec, EC_LEN);
-	strim(p->ec);
-	i = pci_vpd_info_field_size(vpd + sn - PCI_VPD_INFO_FLD_HDR_SIZE);
-	memcpy(p->sn, vpd + sn, min(i, SERNUM_LEN));
+	memcpy(p->sn, vpd + sn, min_t(int, sn_len, SERNUM_LEN));
 	strim(p->sn);
-	i = pci_vpd_info_field_size(vpd + pn - PCI_VPD_INFO_FLD_HDR_SIZE);
-	memcpy(p->pn, vpd + pn, min(i, PN_LEN));
+	memcpy(p->pn, vpd + pn, min_t(int, pn_len, PN_LEN));
 	strim(p->pn);
-	memcpy(p->na, vpd + na, min(i, MACADDR_LEN));
+	memcpy(p->na, vpd + na, min_t(int, na_len, MACADDR_LEN));
 	strim((char *)p->na);

 out:
 	vfree(vpd);
-	return ret < 0 ? ret : 0;
+	if (ret < 0) {
+		dev_err(adapter->pdev_dev, "error reading VPD\n");
+		return ret;
+	}
+
+	return 0;
 }

 /**
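The hand-rolled "RV" loop that t4_hw.c (and tg3 above) used to carry is replaced by pci_vpd_check_csum(). The rule itself is simple: every byte from the start of the VPD image up to and including the RV data byte must sum to zero modulo 256. A standalone sketch under that assumption (helper name and sample buffer are invented, not kernel code):

```c
#include <assert.h>
#include <stddef.h>

/* VPD "RV" checksum rule: bytes from offset 0 through the RV checksum
 * byte (at rv_data_off) must sum to zero modulo 256. */
static int vpd_csum_ok(const unsigned char *buf, size_t rv_data_off)
{
	unsigned char sum = 0;
	size_t i;

	for (i = 0; i <= rv_data_off; i++)
		sum += buf[i];
	return sum == 0;
}

/* minimal image: ID string "X", then a VPD-R holding only the RV record;
 * 0xe8 is chosen so the running sum wraps to zero at the RV byte */
static const unsigned char csum_sample[] = {
	0x82, 0x01, 0x00, 'X',
	0x90, 0x04, 0x00, 'R', 'V', 0x01, 0xe8,
};
```

Here the RV data byte sits at offset 10; summing any shorter prefix leaves a nonzero residue, which is exactly the "corrupted VPD EEPROM" case the old cxgb4 code reported.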
+20 -58
drivers/net/ethernet/sfc/efx.c
···

 /* NIC VPD information
  * Called during probe to display the part number of the
- * installed NIC. VPD is potentially very large but this should
- * always appear within the first 512 bytes.
+ * installed NIC.
  */
-#define SFC_VPD_LEN 512
 static void efx_probe_vpd_strings(struct efx_nic *efx)
 {
 	struct pci_dev *dev = efx->pci_dev;
-	char vpd_data[SFC_VPD_LEN];
-	ssize_t vpd_size;
-	int ro_start, ro_size, i, j;
+	unsigned int vpd_size, kw_len;
+	u8 *vpd_data;
+	int start;

-	/* Get the vpd data from the device */
-	vpd_size = pci_read_vpd(dev, 0, sizeof(vpd_data), vpd_data);
-	if (vpd_size <= 0) {
-		netif_err(efx, drv, efx->net_dev, "Unable to read VPD\n");
+	vpd_data = pci_vpd_alloc(dev, &vpd_size);
+	if (IS_ERR(vpd_data)) {
+		pci_warn(dev, "Unable to read VPD\n");
 		return;
 	}

-	/* Get the Read only section */
-	ro_start = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA);
-	if (ro_start < 0) {
-		netif_err(efx, drv, efx->net_dev, "VPD Read-only not found\n");
-		return;
-	}
+	start = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
+					     PCI_VPD_RO_KEYWORD_PARTNO, &kw_len);
+	if (start < 0)
+		pci_err(dev, "Part number not found or incomplete\n");
+	else
+		pci_info(dev, "Part Number : %.*s\n", kw_len, vpd_data + start);

-	ro_size = pci_vpd_lrdt_size(&vpd_data[ro_start]);
-	j = ro_size;
-	i = ro_start + PCI_VPD_LRDT_TAG_SIZE;
-	if (i + j > vpd_size)
-		j = vpd_size - i;
+	start = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
+					     PCI_VPD_RO_KEYWORD_SERIALNO, &kw_len);
+	if (start < 0)
+		pci_err(dev, "Serial number not found or incomplete\n");
+	else
+		efx->vpd_sn = kmemdup_nul(vpd_data + start, kw_len, GFP_KERNEL);

-	/* Get the Part number */
-	i = pci_vpd_find_info_keyword(vpd_data, i, j, "PN");
-	if (i < 0) {
-		netif_err(efx, drv, efx->net_dev, "Part number not found\n");
-		return;
-	}
-
-	j = pci_vpd_info_field_size(&vpd_data[i]);
-	i += PCI_VPD_INFO_FLD_HDR_SIZE;
-	if (i + j > vpd_size) {
-		netif_err(efx, drv, efx->net_dev, "Incomplete part number\n");
-		return;
-	}
-
-	netif_info(efx, drv, efx->net_dev,
-		   "Part Number : %.*s\n", j, &vpd_data[i]);
-
-	i = ro_start + PCI_VPD_LRDT_TAG_SIZE;
-	j = ro_size;
-	i = pci_vpd_find_info_keyword(vpd_data, i, j, "SN");
-	if (i < 0) {
-		netif_err(efx, drv, efx->net_dev, "Serial number not found\n");
-		return;
-	}
-
-	j = pci_vpd_info_field_size(&vpd_data[i]);
-	i += PCI_VPD_INFO_FLD_HDR_SIZE;
-	if (i + j > vpd_size) {
-		netif_err(efx, drv, efx->net_dev, "Incomplete serial number\n");
-		return;
-	}
-
-	efx->vpd_sn = kmalloc(j + 1, GFP_KERNEL);
-	if (!efx->vpd_sn)
-		return;
-
-	snprintf(efx->vpd_sn, j + 1, "%s", &vpd_data[i]);
+	kfree(vpd_data);
 }

+20 -59
drivers/net/ethernet/sfc/falcon/efx.c
···
 };

 /* NIC VPD information
- * Called during probe to display the part number of the
- * installed NIC. VPD is potentially very large but this should
- * always appear within the first 512 bytes.
+ * Called during probe to display the part number of the installed NIC.
  */
-#define SFC_VPD_LEN 512
 static void ef4_probe_vpd_strings(struct ef4_nic *efx)
 {
 	struct pci_dev *dev = efx->pci_dev;
-	char vpd_data[SFC_VPD_LEN];
-	ssize_t vpd_size;
-	int ro_start, ro_size, i, j;
+	unsigned int vpd_size, kw_len;
+	u8 *vpd_data;
+	int start;

-	/* Get the vpd data from the device */
-	vpd_size = pci_read_vpd(dev, 0, sizeof(vpd_data), vpd_data);
-	if (vpd_size <= 0) {
-		netif_err(efx, drv, efx->net_dev, "Unable to read VPD\n");
+	vpd_data = pci_vpd_alloc(dev, &vpd_size);
+	if (IS_ERR(vpd_data)) {
+		pci_warn(dev, "Unable to read VPD\n");
 		return;
 	}

-	/* Get the Read only section */
-	ro_start = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA);
-	if (ro_start < 0) {
-		netif_err(efx, drv, efx->net_dev, "VPD Read-only not found\n");
-		return;
-	}
+	start = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
+					     PCI_VPD_RO_KEYWORD_PARTNO, &kw_len);
+	if (start < 0)
+		pci_warn(dev, "Part number not found or incomplete\n");
+	else
+		pci_info(dev, "Part Number : %.*s\n", kw_len, vpd_data + start);

-	ro_size = pci_vpd_lrdt_size(&vpd_data[ro_start]);
-	j = ro_size;
-	i = ro_start + PCI_VPD_LRDT_TAG_SIZE;
-	if (i + j > vpd_size)
-		j = vpd_size - i;
+	start = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
+					     PCI_VPD_RO_KEYWORD_SERIALNO, &kw_len);
+	if (start < 0)
+		pci_warn(dev, "Serial number not found or incomplete\n");
+	else
+		efx->vpd_sn = kmemdup_nul(vpd_data + start, kw_len, GFP_KERNEL);

-	/* Get the Part number */
-	i = pci_vpd_find_info_keyword(vpd_data, i, j, "PN");
-	if (i < 0) {
-		netif_err(efx, drv, efx->net_dev, "Part number not found\n");
-		return;
-	}
-
-	j = pci_vpd_info_field_size(&vpd_data[i]);
-	i += PCI_VPD_INFO_FLD_HDR_SIZE;
-	if (i + j > vpd_size) {
-		netif_err(efx, drv, efx->net_dev, "Incomplete part number\n");
-		return;
-	}
-
-	netif_info(efx, drv, efx->net_dev,
-		   "Part Number : %.*s\n", j, &vpd_data[i]);
-
-	i = ro_start + PCI_VPD_LRDT_TAG_SIZE;
-	j = ro_size;
-	i = pci_vpd_find_info_keyword(vpd_data, i, j, "SN");
-	if (i < 0) {
-		netif_err(efx, drv, efx->net_dev, "Serial number not found\n");
-		return;
-	}
-
-	j = pci_vpd_info_field_size(&vpd_data[i]);
-	i += PCI_VPD_INFO_FLD_HDR_SIZE;
-	if (i + j > vpd_size) {
-		netif_err(efx, drv, efx->net_dev, "Incomplete serial number\n");
-		return;
-	}
-
-	efx->vpd_sn = kmalloc(j + 1, GFP_KERNEL);
-	if (!efx->vpd_sn)
-		return;
-
-	snprintf(efx->vpd_sn, j + 1, "%s", &vpd_data[i]);
+	kfree(vpd_data);
 }

+1 -1
drivers/pci/ats.c
···
 	if (WARN_ON(pdev->pasid_enabled))
 		return -EBUSY;

-	if (!pdev->eetlp_prefix_path)
+	if (!pdev->eetlp_prefix_path && !pdev->pasid_no_tlp)
 		return -EINVAL;

 	if (!pasid)
+1
drivers/pci/controller/Kconfig
···
 config PCI_IXP4XX
 	bool "Intel IXP4xx PCI controller"
 	depends on ARM && OF
+	depends on ARCH_IXP4XX || COMPILE_TEST
 	default ARCH_IXP4XX
 	help
 	  Say Y here if you want support for the PCI host controller found
+56 -5
drivers/pci/controller/cadence/pci-j721e.c
···
 #define STATUS_REG_SYS_2	0x508
 #define STATUS_CLR_REG_SYS_2	0x708
 #define LINK_DOWN		BIT(1)
+#define J7200_LINK_DOWN		BIT(10)

 #define J721E_PCIE_USER_CMD_STATUS	0x4
 #define LINK_TRAINING_ENABLE	BIT(0)
···
 	struct cdns_pcie	*cdns_pcie;
 	void __iomem		*user_cfg_base;
 	void __iomem		*intd_cfg_base;
+	u32			linkdown_irq_regfield;
 };

 enum j721e_pcie_mode {
···

 struct j721e_pcie_data {
 	enum j721e_pcie_mode	mode;
-	bool quirk_retrain_flag;
+	unsigned int		quirk_retrain_flag:1;
+	unsigned int		quirk_detect_quiet_flag:1;
+	u32			linkdown_irq_regfield;
+	unsigned int		byte_access_allowed:1;
 };

 static inline u32 j721e_pcie_user_readl(struct j721e_pcie *pcie, u32 offset)
···
 	u32 reg;

 	reg = j721e_pcie_intd_readl(pcie, STATUS_REG_SYS_2);
-	if (!(reg & LINK_DOWN))
+	if (!(reg & pcie->linkdown_irq_regfield))
 		return IRQ_NONE;

 	dev_err(dev, "LINK DOWN!\n");

-	j721e_pcie_intd_writel(pcie, STATUS_CLR_REG_SYS_2, LINK_DOWN);
+	j721e_pcie_intd_writel(pcie, STATUS_CLR_REG_SYS_2, pcie->linkdown_irq_regfield);
 	return IRQ_HANDLED;
 }
···
 	u32 reg;

 	reg = j721e_pcie_intd_readl(pcie, ENABLE_REG_SYS_2);
-	reg |= LINK_DOWN;
+	reg |= pcie->linkdown_irq_regfield;
 	j721e_pcie_intd_writel(pcie, ENABLE_REG_SYS_2, reg);
 }
···
 static const struct j721e_pcie_data j721e_pcie_rc_data = {
 	.mode = PCI_MODE_RC,
 	.quirk_retrain_flag = true,
+	.byte_access_allowed = false,
+	.linkdown_irq_regfield = LINK_DOWN,
 };

 static const struct j721e_pcie_data j721e_pcie_ep_data = {
 	.mode = PCI_MODE_EP,
+	.linkdown_irq_regfield = LINK_DOWN,
+};
+
+static const struct j721e_pcie_data j7200_pcie_rc_data = {
+	.mode = PCI_MODE_RC,
+	.quirk_detect_quiet_flag = true,
+	.linkdown_irq_regfield = J7200_LINK_DOWN,
+	.byte_access_allowed = true,
+};
+
+static const struct j721e_pcie_data j7200_pcie_ep_data = {
+	.mode = PCI_MODE_EP,
+	.quirk_detect_quiet_flag = true,
+};
+
+static const struct j721e_pcie_data am64_pcie_rc_data = {
+	.mode = PCI_MODE_RC,
+	.linkdown_irq_regfield = J7200_LINK_DOWN,
+	.byte_access_allowed = true,
+};
+
+static const struct j721e_pcie_data am64_pcie_ep_data = {
+	.mode = PCI_MODE_EP,
+	.linkdown_irq_regfield = J7200_LINK_DOWN,
 };

 static const struct of_device_id of_j721e_pcie_match[] = {
···
 	{
 		.compatible = "ti,j721e-pcie-ep",
 		.data = &j721e_pcie_ep_data,
+	},
+	{
+		.compatible = "ti,j7200-pcie-host",
+		.data = &j7200_pcie_rc_data,
+	},
+	{
+		.compatible = "ti,j7200-pcie-ep",
+		.data = &j7200_pcie_ep_data,
+	},
+	{
+		.compatible = "ti,am64-pcie-host",
+		.data = &am64_pcie_rc_data,
+	},
+	{
+		.compatible = "ti,am64-pcie-ep",
+		.data = &am64_pcie_ep_data,
 	},
 	{},
 };
···

 	pcie->dev = dev;
 	pcie->mode = mode;
+	pcie->linkdown_irq_regfield = data->linkdown_irq_regfield;

 	base = devm_platform_ioremap_resource_byname(pdev, "intd_cfg");
 	if (IS_ERR(base))
···
 		goto err_get_sync;
 	}

-	bridge->ops = &cdns_ti_pcie_host_ops;
+	if (!data->byte_access_allowed)
+		bridge->ops = &cdns_ti_pcie_host_ops;
 	rc = pci_host_bridge_priv(bridge);
 	rc->quirk_retrain_flag = data->quirk_retrain_flag;
+	rc->quirk_detect_quiet_flag = data->quirk_detect_quiet_flag;

 	cdns_pcie = &rc->pcie;
 	cdns_pcie->dev = dev;
···
 		ret = -ENOMEM;
 		goto err_get_sync;
 	}
+	ep->quirk_detect_quiet_flag = data->quirk_detect_quiet_flag;

 	cdns_pcie = &ep->pcie;
 	cdns_pcie->dev = dev;
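The j721e change moves per-SoC differences (link-down IRQ bit, byte access, quirks) into match data selected by compatible string. That pattern, reduced to plain C (the struct and values only echo the diff; `soc_lookup` is an invented stand-in for the kernel's of_device_get_match_data()):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct soc_data {
	const char *compatible;
	unsigned int linkdown_irq_regfield;	/* BIT(1) or BIT(10) */
	unsigned int byte_access_allowed;
};

/* table mirrors the RC entries added to of_j721e_pcie_match[] */
static const struct soc_data soc_match[] = {
	{ "ti,j721e-pcie-host", 1u << 1,  0 },
	{ "ti,j7200-pcie-host", 1u << 10, 1 },
	{ "ti,am64-pcie-host",  1u << 10, 1 },
};

/* linear scan by compatible string, as the OF match machinery does */
static const struct soc_data *soc_lookup(const char *compat)
{
	size_t i;

	for (i = 0; i < sizeof(soc_match) / sizeof(soc_match[0]); i++)
		if (!strcmp(soc_match[i].compatible, compat))
			return &soc_match[i];
	return NULL;
}
```

The payoff visible in the diff is that probe code stops testing `LINK_DOWN` directly and reads `pcie->linkdown_irq_regfield` from the matched data instead, so adding a SoC is a table entry rather than new branches.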
+147 -53
drivers/pci/controller/cadence/pcie-cadence-ep.c
···
 #define CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE		0x1
 #define CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY	0x3

-static int cdns_pcie_ep_write_header(struct pci_epc *epc, u8 fn,
+static u8 cdns_pcie_get_fn_from_vfn(struct cdns_pcie *pcie, u8 fn, u8 vfn)
+{
+	u32 cap = CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET;
+	u32 first_vf_offset, stride;
+
+	if (vfn == 0)
+		return fn;
+
+	first_vf_offset = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_OFFSET);
+	stride = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_STRIDE);
+	fn = fn + first_vf_offset + ((vfn - 1) * stride);
+
+	return fn;
+}
+
+static int cdns_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn,
 				     struct pci_epf_header *hdr)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+	u32 cap = CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET;
 	struct cdns_pcie *pcie = &ep->pcie;
+	u32 reg;
+
+	if (vfn > 1) {
+		dev_err(&epc->dev, "Only Virtual Function #1 has deviceID\n");
+		return -EINVAL;
+	} else if (vfn == 1) {
+		reg = cap + PCI_SRIOV_VF_DID;
+		cdns_pcie_ep_fn_writew(pcie, fn, reg, hdr->deviceid);
+		return 0;
+	}

 	cdns_pcie_ep_fn_writew(pcie, fn, PCI_DEVICE_ID, hdr->deviceid);
 	cdns_pcie_ep_fn_writeb(pcie, fn, PCI_REVISION_ID, hdr->revid);
···
 	return 0;
 }

-static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn,
+static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn,
 				struct pci_epf_bar *epf_bar)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
···

 	addr0 = lower_32_bits(bar_phys);
 	addr1 = upper_32_bits(bar_phys);
+
+	if (vfn == 1)
+		reg = CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn);
+	else
+		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn);
+	b = (bar < BAR_4) ? bar : bar - BAR_4;
+
+	if (vfn == 0 || vfn == 1) {
+		cfg = cdns_pcie_readl(pcie, reg);
+		cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
+			 CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
+		cfg |= (CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) |
+			CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl));
+		cdns_pcie_writel(pcie, reg, cfg);
+	}
+
+	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
 	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar),
 			 addr0);
 	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar),
 			 addr1);

-	if (bar < BAR_4) {
-		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn);
-		b = bar;
-	} else {
-		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn);
-		b = bar - BAR_4;
-	}
-
-	cfg = cdns_pcie_readl(pcie, reg);
-	cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
-		 CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
-	cfg |= (CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) |
-		CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl));
-	cdns_pcie_writel(pcie, reg, cfg);
-
+	if (vfn > 0)
+		epf = &epf->epf[vfn - 1];
 	epf->epf_bar[bar] = epf_bar;

 	return 0;
 }

-static void cdns_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn,
+static void cdns_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, u8 vfn,
 				   struct pci_epf_bar *epf_bar)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
···
 	enum pci_barno bar = epf_bar->barno;
 	u32 reg, cfg, b, ctrl;

-	if (bar < BAR_4) {
-		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn);
-		b = bar;
-	} else {
-		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn);
-		b = bar - BAR_4;
+	if (vfn == 1)
+		reg = CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn);
+	else
+		reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn);
+	b = (bar < BAR_4) ? bar : bar - BAR_4;
+
+	if (vfn == 0 || vfn == 1) {
+		ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
+		cfg = cdns_pcie_readl(pcie, reg);
+		cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
+			 CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
+		cfg |= CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl);
+		cdns_pcie_writel(pcie, reg, cfg);
 	}

-	ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
-	cfg = cdns_pcie_readl(pcie, reg);
-	cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
-		 CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
-	cfg |= CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl);
-	cdns_pcie_writel(pcie, reg, cfg);
-
+	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
 	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), 0);
 	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), 0);

+	if (vfn > 0)
+		epf = &epf->epf[vfn - 1];
 	epf->epf_bar[bar] = NULL;
 }

-static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, phys_addr_t addr,
-				 u64 pci_addr, size_t size)
+static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
+				 phys_addr_t addr, u64 pci_addr, size_t size)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 	struct cdns_pcie *pcie = &ep->pcie;
···
 		return -EINVAL;
 	}

+	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
 	cdns_pcie_set_outbound_region(pcie, 0, fn, r, false, addr, pci_addr, size);

 	set_bit(r, &ep->ob_region_map);
···
 	return 0;
 }

-static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn,
+static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn,
 				    phys_addr_t addr)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
···
 	clear_bit(r, &ep->ob_region_map);
 }

-static int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 mmc)
+static int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn, u8 mmc)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 	struct cdns_pcie *pcie = &ep->pcie;
 	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
 	u16 flags;
+
+	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);

 	/*
 	 * Set the Multiple Message Capable bitfield into the Message Control
···
 	return 0;
 }

-static int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
+static int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn, u8 vfn)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 	struct cdns_pcie *pcie = &ep->pcie;
 	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
 	u16 flags, mme;
+
+	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);

 	/* Validate that the MSI feature is actually enabled. */
 	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
···
 	return mme;
 }

-static int cdns_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no)
+static int cdns_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 	struct cdns_pcie *pcie = &ep->pcie;
 	u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
 	u32 val, reg;
+
+	func_no = cdns_pcie_get_fn_from_vfn(pcie, func_no, vfunc_no);

 	reg = cap + PCI_MSIX_FLAGS;
 	val = cdns_pcie_ep_fn_readw(pcie, func_no, reg);
···
 	return val;
 }

-static int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u16 interrupts,
-				 enum pci_barno bir, u32 offset)
+static int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u8 vfn,
+				 u16 interrupts, enum pci_barno bir,
+				 u32 offset)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 	struct cdns_pcie *pcie = &ep->pcie;
 	u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
 	u32 val, reg;
+
+	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);

 	reg = cap + PCI_MSIX_FLAGS;
 	val = cdns_pcie_ep_fn_readw(pcie, fn, reg);
···
 	return 0;
 }

-static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
-				     u8 intx, bool is_asserted)
+static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, u8 intx,
+				     bool is_asserted)
 {
 	struct cdns_pcie *pcie = &ep->pcie;
 	unsigned long flags;
···
 	writel(0, ep->irq_cpu_addr + offset);
 }

-static int cdns_pcie_ep_send_legacy_irq(struct cdns_pcie_ep *ep, u8 fn, u8 intx)
+static int cdns_pcie_ep_send_legacy_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
+					u8 intx)
 {
 	u16 cmd;
···
 	return 0;
 }

-static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
+static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
 				     u8 interrupt_num)
 {
 	struct cdns_pcie *pcie = &ep->pcie;
···
 	u16 flags, mme, data, data_mask;
 	u8 msi_count;
 	u64 pci_addr, pci_addr_mask = 0xff;
+
+	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);

 	/* Check whether the MSI feature has been enabled by the PCI host. */
 	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
···
 	return 0;
 }

-static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn,
+static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn, u8 vfn,
 				    phys_addr_t addr, u8 interrupt_num,
 				    u32 entry_size, u32 *msi_data,
 				    u32 *msi_addr_offset)
···
 	u8 msi_count;
 	int ret;
 	int i;
+
+	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);

 	/* Check whether the MSI feature has been enabled by the PCI host. */
 	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
···
 	pci_addr &= GENMASK_ULL(63, 2);

 	for (i = 0; i < interrupt_num; i++) {
-		ret = cdns_pcie_ep_map_addr(epc, fn, addr,
+		ret = cdns_pcie_ep_map_addr(epc, fn, vfn, addr,
 					    pci_addr & ~pci_addr_mask,
 					    entry_size);
 		if (ret)
···
 	return 0;
 }

-static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn,
+static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
 				      u16 interrupt_num)
 {
 	u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
···
 	u16 flags;
 	u8 bir;

+	epf = &ep->epf[fn];
+	if (vfn > 0)
+		epf = &epf->epf[vfn - 1];
+
+	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
+
 	/* Check whether the MSI-X feature has been enabled by the PCI host. */
 	flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSIX_FLAGS);
 	if (!(flags & PCI_MSIX_FLAGS_ENABLE))
···
 	bir = tbl_offset & PCI_MSIX_TABLE_BIR;
 	tbl_offset &= PCI_MSIX_TABLE_OFFSET;

-	epf = &ep->epf[fn];
 	msix_tbl = epf->epf_bar[bir]->addr + tbl_offset;
 	msg_addr = msix_tbl[(interrupt_num - 1)].msg_addr;
 	msg_data = msix_tbl[(interrupt_num - 1)].msg_data;
···
 	return 0;
 }

-static int cdns_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn,
+static int cdns_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn, u8 vfn,
 				  enum pci_epc_irq_type type,
 				  u16 interrupt_num)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+	struct cdns_pcie *pcie = &ep->pcie;
+	struct device *dev = pcie->dev;

 	switch (type) {
 	case PCI_EPC_IRQ_LEGACY:
-		return cdns_pcie_ep_send_legacy_irq(ep, fn, 0);
+		if (vfn > 0) {
+			dev_err(dev, "Cannot raise legacy interrupts for VF\n");
+			return -EINVAL;
+		}
+		return cdns_pcie_ep_send_legacy_irq(ep,
fn, vfn, 0); 543 496 544 497 case PCI_EPC_IRQ_MSI: 545 - return cdns_pcie_ep_send_msi_irq(ep, fn, interrupt_num); 498 + return cdns_pcie_ep_send_msi_irq(ep, fn, vfn, interrupt_num); 546 499 547 500 case PCI_EPC_IRQ_MSIX: 548 - return cdns_pcie_ep_send_msix_irq(ep, fn, interrupt_num); 501 + return cdns_pcie_ep_send_msix_irq(ep, fn, vfn, interrupt_num); 549 502 550 503 default: 551 504 break; ··· 582 523 return 0; 583 524 } 584 525 526 + static const struct pci_epc_features cdns_pcie_epc_vf_features = { 527 + .linkup_notifier = false, 528 + .msi_capable = true, 529 + .msix_capable = true, 530 + .align = 65536, 531 + }; 532 + 585 533 static const struct pci_epc_features cdns_pcie_epc_features = { 586 534 .linkup_notifier = false, 587 535 .msi_capable = true, ··· 597 531 }; 598 532 599 533 static const struct pci_epc_features* 600 - cdns_pcie_ep_get_features(struct pci_epc *epc, u8 func_no) 534 + cdns_pcie_ep_get_features(struct pci_epc *epc, u8 func_no, u8 vfunc_no) 601 535 { 602 - return &cdns_pcie_epc_features; 536 + if (!vfunc_no) 537 + return &cdns_pcie_epc_features; 538 + 539 + return &cdns_pcie_epc_vf_features; 603 540 } 604 541 605 542 static const struct pci_epc_ops cdns_pcie_epc_ops = { ··· 628 559 struct platform_device *pdev = to_platform_device(dev); 629 560 struct device_node *np = dev->of_node; 630 561 struct cdns_pcie *pcie = &ep->pcie; 562 + struct cdns_pcie_epf *epf; 631 563 struct resource *res; 632 564 struct pci_epc *epc; 633 565 int ret; 566 + int i; 634 567 635 568 pcie->is_rc = false; 636 569 ··· 677 606 if (!ep->epf) 678 607 return -ENOMEM; 679 608 609 + epc->max_vfs = devm_kcalloc(dev, epc->max_functions, 610 + sizeof(*epc->max_vfs), GFP_KERNEL); 611 + if (!epc->max_vfs) 612 + return -ENOMEM; 613 + 614 + ret = of_property_read_u8_array(np, "max-virtual-functions", 615 + epc->max_vfs, epc->max_functions); 616 + if (ret == 0) { 617 + for (i = 0; i < epc->max_functions; i++) { 618 + epf = &ep->epf[i]; 619 + if (epc->max_vfs[i] == 0) 620 + 
continue; 621 + epf->epf = devm_kcalloc(dev, epc->max_vfs[i], 622 + sizeof(*ep->epf), GFP_KERNEL); 623 + if (!epf->epf) 624 + return -ENOMEM; 625 + } 626 + } 627 + 680 628 ret = pci_epc_mem_init(epc, pcie->mem_res->start, 681 629 resource_size(pcie->mem_res), PAGE_SIZE); 682 630 if (ret < 0) { ··· 713 623 ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE; 714 624 /* Reserve region 0 for IRQs */ 715 625 set_bit(0, &ep->ob_region_map); 626 + 627 + if (ep->quirk_detect_quiet_flag) 628 + cdns_pcie_detect_quiet_min_delay_set(&ep->pcie); 629 + 716 630 spin_lock_init(&ep->lock); 717 631 718 632 return 0;
+3
drivers/pci/controller/cadence/pcie-cadence-host.c
···
 		return PTR_ERR(rc->cfg_base);
 	rc->cfg_res = res;
 
+	if (rc->quirk_detect_quiet_flag)
+		cdns_pcie_detect_quiet_min_delay_set(&rc->pcie);
+
 	ret = cdns_pcie_start_link(pcie);
 	if (ret) {
 		dev_err(dev, "Failed to start link\n");
+16
drivers/pci/controller/cadence/pcie-cadence.c
···
 
 #include "pcie-cadence.h"
 
+void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie)
+{
+	u32 delay = 0x3;
+	u32 ltssm_control_cap;
+
+	/*
+	 * Set the LTSSM Detect Quiet state min. delay to 2ms.
+	 */
+	ltssm_control_cap = cdns_pcie_readl(pcie, CDNS_PCIE_LTSSM_CONTROL_CAP);
+	ltssm_control_cap = ((ltssm_control_cap &
+			    ~CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK) |
+			    CDNS_PCIE_DETECT_QUIET_MIN_DELAY(delay));
+
+	cdns_pcie_writel(pcie, CDNS_PCIE_LTSSM_CONTROL_CAP, ltssm_control_cap);
+}
+
 void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
 				   u32 r, bool is_io,
 				   u64 cpu_addr, u64 pci_addr, size_t size)
+28 -1
drivers/pci/controller/cadence/pcie-cadence.h
··· 8 8 9 9 #include <linux/kernel.h> 10 10 #include <linux/pci.h> 11 + #include <linux/pci-epf.h> 11 12 #include <linux/phy/phy.h> 12 13 13 14 /* Parameters for the waiting for link up routine */ ··· 47 46 #define CDNS_PCIE_LM_EP_ID_BUS_SHIFT 8 48 47 49 48 /* Endpoint Function f BAR b Configuration Registers */ 49 + #define CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn) \ 50 + (((bar) < BAR_4) ? CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) : CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn)) 50 51 #define CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) \ 51 52 (CDNS_PCIE_LM_BASE + 0x0240 + (fn) * 0x0008) 52 53 #define CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn) \ 53 54 (CDNS_PCIE_LM_BASE + 0x0244 + (fn) * 0x0008) 55 + #define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn) \ 56 + (((bar) < BAR_4) ? CDNS_PCIE_LM_EP_VFUNC_BAR_CFG0(fn) : CDNS_PCIE_LM_EP_VFUNC_BAR_CFG1(fn)) 57 + #define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG0(fn) \ 58 + (CDNS_PCIE_LM_BASE + 0x0280 + (fn) * 0x0008) 59 + #define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG1(fn) \ 60 + (CDNS_PCIE_LM_BASE + 0x0284 + (fn) * 0x0008) 54 61 #define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) \ 55 62 (GENMASK(4, 0) << ((b) * 8)) 56 63 #define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \ ··· 123 114 124 115 #define CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET 0x90 125 116 #define CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET 0xb0 117 + #define CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET 0x200 126 118 127 119 /* 128 120 * Root Port Registers (PCI configuration space for the root port function) ··· 198 188 199 189 /* AXI link down register */ 200 190 #define CDNS_PCIE_AT_LINKDOWN (CDNS_PCIE_AT_BASE + 0x0824) 191 + 192 + /* LTSSM Capabilities register */ 193 + #define CDNS_PCIE_LTSSM_CONTROL_CAP (CDNS_PCIE_LM_BASE + 0x0054) 194 + #define CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK GENMASK(2, 1) 195 + #define CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT 1 196 + #define CDNS_PCIE_DETECT_QUIET_MIN_DELAY(delay) \ 197 + (((delay) << CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT) & \ 198 + CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK) 201 199 202 200 enum 
cdns_pcie_rp_bar { 203 201 RP_BAR_UNDEFINED = -1, ··· 313 295 * @avail_ib_bar: Satus of RP_BAR0, RP_BAR1 and RP_NO_BAR if it's free or 314 296 * available 315 297 * @quirk_retrain_flag: Retrain link as quirk for PCIe Gen2 298 + * @quirk_detect_quiet_flag: LTSSM Detect Quiet min delay set as quirk 316 299 */ 317 300 struct cdns_pcie_rc { 318 301 struct cdns_pcie pcie; ··· 322 303 u32 vendor_id; 323 304 u32 device_id; 324 305 bool avail_ib_bar[CDNS_PCIE_RP_MAX_IB]; 325 - bool quirk_retrain_flag; 306 + unsigned int quirk_retrain_flag:1; 307 + unsigned int quirk_detect_quiet_flag:1; 326 308 }; 327 309 328 310 /** 329 311 * struct cdns_pcie_epf - Structure to hold info about endpoint function 312 + * @epf: Info about virtual functions attached to the physical function 330 313 * @epf_bar: reference to the pci_epf_bar for the six Base Address Registers 331 314 */ 332 315 struct cdns_pcie_epf { 316 + struct cdns_pcie_epf *epf; 333 317 struct pci_epf_bar *epf_bar[PCI_STD_NUM_BARS]; 334 318 }; 335 319 ··· 356 334 * registers fields (RMW) accessible by both remote RC and EP to 357 335 * minimize time between read and write 358 336 * @epf: Structure to hold info about endpoint function 337 + * @quirk_detect_quiet_flag: LTSSM Detect Quiet min delay set as quirk 359 338 */ 360 339 struct cdns_pcie_ep { 361 340 struct cdns_pcie pcie; ··· 371 348 /* protect writing to PCI_STATUS while raising legacy interrupts */ 372 349 spinlock_t lock; 373 350 struct cdns_pcie_epf *epf; 351 + unsigned int quirk_detect_quiet_flag:1; 374 352 }; 375 353 376 354 ··· 532 508 return 0; 533 509 } 534 510 #endif 511 + 512 + void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie); 513 + 535 514 void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn, 536 515 u32 r, bool is_io, 537 516 u64 cpu_addr, u64 pci_addr, size_t size);
+48
drivers/pci/controller/dwc/Kconfig
···
 	  Enables support for the PCIe controller in the ARTPEC-6 SoC to work in
 	  endpoint mode. This uses the DesignWare core.
 
+config PCIE_ROCKCHIP_DW_HOST
+	bool "Rockchip DesignWare PCIe controller"
+	select PCIE_DW
+	select PCIE_DW_HOST
+	depends on PCI_MSI_IRQ_DOMAIN
+	depends on ARCH_ROCKCHIP || COMPILE_TEST
+	depends on OF
+	help
+	  Enables support for the DesignWare PCIe controller in the
+	  Rockchip SoC except RK3399.
+
 config PCIE_INTEL_GW
 	bool "Intel Gateway PCIe host controller support"
 	depends on OF && (X86 || COMPILE_TEST)
···
 	  Gateway SoCs.
 	  The PCIe controller uses the DesignWare core plus Intel-specific
 	  hardware wrappers.
+
+config PCIE_KEEMBAY
+	bool
+
+config PCIE_KEEMBAY_HOST
+	bool "Intel Keem Bay PCIe controller - Host mode"
+	depends on ARCH_KEEMBAY || COMPILE_TEST
+	depends on PCI && PCI_MSI_IRQ_DOMAIN
+	select PCIE_DW_HOST
+	select PCIE_KEEMBAY
+	help
+	  Say 'Y' here to enable support for the PCIe controller in Keem Bay
+	  to work in host mode.
+	  The PCIe controller is based on DesignWare Hardware and uses
+	  DesignWare core functions.
+
+config PCIE_KEEMBAY_EP
+	bool "Intel Keem Bay PCIe controller - Endpoint mode"
+	depends on ARCH_KEEMBAY || COMPILE_TEST
+	depends on PCI && PCI_MSI_IRQ_DOMAIN
+	depends on PCI_ENDPOINT
+	select PCIE_DW_EP
+	select PCIE_KEEMBAY
+	help
+	  Say 'Y' here to enable support for the PCIe controller in Keem Bay
+	  to work in endpoint mode.
+	  The PCIe controller is based on DesignWare Hardware and uses
+	  DesignWare core functions.
 
 config PCIE_KIRIN
 	depends on OF && (ARM64 || COMPILE_TEST)
···
 	  enable host-specific features PCIE_TEGRA194_HOST must be selected and
 	  in order to enable device-specific features PCIE_TEGRA194_EP must be
 	  selected. This uses the DesignWare core.
+
+config PCIE_VISCONTI_HOST
+	bool "Toshiba Visconti PCIe controllers"
+	depends on ARCH_VISCONTI || COMPILE_TEST
+	depends on PCI_MSI_IRQ_DOMAIN
+	select PCIE_DW_HOST
+	help
+	  Say Y here if you want PCIe controller support on Toshiba Visconti SoC.
+	  This driver supports TMPV7708 SoC.
 
 config PCIE_UNIPHIER
 	bool "Socionext UniPhier PCIe host controllers"
+3
drivers/pci/controller/dwc/Makefile
···
 obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
 obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
 obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
+obj-$(CONFIG_PCIE_ROCKCHIP_DW_HOST) += pcie-dw-rockchip.o
 obj-$(CONFIG_PCIE_INTEL_GW) += pcie-intel-gw.o
+obj-$(CONFIG_PCIE_KEEMBAY) += pcie-keembay.o
 obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o
 obj-$(CONFIG_PCIE_HISI_STB) += pcie-histb.o
 obj-$(CONFIG_PCI_MESON) += pci-meson.o
 obj-$(CONFIG_PCIE_TEGRA194) += pcie-tegra194.o
 obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o
 obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o
+obj-$(CONFIG_PCIE_VISCONTI_HOST) += pcie-visconti.o
 
 # The following drivers are for devices that use the generic ACPI
 # pci_root.c driver but don't support standard ECAM config access.
+6 -10
drivers/pci/controller/dwc/pci-dra7xx.c
···
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	unsigned long val;
-	int pos, irq;
+	int pos;
 
 	val = dw_pcie_readl_dbi(pci, PCIE_MSI_INTR0_STATUS +
 				(index * MSI_REG_CTRL_BLOCK_SIZE));
···
 	pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL, 0);
 	while (pos != MAX_MSI_IRQS_PER_CTRL) {
-		irq = irq_find_mapping(pp->irq_domain,
-				       (index * MAX_MSI_IRQS_PER_CTRL) + pos);
-		generic_handle_irq(irq);
+		generic_handle_domain_irq(pp->irq_domain,
+					  (index * MAX_MSI_IRQS_PER_CTRL) + pos);
 		pos++;
 		pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL, pos);
 	}
···
 	struct dw_pcie *pci;
 	struct pcie_port *pp;
 	unsigned long reg;
-	u32 virq, bit;
+	u32 bit;
 
 	chained_irq_enter(chip, desc);
···
 	case INTB:
 	case INTC:
 	case INTD:
-		for_each_set_bit(bit, &reg, PCI_NUM_INTX) {
-			virq = irq_find_mapping(dra7xx->irq_domain, bit);
-			if (virq)
-				generic_handle_irq(virq);
-		}
+		for_each_set_bit(bit, &reg, PCI_NUM_INTX)
+			generic_handle_domain_irq(dra7xx->irq_domain, bit);
 		break;
 	}
+5 -9
drivers/pci/controller/dwc/pci-keystone.c
···
 	struct dw_pcie *pci = ks_pcie->pci;
 	struct device *dev = pci->dev;
 	u32 pending;
-	int virq;
 
 	pending = ks_pcie_app_readl(ks_pcie, IRQ_STATUS(offset));
 
 	if (BIT(0) & pending) {
-		virq = irq_linear_revmap(ks_pcie->legacy_irq_domain, offset);
-		dev_dbg(dev, ": irq: irq_offset %d, virq %d\n", offset, virq);
-		generic_handle_irq(virq);
+		dev_dbg(dev, ": irq: irq_offset %d", offset);
+		generic_handle_domain_irq(ks_pcie->legacy_irq_domain, offset);
 	}
 
 	/* EOI the INTx interrupt */
···
 	struct pcie_port *pp = &pci->pp;
 	struct device *dev = pci->dev;
 	struct irq_chip *chip = irq_desc_get_chip(desc);
-	u32 vector, virq, reg, pos;
+	u32 vector, reg, pos;
 
 	dev_dbg(dev, "%s, irq %d\n", __func__, irq);
···
 			continue;
 
 		vector = offset + (pos << 3);
-		virq = irq_linear_revmap(pp->irq_domain, vector);
-		dev_dbg(dev, "irq: bit %d, vector %d, virq %d\n", pos, vector,
-			virq);
-		generic_handle_irq(virq);
+		dev_dbg(dev, "irq: bit %d, vector %d\n", pos, vector);
+		generic_handle_domain_irq(pp->irq_domain, vector);
 	}
 
 	chained_irq_exit(chip, desc);
+2 -5
drivers/pci/controller/dwc/pcie-artpec6.c
···
 	const struct artpec_pcie_of_data *data;
 	enum artpec_pcie_variants variant;
 	enum dw_pcie_device_mode mode;
+	u32 val;
 
 	match = of_match_device(artpec6_pcie_of_match, dev);
 	if (!match)
···
 		if (ret < 0)
 			return ret;
 		break;
-	case DW_PCIE_EP_TYPE: {
-		u32 val;
-
+	case DW_PCIE_EP_TYPE:
 		if (!IS_ENABLED(CONFIG_PCIE_ARTPEC6_EP))
 			return -ENODEV;
···
 		pci->ep.ops = &pcie_ep_ops;
 
 		return dw_pcie_ep_init(&pci->ep);
-		break;
-	}
 	default:
 		dev_err(dev, "INVALID device type %d\n", artpec6_pcie->mode);
 	}
+18 -18
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 125 125 return __dw_pcie_ep_find_next_cap(ep, func_no, next_cap_ptr, cap); 126 126 } 127 127 128 - static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, 128 + static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 129 129 struct pci_epf_header *hdr) 130 130 { 131 131 struct dw_pcie_ep *ep = epc_get_drvdata(epc); ··· 202 202 return 0; 203 203 } 204 204 205 - static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no, 205 + static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 206 206 struct pci_epf_bar *epf_bar) 207 207 { 208 208 struct dw_pcie_ep *ep = epc_get_drvdata(epc); ··· 217 217 ep->epf_bar[bar] = NULL; 218 218 } 219 219 220 - static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, 220 + static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 221 221 struct pci_epf_bar *epf_bar) 222 222 { 223 223 int ret; ··· 276 276 return -EINVAL; 277 277 } 278 278 279 - static void dw_pcie_ep_unmap_addr(struct pci_epc *epc, u8 func_no, 279 + static void dw_pcie_ep_unmap_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 280 280 phys_addr_t addr) 281 281 { 282 282 int ret; ··· 292 292 clear_bit(atu_index, ep->ob_window_map); 293 293 } 294 294 295 - static int dw_pcie_ep_map_addr(struct pci_epc *epc, u8 func_no, 296 - phys_addr_t addr, 297 - u64 pci_addr, size_t size) 295 + static int dw_pcie_ep_map_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 296 + phys_addr_t addr, u64 pci_addr, size_t size) 298 297 { 299 298 int ret; 300 299 struct dw_pcie_ep *ep = epc_get_drvdata(epc); ··· 308 309 return 0; 309 310 } 310 311 311 - static int dw_pcie_ep_get_msi(struct pci_epc *epc, u8 func_no) 312 + static int dw_pcie_ep_get_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no) 312 313 { 313 314 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 314 315 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); ··· 332 333 return val; 333 334 } 334 335 335 - static int dw_pcie_ep_set_msi(struct pci_epc 
*epc, u8 func_no, u8 interrupts) 336 + static int dw_pcie_ep_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 337 + u8 interrupts) 336 338 { 337 339 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 338 340 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); ··· 358 358 return 0; 359 359 } 360 360 361 - static int dw_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no) 361 + static int dw_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no) 362 362 { 363 363 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 364 364 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); ··· 382 382 return val; 383 383 } 384 384 385 - static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts, 386 - enum pci_barno bir, u32 offset) 385 + static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 386 + u16 interrupts, enum pci_barno bir, u32 offset) 387 387 { 388 388 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 389 389 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); ··· 418 418 return 0; 419 419 } 420 420 421 - static int dw_pcie_ep_raise_irq(struct pci_epc *epc, u8 func_no, 421 + static int dw_pcie_ep_raise_irq(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 422 422 enum pci_epc_irq_type type, u16 interrupt_num) 423 423 { 424 424 struct dw_pcie_ep *ep = epc_get_drvdata(epc); ··· 450 450 } 451 451 452 452 static const struct pci_epc_features* 453 - dw_pcie_ep_get_features(struct pci_epc *epc, u8 func_no) 453 + dw_pcie_ep_get_features(struct pci_epc *epc, u8 func_no, u8 vfunc_no) 454 454 { 455 455 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 456 456 ··· 525 525 aligned_offset = msg_addr_lower & (epc->mem->window.page_size - 1); 526 526 msg_addr = ((u64)msg_addr_upper) << 32 | 527 527 (msg_addr_lower & ~aligned_offset); 528 - ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr, 528 + ret = dw_pcie_ep_map_addr(epc, func_no, 0, ep->msi_mem_phys, msg_addr, 529 529 epc->mem->window.page_size); 530 530 if (ret) 531 531 return ret; 532 532 533 533 
writel(msg_data | (interrupt_num - 1), ep->msi_mem + aligned_offset); 534 534 535 - dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phys); 535 + dw_pcie_ep_unmap_addr(epc, func_no, 0, ep->msi_mem_phys); 536 536 537 537 return 0; 538 538 } ··· 593 593 } 594 594 595 595 aligned_offset = msg_addr & (epc->mem->window.page_size - 1); 596 - ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr, 596 + ret = dw_pcie_ep_map_addr(epc, func_no, 0, ep->msi_mem_phys, msg_addr, 597 597 epc->mem->window.page_size); 598 598 if (ret) 599 599 return ret; 600 600 601 601 writel(msg_data, ep->msi_mem + aligned_offset); 602 602 603 - dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phys); 603 + dw_pcie_ep_unmap_addr(epc, func_no, 0, ep->msi_mem_phys); 604 604 605 605 return 0; 606 606 }
+4 -5
drivers/pci/controller/dwc/pcie-designware-host.c
···
 /* MSI int handler */
 irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
 {
-	int i, pos, irq;
+	int i, pos;
 	unsigned long val;
 	u32 status, num_ctrls;
 	irqreturn_t ret = IRQ_NONE;
···
 		pos = 0;
 		while ((pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL,
 					    pos)) != MAX_MSI_IRQS_PER_CTRL) {
-			irq = irq_find_mapping(pp->irq_domain,
-					       (i * MAX_MSI_IRQS_PER_CTRL) +
-					       pos);
-			generic_handle_irq(irq);
+			generic_handle_domain_irq(pp->irq_domain,
+						  (i * MAX_MSI_IRQS_PER_CTRL) +
+						  pos);
 			pos++;
 		}
 	}
-1
drivers/pci/controller/dwc/pcie-designware-plat.c
···
 
 		pci->ep.ops = &pcie_ep_ops;
 		return dw_pcie_ep_init(&pci->ep);
-		break;
 	default:
 		dev_err(dev, "INVALID device type %d\n", dw_plat_pcie->mode);
 	}
+279
drivers/pci/controller/dwc/pcie-dw-rockchip.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * PCIe host controller driver for Rockchip SoCs. 4 + * 5 + * Copyright (C) 2021 Rockchip Electronics Co., Ltd. 6 + * http://www.rock-chips.com 7 + * 8 + * Author: Simon Xue <xxm@rock-chips.com> 9 + */ 10 + 11 + #include <linux/clk.h> 12 + #include <linux/gpio/consumer.h> 13 + #include <linux/mfd/syscon.h> 14 + #include <linux/module.h> 15 + #include <linux/of_device.h> 16 + #include <linux/phy/phy.h> 17 + #include <linux/platform_device.h> 18 + #include <linux/regmap.h> 19 + #include <linux/reset.h> 20 + 21 + #include "pcie-designware.h" 22 + 23 + /* 24 + * The upper 16 bits of PCIE_CLIENT_CONFIG are a write 25 + * mask for the lower 16 bits. 26 + */ 27 + #define HIWORD_UPDATE(mask, val) (((mask) << 16) | (val)) 28 + #define HIWORD_UPDATE_BIT(val) HIWORD_UPDATE(val, val) 29 + 30 + #define to_rockchip_pcie(x) dev_get_drvdata((x)->dev) 31 + 32 + #define PCIE_CLIENT_RC_MODE HIWORD_UPDATE_BIT(0x40) 33 + #define PCIE_CLIENT_ENABLE_LTSSM HIWORD_UPDATE_BIT(0xc) 34 + #define PCIE_SMLH_LINKUP BIT(16) 35 + #define PCIE_RDLH_LINKUP BIT(17) 36 + #define PCIE_LINKUP (PCIE_SMLH_LINKUP | PCIE_RDLH_LINKUP) 37 + #define PCIE_L0S_ENTRY 0x11 38 + #define PCIE_CLIENT_GENERAL_CONTROL 0x0 39 + #define PCIE_CLIENT_GENERAL_DEBUG 0x104 40 + #define PCIE_CLIENT_HOT_RESET_CTRL 0x180 41 + #define PCIE_CLIENT_LTSSM_STATUS 0x300 42 + #define PCIE_LTSSM_ENABLE_ENHANCE BIT(4) 43 + #define PCIE_LTSSM_STATUS_MASK GENMASK(5, 0) 44 + 45 + struct rockchip_pcie { 46 + struct dw_pcie pci; 47 + void __iomem *apb_base; 48 + struct phy *phy; 49 + struct clk_bulk_data *clks; 50 + unsigned int clk_cnt; 51 + struct reset_control *rst; 52 + struct gpio_desc *rst_gpio; 53 + struct regulator *vpcie3v3; 54 + }; 55 + 56 + static int rockchip_pcie_readl_apb(struct rockchip_pcie *rockchip, 57 + u32 reg) 58 + { 59 + return readl_relaxed(rockchip->apb_base + reg); 60 + } 61 + 62 + static void rockchip_pcie_writel_apb(struct rockchip_pcie *rockchip, 63 + u32 val, 
u32 reg) 64 + { 65 + writel_relaxed(val, rockchip->apb_base + reg); 66 + } 67 + 68 + static void rockchip_pcie_enable_ltssm(struct rockchip_pcie *rockchip) 69 + { 70 + rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_ENABLE_LTSSM, 71 + PCIE_CLIENT_GENERAL_CONTROL); 72 + } 73 + 74 + static int rockchip_pcie_link_up(struct dw_pcie *pci) 75 + { 76 + struct rockchip_pcie *rockchip = to_rockchip_pcie(pci); 77 + u32 val = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_LTSSM_STATUS); 78 + 79 + if ((val & PCIE_LINKUP) == PCIE_LINKUP && 80 + (val & PCIE_LTSSM_STATUS_MASK) == PCIE_L0S_ENTRY) 81 + return 1; 82 + 83 + return 0; 84 + } 85 + 86 + static int rockchip_pcie_start_link(struct dw_pcie *pci) 87 + { 88 + struct rockchip_pcie *rockchip = to_rockchip_pcie(pci); 89 + 90 + /* Reset device */ 91 + gpiod_set_value_cansleep(rockchip->rst_gpio, 0); 92 + 93 + rockchip_pcie_enable_ltssm(rockchip); 94 + 95 + /* 96 + * PCIe requires the refclk to be stable for 100µs prior to releasing 97 + * PERST. See table 2-4 in section 2.6.2 AC Specifications of the PCI 98 + * Express Card Electromechanical Specification, 1.1. However, we don't 99 + * know if the refclk is coming from RC's PHY or external OSC. If it's 100 + * from RC, so enabling LTSSM is the just right place to release #PERST. 101 + * We need more extra time as before, rather than setting just 102 + * 100us as we don't know how long should the device need to reset. 
103 + */ 104 + msleep(100); 105 + gpiod_set_value_cansleep(rockchip->rst_gpio, 1); 106 + 107 + return 0; 108 + } 109 + 110 + static int rockchip_pcie_host_init(struct pcie_port *pp) 111 + { 112 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 113 + struct rockchip_pcie *rockchip = to_rockchip_pcie(pci); 114 + u32 val = HIWORD_UPDATE_BIT(PCIE_LTSSM_ENABLE_ENHANCE); 115 + 116 + /* LTSSM enable control mode */ 117 + rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL); 118 + 119 + rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_RC_MODE, 120 + PCIE_CLIENT_GENERAL_CONTROL); 121 + 122 + return 0; 123 + } 124 + 125 + static const struct dw_pcie_host_ops rockchip_pcie_host_ops = { 126 + .host_init = rockchip_pcie_host_init, 127 + }; 128 + 129 + static int rockchip_pcie_clk_init(struct rockchip_pcie *rockchip) 130 + { 131 + struct device *dev = rockchip->pci.dev; 132 + int ret; 133 + 134 + ret = devm_clk_bulk_get_all(dev, &rockchip->clks); 135 + if (ret < 0) 136 + return ret; 137 + 138 + rockchip->clk_cnt = ret; 139 + 140 + return clk_bulk_prepare_enable(rockchip->clk_cnt, rockchip->clks); 141 + } 142 + 143 + static int rockchip_pcie_resource_get(struct platform_device *pdev, 144 + struct rockchip_pcie *rockchip) 145 + { 146 + rockchip->apb_base = devm_platform_ioremap_resource_byname(pdev, "apb"); 147 + if (IS_ERR(rockchip->apb_base)) 148 + return PTR_ERR(rockchip->apb_base); 149 + 150 + rockchip->rst_gpio = devm_gpiod_get_optional(&pdev->dev, "reset", 151 + GPIOD_OUT_HIGH); 152 + if (IS_ERR(rockchip->rst_gpio)) 153 + return PTR_ERR(rockchip->rst_gpio); 154 + 155 + return 0; 156 + } 157 + 158 + static int rockchip_pcie_phy_init(struct rockchip_pcie *rockchip) 159 + { 160 + struct device *dev = rockchip->pci.dev; 161 + int ret; 162 + 163 + rockchip->phy = devm_phy_get(dev, "pcie-phy"); 164 + if (IS_ERR(rockchip->phy)) 165 + return dev_err_probe(dev, PTR_ERR(rockchip->phy), 166 + "missing PHY\n"); 167 + 168 + ret = phy_init(rockchip->phy); 169 + if (ret < 0) 170 
+ return ret; 171 + 172 + ret = phy_power_on(rockchip->phy); 173 + if (ret) 174 + phy_exit(rockchip->phy); 175 + 176 + return ret; 177 + } 178 + 179 + static void rockchip_pcie_phy_deinit(struct rockchip_pcie *rockchip) 180 + { 181 + phy_exit(rockchip->phy); 182 + phy_power_off(rockchip->phy); 183 + } 184 + 185 + static int rockchip_pcie_reset_control_release(struct rockchip_pcie *rockchip) 186 + { 187 + struct device *dev = rockchip->pci.dev; 188 + 189 + rockchip->rst = devm_reset_control_array_get_exclusive(dev); 190 + if (IS_ERR(rockchip->rst)) 191 + return dev_err_probe(dev, PTR_ERR(rockchip->rst), 192 + "failed to get reset lines\n"); 193 + 194 + return reset_control_deassert(rockchip->rst); 195 + } 196 + 197 + static const struct dw_pcie_ops dw_pcie_ops = { 198 + .link_up = rockchip_pcie_link_up, 199 + .start_link = rockchip_pcie_start_link, 200 + }; 201 + 202 + static int rockchip_pcie_probe(struct platform_device *pdev) 203 + { 204 + struct device *dev = &pdev->dev; 205 + struct rockchip_pcie *rockchip; 206 + struct pcie_port *pp; 207 + int ret; 208 + 209 + rockchip = devm_kzalloc(dev, sizeof(*rockchip), GFP_KERNEL); 210 + if (!rockchip) 211 + return -ENOMEM; 212 + 213 + platform_set_drvdata(pdev, rockchip); 214 + 215 + rockchip->pci.dev = dev; 216 + rockchip->pci.ops = &dw_pcie_ops; 217 + 218 + pp = &rockchip->pci.pp; 219 + pp->ops = &rockchip_pcie_host_ops; 220 + 221 + ret = rockchip_pcie_resource_get(pdev, rockchip); 222 + if (ret) 223 + return ret; 224 + 225 + /* DON'T MOVE ME: must be enable before PHY init */ 226 + rockchip->vpcie3v3 = devm_regulator_get_optional(dev, "vpcie3v3"); 227 + if (IS_ERR(rockchip->vpcie3v3)) { 228 + if (PTR_ERR(rockchip->vpcie3v3) != -ENODEV) 229 + return dev_err_probe(dev, PTR_ERR(rockchip->vpcie3v3), 230 + "failed to get vpcie3v3 regulator\n"); 231 + rockchip->vpcie3v3 = NULL; 232 + } else { 233 + ret = regulator_enable(rockchip->vpcie3v3); 234 + if (ret) { 235 + dev_err(dev, "failed to enable vpcie3v3 regulator\n"); 236 + 
return ret; 237 + } 238 + } 239 + 240 + ret = rockchip_pcie_phy_init(rockchip); 241 + if (ret) 242 + goto disable_regulator; 243 + 244 + ret = rockchip_pcie_reset_control_release(rockchip); 245 + if (ret) 246 + goto deinit_phy; 247 + 248 + ret = rockchip_pcie_clk_init(rockchip); 249 + if (ret) 250 + goto deinit_phy; 251 + 252 + ret = dw_pcie_host_init(pp); 253 + if (!ret) 254 + return 0; 255 + 256 + clk_bulk_disable_unprepare(rockchip->clk_cnt, rockchip->clks); 257 + deinit_phy: 258 + rockchip_pcie_phy_deinit(rockchip); 259 + disable_regulator: 260 + if (rockchip->vpcie3v3) 261 + regulator_disable(rockchip->vpcie3v3); 262 + 263 + return ret; 264 + } 265 + 266 + static const struct of_device_id rockchip_pcie_of_match[] = { 267 + { .compatible = "rockchip,rk3568-pcie", }, 268 + {}, 269 + }; 270 + 271 + static struct platform_driver rockchip_pcie_driver = { 272 + .driver = { 273 + .name = "rockchip-dw-pcie", 274 + .of_match_table = rockchip_pcie_of_match, 275 + .suppress_bind_attrs = true, 276 + }, 277 + .probe = rockchip_pcie_probe, 278 + }; 279 + builtin_platform_driver(rockchip_pcie_driver);
+460
drivers/pci/controller/dwc/pcie-keembay.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * PCIe controller driver for Intel Keem Bay
+ * Copyright (C) 2020 Intel Corporation
+ */
+
+#include <linux/bitfield.h>
+#include <linux/bits.h>
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/gpio/consumer.h>
+#include <linux/init.h>
+#include <linux/iopoll.h>
+#include <linux/irqchip/chained_irq.h>
+#include <linux/kernel.h>
+#include <linux/mod_devicetable.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/property.h>
+
+#include "pcie-designware.h"
+
+/* PCIE_REGS_APB_SLV Registers */
+#define PCIE_REGS_PCIE_CFG		0x0004
+#define  PCIE_DEVICE_TYPE		BIT(8)
+#define  PCIE_RSTN			BIT(0)
+#define PCIE_REGS_PCIE_APP_CNTRL	0x0008
+#define  APP_LTSSM_ENABLE		BIT(0)
+#define PCIE_REGS_INTERRUPT_ENABLE	0x0028
+#define  MSI_CTRL_INT_EN		BIT(8)
+#define  EDMA_INT_EN			GENMASK(7, 0)
+#define PCIE_REGS_INTERRUPT_STATUS	0x002c
+#define  MSI_CTRL_INT			BIT(8)
+#define PCIE_REGS_PCIE_SII_PM_STATE	0x00b0
+#define  SMLH_LINK_UP			BIT(19)
+#define  RDLH_LINK_UP			BIT(8)
+#define  PCIE_REGS_PCIE_SII_LINK_UP	(SMLH_LINK_UP | RDLH_LINK_UP)
+#define PCIE_REGS_PCIE_PHY_CNTL		0x0164
+#define  PHY0_SRAM_BYPASS		BIT(8)
+#define PCIE_REGS_PCIE_PHY_STAT		0x0168
+#define  PHY0_MPLLA_STATE		BIT(1)
+#define PCIE_REGS_LJPLL_STA		0x016c
+#define  LJPLL_LOCK			BIT(0)
+#define PCIE_REGS_LJPLL_CNTRL_0		0x0170
+#define  LJPLL_EN			BIT(29)
+#define  LJPLL_FOUT_EN			GENMASK(24, 21)
+#define PCIE_REGS_LJPLL_CNTRL_2		0x0178
+#define  LJPLL_REF_DIV			GENMASK(17, 12)
+#define  LJPLL_FB_DIV			GENMASK(11, 0)
+#define PCIE_REGS_LJPLL_CNTRL_3		0x017c
+#define  LJPLL_POST_DIV3A		GENMASK(24, 22)
+#define  LJPLL_POST_DIV2A		GENMASK(18, 16)
+
+#define PERST_DELAY_US		1000
+#define AUX_CLK_RATE_HZ		24000000
+
+struct keembay_pcie {
+	struct dw_pcie		pci;
+	void __iomem		*apb_base;
+	enum dw_pcie_device_mode mode;
+
+	struct clk		*clk_master;
+	struct clk		*clk_aux;
+	struct gpio_desc	*reset;
+};
+
+struct keembay_pcie_of_data {
+	enum dw_pcie_device_mode mode;
+};
+
+static void keembay_ep_reset_assert(struct keembay_pcie *pcie)
+{
+	gpiod_set_value_cansleep(pcie->reset, 1);
+	usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500);
+}
+
+static void keembay_ep_reset_deassert(struct keembay_pcie *pcie)
+{
+	/*
+	 * Ensure that PERST# is asserted for a minimum of 100ms.
+	 *
+	 * For more details, refer to PCI Express Card Electromechanical
+	 * Specification Revision 1.1, Table-2.4.
+	 */
+	msleep(100);
+
+	gpiod_set_value_cansleep(pcie->reset, 0);
+	usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500);
+}
+
+static void keembay_pcie_ltssm_set(struct keembay_pcie *pcie, bool enable)
+{
+	u32 val;
+
+	val = readl(pcie->apb_base + PCIE_REGS_PCIE_APP_CNTRL);
+	if (enable)
+		val |= APP_LTSSM_ENABLE;
+	else
+		val &= ~APP_LTSSM_ENABLE;
+	writel(val, pcie->apb_base + PCIE_REGS_PCIE_APP_CNTRL);
+}
+
+static int keembay_pcie_link_up(struct dw_pcie *pci)
+{
+	struct keembay_pcie *pcie = dev_get_drvdata(pci->dev);
+	u32 val;
+
+	val = readl(pcie->apb_base + PCIE_REGS_PCIE_SII_PM_STATE);
+
+	return (val & PCIE_REGS_PCIE_SII_LINK_UP) == PCIE_REGS_PCIE_SII_LINK_UP;
+}
+
+static int keembay_pcie_start_link(struct dw_pcie *pci)
+{
+	struct keembay_pcie *pcie = dev_get_drvdata(pci->dev);
+	u32 val;
+	int ret;
+
+	if (pcie->mode == DW_PCIE_EP_TYPE)
+		return 0;
+
+	keembay_pcie_ltssm_set(pcie, false);
+
+	ret = readl_poll_timeout(pcie->apb_base + PCIE_REGS_PCIE_PHY_STAT,
+				 val, val & PHY0_MPLLA_STATE, 20,
+				 500 * USEC_PER_MSEC);
+	if (ret) {
+		dev_err(pci->dev, "MPLLA is not 
locked\n"); 130 + return ret; 131 + } 132 + 133 + keembay_pcie_ltssm_set(pcie, true); 134 + 135 + return 0; 136 + } 137 + 138 + static void keembay_pcie_stop_link(struct dw_pcie *pci) 139 + { 140 + struct keembay_pcie *pcie = dev_get_drvdata(pci->dev); 141 + 142 + keembay_pcie_ltssm_set(pcie, false); 143 + } 144 + 145 + static const struct dw_pcie_ops keembay_pcie_ops = { 146 + .link_up = keembay_pcie_link_up, 147 + .start_link = keembay_pcie_start_link, 148 + .stop_link = keembay_pcie_stop_link, 149 + }; 150 + 151 + static inline struct clk *keembay_pcie_probe_clock(struct device *dev, 152 + const char *id, u64 rate) 153 + { 154 + struct clk *clk; 155 + int ret; 156 + 157 + clk = devm_clk_get(dev, id); 158 + if (IS_ERR(clk)) 159 + return clk; 160 + 161 + if (rate) { 162 + ret = clk_set_rate(clk, rate); 163 + if (ret) 164 + return ERR_PTR(ret); 165 + } 166 + 167 + ret = clk_prepare_enable(clk); 168 + if (ret) 169 + return ERR_PTR(ret); 170 + 171 + ret = devm_add_action_or_reset(dev, 172 + (void(*)(void *))clk_disable_unprepare, 173 + clk); 174 + if (ret) 175 + return ERR_PTR(ret); 176 + 177 + return clk; 178 + } 179 + 180 + static int keembay_pcie_probe_clocks(struct keembay_pcie *pcie) 181 + { 182 + struct dw_pcie *pci = &pcie->pci; 183 + struct device *dev = pci->dev; 184 + 185 + pcie->clk_master = keembay_pcie_probe_clock(dev, "master", 0); 186 + if (IS_ERR(pcie->clk_master)) 187 + return dev_err_probe(dev, PTR_ERR(pcie->clk_master), 188 + "Failed to enable master clock"); 189 + 190 + pcie->clk_aux = keembay_pcie_probe_clock(dev, "aux", AUX_CLK_RATE_HZ); 191 + if (IS_ERR(pcie->clk_aux)) 192 + return dev_err_probe(dev, PTR_ERR(pcie->clk_aux), 193 + "Failed to enable auxiliary clock"); 194 + 195 + return 0; 196 + } 197 + 198 + /* 199 + * Initialize the internal PCIe PLL in Host mode. 200 + * See the following sections in Keem Bay data book, 201 + * (1) 6.4.6.1 PCIe Subsystem Example Initialization, 202 + * (2) 6.8 PCIe Low Jitter PLL for Ref Clk Generation. 
203 + */ 204 + static int keembay_pcie_pll_init(struct keembay_pcie *pcie) 205 + { 206 + struct dw_pcie *pci = &pcie->pci; 207 + u32 val; 208 + int ret; 209 + 210 + val = FIELD_PREP(LJPLL_REF_DIV, 0) | FIELD_PREP(LJPLL_FB_DIV, 0x32); 211 + writel(val, pcie->apb_base + PCIE_REGS_LJPLL_CNTRL_2); 212 + 213 + val = FIELD_PREP(LJPLL_POST_DIV3A, 0x2) | 214 + FIELD_PREP(LJPLL_POST_DIV2A, 0x2); 215 + writel(val, pcie->apb_base + PCIE_REGS_LJPLL_CNTRL_3); 216 + 217 + val = FIELD_PREP(LJPLL_EN, 0x1) | FIELD_PREP(LJPLL_FOUT_EN, 0xc); 218 + writel(val, pcie->apb_base + PCIE_REGS_LJPLL_CNTRL_0); 219 + 220 + ret = readl_poll_timeout(pcie->apb_base + PCIE_REGS_LJPLL_STA, 221 + val, val & LJPLL_LOCK, 20, 222 + 500 * USEC_PER_MSEC); 223 + if (ret) 224 + dev_err(pci->dev, "Low jitter PLL is not locked\n"); 225 + 226 + return ret; 227 + } 228 + 229 + static void keembay_pcie_msi_irq_handler(struct irq_desc *desc) 230 + { 231 + struct keembay_pcie *pcie = irq_desc_get_handler_data(desc); 232 + struct irq_chip *chip = irq_desc_get_chip(desc); 233 + u32 val, mask, status; 234 + struct pcie_port *pp; 235 + 236 + /* 237 + * Keem Bay PCIe Controller provides an additional IP logic on top of 238 + * standard DWC IP to clear MSI IRQ by writing '1' to the respective 239 + * bit of the status register. 240 + * 241 + * So, a chained irq handler is defined to handle this additional 242 + * IP logic. 
243 + */ 244 + 245 + chained_irq_enter(chip, desc); 246 + 247 + pp = &pcie->pci.pp; 248 + val = readl(pcie->apb_base + PCIE_REGS_INTERRUPT_STATUS); 249 + mask = readl(pcie->apb_base + PCIE_REGS_INTERRUPT_ENABLE); 250 + 251 + status = val & mask; 252 + 253 + if (status & MSI_CTRL_INT) { 254 + dw_handle_msi_irq(pp); 255 + writel(status, pcie->apb_base + PCIE_REGS_INTERRUPT_STATUS); 256 + } 257 + 258 + chained_irq_exit(chip, desc); 259 + } 260 + 261 + static int keembay_pcie_setup_msi_irq(struct keembay_pcie *pcie) 262 + { 263 + struct dw_pcie *pci = &pcie->pci; 264 + struct device *dev = pci->dev; 265 + struct platform_device *pdev = to_platform_device(dev); 266 + int irq; 267 + 268 + irq = platform_get_irq_byname(pdev, "pcie"); 269 + if (irq < 0) 270 + return irq; 271 + 272 + irq_set_chained_handler_and_data(irq, keembay_pcie_msi_irq_handler, 273 + pcie); 274 + 275 + return 0; 276 + } 277 + 278 + static void keembay_pcie_ep_init(struct dw_pcie_ep *ep) 279 + { 280 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 281 + struct keembay_pcie *pcie = dev_get_drvdata(pci->dev); 282 + 283 + writel(EDMA_INT_EN, pcie->apb_base + PCIE_REGS_INTERRUPT_ENABLE); 284 + } 285 + 286 + static int keembay_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no, 287 + enum pci_epc_irq_type type, 288 + u16 interrupt_num) 289 + { 290 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 291 + 292 + switch (type) { 293 + case PCI_EPC_IRQ_LEGACY: 294 + /* Legacy interrupts are not supported in Keem Bay */ 295 + dev_err(pci->dev, "Legacy IRQ is not supported\n"); 296 + return -EINVAL; 297 + case PCI_EPC_IRQ_MSI: 298 + return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num); 299 + case PCI_EPC_IRQ_MSIX: 300 + return dw_pcie_ep_raise_msix_irq(ep, func_no, interrupt_num); 301 + default: 302 + dev_err(pci->dev, "Unknown IRQ type %d\n", type); 303 + return -EINVAL; 304 + } 305 + } 306 + 307 + static const struct pci_epc_features keembay_pcie_epc_features = { 308 + .linkup_notifier = false, 309 + 
.msi_capable = true, 310 + .msix_capable = true, 311 + .reserved_bar = BIT(BAR_1) | BIT(BAR_3) | BIT(BAR_5), 312 + .bar_fixed_64bit = BIT(BAR_0) | BIT(BAR_2) | BIT(BAR_4), 313 + .align = SZ_16K, 314 + }; 315 + 316 + static const struct pci_epc_features * 317 + keembay_pcie_get_features(struct dw_pcie_ep *ep) 318 + { 319 + return &keembay_pcie_epc_features; 320 + } 321 + 322 + static const struct dw_pcie_ep_ops keembay_pcie_ep_ops = { 323 + .ep_init = keembay_pcie_ep_init, 324 + .raise_irq = keembay_pcie_ep_raise_irq, 325 + .get_features = keembay_pcie_get_features, 326 + }; 327 + 328 + static const struct dw_pcie_host_ops keembay_pcie_host_ops = { 329 + }; 330 + 331 + static int keembay_pcie_add_pcie_port(struct keembay_pcie *pcie, 332 + struct platform_device *pdev) 333 + { 334 + struct dw_pcie *pci = &pcie->pci; 335 + struct pcie_port *pp = &pci->pp; 336 + struct device *dev = &pdev->dev; 337 + u32 val; 338 + int ret; 339 + 340 + pp->ops = &keembay_pcie_host_ops; 341 + pp->msi_irq = -ENODEV; 342 + 343 + ret = keembay_pcie_setup_msi_irq(pcie); 344 + if (ret) 345 + return ret; 346 + 347 + pcie->reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); 348 + if (IS_ERR(pcie->reset)) 349 + return PTR_ERR(pcie->reset); 350 + 351 + ret = keembay_pcie_probe_clocks(pcie); 352 + if (ret) 353 + return ret; 354 + 355 + val = readl(pcie->apb_base + PCIE_REGS_PCIE_PHY_CNTL); 356 + val |= PHY0_SRAM_BYPASS; 357 + writel(val, pcie->apb_base + PCIE_REGS_PCIE_PHY_CNTL); 358 + 359 + writel(PCIE_DEVICE_TYPE, pcie->apb_base + PCIE_REGS_PCIE_CFG); 360 + 361 + ret = keembay_pcie_pll_init(pcie); 362 + if (ret) 363 + return ret; 364 + 365 + val = readl(pcie->apb_base + PCIE_REGS_PCIE_CFG); 366 + writel(val | PCIE_RSTN, pcie->apb_base + PCIE_REGS_PCIE_CFG); 367 + keembay_ep_reset_deassert(pcie); 368 + 369 + ret = dw_pcie_host_init(pp); 370 + if (ret) { 371 + keembay_ep_reset_assert(pcie); 372 + dev_err(dev, "Failed to initialize host: %d\n", ret); 373 + return ret; 374 + } 375 + 376 + val = 
readl(pcie->apb_base + PCIE_REGS_INTERRUPT_ENABLE); 377 + if (IS_ENABLED(CONFIG_PCI_MSI)) 378 + val |= MSI_CTRL_INT_EN; 379 + writel(val, pcie->apb_base + PCIE_REGS_INTERRUPT_ENABLE); 380 + 381 + return 0; 382 + } 383 + 384 + static int keembay_pcie_probe(struct platform_device *pdev) 385 + { 386 + const struct keembay_pcie_of_data *data; 387 + struct device *dev = &pdev->dev; 388 + struct keembay_pcie *pcie; 389 + struct dw_pcie *pci; 390 + enum dw_pcie_device_mode mode; 391 + 392 + data = device_get_match_data(dev); 393 + if (!data) 394 + return -ENODEV; 395 + 396 + mode = (enum dw_pcie_device_mode)data->mode; 397 + 398 + pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); 399 + if (!pcie) 400 + return -ENOMEM; 401 + 402 + pci = &pcie->pci; 403 + pci->dev = dev; 404 + pci->ops = &keembay_pcie_ops; 405 + 406 + pcie->mode = mode; 407 + 408 + pcie->apb_base = devm_platform_ioremap_resource_byname(pdev, "apb"); 409 + if (IS_ERR(pcie->apb_base)) 410 + return PTR_ERR(pcie->apb_base); 411 + 412 + platform_set_drvdata(pdev, pcie); 413 + 414 + switch (pcie->mode) { 415 + case DW_PCIE_RC_TYPE: 416 + if (!IS_ENABLED(CONFIG_PCIE_KEEMBAY_HOST)) 417 + return -ENODEV; 418 + 419 + return keembay_pcie_add_pcie_port(pcie, pdev); 420 + case DW_PCIE_EP_TYPE: 421 + if (!IS_ENABLED(CONFIG_PCIE_KEEMBAY_EP)) 422 + return -ENODEV; 423 + 424 + pci->ep.ops = &keembay_pcie_ep_ops; 425 + return dw_pcie_ep_init(&pci->ep); 426 + default: 427 + dev_err(dev, "Invalid device type %d\n", pcie->mode); 428 + return -ENODEV; 429 + } 430 + } 431 + 432 + static const struct keembay_pcie_of_data keembay_pcie_rc_of_data = { 433 + .mode = DW_PCIE_RC_TYPE, 434 + }; 435 + 436 + static const struct keembay_pcie_of_data keembay_pcie_ep_of_data = { 437 + .mode = DW_PCIE_EP_TYPE, 438 + }; 439 + 440 + static const struct of_device_id keembay_pcie_of_match[] = { 441 + { 442 + .compatible = "intel,keembay-pcie", 443 + .data = &keembay_pcie_rc_of_data, 444 + }, 445 + { 446 + .compatible = "intel,keembay-pcie-ep", 
447 + .data = &keembay_pcie_ep_of_data, 448 + }, 449 + {} 450 + }; 451 + 452 + static struct platform_driver keembay_pcie_driver = { 453 + .driver = { 454 + .name = "keembay-pcie", 455 + .of_match_table = keembay_pcie_of_match, 456 + .suppress_bind_attrs = true, 457 + }, 458 + .probe = keembay_pcie_probe, 459 + }; 460 + builtin_platform_driver(keembay_pcie_driver);
+31 -23
drivers/pci/controller/dwc/pcie-tegra194.c
··· 497 497 struct tegra_pcie_dw *pcie = arg; 498 498 struct dw_pcie_ep *ep = &pcie->pci.ep; 499 499 int spurious = 1; 500 - u32 val, tmp; 500 + u32 status_l0, status_l1, link_status; 501 501 502 - val = appl_readl(pcie, APPL_INTR_STATUS_L0); 503 - if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) { 504 - val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0); 505 - appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0); 502 + status_l0 = appl_readl(pcie, APPL_INTR_STATUS_L0); 503 + if (status_l0 & APPL_INTR_STATUS_L0_LINK_STATE_INT) { 504 + status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0); 505 + appl_writel(pcie, status_l1, APPL_INTR_STATUS_L1_0_0); 506 506 507 - if (val & APPL_INTR_STATUS_L1_0_0_HOT_RESET_DONE) 507 + if (status_l1 & APPL_INTR_STATUS_L1_0_0_HOT_RESET_DONE) 508 508 pex_ep_event_hot_rst_done(pcie); 509 509 510 - if (val & APPL_INTR_STATUS_L1_0_0_RDLH_LINK_UP_CHGED) { 511 - tmp = appl_readl(pcie, APPL_LINK_STATUS); 512 - if (tmp & APPL_LINK_STATUS_RDLH_LINK_UP) { 510 + if (status_l1 & APPL_INTR_STATUS_L1_0_0_RDLH_LINK_UP_CHGED) { 511 + link_status = appl_readl(pcie, APPL_LINK_STATUS); 512 + if (link_status & APPL_LINK_STATUS_RDLH_LINK_UP) { 513 513 dev_dbg(pcie->dev, "Link is up with Host\n"); 514 514 dw_pcie_ep_linkup(ep); 515 515 } ··· 518 518 spurious = 0; 519 519 } 520 520 521 - if (val & APPL_INTR_STATUS_L0_PCI_CMD_EN_INT) { 522 - val = appl_readl(pcie, APPL_INTR_STATUS_L1_15); 523 - appl_writel(pcie, val, APPL_INTR_STATUS_L1_15); 521 + if (status_l0 & APPL_INTR_STATUS_L0_PCI_CMD_EN_INT) { 522 + status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_15); 523 + appl_writel(pcie, status_l1, APPL_INTR_STATUS_L1_15); 524 524 525 - if (val & APPL_INTR_STATUS_L1_15_CFG_BME_CHGED) 525 + if (status_l1 & APPL_INTR_STATUS_L1_15_CFG_BME_CHGED) 526 526 return IRQ_WAKE_THREAD; 527 527 528 528 spurious = 0; ··· 530 530 531 531 if (spurious) { 532 532 dev_warn(pcie->dev, "Random interrupt (STATUS = 0x%08X)\n", 533 - val); 534 - appl_writel(pcie, val, APPL_INTR_STATUS_L0); 533 + 
status_l0); 534 + appl_writel(pcie, status_l0, APPL_INTR_STATUS_L0); 535 535 } 536 536 537 537 return IRQ_HANDLED; ··· 1493 1493 return; 1494 1494 } 1495 1495 1496 + /* 1497 + * PCIe controller exits from L2 only if reset is applied, so 1498 + * controller doesn't handle interrupts. But in cases where 1499 + * L2 entry fails, PERST# is asserted which can trigger surprise 1500 + * link down AER. However this function call happens in 1501 + * suspend_noirq(), so AER interrupt will not be processed. 1502 + * Disable all interrupts to avoid such a scenario. 1503 + */ 1504 + appl_writel(pcie, 0x0, APPL_INTR_EN_L0_0); 1505 + 1496 1506 if (tegra_pcie_try_link_l2(pcie)) { 1497 1507 dev_info(pcie->dev, "Link didn't transition to L2 state\n"); 1498 1508 /* ··· 1773 1763 val = (ep->msi_mem_phys & MSIX_ADDR_MATCH_LOW_OFF_MASK); 1774 1764 val |= MSIX_ADDR_MATCH_LOW_OFF_EN; 1775 1765 dw_pcie_writel_dbi(pci, MSIX_ADDR_MATCH_LOW_OFF, val); 1776 - val = (lower_32_bits(ep->msi_mem_phys) & MSIX_ADDR_MATCH_HIGH_OFF_MASK); 1766 + val = (upper_32_bits(ep->msi_mem_phys) & MSIX_ADDR_MATCH_HIGH_OFF_MASK); 1777 1767 dw_pcie_writel_dbi(pci, MSIX_ADDR_MATCH_HIGH_OFF, val); 1778 1768 1779 1769 ret = dw_pcie_ep_init_complete(ep); ··· 1943 1933 if (ret < 0) { 1944 1934 dev_err(dev, "Failed to request IRQ for PERST: %d\n", ret); 1945 1935 return ret; 1946 - } 1947 - 1948 - name = devm_kasprintf(dev, GFP_KERNEL, "tegra_pcie_%u_ep_work", 1949 - pcie->cid); 1950 - if (!name) { 1951 - dev_err(dev, "Failed to create PCIe EP work thread string\n"); 1952 - return -ENOMEM; 1953 1936 } 1954 1937 1955 1938 pm_runtime_enable(dev); ··· 2238 2235 { 2239 2236 struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); 2240 2237 u32 val; 2238 + 2239 + if (pcie->mode == DW_PCIE_EP_TYPE) { 2240 + dev_err(dev, "Suspend is not supported in EP mode"); 2241 + return -ENOTSUPP; 2242 + } 2241 2243 2242 2244 if (!pcie->link_state) 2243 2245 return 0;
+3 -5
drivers/pci/controller/dwc/pcie-uniphier.c
··· 235 235 struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci); 236 236 struct irq_chip *chip = irq_desc_get_chip(desc); 237 237 unsigned long reg; 238 - u32 val, bit, virq; 238 + u32 val, bit; 239 239 240 240 /* INT for debug */ 241 241 val = readl(priv->base + PCL_RCV_INT); ··· 257 257 val = readl(priv->base + PCL_RCV_INTX); 258 258 reg = FIELD_GET(PCL_RCV_INTX_ALL_STATUS, val); 259 259 260 - for_each_set_bit(bit, &reg, PCI_NUM_INTX) { 261 - virq = irq_linear_revmap(priv->legacy_irq_domain, bit); 262 - generic_handle_irq(virq); 263 - } 260 + for_each_set_bit(bit, &reg, PCI_NUM_INTX) 261 + generic_handle_domain_irq(priv->legacy_irq_domain, bit); 264 262 265 263 chained_irq_exit(chip, desc); 266 264 }
+332
drivers/pci/controller/dwc/pcie-visconti.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * DWC PCIe RC driver for Toshiba Visconti ARM SoC 4 + * 5 + * Copyright (C) 2021 Toshiba Electronic Device & Storage Corporation 6 + * Copyright (C) 2021 TOSHIBA CORPORATION 7 + * 8 + * Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp> 9 + */ 10 + 11 + #include <linux/clk.h> 12 + #include <linux/delay.h> 13 + #include <linux/gpio.h> 14 + #include <linux/interrupt.h> 15 + #include <linux/init.h> 16 + #include <linux/iopoll.h> 17 + #include <linux/kernel.h> 18 + #include <linux/of_platform.h> 19 + #include <linux/pci.h> 20 + #include <linux/platform_device.h> 21 + #include <linux/resource.h> 22 + #include <linux/types.h> 23 + 24 + #include "pcie-designware.h" 25 + #include "../../pci.h" 26 + 27 + struct visconti_pcie { 28 + struct dw_pcie pci; 29 + void __iomem *ulreg_base; 30 + void __iomem *smu_base; 31 + void __iomem *mpu_base; 32 + struct clk *refclk; 33 + struct clk *coreclk; 34 + struct clk *auxclk; 35 + }; 36 + 37 + #define PCIE_UL_REG_S_PCIE_MODE 0x00F4 38 + #define PCIE_UL_REG_S_PCIE_MODE_EP 0x00 39 + #define PCIE_UL_REG_S_PCIE_MODE_RC 0x04 40 + 41 + #define PCIE_UL_REG_S_PERSTN_CTRL 0x00F8 42 + #define PCIE_UL_IOM_PCIE_PERSTN_I_EN BIT(3) 43 + #define PCIE_UL_DIRECT_PERSTN_EN BIT(2) 44 + #define PCIE_UL_PERSTN_OUT BIT(1) 45 + #define PCIE_UL_DIRECT_PERSTN BIT(0) 46 + #define PCIE_UL_REG_S_PERSTN_CTRL_INIT (PCIE_UL_IOM_PCIE_PERSTN_I_EN | \ 47 + PCIE_UL_DIRECT_PERSTN_EN | \ 48 + PCIE_UL_DIRECT_PERSTN) 49 + 50 + #define PCIE_UL_REG_S_PHY_INIT_02 0x0104 51 + #define PCIE_UL_PHY0_SRAM_EXT_LD_DONE BIT(0) 52 + 53 + #define PCIE_UL_REG_S_PHY_INIT_03 0x0108 54 + #define PCIE_UL_PHY0_SRAM_INIT_DONE BIT(0) 55 + 56 + #define PCIE_UL_REG_S_INT_EVENT_MASK1 0x0138 57 + #define PCIE_UL_CFG_PME_INT BIT(0) 58 + #define PCIE_UL_CFG_LINK_EQ_REQ_INT BIT(1) 59 + #define PCIE_UL_EDMA_INT0 BIT(2) 60 + #define PCIE_UL_EDMA_INT1 BIT(3) 61 + #define PCIE_UL_EDMA_INT2 BIT(4) 62 + #define PCIE_UL_EDMA_INT3 BIT(5) 63 + #define 
PCIE_UL_S_INT_EVENT_MASK1_ALL (PCIE_UL_CFG_PME_INT | \ 64 + PCIE_UL_CFG_LINK_EQ_REQ_INT | \ 65 + PCIE_UL_EDMA_INT0 | \ 66 + PCIE_UL_EDMA_INT1 | \ 67 + PCIE_UL_EDMA_INT2 | \ 68 + PCIE_UL_EDMA_INT3) 69 + 70 + #define PCIE_UL_REG_S_SB_MON 0x0198 71 + #define PCIE_UL_REG_S_SIG_MON 0x019C 72 + #define PCIE_UL_CORE_RST_N_MON BIT(0) 73 + 74 + #define PCIE_UL_REG_V_SII_DBG_00 0x0844 75 + #define PCIE_UL_REG_V_SII_GEN_CTRL_01 0x0860 76 + #define PCIE_UL_APP_LTSSM_ENABLE BIT(0) 77 + 78 + #define PCIE_UL_REG_V_PHY_ST_00 0x0864 79 + #define PCIE_UL_SMLH_LINK_UP BIT(0) 80 + 81 + #define PCIE_UL_REG_V_PHY_ST_02 0x0868 82 + #define PCIE_UL_S_DETECT_ACT 0x01 83 + #define PCIE_UL_S_L0 0x11 84 + 85 + #define PISMU_CKON_PCIE 0x0038 86 + #define PISMU_CKON_PCIE_AUX_CLK BIT(1) 87 + #define PISMU_CKON_PCIE_MSTR_ACLK BIT(0) 88 + 89 + #define PISMU_RSOFF_PCIE 0x0538 90 + #define PISMU_RSOFF_PCIE_ULREG_RST_N BIT(1) 91 + #define PISMU_RSOFF_PCIE_PWR_UP_RST_N BIT(0) 92 + 93 + #define PCIE_MPU_REG_MP_EN 0x0 94 + #define MPU_MP_EN_DISABLE BIT(0) 95 + 96 + /* Access registers in PCIe ulreg */ 97 + static void visconti_ulreg_writel(struct visconti_pcie *pcie, u32 val, u32 reg) 98 + { 99 + writel_relaxed(val, pcie->ulreg_base + reg); 100 + } 101 + 102 + static u32 visconti_ulreg_readl(struct visconti_pcie *pcie, u32 reg) 103 + { 104 + return readl_relaxed(pcie->ulreg_base + reg); 105 + } 106 + 107 + /* Access registers in PCIe smu */ 108 + static void visconti_smu_writel(struct visconti_pcie *pcie, u32 val, u32 reg) 109 + { 110 + writel_relaxed(val, pcie->smu_base + reg); 111 + } 112 + 113 + /* Access registers in PCIe mpu */ 114 + static void visconti_mpu_writel(struct visconti_pcie *pcie, u32 val, u32 reg) 115 + { 116 + writel_relaxed(val, pcie->mpu_base + reg); 117 + } 118 + 119 + static u32 visconti_mpu_readl(struct visconti_pcie *pcie, u32 reg) 120 + { 121 + return readl_relaxed(pcie->mpu_base + reg); 122 + } 123 + 124 + static int visconti_pcie_link_up(struct dw_pcie *pci) 125 + { 126 + 
struct visconti_pcie *pcie = dev_get_drvdata(pci->dev); 127 + void __iomem *addr = pcie->ulreg_base; 128 + u32 val = readl_relaxed(addr + PCIE_UL_REG_V_PHY_ST_02); 129 + 130 + return !!(val & PCIE_UL_S_L0); 131 + } 132 + 133 + static int visconti_pcie_start_link(struct dw_pcie *pci) 134 + { 135 + struct visconti_pcie *pcie = dev_get_drvdata(pci->dev); 136 + void __iomem *addr = pcie->ulreg_base; 137 + u32 val; 138 + int ret; 139 + 140 + visconti_ulreg_writel(pcie, PCIE_UL_APP_LTSSM_ENABLE, 141 + PCIE_UL_REG_V_SII_GEN_CTRL_01); 142 + 143 + ret = readl_relaxed_poll_timeout(addr + PCIE_UL_REG_V_PHY_ST_02, 144 + val, (val & PCIE_UL_S_L0), 145 + 90000, 100000); 146 + if (ret) 147 + return ret; 148 + 149 + visconti_ulreg_writel(pcie, PCIE_UL_S_INT_EVENT_MASK1_ALL, 150 + PCIE_UL_REG_S_INT_EVENT_MASK1); 151 + 152 + if (dw_pcie_link_up(pci)) { 153 + val = visconti_mpu_readl(pcie, PCIE_MPU_REG_MP_EN); 154 + visconti_mpu_writel(pcie, val & ~MPU_MP_EN_DISABLE, 155 + PCIE_MPU_REG_MP_EN); 156 + } 157 + 158 + return 0; 159 + } 160 + 161 + static void visconti_pcie_stop_link(struct dw_pcie *pci) 162 + { 163 + struct visconti_pcie *pcie = dev_get_drvdata(pci->dev); 164 + u32 val; 165 + 166 + val = visconti_ulreg_readl(pcie, PCIE_UL_REG_V_SII_GEN_CTRL_01); 167 + val &= ~PCIE_UL_APP_LTSSM_ENABLE; 168 + visconti_ulreg_writel(pcie, val, PCIE_UL_REG_V_SII_GEN_CTRL_01); 169 + 170 + val = visconti_mpu_readl(pcie, PCIE_MPU_REG_MP_EN); 171 + visconti_mpu_writel(pcie, val | MPU_MP_EN_DISABLE, PCIE_MPU_REG_MP_EN); 172 + } 173 + 174 + /* 175 + * In this SoC specification, the CPU bus outputs the offset value from 176 + * 0x40000000 to the PCIe bus, so 0x40000000 is subtracted from the CPU 177 + * bus address. This 0x40000000 is also based on io_base from DT. 
178 + */ 179 + static u64 visconti_pcie_cpu_addr_fixup(struct dw_pcie *pci, u64 cpu_addr) 180 + { 181 + struct pcie_port *pp = &pci->pp; 182 + 183 + return cpu_addr & ~pp->io_base; 184 + } 185 + 186 + static const struct dw_pcie_ops dw_pcie_ops = { 187 + .cpu_addr_fixup = visconti_pcie_cpu_addr_fixup, 188 + .link_up = visconti_pcie_link_up, 189 + .start_link = visconti_pcie_start_link, 190 + .stop_link = visconti_pcie_stop_link, 191 + }; 192 + 193 + static int visconti_pcie_host_init(struct pcie_port *pp) 194 + { 195 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 196 + struct visconti_pcie *pcie = dev_get_drvdata(pci->dev); 197 + void __iomem *addr; 198 + int err; 199 + u32 val; 200 + 201 + visconti_smu_writel(pcie, 202 + PISMU_CKON_PCIE_AUX_CLK | PISMU_CKON_PCIE_MSTR_ACLK, 203 + PISMU_CKON_PCIE); 204 + ndelay(250); 205 + 206 + visconti_smu_writel(pcie, PISMU_RSOFF_PCIE_ULREG_RST_N, 207 + PISMU_RSOFF_PCIE); 208 + visconti_ulreg_writel(pcie, PCIE_UL_REG_S_PCIE_MODE_RC, 209 + PCIE_UL_REG_S_PCIE_MODE); 210 + 211 + val = PCIE_UL_REG_S_PERSTN_CTRL_INIT; 212 + visconti_ulreg_writel(pcie, val, PCIE_UL_REG_S_PERSTN_CTRL); 213 + udelay(100); 214 + 215 + val |= PCIE_UL_PERSTN_OUT; 216 + visconti_ulreg_writel(pcie, val, PCIE_UL_REG_S_PERSTN_CTRL); 217 + udelay(100); 218 + 219 + visconti_smu_writel(pcie, PISMU_RSOFF_PCIE_PWR_UP_RST_N, 220 + PISMU_RSOFF_PCIE); 221 + 222 + addr = pcie->ulreg_base + PCIE_UL_REG_S_PHY_INIT_03; 223 + err = readl_relaxed_poll_timeout(addr, val, 224 + (val & PCIE_UL_PHY0_SRAM_INIT_DONE), 225 + 100, 1000); 226 + if (err) 227 + return err; 228 + 229 + visconti_ulreg_writel(pcie, PCIE_UL_PHY0_SRAM_EXT_LD_DONE, 230 + PCIE_UL_REG_S_PHY_INIT_02); 231 + 232 + addr = pcie->ulreg_base + PCIE_UL_REG_S_SIG_MON; 233 + return readl_relaxed_poll_timeout(addr, val, 234 + (val & PCIE_UL_CORE_RST_N_MON), 100, 235 + 1000); 236 + } 237 + 238 + static const struct dw_pcie_host_ops visconti_pcie_host_ops = { 239 + .host_init = visconti_pcie_host_init, 240 + }; 241 + 242 
+ static int visconti_get_resources(struct platform_device *pdev, 243 + struct visconti_pcie *pcie) 244 + { 245 + struct device *dev = &pdev->dev; 246 + 247 + pcie->ulreg_base = devm_platform_ioremap_resource_byname(pdev, "ulreg"); 248 + if (IS_ERR(pcie->ulreg_base)) 249 + return PTR_ERR(pcie->ulreg_base); 250 + 251 + pcie->smu_base = devm_platform_ioremap_resource_byname(pdev, "smu"); 252 + if (IS_ERR(pcie->smu_base)) 253 + return PTR_ERR(pcie->smu_base); 254 + 255 + pcie->mpu_base = devm_platform_ioremap_resource_byname(pdev, "mpu"); 256 + if (IS_ERR(pcie->mpu_base)) 257 + return PTR_ERR(pcie->mpu_base); 258 + 259 + pcie->refclk = devm_clk_get(dev, "ref"); 260 + if (IS_ERR(pcie->refclk)) 261 + return dev_err_probe(dev, PTR_ERR(pcie->refclk), 262 + "Failed to get ref clock\n"); 263 + 264 + pcie->coreclk = devm_clk_get(dev, "core"); 265 + if (IS_ERR(pcie->coreclk)) 266 + return dev_err_probe(dev, PTR_ERR(pcie->coreclk), 267 + "Failed to get core clock\n"); 268 + 269 + pcie->auxclk = devm_clk_get(dev, "aux"); 270 + if (IS_ERR(pcie->auxclk)) 271 + return dev_err_probe(dev, PTR_ERR(pcie->auxclk), 272 + "Failed to get aux clock\n"); 273 + 274 + return 0; 275 + } 276 + 277 + static int visconti_add_pcie_port(struct visconti_pcie *pcie, 278 + struct platform_device *pdev) 279 + { 280 + struct dw_pcie *pci = &pcie->pci; 281 + struct pcie_port *pp = &pci->pp; 282 + struct device *dev = &pdev->dev; 283 + 284 + pp->irq = platform_get_irq_byname(pdev, "intr"); 285 + if (pp->irq < 0) { 286 + dev_err(dev, "Interrupt intr is missing"); 287 + return pp->irq; 288 + } 289 + 290 + pp->ops = &visconti_pcie_host_ops; 291 + 292 + return dw_pcie_host_init(pp); 293 + } 294 + 295 + static int visconti_pcie_probe(struct platform_device *pdev) 296 + { 297 + struct device *dev = &pdev->dev; 298 + struct visconti_pcie *pcie; 299 + struct dw_pcie *pci; 300 + int ret; 301 + 302 + pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); 303 + if (!pcie) 304 + return -ENOMEM; 305 + 306 + pci = 
&pcie->pci; 307 + pci->dev = dev; 308 + pci->ops = &dw_pcie_ops; 309 + 310 + ret = visconti_get_resources(pdev, pcie); 311 + if (ret) 312 + return ret; 313 + 314 + platform_set_drvdata(pdev, pcie); 315 + 316 + return visconti_add_pcie_port(pcie, pdev); 317 + } 318 + 319 + static const struct of_device_id visconti_pcie_match[] = { 320 + { .compatible = "toshiba,visconti-pcie" }, 321 + {}, 322 + }; 323 + 324 + static struct platform_driver visconti_pcie_driver = { 325 + .probe = visconti_pcie_probe, 326 + .driver = { 327 + .name = "visconti-pcie", 328 + .of_match_table = visconti_pcie_match, 329 + .suppress_bind_attrs = true, 330 + }, 331 + }; 332 + builtin_platform_driver(visconti_pcie_driver);
+6 -9
drivers/pci/controller/mobiveil/pcie-mobiveil-host.c
··· 92 92 u32 msi_data, msi_addr_lo, msi_addr_hi; 93 93 u32 intr_status, msi_status; 94 94 unsigned long shifted_status; 95 - u32 bit, virq, val, mask; 95 + u32 bit, val, mask; 96 96 97 97 /* 98 98 * The core provides a single interrupt for both INTx/MSI messages. ··· 114 114 shifted_status >>= PAB_INTX_START; 115 115 do { 116 116 for_each_set_bit(bit, &shifted_status, PCI_NUM_INTX) { 117 - virq = irq_find_mapping(rp->intx_domain, 118 - bit + 1); 119 - if (virq) 120 - generic_handle_irq(virq); 121 - else 117 + int ret; 118 + ret = generic_handle_domain_irq(rp->intx_domain, 119 + bit + 1); 120 + if (ret) 122 121 dev_err_ratelimited(dev, "unexpected IRQ, INT%d\n", 123 122 bit); 124 123 ··· 154 155 dev_dbg(dev, "MSI registers, data: %08x, addr: %08x:%08x\n", 155 156 msi_data, msi_addr_hi, msi_addr_lo); 156 157 157 - virq = irq_find_mapping(msi->dev_domain, msi_data); 158 - if (virq) 159 - generic_handle_irq(virq); 158 + generic_handle_domain_irq(msi->dev_domain, msi_data); 160 159 161 160 msi_status = readl_relaxed(pcie->apb_csr_base + 162 161 MSI_STATUS_OFFSET);
+320 -14
drivers/pci/controller/pci-aardvark.c
··· 58 58 #define PIO_COMPLETION_STATUS_CRS 2 59 59 #define PIO_COMPLETION_STATUS_CA 4 60 60 #define PIO_NON_POSTED_REQ BIT(10) 61 + #define PIO_ERR_STATUS BIT(11) 61 62 #define PIO_ADDR_LS (PIO_BASE_ADDR + 0x8) 62 63 #define PIO_ADDR_MS (PIO_BASE_ADDR + 0xc) 63 64 #define PIO_WR_DATA (PIO_BASE_ADDR + 0x10) ··· 119 118 #define PCIE_MSI_MASK_REG (CONTROL_BASE_ADDR + 0x5C) 120 119 #define PCIE_MSI_PAYLOAD_REG (CONTROL_BASE_ADDR + 0x9C) 121 120 121 + /* PCIe window configuration */ 122 + #define OB_WIN_BASE_ADDR 0x4c00 123 + #define OB_WIN_BLOCK_SIZE 0x20 124 + #define OB_WIN_COUNT 8 125 + #define OB_WIN_REG_ADDR(win, offset) (OB_WIN_BASE_ADDR + \ 126 + OB_WIN_BLOCK_SIZE * (win) + \ 127 + (offset)) 128 + #define OB_WIN_MATCH_LS(win) OB_WIN_REG_ADDR(win, 0x00) 129 + #define OB_WIN_ENABLE BIT(0) 130 + #define OB_WIN_MATCH_MS(win) OB_WIN_REG_ADDR(win, 0x04) 131 + #define OB_WIN_REMAP_LS(win) OB_WIN_REG_ADDR(win, 0x08) 132 + #define OB_WIN_REMAP_MS(win) OB_WIN_REG_ADDR(win, 0x0c) 133 + #define OB_WIN_MASK_LS(win) OB_WIN_REG_ADDR(win, 0x10) 134 + #define OB_WIN_MASK_MS(win) OB_WIN_REG_ADDR(win, 0x14) 135 + #define OB_WIN_ACTIONS(win) OB_WIN_REG_ADDR(win, 0x18) 136 + #define OB_WIN_DEFAULT_ACTIONS (OB_WIN_ACTIONS(OB_WIN_COUNT-1) + 0x4) 137 + #define OB_WIN_FUNC_NUM_MASK GENMASK(31, 24) 138 + #define OB_WIN_FUNC_NUM_SHIFT 24 139 + #define OB_WIN_FUNC_NUM_ENABLE BIT(23) 140 + #define OB_WIN_BUS_NUM_BITS_MASK GENMASK(22, 20) 141 + #define OB_WIN_BUS_NUM_BITS_SHIFT 20 142 + #define OB_WIN_MSG_CODE_ENABLE BIT(22) 143 + #define OB_WIN_MSG_CODE_MASK GENMASK(21, 14) 144 + #define OB_WIN_MSG_CODE_SHIFT 14 145 + #define OB_WIN_MSG_PAYLOAD_LEN BIT(12) 146 + #define OB_WIN_ATTR_ENABLE BIT(11) 147 + #define OB_WIN_ATTR_TC_MASK GENMASK(10, 8) 148 + #define OB_WIN_ATTR_TC_SHIFT 8 149 + #define OB_WIN_ATTR_RELAXED BIT(7) 150 + #define OB_WIN_ATTR_NOSNOOP BIT(6) 151 + #define OB_WIN_ATTR_POISON BIT(5) 152 + #define OB_WIN_ATTR_IDO BIT(4) 153 + #define OB_WIN_TYPE_MASK GENMASK(3, 0) 154 + 
+#define     OB_WIN_TYPE_SHIFT			0
+#define     OB_WIN_TYPE_MEM			0x0
+#define     OB_WIN_TYPE_IO			0x4
+#define     OB_WIN_TYPE_CONFIG_TYPE0		0x8
+#define     OB_WIN_TYPE_CONFIG_TYPE1		0x9
+#define     OB_WIN_TYPE_MSG			0xc
+
 /* LMI registers base address and register offsets */
 #define LMI_BASE_ADDR				0x6000
 #define CFG_REG					(LMI_BASE_ADDR + 0x0)
···
 #define PCIE_CONFIG_WR_TYPE0			0xa
 #define PCIE_CONFIG_WR_TYPE1			0xb
 
-#define PIO_RETRY_CNT			500
+#define PIO_RETRY_CNT			750000 /* 1.5 s */
 #define PIO_RETRY_DELAY			2 /* 2 us*/
 
 #define LINK_WAIT_MAX_RETRIES		10
···
 
 #define MSI_IRQ_NUM			32
 
+#define CFG_RD_CRS_VAL			0xffff0001
+
 struct advk_pcie {
	struct platform_device *pdev;
	void __iomem *base;
+	struct {
+		phys_addr_t match;
+		phys_addr_t remap;
+		phys_addr_t mask;
+		u32 actions;
+	} wins[OB_WIN_COUNT];
+	u8 wins_count;
	struct irq_domain *irq_domain;
	struct irq_chip irq_chip;
+	raw_spinlock_t irq_lock;
	struct irq_domain *msi_domain;
	struct irq_domain *msi_inner_domain;
	struct irq_chip msi_bottom_irq_chip;
···
		dev_err(dev, "link never came up\n");
 }
 
+/*
+ * Set PCIe address window register which could be used for memory
+ * mapping.
+ */
+static void advk_pcie_set_ob_win(struct advk_pcie *pcie, u8 win_num,
+				 phys_addr_t match, phys_addr_t remap,
+				 phys_addr_t mask, u32 actions)
+{
+	advk_writel(pcie, OB_WIN_ENABLE |
+			  lower_32_bits(match), OB_WIN_MATCH_LS(win_num));
+	advk_writel(pcie, upper_32_bits(match), OB_WIN_MATCH_MS(win_num));
+	advk_writel(pcie, lower_32_bits(remap), OB_WIN_REMAP_LS(win_num));
+	advk_writel(pcie, upper_32_bits(remap), OB_WIN_REMAP_MS(win_num));
+	advk_writel(pcie, lower_32_bits(mask), OB_WIN_MASK_LS(win_num));
+	advk_writel(pcie, upper_32_bits(mask), OB_WIN_MASK_MS(win_num));
+	advk_writel(pcie, actions, OB_WIN_ACTIONS(win_num));
+}
+
+static void advk_pcie_disable_ob_win(struct advk_pcie *pcie, u8 win_num)
+{
+	advk_writel(pcie, 0, OB_WIN_MATCH_LS(win_num));
+	advk_writel(pcie, 0, OB_WIN_MATCH_MS(win_num));
+	advk_writel(pcie, 0, OB_WIN_REMAP_LS(win_num));
+	advk_writel(pcie, 0, OB_WIN_REMAP_MS(win_num));
+	advk_writel(pcie, 0, OB_WIN_MASK_LS(win_num));
+	advk_writel(pcie, 0, OB_WIN_MASK_MS(win_num));
+	advk_writel(pcie, 0, OB_WIN_ACTIONS(win_num));
+}
+
 static void advk_pcie_setup_hw(struct advk_pcie *pcie)
 {
	u32 reg;
+	int i;
 
	/* Enable TX */
	reg = advk_readl(pcie, PCIE_CORE_REF_CLK_REG);
···
	reg = PCIE_IRQ_ALL_MASK & (~PCIE_IRQ_ENABLE_INTS_MASK);
	advk_writel(pcie, reg, HOST_CTRL_INT_MASK_REG);
 
+	/*
+	 * Enable AXI address window location generation:
+	 * When it is enabled, the default outbound window
+	 * configurations (Default User Field: 0xD0074CFC)
+	 * are used to transparent address translation for
+	 * the outbound transactions. Thus, PCIe address
+	 * windows are not required for transparent memory
+	 * access when default outbound window configuration
+	 * is set for memory access.
+	 */
	reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG);
	reg |= PCIE_CORE_CTRL2_OB_WIN_ENABLE;
	advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG);
 
-	/* Bypass the address window mapping for PIO */
+	/*
+	 * Set memory access in Default User Field so it
+	 * is not required to configure PCIe address for
+	 * transparent memory access.
+	 */
+	advk_writel(pcie, OB_WIN_TYPE_MEM, OB_WIN_DEFAULT_ACTIONS);
+
+	/*
+	 * Bypass the address window mapping for PIO:
+	 * Since PIO access already contains all required
+	 * info over AXI interface by PIO registers, the
+	 * address window is not required.
+	 */
	reg = advk_readl(pcie, PIO_CTRL);
	reg |= PIO_CTRL_ADDR_WIN_DISABLE;
	advk_writel(pcie, reg, PIO_CTRL);
+
+	/*
+	 * Configure PCIe address windows for non-memory or
+	 * non-transparent access as by default PCIe uses
+	 * transparent memory access.
+	 */
+	for (i = 0; i < pcie->wins_count; i++)
+		advk_pcie_set_ob_win(pcie, i,
+				     pcie->wins[i].match, pcie->wins[i].remap,
+				     pcie->wins[i].mask, pcie->wins[i].actions);
+
+	/* Disable remaining PCIe outbound windows */
+	for (i = pcie->wins_count; i < OB_WIN_COUNT; i++)
+		advk_pcie_disable_ob_win(pcie, i);
 
	advk_pcie_train_link(pcie);
 
···
	advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG);
 }
 
-static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
+static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u32 *val)
 {
	struct device *dev = &pcie->pdev->dev;
	u32 reg;
···
	status = (reg & PIO_COMPLETION_STATUS_MASK) >>
		PIO_COMPLETION_STATUS_SHIFT;
 
-	if (!status)
-		return;
-
+	/*
+	 * According to HW spec, the PIO status check sequence as below:
+	 * 1) even if COMPLETION_STATUS(bit9:7) indicates successful,
+	 *    it still needs to check Error Status(bit11), only when this bit
+	 *    indicates no error happen, the operation is successful.
+	 * 2) value Unsupported Request(1) of COMPLETION_STATUS(bit9:7) only
+	 *    means a PIO write error, and for PIO read it is successful with
+	 *    a read value of 0xFFFFFFFF.
+	 * 3) value Completion Retry Status(CRS) of COMPLETION_STATUS(bit9:7)
+	 *    only means a PIO write error, and for PIO read it is successful
+	 *    with a read value of 0xFFFF0001.
+	 * 4) value Completer Abort (CA) of COMPLETION_STATUS(bit9:7) means
+	 *    error for both PIO read and PIO write operation.
+	 * 5) other errors are indicated as 'unknown'.
+	 */
	switch (status) {
+	case PIO_COMPLETION_STATUS_OK:
+		if (reg & PIO_ERR_STATUS) {
+			strcomp_status = "COMP_ERR";
+			break;
+		}
+		/* Get the read result */
+		if (val)
+			*val = advk_readl(pcie, PIO_RD_DATA);
+		/* No error */
+		strcomp_status = NULL;
+		break;
	case PIO_COMPLETION_STATUS_UR:
		strcomp_status = "UR";
		break;
	case PIO_COMPLETION_STATUS_CRS:
+		if (allow_crs && val) {
+			/* PCIe r4.0, sec 2.3.2, says:
+			 * If CRS Software Visibility is enabled:
+			 * For a Configuration Read Request that includes both
+			 * bytes of the Vendor ID field of a device Function's
+			 * Configuration Space Header, the Root Complex must
+			 * complete the Request to the host by returning a
+			 * read-data value of 0001h for the Vendor ID field and
+			 * all '1's for any additional bytes included in the
+			 * request.
+			 *
+			 * So CRS in this case is not an error status.
+			 */
+			*val = CFG_RD_CRS_VAL;
+			strcomp_status = NULL;
+			break;
+		}
+		/* PCIe r4.0, sec 2.3.2, says:
+		 * If CRS Software Visibility is not enabled, the Root Complex
+		 * must re-issue the Configuration Request as a new Request.
+		 * If CRS Software Visibility is enabled: For a Configuration
+		 * Write Request or for any other Configuration Read Request,
+		 * the Root Complex must re-issue the Configuration Request as
+		 * a new Request.
+		 * A Root Complex implementation may choose to limit the number
+		 * of Configuration Request/CRS Completion Status loops before
+		 * determining that something is wrong with the target of the
+		 * Request and taking appropriate action, e.g., complete the
+		 * Request to the host as a failed transaction.
+		 *
+		 * To simplify implementation do not re-issue the Configuration
+		 * Request and complete the Request as a failed transaction.
+		 */
		strcomp_status = "CRS";
		break;
	case PIO_COMPLETION_STATUS_CA:
···
		break;
	}
 
+	if (!strcomp_status)
+		return 0;
+
	if (reg & PIO_NON_POSTED_REQ)
		str_posted = "Non-posted";
	else
···
 
	dev_err(dev, "%s PIO Response Status: %s, %#x @ %#x\n",
		str_posted, strcomp_status, reg, advk_readl(pcie, PIO_ADDR_LS));
+
+	return -EFAULT;
 }
 
 static int advk_pcie_wait_pio(struct advk_pcie *pcie)
···
	case PCI_EXP_RTCTL: {
		u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG);
		*value = (val & PCIE_MSG_PM_PME_MASK) ? 0 : PCI_EXP_RTCTL_PMEIE;
+		*value |= PCI_EXP_RTCAP_CRSVIS << 16;
		return PCI_BRIDGE_EMUL_HANDLED;
	}
 
···
 static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
 {
	struct pci_bridge_emul *bridge = &pcie->bridge;
+	int ret;
 
	bridge->conf.vendor =
		cpu_to_le16(advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff);
···
	bridge->data = pcie;
	bridge->ops = &advk_pci_bridge_emul_ops;
 
-	return pci_bridge_emul_init(bridge, 0);
+	/* PCIe config space can be initialized after pci_bridge_emul_init() */
+	ret = pci_bridge_emul_init(bridge, 0);
+	if (ret < 0)
+		return ret;
+
+	/* Indicates supports for Completion Retry Status */
+	bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
+
+	return 0;
 }
 
 static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
···
			     int where, int size, u32 *val)
 {
	struct advk_pcie *pcie = bus->sysdata;
+	bool allow_crs;
	u32 reg;
	int ret;
 
···
		return pci_bridge_emul_conf_read(&pcie->bridge, where,
						 size, val);
 
+	/*
+	 * Completion Retry Status is possible to return only when reading all
+	 * 4 bytes from PCI_VENDOR_ID and PCI_DEVICE_ID registers at once and
+	 * CRSSVE flag on Root Bridge is enabled.
+	 */
+	allow_crs = (where == PCI_VENDOR_ID) && (size == 4) &&
+		    (le16_to_cpu(pcie->bridge.pcie_conf.rootctl) &
+		     PCI_EXP_RTCTL_CRSSVE);
+
	if (advk_pcie_pio_is_running(pcie)) {
+		/*
+		 * If it is possible return Completion Retry Status so caller
+		 * tries to issue the request again instead of failing.
+		 */
+		if (allow_crs) {
+			*val = CFG_RD_CRS_VAL;
+			return PCIBIOS_SUCCESSFUL;
+		}
		*val = 0xffffffff;
		return PCIBIOS_SET_FAILED;
	}
···
 
	ret = advk_pcie_wait_pio(pcie);
	if (ret < 0) {
+		/*
+		 * If it is possible return Completion Retry Status so caller
+		 * tries to issue the request again instead of failing.
+		 */
+		if (allow_crs) {
+			*val = CFG_RD_CRS_VAL;
+			return PCIBIOS_SUCCESSFUL;
+		}
		*val = 0xffffffff;
		return PCIBIOS_SET_FAILED;
	}
 
-	advk_pcie_check_pio_status(pcie);
+	/* Check PIO status and get the read result */
+	ret = advk_pcie_check_pio_status(pcie, allow_crs, val);
+	if (ret < 0) {
+		*val = 0xffffffff;
+		return PCIBIOS_SET_FAILED;
+	}
 
-	/* Get the read result */
-	*val = advk_readl(pcie, PIO_RD_DATA);
	if (size == 1)
		*val = (*val >> (8 * (where & 3))) & 0xff;
	else if (size == 2)
···
	if (ret < 0)
		return PCIBIOS_SET_FAILED;
 
-	advk_pcie_check_pio_status(pcie);
+	ret = advk_pcie_check_pio_status(pcie, false, NULL);
+	if (ret < 0)
+		return PCIBIOS_SET_FAILED;
 
	return PCIBIOS_SUCCESSFUL;
 }
···
 {
	struct advk_pcie *pcie = d->domain->host_data;
	irq_hw_number_t hwirq = irqd_to_hwirq(d);
+	unsigned long flags;
	u32 mask;
 
+	raw_spin_lock_irqsave(&pcie->irq_lock, flags);
	mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
	mask |= PCIE_ISR1_INTX_ASSERT(hwirq);
	advk_writel(pcie, mask, PCIE_ISR1_MASK_REG);
+	raw_spin_unlock_irqrestore(&pcie->irq_lock, flags);
 }
 
 static void advk_pcie_irq_unmask(struct irq_data *d)
 {
	struct advk_pcie *pcie = d->domain->host_data;
	irq_hw_number_t hwirq = irqd_to_hwirq(d);
+	unsigned long flags;
	u32 mask;
 
+	raw_spin_lock_irqsave(&pcie->irq_lock, flags);
	mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
	mask &= ~PCIE_ISR1_INTX_ASSERT(hwirq);
	advk_writel(pcie, mask, PCIE_ISR1_MASK_REG);
+	raw_spin_unlock_irqrestore(&pcie->irq_lock, flags);
 }
 
 static int advk_pcie_irq_map(struct irq_domain *h,
···
	struct irq_chip *irq_chip;
	int ret = 0;
 
+	raw_spin_lock_init(&pcie->irq_lock);
+
	pcie_intc_node = of_get_next_child(node, NULL);
	if (!pcie_intc_node) {
		dev_err(dev, "No PCIe Intc node found\n");
···
 {
	u32 isr0_val, isr0_mask, isr0_status;
	u32 isr1_val, isr1_mask, isr1_status;
-	int i, virq;
+	int i;
 
	isr0_val = advk_readl(pcie, PCIE_ISR0_REG);
	isr0_mask = advk_readl(pcie, PCIE_ISR0_MASK_REG);
···
		advk_writel(pcie, PCIE_ISR1_INTX_ASSERT(i),
			    PCIE_ISR1_REG);
 
-		virq = irq_find_mapping(pcie->irq_domain, i);
-		generic_handle_irq(virq);
+		generic_handle_domain_irq(pcie->irq_domain, i);
	}
 }
···
	struct device *dev = &pdev->dev;
	struct advk_pcie *pcie;
	struct pci_host_bridge *bridge;
+	struct resource_entry *entry;
	int ret, irq;
 
	bridge = devm_pci_alloc_host_bridge(dev, sizeof(struct advk_pcie));
···
	pcie = pci_host_bridge_priv(bridge);
	pcie->pdev = pdev;
	platform_set_drvdata(pdev, pcie);
+
+	resource_list_for_each_entry(entry, &bridge->windows) {
+		resource_size_t start = entry->res->start;
+		resource_size_t size = resource_size(entry->res);
+		unsigned long type = resource_type(entry->res);
+		u64 win_size;
+
+		/*
+		 * Aardvark hardware allows to configure also PCIe window
+		 * for config type 0 and type 1 mapping, but driver uses
+		 * only PIO for issuing configuration transfers which does
+		 * not use PCIe window configuration.
+		 */
+		if (type != IORESOURCE_MEM && type != IORESOURCE_MEM_64 &&
+		    type != IORESOURCE_IO)
+			continue;
+
+		/*
+		 * Skip transparent memory resources. Default outbound access
+		 * configuration is set to transparent memory access so it
+		 * does not need window configuration.
+		 */
+		if ((type == IORESOURCE_MEM || type == IORESOURCE_MEM_64) &&
+		    entry->offset == 0)
+			continue;
+
+		/*
+		 * The n-th PCIe window is configured by tuple (match, remap, mask)
+		 * and an access to address A uses this window if A matches the
+		 * match with given mask.
+		 * So every PCIe window size must be a power of two and every start
+		 * address must be aligned to window size. Minimal size is 64 KiB
+		 * because lower 16 bits of mask must be zero. Remapped address
+		 * may have set only bits from the mask.
+		 */
+		while (pcie->wins_count < OB_WIN_COUNT && size > 0) {
+			/* Calculate the largest aligned window size */
+			win_size = (1ULL << (fls64(size)-1)) |
+				   (start ? (1ULL << __ffs64(start)) : 0);
+			win_size = 1ULL << __ffs64(win_size);
+			if (win_size < 0x10000)
+				break;
+
+			dev_dbg(dev,
+				"Configuring PCIe window %d: [0x%llx-0x%llx] as %lu\n",
+				pcie->wins_count, (unsigned long long)start,
+				(unsigned long long)start + win_size, type);
+
+			if (type == IORESOURCE_IO) {
+				pcie->wins[pcie->wins_count].actions = OB_WIN_TYPE_IO;
+				pcie->wins[pcie->wins_count].match = pci_pio_to_address(start);
+			} else {
+				pcie->wins[pcie->wins_count].actions = OB_WIN_TYPE_MEM;
+				pcie->wins[pcie->wins_count].match = start;
+			}
+			pcie->wins[pcie->wins_count].remap = start - entry->offset;
+			pcie->wins[pcie->wins_count].mask = ~(win_size - 1);
+
+			if (pcie->wins[pcie->wins_count].remap & (win_size - 1))
+				break;
+
+			start += win_size;
+			size -= win_size;
+			pcie->wins_count++;
+		}
+
+		if (size > 0) {
+			dev_err(&pcie->pdev->dev,
+				"Invalid PCIe region [0x%llx-0x%llx]\n",
+				(unsigned long long)entry->res->start,
+				(unsigned long long)entry->res->end + 1);
+			return -EINVAL;
+		}
+	}
 
	pcie->base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(pcie->base))
···
 {
	struct advk_pcie *pcie = platform_get_drvdata(pdev);
	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
+	int i;
 
	pci_lock_rescan_remove();
	pci_stop_root_bus(bridge->bus);
···
 
	advk_pcie_remove_msi_irq_domain(pcie);
	advk_pcie_remove_irq_domain(pcie);
+
+	/* Disable outbound address windows mapping */
+	for (i = 0; i < OB_WIN_COUNT; i++)
+		advk_pcie_disable_ob_win(pcie, i);
 
	return 0;
 }
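The window-splitting loop in advk_pcie_probe() above is self-contained arithmetic, so it can be modelled outside the kernel. Below is a minimal userspace sketch of that step, with `__builtin_clzll`/`__builtin_ctzll` standing in for the kernel's fls64()/__ffs64(); the helper name is hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of one iteration of the advk_pcie_probe() loop:
 * pick the largest power-of-two window that is aligned to 'start'
 * and no larger than the remaining 'size'.  'size' must be nonzero
 * (the driver's loop guards on size > 0 before calling this math). */
static uint64_t largest_aligned_win(uint64_t start, uint64_t size)
{
	/* Highest power of two <= size, i.e. 1ULL << (fls64(size) - 1) */
	uint64_t win = 1ULL << (63 - __builtin_clzll(size));

	/* The window must also not exceed the alignment of 'start' */
	if (start)
		win |= 1ULL << __builtin_ctzll(start);

	/* Keep only the lowest candidate bit, as the driver does */
	return 1ULL << __builtin_ctzll(win);
}
```

A 3 MiB region starting at 0x10000, for instance, yields a 64 KiB window first (limited by the start alignment), while the same region starting at address 0 yields a 2 MiB window (limited only by the size).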
drivers/pci/controller/pci-ftpci100.c (+1 -1)
···
	for (i = 0; i < 4; i++) {
		if ((irq_stat & BIT(i)) == 0)
			continue;
-		generic_handle_irq(irq_find_mapping(p->irqdomain, i));
+		generic_handle_domain_irq(p->irqdomain, i);
	}
 
	chained_irq_exit(irqchip, desc);
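Many conversions in this pull replace the irq_find_mapping() + generic_handle_irq() pair with a single generic_handle_domain_irq() call, which dispatches by (domain, hwirq) and reports an unmapped hwirq through its return value. A toy userspace model of that pattern (not the kernel API; all names here are made up for illustration):

```c
#include <assert.h>

/* Toy "irq domain": a table from hardware IRQ numbers to handlers,
 * so callers dispatch in one step instead of first looking up a
 * virtual IRQ and then handling it. */
#define TOY_NR_IRQS 4

struct toy_domain {
	void (*handler[TOY_NR_IRQS])(unsigned int hwirq);
};

static int toy_handle_domain_irq(struct toy_domain *d, unsigned int hwirq)
{
	if (hwirq >= TOY_NR_IRQS || !d->handler[hwirq])
		return -1;	/* no mapping, akin to an error return */
	d->handler[hwirq](hwirq);
	return 0;
}

static int fired;
static void toy_isr(unsigned int hwirq) { fired = (int)hwirq + 1; }
```

The single call site also removes the per-driver "did irq_find_mapping() return 0?" boilerplate, which is why most of the diffs below shrink.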
drivers/pci/controller/pci-hyperv.c (+112 -41)
···
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/pci.h>
+#include <linux/pci-ecam.h>
 #include <linux/delay.h>
 #include <linux/semaphore.h>
 #include <linux/irqdomain.h>
···
	PCI_PROTOCOL_VERSION_1_1 = PCI_MAKE_VERSION(1, 1),	/* Win10 */
	PCI_PROTOCOL_VERSION_1_2 = PCI_MAKE_VERSION(1, 2),	/* RS1 */
	PCI_PROTOCOL_VERSION_1_3 = PCI_MAKE_VERSION(1, 3),	/* Vibranium */
+	PCI_PROTOCOL_VERSION_1_4 = PCI_MAKE_VERSION(1, 4),	/* WS2022 */
 };
 
 #define CPU_AFFINITY_ALL	-1ULL
···
 * first.
 */
 static enum pci_protocol_version_t pci_protocol_versions[] = {
+	PCI_PROTOCOL_VERSION_1_4,
	PCI_PROTOCOL_VERSION_1_3,
	PCI_PROTOCOL_VERSION_1_2,
	PCI_PROTOCOL_VERSION_1_1,
···
	PCI_CREATE_INTERRUPT_MESSAGE2	= PCI_MESSAGE_BASE + 0x17,
	PCI_DELETE_INTERRUPT_MESSAGE2	= PCI_MESSAGE_BASE + 0x18, /* unused */
	PCI_BUS_RELATIONS2		= PCI_MESSAGE_BASE + 0x19,
+	PCI_RESOURCES_ASSIGNED3		= PCI_MESSAGE_BASE + 0x1A,
+	PCI_CREATE_INTERRUPT_MESSAGE3	= PCI_MESSAGE_BASE + 0x1B,
	PCI_MESSAGE_MAXIMUM
 };
···
 struct hv_msi_desc2 {
	u8	vector;
	u8	delivery_mode;
+	u16	vector_count;
+	u16	processor_count;
+	u16	processor_array[32];
+} __packed;
+
+/*
+ * struct hv_msi_desc3 - 1.3 version of hv_msi_desc
+ * Everything is the same as in 'hv_msi_desc2' except that the size of the
+ * 'vector' field is larger to support bigger vector values. For ex: LPI
+ * vectors on ARM.
+ */
+struct hv_msi_desc3 {
+	u32	vector;
+	u8	delivery_mode;
+	u8	reserved;
	u16	vector_count;
	u16	processor_count;
	u16	processor_array[32];
···
	struct hv_msi_desc2 int_desc;
 } __packed;
 
+struct pci_create_interrupt3 {
+	struct pci_message message_type;
+	union win_slot_encoding wslot;
+	struct hv_msi_desc3 int_desc;
+} __packed;
+
 struct pci_delete_interrupt {
	struct pci_message message_type;
	union win_slot_encoding wslot;
···
 };
 
 struct hv_pcibus_device {
+#ifdef CONFIG_X86
	struct pci_sysdata sysdata;
+#elif defined(CONFIG_ARM64)
+	struct pci_config_window sysdata;
+#endif
+	struct pci_host_bridge *bridge;
+	struct fwnode_handle *fwnode;
	/* Protocol version negotiated with the host */
	enum pci_protocol_version_t protocol_version;
	enum hv_pcibus_state state;
···
	spinlock_t config_lock;	/* Avoid two threads writing index page */
	spinlock_t device_list_lock;	/* Protect lists below */
	void __iomem *cfg_addr;
-
-	struct list_head resources_for_children;
 
	struct list_head children;
	struct list_head dr_list;
···
	return sizeof(*int_pkt);
 }
 
+/*
+ * Create MSI w/ dummy vCPU set targeting just one vCPU, overwritten
+ * by subsequent retarget in hv_irq_unmask().
+ */
+static int hv_compose_msi_req_get_cpu(struct cpumask *affinity)
+{
+	return cpumask_first_and(affinity, cpu_online_mask);
+}
+
 static u32 hv_compose_msi_req_v2(
	struct pci_create_interrupt2 *int_pkt, struct cpumask *affinity,
	u32 slot, u8 vector)
···
	int_pkt->int_desc.vector = vector;
	int_pkt->int_desc.vector_count = 1;
	int_pkt->int_desc.delivery_mode = APIC_DELIVERY_MODE_FIXED;
+	cpu = hv_compose_msi_req_get_cpu(affinity);
+	int_pkt->int_desc.processor_array[0] =
+		hv_cpu_number_to_vp_number(cpu);
+	int_pkt->int_desc.processor_count = 1;
 
-	/*
-	 * Create MSI w/ dummy vCPU set targeting just one vCPU, overwritten
-	 * by subsequent retarget in hv_irq_unmask().
-	 */
-	cpu = cpumask_first_and(affinity, cpu_online_mask);
+	return sizeof(*int_pkt);
+}
+
+static u32 hv_compose_msi_req_v3(
+	struct pci_create_interrupt3 *int_pkt, struct cpumask *affinity,
+	u32 slot, u32 vector)
+{
+	int cpu;
+
+	int_pkt->message_type.type = PCI_CREATE_INTERRUPT_MESSAGE3;
+	int_pkt->wslot.slot = slot;
+	int_pkt->int_desc.vector = vector;
+	int_pkt->int_desc.reserved = 0;
+	int_pkt->int_desc.vector_count = 1;
+	int_pkt->int_desc.delivery_mode = APIC_DELIVERY_MODE_FIXED;
+	cpu = hv_compose_msi_req_get_cpu(affinity);
	int_pkt->int_desc.processor_array[0] =
		hv_cpu_number_to_vp_number(cpu);
	int_pkt->int_desc.processor_count = 1;
···
	union {
		struct pci_create_interrupt v1;
		struct pci_create_interrupt2 v2;
+		struct pci_create_interrupt3 v3;
	} int_pkts;
 } __packed ctxt;
 
···
	case PCI_PROTOCOL_VERSION_1_2:
	case PCI_PROTOCOL_VERSION_1_3:
		size = hv_compose_msi_req_v2(&ctxt.int_pkts.v2,
+					dest,
+					hpdev->desc.win_slot.slot,
+					cfg->vector);
+		break;
+
+	case PCI_PROTOCOL_VERSION_1_4:
+		size = hv_compose_msi_req_v3(&ctxt.int_pkts.v3,
					dest,
					hpdev->desc.win_slot.slot,
					cfg->vector);
···
	hbus->msi_info.handler = handle_edge_irq;
	hbus->msi_info.handler_name = "edge";
	hbus->msi_info.data = hbus;
-	hbus->irq_domain = pci_msi_create_irq_domain(hbus->sysdata.fwnode,
+	hbus->irq_domain = pci_msi_create_irq_domain(hbus->fwnode,
						     &hbus->msi_info,
						     x86_vector_domain);
	if (!hbus->irq_domain) {
···
			"Failed to build an MSI IRQ domain\n");
		return -ENODEV;
	}
+
+	dev_set_msi_domain(&hbus->bridge->dev, hbus->irq_domain);
 
	return 0;
 }
···
 
	slot_nr = PCI_SLOT(wslot_to_devfn(hpdev->desc.win_slot.slot));
	snprintf(name, SLOT_NAME_SIZE, "%u", hpdev->desc.ser);
-	hpdev->pci_slot = pci_create_slot(hbus->pci_bus, slot_nr,
+	hpdev->pci_slot = pci_create_slot(hbus->bridge->bus, slot_nr,
					  name, NULL);
	if (IS_ERR(hpdev->pci_slot)) {
		pr_warn("pci_create slot %s failed\n", name);
···
 static void hv_pci_assign_numa_node(struct hv_pcibus_device *hbus)
 {
	struct pci_dev *dev;
-	struct pci_bus *bus = hbus->pci_bus;
+	struct pci_bus *bus = hbus->bridge->bus;
	struct hv_pci_dev *hv_dev;
 
	list_for_each_entry(dev, &bus->devices, bus_list) {
···
 */
 static int create_root_hv_pci_bus(struct hv_pcibus_device *hbus)
 {
-	/* Register the device */
-	hbus->pci_bus = pci_create_root_bus(&hbus->hdev->device,
-					    0, /* bus number is always zero */
-					    &hv_pcifront_ops,
-					    &hbus->sysdata,
-					    &hbus->resources_for_children);
-	if (!hbus->pci_bus)
-		return -ENODEV;
+	int error;
+	struct pci_host_bridge *bridge = hbus->bridge;
+
+	bridge->dev.parent = &hbus->hdev->device;
+	bridge->sysdata = &hbus->sysdata;
+	bridge->ops = &hv_pcifront_ops;
+
+	error = pci_scan_root_bus_bridge(bridge);
+	if (error)
+		return error;
 
	pci_lock_rescan_remove();
-	pci_scan_child_bus(hbus->pci_bus);
	hv_pci_assign_numa_node(hbus);
-	pci_bus_assign_resources(hbus->pci_bus);
+	pci_bus_assign_resources(bridge->bus);
	hv_pci_assign_slots(hbus);
-	pci_bus_add_devices(hbus->pci_bus);
+	pci_bus_add_devices(bridge->bus);
	pci_unlock_rescan_remove();
	hbus->state = hv_pcibus_installed;
	return 0;
···
	 * because there may have been changes.
	 */
	pci_lock_rescan_remove();
-	pci_scan_child_bus(hbus->pci_bus);
+	pci_scan_child_bus(hbus->bridge->bus);
	hv_pci_assign_numa_node(hbus);
	hv_pci_assign_slots(hbus);
	pci_unlock_rescan_remove();
···
	/*
	 * Ejection can come before or after the PCI bus has been set up, so
	 * attempt to find it and tear down the bus state, if it exists. This
-	 * must be done without constructs like pci_domain_nr(hbus->pci_bus)
-	 * because hbus->pci_bus may not exist yet.
+	 * must be done without constructs like pci_domain_nr(hbus->bridge->bus)
+	 * because hbus->bridge->bus may not exist yet.
	 */
	wslot = wslot_to_devfn(hpdev->desc.win_slot.slot);
-	pdev = pci_get_domain_bus_and_slot(hbus->sysdata.domain, 0, wslot);
+	pdev = pci_get_domain_bus_and_slot(hbus->bridge->domain_nr, 0, wslot);
	if (pdev) {
		pci_lock_rescan_remove();
		pci_stop_and_remove_bus_device(pdev);
···
		/* Modify this resource to become a bridge window. */
		hbus->low_mmio_res->flags |= IORESOURCE_WINDOW;
		hbus->low_mmio_res->flags &= ~IORESOURCE_BUSY;
-		pci_add_resource(&hbus->resources_for_children,
-				 hbus->low_mmio_res);
+		pci_add_resource(&hbus->bridge->windows, hbus->low_mmio_res);
	}
 
	if (hbus->high_mmio_space) {
···
		/* Modify this resource to become a bridge window. */
		hbus->high_mmio_res->flags |= IORESOURCE_WINDOW;
		hbus->high_mmio_res->flags &= ~IORESOURCE_BUSY;
-		pci_add_resource(&hbus->resources_for_children,
-				 hbus->high_mmio_res);
+		pci_add_resource(&hbus->bridge->windows, hbus->high_mmio_res);
	}
 
	return 0;
···
 static int hv_pci_probe(struct hv_device *hdev,
			const struct hv_vmbus_device_id *dev_id)
 {
+	struct pci_host_bridge *bridge;
	struct hv_pcibus_device *hbus;
	u16 dom_req, dom;
	char *name;
···
	 * hv_irq_unmask(). Those must not cross a page boundary.
	 */
	BUILD_BUG_ON(sizeof(*hbus) > HV_HYP_PAGE_SIZE);
+
+	bridge = devm_pci_alloc_host_bridge(&hdev->device, 0);
+	if (!bridge)
+		return -ENOMEM;
 
	/*
	 * With the recent 59bb47985c1d ("mm, sl[aou]b: guarantee natural
···
	hbus = kzalloc(HV_HYP_PAGE_SIZE, GFP_KERNEL);
	if (!hbus)
		return -ENOMEM;
+
+	hbus->bridge = bridge;
	hbus->state = hv_pcibus_init;
	hbus->wslot_res_allocated = -1;
···
		"PCI dom# 0x%hx has collision, using 0x%hx",
		dom_req, dom);
 
+	hbus->bridge->domain_nr = dom;
+#ifdef CONFIG_X86
	hbus->sysdata.domain = dom;
+#endif
 
	hbus->hdev = hdev;
	INIT_LIST_HEAD(&hbus->children);
	INIT_LIST_HEAD(&hbus->dr_list);
-	INIT_LIST_HEAD(&hbus->resources_for_children);
	spin_lock_init(&hbus->config_lock);
	spin_lock_init(&hbus->device_list_lock);
	spin_lock_init(&hbus->retarget_msi_interrupt_lock);
	hbus->wq = alloc_ordered_workqueue("hv_pci_%x", 0,
-					   hbus->sysdata.domain);
+					   hbus->bridge->domain_nr);
	if (!hbus->wq) {
		ret = -ENOMEM;
		goto free_dom;
···
		goto unmap;
	}
 
-	hbus->sysdata.fwnode = irq_domain_alloc_named_fwnode(name);
+	hbus->fwnode = irq_domain_alloc_named_fwnode(name);
	kfree(name);
-	if (!hbus->sysdata.fwnode) {
+	if (!hbus->fwnode) {
		ret = -ENOMEM;
		goto unmap;
	}
···
 free_irq_domain:
	irq_domain_remove(hbus->irq_domain);
 free_fwnode:
-	irq_domain_free_fwnode(hbus->sysdata.fwnode);
+	irq_domain_free_fwnode(hbus->fwnode);
 unmap:
	iounmap(hbus->cfg_addr);
 free_config:
···
 destroy_wq:
	destroy_workqueue(hbus->wq);
 free_dom:
-	hv_put_dom_num(hbus->sysdata.domain);
+	hv_put_dom_num(hbus->bridge->domain_nr);
 free_bus:
	kfree(hbus);
	return ret;
···
 
	/* Remove the bus from PCI's point of view. */
	pci_lock_rescan_remove();
-	pci_stop_root_bus(hbus->pci_bus);
+	pci_stop_root_bus(hbus->bridge->bus);
	hv_pci_remove_slots(hbus);
-	pci_remove_root_bus(hbus->pci_bus);
+	pci_remove_root_bus(hbus->bridge->bus);
	pci_unlock_rescan_remove();
 }
···
 
	iounmap(hbus->cfg_addr);
	hv_free_config_window(hbus);
-	pci_free_resource_list(&hbus->resources_for_children);
	hv_pci_free_bridge_windows(hbus);
	irq_domain_remove(hbus->irq_domain);
-	irq_domain_free_fwnode(hbus->sysdata.fwnode);
+	irq_domain_free_fwnode(hbus->fwnode);
 
-	hv_put_dom_num(hbus->sysdata.domain);
+	hv_put_dom_num(hbus->bridge->domain_nr);
 
	kfree(hbus);
	return ret;
···
 */
 static void hv_pci_restore_msi_state(struct hv_pcibus_device *hbus)
 {
-	pci_walk_bus(hbus->pci_bus, hv_pci_restore_msi_msg, NULL);
+	pci_walk_bus(hbus->bridge->bus, hv_pci_restore_msi_msg, NULL);
 }
 
 static int hv_pci_resume(struct hv_device *hdev)
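The new hv_msi_desc3 in the hyperv diff differs from hv_msi_desc2 only in the widened vector field (u32 instead of u8, for large vector values such as ARM LPIs) plus an explicit padding byte, which grows the packed layout by four bytes. A userspace sketch of the two layouts from the diff, using `<stdint.h>` types in place of the kernel's u8/u16/u32:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Layout sketch of the MSI descriptors shown in the diff above.
 * __attribute__((packed)) mirrors the kernel's __packed, so no
 * padding is inserted between members. */
struct hv_msi_desc2 {
	uint8_t  vector;
	uint8_t  delivery_mode;
	uint16_t vector_count;
	uint16_t processor_count;
	uint16_t processor_array[32];
} __attribute__((packed));

struct hv_msi_desc3 {
	uint32_t vector;		/* widened from u8 */
	uint8_t  delivery_mode;
	uint8_t  reserved;		/* explicit padding byte */
	uint16_t vector_count;
	uint16_t processor_count;
	uint16_t processor_array[32];
} __attribute__((packed));
```

Everything from vector_count onward keeps the same relative order, which is why the diff can treat that tail as shared context between the two structs.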
drivers/pci/controller/pci-tegra.c (+17 -21)
···
	struct gpio_desc *reset_gpio;
 };
 
-struct tegra_pcie_bus {
-	struct list_head list;
-	unsigned int nr;
-};
-
 static inline void afi_writel(struct tegra_pcie *pcie, u32 value,
			      unsigned long offset)
 {
···
 static irqreturn_t tegra_pcie_isr(int irq, void *arg)
 {
-	const char *err_msg[] = {
+	static const char * const err_msg[] = {
		"Unknown",
		"AXI slave error",
		"AXI decode error",
···
	while (reg) {
		unsigned int offset = find_first_bit(&reg, 32);
		unsigned int index = i * 32 + offset;
-		unsigned int irq;
+		int ret;
 
-		irq = irq_find_mapping(msi->domain->parent, index);
-		if (irq) {
-			generic_handle_irq(irq);
-		} else {
+		ret = generic_handle_domain_irq(msi->domain->parent, index);
+		if (ret) {
			/*
			 * that's weird who triggered this?
			 * just clear it
···
	rp->np = port;
 
	rp->base = devm_pci_remap_cfg_resource(dev, &rp->regs);
-	if (IS_ERR(rp->base))
-		return PTR_ERR(rp->base);
+	if (IS_ERR(rp->base)) {
+		err = PTR_ERR(rp->base);
+		goto err_node_put;
+	}
 
	label = devm_kasprintf(dev, GFP_KERNEL, "pex-reset-%u", index);
	if (!label) {
-		dev_err(dev, "failed to create reset GPIO label\n");
-		return -ENOMEM;
+		err = -ENOMEM;
+		goto err_node_put;
	}
 
	/*
···
	} else {
		dev_err(dev, "failed to get reset GPIO: %ld\n",
			PTR_ERR(rp->reset_gpio));
-		return PTR_ERR(rp->reset_gpio);
+		err = PTR_ERR(rp->reset_gpio);
+		goto err_node_put;
	}
 }
 
···
	if (list_empty(&pcie->ports))
		return NULL;
 
-	seq_printf(s, "Index Status\n");
+	seq_puts(s, "Index Status\n");
 
	return seq_list_start(&pcie->ports, *pos);
 }
···
	seq_printf(s, "%2u ", port->index);
 
	if (up)
-		seq_printf(s, "up");
+		seq_puts(s, "up");
 
	if (active) {
		if (up)
-			seq_printf(s, ", ");
+			seq_puts(s, ", ");
 
-		seq_printf(s, "active");
+		seq_puts(s, "active");
	}
 
-	seq_printf(s, "\n");
+	seq_puts(s, "\n");
	return 0;
 }
drivers/pci/controller/pci-xgene-msi.c (+3 -7)
···
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct xgene_msi_group *msi_groups;
	struct xgene_msi *xgene_msi;
-	unsigned int virq;
-	int msir_index, msir_val, hw_irq;
+	int msir_index, msir_val, hw_irq, ret;
	u32 intr_index, grp_select, msi_grp;
 
	chained_irq_enter(chip, desc);
···
				 * CPU0
				 */
				hw_irq = hwirq_to_canonical_hwirq(hw_irq);
-				virq = irq_find_mapping(xgene_msi->inner_domain, hw_irq);
-				WARN_ON(!virq);
-				if (virq != 0)
-					generic_handle_irq(virq);
+				ret = generic_handle_domain_irq(xgene_msi->inner_domain, hw_irq);
+				WARN_ON_ONCE(ret);
				msir_val &= ~(1 << intr_index);
			}
			grp_select &= ~(1 << msir_index);
···
	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	xgene_msi->msi_regs = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(xgene_msi->msi_regs)) {
-		dev_err(&pdev->dev, "no reg space\n");
		rc = PTR_ERR(xgene_msi->msi_regs);
		goto error;
	}
drivers/pci/controller/pcie-altera-msi.c (+4 -6)
···
	struct altera_msi *msi;
	unsigned long status;
	u32 bit;
-	u32 virq;
+	int ret;
 
	chained_irq_enter(chip, desc);
	msi = irq_desc_get_handler_data(desc);
···
			/* Dummy read from vector to clear the interrupt */
			readl_relaxed(msi->vector_base + (bit * sizeof(u32)));
 
-			virq = irq_find_mapping(msi->inner_domain, bit);
-			if (virq)
-				generic_handle_irq(virq);
-			else
-				dev_err(&msi->pdev->dev, "unexpected MSI\n");
+			ret = generic_handle_domain_irq(msi->inner_domain, bit);
+			if (ret)
+				dev_err_ratelimited(&msi->pdev->dev, "unexpected MSI\n");
		}
	}
 
drivers/pci/controller/pcie-altera.c (+4 -6)
···
	struct device *dev;
	unsigned long status;
	u32 bit;
-	u32 virq;
+	int ret;
 
	chained_irq_enter(chip, desc);
	pcie = irq_desc_get_handler_data(desc);
···
			/* clear interrupts */
			cra_writel(pcie, 1 << bit, P2A_INT_STATUS);
 
-			virq = irq_find_mapping(pcie->irq_domain, bit);
-			if (virq)
-				generic_handle_irq(virq);
-			else
-				dev_err(dev, "unexpected IRQ, INT%d\n", bit);
+			ret = generic_handle_domain_irq(pcie->irq_domain, bit);
+			if (ret)
+				dev_err_ratelimited(dev, "unexpected IRQ, INT%d\n", bit);
		}
	}
 
+4 -5
drivers/pci/controller/pcie-brcmstb.c
··· 476 476 static void brcm_pcie_msi_isr(struct irq_desc *desc) 477 477 { 478 478 struct irq_chip *chip = irq_desc_get_chip(desc); 479 - unsigned long status, virq; 479 + unsigned long status; 480 480 struct brcm_msi *msi; 481 481 struct device *dev; 482 482 u32 bit; ··· 489 489 status >>= msi->legacy_shift; 490 490 491 491 for_each_set_bit(bit, &status, msi->nr) { 492 - virq = irq_find_mapping(msi->inner_domain, bit); 493 - if (virq) 494 - generic_handle_irq(virq); 495 - else 492 + int ret; 493 + ret = generic_handle_domain_irq(msi->inner_domain, bit); 494 + if (ret) 496 495 dev_dbg(dev, "unexpected MSI\n"); 497 496 } 498 497
+6 -10
drivers/pci/controller/pcie-iproc-bcma.c
··· 35 35 { 36 36 struct device *dev = &bdev->dev; 37 37 struct iproc_pcie *pcie; 38 - LIST_HEAD(resources); 39 38 struct pci_host_bridge *bridge; 40 39 int ret; 41 40 ··· 59 60 pcie->mem.end = bdev->addr_s[0] + SZ_128M - 1; 60 61 pcie->mem.name = "PCIe MEM space"; 61 62 pcie->mem.flags = IORESOURCE_MEM; 62 - pci_add_resource(&resources, &pcie->mem); 63 + pci_add_resource(&bridge->windows, &pcie->mem); 64 + ret = devm_request_pci_bus_resources(dev, &bridge->windows); 65 + if (ret) 66 + return ret; 63 67 64 68 pcie->map_irq = iproc_pcie_bcma_map_irq; 65 69 66 - ret = iproc_pcie_setup(pcie, &resources); 67 - if (ret) { 68 - dev_err(dev, "PCIe controller setup failed\n"); 69 - pci_free_resource_list(&resources); 70 - return ret; 71 - } 72 - 73 70 bcma_set_drvdata(bdev, pcie); 74 - return 0; 71 + 72 + return iproc_pcie_setup(pcie, &bridge->windows); 75 73 } 76 74 77 75 static void iproc_pcie_bcma_remove(struct bcma_device *bdev)
+1 -3
drivers/pci/controller/pcie-iproc-msi.c
··· 326 326 struct iproc_msi *msi; 327 327 u32 eq, head, tail, nr_events; 328 328 unsigned long hwirq; 329 - int virq; 330 329 331 330 chained_irq_enter(chip, desc); 332 331 ··· 361 362 /* process all outstanding events */ 362 363 while (nr_events--) { 363 364 hwirq = decode_msi_hwirq(msi, eq, head); 364 - virq = irq_find_mapping(msi->inner_domain, hwirq); 365 - generic_handle_irq(virq); 365 + generic_handle_domain_irq(msi->inner_domain, hwirq); 366 366 367 367 head++; 368 368 head %= EQ_LEN;
+4 -9
drivers/pci/controller/pcie-mediatek-gen3.c
··· 645 645 { 646 646 struct mtk_msi_set *msi_set = &port->msi_sets[set_idx]; 647 647 unsigned long msi_enable, msi_status; 648 - unsigned int virq; 649 648 irq_hw_number_t bit, hwirq; 650 649 651 650 msi_enable = readl_relaxed(msi_set->base + PCIE_MSI_SET_ENABLE_OFFSET); ··· 658 659 659 660 for_each_set_bit(bit, &msi_status, PCIE_MSI_IRQS_PER_SET) { 660 661 hwirq = bit + set_idx * PCIE_MSI_IRQS_PER_SET; 661 - virq = irq_find_mapping(port->msi_bottom_domain, hwirq); 662 - generic_handle_irq(virq); 662 + generic_handle_domain_irq(port->msi_bottom_domain, hwirq); 663 663 } 664 664 } while (true); 665 665 } ··· 668 670 struct mtk_pcie_port *port = irq_desc_get_handler_data(desc); 669 671 struct irq_chip *irqchip = irq_desc_get_chip(desc); 670 672 unsigned long status; 671 - unsigned int virq; 672 673 irq_hw_number_t irq_bit = PCIE_INTX_SHIFT; 673 674 674 675 chained_irq_enter(irqchip, desc); 675 676 676 677 status = readl_relaxed(port->base + PCIE_INT_STATUS_REG); 677 678 for_each_set_bit_from(irq_bit, &status, PCI_NUM_INTX + 678 - PCIE_INTX_SHIFT) { 679 - virq = irq_find_mapping(port->intx_domain, 680 - irq_bit - PCIE_INTX_SHIFT); 681 - generic_handle_irq(virq); 682 - } 679 + PCIE_INTX_SHIFT) 680 + generic_handle_domain_irq(port->intx_domain, 681 + irq_bit - PCIE_INTX_SHIFT); 683 682 684 683 irq_bit = PCIE_MSI_SHIFT; 685 684 for_each_set_bit_from(irq_bit, &status, PCIE_MSI_SET_NUM +
+43 -21
drivers/pci/controller/pcie-mediatek.c
··· 14 14 #include <linux/irqchip/chained_irq.h> 15 15 #include <linux/irqdomain.h> 16 16 #include <linux/kernel.h> 17 + #include <linux/mfd/syscon.h> 17 18 #include <linux/msi.h> 18 19 #include <linux/module.h> 19 20 #include <linux/of_address.h> ··· 24 23 #include <linux/phy/phy.h> 25 24 #include <linux/platform_device.h> 26 25 #include <linux/pm_runtime.h> 26 + #include <linux/regmap.h> 27 27 #include <linux/reset.h> 28 28 29 29 #include "../pci.h" ··· 209 207 * struct mtk_pcie - PCIe host information 210 208 * @dev: pointer to PCIe device 211 209 * @base: IO mapped register base 210 + * @cfg: IO mapped register map for PCIe config 212 211 * @free_ck: free-run reference clock 213 212 * @mem: non-prefetchable memory resource 214 213 * @ports: pointer to PCIe port information ··· 218 215 struct mtk_pcie { 219 216 struct device *dev; 220 217 void __iomem *base; 218 + struct regmap *cfg; 221 219 struct clk *free_ck; 222 220 223 221 struct list_head ports; ··· 606 602 struct mtk_pcie_port *port = irq_desc_get_handler_data(desc); 607 603 struct irq_chip *irqchip = irq_desc_get_chip(desc); 608 604 unsigned long status; 609 - u32 virq; 610 605 u32 bit = INTX_SHIFT; 611 606 612 607 chained_irq_enter(irqchip, desc); ··· 615 612 for_each_set_bit_from(bit, &status, PCI_NUM_INTX + INTX_SHIFT) { 616 613 /* Clear the INTx */ 617 614 writel(1 << bit, port->base + PCIE_INT_STATUS); 618 - virq = irq_find_mapping(port->irq_domain, 619 - bit - INTX_SHIFT); 620 - generic_handle_irq(virq); 615 + generic_handle_domain_irq(port->irq_domain, 616 + bit - INTX_SHIFT); 621 617 } 622 618 } 623 619 ··· 625 623 unsigned long imsi_status; 626 624 627 625 while ((imsi_status = readl(port->base + PCIE_IMSI_STATUS))) { 628 - for_each_set_bit(bit, &imsi_status, MTK_MSI_IRQS_NUM) { 629 - virq = irq_find_mapping(port->inner_domain, bit); 630 - generic_handle_irq(virq); 631 - } 626 + for_each_set_bit(bit, &imsi_status, MTK_MSI_IRQS_NUM) 627 + generic_handle_domain_irq(port->inner_domain, bit); 632 628 }
633 629 /* Clear MSI interrupt status */ 634 630 writel(MSI_STATUS, port->base + PCIE_INT_STATUS); ··· 650 650 return err; 651 651 } 652 652 653 - port->irq = platform_get_irq(pdev, port->slot); 653 + if (of_find_property(dev->of_node, "interrupt-names", NULL)) 654 + port->irq = platform_get_irq_byname(pdev, "pcie_irq"); 655 + else 656 + port->irq = platform_get_irq(pdev, port->slot); 657 + 654 658 if (port->irq < 0) 655 659 return port->irq; 656 660 ··· 686 682 val |= PCIE_CSR_LTSSM_EN(port->slot) | 687 683 PCIE_CSR_ASPM_L1_EN(port->slot); 688 684 writel(val, pcie->base + PCIE_SYS_CFG_V2); 685 + } else if (pcie->cfg) { 686 + val = PCIE_CSR_LTSSM_EN(port->slot) | 687 + PCIE_CSR_ASPM_L1_EN(port->slot); 688 + regmap_update_bits(pcie->cfg, PCIE_SYS_CFG_V2, val, val); 689 689 } 690 690 691 691 /* Assert all reset signals */ ··· 993 985 struct device *dev = pcie->dev; 994 986 struct platform_device *pdev = to_platform_device(dev); 995 987 struct resource *regs; 988 + struct device_node *cfg_node; 996 989 int err; 997 990 998 991 /* get shared registers, which are optional */ ··· 1002 993 pcie->base = devm_ioremap_resource(dev, regs); 1003 994 if (IS_ERR(pcie->base)) 1004 995 return PTR_ERR(pcie->base); 996 + } 997 + 998 + cfg_node = of_find_compatible_node(NULL, NULL, 999 + "mediatek,generic-pciecfg"); 1000 + if (cfg_node) { 1001 + pcie->cfg = syscon_node_to_regmap(cfg_node); 1002 + if (IS_ERR(pcie->cfg)) 1003 + return PTR_ERR(pcie->cfg); 1005 1004 } 1006 1005 1007 1006 pcie->free_ck = devm_clk_get(dev, "free_ck"); ··· 1044 1027 struct device *dev = pcie->dev; 1045 1028 struct device_node *node = dev->of_node, *child; 1046 1029 struct mtk_pcie_port *port, *tmp; 1047 - int err; 1030 + int err, slot; 1048 1031 1049 - for_each_available_child_of_node(node, child) { 1050 - int slot; 1032 + slot = of_get_pci_domain_nr(dev->of_node); 1033 + if (slot < 0) { 1034 + for_each_available_child_of_node(node, child) { 1035 + err = of_pci_get_devfn(child); 1036 + if (err < 0) { 
1037 + dev_err(dev, "failed to get devfn: %d\n", err); 1038 + goto error_put_node; 1039 + } 1051 1040 1052 - err = of_pci_get_devfn(child); 1053 - if (err < 0) { 1054 - dev_err(dev, "failed to parse devfn: %d\n", err); 1055 - goto error_put_node; 1041 + slot = PCI_SLOT(err); 1042 + 1043 + err = mtk_pcie_parse_port(pcie, child, slot); 1044 + if (err) 1045 + goto error_put_node; 1056 1046 } 1057 - 1058 - slot = PCI_SLOT(err); 1059 - 1060 - err = mtk_pcie_parse_port(pcie, child, slot); 1047 + } else { 1048 + err = mtk_pcie_parse_port(pcie, node, slot); 1061 1049 if (err) 1062 - goto error_put_node; 1050 + return err; 1063 1051 } 1064 1052 1065 1053 err = mtk_pcie_subsys_powerup(pcie);
+7 -11
drivers/pci/controller/pcie-microchip-host.c
··· 412 412 port->axi_base_addr + MC_PCIE_BRIDGE_ADDR; 413 413 unsigned long status; 414 414 u32 bit; 415 - u32 virq; 415 + int ret; 416 416 417 417 status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL); 418 418 if (status & PM_MSI_INT_MSI_MASK) { 419 419 status = readl_relaxed(bridge_base_addr + ISTATUS_MSI); 420 420 for_each_set_bit(bit, &status, msi->num_vectors) { 421 - virq = irq_find_mapping(msi->dev_domain, bit); 422 - if (virq) 423 - generic_handle_irq(virq); 424 - else 421 + ret = generic_handle_domain_irq(msi->dev_domain, bit); 422 + if (ret) 425 423 dev_err_ratelimited(dev, "bad MSI IRQ %d\n", 426 424 bit); 427 425 } ··· 568 570 port->axi_base_addr + MC_PCIE_BRIDGE_ADDR; 569 571 unsigned long status; 570 572 u32 bit; 571 - u32 virq; 573 + int ret; 572 574 573 575 status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL); 574 576 if (status & PM_MSI_INT_INTX_MASK) { 575 577 status &= PM_MSI_INT_INTX_MASK; 576 578 status >>= PM_MSI_INT_INTX_SHIFT; 577 579 for_each_set_bit(bit, &status, PCI_NUM_INTX) { 578 - virq = irq_find_mapping(port->intx_domain, bit); 579 - if (virq) 580 - generic_handle_irq(virq); 581 - else 580 + ret = generic_handle_domain_irq(port->intx_domain, bit); 581 + if (ret) 582 582 dev_err_ratelimited(dev, "bad INTx IRQ %d\n", 583 583 bit); 584 584 } ··· 741 745 events = get_events(port); 742 746 743 747 for_each_set_bit(bit, &events, NUM_EVENTS) 744 - generic_handle_irq(irq_find_mapping(port->event_domain, bit)); 748 + generic_handle_domain_irq(port->event_domain, bit); 745 749 746 750 chained_irq_exit(chip, desc); 747 751 }
+12 -11
drivers/pci/controller/pcie-rcar-ep.c
··· 159 159 return 0; 160 160 } 161 161 162 - static int rcar_pcie_ep_write_header(struct pci_epc *epc, u8 fn, 162 + static int rcar_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn, 163 163 struct pci_epf_header *hdr) 164 164 { 165 165 struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); ··· 195 195 return 0; 196 196 } 197 197 198 - static int rcar_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, 198 + static int rcar_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 199 199 struct pci_epf_bar *epf_bar) 200 200 { 201 201 int flags = epf_bar->flags | LAR_ENABLE | LAM_64BIT; ··· 246 246 return 0; 247 247 } 248 248 249 - static void rcar_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, 249 + static void rcar_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, u8 vfn, 250 250 struct pci_epf_bar *epf_bar) 251 251 { 252 252 struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); ··· 259 259 clear_bit(atu_index + 1, ep->ib_window_map); 260 260 } 261 261 262 - static int rcar_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 interrupts) 262 + static int rcar_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn, 263 264 u8 interrupts) 263 264 { 264 265 struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); 265 266 struct rcar_pcie *pcie = &ep->pcie; ··· 273 272 return 0; 274 273 } 275 274 276 - static int rcar_pcie_ep_get_msi(struct pci_epc *epc, u8 fn) 275 + static int rcar_pcie_ep_get_msi(struct pci_epc *epc, u8 fn, u8 vfn) 277 276 { 278 277 struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); 279 278 struct rcar_pcie *pcie = &ep->pcie; ··· 286 285 return ((flags & MSICAP0_MMESE_MASK) >> MSICAP0_MMESE_OFFSET); 287 286 } 288 287 289 - static int rcar_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, 288 + static int rcar_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn, 290 289 phys_addr_t addr, u64 pci_addr, size_t size) 291 290 { 292 291 struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); ··· 323 322 return 0; 324 323 } 325 324
326 - static void rcar_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, 325 + static void rcar_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn, 327 326 phys_addr_t addr) 328 327 { 329 328 struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); ··· 404 403 return 0; 405 404 } 406 405 407 - static int rcar_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn, 406 + static int rcar_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn, u8 vfn, 408 407 enum pci_epc_irq_type type, 409 408 u16 interrupt_num) 410 409 { ··· 452 451 }; 453 452 454 453 static const struct pci_epc_features* 455 - rcar_pcie_ep_get_features(struct pci_epc *epc, u8 func_no) 454 + rcar_pcie_ep_get_features(struct pci_epc *epc, u8 func_no, u8 vfunc_no) 456 455 { 457 456 return &rcar_pcie_epc_features; 458 457 } ··· 493 492 pcie->dev = dev; 494 493 495 494 pm_runtime_enable(dev); 496 - err = pm_runtime_get_sync(dev); 495 + err = pm_runtime_resume_and_get(dev); 497 496 if (err < 0) { 498 - dev_err(dev, "pm_runtime_get_sync failed\n"); 497 + dev_err(dev, "pm_runtime_resume_and_get failed\n"); 499 498 goto err_pm_disable; 500 499 } 501 500
+89 -5
drivers/pci/controller/pcie-rcar-host.c
··· 13 13 14 14 #include <linux/bitops.h> 15 15 #include <linux/clk.h> 16 + #include <linux/clk-provider.h> 16 17 #include <linux/delay.h> 17 18 #include <linux/interrupt.h> 18 19 #include <linux/irq.h> 19 20 #include <linux/irqdomain.h> 20 21 #include <linux/kernel.h> 21 22 #include <linux/init.h> 23 + #include <linux/iopoll.h> 22 24 #include <linux/msi.h> 23 25 #include <linux/of_address.h> 24 26 #include <linux/of_irq.h> ··· 42 40 int irq1; 43 41 int irq2; 44 42 }; 43 + 44 + #ifdef CONFIG_ARM 45 + /* 46 + * Here we keep a static copy of the remapped PCIe controller address. 47 + * This is only used on aarch32 systems, all of which have one single 48 + * PCIe controller, to provide quick access to the PCIe controller in 49 + * the L1 link state fixup function, called from the ARM fault handler. 50 + */ 51 + static void __iomem *pcie_base; 52 + /* 53 + * Static copy of bus clock pointer, so we can check whether the clock 54 + * is enabled or not. 55 + */ 56 + static struct clk *pcie_bus_clk; 57 + #endif 45 58 46 59 /* Structure representing the PCIe interface */ 47 60 struct rcar_pcie_host { ··· 503 486 504 487 while (reg) { 505 488 unsigned int index = find_first_bit(&reg, 32); 506 - unsigned int msi_irq; 489 + int ret; 507 490 508 - msi_irq = irq_find_mapping(msi->domain->parent, index); 509 - if (msi_irq) { 510 - generic_handle_irq(msi_irq); 511 - } else { 491 + ret = generic_handle_domain_irq(msi->domain->parent, index); 492 + if (ret) { 512 493 /* Unknown MSI, just clear it */ 513 494 dev_dbg(dev, "unexpected MSI\n"); 514 495 rcar_pci_write_reg(pcie, BIT(index), PCIEMSIFR); ··· 791 776 } 792 777 host->msi.irq2 = i; 793 778 779 + #ifdef CONFIG_ARM 780 + /* Cache static copy for L1 link state fixup hook on aarch32 */ 781 + pcie_base = pcie->base; 782 + pcie_bus_clk = host->bus_clk; 783 + #endif 784 + 794 785 return 0; 795 786 796 787 err_irq2: ··· 1052 1031 }, 1053 1032 .probe = rcar_pcie_probe, 1054 1033 }; 1034 + 1035 + #ifdef CONFIG_ARM
1036 + static DEFINE_SPINLOCK(pmsr_lock); 1037 + static int rcar_pcie_aarch32_abort_handler(unsigned long addr, 1038 + unsigned int fsr, struct pt_regs *regs) 1039 + { 1040 + unsigned long flags; 1041 + u32 pmsr, val; 1042 + int ret = 0; 1043 + 1044 + spin_lock_irqsave(&pmsr_lock, flags); 1045 + 1046 + if (!pcie_base || !__clk_is_enabled(pcie_bus_clk)) { 1047 + ret = 1; 1048 + goto unlock_exit; 1049 + } 1050 + 1051 + pmsr = readl(pcie_base + PMSR); 1052 + 1053 + /* 1054 + * Test if the PCIe controller received PM_ENTER_L1 DLLP and 1055 + * the PCIe controller is not in L1 link state. If true, apply 1056 + * fix, which will put the controller into L1 link state, from 1057 + * which it can return to L0s/L0 on its own. 1058 + */ 1059 + if ((pmsr & PMEL1RX) && ((pmsr & PMSTATE) != PMSTATE_L1)) { 1060 + writel(L1IATN, pcie_base + PMCTLR); 1061 + ret = readl_poll_timeout_atomic(pcie_base + PMSR, val, 1062 + val & L1FAEG, 10, 1000); 1063 + WARN(ret, "Timeout waiting for L1 link state, ret=%d\n", ret); 1064 + writel(L1FAEG | PMEL1RX, pcie_base + PMSR); 1065 + } 1066 + 1067 + unlock_exit: 1068 + spin_unlock_irqrestore(&pmsr_lock, flags); 1069 + return ret; 1070 + } 1071 + 1072 + static const struct of_device_id rcar_pcie_abort_handler_of_match[] __initconst = { 1073 + { .compatible = "renesas,pcie-r8a7779" }, 1074 + { .compatible = "renesas,pcie-r8a7790" }, 1075 + { .compatible = "renesas,pcie-r8a7791" }, 1076 + { .compatible = "renesas,pcie-rcar-gen2" }, 1077 + {}, 1078 + }; 1079 + 1080 + static int __init rcar_pcie_init(void) 1081 + { 1082 + if (of_find_matching_node(NULL, rcar_pcie_abort_handler_of_match)) { 1083 + #ifdef CONFIG_ARM_LPAE 1084 + hook_fault_code(17, rcar_pcie_aarch32_abort_handler, SIGBUS, 0, 1085 + "asynchronous external abort"); 1086 + #else 1087 + hook_fault_code(22, rcar_pcie_aarch32_abort_handler, SIGBUS, 0, 1088 + "imprecise external abort"); 1089 + #endif 1090 + } 1091 + 1092 + return platform_driver_register(&rcar_pcie_driver);
1093 + } 1094 + device_initcall(rcar_pcie_init); 1095 + #else 1055 1096 builtin_platform_driver(rcar_pcie_driver); 1097 + #endif
+7
drivers/pci/controller/pcie-rcar.h
··· 85 85 #define LTSMDIS BIT(31) 86 86 #define MACCTLR_INIT_VAL (LTSMDIS | MACCTLR_NFTS_MASK) 87 87 #define PMSR 0x01105c 88 + #define L1FAEG BIT(31) 89 + #define PMEL1RX BIT(23) 90 + #define PMSTATE GENMASK(18, 16) 91 + #define PMSTATE_L1 (3 << 16) 92 + #define PMCTLR 0x011060 93 + #define L1IATN BIT(31) 94 + 88 95 #define MACS2R 0x011078 89 96 #define MACCGSPSETR 0x011084 90 97 #define SPCNGRSN BIT(31)
+9 -9
drivers/pci/controller/pcie-rockchip-ep.c
··· 122 122 ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(r)); 123 123 } 124 124 125 - static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn, 125 + static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn, 126 126 struct pci_epf_header *hdr) 127 127 { 128 128 struct rockchip_pcie_ep *ep = epc_get_drvdata(epc); ··· 159 159 return 0; 160 160 } 161 161 162 - static int rockchip_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, 162 + static int rockchip_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn, 163 163 struct pci_epf_bar *epf_bar) 164 164 { 165 165 struct rockchip_pcie_ep *ep = epc_get_drvdata(epc); ··· 227 227 return 0; 228 228 } 229 229 230 - static void rockchip_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, 230 + static void rockchip_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, u8 vfn, 231 231 struct pci_epf_bar *epf_bar) 232 232 { 233 233 struct rockchip_pcie_ep *ep = epc_get_drvdata(epc); ··· 256 256 ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar)); 257 257 } 258 258 259 - static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, 259 + static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn, 260 260 phys_addr_t addr, u64 pci_addr, 261 261 size_t size) 262 262 { ··· 284 284 return 0; 285 285 } 286 286 287 - static void rockchip_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, 287 + static void rockchip_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn, 288 288 phys_addr_t addr) 289 289 { 290 290 struct rockchip_pcie_ep *ep = epc_get_drvdata(epc); ··· 308 308 clear_bit(r, &ep->ob_region_map); 309 309 } 310 310 311 - static int rockchip_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, 311 + static int rockchip_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn, 312 312 u8 multi_msg_cap) 313 313 { 314 314 struct rockchip_pcie_ep *ep = epc_get_drvdata(epc); ··· 329 329 return 0; 330 330 } 331 331
332 - static int rockchip_pcie_ep_get_msi(struct pci_epc *epc, u8 fn) 332 + static int rockchip_pcie_ep_get_msi(struct pci_epc *epc, u8 fn, u8 vfn) 333 333 { 334 334 struct rockchip_pcie_ep *ep = epc_get_drvdata(epc); 335 335 struct rockchip_pcie *rockchip = &ep->rockchip; ··· 471 471 return 0; 472 472 } 473 473 474 - static int rockchip_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn, 474 + static int rockchip_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn, u8 vfn, 475 475 enum pci_epc_irq_type type, 476 476 u16 interrupt_num) 477 477 { ··· 510 510 }; 511 511 512 512 static const struct pci_epc_features* 513 - rockchip_pcie_ep_get_features(struct pci_epc *epc, u8 func_no) 513 + rockchip_pcie_ep_get_features(struct pci_epc *epc, u8 func_no, u8 vfunc_no) 514 514 { 515 515 return &rockchip_pcie_epc_features; 516 516 }
+3 -5
drivers/pci/controller/pcie-rockchip-host.c
··· 517 517 struct device *dev = rockchip->dev; 518 518 u32 reg; 519 519 u32 hwirq; 520 - u32 virq; 520 + int ret; 521 521 522 522 chained_irq_enter(chip, desc); 523 523 ··· 528 528 hwirq = ffs(reg) - 1; 529 529 reg &= ~BIT(hwirq); 530 530 531 - virq = irq_find_mapping(rockchip->irq_domain, hwirq); 532 - if (virq) 533 - generic_handle_irq(virq); 534 - else 531 + ret = generic_handle_domain_irq(rockchip->irq_domain, hwirq); 532 + if (ret) 535 533 dev_err(dev, "unexpected IRQ, INT%d\n", hwirq); 536 534 } 537 535
+2 -2
drivers/pci/controller/pcie-xilinx-cpm.c
··· 222 222 pcie_read(port, XILINX_CPM_PCIE_REG_IDRN)); 223 223 224 224 for_each_set_bit(i, &val, PCI_NUM_INTX) 225 - generic_handle_irq(irq_find_mapping(port->intx_domain, i)); 225 + generic_handle_domain_irq(port->intx_domain, i); 226 226 227 227 chained_irq_exit(chip, desc); 228 228 } ··· 282 282 val = pcie_read(port, XILINX_CPM_PCIE_REG_IDR); 283 283 val &= pcie_read(port, XILINX_CPM_PCIE_REG_IMR); 284 284 for_each_set_bit(i, &val, 32) 285 - generic_handle_irq(irq_find_mapping(port->cpm_domain, i)); 285 + generic_handle_domain_irq(port->cpm_domain, i); 286 286 pcie_write(port, val, XILINX_CPM_PCIE_REG_IDR); 287 287 288 288 /*
+15 -10
drivers/pci/controller/pcie-xilinx-nwl.c
··· 6 6 * (C) Copyright 2014 - 2015, Xilinx, Inc. 7 7 */ 8 8 9 + #include <linux/clk.h> 9 10 #include <linux/delay.h> 10 11 #include <linux/interrupt.h> 11 12 #include <linux/irq.h> ··· 170 169 u8 last_busno; 171 170 struct nwl_msi msi; 172 171 struct irq_domain *legacy_irq_domain; 172 + struct clk *clk; 173 173 raw_spinlock_t leg_mask_lock; 174 174 }; 175 175 ··· 320 318 struct nwl_pcie *pcie; 321 319 unsigned long status; 322 320 u32 bit; 323 - u32 virq; 324 321 325 322 chained_irq_enter(chip, desc); 326 323 pcie = irq_desc_get_handler_data(desc); 327 324 328 325 while ((status = nwl_bridge_readl(pcie, MSGF_LEG_STATUS) & 329 326 MSGF_LEG_SR_MASKALL) != 0) { 330 - for_each_set_bit(bit, &status, PCI_NUM_INTX) { 331 - virq = irq_find_mapping(pcie->legacy_irq_domain, bit); 332 - if (virq) 333 - generic_handle_irq(virq); 334 - } 327 + for_each_set_bit(bit, &status, PCI_NUM_INTX) 328 + generic_handle_domain_irq(pcie->legacy_irq_domain, bit); 335 329 } 336 330 337 331 chained_irq_exit(chip, desc); ··· 338 340 struct nwl_msi *msi; 339 341 unsigned long status; 340 342 u32 bit; 341 - u32 virq; 342 343 343 344 msi = &pcie->msi; 344 345 345 346 while ((status = nwl_bridge_readl(pcie, status_reg)) != 0) { 346 347 for_each_set_bit(bit, &status, 32) { 347 348 nwl_bridge_writel(pcie, 1 << bit, status_reg); 348 - virq = irq_find_mapping(msi->dev_domain, bit); 349 - if (virq) 350 - generic_handle_irq(virq); 349 + generic_handle_domain_irq(msi->dev_domain, bit); 351 350 } 352 351 } 353 352 } ··· 815 820 err = nwl_pcie_parse_dt(pcie, pdev); 816 821 if (err) { 817 822 dev_err(dev, "Parsing DT failed\n"); 823 + return err; 824 + } 825 + 826 + pcie->clk = devm_clk_get(dev, NULL); 827 + if (IS_ERR(pcie->clk)) 828 + return PTR_ERR(pcie->clk); 829 + 830 + err = clk_prepare_enable(pcie->clk); 831 + if (err) { 832 + dev_err(dev, "can't enable PCIe ref clock\n"); 818 833 return err; 819 834 } 820 835
+4 -5
drivers/pci/controller/pcie-xilinx.c
··· 385 385 } 386 386 387 387 if (status & (XILINX_PCIE_INTR_INTX | XILINX_PCIE_INTR_MSI)) { 388 - unsigned int irq; 388 + struct irq_domain *domain; 389 389 390 390 val = pcie_read(port, XILINX_PCIE_REG_RPIFR1); 391 391 ··· 399 399 if (val & XILINX_PCIE_RPIFR1_MSI_INTR) { 400 400 val = pcie_read(port, XILINX_PCIE_REG_RPIFR2) & 401 401 XILINX_PCIE_RPIFR2_MSG_DATA; 402 - irq = irq_find_mapping(port->msi_domain->parent, val); 402 + domain = port->msi_domain->parent; 403 403 } else { 404 404 val = (val & XILINX_PCIE_RPIFR1_INTR_MASK) >> 405 405 XILINX_PCIE_RPIFR1_INTR_SHIFT; 406 - irq = irq_find_mapping(port->leg_domain, val); 406 + domain = port->leg_domain; 407 407 } 408 408 409 409 /* Clear interrupt FIFO register 1 */ 410 410 pcie_write(port, XILINX_PCIE_RPIFR1_ALL_MASK, 411 411 XILINX_PCIE_REG_RPIFR1); 412 412 413 - if (irq) 414 - generic_handle_irq(irq); 413 + generic_handle_domain_irq(domain, val); 415 414 } 416 415 417 416 if (status & XILINX_PCIE_INTR_SLV_UNSUPP)
+54 -35
drivers/pci/endpoint/functions/pci-epf-ntb.c
··· 87 87 88 88 struct epf_ntb_epc { 89 89 u8 func_no; 90 + u8 vfunc_no; 90 91 bool linkup; 91 92 bool is_msix; 92 93 int msix_bar; ··· 144 143 struct epf_ntb_epc *ntb_epc; 145 144 struct epf_ntb_ctrl *ctrl; 146 145 struct pci_epc *epc; 146 + u8 func_no, vfunc_no; 147 147 bool is_msix; 148 - u8 func_no; 149 148 int ret; 150 149 151 150 for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++) { 152 151 ntb_epc = ntb->epc[type]; 153 152 epc = ntb_epc->epc; 154 153 func_no = ntb_epc->func_no; 154 + vfunc_no = ntb_epc->vfunc_no; 155 155 is_msix = ntb_epc->is_msix; 156 156 ctrl = ntb_epc->reg; 157 157 if (link_up) ··· 160 158 else 161 159 ctrl->link_status &= ~LINK_STATUS_UP; 162 160 irq_type = is_msix ? PCI_EPC_IRQ_MSIX : PCI_EPC_IRQ_MSI; 163 - ret = pci_epc_raise_irq(epc, func_no, irq_type, 1); 161 + ret = pci_epc_raise_irq(epc, func_no, vfunc_no, irq_type, 1); 164 162 if (ret) { 165 163 dev_err(&epc->dev, 166 164 "%s intf: Failed to raise Link Up IRQ\n", ··· 240 238 enum pci_barno peer_barno; 241 239 struct epf_ntb_ctrl *ctrl; 242 240 phys_addr_t phys_addr; 241 + u8 func_no, vfunc_no; 243 242 struct pci_epc *epc; 244 243 u64 addr, size; 245 244 int ret = 0; 246 - u8 func_no; 247 245 248 246 ntb_epc = ntb->epc[type]; 249 247 epc = ntb_epc->epc; ··· 269 267 } 270 268 271 269 func_no = ntb_epc->func_no; 270 + vfunc_no = ntb_epc->vfunc_no; 272 271 273 - ret = pci_epc_map_addr(epc, func_no, phys_addr, addr, size); 272 + ret = pci_epc_map_addr(epc, func_no, vfunc_no, phys_addr, addr, size); 274 273 if (ret) 275 274 dev_err(&epc->dev, 276 275 "%s intf: Failed to map memory window %d address\n", ··· 299 296 enum pci_barno peer_barno; 300 297 struct epf_ntb_ctrl *ctrl; 301 298 phys_addr_t phys_addr; 299 + u8 func_no, vfunc_no; 302 300 struct pci_epc *epc; 303 - u8 func_no; 304 301 305 302 ntb_epc = ntb->epc[type]; 306 303 epc = ntb_epc->epc; ··· 314 311 if (mw + NTB_MW_OFFSET == BAR_DB_MW1) 315 312 phys_addr += ctrl->mw1_offset; 316 313 func_no = ntb_epc->func_no; 
314 + vfunc_no = ntb_epc->vfunc_no; 317 315 318 - pci_epc_unmap_addr(epc, func_no, phys_addr); 316 + pci_epc_unmap_addr(epc, func_no, vfunc_no, phys_addr); 319 317 } 320 318 321 319 /** ··· 389 385 struct epf_ntb_ctrl *peer_ctrl; 390 386 enum pci_barno peer_barno; 391 387 phys_addr_t phys_addr; 388 + u8 func_no, vfunc_no; 392 389 struct pci_epc *epc; 393 - u8 func_no; 394 390 int ret, i; 395 391 396 392 ntb_epc = ntb->epc[type]; ··· 404 400 405 401 phys_addr = peer_epf_bar->phys_addr; 406 402 func_no = ntb_epc->func_no; 403 + vfunc_no = ntb_epc->vfunc_no; 407 404 408 - ret = pci_epc_map_msi_irq(epc, func_no, phys_addr, db_count, 405 + ret = pci_epc_map_msi_irq(epc, func_no, vfunc_no, phys_addr, db_count, 409 406 db_entry_size, &db_data, &db_offset); 410 407 if (ret) { 411 408 dev_err(&epc->dev, "%s intf: Failed to map MSI IRQ\n", ··· 496 491 u32 db_entry_size, msg_data; 497 492 enum pci_barno peer_barno; 498 493 phys_addr_t phys_addr; 494 + u8 func_no, vfunc_no; 499 495 struct pci_epc *epc; 500 496 size_t align; 501 497 u64 msg_addr; 502 - u8 func_no; 503 498 int ret, i; 504 499 505 500 ntb_epc = ntb->epc[type]; ··· 517 512 align = epc_features->align; 518 513 519 514 func_no = ntb_epc->func_no; 515 + vfunc_no = ntb_epc->vfunc_no; 520 516 db_entry_size = peer_ctrl->db_entry_size; 521 517 522 518 for (i = 0; i < db_count; i++) { 523 519 msg_addr = ALIGN_DOWN(msix_tbl[i].msg_addr, align); 524 520 msg_data = msix_tbl[i].msg_data; 525 - ret = pci_epc_map_addr(epc, func_no, phys_addr, msg_addr, 521 + ret = pci_epc_map_addr(epc, func_no, vfunc_no, phys_addr, msg_addr, 526 522 db_entry_size); 527 523 if (ret) { 528 524 dev_err(&epc->dev, ··· 592 586 struct pci_epf_bar *peer_epf_bar; 593 587 enum pci_barno peer_barno; 594 588 phys_addr_t phys_addr; 589 + u8 func_no, vfunc_no; 595 590 struct pci_epc *epc; 596 - u8 func_no; 597 591 598 592 ntb_epc = ntb->epc[type]; 599 593 epc = ntb_epc->epc; ··· 603 597 peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno];
604 598 phys_addr = peer_epf_bar->phys_addr; 605 599 func_no = ntb_epc->func_no; 600 + vfunc_no = ntb_epc->vfunc_no; 606 601 607 - pci_epc_unmap_addr(epc, func_no, phys_addr); 602 + pci_epc_unmap_addr(epc, func_no, vfunc_no, phys_addr); 608 603 } 609 604 610 605 /** ··· 735 728 { 736 729 struct pci_epf_bar *epf_bar; 737 730 enum pci_barno barno; 731 + u8 func_no, vfunc_no; 738 732 struct pci_epc *epc; 739 - u8 func_no; 740 733 741 734 epc = ntb_epc->epc; 742 735 func_no = ntb_epc->func_no; 736 + vfunc_no = ntb_epc->vfunc_no; 743 737 barno = ntb_epc->epf_ntb_bar[BAR_PEER_SPAD]; 744 738 epf_bar = &ntb_epc->epf_bar[barno]; 745 - pci_epc_clear_bar(epc, func_no, epf_bar); 739 + pci_epc_clear_bar(epc, func_no, vfunc_no, epf_bar); 746 740 } 747 741 748 742 /** ··· 783 775 struct pci_epf_bar *peer_epf_bar, *epf_bar; 784 776 enum pci_barno peer_barno, barno; 785 777 u32 peer_spad_offset; 778 + u8 func_no, vfunc_no; 786 779 struct pci_epc *epc; 787 780 struct device *dev; 788 - u8 func_no; 789 781 int ret; 790 782 791 783 dev = &ntb->epf->dev; ··· 798 790 barno = ntb_epc->epf_ntb_bar[BAR_PEER_SPAD]; 799 791 epf_bar = &ntb_epc->epf_bar[barno]; 800 792 func_no = ntb_epc->func_no; 793 + vfunc_no = ntb_epc->vfunc_no; 801 794 epc = ntb_epc->epc; 802 795 803 796 peer_spad_offset = peer_ntb_epc->reg->spad_offset; ··· 807 798 epf_bar->barno = barno; 808 799 epf_bar->flags = PCI_BASE_ADDRESS_MEM_TYPE_32; 809 800 810 - ret = pci_epc_set_bar(epc, func_no, epf_bar); 801 + ret = pci_epc_set_bar(epc, func_no, vfunc_no, epf_bar); 811 802 if (ret) { 812 803 dev_err(dev, "%s intf: peer SPAD BAR set failed\n", 813 804 pci_epc_interface_string(type)); ··· 851 842 { 852 843 struct pci_epf_bar *epf_bar; 853 844 enum pci_barno barno; 845 + u8 func_no, vfunc_no; 854 846 struct pci_epc *epc; 855 - u8 func_no; 856 847 857 848 epc = ntb_epc->epc; 858 849 func_no = ntb_epc->func_no; 850 + vfunc_no = ntb_epc->vfunc_no; 859 851 barno = ntb_epc->epf_ntb_bar[BAR_CONFIG]; 860 852 epf_bar = &ntb_epc->epf_bar[barno];
861 - pci_epc_clear_bar(epc, func_no, epf_bar); 853 + pci_epc_clear_bar(epc, func_no, vfunc_no, epf_bar); 862 854 } 863 855 864 856 /** ··· 896 886 { 897 887 struct pci_epf_bar *epf_bar; 898 888 enum pci_barno barno; 889 + u8 func_no, vfunc_no; 899 890 struct epf_ntb *ntb; 900 891 struct pci_epc *epc; 901 892 struct device *dev; 902 - u8 func_no; 903 893 int ret; 904 894 905 895 ntb = ntb_epc->epf_ntb; ··· 907 897 908 898 epc = ntb_epc->epc; 909 899 func_no = ntb_epc->func_no; 900 + vfunc_no = ntb_epc->vfunc_no; 910 901 barno = ntb_epc->epf_ntb_bar[BAR_CONFIG]; 911 902 epf_bar = &ntb_epc->epf_bar[barno]; 912 903 913 - ret = pci_epc_set_bar(epc, func_no, epf_bar); 904 + ret = pci_epc_set_bar(epc, func_no, vfunc_no, epf_bar); 914 905 if (ret) { 915 906 dev_err(dev, "%s inft: Config/Status/SPAD BAR set failed\n", 916 907 pci_epc_interface_string(ntb_epc->type)); ··· 1225 1214 struct pci_epf_bar *epf_bar; 1226 1215 enum epf_ntb_bar bar; 1227 1216 enum pci_barno barno; 1217 + u8 func_no, vfunc_no; 1228 1218 struct pci_epc *epc; 1229 - u8 func_no; 1230 1219 1231 1220 epc = ntb_epc->epc; 1232 1221 1233 1222 func_no = ntb_epc->func_no; 1223 + vfunc_no = ntb_epc->vfunc_no; 1234 1224 1235 1225 for (bar = BAR_DB_MW1; bar < BAR_MW4; bar++) { 1236 1226 barno = ntb_epc->epf_ntb_bar[bar]; 1237 1227 epf_bar = &ntb_epc->epf_bar[barno]; 1238 - pci_epc_clear_bar(epc, func_no, epf_bar); 1228 + pci_epc_clear_bar(epc, func_no, vfunc_no, epf_bar); 1239 1229 } 1240 1230 } ··· 1275 1263 const struct pci_epc_features *epc_features; 1276 1264 bool msix_capable, msi_capable; 1277 1265 struct epf_ntb_epc *ntb_epc; 1266 + u8 func_no, vfunc_no; 1278 1267 struct pci_epc *epc; 1279 1268 struct device *dev; 1280 1269 u32 db_count; 1281 - u8 func_no; 1282 1270 int ret; 1283 1271 1284 1272 ntb_epc = ntb->epc[type]; ··· 1294 1282 } 1295 1283 1296 1284 func_no = ntb_epc->func_no; 1285 + vfunc_no = ntb_epc->vfunc_no; 1297 1286 1298 1287 db_count = ntb->db_count;
1299 1288 if (db_count > MAX_DB_COUNT) { ··· 1306 1293 epc = ntb_epc->epc; 1307 1294 1308 1295 if (msi_capable) { 1309 - ret = pci_epc_set_msi(epc, func_no, db_count); 1296 + ret = pci_epc_set_msi(epc, func_no, vfunc_no, db_count); 1310 1297 if (ret) { 1311 1298 dev_err(dev, "%s intf: MSI configuration failed\n", 1312 1299 pci_epc_interface_string(type)); ··· 1315 1302 } 1316 1303 1317 1304 if (msix_capable) { 1318 - ret = pci_epc_set_msix(epc, func_no, db_count, 1305 + ret = pci_epc_set_msix(epc, func_no, vfunc_no, db_count, 1319 1306 ntb_epc->msix_bar, 1320 1307 ntb_epc->msix_table_offset); 1321 1308 if (ret) { ··· 1436 1423 u32 num_mws, db_count; 1437 1424 enum epf_ntb_bar bar; 1438 1425 enum pci_barno barno; 1426 + u8 func_no, vfunc_no; 1439 1427 struct pci_epc *epc; 1440 1428 struct device *dev; 1441 1429 size_t align; 1442 1430 int ret, i; 1443 - u8 func_no; 1444 1431 u64 size; 1445 1432 1446 1433 ntb_epc = ntb->epc[type]; ··· 1450 1437 epc_features = ntb_epc->epc_features; 1451 1438 align = epc_features->align; 1452 1439 func_no = ntb_epc->func_no; 1440 + vfunc_no = ntb_epc->vfunc_no; 1453 1441 epc = ntb_epc->epc; 1454 1442 num_mws = ntb->num_mws; 1455 1443 db_count = ntb->db_count; ··· 1478 1464 barno = ntb_epc->epf_ntb_bar[bar]; 1479 1465 epf_bar = &ntb_epc->epf_bar[barno]; 1480 1466 1481 - ret = pci_epc_set_bar(epc, func_no, epf_bar); 1467 + ret = pci_epc_set_bar(epc, func_no, vfunc_no, epf_bar); 1482 1468 if (ret) { 1483 1469 dev_err(dev, "%s intf: DoorBell BAR set failed\n", 1484 1470 pci_epc_interface_string(type)); ··· 1550 1536 const struct pci_epc_features *epc_features; 1551 1537 struct pci_epf_bar *epf_bar; 1552 1538 struct epf_ntb_epc *ntb_epc; 1539 + u8 func_no, vfunc_no; 1553 1540 struct pci_epf *epf; 1554 1541 struct device *dev; 1555 - u8 func_no; 1556 1542 1557 1543 dev = &ntb->epf->dev; ··· 1561 1547 return -ENOMEM; 1562 1548 1563 1549 epf = ntb->epf; 1550 + vfunc_no = epf->vfunc_no; 1564 1551 if (type == PRIMARY_INTERFACE) {
1565 1552 func_no = epf->func_no; 1566 1553 epf_bar = epf->bar; ··· 1573 1558 ntb_epc->linkup = false; 1574 1559 ntb_epc->epc = epc; 1575 1560 ntb_epc->func_no = func_no; 1561 + ntb_epc->vfunc_no = vfunc_no; 1576 1562 ntb_epc->type = type; 1577 1563 ntb_epc->epf_bar = epf_bar; 1578 1564 ntb_epc->epf_ntb = ntb; 1579 1565 1580 - epc_features = pci_epc_get_features(epc, func_no); 1566 + epc_features = pci_epc_get_features(epc, func_no, vfunc_no); 1581 1567 if (!epc_features) 1582 1568 return -EINVAL; 1583 1569 ntb_epc->epc_features = epc_features; ··· 1718 1702 enum pci_epc_interface_type type) 1719 1703 { 1720 1704 struct epf_ntb_epc *ntb_epc; 1705 + u8 func_no, vfunc_no; 1721 1706 struct pci_epc *epc; 1722 1707 struct pci_epf *epf; 1723 1708 struct device *dev; 1724 - u8 func_no; 1725 1709 int ret; 1726 1710 1727 1711 ntb_epc = ntb->epc[type]; ··· 1729 1713 dev = &epf->dev; 1730 1714 epc = ntb_epc->epc; 1731 1715 func_no = ntb_epc->func_no; 1716 + vfunc_no = ntb_epc->vfunc_no; 1732 1717 1733 1718 ret = epf_ntb_config_sspad_bar_set(ntb->epc[type]); 1734 1719 if (ret) { ··· 1759 1742 goto err_db_mw_bar_init; 1760 1743 } 1761 1744 1762 - ret = pci_epc_write_header(epc, func_no, epf->header); 1763 - if (ret) { 1764 - dev_err(dev, "%s intf: Configuration header write failed\n", 1765 - pci_epc_interface_string(type)); 1766 - goto err_write_header; 1745 + if (vfunc_no <= 1) { 1746 + ret = pci_epc_write_header(epc, func_no, vfunc_no, epf->header); 1747 + if (ret) { 1748 + dev_err(dev, "%s intf: Configuration header write failed\n", 1749 + pci_epc_interface_string(type)); 1750 + goto err_write_header; 1751 + } 1767 1752 } 1768 1753 1769 1754 INIT_DELAYED_WORK(&ntb->epc[type]->cmd_handler, epf_ntb_cmd_handler);
drivers/pci/endpoint/functions/pci-epf-test.c (+42 -32)
··· 247 247 goto err; 248 248 } 249 249 250 - ret = pci_epc_map_addr(epc, epf->func_no, src_phys_addr, reg->src_addr, 251 - reg->size); 250 + ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, src_phys_addr, 251 + reg->src_addr, reg->size); 252 252 if (ret) { 253 253 dev_err(dev, "Failed to map source address\n"); 254 254 reg->status = STATUS_SRC_ADDR_INVALID; ··· 263 263 goto err_src_map_addr; 264 264 } 265 265 266 - ret = pci_epc_map_addr(epc, epf->func_no, dst_phys_addr, reg->dst_addr, 267 - reg->size); 266 + ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, dst_phys_addr, 267 + reg->dst_addr, reg->size); 268 268 if (ret) { 269 269 dev_err(dev, "Failed to map destination address\n"); 270 270 reg->status = STATUS_DST_ADDR_INVALID; ··· 291 291 pci_epf_test_print_rate("COPY", reg->size, &start, &end, use_dma); 292 292 293 293 err_map_addr: 294 - pci_epc_unmap_addr(epc, epf->func_no, dst_phys_addr); 294 + pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, dst_phys_addr); 295 295 296 296 err_dst_addr: 297 297 pci_epc_mem_free_addr(epc, dst_phys_addr, dst_addr, reg->size); 298 298 299 299 err_src_map_addr: 300 - pci_epc_unmap_addr(epc, epf->func_no, src_phys_addr); 300 + pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, src_phys_addr); 301 301 302 302 err_src_addr: 303 303 pci_epc_mem_free_addr(epc, src_phys_addr, src_addr, reg->size); ··· 331 331 goto err; 332 332 } 333 333 334 - ret = pci_epc_map_addr(epc, epf->func_no, phys_addr, reg->src_addr, 335 - reg->size); 334 + ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, phys_addr, 335 + reg->src_addr, reg->size); 336 336 if (ret) { 337 337 dev_err(dev, "Failed to map address\n"); 338 338 reg->status = STATUS_SRC_ADDR_INVALID; ··· 386 386 kfree(buf); 387 387 388 388 err_map_addr: 389 - pci_epc_unmap_addr(epc, epf->func_no, phys_addr); 389 + pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, phys_addr); 390 390 391 391 err_addr: 392 392 pci_epc_mem_free_addr(epc, phys_addr, src_addr, 
reg->size); ··· 419 419 goto err; 420 420 } 421 421 422 - ret = pci_epc_map_addr(epc, epf->func_no, phys_addr, reg->dst_addr, 423 - reg->size); 422 + ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, phys_addr, 423 + reg->dst_addr, reg->size); 424 424 if (ret) { 425 425 dev_err(dev, "Failed to map address\n"); 426 426 reg->status = STATUS_DST_ADDR_INVALID; ··· 479 479 kfree(buf); 480 480 481 481 err_map_addr: 482 - pci_epc_unmap_addr(epc, epf->func_no, phys_addr); 482 + pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, phys_addr); 483 483 484 484 err_addr: 485 485 pci_epc_mem_free_addr(epc, phys_addr, dst_addr, reg->size); ··· 501 501 502 502 switch (irq_type) { 503 503 case IRQ_TYPE_LEGACY: 504 - pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_LEGACY, 0); 504 + pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, 505 + PCI_EPC_IRQ_LEGACY, 0); 505 506 break; 506 507 case IRQ_TYPE_MSI: 507 - pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_MSI, irq); 508 + pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, 509 + PCI_EPC_IRQ_MSI, irq); 508 510 break; 509 511 case IRQ_TYPE_MSIX: 510 - pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_MSIX, irq); 512 + pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, 513 + PCI_EPC_IRQ_MSIX, irq); 511 514 break; 512 515 default: 513 516 dev_err(dev, "Failed to raise IRQ, unknown type\n"); ··· 545 542 546 543 if (command & COMMAND_RAISE_LEGACY_IRQ) { 547 544 reg->status = STATUS_IRQ_RAISED; 548 - pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_LEGACY, 0); 545 + pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, 546 + PCI_EPC_IRQ_LEGACY, 0); 549 547 goto reset_handler; 550 548 } 551 549 ··· 584 580 } 585 581 586 582 if (command & COMMAND_RAISE_MSI_IRQ) { 587 - count = pci_epc_get_msi(epc, epf->func_no); 583 + count = pci_epc_get_msi(epc, epf->func_no, epf->vfunc_no); 588 584 if (reg->irq_number > count || count <= 0) 589 585 goto reset_handler; 590 586 reg->status = STATUS_IRQ_RAISED; 591 - pci_epc_raise_irq(epc, 
epf->func_no, PCI_EPC_IRQ_MSI, 592 - reg->irq_number); 587 + pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, 588 + PCI_EPC_IRQ_MSI, reg->irq_number); 593 589 goto reset_handler; 594 590 } 595 591 596 592 if (command & COMMAND_RAISE_MSIX_IRQ) { 597 - count = pci_epc_get_msix(epc, epf->func_no); 593 + count = pci_epc_get_msix(epc, epf->func_no, epf->vfunc_no); 598 594 if (reg->irq_number > count || count <= 0) 599 595 goto reset_handler; 600 596 reg->status = STATUS_IRQ_RAISED; 601 - pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_MSIX, 602 - reg->irq_number); 597 + pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, 598 + PCI_EPC_IRQ_MSIX, reg->irq_number); 603 599 goto reset_handler; 604 600 } 605 601 ··· 622 618 epf_bar = &epf->bar[bar]; 623 619 624 620 if (epf_test->reg[bar]) { 625 - pci_epc_clear_bar(epc, epf->func_no, epf_bar); 621 + pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no, 622 + epf_bar); 626 623 pci_epf_free_space(epf, epf_test->reg[bar], bar, 627 624 PRIMARY_INTERFACE); 628 625 } ··· 655 650 if (!!(epc_features->reserved_bar & (1 << bar))) 656 651 continue; 657 652 658 - ret = pci_epc_set_bar(epc, epf->func_no, epf_bar); 653 + ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, 654 + epf_bar); 659 655 if (ret) { 660 656 pci_epf_free_space(epf, epf_test->reg[bar], bar, 661 657 PRIMARY_INTERFACE); ··· 680 674 bool msi_capable = true; 681 675 int ret; 682 676 683 - epc_features = pci_epc_get_features(epc, epf->func_no); 677 + epc_features = pci_epc_get_features(epc, epf->func_no, epf->vfunc_no); 684 678 if (epc_features) { 685 679 msix_capable = epc_features->msix_capable; 686 680 msi_capable = epc_features->msi_capable; 687 681 } 688 682 689 - ret = pci_epc_write_header(epc, epf->func_no, header); 690 - if (ret) { 691 - dev_err(dev, "Configuration header write failed\n"); 692 - return ret; 683 + if (epf->vfunc_no <= 1) { 684 + ret = pci_epc_write_header(epc, epf->func_no, epf->vfunc_no, header); 685 + if (ret) { 686 + dev_err(dev, 
"Configuration header write failed\n"); 687 + return ret; 688 + } 693 689 } 694 690 695 691 ret = pci_epf_test_set_bar(epf); ··· 699 691 return ret; 700 692 701 693 if (msi_capable) { 702 - ret = pci_epc_set_msi(epc, epf->func_no, epf->msi_interrupts); 694 + ret = pci_epc_set_msi(epc, epf->func_no, epf->vfunc_no, 695 + epf->msi_interrupts); 703 696 if (ret) { 704 697 dev_err(dev, "MSI configuration failed\n"); 705 698 return ret; ··· 708 699 } 709 700 710 701 if (msix_capable) { 711 - ret = pci_epc_set_msix(epc, epf->func_no, epf->msix_interrupts, 702 + ret = pci_epc_set_msix(epc, epf->func_no, epf->vfunc_no, 703 + epf->msix_interrupts, 712 704 epf_test->test_reg_bar, 713 705 epf_test->msix_table_offset); 714 706 if (ret) { ··· 842 832 if (WARN_ON_ONCE(!epc)) 843 833 return -EINVAL; 844 834 845 - epc_features = pci_epc_get_features(epc, epf->func_no); 835 + epc_features = pci_epc_get_features(epc, epf->func_no, epf->vfunc_no); 846 836 if (!epc_features) { 847 837 dev_err(&epf->dev, "epc_features not implemented\n"); 848 838 return -EOPNOTSUPP;
drivers/pci/endpoint/pci-ep-cfs.c (+24)
···
 	NULL,
 };
 
+static int pci_epf_vepf_link(struct config_item *epf_pf_item,
+			     struct config_item *epf_vf_item)
+{
+	struct pci_epf_group *epf_vf_group = to_pci_epf_group(epf_vf_item);
+	struct pci_epf_group *epf_pf_group = to_pci_epf_group(epf_pf_item);
+	struct pci_epf *epf_pf = epf_pf_group->epf;
+	struct pci_epf *epf_vf = epf_vf_group->epf;
+
+	return pci_epf_add_vepf(epf_pf, epf_vf);
+}
+
+static void pci_epf_vepf_unlink(struct config_item *epf_pf_item,
+				struct config_item *epf_vf_item)
+{
+	struct pci_epf_group *epf_vf_group = to_pci_epf_group(epf_vf_item);
+	struct pci_epf_group *epf_pf_group = to_pci_epf_group(epf_pf_item);
+	struct pci_epf *epf_pf = epf_pf_group->epf;
+	struct pci_epf *epf_vf = epf_vf_group->epf;
+
+	pci_epf_remove_vepf(epf_pf, epf_vf);
+}
+
 static void pci_epf_release(struct config_item *item)
 {
 	struct pci_epf_group *epf_group = to_pci_epf_group(item);
···
 }
 
 static struct configfs_item_operations pci_epf_ops = {
+	.allow_link = pci_epf_vepf_link,
+	.drop_link = pci_epf_vepf_unlink,
 	.release = pci_epf_release,
 };
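With the allow_link/drop_link hooks in place, user space associates a virtual endpoint function with a physical one by symlinking configfs directories. A configuration sketch only, not runnable outside a target with configfs mounted, and the directory names are illustrative rather than taken from this patch:

```
# Illustrative configfs session on an endpoint target.
cd /sys/kernel/config/pci_ep

# Create a physical endpoint function and a would-be virtual function.
mkdir functions/pci_epf_test/func1
mkdir functions/pci_epf_test/func1_vf1

# Linking the VF directory under the PF directory invokes
# pci_epf_vepf_link() -> pci_epf_add_vepf(), which assigns vfunc_no.
ln -s functions/pci_epf_test/func1_vf1 functions/pci_epf_test/func1

# Removing the link invokes pci_epf_vepf_unlink() -> pci_epf_remove_vepf().
rm functions/pci_epf_test/func1/func1_vf1
```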
drivers/pci/endpoint/pci-epc-core.c (+95 -39)
··· 137 137 * @epc: the features supported by *this* EPC device will be returned 138 138 * @func_no: the features supported by the EPC device specific to the 139 139 * endpoint function with func_no will be returned 140 + * @vfunc_no: the features supported by the EPC device specific to the 141 + * virtual endpoint function with vfunc_no will be returned 140 142 * 141 143 * Invoke to get the features provided by the EPC which may be 142 144 * specific to an endpoint function. Returns pci_epc_features on success 143 145 * and NULL for any failures. 144 146 */ 145 147 const struct pci_epc_features *pci_epc_get_features(struct pci_epc *epc, 146 - u8 func_no) 148 + u8 func_no, u8 vfunc_no) 147 149 { 148 150 const struct pci_epc_features *epc_features; 149 151 150 152 if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions) 151 153 return NULL; 152 154 155 + if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no])) 156 + return NULL; 157 + 153 158 if (!epc->ops->get_features) 154 159 return NULL; 155 160 156 161 mutex_lock(&epc->lock); 157 - epc_features = epc->ops->get_features(epc, func_no); 162 + epc_features = epc->ops->get_features(epc, func_no, vfunc_no); 158 163 mutex_unlock(&epc->lock); 159 164 160 165 return epc_features; ··· 210 205 /** 211 206 * pci_epc_raise_irq() - interrupt the host system 212 207 * @epc: the EPC device which has to interrupt the host 213 - * @func_no: the endpoint function number in the EPC device 208 + * @func_no: the physical endpoint function number in the EPC device 209 + * @vfunc_no: the virtual endpoint function number in the physical function 214 210 * @type: specify the type of interrupt; legacy, MSI or MSI-X 215 211 * @interrupt_num: the MSI or MSI-X interrupt number 216 212 * 217 213 * Invoke to raise an legacy, MSI or MSI-X interrupt 218 214 */ 219 - int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no, 215 + int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 220 216 enum pci_epc_irq_type type, 
u16 interrupt_num) 221 217 { 222 218 int ret; ··· 225 219 if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions) 226 220 return -EINVAL; 227 221 222 + if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no])) 223 + return -EINVAL; 224 + 228 225 if (!epc->ops->raise_irq) 229 226 return 0; 230 227 231 228 mutex_lock(&epc->lock); 232 - ret = epc->ops->raise_irq(epc, func_no, type, interrupt_num); 229 + ret = epc->ops->raise_irq(epc, func_no, vfunc_no, type, interrupt_num); 233 230 mutex_unlock(&epc->lock); 234 231 235 232 return ret; ··· 244 235 * MSI data 245 236 * @epc: the EPC device which has the MSI capability 246 237 * @func_no: the physical endpoint function number in the EPC device 238 + * @vfunc_no: the virtual endpoint function number in the physical function 247 239 * @phys_addr: the physical address of the outbound region 248 240 * @interrupt_num: the MSI interrupt number 249 241 * @entry_size: Size of Outbound address region for each interrupt ··· 260 250 * physical address (in outbound region) of the other interface to ring 261 251 * doorbell. 
262 252 */ 263 - int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no, phys_addr_t phys_addr, 264 - u8 interrupt_num, u32 entry_size, u32 *msi_data, 265 - u32 *msi_addr_offset) 253 + int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 254 + phys_addr_t phys_addr, u8 interrupt_num, u32 entry_size, 255 + u32 *msi_data, u32 *msi_addr_offset) 266 256 { 267 257 int ret; 268 258 269 259 if (IS_ERR_OR_NULL(epc)) 270 260 return -EINVAL; 271 261 262 + if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no])) 263 + return -EINVAL; 264 + 272 265 if (!epc->ops->map_msi_irq) 273 266 return -EINVAL; 274 267 275 268 mutex_lock(&epc->lock); 276 - ret = epc->ops->map_msi_irq(epc, func_no, phys_addr, interrupt_num, 277 - entry_size, msi_data, msi_addr_offset); 269 + ret = epc->ops->map_msi_irq(epc, func_no, vfunc_no, phys_addr, 270 + interrupt_num, entry_size, msi_data, 271 + msi_addr_offset); 278 272 mutex_unlock(&epc->lock); 279 273 280 274 return ret; ··· 288 274 /** 289 275 * pci_epc_get_msi() - get the number of MSI interrupt numbers allocated 290 276 * @epc: the EPC device to which MSI interrupts was requested 291 - * @func_no: the endpoint function number in the EPC device 277 + * @func_no: the physical endpoint function number in the EPC device 278 + * @vfunc_no: the virtual endpoint function number in the physical function 292 279 * 293 280 * Invoke to get the number of MSI interrupts allocated by the RC 294 281 */ 295 - int pci_epc_get_msi(struct pci_epc *epc, u8 func_no) 282 + int pci_epc_get_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no) 296 283 { 297 284 int interrupt; 298 285 299 286 if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions) 300 287 return 0; 301 288 289 + if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no])) 290 + return 0; 291 + 302 292 if (!epc->ops->get_msi) 303 293 return 0; 304 294 305 295 mutex_lock(&epc->lock); 306 - interrupt = epc->ops->get_msi(epc, func_no); 296 + interrupt = 
epc->ops->get_msi(epc, func_no, vfunc_no); 307 297 mutex_unlock(&epc->lock); 308 298 309 299 if (interrupt < 0) ··· 322 304 /** 323 305 * pci_epc_set_msi() - set the number of MSI interrupt numbers required 324 306 * @epc: the EPC device on which MSI has to be configured 325 - * @func_no: the endpoint function number in the EPC device 307 + * @func_no: the physical endpoint function number in the EPC device 308 + * @vfunc_no: the virtual endpoint function number in the physical function 326 309 * @interrupts: number of MSI interrupts required by the EPF 327 310 * 328 311 * Invoke to set the required number of MSI interrupts. 329 312 */ 330 - int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts) 313 + int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no, u8 interrupts) 331 314 { 332 315 int ret; 333 316 u8 encode_int; ··· 337 318 interrupts > 32) 338 319 return -EINVAL; 339 320 321 + if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no])) 322 + return -EINVAL; 323 + 340 324 if (!epc->ops->set_msi) 341 325 return 0; 342 326 343 327 encode_int = order_base_2(interrupts); 344 328 345 329 mutex_lock(&epc->lock); 346 - ret = epc->ops->set_msi(epc, func_no, encode_int); 330 + ret = epc->ops->set_msi(epc, func_no, vfunc_no, encode_int); 347 331 mutex_unlock(&epc->lock); 348 332 349 333 return ret; ··· 356 334 /** 357 335 * pci_epc_get_msix() - get the number of MSI-X interrupt numbers allocated 358 336 * @epc: the EPC device to which MSI-X interrupts was requested 359 - * @func_no: the endpoint function number in the EPC device 337 + * @func_no: the physical endpoint function number in the EPC device 338 + * @vfunc_no: the virtual endpoint function number in the physical function 360 339 * 361 340 * Invoke to get the number of MSI-X interrupts allocated by the RC 362 341 */ 363 - int pci_epc_get_msix(struct pci_epc *epc, u8 func_no) 342 + int pci_epc_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no) 364 343 { 365 344 int 
interrupt; 366 345 367 346 if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions) 368 347 return 0; 369 348 349 + if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no])) 350 + return 0; 351 + 370 352 if (!epc->ops->get_msix) 371 353 return 0; 372 354 373 355 mutex_lock(&epc->lock); 374 - interrupt = epc->ops->get_msix(epc, func_no); 356 + interrupt = epc->ops->get_msix(epc, func_no, vfunc_no); 375 357 mutex_unlock(&epc->lock); 376 358 377 359 if (interrupt < 0) ··· 388 362 /** 389 363 * pci_epc_set_msix() - set the number of MSI-X interrupt numbers required 390 364 * @epc: the EPC device on which MSI-X has to be configured 391 - * @func_no: the endpoint function number in the EPC device 365 + * @func_no: the physical endpoint function number in the EPC device 366 + * @vfunc_no: the virtual endpoint function number in the physical function 392 367 * @interrupts: number of MSI-X interrupts required by the EPF 393 368 * @bir: BAR where the MSI-X table resides 394 369 * @offset: Offset pointing to the start of MSI-X table 395 370 * 396 371 * Invoke to set the required number of MSI-X interrupts. 
397 372 */ 398 - int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts, 399 - enum pci_barno bir, u32 offset) 373 + int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 374 + u16 interrupts, enum pci_barno bir, u32 offset) 400 375 { 401 376 int ret; 402 377 ··· 405 378 interrupts < 1 || interrupts > 2048) 406 379 return -EINVAL; 407 380 381 + if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no])) 382 + return -EINVAL; 383 + 408 384 if (!epc->ops->set_msix) 409 385 return 0; 410 386 411 387 mutex_lock(&epc->lock); 412 - ret = epc->ops->set_msix(epc, func_no, interrupts - 1, bir, offset); 388 + ret = epc->ops->set_msix(epc, func_no, vfunc_no, interrupts - 1, bir, 389 + offset); 413 390 mutex_unlock(&epc->lock); 414 391 415 392 return ret; ··· 423 392 /** 424 393 * pci_epc_unmap_addr() - unmap CPU address from PCI address 425 394 * @epc: the EPC device on which address is allocated 426 - * @func_no: the endpoint function number in the EPC device 395 + * @func_no: the physical endpoint function number in the EPC device 396 + * @vfunc_no: the virtual endpoint function number in the physical function 427 397 * @phys_addr: physical address of the local system 428 398 * 429 399 * Invoke to unmap the CPU address from PCI address. 
430 400 */ 431 - void pci_epc_unmap_addr(struct pci_epc *epc, u8 func_no, 401 + void pci_epc_unmap_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 432 402 phys_addr_t phys_addr) 433 403 { 434 404 if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions) 405 + return; 406 + 407 + if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no])) 435 408 return; 436 409 437 410 if (!epc->ops->unmap_addr) 438 411 return; 439 412 440 413 mutex_lock(&epc->lock); 441 - epc->ops->unmap_addr(epc, func_no, phys_addr); 414 + epc->ops->unmap_addr(epc, func_no, vfunc_no, phys_addr); 442 415 mutex_unlock(&epc->lock); 443 416 } 444 417 EXPORT_SYMBOL_GPL(pci_epc_unmap_addr); ··· 450 415 /** 451 416 * pci_epc_map_addr() - map CPU address to PCI address 452 417 * @epc: the EPC device on which address is allocated 453 - * @func_no: the endpoint function number in the EPC device 418 + * @func_no: the physical endpoint function number in the EPC device 419 + * @vfunc_no: the virtual endpoint function number in the physical function 454 420 * @phys_addr: physical address of the local system 455 421 * @pci_addr: PCI address to which the physical address should be mapped 456 422 * @size: the size of the allocation 457 423 * 458 424 * Invoke to map CPU address with PCI address. 
459 425 */ 460 - int pci_epc_map_addr(struct pci_epc *epc, u8 func_no, 426 + int pci_epc_map_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 461 427 phys_addr_t phys_addr, u64 pci_addr, size_t size) 462 428 { 463 429 int ret; ··· 466 430 if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions) 467 431 return -EINVAL; 468 432 433 + if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no])) 434 + return -EINVAL; 435 + 469 436 if (!epc->ops->map_addr) 470 437 return 0; 471 438 472 439 mutex_lock(&epc->lock); 473 - ret = epc->ops->map_addr(epc, func_no, phys_addr, pci_addr, size); 440 + ret = epc->ops->map_addr(epc, func_no, vfunc_no, phys_addr, pci_addr, 441 + size); 474 442 mutex_unlock(&epc->lock); 475 443 476 444 return ret; ··· 484 444 /** 485 445 * pci_epc_clear_bar() - reset the BAR 486 446 * @epc: the EPC device for which the BAR has to be cleared 487 - * @func_no: the endpoint function number in the EPC device 447 + * @func_no: the physical endpoint function number in the EPC device 448 + * @vfunc_no: the virtual endpoint function number in the physical function 488 449 * @epf_bar: the struct epf_bar that contains the BAR information 489 450 * 490 451 * Invoke to reset the BAR of the endpoint device. 
491 452 */ 492 - void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no, 453 + void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 493 454 struct pci_epf_bar *epf_bar) 494 455 { 495 456 if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions || ··· 498 457 epf_bar->flags & PCI_BASE_ADDRESS_MEM_TYPE_64)) 499 458 return; 500 459 460 + if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no])) 461 + return; 462 + 501 463 if (!epc->ops->clear_bar) 502 464 return; 503 465 504 466 mutex_lock(&epc->lock); 505 - epc->ops->clear_bar(epc, func_no, epf_bar); 467 + epc->ops->clear_bar(epc, func_no, vfunc_no, epf_bar); 506 468 mutex_unlock(&epc->lock); 507 469 } 508 470 EXPORT_SYMBOL_GPL(pci_epc_clear_bar); ··· 513 469 /** 514 470 * pci_epc_set_bar() - configure BAR in order for host to assign PCI addr space 515 471 * @epc: the EPC device on which BAR has to be configured 516 - * @func_no: the endpoint function number in the EPC device 472 + * @func_no: the physical endpoint function number in the EPC device 473 + * @vfunc_no: the virtual endpoint function number in the physical function 517 474 * @epf_bar: the struct epf_bar that contains the BAR information 518 475 * 519 476 * Invoke to configure the BAR of the endpoint device. 
520 477 */ 521 - int pci_epc_set_bar(struct pci_epc *epc, u8 func_no, 478 + int pci_epc_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 522 479 struct pci_epf_bar *epf_bar) 523 480 { 524 481 int ret; ··· 534 489 !(flags & PCI_BASE_ADDRESS_MEM_TYPE_64))) 535 490 return -EINVAL; 536 491 492 + if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no])) 493 + return -EINVAL; 494 + 537 495 if (!epc->ops->set_bar) 538 496 return 0; 539 497 540 498 mutex_lock(&epc->lock); 541 - ret = epc->ops->set_bar(epc, func_no, epf_bar); 499 + ret = epc->ops->set_bar(epc, func_no, vfunc_no, epf_bar); 542 500 mutex_unlock(&epc->lock); 543 501 544 502 return ret; ··· 551 503 /** 552 504 * pci_epc_write_header() - write standard configuration header 553 505 * @epc: the EPC device to which the configuration header should be written 554 - * @func_no: the endpoint function number in the EPC device 506 + * @func_no: the physical endpoint function number in the EPC device 507 + * @vfunc_no: the virtual endpoint function number in the physical function 555 508 * @header: standard configuration header fields 556 509 * 557 510 * Invoke to write the configuration header to the endpoint controller. Every ··· 560 511 * configuration header would be written. The callback function should write 561 512 * the header fields to this dedicated location. 
562 513 */ 563 - int pci_epc_write_header(struct pci_epc *epc, u8 func_no, 514 + int pci_epc_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 564 515 struct pci_epf_header *header) 565 516 { 566 517 int ret; ··· 568 519 if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions) 569 520 return -EINVAL; 570 521 522 + if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no])) 523 + return -EINVAL; 524 + 525 + /* Only Virtual Function #1 has deviceID */ 526 + if (vfunc_no > 1) 527 + return -EINVAL; 528 + 571 529 if (!epc->ops->write_header) 572 530 return 0; 573 531 574 532 mutex_lock(&epc->lock); 575 - ret = epc->ops->write_header(epc, func_no, header); 533 + ret = epc->ops->write_header(epc, func_no, vfunc_no, header); 576 534 mutex_unlock(&epc->lock); 577 535 578 536 return ret; ··· 604 548 u32 func_no; 605 549 int ret = 0; 606 550 607 - if (IS_ERR_OR_NULL(epc)) 551 + if (IS_ERR_OR_NULL(epc) || epf->is_vf) 608 552 return -EINVAL; 609 553 610 554 if (type == PRIMARY_INTERFACE && epf->epc)
drivers/pci/endpoint/pci-epf-core.c (+144 -2)
··· 62 62 */ 63 63 void pci_epf_unbind(struct pci_epf *epf) 64 64 { 65 + struct pci_epf *epf_vf; 66 + 65 67 if (!epf->driver) { 66 68 dev_WARN(&epf->dev, "epf device not bound to driver\n"); 67 69 return; 68 70 } 69 71 70 72 mutex_lock(&epf->lock); 71 - epf->driver->ops->unbind(epf); 73 + list_for_each_entry(epf_vf, &epf->pci_vepf, list) { 74 + if (epf_vf->is_bound) 75 + epf_vf->driver->ops->unbind(epf_vf); 76 + } 77 + if (epf->is_bound) 78 + epf->driver->ops->unbind(epf); 72 79 mutex_unlock(&epf->lock); 73 80 module_put(epf->driver->owner); 74 81 } ··· 90 83 */ 91 84 int pci_epf_bind(struct pci_epf *epf) 92 85 { 86 + struct device *dev = &epf->dev; 87 + struct pci_epf *epf_vf; 88 + u8 func_no, vfunc_no; 89 + struct pci_epc *epc; 93 90 int ret; 94 91 95 92 if (!epf->driver) { 96 - dev_WARN(&epf->dev, "epf device not bound to driver\n"); 93 + dev_WARN(dev, "epf device not bound to driver\n"); 97 94 return -EINVAL; 98 95 } 99 96 ··· 105 94 return -EAGAIN; 106 95 107 96 mutex_lock(&epf->lock); 97 + list_for_each_entry(epf_vf, &epf->pci_vepf, list) { 98 + vfunc_no = epf_vf->vfunc_no; 99 + 100 + if (vfunc_no < 1) { 101 + dev_err(dev, "Invalid virtual function number\n"); 102 + ret = -EINVAL; 103 + goto ret; 104 + } 105 + 106 + epc = epf->epc; 107 + func_no = epf->func_no; 108 + if (!IS_ERR_OR_NULL(epc)) { 109 + if (!epc->max_vfs) { 110 + dev_err(dev, "No support for virt function\n"); 111 + ret = -EINVAL; 112 + goto ret; 113 + } 114 + 115 + if (vfunc_no > epc->max_vfs[func_no]) { 116 + dev_err(dev, "PF%d: Exceeds max vfunc number\n", 117 + func_no); 118 + ret = -EINVAL; 119 + goto ret; 120 + } 121 + } 122 + 123 + epc = epf->sec_epc; 124 + func_no = epf->sec_epc_func_no; 125 + if (!IS_ERR_OR_NULL(epc)) { 126 + if (!epc->max_vfs) { 127 + dev_err(dev, "No support for virt function\n"); 128 + ret = -EINVAL; 129 + goto ret; 130 + } 131 + 132 + if (vfunc_no > epc->max_vfs[func_no]) { 133 + dev_err(dev, "PF%d: Exceeds max vfunc number\n", 134 + func_no); 135 + ret = -EINVAL; 
136 + goto ret; 137 + } 138 + } 139 + 140 + epf_vf->func_no = epf->func_no; 141 + epf_vf->sec_epc_func_no = epf->sec_epc_func_no; 142 + epf_vf->epc = epf->epc; 143 + epf_vf->sec_epc = epf->sec_epc; 144 + ret = epf_vf->driver->ops->bind(epf_vf); 145 + if (ret) 146 + goto ret; 147 + epf_vf->is_bound = true; 148 + } 149 + 108 150 ret = epf->driver->ops->bind(epf); 151 + if (ret) 152 + goto ret; 153 + epf->is_bound = true; 154 + 109 155 mutex_unlock(&epf->lock); 156 + return 0; 157 + 158 + ret: 159 + mutex_unlock(&epf->lock); 160 + pci_epf_unbind(epf); 110 161 111 162 return ret; 112 163 } 113 164 EXPORT_SYMBOL_GPL(pci_epf_bind); 165 + 166 + /** 167 + * pci_epf_add_vepf() - associate virtual EP function to physical EP function 168 + * @epf_pf: the physical EP function to which the virtual EP function should be 169 + * associated 170 + * @epf_vf: the virtual EP function to be added 171 + * 172 + * A physical endpoint function can be associated with multiple virtual 173 + * endpoint functions. Invoke pci_epf_add_epf() to add a virtual PCI endpoint 174 + * function to a physical PCI endpoint function. 
175 + */ 176 + int pci_epf_add_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf) 177 + { 178 + u32 vfunc_no; 179 + 180 + if (IS_ERR_OR_NULL(epf_pf) || IS_ERR_OR_NULL(epf_vf)) 181 + return -EINVAL; 182 + 183 + if (epf_pf->epc || epf_vf->epc || epf_vf->epf_pf) 184 + return -EBUSY; 185 + 186 + if (epf_pf->sec_epc || epf_vf->sec_epc) 187 + return -EBUSY; 188 + 189 + mutex_lock(&epf_pf->lock); 190 + vfunc_no = find_first_zero_bit(&epf_pf->vfunction_num_map, 191 + BITS_PER_LONG); 192 + if (vfunc_no >= BITS_PER_LONG) { 193 + mutex_unlock(&epf_pf->lock); 194 + return -EINVAL; 195 + } 196 + 197 + set_bit(vfunc_no, &epf_pf->vfunction_num_map); 198 + epf_vf->vfunc_no = vfunc_no; 199 + 200 + epf_vf->epf_pf = epf_pf; 201 + epf_vf->is_vf = true; 202 + 203 + list_add_tail(&epf_vf->list, &epf_pf->pci_vepf); 204 + mutex_unlock(&epf_pf->lock); 205 + 206 + return 0; 207 + } 208 + EXPORT_SYMBOL_GPL(pci_epf_add_vepf); 209 + 210 + /** 211 + * pci_epf_remove_vepf() - remove virtual EP function from physical EP function 212 + * @epf_pf: the physical EP function from which the virtual EP function should 213 + * be removed 214 + * @epf_vf: the virtual EP function to be removed 215 + * 216 + * Invoke to remove a virtual endpoint function from the physcial endpoint 217 + * function. 218 + */ 219 + void pci_epf_remove_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf) 220 + { 221 + if (IS_ERR_OR_NULL(epf_pf) || IS_ERR_OR_NULL(epf_vf)) 222 + return; 223 + 224 + mutex_lock(&epf_pf->lock); 225 + clear_bit(epf_vf->vfunc_no, &epf_pf->vfunction_num_map); 226 + list_del(&epf_vf->list); 227 + mutex_unlock(&epf_pf->lock); 228 + } 229 + EXPORT_SYMBOL_GPL(pci_epf_remove_vepf); 114 230 115 231 /** 116 232 * pci_epf_free_space() - free the allocated PCI EPF register space ··· 454 316 kfree(epf); 455 317 return ERR_PTR(-ENOMEM); 456 318 } 319 + 320 + /* VFs are numbered starting with 1. 
So set BIT(0) by default */ 321 + epf->vfunction_num_map = 1; 322 + INIT_LIST_HEAD(&epf->pci_vepf); 457 323 458 324 dev = &epf->dev; 459 325 device_initialize(dev);
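The `vfunction_num_map` bitmap above hands out virtual function numbers starting at 1, because bit 0 is pre-set at allocation time. A minimal user-space sketch of that allocation scheme (the `alloc_vfunc_no()`/`free_vfunc_no()` names are illustrative, not kernel API, and plain bit operations stand in for `find_first_zero_bit()`/`set_bit()`):

```c
#include <assert.h>
#include <limits.h>

/* Sketch of the vfunction_num_map idea: bit 0 is reserved, so the first
 * number handed out is 1, matching "VFs are numbered starting with 1". */
static unsigned long vfunction_num_map = 1UL; /* BIT(0) set by default */

static int alloc_vfunc_no(void)
{
	for (int i = 0; i < (int)(sizeof(unsigned long) * CHAR_BIT); i++) {
		if (!(vfunction_num_map & (1UL << i))) {
			vfunction_num_map |= 1UL << i;
			return i;
		}
	}
	return -1; /* map full, mirrors the -EINVAL path above */
}

static void free_vfunc_no(int n)
{
	vfunction_num_map &= ~(1UL << n); /* mirrors clear_bit() on removal */
}
```

Freed numbers are reused lowest-first, just as `find_first_zero_bit()` does in the patch.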
-3
drivers/pci/hotplug/TODO
··· 40 40 41 41 * The return value of pci_hp_register() is not checked. 42 42 43 - * iounmap(io_mem) is called in the error path of ebda_rsrc_controller() 44 - and once more in the error path of its caller ibmphp_access_ebda(). 45 - 46 43 * The various slot data structures are difficult to follow and need to be 47 44 simplified. A lot of functions are too large and too complex, they need 48 45 to be broken up into smaller, manageable pieces. Negative examples are
+1 -4
drivers/pci/hotplug/ibmphp_ebda.c
··· 714 714 /* init hpc structure */ 715 715 hpc_ptr = alloc_ebda_hpc(slot_num, bus_num); 716 716 if (!hpc_ptr) { 717 - rc = -ENOMEM; 718 - goto error_no_hpc; 717 + return -ENOMEM; 719 718 } 720 719 hpc_ptr->ctlr_id = ctlr_id; 721 720 hpc_ptr->ctlr_relative_id = ctlr; ··· 909 910 kfree(tmp_slot); 910 911 error_no_slot: 911 912 free_ebda_hpc(hpc_ptr); 912 - error_no_hpc: 913 - iounmap(io_mem); 914 913 return rc; 915 914 } 916 915
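The ibmphp fix above removes a double release: the error path of `ebda_rsrc_controller()` called `iounmap(io_mem)` and its caller `ibmphp_access_ebda()` then unmapped the same region again. A hedged stand-alone sketch of the corrected ownership pattern (the `fake_iounmap()` counter is purely illustrative):

```c
#include <assert.h>

/* Sketch of the fixed ownership rule: the callee's error path no longer
 * releases io_mem, so the caller releases it exactly once on all paths. */
static int unmap_count;

static void fake_iounmap(void)
{
	unmap_count++; /* stands in for iounmap(io_mem) */
}

static int callee_fixed(int fail)
{
	if (fail)
		return -1; /* error path: do NOT unmap, caller owns io_mem */
	return 0;
}

static int caller(int fail)
{
	int rc = callee_fixed(fail);

	fake_iounmap(); /* the single point of release */
	return rc;
}
```

With the old code, a callee failure produced two unmaps of one mapping; now every path through `caller()` releases the resource exactly once.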
+1 -1
drivers/pci/hotplug/pciehp.h
··· 184 184 185 185 int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot); 186 186 int pciehp_sysfs_disable_slot(struct hotplug_slot *hotplug_slot); 187 - int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe); 187 + int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, bool probe); 188 188 int pciehp_get_attention_status(struct hotplug_slot *hotplug_slot, u8 *status); 189 189 int pciehp_set_raw_indicator_status(struct hotplug_slot *h_slot, u8 status); 190 190 int pciehp_get_raw_indicator_status(struct hotplug_slot *h_slot, u8 *status);
+1 -1
drivers/pci/hotplug/pciehp_hpc.c
··· 870 870 * momentarily, if we see that they could interfere. Also, clear any spurious 871 871 * events after. 872 872 */ 873 - int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe) 873 + int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, bool probe) 874 874 { 875 875 struct controller *ctrl = to_ctrl(hotplug_slot); 876 876 struct pci_dev *pdev = ctrl_dev(ctrl);
+1 -1
drivers/pci/hotplug/pnv_php.c
··· 526 526 return 0; 527 527 } 528 528 529 - static int pnv_php_reset_slot(struct hotplug_slot *slot, int probe) 529 + static int pnv_php_reset_slot(struct hotplug_slot *slot, bool probe) 530 530 { 531 531 struct pnv_php_slot *php_slot = to_pnv_php_slot(slot); 532 532 struct pci_dev *bridge = php_slot->pdev;
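The hunks above convert the reset hooks' `probe` argument from `int` to `bool`, standardizing the convention: with `probe` true the hook only reports whether the device can be reset this way (0 or `-ENOTTY`) and must have no side effects; with `probe` false it performs the reset. A small sketch of that contract (`demo_reset()` and the `resets` counter are illustrative, not kernel API):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Sketch of the probe convention: probe=true is a side-effect-free
 * capability check, probe=false actually resets. */
static int resets;

static int demo_reset(bool supported, bool probe)
{
	if (!supported)
		return -ENOTTY; /* caller should try the next method */
	if (probe)
		return 0;       /* supported; nothing touched */
	resets++;               /* the real hook would reset the slot here */
	return 0;
}
```

This is the same contract the later `PCI_RESET_PROBE`/`PCI_RESET_DO_RESET` call sites in pci.c rely on.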
+1 -1
drivers/pci/of.c
··· 310 310 /* Check for ranges property */ 311 311 err = of_pci_range_parser_init(&parser, dev_node); 312 312 if (err) 313 - goto failed; 313 + return 0; 314 314 315 315 dev_dbg(dev, "Parsing ranges property...\n"); 316 316 for_each_of_pci_range(&parser, &range) {
+61 -42
drivers/pci/pci-acpi.c
··· 934 934 935 935 static struct acpi_device *acpi_pci_find_companion(struct device *dev); 936 936 937 - static bool acpi_pci_bridge_d3(struct pci_dev *dev) 937 + void pci_set_acpi_fwnode(struct pci_dev *dev) 938 938 { 939 - const struct fwnode_handle *fwnode; 940 - struct acpi_device *adev; 941 - struct pci_dev *root; 942 - u8 val; 939 + if (!ACPI_COMPANION(&dev->dev) && !pci_dev_is_added(dev)) 940 + ACPI_COMPANION_SET(&dev->dev, 941 + acpi_pci_find_companion(&dev->dev)); 942 + } 943 943 944 - if (!dev->is_hotplug_bridge) 945 - return false; 944 + /** 945 + * pci_dev_acpi_reset - do a function level reset using _RST method 946 + * @dev: device to reset 947 + * @probe: if true, return 0 if device supports _RST 948 + */ 949 + int pci_dev_acpi_reset(struct pci_dev *dev, bool probe) 950 + { 951 + acpi_handle handle = ACPI_HANDLE(&dev->dev); 946 952 947 - /* Assume D3 support if the bridge is power-manageable by ACPI. */ 948 - adev = ACPI_COMPANION(&dev->dev); 949 - if (!adev && !pci_dev_is_added(dev)) { 950 - adev = acpi_pci_find_companion(&dev->dev); 951 - ACPI_COMPANION_SET(&dev->dev, adev); 953 + if (!handle || !acpi_has_method(handle, "_RST")) 954 + return -ENOTTY; 955 + 956 + if (probe) 957 + return 0; 958 + 959 + if (ACPI_FAILURE(acpi_evaluate_object(handle, "_RST", NULL, NULL))) { 960 + pci_warn(dev, "ACPI _RST failed\n"); 961 + return -ENOTTY; 952 962 } 953 963 954 - if (adev && acpi_device_power_manageable(adev)) 955 - return true; 956 - 957 - /* 958 - * Look for a special _DSD property for the root port and if it 959 - * is set we know the hierarchy behind it supports D3 just fine. 960 - */ 961 - root = pcie_find_root_port(dev); 962 - if (!root) 963 - return false; 964 - 965 - adev = ACPI_COMPANION(&root->dev); 966 - if (root == dev) { 967 - /* 968 - * It is possible that the ACPI companion is not yet bound 969 - * for the root port so look it up manually here. 
970 - */ 971 - if (!adev && !pci_dev_is_added(root)) 972 - adev = acpi_pci_find_companion(&root->dev); 973 - } 974 - 975 - if (!adev) 976 - return false; 977 - 978 - fwnode = acpi_fwnode_handle(adev); 979 - if (fwnode_property_read_u8(fwnode, "HotPlugSupportInD3", &val)) 980 - return false; 981 - 982 - return val == 1; 964 + return 0; 983 965 } 984 966 985 967 static bool acpi_pci_power_manageable(struct pci_dev *dev) 986 968 { 987 969 struct acpi_device *adev = ACPI_COMPANION(&dev->dev); 988 - return adev ? acpi_device_power_manageable(adev) : false; 970 + 971 + if (!adev) 972 + return false; 973 + return acpi_device_power_manageable(adev); 974 + } 975 + 976 + static bool acpi_pci_bridge_d3(struct pci_dev *dev) 977 + { 978 + const union acpi_object *obj; 979 + struct acpi_device *adev; 980 + struct pci_dev *rpdev; 981 + 982 + if (!dev->is_hotplug_bridge) 983 + return false; 984 + 985 + /* Assume D3 support if the bridge is power-manageable by ACPI. */ 986 + if (acpi_pci_power_manageable(dev)) 987 + return true; 988 + 989 + /* 990 + * The ACPI firmware will provide the device-specific properties through 991 + * _DSD configuration object. Look for the 'HotPlugSupportInD3' property 992 + * for the root port and if it is set we know the hierarchy behind it 993 + * supports D3 just fine. 994 + */ 995 + rpdev = pcie_find_root_port(dev); 996 + if (!rpdev) 997 + return false; 998 + 999 + adev = ACPI_COMPANION(&rpdev->dev); 1000 + if (!adev) 1001 + return false; 1002 + 1003 + if (acpi_dev_get_property(adev, "HotPlugSupportInD3", 1004 + ACPI_TYPE_INTEGER, &obj) < 0) 1005 + return false; 1006 + 1007 + return obj->integer.value == 1; 989 1008 } 990 1009 991 1010 static int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state)
+1 -1
drivers/pci/pci-bridge-emul.h
··· 54 54 __le16 slotctl; 55 55 __le16 slotsta; 56 56 __le16 rootctl; 57 - __le16 rsvd; 57 + __le16 rootcap; 58 58 __le32 rootsta; 59 59 __le32 devcap2; 60 60 __le16 devctl2;
+2 -1
drivers/pci/pci-sysfs.c
··· 1367 1367 { 1368 1368 struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); 1369 1369 1370 - if (!pdev->reset_fn) 1370 + if (!pci_reset_supported(pdev)) 1371 1371 return 0; 1372 1372 1373 1373 return a->mode; ··· 1491 1491 &pci_dev_config_attr_group, 1492 1492 &pci_dev_rom_attr_group, 1493 1493 &pci_dev_reset_attr_group, 1494 + &pci_dev_reset_method_attr_group, 1494 1495 &pci_dev_vpd_attr_group, 1495 1496 #ifdef CONFIG_DMI 1496 1497 &pci_dev_smbios_attr_group,
+237 -94
drivers/pci/pci.c
··· 31 31 #include <linux/vmalloc.h> 32 32 #include <asm/dma.h> 33 33 #include <linux/aer.h> 34 + #include <linux/bitfield.h> 34 35 #include "pci.h" 35 36 36 37 DEFINE_MUTEX(pci_slot_mutex); ··· 71 70 72 71 if (delay) 73 72 msleep(delay); 73 + } 74 + 75 + bool pci_reset_supported(struct pci_dev *dev) 76 + { 77 + return dev->reset_methods[0] != 0; 74 78 } 75 79 76 80 #ifdef CONFIG_PCI_DOMAINS ··· 212 206 EXPORT_SYMBOL_GPL(pci_status_get_and_clear_errors); 213 207 214 208 #ifdef CONFIG_HAS_IOMEM 215 - void __iomem *pci_ioremap_bar(struct pci_dev *pdev, int bar) 209 + static void __iomem *__pci_ioremap_resource(struct pci_dev *pdev, int bar, 210 + bool write_combine) 216 211 { 217 212 struct resource *res = &pdev->resource[bar]; 213 + resource_size_t start = res->start; 214 + resource_size_t size = resource_size(res); 218 215 219 216 /* 220 217 * Make sure the BAR is actually a memory resource, not an IO resource 221 218 */ 222 219 if (res->flags & IORESOURCE_UNSET || !(res->flags & IORESOURCE_MEM)) { 223 - pci_warn(pdev, "can't ioremap BAR %d: %pR\n", bar, res); 220 + pci_err(pdev, "can't ioremap BAR %d: %pR\n", bar, res); 224 221 return NULL; 225 222 } 226 - return ioremap(res->start, resource_size(res)); 223 + 224 + if (write_combine) 225 + return ioremap_wc(start, size); 226 + 227 + return ioremap(start, size); 228 + } 229 + 230 + void __iomem *pci_ioremap_bar(struct pci_dev *pdev, int bar) 231 + { 232 + return __pci_ioremap_resource(pdev, bar, false); 227 233 } 228 234 EXPORT_SYMBOL_GPL(pci_ioremap_bar); 229 235 230 236 void __iomem *pci_ioremap_wc_bar(struct pci_dev *pdev, int bar) 231 237 { 232 - /* 233 - * Make sure the BAR is actually a memory resource, not an IO resource 234 - */ 235 - if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM)) { 236 - WARN_ON(1); 237 - return NULL; 238 - } 239 - return ioremap_wc(pci_resource_start(pdev, bar), 240 - pci_resource_len(pdev, bar)); 238 + return __pci_ioremap_resource(pdev, bar, true); 241 239 } 242 240 
EXPORT_SYMBOL_GPL(pci_ioremap_wc_bar); 243 241 #endif ··· 275 265 276 266 *endptr = strchrnul(path, ';'); 277 267 278 - wpath = kmemdup_nul(path, *endptr - path, GFP_KERNEL); 268 + wpath = kmemdup_nul(path, *endptr - path, GFP_ATOMIC); 279 269 if (!wpath) 280 270 return -ENOMEM; 281 271 ··· 925 915 /* Upstream Forwarding */ 926 916 ctrl |= (cap & PCI_ACS_UF); 927 917 928 - /* Enable Translation Blocking for external devices */ 929 - if (dev->external_facing || dev->untrusted) 918 + /* Enable Translation Blocking for external devices and noats */ 919 + if (pci_ats_disabled() || dev->external_facing || dev->untrusted) 930 920 ctrl |= (cap & PCI_ACS_TB); 931 921 932 922 pci_write_config_word(dev, pos + PCI_ACS_CTRL, ctrl); ··· 4639 4629 EXPORT_SYMBOL(pci_wait_for_pending_transaction); 4640 4630 4641 4631 /** 4642 - * pcie_has_flr - check if a device supports function level resets 4643 - * @dev: device to check 4644 - * 4645 - * Returns true if the device advertises support for PCIe function level 4646 - * resets. 4647 - */ 4648 - bool pcie_has_flr(struct pci_dev *dev) 4649 - { 4650 - u32 cap; 4651 - 4652 - if (dev->dev_flags & PCI_DEV_FLAGS_NO_FLR_RESET) 4653 - return false; 4654 - 4655 - pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &cap); 4656 - return cap & PCI_EXP_DEVCAP_FLR; 4657 - } 4658 - EXPORT_SYMBOL_GPL(pcie_has_flr); 4659 - 4660 - /** 4661 4632 * pcie_flr - initiate a PCIe function level reset 4662 4633 * @dev: device to reset 4663 4634 * 4664 - * Initiate a function level reset on @dev. The caller should ensure the 4665 - * device supports FLR before calling this function, e.g. by using the 4666 - * pcie_has_flr() helper. 
4635 + * Initiate a function level reset unconditionally on @dev without 4636 + * checking any flags and DEVCAP 4667 4637 */ 4668 4638 int pcie_flr(struct pci_dev *dev) 4669 4639 { ··· 4666 4676 } 4667 4677 EXPORT_SYMBOL_GPL(pcie_flr); 4668 4678 4669 - static int pci_af_flr(struct pci_dev *dev, int probe) 4679 + /** 4680 + * pcie_reset_flr - initiate a PCIe function level reset 4681 + * @dev: device to reset 4682 + * @probe: if true, return 0 if device can be reset this way 4683 + * 4684 + * Initiate a function level reset on @dev. 4685 + */ 4686 + int pcie_reset_flr(struct pci_dev *dev, bool probe) 4687 + { 4688 + if (dev->dev_flags & PCI_DEV_FLAGS_NO_FLR_RESET) 4689 + return -ENOTTY; 4690 + 4691 + if (!(dev->devcap & PCI_EXP_DEVCAP_FLR)) 4692 + return -ENOTTY; 4693 + 4694 + if (probe) 4695 + return 0; 4696 + 4697 + return pcie_flr(dev); 4698 + } 4699 + EXPORT_SYMBOL_GPL(pcie_reset_flr); 4700 + 4701 + static int pci_af_flr(struct pci_dev *dev, bool probe) 4670 4702 { 4671 4703 int pos; 4672 4704 u8 cap; ··· 4735 4723 /** 4736 4724 * pci_pm_reset - Put device into PCI_D3 and back into PCI_D0. 4737 4725 * @dev: Device to reset. 4738 - * @probe: If set, only check if the device can be reset this way. 4726 + * @probe: if true, return 0 if the device can be reset this way. 4739 4727 * 4740 4728 * If @dev supports native PCI PM and its PCI_PM_CTRL_NO_SOFT_RESET flag is 4741 4729 * unset, it will be reinitialized internally when going from PCI_D3hot to ··· 4747 4735 * by default (i.e. unless the @dev's d3hot_delay field has a different value). 4748 4736 * Moreover, only devices in D0 can be reset by this function. 
4749 4737 */ 4750 - static int pci_pm_reset(struct pci_dev *dev, int probe) 4738 + static int pci_pm_reset(struct pci_dev *dev, bool probe) 4751 4739 { 4752 4740 u16 csr; 4753 4741 ··· 5007 4995 } 5008 4996 EXPORT_SYMBOL_GPL(pci_bridge_secondary_bus_reset); 5009 4997 5010 - static int pci_parent_bus_reset(struct pci_dev *dev, int probe) 4998 + static int pci_parent_bus_reset(struct pci_dev *dev, bool probe) 5011 4999 { 5012 5000 struct pci_dev *pdev; 5013 5001 ··· 5025 5013 return pci_bridge_secondary_bus_reset(dev->bus->self); 5026 5014 } 5027 5015 5028 - static int pci_reset_hotplug_slot(struct hotplug_slot *hotplug, int probe) 5016 + static int pci_reset_hotplug_slot(struct hotplug_slot *hotplug, bool probe) 5029 5017 { 5030 5018 int rc = -ENOTTY; 5031 5019 ··· 5040 5028 return rc; 5041 5029 } 5042 5030 5043 - static int pci_dev_reset_slot_function(struct pci_dev *dev, int probe) 5031 + static int pci_dev_reset_slot_function(struct pci_dev *dev, bool probe) 5044 5032 { 5045 5033 if (dev->multifunction || dev->subordinate || !dev->slot || 5046 5034 dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET) ··· 5049 5037 return pci_reset_hotplug_slot(dev->slot->hotplug, probe); 5050 5038 } 5051 5039 5052 - static int pci_reset_bus_function(struct pci_dev *dev, int probe) 5040 + static int pci_reset_bus_function(struct pci_dev *dev, bool probe) 5053 5041 { 5054 5042 int rc; 5055 5043 ··· 5133 5121 err_handler->reset_done(dev); 5134 5122 } 5135 5123 5124 + /* dev->reset_methods[] is a 0-terminated list of indices into this array */ 5125 + static const struct pci_reset_fn_method pci_reset_fn_methods[] = { 5126 + { }, 5127 + { pci_dev_specific_reset, .name = "device_specific" }, 5128 + { pci_dev_acpi_reset, .name = "acpi" }, 5129 + { pcie_reset_flr, .name = "flr" }, 5130 + { pci_af_flr, .name = "af_flr" }, 5131 + { pci_pm_reset, .name = "pm" }, 5132 + { pci_reset_bus_function, .name = "bus" }, 5133 + }; 5134 + 5135 + static ssize_t reset_method_show(struct device *dev, 5136 + 
struct device_attribute *attr, char *buf) 5137 + { 5138 + struct pci_dev *pdev = to_pci_dev(dev); 5139 + ssize_t len = 0; 5140 + int i, m; 5141 + 5142 + for (i = 0; i < PCI_NUM_RESET_METHODS; i++) { 5143 + m = pdev->reset_methods[i]; 5144 + if (!m) 5145 + break; 5146 + 5147 + len += sysfs_emit_at(buf, len, "%s%s", len ? " " : "", 5148 + pci_reset_fn_methods[m].name); 5149 + } 5150 + 5151 + if (len) 5152 + len += sysfs_emit_at(buf, len, "\n"); 5153 + 5154 + return len; 5155 + } 5156 + 5157 + static int reset_method_lookup(const char *name) 5158 + { 5159 + int m; 5160 + 5161 + for (m = 1; m < PCI_NUM_RESET_METHODS; m++) { 5162 + if (sysfs_streq(name, pci_reset_fn_methods[m].name)) 5163 + return m; 5164 + } 5165 + 5166 + return 0; /* not found */ 5167 + } 5168 + 5169 + static ssize_t reset_method_store(struct device *dev, 5170 + struct device_attribute *attr, 5171 + const char *buf, size_t count) 5172 + { 5173 + struct pci_dev *pdev = to_pci_dev(dev); 5174 + char *options, *name; 5175 + int m, n; 5176 + u8 reset_methods[PCI_NUM_RESET_METHODS] = { 0 }; 5177 + 5178 + if (sysfs_streq(buf, "")) { 5179 + pdev->reset_methods[0] = 0; 5180 + pci_warn(pdev, "All device reset methods disabled by user"); 5181 + return count; 5182 + } 5183 + 5184 + if (sysfs_streq(buf, "default")) { 5185 + pci_init_reset_methods(pdev); 5186 + return count; 5187 + } 5188 + 5189 + options = kstrndup(buf, count, GFP_KERNEL); 5190 + if (!options) 5191 + return -ENOMEM; 5192 + 5193 + n = 0; 5194 + while ((name = strsep(&options, " ")) != NULL) { 5195 + if (sysfs_streq(name, "")) 5196 + continue; 5197 + 5198 + name = strim(name); 5199 + 5200 + m = reset_method_lookup(name); 5201 + if (!m) { 5202 + pci_err(pdev, "Invalid reset method '%s'", name); 5203 + goto error; 5204 + } 5205 + 5206 + if (pci_reset_fn_methods[m].reset_fn(pdev, PCI_RESET_PROBE)) { 5207 + pci_err(pdev, "Unsupported reset method '%s'", name); 5208 + goto error; 5209 + } 5210 + 5211 + if (n == PCI_NUM_RESET_METHODS - 1) { 5212 + 
pci_err(pdev, "Too many reset methods\n"); 5213 + goto error; 5214 + } 5215 + 5216 + reset_methods[n++] = m; 5217 + } 5218 + 5219 + reset_methods[n] = 0; 5220 + 5221 + /* Warn if dev-specific supported but not highest priority */ 5222 + if (pci_reset_fn_methods[1].reset_fn(pdev, PCI_RESET_PROBE) == 0 && 5223 + reset_methods[0] != 1) 5224 + pci_warn(pdev, "Device-specific reset disabled/de-prioritized by user"); 5225 + memcpy(pdev->reset_methods, reset_methods, sizeof(pdev->reset_methods)); 5226 + kfree(options); 5227 + return count; 5228 + 5229 + error: 5230 + /* Leave previous methods unchanged */ 5231 + kfree(options); 5232 + return -EINVAL; 5233 + } 5234 + static DEVICE_ATTR_RW(reset_method); 5235 + 5236 + static struct attribute *pci_dev_reset_method_attrs[] = { 5237 + &dev_attr_reset_method.attr, 5238 + NULL, 5239 + }; 5240 + 5241 + static umode_t pci_dev_reset_method_attr_is_visible(struct kobject *kobj, 5242 + struct attribute *a, int n) 5243 + { 5244 + struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); 5245 + 5246 + if (!pci_reset_supported(pdev)) 5247 + return 0; 5248 + 5249 + return a->mode; 5250 + } 5251 + 5252 + const struct attribute_group pci_dev_reset_method_attr_group = { 5253 + .attrs = pci_dev_reset_method_attrs, 5254 + .is_visible = pci_dev_reset_method_attr_is_visible, 5255 + }; 5256 + 5136 5257 /** 5137 5258 * __pci_reset_function_locked - reset a PCI device function while holding 5138 5259 * the @dev mutex lock. ··· 5288 5143 */ 5289 5144 int __pci_reset_function_locked(struct pci_dev *dev) 5290 5145 { 5291 - int rc; 5146 + int i, m, rc = -ENOTTY; 5292 5147 5293 5148 might_sleep(); 5294 5149 5295 5150 /* 5296 - * A reset method returns -ENOTTY if it doesn't support this device 5297 - * and we should try the next method. 5151 + * A reset method returns -ENOTTY if it doesn't support this device and 5152 + * we should try the next method. 5298 5153 * 5299 - * If it returns 0 (success), we're finished. 
If it returns any 5300 - * other error, we're also finished: this indicates that further 5301 - * reset mechanisms might be broken on the device. 5154 + * If it returns 0 (success), we're finished. If it returns any other 5155 + * error, we're also finished: this indicates that further reset 5156 + * mechanisms might be broken on the device. 5302 5157 */ 5303 - rc = pci_dev_specific_reset(dev, 0); 5304 - if (rc != -ENOTTY) 5305 - return rc; 5306 - if (pcie_has_flr(dev)) { 5307 - rc = pcie_flr(dev); 5158 + for (i = 0; i < PCI_NUM_RESET_METHODS; i++) { 5159 + m = dev->reset_methods[i]; 5160 + if (!m) 5161 + return -ENOTTY; 5162 + 5163 + rc = pci_reset_fn_methods[m].reset_fn(dev, PCI_RESET_DO_RESET); 5164 + if (!rc) 5165 + return 0; 5308 5166 if (rc != -ENOTTY) 5309 5167 return rc; 5310 5168 } 5311 - rc = pci_af_flr(dev, 0); 5312 - if (rc != -ENOTTY) 5313 - return rc; 5314 - rc = pci_pm_reset(dev, 0); 5315 - if (rc != -ENOTTY) 5316 - return rc; 5317 - return pci_reset_bus_function(dev, 0); 5169 + 5170 + return -ENOTTY; 5318 5171 } 5319 5172 EXPORT_SYMBOL_GPL(__pci_reset_function_locked); 5320 5173 5321 5174 /** 5322 - * pci_probe_reset_function - check whether the device can be safely reset 5323 - * @dev: PCI device to reset 5175 + * pci_init_reset_methods - check whether device can be safely reset 5176 + * and store supported reset mechanisms. 5177 + * @dev: PCI device to check for reset mechanisms 5324 5178 * 5325 5179 * Some devices allow an individual function to be reset without affecting 5326 - * other functions in the same device. The PCI device must be responsive 5327 - * to PCI config space in order to use this function. 5180 + * other functions in the same device. The PCI device must be in D0-D3hot 5181 + * state. 5328 5182 * 5329 - * Returns 0 if the device function can be reset or negative if the 5330 - * device doesn't support resetting a single function. 
5183 + * Stores reset mechanisms supported by device in reset_methods byte array 5184 + * which is a member of struct pci_dev. 5331 5185 */ 5332 - int pci_probe_reset_function(struct pci_dev *dev) 5186 + void pci_init_reset_methods(struct pci_dev *dev) 5333 5187 { 5334 - int rc; 5188 + int m, i, rc; 5189 + 5190 + BUILD_BUG_ON(ARRAY_SIZE(pci_reset_fn_methods) != PCI_NUM_RESET_METHODS); 5335 5191 5336 5192 might_sleep(); 5337 5193 5338 - rc = pci_dev_specific_reset(dev, 1); 5339 - if (rc != -ENOTTY) 5340 - return rc; 5341 - if (pcie_has_flr(dev)) 5342 - return 0; 5343 - rc = pci_af_flr(dev, 1); 5344 - if (rc != -ENOTTY) 5345 - return rc; 5346 - rc = pci_pm_reset(dev, 1); 5347 - if (rc != -ENOTTY) 5348 - return rc; 5194 + i = 0; 5195 + for (m = 1; m < PCI_NUM_RESET_METHODS; m++) { 5196 + rc = pci_reset_fn_methods[m].reset_fn(dev, PCI_RESET_PROBE); 5197 + if (!rc) 5198 + dev->reset_methods[i++] = m; 5199 + else if (rc != -ENOTTY) 5200 + break; 5201 + } 5349 5202 5350 - return pci_reset_bus_function(dev, 1); 5203 + dev->reset_methods[i] = 0; 5351 5204 } 5352 5205 5353 5206 /** ··· 5368 5225 { 5369 5226 int rc; 5370 5227 5371 - if (!dev->reset_fn) 5228 + if (!pci_reset_supported(dev)) 5372 5229 return -ENOTTY; 5373 5230 5374 5231 pci_dev_lock(dev); ··· 5404 5261 { 5405 5262 int rc; 5406 5263 5407 - if (!dev->reset_fn) 5264 + if (!pci_reset_supported(dev)) 5408 5265 return -ENOTTY; 5409 5266 5410 5267 pci_dev_save_and_disable(dev); ··· 5427 5284 { 5428 5285 int rc; 5429 5286 5430 - if (!dev->reset_fn) 5287 + if (!pci_reset_supported(dev)) 5431 5288 return -ENOTTY; 5432 5289 5433 5290 if (!pci_dev_trylock(dev)) ··· 5655 5512 } 5656 5513 } 5657 5514 5658 - static int pci_slot_reset(struct pci_slot *slot, int probe) 5515 + static int pci_slot_reset(struct pci_slot *slot, bool probe) 5659 5516 { 5660 5517 int rc; 5661 5518 ··· 5683 5540 */ 5684 5541 int pci_probe_reset_slot(struct pci_slot *slot) 5685 5542 { 5686 - return pci_slot_reset(slot, 1); 5543 + return 
pci_slot_reset(slot, PCI_RESET_PROBE); 5687 5544 } 5688 5545 EXPORT_SYMBOL_GPL(pci_probe_reset_slot); 5689 5546 ··· 5706 5563 { 5707 5564 int rc; 5708 5565 5709 - rc = pci_slot_reset(slot, 1); 5566 + rc = pci_slot_reset(slot, PCI_RESET_PROBE); 5710 5567 if (rc) 5711 5568 return rc; 5712 5569 5713 5570 if (pci_slot_trylock(slot)) { 5714 5571 pci_slot_save_and_disable_locked(slot); 5715 5572 might_sleep(); 5716 - rc = pci_reset_hotplug_slot(slot->hotplug, 0); 5573 + rc = pci_reset_hotplug_slot(slot->hotplug, PCI_RESET_DO_RESET); 5717 5574 pci_slot_restore_locked(slot); 5718 5575 pci_slot_unlock(slot); 5719 5576 } else ··· 5722 5579 return rc; 5723 5580 } 5724 5581 5725 - static int pci_bus_reset(struct pci_bus *bus, int probe) 5582 + static int pci_bus_reset(struct pci_bus *bus, bool probe) 5726 5583 { 5727 5584 int ret; 5728 5585 ··· 5768 5625 goto bus_reset; 5769 5626 5770 5627 list_for_each_entry(slot, &bus->slots, list) 5771 - if (pci_slot_reset(slot, 0)) 5628 + if (pci_slot_reset(slot, PCI_RESET_DO_RESET)) 5772 5629 goto bus_reset; 5773 5630 5774 5631 mutex_unlock(&pci_slot_mutex); 5775 5632 return 0; 5776 5633 bus_reset: 5777 5634 mutex_unlock(&pci_slot_mutex); 5778 - return pci_bus_reset(bridge->subordinate, 0); 5635 + return pci_bus_reset(bridge->subordinate, PCI_RESET_DO_RESET); 5779 5636 } 5780 5637 5781 5638 /** ··· 5786 5643 */ 5787 5644 int pci_probe_reset_bus(struct pci_bus *bus) 5788 5645 { 5789 - return pci_bus_reset(bus, 1); 5646 + return pci_bus_reset(bus, PCI_RESET_PROBE); 5790 5647 } 5791 5648 EXPORT_SYMBOL_GPL(pci_probe_reset_bus); 5792 5649 ··· 5800 5657 { 5801 5658 int rc; 5802 5659 5803 - rc = pci_bus_reset(bus, 1); 5660 + rc = pci_bus_reset(bus, PCI_RESET_PROBE); 5804 5661 if (rc) 5805 5662 return rc; 5806 5663
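The pci.c rework above replaces the hard-coded reset ordering in `__pci_reset_function_locked()` with a per-device `reset_methods[]` array: a 0-terminated list of indices into `pci_reset_fn_methods[]`, tried in order until one succeeds or fails hard. A user-space sketch of that dispatch loop (the `try_*` functions and `method_tab` are stand-ins for the real method table):

```c
#include <errno.h>
#include <stddef.h>

#define NUM_METHODS 4

static int try_fails(void *dev) { (void)dev; return -ENOTTY; }
static int try_works(void *dev) { (void)dev; return 0; }

/* Index 0 is unused: a 0 entry in reset_methods[] terminates the list. */
static int (*const method_tab[NUM_METHODS])(void *) = {
	NULL,
	try_fails,
	try_works,
	try_fails,
};

static int do_reset(const unsigned char *reset_methods, void *dev)
{
	for (int i = 0; i < NUM_METHODS; i++) {
		int m = reset_methods[i];

		if (!m)
			return -ENOTTY; /* ran out of supported methods */

		int rc = method_tab[m](dev);

		if (rc != -ENOTTY)
			return rc; /* success, or a hard error: stop trying */
	}
	return -ENOTTY;
}
```

-ENOTTY means "try the next method"; any other return, success or failure, ends the loop, matching the comment in `__pci_reset_function_locked()`.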
+41 -6
drivers/pci/pci.h
··· 33 33 int pci_mmap_fits(struct pci_dev *pdev, int resno, struct vm_area_struct *vmai, 34 34 enum pci_mmap_api mmap_api); 35 35 36 - int pci_probe_reset_function(struct pci_dev *dev); 36 + bool pci_reset_supported(struct pci_dev *dev); 37 + void pci_init_reset_methods(struct pci_dev *dev); 37 38 int pci_bridge_secondary_bus_reset(struct pci_dev *dev); 38 39 int pci_bus_error_reset(struct pci_dev *dev); 40 + 41 + struct pci_cap_saved_data { 42 + u16 cap_nr; 43 + bool cap_extended; 44 + unsigned int size; 45 + u32 data[]; 46 + }; 47 + 48 + struct pci_cap_saved_state { 49 + struct hlist_node next; 50 + struct pci_cap_saved_data cap; 51 + }; 52 + 53 + void pci_allocate_cap_save_buffers(struct pci_dev *dev); 54 + void pci_free_cap_save_buffers(struct pci_dev *dev); 55 + int pci_add_cap_save_buffer(struct pci_dev *dev, char cap, unsigned int size); 56 + int pci_add_ext_cap_save_buffer(struct pci_dev *dev, 57 + u16 cap, unsigned int size); 58 + struct pci_cap_saved_state *pci_find_saved_cap(struct pci_dev *dev, char cap); 59 + struct pci_cap_saved_state *pci_find_saved_ext_cap(struct pci_dev *dev, 60 + u16 cap); 39 61 40 62 #define PCI_PM_D2_DELAY 200 /* usec; see PCIe r4.0, sec 5.9.1 */ 41 63 #define PCI_PM_D3HOT_WAIT 10 /* msec */ ··· 122 100 void pci_ea_init(struct pci_dev *dev); 123 101 void pci_msi_init(struct pci_dev *dev); 124 102 void pci_msix_init(struct pci_dev *dev); 125 - void pci_allocate_cap_save_buffers(struct pci_dev *dev); 126 - void pci_free_cap_save_buffers(struct pci_dev *dev); 127 103 bool pci_bridge_d3_possible(struct pci_dev *dev); 128 104 void pci_bridge_d3_update(struct pci_dev *dev); 129 105 void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev); ··· 624 604 struct pci_dev_reset_methods { 625 605 u16 vendor; 626 606 u16 device; 627 - int (*reset)(struct pci_dev *dev, int probe); 607 + int (*reset)(struct pci_dev *dev, bool probe); 608 + }; 609 + 610 + struct pci_reset_fn_method { 611 + int (*reset_fn)(struct pci_dev *pdev, bool probe); 
612 + char *name; 628 613 }; 629 614 630 615 #ifdef CONFIG_PCI_QUIRKS 631 - int pci_dev_specific_reset(struct pci_dev *dev, int probe); 616 + int pci_dev_specific_reset(struct pci_dev *dev, bool probe); 632 617 #else 633 - static inline int pci_dev_specific_reset(struct pci_dev *dev, int probe) 618 + static inline int pci_dev_specific_reset(struct pci_dev *dev, bool probe) 634 619 { 635 620 return -ENOTTY; 636 621 } ··· 723 698 #ifdef CONFIG_ACPI 724 699 int pci_acpi_program_hp_params(struct pci_dev *dev); 725 700 extern const struct attribute_group pci_dev_acpi_attr_group; 701 + void pci_set_acpi_fwnode(struct pci_dev *dev); 702 + int pci_dev_acpi_reset(struct pci_dev *dev, bool probe); 726 703 #else 704 + static inline int pci_dev_acpi_reset(struct pci_dev *dev, bool probe) 705 + { 706 + return -ENOTTY; 707 + } 708 + 709 + static inline void pci_set_acpi_fwnode(struct pci_dev *dev) {} 727 710 static inline int pci_acpi_program_hp_params(struct pci_dev *dev) 728 711 { 729 712 return -ENODEV; ··· 741 708 #ifdef CONFIG_PCIEASPM 742 709 extern const struct attribute_group aspm_ctrl_attr_group; 743 710 #endif 711 + 712 + extern const struct attribute_group pci_dev_reset_method_attr_group; 744 713 745 714 #endif /* DRIVERS_PCI_H */
+5 -7
drivers/pci/pcie/aer.c
··· 1407 1407 } 1408 1408 1409 1409 if (type == PCI_EXP_TYPE_RC_EC || type == PCI_EXP_TYPE_RC_END) { 1410 - if (pcie_has_flr(dev)) { 1411 - rc = pcie_flr(dev); 1412 - pci_info(dev, "has been reset (%d)\n", rc); 1413 - } else { 1414 - pci_info(dev, "not reset (no FLR support)\n"); 1415 - rc = -ENOTTY; 1416 - } 1410 + rc = pcie_reset_flr(dev, PCI_RESET_DO_RESET); 1411 + if (!rc) 1412 + pci_info(dev, "has been reset\n"); 1413 + else 1414 + pci_info(dev, "not reset (no FLR support: %d)\n", rc); 1417 1415 } else { 1418 1416 rc = pci_bus_error_reset(dev); 1419 1417 pci_info(dev, "%s Port link has been reset (%d)\n",
+7 -2
drivers/pci/pcie/portdrv_core.c
··· 257 257 services |= PCIE_PORT_SERVICE_DPC; 258 258 259 259 if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM || 260 - pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) 261 - services |= PCIE_PORT_SERVICE_BWNOTIF; 260 + pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) { 261 + u32 linkcap; 262 + 263 + pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &linkcap); 264 + if (linkcap & PCI_EXP_LNKCAP_LBNC) 265 + services |= PCIE_PORT_SERVICE_BWNOTIF; 266 + } 262 267 263 268 return services; 264 269 }
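The portdrv change above gates the Bandwidth Notification service on the Link Bandwidth Notification Capability bit of the Link Capabilities register. A minimal sketch of that test, using the same `PCI_EXP_LNKCAP_LBNC` value (bit 21) defined in the kernel's `pci_regs.h`:

```c
#include <assert.h>
#include <stdint.h>

/* Bandwidth Notification is only advertised when LNKCAP's LBNC bit
 * (Link Bandwidth Notification Capability, bit 21) is set. */
#define PCI_EXP_LNKCAP_LBNC 0x00200000

static int port_has_bwnotif(uint32_t linkcap)
{
	return (linkcap & PCI_EXP_LNKCAP_LBNC) != 0;
}
```

Without this check, the port driver could register the service on ports whose hardware never generates bandwidth notification interrupts.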
+1 -3
drivers/pci/pcie/ptm.c
··· 60 60 return; 61 61 62 62 save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_PTM); 63 - if (!save_state) { 64 - pci_err(dev, "no suspend buffer for PTM\n"); 63 + if (!save_state) 65 64 return; 66 - } 67 65 68 66 cap = (u16 *)&save_state->cap.data[0]; 69 67 pci_read_config_word(dev, ptm + PCI_PTM_CTRL, cap);
+18 -11
drivers/pci/probe.c
··· 19 19 #include <linux/hypervisor.h> 20 20 #include <linux/irqdomain.h> 21 21 #include <linux/pm_runtime.h> 22 + #include <linux/bitfield.h> 22 23 #include "pci.h" 23 24 24 25 #define CARDBUS_LATENCY_TIMER 176 /* secondary latency timer */ ··· 595 594 bridge->native_pme = 1; 596 595 bridge->native_ltr = 1; 597 596 bridge->native_dpc = 1; 597 + bridge->domain_nr = PCI_DOMAIN_NR_NOT_SET; 598 598 599 599 device_initialize(&bridge->dev); 600 600 } ··· 830 828 { 831 829 struct irq_domain *d; 832 830 831 + /* If the host bridge driver sets a MSI domain of the bridge, use it */ 832 + d = dev_get_msi_domain(bus->bridge); 833 + 833 834 /* 834 835 * Any firmware interface that can resolve the msi_domain 835 836 * should be called from here. 836 837 */ 837 - d = pci_host_bridge_of_msi_domain(bus); 838 + if (!d) 839 + d = pci_host_bridge_of_msi_domain(bus); 838 840 if (!d) 839 841 d = pci_host_bridge_acpi_msi_domain(bus); 840 842 ··· 904 898 bus->ops = bridge->ops; 905 899 bus->number = bus->busn_res.start = bridge->busnr; 906 900 #ifdef CONFIG_PCI_DOMAINS_GENERIC 907 - bus->domain_nr = pci_bus_find_domain_nr(bus, parent); 901 + if (bridge->domain_nr == PCI_DOMAIN_NR_NOT_SET) 902 + bus->domain_nr = pci_bus_find_domain_nr(bus, parent); 903 + else 904 + bus->domain_nr = bridge->domain_nr; 908 905 #endif 909 906 910 907 b = pci_find_bus(pci_domain_nr(bus), bridge->busnr); ··· 1507 1498 pdev->pcie_cap = pos; 1508 1499 pci_read_config_word(pdev, pos + PCI_EXP_FLAGS, &reg16); 1509 1500 pdev->pcie_flags_reg = reg16; 1510 - pci_read_config_word(pdev, pos + PCI_EXP_DEVCAP, &reg16); 1511 - pdev->pcie_mpss = reg16 & PCI_EXP_DEVCAP_PAYLOAD; 1501 + pci_read_config_dword(pdev, pos + PCI_EXP_DEVCAP, &pdev->devcap); 1502 + pdev->pcie_mpss = FIELD_GET(PCI_EXP_DEVCAP_PAYLOAD, pdev->devcap); 1512 1503 1513 1504 parent = pci_upstream_bridge(pdev); 1514 1505 if (!parent) ··· 1818 1809 dev->error_state = pci_channel_io_normal; 1819 1810 set_pcie_port_type(dev); 1820 1811 1812 + 
pci_set_of_node(dev); 1813 + pci_set_acpi_fwnode(dev); 1814 + 1821 1815 pci_dev_assign_slot(dev); 1822 1816 1823 1817 /* ··· 1958 1946 default: /* unknown header */ 1959 1947 pci_err(dev, "unknown header type %02x, ignoring device\n", 1960 1948 dev->hdr_type); 1949 + pci_release_of_node(dev); 1961 1950 return -EIO; 1962 1951 1963 1952 bad: ··· 2238 2225 { 2239 2226 pci_aer_exit(dev); 2240 2227 pci_rcec_exit(dev); 2241 - pci_vpd_release(dev); 2242 2228 pci_iov_release(dev); 2243 2229 pci_free_cap_save_buffers(dev); 2244 2230 } ··· 2386 2374 dev->vendor = l & 0xffff; 2387 2375 dev->device = (l >> 16) & 0xffff; 2388 2376 2389 - pci_set_of_node(dev); 2390 - 2391 2377 if (pci_setup_device(dev)) { 2392 - pci_release_of_node(dev); 2393 2378 pci_bus_put(dev->bus); 2394 2379 kfree(dev); 2395 2380 return NULL; ··· 2437 2428 pci_rcec_init(dev); /* Root Complex Event Collector */ 2438 2429 2439 2430 pcie_report_downtraining(dev); 2440 - 2441 - if (pci_probe_reset_function(dev) == 0) 2442 - dev->reset_fn = 1; 2431 + pci_init_reset_methods(dev); 2443 2432 } 2444 2433 2445 2434 /*
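The probe.c hunk above caches the full 32-bit DEVCAP register in `dev->devcap` and extracts Max Payload Size Supported from its low bits with `FIELD_GET()`; this cached copy is also what `pcie_reset_flr()` later tests for `PCI_EXP_DEVCAP_FLR`. A sketch of the field extraction, with a plain mask standing in for the kernel's `FIELD_GET()` helper (`PCI_EXP_DEVCAP_PAYLOAD` is bits 2:0 of DEVCAP):

```c
#include <assert.h>
#include <stdint.h>

#define PCI_EXP_DEVCAP_PAYLOAD 0x00000007 /* DEVCAP bits 2:0 */

/* Extract the Max Payload Size Supported encoding from a cached DEVCAP. */
static unsigned int devcap_mpss(uint32_t devcap)
{
	return devcap & PCI_EXP_DEVCAP_PAYLOAD; /* 0 => 128B ... 5 => 4096B */
}

/* Decode the encoding into bytes: 128 << mpss, per the PCIe spec. */
static unsigned int mpss_bytes(uint32_t devcap)
{
	return 128u << devcap_mpss(devcap);
}
```

Reading DEVCAP once and keeping it in `struct pci_dev` means later consumers (MPS setup, FLR probing) need no further config-space accesses.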
+1
drivers/pci/proc.c
···
83 83 		buf += 4;
84 84 		pos += 4;
85 85 		cnt -= 4;
   86 +		cond_resched();
86 87 	}
87 88 
88 89 	if (cnt >= 2) {
+107 -21
drivers/pci/quirks.c
··· 1822 1822 DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_HUAWEI, 0x1610, PCI_CLASS_BRIDGE_PCI, 8, quirk_pcie_mch); 1823 1823 1824 1824 /* 1825 + * HiSilicon KunPeng920 and KunPeng930 have devices appear as PCI but are 1826 + * actually on the AMBA bus. These fake PCI devices can support SVA via 1827 + * SMMU stall feature, by setting dma-can-stall for ACPI platforms. 1828 + * 1829 + * Normally stalling must not be enabled for PCI devices, since it would 1830 + * break the PCI requirement for free-flowing writes and may lead to 1831 + * deadlock. We expect PCI devices to support ATS and PRI if they want to 1832 + * be fault-tolerant, so there's no ACPI binding to describe anything else, 1833 + * even when a "PCI" device turns out to be a regular old SoC device 1834 + * dressed up as a RCiEP and normal rules don't apply. 1835 + */ 1836 + static void quirk_huawei_pcie_sva(struct pci_dev *pdev) 1837 + { 1838 + struct property_entry properties[] = { 1839 + PROPERTY_ENTRY_BOOL("dma-can-stall"), 1840 + {}, 1841 + }; 1842 + 1843 + if (pdev->revision != 0x21 && pdev->revision != 0x30) 1844 + return; 1845 + 1846 + pdev->pasid_no_tlp = 1; 1847 + 1848 + /* 1849 + * Set the dma-can-stall property on ACPI platforms. Device tree 1850 + * can set it directly. 
1851 + */ 1852 + if (!pdev->dev.of_node && 1853 + device_add_properties(&pdev->dev, properties)) 1854 + pci_warn(pdev, "could not add stall property"); 1855 + } 1856 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0xa250, quirk_huawei_pcie_sva); 1857 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0xa251, quirk_huawei_pcie_sva); 1858 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0xa255, quirk_huawei_pcie_sva); 1859 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0xa256, quirk_huawei_pcie_sva); 1860 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0xa258, quirk_huawei_pcie_sva); 1861 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0xa259, quirk_huawei_pcie_sva); 1862 + 1863 + /* 1825 1864 * It's possible for the MSI to get corrupted if SHPC and ACPI are used 1826 1865 * together on certain PXH-based systems. 1827 1866 */ ··· 3274 3235 { 3275 3236 dev->pcie_mpss = 1; /* 256 bytes */ 3276 3237 } 3277 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE, 3278 - PCI_DEVICE_ID_SOLARFLARE_SFC4000A_0, fixup_mpss_256); 3279 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE, 3280 - PCI_DEVICE_ID_SOLARFLARE_SFC4000A_1, fixup_mpss_256); 3281 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE, 3282 - PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256); 3238 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE, 3239 + PCI_DEVICE_ID_SOLARFLARE_SFC4000A_0, fixup_mpss_256); 3240 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE, 3241 + PCI_DEVICE_ID_SOLARFLARE_SFC4000A_1, fixup_mpss_256); 3242 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE, 3243 + PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256); 3244 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_ASMEDIA, 0x0612, fixup_mpss_256); 3283 3245 3284 3246 /* 3285 3247 * Intel 5000 and 5100 Memory controllers have an erratum with read completion ··· 3743 3703 * reset a single function if other methods (e.g. FLR, PM D0->D3) are 3744 3704 * not available. 
3745 3705 */ 3746 - static int reset_intel_82599_sfp_virtfn(struct pci_dev *dev, int probe) 3706 + static int reset_intel_82599_sfp_virtfn(struct pci_dev *dev, bool probe) 3747 3707 { 3748 3708 /* 3749 3709 * http://www.intel.com/content/dam/doc/datasheet/82599-10-gbe-controller-datasheet.pdf ··· 3765 3725 #define NSDE_PWR_STATE 0xd0100 3766 3726 #define IGD_OPERATION_TIMEOUT 10000 /* set timeout 10 seconds */ 3767 3727 3768 - static int reset_ivb_igd(struct pci_dev *dev, int probe) 3728 + static int reset_ivb_igd(struct pci_dev *dev, bool probe) 3769 3729 { 3770 3730 void __iomem *mmio_base; 3771 3731 unsigned long timeout; ··· 3808 3768 } 3809 3769 3810 3770 /* Device-specific reset method for Chelsio T4-based adapters */ 3811 - static int reset_chelsio_generic_dev(struct pci_dev *dev, int probe) 3771 + static int reset_chelsio_generic_dev(struct pci_dev *dev, bool probe) 3812 3772 { 3813 3773 u16 old_command; 3814 3774 u16 msix_flags; ··· 3886 3846 * Chapter 3: NVMe control registers 3887 3847 * Chapter 7.3: Reset behavior 3888 3848 */ 3889 - static int nvme_disable_and_flr(struct pci_dev *dev, int probe) 3849 + static int nvme_disable_and_flr(struct pci_dev *dev, bool probe) 3890 3850 { 3891 3851 void __iomem *bar; 3892 3852 u16 cmd; 3893 3853 u32 cfg; 3894 3854 3895 3855 if (dev->class != PCI_CLASS_STORAGE_EXPRESS || 3896 - !pcie_has_flr(dev) || !pci_resource_start(dev, 0)) 3856 + pcie_reset_flr(dev, PCI_RESET_PROBE) || !pci_resource_start(dev, 0)) 3897 3857 return -ENOTTY; 3898 3858 3899 3859 if (probe) ··· 3960 3920 * device too soon after FLR. A 250ms delay after FLR has heuristically 3961 3921 * proven to produce reliably working results for device assignment cases. 
3962 3922 */ 3963 - static int delay_250ms_after_flr(struct pci_dev *dev, int probe) 3923 + static int delay_250ms_after_flr(struct pci_dev *dev, bool probe) 3964 3924 { 3965 - if (!pcie_has_flr(dev)) 3966 - return -ENOTTY; 3967 - 3968 3925 if (probe) 3969 - return 0; 3926 + return pcie_reset_flr(dev, PCI_RESET_PROBE); 3970 3927 3971 - pcie_flr(dev); 3928 + pcie_reset_flr(dev, PCI_RESET_DO_RESET); 3972 3929 3973 3930 msleep(250); 3974 3931 ··· 3980 3943 #define HINIC_OPERATION_TIMEOUT 15000 /* 15 seconds */ 3981 3944 3982 3945 /* Device-specific reset method for Huawei Intelligent NIC virtual functions */ 3983 - static int reset_hinic_vf_dev(struct pci_dev *pdev, int probe) 3946 + static int reset_hinic_vf_dev(struct pci_dev *pdev, bool probe) 3984 3947 { 3985 3948 unsigned long timeout; 3986 3949 void __iomem *bar; ··· 4057 4020 * because when a host assigns a device to a guest VM, the host may need 4058 4021 * to reset the device but probably doesn't have a driver for it. 4059 4022 */ 4060 - int pci_dev_specific_reset(struct pci_dev *dev, int probe) 4023 + int pci_dev_specific_reset(struct pci_dev *dev, bool probe) 4061 4024 { 4062 4025 const struct pci_dev_reset_methods *i; 4063 4026 ··· 4652 4615 PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); 4653 4616 } 4654 4617 4618 + /* 4619 + * Each of these NXP Root Ports is in a Root Complex with a unique segment 4620 + * number and does provide isolation features to disable peer transactions 4621 + * and validate bus numbers in requests, but does not provide an ACS 4622 + * capability. 
4623 + */ 4624 + static int pci_quirk_nxp_rp_acs(struct pci_dev *dev, u16 acs_flags) 4625 + { 4626 + return pci_acs_ctrl_enabled(acs_flags, 4627 + PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); 4628 + } 4629 + 4655 4630 static int pci_quirk_al_acs(struct pci_dev *dev, u16 acs_flags) 4656 4631 { 4657 4632 if (pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) ··· 4890 4841 { 0x10df, 0x720, pci_quirk_mf_endpoint_acs }, /* Emulex Skyhawk-R */ 4891 4842 /* Cavium ThunderX */ 4892 4843 { PCI_VENDOR_ID_CAVIUM, PCI_ANY_ID, pci_quirk_cavium_acs }, 4844 + /* Cavium multi-function devices */ 4845 + { PCI_VENDOR_ID_CAVIUM, 0xA026, pci_quirk_mf_endpoint_acs }, 4846 + { PCI_VENDOR_ID_CAVIUM, 0xA059, pci_quirk_mf_endpoint_acs }, 4847 + { PCI_VENDOR_ID_CAVIUM, 0xA060, pci_quirk_mf_endpoint_acs }, 4893 4848 /* APM X-Gene */ 4894 4849 { PCI_VENDOR_ID_AMCC, 0xE004, pci_quirk_xgene_acs }, 4895 4850 /* Ampere Computing */ ··· 4914 4861 { PCI_VENDOR_ID_ZHAOXIN, 0x3038, pci_quirk_mf_endpoint_acs }, 4915 4862 { PCI_VENDOR_ID_ZHAOXIN, 0x3104, pci_quirk_mf_endpoint_acs }, 4916 4863 { PCI_VENDOR_ID_ZHAOXIN, 0x9083, pci_quirk_mf_endpoint_acs }, 4864 + /* NXP root ports, xx=16, 12, or 08 cores */ 4865 + /* LX2xx0A : without security features + CAN-FD */ 4866 + { PCI_VENDOR_ID_NXP, 0x8d81, pci_quirk_nxp_rp_acs }, 4867 + { PCI_VENDOR_ID_NXP, 0x8da1, pci_quirk_nxp_rp_acs }, 4868 + { PCI_VENDOR_ID_NXP, 0x8d83, pci_quirk_nxp_rp_acs }, 4869 + /* LX2xx0C : security features + CAN-FD */ 4870 + { PCI_VENDOR_ID_NXP, 0x8d80, pci_quirk_nxp_rp_acs }, 4871 + { PCI_VENDOR_ID_NXP, 0x8da0, pci_quirk_nxp_rp_acs }, 4872 + { PCI_VENDOR_ID_NXP, 0x8d82, pci_quirk_nxp_rp_acs }, 4873 + /* LX2xx0E : security features + CAN */ 4874 + { PCI_VENDOR_ID_NXP, 0x8d90, pci_quirk_nxp_rp_acs }, 4875 + { PCI_VENDOR_ID_NXP, 0x8db0, pci_quirk_nxp_rp_acs }, 4876 + { PCI_VENDOR_ID_NXP, 0x8d92, pci_quirk_nxp_rp_acs }, 4877 + /* LX2xx0N : without security features + CAN */ 4878 + { PCI_VENDOR_ID_NXP, 0x8d91, pci_quirk_nxp_rp_acs }, 
4879 + { PCI_VENDOR_ID_NXP, 0x8db1, pci_quirk_nxp_rp_acs }, 4880 + { PCI_VENDOR_ID_NXP, 0x8d93, pci_quirk_nxp_rp_acs }, 4881 + /* LX2xx2A : without security features + CAN-FD */ 4882 + { PCI_VENDOR_ID_NXP, 0x8d89, pci_quirk_nxp_rp_acs }, 4883 + { PCI_VENDOR_ID_NXP, 0x8da9, pci_quirk_nxp_rp_acs }, 4884 + { PCI_VENDOR_ID_NXP, 0x8d8b, pci_quirk_nxp_rp_acs }, 4885 + /* LX2xx2C : security features + CAN-FD */ 4886 + { PCI_VENDOR_ID_NXP, 0x8d88, pci_quirk_nxp_rp_acs }, 4887 + { PCI_VENDOR_ID_NXP, 0x8da8, pci_quirk_nxp_rp_acs }, 4888 + { PCI_VENDOR_ID_NXP, 0x8d8a, pci_quirk_nxp_rp_acs }, 4889 + /* LX2xx2E : security features + CAN */ 4890 + { PCI_VENDOR_ID_NXP, 0x8d98, pci_quirk_nxp_rp_acs }, 4891 + { PCI_VENDOR_ID_NXP, 0x8db8, pci_quirk_nxp_rp_acs }, 4892 + { PCI_VENDOR_ID_NXP, 0x8d9a, pci_quirk_nxp_rp_acs }, 4893 + /* LX2xx2N : without security features + CAN */ 4894 + { PCI_VENDOR_ID_NXP, 0x8d99, pci_quirk_nxp_rp_acs }, 4895 + { PCI_VENDOR_ID_NXP, 0x8db9, pci_quirk_nxp_rp_acs }, 4896 + { PCI_VENDOR_ID_NXP, 0x8d9b, pci_quirk_nxp_rp_acs }, 4917 4897 /* Zhaoxin Root/Downstream Ports */ 4918 4898 { PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs }, 4919 4899 { 0 } ··· 5118 5032 ctrl |= (cap & PCI_ACS_CR); 5119 5033 ctrl |= (cap & PCI_ACS_UF); 5120 5034 5121 - if (dev->external_facing || dev->untrusted) 5035 + if (pci_ats_disabled() || dev->external_facing || dev->untrusted) 5122 5036 ctrl |= (cap & PCI_ACS_TB); 5123 5037 5124 5038 pci_write_config_dword(dev, pos + INTEL_SPT_ACS_CTRL, ctrl); ··· 5716 5630 5717 5631 if (pdev->subsystem_vendor != PCI_VENDOR_ID_LENOVO || 5718 5632 pdev->subsystem_device != 0x222e || 5719 - !pdev->reset_fn) 5633 + !pci_reset_supported(pdev)) 5720 5634 return; 5721 5635 5722 5636 if (pci_enable_device_mem(pdev))
-1
drivers/pci/remove.c
···
19 19 	pci_pme_active(dev, false);
20 20 
21 21 	if (pci_dev_is_added(dev)) {
22    -		dev->reset_fn = 0;
23 22 
24 23 		device_release_driver(&dev->dev);
25 24 		pci_proc_detach_device(dev);
+4 -3
drivers/pci/syscall.c
···
19 19 	u8 byte;
20 20 	u16 word;
21 21 	u32 dword;
22    -	long err;
23    -	int cfg_ret;
   22 +	int err, cfg_ret;
24 23 
   24 +	err = -EPERM;
   25 +	dev = NULL;
25 26 	if (!capable(CAP_SYS_ADMIN))
26    -		return -EPERM;
   27 +		goto error;
27 28 
28 29 	err = -ENODEV;
29 30 	dev = pci_get_domain_bus_and_slot(0, bus, dfn);
+260 -242
drivers/pci/vpd.c
··· 9 9 #include <linux/delay.h> 10 10 #include <linux/export.h> 11 11 #include <linux/sched/signal.h> 12 + #include <asm/unaligned.h> 12 13 #include "pci.h" 13 14 15 + #define PCI_VPD_LRDT_TAG_SIZE 3 16 + #define PCI_VPD_SRDT_LEN_MASK 0x07 17 + #define PCI_VPD_SRDT_TAG_SIZE 1 18 + #define PCI_VPD_STIN_END 0x0f 19 + #define PCI_VPD_INFO_FLD_HDR_SIZE 3 20 + 21 + static u16 pci_vpd_lrdt_size(const u8 *lrdt) 22 + { 23 + return get_unaligned_le16(lrdt + 1); 24 + } 25 + 26 + static u8 pci_vpd_srdt_tag(const u8 *srdt) 27 + { 28 + return *srdt >> 3; 29 + } 30 + 31 + static u8 pci_vpd_srdt_size(const u8 *srdt) 32 + { 33 + return *srdt & PCI_VPD_SRDT_LEN_MASK; 34 + } 35 + 36 + static u8 pci_vpd_info_field_size(const u8 *info_field) 37 + { 38 + return info_field[2]; 39 + } 40 + 14 41 /* VPD access through PCI 2.2+ VPD capability */ 15 - 16 - struct pci_vpd_ops { 17 - ssize_t (*read)(struct pci_dev *dev, loff_t pos, size_t count, void *buf); 18 - ssize_t (*write)(struct pci_dev *dev, loff_t pos, size_t count, const void *buf); 19 - }; 20 - 21 - struct pci_vpd { 22 - const struct pci_vpd_ops *ops; 23 - struct mutex lock; 24 - unsigned int len; 25 - u16 flag; 26 - u8 cap; 27 - unsigned int busy:1; 28 - unsigned int valid:1; 29 - }; 30 42 31 43 static struct pci_dev *pci_get_func0_dev(struct pci_dev *dev) 32 44 { 33 45 return pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0)); 34 46 } 35 47 36 - /** 37 - * pci_read_vpd - Read one entry from Vital Product Data 38 - * @dev: pci device struct 39 - * @pos: offset in vpd space 40 - * @count: number of bytes to read 41 - * @buf: pointer to where to store result 42 - */ 43 - ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf) 44 - { 45 - if (!dev->vpd || !dev->vpd->ops) 46 - return -ENODEV; 47 - return dev->vpd->ops->read(dev, pos, count, buf); 48 - } 49 - EXPORT_SYMBOL(pci_read_vpd); 50 - 51 - /** 52 - * pci_write_vpd - Write entry to Vital Product Data 53 - * @dev: pci device struct 54 - * @pos: 
offset in vpd space 55 - * @count: number of bytes to write 56 - * @buf: buffer containing write data 57 - */ 58 - ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void *buf) 59 - { 60 - if (!dev->vpd || !dev->vpd->ops) 61 - return -ENODEV; 62 - return dev->vpd->ops->write(dev, pos, count, buf); 63 - } 64 - EXPORT_SYMBOL(pci_write_vpd); 65 - 66 - #define PCI_VPD_MAX_SIZE (PCI_VPD_ADDR_MASK + 1) 48 + #define PCI_VPD_MAX_SIZE (PCI_VPD_ADDR_MASK + 1) 49 + #define PCI_VPD_SZ_INVALID UINT_MAX 67 50 68 51 /** 69 52 * pci_vpd_size - determine actual size of Vital Product Data 70 53 * @dev: pci device struct 71 - * @old_size: current assumed size, also maximum allowed size 72 54 */ 73 - static size_t pci_vpd_size(struct pci_dev *dev, size_t old_size) 55 + static size_t pci_vpd_size(struct pci_dev *dev) 74 56 { 75 - size_t off = 0; 76 - unsigned char header[1+2]; /* 1 byte tag, 2 bytes length */ 57 + size_t off = 0, size; 58 + unsigned char tag, header[1+2]; /* 1 byte tag, 2 bytes length */ 77 59 78 - while (off < old_size && pci_read_vpd(dev, off, 1, header) == 1) { 79 - unsigned char tag; 60 + /* Otherwise the following reads would fail. 
*/ 61 + dev->vpd.len = PCI_VPD_MAX_SIZE; 80 62 81 - if (!header[0] && !off) { 82 - pci_info(dev, "Invalid VPD tag 00, assume missing optional VPD EPROM\n"); 83 - return 0; 84 - } 63 + while (pci_read_vpd(dev, off, 1, header) == 1) { 64 + size = 0; 65 + 66 + if (off == 0 && (header[0] == 0x00 || header[0] == 0xff)) 67 + goto error; 85 68 86 69 if (header[0] & PCI_VPD_LRDT) { 87 70 /* Large Resource Data Type Tag */ 88 - tag = pci_vpd_lrdt_tag(header); 89 - /* Only read length from known tag items */ 90 - if ((tag == PCI_VPD_LTIN_ID_STRING) || 91 - (tag == PCI_VPD_LTIN_RO_DATA) || 92 - (tag == PCI_VPD_LTIN_RW_DATA)) { 93 - if (pci_read_vpd(dev, off+1, 2, 94 - &header[1]) != 2) { 95 - pci_warn(dev, "invalid large VPD tag %02x size at offset %zu", 96 - tag, off + 1); 97 - return 0; 98 - } 99 - off += PCI_VPD_LRDT_TAG_SIZE + 100 - pci_vpd_lrdt_size(header); 71 + if (pci_read_vpd(dev, off + 1, 2, &header[1]) != 2) { 72 + pci_warn(dev, "failed VPD read at offset %zu\n", 73 + off + 1); 74 + return off ?: PCI_VPD_SZ_INVALID; 101 75 } 76 + size = pci_vpd_lrdt_size(header); 77 + if (off + size > PCI_VPD_MAX_SIZE) 78 + goto error; 79 + 80 + off += PCI_VPD_LRDT_TAG_SIZE + size; 102 81 } else { 103 82 /* Short Resource Data Type Tag */ 104 - off += PCI_VPD_SRDT_TAG_SIZE + 105 - pci_vpd_srdt_size(header); 106 83 tag = pci_vpd_srdt_tag(header); 107 - } 84 + size = pci_vpd_srdt_size(header); 85 + if (off + size > PCI_VPD_MAX_SIZE) 86 + goto error; 108 87 109 - if (tag == PCI_VPD_STIN_END) /* End tag descriptor */ 110 - return off; 111 - 112 - if ((tag != PCI_VPD_LTIN_ID_STRING) && 113 - (tag != PCI_VPD_LTIN_RO_DATA) && 114 - (tag != PCI_VPD_LTIN_RW_DATA)) { 115 - pci_warn(dev, "invalid %s VPD tag %02x at offset %zu", 116 - (header[0] & PCI_VPD_LRDT) ? 
"large" : "short", 117 - tag, off); 118 - return 0; 88 + off += PCI_VPD_SRDT_TAG_SIZE + size; 89 + if (tag == PCI_VPD_STIN_END) /* End tag descriptor */ 90 + return off; 119 91 } 120 92 } 121 - return 0; 93 + return off; 94 + 95 + error: 96 + pci_info(dev, "invalid VPD tag %#04x (size %zu) at offset %zu%s\n", 97 + header[0], size, off, off == 0 ? 98 + "; assume missing optional EEPROM" : ""); 99 + return off ?: PCI_VPD_SZ_INVALID; 122 100 } 123 101 124 102 /* ··· 104 126 * This code has to spin since there is no other notification from the PCI 105 127 * hardware. Since the VPD is often implemented by serial attachment to an 106 128 * EEPROM, it may take many milliseconds to complete. 129 + * @set: if true wait for flag to be set, else wait for it to be cleared 107 130 * 108 131 * Returns 0 on success, negative values indicate error. 109 132 */ 110 - static int pci_vpd_wait(struct pci_dev *dev) 133 + static int pci_vpd_wait(struct pci_dev *dev, bool set) 111 134 { 112 - struct pci_vpd *vpd = dev->vpd; 135 + struct pci_vpd *vpd = &dev->vpd; 113 136 unsigned long timeout = jiffies + msecs_to_jiffies(125); 114 137 unsigned long max_sleep = 16; 115 138 u16 status; 116 139 int ret; 117 - 118 - if (!vpd->busy) 119 - return 0; 120 140 121 141 do { 122 142 ret = pci_user_read_config_word(dev, vpd->cap + PCI_VPD_ADDR, ··· 122 146 if (ret < 0) 123 147 return ret; 124 148 125 - if ((status & PCI_VPD_ADDR_F) == vpd->flag) { 126 - vpd->busy = 0; 149 + if (!!(status & PCI_VPD_ADDR_F) == set) 127 150 return 0; 128 - } 129 - 130 - if (fatal_signal_pending(current)) 131 - return -EINTR; 132 151 133 152 if (time_after(jiffies, timeout)) 134 153 break; ··· 140 169 static ssize_t pci_vpd_read(struct pci_dev *dev, loff_t pos, size_t count, 141 170 void *arg) 142 171 { 143 - struct pci_vpd *vpd = dev->vpd; 144 - int ret; 172 + struct pci_vpd *vpd = &dev->vpd; 173 + int ret = 0; 145 174 loff_t end = pos + count; 146 175 u8 *buf = arg; 147 176 177 + if (!vpd->cap) 178 + return -ENODEV; 179 
+ 148 180 if (pos < 0) 149 181 return -EINVAL; 150 - 151 - if (!vpd->valid) { 152 - vpd->valid = 1; 153 - vpd->len = pci_vpd_size(dev, vpd->len); 154 - } 155 - 156 - if (vpd->len == 0) 157 - return -EIO; 158 182 159 183 if (pos > vpd->len) 160 184 return 0; ··· 162 196 if (mutex_lock_killable(&vpd->lock)) 163 197 return -EINTR; 164 198 165 - ret = pci_vpd_wait(dev); 166 - if (ret < 0) 167 - goto out; 168 - 169 199 while (pos < end) { 170 200 u32 val; 171 201 unsigned int i, skip; 202 + 203 + if (fatal_signal_pending(current)) { 204 + ret = -EINTR; 205 + break; 206 + } 172 207 173 208 ret = pci_user_write_config_word(dev, vpd->cap + PCI_VPD_ADDR, 174 209 pos & ~3); 175 210 if (ret < 0) 176 211 break; 177 - vpd->busy = 1; 178 - vpd->flag = PCI_VPD_ADDR_F; 179 - ret = pci_vpd_wait(dev); 212 + ret = pci_vpd_wait(dev, true); 180 213 if (ret < 0) 181 214 break; 182 215 ··· 193 228 val >>= 8; 194 229 } 195 230 } 196 - out: 231 + 197 232 mutex_unlock(&vpd->lock); 198 233 return ret ? ret : count; 199 234 } ··· 201 236 static ssize_t pci_vpd_write(struct pci_dev *dev, loff_t pos, size_t count, 202 237 const void *arg) 203 238 { 204 - struct pci_vpd *vpd = dev->vpd; 239 + struct pci_vpd *vpd = &dev->vpd; 205 240 const u8 *buf = arg; 206 241 loff_t end = pos + count; 207 242 int ret = 0; 208 243 244 + if (!vpd->cap) 245 + return -ENODEV; 246 + 209 247 if (pos < 0 || (pos & 3) || (count & 3)) 210 248 return -EINVAL; 211 - 212 - if (!vpd->valid) { 213 - vpd->valid = 1; 214 - vpd->len = pci_vpd_size(dev, vpd->len); 215 - } 216 - 217 - if (vpd->len == 0) 218 - return -EIO; 219 249 220 250 if (end > vpd->len) 221 251 return -EINVAL; ··· 218 258 if (mutex_lock_killable(&vpd->lock)) 219 259 return -EINTR; 220 260 221 - ret = pci_vpd_wait(dev); 222 - if (ret < 0) 223 - goto out; 224 - 225 261 while (pos < end) { 226 - u32 val; 227 - 228 - val = *buf++; 229 - val |= *buf++ << 8; 230 - val |= *buf++ << 16; 231 - val |= *buf++ << 24; 232 - 233 - ret = pci_user_write_config_dword(dev, 
vpd->cap + PCI_VPD_DATA, val); 262 + ret = pci_user_write_config_dword(dev, vpd->cap + PCI_VPD_DATA, 263 + get_unaligned_le32(buf)); 234 264 if (ret < 0) 235 265 break; 236 266 ret = pci_user_write_config_word(dev, vpd->cap + PCI_VPD_ADDR, ··· 228 278 if (ret < 0) 229 279 break; 230 280 231 - vpd->busy = 1; 232 - vpd->flag = 0; 233 - ret = pci_vpd_wait(dev); 281 + ret = pci_vpd_wait(dev, false); 234 282 if (ret < 0) 235 283 break; 236 284 285 + buf += sizeof(u32); 237 286 pos += sizeof(u32); 238 287 } 239 - out: 288 + 240 289 mutex_unlock(&vpd->lock); 241 290 return ret ? ret : count; 242 291 } 243 292 244 - static const struct pci_vpd_ops pci_vpd_ops = { 245 - .read = pci_vpd_read, 246 - .write = pci_vpd_write, 247 - }; 248 - 249 - static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count, 250 - void *arg) 251 - { 252 - struct pci_dev *tdev = pci_get_func0_dev(dev); 253 - ssize_t ret; 254 - 255 - if (!tdev) 256 - return -ENODEV; 257 - 258 - ret = pci_read_vpd(tdev, pos, count, arg); 259 - pci_dev_put(tdev); 260 - return ret; 261 - } 262 - 263 - static ssize_t pci_vpd_f0_write(struct pci_dev *dev, loff_t pos, size_t count, 264 - const void *arg) 265 - { 266 - struct pci_dev *tdev = pci_get_func0_dev(dev); 267 - ssize_t ret; 268 - 269 - if (!tdev) 270 - return -ENODEV; 271 - 272 - ret = pci_write_vpd(tdev, pos, count, arg); 273 - pci_dev_put(tdev); 274 - return ret; 275 - } 276 - 277 - static const struct pci_vpd_ops pci_vpd_f0_ops = { 278 - .read = pci_vpd_f0_read, 279 - .write = pci_vpd_f0_write, 280 - }; 281 - 282 293 void pci_vpd_init(struct pci_dev *dev) 283 294 { 284 - struct pci_vpd *vpd; 285 - u8 cap; 295 + dev->vpd.cap = pci_find_capability(dev, PCI_CAP_ID_VPD); 296 + mutex_init(&dev->vpd.lock); 286 297 287 - cap = pci_find_capability(dev, PCI_CAP_ID_VPD); 288 - if (!cap) 289 - return; 298 + if (!dev->vpd.len) 299 + dev->vpd.len = pci_vpd_size(dev); 290 300 291 - vpd = kzalloc(sizeof(*vpd), GFP_ATOMIC); 292 - if (!vpd) 293 - return; 294 - 
295 - vpd->len = PCI_VPD_MAX_SIZE; 296 - if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) 297 - vpd->ops = &pci_vpd_f0_ops; 298 - else 299 - vpd->ops = &pci_vpd_ops; 300 - mutex_init(&vpd->lock); 301 - vpd->cap = cap; 302 - vpd->busy = 0; 303 - vpd->valid = 0; 304 - dev->vpd = vpd; 305 - } 306 - 307 - void pci_vpd_release(struct pci_dev *dev) 308 - { 309 - kfree(dev->vpd); 301 + if (dev->vpd.len == PCI_VPD_SZ_INVALID) 302 + dev->vpd.cap = 0; 310 303 } 311 304 312 305 static ssize_t vpd_read(struct file *filp, struct kobject *kobj, ··· 281 388 { 282 389 struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); 283 390 284 - if (!pdev->vpd) 391 + if (!pdev->vpd.cap) 285 392 return 0; 286 393 287 394 return a->attr.mode; ··· 292 399 .is_bin_visible = vpd_attr_is_visible, 293 400 }; 294 401 295 - int pci_vpd_find_tag(const u8 *buf, unsigned int len, u8 rdt) 402 + void *pci_vpd_alloc(struct pci_dev *dev, unsigned int *size) 403 + { 404 + unsigned int len = dev->vpd.len; 405 + void *buf; 406 + int cnt; 407 + 408 + if (!dev->vpd.cap) 409 + return ERR_PTR(-ENODEV); 410 + 411 + buf = kmalloc(len, GFP_KERNEL); 412 + if (!buf) 413 + return ERR_PTR(-ENOMEM); 414 + 415 + cnt = pci_read_vpd(dev, 0, len, buf); 416 + if (cnt != len) { 417 + kfree(buf); 418 + return ERR_PTR(-EIO); 419 + } 420 + 421 + if (size) 422 + *size = len; 423 + 424 + return buf; 425 + } 426 + EXPORT_SYMBOL_GPL(pci_vpd_alloc); 427 + 428 + static int pci_vpd_find_tag(const u8 *buf, unsigned int len, u8 rdt, unsigned int *size) 296 429 { 297 430 int i = 0; 298 431 299 432 /* look for LRDT tags only, end tag is the only SRDT tag */ 300 433 while (i + PCI_VPD_LRDT_TAG_SIZE <= len && buf[i] & PCI_VPD_LRDT) { 301 - if (buf[i] == rdt) 302 - return i; 434 + unsigned int lrdt_len = pci_vpd_lrdt_size(buf + i); 435 + u8 tag = buf[i]; 303 436 304 - i += PCI_VPD_LRDT_TAG_SIZE + pci_vpd_lrdt_size(buf + i); 437 + i += PCI_VPD_LRDT_TAG_SIZE; 438 + if (tag == rdt) { 439 + if (i + lrdt_len > len) 440 + lrdt_len = len - i; 441 + if 
(size) 442 + *size = lrdt_len; 443 + return i; 444 + } 445 + 446 + i += lrdt_len; 305 447 } 306 448 307 449 return -ENOENT; 308 450 } 309 - EXPORT_SYMBOL_GPL(pci_vpd_find_tag); 310 451 311 - int pci_vpd_find_info_keyword(const u8 *buf, unsigned int off, 452 + int pci_vpd_find_id_string(const u8 *buf, unsigned int len, unsigned int *size) 453 + { 454 + return pci_vpd_find_tag(buf, len, PCI_VPD_LRDT_ID_STRING, size); 455 + } 456 + EXPORT_SYMBOL_GPL(pci_vpd_find_id_string); 457 + 458 + static int pci_vpd_find_info_keyword(const u8 *buf, unsigned int off, 312 459 unsigned int len, const char *kw) 313 460 { 314 461 int i; ··· 364 431 365 432 return -ENOENT; 366 433 } 367 - EXPORT_SYMBOL_GPL(pci_vpd_find_info_keyword); 434 + 435 + /** 436 + * pci_read_vpd - Read one entry from Vital Product Data 437 + * @dev: PCI device struct 438 + * @pos: offset in VPD space 439 + * @count: number of bytes to read 440 + * @buf: pointer to where to store result 441 + */ 442 + ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf) 443 + { 444 + ssize_t ret; 445 + 446 + if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) { 447 + dev = pci_get_func0_dev(dev); 448 + if (!dev) 449 + return -ENODEV; 450 + 451 + ret = pci_vpd_read(dev, pos, count, buf); 452 + pci_dev_put(dev); 453 + return ret; 454 + } 455 + 456 + return pci_vpd_read(dev, pos, count, buf); 457 + } 458 + EXPORT_SYMBOL(pci_read_vpd); 459 + 460 + /** 461 + * pci_write_vpd - Write entry to Vital Product Data 462 + * @dev: PCI device struct 463 + * @pos: offset in VPD space 464 + * @count: number of bytes to write 465 + * @buf: buffer containing write data 466 + */ 467 + ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void *buf) 468 + { 469 + ssize_t ret; 470 + 471 + if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) { 472 + dev = pci_get_func0_dev(dev); 473 + if (!dev) 474 + return -ENODEV; 475 + 476 + ret = pci_vpd_write(dev, pos, count, buf); 477 + pci_dev_put(dev); 478 + return ret; 479 
+ } 480 + 481 + return pci_vpd_write(dev, pos, count, buf); 482 + } 483 + EXPORT_SYMBOL(pci_write_vpd); 484 + 485 + int pci_vpd_find_ro_info_keyword(const void *buf, unsigned int len, 486 + const char *kw, unsigned int *size) 487 + { 488 + int ro_start, infokw_start; 489 + unsigned int ro_len, infokw_size; 490 + 491 + ro_start = pci_vpd_find_tag(buf, len, PCI_VPD_LRDT_RO_DATA, &ro_len); 492 + if (ro_start < 0) 493 + return ro_start; 494 + 495 + infokw_start = pci_vpd_find_info_keyword(buf, ro_start, ro_len, kw); 496 + if (infokw_start < 0) 497 + return infokw_start; 498 + 499 + infokw_size = pci_vpd_info_field_size(buf + infokw_start); 500 + infokw_start += PCI_VPD_INFO_FLD_HDR_SIZE; 501 + 502 + if (infokw_start + infokw_size > len) 503 + return -EINVAL; 504 + 505 + if (size) 506 + *size = infokw_size; 507 + 508 + return infokw_start; 509 + } 510 + EXPORT_SYMBOL_GPL(pci_vpd_find_ro_info_keyword); 511 + 512 + int pci_vpd_check_csum(const void *buf, unsigned int len) 513 + { 514 + const u8 *vpd = buf; 515 + unsigned int size; 516 + u8 csum = 0; 517 + int rv_start; 518 + 519 + rv_start = pci_vpd_find_ro_info_keyword(buf, len, PCI_VPD_RO_KEYWORD_CHKSUM, &size); 520 + if (rv_start == -ENOENT) /* no checksum in VPD */ 521 + return 1; 522 + else if (rv_start < 0) 523 + return rv_start; 524 + 525 + if (!size) 526 + return -EINVAL; 527 + 528 + while (rv_start >= 0) 529 + csum += vpd[rv_start--]; 530 + 531 + return csum ? 
-EILSEQ : 0; 532 + } 533 + EXPORT_SYMBOL_GPL(pci_vpd_check_csum); 368 534 369 535 #ifdef CONFIG_PCI_QUIRKS 370 536 /* ··· 482 450 if (!f0) 483 451 return; 484 452 485 - if (f0->vpd && dev->class == f0->class && 453 + if (f0->vpd.cap && dev->class == f0->class && 486 454 dev->vendor == f0->vendor && dev->device == f0->device) 487 455 dev->dev_flags |= PCI_DEV_FLAGS_VPD_REF_F0; 488 456 ··· 500 468 */ 501 469 static void quirk_blacklist_vpd(struct pci_dev *dev) 502 470 { 503 - if (dev->vpd) { 504 - dev->vpd->len = 0; 505 - pci_warn(dev, FW_BUG "disabling VPD access (can't determine size of non-standard VPD format)\n"); 506 - } 471 + dev->vpd.len = PCI_VPD_SZ_INVALID; 472 + pci_warn(dev, FW_BUG "disabling VPD access (can't determine size of non-standard VPD format)\n"); 507 473 } 508 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0060, quirk_blacklist_vpd); 509 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x007c, quirk_blacklist_vpd); 510 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0413, quirk_blacklist_vpd); 511 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0078, quirk_blacklist_vpd); 512 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0079, quirk_blacklist_vpd); 513 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0073, quirk_blacklist_vpd); 514 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0071, quirk_blacklist_vpd); 515 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005b, quirk_blacklist_vpd); 516 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x002f, quirk_blacklist_vpd); 517 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005d, quirk_blacklist_vpd); 518 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005f, quirk_blacklist_vpd); 519 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID, 520 - quirk_blacklist_vpd); 474 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x0060, quirk_blacklist_vpd); 475 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x007c, quirk_blacklist_vpd); 476 + 
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x0413, quirk_blacklist_vpd); 477 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x0078, quirk_blacklist_vpd); 478 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x0079, quirk_blacklist_vpd); 479 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x0073, quirk_blacklist_vpd); 480 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x0071, quirk_blacklist_vpd); 481 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x005b, quirk_blacklist_vpd); 482 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x002f, quirk_blacklist_vpd); 483 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x005d, quirk_blacklist_vpd); 484 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x005f, quirk_blacklist_vpd); 485 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID, quirk_blacklist_vpd); 521 486 /* 522 487 * The Amazon Annapurna Labs 0x0031 device id is reused for other non Root Port 523 488 * device types, so the quirk is registered for the PCI_CLASS_BRIDGE_PCI class. 524 489 */ 525 - DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, 526 - PCI_CLASS_BRIDGE_PCI, 8, quirk_blacklist_vpd); 527 - 528 - static void pci_vpd_set_size(struct pci_dev *dev, size_t len) 529 - { 530 - struct pci_vpd *vpd = dev->vpd; 531 - 532 - if (!vpd || len == 0 || len > PCI_VPD_MAX_SIZE) 533 - return; 534 - 535 - vpd->valid = 1; 536 - vpd->len = len; 537 - } 490 + DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, 491 + PCI_CLASS_BRIDGE_PCI, 8, quirk_blacklist_vpd); 538 492 539 493 static void quirk_chelsio_extend_vpd(struct pci_dev *dev) 540 494 { ··· 540 522 * limits. 
541 523 */ 542 524 if (chip == 0x0 && prod >= 0x20) 543 - pci_vpd_set_size(dev, 8192); 525 + dev->vpd.len = 8192; 544 526 else if (chip >= 0x4 && func < 0x8) 545 - pci_vpd_set_size(dev, 2048); 527 + dev->vpd.len = 2048; 546 528 } 547 529 548 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID, 549 - quirk_chelsio_extend_vpd); 530 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID, 531 + quirk_chelsio_extend_vpd); 550 532 551 533 #endif
+6 -28
drivers/scsi/cxlflash/main.c
···
 {
 	struct device *dev = &cfg->dev->dev;
 	struct pci_dev *pdev = cfg->dev;
-	int rc = 0;
-	int ro_start, ro_size, i, j, k;
+	int i, k, rc = 0;
+	unsigned int kw_size;
 	ssize_t vpd_size;
 	char vpd_data[CXLFLASH_VPD_LEN];
 	char tmp_buf[WWPN_BUF_LEN] = { 0 };
···
 		goto out;
 	}
 
-	/* Get the read only section offset */
-	ro_start = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA);
-	if (unlikely(ro_start < 0)) {
-		dev_err(dev, "%s: VPD Read-only data not found\n", __func__);
-		rc = -ENODEV;
-		goto out;
-	}
-
-	/* Get the read only section size, cap when extends beyond read VPD */
-	ro_size = pci_vpd_lrdt_size(&vpd_data[ro_start]);
-	j = ro_size;
-	i = ro_start + PCI_VPD_LRDT_TAG_SIZE;
-	if (unlikely((i + j) > vpd_size)) {
-		dev_dbg(dev, "%s: Might need to read more VPD (%d > %ld)\n",
-			__func__, (i + j), vpd_size);
-		ro_size = vpd_size - i;
-	}
-
 	/*
 	 * Find the offset of the WWPN tag within the read only
 	 * VPD data and validate the found field (partials are
···
 	 * ports programmed and operate in an undefined state.
 	 */
 	for (k = 0; k < cfg->num_fc_ports; k++) {
-		j = ro_size;
-		i = ro_start + PCI_VPD_LRDT_TAG_SIZE;
-
-		i = pci_vpd_find_info_keyword(vpd_data, i, j, wwpn_vpd_tags[k]);
-		if (i < 0) {
+		i = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
+						 wwpn_vpd_tags[k], &kw_size);
+		if (i == -ENOENT) {
 			if (wwpn_vpd_required)
 				dev_err(dev, "%s: Port %d WWPN not found\n",
 					__func__, k);
···
 			continue;
 		}
 
-		j = pci_vpd_info_field_size(&vpd_data[i]);
-		i += PCI_VPD_INFO_FLD_HDR_SIZE;
-		if (unlikely((i + j > vpd_size) || (j != WWPN_LEN))) {
+		if (i < 0 || kw_size != WWPN_LEN) {
 			dev_err(dev, "%s: Port %d WWPN incomplete or bad VPD\n",
 				__func__, k);
 			rc = -ENODEV;
+1 -1
include/asm-generic/pci_iomap.h
···
 }
 #endif
 
-#endif /* __ASM_GENERIC_IO_H */
+#endif /* __ASM_GENERIC_PCI_IOMAP_H */
+31 -26
include/linux/pci-epc.h
···
  * @owner: the module owner containing the ops
  */
 struct pci_epc_ops {
-	int	(*write_header)(struct pci_epc *epc, u8 func_no,
+	int	(*write_header)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 				struct pci_epf_header *hdr);
-	int	(*set_bar)(struct pci_epc *epc, u8 func_no,
+	int	(*set_bar)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 			   struct pci_epf_bar *epf_bar);
-	void	(*clear_bar)(struct pci_epc *epc, u8 func_no,
+	void	(*clear_bar)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 			     struct pci_epf_bar *epf_bar);
-	int	(*map_addr)(struct pci_epc *epc, u8 func_no,
+	int	(*map_addr)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 			    phys_addr_t addr, u64 pci_addr, size_t size);
-	void	(*unmap_addr)(struct pci_epc *epc, u8 func_no,
+	void	(*unmap_addr)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 			      phys_addr_t addr);
-	int	(*set_msi)(struct pci_epc *epc, u8 func_no, u8 interrupts);
-	int	(*get_msi)(struct pci_epc *epc, u8 func_no);
-	int	(*set_msix)(struct pci_epc *epc, u8 func_no, u16 interrupts,
-			    enum pci_barno, u32 offset);
-	int	(*get_msix)(struct pci_epc *epc, u8 func_no);
-	int	(*raise_irq)(struct pci_epc *epc, u8 func_no,
+	int	(*set_msi)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
+			   u8 interrupts);
+	int	(*get_msi)(struct pci_epc *epc, u8 func_no, u8 vfunc_no);
+	int	(*set_msix)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
+			    u16 interrupts, enum pci_barno, u32 offset);
+	int	(*get_msix)(struct pci_epc *epc, u8 func_no, u8 vfunc_no);
+	int	(*raise_irq)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 			     enum pci_epc_irq_type type, u16 interrupt_num);
-	int	(*map_msi_irq)(struct pci_epc *epc, u8 func_no,
+	int	(*map_msi_irq)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 			       phys_addr_t phys_addr, u8 interrupt_num,
 			       u32 entry_size, u32 *msi_data,
 			       u32 *msi_addr_offset);
 	int	(*start)(struct pci_epc *epc);
 	void	(*stop)(struct pci_epc *epc);
 	const struct pci_epc_features* (*get_features)(struct pci_epc *epc,
-						       u8 func_no);
+						       u8 func_no, u8 vfunc_no);
 	struct module *owner;
 };
 
···
  *	  single window.
  * @num_windows: number of windows supported by device
  * @max_functions: max number of functions that can be configured in this EPC
+ * @max_vfs: Array indicating the maximum number of virtual functions that can
+ *	     be associated with each physical function
  * @group: configfs group representing the PCI EPC device
  * @lock: mutex to protect pci_epc ops
  * @function_num_map: bitmap to manage physical function number
···
 	struct pci_epc_mem	*mem;
 	unsigned int		num_windows;
 	u8			max_functions;
+	u8			*max_vfs;
 	struct config_group	*group;
 	/* mutex to protect against concurrent access of EP controller */
 	struct mutex		lock;
···
 void pci_epc_init_notify(struct pci_epc *epc);
 void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
 			enum pci_epc_interface_type type);
-int pci_epc_write_header(struct pci_epc *epc, u8 func_no,
+int pci_epc_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 			 struct pci_epf_header *hdr);
-int pci_epc_set_bar(struct pci_epc *epc, u8 func_no,
+int pci_epc_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 		    struct pci_epf_bar *epf_bar);
-void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no,
+void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 		       struct pci_epf_bar *epf_bar);
-int pci_epc_map_addr(struct pci_epc *epc, u8 func_no,
+int pci_epc_map_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 		     phys_addr_t phys_addr,
 		     u64 pci_addr, size_t size);
-void pci_epc_unmap_addr(struct pci_epc *epc, u8 func_no,
+void pci_epc_unmap_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 			phys_addr_t phys_addr);
-int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts);
-int pci_epc_get_msi(struct pci_epc *epc, u8 func_no);
-int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts,
-		     enum pci_barno, u32 offset);
-int pci_epc_get_msix(struct pci_epc *epc, u8 func_no);
-int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no,
+int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
+		    u8 interrupts);
+int pci_epc_get_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no);
+int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
+		     u16 interrupts, enum pci_barno, u32 offset);
+int pci_epc_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no);
+int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 			phys_addr_t phys_addr, u8 interrupt_num,
 			u32 entry_size, u32 *msi_data, u32 *msi_addr_offset);
-int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
+int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 		      enum pci_epc_irq_type type, u16 interrupt_num);
 int pci_epc_start(struct pci_epc *epc);
 void pci_epc_stop(struct pci_epc *epc);
 const struct pci_epc_features *pci_epc_get_features(struct pci_epc *epc,
-						    u8 func_no);
+						    u8 func_no, u8 vfunc_no);
 enum pci_barno
 pci_epc_get_first_free_bar(const struct pci_epc_features *epc_features);
 enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features
+15 -1
include/linux/pci-epf.h
···
  * @bar: represents the BAR of EPF device
  * @msi_interrupts: number of MSI interrupts required by this function
  * @msix_interrupts: number of MSI-X interrupts required by this function
- * @func_no: unique function number within this endpoint device
+ * @func_no: unique (physical) function number within this endpoint device
+ * @vfunc_no: unique virtual function number within a physical function
  * @epc: the EPC device to which this EPF device is bound
+ * @epf_pf: the physical EPF device to which this virtual EPF device is bound
  * @driver: the EPF driver to which this EPF device is bound
  * @list: to add pci_epf as a list of PCI endpoint functions to pci_epc
  * @nb: notifier block to notify EPF of any EPC events (like linkup)
···
  * @sec_epc_bar: represents the BAR of EPF device associated with secondary EPC
  * @sec_epc_func_no: unique (physical) function number within the secondary EPC
  * @group: configfs group associated with the EPF device
+ * @is_bound: indicates if bind notification to function driver has been invoked
+ * @is_vf: true - virtual function, false - physical function
+ * @vfunction_num_map: bitmap to manage virtual function number
+ * @pci_vepf: list of virtual endpoint functions associated with this function
  */
 struct pci_epf {
 	struct device		dev;
···
 	u8			msi_interrupts;
 	u16			msix_interrupts;
 	u8			func_no;
+	u8			vfunc_no;
 
 	struct pci_epc		*epc;
+	struct pci_epf		*epf_pf;
 	struct pci_epf_driver	*driver;
 	struct list_head	list;
 	struct notifier_block	nb;
···
 	struct pci_epf_bar	sec_epc_bar[6];
 	u8			sec_epc_func_no;
 	struct config_group	*group;
+	unsigned int		is_bound;
+	unsigned int		is_vf;
+	unsigned long		vfunction_num_map;
+	struct list_head	pci_vepf;
 };
 
 /**
···
 void pci_epf_unbind(struct pci_epf *epf);
 struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf,
 					  struct config_group *group);
+int pci_epf_add_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf);
+void pci_epf_remove_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf);
 #endif /* __LINUX_PCI_EPF_H */
+60 -105
include/linux/pci.h
···
 					 PCI_STATUS_SIG_TARGET_ABORT | \
 					 PCI_STATUS_PARITY)
 
+/* Number of reset methods used in pci_reset_fn_methods array in pci.c */
+#define PCI_NUM_RESET_METHODS 7
+
+#define PCI_RESET_PROBE		true
+#define PCI_RESET_DO_RESET	false
+
 /*
  * The PCI interface treats multi-function devices as independent
  * devices.  The slot/function address of each device is encoded
···
 enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev);
 enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev);
 
-struct pci_cap_saved_data {
-	u16		cap_nr;
-	bool		cap_extended;
-	unsigned int	size;
-	u32		data[];
-};
-
-struct pci_cap_saved_state {
-	struct hlist_node		next;
-	struct pci_cap_saved_data	cap;
+struct pci_vpd {
+	struct mutex	lock;
+	unsigned int	len;
+	u8		cap;
 };
 
 struct irq_affinity;
 struct pcie_link_state;
-struct pci_vpd;
 struct pci_sriov;
 struct pci_p2pdma;
 struct rcec_ea;
···
 	struct rcec_ea	*rcec_ea;	/* RCEC cached endpoint association */
 	struct pci_dev	*rcec;		/* Associated RCEC device */
 #endif
+	u32		devcap;		/* PCIe Device Capabilities */
 	u8		pcie_cap;	/* PCIe capability offset */
 	u8		msi_cap;	/* MSI capability offset */
 	u8		msix_cap;	/* MSI-X capability offset */
···
 					   supported from root to here */
 	u16		l1ss;		/* L1SS Capability pointer */
 #endif
+	unsigned int	pasid_no_tlp:1;		/* PASID works without TLP Prefix */
 	unsigned int	eetlp_prefix_path:1;	/* End-to-End TLP Prefix */
 
 	pci_channel_state_t error_state;	/* Current connectivity state */
···
 	unsigned int	state_saved:1;
 	unsigned int	is_physfn:1;
 	unsigned int	is_virtfn:1;
-	unsigned int	reset_fn:1;
 	unsigned int	is_hotplug_bridge:1;
 	unsigned int	shpc_managed:1;		/* SHPC owned by shpchp */
 	unsigned int	is_thunderbolt:1;	/* Thunderbolt controller */
···
 #ifdef CONFIG_PCI_MSI
 	const struct attribute_group **msi_irq_groups;
 #endif
-	struct pci_vpd *vpd;
+	struct pci_vpd	vpd;
 #ifdef CONFIG_PCIE_DPC
 	u16		dpc_cap;
 	unsigned int	dpc_rp_extensions:1;
···
 	char *driver_override;		/* Driver name to force a match */
 
 	unsigned long	priv_flags;	/* Private flags for the PCI driver */
+
+	/* These methods index pci_reset_fn_methods[] */
+	u8 reset_methods[PCI_NUM_RESET_METHODS]; /* In priority order */
 };
 
 static inline struct pci_dev *pci_physfn(struct pci_dev *dev)
···
 	return (pdev->error_state != pci_channel_io_normal);
 }
 
+/*
+ * Currently in ACPI spec, for each PCI host bridge, PCI Segment
+ * Group number is limited to a 16-bit value, therefore (int)-1 is
+ * not a valid PCI domain number, and can be used as a sentinel
+ * value indicating ->domain_nr is not set by the driver (and
+ * CONFIG_PCI_DOMAINS_GENERIC=y archs will set it with
+ * pci_bus_find_domain_nr()).
+ */
+#define PCI_DOMAIN_NR_NOT_SET (-1)
+
 struct pci_host_bridge {
 	struct device	dev;
 	struct pci_bus	*bus;		/* Root bus */
···
 	struct pci_ops	*child_ops;
 	void		*sysdata;
 	int		busnr;
+	int		domain_nr;
 	struct list_head windows;	/* resource_entry */
 	struct list_head dma_ranges;	/* dma ranges resource list */
 	u8 (*swizzle_irq)(struct pci_dev *, u8 *); /* Platform IRQ swizzler */
···
 			     enum pci_bus_speed *speed,
 			     enum pcie_link_width *width);
 void pcie_print_link_status(struct pci_dev *dev);
-bool pcie_has_flr(struct pci_dev *dev);
+int pcie_reset_flr(struct pci_dev *dev, bool probe);
 int pcie_flr(struct pci_dev *dev);
 int __pci_reset_function_locked(struct pci_dev *dev);
 int pci_reset_function(struct pci_dev *dev);
···
 				      struct pci_saved_state *state);
 int pci_load_and_free_saved_state(struct pci_dev *dev,
 				  struct pci_saved_state **state);
-struct pci_cap_saved_state *pci_find_saved_cap(struct pci_dev *dev, char cap);
-struct pci_cap_saved_state *pci_find_saved_ext_cap(struct pci_dev *dev,
-						   u16 cap);
-int pci_add_cap_save_buffer(struct pci_dev *dev, char cap, unsigned int size);
-int pci_add_ext_cap_save_buffer(struct pci_dev *dev,
-				u16 cap, unsigned int size);
 int pci_platform_power_transition(struct pci_dev *dev, pci_power_t state);
 int pci_set_power_state(struct pci_dev *dev, pci_power_t state);
 pci_power_t pci_choose_state(struct pci_dev *dev, pm_message_t state);
···
 static inline int pcim_enable_device(struct pci_dev *pdev) { return -EIO; }
 static inline int pci_assign_resource(struct pci_dev *dev, int i)
 { return -EBUSY; }
-static inline int __pci_register_driver(struct pci_driver *drv,
-					struct module *owner)
+static inline int __must_check __pci_register_driver(struct pci_driver *drv,
+						     struct module *owner,
+						     const char *mod_name)
 { return 0; }
 static inline int pci_register_driver(struct pci_driver *drv)
 { return 0; }
···
 #define pci_resource_end(dev, bar)	((dev)->resource[(bar)].end)
 #define pci_resource_flags(dev, bar)	((dev)->resource[(bar)].flags)
 #define pci_resource_len(dev,bar) \
-	((pci_resource_start((dev), (bar)) == 0 &&	\
-	  pci_resource_end((dev), (bar)) ==		\
-	  pci_resource_start((dev), (bar))) ? 0 :	\
+	((pci_resource_end((dev), (bar)) == 0) ? 0 :	\
 							\
 	 (pci_resource_end((dev), (bar)) -		\
 	  pci_resource_start((dev), (bar)) + 1))
···
 #define PCI_VPD_LRDT_RO_DATA		PCI_VPD_LRDT_ID(PCI_VPD_LTIN_RO_DATA)
 #define PCI_VPD_LRDT_RW_DATA		PCI_VPD_LRDT_ID(PCI_VPD_LTIN_RW_DATA)
 
-/* Small Resource Data Type Tag Item Names */
-#define PCI_VPD_STIN_END		0x0f	/* End */
-
-#define PCI_VPD_SRDT_END		(PCI_VPD_STIN_END << 3)
-
-#define PCI_VPD_SRDT_TIN_MASK		0x78
-#define PCI_VPD_SRDT_LEN_MASK		0x07
-#define PCI_VPD_LRDT_TIN_MASK		0x7f
-
-#define PCI_VPD_LRDT_TAG_SIZE		3
-#define PCI_VPD_SRDT_TAG_SIZE		1
-
-#define PCI_VPD_INFO_FLD_HDR_SIZE	3
-
 #define PCI_VPD_RO_KEYWORD_PARTNO	"PN"
 #define PCI_VPD_RO_KEYWORD_SERIALNO	"SN"
 #define PCI_VPD_RO_KEYWORD_MFR_ID	"MN"
···
 #define PCI_VPD_RO_KEYWORD_CHKSUM	"RV"
 
 /**
- * pci_vpd_lrdt_size - Extracts the Large Resource Data Type length
- * @lrdt: Pointer to the beginning of the Large Resource Data Type tag
+ * pci_vpd_alloc - Allocate buffer and read VPD into it
+ * @dev: PCI device
+ * @size: pointer to field where VPD length is returned
  *
- * Returns the extracted Large Resource Data Type length.
+ * Returns pointer to allocated buffer or an ERR_PTR in case of failure
  */
-static inline u16 pci_vpd_lrdt_size(const u8 *lrdt)
-{
-	return (u16)lrdt[1] + ((u16)lrdt[2] << 8);
-}
+void *pci_vpd_alloc(struct pci_dev *dev, unsigned int *size);
 
 /**
- * pci_vpd_lrdt_tag - Extracts the Large Resource Data Type Tag Item
- * @lrdt: Pointer to the beginning of the Large Resource Data Type tag
+ * pci_vpd_find_id_string - Locate id string in VPD
+ * @buf: Pointer to buffered VPD data
+ * @len: The length of the buffer area in which to search
+ * @size: Pointer to field where length of id string is returned
  *
- * Returns the extracted Large Resource Data Type Tag item.
+ * Returns the index of the id string or -ENOENT if not found.
  */
-static inline u16 pci_vpd_lrdt_tag(const u8 *lrdt)
-{
-	return (u16)(lrdt[0] & PCI_VPD_LRDT_TIN_MASK);
-}
+int pci_vpd_find_id_string(const u8 *buf, unsigned int len, unsigned int *size);
 
 /**
- * pci_vpd_srdt_size - Extracts the Small Resource Data Type length
- * @srdt: Pointer to the beginning of the Small Resource Data Type tag
- *
- * Returns the extracted Small Resource Data Type length.
- */
-static inline u8 pci_vpd_srdt_size(const u8 *srdt)
-{
-	return (*srdt) & PCI_VPD_SRDT_LEN_MASK;
-}
-
-/**
- * pci_vpd_srdt_tag - Extracts the Small Resource Data Type Tag Item
- * @srdt: Pointer to the beginning of the Small Resource Data Type tag
- *
- * Returns the extracted Small Resource Data Type Tag Item.
- */
-static inline u8 pci_vpd_srdt_tag(const u8 *srdt)
-{
-	return ((*srdt) & PCI_VPD_SRDT_TIN_MASK) >> 3;
-}
-
-/**
- * pci_vpd_info_field_size - Extracts the information field length
- * @info_field: Pointer to the beginning of an information field header
- *
- * Returns the extracted information field length.
- */
-static inline u8 pci_vpd_info_field_size(const u8 *info_field)
-{
-	return info_field[2];
-}
-
-/**
- * pci_vpd_find_tag - Locates the Resource Data Type tag provided
- * @buf: Pointer to buffered vpd data
- * @len: The length of the vpd buffer
- * @rdt: The Resource Data Type to search for
- *
- * Returns the index where the Resource Data Type was found or
- * -ENOENT otherwise.
- */
-int pci_vpd_find_tag(const u8 *buf, unsigned int len, u8 rdt);
-
-/**
- * pci_vpd_find_info_keyword - Locates an information field keyword in the VPD
- * @buf: Pointer to buffered vpd data
- * @off: The offset into the buffer at which to begin the search
- * @len: The length of the buffer area, relative to off, in which to search
+ * pci_vpd_find_ro_info_keyword - Locate info field keyword in VPD RO section
+ * @buf: Pointer to buffered VPD data
+ * @len: The length of the buffer area in which to search
  * @kw: The keyword to search for
+ * @size: Pointer to field where length of found keyword data is returned
  *
- * Returns the index where the information field keyword was found or
- * -ENOENT otherwise.
+ * Returns the index of the information field keyword data or -ENOENT if
+ * not found.
  */
-int pci_vpd_find_info_keyword(const u8 *buf, unsigned int off,
-			      unsigned int len, const char *kw);
+int pci_vpd_find_ro_info_keyword(const void *buf, unsigned int len,
+				 const char *kw, unsigned int *size);
+
+/**
+ * pci_vpd_check_csum - Check VPD checksum
+ * @buf: Pointer to buffered VPD data
+ * @len: VPD size
+ *
+ * Returns 1 if VPD has no checksum, otherwise 0 or an errno
+ */
+int pci_vpd_check_csum(const void *buf, unsigned int len);
 
 /* PCI <-> OF binding helpers */
 #ifdef CONFIG_OF
+1 -1
include/linux/pci_hotplug.h
···
 	int (*get_attention_status)	(struct hotplug_slot *slot, u8 *value);
 	int (*get_latch_status)		(struct hotplug_slot *slot, u8 *value);
 	int (*get_adapter_status)	(struct hotplug_slot *slot, u8 *value);
-	int (*reset_slot)		(struct hotplug_slot *slot, int probe);
+	int (*reset_slot)		(struct hotplug_slot *slot, bool probe);
 };
 
 /**
+2 -1
include/linux/pci_ids.h
···
 #define PCI_VENDOR_ID_TDI		0x192E
 #define PCI_DEVICE_ID_TDI_EHCI		0x0101
 
-#define PCI_VENDOR_ID_FREESCALE		0x1957
+#define PCI_VENDOR_ID_FREESCALE		0x1957	/* duplicate: NXP */
+#define PCI_VENDOR_ID_NXP		0x1957	/* duplicate: FREESCALE */
 #define PCI_DEVICE_ID_MPC8308		0xc006
 #define PCI_DEVICE_ID_MPC8315E		0x00b4
 #define PCI_DEVICE_ID_MPC8315		0x00b5
+1 -1
tools/pci/pcitest.c
···
 
 static int run_test(struct pci_test *test)
 {
-	struct pci_endpoint_test_xfer_param param;
+	struct pci_endpoint_test_xfer_param param = {};
 	int ret = -EINVAL;
 	int fd;
 