
Merge tag 'pci-v5.16-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:
- Conserve IRQs by setting up portdrv IRQs only when there are users
(Jan Kiszka)
- Rework and simplify _OSC negotiation for control of PCIe features
(Joerg Roedel)
- Remove struct pci_dev.driver pointer since it's redundant with the
struct device.driver pointer (Uwe Kleine-König)

Resource management:
- Coalesce contiguous host bridge apertures from _CRS to accommodate
BARs that cover more than one aperture (Kai-Heng Feng)
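The coalescing above can be sketched as a small model: two host bridge windows whose address ranges touch become one window, so a BAR larger than either original aperture can still be placed. This is a hedged userspace illustration with a hypothetical helper name, not the kernel's implementation:

```python
def coalesce_apertures(windows):
    """Merge host bridge windows whose [start, end] ranges touch or overlap.

    Models the _CRS coalescing described above: apertures [a, b] and
    [b + 1, c] become a single window [a, c].
    """
    merged = []
    for start, end in sorted(windows):
        if merged and start <= merged[-1][1] + 1:
            # Contiguous (or overlapping) with the previous window: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(w) for w in merged]
```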

Sysfs:
- Check CAP_SYS_ADMIN before parsing user input (Krzysztof
Wilczyński)
- Return -EINVAL consistently from "store" functions (Krzysztof
Wilczyński)
- Use sysfs_emit() in endpoint "show" functions to avoid buffer
overruns (Kunihiko Hayashi)

PCIe native device hotplug:
- Ignore Link Down/Up caused by resets during error recovery so
endpoint drivers can remain bound to the device (Lukas Wunner)

Virtualization:
- Avoid bus resets on Atheros QCA6174, where they hang the device
(Ingmar Klein)
- Work around Pericom PI7C9X2G switch packet drop erratum by using
store and forward mode instead of cut-through (Nathan Rossi)
- Avoid trying to enable AtomicOps on VFs; the PF setting applies to
all VFs (Selvin Xavier)

MSI:
- Document that /sys/bus/pci/devices/.../irq contains the legacy INTx
interrupt or the IRQ of the first MSI (not MSI-X) vector (Barry
Song)
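The documented semantics can be modeled with a small helper (hypothetical, userspace-only; in practice the value is read from the sysfs attribute):

```python
def describe_irq(irq, msi_enabled):
    """Interpret a value read from /sys/bus/pci/devices/.../irq
    per the semantics documented above."""
    if irq == 0:
        # 0 means the device cannot generate legacy INTx interrupts.
        return "no legacy INTx"
    if msi_enabled:
        return f"first MSI vector: IRQ {irq}"
    return f"legacy INTx: IRQ {irq}"
```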

VPD:
- Add pci_read_vpd_any() and pci_write_vpd_any() to access anywhere
in the possible VPD space; use these to simplify the cxgb3 driver
(Heiner Kallweit)

Peer-to-peer DMA:
- Add (not subtract) the bus offset when calculating DMA address
(Wang Lu)
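The sign of that fix matters: translating an address for a peer device must add the host bridge's bus offset, as the bullet states. A minimal model (hypothetical function name, not the kernel's helper):

```python
def p2pdma_bus_addr(phys_addr, bus_offset):
    """Compute the bus address a peer device must use for DMA.

    Per the fix above, the bus offset is added, not subtracted.
    """
    return phys_addr + bus_offset
```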

ASPM:
- Re-enable LTR at Downstream Ports so they don't report Unsupported
Requests when reset or hot-added devices send LTR messages
(Mingchuang Qiao)

Apple PCIe controller driver:
- Add driver for Apple M1 PCIe controller (Alyssa Rosenzweig, Marc
Zyngier)

Cadence PCIe controller driver:
- Return success when probe succeeds instead of falling into error
path (Li Chen)

HiSilicon Kirin PCIe controller driver:
- Reorganize PHY logic and add support for external PHY drivers
(Mauro Carvalho Chehab)
- Support PERST# GPIOs for HiKey970 external PEX 8606 bridge (Mauro
Carvalho Chehab)
- Add Kirin 970 support (Mauro Carvalho Chehab)
- Make driver removable (Mauro Carvalho Chehab)

Intel VMD host bridge driver:
- If IOMMU supports interrupt remapping, leave VMD MSI-X remapping
enabled (Adrian Huang)
- Number each controller so we can tell them apart in
/proc/interrupts (Chunguang Xu)
- Avoid building on UML because VMD depends on x86 bare metal APIs
(Johannes Berg)

Marvell Aardvark PCIe controller driver:
- Define macros for PCI_EXP_DEVCTL_PAYLOAD_* (Pali Rohár)
- Set Max Payload Size to 512 bytes per Marvell spec (Pali Rohár)
- Downgrade PIO Response Status messages to debug level (Marek Behún)
- Preserve CRS SV (Config Request Retry Software Visibility) bit in
emulated Root Control register (Pali Rohár)
- Fix issue in configuring reference clock (Pali Rohár)
- Don't clear status bits for masked interrupts (Pali Rohár)
- Don't mask unused interrupts (Pali Rohár)
- Avoid code repetition in advk_pcie_rd_conf() (Marek Behún)
- Retry config accesses on CRS response (Pali Rohár)
- Simplify emulated Root Capabilities initialization (Pali Rohár)
- Fix several link training issues (Pali Rohár)
- Fix link-up checking via LTSSM (Pali Rohár)
- Fix reporting of Data Link Layer Link Active (Pali Rohár)
- Fix emulation of W1C bits (Marek Behún)
- Fix MSI domain .alloc() method to return zero on success (Marek
Behún)
- Read entire 16-bit MSI vector in MSI handler, not just low 8 bits
(Marek Behún)
- Clear Root Port I/O Space, Memory Space, and Bus Master Enable bits
at startup; PCI core will set those as necessary (Pali Rohár)
- When operating as a Root Port, set class code to "PCI Bridge"
instead of the default "Mass Storage Controller" (Pali Rohár)
- Add emulation for PCI_BRIDGE_CTL_BUS_RESET since aardvark doesn't
implement this per spec (Pali Rohár)
- Add emulation of option ROM BAR since aardvark doesn't implement
this per spec (Pali Rohár)
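One of those fixes, retrying config accesses on a CRS response, can be sketched in a hardware-free model: with CRS Software Visibility enabled, a config read of the Vendor ID returns the special value 0x0001 until the device is ready, so the access is retried. Names and the retry count below are illustrative, not aardvark's actual code:

```python
CRS_VENDOR_ID = 0x0001  # special Vendor ID returned while the device is not ready

def read_vendor_id_with_retry(read_cfg, max_retries=5):
    """Retry a Vendor ID config read while the completion is CRS."""
    val = read_cfg()
    for _ in range(max_retries):
        if val != CRS_VENDOR_ID:
            return val
        val = read_cfg()  # device still initializing: try again
    return val
```

For example, a device that answers CRS twice before reporting its real Vendor ID still yields the real value to the caller.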

MediaTek MT7621 PCIe controller driver:
- Add MediaTek MT7621 PCIe host controller driver and DT binding
(Sergio Paracuellos)

Qualcomm PCIe controller driver:
- Add SC8180x compatible string (Bjorn Andersson)
- Add endpoint controller driver and DT binding (Manivannan
Sadhasivam)
- Restructure to use of_device_get_match_data() (Prasad Malisetty)
- Add SC7280-specific pcie_1_pipe_clk_src handling (Prasad Malisetty)

Renesas R-Car PCIe controller driver:
- Remove unnecessary includes (Geert Uytterhoeven)

Rockchip DesignWare PCIe controller driver:
- Add DT binding (Simon Xue)

Socionext UniPhier Pro5 controller driver:
- Serialize INTx masking/unmasking (Kunihiko Hayashi)

Synopsys DesignWare PCIe controller driver:
- Run dwc .host_init() method before registering MSI interrupt
handler so we can deal with pending interrupts left by bootloader
(Bjorn Andersson)
- Clean up Kconfig dependencies (Andy Shevchenko)
- Export symbols to allow more modular drivers (Luca Ceresoli)

TI DRA7xx PCIe controller driver:
- Allow host and endpoint drivers to be modules (Luca Ceresoli)
- Enable external clock if present (Luca Ceresoli)

TI J721E PCIe driver:
- Disable PHY when probe fails after initializing it (Christophe
JAILLET)

MicroSemi Switchtec management driver:
- Return error to application when command execution fails because an
out-of-band reset has cleared the device BARs, Memory Space Enable,
etc (Kelvin Cao)
- Fix MRPC error status handling issue (Kelvin Cao)
- Mask out other bits when reading the management VEP instance ID
(Kelvin Cao)
- Return EOPNOTSUPP instead of ENOTSUPP from sysfs show functions
(Kelvin Cao)
- Add check of event support (Logan Gunthorpe)

Miscellaneous:
- Remove unused pci_pool wrappers, which have been replaced by
dma_pool (Cai Huoqing)
- Use 'unsigned int' instead of bare 'unsigned' (Krzysztof
Wilczyński)
- Use kstrtobool() directly, sans strtobool() wrapper (Krzysztof
Wilczyński)
- Fix some sscanf(), sprintf() format mismatches (Krzysztof
Wilczyński)
- Update PCI subsystem information in MAINTAINERS (Krzysztof
Wilczyński)
- Correct some misspellings (Krzysztof Wilczyński)"

* tag 'pci-v5.16-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (137 commits)
PCI: Add ACS quirk for Pericom PI7C9X2G switches
PCI: apple: Configure RID to SID mapper on device addition
iommu/dart: Exclude MSI doorbell from PCIe device IOVA range
PCI: apple: Implement MSI support
PCI: apple: Add INTx and per-port interrupt support
PCI: kirin: Allow removing the driver
PCI: kirin: De-init the dwc driver
PCI: kirin: Disable clkreq during poweroff sequence
PCI: kirin: Move the power-off code to a common routine
PCI: kirin: Add power_off support for Kirin 960 PHY
PCI: kirin: Allow building it as a module
PCI: kirin: Add MODULE_* macros
PCI: kirin: Add Kirin 970 compatible
PCI: kirin: Support PERST# GPIOs for HiKey970 external PEX 8606 bridge
PCI: apple: Set up reference clocks when probing
PCI: apple: Add initial hardware bring-up
PCI: of: Allow matching of an interrupt-map local to a PCI device
of/irq: Allow matching of an interrupt-map local to an interrupt controller
irqdomain: Make of_phandle_args_to_fwspec() generally available
PCI: Do not enable AtomicOps on VFs
...

+3943 -1190
+11
Documentation/ABI/testing/sysfs-bus-pci
···
 		This attribute indicates the mode that the irq vector named by
 		the file is in (msi vs. msix)
 
+What:		/sys/bus/pci/devices/.../irq
+Date:		August 2021
+Contact:	Linux PCI developers <linux-pci@vger.kernel.org>
+Description:
+		If a driver has enabled MSI (not MSI-X), "irq" contains the
+		IRQ of the first MSI vector. Otherwise "irq" contains the
+		IRQ of the legacy INTx interrupt.
+
+		"irq" being set to 0 indicates that the device isn't
+		capable of generating legacy INTx interrupts.
+
 What:		/sys/bus/pci/devices/.../remove
 Date:		January 2009
 Contact:	Linux PCI developers <linux-pci@vger.kernel.org>
+142
Documentation/devicetree/bindings/pci/mediatek,mt7621-pcie.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/mediatek,mt7621-pcie.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: MediaTek MT7621 PCIe controller
+
+maintainers:
+  - Sergio Paracuellos <sergio.paracuellos@gmail.com>
+
+description: |+
+  MediaTek MT7621 PCIe subsys supports a single Root Complex (RC)
+  with 3 Root Ports. Each Root Port supports a Gen1 1-lane Link
+
+allOf:
+  - $ref: /schemas/pci/pci-bus.yaml#
+
+properties:
+  compatible:
+    const: mediatek,mt7621-pci
+
+  reg:
+    items:
+      - description: host-pci bridge registers
+      - description: pcie port 0 RC control registers
+      - description: pcie port 1 RC control registers
+      - description: pcie port 2 RC control registers
+
+  ranges:
+    maxItems: 2
+
+patternProperties:
+  'pcie@[0-2],0':
+    type: object
+    $ref: /schemas/pci/pci-bus.yaml#
+
+    properties:
+      resets:
+        maxItems: 1
+
+      clocks:
+        maxItems: 1
+
+      phys:
+        maxItems: 1
+
+    required:
+      - "#interrupt-cells"
+      - interrupt-map-mask
+      - interrupt-map
+      - resets
+      - clocks
+      - phys
+      - phy-names
+      - ranges
+
+    unevaluatedProperties: false
+
+required:
+  - compatible
+  - reg
+  - ranges
+  - "#interrupt-cells"
+  - interrupt-map-mask
+  - interrupt-map
+  - reset-gpios
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/interrupt-controller/mips-gic.h>
+
+    pcie: pcie@1e140000 {
+        compatible = "mediatek,mt7621-pci";
+        reg = <0x1e140000 0x100>,
+              <0x1e142000 0x100>,
+              <0x1e143000 0x100>,
+              <0x1e144000 0x100>;
+
+        #address-cells = <3>;
+        #size-cells = <2>;
+        pinctrl-names = "default";
+        pinctrl-0 = <&pcie_pins>;
+        device_type = "pci";
+        ranges = <0x02000000 0 0x60000000 0x60000000 0 0x10000000>, /* pci memory */
+                 <0x01000000 0 0x1e160000 0x1e160000 0 0x00010000>; /* io space */
+        #interrupt-cells = <1>;
+        interrupt-map-mask = <0xF800 0 0 0>;
+        interrupt-map = <0x0000 0 0 0 &gic GIC_SHARED 4 IRQ_TYPE_LEVEL_HIGH>,
+                        <0x0800 0 0 0 &gic GIC_SHARED 24 IRQ_TYPE_LEVEL_HIGH>,
+                        <0x1000 0 0 0 &gic GIC_SHARED 25 IRQ_TYPE_LEVEL_HIGH>;
+        reset-gpios = <&gpio 19 GPIO_ACTIVE_LOW>;
+
+        pcie@0,0 {
+            reg = <0x0000 0 0 0 0>;
+            #address-cells = <3>;
+            #size-cells = <2>;
+            device_type = "pci";
+            #interrupt-cells = <1>;
+            interrupt-map-mask = <0 0 0 0>;
+            interrupt-map = <0 0 0 0 &gic GIC_SHARED 4 IRQ_TYPE_LEVEL_HIGH>;
+            resets = <&rstctrl 24>;
+            clocks = <&clkctrl 24>;
+            phys = <&pcie0_phy 1>;
+            phy-names = "pcie-phy0";
+            ranges;
+        };
+
+        pcie@1,0 {
+            reg = <0x0800 0 0 0 0>;
+            #address-cells = <3>;
+            #size-cells = <2>;
+            device_type = "pci";
+            #interrupt-cells = <1>;
+            interrupt-map-mask = <0 0 0 0>;
+            interrupt-map = <0 0 0 0 &gic GIC_SHARED 24 IRQ_TYPE_LEVEL_HIGH>;
+            resets = <&rstctrl 25>;
+            clocks = <&clkctrl 25>;
+            phys = <&pcie0_phy 1>;
+            phy-names = "pcie-phy1";
+            ranges;
+        };
+
+        pcie@2,0 {
+            reg = <0x1000 0 0 0 0>;
+            #address-cells = <3>;
+            #size-cells = <2>;
+            device_type = "pci";
+            #interrupt-cells = <1>;
+            interrupt-map-mask = <0 0 0 0>;
+            interrupt-map = <0 0 0 0 &gic GIC_SHARED 25 IRQ_TYPE_LEVEL_HIGH>;
+            resets = <&rstctrl 26>;
+            clocks = <&clkctrl 26>;
+            phys = <&pcie2_phy 0>;
+            phy-names = "pcie-phy2";
+            ranges;
+        };
+    };
+...
+158
Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/qcom,pcie-ep.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm PCIe Endpoint Controller binding
+
+maintainers:
+  - Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+
+allOf:
+  - $ref: "pci-ep.yaml#"
+
+properties:
+  compatible:
+    const: qcom,sdx55-pcie-ep
+
+  reg:
+    items:
+      - description: Qualcomm-specific PARF configuration registers
+      - description: DesignWare PCIe registers
+      - description: External local bus interface registers
+      - description: Address Translation Unit (ATU) registers
+      - description: Memory region used to map remote RC address space
+      - description: BAR memory region
+
+  reg-names:
+    items:
+      - const: parf
+      - const: dbi
+      - const: elbi
+      - const: atu
+      - const: addr_space
+      - const: mmio
+
+  clocks:
+    items:
+      - description: PCIe Auxiliary clock
+      - description: PCIe CFG AHB clock
+      - description: PCIe Master AXI clock
+      - description: PCIe Slave AXI clock
+      - description: PCIe Slave Q2A AXI clock
+      - description: PCIe Sleep clock
+      - description: PCIe Reference clock
+
+  clock-names:
+    items:
+      - const: aux
+      - const: cfg
+      - const: bus_master
+      - const: bus_slave
+      - const: slave_q2a
+      - const: sleep
+      - const: ref
+
+  qcom,perst-regs:
+    description: Reference to a syscon representing TCSR followed by the two
+                 offsets within syscon for Perst enable and Perst separation
+                 enable registers
+    $ref: "/schemas/types.yaml#/definitions/phandle-array"
+    items:
+      minItems: 3
+      maxItems: 3
+
+  interrupts:
+    items:
+      - description: PCIe Global interrupt
+      - description: PCIe Doorbell interrupt
+
+  interrupt-names:
+    items:
+      - const: global
+      - const: doorbell
+
+  reset-gpios:
+    description: GPIO used as PERST# input signal
+    maxItems: 1
+
+  wake-gpios:
+    description: GPIO used as WAKE# output signal
+    maxItems: 1
+
+  resets:
+    maxItems: 1
+
+  reset-names:
+    const: core
+
+  power-domains:
+    maxItems: 1
+
+  phys:
+    maxItems: 1
+
+  phy-names:
+    const: pciephy
+
+  num-lanes:
+    default: 2
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - clocks
+  - clock-names
+  - qcom,perst-regs
+  - interrupts
+  - interrupt-names
+  - reset-gpios
+  - resets
+  - reset-names
+  - power-domains
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/qcom,gcc-sdx55.h>
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    pcie_ep: pcie-ep@40000000 {
+        compatible = "qcom,sdx55-pcie-ep";
+        reg = <0x01c00000 0x3000>,
+              <0x40000000 0xf1d>,
+              <0x40000f20 0xc8>,
+              <0x40001000 0x1000>,
+              <0x40002000 0x1000>,
+              <0x01c03000 0x3000>;
+        reg-names = "parf", "dbi", "elbi", "atu", "addr_space",
+                    "mmio";
+
+        clocks = <&gcc GCC_PCIE_AUX_CLK>,
+                 <&gcc GCC_PCIE_CFG_AHB_CLK>,
+                 <&gcc GCC_PCIE_MSTR_AXI_CLK>,
+                 <&gcc GCC_PCIE_SLV_AXI_CLK>,
+                 <&gcc GCC_PCIE_SLV_Q2A_AXI_CLK>,
+                 <&gcc GCC_PCIE_SLEEP_CLK>,
+                 <&gcc GCC_PCIE_0_CLKREF_CLK>;
+        clock-names = "aux", "cfg", "bus_master", "bus_slave",
+                      "slave_q2a", "sleep", "ref";
+
+        qcom,perst-regs = <&tcsr 0xb258 0xb270>;
+
+        interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-names = "global", "doorbell";
+        reset-gpios = <&tlmm 57 GPIO_ACTIVE_LOW>;
+        wake-gpios = <&tlmm 53 GPIO_ACTIVE_LOW>;
+        resets = <&gcc GCC_PCIE_BCR>;
+        reset-names = "core";
+        power-domains = <&gcc PCIE_GDSC>;
+        phys = <&pcie0_lane>;
+        phy-names = "pciephy";
+        max-link-speed = <3>;
+        num-lanes = <2>;
+    };
+3 -2
Documentation/devicetree/bindings/pci/qcom,pcie.txt
···
 			- "qcom,pcie-ipq4019" for ipq4019
 			- "qcom,pcie-ipq8074" for ipq8074
 			- "qcom,pcie-qcs404" for qcs404
+			- "qcom,pcie-sc8180x" for sc8180x
 			- "qcom,pcie-sdm845" for sdm845
 			- "qcom,pcie-sm8250" for sm8250
 			- "qcom,pcie-ipq6018" for ipq6018
···
 			- "pipe"	PIPE clock
 
 - clock-names:
-	Usage: required for sm8250
+	Usage: required for sc8180x and sm8250
 	Value type: <stringlist>
 	Definition: Should contain the following entries
 			- "aux"	Auxiliary clock
···
 			- "ahb"	AHB reset
 
 - reset-names:
-	Usage: required for sdm845 and sm8250
+	Usage: required for sc8180x, sdm845 and sm8250
 	Value type: <stringlist>
 	Definition: Should contain the following entries
 			- "pci"	PCIe core reset
+141
Documentation/devicetree/bindings/pci/rockchip-dw-pcie.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/rockchip-dw-pcie.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: DesignWare based PCIe controller on Rockchip SoCs
+
+maintainers:
+  - Shawn Lin <shawn.lin@rock-chips.com>
+  - Simon Xue <xxm@rock-chips.com>
+  - Heiko Stuebner <heiko@sntech.de>
+
+description: |+
+  RK3568 SoC PCIe host controller is based on the Synopsys DesignWare
+  PCIe IP and thus inherits all the common properties defined in
+  designware-pcie.txt.
+
+allOf:
+  - $ref: /schemas/pci/pci-bus.yaml#
+
+# We need a select here so we don't match all nodes with 'snps,dw-pcie'
+select:
+  properties:
+    compatible:
+      contains:
+        const: rockchip,rk3568-pcie
+  required:
+    - compatible
+
+properties:
+  compatible:
+    items:
+      - const: rockchip,rk3568-pcie
+      - const: snps,dw-pcie
+
+  reg:
+    items:
+      - description: Data Bus Interface (DBI) registers
+      - description: Rockchip designed configuration registers
+      - description: Config registers
+
+  reg-names:
+    items:
+      - const: dbi
+      - const: apb
+      - const: config
+
+  clocks:
+    items:
+      - description: AHB clock for PCIe master
+      - description: AHB clock for PCIe slave
+      - description: AHB clock for PCIe dbi
+      - description: APB clock for PCIe
+      - description: Auxiliary clock for PCIe
+
+  clock-names:
+    items:
+      - const: aclk_mst
+      - const: aclk_slv
+      - const: aclk_dbi
+      - const: pclk
+      - const: aux
+
+  msi-map: true
+
+  num-lanes: true
+
+  phys:
+    maxItems: 1
+
+  phy-names:
+    const: pcie-phy
+
+  power-domains:
+    maxItems: 1
+
+  ranges:
+    maxItems: 2
+
+  resets:
+    maxItems: 1
+
+  reset-names:
+    const: pipe
+
+  vpcie3v3-supply: true
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - clocks
+  - clock-names
+  - msi-map
+  - num-lanes
+  - phys
+  - phy-names
+  - power-domains
+  - resets
+  - reset-names
+
+unevaluatedProperties: false
+
+examples:
+  - |
+
+    bus {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        pcie3x2: pcie@fe280000 {
+            compatible = "rockchip,rk3568-pcie", "snps,dw-pcie";
+            reg = <0x3 0xc0800000 0x0 0x390000>,
+                  <0x0 0xfe280000 0x0 0x10000>,
+                  <0x3 0x80000000 0x0 0x100000>;
+            reg-names = "dbi", "apb", "config";
+            bus-range = <0x20 0x2f>;
+            clocks = <&cru 143>, <&cru 144>,
+                     <&cru 145>, <&cru 146>,
+                     <&cru 147>;
+            clock-names = "aclk_mst", "aclk_slv",
+                          "aclk_dbi", "pclk",
+                          "aux";
+            device_type = "pci";
+            linux,pci-domain = <2>;
+            max-link-speed = <2>;
+            msi-map = <0x2000 &its 0x2000 0x1000>;
+            num-lanes = <2>;
+            phys = <&pcie30phy>;
+            phy-names = "pcie-phy";
+            power-domains = <&power 15>;
+            ranges = <0x81000000 0x0 0x80800000 0x3 0x80800000 0x0 0x100000>,
+                     <0x83000000 0x0 0x80900000 0x3 0x80900000 0x0 0x3f700000>;
+            resets = <&cru 193>;
+            reset-names = "pipe";
+            #address-cells = <3>;
+            #size-cells = <2>;
+        };
+    };
+...
+35 -5
MAINTAINERS
···
 F:	Documentation/devicetree/bindings/iommu/apple,dart.yaml
 F:	drivers/iommu/apple-dart.c
 
+APPLE PCIE CONTROLLER DRIVER
+M:	Alyssa Rosenzweig <alyssa@rosenzweig.io>
+M:	Marc Zyngier <maz@kernel.org>
+L:	linux-pci@vger.kernel.org
+S:	Maintained
+F:	drivers/pci/controller/pcie-apple.c
+
 APPLE SMC DRIVER
 M:	Henrik Rydberg <rydberg@bitmath.org>
 L:	linux-hwmon@vger.kernel.org
···
 F:	Documentation/devicetree/bindings/i2c/i2c-mt7621.txt
 F:	drivers/i2c/busses/i2c-mt7621.c
 
+MEDIATEK MT7621 PCIE CONTROLLER DRIVER
+M:	Sergio Paracuellos <sergio.paracuellos@gmail.com>
+S:	Maintained
+F:	Documentation/devicetree/bindings/pci/mediatek,mt7621-pcie.yaml
+F:	drivers/pci/controller/pcie-mt7621.c
+
 MEDIATEK MT7621 PHY PCI DRIVER
 M:	Sergio Paracuellos <sergio.paracuellos@gmail.com>
 S:	Maintained
···
 R:	Krzysztof Wilczyński <kw@linux.com>
 L:	linux-pci@vger.kernel.org
 S:	Supported
+Q:	https://patchwork.kernel.org/project/linux-pci/list/
+B:	https://bugzilla.kernel.org
+C:	irc://irc.oftc.net/linux-pci
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git
 F:	Documentation/PCI/endpoint/*
 F:	Documentation/misc-devices/pci-endpoint-test.rst
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kishon/pci-endpoint.git
 F:	drivers/misc/pci_endpoint_test.c
 F:	drivers/pci/endpoint/
 F:	tools/pci/
···
 R:	Krzysztof Wilczyński <kw@linux.com>
 L:	linux-pci@vger.kernel.org
 S:	Supported
-Q:	http://patchwork.ozlabs.org/project/linux-pci/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git/
+Q:	https://patchwork.kernel.org/project/linux-pci/list/
+B:	https://bugzilla.kernel.org
+C:	irc://irc.oftc.net/linux-pci
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git
 F:	drivers/pci/controller/
+F:	drivers/pci/pci-bridge-emul.c
+F:	drivers/pci/pci-bridge-emul.h
 
 PCI SUBSYSTEM
 M:	Bjorn Helgaas <bhelgaas@google.com>
 L:	linux-pci@vger.kernel.org
 S:	Supported
-Q:	http://patchwork.ozlabs.org/project/linux-pci/list/
+Q:	https://patchwork.kernel.org/project/linux-pci/list/
+B:	https://bugzilla.kernel.org
+C:	irc://irc.oftc.net/linux-pci
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git
 F:	Documentation/PCI/
 F:	Documentation/devicetree/bindings/pci/
···
 L:	linux-pci@vger.kernel.org
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
-F:	drivers/pci/controller/dwc/*qcom*
+F:	drivers/pci/controller/dwc/pcie-qcom.c
+
+PCIE ENDPOINT DRIVER FOR QUALCOMM
+M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+L:	linux-pci@vger.kernel.org
+L:	linux-arm-msm@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml
+F:	drivers/pci/controller/dwc/pcie-qcom-ep.c
 
 PCIE DRIVER FOR ROCKCHIP
 M:	Shawn Lin <shawn.lin@rock-chips.com>
+1 -2
arch/microblaze/pci/pci-common.c
···
 }
 DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, pcibios_fixup_resources);
 
-int pcibios_add_device(struct pci_dev *dev)
+int pcibios_device_add(struct pci_dev *dev)
 {
 	dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
 
 	return 0;
 }
-EXPORT_SYMBOL(pcibios_add_device);
 
 /*
  * Reparent resource children of pr that conflict with res
+2 -1
arch/mips/ralink/Kconfig
···
 		select SYS_SUPPORTS_HIGHMEM
 		select MIPS_GIC
 		select CLKSRC_MIPS_GIC
-		select HAVE_PCI if PCI_MT7621
+		select HAVE_PCI
+		select PCI_DRIVERS_GENERIC
 		select SOC_BUS
 endchoice
-5
arch/powerpc/include/asm/ppc-pci.h
···
 void eeh_sysfs_add_device(struct pci_dev *pdev);
 void eeh_sysfs_remove_device(struct pci_dev *pdev);
 
-static inline const char *eeh_driver_name(struct pci_dev *pdev)
-{
-	return (pdev && pdev->driver) ? pdev->driver->name : "<null>";
-}
-
 #endif /* CONFIG_EEH */
 
 #define PCI_BUSNO(bdfn) ((bdfn >> 8) & 0xff)
+8
arch/powerpc/kernel/eeh.c
···
 	return ret;
 }
 
+static inline const char *eeh_driver_name(struct pci_dev *pdev)
+{
+	if (pdev)
+		return dev_driver_string(&pdev->dev);
+
+	return "<null>";
+}
+
 /**
  * eeh_dev_check_failure - Check if all 1's data is due to EEH slot freeze
  * @edev: eeh device
+5 -5
arch/powerpc/kernel/eeh_driver.c
···
  */
 static inline struct pci_driver *eeh_pcid_get(struct pci_dev *pdev)
 {
-	if (!pdev || !pdev->driver)
+	if (!pdev || !pdev->dev.driver)
 		return NULL;
 
-	if (!try_module_get(pdev->driver->driver.owner))
+	if (!try_module_get(pdev->dev.driver->owner))
 		return NULL;
 
-	return pdev->driver;
+	return to_pci_driver(pdev->dev.driver);
 }
 
 /**
···
  */
 static inline void eeh_pcid_put(struct pci_dev *pdev)
 {
-	if (!pdev || !pdev->driver)
+	if (!pdev || !pdev->dev.driver)
 		return;
 
-	module_put(pdev->driver->driver.owner);
+	module_put(pdev->dev.driver->owner);
 }
 
 /**
+1 -1
arch/powerpc/kernel/pci-common.c
···
 		ppc_md.pcibios_bus_add_device(dev);
 }
 
-int pcibios_add_device(struct pci_dev *dev)
+int pcibios_device_add(struct pci_dev *dev)
 {
 	struct irq_domain *d;
+1 -1
arch/powerpc/platforms/powernv/pci-sriov.c
···
  * to "new_size", calculated above. Implementing this is a convoluted process
  * which requires several hooks in the PCI core:
  *
- * 1. In pcibios_add_device() we call pnv_pci_ioda_fixup_iov().
+ * 1. In pcibios_device_add() we call pnv_pci_ioda_fixup_iov().
  *
  * At this point the device has been probed and the device's BARs are sized,
  * but no resource allocations have been done. The SR-IOV BARs are sized
+1 -1
arch/s390/pci/pci.c
···
 	zdev->has_resources = 0;
 }
 
-int pcibios_add_device(struct pci_dev *pdev)
+int pcibios_device_add(struct pci_dev *pdev)
 {
 	struct zpci_dev *zdev = to_zpci(pdev);
 	struct resource *res;
+1 -1
arch/sparc/kernel/pci.c
···
 }
 
 #ifdef CONFIG_PCI_IOV
-int pcibios_add_device(struct pci_dev *dev)
+int pcibios_device_add(struct pci_dev *dev)
 {
 	struct pci_dev *pdev;
+1 -1
arch/x86/events/intel/uncore.c
···
 	 * PCI slot and func to indicate the uncore box.
 	 */
 	if (id->driver_data & ~0xffff) {
-		struct pci_driver *pci_drv = pdev->driver;
+		struct pci_driver *pci_drv = to_pci_driver(pdev->dev.driver);
 
 		pmu = uncore_pci_find_dev_pmu(pdev, pci_drv->id_table);
 		if (pmu == NULL)
+1 -1
arch/x86/kernel/probe_roms.c
···
  */
 static bool match_id(struct pci_dev *pdev, unsigned short vendor, unsigned short device)
 {
-	struct pci_driver *drv = pdev->driver;
+	struct pci_driver *drv = to_pci_driver(pdev->dev.driver);
 	const struct pci_device_id *id;
 
 	if (pdev->vendor == vendor && pdev->device == device)
+1 -1
arch/x86/pci/common.c
···
 	pdev->hotplug_user_indicators = 1;
 }
 
-int pcibios_add_device(struct pci_dev *dev)
+int pcibios_device_add(struct pci_dev *dev)
 {
 	struct pci_setup_rom *rom;
 	struct irq_domain *msidom;
+84 -77
drivers/acpi/pci_root.c
···
 	acpi_status status;
 	u32 result, capbuf[3];
 
-	support &= OSC_PCI_SUPPORT_MASKS;
 	support |= root->osc_support_set;
 
 	capbuf[OSC_QUERY_DWORD] = OSC_QUERY_ENABLE;
 	capbuf[OSC_SUPPORT_DWORD] = support;
-	if (control) {
-		*control &= OSC_PCI_CONTROL_MASKS;
-		capbuf[OSC_CONTROL_DWORD] = *control | root->osc_control_set;
-	} else {
-		/* Run _OSC query only with existing controls. */
-		capbuf[OSC_CONTROL_DWORD] = root->osc_control_set;
-	}
+	capbuf[OSC_CONTROL_DWORD] = *control | root->osc_control_set;
 
 	status = acpi_pci_run_osc(root->device->handle, capbuf, &result);
 	if (ACPI_SUCCESS(status)) {
 		root->osc_support_set = support;
-		if (control)
-			*control = result;
+		*control = result;
 	}
 	return status;
-}
-
-static acpi_status acpi_pci_osc_support(struct acpi_pci_root *root, u32 flags)
-{
-	return acpi_pci_query_osc(root, flags, NULL);
 }
 
 struct acpi_pci_root *acpi_pci_find_root(acpi_handle handle)
···
  * _OSC bits the BIOS has granted control of, but its contents are meaningless
  * on failure.
  **/
-static acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req)
+static acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 support)
 {
+	u32 req = OSC_PCI_EXPRESS_CAPABILITY_CONTROL;
 	struct acpi_pci_root *root;
 	acpi_status status;
 	u32 ctrl, capbuf[3];
···
 	if (!mask)
 		return AE_BAD_PARAMETER;
 
-	ctrl = *mask & OSC_PCI_CONTROL_MASKS;
-	if ((ctrl & req) != req)
-		return AE_TYPE;
-
 	root = acpi_pci_find_root(handle);
 	if (!root)
 		return AE_NOT_EXIST;
 
-	*mask = ctrl | root->osc_control_set;
-	/* No need to evaluate _OSC if the control was already granted. */
-	if ((root->osc_control_set & ctrl) == ctrl)
-		return AE_OK;
+	ctrl = *mask;
+	*mask |= root->osc_control_set;
 
 	/* Need to check the available controls bits before requesting them. */
-	while (*mask) {
-		status = acpi_pci_query_osc(root, root->osc_support_set, mask);
+	do {
+		status = acpi_pci_query_osc(root, support, mask);
 		if (ACPI_FAILURE(status))
 			return status;
 		if (ctrl == *mask)
···
 		decode_osc_control(root, "platform does not support",
 				   ctrl & ~(*mask));
 		ctrl = *mask;
-	}
+	} while (*mask);
+
+	/* No need to request _OSC if the control was already granted. */
+	if ((root->osc_control_set & ctrl) == ctrl)
+		return AE_OK;
 
 	if ((ctrl & req) != req) {
 		decode_osc_control(root, "not requesting control; platform does not support",
···
 	return AE_OK;
 }
 
-static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
-				 bool is_pcie)
+static u32 calculate_support(void)
 {
-	u32 support, control, requested;
-	acpi_status status;
-	struct acpi_device *device = root->device;
-	acpi_handle handle = device->handle;
-
-	/*
-	 * Apple always return failure on _OSC calls when _OSI("Darwin") has
-	 * been called successfully. We know the feature set supported by the
-	 * platform, so avoid calling _OSC at all
-	 */
-	if (x86_apple_machine) {
-		root->osc_control_set = ~OSC_PCI_EXPRESS_PME_CONTROL;
-		decode_osc_control(root, "OS assumes control of",
-				   root->osc_control_set);
-		return;
-	}
+	u32 support;
 
 	/*
 	 * All supported architectures that use ACPI have support for
···
 	if (IS_ENABLED(CONFIG_PCIE_EDR))
 		support |= OSC_PCI_EDR_SUPPORT;
 
-	decode_osc_support(root, "OS supports", support);
-	status = acpi_pci_osc_support(root, support);
-	if (ACPI_FAILURE(status)) {
-		*no_aspm = 1;
+	return support;
+}
 
-		/* _OSC is optional for PCI host bridges */
-		if ((status == AE_NOT_FOUND) && !is_pcie)
-			return;
-
-		dev_info(&device->dev, "_OSC: platform retains control of PCIe features (%s)\n",
-			 acpi_format_exception(status));
-		return;
-	}
-
-	if (pcie_ports_disabled) {
-		dev_info(&device->dev, "PCIe port services disabled; not requesting _OSC control\n");
-		return;
-	}
-
-	if ((support & ACPI_PCIE_REQ_SUPPORT) != ACPI_PCIE_REQ_SUPPORT) {
-		decode_osc_support(root, "not requesting OS control; OS requires",
-				   ACPI_PCIE_REQ_SUPPORT);
-		return;
-	}
+static u32 calculate_control(void)
+{
+	u32 control;
 
 	control = OSC_PCI_EXPRESS_CAPABILITY_CONTROL
 		| OSC_PCI_EXPRESS_PME_CONTROL;
···
 	if (IS_ENABLED(CONFIG_PCIE_DPC) && IS_ENABLED(CONFIG_PCIE_EDR))
 		control |= OSC_PCI_EXPRESS_DPC_CONTROL;
 
-	requested = control;
-	status = acpi_pci_osc_control_set(handle, &control,
-					  OSC_PCI_EXPRESS_CAPABILITY_CONTROL);
+	return control;
+}
+
+static bool os_control_query_checks(struct acpi_pci_root *root, u32 support)
+{
+	struct acpi_device *device = root->device;
+
+	if (pcie_ports_disabled) {
+		dev_info(&device->dev, "PCIe port services disabled; not requesting _OSC control\n");
+		return false;
+	}
+
+	if ((support & ACPI_PCIE_REQ_SUPPORT) != ACPI_PCIE_REQ_SUPPORT) {
+		decode_osc_support(root, "not requesting OS control; OS requires",
+				   ACPI_PCIE_REQ_SUPPORT);
+		return false;
+	}
+
+	return true;
+}
+
+static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
+				 bool is_pcie)
+{
+	u32 support, control = 0, requested = 0;
+	acpi_status status;
+	struct acpi_device *device = root->device;
+	acpi_handle handle = device->handle;
+
+	/*
+	 * Apple always return failure on _OSC calls when _OSI("Darwin") has
+	 * been called successfully. We know the feature set supported by the
+	 * platform, so avoid calling _OSC at all
+	 */
+	if (x86_apple_machine) {
+		root->osc_control_set = ~OSC_PCI_EXPRESS_PME_CONTROL;
+		decode_osc_control(root, "OS assumes control of",
+				   root->osc_control_set);
+		return;
+	}
+
+	support = calculate_support();
+
+	decode_osc_support(root, "OS supports", support);
+
+	if (os_control_query_checks(root, support))
+		requested = control = calculate_control();
+
+	status = acpi_pci_osc_control_set(handle, &control, support);
 	if (ACPI_SUCCESS(status)) {
-		decode_osc_control(root, "OS now controls", control);
+		if (control)
+			decode_osc_control(root, "OS now controls", control);
+
 		if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_ASPM) {
 			/*
 			 * We have ASPM control, but the FADT indicates that
···
 			*no_aspm = 1;
 		}
 	} else {
-		decode_osc_control(root, "OS requested", requested);
-		decode_osc_control(root, "platform willing to grant", control);
-		dev_info(&device->dev, "_OSC: platform retains control of PCIe features (%s)\n",
-			 acpi_format_exception(status));
-
 		/*
 		 * We want to disable ASPM here, but aspm_disabled
508 503 * needs to remain in its state from boot so that we ··· 506 511 * root scan. 507 512 */ 508 513 *no_aspm = 1; 514 + 515 + /* _OSC is optional for PCI host bridges */ 516 + if ((status == AE_NOT_FOUND) && !is_pcie) 517 + return; 518 + 519 + if (control) { 520 + decode_osc_control(root, "OS requested", requested); 521 + decode_osc_control(root, "platform willing to grant", control); 522 + } 523 + 524 + dev_info(&device->dev, "_OSC: platform retains control of PCIe features (%s)\n", 525 + acpi_format_exception(status)); 509 526 } 510 527 } 511 528
+1 -5
drivers/bcma/host_pci.c
··· 162 162 { 163 163 struct bcma_bus *bus; 164 164 int err = -ENOMEM; 165 - const char *name; 166 165 u32 val; 167 166 168 167 /* Alloc */ ··· 174 175 if (err) 175 176 goto err_kfree_bus; 176 177 177 - name = dev_name(&dev->dev); 178 - if (dev->driver && dev->driver->name) 179 - name = dev->driver->name; 180 - err = pci_request_regions(dev, name); 178 + err = pci_request_regions(dev, "bcma-pci-bridge"); 181 179 if (err) 182 180 goto err_pci_disable; 183 181 pci_set_master(dev);
+1 -1
drivers/crypto/hisilicon/qm.c
··· 3118 3118 }; 3119 3119 int ret; 3120 3120 3121 - ret = strscpy(interface.name, pdev->driver->name, 3121 + ret = strscpy(interface.name, dev_driver_string(&pdev->dev), 3122 3122 sizeof(interface.name)); 3123 3123 if (ret < 0) 3124 3124 return -ENAMETOOLONG;
+2 -5
drivers/crypto/qat/qat_4xxx/adf_drv.c
··· 247 247 248 248 pci_set_master(pdev); 249 249 250 - if (adf_enable_aer(accel_dev)) { 251 - dev_err(&pdev->dev, "Failed to enable aer.\n"); 252 - ret = -EFAULT; 253 - goto out_err; 254 - } 250 + adf_enable_aer(accel_dev); 255 251 256 252 if (pci_save_state(pdev)) { 257 253 dev_err(&pdev->dev, "Failed to save pci state.\n"); ··· 300 304 .probe = adf_probe, 301 305 .remove = adf_remove, 302 306 .sriov_configure = adf_sriov_configure, 307 + .err_handler = &adf_err_handler, 303 308 }; 304 309 305 310 module_pci_driver(adf_driver);
+2 -5
drivers/crypto/qat/qat_c3xxx/adf_drv.c
··· 33 33 .probe = adf_probe, 34 34 .remove = adf_remove, 35 35 .sriov_configure = adf_sriov_configure, 36 + .err_handler = &adf_err_handler, 36 37 }; 37 38 38 39 static void adf_cleanup_pci_dev(struct adf_accel_dev *accel_dev) ··· 193 192 } 194 193 pci_set_master(pdev); 195 194 196 - if (adf_enable_aer(accel_dev)) { 197 - dev_err(&pdev->dev, "Failed to enable aer\n"); 198 - ret = -EFAULT; 199 - goto out_err_free_reg; 200 - } 195 + adf_enable_aer(accel_dev); 201 196 202 197 if (pci_save_state(pdev)) { 203 198 dev_err(&pdev->dev, "Failed to save pci state\n");
+2 -5
drivers/crypto/qat/qat_c62x/adf_drv.c
··· 33 33 .probe = adf_probe, 34 34 .remove = adf_remove, 35 35 .sriov_configure = adf_sriov_configure, 36 + .err_handler = &adf_err_handler, 36 37 }; 37 38 38 39 static void adf_cleanup_pci_dev(struct adf_accel_dev *accel_dev) ··· 193 192 } 194 193 pci_set_master(pdev); 195 194 196 - if (adf_enable_aer(accel_dev)) { 197 - dev_err(&pdev->dev, "Failed to enable aer\n"); 198 - ret = -EFAULT; 199 - goto out_err_free_reg; 200 - } 195 + adf_enable_aer(accel_dev); 201 196 202 197 if (pci_save_state(pdev)) { 203 198 dev_err(&pdev->dev, "Failed to save pci state\n");
+3 -7
drivers/crypto/qat/qat_common/adf_aer.c
··· 166 166 dev_info(&pdev->dev, "Device is up and running\n"); 167 167 } 168 168 169 - static const struct pci_error_handlers adf_err_handler = { 169 + const struct pci_error_handlers adf_err_handler = { 170 170 .error_detected = adf_error_detected, 171 171 .slot_reset = adf_slot_reset, 172 172 .resume = adf_resume, 173 173 }; 174 + EXPORT_SYMBOL_GPL(adf_err_handler); 174 175 175 176 /** 176 177 * adf_enable_aer() - Enable Advance Error Reporting for acceleration device ··· 180 179 * Function enables PCI Advance Error Reporting for the 181 180 * QAT acceleration device accel_dev. 182 181 * To be used by QAT device specific drivers. 183 - * 184 - * Return: 0 on success, error code otherwise. 185 182 */ 186 - int adf_enable_aer(struct adf_accel_dev *accel_dev) 183 + void adf_enable_aer(struct adf_accel_dev *accel_dev) 187 184 { 188 185 struct pci_dev *pdev = accel_to_pci_dev(accel_dev); 189 - struct pci_driver *pdrv = pdev->driver; 190 186 191 - pdrv->err_handler = &adf_err_handler; 192 187 pci_enable_pcie_error_reporting(pdev); 193 - return 0; 194 188 } 195 189 EXPORT_SYMBOL_GPL(adf_enable_aer); 196 190
+2 -1
drivers/crypto/qat/qat_common/adf_common_drv.h
··· 94 94 int adf_ae_start(struct adf_accel_dev *accel_dev); 95 95 int adf_ae_stop(struct adf_accel_dev *accel_dev); 96 96 97 - int adf_enable_aer(struct adf_accel_dev *accel_dev); 97 + extern const struct pci_error_handlers adf_err_handler; 98 + void adf_enable_aer(struct adf_accel_dev *accel_dev); 98 99 void adf_disable_aer(struct adf_accel_dev *accel_dev); 99 100 void adf_reset_sbr(struct adf_accel_dev *accel_dev); 100 101 void adf_reset_flr(struct adf_accel_dev *accel_dev);
+2 -5
drivers/crypto/qat/qat_dh895xcc/adf_drv.c
··· 33 33 .probe = adf_probe, 34 34 .remove = adf_remove, 35 35 .sriov_configure = adf_sriov_configure, 36 + .err_handler = &adf_err_handler, 36 37 }; 37 38 38 39 static void adf_cleanup_pci_dev(struct adf_accel_dev *accel_dev) ··· 193 192 } 194 193 pci_set_master(pdev); 195 194 196 - if (adf_enable_aer(accel_dev)) { 197 - dev_err(&pdev->dev, "Failed to enable aer\n"); 198 - ret = -EFAULT; 199 - goto out_err_free_reg; 200 - } 195 + adf_enable_aer(accel_dev); 201 196 202 197 if (pci_save_state(pdev)) { 203 198 dev_err(&pdev->dev, "Failed to save pci state\n");
+28
drivers/iommu/apple-dart.c
··· 15 15 #include <linux/bitfield.h> 16 16 #include <linux/clk.h> 17 17 #include <linux/dev_printk.h> 18 + #include <linux/dma-iommu.h> 18 19 #include <linux/dma-mapping.h> 19 20 #include <linux/err.h> 20 21 #include <linux/interrupt.h> ··· 738 737 return 0; 739 738 } 740 739 740 + #ifndef CONFIG_PCIE_APPLE_MSI_DOORBELL_ADDR 741 + /* Keep things compiling when CONFIG_PCI_APPLE isn't selected */ 742 + #define CONFIG_PCIE_APPLE_MSI_DOORBELL_ADDR 0 743 + #endif 744 + #define DOORBELL_ADDR (CONFIG_PCIE_APPLE_MSI_DOORBELL_ADDR & PAGE_MASK) 745 + 746 + static void apple_dart_get_resv_regions(struct device *dev, 747 + struct list_head *head) 748 + { 749 + if (IS_ENABLED(CONFIG_PCIE_APPLE) && dev_is_pci(dev)) { 750 + struct iommu_resv_region *region; 751 + int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO; 752 + 753 + region = iommu_alloc_resv_region(DOORBELL_ADDR, 754 + PAGE_SIZE, prot, 755 + IOMMU_RESV_MSI); 756 + if (!region) 757 + return; 758 + 759 + list_add_tail(&region->list, head); 760 + } 761 + 762 + iommu_dma_get_resv_regions(dev, head); 763 + } 764 + 741 765 static const struct iommu_ops apple_dart_iommu_ops = { 742 766 .domain_alloc = apple_dart_domain_alloc, 743 767 .domain_free = apple_dart_domain_free, ··· 779 753 .device_group = apple_dart_device_group, 780 754 .of_xlate = apple_dart_of_xlate, 781 755 .def_domain_type = apple_dart_def_domain_type, 756 + .get_resv_regions = apple_dart_get_resv_regions, 757 + .put_resv_regions = generic_iommu_put_resv_regions, 782 758 .pgsize_bitmap = -1UL, /* Restricted during dart probe */ 783 759 }; 784 760
+2 -5
drivers/message/fusion/mptbase.c
··· 829 829 mpt_device_driver_register(struct mpt_pci_driver * dd_cbfunc, u8 cb_idx) 830 830 { 831 831 MPT_ADAPTER *ioc; 832 - const struct pci_device_id *id; 833 832 834 833 if (!cb_idx || cb_idx >= MPT_MAX_PROTOCOL_DRIVERS) 835 834 return -EINVAL; ··· 837 838 838 839 /* call per pci device probe entry point */ 839 840 list_for_each_entry(ioc, &ioc_list, list) { 840 - id = ioc->pcidev->driver ? 841 - ioc->pcidev->driver->id_table : NULL; 842 841 if (dd_cbfunc->probe) 843 - dd_cbfunc->probe(ioc->pcidev, id); 842 + dd_cbfunc->probe(ioc->pcidev); 844 843 } 845 844 846 845 return 0; ··· 2029 2032 for(cb_idx = 0; cb_idx < MPT_MAX_PROTOCOL_DRIVERS; cb_idx++) { 2030 2033 if(MptDeviceDriverHandlers[cb_idx] && 2031 2034 MptDeviceDriverHandlers[cb_idx]->probe) { 2032 - MptDeviceDriverHandlers[cb_idx]->probe(pdev,id); 2035 + MptDeviceDriverHandlers[cb_idx]->probe(pdev); 2033 2036 } 2034 2037 } 2035 2038
+1 -1
drivers/message/fusion/mptbase.h
··· 257 257 } MPT_DRIVER_CLASS; 258 258 259 259 struct mpt_pci_driver{ 260 - int (*probe) (struct pci_dev *dev, const struct pci_device_id *id); 260 + int (*probe) (struct pci_dev *dev); 261 261 void (*remove) (struct pci_dev *dev); 262 262 }; 263 263
+2 -2
drivers/message/fusion/mptctl.c
··· 114 114 static int mptctl_hp_hostinfo(MPT_ADAPTER *iocp, unsigned long arg, unsigned int cmd); 115 115 static int mptctl_hp_targetinfo(MPT_ADAPTER *iocp, unsigned long arg); 116 116 117 - static int mptctl_probe(struct pci_dev *, const struct pci_device_id *); 117 + static int mptctl_probe(struct pci_dev *); 118 118 static void mptctl_remove(struct pci_dev *); 119 119 120 120 #ifdef CONFIG_COMPAT ··· 2838 2838 */ 2839 2839 2840 2840 static int 2841 - mptctl_probe(struct pci_dev *pdev, const struct pci_device_id *id) 2841 + mptctl_probe(struct pci_dev *pdev) 2842 2842 { 2843 2843 MPT_ADAPTER *ioc = pci_get_drvdata(pdev); 2844 2844
+1 -1
drivers/message/fusion/mptlan.c
··· 1377 1377 } 1378 1378 1379 1379 static int 1380 - mptlan_probe(struct pci_dev *pdev, const struct pci_device_id *id) 1380 + mptlan_probe(struct pci_dev *pdev) 1381 1381 { 1382 1382 MPT_ADAPTER *ioc = pci_get_drvdata(pdev); 1383 1383 struct net_device *dev;
+17 -13
drivers/misc/cxl/guest.c
··· 20 20 pci_channel_state_t state) 21 21 { 22 22 struct pci_dev *afu_dev; 23 + struct pci_driver *afu_drv; 24 + const struct pci_error_handlers *err_handler; 23 25 24 26 if (afu->phb == NULL) 25 27 return; 26 28 27 29 list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) { 28 - if (!afu_dev->driver) 30 + afu_drv = to_pci_driver(afu_dev->dev.driver); 31 + if (!afu_drv) 29 32 continue; 30 33 34 + err_handler = afu_drv->err_handler; 31 35 switch (bus_error_event) { 32 36 case CXL_ERROR_DETECTED_EVENT: 33 37 afu_dev->error_state = state; 34 38 35 - if (afu_dev->driver->err_handler && 36 - afu_dev->driver->err_handler->error_detected) 37 - afu_dev->driver->err_handler->error_detected(afu_dev, state); 38 - break; 39 + if (err_handler && 40 + err_handler->error_detected) 41 + err_handler->error_detected(afu_dev, state); 42 + break; 39 43 case CXL_SLOT_RESET_EVENT: 40 44 afu_dev->error_state = state; 41 45 42 - if (afu_dev->driver->err_handler && 43 - afu_dev->driver->err_handler->slot_reset) 44 - afu_dev->driver->err_handler->slot_reset(afu_dev); 45 - break; 46 + if (err_handler && 47 + err_handler->slot_reset) 48 + err_handler->slot_reset(afu_dev); 49 + break; 46 50 case CXL_RESUME_EVENT: 47 - if (afu_dev->driver->err_handler && 48 - afu_dev->driver->err_handler->resume) 49 - afu_dev->driver->err_handler->resume(afu_dev); 50 - break; 51 + if (err_handler && 52 + err_handler->resume) 53 + err_handler->resume(afu_dev); 54 + break; 51 55 } 52 56 } 53 57 }
+24 -11
drivers/misc/cxl/pci.c
··· 1795 1795 pci_channel_state_t state) 1796 1796 { 1797 1797 struct pci_dev *afu_dev; 1798 + struct pci_driver *afu_drv; 1799 + const struct pci_error_handlers *err_handler; 1798 1800 pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET; 1799 1801 pci_ers_result_t afu_result = PCI_ERS_RESULT_NEED_RESET; 1800 1802 ··· 1807 1805 return result; 1808 1806 1809 1807 list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) { 1810 - if (!afu_dev->driver) 1808 + afu_drv = to_pci_driver(afu_dev->dev.driver); 1809 + if (!afu_drv) 1811 1810 continue; 1812 1811 1813 1812 afu_dev->error_state = state; 1814 1813 1815 - if (afu_dev->driver->err_handler) 1816 - afu_result = afu_dev->driver->err_handler->error_detected(afu_dev, 1817 - state); 1814 + err_handler = afu_drv->err_handler; 1815 + if (err_handler) 1816 + afu_result = err_handler->error_detected(afu_dev, 1817 + state); 1818 1818 /* Disconnect trumps all, NONE trumps NEED_RESET */ 1819 1819 if (afu_result == PCI_ERS_RESULT_DISCONNECT) 1820 1820 result = PCI_ERS_RESULT_DISCONNECT; ··· 1976 1972 struct cxl_afu *afu; 1977 1973 struct cxl_context *ctx; 1978 1974 struct pci_dev *afu_dev; 1975 + struct pci_driver *afu_drv; 1976 + const struct pci_error_handlers *err_handler; 1979 1977 pci_ers_result_t afu_result = PCI_ERS_RESULT_RECOVERED; 1980 1978 pci_ers_result_t result = PCI_ERS_RESULT_RECOVERED; 1981 1979 int i; ··· 2034 2028 * shouldn't start new work until we call 2035 2029 * their resume function. 
2036 2030 */ 2037 - if (!afu_dev->driver) 2031 + afu_drv = to_pci_driver(afu_dev->dev.driver); 2032 + if (!afu_drv) 2038 2033 continue; 2039 2034 2040 - if (afu_dev->driver->err_handler && 2041 - afu_dev->driver->err_handler->slot_reset) 2042 - afu_result = afu_dev->driver->err_handler->slot_reset(afu_dev); 2035 + err_handler = afu_drv->err_handler; 2036 + if (err_handler && err_handler->slot_reset) 2037 + afu_result = err_handler->slot_reset(afu_dev); 2043 2038 2044 2039 if (afu_result == PCI_ERS_RESULT_DISCONNECT) 2045 2040 result = PCI_ERS_RESULT_DISCONNECT; ··· 2067 2060 struct cxl *adapter = pci_get_drvdata(pdev); 2068 2061 struct cxl_afu *afu; 2069 2062 struct pci_dev *afu_dev; 2063 + struct pci_driver *afu_drv; 2064 + const struct pci_error_handlers *err_handler; 2070 2065 int i; 2071 2066 2072 2067 /* Everything is back now. Drivers should restart work now. ··· 2083 2074 continue; 2084 2075 2085 2076 list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) { 2086 - if (afu_dev->driver && afu_dev->driver->err_handler && 2087 - afu_dev->driver->err_handler->resume) 2088 - afu_dev->driver->err_handler->resume(afu_dev); 2077 + afu_drv = to_pci_driver(afu_dev->dev.driver); 2078 + if (!afu_drv) 2079 + continue; 2080 + 2081 + err_handler = afu_drv->err_handler; 2082 + if (err_handler && err_handler->resume) 2083 + err_handler->resume(afu_dev); 2089 2084 } 2090 2085 } 2091 2086 spin_unlock(&adapter->afu_list_lock);
-2
drivers/net/ethernet/chelsio/cxgb3/common.h
··· 676 676 void t3_link_fault(struct adapter *adapter, int port_id); 677 677 int t3_link_start(struct cphy *phy, struct cmac *mac, struct link_config *lc); 678 678 const struct adapter_info *t3_get_adapter_info(unsigned int board_id); 679 - int t3_seeprom_read(struct adapter *adapter, u32 addr, __le32 *data); 680 - int t3_seeprom_write(struct adapter *adapter, u32 addr, __le32 data); 681 679 int t3_seeprom_wp(struct adapter *adapter, int enable); 682 680 int t3_get_tp_version(struct adapter *adapter, u32 *vers); 683 681 int t3_check_tpsram_version(struct adapter *adapter);
+13 -25
drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
··· 2036 2036 { 2037 2037 struct port_info *pi = netdev_priv(dev); 2038 2038 struct adapter *adapter = pi->adapter; 2039 - int i, err = 0; 2040 - 2041 - u8 *buf = kmalloc(EEPROMSIZE, GFP_KERNEL); 2042 - if (!buf) 2043 - return -ENOMEM; 2039 + int cnt; 2044 2040 2045 2041 e->magic = EEPROM_MAGIC; 2046 - for (i = e->offset & ~3; !err && i < e->offset + e->len; i += 4) 2047 - err = t3_seeprom_read(adapter, i, (__le32 *) & buf[i]); 2042 + cnt = pci_read_vpd(adapter->pdev, e->offset, e->len, data); 2043 + if (cnt < 0) 2044 + return cnt; 2048 2045 2049 - if (!err) 2050 - memcpy(data, buf + e->offset, e->len); 2051 - kfree(buf); 2052 - return err; 2046 + e->len = cnt; 2047 + 2048 + return 0; 2053 2049 } 2054 2050 2055 2051 static int set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom, ··· 2054 2058 struct port_info *pi = netdev_priv(dev); 2055 2059 struct adapter *adapter = pi->adapter; 2056 2060 u32 aligned_offset, aligned_len; 2057 - __le32 *p; 2058 2061 u8 *buf; 2059 2062 int err; 2060 2063 ··· 2067 2072 buf = kmalloc(aligned_len, GFP_KERNEL); 2068 2073 if (!buf) 2069 2074 return -ENOMEM; 2070 - err = t3_seeprom_read(adapter, aligned_offset, (__le32 *) buf); 2071 - if (!err && aligned_len > 4) 2072 - err = t3_seeprom_read(adapter, 2073 - aligned_offset + aligned_len - 4, 2074 - (__le32 *) & buf[aligned_len - 4]); 2075 - if (err) 2075 + err = pci_read_vpd(adapter->pdev, aligned_offset, aligned_len, 2076 + buf); 2077 + if (err < 0) 2076 2078 goto out; 2077 2079 memcpy(buf + (eeprom->offset & 3), data, eeprom->len); 2078 2080 } else ··· 2079 2087 if (err) 2080 2088 goto out; 2081 2089 2082 - for (p = (__le32 *) buf; !err && aligned_len; aligned_len -= 4, p++) { 2083 - err = t3_seeprom_write(adapter, aligned_offset, *p); 2084 - aligned_offset += 4; 2085 - } 2086 - 2087 - if (!err) 2090 + err = pci_write_vpd(adapter->pdev, aligned_offset, aligned_len, buf); 2091 + if (err >= 0) 2088 2092 err = t3_seeprom_wp(adapter, 1); 2089 2093 out: 2090 2094 if (buf != 
data) 2091 2095 kfree(buf); 2092 - return err; 2096 + return err < 0 ? err : 0; 2093 2097 } 2094 2098 2095 2099 static void get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+16 -82
drivers/net/ethernet/chelsio/cxgb3/t3_hw.c
··· 596 596 u32 pad; /* for multiple-of-4 sizing and alignment */ 597 597 }; 598 598 599 - #define EEPROM_MAX_POLL 40 600 599 #define EEPROM_STAT_ADDR 0x4000 601 600 #define VPD_BASE 0xc00 602 - 603 - /** 604 - * t3_seeprom_read - read a VPD EEPROM location 605 - * @adapter: adapter to read 606 - * @addr: EEPROM address 607 - * @data: where to store the read data 608 - * 609 - * Read a 32-bit word from a location in VPD EEPROM using the card's PCI 610 - * VPD ROM capability. A zero is written to the flag bit when the 611 - * address is written to the control register. The hardware device will 612 - * set the flag to 1 when 4 bytes have been read into the data register. 613 - */ 614 - int t3_seeprom_read(struct adapter *adapter, u32 addr, __le32 *data) 615 - { 616 - u16 val; 617 - int attempts = EEPROM_MAX_POLL; 618 - u32 v; 619 - unsigned int base = adapter->params.pci.vpd_cap_addr; 620 - 621 - if ((addr >= EEPROMSIZE && addr != EEPROM_STAT_ADDR) || (addr & 3)) 622 - return -EINVAL; 623 - 624 - pci_write_config_word(adapter->pdev, base + PCI_VPD_ADDR, addr); 625 - do { 626 - udelay(10); 627 - pci_read_config_word(adapter->pdev, base + PCI_VPD_ADDR, &val); 628 - } while (!(val & PCI_VPD_ADDR_F) && --attempts); 629 - 630 - if (!(val & PCI_VPD_ADDR_F)) { 631 - CH_ERR(adapter, "reading EEPROM address 0x%x failed\n", addr); 632 - return -EIO; 633 - } 634 - pci_read_config_dword(adapter->pdev, base + PCI_VPD_DATA, &v); 635 - *data = cpu_to_le32(v); 636 - return 0; 637 - } 638 - 639 - /** 640 - * t3_seeprom_write - write a VPD EEPROM location 641 - * @adapter: adapter to write 642 - * @addr: EEPROM address 643 - * @data: value to write 644 - * 645 - * Write a 32-bit word to a location in VPD EEPROM using the card's PCI 646 - * VPD ROM capability. 
647 - */ 648 - int t3_seeprom_write(struct adapter *adapter, u32 addr, __le32 data) 649 - { 650 - u16 val; 651 - int attempts = EEPROM_MAX_POLL; 652 - unsigned int base = adapter->params.pci.vpd_cap_addr; 653 - 654 - if ((addr >= EEPROMSIZE && addr != EEPROM_STAT_ADDR) || (addr & 3)) 655 - return -EINVAL; 656 - 657 - pci_write_config_dword(adapter->pdev, base + PCI_VPD_DATA, 658 - le32_to_cpu(data)); 659 - pci_write_config_word(adapter->pdev,base + PCI_VPD_ADDR, 660 - addr | PCI_VPD_ADDR_F); 661 - do { 662 - msleep(1); 663 - pci_read_config_word(adapter->pdev, base + PCI_VPD_ADDR, &val); 664 - } while ((val & PCI_VPD_ADDR_F) && --attempts); 665 - 666 - if (val & PCI_VPD_ADDR_F) { 667 - CH_ERR(adapter, "write to EEPROM address 0x%x failed\n", addr); 668 - return -EIO; 669 - } 670 - return 0; 671 - } 672 601 673 602 /** 674 603 * t3_seeprom_wp - enable/disable EEPROM write protection ··· 608 679 */ 609 680 int t3_seeprom_wp(struct adapter *adapter, int enable) 610 681 { 611 - return t3_seeprom_write(adapter, EEPROM_STAT_ADDR, enable ? 0xc : 0); 682 + u32 data = enable ? 0xc : 0; 683 + int ret; 684 + 685 + /* EEPROM_STAT_ADDR is outside VPD area, use pci_write_vpd_any() */ 686 + ret = pci_write_vpd_any(adapter->pdev, EEPROM_STAT_ADDR, sizeof(u32), 687 + &data); 688 + 689 + return ret < 0 ? ret : 0; 612 690 } 613 691 614 692 static int vpdstrtouint(char *s, u8 len, unsigned int base, unsigned int *val) ··· 645 709 */ 646 710 static int get_vpd_params(struct adapter *adapter, struct vpd_params *p) 647 711 { 648 - int i, addr, ret; 649 712 struct t3_vpd vpd; 713 + u8 base_val = 0; 714 + int addr, ret; 650 715 651 716 /* 652 717 * Card information is normally at VPD_BASE but some early cards had 653 718 * it at 0. 654 719 */ 655 - ret = t3_seeprom_read(adapter, VPD_BASE, (__le32 *)&vpd); 656 - if (ret) 720 + ret = pci_read_vpd(adapter->pdev, VPD_BASE, 1, &base_val); 721 + if (ret < 0) 657 722 return ret; 658 - addr = vpd.id_tag == 0x82 ? 
VPD_BASE : 0; 723 + addr = base_val == PCI_VPD_LRDT_ID_STRING ? VPD_BASE : 0; 659 724 660 - for (i = 0; i < sizeof(vpd); i += 4) { 661 - ret = t3_seeprom_read(adapter, addr + i, 662 - (__le32 *)((u8 *)&vpd + i)); 663 - if (ret) 664 - return ret; 665 - } 725 + ret = pci_read_vpd(adapter->pdev, addr, sizeof(vpd), &vpd); 726 + if (ret < 0) 727 + return ret; 666 728 667 729 ret = vpdstrtouint(vpd.cclk_data, vpd.cclk_len, 10, &p->cclk); 668 730 if (ret)
+1 -1
drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
··· 608 608 return; 609 609 } 610 610 611 - strncpy(drvinfo->driver, h->pdev->driver->name, 611 + strncpy(drvinfo->driver, dev_driver_string(&h->pdev->dev), 612 612 sizeof(drvinfo->driver)); 613 613 drvinfo->driver[sizeof(drvinfo->driver) - 1] = '\0'; 614 614
+1 -1
drivers/net/ethernet/marvell/prestera/prestera_pci.c
··· 776 776 static int prestera_pci_probe(struct pci_dev *pdev, 777 777 const struct pci_device_id *id) 778 778 { 779 - const char *driver_name = pdev->driver->name; 779 + const char *driver_name = dev_driver_string(&pdev->dev); 780 780 struct prestera_fw *fw; 781 781 int err; 782 782
+1 -1
drivers/net/ethernet/mellanox/mlxsw/pci.c
··· 1875 1875 1876 1876 static int mlxsw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) 1877 1877 { 1878 - const char *driver_name = pdev->driver->name; 1878 + const char *driver_name = dev_driver_string(&pdev->dev); 1879 1879 struct mlxsw_pci *mlxsw_pci; 1880 1880 int err; 1881 1881
+2 -1
drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
··· 202 202 { 203 203 char nsp_version[ETHTOOL_FWVERS_LEN] = {}; 204 204 205 - strlcpy(drvinfo->driver, pdev->driver->name, sizeof(drvinfo->driver)); 205 + strlcpy(drvinfo->driver, dev_driver_string(&pdev->dev), 206 + sizeof(drvinfo->driver)); 206 207 nfp_net_get_nspinfo(app, nsp_version); 207 208 snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version), 208 209 "%s %s %s %s", vnic_version, nsp_version,
+12 -5
drivers/of/irq.c
··· 156 156 157 157 /* Now start the actual "proper" walk of the interrupt tree */ 158 158 while (ipar != NULL) { 159 - /* Now check if cursor is an interrupt-controller and if it is 160 - * then we are done 159 + /* 160 + * Now check if cursor is an interrupt-controller and 161 + * if it is then we are done, unless there is an 162 + * interrupt-map which takes precedence. 161 163 */ 162 - if (of_property_read_bool(ipar, "interrupt-controller")) { 164 + imap = of_get_property(ipar, "interrupt-map", &imaplen); 165 + if (imap == NULL && 166 + of_property_read_bool(ipar, "interrupt-controller")) { 163 167 pr_debug(" -> got it !\n"); 164 168 return 0; 165 169 } ··· 177 173 goto fail; 178 174 } 179 175 180 - /* Now look for an interrupt-map */ 181 - imap = of_get_property(ipar, "interrupt-map", &imaplen); 182 176 /* No interrupt map, check for an interrupt parent */ 183 177 if (imap == NULL) { 184 178 pr_debug(" -> no map, getting parent\n"); ··· 256 254 out_irq->args[i] = be32_to_cpup(imap - newintsize + i); 257 255 out_irq->args_count = intsize = newintsize; 258 256 addrsize = newaddrsize; 257 + 258 + if (ipar == newpar) { 259 + pr_debug("%pOF interrupt-map entry to self\n", ipar); 260 + return 0; 261 + } 259 262 260 263 skiplevel: 261 264 /* Iterate again with new parent */
+27 -1
drivers/pci/controller/Kconfig
··· 254 254 MediaTek SoCs. 255 255 256 256 config VMD 257 - depends on PCI_MSI && X86_64 && SRCU 257 + depends on PCI_MSI && X86_64 && SRCU && !UML 258 258 tristate "Intel Volume Management Device Driver" 259 259 help 260 260 Adds support for the Intel Volume Management Device (VMD). VMD is a ··· 311 311 help 312 312 Say Y here if you want error handling support 313 313 for the PCIe controller's errors on HiSilicon HIP SoCs 314 + 315 + config PCIE_APPLE_MSI_DOORBELL_ADDR 316 + hex 317 + default 0xfffff000 318 + depends on PCIE_APPLE 319 + 320 + config PCIE_APPLE 321 + tristate "Apple PCIe controller" 322 + depends on ARCH_APPLE || COMPILE_TEST 323 + depends on OF 324 + depends on PCI_MSI_IRQ_DOMAIN 325 + select PCI_HOST_COMMON 326 + help 327 + Say Y here if you want to enable PCIe controller support on Apple 328 + system-on-chips, like the Apple M1. This is required for the USB 329 + type-A ports, Ethernet, Wi-Fi, and Bluetooth. 330 + 331 + If unsure, say Y if you have an Apple Silicon system. 332 + 333 + config PCIE_MT7621 334 + tristate "MediaTek MT7621 PCIe Controller" 335 + depends on (RALINK && SOC_MT7621) || (MIPS && COMPILE_TEST) 336 + select PHY_MT7621_PCI 337 + default SOC_MT7621 338 + help 339 + This selects a driver for the MediaTek MT7621 PCIe Controller. 314 340 315 341 source "drivers/pci/controller/dwc/Kconfig" 316 342 source "drivers/pci/controller/mobiveil/Kconfig"
+3
drivers/pci/controller/Makefile
··· 37 37 obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o 38 38 obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o 39 39 obj-$(CONFIG_PCIE_HISI_ERR) += pcie-hisi-error.o 40 + obj-$(CONFIG_PCIE_APPLE) += pcie-apple.o 41 + obj-$(CONFIG_PCIE_MT7621) += pcie-mt7621.o 42 + 40 43 # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW 41 44 obj-y += dwc/ 42 45 obj-y += mobiveil/
+1 -1
drivers/pci/controller/cadence/pci-j721e.c
··· 474 474 ret = clk_prepare_enable(clk); 475 475 if (ret) { 476 476 dev_err(dev, "failed to enable pcie_refclk\n"); 477 - goto err_get_sync; 477 + goto err_pcie_setup; 478 478 } 479 479 pcie->refclk = clk; 480 480
+2
drivers/pci/controller/cadence/pcie-cadence-plat.c
··· 127 127 goto err_init; 128 128 } 129 129 130 + return 0; 131 + 130 132 err_init: 131 133 err_get_sync: 132 134 pm_runtime_put_sync(dev);
+19 -11
drivers/pci/controller/dwc/Kconfig
··· 8 8 9 9 config PCIE_DW_HOST 10 10 bool 11 - depends on PCI_MSI_IRQ_DOMAIN 12 11 select PCIE_DW 13 12 14 13 config PCIE_DW_EP 15 14 bool 16 - depends on PCI_ENDPOINT 17 15 select PCIE_DW 18 16 19 17 config PCI_DRA7XX 20 - bool 18 + tristate 21 19 22 20 config PCI_DRA7XX_HOST 23 - bool "TI DRA7xx PCIe controller Host Mode" 21 + tristate "TI DRA7xx PCIe controller Host Mode" 24 22 depends on SOC_DRA7XX || COMPILE_TEST 25 - depends on PCI_MSI_IRQ_DOMAIN 26 23 depends on OF && HAS_IOMEM && TI_PIPE3 24 + depends on PCI_MSI_IRQ_DOMAIN 27 25 select PCIE_DW_HOST 28 26 select PCI_DRA7XX 29 27 default y if SOC_DRA7XX ··· 34 36 This uses the DesignWare core. 35 37 36 38 config PCI_DRA7XX_EP 37 - bool "TI DRA7xx PCIe controller Endpoint Mode" 39 + tristate "TI DRA7xx PCIe controller Endpoint Mode" 38 40 depends on SOC_DRA7XX || COMPILE_TEST 39 - depends on PCI_ENDPOINT 40 41 depends on OF && HAS_IOMEM && TI_PIPE3 42 + depends on PCI_ENDPOINT 41 43 select PCIE_DW_EP 42 44 select PCI_DRA7XX 43 45 help ··· 53 55 54 56 config PCIE_DW_PLAT_HOST 55 57 bool "Platform bus based DesignWare PCIe Controller - Host mode" 56 - depends on PCI && PCI_MSI_IRQ_DOMAIN 58 + depends on PCI_MSI_IRQ_DOMAIN 57 59 select PCIE_DW_HOST 58 60 select PCIE_DW_PLAT 59 61 help ··· 136 138 bool "Freescale Layerscape PCIe controller - Host mode" 137 139 depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST) 138 140 depends on PCI_MSI_IRQ_DOMAIN 139 - select MFD_SYSCON 140 141 select PCIE_DW_HOST 142 + select MFD_SYSCON 141 143 help 142 144 Say Y here if you want to enable PCIe controller support on Layerscape 143 145 SoCs to work in Host mode. ··· 177 179 Say Y here to enable PCIe controller support on Qualcomm SoCs. The 178 180 PCIe controller uses the DesignWare core plus Qualcomm-specific 179 181 hardware wrappers. 
182 + 183 + config PCIE_QCOM_EP 184 + tristate "Qualcomm PCIe controller - Endpoint mode" 185 + depends on OF && (ARCH_QCOM || COMPILE_TEST) 186 + depends on PCI_ENDPOINT 187 + select PCIE_DW_EP 188 + help 189 + Say Y here to enable support for the PCIe controllers on Qualcomm SoCs 190 + to work in endpoint mode. The PCIe controller uses the DesignWare core 191 + plus Qualcomm-specific hardware wrappers. 180 192 181 193 config PCIE_ARMADA_8K 182 194 bool "Marvell Armada-8K PCIe controller" ··· 274 266 275 267 config PCIE_KIRIN 276 268 depends on OF && (ARM64 || COMPILE_TEST) 277 - bool "HiSilicon Kirin series SoCs PCIe controllers" 269 + tristate "HiSilicon Kirin series SoCs PCIe controllers" 278 270 depends on PCI_MSI_IRQ_DOMAIN 279 271 select PCIE_DW_HOST 280 272 help ··· 291 283 292 284 config PCI_MESON 293 285 tristate "MESON PCIe controller" 294 - depends on PCI_MSI_IRQ_DOMAIN 295 286 default m if ARCH_MESON 287 + depends on PCI_MSI_IRQ_DOMAIN 296 288 select PCIE_DW_HOST 297 289 help 298 290 Say Y here if you want to enable PCI controller support on Amlogic
+1
drivers/pci/controller/dwc/Makefile
··· 12 12 obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o 13 13 obj-$(CONFIG_PCI_LAYERSCAPE_EP) += pci-layerscape-ep.o 14 14 obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o 15 + obj-$(CONFIG_PCIE_QCOM_EP) += pcie-qcom-ep.o 15 16 obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o 16 17 obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o 17 18 obj-$(CONFIG_PCIE_ROCKCHIP_DW_HOST) += pcie-dw-rockchip.o
+20 -2
drivers/pci/controller/dwc/pci-dra7xx.c
··· 7 7 * Authors: Kishon Vijay Abraham I <kishon@ti.com> 8 8 */ 9 9 10 + #include <linux/clk.h> 10 11 #include <linux/delay.h> 11 12 #include <linux/device.h> 12 13 #include <linux/err.h> ··· 15 14 #include <linux/irq.h> 16 15 #include <linux/irqdomain.h> 17 16 #include <linux/kernel.h> 18 - #include <linux/init.h> 17 + #include <linux/module.h> 19 18 #include <linux/of_device.h> 20 19 #include <linux/of_gpio.h> 21 20 #include <linux/of_pci.h> ··· 91 90 int phy_count; /* DT phy-names count */ 92 91 struct phy **phy; 93 92 struct irq_domain *irq_domain; 93 + struct clk *clk; 94 94 enum dw_pcie_device_mode mode; 95 95 }; 96 96 ··· 609 607 }, 610 608 {}, 611 609 }; 610 + MODULE_DEVICE_TABLE(of, of_dra7xx_pcie_match); 612 611 613 612 /* 614 613 * dra7xx_pcie_unaligned_memaccess: workaround for AM572x/AM571x Errata i870 ··· 742 739 link = devm_kcalloc(dev, phy_count, sizeof(*link), GFP_KERNEL); 743 740 if (!link) 744 741 return -ENOMEM; 742 + 743 + dra7xx->clk = devm_clk_get_optional(dev, NULL); 744 + if (IS_ERR(dra7xx->clk)) 745 + return dev_err_probe(dev, PTR_ERR(dra7xx->clk), 746 + "clock request failed"); 747 + 748 + ret = clk_prepare_enable(dra7xx->clk); 749 + if (ret) 750 + return ret; 745 751 746 752 for (i = 0; i < phy_count; i++) { 747 753 snprintf(name, sizeof(name), "pcie-phy%d", i); ··· 937 925 938 926 pm_runtime_disable(dev); 939 927 dra7xx_pcie_disable_phy(dra7xx); 928 + 929 + clk_disable_unprepare(dra7xx->clk); 940 930 } 941 931 942 932 static const struct dev_pm_ops dra7xx_pcie_pm_ops = { ··· 957 943 }, 958 944 .shutdown = dra7xx_pcie_shutdown, 959 945 }; 960 - builtin_platform_driver(dra7xx_pcie_driver); 946 + module_platform_driver(dra7xx_pcie_driver); 947 + 948 + MODULE_AUTHOR("Kishon Vijay Abraham I <kishon@ti.com>"); 949 + MODULE_DESCRIPTION("PCIe controller driver for TI DRA7xx SoCs"); 950 + MODULE_LICENSE("GPL v2");
+1 -1
drivers/pci/controller/dwc/pci-imx6.c
··· 1132 1132 1133 1133 /* Limit link speed */ 1134 1134 pci->link_gen = 1; 1135 - ret = of_property_read_u32(node, "fsl,max-link-speed", &pci->link_gen); 1135 + of_property_read_u32(node, "fsl,max-link-speed", &pci->link_gen); 1136 1136 1137 1137 imx6_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie"); 1138 1138 if (IS_ERR(imx6_pcie->vpcie)) {
+3
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 83 83 for (func_no = 0; func_no < funcs; func_no++) 84 84 __dw_pcie_ep_reset_bar(pci, func_no, bar, 0); 85 85 } 86 + EXPORT_SYMBOL_GPL(dw_pcie_ep_reset_bar); 86 87 87 88 static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie_ep *ep, u8 func_no, 88 89 u8 cap_ptr, u8 cap) ··· 486 485 487 486 return -EINVAL; 488 487 } 488 + EXPORT_SYMBOL_GPL(dw_pcie_ep_raise_legacy_irq); 489 489 490 490 int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no, 491 491 u8 interrupt_num) ··· 538 536 539 537 return 0; 540 538 } 539 + EXPORT_SYMBOL_GPL(dw_pcie_ep_raise_msi_irq); 541 540 542 541 int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, u8 func_no, 543 542 u16 interrupt_num)
+10 -9
drivers/pci/controller/dwc/pcie-designware-host.c
··· 335 335 if (pci->link_gen < 1) 336 336 pci->link_gen = of_pci_get_max_link_speed(np); 337 337 338 + /* Set default bus ops */ 339 + bridge->ops = &dw_pcie_ops; 340 + bridge->child_ops = &dw_child_pcie_ops; 341 + 342 + if (pp->ops->host_init) { 343 + ret = pp->ops->host_init(pp); 344 + if (ret) 345 + return ret; 346 + } 347 + 338 348 if (pci_msi_enabled()) { 339 349 pp->has_msi_ctrl = !(pp->ops->msi_host_init || 340 350 of_property_read_bool(np, "msi-parent") || ··· 398 388 } 399 389 } 400 390 401 - /* Set default bus ops */ 402 - bridge->ops = &dw_pcie_ops; 403 - bridge->child_ops = &dw_child_pcie_ops; 404 - 405 - if (pp->ops->host_init) { 406 - ret = pp->ops->host_init(pp); 407 - if (ret) 408 - goto err_free_msi; 409 - } 410 391 dw_pcie_iatu_detect(pci); 411 392 412 393 dw_pcie_setup_rc(pp);
+1
drivers/pci/controller/dwc/pcie-designware.c
··· 538 538 return ((val & PCIE_PORT_DEBUG1_LINK_UP) && 539 539 (!(val & PCIE_PORT_DEBUG1_LINK_IN_TRAINING))); 540 540 } 541 + EXPORT_SYMBOL_GPL(dw_pcie_link_up); 541 542 542 543 void dw_pcie_upconfig_setup(struct dw_pcie *pci) 543 544 {
+495 -145
drivers/pci/controller/dwc/pcie-kirin.c
··· 8 8 * Author: Xiaowei Song <songxiaowei@huawei.com> 9 9 */ 10 10 11 - #include <linux/compiler.h> 12 11 #include <linux/clk.h> 12 + #include <linux/compiler.h> 13 13 #include <linux/delay.h> 14 14 #include <linux/err.h> 15 15 #include <linux/gpio.h> 16 16 #include <linux/interrupt.h> 17 17 #include <linux/mfd/syscon.h> 18 18 #include <linux/of_address.h> 19 + #include <linux/of_device.h> 19 20 #include <linux/of_gpio.h> 20 21 #include <linux/of_pci.h> 22 + #include <linux/phy/phy.h> 21 23 #include <linux/pci.h> 22 24 #include <linux/pci_regs.h> 23 25 #include <linux/platform_device.h> ··· 30 28 31 29 #define to_kirin_pcie(x) dev_get_drvdata((x)->dev) 32 30 33 - #define REF_CLK_FREQ 100000000 34 - 35 31 /* PCIe ELBI registers */ 36 32 #define SOC_PCIECTRL_CTRL0_ADDR 0x000 37 33 #define SOC_PCIECTRL_CTRL1_ADDR 0x004 38 - #define SOC_PCIEPHY_CTRL2_ADDR 0x008 39 - #define SOC_PCIEPHY_CTRL3_ADDR 0x00c 40 34 #define PCIE_ELBI_SLV_DBI_ENABLE (0x1 << 21) 41 35 42 36 /* info located in APB */ 43 37 #define PCIE_APP_LTSSM_ENABLE 0x01c 44 - #define PCIE_APB_PHY_CTRL0 0x0 45 - #define PCIE_APB_PHY_CTRL1 0x4 46 38 #define PCIE_APB_PHY_STATUS0 0x400 47 39 #define PCIE_LINKUP_ENABLE (0x8020) 48 40 #define PCIE_LTSSM_ENABLE_BIT (0x1 << 11) 49 - #define PIPE_CLK_STABLE (0x1 << 19) 50 - #define PHY_REF_PAD_BIT (0x1 << 8) 51 - #define PHY_PWR_DOWN_BIT (0x1 << 22) 52 - #define PHY_RST_ACK_BIT (0x1 << 16) 53 41 54 42 /* info located in sysctrl */ 55 43 #define SCTRL_PCIE_CMOS_OFFSET 0x60 ··· 52 60 #define PCIE_DEBOUNCE_PARAM 0xF0F400 53 61 #define PCIE_OE_BYPASS (0x3 << 28) 54 62 63 + /* 64 + * Max number of connected PCI slots at an external PCI bridge 65 + * 66 + * This is used on HiKey 970, which has a PEX 8606 bridge with 4 connected 67 + * lanes (lane 0 upstream, and the other three lanes, one connected to an 68 + * in-board Ethernet adapter and the other two connected to M.2 and mini 69 + * PCI slots. 
70 + * 71 + * Each slot has a different clock source and uses a separate PERST# pin. 72 + */ 73 + #define MAX_PCI_SLOTS 3 74 + 75 + enum pcie_kirin_phy_type { 76 + PCIE_KIRIN_INTERNAL_PHY, 77 + PCIE_KIRIN_EXTERNAL_PHY 78 + }; 79 + 80 + struct kirin_pcie { 81 + enum pcie_kirin_phy_type type; 82 + 83 + struct dw_pcie *pci; 84 + struct regmap *apb; 85 + struct phy *phy; 86 + void *phy_priv; /* only for PCIE_KIRIN_INTERNAL_PHY */ 87 + 88 + /* DWC PERST# */ 89 + int gpio_id_dwc_perst; 90 + 91 + /* Per-slot PERST# */ 92 + int num_slots; 93 + int gpio_id_reset[MAX_PCI_SLOTS]; 94 + const char *reset_names[MAX_PCI_SLOTS]; 95 + 96 + /* Per-slot clkreq */ 97 + int n_gpio_clkreq; 98 + int gpio_id_clkreq[MAX_PCI_SLOTS]; 99 + const char *clkreq_names[MAX_PCI_SLOTS]; 100 + }; 101 + 102 + /* 103 + * Kirin 960 PHY. Can't be split into a PHY driver without changing the 104 + * DT schema. 105 + */ 106 + 107 + #define REF_CLK_FREQ 100000000 108 + 109 + /* PHY info located in APB */ 110 + #define PCIE_APB_PHY_CTRL0 0x0 111 + #define PCIE_APB_PHY_CTRL1 0x4 112 + #define PCIE_APB_PHY_STATUS0 0x400 113 + #define PIPE_CLK_STABLE BIT(19) 114 + #define PHY_REF_PAD_BIT BIT(8) 115 + #define PHY_PWR_DOWN_BIT BIT(22) 116 + #define PHY_RST_ACK_BIT BIT(16) 117 + 55 118 /* peri_crg ctrl */ 56 119 #define CRGCTRL_PCIE_ASSERT_OFFSET 0x88 57 120 #define CRGCTRL_PCIE_ASSERT_BIT 0x8c000000 58 121 59 122 /* Time for delay */ 60 - #define REF_2_PERST_MIN 20000 123 + #define REF_2_PERST_MIN 21000 61 124 #define REF_2_PERST_MAX 25000 62 125 #define PERST_2_ACCESS_MIN 10000 63 126 #define PERST_2_ACCESS_MAX 12000 64 - #define LINK_WAIT_MIN 900 65 - #define LINK_WAIT_MAX 1000 66 127 #define PIPE_CLK_WAIT_MIN 550 67 128 #define PIPE_CLK_WAIT_MAX 600 68 129 #define TIME_CMOS_MIN 100 ··· 123 78 #define TIME_PHY_PD_MIN 10 124 79 #define TIME_PHY_PD_MAX 11 125 80 126 - struct kirin_pcie { 127 - struct dw_pcie *pci; 128 - void __iomem *apb_base; 129 - void __iomem *phy_base; 81 + struct hi3660_pcie_phy { 82 + 
struct device *dev; 83 + void __iomem *base; 130 84 struct regmap *crgctrl; 131 85 struct regmap *sysctrl; 132 86 struct clk *apb_sys_clk; 133 87 struct clk *apb_phy_clk; 134 88 struct clk *phy_ref_clk; 135 - struct clk *pcie_aclk; 136 - struct clk *pcie_aux_clk; 137 - int gpio_id_reset; 89 + struct clk *aclk; 90 + struct clk *aux_clk; 138 91 }; 139 92 140 - /* Registers in PCIeCTRL */ 141 - static inline void kirin_apb_ctrl_writel(struct kirin_pcie *kirin_pcie, 142 - u32 val, u32 reg) 143 - { 144 - writel(val, kirin_pcie->apb_base + reg); 145 - } 146 - 147 - static inline u32 kirin_apb_ctrl_readl(struct kirin_pcie *kirin_pcie, u32 reg) 148 - { 149 - return readl(kirin_pcie->apb_base + reg); 150 - } 151 - 152 93 /* Registers in PCIePHY */ 153 - static inline void kirin_apb_phy_writel(struct kirin_pcie *kirin_pcie, 94 + static inline void kirin_apb_phy_writel(struct hi3660_pcie_phy *hi3660_pcie_phy, 154 95 u32 val, u32 reg) 155 96 { 156 - writel(val, kirin_pcie->phy_base + reg); 97 + writel(val, hi3660_pcie_phy->base + reg); 157 98 } 158 99 159 - static inline u32 kirin_apb_phy_readl(struct kirin_pcie *kirin_pcie, u32 reg) 100 + static inline u32 kirin_apb_phy_readl(struct hi3660_pcie_phy *hi3660_pcie_phy, 101 + u32 reg) 160 102 { 161 - return readl(kirin_pcie->phy_base + reg); 103 + return readl(hi3660_pcie_phy->base + reg); 162 104 } 163 105 164 - static long kirin_pcie_get_clk(struct kirin_pcie *kirin_pcie, 165 - struct platform_device *pdev) 106 + static int hi3660_pcie_phy_get_clk(struct hi3660_pcie_phy *phy) 166 107 { 167 - struct device *dev = &pdev->dev; 108 + struct device *dev = phy->dev; 168 109 169 - kirin_pcie->phy_ref_clk = devm_clk_get(dev, "pcie_phy_ref"); 170 - if (IS_ERR(kirin_pcie->phy_ref_clk)) 171 - return PTR_ERR(kirin_pcie->phy_ref_clk); 110 + phy->phy_ref_clk = devm_clk_get(dev, "pcie_phy_ref"); 111 + if (IS_ERR(phy->phy_ref_clk)) 112 + return PTR_ERR(phy->phy_ref_clk); 172 113 173 - kirin_pcie->pcie_aux_clk = devm_clk_get(dev, "pcie_aux"); 
174 - if (IS_ERR(kirin_pcie->pcie_aux_clk)) 175 - return PTR_ERR(kirin_pcie->pcie_aux_clk); 114 + phy->aux_clk = devm_clk_get(dev, "pcie_aux"); 115 + if (IS_ERR(phy->aux_clk)) 116 + return PTR_ERR(phy->aux_clk); 176 117 177 - kirin_pcie->apb_phy_clk = devm_clk_get(dev, "pcie_apb_phy"); 178 - if (IS_ERR(kirin_pcie->apb_phy_clk)) 179 - return PTR_ERR(kirin_pcie->apb_phy_clk); 118 + phy->apb_phy_clk = devm_clk_get(dev, "pcie_apb_phy"); 119 + if (IS_ERR(phy->apb_phy_clk)) 120 + return PTR_ERR(phy->apb_phy_clk); 180 121 181 - kirin_pcie->apb_sys_clk = devm_clk_get(dev, "pcie_apb_sys"); 182 - if (IS_ERR(kirin_pcie->apb_sys_clk)) 183 - return PTR_ERR(kirin_pcie->apb_sys_clk); 122 + phy->apb_sys_clk = devm_clk_get(dev, "pcie_apb_sys"); 123 + if (IS_ERR(phy->apb_sys_clk)) 124 + return PTR_ERR(phy->apb_sys_clk); 184 125 185 - kirin_pcie->pcie_aclk = devm_clk_get(dev, "pcie_aclk"); 186 - if (IS_ERR(kirin_pcie->pcie_aclk)) 187 - return PTR_ERR(kirin_pcie->pcie_aclk); 126 + phy->aclk = devm_clk_get(dev, "pcie_aclk"); 127 + if (IS_ERR(phy->aclk)) 128 + return PTR_ERR(phy->aclk); 188 129 189 130 return 0; 190 131 } 191 132 192 - static long kirin_pcie_get_resource(struct kirin_pcie *kirin_pcie, 193 - struct platform_device *pdev) 133 + static int hi3660_pcie_phy_get_resource(struct hi3660_pcie_phy *phy) 194 134 { 195 - kirin_pcie->apb_base = 196 - devm_platform_ioremap_resource_byname(pdev, "apb"); 197 - if (IS_ERR(kirin_pcie->apb_base)) 198 - return PTR_ERR(kirin_pcie->apb_base); 135 + struct device *dev = phy->dev; 136 + struct platform_device *pdev; 199 137 200 - kirin_pcie->phy_base = 201 - devm_platform_ioremap_resource_byname(pdev, "phy"); 202 - if (IS_ERR(kirin_pcie->phy_base)) 203 - return PTR_ERR(kirin_pcie->phy_base); 138 + /* registers */ 139 + pdev = container_of(dev, struct platform_device, dev); 204 140 205 - kirin_pcie->crgctrl = 206 - syscon_regmap_lookup_by_compatible("hisilicon,hi3660-crgctrl"); 207 - if (IS_ERR(kirin_pcie->crgctrl)) 208 - return 
PTR_ERR(kirin_pcie->crgctrl); 141 + phy->base = devm_platform_ioremap_resource_byname(pdev, "phy"); 142 + if (IS_ERR(phy->base)) 143 + return PTR_ERR(phy->base); 209 144 210 - kirin_pcie->sysctrl = 211 - syscon_regmap_lookup_by_compatible("hisilicon,hi3660-sctrl"); 212 - if (IS_ERR(kirin_pcie->sysctrl)) 213 - return PTR_ERR(kirin_pcie->sysctrl); 145 + phy->crgctrl = syscon_regmap_lookup_by_compatible("hisilicon,hi3660-crgctrl"); 146 + if (IS_ERR(phy->crgctrl)) 147 + return PTR_ERR(phy->crgctrl); 148 + 149 + phy->sysctrl = syscon_regmap_lookup_by_compatible("hisilicon,hi3660-sctrl"); 150 + if (IS_ERR(phy->sysctrl)) 151 + return PTR_ERR(phy->sysctrl); 214 152 215 153 return 0; 216 154 } 217 155 218 - static int kirin_pcie_phy_init(struct kirin_pcie *kirin_pcie) 156 + static int hi3660_pcie_phy_start(struct hi3660_pcie_phy *phy) 219 157 { 220 - struct device *dev = kirin_pcie->pci->dev; 158 + struct device *dev = phy->dev; 221 159 u32 reg_val; 222 160 223 - reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_CTRL1); 161 + reg_val = kirin_apb_phy_readl(phy, PCIE_APB_PHY_CTRL1); 224 162 reg_val &= ~PHY_REF_PAD_BIT; 225 - kirin_apb_phy_writel(kirin_pcie, reg_val, PCIE_APB_PHY_CTRL1); 163 + kirin_apb_phy_writel(phy, reg_val, PCIE_APB_PHY_CTRL1); 226 164 227 - reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_CTRL0); 165 + reg_val = kirin_apb_phy_readl(phy, PCIE_APB_PHY_CTRL0); 228 166 reg_val &= ~PHY_PWR_DOWN_BIT; 229 - kirin_apb_phy_writel(kirin_pcie, reg_val, PCIE_APB_PHY_CTRL0); 167 + kirin_apb_phy_writel(phy, reg_val, PCIE_APB_PHY_CTRL0); 230 168 usleep_range(TIME_PHY_PD_MIN, TIME_PHY_PD_MAX); 231 169 232 - reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_CTRL1); 170 + reg_val = kirin_apb_phy_readl(phy, PCIE_APB_PHY_CTRL1); 233 171 reg_val &= ~PHY_RST_ACK_BIT; 234 - kirin_apb_phy_writel(kirin_pcie, reg_val, PCIE_APB_PHY_CTRL1); 172 + kirin_apb_phy_writel(phy, reg_val, PCIE_APB_PHY_CTRL1); 235 173 236 174 usleep_range(PIPE_CLK_WAIT_MIN, 
PIPE_CLK_WAIT_MAX); 237 - reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_STATUS0); 175 + reg_val = kirin_apb_phy_readl(phy, PCIE_APB_PHY_STATUS0); 238 176 if (reg_val & PIPE_CLK_STABLE) { 239 177 dev_err(dev, "PIPE clk is not stable\n"); 240 178 return -EINVAL; ··· 226 198 return 0; 227 199 } 228 200 229 - static void kirin_pcie_oe_enable(struct kirin_pcie *kirin_pcie) 201 + static void hi3660_pcie_phy_oe_enable(struct hi3660_pcie_phy *phy) 230 202 { 231 203 u32 val; 232 204 233 - regmap_read(kirin_pcie->sysctrl, SCTRL_PCIE_OE_OFFSET, &val); 205 + regmap_read(phy->sysctrl, SCTRL_PCIE_OE_OFFSET, &val); 234 206 val |= PCIE_DEBOUNCE_PARAM; 235 207 val &= ~PCIE_OE_BYPASS; 236 - regmap_write(kirin_pcie->sysctrl, SCTRL_PCIE_OE_OFFSET, val); 208 + regmap_write(phy->sysctrl, SCTRL_PCIE_OE_OFFSET, val); 237 209 } 238 210 239 - static int kirin_pcie_clk_ctrl(struct kirin_pcie *kirin_pcie, bool enable) 211 + static int hi3660_pcie_phy_clk_ctrl(struct hi3660_pcie_phy *phy, bool enable) 240 212 { 241 213 int ret = 0; 242 214 243 215 if (!enable) 244 216 goto close_clk; 245 217 246 - ret = clk_set_rate(kirin_pcie->phy_ref_clk, REF_CLK_FREQ); 218 + ret = clk_set_rate(phy->phy_ref_clk, REF_CLK_FREQ); 247 219 if (ret) 248 220 return ret; 249 221 250 - ret = clk_prepare_enable(kirin_pcie->phy_ref_clk); 222 + ret = clk_prepare_enable(phy->phy_ref_clk); 251 223 if (ret) 252 224 return ret; 253 225 254 - ret = clk_prepare_enable(kirin_pcie->apb_sys_clk); 226 + ret = clk_prepare_enable(phy->apb_sys_clk); 255 227 if (ret) 256 228 goto apb_sys_fail; 257 229 258 - ret = clk_prepare_enable(kirin_pcie->apb_phy_clk); 230 + ret = clk_prepare_enable(phy->apb_phy_clk); 259 231 if (ret) 260 232 goto apb_phy_fail; 261 233 262 - ret = clk_prepare_enable(kirin_pcie->pcie_aclk); 234 + ret = clk_prepare_enable(phy->aclk); 263 235 if (ret) 264 236 goto aclk_fail; 265 237 266 - ret = clk_prepare_enable(kirin_pcie->pcie_aux_clk); 238 + ret = clk_prepare_enable(phy->aux_clk); 267 239 if (ret) 268 
240 goto aux_clk_fail; 269 241 270 242 return 0; 271 243 272 244 close_clk: 273 - clk_disable_unprepare(kirin_pcie->pcie_aux_clk); 245 + clk_disable_unprepare(phy->aux_clk); 274 246 aux_clk_fail: 275 - clk_disable_unprepare(kirin_pcie->pcie_aclk); 247 + clk_disable_unprepare(phy->aclk); 276 248 aclk_fail: 277 - clk_disable_unprepare(kirin_pcie->apb_phy_clk); 249 + clk_disable_unprepare(phy->apb_phy_clk); 278 250 apb_phy_fail: 279 - clk_disable_unprepare(kirin_pcie->apb_sys_clk); 251 + clk_disable_unprepare(phy->apb_sys_clk); 280 252 apb_sys_fail: 281 - clk_disable_unprepare(kirin_pcie->phy_ref_clk); 253 + clk_disable_unprepare(phy->phy_ref_clk); 282 254 283 255 return ret; 284 256 } 285 257 286 - static int kirin_pcie_power_on(struct kirin_pcie *kirin_pcie) 258 + static int hi3660_pcie_phy_power_on(struct kirin_pcie *pcie) 287 259 { 260 + struct hi3660_pcie_phy *phy = pcie->phy_priv; 288 261 int ret; 289 262 290 263 /* Power supply for Host */ 291 - regmap_write(kirin_pcie->sysctrl, 264 + regmap_write(phy->sysctrl, 292 265 SCTRL_PCIE_CMOS_OFFSET, SCTRL_PCIE_CMOS_BIT); 293 266 usleep_range(TIME_CMOS_MIN, TIME_CMOS_MAX); 294 - kirin_pcie_oe_enable(kirin_pcie); 295 267 296 - ret = kirin_pcie_clk_ctrl(kirin_pcie, true); 268 + hi3660_pcie_phy_oe_enable(phy); 269 + 270 + ret = hi3660_pcie_phy_clk_ctrl(phy, true); 297 271 if (ret) 298 272 return ret; 299 273 300 274 /* ISO disable, PCIeCtrl, PHY assert and clk gate clear */ 301 - regmap_write(kirin_pcie->sysctrl, 275 + regmap_write(phy->sysctrl, 302 276 SCTRL_PCIE_ISO_OFFSET, SCTRL_PCIE_ISO_BIT); 303 - regmap_write(kirin_pcie->crgctrl, 277 + regmap_write(phy->crgctrl, 304 278 CRGCTRL_PCIE_ASSERT_OFFSET, CRGCTRL_PCIE_ASSERT_BIT); 305 - regmap_write(kirin_pcie->sysctrl, 279 + regmap_write(phy->sysctrl, 306 280 SCTRL_PCIE_HPCLK_OFFSET, SCTRL_PCIE_HPCLK_BIT); 307 281 308 - ret = kirin_pcie_phy_init(kirin_pcie); 282 + ret = hi3660_pcie_phy_start(phy); 309 283 if (ret) 310 - goto close_clk; 284 + goto disable_clks; 311 285 312 
- /* perst assert Endpoint */ 313 - if (!gpio_request(kirin_pcie->gpio_id_reset, "pcie_perst")) { 314 - usleep_range(REF_2_PERST_MIN, REF_2_PERST_MAX); 315 - ret = gpio_direction_output(kirin_pcie->gpio_id_reset, 1); 316 - if (ret) 317 - goto close_clk; 318 - usleep_range(PERST_2_ACCESS_MIN, PERST_2_ACCESS_MAX); 286 + return 0; 319 287 288 + disable_clks: 289 + hi3660_pcie_phy_clk_ctrl(phy, false); 290 + return ret; 291 + } 292 + 293 + static int hi3660_pcie_phy_init(struct platform_device *pdev, 294 + struct kirin_pcie *pcie) 295 + { 296 + struct device *dev = &pdev->dev; 297 + struct hi3660_pcie_phy *phy; 298 + int ret; 299 + 300 + phy = devm_kzalloc(dev, sizeof(*phy), GFP_KERNEL); 301 + if (!phy) 302 + return -ENOMEM; 303 + 304 + pcie->phy_priv = phy; 305 + phy->dev = dev; 306 + 307 + /* registers */ 308 + pdev = container_of(dev, struct platform_device, dev); 309 + 310 + ret = hi3660_pcie_phy_get_clk(phy); 311 + if (ret) 312 + return ret; 313 + 314 + return hi3660_pcie_phy_get_resource(phy); 315 + } 316 + 317 + static int hi3660_pcie_phy_power_off(struct kirin_pcie *pcie) 318 + { 319 + struct hi3660_pcie_phy *phy = pcie->phy_priv; 320 + 321 + /* Drop power supply for Host */ 322 + regmap_write(phy->sysctrl, SCTRL_PCIE_CMOS_OFFSET, 0x00); 323 + 324 + hi3660_pcie_phy_clk_ctrl(phy, false); 325 + 326 + return 0; 327 + } 328 + 329 + /* 330 + * The non-PHY part starts here 331 + */ 332 + 333 + static const struct regmap_config pcie_kirin_regmap_conf = { 334 + .name = "kirin_pcie_apb", 335 + .reg_bits = 32, 336 + .val_bits = 32, 337 + .reg_stride = 4, 338 + }; 339 + 340 + static int kirin_pcie_get_gpio_enable(struct kirin_pcie *pcie, 341 + struct platform_device *pdev) 342 + { 343 + struct device *dev = &pdev->dev; 344 + struct device_node *np = dev->of_node; 345 + char name[32]; 346 + int ret, i; 347 + 348 + /* This is an optional property */ 349 + ret = of_gpio_named_count(np, "hisilicon,clken-gpios"); 350 + if (ret < 0) 320 351 return 0; 352 + 353 + if (ret > 
MAX_PCI_SLOTS) { 354 + dev_err(dev, "Too many GPIO clock requests!\n"); 355 + return -EINVAL; 321 356 } 322 357 323 - close_clk: 324 - kirin_pcie_clk_ctrl(kirin_pcie, false); 358 + pcie->n_gpio_clkreq = ret; 359 + 360 + for (i = 0; i < pcie->n_gpio_clkreq; i++) { 361 + pcie->gpio_id_clkreq[i] = of_get_named_gpio(dev->of_node, 362 + "hisilicon,clken-gpios", i); 363 + if (pcie->gpio_id_clkreq[i] < 0) 364 + return pcie->gpio_id_clkreq[i]; 365 + 366 + sprintf(name, "pcie_clkreq_%d", i); 367 + pcie->clkreq_names[i] = devm_kstrdup_const(dev, name, 368 + GFP_KERNEL); 369 + if (!pcie->clkreq_names[i]) 370 + return -ENOMEM; 371 + } 372 + 373 + return 0; 374 + } 375 + 376 + static int kirin_pcie_parse_port(struct kirin_pcie *pcie, 377 + struct platform_device *pdev, 378 + struct device_node *node) 379 + { 380 + struct device *dev = &pdev->dev; 381 + struct device_node *parent, *child; 382 + int ret, slot, i; 383 + char name[32]; 384 + 385 + for_each_available_child_of_node(node, parent) { 386 + for_each_available_child_of_node(parent, child) { 387 + i = pcie->num_slots; 388 + 389 + pcie->gpio_id_reset[i] = of_get_named_gpio(child, 390 + "reset-gpios", 0); 391 + if (pcie->gpio_id_reset[i] < 0) 392 + continue; 393 + 394 + pcie->num_slots++; 395 + if (pcie->num_slots > MAX_PCI_SLOTS) { 396 + dev_err(dev, "Too many PCI slots!\n"); 397 + ret = -EINVAL; 398 + goto put_node; 399 + } 400 + 401 + ret = of_pci_get_devfn(child); 402 + if (ret < 0) { 403 + dev_err(dev, "failed to parse devfn: %d\n", ret); 404 + goto put_node; 405 + } 406 + 407 + slot = PCI_SLOT(ret); 408 + 409 + sprintf(name, "pcie_perst_%d", slot); 410 + pcie->reset_names[i] = devm_kstrdup_const(dev, name, 411 + GFP_KERNEL); 412 + if (!pcie->reset_names[i]) { 413 + ret = -ENOMEM; 414 + goto put_node; 415 + } 416 + } 417 + } 418 + 419 + return 0; 420 + 421 + put_node: 422 + of_node_put(child); 423 + of_node_put(parent); 424 + return ret; 425 + } 426 + 427 + static long kirin_pcie_get_resource(struct kirin_pcie 
*kirin_pcie, 428 + struct platform_device *pdev) 429 + { 430 + struct device *dev = &pdev->dev; 431 + struct device_node *child, *node = dev->of_node; 432 + void __iomem *apb_base; 433 + int ret; 434 + 435 + apb_base = devm_platform_ioremap_resource_byname(pdev, "apb"); 436 + if (IS_ERR(apb_base)) 437 + return PTR_ERR(apb_base); 438 + 439 + kirin_pcie->apb = devm_regmap_init_mmio(dev, apb_base, 440 + &pcie_kirin_regmap_conf); 441 + if (IS_ERR(kirin_pcie->apb)) 442 + return PTR_ERR(kirin_pcie->apb); 443 + 444 + /* pcie internal PERST# gpio */ 445 + kirin_pcie->gpio_id_dwc_perst = of_get_named_gpio(dev->of_node, 446 + "reset-gpios", 0); 447 + if (kirin_pcie->gpio_id_dwc_perst == -EPROBE_DEFER) { 448 + return -EPROBE_DEFER; 449 + } else if (!gpio_is_valid(kirin_pcie->gpio_id_dwc_perst)) { 450 + dev_err(dev, "unable to get a valid gpio pin\n"); 451 + return -ENODEV; 452 + } 453 + 454 + ret = kirin_pcie_get_gpio_enable(kirin_pcie, pdev); 455 + if (ret) 456 + return ret; 457 + 458 + /* Parse OF children */ 459 + for_each_available_child_of_node(node, child) { 460 + ret = kirin_pcie_parse_port(kirin_pcie, pdev, child); 461 + if (ret) 462 + goto put_node; 463 + } 464 + 465 + return 0; 466 + 467 + put_node: 468 + of_node_put(child); 325 469 return ret; 326 470 } 327 471 ··· 502 302 { 503 303 u32 val; 504 304 505 - val = kirin_apb_ctrl_readl(kirin_pcie, SOC_PCIECTRL_CTRL0_ADDR); 305 + regmap_read(kirin_pcie->apb, SOC_PCIECTRL_CTRL0_ADDR, &val); 506 306 if (on) 507 307 val = val | PCIE_ELBI_SLV_DBI_ENABLE; 508 308 else 509 309 val = val & ~PCIE_ELBI_SLV_DBI_ENABLE; 510 310 511 - kirin_apb_ctrl_writel(kirin_pcie, val, SOC_PCIECTRL_CTRL0_ADDR); 311 + regmap_write(kirin_pcie->apb, SOC_PCIECTRL_CTRL0_ADDR, val); 512 312 } 513 313 514 314 static void kirin_pcie_sideband_dbi_r_mode(struct kirin_pcie *kirin_pcie, ··· 516 316 { 517 317 u32 val; 518 318 519 - val = kirin_apb_ctrl_readl(kirin_pcie, SOC_PCIECTRL_CTRL1_ADDR); 319 + regmap_read(kirin_pcie->apb, SOC_PCIECTRL_CTRL1_ADDR, 
&val); 520 320 if (on) 521 321 val = val | PCIE_ELBI_SLV_DBI_ENABLE; 522 322 else 523 323 val = val & ~PCIE_ELBI_SLV_DBI_ENABLE; 524 324 525 - kirin_apb_ctrl_writel(kirin_pcie, val, SOC_PCIECTRL_CTRL1_ADDR); 325 + regmap_write(kirin_pcie->apb, SOC_PCIECTRL_CTRL1_ADDR, val); 526 326 } 527 327 528 328 static int kirin_pcie_rd_own_conf(struct pci_bus *bus, unsigned int devfn, ··· 551 351 return PCIBIOS_SUCCESSFUL; 552 352 } 553 353 354 + static int kirin_pcie_add_bus(struct pci_bus *bus) 355 + { 356 + struct dw_pcie *pci = to_dw_pcie_from_pp(bus->sysdata); 357 + struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci); 358 + int i, ret; 359 + 360 + if (!kirin_pcie->num_slots) 361 + return 0; 362 + 363 + /* Send PERST# to each slot */ 364 + for (i = 0; i < kirin_pcie->num_slots; i++) { 365 + ret = gpio_direction_output(kirin_pcie->gpio_id_reset[i], 1); 366 + if (ret) { 367 + dev_err(pci->dev, "PERST# %s error: %d\n", 368 + kirin_pcie->reset_names[i], ret); 369 + } 370 + } 371 + usleep_range(PERST_2_ACCESS_MIN, PERST_2_ACCESS_MAX); 372 + 373 + return 0; 374 + } 375 + 554 376 static struct pci_ops kirin_pci_ops = { 555 377 .read = kirin_pcie_rd_own_conf, 556 378 .write = kirin_pcie_wr_own_conf, 379 + .add_bus = kirin_pcie_add_bus, 557 380 }; 558 381 559 382 static u32 kirin_pcie_read_dbi(struct dw_pcie *pci, void __iomem *base, ··· 605 382 static int kirin_pcie_link_up(struct dw_pcie *pci) 606 383 { 607 384 struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci); 608 - u32 val = kirin_apb_ctrl_readl(kirin_pcie, PCIE_APB_PHY_STATUS0); 385 + u32 val; 609 386 387 + regmap_read(kirin_pcie->apb, PCIE_APB_PHY_STATUS0, &val); 610 388 if ((val & PCIE_LINKUP_ENABLE) == PCIE_LINKUP_ENABLE) 611 389 return 1; 612 390 ··· 619 395 struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci); 620 396 621 397 /* assert LTSSM enable */ 622 - kirin_apb_ctrl_writel(kirin_pcie, PCIE_LTSSM_ENABLE_BIT, 623 - PCIE_APP_LTSSM_ENABLE); 398 + regmap_write(kirin_pcie->apb, PCIE_APP_LTSSM_ENABLE, 399 + 
PCIE_LTSSM_ENABLE_BIT); 624 400 625 401 return 0; 626 402 } ··· 628 404 static int kirin_pcie_host_init(struct pcie_port *pp) 629 405 { 630 406 pp->bridge->ops = &kirin_pci_ops; 407 + 408 + return 0; 409 + } 410 + 411 + static int kirin_pcie_gpio_request(struct kirin_pcie *kirin_pcie, 412 + struct device *dev) 413 + { 414 + int ret, i; 415 + 416 + for (i = 0; i < kirin_pcie->num_slots; i++) { 417 + if (!gpio_is_valid(kirin_pcie->gpio_id_reset[i])) { 418 + dev_err(dev, "unable to get a valid %s gpio\n", 419 + kirin_pcie->reset_names[i]); 420 + return -ENODEV; 421 + } 422 + 423 + ret = devm_gpio_request(dev, kirin_pcie->gpio_id_reset[i], 424 + kirin_pcie->reset_names[i]); 425 + if (ret) 426 + return ret; 427 + } 428 + 429 + for (i = 0; i < kirin_pcie->n_gpio_clkreq; i++) { 430 + if (!gpio_is_valid(kirin_pcie->gpio_id_clkreq[i])) { 431 + dev_err(dev, "unable to get a valid %s gpio\n", 432 + kirin_pcie->clkreq_names[i]); 433 + return -ENODEV; 434 + } 435 + 436 + ret = devm_gpio_request(dev, kirin_pcie->gpio_id_clkreq[i], 437 + kirin_pcie->clkreq_names[i]); 438 + if (ret) 439 + return ret; 440 + 441 + ret = gpio_direction_output(kirin_pcie->gpio_id_clkreq[i], 0); 442 + if (ret) 443 + return ret; 444 + } 631 445 632 446 return 0; 633 447 } ··· 681 419 .host_init = kirin_pcie_host_init, 682 420 }; 683 421 422 + static int kirin_pcie_power_off(struct kirin_pcie *kirin_pcie) 423 + { 424 + int i; 425 + 426 + if (kirin_pcie->type == PCIE_KIRIN_INTERNAL_PHY) 427 + return hi3660_pcie_phy_power_off(kirin_pcie); 428 + 429 + for (i = 0; i < kirin_pcie->n_gpio_clkreq; i++) 430 + gpio_direction_output(kirin_pcie->gpio_id_clkreq[i], 1); 431 + 432 + phy_power_off(kirin_pcie->phy); 433 + phy_exit(kirin_pcie->phy); 434 + 435 + return 0; 436 + } 437 + 438 + static int kirin_pcie_power_on(struct platform_device *pdev, 439 + struct kirin_pcie *kirin_pcie) 440 + { 441 + struct device *dev = &pdev->dev; 442 + int ret; 443 + 444 + if (kirin_pcie->type == PCIE_KIRIN_INTERNAL_PHY) { 445 + ret = 
hi3660_pcie_phy_init(pdev, kirin_pcie); 446 + if (ret) 447 + return ret; 448 + 449 + ret = hi3660_pcie_phy_power_on(kirin_pcie); 450 + if (ret) 451 + return ret; 452 + } else { 453 + kirin_pcie->phy = devm_of_phy_get(dev, dev->of_node, NULL); 454 + if (IS_ERR(kirin_pcie->phy)) 455 + return PTR_ERR(kirin_pcie->phy); 456 + 457 + ret = kirin_pcie_gpio_request(kirin_pcie, dev); 458 + if (ret) 459 + return ret; 460 + 461 + ret = phy_init(kirin_pcie->phy); 462 + if (ret) 463 + goto err; 464 + 465 + ret = phy_power_on(kirin_pcie->phy); 466 + if (ret) 467 + goto err; 468 + } 469 + 470 + /* perst assert Endpoint */ 471 + usleep_range(REF_2_PERST_MIN, REF_2_PERST_MAX); 472 + 473 + if (!gpio_request(kirin_pcie->gpio_id_dwc_perst, "pcie_perst_bridge")) { 474 + ret = gpio_direction_output(kirin_pcie->gpio_id_dwc_perst, 1); 475 + if (ret) 476 + goto err; 477 + } 478 + 479 + usleep_range(PERST_2_ACCESS_MIN, PERST_2_ACCESS_MAX); 480 + 481 + return 0; 482 + err: 483 + kirin_pcie_power_off(kirin_pcie); 484 + 485 + return ret; 486 + } 487 + 488 + static int __exit kirin_pcie_remove(struct platform_device *pdev) 489 + { 490 + struct kirin_pcie *kirin_pcie = platform_get_drvdata(pdev); 491 + 492 + dw_pcie_host_deinit(&kirin_pcie->pci->pp); 493 + 494 + kirin_pcie_power_off(kirin_pcie); 495 + 496 + return 0; 497 + } 498 + 499 + static const struct of_device_id kirin_pcie_match[] = { 500 + { 501 + .compatible = "hisilicon,kirin960-pcie", 502 + .data = (void *)PCIE_KIRIN_INTERNAL_PHY 503 + }, 504 + { 505 + .compatible = "hisilicon,kirin970-pcie", 506 + .data = (void *)PCIE_KIRIN_EXTERNAL_PHY 507 + }, 508 + {}, 509 + }; 510 + 684 511 static int kirin_pcie_probe(struct platform_device *pdev) 685 512 { 513 + enum pcie_kirin_phy_type phy_type; 514 + const struct of_device_id *of_id; 686 515 struct device *dev = &pdev->dev; 687 516 struct kirin_pcie *kirin_pcie; 688 517 struct dw_pcie *pci; ··· 783 430 dev_err(dev, "NULL node\n"); 784 431 return -EINVAL; 785 432 } 433 + 434 + of_id = 
of_match_device(kirin_pcie_match, dev); 435 + if (!of_id) { 436 + dev_err(dev, "OF data missing\n"); 437 + return -EINVAL; 438 + } 439 + 440 + phy_type = (long)of_id->data; 786 441 787 442 kirin_pcie = devm_kzalloc(dev, sizeof(struct kirin_pcie), GFP_KERNEL); 788 443 if (!kirin_pcie) ··· 804 443 pci->ops = &kirin_dw_pcie_ops; 805 444 pci->pp.ops = &kirin_pcie_host_ops; 806 445 kirin_pcie->pci = pci; 807 - 808 - ret = kirin_pcie_get_clk(kirin_pcie, pdev); 809 - if (ret) 810 - return ret; 446 + kirin_pcie->type = phy_type; 811 447 812 448 ret = kirin_pcie_get_resource(kirin_pcie, pdev); 813 449 if (ret) 814 450 return ret; 815 451 816 - kirin_pcie->gpio_id_reset = of_get_named_gpio(dev->of_node, 817 - "reset-gpios", 0); 818 - if (kirin_pcie->gpio_id_reset == -EPROBE_DEFER) { 819 - return -EPROBE_DEFER; 820 - } else if (!gpio_is_valid(kirin_pcie->gpio_id_reset)) { 821 - dev_err(dev, "unable to get a valid gpio pin\n"); 822 - return -ENODEV; 823 - } 452 + platform_set_drvdata(pdev, kirin_pcie); 824 453 825 - ret = kirin_pcie_power_on(kirin_pcie); 454 + ret = kirin_pcie_power_on(pdev, kirin_pcie); 826 455 if (ret) 827 456 return ret; 828 - 829 - platform_set_drvdata(pdev, kirin_pcie); 830 457 831 458 return dw_pcie_host_init(&pci->pp); 832 459 } 833 460 834 - static const struct of_device_id kirin_pcie_match[] = { 835 - { .compatible = "hisilicon,kirin960-pcie" }, 836 - {}, 837 - }; 838 - 839 461 static struct platform_driver kirin_pcie_driver = { 840 462 .probe = kirin_pcie_probe, 463 + .remove = __exit_p(kirin_pcie_remove), 841 464 .driver = { 842 465 .name = "kirin-pcie", 843 - .of_match_table = kirin_pcie_match, 844 - .suppress_bind_attrs = true, 466 + .of_match_table = kirin_pcie_match, 467 + .suppress_bind_attrs = true, 845 468 }, 846 469 }; 847 - builtin_platform_driver(kirin_pcie_driver); 470 + module_platform_driver(kirin_pcie_driver); 471 + 472 + MODULE_DEVICE_TABLE(of, kirin_pcie_match); 473 + MODULE_DESCRIPTION("PCIe host controller driver for Kirin Phone 
SoCs"); 474 + MODULE_AUTHOR("Xiaowei Song <songxiaowei@huawei.com>"); 475 + MODULE_LICENSE("GPL v2");
+721
drivers/pci/controller/dwc/pcie-qcom-ep.c
// SPDX-License-Identifier: GPL-2.0
/*
 * Qualcomm PCIe Endpoint controller driver
 *
 * Copyright (c) 2020, The Linux Foundation. All rights reserved.
 * Author: Siddartha Mohanadoss <smohanad@codeaurora.org>
 *
 * Copyright (c) 2021, Linaro Ltd.
 * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
 */

#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/mfd/syscon.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/pm_domain.h>
#include <linux/regmap.h>
#include <linux/reset.h>

#include "pcie-designware.h"

/* PARF registers */
#define PARF_SYS_CTRL				0x00
#define PARF_DB_CTRL				0x10
#define PARF_PM_CTRL				0x20
#define PARF_MHI_BASE_ADDR_LOWER		0x178
#define PARF_MHI_BASE_ADDR_UPPER		0x17c
#define PARF_DEBUG_INT_EN			0x190
#define PARF_AXI_MSTR_RD_HALT_NO_WRITES		0x1a4
#define PARF_AXI_MSTR_WR_ADDR_HALT		0x1a8
#define PARF_Q2A_FLUSH				0x1ac
#define PARF_LTSSM				0x1b0
#define PARF_CFG_BITS				0x210
#define PARF_INT_ALL_STATUS			0x224
#define PARF_INT_ALL_CLEAR			0x228
#define PARF_INT_ALL_MASK			0x22c
#define PARF_SLV_ADDR_MSB_CTRL			0x2c0
#define PARF_DBI_BASE_ADDR			0x350
#define PARF_DBI_BASE_ADDR_HI			0x354
#define PARF_SLV_ADDR_SPACE_SIZE		0x358
#define PARF_SLV_ADDR_SPACE_SIZE_HI		0x35c
#define PARF_ATU_BASE_ADDR			0x634
#define PARF_ATU_BASE_ADDR_HI			0x638
#define PARF_SRIS_MODE				0x644
#define PARF_DEVICE_TYPE			0x1000
#define PARF_BDF_TO_SID_CFG			0x2c00

/* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */
#define PARF_INT_ALL_LINK_DOWN			BIT(1)
#define PARF_INT_ALL_BME			BIT(2)
#define PARF_INT_ALL_PM_TURNOFF			BIT(3)
#define PARF_INT_ALL_DEBUG			BIT(4)
#define PARF_INT_ALL_LTR			BIT(5)
#define PARF_INT_ALL_MHI_Q6			BIT(6)
#define PARF_INT_ALL_MHI_A7			BIT(7)
#define PARF_INT_ALL_DSTATE_CHANGE		BIT(8)
#define PARF_INT_ALL_L1SUB_TIMEOUT		BIT(9)
#define PARF_INT_ALL_MMIO_WRITE			BIT(10)
#define PARF_INT_ALL_CFG_WRITE			BIT(11)
#define PARF_INT_ALL_BRIDGE_FLUSH_N		BIT(12)
#define PARF_INT_ALL_LINK_UP			BIT(13)
#define PARF_INT_ALL_AER_LEGACY			BIT(14)
#define PARF_INT_ALL_PLS_ERR			BIT(15)
#define PARF_INT_ALL_PME_LEGACY			BIT(16)
#define PARF_INT_ALL_PLS_PME			BIT(17)

/* PARF_BDF_TO_SID_CFG register fields */
#define PARF_BDF_TO_SID_BYPASS			BIT(0)

/* PARF_DEBUG_INT_EN register fields */
#define PARF_DEBUG_INT_PM_DSTATE_CHANGE		BIT(1)
#define PARF_DEBUG_INT_CFG_BUS_MASTER_EN	BIT(2)
#define PARF_DEBUG_INT_RADM_PM_TURNOFF		BIT(3)

/* PARF_DEVICE_TYPE register fields */
#define PARF_DEVICE_TYPE_EP			0x0

/* PARF_PM_CTRL register fields */
#define PARF_PM_CTRL_REQ_EXIT_L1		BIT(1)
#define PARF_PM_CTRL_READY_ENTR_L23		BIT(2)
#define PARF_PM_CTRL_REQ_NOT_ENTR_L1		BIT(5)

/* PARF_AXI_MSTR_RD_HALT_NO_WRITES register fields */
#define PARF_AXI_MSTR_RD_HALT_NO_WRITE_EN	BIT(0)

/* PARF_AXI_MSTR_WR_ADDR_HALT register fields */
#define PARF_AXI_MSTR_WR_ADDR_HALT_EN		BIT(31)

/* PARF_Q2A_FLUSH register fields */
#define PARF_Q2A_FLUSH_EN			BIT(16)

/* PARF_SYS_CTRL register fields */
#define PARF_SYS_CTRL_AUX_PWR_DET		BIT(4)
#define PARF_SYS_CTRL_CORE_CLK_CGC_DIS		BIT(6)
#define PARF_SYS_CTRL_SLV_DBI_WAKE_DISABLE	BIT(11)

/* PARF_DB_CTRL register fields */
#define PARF_DB_CTRL_INSR_DBNCR_BLOCK		BIT(0)
#define PARF_DB_CTRL_RMVL_DBNCR_BLOCK		BIT(1)
#define PARF_DB_CTRL_DBI_WKP_BLOCK		BIT(4)
#define PARF_DB_CTRL_SLV_WKP_BLOCK		BIT(5)
#define PARF_DB_CTRL_MST_WKP_BLOCK		BIT(6)

/* PARF_CFG_BITS register fields */
#define PARF_CFG_BITS_REQ_EXIT_L1SS_MSI_LTR_EN	BIT(1)

/* ELBI registers */
#define ELBI_SYS_STTS				0x08

/* DBI registers */
#define DBI_CON_STATUS				0x44

/* DBI register fields */
#define DBI_CON_STATUS_POWER_STATE_MASK		GENMASK(1, 0)

#define XMLH_LINK_UP				0x400
#define CORE_RESET_TIME_US_MIN			1000
#define CORE_RESET_TIME_US_MAX			1005
#define WAKE_DELAY_US				2000 /* 2 ms */

#define to_pcie_ep(x)				dev_get_drvdata((x)->dev)

enum qcom_pcie_ep_link_status {
	QCOM_PCIE_EP_LINK_DISABLED,
	QCOM_PCIE_EP_LINK_ENABLED,
	QCOM_PCIE_EP_LINK_UP,
	QCOM_PCIE_EP_LINK_DOWN,
};

static struct clk_bulk_data qcom_pcie_ep_clks[] = {
	{ .id = "cfg" },
	{ .id = "aux" },
	{ .id = "bus_master" },
	{ .id = "bus_slave" },
	{ .id = "ref" },
	{ .id = "sleep" },
	{ .id = "slave_q2a" },
};

struct qcom_pcie_ep {
	struct dw_pcie pci;

	void __iomem *parf;
	void __iomem *elbi;
	struct regmap *perst_map;
	struct resource *mmio_res;

	struct reset_control *core_reset;
	struct gpio_desc *reset;
	struct gpio_desc *wake;
	struct phy *phy;

	u32 perst_en;
	u32 perst_sep_en;

	enum qcom_pcie_ep_link_status link_status;
	int global_irq;
	int perst_irq;
};

static int qcom_pcie_ep_core_reset(struct qcom_pcie_ep *pcie_ep)
{
	struct dw_pcie *pci = &pcie_ep->pci;
	struct device *dev = pci->dev;
	int ret;

	ret = reset_control_assert(pcie_ep->core_reset);
	if (ret) {
		dev_err(dev, "Cannot assert core reset\n");
		return ret;
	}

	usleep_range(CORE_RESET_TIME_US_MIN, CORE_RESET_TIME_US_MAX);

	ret = reset_control_deassert(pcie_ep->core_reset);
	if (ret) {
		dev_err(dev, "Cannot de-assert core reset\n");
		return ret;
	}

	usleep_range(CORE_RESET_TIME_US_MIN, CORE_RESET_TIME_US_MAX);

	return 0;
}

/*
 * Delatch PERST_EN and PERST_SEPARATION_ENABLE with TCSR to avoid
 * device reset during host reboot and hibernation. The driver is
 * expected to handle this situation.
 */
static void qcom_pcie_ep_configure_tcsr(struct qcom_pcie_ep *pcie_ep)
{
	regmap_write(pcie_ep->perst_map, pcie_ep->perst_en, 0);
	regmap_write(pcie_ep->perst_map, pcie_ep->perst_sep_en, 0);
}

static int qcom_pcie_dw_link_up(struct dw_pcie *pci)
{
	struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
	u32 reg;

	reg = readl_relaxed(pcie_ep->elbi + ELBI_SYS_STTS);

	return reg & XMLH_LINK_UP;
}

static int qcom_pcie_dw_start_link(struct dw_pcie *pci)
{
	struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);

	enable_irq(pcie_ep->perst_irq);

	return 0;
}

static void qcom_pcie_dw_stop_link(struct dw_pcie *pci)
{
	struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);

	disable_irq(pcie_ep->perst_irq);
}

static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
{
	struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
	struct device *dev = pci->dev;
	u32 val, offset;
	int ret;

	ret = clk_bulk_prepare_enable(ARRAY_SIZE(qcom_pcie_ep_clks),
				      qcom_pcie_ep_clks);
	if (ret)
		return ret;

	ret = qcom_pcie_ep_core_reset(pcie_ep);
	if (ret)
		goto err_disable_clk;

	ret = phy_init(pcie_ep->phy);
	if (ret)
		goto err_disable_clk;

	ret = phy_power_on(pcie_ep->phy);
	if (ret)
		goto err_phy_exit;

	/* Assert WAKE# to RC to indicate device is ready */
	gpiod_set_value_cansleep(pcie_ep->wake, 1);
	usleep_range(WAKE_DELAY_US, WAKE_DELAY_US + 500);
	gpiod_set_value_cansleep(pcie_ep->wake, 0);

	qcom_pcie_ep_configure_tcsr(pcie_ep);

	/* Disable BDF to SID mapping */
	val = readl_relaxed(pcie_ep->parf + PARF_BDF_TO_SID_CFG);
	val |= PARF_BDF_TO_SID_BYPASS;
	writel_relaxed(val, pcie_ep->parf + PARF_BDF_TO_SID_CFG);

	/* Enable debug IRQ */
	val = readl_relaxed(pcie_ep->parf + PARF_DEBUG_INT_EN);
	val |= PARF_DEBUG_INT_RADM_PM_TURNOFF |
	       PARF_DEBUG_INT_CFG_BUS_MASTER_EN |
	       PARF_DEBUG_INT_PM_DSTATE_CHANGE;
	writel_relaxed(val, pcie_ep->parf + PARF_DEBUG_INT_EN);

	/* Configure PCIe to endpoint mode */
	writel_relaxed(PARF_DEVICE_TYPE_EP, pcie_ep->parf + PARF_DEVICE_TYPE);

	/* Allow entering L1 state */
	val = readl_relaxed(pcie_ep->parf + PARF_PM_CTRL);
	val &= ~PARF_PM_CTRL_REQ_NOT_ENTR_L1;
	writel_relaxed(val, pcie_ep->parf + PARF_PM_CTRL);

	/* Read halts write */
	val = readl_relaxed(pcie_ep->parf + PARF_AXI_MSTR_RD_HALT_NO_WRITES);
	val &= ~PARF_AXI_MSTR_RD_HALT_NO_WRITE_EN;
	writel_relaxed(val, pcie_ep->parf + PARF_AXI_MSTR_RD_HALT_NO_WRITES);

	/* Write after write halt */
	val = readl_relaxed(pcie_ep->parf + PARF_AXI_MSTR_WR_ADDR_HALT);
	val |= PARF_AXI_MSTR_WR_ADDR_HALT_EN;
	writel_relaxed(val, pcie_ep->parf + PARF_AXI_MSTR_WR_ADDR_HALT);

	/* Q2A flush disable */
	val = readl_relaxed(pcie_ep->parf + PARF_Q2A_FLUSH);
	val &= ~PARF_Q2A_FLUSH_EN;
	writel_relaxed(val, pcie_ep->parf + PARF_Q2A_FLUSH);

	/* Disable DBI Wakeup, core clock CGC and enable AUX power */
	val = readl_relaxed(pcie_ep->parf + PARF_SYS_CTRL);
	val |= PARF_SYS_CTRL_SLV_DBI_WAKE_DISABLE |
	       PARF_SYS_CTRL_CORE_CLK_CGC_DIS |
	       PARF_SYS_CTRL_AUX_PWR_DET;
	writel_relaxed(val, pcie_ep->parf + PARF_SYS_CTRL);

	/* Disable the debouncers */
	val = readl_relaxed(pcie_ep->parf + PARF_DB_CTRL);
	val |= PARF_DB_CTRL_INSR_DBNCR_BLOCK | PARF_DB_CTRL_RMVL_DBNCR_BLOCK |
	       PARF_DB_CTRL_DBI_WKP_BLOCK | PARF_DB_CTRL_SLV_WKP_BLOCK |
	       PARF_DB_CTRL_MST_WKP_BLOCK;
	writel_relaxed(val, pcie_ep->parf + PARF_DB_CTRL);

	/* Request to exit from L1SS for MSI and LTR MSG */
	val = readl_relaxed(pcie_ep->parf + PARF_CFG_BITS);
	val |= PARF_CFG_BITS_REQ_EXIT_L1SS_MSI_LTR_EN;
	writel_relaxed(val, pcie_ep->parf + PARF_CFG_BITS);

	dw_pcie_dbi_ro_wr_en(pci);

	/* Set the L0s Exit Latency to 2us-4us = 0x6 */
	offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
	val = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP);
	val &= ~PCI_EXP_LNKCAP_L0SEL;
	val |= FIELD_PREP(PCI_EXP_LNKCAP_L0SEL, 0x6);
	dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, val);

	/* Set the L1 Exit Latency to be 32us-64 us = 0x6 */
	offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
	val = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP);
	val &= ~PCI_EXP_LNKCAP_L1EL;
	val |= FIELD_PREP(PCI_EXP_LNKCAP_L1EL, 0x6);
	dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, val);

	dw_pcie_dbi_ro_wr_dis(pci);

	writel_relaxed(0, pcie_ep->parf + PARF_INT_ALL_MASK);
	val = PARF_INT_ALL_LINK_DOWN | PARF_INT_ALL_BME |
	      PARF_INT_ALL_PM_TURNOFF | PARF_INT_ALL_DSTATE_CHANGE |
	      PARF_INT_ALL_LINK_UP;
	writel_relaxed(val, pcie_ep->parf + PARF_INT_ALL_MASK);

	ret = dw_pcie_ep_init_complete(&pcie_ep->pci.ep);
	if (ret) {
		dev_err(dev, "Failed to complete initialization: %d\n", ret);
		goto err_phy_power_off;
	}

	/*
	 * The physical address of the MMIO region which is exposed as the BAR
	 * should be written to MHI BASE registers.
	 */
	writel_relaxed(pcie_ep->mmio_res->start,
		       pcie_ep->parf + PARF_MHI_BASE_ADDR_LOWER);
	writel_relaxed(0, pcie_ep->parf + PARF_MHI_BASE_ADDR_UPPER);

	dw_pcie_ep_init_notify(&pcie_ep->pci.ep);

	/* Enable LTSSM */
	val = readl_relaxed(pcie_ep->parf + PARF_LTSSM);
	val |= BIT(8);
	writel_relaxed(val, pcie_ep->parf + PARF_LTSSM);

	return 0;

err_phy_power_off:
	phy_power_off(pcie_ep->phy);
err_phy_exit:
	phy_exit(pcie_ep->phy);
err_disable_clk:
	clk_bulk_disable_unprepare(ARRAY_SIZE(qcom_pcie_ep_clks),
				   qcom_pcie_ep_clks);

	return ret;
}

static void qcom_pcie_perst_assert(struct dw_pcie *pci)
{
	struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
	struct device *dev = pci->dev;

	if (pcie_ep->link_status == QCOM_PCIE_EP_LINK_DISABLED) {
		dev_dbg(dev, "Link is already disabled\n");
		return;
	}

	phy_power_off(pcie_ep->phy);
	phy_exit(pcie_ep->phy);
	clk_bulk_disable_unprepare(ARRAY_SIZE(qcom_pcie_ep_clks),
				   qcom_pcie_ep_clks);
	pcie_ep->link_status = QCOM_PCIE_EP_LINK_DISABLED;
}

/* Common DWC controller ops */
static const struct dw_pcie_ops pci_ops = {
	.link_up = qcom_pcie_dw_link_up,
	.start_link = qcom_pcie_dw_start_link,
	.stop_link = qcom_pcie_dw_stop_link,
};

static int qcom_pcie_ep_get_io_resources(struct platform_device *pdev,
					 struct qcom_pcie_ep *pcie_ep)
{
	struct device *dev = &pdev->dev;
	struct dw_pcie *pci = &pcie_ep->pci;
	struct device_node *syscon;
	struct resource *res;
	int ret;

	pcie_ep->parf = devm_platform_ioremap_resource_byname(pdev, "parf");
	if (IS_ERR(pcie_ep->parf))
		return PTR_ERR(pcie_ep->parf);

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
	pci->dbi_base = devm_pci_remap_cfg_resource(dev, res);
	if (IS_ERR(pci->dbi_base))
		return PTR_ERR(pci->dbi_base);
	pci->dbi_base2 = pci->dbi_base;

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi");
	pcie_ep->elbi = devm_pci_remap_cfg_resource(dev, res);
	if (IS_ERR(pcie_ep->elbi))
		return PTR_ERR(pcie_ep->elbi);

	pcie_ep->mmio_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
							 "mmio");

	syscon = of_parse_phandle(dev->of_node, "qcom,perst-regs", 0);
	if (!syscon) {
		dev_err(dev, "Failed to parse qcom,perst-regs\n");
		return -EINVAL;
	}

	pcie_ep->perst_map = syscon_node_to_regmap(syscon);
	of_node_put(syscon);
	if (IS_ERR(pcie_ep->perst_map))
		return PTR_ERR(pcie_ep->perst_map);

	ret = of_property_read_u32_index(dev->of_node, "qcom,perst-regs",
					 1, &pcie_ep->perst_en);
	if (ret < 0) {
		dev_err(dev, "No Perst Enable offset in syscon\n");
		return ret;
	}

	ret = of_property_read_u32_index(dev->of_node, "qcom,perst-regs",
					 2, &pcie_ep->perst_sep_en);
	if (ret < 0) {
		dev_err(dev, "No Perst Separation Enable offset in syscon\n");
		return ret;
	}

	return 0;
}

static int qcom_pcie_ep_get_resources(struct platform_device *pdev,
				      struct qcom_pcie_ep *pcie_ep)
{
	struct device *dev = &pdev->dev;
	int ret;

	ret = qcom_pcie_ep_get_io_resources(pdev, pcie_ep);
	if (ret) {
		dev_err(&pdev->dev, "Failed to get io resources %d\n", ret);
		return ret;
	}

	ret = devm_clk_bulk_get(dev, ARRAY_SIZE(qcom_pcie_ep_clks),
				qcom_pcie_ep_clks);
	if (ret)
		return ret;

	pcie_ep->core_reset = devm_reset_control_get_exclusive(dev, "core");
	if (IS_ERR(pcie_ep->core_reset))
		return PTR_ERR(pcie_ep->core_reset);

	pcie_ep->reset = devm_gpiod_get(dev, "reset", GPIOD_IN);
	if (IS_ERR(pcie_ep->reset))
		return PTR_ERR(pcie_ep->reset);

	pcie_ep->wake = devm_gpiod_get_optional(dev, "wake", GPIOD_OUT_LOW);
	if (IS_ERR(pcie_ep->wake))
		return PTR_ERR(pcie_ep->wake);

	pcie_ep->phy = devm_phy_optional_get(&pdev->dev, "pciephy");
	if (IS_ERR(pcie_ep->phy))
		ret = PTR_ERR(pcie_ep->phy);

	return ret;
}

/* TODO: Notify clients about PCIe state change */
static irqreturn_t qcom_pcie_ep_global_irq_thread(int irq, void *data)
{
	struct qcom_pcie_ep *pcie_ep = data;
	struct dw_pcie *pci = &pcie_ep->pci;
	struct device *dev = pci->dev;
	u32 status = readl_relaxed(pcie_ep->parf + PARF_INT_ALL_STATUS);
	u32 mask = readl_relaxed(pcie_ep->parf + PARF_INT_ALL_MASK);
	u32 dstate, val;

	writel_relaxed(status, pcie_ep->parf + PARF_INT_ALL_CLEAR);
	status &= mask;

	if (FIELD_GET(PARF_INT_ALL_LINK_DOWN, status)) {
		dev_dbg(dev, "Received Linkdown event\n");
		pcie_ep->link_status = QCOM_PCIE_EP_LINK_DOWN;
	} else if (FIELD_GET(PARF_INT_ALL_BME, status)) {
		dev_dbg(dev, "Received BME event. Link is enabled!\n");
		pcie_ep->link_status = QCOM_PCIE_EP_LINK_ENABLED;
	} else if (FIELD_GET(PARF_INT_ALL_PM_TURNOFF, status)) {
		dev_dbg(dev, "Received PM Turn-off event! Entering L23\n");
		val = readl_relaxed(pcie_ep->parf + PARF_PM_CTRL);
		val |= PARF_PM_CTRL_READY_ENTR_L23;
		writel_relaxed(val, pcie_ep->parf + PARF_PM_CTRL);
	} else if (FIELD_GET(PARF_INT_ALL_DSTATE_CHANGE, status)) {
		dstate = dw_pcie_readl_dbi(pci, DBI_CON_STATUS) &
					   DBI_CON_STATUS_POWER_STATE_MASK;
		dev_dbg(dev, "Received D%d state event\n", dstate);
		if (dstate == 3) {
			val = readl_relaxed(pcie_ep->parf + PARF_PM_CTRL);
			val |= PARF_PM_CTRL_REQ_EXIT_L1;
			writel_relaxed(val, pcie_ep->parf + PARF_PM_CTRL);
		}
	} else if (FIELD_GET(PARF_INT_ALL_LINK_UP, status)) {
		dev_dbg(dev, "Received Linkup event. Enumeration complete!\n");
		dw_pcie_ep_linkup(&pci->ep);
		pcie_ep->link_status = QCOM_PCIE_EP_LINK_UP;
	} else {
		dev_dbg(dev, "Received unknown event: %d\n", status);
	}

	return IRQ_HANDLED;
}

static irqreturn_t qcom_pcie_ep_perst_irq_thread(int irq, void *data)
{
	struct qcom_pcie_ep *pcie_ep = data;
	struct dw_pcie *pci = &pcie_ep->pci;
	struct device *dev = pci->dev;
	u32 perst;

	perst = gpiod_get_value(pcie_ep->reset);
	if (perst) {
		dev_dbg(dev, "PERST asserted by host. Shutting down the PCIe link!\n");
		qcom_pcie_perst_assert(pci);
	} else {
		dev_dbg(dev, "PERST de-asserted by host. Starting link training!\n");
		qcom_pcie_perst_deassert(pci);
	}

	irq_set_irq_type(gpiod_to_irq(pcie_ep->reset),
			 (perst ? IRQF_TRIGGER_HIGH : IRQF_TRIGGER_LOW));

	return IRQ_HANDLED;
}

static int qcom_pcie_ep_enable_irq_resources(struct platform_device *pdev,
					     struct qcom_pcie_ep *pcie_ep)
{
	int irq, ret;

	irq = platform_get_irq_byname(pdev, "global");
	if (irq < 0) {
		dev_err(&pdev->dev, "Failed to get Global IRQ\n");
		return irq;
	}

	ret = devm_request_threaded_irq(&pdev->dev, irq, NULL,
					qcom_pcie_ep_global_irq_thread,
					IRQF_ONESHOT,
					"global_irq", pcie_ep);
	if (ret) {
		dev_err(&pdev->dev, "Failed to request Global IRQ\n");
		return ret;
	}

	pcie_ep->perst_irq = gpiod_to_irq(pcie_ep->reset);
	irq_set_status_flags(pcie_ep->perst_irq, IRQ_NOAUTOEN);
	ret = devm_request_threaded_irq(&pdev->dev, pcie_ep->perst_irq, NULL,
					qcom_pcie_ep_perst_irq_thread,
					IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
					"perst_irq", pcie_ep);
	if (ret) {
		dev_err(&pdev->dev, "Failed to request PERST IRQ\n");
		disable_irq(irq);
		return ret;
	}

	return 0;
}

static int qcom_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
				  enum pci_epc_irq_type type, u16 interrupt_num)
{
	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);

	switch (type) {
	case PCI_EPC_IRQ_LEGACY:
		return dw_pcie_ep_raise_legacy_irq(ep, func_no);
	case PCI_EPC_IRQ_MSI:
		return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
	default:
		dev_err(pci->dev, "Unknown IRQ type\n");
		return -EINVAL;
	}
}

static const struct pci_epc_features qcom_pcie_epc_features = {
	.linkup_notifier = true,
	.core_init_notifier = true,
	.msi_capable = true,
	.msix_capable = false,
};

static const struct pci_epc_features *
qcom_pcie_epc_get_features(struct dw_pcie_ep *pci_ep)
{
	return &qcom_pcie_epc_features;
}

static void qcom_pcie_ep_init(struct dw_pcie_ep *ep)
{
	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
	enum pci_barno bar;

	for (bar = BAR_0; bar <= BAR_5; bar++)
		dw_pcie_ep_reset_bar(pci, bar);
}

static struct dw_pcie_ep_ops pci_ep_ops = {
	.ep_init = qcom_pcie_ep_init,
	.raise_irq = qcom_pcie_ep_raise_irq,
	.get_features = qcom_pcie_epc_get_features,
};

static int qcom_pcie_ep_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct qcom_pcie_ep *pcie_ep;
	int ret;

	pcie_ep = devm_kzalloc(dev, sizeof(*pcie_ep), GFP_KERNEL);
	if (!pcie_ep)
		return -ENOMEM;

	pcie_ep->pci.dev = dev;
	pcie_ep->pci.ops = &pci_ops;
	pcie_ep->pci.ep.ops = &pci_ep_ops;
	platform_set_drvdata(pdev, pcie_ep);

	ret = qcom_pcie_ep_get_resources(pdev, pcie_ep);
	if (ret)
		return ret;

	ret = clk_bulk_prepare_enable(ARRAY_SIZE(qcom_pcie_ep_clks),
				      qcom_pcie_ep_clks);
	if (ret)
		return ret;

	ret = qcom_pcie_ep_core_reset(pcie_ep);
	if (ret)
		goto err_disable_clk;

	ret = phy_init(pcie_ep->phy);
	if (ret)
		goto err_disable_clk;

	/* PHY needs to be powered on for dw_pcie_ep_init() */
	ret = phy_power_on(pcie_ep->phy);
	if (ret)
		goto err_phy_exit;

	ret = dw_pcie_ep_init(&pcie_ep->pci.ep);
	if (ret) {
		dev_err(dev, "Failed to initialize endpoint: %d\n", ret);
		goto err_phy_power_off;
	}

	ret = qcom_pcie_ep_enable_irq_resources(pdev, pcie_ep);
	if (ret)
		goto err_phy_power_off;

	return 0;

err_phy_power_off:
	phy_power_off(pcie_ep->phy);
err_phy_exit:
	phy_exit(pcie_ep->phy);
err_disable_clk:
	clk_bulk_disable_unprepare(ARRAY_SIZE(qcom_pcie_ep_clks),
				   qcom_pcie_ep_clks);

	return ret;
}

static int qcom_pcie_ep_remove(struct platform_device *pdev)
{
	struct qcom_pcie_ep *pcie_ep = platform_get_drvdata(pdev);

	if (pcie_ep->link_status == QCOM_PCIE_EP_LINK_DISABLED)
		return 0;

	phy_power_off(pcie_ep->phy);
	phy_exit(pcie_ep->phy);
	clk_bulk_disable_unprepare(ARRAY_SIZE(qcom_pcie_ep_clks),
				   qcom_pcie_ep_clks);

	return 0;
}

static const struct of_device_id qcom_pcie_ep_match[] = {
	{ .compatible = "qcom,sdx55-pcie-ep", },
	{ }
};

static struct platform_driver qcom_pcie_ep_driver = {
	.probe	= qcom_pcie_ep_probe,
	.remove	= qcom_pcie_ep_remove,
	.driver	= {
		.name = "qcom-pcie-ep",
		.of_match_table	= qcom_pcie_ep_match,
	},
};
builtin_platform_driver(qcom_pcie_ep_driver);

MODULE_AUTHOR("Siddartha Mohanadoss <smohanad@codeaurora.org>");
MODULE_AUTHOR("Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>");
MODULE_DESCRIPTION("Qualcomm PCIe Endpoint controller driver");
MODULE_LICENSE("GPL v2");
+85 -11
drivers/pci/controller/dwc/pcie-qcom.c
···
 	struct regulator_bulk_data supplies[2];
 	struct reset_control *pci_reset;
 	struct clk *pipe_clk;
+	struct clk *pipe_clk_src;
+	struct clk *phy_pipe_clk;
+	struct clk *ref_clk_src;
 };

 union qcom_pcie_resources {
···
 	int (*config_sid)(struct qcom_pcie *pcie);
 };

+struct qcom_pcie_cfg {
+	const struct qcom_pcie_ops *ops;
+	unsigned int pipe_clk_need_muxing:1;
+};
+
 struct qcom_pcie {
 	struct dw_pcie *pci;
 	void __iomem *parf;			/* DT parf */
···
 	struct phy *phy;
 	struct gpio_desc *reset;
 	const struct qcom_pcie_ops *ops;
+	unsigned int pipe_clk_need_muxing:1;
 };

 #define to_qcom_pcie(x)		dev_get_drvdata((x)->dev)
···
 	if (ret < 0)
 		return ret;

+	if (pcie->pipe_clk_need_muxing) {
+		res->pipe_clk_src = devm_clk_get(dev, "pipe_mux");
+		if (IS_ERR(res->pipe_clk_src))
+			return PTR_ERR(res->pipe_clk_src);
+
+		res->phy_pipe_clk = devm_clk_get(dev, "phy_pipe");
+		if (IS_ERR(res->phy_pipe_clk))
+			return PTR_ERR(res->phy_pipe_clk);
+
+		res->ref_clk_src = devm_clk_get(dev, "ref");
+		if (IS_ERR(res->ref_clk_src))
+			return PTR_ERR(res->ref_clk_src);
+	}
+
 	res->pipe_clk = devm_clk_get(dev, "pipe");
 	return PTR_ERR_OR_ZERO(res->pipe_clk);
 }
···
 		dev_err(dev, "cannot enable regulators\n");
 		return ret;
 	}
+
+	/* Set TCXO as clock source for pcie_pipe_clk_src */
+	if (pcie->pipe_clk_need_muxing)
+		clk_set_parent(res->pipe_clk_src, res->ref_clk_src);

 	ret = clk_bulk_prepare_enable(res->num_clks, res->clks);
 	if (ret < 0)
···
 static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie)
 {
 	struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
+
+	/* Set pipe clock as clock source for pcie_pipe_clk_src */
+	if (pcie->pipe_clk_need_muxing)
+		clk_set_parent(res->pipe_clk_src, res->phy_pipe_clk);

 	return clk_prepare_enable(res->pipe_clk);
 }
···
 	.config_sid = qcom_pcie_config_sid_sm8250,
 };

+static const struct qcom_pcie_cfg apq8084_cfg = {
+	.ops = &ops_1_0_0,
+};
+
+static const struct qcom_pcie_cfg ipq8064_cfg = {
+	.ops = &ops_2_1_0,
+};
+
+static const struct qcom_pcie_cfg msm8996_cfg = {
+	.ops = &ops_2_3_2,
+};
+
+static const struct qcom_pcie_cfg ipq8074_cfg = {
+	.ops = &ops_2_3_3,
+};
+
+static const struct qcom_pcie_cfg ipq4019_cfg = {
+	.ops = &ops_2_4_0,
+};
+
+static const struct qcom_pcie_cfg sdm845_cfg = {
+	.ops = &ops_2_7_0,
+};
+
+static const struct qcom_pcie_cfg sm8250_cfg = {
+	.ops = &ops_1_9_0,
+};
+
+static const struct qcom_pcie_cfg sc7280_cfg = {
+	.ops = &ops_1_9_0,
+	.pipe_clk_need_muxing = true,
+};
+
 static const struct dw_pcie_ops dw_pcie_ops = {
 	.link_up = qcom_pcie_link_up,
 	.start_link = qcom_pcie_start_link,
···
 	struct pcie_port *pp;
 	struct dw_pcie *pci;
 	struct qcom_pcie *pcie;
+	const struct qcom_pcie_cfg *pcie_cfg;
 	int ret;

 	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
···

 	pcie->pci = pci;

-	pcie->ops = of_device_get_match_data(dev);
+	pcie_cfg = of_device_get_match_data(dev);
+	if (!pcie_cfg || !pcie_cfg->ops) {
+		dev_err(dev, "Invalid platform data\n");
+		return -EINVAL;
+	}
+
+	pcie->ops = pcie_cfg->ops;
+	pcie->pipe_clk_need_muxing = pcie_cfg->pipe_clk_need_muxing;

 	pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH);
 	if (IS_ERR(pcie->reset)) {
···
 }

 static const struct of_device_id qcom_pcie_match[] = {
-	{ .compatible = "qcom,pcie-apq8084", .data = &ops_1_0_0 },
-	{ .compatible = "qcom,pcie-ipq8064", .data = &ops_2_1_0 },
-	{ .compatible = "qcom,pcie-ipq8064-v2", .data = &ops_2_1_0 },
-	{ .compatible = "qcom,pcie-apq8064", .data = &ops_2_1_0 },
-	{ .compatible = "qcom,pcie-msm8996", .data = &ops_2_3_2 },
-	{ .compatible = "qcom,pcie-ipq8074", .data = &ops_2_3_3 },
-	{ .compatible = "qcom,pcie-ipq4019", .data = &ops_2_4_0 },
-	{ .compatible = "qcom,pcie-qcs404", .data = &ops_2_4_0 },
-	{ .compatible = "qcom,pcie-sdm845", .data = &ops_2_7_0 },
-	{ .compatible = "qcom,pcie-sm8250", .data = &ops_1_9_0 },
+	{ .compatible = "qcom,pcie-apq8084", .data = &apq8084_cfg },
+	{ .compatible = "qcom,pcie-ipq8064", .data = &ipq8064_cfg },
+	{ .compatible = "qcom,pcie-ipq8064-v2", .data = &ipq8064_cfg },
+	{ .compatible = "qcom,pcie-apq8064", .data = &ipq8064_cfg },
+	{ .compatible = "qcom,pcie-msm8996", .data = &msm8996_cfg },
+	{ .compatible = "qcom,pcie-ipq8074", .data = &ipq8074_cfg },
+	{ .compatible = "qcom,pcie-ipq4019", .data = &ipq4019_cfg },
+	{ .compatible = "qcom,pcie-qcs404", .data = &ipq4019_cfg },
+	{ .compatible = "qcom,pcie-sdm845", .data = &sdm845_cfg },
+	{ .compatible = "qcom,pcie-sm8250", .data = &sm8250_cfg },
+	{ .compatible = "qcom,pcie-sc8180x", .data = &sm8250_cfg },
+	{ .compatible = "qcom,pcie-sc7280", .data = &sc7280_cfg },
 	{ }
 };
+10 -16
drivers/pci/controller/dwc/pcie-uniphier.c
···
 	writel(PCL_RCV_INTX_ALL_ENABLE, priv->base + PCL_RCV_INTX);
 }

-static void uniphier_pcie_irq_ack(struct irq_data *d)
-{
-	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
-	u32 val;
-
-	val = readl(priv->base + PCL_RCV_INTX);
-	val &= ~PCL_RCV_INTX_ALL_STATUS;
-	val |= BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_STATUS_SHIFT);
-	writel(val, priv->base + PCL_RCV_INTX);
-}
-
 static void uniphier_pcie_irq_mask(struct irq_data *d)
 {
 	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
+	unsigned long flags;
 	u32 val;

+	raw_spin_lock_irqsave(&pp->lock, flags);
+
 	val = readl(priv->base + PCL_RCV_INTX);
-	val &= ~PCL_RCV_INTX_ALL_MASK;
 	val |= BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_MASK_SHIFT);
 	writel(val, priv->base + PCL_RCV_INTX);
+
+	raw_spin_unlock_irqrestore(&pp->lock, flags);
 }

 static void uniphier_pcie_irq_unmask(struct irq_data *d)
···
 	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
+	unsigned long flags;
 	u32 val;

+	raw_spin_lock_irqsave(&pp->lock, flags);
+
 	val = readl(priv->base + PCL_RCV_INTX);
-	val &= ~PCL_RCV_INTX_ALL_MASK;
 	val &= ~BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_MASK_SHIFT);
 	writel(val, priv->base + PCL_RCV_INTX);
+
+	raw_spin_unlock_irqrestore(&pp->lock, flags);
 }

 static struct irq_chip uniphier_pcie_irq_chip = {
 	.name = "PCI",
-	.irq_ack = uniphier_pcie_irq_ack,
 	.irq_mask = uniphier_pcie_irq_mask,
 	.irq_unmask = uniphier_pcie_irq_unmask,
 };
+1 -4
drivers/pci/controller/dwc/pcie-visconti.c
···
 {
 	struct dw_pcie *pci = &pcie->pci;
 	struct pcie_port *pp = &pci->pp;
-	struct device *dev = &pdev->dev;

 	pp->irq = platform_get_irq_byname(pdev, "intr");
-	if (pp->irq < 0) {
-		dev_err(dev, "Interrupt intr is missing");
+	if (pp->irq < 0)
 		return pp->irq;
-	}

 	pp->ops = &visconti_pcie_host_ops;
+313 -180
drivers/pci/controller/pci-aardvark.c
···
 /* PCIe core registers */
 #define PCIE_CORE_DEV_ID_REG				0x0
 #define PCIE_CORE_CMD_STATUS_REG			0x4
-#define     PCIE_CORE_CMD_IO_ACCESS_EN			BIT(0)
-#define     PCIE_CORE_CMD_MEM_ACCESS_EN			BIT(1)
-#define     PCIE_CORE_CMD_MEM_IO_REQ_EN			BIT(2)
 #define PCIE_CORE_DEV_REV_REG				0x8
+#define PCIE_CORE_EXP_ROM_BAR_REG			0x30
 #define PCIE_CORE_PCIEXP_CAP				0xc0
 #define PCIE_CORE_ERR_CAPCTL_REG			0x118
 #define     PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX		BIT(5)
···
 #define     PCIE_CORE_CTRL2_MSI_ENABLE		BIT(10)
 #define PCIE_CORE_REF_CLK_REG			(CONTROL_BASE_ADDR + 0x14)
 #define     PCIE_CORE_REF_CLK_TX_ENABLE		BIT(1)
+#define     PCIE_CORE_REF_CLK_RX_ENABLE		BIT(2)
 #define PCIE_MSG_LOG_REG			(CONTROL_BASE_ADDR + 0x30)
 #define PCIE_ISR0_REG				(CONTROL_BASE_ADDR + 0x40)
 #define PCIE_MSG_PM_PME_MASK			BIT(7)
···
 #define     PCIE_ISR0_MSI_INT_PENDING		BIT(24)
 #define     PCIE_ISR0_INTX_ASSERT(val)		BIT(16 + (val))
 #define     PCIE_ISR0_INTX_DEASSERT(val)	BIT(20 + (val))
-#define     PCIE_ISR0_ALL_MASK			GENMASK(26, 0)
+#define     PCIE_ISR0_ALL_MASK			GENMASK(31, 0)
 #define PCIE_ISR1_REG				(CONTROL_BASE_ADDR + 0x48)
 #define PCIE_ISR1_MASK_REG			(CONTROL_BASE_ADDR + 0x4C)
 #define     PCIE_ISR1_POWER_STATE_CHANGE	BIT(4)
 #define     PCIE_ISR1_FLUSH			BIT(5)
 #define     PCIE_ISR1_INTX_ASSERT(val)		BIT(8 + (val))
-#define     PCIE_ISR1_ALL_MASK			GENMASK(11, 4)
+#define     PCIE_ISR1_ALL_MASK			GENMASK(31, 0)
 #define PCIE_MSI_ADDR_LOW_REG			(CONTROL_BASE_ADDR + 0x50)
 #define PCIE_MSI_ADDR_HIGH_REG			(CONTROL_BASE_ADDR + 0x54)
 #define PCIE_MSI_STATUS_REG			(CONTROL_BASE_ADDR + 0x58)
 #define PCIE_MSI_MASK_REG			(CONTROL_BASE_ADDR + 0x5C)
 #define PCIE_MSI_PAYLOAD_REG			(CONTROL_BASE_ADDR + 0x9C)
+#define     PCIE_MSI_DATA_MASK			GENMASK(15, 0)

 /* PCIe window configuration */
 #define OB_WIN_BASE_ADDR			0x4c00
···
 #define CFG_REG					(LMI_BASE_ADDR + 0x0)
 #define     LTSSM_SHIFT				24
 #define     LTSSM_MASK				0x3f
-#define     LTSSM_L0				0x10
 #define     RC_BAR_CONFIG			0x300
+
+/* LTSSM values in CFG_REG */
+enum {
+	LTSSM_DETECT_QUIET			= 0x0,
+	LTSSM_DETECT_ACTIVE			= 0x1,
+	LTSSM_POLLING_ACTIVE			= 0x2,
+	LTSSM_POLLING_COMPLIANCE		= 0x3,
+	LTSSM_POLLING_CONFIGURATION		= 0x4,
+	LTSSM_CONFIG_LINKWIDTH_START		= 0x5,
+	LTSSM_CONFIG_LINKWIDTH_ACCEPT		= 0x6,
+	LTSSM_CONFIG_LANENUM_ACCEPT		= 0x7,
+	LTSSM_CONFIG_LANENUM_WAIT		= 0x8,
+	LTSSM_CONFIG_COMPLETE			= 0x9,
+	LTSSM_CONFIG_IDLE			= 0xa,
+	LTSSM_RECOVERY_RCVR_LOCK		= 0xb,
+	LTSSM_RECOVERY_SPEED			= 0xc,
+	LTSSM_RECOVERY_RCVR_CFG			= 0xd,
+	LTSSM_RECOVERY_IDLE			= 0xe,
+	LTSSM_L0				= 0x10,
+	LTSSM_RX_L0S_ENTRY			= 0x11,
+	LTSSM_RX_L0S_IDLE			= 0x12,
+	LTSSM_RX_L0S_FTS			= 0x13,
+	LTSSM_TX_L0S_ENTRY			= 0x14,
+	LTSSM_TX_L0S_IDLE			= 0x15,
+	LTSSM_TX_L0S_FTS			= 0x16,
+	LTSSM_L1_ENTRY				= 0x17,
+	LTSSM_L1_IDLE				= 0x18,
+	LTSSM_L2_IDLE				= 0x19,
+	LTSSM_L2_TRANSMIT_WAKE			= 0x1a,
+	LTSSM_DISABLED				= 0x20,
+	LTSSM_LOOPBACK_ENTRY_MASTER		= 0x21,
+	LTSSM_LOOPBACK_ACTIVE_MASTER		= 0x22,
+	LTSSM_LOOPBACK_EXIT_MASTER		= 0x23,
+	LTSSM_LOOPBACK_ENTRY_SLAVE		= 0x24,
+	LTSSM_LOOPBACK_ACTIVE_SLAVE		= 0x25,
+	LTSSM_LOOPBACK_EXIT_SLAVE		= 0x26,
+	LTSSM_HOT_RESET				= 0x27,
+	LTSSM_RECOVERY_EQUALIZATION_PHASE0	= 0x28,
+	LTSSM_RECOVERY_EQUALIZATION_PHASE1	= 0x29,
+	LTSSM_RECOVERY_EQUALIZATION_PHASE2	= 0x2a,
+	LTSSM_RECOVERY_EQUALIZATION_PHASE3	= 0x2b,
+};
+
 #define VENDOR_ID_REG				(LMI_BASE_ADDR + 0x44)

 /* PCIe core controller registers */
···
 #define     PCIE_IRQ_MSI_INT2_DET		BIT(21)
 #define     PCIE_IRQ_RC_DBELL_DET		BIT(22)
 #define     PCIE_IRQ_EP_STATUS			BIT(23)
-#define     PCIE_IRQ_ALL_MASK			0xfff0fb
+#define     PCIE_IRQ_ALL_MASK			GENMASK(31, 0)
 #define     PCIE_IRQ_ENABLE_INTS_MASK		PCIE_IRQ_CORE_INT

 /* Transaction types */
···
 	return readl(pcie->base + reg);
 }

-static inline u16 advk_read16(struct advk_pcie *pcie, u64 reg)
+static u8 advk_pcie_ltssm_state(struct advk_pcie *pcie)
 {
-	return advk_readl(pcie, (reg & ~0x3)) >> ((reg & 0x3) * 8);
-}
-
-static int advk_pcie_link_up(struct advk_pcie *pcie)
-{
-	u32 val, ltssm_state;
+	u32 val;
+	u8 ltssm_state;

 	val = advk_readl(pcie, CFG_REG);
 	ltssm_state = (val >> LTSSM_SHIFT) & LTSSM_MASK;
-	return ltssm_state >= LTSSM_L0;
+	return ltssm_state;
+}
+
+static inline bool advk_pcie_link_up(struct advk_pcie *pcie)
+{
+	/* check if LTSSM is in normal operation - some L* state */
+	u8 ltssm_state = advk_pcie_ltssm_state(pcie);
+	return ltssm_state >= LTSSM_L0 && ltssm_state < LTSSM_DISABLED;
+}
+
+static inline bool advk_pcie_link_active(struct advk_pcie *pcie)
+{
+	/*
+	 * According to PCIe Base specification 3.0, Table 4-14: Link
+	 * Status Mapped to the LTSSM, and 4.2.6.3.6 Configuration.Idle
+	 * is Link Up mapped to LTSSM Configuration.Idle, Recovery, L0,
+	 * L0s, L1 and L2 states. And according to 3.2.1. Data Link
+	 * Control and Management State Machine Rules is DL Up status
+	 * reported in DL Active state.
+	 */
+	u8 ltssm_state = advk_pcie_ltssm_state(pcie);
+	return ltssm_state >= LTSSM_CONFIG_IDLE && ltssm_state < LTSSM_DISABLED;
+}
+
+static inline bool advk_pcie_link_training(struct advk_pcie *pcie)
+{
+	/*
+	 * According to PCIe Base specification 3.0, Table 4-14: Link
+	 * Status Mapped to the LTSSM is Link Training mapped to LTSSM
+	 * Configuration and Recovery states.
+	 */
+	u8 ltssm_state = advk_pcie_ltssm_state(pcie);
+	return ((ltssm_state >= LTSSM_CONFIG_LINKWIDTH_START &&
+		 ltssm_state < LTSSM_L0) ||
+		(ltssm_state >= LTSSM_RECOVERY_EQUALIZATION_PHASE0 &&
+		 ltssm_state <= LTSSM_RECOVERY_EQUALIZATION_PHASE3));
 }

 static int advk_pcie_wait_for_link(struct advk_pcie *pcie)
···
 	size_t retries;

 	for (retries = 0; retries < RETRAIN_WAIT_MAX_RETRIES; ++retries) {
-		if (!advk_pcie_link_up(pcie))
+		if (advk_pcie_link_training(pcie))
 			break;
 		udelay(RETRAIN_WAIT_USLEEP_US);
 	}
···

 static void advk_pcie_issue_perst(struct advk_pcie *pcie)
 {
-	u32 reg;
-
 	if (!pcie->reset_gpio)
 		return;
-
-	/*
-	 * As required by PCI Express spec (PCI Express Base Specification, REV.
-	 * 4.0 PCI Express, February 19 2014, 6.6.1 Conventional Reset) a delay
-	 * for at least 100ms after de-asserting PERST# signal is needed before
-	 * link training is enabled. So ensure that link training is disabled
-	 * prior de-asserting PERST# signal to fulfill that PCI Express spec
-	 * requirement.
-	 */
-	reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
-	reg &= ~LINK_TRAINING_EN;
-	advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);

 	/* 10ms delay is needed for some cards */
 	dev_info(&pcie->pdev->dev, "issuing PERST via reset GPIO for 10ms\n");
···
 	gpiod_set_value_cansleep(pcie->reset_gpio, 0);
 }

-static int advk_pcie_train_at_gen(struct advk_pcie *pcie, int gen)
+static void advk_pcie_train_link(struct advk_pcie *pcie)
 {
-	int ret, neg_gen;
+	struct device *dev = &pcie->pdev->dev;
 	u32 reg;
+	int ret;

-	/* Setup link speed */
+	/*
+	 * Setup PCIe rev / gen compliance based on device tree property
+	 * 'max-link-speed' which also forces maximal link speed.
335 + */ 391 336 reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); 392 337 reg &= ~PCIE_GEN_SEL_MSK; 393 - if (gen == 3) 338 + if (pcie->link_gen == 3) 394 339 reg |= SPEED_GEN_3; 395 - else if (gen == 2) 340 + else if (pcie->link_gen == 2) 396 341 reg |= SPEED_GEN_2; 397 342 else 398 343 reg |= SPEED_GEN_1; 399 344 advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); 400 345 401 346 /* 402 - * Enable link training. This is not needed in every call to this 403 - * function, just once suffices, but it does not break anything either. 347 + * Set maximal link speed value also into PCIe Link Control 2 register. 348 + * Armada 3700 Functional Specification says that default value is based 349 + * on SPEED_GEN but tests showed that default value is always 8.0 GT/s. 404 350 */ 351 + reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL2); 352 + reg &= ~PCI_EXP_LNKCTL2_TLS; 353 + if (pcie->link_gen == 3) 354 + reg |= PCI_EXP_LNKCTL2_TLS_8_0GT; 355 + else if (pcie->link_gen == 2) 356 + reg |= PCI_EXP_LNKCTL2_TLS_5_0GT; 357 + else 358 + reg |= PCI_EXP_LNKCTL2_TLS_2_5GT; 359 + advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL2); 360 + 361 + /* Enable link training after selecting PCIe generation */ 405 362 reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); 406 363 reg |= LINK_TRAINING_EN; 407 364 advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); 408 - 409 - /* 410 - * Start link training immediately after enabling it. 411 - * This solves problems for some buggy cards. 
412 - */ 413 - reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL); 414 - reg |= PCI_EXP_LNKCTL_RL; 415 - advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL); 416 - 417 - ret = advk_pcie_wait_for_link(pcie); 418 - if (ret) 419 - return ret; 420 - 421 - reg = advk_read16(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKSTA); 422 - neg_gen = reg & PCI_EXP_LNKSTA_CLS; 423 - 424 - return neg_gen; 425 - } 426 - 427 - static void advk_pcie_train_link(struct advk_pcie *pcie) 428 - { 429 - struct device *dev = &pcie->pdev->dev; 430 - int neg_gen = -1, gen; 431 365 432 366 /* 433 367 * Reset PCIe card via PERST# signal. Some cards are not detected ··· 432 380 * PERST# signal could have been asserted by pinctrl subsystem before 433 381 * probe() callback has been called or issued explicitly by reset gpio 434 382 * function advk_pcie_issue_perst(), making the endpoint going into 435 - * fundamental reset. As required by PCI Express spec a delay for at 436 - * least 100ms after such a reset before link training is needed. 383 + * fundamental reset. As required by PCI Express spec (PCI Express 384 + * Base Specification, REV. 4.0 PCI Express, February 19 2014, 6.6.1 385 + * Conventional Reset) a delay for at least 100ms after such a reset 386 + * before sending a Configuration Request to the device is needed. 387 + * So wait until PCIe link is up. Function advk_pcie_wait_for_link() 388 + * waits for link at least 900ms. 437 389 */ 438 - msleep(PCI_PM_D3COLD_WAIT); 439 - 440 - /* 441 - * Try link training at link gen specified by device tree property 442 - * 'max-link-speed'. If this fails, iteratively train at lower gen. 443 - */ 444 - for (gen = pcie->link_gen; gen > 0; --gen) { 445 - neg_gen = advk_pcie_train_at_gen(pcie, gen); 446 - if (neg_gen > 0) 447 - break; 448 - } 449 - 450 - if (neg_gen < 0) 451 - goto err; 452 - 453 - /* 454 - * After successful training if negotiated gen is lower than requested, 455 - * train again on negotiated gen. 
This solves some stability issues for 456 - * some buggy gen1 cards. 457 - */ 458 - if (neg_gen < gen) { 459 - gen = neg_gen; 460 - neg_gen = advk_pcie_train_at_gen(pcie, gen); 461 - } 462 - 463 - if (neg_gen == gen) { 464 - dev_info(dev, "link up at gen %i\n", gen); 465 - return; 466 - } 467 - 468 - err: 469 - dev_err(dev, "link never came up\n"); 390 + ret = advk_pcie_wait_for_link(pcie); 391 + if (ret < 0) 392 + dev_err(dev, "link never came up\n"); 393 + else 394 + dev_info(dev, "link up\n"); 470 395 } 471 396 472 397 /* ··· 480 451 u32 reg; 481 452 int i; 482 453 483 - /* Enable TX */ 454 + /* 455 + * Configure PCIe Reference clock. Direction is from the PCIe 456 + * controller to the endpoint card, so enable transmitting of 457 + * Reference clock differential signal off-chip and disable 458 + * receiving off-chip differential signal. 459 + */ 484 460 reg = advk_readl(pcie, PCIE_CORE_REF_CLK_REG); 485 461 reg |= PCIE_CORE_REF_CLK_TX_ENABLE; 462 + reg &= ~PCIE_CORE_REF_CLK_RX_ENABLE; 486 463 advk_writel(pcie, reg, PCIE_CORE_REF_CLK_REG); 487 464 488 465 /* Set to Direct mode */ ··· 512 477 reg = (PCI_VENDOR_ID_MARVELL << 16) | PCI_VENDOR_ID_MARVELL; 513 478 advk_writel(pcie, reg, VENDOR_ID_REG); 514 479 480 + /* 481 + * Change Class Code of PCI Bridge device to PCI Bridge (0x600400), 482 + * because the default value is Mass storage controller (0x010400). 483 + * 484 + * Note that this Aardvark PCI Bridge does not have compliant Type 1 485 + * Configuration Space and it even cannot be accessed via Aardvark's 486 + * PCI config space access method. Something like config space is 487 + * available in internal Aardvark registers starting at offset 0x0 488 + * and is reported as Type 0. In range 0x10 - 0x34 it has totally 489 + * different registers. 490 + * 491 + * Therefore driver uses emulation of PCI Bridge which emulates 492 + * access to configuration space via internal Aardvark registers or 493 + * emulated configuration buffer. 
494 + */ 495 + reg = advk_readl(pcie, PCIE_CORE_DEV_REV_REG); 496 + reg &= ~0xffffff00; 497 + reg |= (PCI_CLASS_BRIDGE_PCI << 8) << 8; 498 + advk_writel(pcie, reg, PCIE_CORE_DEV_REV_REG); 499 + 500 + /* Disable Root Bridge I/O space, memory space and bus mastering */ 501 + reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG); 502 + reg &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER); 503 + advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG); 504 + 515 505 /* Set Advanced Error Capabilities and Control PF0 register */ 516 506 reg = PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX | 517 507 PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN | ··· 548 488 reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_DEVCTL); 549 489 reg &= ~PCI_EXP_DEVCTL_RELAX_EN; 550 490 reg &= ~PCI_EXP_DEVCTL_NOSNOOP_EN; 491 + reg &= ~PCI_EXP_DEVCTL_PAYLOAD; 551 492 reg &= ~PCI_EXP_DEVCTL_READRQ; 552 - reg |= PCI_EXP_DEVCTL_PAYLOAD; /* Set max payload size */ 493 + reg |= PCI_EXP_DEVCTL_PAYLOAD_512B; 553 494 reg |= PCI_EXP_DEVCTL_READRQ_512B; 554 495 advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_DEVCTL); 555 496 ··· 635 574 advk_pcie_disable_ob_win(pcie, i); 636 575 637 576 advk_pcie_train_link(pcie); 638 - 639 - /* 640 - * FIXME: The following register update is suspicious. This register is 641 - * applicable only when the PCI controller is configured for Endpoint 642 - * mode, not as a Root Complex. But apparently when this code is 643 - * removed, some cards stop working. This should be investigated and 644 - * a comment explaining this should be put here. 
645 - */ 646 - reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG); 647 - reg |= PCIE_CORE_CMD_MEM_ACCESS_EN | 648 - PCIE_CORE_CMD_IO_ACCESS_EN | 649 - PCIE_CORE_CMD_MEM_IO_REQ_EN; 650 - advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG); 651 577 } 652 578 653 579 static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u32 *val) ··· 643 595 u32 reg; 644 596 unsigned int status; 645 597 char *strcomp_status, *str_posted; 598 + int ret; 646 599 647 600 reg = advk_readl(pcie, PIO_STAT); 648 601 status = (reg & PIO_COMPLETION_STATUS_MASK) >> ··· 668 619 case PIO_COMPLETION_STATUS_OK: 669 620 if (reg & PIO_ERR_STATUS) { 670 621 strcomp_status = "COMP_ERR"; 622 + ret = -EFAULT; 671 623 break; 672 624 } 673 625 /* Get the read result */ ··· 676 626 *val = advk_readl(pcie, PIO_RD_DATA); 677 627 /* No error */ 678 628 strcomp_status = NULL; 629 + ret = 0; 679 630 break; 680 631 case PIO_COMPLETION_STATUS_UR: 681 632 strcomp_status = "UR"; 633 + ret = -EOPNOTSUPP; 682 634 break; 683 635 case PIO_COMPLETION_STATUS_CRS: 684 636 if (allow_crs && val) { ··· 698 646 */ 699 647 *val = CFG_RD_CRS_VAL; 700 648 strcomp_status = NULL; 649 + ret = 0; 701 650 break; 702 651 } 703 652 /* PCIe r4.0, sec 2.3.2, says: ··· 714 661 * Request and taking appropriate action, e.g., complete the 715 662 * Request to the host as a failed transaction. 716 663 * 717 - * To simplify implementation do not re-issue the Configuration 718 - * Request and complete the Request as a failed transaction. 664 + * So return -EAGAIN and caller (pci-aardvark.c driver) will 665 + * re-issue request again up to the PIO_RETRY_CNT retries. 
719 666 */ 720 667 strcomp_status = "CRS"; 668 + ret = -EAGAIN; 721 669 break; 722 670 case PIO_COMPLETION_STATUS_CA: 723 671 strcomp_status = "CA"; 672 + ret = -ECANCELED; 724 673 break; 725 674 default: 726 675 strcomp_status = "Unknown"; 676 + ret = -EINVAL; 727 677 break; 728 678 } 729 679 730 680 if (!strcomp_status) 731 - return 0; 681 + return ret; 732 682 733 683 if (reg & PIO_NON_POSTED_REQ) 734 684 str_posted = "Non-posted"; 735 685 else 736 686 str_posted = "Posted"; 737 687 738 - dev_err(dev, "%s PIO Response Status: %s, %#x @ %#x\n", 688 + dev_dbg(dev, "%s PIO Response Status: %s, %#x @ %#x\n", 739 689 str_posted, strcomp_status, reg, advk_readl(pcie, PIO_ADDR_LS)); 740 690 741 - return -EFAULT; 691 + return ret; 742 692 } 743 693 744 694 static int advk_pcie_wait_pio(struct advk_pcie *pcie) ··· 749 693 struct device *dev = &pcie->pdev->dev; 750 694 int i; 751 695 752 - for (i = 0; i < PIO_RETRY_CNT; i++) { 696 + for (i = 1; i <= PIO_RETRY_CNT; i++) { 753 697 u32 start, isr; 754 698 755 699 start = advk_readl(pcie, PIO_START); 756 700 isr = advk_readl(pcie, PIO_ISR); 757 701 if (!start && isr) 758 - return 0; 702 + return i; 759 703 udelay(PIO_RETRY_DELAY); 760 704 } 761 705 ··· 763 707 return -ETIMEDOUT; 764 708 } 765 709 710 + static pci_bridge_emul_read_status_t 711 + advk_pci_bridge_emul_base_conf_read(struct pci_bridge_emul *bridge, 712 + int reg, u32 *value) 713 + { 714 + struct advk_pcie *pcie = bridge->data; 715 + 716 + switch (reg) { 717 + case PCI_COMMAND: 718 + *value = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG); 719 + return PCI_BRIDGE_EMUL_HANDLED; 720 + 721 + case PCI_ROM_ADDRESS1: 722 + *value = advk_readl(pcie, PCIE_CORE_EXP_ROM_BAR_REG); 723 + return PCI_BRIDGE_EMUL_HANDLED; 724 + 725 + case PCI_INTERRUPT_LINE: { 726 + /* 727 + * From the whole 32bit register we support reading from HW only 728 + * one bit: PCI_BRIDGE_CTL_BUS_RESET. 729 + * Other bits are retrieved only from emulated config buffer. 
730 + */ 731 + __le32 *cfgspace = (__le32 *)&bridge->conf; 732 + u32 val = le32_to_cpu(cfgspace[PCI_INTERRUPT_LINE / 4]); 733 + if (advk_readl(pcie, PCIE_CORE_CTRL1_REG) & HOT_RESET_GEN) 734 + val |= PCI_BRIDGE_CTL_BUS_RESET << 16; 735 + else 736 + val &= ~(PCI_BRIDGE_CTL_BUS_RESET << 16); 737 + *value = val; 738 + return PCI_BRIDGE_EMUL_HANDLED; 739 + } 740 + 741 + default: 742 + return PCI_BRIDGE_EMUL_NOT_HANDLED; 743 + } 744 + } 745 + 746 + static void 747 + advk_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge, 748 + int reg, u32 old, u32 new, u32 mask) 749 + { 750 + struct advk_pcie *pcie = bridge->data; 751 + 752 + switch (reg) { 753 + case PCI_COMMAND: 754 + advk_writel(pcie, new, PCIE_CORE_CMD_STATUS_REG); 755 + break; 756 + 757 + case PCI_ROM_ADDRESS1: 758 + advk_writel(pcie, new, PCIE_CORE_EXP_ROM_BAR_REG); 759 + break; 760 + 761 + case PCI_INTERRUPT_LINE: 762 + if (mask & (PCI_BRIDGE_CTL_BUS_RESET << 16)) { 763 + u32 val = advk_readl(pcie, PCIE_CORE_CTRL1_REG); 764 + if (new & (PCI_BRIDGE_CTL_BUS_RESET << 16)) 765 + val |= HOT_RESET_GEN; 766 + else 767 + val &= ~HOT_RESET_GEN; 768 + advk_writel(pcie, val, PCIE_CORE_CTRL1_REG); 769 + } 770 + break; 771 + 772 + default: 773 + break; 774 + } 775 + } 766 776 767 777 static pci_bridge_emul_read_status_t 768 778 advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge, ··· 845 723 case PCI_EXP_RTCTL: { 846 724 u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG); 847 725 *value = (val & PCIE_MSG_PM_PME_MASK) ? 0 : PCI_EXP_RTCTL_PMEIE; 726 + *value |= le16_to_cpu(bridge->pcie_conf.rootctl) & PCI_EXP_RTCTL_CRSSVE; 848 727 *value |= PCI_EXP_RTCAP_CRSVIS << 16; 849 728 return PCI_BRIDGE_EMUL_HANDLED; 850 729 } ··· 857 734 return PCI_BRIDGE_EMUL_HANDLED; 858 735 } 859 736 737 + case PCI_EXP_LNKCAP: { 738 + u32 val = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg); 739 + /* 740 + * PCI_EXP_LNKCAP_DLLLARC bit is hardwired in aardvark HW to 0. 
741 + * But support for PCI_EXP_LNKSTA_DLLLA is emulated via ltssm 742 + * state so explicitly enable PCI_EXP_LNKCAP_DLLLARC flag. 743 + */ 744 + val |= PCI_EXP_LNKCAP_DLLLARC; 745 + *value = val; 746 + return PCI_BRIDGE_EMUL_HANDLED; 747 + } 748 + 860 749 case PCI_EXP_LNKCTL: { 861 750 /* u32 contains both PCI_EXP_LNKCTL and PCI_EXP_LNKSTA */ 862 751 u32 val = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg) & 863 752 ~(PCI_EXP_LNKSTA_LT << 16); 864 - if (!advk_pcie_link_up(pcie)) 753 + if (advk_pcie_link_training(pcie)) 865 754 val |= (PCI_EXP_LNKSTA_LT << 16); 755 + if (advk_pcie_link_active(pcie)) 756 + val |= (PCI_EXP_LNKSTA_DLLLA << 16); 866 757 *value = val; 867 758 return PCI_BRIDGE_EMUL_HANDLED; 868 759 } ··· 884 747 case PCI_CAP_LIST_ID: 885 748 case PCI_EXP_DEVCAP: 886 749 case PCI_EXP_DEVCTL: 887 - case PCI_EXP_LNKCAP: 888 750 *value = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg); 889 751 return PCI_BRIDGE_EMUL_HANDLED; 890 752 default: ··· 930 794 } 931 795 932 796 static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = { 797 + .read_base = advk_pci_bridge_emul_base_conf_read, 798 + .write_base = advk_pci_bridge_emul_base_conf_write, 933 799 .read_pcie = advk_pci_bridge_emul_pcie_conf_read, 934 800 .write_pcie = advk_pci_bridge_emul_pcie_conf_write, 935 801 }; ··· 943 805 static int advk_sw_pci_bridge_init(struct advk_pcie *pcie) 944 806 { 945 807 struct pci_bridge_emul *bridge = &pcie->bridge; 946 - int ret; 947 808 948 809 bridge->conf.vendor = 949 810 cpu_to_le16(advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff); ··· 962 825 /* Support interrupt A for MSI feature */ 963 826 bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE; 964 827 828 + /* Indicates supports for Completion Retry Status */ 829 + bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS); 830 + 965 831 bridge->has_pcie = true; 966 832 bridge->data = pcie; 967 833 bridge->ops = &advk_pci_bridge_emul_ops; 968 834 969 - /* PCIe config space can be initialized after 
pci_bridge_emul_init() */ 970 - ret = pci_bridge_emul_init(bridge, 0); 971 - if (ret < 0) 972 - return ret; 973 - 974 - /* Indicates supports for Completion Retry Status */ 975 - bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS); 976 - 977 - return 0; 835 + return pci_bridge_emul_init(bridge, 0); 978 836 } 979 837 980 838 static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus, ··· 1021 889 int where, int size, u32 *val) 1022 890 { 1023 891 struct advk_pcie *pcie = bus->sysdata; 892 + int retry_count; 1024 893 bool allow_crs; 1025 894 u32 reg; 1026 895 int ret; ··· 1044 911 (le16_to_cpu(pcie->bridge.pcie_conf.rootctl) & 1045 912 PCI_EXP_RTCTL_CRSSVE); 1046 913 1047 - if (advk_pcie_pio_is_running(pcie)) { 1048 - /* 1049 - * If it is possible return Completion Retry Status so caller 1050 - * tries to issue the request again instead of failing. 1051 - */ 1052 - if (allow_crs) { 1053 - *val = CFG_RD_CRS_VAL; 1054 - return PCIBIOS_SUCCESSFUL; 1055 - } 1056 - *val = 0xffffffff; 1057 - return PCIBIOS_SET_FAILED; 1058 - } 914 + if (advk_pcie_pio_is_running(pcie)) 915 + goto try_crs; 1059 916 1060 917 /* Program the control register */ 1061 918 reg = advk_readl(pcie, PIO_CTRL); ··· 1064 941 /* Program the data strobe */ 1065 942 advk_writel(pcie, 0xf, PIO_WR_DATA_STRB); 1066 943 1067 - /* Clear PIO DONE ISR and start the transfer */ 1068 - advk_writel(pcie, 1, PIO_ISR); 1069 - advk_writel(pcie, 1, PIO_START); 944 + retry_count = 0; 945 + do { 946 + /* Clear PIO DONE ISR and start the transfer */ 947 + advk_writel(pcie, 1, PIO_ISR); 948 + advk_writel(pcie, 1, PIO_START); 1070 949 1071 - ret = advk_pcie_wait_pio(pcie); 1072 - if (ret < 0) { 1073 - /* 1074 - * If it is possible return Completion Retry Status so caller 1075 - * tries to issue the request again instead of failing. 
1076 - */ 1077 - if (allow_crs) { 1078 - *val = CFG_RD_CRS_VAL; 1079 - return PCIBIOS_SUCCESSFUL; 1080 - } 1081 - *val = 0xffffffff; 1082 - return PCIBIOS_SET_FAILED; 1083 - } 950 + ret = advk_pcie_wait_pio(pcie); 951 + if (ret < 0) 952 + goto try_crs; 1084 953 1085 - /* Check PIO status and get the read result */ 1086 - ret = advk_pcie_check_pio_status(pcie, allow_crs, val); 1087 - if (ret < 0) { 1088 - *val = 0xffffffff; 1089 - return PCIBIOS_SET_FAILED; 1090 - } 954 + retry_count += ret; 955 + 956 + /* Check PIO status and get the read result */ 957 + ret = advk_pcie_check_pio_status(pcie, allow_crs, val); 958 + } while (ret == -EAGAIN && retry_count < PIO_RETRY_CNT); 959 + 960 + if (ret < 0) 961 + goto fail; 1091 962 1092 963 if (size == 1) 1093 964 *val = (*val >> (8 * (where & 3))) & 0xff; ··· 1089 972 *val = (*val >> (8 * (where & 3))) & 0xffff; 1090 973 1091 974 return PCIBIOS_SUCCESSFUL; 975 + 976 + try_crs: 977 + /* 978 + * If it is possible, return Completion Retry Status so that caller 979 + * tries to issue the request again instead of failing. 
980 + */ 981 + if (allow_crs) { 982 + *val = CFG_RD_CRS_VAL; 983 + return PCIBIOS_SUCCESSFUL; 984 + } 985 + 986 + fail: 987 + *val = 0xffffffff; 988 + return PCIBIOS_SET_FAILED; 1092 989 } 1093 990 1094 991 static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn, ··· 1111 980 struct advk_pcie *pcie = bus->sysdata; 1112 981 u32 reg; 1113 982 u32 data_strobe = 0x0; 983 + int retry_count; 1114 984 int offset; 1115 985 int ret; 1116 986 ··· 1153 1021 /* Program the data strobe */ 1154 1022 advk_writel(pcie, data_strobe, PIO_WR_DATA_STRB); 1155 1023 1156 - /* Clear PIO DONE ISR and start the transfer */ 1157 - advk_writel(pcie, 1, PIO_ISR); 1158 - advk_writel(pcie, 1, PIO_START); 1024 + retry_count = 0; 1025 + do { 1026 + /* Clear PIO DONE ISR and start the transfer */ 1027 + advk_writel(pcie, 1, PIO_ISR); 1028 + advk_writel(pcie, 1, PIO_START); 1159 1029 1160 - ret = advk_pcie_wait_pio(pcie); 1161 - if (ret < 0) 1162 - return PCIBIOS_SET_FAILED; 1030 + ret = advk_pcie_wait_pio(pcie); 1031 + if (ret < 0) 1032 + return PCIBIOS_SET_FAILED; 1163 1033 1164 - ret = advk_pcie_check_pio_status(pcie, false, NULL); 1165 - if (ret < 0) 1166 - return PCIBIOS_SET_FAILED; 1034 + retry_count += ret; 1167 1035 1168 - return PCIBIOS_SUCCESSFUL; 1036 + ret = advk_pcie_check_pio_status(pcie, false, NULL); 1037 + } while (ret == -EAGAIN && retry_count < PIO_RETRY_CNT); 1038 + 1039 + return ret < 0 ? 
PCIBIOS_SET_FAILED : PCIBIOS_SUCCESSFUL; 1169 1040 } 1170 1041 1171 1042 static struct pci_ops advk_pcie_ops = { ··· 1217 1082 domain->host_data, handle_simple_irq, 1218 1083 NULL, NULL); 1219 1084 1220 - return hwirq; 1085 + return 0; 1221 1086 } 1222 1087 1223 1088 static void advk_msi_irq_domain_free(struct irq_domain *domain, ··· 1398 1263 if (!(BIT(msi_idx) & msi_status)) 1399 1264 continue; 1400 1265 1266 + /* 1267 + * msi_idx contains bits [4:0] of the msi_data and msi_data 1268 + * contains 16bit MSI interrupt number 1269 + */ 1401 1270 advk_writel(pcie, BIT(msi_idx), PCIE_MSI_STATUS_REG); 1402 - msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & 0xFF; 1271 + msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & PCIE_MSI_DATA_MASK; 1403 1272 generic_handle_irq(msi_data); 1404 1273 } 1405 1274 ··· 1424 1285 isr1_val = advk_readl(pcie, PCIE_ISR1_REG); 1425 1286 isr1_mask = advk_readl(pcie, PCIE_ISR1_MASK_REG); 1426 1287 isr1_status = isr1_val & ((~isr1_mask) & PCIE_ISR1_ALL_MASK); 1427 - 1428 - if (!isr0_status && !isr1_status) { 1429 - advk_writel(pcie, isr0_val, PCIE_ISR0_REG); 1430 - advk_writel(pcie, isr1_val, PCIE_ISR1_REG); 1431 - return; 1432 - } 1433 1288 1434 1289 /* Process MSI interrupts */ 1435 1290 if (isr0_status & PCIE_ISR0_MSI_INT_PENDING)
+2 -2
drivers/pci/controller/pci-hyperv.c
··· 3126 3126 3127 3127 if (dom == HVPCI_DOM_INVALID) { 3128 3128 dev_err(&hdev->device, 3129 - "Unable to use dom# 0x%hx or other numbers", dom_req); 3129 + "Unable to use dom# 0x%x or other numbers", dom_req); 3130 3130 ret = -EINVAL; 3131 3131 goto free_bus; 3132 3132 } 3133 3133 3134 3134 if (dom != dom_req) 3135 3135 dev_info(&hdev->device, 3136 - "PCI dom# 0x%hx has collision, using 0x%hx", 3136 + "PCI dom# 0x%x has collision, using 0x%x", 3137 3137 dom_req, dom); 3138 3138 3139 3139 hbus->bridge->domain_nr = dom;
+2 -2
drivers/pci/controller/pci-thunder-ecam.c
··· 17 17 { 18 18 int shift = (where & 3) * 8; 19 19 20 - pr_debug("set_val %04x: %08x\n", (unsigned)(where & ~3), v); 20 + pr_debug("set_val %04x: %08x\n", (unsigned int)(where & ~3), v); 21 21 v >>= shift; 22 22 if (size == 1) 23 23 v &= 0xff; ··· 187 187 188 188 pr_debug("%04x:%04x - Fix pass#: %08x, where: %03x, devfn: %03x\n", 189 189 vendor_device & 0xffff, vendor_device >> 16, class_rev, 190 - (unsigned) where, devfn); 190 + (unsigned int)where, devfn); 191 191 192 192 /* Check for non type-00 header */ 193 193 if (cfg_type == 0) {
+1 -1
drivers/pci/controller/pci-xgene-msi.c
··· 302 302 303 303 /* 304 304 * MSIINTn (n is 0..F) indicates if there is a pending MSI interrupt 305 - * If bit x of this register is set (x is 0..7), one or more interupts 305 + * If bit x of this register is set (x is 0..7), one or more interrupts 306 306 * corresponding to MSInIRx is set. 307 307 */ 308 308 grp_select = xgene_msi_int_read(xgene_msi, msi_grp);
+1 -2
drivers/pci/controller/pci-xgene.c
··· 48 48 #define EN_COHERENCY 0xF0000000 49 49 #define EN_REG 0x00000001 50 50 #define OB_LO_IO 0x00000002 51 - #define XGENE_PCIE_VENDORID 0x10E8 52 51 #define XGENE_PCIE_DEVICEID 0xE004 53 52 #define SZ_1T (SZ_1G*1024ULL) 54 53 #define PIPE_PHY_RATE_RD(src) ((0xc000 & (u32)(src)) >> 0xe) ··· 559 560 xgene_pcie_clear_config(port); 560 561 561 562 /* setup the vendor and device IDs correctly */ 562 - val = (XGENE_PCIE_DEVICEID << 16) | XGENE_PCIE_VENDORID; 563 + val = (XGENE_PCIE_DEVICEID << 16) | PCI_VENDOR_ID_AMCC; 563 564 xgene_pcie_writel(port, BRIDGE_CFG_0, val); 564 565 565 566 ret = xgene_pcie_map_ranges(port);
+824
drivers/pci/controller/pcie-apple.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * PCIe host bridge driver for Apple system-on-chips. 4 + * 5 + * The HW is ECAM compliant, so once the controller is initialized, 6 + * the driver mostly deals MSI mapping and handling of per-port 7 + * interrupts (INTx, management and error signals). 8 + * 9 + * Initialization requires enabling power and clocks, along with a 10 + * number of register pokes. 11 + * 12 + * Copyright (C) 2021 Alyssa Rosenzweig <alyssa@rosenzweig.io> 13 + * Copyright (C) 2021 Google LLC 14 + * Copyright (C) 2021 Corellium LLC 15 + * Copyright (C) 2021 Mark Kettenis <kettenis@openbsd.org> 16 + * 17 + * Author: Alyssa Rosenzweig <alyssa@rosenzweig.io> 18 + * Author: Marc Zyngier <maz@kernel.org> 19 + */ 20 + 21 + #include <linux/gpio/consumer.h> 22 + #include <linux/kernel.h> 23 + #include <linux/iopoll.h> 24 + #include <linux/irqchip/chained_irq.h> 25 + #include <linux/irqdomain.h> 26 + #include <linux/list.h> 27 + #include <linux/module.h> 28 + #include <linux/msi.h> 29 + #include <linux/notifier.h> 30 + #include <linux/of_irq.h> 31 + #include <linux/pci-ecam.h> 32 + 33 + #define CORE_RC_PHYIF_CTL 0x00024 34 + #define CORE_RC_PHYIF_CTL_RUN BIT(0) 35 + #define CORE_RC_PHYIF_STAT 0x00028 36 + #define CORE_RC_PHYIF_STAT_REFCLK BIT(4) 37 + #define CORE_RC_CTL 0x00050 38 + #define CORE_RC_CTL_RUN BIT(0) 39 + #define CORE_RC_STAT 0x00058 40 + #define CORE_RC_STAT_READY BIT(0) 41 + #define CORE_FABRIC_STAT 0x04000 42 + #define CORE_FABRIC_STAT_MASK 0x001F001F 43 + #define CORE_LANE_CFG(port) (0x84000 + 0x4000 * (port)) 44 + #define CORE_LANE_CFG_REFCLK0REQ BIT(0) 45 + #define CORE_LANE_CFG_REFCLK1 BIT(1) 46 + #define CORE_LANE_CFG_REFCLK0ACK BIT(2) 47 + #define CORE_LANE_CFG_REFCLKEN (BIT(9) | BIT(10)) 48 + #define CORE_LANE_CTL(port) (0x84004 + 0x4000 * (port)) 49 + #define CORE_LANE_CTL_CFGACC BIT(15) 50 + 51 + #define PORT_LTSSMCTL 0x00080 52 + #define PORT_LTSSMCTL_START BIT(0) 53 + #define PORT_INTSTAT 0x00100 54 + #define 
+#define PORT_INT_TUNNEL_ERR		31
+#define PORT_INT_CPL_TIMEOUT		23
+#define PORT_INT_RID2SID_MAPERR		22
+#define PORT_INT_CPL_ABORT		21
+#define PORT_INT_MSI_BAD_DATA		19
+#define PORT_INT_MSI_ERR		18
+#define PORT_INT_REQADDR_GT32		17
+#define PORT_INT_AF_TIMEOUT		15
+#define PORT_INT_LINK_DOWN		14
+#define PORT_INT_LINK_UP		12
+#define PORT_INT_LINK_BWMGMT		11
+#define PORT_INT_AER_MASK		(15 << 4)
+#define PORT_INT_PORT_ERR		4
+#define PORT_INT_INTx(i)		i
+#define PORT_INT_INTx_MASK		15
+#define PORT_INTMSK			0x00104
+#define PORT_INTMSKSET			0x00108
+#define PORT_INTMSKCLR			0x0010c
+#define PORT_MSICFG			0x00124
+#define PORT_MSICFG_EN			BIT(0)
+#define PORT_MSICFG_L2MSINUM_SHIFT	4
+#define PORT_MSIBASE			0x00128
+#define PORT_MSIBASE_1_SHIFT		16
+#define PORT_MSIADDR			0x00168
+#define PORT_LINKSTS			0x00208
+#define PORT_LINKSTS_UP			BIT(0)
+#define PORT_LINKSTS_BUSY		BIT(2)
+#define PORT_LINKCMDSTS			0x00210
+#define PORT_OUTS_NPREQS		0x00284
+#define PORT_OUTS_NPREQS_REQ		BIT(24)
+#define PORT_OUTS_NPREQS_CPL		BIT(16)
+#define PORT_RXWR_FIFO			0x00288
+#define PORT_RXWR_FIFO_HDR		GENMASK(15, 10)
+#define PORT_RXWR_FIFO_DATA		GENMASK(9, 0)
+#define PORT_RXRD_FIFO			0x0028C
+#define PORT_RXRD_FIFO_REQ		GENMASK(6, 0)
+#define PORT_OUTS_CPLS			0x00290
+#define PORT_OUTS_CPLS_SHRD		GENMASK(14, 8)
+#define PORT_OUTS_CPLS_WAIT		GENMASK(6, 0)
+#define PORT_APPCLK			0x00800
+#define PORT_APPCLK_EN			BIT(0)
+#define PORT_APPCLK_CGDIS		BIT(8)
+#define PORT_STATUS			0x00804
+#define PORT_STATUS_READY		BIT(0)
+#define PORT_REFCLK			0x00810
+#define PORT_REFCLK_EN			BIT(0)
+#define PORT_REFCLK_CGDIS		BIT(8)
+#define PORT_PERST			0x00814
+#define PORT_PERST_OFF			BIT(0)
+#define PORT_RID2SID(i16)		(0x00828 + 4 * (i16))
+#define PORT_RID2SID_VALID		BIT(31)
+#define PORT_RID2SID_SID_SHIFT		16
+#define PORT_RID2SID_BUS_SHIFT		8
+#define PORT_RID2SID_DEV_SHIFT		3
+#define PORT_RID2SID_FUNC_SHIFT		0
+#define PORT_OUTS_PREQS_HDR		0x00980
+#define PORT_OUTS_PREQS_HDR_MASK	GENMASK(9, 0)
+#define PORT_OUTS_PREQS_DATA		0x00984
+#define PORT_OUTS_PREQS_DATA_MASK	GENMASK(15, 0)
+#define PORT_TUNCTRL			0x00988
+#define PORT_TUNCTRL_PERST_ON		BIT(0)
+#define PORT_TUNCTRL_PERST_ACK_REQ	BIT(1)
+#define PORT_TUNSTAT			0x0098c
+#define PORT_TUNSTAT_PERST_ON		BIT(0)
+#define PORT_TUNSTAT_PERST_ACK_PEND	BIT(1)
+#define PORT_PREFMEM_ENABLE		0x00994
+
+#define MAX_RID2SID			64
+
+/*
+ * The doorbell address is set to 0xfffff000, which by convention
+ * matches what MacOS does, and it is possible to use any other
+ * address (in the bottom 4GB, as the base register is only 32bit).
+ * However, it has to be excluded from the IOVA range, and the DART
+ * driver has to know about it.
+ */
+#define DOORBELL_ADDR		CONFIG_PCIE_APPLE_MSI_DOORBELL_ADDR
+
+struct apple_pcie {
+	struct mutex		lock;
+	struct device		*dev;
+	void __iomem		*base;
+	struct irq_domain	*domain;
+	unsigned long		*bitmap;
+	struct list_head	ports;
+	struct completion	event;
+	struct irq_fwspec	fwspec;
+	u32			nvecs;
+};
+
+struct apple_pcie_port {
+	struct apple_pcie	*pcie;
+	struct device_node	*np;
+	void __iomem		*base;
+	struct irq_domain	*domain;
+	struct list_head	entry;
+	DECLARE_BITMAP(sid_map, MAX_RID2SID);
+	int			sid_map_sz;
+	int			idx;
+};
+
+static void rmw_set(u32 set, void __iomem *addr)
+{
+	writel_relaxed(readl_relaxed(addr) | set, addr);
+}
+
+static void rmw_clear(u32 clr, void __iomem *addr)
+{
+	writel_relaxed(readl_relaxed(addr) & ~clr, addr);
+}
+
+static void apple_msi_top_irq_mask(struct irq_data *d)
+{
+	pci_msi_mask_irq(d);
+	irq_chip_mask_parent(d);
+}
+
+static void apple_msi_top_irq_unmask(struct irq_data *d)
+{
+	pci_msi_unmask_irq(d);
+	irq_chip_unmask_parent(d);
+}
+
+static struct irq_chip apple_msi_top_chip = {
+	.name			= "PCIe MSI",
+	.irq_mask		= apple_msi_top_irq_mask,
+	.irq_unmask		= apple_msi_top_irq_unmask,
+	.irq_eoi		= irq_chip_eoi_parent,
+	.irq_set_affinity	= irq_chip_set_affinity_parent,
+	.irq_set_type		= irq_chip_set_type_parent,
+};
+
+static void apple_msi_compose_msg(struct irq_data *data, struct msi_msg *msg)
+{
+	msg->address_hi = upper_32_bits(DOORBELL_ADDR);
+	msg->address_lo = lower_32_bits(DOORBELL_ADDR);
+	msg->data = data->hwirq;
+}
+
+static struct irq_chip apple_msi_bottom_chip = {
+	.name			= "MSI",
+	.irq_mask		= irq_chip_mask_parent,
+	.irq_unmask		= irq_chip_unmask_parent,
+	.irq_eoi		= irq_chip_eoi_parent,
+	.irq_set_affinity	= irq_chip_set_affinity_parent,
+	.irq_set_type		= irq_chip_set_type_parent,
+	.irq_compose_msi_msg	= apple_msi_compose_msg,
+};
+
+static int apple_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
+				  unsigned int nr_irqs, void *args)
+{
+	struct apple_pcie *pcie = domain->host_data;
+	struct irq_fwspec fwspec = pcie->fwspec;
+	unsigned int i;
+	int ret, hwirq;
+
+	mutex_lock(&pcie->lock);
+
+	hwirq = bitmap_find_free_region(pcie->bitmap, pcie->nvecs,
+					order_base_2(nr_irqs));
+
+	mutex_unlock(&pcie->lock);
+
+	if (hwirq < 0)
+		return -ENOSPC;
+
+	fwspec.param[1] += hwirq;
+
+	ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &fwspec);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < nr_irqs; i++) {
+		irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq + i,
+					      &apple_msi_bottom_chip,
+					      domain->host_data);
+	}
+
+	return 0;
+}
+
+static void apple_msi_domain_free(struct irq_domain *domain, unsigned int virq,
+				  unsigned int nr_irqs)
+{
+	struct irq_data *d = irq_domain_get_irq_data(domain, virq);
+	struct apple_pcie *pcie = domain->host_data;
+
+	mutex_lock(&pcie->lock);
+
+	bitmap_release_region(pcie->bitmap, d->hwirq, order_base_2(nr_irqs));
+
+	mutex_unlock(&pcie->lock);
+}
+
+static const struct irq_domain_ops apple_msi_domain_ops = {
+	.alloc	= apple_msi_domain_alloc,
+	.free	= apple_msi_domain_free,
+};
+
+static struct msi_domain_info apple_msi_info = {
+	.flags	= (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
+		   MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX),
+	.chip	= &apple_msi_top_chip,
+};
+
+static void apple_port_irq_mask(struct irq_data *data)
+{
+	struct apple_pcie_port *port = irq_data_get_irq_chip_data(data);
+
+	writel_relaxed(BIT(data->hwirq), port->base + PORT_INTMSKSET);
+}
+
+static void apple_port_irq_unmask(struct irq_data *data)
+{
+	struct apple_pcie_port *port = irq_data_get_irq_chip_data(data);
+
+	writel_relaxed(BIT(data->hwirq), port->base + PORT_INTMSKCLR);
+}
+
+static bool hwirq_is_intx(unsigned int hwirq)
+{
+	return BIT(hwirq) & PORT_INT_INTx_MASK;
+}
+
+static void apple_port_irq_ack(struct irq_data *data)
+{
+	struct apple_pcie_port *port = irq_data_get_irq_chip_data(data);
+
+	if (!hwirq_is_intx(data->hwirq))
+		writel_relaxed(BIT(data->hwirq), port->base + PORT_INTSTAT);
+}
+
+static int apple_port_irq_set_type(struct irq_data *data, unsigned int type)
+{
+	/*
+	 * It doesn't seem that there is any way to configure the
+	 * trigger, so assume INTx have to be level (as per the spec),
+	 * and the rest is edge (which looks likely).
+	 */
+	if (hwirq_is_intx(data->hwirq) ^ !!(type & IRQ_TYPE_LEVEL_MASK))
+		return -EINVAL;
+
+	irqd_set_trigger_type(data, type);
+	return 0;
+}
+
+static struct irq_chip apple_port_irqchip = {
+	.name		= "PCIe",
+	.irq_ack	= apple_port_irq_ack,
+	.irq_mask	= apple_port_irq_mask,
+	.irq_unmask	= apple_port_irq_unmask,
+	.irq_set_type	= apple_port_irq_set_type,
+};
+
+static int apple_port_irq_domain_alloc(struct irq_domain *domain,
+				       unsigned int virq, unsigned int nr_irqs,
+				       void *args)
+{
+	struct apple_pcie_port *port = domain->host_data;
+	struct irq_fwspec *fwspec = args;
+	int i;
+
+	for (i = 0; i < nr_irqs; i++) {
+		irq_flow_handler_t flow = handle_edge_irq;
+		unsigned int type = IRQ_TYPE_EDGE_RISING;
+
+		if (hwirq_is_intx(fwspec->param[0] + i)) {
+			flow = handle_level_irq;
+			type = IRQ_TYPE_LEVEL_HIGH;
+		}
+
+		irq_domain_set_info(domain, virq + i, fwspec->param[0] + i,
+				    &apple_port_irqchip, port, flow,
+				    NULL, NULL);
+
+		irq_set_irq_type(virq + i, type);
+	}
+
+	return 0;
+}
+
+static void apple_port_irq_domain_free(struct irq_domain *domain,
+				       unsigned int virq, unsigned int nr_irqs)
+{
+	int i;
+
+	for (i = 0; i < nr_irqs; i++) {
+		struct irq_data *d = irq_domain_get_irq_data(domain, virq + i);
+
+		irq_set_handler(virq + i, NULL);
+		irq_domain_reset_irq_data(d);
+	}
+}
+
+static const struct irq_domain_ops apple_port_irq_domain_ops = {
+	.translate	= irq_domain_translate_onecell,
+	.alloc		= apple_port_irq_domain_alloc,
+	.free		= apple_port_irq_domain_free,
+};
+
+static void apple_port_irq_handler(struct irq_desc *desc)
+{
+	struct apple_pcie_port *port = irq_desc_get_handler_data(desc);
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	unsigned long stat;
+	int i;
+
+	chained_irq_enter(chip, desc);
+
+	stat = readl_relaxed(port->base + PORT_INTSTAT);
+
+	for_each_set_bit(i, &stat, 32)
+		generic_handle_domain_irq(port->domain, i);
+
+	chained_irq_exit(chip, desc);
+}
+
+static int apple_pcie_port_setup_irq(struct apple_pcie_port *port)
+{
+	struct fwnode_handle *fwnode = &port->np->fwnode;
+	unsigned int irq;
+
+	/* FIXME: consider moving each interrupt under each port */
+	irq = irq_of_parse_and_map(to_of_node(dev_fwnode(port->pcie->dev)),
+				   port->idx);
+	if (!irq)
+		return -ENXIO;
+
+	port->domain = irq_domain_create_linear(fwnode, 32,
+						&apple_port_irq_domain_ops,
+						port);
+	if (!port->domain)
+		return -ENOMEM;
+
+	/* Disable all interrupts */
+	writel_relaxed(~0, port->base + PORT_INTMSKSET);
+	writel_relaxed(~0, port->base + PORT_INTSTAT);
+
+	irq_set_chained_handler_and_data(irq, apple_port_irq_handler, port);
+
+	/* Configure MSI base address */
+	BUILD_BUG_ON(upper_32_bits(DOORBELL_ADDR));
+	writel_relaxed(lower_32_bits(DOORBELL_ADDR), port->base + PORT_MSIADDR);
+
+	/* Enable MSIs, shared between all ports */
+	writel_relaxed(0, port->base + PORT_MSIBASE);
+	writel_relaxed((ilog2(port->pcie->nvecs) << PORT_MSICFG_L2MSINUM_SHIFT) |
+		       PORT_MSICFG_EN, port->base + PORT_MSICFG);
+
+	return 0;
+}
+
+static irqreturn_t apple_pcie_port_irq(int irq, void *data)
+{
+	struct apple_pcie_port *port = data;
+	unsigned int hwirq = irq_domain_get_irq_data(port->domain, irq)->hwirq;
+
+	switch (hwirq) {
+	case PORT_INT_LINK_UP:
+		dev_info_ratelimited(port->pcie->dev, "Link up on %pOF\n",
+				     port->np);
+		complete_all(&port->pcie->event);
+		break;
+	case PORT_INT_LINK_DOWN:
+		dev_info_ratelimited(port->pcie->dev, "Link down on %pOF\n",
+				     port->np);
+		break;
+	default:
+		return IRQ_NONE;
+	}
+
+	return IRQ_HANDLED;
+}
+
+static int apple_pcie_port_register_irqs(struct apple_pcie_port *port)
+{
+	static struct {
+		unsigned int	hwirq;
+		const char	*name;
+	} port_irqs[] = {
+		{ PORT_INT_LINK_UP,	"Link up", },
+		{ PORT_INT_LINK_DOWN,	"Link down", },
+	};
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(port_irqs); i++) {
+		struct irq_fwspec fwspec = {
+			.fwnode		= &port->np->fwnode,
+			.param_count	= 1,
+			.param		= {
+				[0]	= port_irqs[i].hwirq,
+			},
+		};
+		unsigned int irq;
+		int ret;
+
+		irq = irq_domain_alloc_irqs(port->domain, 1, NUMA_NO_NODE,
+					    &fwspec);
+		if (WARN_ON(!irq))
+			continue;
+
+		ret = request_irq(irq, apple_pcie_port_irq, 0,
+				  port_irqs[i].name, port);
+		WARN_ON(ret);
+	}
+
+	return 0;
+}
+
+static int apple_pcie_setup_refclk(struct apple_pcie *pcie,
+				   struct apple_pcie_port *port)
+{
+	u32 stat;
+	int res;
+
+	res = readl_relaxed_poll_timeout(pcie->base + CORE_RC_PHYIF_STAT, stat,
+					 stat & CORE_RC_PHYIF_STAT_REFCLK,
+					 100, 50000);
+	if (res < 0)
+		return res;
+
+	rmw_set(CORE_LANE_CTL_CFGACC, pcie->base + CORE_LANE_CTL(port->idx));
+	rmw_set(CORE_LANE_CFG_REFCLK0REQ, pcie->base + CORE_LANE_CFG(port->idx));
+
+	res = readl_relaxed_poll_timeout(pcie->base + CORE_LANE_CFG(port->idx),
+					 stat, stat & CORE_LANE_CFG_REFCLK0ACK,
+					 100, 50000);
+	if (res < 0)
+		return res;
+
+	rmw_set(CORE_LANE_CFG_REFCLK1, pcie->base + CORE_LANE_CFG(port->idx));
+	res = readl_relaxed_poll_timeout(pcie->base + CORE_LANE_CFG(port->idx),
+					 stat, stat & CORE_LANE_CFG_REFCLK1,
+					 100, 50000);
+
+	if (res < 0)
+		return res;
+
+	rmw_clear(CORE_LANE_CTL_CFGACC, pcie->base + CORE_LANE_CTL(port->idx));
+
+	rmw_set(CORE_LANE_CFG_REFCLKEN, pcie->base + CORE_LANE_CFG(port->idx));
+	rmw_set(PORT_REFCLK_EN, port->base + PORT_REFCLK);
+
+	return 0;
+}
+
+static u32 apple_pcie_rid2sid_write(struct apple_pcie_port *port,
+				    int idx, u32 val)
+{
+	writel_relaxed(val, port->base + PORT_RID2SID(idx));
+	/* Read back to ensure completion of the write */
+	return readl_relaxed(port->base + PORT_RID2SID(idx));
+}
+
+static int apple_pcie_setup_port(struct apple_pcie *pcie,
+				 struct device_node *np)
+{
+	struct platform_device *platform = to_platform_device(pcie->dev);
+	struct apple_pcie_port *port;
+	struct gpio_desc *reset;
+	u32 stat, idx;
+	int ret, i;
+
+	reset = gpiod_get_from_of_node(np, "reset-gpios", 0,
+				       GPIOD_OUT_LOW, "#PERST");
+	if (IS_ERR(reset))
+		return PTR_ERR(reset);
+
+	port = devm_kzalloc(pcie->dev, sizeof(*port), GFP_KERNEL);
+	if (!port)
+		return -ENOMEM;
+
+	ret = of_property_read_u32_index(np, "reg", 0, &idx);
+	if (ret)
+		return ret;
+
+	/* Use the first reg entry to work out the port index */
+	port->idx = idx >> 11;
+	port->pcie = pcie;
+	port->np = np;
+
+	port->base = devm_platform_ioremap_resource(platform, port->idx + 2);
+	if (IS_ERR(port->base))
+		return PTR_ERR(port->base);
+
+	rmw_set(PORT_APPCLK_EN, port->base + PORT_APPCLK);
+
+	ret = apple_pcie_setup_refclk(pcie, port);
+	if (ret < 0)
+		return ret;
+
+	rmw_set(PORT_PERST_OFF, port->base + PORT_PERST);
+	gpiod_set_value(reset, 1);
+
+	ret = readl_relaxed_poll_timeout(port->base + PORT_STATUS, stat,
+					 stat & PORT_STATUS_READY, 100, 250000);
+	if (ret < 0) {
+		dev_err(pcie->dev, "port %pOF ready wait timeout\n", np);
+		return ret;
+	}
+
+	ret = apple_pcie_port_setup_irq(port);
+	if (ret)
+		return ret;
+
+	/* Reset all RID/SID mappings, and check for RAZ/WI registers */
+	for (i = 0; i < MAX_RID2SID; i++) {
+		if (apple_pcie_rid2sid_write(port, i, 0xbad1d) != 0xbad1d)
+			break;
+		apple_pcie_rid2sid_write(port, i, 0);
+	}
+
+	dev_dbg(pcie->dev, "%pOF: %d RID/SID mapping entries\n", np, i);
+
+	port->sid_map_sz = i;
+
+	list_add_tail(&port->entry, &pcie->ports);
+	init_completion(&pcie->event);
+
+	ret = apple_pcie_port_register_irqs(port);
+	WARN_ON(ret);
+
+	writel_relaxed(PORT_LTSSMCTL_START, port->base + PORT_LTSSMCTL);
+
+	if (!wait_for_completion_timeout(&pcie->event, HZ / 10))
+		dev_warn(pcie->dev, "%pOF link didn't come up\n", np);
+
+	return 0;
+}
+
+static int apple_msi_init(struct apple_pcie *pcie)
+{
+	struct fwnode_handle *fwnode = dev_fwnode(pcie->dev);
+	struct of_phandle_args args = {};
+	struct irq_domain *parent;
+	int ret;
+
+	ret = of_parse_phandle_with_args(to_of_node(fwnode), "msi-ranges",
+					 "#interrupt-cells", 0, &args);
+	if (ret)
+		return ret;
+
+	ret = of_property_read_u32_index(to_of_node(fwnode), "msi-ranges",
+					 args.args_count + 1, &pcie->nvecs);
+	if (ret)
+		return ret;
+
+	of_phandle_args_to_fwspec(args.np, args.args, args.args_count,
+				  &pcie->fwspec);
+
+	pcie->bitmap = devm_bitmap_zalloc(pcie->dev, pcie->nvecs, GFP_KERNEL);
+	if (!pcie->bitmap)
+		return -ENOMEM;
+
+	parent = irq_find_matching_fwspec(&pcie->fwspec, DOMAIN_BUS_WIRED);
+	if (!parent) {
+		dev_err(pcie->dev, "failed to find parent domain\n");
+		return -ENXIO;
+	}
+
+	parent = irq_domain_create_hierarchy(parent, 0, pcie->nvecs, fwnode,
+					     &apple_msi_domain_ops, pcie);
+	if (!parent) {
+		dev_err(pcie->dev, "failed to create IRQ domain\n");
+		return -ENOMEM;
+	}
+	irq_domain_update_bus_token(parent, DOMAIN_BUS_NEXUS);
+
+	pcie->domain = pci_msi_create_irq_domain(fwnode, &apple_msi_info,
+						 parent);
+	if (!pcie->domain) {
+		dev_err(pcie->dev, "failed to create MSI domain\n");
+		irq_domain_remove(parent);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static struct apple_pcie_port *apple_pcie_get_port(struct pci_dev *pdev)
+{
+	struct pci_config_window *cfg = pdev->sysdata;
+	struct apple_pcie *pcie = cfg->priv;
+	struct pci_dev *port_pdev;
+	struct apple_pcie_port *port;
+
+	/* Find the root port this device is on */
+	port_pdev = pcie_find_root_port(pdev);
+
+	/* If finding the port itself, nothing to do */
+	if (WARN_ON(!port_pdev) || pdev == port_pdev)
+		return NULL;
+
+	list_for_each_entry(port, &pcie->ports, entry) {
+		if (port->idx == PCI_SLOT(port_pdev->devfn))
+			return port;
+	}
+
+	return NULL;
+}
+
+static int apple_pcie_add_device(struct apple_pcie_port *port,
+				 struct pci_dev *pdev)
+{
+	u32 sid, rid = PCI_DEVID(pdev->bus->number, pdev->devfn);
+	int idx, err;
+
+	dev_dbg(&pdev->dev, "added to bus %s, index %d\n",
+		pci_name(pdev->bus->self), port->idx);
+
+	err = of_map_id(port->pcie->dev->of_node, rid, "iommu-map",
+			"iommu-map-mask", NULL, &sid);
+	if (err)
+		return err;
+
+	mutex_lock(&port->pcie->lock);
+
+	idx = bitmap_find_free_region(port->sid_map, port->sid_map_sz, 0);
+	if (idx >= 0) {
+		apple_pcie_rid2sid_write(port, idx,
+					 PORT_RID2SID_VALID |
+					 (sid << PORT_RID2SID_SID_SHIFT) | rid);
+
+		dev_dbg(&pdev->dev, "mapping RID%x to SID%x (index %d)\n",
+			rid, sid, idx);
+	}
+
+	mutex_unlock(&port->pcie->lock);
+
+	return idx >= 0 ? 0 : -ENOSPC;
+}
+
+static void apple_pcie_release_device(struct apple_pcie_port *port,
+				      struct pci_dev *pdev)
+{
+	u32 rid = PCI_DEVID(pdev->bus->number, pdev->devfn);
+	int idx;
+
+	mutex_lock(&port->pcie->lock);
+
+	for_each_set_bit(idx, port->sid_map, port->sid_map_sz) {
+		u32 val;
+
+		val = readl_relaxed(port->base + PORT_RID2SID(idx));
+		if ((val & 0xffff) == rid) {
+			apple_pcie_rid2sid_write(port, idx, 0);
+			bitmap_release_region(port->sid_map, idx, 0);
+			dev_dbg(&pdev->dev, "Released %x (%d)\n", val, idx);
+			break;
+		}
+	}
+
+	mutex_unlock(&port->pcie->lock);
+}
+
+static int apple_pcie_bus_notifier(struct notifier_block *nb,
+				   unsigned long action,
+				   void *data)
+{
+	struct device *dev = data;
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct apple_pcie_port *port;
+	int err;
+
+	/*
+	 * This is a bit ugly. We assume that if we get notified for
+	 * any PCI device, we must be in charge of it, and that there
+	 * is no other PCI controller in the whole system. It probably
+	 * holds for now, but who knows for how long?
+	 */
+	port = apple_pcie_get_port(pdev);
+	if (!port)
+		return NOTIFY_DONE;
+
+	switch (action) {
+	case BUS_NOTIFY_ADD_DEVICE:
+		err = apple_pcie_add_device(port, pdev);
+		if (err)
+			return notifier_from_errno(err);
+		break;
+	case BUS_NOTIFY_DEL_DEVICE:
+		apple_pcie_release_device(port, pdev);
+		break;
+	default:
+		return NOTIFY_DONE;
+	}
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block apple_pcie_nb = {
+	.notifier_call = apple_pcie_bus_notifier,
+};
+
+static int apple_pcie_init(struct pci_config_window *cfg)
+{
+	struct device *dev = cfg->parent;
+	struct platform_device *platform = to_platform_device(dev);
+	struct device_node *of_port;
+	struct apple_pcie *pcie;
+	int ret;
+
+	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
+	if (!pcie)
+		return -ENOMEM;
+
+	pcie->dev = dev;
+
+	mutex_init(&pcie->lock);
+
+	pcie->base = devm_platform_ioremap_resource(platform, 1);
+	if (IS_ERR(pcie->base))
+		return PTR_ERR(pcie->base);
+
+	cfg->priv = pcie;
+	INIT_LIST_HEAD(&pcie->ports);
+
+	for_each_child_of_node(dev->of_node, of_port) {
+		ret = apple_pcie_setup_port(pcie, of_port);
+		if (ret) {
+			dev_err(pcie->dev, "Port %pOF setup fail: %d\n", of_port, ret);
+			of_node_put(of_port);
+			return ret;
+		}
+	}
+
+	return apple_msi_init(pcie);
+}
+
+static int apple_pcie_probe(struct platform_device *pdev)
+{
+	int ret;
+
+	ret = bus_register_notifier(&pci_bus_type, &apple_pcie_nb);
+	if (ret)
+		return ret;
+
+	ret = pci_host_common_probe(pdev);
+	if (ret)
+		bus_unregister_notifier(&pci_bus_type, &apple_pcie_nb);
+
+	return ret;
+}
+
+static const struct pci_ecam_ops apple_pcie_cfg_ecam_ops = {
+	.init		= apple_pcie_init,
+	.pci_ops	= {
+		.map_bus	= pci_ecam_map_bus,
+		.read		= pci_generic_config_read,
+		.write		= pci_generic_config_write,
+	}
+};
+
+static const struct of_device_id apple_pcie_of_match[] = {
+	{ .compatible = "apple,pcie", .data = &apple_pcie_cfg_ecam_ops },
+	{ }
+};
+MODULE_DEVICE_TABLE(of, apple_pcie_of_match);
+
+static struct platform_driver apple_pcie_driver = {
+	.probe		= apple_pcie_probe,
+	.driver	= {
+		.name			= "pcie-apple",
+		.of_match_table		= apple_pcie_of_match,
+		.suppress_bind_attrs	= true,
+	},
+};
+module_platform_driver(apple_pcie_driver);
+
+MODULE_LICENSE("GPL v2");
+1 -1
drivers/pci/controller/pcie-brcmstb.c
···
 #define BRCM_INT_PCI_MSI_LEGACY_NR	8
 #define BRCM_INT_PCI_MSI_SHIFT		0

-/* MSI target adresses */
+/* MSI target addresses */
 #define BRCM_MSI_TARGET_ADDR_LT_4GB	0x0fffffffcULL
 #define BRCM_MSI_TARGET_ADDR_GT_4GB	0xffffffffcULL

+1 -1
drivers/pci/controller/pcie-iproc.c
···

 	/*
 	 * To hold the address of the register where the MSI writes are
-	 * programed.  When ARM GICv3 ITS is used, this should be programmed
+	 * programmed.  When ARM GICv3 ITS is used, this should be programmed
 	 * with the address of the GITS_TRANSLATER register.
 	 */
 	IPROC_PCIE_MSI_ADDR_LO,
+1 -4
drivers/pci/controller/pcie-rcar-ep.c
···
  * Author: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
  */

-#include <linux/clk.h>
 #include <linux/delay.h>
 #include <linux/of_address.h>
-#include <linux/of_irq.h>
-#include <linux/of_pci.h>
 #include <linux/of_platform.h>
 #include <linux/pci.h>
 #include <linux/pci-epc.h>
-#include <linux/phy/phy.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>

 #include "pcie-rcar.h"

-2
drivers/pci/controller/pcie-rcar-host.c
···
 #include <linux/msi.h>
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
-#include <linux/of_pci.h>
 #include <linux/of_platform.h>
 #include <linux/pci.h>
 #include <linux/phy/phy.h>
 #include <linux/platform_device.h>
 #include <linux/pm_runtime.h>
-#include <linux/slab.h>

 #include "pcie-rcar.h"

+36 -11
drivers/pci/controller/vmd.c
···

 #include <linux/device.h>
 #include <linux/interrupt.h>
+#include <linux/iommu.h>
 #include <linux/irq.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
···
 #include <linux/rcupdate.h>

 #include <asm/irqdomain.h>
-#include <asm/device.h>
-#include <asm/msi.h>

 #define VMD_CFGBAR	0
 #define VMD_MEMBAR1	2
···
 	 */
 	VMD_FEAT_CAN_BYPASS_MSI_REMAP		= (1 << 4),
 };
+
+static DEFINE_IDA(vmd_instance_ida);

 /*
  * Lock for manipulating VMD IRQ lists.
···
 	struct pci_bus		*bus;
 	u8			busn_start;
 	u8			first_vec;
+	char			*name;
+	int			instance;
 };

 static inline struct vmd_dev *vmd_from_bus(struct pci_bus *bus)
···
 		INIT_LIST_HEAD(&vmd->irqs[i].irq_list);
 		err = devm_request_irq(&dev->dev, pci_irq_vector(dev, i),
 				       vmd_irq, IRQF_NO_THREAD,
-				       "vmd", &vmd->irqs[i]);
+				       vmd->name, &vmd->irqs[i]);
 		if (err)
 			return err;
 	}
···
 	 * acceptable because the guest is usually CPU-limited and MSI
 	 * remapping doesn't become a performance bottleneck.
 	 */
-	if (!(features & VMD_FEAT_CAN_BYPASS_MSI_REMAP) ||
+	if (iommu_capable(vmd->dev->dev.bus, IOMMU_CAP_INTR_REMAP) ||
+	    !(features & VMD_FEAT_CAN_BYPASS_MSI_REMAP) ||
 	    offset[0] || offset[1]) {
 		ret = vmd_alloc_irqs(vmd);
 		if (ret)
···
 		return -ENOMEM;

 	vmd->dev = dev;
+	vmd->instance = ida_simple_get(&vmd_instance_ida, 0, 0, GFP_KERNEL);
+	if (vmd->instance < 0)
+		return vmd->instance;
+
+	vmd->name = kasprintf(GFP_KERNEL, "vmd%d", vmd->instance);
+	if (!vmd->name) {
+		err = -ENOMEM;
+		goto out_release_instance;
+	}
+
 	err = pcim_enable_device(dev);
 	if (err < 0)
-		return err;
+		goto out_release_instance;

 	vmd->cfgbar = pcim_iomap(dev, VMD_CFGBAR, 0);
-	if (!vmd->cfgbar)
-		return -ENOMEM;
+	if (!vmd->cfgbar) {
+		err = -ENOMEM;
+		goto out_release_instance;
+	}

 	pci_set_master(dev);
 	if (dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(64)) &&
-	    dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32)))
-		return -ENODEV;
+	    dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32))) {
+		err = -ENODEV;
+		goto out_release_instance;
+	}

 	if (features & VMD_FEAT_OFFSET_FIRST_VECTOR)
 		vmd->first_vec = 1;
···
 	pci_set_drvdata(dev, vmd);
 	err = vmd_enable_domain(vmd, features);
 	if (err)
-		return err;
+		goto out_release_instance;

 	dev_info(&vmd->dev->dev, "Bound to PCI domain %04x\n",
 		 vmd->sysdata.domain);
 	return 0;
+
+ out_release_instance:
+	ida_simple_remove(&vmd_instance_ida, vmd->instance);
+	kfree(vmd->name);
+	return err;
 }

 static void vmd_cleanup_srcu(struct vmd_dev *vmd)
···
 	vmd_cleanup_srcu(vmd);
 	vmd_detach_resources(vmd);
 	vmd_remove_irq_domain(vmd);
+	ida_simple_remove(&vmd_instance_ida, vmd->instance);
+	kfree(vmd->name);
 }

 #ifdef CONFIG_PM_SLEEP
···
 	for (i = 0; i < vmd->msix_count; i++) {
 		err = devm_request_irq(dev, pci_irq_vector(pdev, i),
 				       vmd_irq, IRQF_NO_THREAD,
-				       "vmd", &vmd->irqs[i]);
+				       vmd->name, &vmd->irqs[i]);
 		if (err)
 			return err;
 	}
+8 -14
drivers/pci/endpoint/functions/pci-epf-ntb.c
···
 	struct config_group *group = to_config_group(item);		\
 	struct epf_ntb *ntb = to_epf_ntb(group);			\
 									\
-	return sprintf(page, "%d\n", ntb->_name);			\
+	return sysfs_emit(page, "%d\n", ntb->_name);			\
 }

 #define EPF_NTB_W(_name)						\
···
 	struct config_group *group = to_config_group(item);		\
 	struct epf_ntb *ntb = to_epf_ntb(group);			\
 	u32 val;							\
-	int ret;							\
 									\
-	ret = kstrtou32(page, 0, &val);					\
-	if (ret)							\
-		return ret;						\
+	if (kstrtou32(page, 0, &val) < 0)				\
+		return -EINVAL;						\
 									\
 	ntb->_name = val;						\
 									\
···
 									\
 	sscanf(#_name, "mw%d", &win_no);				\
 									\
-	return sprintf(page, "%lld\n", ntb->mws_size[win_no - 1]);	\
+	return sysfs_emit(page, "%lld\n", ntb->mws_size[win_no - 1]);	\
 }

 #define EPF_NTB_MW_W(_name)						\
···
 	struct device *dev = &ntb->epf->dev;				\
 	int win_no;							\
 	u64 val;							\
-	int ret;							\
 									\
-	ret = kstrtou64(page, 0, &val);					\
-	if (ret)							\
-		return ret;						\
+	if (kstrtou64(page, 0, &val) < 0)				\
+		return -EINVAL;						\
 									\
 	if (sscanf(#_name, "mw%d", &win_no) != 1)			\
 		return -EINVAL;						\
···
 	struct config_group *group = to_config_group(item);
 	struct epf_ntb *ntb = to_epf_ntb(group);
 	u32 val;
-	int ret;

-	ret = kstrtou32(page, 0, &val);
-	if (ret)
-		return ret;
+	if (kstrtou32(page, 0, &val) < 0)
+		return -EINVAL;

 	if (val > MAX_MW)
 		return -EINVAL;
+18 -30
drivers/pci/endpoint/pci-ep-cfs.c
···

 	epc = epc_group->epc;

-	ret = kstrtobool(page, &start);
-	if (ret)
-		return ret;
+	if (kstrtobool(page, &start) < 0)
+		return -EINVAL;

 	if (!start) {
 		pci_epc_stop(epc);
···

 static ssize_t pci_epc_start_show(struct config_item *item, char *page)
 {
-	return sprintf(page, "%d\n",
-		       to_pci_epc_group(item)->start);
+	return sysfs_emit(page, "%d\n", to_pci_epc_group(item)->start);
 }

 CONFIGFS_ATTR(pci_epc_, start);
···
 	struct pci_epf *epf = to_pci_epf_group(item)->epf;		\
 	if (WARN_ON_ONCE(!epf->header))					\
 		return -EINVAL;						\
-	return sprintf(page, "0x%04x\n", epf->header->_name);		\
+	return sysfs_emit(page, "0x%04x\n", epf->header->_name);	\
 }

 #define PCI_EPF_HEADER_W_u32(_name)					\
···
 				   const char *page, size_t len)	\
 {									\
 	u32 val;							\
-	int ret;							\
 	struct pci_epf *epf = to_pci_epf_group(item)->epf;		\
 	if (WARN_ON_ONCE(!epf->header))					\
 		return -EINVAL;						\
-	ret = kstrtou32(page, 0, &val);					\
-	if (ret)							\
-		return ret;						\
+	if (kstrtou32(page, 0, &val) < 0)				\
+		return -EINVAL;						\
 	epf->header->_name = val;					\
 	return len;							\
 }
···
 				   const char *page, size_t len)	\
 {									\
 	u16 val;							\
-	int ret;							\
 	struct pci_epf *epf = to_pci_epf_group(item)->epf;		\
 	if (WARN_ON_ONCE(!epf->header))					\
 		return -EINVAL;						\
-	ret = kstrtou16(page, 0, &val);					\
-	if (ret)							\
-		return ret;						\
+	if (kstrtou16(page, 0, &val) < 0)				\
+		return -EINVAL;						\
 	epf->header->_name = val;					\
 	return len;							\
 }
···
 				   const char *page, size_t len)	\
 {									\
 	u8 val;								\
-	int ret;							\
 	struct pci_epf *epf = to_pci_epf_group(item)->epf;		\
 	if (WARN_ON_ONCE(!epf->header))					\
 		return -EINVAL;						\
-	ret = kstrtou8(page, 0, &val);					\
-	if (ret)							\
-		return ret;						\
+	if (kstrtou8(page, 0, &val) < 0)				\
+		return -EINVAL;						\
 	epf->header->_name = val;					\
 	return len;							\
 }
···
 					   const char *page, size_t len)
 {
 	u8 val;
-	int ret;

-	ret = kstrtou8(page, 0, &val);
-	if (ret)
-		return ret;
+	if (kstrtou8(page, 0, &val) < 0)
+		return -EINVAL;

 	to_pci_epf_group(item)->epf->msi_interrupts = val;

···
 static ssize_t pci_epf_msi_interrupts_show(struct config_item *item,
 					   char *page)
 {
-	return sprintf(page, "%d\n",
-		       to_pci_epf_group(item)->epf->msi_interrupts);
+	return sysfs_emit(page, "%d\n",
+			  to_pci_epf_group(item)->epf->msi_interrupts);
 }

 static ssize_t pci_epf_msix_interrupts_store(struct config_item *item,
 					     const char *page, size_t len)
 {
 	u16 val;
-	int ret;

-	ret = kstrtou16(page, 0, &val);
-	if (ret)
-		return ret;
+	if (kstrtou16(page, 0, &val) < 0)
+		return -EINVAL;

 	to_pci_epf_group(item)->epf->msix_interrupts = val;

···
 static ssize_t pci_epf_msix_interrupts_show(struct config_item *item,
 					    char *page)
 {
-	return sprintf(page, "%d\n",
-		       to_pci_epf_group(item)->epf->msix_interrupts);
+	return sysfs_emit(page, "%d\n",
+			  to_pci_epf_group(item)->epf->msix_interrupts);
 }

 PCI_EPF_HEADER_R(vendorid)
+1 -1
drivers/pci/endpoint/pci-epc-core.c
···
 /**
  * pci_epc_init_notify() - Notify the EPF device that EPC device's core
  *                         initialization is completed.
- * @epc: the EPC device whose core initialization is completeds
+ * @epc: the EPC device whose core initialization is completed
  *
  * Invoke to Notify the EPF device that the EPC device's initialization
  * is completed.
+2 -2
drivers/pci/endpoint/pci-epf-core.c
···
  *	      be removed
  * @epf_vf: the virtual EP function to be removed
  *
- * Invoke to remove a virtual endpoint function from the physcial endpoint
+ * Invoke to remove a virtual endpoint function from the physical endpoint
  * function.
  */
 void pci_epf_remove_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf)
···
 /**
  * pci_epf_create() - create a new PCI EPF device
  * @name: the name of the PCI EPF device. This name will be used to bind the
- *	  the EPF device to a EPF driver
+ *	  EPF device to a EPF driver
  *
  * Invoke to create a new PCI EPF device by providing the name of the function
  * device.
+1 -1
drivers/pci/hotplug/acpiphp_glue.c
···
  *   when the bridge is scanned and it loses a refcount when the bridge
  *   is removed.
  * - When a P2P bridge is present, we elevate the refcount on the subordinate
- *   bus. It loses the refcount when the the driver unloads.
+ *   bus. It loses the refcount when the driver unloads.
  */

 #define pr_fmt(fmt) "acpiphp_glue: " fmt
+1 -1
drivers/pci/hotplug/cpqphp.h
···
 #define _CPQPHP_H

 #include <linux/interrupt.h>
-#include <asm/io.h>		/* for read? and write? functions */
+#include <linux/io.h>		/* for read? and write? functions */
 #include <linux/delay.h>	/* for delays */
 #include <linux/mutex.h>
 #include <linux/sched/signal.h>	/* for signal_pending() */
+2 -2
drivers/pci/hotplug/cpqphp_ctrl.c
···
  * @head: list to search
  * @size: size of node to find, must be a power of two.
  *
- * Description: This function sorts the resource list by size and then returns
+ * Description: This function sorts the resource list by size and then
  * returns the first node of "size" length that is not in the ISA aliasing
  * window.  If it finds a node larger than "size" it will split it up.
  */
···

 	mdelay(5);

-	/* Reenable interrupts */
+	/* Re-enable interrupts */
 	writel(0, ctrl->hpc_reg + INT_MASK);

 	pci_write_config_byte(ctrl->pci_dev, 0x41, reg);
+4 -2
drivers/pci/hotplug/cpqphp_pci.c
··· 189 189 /* This should only be for x86 as it sets the Edge Level 190 190 * Control Register 191 191 */ 192 - outb((u8) (temp_word & 0xFF), 0x4d0); outb((u8) ((temp_word & 193 - 0xFF00) >> 8), 0x4d1); rc = 0; } 192 + outb((u8)(temp_word & 0xFF), 0x4d0); 193 + outb((u8)((temp_word & 0xFF00) >> 8), 0x4d1); 194 + rc = 0; 195 + } 194 196 195 197 return rc; 196 198 }
+2 -2
drivers/pci/hotplug/ibmphp.h
··· 352 352 u32 len; 353 353 int type; /* MEM, IO, PFMEM */ 354 354 u8 fromMem; /* this is to indicate that the range is from 355 - * from the Memory bucket rather than from PFMem */ 355 + * the Memory bucket rather than from PFMem */ 356 356 struct resource_node *next; 357 357 struct resource_node *nextRange; /* for the other mem range on bus */ 358 358 }; ··· 736 736 737 737 int ibmphp_init_devno(struct slot **); /* This function is called from EBDA, so we need it not be static */ 738 738 int ibmphp_do_disable_slot(struct slot *slot_cur); 739 - int ibmphp_update_slot_info(struct slot *); /* This function is called from HPC, so we need it to not be be static */ 739 + int ibmphp_update_slot_info(struct slot *); /* This function is called from HPC, so we need it to not be static */ 740 740 int ibmphp_configure_card(struct pci_func *, u8); 741 741 int ibmphp_unconfigure_card(struct slot **, int); 742 742 extern const struct hotplug_slot_ops ibmphp_hotplug_slot_ops;
+2
drivers/pci/hotplug/pciehp.h
··· 189 189 int pciehp_set_raw_indicator_status(struct hotplug_slot *h_slot, u8 status); 190 190 int pciehp_get_raw_indicator_status(struct hotplug_slot *h_slot, u8 *status); 191 191 192 + int pciehp_slot_reset(struct pcie_device *dev); 193 + 192 194 static inline const char *slot_name(struct controller *ctrl) 193 195 { 194 196 return hotplug_slot_name(&ctrl->hotplug_slot);
+2
drivers/pci/hotplug/pciehp_core.c
··· 351 351 .runtime_suspend = pciehp_runtime_suspend, 352 352 .runtime_resume = pciehp_runtime_resume, 353 353 #endif /* PM */ 354 + 355 + .slot_reset = pciehp_slot_reset, 354 356 }; 355 357 356 358 int __init pcie_hp_init(void)
+26
drivers/pci/hotplug/pciehp_hpc.c
··· 862 862 pcie_write_cmd(ctrl, 0, mask); 863 863 } 864 864 865 + /** 866 + * pciehp_slot_reset() - ignore link event caused by error-induced hot reset 867 + * @dev: PCI Express port service device 868 + * 869 + * Called from pcie_portdrv_slot_reset() after AER or DPC initiated a reset 870 + * further up in the hierarchy to recover from an error. The reset was 871 + * propagated down to this hotplug port. Ignore the resulting link flap. 872 + * If the link failed to retrain successfully, synthesize the ignored event. 873 + * Surprise removal during reset is detected through Presence Detect Changed. 874 + */ 875 + int pciehp_slot_reset(struct pcie_device *dev) 876 + { 877 + struct controller *ctrl = get_service_data(dev); 878 + 879 + if (ctrl->state != ON_STATE) 880 + return 0; 881 + 882 + pcie_capability_write_word(dev->port, PCI_EXP_SLTSTA, 883 + PCI_EXP_SLTSTA_DLLSC); 884 + 885 + if (!pciehp_check_link_active(ctrl)) 886 + pciehp_request(ctrl, PCI_EXP_SLTSTA_DLLSC); 887 + 888 + return 0; 889 + } 890 + 865 891 /* 866 892 * pciehp has a 1:1 bus:slot relationship so we ultimately want a secondary 867 893 * bus reset of the bridge, but at the same time we want to ensure that it is
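The control flow of the new pciehp_slot_reset() can be modelled in plain userspace C. This is a sketch only; the struct and helper names below are hypothetical stand-ins for the kernel's controller state, Slot Status register access, and pciehp_request():

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the pciehp_slot_reset() decision logic. */
enum slot_state { OFF_STATE, ON_STATE };

struct model_ctrl {
	enum slot_state state;
	bool dllsc_latched;   /* latched Data Link Layer State Changed event */
	bool link_active;     /* link state observed after the reset */
	bool event_requested; /* would pciehp_request() have been called? */
};

/*
 * Mirror of the flow above: ignore the link flap caused by the reset by
 * clearing the latched DLLSC event, but synthesize the event if the link
 * failed to come back up so the slot is still brought down cleanly.
 */
static void model_slot_reset(struct model_ctrl *ctrl)
{
	if (ctrl->state != ON_STATE)
		return;

	ctrl->dllsc_latched = false;          /* clear PCI_EXP_SLTSTA_DLLSC */

	if (!ctrl->link_active)
		ctrl->event_requested = true; /* pciehp_request(ctrl, DLLSC) */
}
```

Surprise removal during the reset is still caught separately via Presence Detect Changed, which this model deliberately leaves out.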
+1 -1
drivers/pci/hotplug/shpchp_hpc.c
··· 295 295 mutex_lock(&slot->ctrl->cmd_lock); 296 296 297 297 if (!shpc_poll_ctrl_busy(ctrl)) { 298 - /* After 1 sec and and the controller is still busy */ 298 + /* After 1 sec and the controller is still busy */ 299 299 ctrl_err(ctrl, "Controller is still busy after 1 sec\n"); 300 300 retval = -EBUSY; 301 301 goto out;
+21 -17
drivers/pci/iov.c
··· 164 164 char *buf) 165 165 { 166 166 struct pci_dev *pdev = to_pci_dev(dev); 167 + struct pci_driver *pdrv; 167 168 u32 vf_total_msix = 0; 168 169 169 170 device_lock(dev); 170 - if (!pdev->driver || !pdev->driver->sriov_get_vf_total_msix) 171 + pdrv = to_pci_driver(dev->driver); 172 + if (!pdrv || !pdrv->sriov_get_vf_total_msix) 171 173 goto unlock; 172 174 173 - vf_total_msix = pdev->driver->sriov_get_vf_total_msix(pdev); 175 + vf_total_msix = pdrv->sriov_get_vf_total_msix(pdev); 174 176 unlock: 175 177 device_unlock(dev); 176 178 return sysfs_emit(buf, "%u\n", vf_total_msix); ··· 185 183 { 186 184 struct pci_dev *vf_dev = to_pci_dev(dev); 187 185 struct pci_dev *pdev = pci_physfn(vf_dev); 188 - int val, ret; 186 + struct pci_driver *pdrv; 187 + int val, ret = 0; 189 188 190 - ret = kstrtoint(buf, 0, &val); 191 - if (ret) 192 - return ret; 189 + if (kstrtoint(buf, 0, &val) < 0) 190 + return -EINVAL; 193 191 194 192 if (val < 0) 195 193 return -EINVAL; 196 194 197 195 device_lock(&pdev->dev); 198 - if (!pdev->driver || !pdev->driver->sriov_set_msix_vec_count) { 196 + pdrv = to_pci_driver(dev->driver); 197 + if (!pdrv || !pdrv->sriov_set_msix_vec_count) { 199 198 ret = -EOPNOTSUPP; 200 199 goto err_pdev; 201 200 } 202 201 203 202 device_lock(&vf_dev->dev); 204 - if (vf_dev->driver) { 203 + if (to_pci_driver(vf_dev->dev.driver)) { 205 204 /* 206 205 * A driver is already attached to this VF and has configured 207 206 * itself based on the current MSI-X vector count. 
Changing ··· 212 209 goto err_dev; 213 210 } 214 211 215 - ret = pdev->driver->sriov_set_msix_vec_count(vf_dev, val); 212 + ret = pdrv->sriov_set_msix_vec_count(vf_dev, val); 216 213 217 214 err_dev: 218 215 device_unlock(&vf_dev->dev); ··· 379 376 const char *buf, size_t count) 380 377 { 381 378 struct pci_dev *pdev = to_pci_dev(dev); 382 - int ret; 379 + struct pci_driver *pdrv; 380 + int ret = 0; 383 381 u16 num_vfs; 384 382 385 - ret = kstrtou16(buf, 0, &num_vfs); 386 - if (ret < 0) 387 - return ret; 383 + if (kstrtou16(buf, 0, &num_vfs) < 0) 384 + return -EINVAL; 388 385 389 386 if (num_vfs > pci_sriov_get_totalvfs(pdev)) 390 387 return -ERANGE; ··· 395 392 goto exit; 396 393 397 394 /* is PF driver loaded */ 398 - if (!pdev->driver) { 395 + pdrv = to_pci_driver(dev->driver); 396 + if (!pdrv) { 399 397 pci_info(pdev, "no driver bound to device; cannot configure SR-IOV\n"); 400 398 ret = -ENOENT; 401 399 goto exit; 402 400 } 403 401 404 402 /* is PF driver loaded w/callback */ 405 - if (!pdev->driver->sriov_configure) { 403 + if (!pdrv->sriov_configure) { 406 404 pci_info(pdev, "driver does not support SR-IOV configuration via sysfs\n"); 407 405 ret = -ENOENT; 408 406 goto exit; ··· 411 407 412 408 if (num_vfs == 0) { 413 409 /* disable VFs */ 414 - ret = pdev->driver->sriov_configure(pdev, 0); 410 + ret = pdrv->sriov_configure(pdev, 0); 415 411 goto exit; 416 412 } 417 413 ··· 423 419 goto exit; 424 420 } 425 421 426 - ret = pdev->driver->sriov_configure(pdev, num_vfs); 422 + ret = pdrv->sriov_configure(pdev, num_vfs); 427 423 if (ret < 0) 428 424 goto exit; 429 425
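The conversion above only works because to_pci_driver() tolerates a NULL device_driver pointer for unbound devices. A userspace sketch of that pattern, with simplified stand-in structs (field names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for struct device_driver / struct pci_driver. */
struct device_driver {
	const char *name;
};

struct pci_driver {
	int flags;                   /* placeholder leading member */
	struct device_driver driver; /* embedded generic driver */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/*
 * NULL-tolerant wrapper: callers can write to_pci_driver(dev->driver)
 * without first checking whether a driver is bound at all.
 */
static struct pci_driver *to_pci_driver(struct device_driver *drv)
{
	return drv ? container_of(drv, struct pci_driver, driver) : NULL;
}
```

A plain container_of() on a NULL pointer would yield a bogus non-NULL pointer (NULL minus the member offset), which is why the explicit NULL check matters once pci_dev->driver is gone.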
+2 -1
drivers/pci/msi.c
··· 582 582 return ret; 583 583 } 584 584 585 - static void __iomem *msix_map_region(struct pci_dev *dev, unsigned nr_entries) 585 + static void __iomem *msix_map_region(struct pci_dev *dev, 586 + unsigned int nr_entries) 586 587 { 587 588 resource_size_t phys_addr; 588 589 u32 table_offset;
+8 -2
drivers/pci/of.c
··· 423 423 */ 424 424 static int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq) 425 425 { 426 - struct device_node *dn, *ppnode; 426 + struct device_node *dn, *ppnode = NULL; 427 427 struct pci_dev *ppdev; 428 428 __be32 laddr[3]; 429 429 u8 pin; ··· 452 452 if (pin == 0) 453 453 return -ENODEV; 454 454 455 + /* Local interrupt-map in the device node? Use it! */ 456 + if (of_get_property(dn, "interrupt-map", NULL)) { 457 + pin = pci_swizzle_interrupt_pin(pdev, pin); 458 + ppnode = dn; 459 + } 460 + 455 461 /* Now we walk up the PCI tree */ 456 - for (;;) { 462 + while (!ppnode) { 457 463 /* Get the pci_dev of our parent */ 458 464 ppdev = pdev->bus->self; 459 465
+4 -4
drivers/pci/p2pdma.c
··· 874 874 int i; 875 875 876 876 for_each_sg(sg, s, nents, i) { 877 - s->dma_address = sg_phys(s) - p2p_pgmap->bus_offset; 877 + s->dma_address = sg_phys(s) + p2p_pgmap->bus_offset; 878 878 sg_dma_len(s) = s->length; 879 879 } 880 880 ··· 943 943 * 944 944 * Parses an attribute value to decide whether to enable p2pdma. 945 945 * The value can select a PCI device (using its full BDF device 946 - * name) or a boolean (in any format strtobool() accepts). A false 946 + * name) or a boolean (in any format kstrtobool() accepts). A false 947 947 * value disables p2pdma, a true value expects the caller 948 948 * to automatically find a compatible device and specifying a PCI device 949 949 * expects the caller to use the specific provider. ··· 975 975 } else if ((page[0] == '0' || page[0] == '1') && !iscntrl(page[1])) { 976 976 /* 977 977 * If the user enters a PCI device that doesn't exist 978 - * like "0000:01:00.1", we don't want strtobool to think 978 + * like "0000:01:00.1", we don't want kstrtobool to think 979 979 * it's a '0' when it's clearly not what the user wanted. 980 980 * So we require 0's and 1's to be exactly one character. 981 981 */ 982 - } else if (!strtobool(page, use_p2pdma)) { 982 + } else if (!kstrtobool(page, use_p2pdma)) { 983 983 return 0; 984 984 } 985 985
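The one-character p2pdma fix (add the bus offset rather than subtract it) can be illustrated with a minimal model. The struct below is hypothetical; it only assumes the offset is defined as bus address minus CPU physical address, so the peer-visible DMA address is phys plus offset:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical model of a P2PDMA provider mapping: bus_offset is
 * (bus address - CPU physical address) and may be negative.
 */
struct p2p_pgmap_model {
	int64_t bus_offset;
};

/* The address a peer device must emit on the bus for a given BAR page. */
static uint64_t p2p_dma_address(const struct p2p_pgmap_model *pgmap,
				uint64_t phys)
{
	return phys + pgmap->bus_offset;	/* add, not subtract */
}
```

With the old subtraction, any provider whose bus aperture sits above its physical aperture would hand peers an address below the BAR, which is exactly the class of bug the fix addresses.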
+13
drivers/pci/pci-bridge-emul.c
··· 431 431 /* Clear the W1C bits */ 432 432 new &= ~((value << shift) & (behavior[reg / 4].w1c & mask)); 433 433 434 + /* Save the new value with the cleared W1C bits into the cfgspace */ 434 435 cfgspace[reg / 4] = cpu_to_le32(new); 436 + 437 + /* 438 + * Clear the W1C bits not specified by the write mask, so that the 439 + * write_op() does not clear them. 440 + */ 441 + new &= ~(behavior[reg / 4].w1c & ~mask); 442 + 443 + /* 444 + * Set the W1C bits specified by the write mask, so that write_op() 445 + * knows that they are to be cleared. 446 + */ 447 + new |= (value << shift) & (behavior[reg / 4].w1c & mask); 435 448 436 449 if (write_op) 437 450 write_op(bridge, reg, old, new, mask);
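The write-1-to-clear semantics the bridge emulation implements can be sketched in isolation. These helpers are hypothetical simplifications (32-bit, no read-only/read-write behavior masks), not the kernel functions themselves:

```c
#include <assert.h>
#include <stdint.h>

/*
 * W1C semantics: writing 1 to a W1C bit clears it in the stored value;
 * writing 0 leaves it untouched. Non-W1C bits are ignored here.
 */
static uint32_t w1c_write(uint32_t stored, uint32_t value, uint32_t w1c_mask)
{
	return stored & ~(value & w1c_mask);
}

/*
 * Value to hand to a lower-level write hook (cf. write_op() above):
 * non-W1C bits carry the new stored value, while W1C bit positions carry
 * exactly what the caller wrote, so the hook knows which events to clear.
 */
static uint32_t w1c_hook_value(uint32_t stored_new, uint32_t value,
			       uint32_t w1c_mask)
{
	return (stored_new & ~w1c_mask) | (value & w1c_mask);
}
```

This mirrors the two-step dance in the hunk: first clear the W1C bits in the cached config space, then rebuild the value passed to write_op() so it reflects the caller's W1C writes rather than the post-clear cache.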
+26 -31
drivers/pci/pci-driver.c
··· 319 319 * its remove routine. 320 320 */ 321 321 pm_runtime_get_sync(dev); 322 - pci_dev->driver = pci_drv; 323 322 rc = pci_drv->probe(pci_dev, ddi->id); 324 323 if (!rc) 325 324 return rc; 326 325 if (rc < 0) { 327 - pci_dev->driver = NULL; 328 326 pm_runtime_put_sync(dev); 329 327 return rc; 330 328 } ··· 388 390 * @pci_dev: PCI device being probed 389 391 * 390 392 * returns 0 on success, else error. 391 - * side-effect: pci_dev->driver is set to drv when drv claims pci_dev. 392 393 */ 393 394 static int __pci_device_probe(struct pci_driver *drv, struct pci_dev *pci_dev) 394 395 { 395 396 const struct pci_device_id *id; 396 397 int error = 0; 397 398 398 - if (!pci_dev->driver && drv->probe) { 399 + if (drv->probe) { 399 400 error = -ENODEV; 400 401 401 402 id = pci_match_device(drv, pci_dev); ··· 454 457 static void pci_device_remove(struct device *dev) 455 458 { 456 459 struct pci_dev *pci_dev = to_pci_dev(dev); 457 - struct pci_driver *drv = pci_dev->driver; 460 + struct pci_driver *drv = to_pci_driver(dev->driver); 458 461 459 - if (drv) { 460 - if (drv->remove) { 461 - pm_runtime_get_sync(dev); 462 - drv->remove(pci_dev); 463 - pm_runtime_put_noidle(dev); 464 - } 465 - pcibios_free_irq(pci_dev); 466 - pci_dev->driver = NULL; 467 - pci_iov_remove(pci_dev); 462 + if (drv->remove) { 463 + pm_runtime_get_sync(dev); 464 + drv->remove(pci_dev); 465 + pm_runtime_put_noidle(dev); 468 466 } 467 + pcibios_free_irq(pci_dev); 468 + pci_iov_remove(pci_dev); 469 469 470 470 /* Undo the runtime PM settings in local_pci_probe() */ 471 471 pm_runtime_put_sync(dev); ··· 489 495 static void pci_device_shutdown(struct device *dev) 490 496 { 491 497 struct pci_dev *pci_dev = to_pci_dev(dev); 492 - struct pci_driver *drv = pci_dev->driver; 498 + struct pci_driver *drv = to_pci_driver(dev->driver); 493 499 494 500 pm_runtime_resume(dev); 495 501 ··· 570 576 { 571 577 int retval; 572 578 573 - /* if the device was enabled before suspend, reenable */ 579 + /* if the device was 
enabled before suspend, re-enable */ 574 580 retval = pci_reenable_device(pci_dev); 575 581 /* 576 582 * if the device was busmaster before the suspend, make it busmaster ··· 585 591 static int pci_legacy_suspend(struct device *dev, pm_message_t state) 586 592 { 587 593 struct pci_dev *pci_dev = to_pci_dev(dev); 588 - struct pci_driver *drv = pci_dev->driver; 594 + struct pci_driver *drv = to_pci_driver(dev->driver); 589 595 590 596 if (drv && drv->suspend) { 591 597 pci_power_t prev = pci_dev->current_state; ··· 626 632 static int pci_legacy_resume(struct device *dev) 627 633 { 628 634 struct pci_dev *pci_dev = to_pci_dev(dev); 629 - struct pci_driver *drv = pci_dev->driver; 635 + struct pci_driver *drv = to_pci_driver(dev->driver); 630 636 631 637 pci_fixup_device(pci_fixup_resume, pci_dev); 632 638 ··· 645 651 646 652 static bool pci_has_legacy_pm_support(struct pci_dev *pci_dev) 647 653 { 648 - struct pci_driver *drv = pci_dev->driver; 654 + struct pci_driver *drv = to_pci_driver(pci_dev->dev.driver); 649 655 bool ret = drv && (drv->suspend || drv->resume); 650 656 651 657 /* ··· 1238 1244 int error; 1239 1245 1240 1246 /* 1241 - * If pci_dev->driver is not set (unbound), we leave the device in D0, 1242 - * but it may go to D3cold when the bridge above it runtime suspends. 1243 - * Save its config space in case that happens. 1247 + * If the device has no driver, we leave it in D0, but it may go to 1248 + * D3cold when the bridge above it runtime suspends. Save its 1249 + * config space in case that happens. 
1244 1250 */ 1245 - if (!pci_dev->driver) { 1251 + if (!to_pci_driver(dev->driver)) { 1246 1252 pci_save_state(pci_dev); 1247 1253 return 0; 1248 1254 } ··· 1299 1305 */ 1300 1306 pci_restore_standard_config(pci_dev); 1301 1307 1302 - if (!pci_dev->driver) 1308 + if (!to_pci_driver(dev->driver)) 1303 1309 return 0; 1304 1310 1305 1311 pci_fixup_device(pci_fixup_resume_early, pci_dev); ··· 1318 1324 1319 1325 static int pci_pm_runtime_idle(struct device *dev) 1320 1326 { 1321 - struct pci_dev *pci_dev = to_pci_dev(dev); 1322 1327 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 1323 1328 1324 1329 /* 1325 - * If pci_dev->driver is not set (unbound), the device should 1326 - * always remain in D0 regardless of the runtime PM status 1330 + * If the device has no driver, it should always remain in D0 1331 + * regardless of the runtime PM status 1327 1332 */ 1328 - if (!pci_dev->driver) 1333 + if (!to_pci_driver(dev->driver)) 1329 1334 return 0; 1330 1335 1331 1336 if (!pm) ··· 1431 1438 */ 1432 1439 struct pci_driver *pci_dev_driver(const struct pci_dev *dev) 1433 1440 { 1434 - if (dev->driver) 1435 - return dev->driver; 1441 + struct pci_driver *drv = to_pci_driver(dev->dev.driver); 1442 + 1443 + if (drv) 1444 + return drv; 1436 1445 else { 1437 1446 int i; 1438 1447 for (i = 0; i <= PCI_ROM_RESOURCE; i++) ··· 1537 1542 return 0; 1538 1543 } 1539 1544 1540 - #if defined(CONFIG_PCIEPORTBUS) || defined(CONFIG_EEH) 1545 + #if defined(CONFIG_PCIEAER) || defined(CONFIG_EEH) 1541 1546 /** 1542 1547 * pci_uevent_ers - emit a uevent during recovery path of PCI device 1543 1548 * @pdev: PCI device undergoing error recovery
+36 -15
drivers/pci/pci-sysfs.c
··· 26 26 #include <linux/slab.h> 27 27 #include <linux/vgaarb.h> 28 28 #include <linux/pm_runtime.h> 29 + #include <linux/msi.h> 29 30 #include <linux/of.h> 30 31 #include "pci.h" 31 32 ··· 50 49 pci_config_attr(subsystem_device, "0x%04x\n"); 51 50 pci_config_attr(revision, "0x%02x\n"); 52 51 pci_config_attr(class, "0x%06x\n"); 53 - pci_config_attr(irq, "%u\n"); 52 + 53 + static ssize_t irq_show(struct device *dev, 54 + struct device_attribute *attr, 55 + char *buf) 56 + { 57 + struct pci_dev *pdev = to_pci_dev(dev); 58 + 59 + #ifdef CONFIG_PCI_MSI 60 + /* 61 + * For MSI, show the first MSI IRQ; for all other cases including 62 + * MSI-X, show the legacy INTx IRQ. 63 + */ 64 + if (pdev->msi_enabled) { 65 + struct msi_desc *desc = first_pci_msi_entry(pdev); 66 + 67 + return sysfs_emit(buf, "%u\n", desc->irq); 68 + } 69 + #endif 70 + 71 + return sysfs_emit(buf, "%u\n", pdev->irq); 72 + } 73 + static DEVICE_ATTR_RO(irq); 54 74 55 75 static ssize_t broken_parity_status_show(struct device *dev, 56 76 struct device_attribute *attr, ··· 297 275 { 298 276 struct pci_dev *pdev = to_pci_dev(dev); 299 277 unsigned long val; 300 - ssize_t result = kstrtoul(buf, 0, &val); 301 - 302 - if (result < 0) 303 - return result; 278 + ssize_t result = 0; 304 279 305 280 /* this can crash the machine when done on the "wrong" device */ 306 281 if (!capable(CAP_SYS_ADMIN)) 307 282 return -EPERM; 283 + 284 + if (kstrtoul(buf, 0, &val) < 0) 285 + return -EINVAL; 308 286 309 287 device_lock(dev); 310 288 if (dev->driver) ··· 336 314 size_t count) 337 315 { 338 316 struct pci_dev *pdev = to_pci_dev(dev); 339 - int node, ret; 317 + int node; 340 318 341 319 if (!capable(CAP_SYS_ADMIN)) 342 320 return -EPERM; 343 321 344 - ret = kstrtoint(buf, 0, &node); 345 - if (ret) 346 - return ret; 322 + if (kstrtoint(buf, 0, &node) < 0) 323 + return -EINVAL; 347 324 348 325 if ((node < 0 && node != NUMA_NO_NODE) || node >= MAX_NUMNODES) 349 326 return -EINVAL; ··· 401 380 struct pci_bus *subordinate = 
pdev->subordinate; 402 381 unsigned long val; 403 382 404 - if (kstrtoul(buf, 0, &val) < 0) 405 - return -EINVAL; 406 - 407 383 if (!capable(CAP_SYS_ADMIN)) 408 384 return -EPERM; 385 + 386 + if (kstrtoul(buf, 0, &val) < 0) 387 + return -EINVAL; 409 388 410 389 /* 411 390 * "no_msi" and "bus_flags" only affect what happens when a driver ··· 1362 1341 { 1363 1342 struct pci_dev *pdev = to_pci_dev(dev); 1364 1343 unsigned long val; 1365 - ssize_t result = kstrtoul(buf, 0, &val); 1344 + ssize_t result; 1366 1345 1367 - if (result < 0) 1368 - return result; 1346 + if (kstrtoul(buf, 0, &val) < 0) 1347 + return -EINVAL; 1369 1348 1370 1349 if (val != 1) 1371 1350 return -EINVAL;
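The selection logic the new irq_show() documents (first MSI vector for MSI, legacy INTx IRQ for everything else including MSI-X) reduces to a small branch. A hedged userspace model, with a hypothetical struct in place of struct pci_dev:

```c
#include <assert.h>

/* Hypothetical model of the fields irq_show() consults. */
struct pci_dev_model {
	int msi_enabled;            /* MSI (not MSI-X) in use */
	unsigned int first_msi_irq; /* IRQ of the first MSI vector */
	unsigned int legacy_irq;    /* INTx IRQ */
};

/* Which number should /sys/bus/pci/devices/.../irq report? */
static unsigned int irq_to_report(const struct pci_dev_model *pdev)
{
	if (pdev->msi_enabled)
		return pdev->first_msi_irq;

	return pdev->legacy_irq; /* INTx and MSI-X cases */
}
```

For MSI-X the vectors live in msi_irqs/, so the legacy value is the sensible fallback, which is exactly what the sysfs ABI note in this series spells out.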
+51 -14
drivers/pci/pci.c
··· 269 269 const char **endptr) 270 270 { 271 271 int ret; 272 - int seg, bus, slot, func; 272 + unsigned int seg, bus, slot, func; 273 273 char *wpath, *p; 274 274 char end; 275 275 ··· 1439 1439 return 0; 1440 1440 } 1441 1441 1442 + void pci_bridge_reconfigure_ltr(struct pci_dev *dev) 1443 + { 1444 + #ifdef CONFIG_PCIEASPM 1445 + struct pci_dev *bridge; 1446 + u32 ctl; 1447 + 1448 + bridge = pci_upstream_bridge(dev); 1449 + if (bridge && bridge->ltr_path) { 1450 + pcie_capability_read_dword(bridge, PCI_EXP_DEVCTL2, &ctl); 1451 + if (!(ctl & PCI_EXP_DEVCTL2_LTR_EN)) { 1452 + pci_dbg(bridge, "re-enabling LTR\n"); 1453 + pcie_capability_set_word(bridge, PCI_EXP_DEVCTL2, 1454 + PCI_EXP_DEVCTL2_LTR_EN); 1455 + } 1456 + } 1457 + #endif 1458 + } 1459 + 1442 1460 static void pci_restore_pcie_state(struct pci_dev *dev) 1443 1461 { 1444 1462 int i = 0; ··· 1466 1448 save_state = pci_find_saved_cap(dev, PCI_CAP_ID_EXP); 1467 1449 if (!save_state) 1468 1450 return; 1451 + 1452 + /* 1453 + * Downstream ports reset the LTR enable bit when link goes down. 1454 + * Check and re-configure the bit here before restoring device. 1455 + * PCIe r5.0, sec 7.5.3.16. 1456 + */ 1457 + pci_bridge_reconfigure_ltr(dev); 1469 1458 1470 1459 cap = (u16 *)&save_state->cap.data[0]; 1471 1460 pcie_capability_write_word(dev, PCI_EXP_DEVCTL, cap[i++]); ··· 2078 2053 EXPORT_SYMBOL(pcim_pin_device); 2079 2054 2080 2055 /* 2081 - * pcibios_add_device - provide arch specific hooks when adding device dev 2056 + * pcibios_device_add - provide arch specific hooks when adding device dev 2082 2057 * @dev: the PCI device being added 2083 2058 * 2084 2059 * Permits the platform to provide architecture specific functionality when 2085 2060 * devices are added. This is the default implementation. Architecture 2086 2061 * implementations can override this. 
2087 2062 */ 2088 - int __weak pcibios_add_device(struct pci_dev *dev) 2063 + int __weak pcibios_device_add(struct pci_dev *dev) 2089 2064 { 2090 2065 return 0; 2091 2066 } ··· 2205 2180 } 2206 2181 EXPORT_SYMBOL_GPL(pci_set_pcie_reset_state); 2207 2182 2183 + #ifdef CONFIG_PCIEAER 2208 2184 void pcie_clear_device_status(struct pci_dev *dev) 2209 2185 { 2210 2186 u16 sta; ··· 2213 2187 pcie_capability_read_word(dev, PCI_EXP_DEVSTA, &sta); 2214 2188 pcie_capability_write_word(dev, PCI_EXP_DEVSTA, sta); 2215 2189 } 2190 + #endif 2216 2191 2217 2192 /** 2218 2193 * pcie_clear_root_pme_status - Clear root port PME interrupt status. ··· 3724 3697 struct pci_dev *bridge; 3725 3698 u32 cap, ctl2; 3726 3699 3700 + /* 3701 + * Per PCIe r5.0, sec 9.3.5.10, the AtomicOp Requester Enable bit 3702 + * in Device Control 2 is reserved in VFs and the PF value applies 3703 + * to all associated VFs. 3704 + */ 3705 + if (dev->is_virtfn) 3706 + return -EINVAL; 3707 + 3727 3708 if (!pci_is_pcie(dev)) 3728 3709 return -EINVAL; 3729 3710 ··· 5103 5068 5104 5069 static void pci_dev_save_and_disable(struct pci_dev *dev) 5105 5070 { 5071 + struct pci_driver *drv = to_pci_driver(dev->dev.driver); 5106 5072 const struct pci_error_handlers *err_handler = 5107 - dev->driver ? dev->driver->err_handler : NULL; 5073 + drv ? drv->err_handler : NULL; 5108 5074 5109 5075 /* 5110 - * dev->driver->err_handler->reset_prepare() is protected against 5111 - * races with ->remove() by the device lock, which must be held by 5112 - * the caller. 5076 + * drv->err_handler->reset_prepare() is protected against races 5077 + * with ->remove() by the device lock, which must be held by the 5078 + * caller. 
5113 5079 */ 5114 5080 if (err_handler && err_handler->reset_prepare) 5115 5081 err_handler->reset_prepare(dev); ··· 5135 5099 5136 5100 static void pci_dev_restore(struct pci_dev *dev) 5137 5101 { 5102 + struct pci_driver *drv = to_pci_driver(dev->dev.driver); 5138 5103 const struct pci_error_handlers *err_handler = 5139 - dev->driver ? dev->driver->err_handler : NULL; 5104 + drv ? drv->err_handler : NULL; 5140 5105 5141 5106 pci_restore_state(dev); 5142 5107 5143 5108 /* 5144 - * dev->driver->err_handler->reset_done() is protected against 5145 - * races with ->remove() by the device lock, which must be held by 5146 - * the caller. 5109 + * drv->err_handler->reset_done() is protected against races with 5110 + * ->remove() by the device lock, which must be held by the caller. 5147 5111 */ 5148 5112 if (err_handler && err_handler->reset_done) 5149 5113 err_handler->reset_done(dev); ··· 5304 5268 */ 5305 5269 int __pci_reset_function_locked(struct pci_dev *dev) 5306 5270 { 5307 - int i, m, rc = -ENOTTY; 5271 + int i, m, rc; 5308 5272 5309 5273 might_sleep(); 5310 5274 ··· 6340 6304 * cannot be left as a userspace activity). DMA aliases should therefore 6341 6305 * be configured via quirks, such as the PCI fixup header quirk. 6342 6306 */ 6343 - void pci_add_dma_alias(struct pci_dev *dev, u8 devfn_from, unsigned nr_devfns) 6307 + void pci_add_dma_alias(struct pci_dev *dev, u8 devfn_from, 6308 + unsigned int nr_devfns) 6344 6309 { 6345 6310 int devfn_to; 6346 6311 6347 - nr_devfns = min(nr_devfns, (unsigned) MAX_NR_DEVFNS - devfn_from); 6312 + nr_devfns = min(nr_devfns, (unsigned int)MAX_NR_DEVFNS - devfn_from); 6348 6313 devfn_to = devfn_from + nr_devfns - 1; 6349 6314 6350 6315 if (!dev->dma_alias_mask)
+1
drivers/pci/pci.h
··· 86 86 bool pci_bridge_d3_possible(struct pci_dev *dev); 87 87 void pci_bridge_d3_update(struct pci_dev *dev); 88 88 void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev); 89 + void pci_bridge_reconfigure_ltr(struct pci_dev *dev); 89 90 90 91 static inline void pci_wakeup_event(struct pci_dev *dev) 91 92 {
+2 -2
drivers/pci/pcie/Makefile
··· 2 2 # 3 3 # Makefile for PCI Express features and port driver 4 4 5 - pcieportdrv-y := portdrv_core.o portdrv_pci.o err.o rcec.o 5 + pcieportdrv-y := portdrv_core.o portdrv_pci.o rcec.o 6 6 7 7 obj-$(CONFIG_PCIEPORTBUS) += pcieportdrv.o 8 8 9 9 obj-$(CONFIG_PCIEASPM) += aspm.o 10 - obj-$(CONFIG_PCIEAER) += aer.o 10 + obj-$(CONFIG_PCIEAER) += aer.o err.o 11 11 obj-$(CONFIG_PCIEAER_INJECT) += aer_inject.o 12 12 obj-$(CONFIG_PCIE_PME) += pme.o 13 13 obj-$(CONFIG_PCIE_DPC) += dpc.o
+1 -1
drivers/pci/pcie/aer.c
··· 57 57 * "as seen by this device". Note that this may mean that if an 58 58 * end point is causing problems, the AER counters may increment 59 59 * at its link partner (e.g. root port) because the errors will be 60 - * "seen" by the link partner and not the the problematic end point 60 + * "seen" by the link partner and not the problematic end point 61 61 * itself (which may report all counters as 0 as it never saw any 62 62 * problems). 63 63 */
+2 -2
drivers/pci/pcie/aspm.c
··· 1219 1219 struct pcie_link_state *link = pcie_aspm_get_link(pdev); 1220 1220 bool state_enable; 1221 1221 1222 - if (strtobool(buf, &state_enable) < 0) 1222 + if (kstrtobool(buf, &state_enable) < 0) 1223 1223 return -EINVAL; 1224 1224 1225 1225 down_read(&pci_bus_sem); ··· 1276 1276 struct pcie_link_state *link = pcie_aspm_get_link(pdev); 1277 1277 bool state_enable; 1278 1278 1279 - if (strtobool(buf, &state_enable) < 0) 1279 + if (kstrtobool(buf, &state_enable) < 0) 1280 1280 return -EINVAL; 1281 1281 1282 1282 down_read(&pci_bus_sem);
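The strtobool() to kstrtobool() rename is mechanical, but the accepted grammar is worth pinning down. A simplified userspace re-implementation (the function name is mine; kstrtobool() itself also has variants not modelled here):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Sketch of kstrtobool()-style parsing: "y"/"Y"/"1" and "on" are true,
 * "n"/"N"/"0" and "off" are false. Returns 0 on success, -1 otherwise.
 */
static int parse_bool(const char *s, bool *res)
{
	if (s == NULL || s[0] == '\0')
		return -1;

	switch (s[0]) {
	case 'y': case 'Y': case '1':
		*res = true;
		return 0;
	case 'n': case 'N': case '0':
		*res = false;
		return 0;
	case 'o': case 'O':
		if (s[1] == 'n' || s[1] == 'N') {
			*res = true;
			return 0;
		}
		if (s[1] == 'f' || s[1] == 'F') {
			*res = false;
			return 0;
		}
		return -1;
	default:
		return -1;
	}
}
```

Note how first-character dispatch explains the p2pdma guard earlier in this series: "0000:01:00.1" starts with '0', so the sysfs handler must reject multi-character strings beginning with '0' or '1' before ever calling the bool parser.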
+24 -16
drivers/pci/pcie/err.c
··· 49 49 pci_channel_state_t state, 50 50 enum pci_ers_result *result) 51 51 { 52 + struct pci_driver *pdrv; 52 53 pci_ers_result_t vote; 53 54 const struct pci_error_handlers *err_handler; 54 55 55 56 device_lock(&dev->dev); 57 + pdrv = to_pci_driver(dev->dev.driver); 56 58 if (!pci_dev_set_io_state(dev, state) || 57 - !dev->driver || 58 - !dev->driver->err_handler || 59 - !dev->driver->err_handler->error_detected) { 59 + !pdrv || 60 + !pdrv->err_handler || 61 + !pdrv->err_handler->error_detected) { 60 62 /* 61 63 * If any device in the subtree does not have an error_detected 62 64 * callback, PCI_ERS_RESULT_NO_AER_DRIVER prevents subsequent ··· 72 70 vote = PCI_ERS_RESULT_NONE; 73 71 } 74 72 } else { 75 - err_handler = dev->driver->err_handler; 73 + err_handler = pdrv->err_handler; 76 74 vote = err_handler->error_detected(dev, state); 77 75 } 78 76 pci_uevent_ers(dev, vote); ··· 93 91 94 92 static int report_mmio_enabled(struct pci_dev *dev, void *data) 95 93 { 94 + struct pci_driver *pdrv; 96 95 pci_ers_result_t vote, *result = data; 97 96 const struct pci_error_handlers *err_handler; 98 97 99 98 device_lock(&dev->dev); 100 - if (!dev->driver || 101 - !dev->driver->err_handler || 102 - !dev->driver->err_handler->mmio_enabled) 99 + pdrv = to_pci_driver(dev->dev.driver); 100 + if (!pdrv || 101 + !pdrv->err_handler || 102 + !pdrv->err_handler->mmio_enabled) 103 103 goto out; 104 104 105 - err_handler = dev->driver->err_handler; 105 + err_handler = pdrv->err_handler; 106 106 vote = err_handler->mmio_enabled(dev); 107 107 *result = merge_result(*result, vote); 108 108 out: ··· 114 110 115 111 static int report_slot_reset(struct pci_dev *dev, void *data) 116 112 { 113 + struct pci_driver *pdrv; 117 114 pci_ers_result_t vote, *result = data; 118 115 const struct pci_error_handlers *err_handler; 119 116 120 117 device_lock(&dev->dev); 121 - if (!dev->driver || 122 - !dev->driver->err_handler || 123 - !dev->driver->err_handler->slot_reset) 118 + pdrv = 
to_pci_driver(dev->dev.driver); 119 + if (!pdrv || 120 + !pdrv->err_handler || 121 + !pdrv->err_handler->slot_reset) 124 122 goto out; 125 123 126 - err_handler = dev->driver->err_handler; 124 + err_handler = pdrv->err_handler; 127 125 vote = err_handler->slot_reset(dev); 128 126 *result = merge_result(*result, vote); 129 127 out: ··· 135 129 136 130 static int report_resume(struct pci_dev *dev, void *data) 137 131 { 132 + struct pci_driver *pdrv; 138 133 const struct pci_error_handlers *err_handler; 139 134 140 135 device_lock(&dev->dev); 136 + pdrv = to_pci_driver(dev->dev.driver); 141 137 if (!pci_dev_set_io_state(dev, pci_channel_io_normal) || 142 - !dev->driver || 143 - !dev->driver->err_handler || 144 - !dev->driver->err_handler->resume) 138 + !pdrv || 139 + !pdrv->err_handler || 140 + !pdrv->err_handler->resume) 145 141 goto out; 146 142 147 - err_handler = dev->driver->err_handler; 143 + err_handler = pdrv->err_handler; 148 144 err_handler->resume(dev); 149 145 out: 150 146 pci_uevent_ers(dev, PCI_ERS_RESULT_RECOVERED);
+2 -4
drivers/pci/pcie/portdrv.h
··· 85 85 int (*runtime_suspend)(struct pcie_device *dev); 86 86 int (*runtime_resume)(struct pcie_device *dev); 87 87 88 - /* Device driver may resume normal operations */ 89 - void (*error_resume)(struct pci_dev *dev); 88 + int (*slot_reset)(struct pcie_device *dev); 90 89 91 90 int port_type; /* Type of the port this driver can handle */ 92 91 u32 service; /* Port service this device represents */ ··· 109 110 110 111 extern struct bus_type pcie_port_bus_type; 111 112 int pcie_port_device_register(struct pci_dev *dev); 113 + int pcie_port_device_iter(struct device *dev, void *data); 112 114 #ifdef CONFIG_PM 113 115 int pcie_port_device_suspend(struct device *dev); 114 116 int pcie_port_device_resume_noirq(struct device *dev); ··· 118 118 int pcie_port_device_runtime_resume(struct device *dev); 119 119 #endif 120 120 void pcie_port_device_remove(struct pci_dev *dev); 121 - int __must_check pcie_port_bus_register(void); 122 - void pcie_port_bus_unregister(void); 123 121 124 122 struct pci_dev; 125 123
+40 -27
drivers/pci/pcie/portdrv_core.c
··· 166 166 { 167 167 int ret, i; 168 168 169 - for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++) 170 - irqs[i] = -1; 171 - 172 169 /* 173 170 * If we support PME but can't use MSI/MSI-X for it, we have to 174 171 * fall back to INTx or other interrupts, e.g., a system shared ··· 314 317 */ 315 318 int pcie_port_device_register(struct pci_dev *dev) 316 319 { 317 - int status, capabilities, i, nr_service; 318 - int irqs[PCIE_PORT_DEVICE_MAXSERVICES]; 320 + int status, capabilities, irq_services, i, nr_service; 321 + int irqs[PCIE_PORT_DEVICE_MAXSERVICES] = { 322 + [0 ... PCIE_PORT_DEVICE_MAXSERVICES-1] = -1 323 + }; 319 324 320 325 /* Enable PCI Express port device */ 321 326 status = pci_enable_device(dev); ··· 330 331 return 0; 331 332 332 333 pci_set_master(dev); 333 - /* 334 - * Initialize service irqs. Don't use service devices that 335 - * require interrupts if there is no way to generate them. 336 - * However, some drivers may have a polling mode (e.g. pciehp_poll_mode) 337 - * that can be used in the absence of irqs. Allow them to determine 338 - * if that is to be used. 339 - */ 340 - status = pcie_init_service_irqs(dev, irqs, capabilities); 341 - if (status) { 342 - capabilities &= PCIE_PORT_SERVICE_HP; 343 - if (!capabilities) 344 - goto error_disable; 334 + 335 + irq_services = 0; 336 + if (IS_ENABLED(CONFIG_PCIE_PME)) 337 + irq_services |= PCIE_PORT_SERVICE_PME; 338 + if (IS_ENABLED(CONFIG_PCIEAER)) 339 + irq_services |= PCIE_PORT_SERVICE_AER; 340 + if (IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE)) 341 + irq_services |= PCIE_PORT_SERVICE_HP; 342 + if (IS_ENABLED(CONFIG_PCIE_DPC)) 343 + irq_services |= PCIE_PORT_SERVICE_DPC; 344 + irq_services &= capabilities; 345 + 346 + if (irq_services) { 347 + /* 348 + * Initialize service IRQs. Don't use service devices that 349 + * require interrupts if there is no way to generate them. 350 + * However, some drivers may have a polling mode (e.g. 351 + * pciehp_poll_mode) that can be used in the absence of IRQs. 
352 + * Allow them to determine if that is to be used. 353 + */ 354 + status = pcie_init_service_irqs(dev, irqs, irq_services); 355 + if (status) { 356 + irq_services &= PCIE_PORT_SERVICE_HP; 357 + if (!irq_services) 358 + goto error_disable; 359 + } 345 360 } 346 361 347 362 /* Allocate child services if any */ ··· 380 367 return status; 381 368 } 382 369 383 - #ifdef CONFIG_PM 384 - typedef int (*pcie_pm_callback_t)(struct pcie_device *); 370 + typedef int (*pcie_callback_t)(struct pcie_device *); 385 371 386 - static int pm_iter(struct device *dev, void *data) 372 + int pcie_port_device_iter(struct device *dev, void *data) 387 373 { 388 374 struct pcie_port_service_driver *service_driver; 389 375 size_t offset = *(size_t *)data; 390 - pcie_pm_callback_t cb; 376 + pcie_callback_t cb; 391 377 392 378 if ((dev->bus == &pcie_port_bus_type) && dev->driver) { 393 379 service_driver = to_service_driver(dev->driver); 394 - cb = *(pcie_pm_callback_t *)((void *)service_driver + offset); 380 + cb = *(pcie_callback_t *)((void *)service_driver + offset); 395 381 if (cb) 396 382 return cb(to_pcie_device(dev)); 397 383 } 398 384 return 0; 399 385 } 400 386 387 + #ifdef CONFIG_PM 401 388 /** 402 389 * pcie_port_device_suspend - suspend port services associated with a PCIe port 403 390 * @dev: PCI Express port to handle ··· 405 392 int pcie_port_device_suspend(struct device *dev) 406 393 { 407 394 size_t off = offsetof(struct pcie_port_service_driver, suspend); 408 - return device_for_each_child(dev, &off, pm_iter); 395 + return device_for_each_child(dev, &off, pcie_port_device_iter); 409 396 } 410 397 411 398 int pcie_port_device_resume_noirq(struct device *dev) 412 399 { 413 400 size_t off = offsetof(struct pcie_port_service_driver, resume_noirq); 414 - return device_for_each_child(dev, &off, pm_iter); 401 + return device_for_each_child(dev, &off, pcie_port_device_iter); 415 402 } 416 403 417 404 /** ··· 421 408 int pcie_port_device_resume(struct device *dev) 422 409 { 423 410 
size_t off = offsetof(struct pcie_port_service_driver, resume); 424 - return device_for_each_child(dev, &off, pm_iter); 411 + return device_for_each_child(dev, &off, pcie_port_device_iter); 425 412 } 426 413 427 414 /** ··· 431 418 int pcie_port_device_runtime_suspend(struct device *dev) 432 419 { 433 420 size_t off = offsetof(struct pcie_port_service_driver, runtime_suspend); 434 - return device_for_each_child(dev, &off, pm_iter); 421 + return device_for_each_child(dev, &off, pcie_port_device_iter); 435 422 } 436 423 437 424 /** ··· 441 428 int pcie_port_device_runtime_resume(struct device *dev) 442 429 { 443 430 size_t off = offsetof(struct pcie_port_service_driver, runtime_resume); 444 - return device_for_each_child(dev, &off, pm_iter); 431 + return device_for_each_child(dev, &off, pcie_port_device_iter); 445 432 } 446 433 #endif /* PM */ 447 434
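The portdrv_core.c refactor above turns the PM-only `pm_iter()` into a general `pcie_port_device_iter()`: callers pass a byte offset into the service-driver ops structure, and the iterator fetches whichever callback lives at that offset. A minimal userspace sketch of that `offsetof()`-dispatch technique (struct and callback names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical ops table standing in for struct pcie_port_service_driver. */
struct service_ops {
    int (*suspend)(int id);
    int (*resume)(int id);
};

static int fake_suspend(int id) { return 100 + id; }
static int fake_resume(int id)  { return 200 + id; }

typedef int (*service_cb_t)(int);

/* Given a byte offset into the ops struct, fetch the callback stored
 * there and invoke it -- one iterator serves suspend, resume,
 * slot_reset, etc., instead of one iterator function per callback. */
static int dispatch_by_offset(struct service_ops *ops, size_t offset, int id)
{
    service_cb_t cb = *(service_cb_t *)((char *)ops + offset);

    if (cb)
        return cb(id);
    return 0;   /* callback not implemented: treat as success */
}
```

Each `pcie_port_device_*` wrapper then reduces to computing `offsetof(...)` for its member and handing it to the shared iterator.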
+3 -24
drivers/pci/pcie/portdrv_pci.c
··· 160 160 161 161 static pci_ers_result_t pcie_portdrv_slot_reset(struct pci_dev *dev) 162 162 { 163 + size_t off = offsetof(struct pcie_port_service_driver, slot_reset); 164 + device_for_each_child(&dev->dev, &off, pcie_port_device_iter); 165 + 163 166 pci_restore_state(dev); 164 167 pci_save_state(dev); 165 168 return PCI_ERS_RESULT_RECOVERED; ··· 171 168 static pci_ers_result_t pcie_portdrv_mmio_enabled(struct pci_dev *dev) 172 169 { 173 170 return PCI_ERS_RESULT_RECOVERED; 174 - } 175 - 176 - static int resume_iter(struct device *device, void *data) 177 - { 178 - struct pcie_device *pcie_device; 179 - struct pcie_port_service_driver *driver; 180 - 181 - if (device->bus == &pcie_port_bus_type && device->driver) { 182 - driver = to_service_driver(device->driver); 183 - if (driver && driver->error_resume) { 184 - pcie_device = to_pcie_device(device); 185 - 186 - /* Forward error message to service drivers */ 187 - driver->error_resume(pcie_device->port); 188 - } 189 - } 190 - 191 - return 0; 192 - } 193 - 194 - static void pcie_portdrv_err_resume(struct pci_dev *dev) 195 - { 196 - device_for_each_child(&dev->dev, NULL, resume_iter); 197 171 } 198 172 199 173 /* ··· 190 210 .error_detected = pcie_portdrv_error_detected, 191 211 .slot_reset = pcie_portdrv_slot_reset, 192 212 .mmio_enabled = pcie_portdrv_mmio_enabled, 193 - .resume = pcie_portdrv_err_resume, 194 213 }; 195 214 196 215 static struct pci_driver pcie_portdriver = {
+49 -13
drivers/pci/probe.c
··· 883 883 static int pci_register_host_bridge(struct pci_host_bridge *bridge) 884 884 { 885 885 struct device *parent = bridge->dev.parent; 886 - struct resource_entry *window, *n; 886 + struct resource_entry *window, *next, *n; 887 887 struct pci_bus *bus, *b; 888 - resource_size_t offset; 888 + resource_size_t offset, next_offset; 889 889 LIST_HEAD(resources); 890 - struct resource *res; 890 + struct resource *res, *next_res; 891 891 char addr[64], *fmt; 892 892 const char *name; 893 893 int err; ··· 970 970 if (nr_node_ids > 1 && pcibus_to_node(bus) == NUMA_NO_NODE) 971 971 dev_warn(&bus->dev, "Unknown NUMA node; performance will be reduced\n"); 972 972 973 - /* Add initial resources to the bus */ 973 + /* Coalesce contiguous windows */ 974 974 resource_list_for_each_entry_safe(window, n, &resources) { 975 - list_move_tail(&window->node, &bridge->windows); 975 + if (list_is_last(&window->node, &resources)) 976 + break; 977 + 978 + next = list_next_entry(window, node); 976 979 offset = window->offset; 977 980 res = window->res; 981 + next_offset = next->offset; 982 + next_res = next->res; 983 + 984 + if (res->flags != next_res->flags || offset != next_offset) 985 + continue; 986 + 987 + if (res->end + 1 == next_res->start) { 988 + next_res->start = res->start; 989 + res->flags = res->start = res->end = 0; 990 + } 991 + } 992 + 993 + /* Add initial resources to the bus */ 994 + resource_list_for_each_entry_safe(window, n, &resources) { 995 + offset = window->offset; 996 + res = window->res; 997 + if (!res->end) 998 + continue; 999 + 1000 + list_move_tail(&window->node, &bridge->windows); 978 1001 979 1002 if (res->flags & IORESOURCE_BUS) 980 1003 pci_bus_insert_busn_res(bus, bus->number, res->end); ··· 2191 2168 * Complex and all intermediate Switches indicate support for LTR. 2192 2169 * PCIe r4.0, sec 6.18. 
2193 2170 */ 2194 - if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT || 2195 - ((bridge = pci_upstream_bridge(dev)) && 2196 - bridge->ltr_path)) { 2171 + if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) { 2172 + pcie_capability_set_word(dev, PCI_EXP_DEVCTL2, 2173 + PCI_EXP_DEVCTL2_LTR_EN); 2174 + dev->ltr_path = 1; 2175 + return; 2176 + } 2177 + 2178 + /* 2179 + * If we're configuring a hot-added device, LTR was likely 2180 + * disabled in the upstream bridge, so re-enable it before enabling 2181 + * it in the new device. 2182 + */ 2183 + bridge = pci_upstream_bridge(dev); 2184 + if (bridge && bridge->ltr_path) { 2185 + pci_bridge_reconfigure_ltr(dev); 2197 2186 pcie_capability_set_word(dev, PCI_EXP_DEVCTL2, 2198 2187 PCI_EXP_DEVCTL2_LTR_EN); 2199 2188 dev->ltr_path = 1; ··· 2485 2450 struct irq_domain *d; 2486 2451 2487 2452 /* 2488 - * If a domain has been set through the pcibios_add_device() 2453 + * If a domain has been set through the pcibios_device_add() 2489 2454 * callback, then this is the one (platform code knows best). 
2490 2455 */ 2491 2456 d = dev_get_msi_domain(&dev->dev); ··· 2553 2518 list_add_tail(&dev->bus_list, &bus->devices); 2554 2519 up_write(&pci_bus_sem); 2555 2520 2556 - ret = pcibios_add_device(dev); 2521 + ret = pcibios_device_add(dev); 2557 2522 WARN_ON(ret < 0); 2558 2523 2559 2524 /* Set up MSI IRQ domain */ ··· 2585 2550 } 2586 2551 EXPORT_SYMBOL(pci_scan_single_device); 2587 2552 2588 - static unsigned next_fn(struct pci_bus *bus, struct pci_dev *dev, unsigned fn) 2553 + static unsigned int next_fn(struct pci_bus *bus, struct pci_dev *dev, 2554 + unsigned int fn) 2589 2555 { 2590 2556 int pos; 2591 2557 u16 cap = 0; 2592 - unsigned next_fn; 2558 + unsigned int next_fn; 2593 2559 2594 2560 if (pci_ari_enabled(bus)) { 2595 2561 if (!dev) ··· 2649 2613 */ 2650 2614 int pci_scan_slot(struct pci_bus *bus, int devfn) 2651 2615 { 2652 - unsigned fn, nr = 0; 2616 + unsigned int fn, nr = 0; 2653 2617 struct pci_dev *dev; 2654 2618 2655 2619 if (only_one_child(bus) && (devfn > 0))
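The probe.c change coalesces contiguous host bridge apertures: two windows with identical flags and offsets whose ranges touch are merged into one, and the absorbed entry is zeroed so the second pass skips it. A self-contained sketch of that merge loop (simplified: plain start/end ranges instead of `struct resource_entry`):

```c
#include <assert.h>
#include <stddef.h>

struct window {
    unsigned long start, end;
    unsigned long flags;
};

/* Fold each window into its successor when flags match and the ranges
 * are contiguous (end + 1 == next start), mirroring the loop added to
 * pci_register_host_bridge(). The absorbed entry is zeroed so a later
 * pass can skip it. Returns the number of live windows remaining. */
static size_t coalesce_windows(struct window *w, size_t n)
{
    size_t i, live = n;

    for (i = 0; i + 1 < n; i++) {
        if (w[i].flags != w[i + 1].flags)
            continue;
        if (w[i].end + 1 != w[i + 1].start)
            continue;
        w[i + 1].start = w[i].start;            /* grow successor down */
        w[i].start = w[i].end = w[i].flags = 0; /* mark entry dead */
        live--;
    }
    return live;
}
```

Merging into the *successor* (not the predecessor) is what lets a whole run of adjacent windows collapse into one in a single forward pass.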
+63 -7
drivers/pci/quirks.c
··· 501 501 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_S3, PCI_DEVICE_ID_S3_868, quirk_s3_64M); 502 502 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_S3, PCI_DEVICE_ID_S3_968, quirk_s3_64M); 503 503 504 - static void quirk_io(struct pci_dev *dev, int pos, unsigned size, 504 + static void quirk_io(struct pci_dev *dev, int pos, unsigned int size, 505 505 const char *name) 506 506 { 507 507 u32 region; ··· 552 552 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CS5536_ISA, quirk_cs5536_vsa); 553 553 554 554 static void quirk_io_region(struct pci_dev *dev, int port, 555 - unsigned size, int nr, const char *name) 555 + unsigned int size, int nr, const char *name) 556 556 { 557 557 u16 region; 558 558 struct pci_bus_region bus_region; ··· 666 666 base = devres & 0xffff; 667 667 size = 16; 668 668 for (;;) { 669 - unsigned bit = size >> 1; 669 + unsigned int bit = size >> 1; 670 670 if ((bit & mask) == bit) 671 671 break; 672 672 size = bit; ··· 692 692 mask = (devres & 0x3f) << 16; 693 693 size = 128 << 16; 694 694 for (;;) { 695 - unsigned bit = size >> 1; 695 + unsigned int bit = size >> 1; 696 696 if ((bit & mask) == bit) 697 697 break; 698 698 size = bit; ··· 806 806 "ICH6 GPIO"); 807 807 } 808 808 809 - static void ich6_lpc_generic_decode(struct pci_dev *dev, unsigned reg, 809 + static void ich6_lpc_generic_decode(struct pci_dev *dev, unsigned int reg, 810 810 const char *name, int dynsize) 811 811 { 812 812 u32 val; ··· 850 850 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_0, quirk_ich6_lpc); 851 851 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, quirk_ich6_lpc); 852 852 853 - static void ich7_lpc_generic_decode(struct pci_dev *dev, unsigned reg, 853 + static void ich7_lpc_generic_decode(struct pci_dev *dev, unsigned int reg, 854 854 const char *name) 855 855 { 856 856 u32 val; ··· 2700 2700 * then the device can't use INTx interrupts. 
Tegra's PCIe root ports don't 2701 2701 * generate MSI interrupts for PME and AER events instead only INTx interrupts 2702 2702 * are generated. Though Tegra's PCIe root ports can generate MSI interrupts 2703 - * for other events, since PCIe specificiation doesn't support using a mix of 2703 + * for other events, since PCIe specification doesn't support using a mix of 2704 2704 * INTx and MSI/MSI-X, it is required to disable MSI interrupts to avoid port 2705 2705 * service drivers registering their respective ISRs for MSIs. 2706 2706 */ ··· 3612 3612 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003c, quirk_no_bus_reset); 3613 3613 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0033, quirk_no_bus_reset); 3614 3614 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0034, quirk_no_bus_reset); 3615 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003e, quirk_no_bus_reset); 3615 3616 3616 3617 /* 3617 3618 * Root port on some Cavium CN8xxx chips do not successfully complete a bus ··· 5796 5795 } 5797 5796 DECLARE_PCI_FIXUP_CLASS_HEADER(0x1ac1, 0x089a, 5798 5797 PCI_CLASS_NOT_DEFINED, 8, apex_pci_fixup_class); 5798 + 5799 + /* 5800 + * Pericom PI7C9X2G404/PI7C9X2G304/PI7C9X2G303 switch erratum E5 - 5801 + * ACS P2P Request Redirect is not functional 5802 + * 5803 + * When ACS P2P Request Redirect is enabled and bandwidth is not balanced 5804 + * between upstream and downstream ports, packets are queued in an internal 5805 + * buffer until CPLD packet. The workaround is to use the switch in store and 5806 + * forward mode. 
5807 + */ 5808 + #define PI7C9X2Gxxx_MODE_REG 0x74 5809 + #define PI7C9X2Gxxx_STORE_FORWARD_MODE BIT(0) 5810 + static void pci_fixup_pericom_acs_store_forward(struct pci_dev *pdev) 5811 + { 5812 + struct pci_dev *upstream; 5813 + u16 val; 5814 + 5815 + /* Downstream ports only */ 5816 + if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM) 5817 + return; 5818 + 5819 + /* Check for ACS P2P Request Redirect use */ 5820 + if (!pdev->acs_cap) 5821 + return; 5822 + pci_read_config_word(pdev, pdev->acs_cap + PCI_ACS_CTRL, &val); 5823 + if (!(val & PCI_ACS_RR)) 5824 + return; 5825 + 5826 + upstream = pci_upstream_bridge(pdev); 5827 + if (!upstream) 5828 + return; 5829 + 5830 + pci_read_config_word(upstream, PI7C9X2Gxxx_MODE_REG, &val); 5831 + if (!(val & PI7C9X2Gxxx_STORE_FORWARD_MODE)) { 5832 + pci_info(upstream, "Setting PI7C9X2Gxxx store-forward mode to avoid ACS erratum\n"); 5833 + pci_write_config_word(upstream, PI7C9X2Gxxx_MODE_REG, val | 5834 + PI7C9X2Gxxx_STORE_FORWARD_MODE); 5835 + } 5836 + } 5837 + /* 5838 + * Apply fixup on enable and on resume, in order to apply the fix up whenever 5839 + * ACS configuration changes or switch mode is reset 5840 + */ 5841 + DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_PERICOM, 0x2404, 5842 + pci_fixup_pericom_acs_store_forward); 5843 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_PERICOM, 0x2404, 5844 + pci_fixup_pericom_acs_store_forward); 5845 + DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_PERICOM, 0x2304, 5846 + pci_fixup_pericom_acs_store_forward); 5847 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_PERICOM, 0x2304, 5848 + pci_fixup_pericom_acs_store_forward); 5849 + DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_PERICOM, 0x2303, 5850 + pci_fixup_pericom_acs_store_forward); 5851 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_PERICOM, 0x2303, 5852 + pci_fixup_pericom_acs_store_forward);
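The Pericom quirk above is a read-modify-write that only issues a config write when the store-forward bit is not already set, which keeps the ENABLE and RESUME fixup hooks idempotent. A tiny model of that pattern (the register is simulated by a plain variable; names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define STORE_FORWARD_MODE 0x1u   /* BIT(0), as in the quirk */

/* Read the mode register, and set the store-forward bit only if it is
 * currently clear. Returns true when a write was actually needed. */
static bool ensure_store_forward(uint16_t *mode_reg)
{
    uint16_t val = *mode_reg;             /* pci_read_config_word() */

    if (val & STORE_FORWARD_MODE)
        return false;                     /* already in the safe mode */
    *mode_reg = val | STORE_FORWARD_MODE; /* pci_write_config_word() */
    return true;
}
```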
+1 -1
drivers/pci/rom.c
··· 85 85 { 86 86 void __iomem *image; 87 87 int last_image; 88 - unsigned length; 88 + unsigned int length; 89 89 90 90 image = rom; 91 91 do {
+1 -1
drivers/pci/setup-bus.c
··· 1525 1525 { 1526 1526 struct pci_dev *dev = bus->self; 1527 1527 struct resource *r; 1528 - unsigned old_flags = 0; 1528 + unsigned int old_flags = 0; 1529 1529 struct resource *b_res; 1530 1530 int idx = 1; 1531 1531
+14 -12
drivers/pci/setup-irq.c
··· 8 8 * David Miller (davem@redhat.com) 9 9 */ 10 10 11 - 12 11 #include <linux/kernel.h> 13 12 #include <linux/pci.h> 14 13 #include <linux/errno.h> ··· 27 28 return; 28 29 } 29 30 30 - /* If this device is not on the primary bus, we need to figure out 31 - which interrupt pin it will come in on. We know which slot it 32 - will come in on 'cos that slot is where the bridge is. Each 33 - time the interrupt line passes through a PCI-PCI bridge we must 34 - apply the swizzle function. */ 35 - 31 + /* 32 + * If this device is not on the primary bus, we need to figure out 33 + * which interrupt pin it will come in on. We know which slot it 34 + * will come in on because that slot is where the bridge is. Each 35 + * time the interrupt line passes through a PCI-PCI bridge we must 36 + * apply the swizzle function. 37 + */ 36 38 pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin); 37 39 /* Cope with illegal. */ 38 40 if (pin > 4) 39 41 pin = 1; 40 42 41 43 if (pin) { 42 - /* Follow the chain of bridges, swizzling as we go. */ 44 + /* Follow the chain of bridges, swizzling as we go. */ 43 45 if (hbrg->swizzle_irq) 44 46 slot = (*(hbrg->swizzle_irq))(dev, &pin); 45 47 46 48 /* 47 - * If a swizzling function is not used map_irq must 48 - * ignore slot 49 + * If a swizzling function is not used, map_irq() must 50 + * ignore slot. 49 51 */ 50 52 irq = (*(hbrg->map_irq))(dev, slot, pin); 51 53 if (irq == -1) ··· 56 56 57 57 pci_dbg(dev, "assign IRQ: got %d\n", dev->irq); 58 58 59 - /* Always tell the device, so the driver knows what is 60 - the real IRQ to use; the device does not use it. */ 59 + /* 60 + * Always tell the device, so the driver knows what is the real IRQ 61 + * to use; the device does not use it. 62 + */ 61 63 pci_write_config_byte(dev, PCI_INTERRUPT_LINE, irq); 62 64 }
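The comment cleaned up above refers to the standard bridge swizzle: each time an INTx line crosses a PCI-PCI bridge, the pin is rotated by the device number. A sketch of that formula (matching the kernel's `pci_swizzle_interrupt_pin()`, with pins numbered 1..4 for INTA..INTD):

```c
#include <assert.h>

/* Rotate an interrupt pin by the slot (device) number as it passes
 * through one PCI-PCI bridge; applied once per bridge on the path up
 * to the host bridge. */
static unsigned char swizzle_pin(unsigned int slot, unsigned char pin)
{
    return (((pin - 1) + slot) % 4) + 1;
}
```

Following the chain of bridges means applying this repeatedly, which is exactly what the `hbrg->swizzle_irq` callback does before `map_irq()` resolves the final IRQ.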
+78 -17
drivers/pci/switch/switchtec.c
··· 45 45 MRPC_QUEUED, 46 46 MRPC_RUNNING, 47 47 MRPC_DONE, 48 + MRPC_IO_ERROR, 48 49 }; 49 50 50 51 struct switchtec_user { ··· 66 65 unsigned char data[SWITCHTEC_MRPC_PAYLOAD_SIZE]; 67 66 int event_cnt; 68 67 }; 68 + 69 + /* 70 + * The MMIO reads to the device_id register should always return the device ID 71 + * of the device, otherwise the firmware is probably stuck or unreachable 72 + * due to a firmware reset which clears PCI state including the BARs and Memory 73 + * Space Enable bits. 74 + */ 75 + static int is_firmware_running(struct switchtec_dev *stdev) 76 + { 77 + u32 device = ioread32(&stdev->mmio_sys_info->device_id); 78 + 79 + return stdev->pdev->device == device; 80 + } 69 81 70 82 static struct switchtec_user *stuser_create(struct switchtec_dev *stdev) 71 83 { ··· 127 113 [MRPC_QUEUED] = "QUEUED", 128 114 [MRPC_RUNNING] = "RUNNING", 129 115 [MRPC_DONE] = "DONE", 116 + [MRPC_IO_ERROR] = "IO_ERROR", 130 117 }; 131 118 132 119 stuser->state = state; ··· 199 184 return 0; 200 185 } 201 186 187 + static void mrpc_cleanup_cmd(struct switchtec_dev *stdev) 188 + { 189 + /* requires the mrpc_mutex to already be held when called */ 190 + 191 + struct switchtec_user *stuser = list_entry(stdev->mrpc_queue.next, 192 + struct switchtec_user, list); 193 + 194 + stuser->cmd_done = true; 195 + wake_up_interruptible(&stuser->cmd_comp); 196 + list_del_init(&stuser->list); 197 + stuser_put(stuser); 198 + stdev->mrpc_busy = 0; 199 + 200 + mrpc_cmd_submit(stdev); 201 + } 202 + 202 203 static void mrpc_complete_cmd(struct switchtec_dev *stdev) 203 204 { 204 205 /* requires the mrpc_mutex to already be held when called */ 206 + 205 207 struct switchtec_user *stuser; 206 208 207 209 if (list_empty(&stdev->mrpc_queue)) ··· 238 206 stuser_set_state(stuser, MRPC_DONE); 239 207 stuser->return_code = 0; 240 208 241 - if (stuser->status != SWITCHTEC_MRPC_STATUS_DONE) 209 + if (stuser->status != SWITCHTEC_MRPC_STATUS_DONE && 210 + stuser->status != SWITCHTEC_MRPC_STATUS_ERROR) 
242 211 goto out; 243 212 244 213 if (stdev->dma_mrpc) ··· 256 223 memcpy_fromio(stuser->data, &stdev->mmio_mrpc->output_data, 257 224 stuser->read_len); 258 225 out: 259 - stuser->cmd_done = true; 260 - wake_up_interruptible(&stuser->cmd_comp); 261 - list_del_init(&stuser->list); 262 - stuser_put(stuser); 263 - stdev->mrpc_busy = 0; 264 - 265 - mrpc_cmd_submit(stdev); 226 + mrpc_cleanup_cmd(stdev); 266 227 } 267 228 268 229 static void mrpc_event_work(struct work_struct *work) ··· 273 246 mutex_unlock(&stdev->mrpc_mutex); 274 247 } 275 248 249 + static void mrpc_error_complete_cmd(struct switchtec_dev *stdev) 250 + { 251 + /* requires the mrpc_mutex to already be held when called */ 252 + 253 + struct switchtec_user *stuser; 254 + 255 + if (list_empty(&stdev->mrpc_queue)) 256 + return; 257 + 258 + stuser = list_entry(stdev->mrpc_queue.next, 259 + struct switchtec_user, list); 260 + 261 + stuser_set_state(stuser, MRPC_IO_ERROR); 262 + 263 + mrpc_cleanup_cmd(stdev); 264 + } 265 + 276 266 static void mrpc_timeout_work(struct work_struct *work) 277 267 { 278 268 struct switchtec_dev *stdev; ··· 300 256 dev_dbg(&stdev->dev, "%s\n", __func__); 301 257 302 258 mutex_lock(&stdev->mrpc_mutex); 259 + 260 + if (!is_firmware_running(stdev)) { 261 + mrpc_error_complete_cmd(stdev); 262 + goto out; 263 + } 303 264 304 265 if (stdev->dma_mrpc) 305 266 status = stdev->dma_mrpc->status; ··· 376 327 return io_string_show(buf, &si->gen4.field, \ 377 328 sizeof(si->gen4.field)); \ 378 329 else \ 379 - return -ENOTSUPP; \ 330 + return -EOPNOTSUPP; \ 380 331 } \ 381 332 \ 382 333 static DEVICE_ATTR_RO(field) ··· 593 544 if (rc) 594 545 return rc; 595 546 547 + if (stuser->state == MRPC_IO_ERROR) { 548 + mutex_unlock(&stdev->mrpc_mutex); 549 + return -EIO; 550 + } 551 + 596 552 if (stuser->state != MRPC_DONE) { 597 553 mutex_unlock(&stdev->mrpc_mutex); 598 554 return -EBADE; ··· 623 569 out: 624 570 mutex_unlock(&stdev->mrpc_mutex); 625 571 626 - if (stuser->status == 
SWITCHTEC_MRPC_STATUS_DONE) 572 + if (stuser->status == SWITCHTEC_MRPC_STATUS_DONE || 573 + stuser->status == SWITCHTEC_MRPC_STATUS_ERROR) 627 574 return size; 628 575 else if (stuser->status == SWITCHTEC_MRPC_STATUS_INTERRUPTED) 629 576 return -ENXIO; ··· 668 613 info.flash_length = ioread32(&fi->gen4.flash_length); 669 614 info.num_partitions = SWITCHTEC_NUM_PARTITIONS_GEN4; 670 615 } else { 671 - return -ENOTSUPP; 616 + return -EOPNOTSUPP; 672 617 } 673 618 674 619 if (copy_to_user(uinfo, &info, sizeof(info))) ··· 876 821 if (ret) 877 822 return ret; 878 823 } else { 879 - return -ENOTSUPP; 824 + return -EOPNOTSUPP; 880 825 } 881 826 882 827 if (copy_to_user(uinfo, &info, sizeof(info))) ··· 1024 969 return PTR_ERR(reg); 1025 970 1026 971 hdr = ioread32(reg); 972 + if (hdr & SWITCHTEC_EVENT_NOT_SUPP) 973 + return -EOPNOTSUPP; 974 + 1027 975 for (i = 0; i < ARRAY_SIZE(ctl->data); i++) 1028 976 ctl->data[i] = ioread32(&reg[i + 1]); 1029 977 ··· 1099 1041 for (ctl.index = 0; ctl.index < nr_idxs; ctl.index++) { 1100 1042 ctl.flags = event_flags; 1101 1043 ret = event_ctl(stdev, &ctl); 1102 - if (ret < 0) 1044 + if (ret < 0 && ret != -EOPNOTSUPP) 1103 1045 return ret; 1104 1046 } 1105 1047 } else { ··· 1136 1078 break; 1137 1079 } 1138 1080 1139 - reg = ioread32(&pcfg->vep_pff_inst_id); 1081 + reg = ioread32(&pcfg->vep_pff_inst_id) & 0xFF; 1140 1082 if (reg == p.pff) { 1141 1083 p.port = SWITCHTEC_IOCTL_PFF_VEP; 1142 1084 break; ··· 1182 1124 p.pff = ioread32(&pcfg->usp_pff_inst_id); 1183 1125 break; 1184 1126 case SWITCHTEC_IOCTL_PFF_VEP: 1185 - p.pff = ioread32(&pcfg->vep_pff_inst_id); 1127 + p.pff = ioread32(&pcfg->vep_pff_inst_id) & 0xFF; 1186 1128 break; 1187 1129 default: 1188 1130 if (p.port > ARRAY_SIZE(pcfg->dsp_pff_inst_id)) ··· 1406 1348 hdr_reg = event_regs[eid].map_reg(stdev, off, idx); 1407 1349 hdr = ioread32(hdr_reg); 1408 1350 1351 + if (hdr & SWITCHTEC_EVENT_NOT_SUPP) 1352 + return 0; 1353 + 1409 1354 if (!(hdr & SWITCHTEC_EVENT_OCCURRED && hdr & 
SWITCHTEC_EVENT_EN_IRQ)) 1410 1355 return 0; 1411 1356 ··· 1559 1498 if (reg < stdev->pff_csr_count) 1560 1499 stdev->pff_local[reg] = 1; 1561 1500 1562 - reg = ioread32(&pcfg->vep_pff_inst_id); 1501 + reg = ioread32(&pcfg->vep_pff_inst_id) & 0xFF; 1563 1502 if (reg < stdev->pff_csr_count) 1564 1503 stdev->pff_local[reg] = 1; 1565 1504 ··· 1617 1556 else if (stdev->gen == SWITCHTEC_GEN4) 1618 1557 part_id = &stdev->mmio_sys_info->gen4.partition_id; 1619 1558 else 1620 - return -ENOTSUPP; 1559 + return -EOPNOTSUPP; 1621 1560 1622 1561 stdev->partition = ioread8(part_id); 1623 1562 stdev->partition_count = ioread8(&stdev->mmio_ntb->partition_count);
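The `is_firmware_running()` check added above relies on a useful MMIO property: once firmware resets and Memory Space Enable is lost, reads of the BAR complete as all ones, so reading back a register with a known value (the device ID) distinguishes a live device from a dead mapping. A minimal model of that decision (names and values illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Compare an MMIO readback of the device-ID register against the
 * expected PCI device ID; an all-ones (or otherwise wrong) value means
 * the firmware is stuck or the mapping is gone. */
static bool firmware_running(uint32_t mmio_device_id, uint16_t expected)
{
    return mmio_device_id == expected;
}
```

This is why the timeout path can now fail queued MRPC commands with `MRPC_IO_ERROR` instead of letting them spin forever.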
+61 -32
drivers/pci/vpd.c
··· 57 57 size_t off = 0, size; 58 58 unsigned char tag, header[1+2]; /* 1 byte tag, 2 bytes length */ 59 59 60 - /* Otherwise the following reads would fail. */ 61 - dev->vpd.len = PCI_VPD_MAX_SIZE; 62 - 63 - while (pci_read_vpd(dev, off, 1, header) == 1) { 60 + while (pci_read_vpd_any(dev, off, 1, header) == 1) { 64 61 size = 0; 65 62 66 63 if (off == 0 && (header[0] == 0x00 || header[0] == 0xff)) ··· 65 68 66 69 if (header[0] & PCI_VPD_LRDT) { 67 70 /* Large Resource Data Type Tag */ 68 - if (pci_read_vpd(dev, off + 1, 2, &header[1]) != 2) { 71 + if (pci_read_vpd_any(dev, off + 1, 2, &header[1]) != 2) { 69 72 pci_warn(dev, "failed VPD read at offset %zu\n", 70 73 off + 1); 71 74 return off ?: PCI_VPD_SZ_INVALID; ··· 96 99 return off ?: PCI_VPD_SZ_INVALID; 97 100 } 98 101 99 - static bool pci_vpd_available(struct pci_dev *dev) 102 + static bool pci_vpd_available(struct pci_dev *dev, bool check_size) 100 103 { 101 104 struct pci_vpd *vpd = &dev->vpd; 102 105 103 106 if (!vpd->cap) 104 107 return false; 105 108 106 - if (vpd->len == 0) { 109 + if (vpd->len == 0 && check_size) { 107 110 vpd->len = pci_vpd_size(dev); 108 111 if (vpd->len == PCI_VPD_SZ_INVALID) { 109 112 vpd->cap = 0; ··· 153 156 } 154 157 155 158 static ssize_t pci_vpd_read(struct pci_dev *dev, loff_t pos, size_t count, 156 - void *arg) 159 + void *arg, bool check_size) 157 160 { 158 161 struct pci_vpd *vpd = &dev->vpd; 162 + unsigned int max_len; 159 163 int ret = 0; 160 164 loff_t end = pos + count; 161 165 u8 *buf = arg; 162 166 163 - if (!pci_vpd_available(dev)) 167 + if (!pci_vpd_available(dev, check_size)) 164 168 return -ENODEV; 165 169 166 170 if (pos < 0) 167 171 return -EINVAL; 168 172 169 - if (pos > vpd->len) 173 + max_len = check_size ? 
vpd->len : PCI_VPD_MAX_SIZE; 174 + 175 + if (pos >= max_len) 170 176 return 0; 171 177 172 - if (end > vpd->len) { 173 - end = vpd->len; 178 + if (end > max_len) { 179 + end = max_len; 174 180 count = end - pos; 175 181 } 176 182 ··· 217 217 } 218 218 219 219 static ssize_t pci_vpd_write(struct pci_dev *dev, loff_t pos, size_t count, 220 - const void *arg) 220 + const void *arg, bool check_size) 221 221 { 222 222 struct pci_vpd *vpd = &dev->vpd; 223 + unsigned int max_len; 223 224 const u8 *buf = arg; 224 225 loff_t end = pos + count; 225 226 int ret = 0; 226 227 227 - if (!pci_vpd_available(dev)) 228 + if (!pci_vpd_available(dev, check_size)) 228 229 return -ENODEV; 229 230 230 231 if (pos < 0 || (pos & 3) || (count & 3)) 231 232 return -EINVAL; 232 233 233 - if (end > vpd->len) 234 + max_len = check_size ? vpd->len : PCI_VPD_MAX_SIZE; 235 + 236 + if (end > max_len) 234 237 return -EINVAL; 235 238 236 239 if (mutex_lock_killable(&vpd->lock)) ··· 316 313 void *buf; 317 314 int cnt; 318 315 319 - if (!pci_vpd_available(dev)) 316 + if (!pci_vpd_available(dev, true)) 320 317 return ERR_PTR(-ENODEV); 321 318 322 319 len = dev->vpd.len; ··· 384 381 return -ENOENT; 385 382 } 386 383 384 + static ssize_t __pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf, 385 + bool check_size) 386 + { 387 + ssize_t ret; 388 + 389 + if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) { 390 + dev = pci_get_func0_dev(dev); 391 + if (!dev) 392 + return -ENODEV; 393 + 394 + ret = pci_vpd_read(dev, pos, count, buf, check_size); 395 + pci_dev_put(dev); 396 + return ret; 397 + } 398 + 399 + return pci_vpd_read(dev, pos, count, buf, check_size); 400 + } 401 + 387 402 /** 388 403 * pci_read_vpd - Read one entry from Vital Product Data 389 404 * @dev: PCI device struct ··· 411 390 */ 412 391 ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf) 413 392 { 393 + return __pci_read_vpd(dev, pos, count, buf, true); 394 + } 395 + EXPORT_SYMBOL(pci_read_vpd); 396 + 
397 + /* Same, but allow to access any address */ 398 + ssize_t pci_read_vpd_any(struct pci_dev *dev, loff_t pos, size_t count, void *buf) 399 + { 400 + return __pci_read_vpd(dev, pos, count, buf, false); 401 + } 402 + EXPORT_SYMBOL(pci_read_vpd_any); 403 + 404 + static ssize_t __pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, 405 + const void *buf, bool check_size) 406 + { 414 407 ssize_t ret; 415 408 416 409 if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) { ··· 432 397 if (!dev) 433 398 return -ENODEV; 434 399 435 - ret = pci_vpd_read(dev, pos, count, buf); 400 + ret = pci_vpd_write(dev, pos, count, buf, check_size); 436 401 pci_dev_put(dev); 437 402 return ret; 438 403 } 439 404 440 - return pci_vpd_read(dev, pos, count, buf); 405 + return pci_vpd_write(dev, pos, count, buf, check_size); 441 406 } 442 - EXPORT_SYMBOL(pci_read_vpd); 443 407 444 408 /** 445 409 * pci_write_vpd - Write entry to Vital Product Data ··· 449 415 */ 450 416 ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void *buf) 451 417 { 452 - ssize_t ret; 453 - 454 - if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) { 455 - dev = pci_get_func0_dev(dev); 456 - if (!dev) 457 - return -ENODEV; 458 - 459 - ret = pci_vpd_write(dev, pos, count, buf); 460 - pci_dev_put(dev); 461 - return ret; 462 - } 463 - 464 - return pci_vpd_write(dev, pos, count, buf); 418 + return __pci_write_vpd(dev, pos, count, buf, true); 465 419 } 466 420 EXPORT_SYMBOL(pci_write_vpd); 421 + 422 + /* Same, but allow to access any address */ 423 + ssize_t pci_write_vpd_any(struct pci_dev *dev, loff_t pos, size_t count, const void *buf) 424 + { 425 + return __pci_write_vpd(dev, pos, count, buf, false); 426 + } 427 + EXPORT_SYMBOL(pci_write_vpd_any); 467 428 468 429 int pci_vpd_find_ro_info_keyword(const void *buf, unsigned int len, 469 430 const char *kw, unsigned int *size)
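The `check_size` flag threaded through vpd.c above changes only the clamp bound: `pci_read_vpd()`/`pci_write_vpd()` clamp against the detected VPD length, while the new `_any()` variants clamp against the full addressable window. A sketch of the read-side clamp (the 32K constant stands in for `PCI_VPD_MAX_SIZE`):

```c
#include <assert.h>
#include <stddef.h>

#define VPD_MAX_SIZE 0x8000u  /* stand-in for PCI_VPD_MAX_SIZE */

/* Mirror the bounds logic in pci_vpd_read(): pick the limit based on
 * check_size, then clamp [pos, pos + count) to it. Returns how many
 * bytes may actually be transferred (0 if pos is past the limit). */
static size_t vpd_clamp(size_t pos, size_t count, size_t detected_len,
                        int check_size)
{
    size_t max_len = check_size ? detected_len : VPD_MAX_SIZE;

    if (pos >= max_len)
        return 0;
    if (pos + count > max_len)
        count = max_len - pos;
    return count;
}
```

This is what lets a driver like cxgb3 reach VPD structures beyond the auto-detected length without the core pre-sizing hack that the series removes.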
+19 -37
drivers/pci/xen-pcifront.c
··· 588 588 struct pcifront_device *pdev, 589 589 pci_channel_state_t state) 590 590 { 591 - pci_ers_result_t result; 592 591 struct pci_driver *pdrv; 593 592 int bus = pdev->sh_info->aer_op.bus; 594 593 int devfn = pdev->sh_info->aer_op.devfn; 595 594 int domain = pdev->sh_info->aer_op.domain; 596 595 struct pci_dev *pcidev; 597 - int flag = 0; 598 596 599 597 dev_dbg(&pdev->xdev->dev, 600 598 "pcifront AER process: cmd %x (bus:%x, devfn%x)", 601 599 cmd, bus, devfn); 602 - result = PCI_ERS_RESULT_NONE; 603 600 604 601 pcidev = pci_get_domain_bus_and_slot(domain, bus, devfn); 605 - if (!pcidev || !pcidev->driver) { 602 + if (!pcidev || !pcidev->dev.driver) { 606 603 dev_err(&pdev->xdev->dev, "device or AER driver is NULL\n"); 607 604 pci_dev_put(pcidev); 608 - return result; 605 + return PCI_ERS_RESULT_NONE; 609 606 } 610 - pdrv = pcidev->driver; 607 + pdrv = to_pci_driver(pcidev->dev.driver); 611 608 612 - if (pdrv) { 613 - if (pdrv->err_handler && pdrv->err_handler->error_detected) { 614 - pci_dbg(pcidev, "trying to call AER service\n"); 615 - if (pcidev) { 616 - flag = 1; 617 - switch (cmd) { 618 - case XEN_PCI_OP_aer_detected: 619 - result = pdrv->err_handler-> 620 - error_detected(pcidev, state); 621 - break; 622 - case XEN_PCI_OP_aer_mmio: 623 - result = pdrv->err_handler-> 624 - mmio_enabled(pcidev); 625 - break; 626 - case XEN_PCI_OP_aer_slotreset: 627 - result = pdrv->err_handler-> 628 - slot_reset(pcidev); 629 - break; 630 - case XEN_PCI_OP_aer_resume: 631 - pdrv->err_handler->resume(pcidev); 632 - break; 633 - default: 634 - dev_err(&pdev->xdev->dev, 635 - "bad request in aer recovery " 636 - "operation!\n"); 637 - 638 - } 639 - } 609 + if (pdrv->err_handler && pdrv->err_handler->error_detected) { 610 + pci_dbg(pcidev, "trying to call AER service\n"); 611 + switch (cmd) { 612 + case XEN_PCI_OP_aer_detected: 613 + return pdrv->err_handler->error_detected(pcidev, state); 614 + case XEN_PCI_OP_aer_mmio: 615 + return pdrv->err_handler->mmio_enabled(pcidev); 
616 + case XEN_PCI_OP_aer_slotreset: 617 + return pdrv->err_handler->slot_reset(pcidev); 618 + case XEN_PCI_OP_aer_resume: 619 + pdrv->err_handler->resume(pcidev); 620 + return PCI_ERS_RESULT_NONE; 621 + default: 622 + dev_err(&pdev->xdev->dev, 623 + "bad request in aer recovery operation!\n"); 640 624 } 641 625 } 642 - if (!flag) 643 - result = PCI_ERS_RESULT_NONE; 644 626 645 - return result; 627 + return PCI_ERS_RESULT_NONE; 646 628 } 647 629 648 630
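The xen-pcifront change above is part of the series-wide removal of the redundant `struct pci_dev.driver` pointer: callers now recover the `struct pci_driver` from the generic `struct device_driver` via `to_pci_driver()`, which is a `container_of()` lookup. A minimal model of that pattern:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified container_of(): step back from a member pointer to the
 * enclosing structure. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct device_driver { const char *name; };

struct pci_driver {
    int id;                        /* illustrative payload */
    struct device_driver driver;   /* embedded generic driver */
};

static struct pci_driver *to_pci_driver(struct device_driver *drv)
{
    return container_of(drv, struct pci_driver, driver);
}
```

Because the generic pointer plus `container_of()` recovers the PCI driver exactly, keeping a second cached `pci_dev.driver` pointer bought nothing, hence its removal.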
+1 -5
drivers/ssb/pcihost_wrapper.c
··· 69 69 { 70 70 struct ssb_bus *ssb; 71 71 int err = -ENOMEM; 72 - const char *name; 73 72 u32 val; 74 73 75 74 ssb = kzalloc(sizeof(*ssb), GFP_KERNEL); ··· 77 78 err = pci_enable_device(dev); 78 79 if (err) 79 80 goto err_kfree_ssb; 80 - name = dev_name(&dev->dev); 81 - if (dev->driver && dev->driver->name) 82 - name = dev->driver->name; 83 - err = pci_request_regions(dev, name); 81 + err = pci_request_regions(dev, dev_driver_string(&dev->dev)); 84 82 if (err) 85 83 goto err_pci_disable; 86 84 pci_set_master(dev);
-2
drivers/staging/Kconfig
··· 86 86 87 87 source "drivers/staging/pi433/Kconfig" 88 88 89 - source "drivers/staging/mt7621-pci/Kconfig" 90 - 91 89 source "drivers/staging/mt7621-dma/Kconfig" 92 90 93 91 source "drivers/staging/ralink-gdma/Kconfig"
-1
drivers/staging/Makefile
··· 33 33 obj-$(CONFIG_GREYBUS) += greybus/ 34 34 obj-$(CONFIG_BCM2835_VCHIQ) += vc04_services/ 35 35 obj-$(CONFIG_PI433) += pi433/ 36 - obj-$(CONFIG_PCI_MT7621) += mt7621-pci/ 37 36 obj-$(CONFIG_SOC_MT7621) += mt7621-dma/ 38 37 obj-$(CONFIG_DMA_RALINK) += ralink-gdma/ 39 38 obj-$(CONFIG_SOC_MT7621) += mt7621-dts/
-8
drivers/staging/mt7621-pci/Kconfig
··· 1 - # SPDX-License-Identifier: GPL-2.0 2 - config PCI_MT7621 3 - tristate "MediaTek MT7621 PCI Controller" 4 - depends on RALINK 5 - select PCI_DRIVERS_GENERIC 6 - help 7 - This selects a driver for the MediaTek MT7621 PCI Controller. 8 -
-2
drivers/staging/mt7621-pci/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0 2 - obj-$(CONFIG_PCI_MT7621) += pci-mt7621.o
-4
drivers/staging/mt7621-pci/TODO
··· 1 - 2 - - general code review and cleanup 3 - 4 - Cc: NeilBrown <neil@brown.name>
-104
drivers/staging/mt7621-pci/mediatek,mt7621-pci.txt
-MediaTek MT7621 PCIe controller
-
-Required properties:
-- compatible: "mediatek,mt7621-pci"
-- device_type: Must be "pci"
-- reg: Base addresses and lengths of the PCIe subsys and root ports.
-- bus-range: Range of bus numbers associated with this controller.
-- #address-cells: Address representation for root ports (must be 3)
-- pinctrl-names : The pin control state names.
-- pinctrl-0: The "default" pinctrl state.
-- #size-cells: Size representation for root ports (must be 2)
-- ranges: Ranges for the PCI memory and I/O regions.
-- #interrupt-cells: Must be 1
-- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties.
-  Please refer to the standard PCI bus binding document for a more detailed
-  explanation.
-- status: either "disabled" or "okay".
-- resets: Must contain an entry for each entry in reset-names.
-  See ../reset/reset.txt for details.
-- reset-names: Must be "pcie0", "pcie1", "pcieN"... based on the number of
-  root ports.
-- clocks: Must contain an entry for each entry in clock-names.
-  See ../clocks/clock-bindings.txt for details.
-- clock-names: Must be "pcie0", "pcie1", "pcieN"... based on the number of
-  root ports.
-- reset-gpios: GPIO specs for the reset pins.
-
-In addition, the device tree node must have sub-nodes describing each PCIe port
-interface, having the following mandatory properties:
-
-Required properties:
-- reg: Only the first four bytes are used to refer to the correct bus number
-  and device number.
-- #address-cells: Must be 3
-- #size-cells: Must be 2
-- ranges: Sub-ranges distributed from the PCIe controller node. An empty
-  property is sufficient.
-- bus-range: Range of bus numbers associated with this port.
-
-Example for MT7621:
-
-	pcie: pcie@1e140000 {
-		compatible = "mediatek,mt7621-pci";
-		reg = <0x1e140000 0x100		/* host-pci bridge registers */
-		       0x1e142000 0x100		/* pcie port 0 RC control registers */
-		       0x1e143000 0x100		/* pcie port 1 RC control registers */
-		       0x1e144000 0x100>;	/* pcie port 2 RC control registers */
-
-		#address-cells = <3>;
-		#size-cells = <2>;
-
-		pinctrl-names = "default";
-		pinctrl-0 = <&pcie_pins>;
-
-		device_type = "pci";
-
-		bus-range = <0 255>;
-		ranges = <
-			0x02000000 0 0x00000000 0x60000000 0 0x10000000 /* pci memory */
-			0x01000000 0 0x00000000 0x1e160000 0 0x00010000 /* io space */
-		>;
-
-		#interrupt-cells = <1>;
-		interrupt-map-mask = <0xF0000 0 0 1>;
-		interrupt-map = <0x10000 0 0 1 &gic GIC_SHARED 4 IRQ_TYPE_LEVEL_HIGH>,
-				<0x20000 0 0 1 &gic GIC_SHARED 24 IRQ_TYPE_LEVEL_HIGH>,
-				<0x30000 0 0 1 &gic GIC_SHARED 25 IRQ_TYPE_LEVEL_HIGH>;
-
-		status = "disabled";
-
-		resets = <&rstctrl 24 &rstctrl 25 &rstctrl 26>;
-		reset-names = "pcie0", "pcie1", "pcie2";
-		clocks = <&clkctrl 24 &clkctrl 25 &clkctrl 26>;
-		clock-names = "pcie0", "pcie1", "pcie2";
-
-		reset-gpios = <&gpio 19 GPIO_ACTIVE_LOW>,
-			      <&gpio 8 GPIO_ACTIVE_LOW>,
-			      <&gpio 7 GPIO_ACTIVE_LOW>;
-
-		pcie@0,0 {
-			reg = <0x0000 0 0 0 0>;
-			#address-cells = <3>;
-			#size-cells = <2>;
-			ranges;
-			bus-range = <0x00 0xff>;
-		};
-
-		pcie@1,0 {
-			reg = <0x0800 0 0 0 0>;
-			#address-cells = <3>;
-			#size-cells = <2>;
-			ranges;
-			bus-range = <0x00 0xff>;
-		};
-
-		pcie@2,0 {
-			reg = <0x1000 0 0 0 0>;
-			#address-cells = <3>;
-			#size-cells = <2>;
-			ranges;
-			bus-range = <0x00 0xff>;
-		};
-	};
+12 -12
drivers/staging/mt7621-pci/pci-mt7621.c → drivers/pci/controller/pcie-mt7621.c
···
 #include <linux/reset.h>
 #include <linux/sys_soc.h>

-/* MediaTek specific configuration registers */
+/* MediaTek-specific configuration registers */
 #define PCIE_FTS_NUM			0x70c
 #define PCIE_FTS_NUM_MASK		GENMASK(15, 8)
 #define PCIE_FTS_NUM_L0(x)		(((x) & 0xff) << 8)

 /* Host-PCI bridge registers */
 #define RALINK_PCI_PCICFG_ADDR		0x0000
-#define RALINK_PCI_PCIMSK_ADDR		0x000C
+#define RALINK_PCI_PCIMSK_ADDR		0x000c
 #define RALINK_PCI_CONFIG_ADDR		0x0020
 #define RALINK_PCI_CONFIG_DATA		0x0024
 #define RALINK_PCI_MEMBASE		0x0028
-#define RALINK_PCI_IOBASE		0x002C
+#define RALINK_PCI_IOBASE		0x002c

 /* PCIe RC control registers */
 #define RALINK_PCI_ID			0x0030
···
 static inline u32 mt7621_pci_get_cfgaddr(unsigned int bus, unsigned int slot,
					  unsigned int func, unsigned int where)
 {
-	return (((where & 0xF00) >> 8) << 24) | (bus << 16) | (slot << 11) |
+	return (((where & 0xf00) >> 8) << 24) | (bus << 16) | (slot << 11) |
		(func << 8) | (where & 0xfc) | 0x80000000;
 }
···
	entry = resource_list_first_type(&host->windows, IORESOURCE_MEM);
	if (!entry) {
-		dev_err(dev, "Cannot get memory resource\n");
+		dev_err(dev, "cannot get memory resource\n");
		return -EINVAL;
	}
···
	port->gpio_rst = devm_gpiod_get_index_optional(dev, "reset", slot,
						       GPIOD_OUT_LOW);
	if (IS_ERR(port->gpio_rst)) {
-		dev_err(dev, "Failed to get GPIO for PCIe%d\n", slot);
+		dev_err(dev, "failed to get GPIO for PCIe%d\n", slot);
		err = PTR_ERR(port->gpio_rst);
		goto remove_reset;
	}
···
	err = mt7621_pcie_init_port(port);
	if (err) {
-		dev_err(dev, "Initiating port %d failed\n", slot);
+		dev_err(dev, "initializing port %d failed\n", slot);
		list_del(&port->list);
	}
···
	entry = resource_list_first_type(&host->windows, IORESOURCE_IO);
	if (!entry) {
-		dev_err(dev, "Cannot get io resource\n");
+		dev_err(dev, "cannot get io resource\n");
		return -EINVAL;
	}
···
	err = mt7621_pcie_parse_dt(pcie);
	if (err) {
-		dev_err(dev, "Parsing DT failed\n");
+		dev_err(dev, "parsing DT failed\n");
		return err;
	}

	err = mt7621_pcie_init_ports(pcie);
	if (err) {
-		dev_err(dev, "Nothing connected in virtual bridges\n");
+		dev_err(dev, "nothing connected in virtual bridges\n");
		return 0;
	}

	err = mt7621_pcie_enable_ports(bridge);
	if (err) {
-		dev_err(dev, "Error enabling pcie ports\n");
+		dev_err(dev, "error enabling pcie ports\n");
		goto remove_resets;
	}

	err = setup_cm_memory_region(bridge);
	if (err) {
-		dev_err(dev, "Error setting up iocu mem regions\n");
+		dev_err(dev, "error setting up iocu mem regions\n");
		goto remove_resets;
	}
+1 -1
drivers/usb/host/xhci-pci.c
···
	struct xhci_driver_data *driver_data;
	const struct pci_device_id *id;

-	id = pci_match_id(pdev->driver->id_table, pdev);
+	id = pci_match_id(to_pci_driver(pdev->dev.driver)->id_table, pdev);

	if (id && id->driver_data) {
		driver_data = (struct xhci_driver_data *)id->driver_data;
-2
include/linux/acpi.h
···
 #define OSC_PCI_MSI_SUPPORT			0x00000010
 #define OSC_PCI_EDR_SUPPORT			0x00000080
 #define OSC_PCI_HPX_TYPE_3_SUPPORT		0x00000100
-#define OSC_PCI_SUPPORT_MASKS			0x0000019f

 /* PCI Host Bridge _OSC: Capabilities DWORD 3: Control Field */
 #define OSC_PCI_EXPRESS_NATIVE_HP_CONTROL	0x00000001
···
 #define OSC_PCI_EXPRESS_CAPABILITY_CONTROL	0x00000010
 #define OSC_PCI_EXPRESS_LTR_CONTROL		0x00000020
 #define OSC_PCI_EXPRESS_DPC_CONTROL		0x00000080
-#define OSC_PCI_CONTROL_MASKS			0x000000bf

 #define ACPI_GSB_ACCESS_ATTRIB_QUICK		0x00000002
 #define ACPI_GSB_ACCESS_ATTRIB_SEND_RCV		0x00000004
+4
include/linux/irqdomain.h
···
	u32 param[IRQ_DOMAIN_IRQ_SPEC_PARAMS];
 };

+/* Conversion function from of_phandle_args fields to fwspec */
+void of_phandle_args_to_fwspec(struct device_node *np, const u32 *args,
+			       unsigned int count, struct irq_fwspec *fwspec);
+
 /*
  * Should several domains have the same device node, but serve
  * different purposes (for example one domain is for PCI/MSI, and the
+7 -14
include/linux/pci.h
···
	u16		pcie_flags_reg;	/* Cached PCIe Capabilities Register */
	unsigned long	*dma_alias_mask;/* Mask of enabled devfn aliases */

-	struct pci_driver *driver;	/* Driver bound to this device */
	u64		dma_mask;	/* Mask of the bits of bus address this
					   device implements. Normally this is
					   0xffffffff. You only need to change
···
	struct pci_dynids	dynids;
 };

-#define to_pci_driver(drv) container_of(drv, struct pci_driver, driver)
+static inline struct pci_driver *to_pci_driver(struct device_driver *drv)
+{
+	return drv ? container_of(drv, struct pci_driver, driver) : NULL;
+}

 /**
  * PCI_DEVICE - macro used to describe a specific PCI device
···
 /* Vital Product Data routines */
 ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf);
 ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void *buf);
+ssize_t pci_read_vpd_any(struct pci_dev *dev, loff_t pos, size_t count, void *buf);
+ssize_t pci_write_vpd_any(struct pci_dev *dev, loff_t pos, size_t count, const void *buf);

 /* Helper functions for low-level code (drivers/pci/setup-[bus,res].c) */
 resource_size_t pcibios_retrieve_fw_addr(struct pci_dev *dev, int idx);
···
 #define PCI_IRQ_ALL_TYPES \
	(PCI_IRQ_LEGACY | PCI_IRQ_MSI | PCI_IRQ_MSIX)

-/* kmem_cache style wrapper around pci_alloc_consistent() */
-
 #include <linux/dmapool.h>
-
-#define	pci_pool dma_pool
-#define pci_pool_create(name, pdev, size, align, allocation) \
-		dma_pool_create(name, &pdev->dev, size, align, allocation)
-#define	pci_pool_destroy(pool) dma_pool_destroy(pool)
-#define	pci_pool_alloc(pool, flags, handle) dma_pool_alloc(pool, flags, handle)
-#define	pci_pool_zalloc(pool, flags, handle) \
-		dma_pool_zalloc(pool, flags, handle)
-#define	pci_pool_free(pool, vaddr, addr) dma_pool_free(pool, vaddr, addr)

 struct msix_entry {
	u32	vector;	/* Kernel uses to write allocated vector */
···
 void pcibios_set_master(struct pci_dev *dev);
 int pcibios_set_pcie_reset_state(struct pci_dev *dev,
				 enum pcie_reset_state state);
-int pcibios_add_device(struct pci_dev *dev);
+int pcibios_device_add(struct pci_dev *dev);
 void pcibios_release_device(struct pci_dev *dev);
 #ifdef CONFIG_PCI
 void pcibios_penalize_isa_irq(int irq, int active);
+1
include/linux/switchtec.h
···
 #define SWITCHTEC_EVENT_EN_CLI   BIT(2)
 #define SWITCHTEC_EVENT_EN_IRQ   BIT(3)
 #define SWITCHTEC_EVENT_FATAL    BIT(4)
+#define SWITCHTEC_EVENT_NOT_SUPP BIT(31)

 #define SWITCHTEC_DMA_MRPC_EN	BIT(0)
+6
include/uapi/linux/pci_regs.h
···
 #define  PCI_EXP_DEVCTL_URRE	0x0008	/* Unsupported Request Reporting En. */
 #define  PCI_EXP_DEVCTL_RELAX_EN 0x0010 /* Enable relaxed ordering */
 #define  PCI_EXP_DEVCTL_PAYLOAD	0x00e0	/* Max_Payload_Size */
+#define  PCI_EXP_DEVCTL_PAYLOAD_128B  0x0000 /* 128 Bytes */
+#define  PCI_EXP_DEVCTL_PAYLOAD_256B  0x0020 /* 256 Bytes */
+#define  PCI_EXP_DEVCTL_PAYLOAD_512B  0x0040 /* 512 Bytes */
+#define  PCI_EXP_DEVCTL_PAYLOAD_1024B 0x0060 /* 1024 Bytes */
+#define  PCI_EXP_DEVCTL_PAYLOAD_2048B 0x0080 /* 2048 Bytes */
+#define  PCI_EXP_DEVCTL_PAYLOAD_4096B 0x00a0 /* 4096 Bytes */
 #define  PCI_EXP_DEVCTL_EXT_TAG	0x0100	/* Extended Tag Field Enable */
 #define  PCI_EXP_DEVCTL_PHANTOM	0x0200	/* Phantom Functions Enable */
 #define  PCI_EXP_DEVCTL_AUX_PME	0x0400	/* Auxiliary Power PM Enable */
+4 -3
kernel/irq/irqdomain.c
···
	return 0;
 }

-static void of_phandle_args_to_fwspec(struct device_node *np, const u32 *args,
-				      unsigned int count,
-				      struct irq_fwspec *fwspec)
+void of_phandle_args_to_fwspec(struct device_node *np, const u32 *args,
+			       unsigned int count, struct irq_fwspec *fwspec)
 {
	int i;
···
	for (i = 0; i < count; i++)
		fwspec->param[i] = args[i];
 }
+EXPORT_SYMBOL_GPL(of_phandle_args_to_fwspec);

 unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 {
···
	irq_free_descs(virq, nr_irqs);
	return ret;
 }
+EXPORT_SYMBOL_GPL(__irq_domain_alloc_irqs);

 /* The irq_data was moved, fix the revmap to refer to the new location */
 static void irq_domain_fix_revmap(struct irq_data *d)