
Merge tag 'pci-v4.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:

- unify AER decoding for native and ACPI CPER sources (Alexandru
Gagniuc)

- add TLP header info to AER tracepoint (Thomas Tai)

- add generic pcie_wait_for_link() interface (Oza Pawandeep)

- handle AER ERR_FATAL by removing and re-enumerating devices, as
Downstream Port Containment does (Oza Pawandeep)

- factor out common code between AER and DPC recovery (Oza Pawandeep)

- stop triggering DPC for ERR_NONFATAL errors (Oza Pawandeep)

- share ERR_FATAL recovery path between AER and DPC (Oza Pawandeep)

- disable ASPM L1.2 substate if we don't have LTR (Bjorn Helgaas)

- respect platform ownership of LTR (Bjorn Helgaas)

- clear interrupt status in top half to avoid interrupt storm (Oza
Pawandeep)

- neaten pci=earlydump output (Andy Shevchenko)

- avoid errors when extended config space inaccessible (Gilles Buloz)

- prevent sysfs disable of device while driver attached (Christoph
Hellwig)

- use core interface to report PCIe link properties in bnx2x, bnxt_en,
cxgb4, ixgbe (Bjorn Helgaas)

- remove unused pcie_get_minimum_link() (Bjorn Helgaas)

- fix use-before-set error in ibmphp (Dan Carpenter)

- fix pciehp timeouts caused by Command Completed errata (Bjorn
Helgaas)

- fix refcounting in pnv_php hotplug (Julia Lawall)

- clear pciehp Presence Detect and Data Link Layer Status Changed on
resume so we don't miss hotplug events (Mika Westerberg)

- only request pciehp control if we support it, so platform can use
ACPI hotplug otherwise (Mika Westerberg)

- convert SHPC to be builtin only (Mika Westerberg)

- request SHPC control via _OSC if we support it (Mika Westerberg)

- simplify SHPC handoff from firmware (Mika Westerberg)

- fix an SHPC quirk that mistakenly included *all* AMD bridges as well
as devices from any vendor with device ID 0x7458 (Bjorn Helgaas)

- assign a bus number even to non-native hotplug bridges to leave
space for acpiphp additions, to fix a common Thunderbolt xHCI
hot-add failure (Mika Westerberg)

- keep acpiphp from scanning native hotplug bridges, to fix common
Thunderbolt hot-add failures (Mika Westerberg)

- improve "partially hidden behind bridge" messages from core (Mika
Westerberg)

- add macros for PCIe Link Control 2 register (Frederick Lawler)

- replace IB/hfi1 custom macros with PCI core versions (Frederick
Lawler)

- remove dead microblaze and xtensa code (Bjorn Helgaas)

- use dev_printk() when possible in xtensa and mips (Bjorn Helgaas)

- remove unused pcie_port_acpi_setup() and portdrv_acpi.c (Bjorn
Helgaas)

- add managed interface to get PCI host bridge resources from OF (Jan
Kiszka)

- add support for unbinding generic PCI host controller (Jan Kiszka)

- fix memory leaks when unbinding generic PCI host controller (Jan
Kiszka)

- request legacy VGA framebuffer only for VGA devices to avoid false
device conflicts (Bjorn Helgaas)

- turn on PCI_COMMAND_IO & PCI_COMMAND_MEMORY in pci_enable_device()
like everybody else, not in pcibios_fixup_bus() (Bjorn Helgaas)

- add generic enable function for simple SR-IOV hardware (Alexander
Duyck)

- use generic SR-IOV enable for ena, nvme (Alexander Duyck)

- add ACS quirk for Intel 7th & 8th Gen mobile (Alex Williamson)

- add ACS quirk for Intel 300 series (Mika Westerberg)

- enable register clock for Armada 7K/8K (Gregory CLEMENT)

- reduce Keystone "link already up" log level (Fabio Estevam)

- move private DT functions to drivers/pci/ (Rob Herring)

- factor out dwc CONFIG_PCI Kconfig dependencies (Rob Herring)

- add DesignWare support to the endpoint test driver (Gustavo
Pimentel)

- add DesignWare support for endpoint mode (Gustavo Pimentel)

- use devm_ioremap_resource() instead of devm_ioremap() in dra7xx and
artpec6 (Gustavo Pimentel)

- fix Qualcomm bitwise NOT issue (Dan Carpenter)

- add Qualcomm runtime PM support (Srinivas Kandagatla)

- fix DesignWare enumeration below bridges (Koen Vandeputte)

- use usleep() instead of mdelay() in endpoint test (Jia-Ju Bai)

- add configfs entries for pci_epf_driver device IDs (Kishon Vijay
Abraham I)

- clean up pci_endpoint_test driver (Gustavo Pimentel)

- update Layerscape maintainer email addresses (Minghuan Lian)

- add COMPILE_TEST to improve build test coverage (Rob Herring)

- fix Hyper-V bus registration failure caused by domain/serial number
confusion (Sridhar Pitchai)

- improve Hyper-V refcounting and coding style (Stephen Hemminger)

- avoid potential Hyper-V hang waiting for a response that will never
come (Dexuan Cui)

- implement Mediatek chained IRQ handling (Honghui Zhang)

- fix vendor ID & class type for Mediatek MT7622 (Honghui Zhang)

- add Mobiveil PCIe host controller driver (Subrahmanya Lingappa)

- add Mobiveil MSI support (Subrahmanya Lingappa)

- clean up clocks, MSI, IRQ mappings in R-Car probe failure paths
(Marek Vasut)

- poll more frequently (5us vs 5ms) while waiting for R-Car data link
active (Marek Vasut)

- use generic OF parsing interface in R-Car (Vladimir Zapolskiy)

- add R-Car V3H (R8A77980) "compatible" string (Sergei Shtylyov)

- add R-Car gen3 PHY support (Sergei Shtylyov)

- improve R-Car PHYRDY polling (Sergei Shtylyov)

- clean up R-Car macros (Marek Vasut)

- use runtime PM for R-Car controller clock (Dien Pham)

- update arm64 defconfig for Rockchip (Shawn Lin)

- refactor Rockchip code to facilitate both root port and endpoint
mode (Shawn Lin)

- add Rockchip endpoint mode driver (Shawn Lin)

- support VMD "membar shadow" feature (Jon Derrick)

- support VMD bus number offsets (Jon Derrick)

- add VMD "no AER source ID" quirk for more device IDs (Jon Derrick)

- remove unnecessary host controller CONFIG_PCIEPORTBUS Kconfig
selections (Bjorn Helgaas)

- clean up quirks.c organization and whitespace (Bjorn Helgaas)
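Several items above (pcie_wait_for_link(), the R-Car data-link-active polling, the usleep()-for-mdelay() change) are variations on one pattern: poll a hardware status bit with a bounded timeout instead of sleeping once and hoping. A rough userspace sketch of that pattern, with a simulated link standing in for the hardware (the `sim_link` type and both function names here are invented for illustration, not the kernel's API):

```c
#include <stdbool.h>

/* Simulated device: the link reports up after `up_after` status polls. */
struct sim_link {
        int polls;
        int up_after;
};

static bool link_up(struct sim_link *l)
{
        return ++l->polls > l->up_after;
}

/*
 * Poll link_up() for at most timeout_ms, checking every step_ms.
 * Mirrors the shape of a pcie_wait_for_link()-style helper: return
 * true as soon as the link reports up, false on timeout.  In the
 * kernel, the wait between polls would be msleep()/usleep_range().
 */
static bool wait_for_link(struct sim_link *l, int timeout_ms, int step_ms)
{
        int waited = 0;

        for (;;) {
                if (link_up(l))
                        return true;
                if (waited >= timeout_ms)
                        return false;
                waited += step_ms;      /* stand-in for sleeping step_ms */
        }
}
```

The step size is the tradeoff the R-Car change above tunes: polling every 5us instead of 5ms bounds how stale the "link up" answer can be without changing the overall timeout.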

* tag 'pci-v4.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (144 commits)
PCI/AER: Replace struct pcie_device with pci_dev
PCI/AER: Remove unused parameters
PCI: qcom: Include gpio/consumer.h
PCI: Improve "partially hidden behind bridge" log message
PCI: Improve pci_scan_bridge() and pci_scan_bridge_extend() doc
PCI: Move resource distribution for single bridge outside loop
PCI: Account for all bridges on bus when distributing bus numbers
ACPI / hotplug / PCI: Drop unnecessary parentheses
ACPI / hotplug / PCI: Mark stale PCI devices disconnected
ACPI / hotplug / PCI: Don't scan bridges managed by native hotplug
PCI: hotplug: Add hotplug_is_native()
PCI: shpchp: Add shpchp_is_native()
PCI: shpchp: Fix AMD POGO identification
PCI: mobiveil: Add MSI support
PCI: mobiveil: Add Mobiveil PCIe Host Bridge IP driver
PCI/AER: Decode Error Source Requester ID
PCI/AER: Remove aer_recover_work_func() forward declaration
PCI/DPC: Use the generic pcie_do_fatal_recovery() path
PCI/AER: Pass service type to pcie_do_fatal_recovery()
PCI/DPC: Disable ERR_NONFATAL handling by DPC
...

+6189 -3863
+25 -10
Documentation/PCI/pci-error-recovery.txt
@@ -110,7 +110,7 @@
 event will be platform-dependent, but will follow the general
 sequence described below.
 
-STEP 0: Error Event
+STEP 0: Error Event: ERR_NONFATAL
 -------------------
 A PCI bus error is detected by the PCI hardware. On powerpc, the slot
 is isolated, in that all I/O is blocked: all reads return 0xffffffff,
@@ -228,13 +228,7 @@
 If any driver returned PCI_ERS_RESULT_NEED_RESET, then the platform
 proceeds to STEP 4 (Slot Reset)
 
-STEP 3: Link Reset
-------------------
-The platform resets the link.  This is a PCI-Express specific step
-and is done whenever a fatal error has been detected that can be
-"solved" by resetting the link.
-
-STEP 4: Slot Reset
+STEP 3: Slot Reset
 ------------------
 
 In response to a return value of PCI_ERS_RESULT_NEED_RESET, the
@@ -314,7 +320,7 @@
 >>> However, it probably should.
 
 
-STEP 5: Resume Operations
+STEP 4: Resume Operations
 -------------------------
 The platform will call the resume() callback on all affected device
 drivers if all drivers on the segment have returned
@@ -326,7 +332,7 @@
 At this point, if a new error happens, the platform will restart
 a new error recovery sequence.
 
-STEP 6: Permanent Failure
+STEP 5: Permanent Failure
 -------------------------
 A "permanent failure" has occurred, and the platform cannot recover
 the device.  The platform will call error_detected() with a
@@ -349,6 +355,27 @@
 for additional detail on real-life experience of the causes of
 software errors.
 
+STEP 0: Error Event: ERR_FATAL
+-------------------
+PCI bus error is detected by the PCI hardware. On powerpc, the slot is
+isolated, in that all I/O is blocked: all reads return 0xffffffff, all
+writes are ignored.
+
+STEP 1: Remove devices
+--------------------
+Platform removes the devices depending on the error agent, it could be
+this port for all subordinates or upstream component (likely downstream
+port)
+
+STEP 2: Reset link
+--------------------
+The platform resets the link.  This is a PCI-Express specific step and is
+done whenever a fatal error has been detected that can be "solved" by
+resetting the link.
+
+STEP 3: Re-enumerate the devices
+--------------------
+Initiates the re-enumeration.
 
 Conclusion; General Remarks
 ---------------------------
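The recovery document above keys everything off the result codes the per-driver error_detected() callbacks return: if any driver on the segment asks for a reset, the whole segment is reset. A minimal userspace sketch of that aggregation rule, with the result codes simplified and renamed for illustration (the kernel's real type is pci_ers_result_t with different values):

```c
/* Simplified stand-ins for pci_ers_result_t, ordered best to worst. */
enum ers_result {
        CAN_RECOVER = 1,   /* driver can cope without a reset */
        NEED_RESET  = 2,   /* driver wants a slot reset */
        DISCONNECT  = 3    /* device is lost; permanent failure */
};

/*
 * Combine two callback results the way the recovery core's merge step
 * works: the segment as a whole gets the most severe answer any
 * driver returned, so one NEED_RESET outvotes many CAN_RECOVERs.
 */
static enum ers_result merge_result(enum ers_result a, enum ers_result b)
{
        return a > b ? a : b;
}
```

This is why the document can promise a single linear sequence of steps: by the time STEP 3 (Slot Reset) is reached, every driver's answer has already been folded into one worst-case verdict.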
+2
Documentation/admin-guide/kernel-parameters.txt
@@ -3162,6 +3162,8 @@
 		on	Turn realloc on
 		realloc	same as realloc=on
 		noari	do not use PCIe ARI.
+		noats	[PCIE, Intel-IOMMU, AMD-IOMMU]
+			do not use PCIe ATS (and IOMMU device IOTLB).
 		pcie_scan_all	Scan all possible PCIe devices.  Otherwise we
 				only look for one device below a PCIe downstream
 				port.
+18 -6
Documentation/devicetree/bindings/pci/designware-pcie.txt
@@ -1,7 +1,9 @@
 * Synopsys DesignWare PCIe interface
 
 Required properties:
-- compatible: should contain "snps,dw-pcie" to identify the core.
+- compatible:
+	"snps,dw-pcie" for RC mode;
+	"snps,dw-pcie-ep" for EP mode;
 - reg: Should contain the configuration address space.
 - reg-names: Must be "config" for the PCIe configuration space.
   (The old way of getting the configuration address space from "ranges"
@@ -43,11 +41,11 @@
 
 Example configuration:
 
-	pcie: pcie@dffff000 {
+	pcie: pcie@dfc00000 {
 		compatible = "snps,dw-pcie";
-		reg = <0xdffff000 0x1000>, /* Controller registers */
-		      <0xd0000000 0x2000>; /* PCI config space */
-		reg-names = "ctrlreg", "config";
+		reg = <0xdfc00000 0x0001000>, /* IP registers */
+		      <0xd0000000 0x0002000>; /* Configuration space */
+		reg-names = "dbi", "config";
 		#address-cells = <3>;
 		#size-cells = <2>;
 		device_type = "pci";
@@ -56,5 +54,15 @@
 		interrupts = <25>, <24>;
 		#interrupt-cells = <1>;
 		num-lanes = <1>;
-		num-viewport = <3>;
+	};
+or
+	pcie: pcie@dfc00000 {
+		compatible = "snps,dw-pcie-ep";
+		reg = <0xdfc00000 0x0001000>, /* IP registers 1 */
+		      <0xdfc01000 0x0001000>, /* IP registers 2 */
+		      <0xd0000000 0x2000000>; /* Configuration space */
+		reg-names = "dbi", "dbi2", "addr_space";
+		num-ib-windows = <6>;
+		num-ob-windows = <2>;
+		num-lanes = <1>;
 	};
+73
Documentation/devicetree/bindings/pci/mobiveil-pcie.txt
@@ -0,0 +1,73 @@
+* Mobiveil AXI PCIe Root Port Bridge DT description
+
+Mobiveil's GPEX 4.0 is a PCIe Gen4 root port bridge IP. This configurable IP
+has up to 8 outbound and inbound windows for the address translation.
+
+Required properties:
+- #address-cells: Address representation for root ports, set to <3>
+- #size-cells: Size representation for root ports, set to <2>
+- #interrupt-cells: specifies the number of cells needed to encode an
+	interrupt source. The value must be 1.
+- compatible: Should contain "mbvl,gpex40-pcie"
+- reg: Should contain PCIe registers location and length
+	"config_axi_slave": PCIe controller registers
+	"csr_axi_slave"	  : Bridge config registers
+	"gpio_slave"	  : GPIO registers to control slot power
+	"apb_csr"	  : MSI registers
+
+- device_type: must be "pci"
+- apio-wins : number of requested apio outbound windows
+		default 2 outbound windows are configured -
+		1. Config window
+		2. Memory window
+- ppio-wins : number of requested ppio inbound windows
+		default 1 inbound memory window is configured.
+- bus-range: PCI bus numbers covered
+- interrupt-controller: identifies the node as an interrupt controller
+- #interrupt-cells: specifies the number of cells needed to encode an
+	interrupt source. The value must be 1.
+- interrupt-parent : phandle to the interrupt controller that
+		it is attached to, it should be set to gic to point to
+		ARM's Generic Interrupt Controller node in system DT.
+- interrupts: The interrupt line of the PCIe controller
+		last cell of this field is set to 4 to
+		denote it as IRQ_TYPE_LEVEL_HIGH type interrupt.
+- interrupt-map-mask,
+	interrupt-map: standard PCI properties to define the mapping of the
+	PCI interface to interrupt numbers.
+- ranges: ranges for the PCI memory regions (I/O space region is not
+	supported by hardware)
+	Please refer to the standard PCI bus binding document for a more
+	detailed explanation
+
+
+Example:
+++++++++
+	pcie0: pcie@a0000000 {
+		#address-cells = <3>;
+		#size-cells = <2>;
+		compatible = "mbvl,gpex40-pcie";
+		reg = <0xa0000000 0x00001000>,
+		      <0xb0000000 0x00010000>,
+		      <0xff000000 0x00200000>,
+		      <0xb0010000 0x00001000>;
+		reg-names = "config_axi_slave",
+			    "csr_axi_slave",
+			    "gpio_slave",
+			    "apb_csr";
+		device_type = "pci";
+		apio-wins = <2>;
+		ppio-wins = <1>;
+		bus-range = <0x00000000 0x000000ff>;
+		interrupt-controller;
+		interrupt-parent = <&gic>;
+		#interrupt-cells = <1>;
+		interrupts = < 0 89 4 >;
+		interrupt-map-mask = <0 0 0 7>;
+		interrupt-map = <0 0 0 0 &pci_express 0>,
+				<0 0 0 1 &pci_express 1>,
+				<0 0 0 2 &pci_express 2>,
+				<0 0 0 3 &pci_express 3>;
+		ranges = < 0x83000000 0 0x00000000 0xa8000000 0 0x8000000>;
+
+	};
+4 -1
Documentation/devicetree/bindings/pci/pci-armada8k.txt
@@ -12,7 +12,10 @@
 	 - "ctrl" for the control register region
 	 - "config" for the config space region
 - interrupts: Interrupt specifier for the PCIe controler
-- clocks: reference to the PCIe controller clock
+- clocks: reference to the PCIe controller clocks
+- clock-names: mandatory if there is a second clock, in this case the
+   name must be "core" for the first clock and "reg" for the second
+   one
 
 Example:
 
+6
Documentation/devicetree/bindings/pci/rcar-pci.txt
@@ -8,6 +8,7 @@
 	    "renesas,pcie-r8a7793" for the R8A7793 SoC;
 	    "renesas,pcie-r8a7795" for the R8A7795 SoC;
 	    "renesas,pcie-r8a7796" for the R8A7796 SoC;
+	    "renesas,pcie-r8a77980" for the R8A77980 SoC;
 	    "renesas,pcie-rcar-gen2" for a generic R-Car Gen2 or
 				    RZ/G1 compatible device.
 	    "renesas,pcie-rcar-gen3" for a generic R-Car Gen3 compatible device.
@@ -32,6 +31,11 @@
 - clocks: from common clock binding: clock specifiers for the PCIe controller
 	and PCIe bus clocks.
 - clock-names: from common clock binding: should be "pcie" and "pcie_bus".
+
+Optional properties:
+- phys: from common PHY binding: PHY phandle and specifier (only make sense
+  for R-Car gen3 SoCs where the PCIe PHYs have their own register blocks).
+- phy-names: from common PHY binding: should be "pcie".
 
 Example:
 
+62
Documentation/devicetree/bindings/pci/rockchip-pcie-ep.txt
@@ -0,0 +1,62 @@
+* Rockchip AXI PCIe Endpoint Controller DT description
+
+Required properties:
+- compatible: Should contain "rockchip,rk3399-pcie-ep"
+- reg: Two register ranges as listed in the reg-names property
+- reg-names: Must include the following names
+	- "apb-base"
+	- "mem-base"
+- clocks: Must contain an entry for each entry in clock-names.
+	See ../clocks/clock-bindings.txt for details.
+- clock-names: Must include the following entries:
+	- "aclk"
+	- "aclk-perf"
+	- "hclk"
+	- "pm"
+- resets: Must contain seven entries for each entry in reset-names.
+	See ../reset/reset.txt for details.
+- reset-names: Must include the following names
+	- "core"
+	- "mgmt"
+	- "mgmt-sticky"
+	- "pipe"
+	- "pm"
+	- "aclk"
+	- "pclk"
+- pinctrl-names : The pin control state names
+- pinctrl-0: The "default" pinctrl state
+- phys: Must contain an phandle to a PHY for each entry in phy-names.
+- phy-names: Must include 4 entries for all 4 lanes even if some of
+  them won't be used for your cases. Entries are of the form "pcie-phy-N":
+  where N ranges from 0 to 3.
+  (see example below and you MUST also refer to ../phy/rockchip-pcie-phy.txt
+  for changing the #phy-cells of phy node to support it)
+- rockchip,max-outbound-regions: Maximum number of outbound regions
+
+Optional Property:
+- num-lanes: number of lanes to use
+- max-functions: Maximum number of functions that can be configured (default 1).
+
+pcie0-ep: pcie@f8000000 {
+	compatible = "rockchip,rk3399-pcie-ep";
+	#address-cells = <3>;
+	#size-cells = <2>;
+	rockchip,max-outbound-regions = <16>;
+	clocks = <&cru ACLK_PCIE>, <&cru ACLK_PERF_PCIE>,
+		 <&cru PCLK_PCIE>, <&cru SCLK_PCIE_PM>;
+	clock-names = "aclk", "aclk-perf",
+		      "hclk", "pm";
+	max-functions = /bits/ 8 <8>;
+	num-lanes = <4>;
+	reg = <0x0 0xfd000000 0x0 0x1000000>, <0x0 0x80000000 0x0 0x20000>;
+	reg-names = "apb-base", "mem-base";
+	resets = <&cru SRST_PCIE_CORE>, <&cru SRST_PCIE_MGMT>,
+		 <&cru SRST_PCIE_MGMT_STICKY>, <&cru SRST_PCIE_PIPE> ,
+		 <&cru SRST_PCIE_PM>, <&cru SRST_P_PCIE>, <&cru SRST_A_PCIE>;
+	reset-names = "core", "mgmt", "mgmt-sticky", "pipe",
+		      "pm", "pclk", "aclk";
+	phys = <&pcie_phy 0>, <&pcie_phy 1>, <&pcie_phy 2>, <&pcie_phy 3>;
+	phy-names = "pcie-phy-0", "pcie-phy-1", "pcie-phy-2", "pcie-phy-3";
+	pinctrl-names = "default";
+	pinctrl-0 = <&pcie_clkreq>;
+};
Documentation/devicetree/bindings/pci/rockchip-pcie.txt → Documentation/devicetree/bindings/pci/rockchip-pcie-host.txt (renamed)
+1
Documentation/devicetree/bindings/vendor-prefixes.txt
@@ -205,6 +205,7 @@
 macnica	Macnica Americas
 marvell	Marvell Technology Group Ltd.
 maxim	Maxim Integrated Products
+mbvl	Mobiveil Inc.
 mcube	mCube
 meas	Measurement Specialties
 mediatek	MediaTek Inc.
+12 -5
MAINTAINERS
@@ -9484,6 +9484,13 @@
 S:	Maintained
 F:	drivers/media/dvb-frontends/mn88473*
 
+PCI DRIVER FOR MOBIVEIL PCIE IP
+M:	Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
+L:	linux-pci@vger.kernel.org
+S:	Supported
+F:	Documentation/devicetree/bindings/pci/mobiveil-pcie.txt
+F:	drivers/pci/host/pcie-mobiveil.c
+
 MODULE SUPPORT
 M:	Jessica Yu <jeyu@kernel.org>
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jeyu/linux.git modules-next
@@ -10833,9 +10826,9 @@
 F:	drivers/pci/cadence/pcie-cadence*
 
 PCI DRIVER FOR FREESCALE LAYERSCAPE
-M:	Minghuan Lian <minghuan.Lian@freescale.com>
-M:	Mingkai Hu <mingkai.hu@freescale.com>
-M:	Roy Zang <tie-fei.zang@freescale.com>
+M:	Minghuan Lian <minghuan.Lian@nxp.com>
+M:	Mingkai Hu <mingkai.hu@nxp.com>
+M:	Roy Zang <roy.zang@nxp.com>
 L:	linuxppc-dev@lists.ozlabs.org
 L:	linux-pci@vger.kernel.org
 L:	linux-arm-kernel@lists.infradead.org
@@ -11061,8 +11054,8 @@
 L:	linux-pci@vger.kernel.org
 L:	linux-rockchip@lists.infradead.org
 S:	Maintained
-F:	Documentation/devicetree/bindings/pci/rockchip-pcie.txt
-F:	drivers/pci/host/pcie-rockchip.c
+F:	Documentation/devicetree/bindings/pci/rockchip-pcie*
+F:	drivers/pci/host/pcie-rockchip*
 
 PCI DRIVER FOR V3 SEMICONDUCTOR V360EPC
 M:	Linus Walleij <linus.walleij@linaro.org>
+2 -1
arch/arm64/configs/defconfig
@@ -78,7 +78,8 @@
 CONFIG_PCI_AARDVARK=y
 CONFIG_PCI_TEGRA=y
 CONFIG_PCIE_RCAR=y
-CONFIG_PCIE_ROCKCHIP=m
+CONFIG_PCIE_ROCKCHIP=y
+CONFIG_PCIE_ROCKCHIP_HOST=m
 CONFIG_PCI_HOST_GENERIC=y
 CONFIG_PCI_XGENE=y
 CONFIG_PCI_HOST_THUNDER_PEM=y
-4
arch/microblaze/include/asm/pci.h
@@ -61,10 +61,6 @@
 
 #define HAVE_PCI_LEGACY	1
 
-extern void pcibios_claim_one_bus(struct pci_bus *b);
-
-extern void pcibios_finish_adding_to_bus(struct pci_bus *bus);
-
 extern void pcibios_resource_survey(void);
 
 struct file;
-61
arch/microblaze/pci/pci-common.c
@@ -915,67 +915,6 @@
 	pci_assign_unassigned_resources();
 }
 
-/* This is used by the PCI hotplug driver to allocate resource
- * of newly plugged busses. We can try to consolidate with the
- * rest of the code later, for now, keep it as-is as our main
- * resource allocation function doesn't deal with sub-trees yet.
- */
-void pcibios_claim_one_bus(struct pci_bus *bus)
-{
-	struct pci_dev *dev;
-	struct pci_bus *child_bus;
-
-	list_for_each_entry(dev, &bus->devices, bus_list) {
-		int i;
-
-		for (i = 0; i < PCI_NUM_RESOURCES; i++) {
-			struct resource *r = &dev->resource[i];
-
-			if (r->parent || !r->start || !r->flags)
-				continue;
-
-			pr_debug("PCI: Claiming %s: ", pci_name(dev));
-			pr_debug("Resource %d: %016llx..%016llx [%x]\n",
-				 i, (unsigned long long)r->start,
-				 (unsigned long long)r->end,
-				 (unsigned int)r->flags);
-
-			if (pci_claim_resource(dev, i) == 0)
-				continue;
-
-			pci_claim_bridge_resource(dev, i);
-		}
-	}
-
-	list_for_each_entry(child_bus, &bus->children, node)
-		pcibios_claim_one_bus(child_bus);
-}
-EXPORT_SYMBOL_GPL(pcibios_claim_one_bus);
-
-
-/* pcibios_finish_adding_to_bus
- *
- * This is to be called by the hotplug code after devices have been
- * added to a bus, this include calling it for a PHB that is just
- * being added
- */
-void pcibios_finish_adding_to_bus(struct pci_bus *bus)
-{
-	pr_debug("PCI: Finishing adding to hotplug bus %04x:%02x\n",
-		 pci_domain_nr(bus), bus->number);
-
-	/* Allocate bus and devices resources */
-	pcibios_allocate_bus_resources(bus);
-	pcibios_claim_one_bus(bus);
-
-	/* Add new devices to global lists.  Register in proc, sysfs. */
-	pci_bus_add_devices(bus);
-
-	/* Fixup EEH */
-	/* eeh_add_device_tree_late(bus); */
-}
-EXPORT_SYMBOL_GPL(pcibios_finish_adding_to_bus);
-
 static void pcibios_setup_phb_resources(struct pci_controller *hose,
 					struct list_head *resources)
 {
+3 -5
arch/mips/pci/pci-legacy.c
@@ -263,9 +263,8 @@
 		    (!(r->flags & IORESOURCE_ROM_ENABLE)))
 			continue;
 		if (!r->start && r->end) {
-			printk(KERN_ERR "PCI: Device %s not available "
-			       "because of resource collisions\n",
-			       pci_name(dev));
+			pci_err(dev,
+				"can't enable device: resource collisions\n");
 			return -EINVAL;
 		}
 		if (r->flags & IORESOURCE_IO)
@@ -273,8 +274,7 @@
 			cmd |= PCI_COMMAND_MEMORY;
 	}
 	if (cmd != old_cmd) {
-		printk("PCI: Enabling device %s (%04x -> %04x)\n",
-		       pci_name(dev), old_cmd, cmd);
+		pci_info(dev, "enabling device (%04x -> %04x)\n", old_cmd, cmd);
 		pci_write_config_word(dev, PCI_COMMAND, cmd);
 	}
 	return 0;
+21 -41
arch/sparc/kernel/leon_pci.c
@@ -60,50 +60,30 @@
 	pci_bus_add_devices(root_bus);
 }
 
-void pcibios_fixup_bus(struct pci_bus *pbus)
+int pcibios_enable_device(struct pci_dev *dev, int mask)
 {
-	struct pci_dev *dev;
-	int i, has_io, has_mem;
-	u16 cmd;
+	u16 cmd, oldcmd;
+	int i;
 
-	list_for_each_entry(dev, &pbus->devices, bus_list) {
-		/*
-		 * We can not rely on that the bootloader has enabled I/O
-		 * or memory access to PCI devices. Instead we enable it here
-		 * if the device has BARs of respective type.
-		 */
-		has_io = has_mem = 0;
-		for (i = 0; i < PCI_ROM_RESOURCE; i++) {
-			unsigned long f = dev->resource[i].flags;
-			if (f & IORESOURCE_IO)
-				has_io = 1;
-			else if (f & IORESOURCE_MEM)
-				has_mem = 1;
-		}
-		/* ROM BARs are mapped into 32-bit memory space */
-		if (dev->resource[PCI_ROM_RESOURCE].end != 0) {
-			dev->resource[PCI_ROM_RESOURCE].flags |=
-				IORESOURCE_ROM_ENABLE;
-			has_mem = 1;
-		}
-		pci_bus_read_config_word(pbus, dev->devfn, PCI_COMMAND, &cmd);
-		if (has_io && !(cmd & PCI_COMMAND_IO)) {
-#ifdef CONFIG_PCI_DEBUG
-			printk(KERN_INFO "LEONPCI: Enabling I/O for dev %s\n",
-			       pci_name(dev));
-#endif
+	pci_read_config_word(dev, PCI_COMMAND, &cmd);
+	oldcmd = cmd;
+
+	for (i = 0; i < PCI_NUM_RESOURCES; i++) {
+		struct resource *res = &dev->resource[i];
+
+		/* Only set up the requested stuff */
+		if (!(mask & (1<<i)))
+			continue;
+
+		if (res->flags & IORESOURCE_IO)
 			cmd |= PCI_COMMAND_IO;
-			pci_bus_write_config_word(pbus, dev->devfn, PCI_COMMAND,
-						  cmd);
-		}
-		if (has_mem && !(cmd & PCI_COMMAND_MEMORY)) {
-#ifdef CONFIG_PCI_DEBUG
-			printk(KERN_INFO "LEONPCI: Enabling MEMORY for dev"
-			       "%s\n", pci_name(dev));
-#endif
+		if (res->flags & IORESOURCE_MEM)
 			cmd |= PCI_COMMAND_MEMORY;
-			pci_bus_write_config_word(pbus, dev->devfn, PCI_COMMAND,
-						  cmd);
-		}
 	}
+
+	if (cmd != oldcmd) {
+		pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd);
+		pci_write_config_word(dev, PCI_COMMAND, cmd);
+	}
+	return 0;
 }
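The leon_pci.c change above adopts the common pcibios_enable_device() shape: walk the resources selected by `mask` and OR the matching decode-enable bits into the COMMAND register. A userspace sketch of just that bit computation, with the register and flag constants abbreviated (in the kernel they come from the PCI core headers):

```c
#include <stdint.h>

#define CMD_IO   0x1    /* stand-in for PCI_COMMAND_IO */
#define CMD_MEM  0x2    /* stand-in for PCI_COMMAND_MEMORY */
#define RES_IO   0x100  /* stand-in for IORESOURCE_IO */
#define RES_MEM  0x200  /* stand-in for IORESOURCE_MEM */

/*
 * Compute the new COMMAND value: for each resource index selected by
 * mask, enable I/O or memory decoding to match the resource's type.
 * The caller writes the result back only if it differs from the old
 * value, exactly as the diff above does.
 */
static uint16_t enable_cmd_bits(uint16_t cmd, const unsigned long *flags,
                                int nres, int mask)
{
        for (int i = 0; i < nres; i++) {
                if (!(mask & (1 << i)))
                        continue;       /* only set up the requested BARs */
                if (flags[i] & RES_IO)
                        cmd |= CMD_IO;
                if (flags[i] & RES_MEM)
                        cmd |= CMD_MEM;
        }
        return cmd;
}
```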
+86 -50
arch/sparc/kernel/pci.c
··· 214 214 if (!addrs) 215 215 return; 216 216 if (ofpci_verbose) 217 - printk(" parse addresses (%d bytes) @ %p\n", 218 - proplen, addrs); 217 + pci_info(dev, " parse addresses (%d bytes) @ %p\n", 218 + proplen, addrs); 219 219 op_res = &op->resource[0]; 220 220 for (; proplen >= 20; proplen -= 20, addrs += 5, op_res++) { 221 221 struct resource *res; ··· 227 227 continue; 228 228 i = addrs[0] & 0xff; 229 229 if (ofpci_verbose) 230 - printk(" start: %llx, end: %llx, i: %x\n", 231 - op_res->start, op_res->end, i); 230 + pci_info(dev, " start: %llx, end: %llx, i: %x\n", 231 + op_res->start, op_res->end, i); 232 232 233 233 if (PCI_BASE_ADDRESS_0 <= i && i <= PCI_BASE_ADDRESS_5) { 234 234 res = &dev->resource[(i - PCI_BASE_ADDRESS_0) >> 2]; ··· 236 236 res = &dev->resource[PCI_ROM_RESOURCE]; 237 237 flags |= IORESOURCE_READONLY | IORESOURCE_SIZEALIGN; 238 238 } else { 239 - printk(KERN_ERR "PCI: bad cfg reg num 0x%x\n", i); 239 + pci_err(dev, "bad cfg reg num 0x%x\n", i); 240 240 continue; 241 241 } 242 242 res->start = op_res->start; 243 243 res->end = op_res->end; 244 244 res->flags = flags; 245 245 res->name = pci_name(dev); 246 + 247 + pci_info(dev, "reg 0x%x: %pR\n", i, res); 246 248 } 247 249 } 248 250 ··· 291 289 type = ""; 292 290 293 291 if (ofpci_verbose) 294 - printk(" create device, devfn: %x, type: %s\n", 295 - devfn, type); 292 + pci_info(bus," create device, devfn: %x, type: %s\n", 293 + devfn, type); 296 294 297 295 dev->sysdata = node; 298 296 dev->dev.parent = bus->bridge; ··· 325 323 dev_set_name(&dev->dev, "%04x:%02x:%02x.%d", pci_domain_nr(bus), 326 324 dev->bus->number, PCI_SLOT(devfn), PCI_FUNC(devfn)); 327 325 328 - if (ofpci_verbose) 329 - printk(" class: 0x%x device name: %s\n", 330 - dev->class, pci_name(dev)); 331 - 332 326 /* I have seen IDE devices which will not respond to 333 327 * the bmdma simplex check reads if bus mastering is 334 328 * disabled. 
··· 351 353 dev->irq = PCI_IRQ_NONE; 352 354 } 353 355 356 + pci_info(dev, "[%04x:%04x] type %02x class %#08x\n", 357 + dev->vendor, dev->device, dev->hdr_type, dev->class); 358 + 354 359 pci_parse_of_addrs(sd->op, node, dev); 355 360 356 361 if (ofpci_verbose) 357 - printk(" adding to system ...\n"); 362 + pci_info(dev, " adding to system ...\n"); 358 363 359 364 pci_device_add(dev, bus); 360 365 ··· 431 430 u64 size; 432 431 433 432 if (ofpci_verbose) 434 - printk("of_scan_pci_bridge(%s)\n", node->full_name); 433 + pci_info(dev, "of_scan_pci_bridge(%s)\n", node->full_name); 435 434 436 435 /* parse bus-range property */ 437 436 busrange = of_get_property(node, "bus-range", &len); 438 437 if (busrange == NULL || len != 8) { 439 - printk(KERN_DEBUG "Can't get bus-range for PCI-PCI bridge %s\n", 438 + pci_info(dev, "Can't get bus-range for PCI-PCI bridge %s\n", 440 439 node->full_name); 441 440 return; 442 441 } 443 442 444 443 if (ofpci_verbose) 445 - printk(" Bridge bus range [%u --> %u]\n", 446 - busrange[0], busrange[1]); 444 + pci_info(dev, " Bridge bus range [%u --> %u]\n", 445 + busrange[0], busrange[1]); 447 446 448 447 ranges = of_get_property(node, "ranges", &len); 449 448 simba = 0; ··· 455 454 456 455 bus = pci_add_new_bus(dev->bus, dev, busrange[0]); 457 456 if (!bus) { 458 - printk(KERN_ERR "Failed to create pci bus for %s\n", 459 - node->full_name); 457 + pci_err(dev, "Failed to create pci bus for %s\n", 458 + node->full_name); 460 459 return; 461 460 } 462 461 ··· 465 464 bus->bridge_ctl = 0; 466 465 467 466 if (ofpci_verbose) 468 - printk(" Bridge ranges[%p] simba[%d]\n", 469 - ranges, simba); 467 + pci_info(dev, " Bridge ranges[%p] simba[%d]\n", 468 + ranges, simba); 470 469 471 470 /* parse ranges property, or cook one up by hand for Simba */ 472 471 /* PCI #address-cells == 3 and #size-cells == 2 always */ ··· 488 487 u64 start; 489 488 490 489 if (ofpci_verbose) 491 - printk(" RAW Range[%08x:%08x:%08x:%08x:%08x:%08x:" 492 - "%08x:%08x]\n", 493 - 
ranges[0], ranges[1], ranges[2], ranges[3], 494 - ranges[4], ranges[5], ranges[6], ranges[7]); 490 + pci_info(dev, " RAW Range[%08x:%08x:%08x:%08x:%08x:%08x:" 491 + "%08x:%08x]\n", 492 + ranges[0], ranges[1], ranges[2], ranges[3], 493 + ranges[4], ranges[5], ranges[6], ranges[7]); 495 494 496 495 flags = pci_parse_of_flags(ranges[0]); 497 496 size = GET_64BIT(ranges, 6); ··· 511 510 if (flags & IORESOURCE_IO) { 512 511 res = bus->resource[0]; 513 512 if (res->flags) { 514 - printk(KERN_ERR "PCI: ignoring extra I/O range" 515 - " for bridge %s\n", node->full_name); 513 + pci_err(dev, "ignoring extra I/O range" 514 + " for bridge %s\n", node->full_name); 516 515 continue; 517 516 } 518 517 } else { 519 518 if (i >= PCI_NUM_RESOURCES - PCI_BRIDGE_RESOURCES) { 520 - printk(KERN_ERR "PCI: too many memory ranges" 521 - " for bridge %s\n", node->full_name); 519 + pci_err(dev, "too many memory ranges" 520 + " for bridge %s\n", node->full_name); 522 521 continue; 523 522 } 524 523 res = bus->resource[i]; ··· 530 529 region.end = region.start + size - 1; 531 530 532 531 if (ofpci_verbose) 533 - printk(" Using flags[%08x] start[%016llx] size[%016llx]\n", 534 - flags, start, size); 532 + pci_info(dev, " Using flags[%08x] start[%016llx] size[%016llx]\n", 533 + flags, start, size); 535 534 536 535 pcibios_bus_to_resource(dev->bus, res, &region); 537 536 } ··· 539 538 sprintf(bus->name, "PCI Bus %04x:%02x", pci_domain_nr(bus), 540 539 bus->number); 541 540 if (ofpci_verbose) 542 - printk(" bus name: %s\n", bus->name); 541 + pci_info(dev, " bus name: %s\n", bus->name); 543 542 544 543 pci_of_scan_bus(pbm, node, bus); 545 544 } ··· 554 553 struct pci_dev *dev; 555 554 556 555 if (ofpci_verbose) 557 - printk("PCI: scan_bus[%s] bus no %d\n", 558 - node->full_name, bus->number); 556 + pci_info(bus, "scan_bus[%s] bus no %d\n", 557 + node->full_name, bus->number); 559 558 560 559 child = NULL; 561 560 prev_devfn = -1; 562 561 while ((child = of_get_next_child(node, child)) != NULL) { 
563 562 if (ofpci_verbose) 564 - printk(" * %s\n", child->full_name); 563 + pci_info(bus, " * %s\n", child->full_name); 565 564 reg = of_get_property(child, "reg", &reglen); 566 565 if (reg == NULL || reglen < 20) 567 566 continue; ··· 582 581 if (!dev) 583 582 continue; 584 583 if (ofpci_verbose) 585 - printk("PCI: dev header type: %x\n", 586 - dev->hdr_type); 584 + pci_info(dev, "dev header type: %x\n", dev->hdr_type); 587 585 588 586 if (pci_is_bridge(dev)) 589 587 of_scan_pci_bridge(pbm, child, dev); ··· 624 624 pci_bus_register_of_sysfs(child_bus); 625 625 } 626 626 627 + static void pci_claim_legacy_resources(struct pci_dev *dev) 628 + { 629 + struct pci_bus_region region; 630 + struct resource *p, *root, *conflict; 631 + 632 + if ((dev->class >> 8) != PCI_CLASS_DISPLAY_VGA) 633 + return; 634 + 635 + p = kzalloc(sizeof(*p), GFP_KERNEL); 636 + if (!p) 637 + return; 638 + 639 + p->name = "Video RAM area"; 640 + p->flags = IORESOURCE_MEM | IORESOURCE_BUSY; 641 + 642 + region.start = 0xa0000UL; 643 + region.end = region.start + 0x1ffffUL; 644 + pcibios_bus_to_resource(dev->bus, p, &region); 645 + 646 + root = pci_find_parent_resource(dev, p); 647 + if (!root) { 648 + pci_info(dev, "can't claim VGA legacy %pR: no compatible bridge window\n", p); 649 + goto err; 650 + } 651 + 652 + conflict = request_resource_conflict(root, p); 653 + if (conflict) { 654 + pci_info(dev, "can't claim VGA legacy %pR: address conflict with %s %pR\n", 655 + p, conflict->name, conflict); 656 + goto err; 657 + } 658 + 659 + pci_info(dev, "VGA legacy framebuffer %pR\n", p); 660 + return; 661 + 662 + err: 663 + kfree(p); 664 + } 665 + 627 666 static void pci_claim_bus_resources(struct pci_bus *bus) 628 667 { 629 668 struct pci_bus *child_bus; ··· 678 639 continue; 679 640 680 641 if (ofpci_verbose) 681 - printk("PCI: Claiming %s: " 682 - "Resource %d: %016llx..%016llx [%x]\n", 683 - pci_name(dev), i, 684 - (unsigned long long)r->start, 685 - (unsigned long long)r->end, 686 - (unsigned 
int)r->flags); 642 + pci_info(dev, "Claiming Resource %d: %pR\n", 643 + i, r); 687 644 688 645 pci_claim_resource(dev, i); 689 646 } 647 + 648 + pci_claim_legacy_resources(dev); 690 649 } 691 650 692 651 list_for_each_entry(child_bus, &bus->children, node) ··· 724 687 pci_bus_register_of_sysfs(bus); 725 688 726 689 pci_claim_bus_resources(bus); 690 + 727 691 pci_bus_add_devices(bus); 728 692 return bus; 729 693 } ··· 751 713 } 752 714 753 715 if (cmd != oldcmd) { 754 - printk(KERN_DEBUG "PCI: Enabling device: (%s), cmd %x\n", 755 - pci_name(dev), cmd); 756 - /* Enable the appropriate bits in the PCI command register. */ 716 + pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd); 757 717 pci_write_config_word(dev, PCI_COMMAND, cmd); 758 718 } 759 719 return 0; ··· 1111 1075 sp = prop->names; 1112 1076 1113 1077 if (ofpci_verbose) 1114 - printk("PCI: Making slots for [%s] mask[0x%02x]\n", 1115 - node->full_name, mask); 1078 + pci_info(bus, "Making slots for [%s] mask[0x%02x]\n", 1079 + node->full_name, mask); 1116 1080 1117 1081 i = 0; 1118 1082 while (mask) { ··· 1125 1089 } 1126 1090 1127 1091 if (ofpci_verbose) 1128 - printk("PCI: Making slot [%s]\n", sp); 1092 + pci_info(bus, "Making slot [%s]\n", sp); 1129 1093 1130 1094 pci_slot = pci_create_slot(bus, i, sp, NULL); 1131 1095 if (IS_ERR(pci_slot)) 1132 - printk(KERN_ERR "PCI: pci_create_slot returned %ld\n", 1133 - PTR_ERR(pci_slot)); 1096 + pci_err(bus, "pci_create_slot returned %ld\n", 1097 + PTR_ERR(pci_slot)); 1134 1098 1135 1099 sp += strlen(sp) + 1; 1136 1100 mask &= ~this_bit;
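The new pci_claim_legacy_resources() above claims the legacy VGA framebuffer window starting at bus address 0xa0000. A minimal userspace sketch of just the region arithmetic (the struct and helper names here are illustrative stand-ins, not the kernel's types):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the kernel's struct pci_bus_region. */
struct bus_region {
	uint64_t start;
	uint64_t end;	/* inclusive, as in the kernel */
};

/* Mirror the arithmetic in pci_claim_legacy_resources(): the VGA
 * framebuffer occupies [0xa0000, 0xbffff] in bus address space. */
static struct bus_region vga_legacy_region(void)
{
	struct bus_region r;

	r.start = 0xa0000UL;
	r.end = r.start + 0x1ffffUL;
	return r;
}

static uint64_t region_size(struct bus_region r)
{
	return r.end - r.start + 1;	/* bounds are inclusive */
}
```

The inclusive end bound makes the window exactly 128 KiB, the classic VGA memory aperture.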
+6 -25
arch/sparc/kernel/pci_common.c
··· 329 329 } 330 330 } 331 331 332 - static void pci_register_legacy_regions(struct resource *io_res, 333 - struct resource *mem_res) 334 - { 335 - struct resource *p; 336 - 337 - /* VGA Video RAM. */ 338 - p = kzalloc(sizeof(*p), GFP_KERNEL); 339 - if (!p) 340 - return; 341 - 342 - p->name = "Video RAM area"; 343 - p->start = mem_res->start + 0xa0000UL; 344 - p->end = p->start + 0x1ffffUL; 345 - p->flags = IORESOURCE_BUSY; 346 - request_resource(mem_res, p); 347 - } 348 - 349 332 static void pci_register_iommu_region(struct pci_pbm_info *pbm) 350 333 { 351 334 const u32 *vdma = of_get_property(pbm->op->dev.of_node, "virtual-dma", ··· 470 487 if (pbm->mem64_space.flags) 471 488 request_resource(&iomem_resource, &pbm->mem64_space); 472 489 473 - pci_register_legacy_regions(&pbm->io_space, 474 - &pbm->mem_space); 475 490 pci_register_iommu_region(pbm); 476 491 } 477 492 ··· 489 508 PCI_STATUS_REC_TARGET_ABORT)); 490 509 if (error_bits) { 491 510 pci_write_config_word(pdev, PCI_STATUS, error_bits); 492 - printk("%s: Device %s saw Target Abort [%016x]\n", 493 - pbm->name, pci_name(pdev), status); 511 + pci_info(pdev, "%s: Device saw Target Abort [%016x]\n", 512 + pbm->name, status); 494 513 } 495 514 } 496 515 ··· 512 531 (status & (PCI_STATUS_REC_MASTER_ABORT)); 513 532 if (error_bits) { 514 533 pci_write_config_word(pdev, PCI_STATUS, error_bits); 515 - printk("%s: Device %s received Master Abort [%016x]\n", 516 - pbm->name, pci_name(pdev), status); 534 + pci_info(pdev, "%s: Device received Master Abort " 535 + "[%016x]\n", pbm->name, status); 517 536 } 518 537 } 519 538 ··· 536 555 PCI_STATUS_DETECTED_PARITY)); 537 556 if (error_bits) { 538 557 pci_write_config_word(pdev, PCI_STATUS, error_bits); 539 - printk("%s: Device %s saw Parity Error [%016x]\n", 540 - pbm->name, pci_name(pdev), status); 558 + pci_info(pdev, "%s: Device saw Parity Error [%016x]\n", 559 + pbm->name, status); 541 560 } 542 561 } 543 562
+5 -5
arch/sparc/kernel/pci_msi.c
··· 191 191 break; 192 192 } 193 193 if (i >= pbm->msi_num) { 194 - printk(KERN_ERR "%s: teardown: No MSI for irq %u\n", 195 - pbm->name, irq); 194 + pci_err(pdev, "%s: teardown: No MSI for irq %u\n", pbm->name, 195 + irq); 196 196 return; 197 197 } 198 198 ··· 201 201 202 202 err = ops->msi_teardown(pbm, msi_num); 203 203 if (err) { 204 - printk(KERN_ERR "%s: teardown: ops->teardown() on MSI %u, " 205 - "irq %u, gives error %d\n", 206 - pbm->name, msi_num, irq, err); 204 + pci_err(pdev, "%s: teardown: ops->teardown() on MSI %u, " 205 + "irq %u, gives error %d\n", pbm->name, msi_num, irq, 206 + err); 207 207 return; 208 208 } 209 209
+41 -53
arch/sparc/kernel/pcic.c
··· 518 518 * board in a PCI slot. We must remap it 519 519 * under 64K but it is not done yet. XXX 520 520 */ 521 - printk("PCIC: Skipping I/O space at 0x%lx, " 522 - "this will Oops if a driver attaches " 523 - "device '%s' at %02x:%02x)\n", address, 524 - namebuf, dev->bus->number, dev->devfn); 521 + pci_info(dev, "PCIC: Skipping I/O space at " 522 + "0x%lx, this will Oops if a driver " 523 + "attaches device '%s'\n", address, 524 + namebuf); 525 525 } 526 526 } 527 527 } ··· 551 551 p++; 552 552 } 553 553 if (i >= pcic->pcic_imdim) { 554 - printk("PCIC: device %s devfn %02x:%02x not found in %d\n", 555 - namebuf, dev->bus->number, dev->devfn, pcic->pcic_imdim); 554 + pci_info(dev, "PCIC: device %s not found in %d\n", namebuf, 555 + pcic->pcic_imdim); 556 556 dev->irq = 0; 557 557 return; 558 558 } ··· 565 565 ivec = readw(pcic->pcic_regs+PCI_INT_SELECT_HI); 566 566 real_irq = ivec >> ((i-4) << 2) & 0xF; 567 567 } else { /* Corrupted map */ 568 - printk("PCIC: BAD PIN %d\n", i); for (;;) {} 568 + pci_info(dev, "PCIC: BAD PIN %d\n", i); for (;;) {} 569 569 } 570 570 /* P3 */ /* printk("PCIC: device %s pin %d ivec 0x%x irq %x\n", namebuf, i, ivec, dev->irq); */ 571 571 ··· 574 574 */ 575 575 if (real_irq == 0 || p->force) { 576 576 if (p->irq == 0 || p->irq >= 15) { /* Corrupted map */ 577 - printk("PCIC: BAD IRQ %d\n", p->irq); for (;;) {} 577 + pci_info(dev, "PCIC: BAD IRQ %d\n", p->irq); for (;;) {} 578 578 } 579 - printk("PCIC: setting irq %d at pin %d for device %02x:%02x\n", 580 - p->irq, p->pin, dev->bus->number, dev->devfn); 579 + pci_info(dev, "PCIC: setting irq %d at pin %d\n", p->irq, 580 + p->pin); 581 581 real_irq = p->irq; 582 582 583 583 i = p->pin; ··· 602 602 void pcibios_fixup_bus(struct pci_bus *bus) 603 603 { 604 604 struct pci_dev *dev; 605 - int i, has_io, has_mem; 606 - unsigned int cmd = 0; 607 605 struct linux_pcic *pcic; 608 606 /* struct linux_pbm_info* pbm = &pcic->pbm; */ 609 607 int node; 610 608 struct pcidev_cookie *pcp; 611 609 612 
610 if (!pcic0_up) { 613 - printk("pcibios_fixup_bus: no PCIC\n"); 611 + pci_info(bus, "pcibios_fixup_bus: no PCIC\n"); 614 612 return; 615 613 } 616 614 pcic = &pcic0; ··· 617 619 * Next crud is an equivalent of pbm = pcic_bus_to_pbm(bus); 618 620 */ 619 621 if (bus->number != 0) { 620 - printk("pcibios_fixup_bus: nonzero bus 0x%x\n", bus->number); 622 + pci_info(bus, "pcibios_fixup_bus: nonzero bus 0x%x\n", 623 + bus->number); 621 624 return; 622 625 } 623 626 624 627 list_for_each_entry(dev, &bus->devices, bus_list) { 625 - 626 - /* 627 - * Comment from i386 branch: 628 - * There are buggy BIOSes that forget to enable I/O and memory 629 - * access to PCI devices. We try to fix this, but we need to 630 - * be sure that the BIOS didn't forget to assign an address 631 - * to the device. [mj] 632 - * OBP is a case of such BIOS :-) 633 - */ 634 - has_io = has_mem = 0; 635 - for(i=0; i<6; i++) { 636 - unsigned long f = dev->resource[i].flags; 637 - if (f & IORESOURCE_IO) { 638 - has_io = 1; 639 - } else if (f & IORESOURCE_MEM) 640 - has_mem = 1; 641 - } 642 - pcic_read_config(dev->bus, dev->devfn, PCI_COMMAND, 2, &cmd); 643 - if (has_io && !(cmd & PCI_COMMAND_IO)) { 644 - printk("PCIC: Enabling I/O for device %02x:%02x\n", 645 - dev->bus->number, dev->devfn); 646 - cmd |= PCI_COMMAND_IO; 647 - pcic_write_config(dev->bus, dev->devfn, 648 - PCI_COMMAND, 2, cmd); 649 - } 650 - if (has_mem && !(cmd & PCI_COMMAND_MEMORY)) { 651 - printk("PCIC: Enabling memory for device %02x:%02x\n", 652 - dev->bus->number, dev->devfn); 653 - cmd |= PCI_COMMAND_MEMORY; 654 - pcic_write_config(dev->bus, dev->devfn, 655 - PCI_COMMAND, 2, cmd); 656 - } 657 - 658 628 node = pdev_to_pnode(&pcic->pbm, dev); 659 629 if(node == 0) 660 630 node = -1; ··· 639 673 640 674 pcic_fill_irq(pcic, dev, node); 641 675 } 676 + } 677 + 678 + int pcibios_enable_device(struct pci_dev *dev, int mask) 679 + { 680 + u16 cmd, oldcmd; 681 + int i; 682 + 683 + pci_read_config_word(dev, PCI_COMMAND, &cmd); 684 + 
oldcmd = cmd; 685 + 686 + for (i = 0; i < PCI_NUM_RESOURCES; i++) { 687 + struct resource *res = &dev->resource[i]; 688 + 689 + /* Only set up the requested stuff */ 690 + if (!(mask & (1<<i))) 691 + continue; 692 + 693 + if (res->flags & IORESOURCE_IO) 694 + cmd |= PCI_COMMAND_IO; 695 + if (res->flags & IORESOURCE_MEM) 696 + cmd |= PCI_COMMAND_MEMORY; 697 + } 698 + 699 + if (cmd != oldcmd) { 700 + pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd); 701 + pci_write_config_word(dev, PCI_COMMAND, cmd); 702 + } 703 + return 0; 642 704 } 643 705 644 706 /* Makes compiler happy */ ··· 741 747 } 742 748 #endif 743 749 744 - int pcibios_enable_device(struct pci_dev *pdev, int mask) 745 - { 746 - return 0; 747 - } 748 - 749 750 /* 750 751 * NMI 751 752 */ 752 753 void pcic_nmi(unsigned int pend, struct pt_regs *regs) 753 754 { 754 - 755 755 pend = swab32(pend); 756 756 757 757 if (!pcic_speculative || (pend & PCI_SYS_INT_PENDING_PIO) == 0) {
+5 -14
arch/x86/pci/early.c
··· 59 59 60 60 void early_dump_pci_device(u8 bus, u8 slot, u8 func) 61 61 { 62 + u32 value[256 / 4]; 62 63 int i; 63 - int j; 64 - u32 val; 65 64 66 - printk(KERN_INFO "pci 0000:%02x:%02x.%d config space:", 67 - bus, slot, func); 65 + pr_info("pci 0000:%02x:%02x.%d config space:\n", bus, slot, func); 68 66 69 - for (i = 0; i < 256; i += 4) { 70 - if (!(i & 0x0f)) 71 - printk("\n %02x:",i); 67 + for (i = 0; i < 256; i += 4) 68 + value[i / 4] = read_pci_config(bus, slot, func, i); 72 69 73 - val = read_pci_config(bus, slot, func, i); 74 - for (j = 0; j < 4; j++) { 75 - printk(" %02x", val & 0xff); 76 - val >>= 8; 77 - } 78 - } 79 - printk("\n"); 70 + print_hex_dump(KERN_INFO, "", DUMP_PREFIX_OFFSET, 16, 1, value, 256, false); 80 71 } 81 72 82 73 void early_dump_pci_devices(void)
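The rewritten early_dump_pci_device() reads all 256 bytes of config space into a buffer and hands it to print_hex_dump() instead of hand-rolling the output loop. A hedged userspace approximation of one DUMP_PREFIX_OFFSET row (the exact kernel formatting differs in detail; this only illustrates the 16-bytes-per-line, offset-prefixed shape):

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Format one row roughly the way print_hex_dump() does with
 * DUMP_PREFIX_OFFSET: "00000000: 86 80 ..." (illustrative only). */
static int hexdump_row(char *out, size_t outsz, size_t offset,
		       const uint8_t *buf, size_t len)
{
	int n = snprintf(out, outsz, "%08zx:", offset);

	for (size_t i = 0; i < len && i < 16; i++)
		n += snprintf(out + n, outsz - n, " %02x", buf[i]);
	return n;
}

/* Demo: the first dword of a config space (vendor ID 0x8086). */
static const char *demo_row(void)
{
	static char line[80];
	const uint8_t bytes[4] = { 0x86, 0x80, 0x00, 0x00 };

	hexdump_row(line, sizeof(line), 0, bytes, 4);
	return line;
}
```

Buffering the reads first also means the dump is emitted as whole lines, avoiding the interleaving the old per-byte printk() calls could suffer.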
+4
arch/x86/pci/fixup.c
··· 636 636 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2031, quirk_no_aersid); 637 637 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2032, quirk_no_aersid); 638 638 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2033, quirk_no_aersid); 639 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334a, quirk_no_aersid); 640 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334b, quirk_no_aersid); 641 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334c, quirk_no_aersid); 642 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334d, quirk_no_aersid); 639 643 640 644 #ifdef CONFIG_PHYS_ADDR_T_64BIT 641 645
-2
arch/xtensa/include/asm/pci.h
··· 20 20 21 21 #define pcibios_assign_all_busses() 0 22 22 23 - extern struct pci_controller* pcibios_alloc_controller(void); 24 - 25 23 /* Assume some values. (We should revise them, if necessary) */ 26 24 27 25 #define PCIBIOS_MIN_IO 0x2000
+4 -65
arch/xtensa/kernel/pci.c
··· 41 41 * pci_bus_add_device 42 42 */ 43 43 44 - struct pci_controller* pci_ctrl_head; 45 - struct pci_controller** pci_ctrl_tail = &pci_ctrl_head; 44 + static struct pci_controller *pci_ctrl_head; 45 + static struct pci_controller **pci_ctrl_tail = &pci_ctrl_head; 46 46 47 47 static int pci_bus_count; 48 48 ··· 78 78 } 79 79 80 80 return start; 81 - } 82 - 83 - int 84 - pcibios_enable_resources(struct pci_dev *dev, int mask) 85 - { 86 - u16 cmd, old_cmd; 87 - int idx; 88 - struct resource *r; 89 - 90 - pci_read_config_word(dev, PCI_COMMAND, &cmd); 91 - old_cmd = cmd; 92 - for(idx=0; idx<6; idx++) { 93 - r = &dev->resource[idx]; 94 - if (!r->start && r->end) { 95 - pr_err("PCI: Device %s not available because " 96 - "of resource collisions\n", pci_name(dev)); 97 - return -EINVAL; 98 - } 99 - if (r->flags & IORESOURCE_IO) 100 - cmd |= PCI_COMMAND_IO; 101 - if (r->flags & IORESOURCE_MEM) 102 - cmd |= PCI_COMMAND_MEMORY; 103 - } 104 - if (dev->resource[PCI_ROM_RESOURCE].start) 105 - cmd |= PCI_COMMAND_MEMORY; 106 - if (cmd != old_cmd) { 107 - pr_info("PCI: Enabling device %s (%04x -> %04x)\n", 108 - pci_name(dev), old_cmd, cmd); 109 - pci_write_config_word(dev, PCI_COMMAND, cmd); 110 - } 111 - return 0; 112 - } 113 - 114 - struct pci_controller * __init pcibios_alloc_controller(void) 115 - { 116 - struct pci_controller *pci_ctrl; 117 - 118 - pci_ctrl = (struct pci_controller *)alloc_bootmem(sizeof(*pci_ctrl)); 119 - memset(pci_ctrl, 0, sizeof(struct pci_controller)); 120 - 121 - *pci_ctrl_tail = pci_ctrl; 122 - pci_ctrl_tail = &pci_ctrl->next; 123 - 124 - return pci_ctrl; 125 81 } 126 82 127 83 static void __init pci_controller_apertures(struct pci_controller *pci_ctrl, ··· 179 223 for (idx=0; idx<6; idx++) { 180 224 r = &dev->resource[idx]; 181 225 if (!r->start && r->end) { 182 - pr_err("PCI: Device %s not available because " 183 - "of resource collisions\n", pci_name(dev)); 226 + pci_err(dev, "can't enable device: resource collisions\n"); 184 227 return -EINVAL; 
185 228 } 186 229 if (r->flags & IORESOURCE_IO) ··· 188 233 cmd |= PCI_COMMAND_MEMORY; 189 234 } 190 235 if (cmd != old_cmd) { 191 - pr_info("PCI: Enabling device %s (%04x -> %04x)\n", 192 - pci_name(dev), old_cmd, cmd); 236 + pci_info(dev, "enabling device (%04x -> %04x)\n", old_cmd, cmd); 193 237 pci_write_config_word(dev, PCI_COMMAND, cmd); 194 238 } 195 239 196 240 return 0; 197 241 } 198 - 199 - #ifdef CONFIG_PROC_FS 200 - 201 - /* 202 - * Return the index of the PCI controller for device pdev. 203 - */ 204 - 205 - int 206 - pci_controller_num(struct pci_dev *dev) 207 - { 208 - struct pci_controller *pci_ctrl = (struct pci_controller*) dev->sysdata; 209 - return pci_ctrl->index; 210 - } 211 - 212 - #endif /* CONFIG_PROC_FS */ 213 242 214 243 /* 215 244 * Platform support for /proc/bus/pci/X/Y mmap()s.
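The sparc and xtensa pcibios_enable_device() paths above share the same command-word computation before logging "enabling device (%04x -> %04x)". That logic can be sketched as a pure function (the PCI_COMMAND_* values match the PCI spec; the flag stand-ins and helper name are mine):

```c
#include <assert.h>
#include <stdint.h>

#define PCI_COMMAND_IO		0x1	/* respond in I/O space */
#define PCI_COMMAND_MEMORY	0x2	/* respond in memory space */

#define RES_IO	0x1	/* stand-ins for IORESOURCE_IO / IORESOURCE_MEM */
#define RES_MEM	0x2

/* Compute the new PCI_COMMAND value from the resource flags of the
 * BARs selected by @mask, as pcibios_enable_device() does. */
static uint16_t pci_enable_cmd(uint16_t oldcmd, const unsigned *res_flags,
			       int nres, int mask)
{
	uint16_t cmd = oldcmd;

	for (int i = 0; i < nres; i++) {
		if (!(mask & (1 << i)))
			continue;	/* only the requested resources */
		if (res_flags[i] & RES_IO)
			cmd |= PCI_COMMAND_IO;
		if (res_flags[i] & RES_MEM)
			cmd |= PCI_COMMAND_MEMORY;
	}
	return cmd;
}
```

The device is only written back when the computed word actually differs from the old one, which is also what gates the log line.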
+15 -2
drivers/acpi/pci_root.c
··· 153 153 { OSC_PCI_EXPRESS_PME_CONTROL, "PME" }, 154 154 { OSC_PCI_EXPRESS_AER_CONTROL, "AER" }, 155 155 { OSC_PCI_EXPRESS_CAPABILITY_CONTROL, "PCIeCapability" }, 156 + { OSC_PCI_EXPRESS_LTR_CONTROL, "LTR" }, 156 157 }; 157 158 158 159 static void decode_osc_bits(struct acpi_pci_root *root, char *msg, u32 word, ··· 473 472 } 474 473 475 474 control = OSC_PCI_EXPRESS_CAPABILITY_CONTROL 476 - | OSC_PCI_EXPRESS_NATIVE_HP_CONTROL 477 475 | OSC_PCI_EXPRESS_PME_CONTROL; 476 + 477 + if (IS_ENABLED(CONFIG_PCIEASPM)) 478 + control |= OSC_PCI_EXPRESS_LTR_CONTROL; 479 + 480 + if (IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE)) 481 + control |= OSC_PCI_EXPRESS_NATIVE_HP_CONTROL; 482 + 483 + if (IS_ENABLED(CONFIG_HOTPLUG_PCI_SHPC)) 484 + control |= OSC_PCI_SHPC_NATIVE_HP_CONTROL; 478 485 479 486 if (pci_aer_available()) { 480 487 if (aer_acpi_firmware_first()) ··· 909 900 910 901 host_bridge = to_pci_host_bridge(bus->bridge); 911 902 if (!(root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL)) 912 - host_bridge->native_hotplug = 0; 903 + host_bridge->native_pcie_hotplug = 0; 904 + if (!(root->osc_control_set & OSC_PCI_SHPC_NATIVE_HP_CONTROL)) 905 + host_bridge->native_shpc_hotplug = 0; 913 906 if (!(root->osc_control_set & OSC_PCI_EXPRESS_AER_CONTROL)) 914 907 host_bridge->native_aer = 0; 915 908 if (!(root->osc_control_set & OSC_PCI_EXPRESS_PME_CONTROL)) 916 909 host_bridge->native_pme = 0; 910 + if (!(root->osc_control_set & OSC_PCI_EXPRESS_LTR_CONTROL)) 911 + host_bridge->native_ltr = 0; 917 912 918 913 pci_scan_child_bus(bus); 919 914 pci_set_host_bridge_release(host_bridge, acpi_pci_root_release_info,
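The pci_root.c change above makes the kernel request _OSC control of LTR, PCIe native hotplug, and SHPC only when the corresponding support is actually compiled in. A sketch of the request-word construction (the bit values follow the PCI Firmware Specification's _OSC control field; the boolean parameters stand in for the IS_ENABLED() config checks):

```c
#include <assert.h>
#include <stdint.h>

/* _OSC PCI control field bits (PCI Firmware Specification). */
#define OSC_PCI_EXPRESS_NATIVE_HP_CONTROL	0x0001
#define OSC_PCI_SHPC_NATIVE_HP_CONTROL		0x0002
#define OSC_PCI_EXPRESS_PME_CONTROL		0x0004
#define OSC_PCI_EXPRESS_AER_CONTROL		0x0008
#define OSC_PCI_EXPRESS_CAPABILITY_CONTROL	0x0010
#define OSC_PCI_EXPRESS_LTR_CONTROL		0x0020

/* Build the control word requested from firmware, mirroring the
 * updated logic in negotiate_os_control(). */
static uint32_t osc_request(int aspm, int pcie_hp, int shpc)
{
	uint32_t control = OSC_PCI_EXPRESS_CAPABILITY_CONTROL
			 | OSC_PCI_EXPRESS_PME_CONTROL;

	if (aspm)	/* CONFIG_PCIEASPM: LTR is needed for ASPM L1.2 */
		control |= OSC_PCI_EXPRESS_LTR_CONTROL;
	if (pcie_hp)	/* CONFIG_HOTPLUG_PCI_PCIE */
		control |= OSC_PCI_EXPRESS_NATIVE_HP_CONTROL;
	if (shpc)	/* CONFIG_HOTPLUG_PCI_SHPC */
		control |= OSC_PCI_SHPC_NATIVE_HP_CONTROL;
	return control;
}
```

Whatever firmware declines to grant is then reflected back into the host bridge's native_* flags, as the second hunk shows.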
+8 -16
drivers/infiniband/hw/hfi1/pcie.c
··· 56 56 #include "chip_registers.h" 57 57 #include "aspm.h" 58 58 59 - /* link speed vector for Gen3 speed - not in Linux headers */ 60 - #define GEN1_SPEED_VECTOR 0x1 61 - #define GEN2_SPEED_VECTOR 0x2 62 - #define GEN3_SPEED_VECTOR 0x3 63 - 64 59 /* 65 60 * This file contains PCIe utility routines. 66 61 */ ··· 257 262 case PCI_EXP_LNKSTA_CLS_5_0GB: 258 263 speed = 5000; /* Gen 2, 5GHz */ 259 264 break; 260 - case GEN3_SPEED_VECTOR: 265 + case PCI_EXP_LNKSTA_CLS_8_0GB: 261 266 speed = 8000; /* Gen 3, 8GHz */ 262 267 break; 263 268 } ··· 312 317 return ret; 313 318 } 314 319 315 - if ((linkcap & PCI_EXP_LNKCAP_SLS) != GEN3_SPEED_VECTOR) { 320 + if ((linkcap & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_8_0GB) { 316 321 dd_dev_info(dd, 317 322 "This HFI is not Gen3 capable, max speed 0x%x, need 0x3\n", 318 323 linkcap & PCI_EXP_LNKCAP_SLS); ··· 689 694 /* gasket block secondary bus reset delay */ 690 695 #define SBR_DELAY_US 200000 /* 200ms */ 691 696 692 - /* mask for PCIe capability register lnkctl2 target link speed */ 693 - #define LNKCTL2_TARGET_LINK_SPEED_MASK 0xf 694 - 695 697 static uint pcie_target = 3; 696 698 module_param(pcie_target, uint, S_IRUGO); 697 699 MODULE_PARM_DESC(pcie_target, "PCIe target speed (0 skip, 1-3 Gen1-3)"); ··· 1037 1045 return 0; 1038 1046 1039 1047 if (pcie_target == 1) { /* target Gen1 */ 1040 - target_vector = GEN1_SPEED_VECTOR; 1048 + target_vector = PCI_EXP_LNKCTL2_TLS_2_5GT; 1041 1049 target_speed = 2500; 1042 1050 } else if (pcie_target == 2) { /* target Gen2 */ 1043 - target_vector = GEN2_SPEED_VECTOR; 1051 + target_vector = PCI_EXP_LNKCTL2_TLS_5_0GT; 1044 1052 target_speed = 5000; 1045 1053 } else if (pcie_target == 3) { /* target Gen3 */ 1046 - target_vector = GEN3_SPEED_VECTOR; 1054 + target_vector = PCI_EXP_LNKCTL2_TLS_8_0GT; 1047 1055 target_speed = 8000; 1048 1056 } else { 1049 1057 /* off or invalid target - skip */ ··· 1282 1290 dd_dev_info(dd, "%s: ..old link control2: 0x%x\n", __func__, 1283 1291 (u32)lnkctl2); 
1284 1292 /* only write to parent if target is not as high as ours */ 1285 - if ((lnkctl2 & LNKCTL2_TARGET_LINK_SPEED_MASK) < target_vector) { 1286 - lnkctl2 &= ~LNKCTL2_TARGET_LINK_SPEED_MASK; 1293 + if ((lnkctl2 & PCI_EXP_LNKCTL2_TLS) < target_vector) { 1294 + lnkctl2 &= ~PCI_EXP_LNKCTL2_TLS; 1287 1295 lnkctl2 |= target_vector; 1288 1296 dd_dev_info(dd, "%s: ..new link control2: 0x%x\n", __func__, 1289 1297 (u32)lnkctl2); ··· 1308 1316 1309 1317 dd_dev_info(dd, "%s: ..old link control2: 0x%x\n", __func__, 1310 1318 (u32)lnkctl2); 1311 - lnkctl2 &= ~LNKCTL2_TARGET_LINK_SPEED_MASK; 1319 + lnkctl2 &= ~PCI_EXP_LNKCTL2_TLS; 1312 1320 lnkctl2 |= target_vector; 1313 1321 dd_dev_info(dd, "%s: ..new link control2: 0x%x\n", __func__, 1314 1322 (u32)lnkctl2);
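The hfi1 cleanup replaces the driver-private GEN*_SPEED_VECTOR constants with the PCI_EXP_LNKCTL2_TLS_* values now available from the PCI core headers. The read-modify-write of the Link Control 2 Target Link Speed field it performs can be sketched as:

```c
#include <assert.h>
#include <stdint.h>

#define PCI_EXP_LNKCTL2_TLS		0x000f	/* Target Link Speed field */
#define PCI_EXP_LNKCTL2_TLS_2_5GT	0x0001	/* Gen1 */
#define PCI_EXP_LNKCTL2_TLS_5_0GT	0x0002	/* Gen2 */
#define PCI_EXP_LNKCTL2_TLS_8_0GT	0x0003	/* Gen3 */

/* Replace the Target Link Speed field while preserving the other
 * LNKCTL2 bits, as the hfi1 Gen3 training code does before asking
 * for link retraining. */
static uint16_t set_target_link_speed(uint16_t lnkctl2, uint16_t target)
{
	lnkctl2 &= ~PCI_EXP_LNKCTL2_TLS;
	lnkctl2 |= target;
	return lnkctl2;
}
```

Using the shared mask instead of the old LNKCTL2_TARGET_LINK_SPEED_MASK define keeps the driver in sync with the core's register layout.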
+8 -3
drivers/iommu/amd_iommu.c
··· 354 354 }; 355 355 int i, pos; 356 356 357 + if (pci_ats_disabled()) 358 + return false; 359 + 357 360 for (i = 0; i < 3; ++i) { 358 361 pos = pci_find_ext_capability(pdev, caps[i]); 359 362 if (pos == 0) ··· 3526 3523 3527 3524 memset(info, 0, sizeof(*info)); 3528 3525 3529 - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ATS); 3530 - if (pos) 3531 - info->flags |= AMD_IOMMU_DEVICE_FLAG_ATS_SUP; 3526 + if (!pci_ats_disabled()) { 3527 + pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ATS); 3528 + if (pos) 3529 + info->flags |= AMD_IOMMU_DEVICE_FLAG_ATS_SUP; 3530 + } 3532 3531 3533 3532 pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); 3534 3533 if (pos)
+2 -1
drivers/iommu/intel-iommu.c
··· 2459 2459 if (dev && dev_is_pci(dev)) { 2460 2460 struct pci_dev *pdev = to_pci_dev(info->dev); 2461 2461 2462 - if (ecap_dev_iotlb_support(iommu->ecap) && 2462 + if (!pci_ats_disabled() && 2463 + ecap_dev_iotlb_support(iommu->ecap) && 2463 2464 pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ATS) && 2464 2465 dmar_find_matched_atsr_unit(pdev)) 2465 2466 info->ats_supported = 1;
+15 -14
drivers/misc/pci_endpoint_test.c
···
203 203 if (!val)
204 204 return false;
205 205
206 - if (test->last_irq - pdev->irq == msi_num - 1)
206 + if (pci_irq_vector(pdev, msi_num - 1) == test->last_irq)
207 207 return true;
208 208
209 209 return false;
···
233 233 orig_src_addr = dma_alloc_coherent(dev, size + alignment,
234 234 &orig_src_phys_addr, GFP_KERNEL);
235 235 if (!orig_src_addr) {
236 - dev_err(dev, "failed to allocate source buffer\n");
236 + dev_err(dev, "Failed to allocate source buffer\n");
237 237 ret = false;
238 238 goto err;
239 239 }
···
259 259 orig_dst_addr = dma_alloc_coherent(dev, size + alignment,
260 260 &orig_dst_phys_addr, GFP_KERNEL);
261 261 if (!orig_dst_addr) {
262 - dev_err(dev, "failed to allocate destination address\n");
262 + dev_err(dev, "Failed to allocate destination address\n");
263 263 ret = false;
264 264 goto err_orig_src_addr;
265 265 }
···
321 321 orig_addr = dma_alloc_coherent(dev, size + alignment, &orig_phys_addr,
322 322 GFP_KERNEL);
323 323 if (!orig_addr) {
324 - dev_err(dev, "failed to allocate address\n");
324 + dev_err(dev, "Failed to allocate address\n");
325 325 ret = false;
326 326 goto err;
327 327 }
···
382 382 orig_addr = dma_alloc_coherent(dev, size + alignment, &orig_phys_addr,
383 383 GFP_KERNEL);
384 384 if (!orig_addr) {
385 - dev_err(dev, "failed to allocate destination address\n");
385 + dev_err(dev, "Failed to allocate destination address\n");
386 386 ret = false;
387 387 goto err;
388 388 }
···
513 513 if (!no_msi) {
514 514 irq = pci_alloc_irq_vectors(pdev, 1, 32, PCI_IRQ_MSI);
515 515 if (irq < 0)
516 - dev_err(dev, "failed to get MSI interrupts\n");
516 + dev_err(dev, "Failed to get MSI interrupts\n");
517 517 test->num_irqs = irq;
518 518 }
519 519
520 520 err = devm_request_irq(dev, pdev->irq, pci_endpoint_test_irqhandler,
521 521 IRQF_SHARED, DRV_MODULE_NAME, test);
522 522 if (err) {
523 - dev_err(dev, "failed to request IRQ %d\n", pdev->irq);
523 + dev_err(dev, "Failed to request IRQ %d\n", pdev->irq);
524 524 goto err_disable_msi;
525 525 }
526 526
527 527 for (i = 1; i < irq; i++) {
528 - err = devm_request_irq(dev, pdev->irq + i,
528 + err = devm_request_irq(dev, pci_irq_vector(pdev, i),
529 529 pci_endpoint_test_irqhandler,
530 530 IRQF_SHARED, DRV_MODULE_NAME, test);
531 531 if (err)
532 532 dev_err(dev, "failed to request IRQ %d for MSI %d\n",
533 - pdev->irq + i, i + 1);
533 + pci_irq_vector(pdev, i), i + 1);
534 534 }
535 535
536 536 for (bar = BAR_0; bar <= BAR_5; bar++) {
537 537 if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM) {
538 538 base = pci_ioremap_bar(pdev, bar);
539 539 if (!base) {
540 - dev_err(dev, "failed to read BAR%d\n", bar);
540 + dev_err(dev, "Failed to read BAR%d\n", bar);
541 541 WARN_ON(bar == test_reg_bar);
542 542 }
543 543 test->bar[bar] = base;
···
557 557 id = ida_simple_get(&pci_endpoint_test_ida, 0, 0, GFP_KERNEL);
558 558 if (id < 0) {
559 559 err = id;
560 - dev_err(dev, "unable to get id\n");
560 + dev_err(dev, "Unable to get id\n");
561 561 goto err_iounmap;
562 562 }
···
573 573
574 574 err = misc_register(misc_device);
575 575 if (err) {
576 - dev_err(dev, "failed to register device\n");
576 + dev_err(dev, "Failed to register device\n");
577 577 goto err_kfree_name;
578 578 }
···
592 592 }
593 593
594 594 for (i = 0; i < irq; i++)
595 - devm_free_irq(dev, pdev->irq + i, test);
595 + devm_free_irq(&pdev->dev, pci_irq_vector(pdev, i), test);
596 596
597 597 err_disable_msi:
598 598 pci_disable_msi(pdev);
···
625 625 pci_iounmap(pdev, test->bar[bar]);
626 626 }
627 627 for (i = 0; i < test->num_irqs; i++)
628 - devm_free_irq(&pdev->dev, pdev->irq + i, test);
628 + devm_free_irq(&pdev->dev, pci_irq_vector(pdev, i), test);
629 629 pci_disable_msi(pdev);
630 630 pci_release_regions(pdev);
631 631 pci_disable_device(pdev);
···
634 634 static const struct pci_device_id pci_endpoint_test_tbl[] = {
635 635 { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA74x) },
636 636 { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA72x) },
637 + { PCI_DEVICE(PCI_VENDOR_ID_SYNOPSYS, 0xedda) },
637 638 { }
638 639 };
639 640 MODULE_DEVICE_TABLE(pci, pci_endpoint_test_tbl);
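The endpoint-test fix above stops assuming MSI vectors map to consecutive Linux IRQ numbers (pdev->irq + i) and asks pci_irq_vector() for the real mapping. A toy illustration of why the arithmetic breaks when vectors are not contiguous (the lookup table is invented for this example):

```c
#include <assert.h>

/* Hypothetical per-device vector table: Linux IRQ numbers for MSI
 * vectors 0..3.  Note they are deliberately not consecutive. */
static const int vec_to_irq[4] = { 24, 25, 40, 41 };

/* Stands in for pci_irq_vector(): look the IRQ up, don't compute it. */
static int irq_vector(int nr)
{
	return vec_to_irq[nr];
}
```

With a table like this, `base + i` arithmetic would request and free the wrong IRQs for vectors 2 and 3, which is exactly the bug class the patch removes.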
+1 -27
drivers/net/ethernet/amazon/ena/ena_netdev.c
··· 3386 3386 } 3387 3387 3388 3388 /*****************************************************************************/ 3389 - static int ena_sriov_configure(struct pci_dev *dev, int numvfs) 3390 - { 3391 - int rc; 3392 - 3393 - if (numvfs > 0) { 3394 - rc = pci_enable_sriov(dev, numvfs); 3395 - if (rc != 0) { 3396 - dev_err(&dev->dev, 3397 - "pci_enable_sriov failed to enable: %d vfs with the error: %d\n", 3398 - numvfs, rc); 3399 - return rc; 3400 - } 3401 - 3402 - return numvfs; 3403 - } 3404 - 3405 - if (numvfs == 0) { 3406 - pci_disable_sriov(dev); 3407 - return 0; 3408 - } 3409 - 3410 - return -EINVAL; 3411 - } 3412 - 3413 - /*****************************************************************************/ 3414 - /*****************************************************************************/ 3415 3389 3416 3390 /* ena_remove - Device Removal Routine 3417 3391 * @pdev: PCI device information struct ··· 3500 3526 .suspend = ena_suspend, 3501 3527 .resume = ena_resume, 3502 3528 #endif 3503 - .sriov_configure = ena_sriov_configure, 3529 + .sriov_configure = pci_sriov_configure_simple, 3504 3530 }; 3505 3531 3506 3532 static int __init ena_init(void)
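The ena_sriov_configure() body removed above duplicated logic now provided by the core's pci_sriov_configure_simple(): enable numvfs VFs and return that count, disable them on zero, reject anything else. A pure-function sketch of that contract (error codes as plain ints, the enable/disable side effects elided):

```c
#include <assert.h>

#define EINVAL 22

/* Return-value contract of a simple .sriov_configure callback:
 * >0 = VFs enabled, 0 = VFs disabled, negative errno on bad input. */
static int sriov_configure_simple(int numvfs)
{
	if (numvfs > 0)
		return numvfs;	/* would call pci_enable_sriov() */
	if (numvfs == 0)
		return 0;	/* would call pci_disable_sriov() */
	return -EINVAL;
}
```

Drivers with no per-VF setup of their own can point .sriov_configure at the core helper and delete the boilerplate, as ena does here.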
+6 -17
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 13922 13922 { 13923 13923 struct net_device *dev = NULL; 13924 13924 struct bnx2x *bp; 13925 - enum pcie_link_width pcie_width; 13926 - enum pci_bus_speed pcie_speed; 13927 13925 int rc, max_non_def_sbs; 13928 13926 int rx_count, tx_count, rss_count, doorbell_size; 13929 13927 int max_cos_est; ··· 14089 14091 dev_addr_add(bp->dev, bp->fip_mac, NETDEV_HW_ADDR_T_SAN); 14090 14092 rtnl_unlock(); 14091 14093 } 14092 - if (pcie_get_minimum_link(bp->pdev, &pcie_speed, &pcie_width) || 14093 - pcie_speed == PCI_SPEED_UNKNOWN || 14094 - pcie_width == PCIE_LNK_WIDTH_UNKNOWN) 14095 - BNX2X_DEV_INFO("Failed to determine PCI Express Bandwidth\n"); 14096 - else 14097 - BNX2X_DEV_INFO( 14098 - "%s (%c%d) PCI-E x%d %s found at mem %lx, IRQ %d, node addr %pM\n", 14099 - board_info[ent->driver_data].name, 14100 - (CHIP_REV(bp) >> 12) + 'A', (CHIP_METAL(bp) >> 4), 14101 - pcie_width, 14102 - pcie_speed == PCIE_SPEED_2_5GT ? "2.5GHz" : 14103 - pcie_speed == PCIE_SPEED_5_0GT ? "5.0GHz" : 14104 - pcie_speed == PCIE_SPEED_8_0GT ? "8.0GHz" : 14105 - "Unknown", 14106 - dev->base_addr, bp->pdev->irq, dev->dev_addr); 14094 + BNX2X_DEV_INFO( 14095 + "%s (%c%d) PCI-E found at mem %lx, IRQ %d, node addr %pM\n", 14096 + board_info[ent->driver_data].name, 14097 + (CHIP_REV(bp) >> 12) + 'A', (CHIP_METAL(bp) >> 4), 14098 + dev->base_addr, bp->pdev->irq, dev->dev_addr); 14099 + pcie_print_link_status(bp->pdev); 14107 14100 14108 14101 bnx2x_register_phc(bp); 14109 14102
+1 -18
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 8685 8685 return rc; 8686 8686 } 8687 8687 8688 - static void bnxt_parse_log_pcie_link(struct bnxt *bp) 8689 - { 8690 - enum pcie_link_width width = PCIE_LNK_WIDTH_UNKNOWN; 8691 - enum pci_bus_speed speed = PCI_SPEED_UNKNOWN; 8692 - 8693 - if (pcie_get_minimum_link(pci_physfn(bp->pdev), &speed, &width) || 8694 - speed == PCI_SPEED_UNKNOWN || width == PCIE_LNK_WIDTH_UNKNOWN) 8695 - netdev_info(bp->dev, "Failed to determine PCIe Link Info\n"); 8696 - else 8697 - netdev_info(bp->dev, "PCIe: Speed %s Width x%d\n", 8698 - speed == PCIE_SPEED_2_5GT ? "2.5GT/s" : 8699 - speed == PCIE_SPEED_5_0GT ? "5.0GT/s" : 8700 - speed == PCIE_SPEED_8_0GT ? "8.0GT/s" : 8701 - "Unknown", width); 8702 - } 8703 - 8704 8688 static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) 8705 8689 { 8706 8690 static int version_printed; ··· 8899 8915 netdev_info(dev, "%s found at mem %lx, node addr %pM\n", 8900 8916 board_info[ent->driver_data].name, 8901 8917 (long)pci_resource_start(pdev, 0), dev->dev_addr); 8902 - 8903 - bnxt_parse_log_pcie_link(bp); 8918 + pcie_print_link_status(pdev); 8904 8919 8905 8920 return 0; 8906 8921
+1 -74
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
···
5066 5066 return 0;
5067 5067 }
5068 5068
5069 - static int cxgb4_get_pcie_dev_link_caps(struct adapter *adap,
5070 - enum pci_bus_speed *speed,
5071 - enum pcie_link_width *width)
5072 - {
5073 - u32 lnkcap1, lnkcap2;
5074 - int err1, err2;
5075 -
5076 - #define PCIE_MLW_CAP_SHIFT 4 /* start of MLW mask in link capabilities */
5077 -
5078 - *speed = PCI_SPEED_UNKNOWN;
5079 - *width = PCIE_LNK_WIDTH_UNKNOWN;
5080 -
5081 - err1 = pcie_capability_read_dword(adap->pdev, PCI_EXP_LNKCAP,
5082 - &lnkcap1);
5083 - err2 = pcie_capability_read_dword(adap->pdev, PCI_EXP_LNKCAP2,
5084 - &lnkcap2);
5085 - if (!err2 && lnkcap2) { /* PCIe r3.0-compliant */
5086 - if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_8_0GB)
5087 - *speed = PCIE_SPEED_8_0GT;
5088 - else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_5_0GB)
5089 - *speed = PCIE_SPEED_5_0GT;
5090 - else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_2_5GB)
5091 - *speed = PCIE_SPEED_2_5GT;
5092 - }
5093 - if (!err1) {
5094 - *width = (lnkcap1 & PCI_EXP_LNKCAP_MLW) >> PCIE_MLW_CAP_SHIFT;
5095 - if (!lnkcap2) { /* pre-r3.0 */
5096 - if (lnkcap1 & PCI_EXP_LNKCAP_SLS_5_0GB)
5097 - *speed = PCIE_SPEED_5_0GT;
5098 - else if (lnkcap1 & PCI_EXP_LNKCAP_SLS_2_5GB)
5099 - *speed = PCIE_SPEED_2_5GT;
5100 - }
5101 - }
5102 -
5103 - if (*speed == PCI_SPEED_UNKNOWN || *width == PCIE_LNK_WIDTH_UNKNOWN)
5104 - return err1 ? err1 : err2 ? err2 : -EINVAL;
5105 - return 0;
5106 - }
5107 -
5108 - static void cxgb4_check_pcie_caps(struct adapter *adap)
5109 - {
5110 - enum pcie_link_width width, width_cap;
5111 - enum pci_bus_speed speed, speed_cap;
5112 -
5113 - #define PCIE_SPEED_STR(speed) \
5114 - (speed == PCIE_SPEED_8_0GT ? "8.0GT/s" : \
5115 - speed == PCIE_SPEED_5_0GT ? "5.0GT/s" : \
5116 - speed == PCIE_SPEED_2_5GT ? "2.5GT/s" : \
5117 - "Unknown")
5118 -
5119 - if (cxgb4_get_pcie_dev_link_caps(adap, &speed_cap, &width_cap)) {
5120 - dev_warn(adap->pdev_dev,
5121 - "Unable to determine PCIe device BW capabilities\n");
5122 - return;
5123 - }
5124 -
5125 - if (pcie_get_minimum_link(adap->pdev, &speed, &width) ||
5126 - speed == PCI_SPEED_UNKNOWN || width == PCIE_LNK_WIDTH_UNKNOWN) {
5127 - dev_warn(adap->pdev_dev,
5128 - "Unable to determine PCI Express bandwidth.\n");
5129 - return;
5130 - }
5131 -
5132 - dev_info(adap->pdev_dev, "PCIe link speed is %s, device supports %s\n",
5133 - PCIE_SPEED_STR(speed), PCIE_SPEED_STR(speed_cap));
5134 - dev_info(adap->pdev_dev, "PCIe link width is x%d, device supports x%d\n",
5135 - width, width_cap);
5136 - if (speed < speed_cap || width < width_cap)
5137 - dev_info(adap->pdev_dev,
5138 - "A slot with more lanes and/or higher speed is "
5139 - "suggested for optimal performance.\n");
5140 - }
5141 -
5142 5069 /* Dump basic information about the adapter */
5143 5070 static void print_adapter_info(struct adapter *adapter)
5144 5071 {
···
5725 5798 }
5726 5799
5727 5800 /* check for PCI Express bandwidth capabiltites */
5728 - cxgb4_check_pcie_caps(adapter);
5801 + pcie_print_link_status(pdev);
5729 5802
5730 5803 err = init_rss(adapter);
5731 5804 if (err)
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c (+1 -46)

···
 				 int expected_gts)
 {
 	struct ixgbe_hw *hw = &adapter->hw;
-	int max_gts = 0;
-	enum pci_bus_speed speed = PCI_SPEED_UNKNOWN;
-	enum pcie_link_width width = PCIE_LNK_WIDTH_UNKNOWN;
 	struct pci_dev *pdev;
 
 	/* Some devices are not connected over PCIe and thus do not negotiate
···
 	else
 		pdev = adapter->pdev;
 
-	if (pcie_get_minimum_link(pdev, &speed, &width) ||
-	    speed == PCI_SPEED_UNKNOWN || width == PCIE_LNK_WIDTH_UNKNOWN) {
-		e_dev_warn("Unable to determine PCI Express bandwidth.\n");
-		return;
-	}
-
-	switch (speed) {
-	case PCIE_SPEED_2_5GT:
-		/* 8b/10b encoding reduces max throughput by 20% */
-		max_gts = 2 * width;
-		break;
-	case PCIE_SPEED_5_0GT:
-		/* 8b/10b encoding reduces max throughput by 20% */
-		max_gts = 4 * width;
-		break;
-	case PCIE_SPEED_8_0GT:
-		/* 128b/130b encoding reduces throughput by less than 2% */
-		max_gts = 8 * width;
-		break;
-	default:
-		e_dev_warn("Unable to determine PCI Express bandwidth.\n");
-		return;
-	}
-
-	e_dev_info("PCI Express bandwidth of %dGT/s available\n",
-		   max_gts);
-	e_dev_info("(Speed:%s, Width: x%d, Encoding Loss:%s)\n",
-		   (speed == PCIE_SPEED_8_0GT ? "8.0GT/s" :
-		    speed == PCIE_SPEED_5_0GT ? "5.0GT/s" :
-		    speed == PCIE_SPEED_2_5GT ? "2.5GT/s" :
-		    "Unknown"),
-		   width,
-		   (speed == PCIE_SPEED_2_5GT ? "20%" :
-		    speed == PCIE_SPEED_5_0GT ? "20%" :
-		    speed == PCIE_SPEED_8_0GT ? "<2%" :
-		    "Unknown"));
-
-	if (max_gts < expected_gts) {
-		e_dev_warn("This is not sufficient for optimal performance of this card.\n");
-		e_dev_warn("For optimal performance, at least %dGT/s of bandwidth is required.\n",
-			   expected_gts);
-		e_dev_warn("A slot with more lanes and/or higher speed is suggested.\n");
-	}
+	pcie_print_link_status(pdev);
 }
 
 static void ixgbe_service_event_schedule(struct ixgbe_adapter *adapter)
drivers/nvme/host/pci.c (+1 -19)

···
 	nvme_put_ctrl(&dev->ctrl);
 }
 
-static int nvme_pci_sriov_configure(struct pci_dev *pdev, int numvfs)
-{
-	int ret = 0;
-
-	if (numvfs == 0) {
-		if (pci_vfs_assigned(pdev)) {
-			dev_warn(&pdev->dev,
-				 "Cannot disable SR-IOV VFs while assigned\n");
-			return -EPERM;
-		}
-		pci_disable_sriov(pdev);
-		return 0;
-	}
-
-	ret = pci_enable_sriov(pdev, numvfs);
-	return ret ? ret : numvfs;
-}
-
 #ifdef CONFIG_PM_SLEEP
 static int nvme_suspend(struct device *dev)
 {
···
 	.driver = {
 		.pm = &nvme_dev_pm_ops,
 	},
-	.sriov_configure = nvme_pci_sriov_configure,
+	.sriov_configure = pci_sriov_configure_simple,
 	.err_handler = &nvme_err_handler,
 };
drivers/pci/Kconfig (+12)

···
 
 	  When in doubt, say N.
 
+config PCI_PF_STUB
+	tristate "PCI PF Stub driver"
+	depends on PCI
+	depends on PCI_IOV
+	help
+	  Say Y or M here if you want to enable support for devices that
+	  require SR-IOV support, while at the same time the PF itself is
+	  not providing any actual services on the host itself such as
+	  storage or networking.
+
+	  When in doubt, say N.
+
 config XEN_PCIDEV_FRONTEND
 	tristate "Xen PCI Frontend"
 	depends on PCI && X86 && XEN
drivers/pci/Makefile (+1)

···
 obj-$(CONFIG_X86_INTEL_MID)	+= pci-mid.o
 obj-$(CONFIG_PCI_SYSCALL)	+= syscall.o
 obj-$(CONFIG_PCI_STUB)		+= pci-stub.o
+obj-$(CONFIG_PCI_PF_STUB)	+= pci-pf-stub.o
 obj-$(CONFIG_PCI_ECAM)		+= ecam.o
 obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o
+3
drivers/pci/ats.c
··· 20 20 { 21 21 int pos; 22 22 23 + if (pci_ats_disabled()) 24 + return; 25 + 23 26 pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ATS); 24 27 if (!pos) 25 28 return;
drivers/pci/dwc/Kconfig (+44 -44)

···
 # SPDX-License-Identifier: GPL-2.0
 
 menu "DesignWare PCI Core Support"
+	depends on PCI
 
 config PCIE_DW
 	bool
 
 config PCIE_DW_HOST
 	bool
-	depends on PCI
 	depends on PCI_MSI_IRQ_DOMAIN
 	select PCIE_DW
 
···
 config PCI_DRA7XX_HOST
 	bool "TI DRA7xx PCIe controller Host Mode"
 	depends on SOC_DRA7XX || COMPILE_TEST
-	depends on PCI && PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI_IRQ_DOMAIN
 	depends on OF && HAS_IOMEM && TI_PIPE3
 	select PCIE_DW_HOST
 	select PCI_DRA7XX
···
 	  This uses the DesignWare core.
 
 config PCIE_DW_PLAT
-	bool "Platform bus based DesignWare PCIe Controller"
-	depends on PCI
-	depends on PCI_MSI_IRQ_DOMAIN
+	bool
+
+config PCIE_DW_PLAT_HOST
+	bool "Platform bus based DesignWare PCIe Controller - Host mode"
+	depends on PCI && PCI_MSI_IRQ_DOMAIN
 	select PCIE_DW_HOST
-	---help---
-	 This selects the DesignWare PCIe controller support. Select this if
-	 you have a PCIe controller on Platform bus.
+	select PCIE_DW_PLAT
+	default y
+	help
+	  Enables support for the PCIe controller in the Designware IP to
+	  work in host mode. There are two instances of PCIe controller in
+	  Designware IP.
+	  This controller can work either as EP or RC. In order to enable
+	  host-specific features PCIE_DW_PLAT_HOST must be selected and in
+	  order to enable device-specific features PCI_DW_PLAT_EP must be
+	  selected.
 
-	 If you have a controller with this interface, say Y or M here.
-
-	 If unsure, say N.
+config PCIE_DW_PLAT_EP
+	bool "Platform bus based DesignWare PCIe Controller - Endpoint mode"
+	depends on PCI && PCI_MSI_IRQ_DOMAIN
+	depends on PCI_ENDPOINT
+	select PCIE_DW_EP
+	select PCIE_DW_PLAT
+	help
+	  Enables support for the PCIe controller in the Designware IP to
+	  work in endpoint mode. There are two instances of PCIe controller
+	  in Designware IP.
+	  This controller can work either as EP or RC. In order to enable
+	  host-specific features PCIE_DW_PLAT_HOST must be selected and in
+	  order to enable device-specific features PCI_DW_PLAT_EP must be
+	  selected.
 
 config PCI_EXYNOS
 	bool "Samsung Exynos PCIe controller"
-	depends on PCI
-	depends on SOC_EXYNOS5440
+	depends on SOC_EXYNOS5440 || COMPILE_TEST
 	depends on PCI_MSI_IRQ_DOMAIN
-	select PCIEPORTBUS
 	select PCIE_DW_HOST
 
 config PCI_IMX6
 	bool "Freescale i.MX6 PCIe controller"
-	depends on PCI
-	depends on SOC_IMX6Q
+	depends on SOC_IMX6Q || (ARM && COMPILE_TEST)
 	depends on PCI_MSI_IRQ_DOMAIN
-	select PCIEPORTBUS
 	select PCIE_DW_HOST
 
 config PCIE_SPEAR13XX
 	bool "STMicroelectronics SPEAr PCIe controller"
-	depends on PCI
-	depends on ARCH_SPEAR13XX
+	depends on ARCH_SPEAR13XX || COMPILE_TEST
 	depends on PCI_MSI_IRQ_DOMAIN
-	select PCIEPORTBUS
 	select PCIE_DW_HOST
 	help
 	  Say Y here if you want PCIe support on SPEAr13XX SoCs.
 
 config PCI_KEYSTONE
 	bool "TI Keystone PCIe controller"
-	depends on PCI
-	depends on ARCH_KEYSTONE
+	depends on ARCH_KEYSTONE || (ARM && COMPILE_TEST)
 	depends on PCI_MSI_IRQ_DOMAIN
-	select PCIEPORTBUS
 	select PCIE_DW_HOST
 	help
 	  Say Y here if you want to enable PCI controller support on Keystone
···
 config PCI_LAYERSCAPE
 	bool "Freescale Layerscape PCIe controller"
-	depends on PCI
-	depends on OF && (ARM || ARCH_LAYERSCAPE)
+	depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST)
 	depends on PCI_MSI_IRQ_DOMAIN
 	select MFD_SYSCON
 	select PCIE_DW_HOST
···
 	  Say Y here if you want PCIe controller support on Layerscape SoCs.
 
 config PCI_HISI
-	depends on OF && ARM64
+	depends on OF && (ARM64 || COMPILE_TEST)
 	bool "HiSilicon Hip05 and Hip06 SoCs PCIe controllers"
-	depends on PCI
 	depends on PCI_MSI_IRQ_DOMAIN
-	select PCIEPORTBUS
 	select PCIE_DW_HOST
 	select PCI_HOST_COMMON
 	help
···
 config PCIE_QCOM
 	bool "Qualcomm PCIe controller"
-	depends on PCI
-	depends on ARCH_QCOM && OF
+	depends on OF && (ARCH_QCOM || COMPILE_TEST)
 	depends on PCI_MSI_IRQ_DOMAIN
-	select PCIEPORTBUS
 	select PCIE_DW_HOST
 	help
 	  Say Y here to enable PCIe controller support on Qualcomm SoCs. The
···
 config PCIE_ARMADA_8K
 	bool "Marvell Armada-8K PCIe controller"
-	depends on PCI
-	depends on ARCH_MVEBU
+	depends on ARCH_MVEBU || COMPILE_TEST
 	depends on PCI_MSI_IRQ_DOMAIN
-	select PCIEPORTBUS
 	select PCIE_DW_HOST
 	help
 	  Say Y here if you want to enable PCIe controller support on
···
 config PCIE_ARTPEC6_HOST
 	bool "Axis ARTPEC-6 PCIe controller Host Mode"
-	depends on MACH_ARTPEC6
-	depends on PCI && PCI_MSI_IRQ_DOMAIN
-	select PCIEPORTBUS
+	depends on MACH_ARTPEC6 || COMPILE_TEST
+	depends on PCI_MSI_IRQ_DOMAIN
 	select PCIE_DW_HOST
 	select PCIE_ARTPEC6
 	help
···
 config PCIE_ARTPEC6_EP
 	bool "Axis ARTPEC-6 PCIe controller Endpoint Mode"
-	depends on MACH_ARTPEC6
+	depends on MACH_ARTPEC6 || COMPILE_TEST
 	depends on PCI_ENDPOINT
 	select PCIE_DW_EP
 	select PCIE_ARTPEC6
···
 	  endpoint mode. This uses the DesignWare core.
 
 config PCIE_KIRIN
-	depends on OF && ARM64
+	depends on OF && (ARM64 || COMPILE_TEST)
 	bool "HiSilicon Kirin series SoCs PCIe controllers"
 	depends on PCI_MSI_IRQ_DOMAIN
-	depends on PCI
-	select PCIEPORTBUS
 	select PCIE_DW_HOST
 	help
 	  Say Y here if you want PCIe controller support
···
 config PCIE_HISI_STB
 	bool "HiSilicon STB SoCs PCIe controllers"
-	depends on ARCH_HISI
-	depends on PCI
+	depends on ARCH_HISI || COMPILE_TEST
 	depends on PCI_MSI_IRQ_DOMAIN
-	select PCIEPORTBUS
 	select PCIE_DW_HOST
 	help
 	  Say Y here if you want PCIe controller support on HiSilicon STB SoCs
drivers/pci/dwc/pci-dra7xx.c (+10 -9)

···
 #include <linux/mfd/syscon.h>
 #include <linux/regmap.h>
 
+#include "../pci.h"
 #include "pcie-designware.h"
 
 /* PCIe controller wrapper DRA7XX configuration registers */
···
 	ep->ops = &pcie_ep_ops;
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ep_dbics");
-	pci->dbi_base = devm_ioremap(dev, res->start, resource_size(res));
-	if (!pci->dbi_base)
-		return -ENOMEM;
+	pci->dbi_base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pci->dbi_base))
+		return PTR_ERR(pci->dbi_base);
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ep_dbics2");
-	pci->dbi_base2 = devm_ioremap(dev, res->start, resource_size(res));
-	if (!pci->dbi_base2)
-		return -ENOMEM;
+	pci->dbi_base2 = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pci->dbi_base2))
+		return PTR_ERR(pci->dbi_base2);
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space");
 	if (!res)
···
 		return ret;
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rc_dbics");
-	pci->dbi_base = devm_ioremap(dev, res->start, resource_size(res));
-	if (!pci->dbi_base)
-		return -ENOMEM;
+	pci->dbi_base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pci->dbi_base))
+		return PTR_ERR(pci->dbi_base);
 
 	pp->ops = &dra7xx_pcie_host_ops;
drivers/pci/dwc/pci-imx6.c (+1 -1)

···
 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
 				   IMX6SX_GPR12_PCIE_TEST_POWERDOWN, 0);
 		break;
-	case IMX6QP: 		/* FALLTHROUGH */
+	case IMX6QP:		/* FALLTHROUGH */
 	case IMX6Q:
 		/* power up core phy and enable ref clock */
 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
drivers/pci/dwc/pci-keystone.c (+1 -1)

···
 	dw_pcie_setup_rc(pp);
 
 	if (dw_pcie_link_up(pci)) {
-		dev_err(dev, "Link already up\n");
+		dev_info(dev, "Link already up\n");
 		return 0;
 	}
drivers/pci/dwc/pcie-armada8k.c (+17 -4)

···
 struct armada8k_pcie {
 	struct dw_pcie *pci;
 	struct clk *clk;
+	struct clk *clk_reg;
 };
 
 #define PCIE_VENDOR_REGS_OFFSET		0x8000
···
 	if (ret)
 		return ret;
 
+	pcie->clk_reg = devm_clk_get(dev, "reg");
+	if (pcie->clk_reg == ERR_PTR(-EPROBE_DEFER)) {
+		ret = -EPROBE_DEFER;
+		goto fail;
+	}
+	if (!IS_ERR(pcie->clk_reg)) {
+		ret = clk_prepare_enable(pcie->clk_reg);
+		if (ret)
+			goto fail_clkreg;
+	}
+
 	/* Get the dw-pcie unit configuration/control registers base. */
 	base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctrl");
 	pci->dbi_base = devm_pci_remap_cfg_resource(dev, base);
 	if (IS_ERR(pci->dbi_base)) {
 		dev_err(dev, "couldn't remap regs base %p\n", base);
 		ret = PTR_ERR(pci->dbi_base);
-		goto fail;
+		goto fail_clkreg;
 	}
 
 	platform_set_drvdata(pdev, pcie);
 
 	ret = armada8k_add_pcie_port(pcie, pdev);
 	if (ret)
-		goto fail;
+		goto fail_clkreg;
 
 	return 0;
 
+fail_clkreg:
+	clk_disable_unprepare(pcie->clk_reg);
 fail:
-	if (!IS_ERR(pcie->clk))
-		clk_disable_unprepare(pcie->clk);
+	clk_disable_unprepare(pcie->clk);
 
 	return ret;
 }
drivers/pci/dwc/pcie-artpec6.c (+3 -3)

···
 	ep->ops = &pcie_ep_ops;
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi2");
-	pci->dbi_base2 = devm_ioremap(dev, res->start, resource_size(res));
-	if (!pci->dbi_base2)
-		return -ENOMEM;
+	pci->dbi_base2 = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pci->dbi_base2))
+		return PTR_ERR(pci->dbi_base2);
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space");
 	if (!res)
drivers/pci/dwc/pcie-designware-ep.c (+11 -8)

···
 	free_win = find_first_zero_bit(ep->ib_window_map, ep->num_ib_windows);
 	if (free_win >= ep->num_ib_windows) {
-		dev_err(pci->dev, "no free inbound window\n");
+		dev_err(pci->dev, "No free inbound window\n");
 		return -EINVAL;
 	}
···
 	free_win = find_first_zero_bit(ep->ob_window_map, ep->num_ob_windows);
 	if (free_win >= ep->num_ob_windows) {
-		dev_err(pci->dev, "no free outbound window\n");
+		dev_err(pci->dev, "No free outbound window\n");
 		return -EINVAL;
 	}
···
 	ret = dw_pcie_ep_outbound_atu(ep, addr, pci_addr, size);
 	if (ret) {
-		dev_err(pci->dev, "failed to enable address\n");
+		dev_err(pci->dev, "Failed to enable address\n");
 		return ret;
 	}
···
 	ret = of_property_read_u32(np, "num-ib-windows", &ep->num_ib_windows);
 	if (ret < 0) {
-		dev_err(dev, "unable to read *num-ib-windows* property\n");
+		dev_err(dev, "Unable to read *num-ib-windows* property\n");
 		return ret;
 	}
 	if (ep->num_ib_windows > MAX_IATU_IN) {
-		dev_err(dev, "invalid *num-ib-windows*\n");
+		dev_err(dev, "Invalid *num-ib-windows*\n");
 		return -EINVAL;
 	}
 
 	ret = of_property_read_u32(np, "num-ob-windows", &ep->num_ob_windows);
 	if (ret < 0) {
-		dev_err(dev, "unable to read *num-ob-windows* property\n");
+		dev_err(dev, "Unable to read *num-ob-windows* property\n");
 		return ret;
 	}
 	if (ep->num_ob_windows > MAX_IATU_OUT) {
-		dev_err(dev, "invalid *num-ob-windows*\n");
+		dev_err(dev, "Invalid *num-ob-windows*\n");
 		return -EINVAL;
 	}
···
 	epc = devm_pci_epc_create(dev, &epc_ops);
 	if (IS_ERR(epc)) {
-		dev_err(dev, "failed to create epc device\n");
+		dev_err(dev, "Failed to create epc device\n");
 		return PTR_ERR(epc);
 	}
···
 		dev_err(dev, "Failed to reserve memory for MSI\n");
 		return -ENOMEM;
 	}
+
+	epc->features = EPC_FEATURE_NO_LINKUP_NOTIFIER;
+	EPC_FEATURE_SET_BAR(epc->features, BAR_0);
 
 	ep->epc = epc;
 	epc_set_drvdata(epc, ep);
drivers/pci/dwc/pcie-designware-host.c (+45 -35)

···
 #include <linux/pci_regs.h>
 #include <linux/platform_device.h>
 
+#include "../pci.h"
 #include "pcie-designware.h"
 
 static struct pci_ops dw_pcie_ops;
···
 	num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
 
 	for (i = 0; i < num_ctrls; i++) {
-		dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12, 4,
-				    &val);
+		dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_STATUS +
+					(i * MSI_REG_CTRL_BLOCK_SIZE),
+				    4, &val);
 		if (!val)
 			continue;
 
 		ret = IRQ_HANDLED;
 		pos = 0;
-		while ((pos = find_next_bit((unsigned long *) &val, 32,
-					    pos)) != 32) {
-			irq = irq_find_mapping(pp->irq_domain, i * 32 + pos);
+		while ((pos = find_next_bit((unsigned long *) &val,
+					    MAX_MSI_IRQS_PER_CTRL,
+					    pos)) != MAX_MSI_IRQS_PER_CTRL) {
+			irq = irq_find_mapping(pp->irq_domain,
+					       (i * MAX_MSI_IRQS_PER_CTRL) +
+					       pos);
 			generic_handle_irq(irq);
-			dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12,
+			dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS +
+						(i * MSI_REG_CTRL_BLOCK_SIZE),
 					    4, 1 << pos);
 			pos++;
 		}
···
 	if (pp->ops->msi_clear_irq) {
 		pp->ops->msi_clear_irq(pp, data->hwirq);
 	} else {
-		ctrl = data->hwirq / 32;
-		res = ctrl * 12;
-		bit = data->hwirq % 32;
+		ctrl = data->hwirq / MAX_MSI_IRQS_PER_CTRL;
+		res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
+		bit = data->hwirq % MAX_MSI_IRQS_PER_CTRL;
 
 		pp->irq_status[ctrl] &= ~(1 << bit);
 		dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
···
 	if (pp->ops->msi_set_irq) {
 		pp->ops->msi_set_irq(pp, data->hwirq);
 	} else {
-		ctrl = data->hwirq / 32;
-		res = ctrl * 12;
-		bit = data->hwirq % 32;
+		ctrl = data->hwirq / MAX_MSI_IRQS_PER_CTRL;
+		res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
+		bit = data->hwirq % MAX_MSI_IRQS_PER_CTRL;
 
 		pp->irq_status[ctrl] |= 1 << bit;
 		dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
···
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&pp->lock, flags);
+
 	bitmap_release_region(pp->msi_irq_in_use, data->hwirq,
 			      order_base_2(nr_irqs));
+
 	raw_spin_unlock_irqrestore(&pp->lock, flags);
 }
···
 	pp->irq_domain = irq_domain_create_linear(fwnode, pp->num_vectors,
 						  &dw_pcie_msi_domain_ops, pp);
 	if (!pp->irq_domain) {
-		dev_err(pci->dev, "failed to create IRQ domain\n");
+		dev_err(pci->dev, "Failed to create IRQ domain\n");
 		return -ENOMEM;
 	}
 
···
 						   &dw_pcie_msi_domain_info,
 						   pp->irq_domain);
 	if (!pp->msi_domain) {
-		dev_err(pci->dev, "failed to create MSI domain\n");
+		dev_err(pci->dev, "Failed to create MSI domain\n");
 		irq_domain_remove(pp->irq_domain);
 		return -ENOMEM;
 	}
···
 	page = alloc_page(GFP_KERNEL);
 	pp->msi_data = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
 	if (dma_mapping_error(dev, pp->msi_data)) {
-		dev_err(dev, "failed to map MSI data\n");
+		dev_err(dev, "Failed to map MSI data\n");
 		__free_page(page);
 		return;
 	}
 	msi_target = (u64)pp->msi_data;
 
-	/* program the msi_data */
+	/* Program the msi_data */
 	dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_LO, 4,
 			    lower_32_bits(msi_target));
 	dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_HI, 4,
···
 	cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
 	if (cfg_res) {
-		pp->cfg0_size = resource_size(cfg_res) / 2;
-		pp->cfg1_size = resource_size(cfg_res) / 2;
+		pp->cfg0_size = resource_size(cfg_res) >> 1;
+		pp->cfg1_size = resource_size(cfg_res) >> 1;
 		pp->cfg0_base = cfg_res->start;
 		pp->cfg1_base = cfg_res->start + pp->cfg0_size;
 	} else if (!pp->va_cfg0_base) {
-		dev_err(dev, "missing *config* reg space\n");
+		dev_err(dev, "Missing *config* reg space\n");
 	}
 
 	bridge = pci_alloc_host_bridge(0);
 	if (!bridge)
 		return -ENOMEM;
 
-	ret = of_pci_get_host_bridge_resources(np, 0, 0xff,
+	ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff,
 					&bridge->windows, &pp->io_base);
 	if (ret)
 		return ret;
···
 		case IORESOURCE_IO:
 			ret = pci_remap_iospace(win->res, pp->io_base);
 			if (ret) {
-				dev_warn(dev, "error %d: failed to map resource %pR\n",
+				dev_warn(dev, "Error %d: failed to map resource %pR\n",
 					 ret, win->res);
 				resource_list_destroy_entry(win);
 			} else {
···
 			break;
 		case 0:
 			pp->cfg = win->res;
-			pp->cfg0_size = resource_size(pp->cfg) / 2;
-			pp->cfg1_size = resource_size(pp->cfg) / 2;
+			pp->cfg0_size = resource_size(pp->cfg) >> 1;
+			pp->cfg1_size = resource_size(pp->cfg) >> 1;
 			pp->cfg0_base = pp->cfg->start;
 			pp->cfg1_base = pp->cfg->start + pp->cfg0_size;
 			break;
···
 					pp->cfg->start,
 					resource_size(pp->cfg));
 		if (!pci->dbi_base) {
-			dev_err(dev, "error with ioremap\n");
+			dev_err(dev, "Error with ioremap\n");
 			ret = -ENOMEM;
 			goto error;
 		}
···
 		pp->va_cfg0_base = devm_pci_remap_cfgspace(dev,
 					pp->cfg0_base, pp->cfg0_size);
 		if (!pp->va_cfg0_base) {
-			dev_err(dev, "error with ioremap in function\n");
+			dev_err(dev, "Error with ioremap in function\n");
 			ret = -ENOMEM;
 			goto error;
 		}
···
 					pp->cfg1_base,
 					pp->cfg1_size);
 		if (!pp->va_cfg1_base) {
-			dev_err(dev, "error with ioremap\n");
+			dev_err(dev, "Error with ioremap\n");
 			ret = -ENOMEM;
 			goto error;
 		}
···
 		return 0;
 	}
 
-	/* access only one slot on each root port */
+	/* Access only one slot on each root port */
 	if (bus->number == pp->root_bus_nr && dev > 0)
 		return 0;
 
···
 	/* Initialize IRQ Status array */
 	for (ctrl = 0; ctrl < num_ctrls; ctrl++)
-		dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + (ctrl * 12), 4,
-				    &pp->irq_status[ctrl]);
-	/* setup RC BARs */
+		dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE +
+					(ctrl * MSI_REG_CTRL_BLOCK_SIZE),
+				    4, &pp->irq_status[ctrl]);
+
+	/* Setup RC BARs */
 	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0x00000004);
 	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000);
 
-	/* setup interrupt pins */
+	/* Setup interrupt pins */
 	dw_pcie_dbi_ro_wr_en(pci);
 	val = dw_pcie_readl_dbi(pci, PCI_INTERRUPT_LINE);
 	val &= 0xffff00ff;
···
 	dw_pcie_writel_dbi(pci, PCI_INTERRUPT_LINE, val);
 	dw_pcie_dbi_ro_wr_dis(pci);
 
-	/* setup bus numbers */
+	/* Setup bus numbers */
 	val = dw_pcie_readl_dbi(pci, PCI_PRIMARY_BUS);
 	val &= 0xff000000;
 	val |= 0x00ff0100;
 	dw_pcie_writel_dbi(pci, PCI_PRIMARY_BUS, val);
 
-	/* setup command register */
+	/* Setup command register */
 	val = dw_pcie_readl_dbi(pci, PCI_COMMAND);
 	val &= 0xffff0000;
 	val |= PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
···
 	 * we should not program the ATU here.
 	 */
 	if (!pp->ops->rd_other_conf) {
-		/* get iATU unroll support */
+		/* Get iATU unroll support */
 		pci->iatu_unroll_enabled = dw_pcie_iatu_unroll_enabled(pci);
 		dev_dbg(pci->dev, "iATU unroll: %s\n",
 			pci->iatu_unroll_enabled ? "enabled" : "disabled");
···
 
 	/* Enable write permission for the DBI read-only register */
 	dw_pcie_dbi_ro_wr_en(pci);
-	/* program correct class for RC */
+	/* Program correct class for RC */
 	dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI);
 	/* Better disable write permission right after the update */
 	dw_pcie_dbi_ro_wr_dis(pci);
drivers/pci/dwc/pcie-designware-plat.c (+145 -10)

···
 #include <linux/interrupt.h>
 #include <linux/kernel.h>
 #include <linux/init.h>
+#include <linux/of_device.h>
 #include <linux/of_gpio.h>
 #include <linux/pci.h>
 #include <linux/platform_device.h>
 #include <linux/resource.h>
 #include <linux/signal.h>
 #include <linux/types.h>
+#include <linux/regmap.h>
 
 #include "pcie-designware.h"
 
 struct dw_plat_pcie {
-	struct dw_pcie *pci;
+	struct dw_pcie			*pci;
+	struct regmap			*regmap;
+	enum dw_pcie_device_mode	mode;
 };
+
+struct dw_plat_pcie_of_data {
+	enum dw_pcie_device_mode	mode;
+};
+
+static const struct of_device_id dw_plat_pcie_of_match[];
 
 static int dw_plat_pcie_host_init(struct pcie_port *pp)
 {
···
 	return 0;
 }
 
+static void dw_plat_set_num_vectors(struct pcie_port *pp)
+{
+	pp->num_vectors = MAX_MSI_IRQS;
+}
+
 static const struct dw_pcie_host_ops dw_plat_pcie_host_ops = {
 	.host_init = dw_plat_pcie_host_init,
+	.set_num_vectors = dw_plat_set_num_vectors,
 };
 
-static int dw_plat_add_pcie_port(struct pcie_port *pp,
+static int dw_plat_pcie_establish_link(struct dw_pcie *pci)
+{
+	return 0;
+}
+
+static const struct dw_pcie_ops dw_pcie_ops = {
+	.start_link = dw_plat_pcie_establish_link,
+};
+
+static void dw_plat_pcie_ep_init(struct dw_pcie_ep *ep)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	enum pci_barno bar;
+
+	for (bar = BAR_0; bar <= BAR_5; bar++)
+		dw_pcie_ep_reset_bar(pci, bar);
+}
+
+static int dw_plat_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
+				     enum pci_epc_irq_type type,
+				     u8 interrupt_num)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+
+	switch (type) {
+	case PCI_EPC_IRQ_LEGACY:
+		dev_err(pci->dev, "EP cannot trigger legacy IRQs\n");
+		return -EINVAL;
+	case PCI_EPC_IRQ_MSI:
+		return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
+	default:
+		dev_err(pci->dev, "UNKNOWN IRQ type\n");
+	}
+
+	return 0;
+}
+
+static struct dw_pcie_ep_ops pcie_ep_ops = {
+	.ep_init = dw_plat_pcie_ep_init,
+	.raise_irq = dw_plat_pcie_ep_raise_irq,
+};
+
+static int dw_plat_add_pcie_port(struct dw_plat_pcie *dw_plat_pcie,
 				 struct platform_device *pdev)
 {
+	struct dw_pcie *pci = dw_plat_pcie->pci;
+	struct pcie_port *pp = &pci->pp;
 	struct device *dev = &pdev->dev;
 	int ret;
 
···
 	ret = dw_pcie_host_init(pp);
 	if (ret) {
-		dev_err(dev, "failed to initialize host\n");
+		dev_err(dev, "Failed to initialize host\n");
 		return ret;
 	}
 
 	return 0;
 }
 
-static const struct dw_pcie_ops dw_pcie_ops = {
-};
+static int dw_plat_add_pcie_ep(struct dw_plat_pcie *dw_plat_pcie,
+			       struct platform_device *pdev)
+{
+	int ret;
+	struct dw_pcie_ep *ep;
+	struct resource *res;
+	struct device *dev = &pdev->dev;
+	struct dw_pcie *pci = dw_plat_pcie->pci;
+
+	ep = &pci->ep;
+	ep->ops = &pcie_ep_ops;
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi2");
+	pci->dbi_base2 = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pci->dbi_base2))
+		return PTR_ERR(pci->dbi_base2);
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space");
+	if (!res)
+		return -EINVAL;
+
+	ep->phys_base = res->start;
+	ep->addr_size = resource_size(res);
+
+	ret = dw_pcie_ep_init(ep);
+	if (ret) {
+		dev_err(dev, "Failed to initialize endpoint\n");
+		return ret;
+	}
+	return 0;
+}
 
 static int dw_plat_pcie_probe(struct platform_device *pdev)
 {
···
 	struct dw_pcie *pci;
 	struct resource *res;  /* Resource from DT */
 	int ret;
+	const struct of_device_id *match;
+	const struct dw_plat_pcie_of_data *data;
+	enum dw_pcie_device_mode mode;
+
+	match = of_match_device(dw_plat_pcie_of_match, dev);
+	if (!match)
+		return -EINVAL;
+
+	data = (struct dw_plat_pcie_of_data *)match->data;
+	mode = (enum dw_pcie_device_mode)data->mode;
 
 	dw_plat_pcie = devm_kzalloc(dev, sizeof(*dw_plat_pcie), GFP_KERNEL);
 	if (!dw_plat_pcie)
···
 	pci->ops = &dw_pcie_ops;
 
 	dw_plat_pcie->pci = pci;
+	dw_plat_pcie->mode = mode;
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
+	if (!res)
+		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+
 	pci->dbi_base = devm_ioremap_resource(dev, res);
 	if (IS_ERR(pci->dbi_base))
 		return PTR_ERR(pci->dbi_base);
 
 	platform_set_drvdata(pdev, dw_plat_pcie);
 
-	ret = dw_plat_add_pcie_port(&pci->pp, pdev);
-	if (ret < 0)
-		return ret;
+	switch (dw_plat_pcie->mode) {
+	case DW_PCIE_RC_TYPE:
+		if (!IS_ENABLED(CONFIG_PCIE_DW_PLAT_HOST))
+			return -ENODEV;
+
+		ret = dw_plat_add_pcie_port(dw_plat_pcie, pdev);
+		if (ret < 0)
+			return ret;
+		break;
+	case DW_PCIE_EP_TYPE:
+		if (!IS_ENABLED(CONFIG_PCIE_DW_PLAT_EP))
+			return -ENODEV;
+
+		ret = dw_plat_add_pcie_ep(dw_plat_pcie, pdev);
+		if (ret < 0)
+			return ret;
+		break;
+	default:
+		dev_err(dev, "INVALID device type %d\n", dw_plat_pcie->mode);
+	}
 
 	return 0;
 }
 
+static const struct dw_plat_pcie_of_data dw_plat_pcie_rc_of_data = {
+	.mode = DW_PCIE_RC_TYPE,
+};
+
+static const struct dw_plat_pcie_of_data dw_plat_pcie_ep_of_data = {
+	.mode = DW_PCIE_EP_TYPE,
+};
+
 static const struct of_device_id dw_plat_pcie_of_match[] = {
-	{ .compatible = "snps,dw-pcie", },
+	{
+		.compatible = "snps,dw-pcie",
+		.data = &dw_plat_pcie_rc_of_data,
+	},
+	{
+		.compatible = "snps,dw-pcie-ep",
+		.data = &dw_plat_pcie_ep_of_data,
+	},
 	{},
 };
+11 -11
drivers/pci/dwc/pcie-designware.c
··· 69 69 70 70 ret = dw_pcie_read(base + reg, size, &val); 71 71 if (ret) 72 - dev_err(pci->dev, "read DBI address failed\n"); 72 + dev_err(pci->dev, "Read DBI address failed\n"); 73 73 74 74 return val; 75 75 } ··· 86 86 87 87 ret = dw_pcie_write(base + reg, size, val); 88 88 if (ret) 89 - dev_err(pci->dev, "write DBI address failed\n"); 89 + dev_err(pci->dev, "Write DBI address failed\n"); 90 90 } 91 91 92 92 static u32 dw_pcie_readl_ob_unroll(struct dw_pcie *pci, u32 index, u32 reg) ··· 137 137 138 138 usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX); 139 139 } 140 - dev_err(pci->dev, "outbound iATU is not being enabled\n"); 140 + dev_err(pci->dev, "Outbound iATU is not being enabled\n"); 141 141 } 142 142 143 143 void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type, ··· 180 180 181 181 usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX); 182 182 } 183 - dev_err(pci->dev, "outbound iATU is not being enabled\n"); 183 + dev_err(pci->dev, "Outbound iATU is not being enabled\n"); 184 184 } 185 185 186 186 static u32 dw_pcie_readl_ib_unroll(struct dw_pcie *pci, u32 index, u32 reg) ··· 238 238 239 239 usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX); 240 240 } 241 - dev_err(pci->dev, "inbound iATU is not being enabled\n"); 241 + dev_err(pci->dev, "Inbound iATU is not being enabled\n"); 242 242 243 243 return -EBUSY; 244 244 } ··· 284 284 285 285 usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX); 286 286 } 287 - dev_err(pci->dev, "inbound iATU is not being enabled\n"); 287 + dev_err(pci->dev, "Inbound iATU is not being enabled\n"); 288 288 289 289 return -EBUSY; 290 290 } ··· 313 313 { 314 314 int retries; 315 315 316 - /* check if the link is up or not */ 316 + /* Check if the link is up or not */ 317 317 for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) { 318 318 if (dw_pcie_link_up(pci)) { 319 - dev_info(pci->dev, "link up\n"); 319 + dev_info(pci->dev, "Link up\n"); 320 320 return 0; 321 321 } 322 322 
usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX); 323 323 } 324 324 325 - dev_err(pci->dev, "phy link never came up\n"); 325 + dev_err(pci->dev, "Phy link never came up\n"); 326 326 327 327 return -ETIMEDOUT; 328 328 } ··· 351 351 if (ret) 352 352 lanes = 0; 353 353 354 - /* set the number of lanes */ 354 + /* Set the number of lanes */ 355 355 val = dw_pcie_readl_dbi(pci, PCIE_PORT_LINK_CONTROL); 356 356 val &= ~PORT_LINK_MODE_MASK; 357 357 switch (lanes) { ··· 373 373 } 374 374 dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val); 375 375 376 - /* set link width speed control register */ 376 + /* Set link width speed control register */ 377 377 val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL); 378 378 val &= ~PORT_LOGIC_LINK_WIDTH_MASK; 379 379 switch (lanes) {
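The `dw_pcie_wait_for_link()` hunk above is a bounded-retry poll loop: check a link-up predicate up to `LINK_WAIT_MAX_RETRIES` times, sleep between attempts, and time out otherwise. A minimal userspace C sketch of that shape (the retry count and the `polls_until_up` simulation parameter are assumptions of this illustration, not the driver's real values):

```c
#include <assert.h>
#include <stdio.h>

#define LINK_WAIT_MAX_RETRIES 10	/* assumed; the driver defines its own */

/*
 * Bounded-retry poll, mirroring dw_pcie_wait_for_link(): return 0 once
 * the link comes up, or a timeout error after the retry budget is spent.
 * polls_until_up simulates how many polls the (hypothetical) hardware
 * needs before it reports link up.
 */
static int wait_for_link(int polls_until_up)
{
	for (int retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
		if (--polls_until_up <= 0) {
			puts("Link up");
			return 0;
		}
		/* the real driver sleeps here: usleep_range(...) */
	}
	puts("Phy link never came up");
	return -1;	/* stands in for -ETIMEDOUT */
}
```

The sleep between polls matters in the real driver: `usleep_range()` yields the CPU, so the loop bounds total wait time without busy-waiting.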
+1
drivers/pci/dwc/pcie-designware.h
··· 110 110 #define MAX_MSI_IRQS 256 111 111 #define MAX_MSI_IRQS_PER_CTRL 32 112 112 #define MAX_MSI_CTRLS (MAX_MSI_IRQS / MAX_MSI_IRQS_PER_CTRL) 113 + #define MSI_REG_CTRL_BLOCK_SIZE 12 113 114 #define MSI_DEF_NUM_VECTORS 32 114 115 115 116 /* Maximum number of inbound/outbound iATUs */
+10 -3
drivers/pci/dwc/pcie-qcom.c
··· 10 10 11 11 #include <linux/clk.h> 12 12 #include <linux/delay.h> 13 - #include <linux/gpio.h> 13 + #include <linux/gpio/consumer.h> 14 14 #include <linux/interrupt.h> 15 15 #include <linux/io.h> 16 16 #include <linux/iopoll.h> ··· 19 19 #include <linux/of_device.h> 20 20 #include <linux/of_gpio.h> 21 21 #include <linux/pci.h> 22 + #include <linux/pm_runtime.h> 22 23 #include <linux/platform_device.h> 23 24 #include <linux/phy/phy.h> 24 25 #include <linux/regulator/consumer.h> ··· 870 869 871 870 /* enable PCIe clocks and resets */ 872 871 val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL); 873 - val &= !BIT(0); 872 + val &= ~BIT(0); 874 873 writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL); 875 874 876 875 /* change DBI base address */ ··· 1089 1088 struct qcom_pcie *pcie = to_qcom_pcie(pci); 1090 1089 int ret; 1091 1090 1091 + pm_runtime_get_sync(pci->dev); 1092 1092 qcom_ep_reset_assert(pcie); 1093 1093 1094 1094 ret = pcie->ops->init(pcie); ··· 1126 1124 phy_power_off(pcie->phy); 1127 1125 err_deinit: 1128 1126 pcie->ops->deinit(pcie); 1127 + pm_runtime_put(pci->dev); 1129 1128 1130 1129 return ret; 1131 1130 } ··· 1215 1212 if (!pci) 1216 1213 return -ENOMEM; 1217 1214 1215 + pm_runtime_enable(dev); 1218 1216 pci->dev = dev; 1219 1217 pci->ops = &dw_pcie_ops; 1220 1218 pp = &pci->pp; ··· 1261 1257 } 1262 1258 1263 1259 ret = phy_init(pcie->phy); 1264 - if (ret) 1260 + if (ret) { 1261 + pm_runtime_disable(&pdev->dev); 1265 1262 return ret; 1263 + } 1266 1264 1267 1265 platform_set_drvdata(pdev, pcie); 1268 1266 1269 1267 ret = dw_pcie_host_init(pp); 1270 1268 if (ret) { 1271 1269 dev_err(dev, "cannot initialize host\n"); 1270 + pm_runtime_disable(&pdev->dev); 1272 1271 return ret; 1273 1272 } 1274 1273
+21 -14
drivers/pci/endpoint/functions/pci-epf-test.c
··· 87 87 88 88 src_addr = pci_epc_mem_alloc_addr(epc, &src_phys_addr, reg->size); 89 89 if (!src_addr) { 90 - dev_err(dev, "failed to allocate source address\n"); 90 + dev_err(dev, "Failed to allocate source address\n"); 91 91 reg->status = STATUS_SRC_ADDR_INVALID; 92 92 ret = -ENOMEM; 93 93 goto err; ··· 96 96 ret = pci_epc_map_addr(epc, epf->func_no, src_phys_addr, reg->src_addr, 97 97 reg->size); 98 98 if (ret) { 99 - dev_err(dev, "failed to map source address\n"); 99 + dev_err(dev, "Failed to map source address\n"); 100 100 reg->status = STATUS_SRC_ADDR_INVALID; 101 101 goto err_src_addr; 102 102 } 103 103 104 104 dst_addr = pci_epc_mem_alloc_addr(epc, &dst_phys_addr, reg->size); 105 105 if (!dst_addr) { 106 - dev_err(dev, "failed to allocate destination address\n"); 106 + dev_err(dev, "Failed to allocate destination address\n"); 107 107 reg->status = STATUS_DST_ADDR_INVALID; 108 108 ret = -ENOMEM; 109 109 goto err_src_map_addr; ··· 112 112 ret = pci_epc_map_addr(epc, epf->func_no, dst_phys_addr, reg->dst_addr, 113 113 reg->size); 114 114 if (ret) { 115 - dev_err(dev, "failed to map destination address\n"); 115 + dev_err(dev, "Failed to map destination address\n"); 116 116 reg->status = STATUS_DST_ADDR_INVALID; 117 117 goto err_dst_addr; 118 118 } ··· 149 149 150 150 src_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size); 151 151 if (!src_addr) { 152 - dev_err(dev, "failed to allocate address\n"); 152 + dev_err(dev, "Failed to allocate address\n"); 153 153 reg->status = STATUS_SRC_ADDR_INVALID; 154 154 ret = -ENOMEM; 155 155 goto err; ··· 158 158 ret = pci_epc_map_addr(epc, epf->func_no, phys_addr, reg->src_addr, 159 159 reg->size); 160 160 if (ret) { 161 - dev_err(dev, "failed to map address\n"); 161 + dev_err(dev, "Failed to map address\n"); 162 162 reg->status = STATUS_SRC_ADDR_INVALID; 163 163 goto err_addr; 164 164 } ··· 201 201 202 202 dst_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size); 203 203 if (!dst_addr) { 204 - dev_err(dev, 
"failed to allocate address\n"); 204 + dev_err(dev, "Failed to allocate address\n"); 205 205 reg->status = STATUS_DST_ADDR_INVALID; 206 206 ret = -ENOMEM; 207 207 goto err; ··· 210 210 ret = pci_epc_map_addr(epc, epf->func_no, phys_addr, reg->dst_addr, 211 211 reg->size); 212 212 if (ret) { 213 - dev_err(dev, "failed to map address\n"); 213 + dev_err(dev, "Failed to map address\n"); 214 214 reg->status = STATUS_DST_ADDR_INVALID; 215 215 goto err_addr; 216 216 } ··· 230 230 * wait 1ms inorder for the write to complete. Without this delay L3 231 231 * error in observed in the host system. 232 232 */ 233 - mdelay(1); 233 + usleep_range(1000, 2000); 234 234 235 235 kfree(buf); 236 236 ··· 379 379 ret = pci_epc_set_bar(epc, epf->func_no, epf_bar); 380 380 if (ret) { 381 381 pci_epf_free_space(epf, epf_test->reg[bar], bar); 382 - dev_err(dev, "failed to set BAR%d\n", bar); 382 + dev_err(dev, "Failed to set BAR%d\n", bar); 383 383 if (bar == test_reg_bar) 384 384 return ret; 385 385 } ··· 406 406 base = pci_epf_alloc_space(epf, sizeof(struct pci_epf_test_reg), 407 407 test_reg_bar); 408 408 if (!base) { 409 - dev_err(dev, "failed to allocated register space\n"); 409 + dev_err(dev, "Failed to allocated register space\n"); 410 410 return -ENOMEM; 411 411 } 412 412 epf_test->reg[test_reg_bar] = base; ··· 416 416 continue; 417 417 base = pci_epf_alloc_space(epf, bar_size[bar], bar); 418 418 if (!base) 419 - dev_err(dev, "failed to allocate space for BAR%d\n", 419 + dev_err(dev, "Failed to allocate space for BAR%d\n", 420 420 bar); 421 421 epf_test->reg[bar] = base; 422 422 } ··· 435 435 if (WARN_ON_ONCE(!epc)) 436 436 return -EINVAL; 437 437 438 + if (epc->features & EPC_FEATURE_NO_LINKUP_NOTIFIER) 439 + epf_test->linkup_notifier = false; 440 + else 441 + epf_test->linkup_notifier = true; 442 + 443 + epf_test->test_reg_bar = EPC_FEATURE_GET_BAR(epc->features); 444 + 438 445 ret = pci_epc_write_header(epc, epf->func_no, header); 439 446 if (ret) { 440 - dev_err(dev, 
"configuration header write failed\n"); 447 + dev_err(dev, "Configuration header write failed\n"); 441 448 return ret; 442 449 } 443 450 ··· 526 519 WQ_MEM_RECLAIM | WQ_HIGHPRI, 0); 527 520 ret = pci_epf_register_driver(&test_driver); 528 521 if (ret) { 529 - pr_err("failed to register pci epf test driver --> %d\n", ret); 522 + pr_err("Failed to register pci epf test driver --> %d\n", ret); 530 523 return ret; 531 524 } 532 525
+21 -2
drivers/pci/endpoint/pci-epf-core.c
··· 15 15 #include <linux/pci-epf.h> 16 16 #include <linux/pci-ep-cfs.h> 17 17 18 + static DEFINE_MUTEX(pci_epf_mutex); 19 + 18 20 static struct bus_type pci_epf_bus_type; 19 21 static const struct device_type pci_epf_type; 20 22 ··· 145 143 */ 146 144 void pci_epf_unregister_driver(struct pci_epf_driver *driver) 147 145 { 148 - pci_ep_cfs_remove_epf_group(driver->group); 146 + struct config_group *group; 147 + 148 + mutex_lock(&pci_epf_mutex); 149 + list_for_each_entry(group, &driver->epf_group, group_entry) 150 + pci_ep_cfs_remove_epf_group(group); 151 + list_del(&driver->epf_group); 152 + mutex_unlock(&pci_epf_mutex); 149 153 driver_unregister(&driver->driver); 150 154 } 151 155 EXPORT_SYMBOL_GPL(pci_epf_unregister_driver); ··· 167 159 struct module *owner) 168 160 { 169 161 int ret; 162 + struct config_group *group; 163 + const struct pci_epf_device_id *id; 170 164 171 165 if (!driver->ops) 172 166 return -EINVAL; ··· 183 173 if (ret) 184 174 return ret; 185 175 186 - driver->group = pci_ep_cfs_add_epf_group(driver->driver.name); 176 + INIT_LIST_HEAD(&driver->epf_group); 177 + 178 + id = driver->id_table; 179 + while (id->name[0]) { 180 + group = pci_ep_cfs_add_epf_group(id->name); 181 + mutex_lock(&pci_epf_mutex); 182 + list_add_tail(&group->group_entry, &driver->epf_group); 183 + mutex_unlock(&pci_epf_mutex); 184 + id++; 185 + } 187 186 188 187 return 0; 189 188 }
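The pci-epf-core change replaces a single per-driver configfs group with a list of groups, one per `id_table` entry, built and torn down under a mutex. A userspace sketch of that register/unregister shape, using a plain singly linked list in place of the kernel's `list_head` and configfs groups (all names here are hypothetical):

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct group {
	char name[32];
	struct group *next;
};

static pthread_mutex_t epf_mutex = PTHREAD_MUTEX_INITIALIZER;

/* One group per id_table entry, added under the lock; returns the count. */
static int register_groups(const char *const *id_table, struct group **head)
{
	int n = 0;

	for (; *id_table; id_table++) {
		struct group *g = calloc(1, sizeof(*g));

		if (!g)
			return -1;
		strncpy(g->name, *id_table, sizeof(g->name) - 1);
		pthread_mutex_lock(&epf_mutex);
		g->next = *head;
		*head = g;
		pthread_mutex_unlock(&epf_mutex);
		n++;
	}
	return n;
}

/* Walk the list under the lock and free every group; returns the count. */
static int unregister_groups(struct group **head)
{
	int n = 0;

	pthread_mutex_lock(&epf_mutex);
	while (*head) {
		struct group *g = *head;

		*head = g->next;
		free(g);
		n++;
	}
	pthread_mutex_unlock(&epf_mutex);
	return n;
}
```

The lock matters because register and unregister can race when several function drivers come and go; the kernel version guards the same window with `pci_epf_mutex`.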
+35 -20
drivers/pci/host/Kconfig
··· 5 5 6 6 config PCI_MVEBU 7 7 bool "Marvell EBU PCIe controller" 8 - depends on ARCH_MVEBU || ARCH_DOVE 8 + depends on ARCH_MVEBU || ARCH_DOVE || COMPILE_TEST 9 + depends on MVEBU_MBUS 9 10 depends on ARM 10 11 depends on OF 11 12 12 13 config PCI_AARDVARK 13 14 bool "Aardvark PCIe controller" 14 - depends on ARCH_MVEBU && ARM64 15 + depends on (ARCH_MVEBU && ARM64) || COMPILE_TEST 15 16 depends on OF 16 17 depends on PCI_MSI_IRQ_DOMAIN 17 18 help ··· 22 21 23 22 config PCIE_XILINX_NWL 24 23 bool "NWL PCIe Core" 25 - depends on ARCH_ZYNQMP 24 + depends on ARCH_ZYNQMP || COMPILE_TEST 26 25 depends on PCI_MSI_IRQ_DOMAIN 27 26 help 28 27 Say 'Y' here if you want kernel support for Xilinx ··· 33 32 config PCI_FTPCI100 34 33 bool "Faraday Technology FTPCI100 PCI controller" 35 34 depends on OF 36 - depends on ARM 37 35 default ARCH_GEMINI 38 36 39 37 config PCI_TEGRA 40 38 bool "NVIDIA Tegra PCIe controller" 41 - depends on ARCH_TEGRA 39 + depends on ARCH_TEGRA || COMPILE_TEST 42 40 depends on PCI_MSI_IRQ_DOMAIN 43 41 help 44 42 Say Y here if you want support for the PCIe host controller found ··· 45 45 46 46 config PCI_RCAR_GEN2 47 47 bool "Renesas R-Car Gen2 Internal PCI controller" 48 - depends on ARM 49 48 depends on ARCH_RENESAS || COMPILE_TEST 49 + depends on ARM 50 50 help 51 51 Say Y here if you want internal PCI support on R-Car Gen2 SoC. 52 52 There are 3 internal PCI controllers available with a single ··· 54 54 55 55 config PCIE_RCAR 56 56 bool "Renesas R-Car PCIe controller" 57 - depends on ARCH_RENESAS || (ARM && COMPILE_TEST) 57 + depends on ARCH_RENESAS || COMPILE_TEST 58 58 depends on PCI_MSI_IRQ_DOMAIN 59 59 help 60 60 Say Y here if you want PCIe controller support on R-Car SoCs. 
··· 65 65 66 66 config PCI_HOST_GENERIC 67 67 bool "Generic PCI host controller" 68 - depends on (ARM || ARM64) && OF 68 + depends on OF 69 69 select PCI_HOST_COMMON 70 70 select IRQ_DOMAIN 71 + select PCI_DOMAINS 71 72 help 72 73 Say Y here if you want to support a simple generic PCI host 73 74 controller, such as the one emulated by kvmtool. 74 75 75 76 config PCIE_XILINX 76 77 bool "Xilinx AXI PCIe host bridge support" 77 - depends on ARCH_ZYNQ || MICROBLAZE || (MIPS && PCI_DRIVERS_GENERIC) 78 + depends on ARCH_ZYNQ || MICROBLAZE || (MIPS && PCI_DRIVERS_GENERIC) || COMPILE_TEST 78 79 help 79 80 Say 'Y' here if you want kernel to support the Xilinx AXI PCIe 80 81 Host Bridge driver. 81 82 82 83 config PCI_XGENE 83 84 bool "X-Gene PCIe controller" 84 - depends on ARM64 85 + depends on ARM64 || COMPILE_TEST 85 86 depends on OF || (ACPI && PCI_QUIRKS) 86 - select PCIEPORTBUS 87 87 help 88 88 Say Y here if you want internal PCI support on APM X-Gene SoC. 89 89 There are 5 internal PCIe ports available. 
Each port is GEN3 capable ··· 101 101 config PCI_V3_SEMI 102 102 bool "V3 Semiconductor PCI controller" 103 103 depends on OF 104 - depends on ARM 104 + depends on ARM || COMPILE_TEST 105 105 default ARCH_INTEGRATOR_AP 106 106 107 107 config PCI_VERSATILE ··· 147 147 148 148 config PCIE_ALTERA 149 149 bool "Altera PCIe controller" 150 - depends on ARM || NIOS2 151 - depends on OF_PCI 150 + depends on ARM || NIOS2 || COMPILE_TEST 152 151 select PCI_DOMAINS 153 152 help 154 153 Say Y here if you want to enable PCIe controller support on Altera ··· 163 164 164 165 config PCI_HOST_THUNDER_PEM 165 166 bool "Cavium Thunder PCIe controller to off-chip devices" 166 - depends on ARM64 167 + depends on ARM64 || COMPILE_TEST 167 168 depends on OF || (ACPI && PCI_QUIRKS) 168 169 select PCI_HOST_COMMON 169 170 help ··· 171 172 172 173 config PCI_HOST_THUNDER_ECAM 173 174 bool "Cavium Thunder ECAM controller to on-chip devices on pass-1.x silicon" 174 - depends on ARM64 175 + depends on ARM64 || COMPILE_TEST 175 176 depends on OF || (ACPI && PCI_QUIRKS) 176 177 select PCI_HOST_COMMON 177 178 help 178 179 Say Y here if you want ECAM support for CN88XX-Pass-1.x Cavium Thunder SoCs. 179 180 180 181 config PCIE_ROCKCHIP 181 - tristate "Rockchip PCIe controller" 182 + bool 183 + depends on PCI 184 + 185 + config PCIE_ROCKCHIP_HOST 186 + tristate "Rockchip PCIe host controller" 182 187 depends on ARCH_ROCKCHIP || COMPILE_TEST 183 188 depends on OF 184 189 depends on PCI_MSI_IRQ_DOMAIN 185 190 select MFD_SYSCON 191 + select PCIE_ROCKCHIP 186 192 help 187 193 Say Y here if you want internal PCI support on Rockchip SoC. 188 194 There is 1 internal PCIe port available to support GEN2 with 189 195 4 slots. 
190 196 197 + config PCIE_ROCKCHIP_EP 198 + bool "Rockchip PCIe endpoint controller" 199 + depends on ARCH_ROCKCHIP || COMPILE_TEST 200 + depends on OF 201 + depends on PCI_ENDPOINT 202 + select MFD_SYSCON 203 + select PCIE_ROCKCHIP 204 + help 205 + Say Y here if you want to support Rockchip PCIe controller in 206 + endpoint mode on Rockchip SoC. There is 1 internal PCIe port 207 + available to support GEN2 with 4 slots. 208 + 191 209 config PCIE_MEDIATEK 192 210 bool "MediaTek PCIe controller" 193 - depends on (ARM || ARM64) && (ARCH_MEDIATEK || COMPILE_TEST) 211 + depends on ARCH_MEDIATEK || COMPILE_TEST 194 212 depends on OF 195 - depends on PCI 196 - select PCIEPORTBUS 213 + depends on PCI_MSI_IRQ_DOMAIN 197 214 help 198 215 Say Y here if you want to enable PCIe controller support on 199 216 MediaTek SoCs.
+2
drivers/pci/host/Makefile
··· 20 20 obj-$(CONFIG_PCIE_ALTERA) += pcie-altera.o 21 21 obj-$(CONFIG_PCIE_ALTERA_MSI) += pcie-altera-msi.o 22 22 obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o 23 + obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o 24 + obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o 23 25 obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o 24 26 obj-$(CONFIG_PCIE_TANGO_SMP8759) += pcie-tango.o 25 27 obj-$(CONFIG_VMD) += vmd.o
+4 -3
drivers/pci/host/pci-aardvark.c
··· 19 19 #include <linux/of_address.h> 20 20 #include <linux/of_pci.h> 21 21 22 + #include "../pci.h" 23 + 22 24 /* PCIe core registers */ 23 25 #define PCIE_CORE_CMD_STATUS_REG 0x4 24 26 #define PCIE_CORE_CMD_IO_ACCESS_EN BIT(0) ··· 824 822 { 825 823 int err, res_valid = 0; 826 824 struct device *dev = &pcie->pdev->dev; 827 - struct device_node *np = dev->of_node; 828 825 struct resource_entry *win, *tmp; 829 826 resource_size_t iobase; 830 827 831 828 INIT_LIST_HEAD(&pcie->resources); 832 829 833 - err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pcie->resources, 834 - &iobase); 830 + err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, 831 + &pcie->resources, &iobase); 835 832 if (err) 836 833 return err; 837 834
+4 -2
drivers/pci/host/pci-ftpci100.c
··· 28 28 #include <linux/irq.h> 29 29 #include <linux/clk.h> 30 30 31 + #include "../pci.h" 32 + 31 33 /* 32 34 * Special configuration registers directly in the first few words 33 35 * in I/O space. ··· 478 476 if (IS_ERR(p->base)) 479 477 return PTR_ERR(p->base); 480 478 481 - ret = of_pci_get_host_bridge_resources(dev->of_node, 0, 0xff, 482 - &res, &io_base); 479 + ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, 480 + &res, &io_base); 483 481 if (ret) 484 482 return ret; 485 483
+13
drivers/pci/host/pci-host-common.c
··· 101 101 return ret; 102 102 } 103 103 104 + platform_set_drvdata(pdev, bridge->bus); 105 + return 0; 106 + } 107 + 108 + int pci_host_common_remove(struct platform_device *pdev) 109 + { 110 + struct pci_bus *bus = platform_get_drvdata(pdev); 111 + 112 + pci_lock_rescan_remove(); 113 + pci_stop_root_bus(bus); 114 + pci_remove_root_bus(bus); 115 + pci_unlock_rescan_remove(); 116 + 104 117 return 0; 105 118 }
+1
drivers/pci/host/pci-host-generic.c
··· 95 95 .suppress_bind_attrs = true, 96 96 }, 97 97 .probe = gen_pci_probe, 98 + .remove = pci_host_common_remove, 98 99 }; 99 100 builtin_platform_driver(gen_pci_driver);
+71 -91
drivers/pci/host/pci-hyperv.c
··· 433 433 struct hv_pcibus_device { 434 434 struct pci_sysdata sysdata; 435 435 enum hv_pcibus_state state; 436 - atomic_t remove_lock; 436 + refcount_t remove_lock; 437 437 struct hv_device *hdev; 438 438 resource_size_t low_mmio_space; 439 439 resource_size_t high_mmio_space; ··· 488 488 hv_pcichild_maximum 489 489 }; 490 490 491 - enum hv_pcidev_ref_reason { 492 - hv_pcidev_ref_invalid = 0, 493 - hv_pcidev_ref_initial, 494 - hv_pcidev_ref_by_slot, 495 - hv_pcidev_ref_packet, 496 - hv_pcidev_ref_pnp, 497 - hv_pcidev_ref_childlist, 498 - hv_pcidev_irqdata, 499 - hv_pcidev_ref_max 500 - }; 501 - 502 491 struct hv_pci_dev { 503 492 /* List protected by pci_rescan_remove_lock */ 504 493 struct list_head list_entry; ··· 537 548 538 549 static struct hv_pci_dev *get_pcichild_wslot(struct hv_pcibus_device *hbus, 539 550 u32 wslot); 540 - static void get_pcichild(struct hv_pci_dev *hv_pcidev, 541 - enum hv_pcidev_ref_reason reason); 542 - static void put_pcichild(struct hv_pci_dev *hv_pcidev, 543 - enum hv_pcidev_ref_reason reason); 551 + 552 + static void get_pcichild(struct hv_pci_dev *hpdev) 553 + { 554 + refcount_inc(&hpdev->refs); 555 + } 556 + 557 + static void put_pcichild(struct hv_pci_dev *hpdev) 558 + { 559 + if (refcount_dec_and_test(&hpdev->refs)) 560 + kfree(hpdev); 561 + } 544 562 545 563 static void get_hvpcibus(struct hv_pcibus_device *hv_pcibus); 546 564 static void put_hvpcibus(struct hv_pcibus_device *hv_pcibus); 565 + 566 + /* 567 + * There is no good way to get notified from vmbus_onoffer_rescind(), 568 + * so let's use polling here, since this is not a hot path. 
569 + */ 570 + static int wait_for_response(struct hv_device *hdev, 571 + struct completion *comp) 572 + { 573 + while (true) { 574 + if (hdev->channel->rescind) { 575 + dev_warn_once(&hdev->device, "The device is gone.\n"); 576 + return -ENODEV; 577 + } 578 + 579 + if (wait_for_completion_timeout(comp, HZ / 10)) 580 + break; 581 + } 582 + 583 + return 0; 584 + } 547 585 548 586 /** 549 587 * devfn_to_wslot() - Convert from Linux PCI slot to Windows ··· 778 762 779 763 _hv_pcifront_read_config(hpdev, where, size, val); 780 764 781 - put_pcichild(hpdev, hv_pcidev_ref_by_slot); 765 + put_pcichild(hpdev); 782 766 return PCIBIOS_SUCCESSFUL; 783 767 } 784 768 ··· 806 790 807 791 _hv_pcifront_write_config(hpdev, where, size, val); 808 792 809 - put_pcichild(hpdev, hv_pcidev_ref_by_slot); 793 + put_pcichild(hpdev); 810 794 return PCIBIOS_SUCCESSFUL; 811 795 } 812 796 ··· 872 856 } 873 857 874 858 hv_int_desc_free(hpdev, int_desc); 875 - put_pcichild(hpdev, hv_pcidev_ref_by_slot); 859 + put_pcichild(hpdev); 876 860 } 877 861 878 862 static int hv_set_affinity(struct irq_data *data, const struct cpumask *dest, ··· 1202 1186 msg->address_lo = comp.int_desc.address & 0xffffffff; 1203 1187 msg->data = comp.int_desc.data; 1204 1188 1205 - put_pcichild(hpdev, hv_pcidev_ref_by_slot); 1189 + put_pcichild(hpdev); 1206 1190 return; 1207 1191 1208 1192 free_int_desc: 1209 1193 kfree(int_desc); 1210 1194 drop_reference: 1211 - put_pcichild(hpdev, hv_pcidev_ref_by_slot); 1195 + put_pcichild(hpdev); 1212 1196 return_null_message: 1213 1197 msg->address_hi = 0; 1214 1198 msg->address_lo = 0; ··· 1299 1283 */ 1300 1284 static void survey_child_resources(struct hv_pcibus_device *hbus) 1301 1285 { 1302 - struct list_head *iter; 1303 1286 struct hv_pci_dev *hpdev; 1304 1287 resource_size_t bar_size = 0; 1305 1288 unsigned long flags; ··· 1324 1309 * for a child device are a power of 2 in size and aligned in memory, 1325 1310 * so it's sufficient to just add them up without tracking 
alignment. 1326 1311 */ 1327 - list_for_each(iter, &hbus->children) { 1328 - hpdev = container_of(iter, struct hv_pci_dev, list_entry); 1312 + list_for_each_entry(hpdev, &hbus->children, list_entry) { 1329 1313 for (i = 0; i < 6; i++) { 1330 1314 if (hpdev->probed_bar[i] & PCI_BASE_ADDRESS_SPACE_IO) 1331 1315 dev_err(&hbus->hdev->device, ··· 1377 1363 resource_size_t low_base = 0; 1378 1364 resource_size_t bar_size; 1379 1365 struct hv_pci_dev *hpdev; 1380 - struct list_head *iter; 1381 1366 unsigned long flags; 1382 1367 u64 bar_val; 1383 1368 u32 command; ··· 1398 1385 1399 1386 /* Pick addresses for the BARs. */ 1400 1387 do { 1401 - list_for_each(iter, &hbus->children) { 1402 - hpdev = container_of(iter, struct hv_pci_dev, 1403 - list_entry); 1388 + list_for_each_entry(hpdev, &hbus->children, list_entry) { 1404 1389 for (i = 0; i < 6; i++) { 1405 1390 bar_val = hpdev->probed_bar[i]; 1406 1391 if (bar_val == 0) ··· 1519 1508 complete(&completion->host_event); 1520 1509 } 1521 1510 1522 - static void get_pcichild(struct hv_pci_dev *hpdev, 1523 - enum hv_pcidev_ref_reason reason) 1524 - { 1525 - refcount_inc(&hpdev->refs); 1526 - } 1527 - 1528 - static void put_pcichild(struct hv_pci_dev *hpdev, 1529 - enum hv_pcidev_ref_reason reason) 1530 - { 1531 - if (refcount_dec_and_test(&hpdev->refs)) 1532 - kfree(hpdev); 1533 - } 1534 - 1535 1511 /** 1536 1512 * new_pcichild_device() - Create a new child device 1537 1513 * @hbus: The internal struct tracking this root PCI bus. 
··· 1566 1568 if (ret) 1567 1569 goto error; 1568 1570 1569 - wait_for_completion(&comp_pkt.host_event); 1571 + if (wait_for_response(hbus->hdev, &comp_pkt.host_event)) 1572 + goto error; 1570 1573 1571 1574 hpdev->desc = *desc; 1572 1575 refcount_set(&hpdev->refs, 1); 1573 - get_pcichild(hpdev, hv_pcidev_ref_childlist); 1576 + get_pcichild(hpdev); 1574 1577 spin_lock_irqsave(&hbus->device_list_lock, flags); 1575 1578 1576 - /* 1577 - * When a device is being added to the bus, we set the PCI domain 1578 - * number to be the device serial number, which is non-zero and 1579 - * unique on the same VM. The serial numbers start with 1, and 1580 - * increase by 1 for each device. So device names including this 1581 - * can have shorter names than based on the bus instance UUID. 1582 - * Only the first device serial number is used for domain, so the 1583 - * domain number will not change after the first device is added. 1584 - */ 1585 - if (list_empty(&hbus->children)) 1586 - hbus->sysdata.domain = desc->ser; 1587 1579 list_add_tail(&hpdev->list_entry, &hbus->children); 1588 1580 spin_unlock_irqrestore(&hbus->device_list_lock, flags); 1589 1581 return hpdev; ··· 1606 1618 list_for_each_entry(iter, &hbus->children, list_entry) { 1607 1619 if (iter->desc.win_slot.slot == wslot) { 1608 1620 hpdev = iter; 1609 - get_pcichild(hpdev, hv_pcidev_ref_by_slot); 1621 + get_pcichild(hpdev); 1610 1622 break; 1611 1623 } 1612 1624 } ··· 1642 1654 { 1643 1655 u32 child_no; 1644 1656 bool found; 1645 - struct list_head *iter; 1646 1657 struct pci_function_description *new_desc; 1647 1658 struct hv_pci_dev *hpdev; 1648 1659 struct hv_pcibus_device *hbus; ··· 1678 1691 1679 1692 /* First, mark all existing children as reported missing. 
*/ 1680 1693 spin_lock_irqsave(&hbus->device_list_lock, flags); 1681 - list_for_each(iter, &hbus->children) { 1682 - hpdev = container_of(iter, struct hv_pci_dev, 1683 - list_entry); 1684 - hpdev->reported_missing = true; 1694 + list_for_each_entry(hpdev, &hbus->children, list_entry) { 1695 + hpdev->reported_missing = true; 1685 1696 } 1686 1697 spin_unlock_irqrestore(&hbus->device_list_lock, flags); 1687 1698 ··· 1689 1704 new_desc = &dr->func[child_no]; 1690 1705 1691 1706 spin_lock_irqsave(&hbus->device_list_lock, flags); 1692 - list_for_each(iter, &hbus->children) { 1693 - hpdev = container_of(iter, struct hv_pci_dev, 1694 - list_entry); 1695 - if ((hpdev->desc.win_slot.slot == 1696 - new_desc->win_slot.slot) && 1707 + list_for_each_entry(hpdev, &hbus->children, list_entry) { 1708 + if ((hpdev->desc.win_slot.slot == new_desc->win_slot.slot) && 1697 1709 (hpdev->desc.v_id == new_desc->v_id) && 1698 1710 (hpdev->desc.d_id == new_desc->d_id) && 1699 1711 (hpdev->desc.ser == new_desc->ser)) { ··· 1712 1730 spin_lock_irqsave(&hbus->device_list_lock, flags); 1713 1731 do { 1714 1732 found = false; 1715 - list_for_each(iter, &hbus->children) { 1716 - hpdev = container_of(iter, struct hv_pci_dev, 1717 - list_entry); 1733 + list_for_each_entry(hpdev, &hbus->children, list_entry) { 1718 1734 if (hpdev->reported_missing) { 1719 1735 found = true; 1720 - put_pcichild(hpdev, hv_pcidev_ref_childlist); 1736 + put_pcichild(hpdev); 1721 1737 list_move_tail(&hpdev->list_entry, &removed); 1722 1738 break; 1723 1739 } ··· 1728 1748 hpdev = list_first_entry(&removed, struct hv_pci_dev, 1729 1749 list_entry); 1730 1750 list_del(&hpdev->list_entry); 1731 - put_pcichild(hpdev, hv_pcidev_ref_initial); 1751 + put_pcichild(hpdev); 1732 1752 } 1733 1753 1734 1754 switch (hbus->state) { ··· 1863 1883 sizeof(*ejct_pkt), (unsigned long)&ctxt.pkt, 1864 1884 VM_PKT_DATA_INBAND, 0); 1865 1885 1866 - put_pcichild(hpdev, hv_pcidev_ref_childlist); 1867 - put_pcichild(hpdev, hv_pcidev_ref_pnp); 
1886 + put_pcichild(hpdev); 1887 + put_pcichild(hpdev); 1868 1888 put_hvpcibus(hpdev->hbus); 1869 1889 } 1870 1890 ··· 1879 1899 static void hv_pci_eject_device(struct hv_pci_dev *hpdev) 1880 1900 { 1881 1901 hpdev->state = hv_pcichild_ejecting; 1882 - get_pcichild(hpdev, hv_pcidev_ref_pnp); 1902 + get_pcichild(hpdev); 1883 1903 INIT_WORK(&hpdev->wrk, hv_eject_device_work); 1884 1904 get_hvpcibus(hpdev->hbus); 1885 1905 queue_work(hpdev->hbus->wq, &hpdev->wrk); ··· 1979 1999 dev_message->wslot.slot); 1980 2000 if (hpdev) { 1981 2001 hv_pci_eject_device(hpdev); 1982 - put_pcichild(hpdev, 1983 - hv_pcidev_ref_by_slot); 2002 + put_pcichild(hpdev); 1984 2003 } 1985 2004 break; 1986 2005 ··· 2048 2069 sizeof(struct pci_version_request), 2049 2070 (unsigned long)pkt, VM_PKT_DATA_INBAND, 2050 2071 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); 2072 + if (!ret) 2073 + ret = wait_for_response(hdev, &comp_pkt.host_event); 2074 + 2051 2075 if (ret) { 2052 2076 dev_err(&hdev->device, 2053 - "PCI Pass-through VSP failed sending version reqquest: %#x", 2077 + "PCI Pass-through VSP failed to request version: %d", 2054 2078 ret); 2055 2079 goto exit; 2056 2080 } 2057 - 2058 - wait_for_completion(&comp_pkt.host_event); 2059 2081 2060 2082 if (comp_pkt.completion_status >= 0) { 2061 2083 pci_protocol_version = pci_protocol_versions[i]; ··· 2266 2286 ret = vmbus_sendpacket(hdev->channel, d0_entry, sizeof(*d0_entry), 2267 2287 (unsigned long)pkt, VM_PKT_DATA_INBAND, 2268 2288 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); 2289 + if (!ret) 2290 + ret = wait_for_response(hdev, &comp_pkt.host_event); 2291 + 2269 2292 if (ret) 2270 2293 goto exit; 2271 - 2272 - wait_for_completion(&comp_pkt.host_event); 2273 2294 2274 2295 if (comp_pkt.completion_status < 0) { 2275 2296 dev_err(&hdev->device, ··· 2311 2330 2312 2331 ret = vmbus_sendpacket(hdev->channel, &message, sizeof(message), 2313 2332 0, VM_PKT_DATA_INBAND, 0); 2314 - if (ret) 2315 - return ret; 2333 + if (!ret) 2334 + ret = 
wait_for_response(hdev, &comp); 2316 2335 2317 - wait_for_completion(&comp); 2318 - return 0; 2336 + return ret; 2319 2337 } 2320 2338 2321 2339 /** ··· 2378 2398 PCI_RESOURCES_ASSIGNED2; 2379 2399 res_assigned2->wslot.slot = hpdev->desc.win_slot.slot; 2380 2400 } 2381 - put_pcichild(hpdev, hv_pcidev_ref_by_slot); 2401 + put_pcichild(hpdev); 2382 2402 2383 2403 ret = vmbus_sendpacket(hdev->channel, &pkt->message, 2384 2404 size_res, (unsigned long)pkt, 2385 2405 VM_PKT_DATA_INBAND, 2386 2406 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); 2407 + if (!ret) 2408 + ret = wait_for_response(hdev, &comp_pkt.host_event); 2387 2409 if (ret) 2388 2410 break; 2389 - 2390 - wait_for_completion(&comp_pkt.host_event); 2391 2411 2392 2412 if (comp_pkt.completion_status < 0) { 2393 2413 ret = -EPROTO; ··· 2426 2446 pkt.message_type.type = PCI_RESOURCES_RELEASED; 2427 2447 pkt.wslot.slot = hpdev->desc.win_slot.slot; 2428 2448 2429 - put_pcichild(hpdev, hv_pcidev_ref_by_slot); 2449 + put_pcichild(hpdev); 2430 2450 2431 2451 ret = vmbus_sendpacket(hdev->channel, &pkt, sizeof(pkt), 0, 2432 2452 VM_PKT_DATA_INBAND, 0); ··· 2439 2459 2440 2460 static void get_hvpcibus(struct hv_pcibus_device *hbus) 2441 2461 { 2442 - atomic_inc(&hbus->remove_lock); 2462 + refcount_inc(&hbus->remove_lock); 2443 2463 } 2444 2464 2445 2465 static void put_hvpcibus(struct hv_pcibus_device *hbus) 2446 2466 { 2447 - if (atomic_dec_and_test(&hbus->remove_lock)) 2467 + if (refcount_dec_and_test(&hbus->remove_lock)) 2448 2468 complete(&hbus->remove_event); 2449 2469 } 2450 2470 ··· 2488 2508 hdev->dev_instance.b[8] << 8; 2489 2509 2490 2510 hbus->hdev = hdev; 2491 - atomic_inc(&hbus->remove_lock); 2511 + refcount_set(&hbus->remove_lock, 1); 2492 2512 INIT_LIST_HEAD(&hbus->children); 2493 2513 INIT_LIST_HEAD(&hbus->dr_list); 2494 2514 INIT_LIST_HEAD(&hbus->resources_for_children);
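Much of the pci-hyperv churn is a refcounting cleanup: the per-caller `enum hv_pcidev_ref_reason` tags are dropped, and `get_pcichild()`/`put_pcichild()` become a plain `refcount_inc()` / `refcount_dec_and_test()`-then-free pair. A userspace sketch of that free-on-last-put pattern (the `freed` test hook is an addition for illustration, not part of the driver):

```c
#include <stdbool.h>
#include <stdlib.h>

struct child {
	int refs;
	bool *freed;	/* test hook: records that the object was destroyed */
};

/* Mirrors get_pcichild(): take another reference. */
static void get_child(struct child *c)
{
	c->refs++;
}

/* Mirrors put_pcichild(): drop a reference, free on the last one. */
static void put_child(struct child *c)
{
	if (--c->refs == 0) {
		*c->freed = true;
		free(c);
	}
}

/* Object starts life holding one reference, like refcount_set(&refs, 1). */
static struct child *new_child(bool *freed)
{
	struct child *c = calloc(1, sizeof(*c));

	c->refs = 1;
	c->freed = freed;
	return c;
}
```

In the kernel the counter is a `refcount_t` rather than a bare int, which traps on underflow and overflow instead of silently wrapping, so the reason tags become unnecessary for debugging.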
+2
drivers/pci/host/pci-mvebu.c
··· 21 21 #include <linux/of_pci.h> 22 22 #include <linux/of_platform.h> 23 23 24 + #include "../pci.h" 25 + 24 26 /* 25 27 * PCIe unit register offsets. 26 28 */
+2
drivers/pci/host/pci-rcar-gen2.c
··· 21 21 #include <linux/sizes.h> 22 22 #include <linux/slab.h> 23 23 24 + #include "../pci.h" 25 + 24 26 /* AHB-PCI Bridge PCI communication registers */ 25 27 #define RCAR_AHBPCI_PCICOM_OFFSET 0x800 26 28
+2
drivers/pci/host/pci-tegra.c
··· 40 40 #include <soc/tegra/cpuidle.h> 41 41 #include <soc/tegra/pmc.h> 42 42 43 + #include "../pci.h" 44 + 43 45 #define INT_PCI_MSI_NR (8 * 32) 44 46 45 47 /* register definitions */
+4 -1
drivers/pci/host/pci-v3-semi.c
··· 33 33 #include <linux/regmap.h> 34 34 #include <linux/clk.h> 35 35 36 + #include "../pci.h" 37 + 36 38 #define V3_PCI_VENDOR 0x00000000 37 39 #define V3_PCI_DEVICE 0x00000002 38 40 #define V3_PCI_CMD 0x00000004 ··· 793 791 if (IS_ERR(v3->config_base)) 794 792 return PTR_ERR(v3->config_base); 795 793 796 - ret = of_pci_get_host_bridge_resources(np, 0, 0xff, &res, &io_base); 794 + ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &res, 795 + &io_base); 797 796 if (ret) 798 797 return ret; 799 798
+3 -2
drivers/pci/host/pci-versatile.c
··· 15 15 #include <linux/pci.h> 16 16 #include <linux/platform_device.h> 17 17 18 + #include "../pci.h" 19 + 18 20 static void __iomem *versatile_pci_base; 19 21 static void __iomem *versatile_cfg_base[2]; 20 22 ··· 66 64 struct list_head *res) 67 65 { 68 66 int err, mem = 1, res_valid = 0; 69 - struct device_node *np = dev->of_node; 70 67 resource_size_t iobase; 71 68 struct resource_entry *win, *tmp; 72 69 73 - err = of_pci_get_host_bridge_resources(np, 0, 0xff, res, &iobase); 70 + err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, res, &iobase); 74 71 if (err) 75 72 return err; 76 73
+4 -1
drivers/pci/host/pci-xgene.c
··· 22 22 #include <linux/platform_device.h> 23 23 #include <linux/slab.h> 24 24 25 + #include "../pci.h" 26 + 25 27 #define PCIECORE_CTLANDSTATUS 0x50 26 28 #define PIM1_1L 0x80 27 29 #define IBAR2 0x98 ··· 634 632 if (ret) 635 633 return ret; 636 634 637 - ret = of_pci_get_host_bridge_resources(dn, 0, 0xff, &res, &iobase); 635 + ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &res, 636 + &iobase); 638 637 if (ret) 639 638 return ret; 640 639
+4 -3
drivers/pci/host/pcie-altera.c
··· 17 17 #include <linux/platform_device.h> 18 18 #include <linux/slab.h> 19 19 20 + #include "../pci.h" 21 + 20 22 #define RP_TX_REG0 0x2000 21 23 #define RP_TX_REG1 0x2004 22 24 #define RP_TX_CNTRL 0x2008 ··· 490 488 { 491 489 int err, res_valid = 0; 492 490 struct device *dev = &pcie->pdev->dev; 493 - struct device_node *np = dev->of_node; 494 491 struct resource_entry *win; 495 492 496 - err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pcie->resources, 497 - NULL); 493 + err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, 494 + &pcie->resources, NULL); 498 495 if (err) 499 496 return err; 500 497
+3 -2
drivers/pci/host/pcie-iproc-platform.c
···
16 16	#include <linux/of_platform.h>
17 17	#include <linux/phy/phy.h>
18 18
19 +	#include "../pci.h"
19 20	#include "pcie-iproc.h"
20 21
21 22	static const struct of_device_id iproc_pcie_of_match_table[] = {
···
100 99			pcie->phy = NULL;
101 100		}
102 101
103 -		ret = of_pci_get_host_bridge_resources(np, 0, 0xff, &resources,
104 -						       &iobase);
102 +		ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &resources,
103 +							    &iobase);
105 104		if (ret) {
106 105			dev_err(dev, "unable to get PCI host bridge resources\n");
107 106			return ret;
+160 -110
drivers/pci/host/pcie-mediatek.c
···
11 11	#include <linux/delay.h>
12 12	#include <linux/iopoll.h>
13 13	#include <linux/irq.h>
14 +	#include <linux/irqchip/chained_irq.h>
14 15	#include <linux/irqdomain.h>
15 16	#include <linux/kernel.h>
17 +	#include <linux/msi.h>
16 18	#include <linux/of_address.h>
17 19	#include <linux/of_pci.h>
18 20	#include <linux/of_platform.h>
···
23 21	#include <linux/platform_device.h>
24 22	#include <linux/pm_runtime.h>
25 23	#include <linux/reset.h>
24 +
25 +	#include "../pci.h"
26 26
27 27	/* PCIe shared registers */
28 28	#define PCIE_SYS_CFG		0x00
···
70 66
71 67	/* PCIe V2 per-port registers */
72 68	#define PCIE_MSI_VECTOR		0x0c0
69 +
70 +	#define PCIE_CONF_VEND_ID	0x100
71 +	#define PCIE_CONF_CLASS_ID	0x106
72 +
73 73	#define PCIE_INT_MASK		0x420
74 74	#define INTX_MASK		GENMASK(19, 16)
75 75	#define INTX_SHIFT		16
···
133 125
134 126	/**
135 127	 * struct mtk_pcie_soc - differentiate between host generations
136 -	 * @has_msi: whether this host supports MSI interrupts or not
128 +	 * @need_fix_class_id: whether this host's class ID needed to be fixed or not
137 129	 * @ops: pointer to configuration access functions
138 130	 * @startup: pointer to controller setting functions
139 131	 * @setup_irq: pointer to initialize IRQ functions
140 132	 */
141 133	struct mtk_pcie_soc {
142 -		bool has_msi;
134 +		bool need_fix_class_id;
143 135		struct pci_ops *ops;
144 136		int (*startup)(struct mtk_pcie_port *port);
145 137		int (*setup_irq)(struct mtk_pcie_port *port, struct device_node *node);
···
163 155	 * @lane: lane count
164 156	 * @slot: port slot
165 157	 * @irq_domain: legacy INTx IRQ domain
158 +	 * @inner_domain: inner IRQ domain
166 159	 * @msi_domain: MSI IRQ domain
160 +	 * @lock: protect the msi_irq_in_use bitmap
167 161	 * @msi_irq_in_use: bit map for assigned MSI IRQ
168 162	 */
169 163	struct mtk_pcie_port {
···
183 173		u32 lane;
184 174		u32 slot;
185 175		struct irq_domain *irq_domain;
176 +		struct irq_domain *inner_domain;
186 177		struct irq_domain *msi_domain;
178 +		struct mutex lock;
187 179		DECLARE_BITMAP(msi_irq_in_use, MTK_MSI_IRQS_NUM);
188 180	};
189 181
···
387 375	{
388 376		struct mtk_pcie *pcie = port->pcie;
389 377		struct resource *mem = &pcie->mem;
378 +		const struct mtk_pcie_soc *soc = port->pcie->soc;
390 379		u32 val;
391 380		size_t size;
392 381		int err;
···
416 403		      PCIE_MAC_SRSTB | PCIE_CRSTB;
417 404		writel(val, port->base + PCIE_RST_CTRL);
418 405
406 +		/* Set up vendor ID and class code */
407 +		if (soc->need_fix_class_id) {
408 +			val = PCI_VENDOR_ID_MEDIATEK;
409 +			writew(val, port->base + PCIE_CONF_VEND_ID);
410 +
411 +			val = PCI_CLASS_BRIDGE_HOST;
412 +			writew(val, port->base + PCIE_CONF_CLASS_ID);
413 +		}
414 +
419 415		/* 100ms timeout value should be enough for Gen1/2 training */
420 416		err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val,
421 417					 !!(val & PCIE_PORT_LINKUP_V2), 20,
···
452 430		return 0;
453 431	}
454 432
455 -	static int mtk_pcie_msi_alloc(struct mtk_pcie_port *port)
433 +	static void mtk_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
456 434	{
457 -		int msi;
458 -
459 -		msi = find_first_zero_bit(port->msi_irq_in_use, MTK_MSI_IRQS_NUM);
460 -		if (msi < MTK_MSI_IRQS_NUM)
461 -			set_bit(msi, port->msi_irq_in_use);
462 -		else
463 -			return -ENOSPC;
464 -
465 -		return msi;
466 -	}
467 -
468 -	static void mtk_pcie_msi_free(struct mtk_pcie_port *port, unsigned long hwirq)
469 -	{
470 -		clear_bit(hwirq, port->msi_irq_in_use);
471 -	}
472 -
473 -	static int mtk_pcie_msi_setup_irq(struct msi_controller *chip,
474 -					  struct pci_dev *pdev, struct msi_desc *desc)
475 -	{
476 -		struct mtk_pcie_port *port;
477 -		struct msi_msg msg;
478 -		unsigned int irq;
479 -		int hwirq;
480 -		phys_addr_t msg_addr;
481 -
482 -		port = mtk_pcie_find_port(pdev->bus, pdev->devfn);
483 -		if (!port)
484 -			return -EINVAL;
485 -
486 -		hwirq = mtk_pcie_msi_alloc(port);
487 -		if (hwirq < 0)
488 -			return hwirq;
489 -
490 -		irq = irq_create_mapping(port->msi_domain, hwirq);
491 -		if (!irq) {
492 -			mtk_pcie_msi_free(port, hwirq);
493 -			return -EINVAL;
494 -		}
495 -
496 -		chip->dev = &pdev->dev;
497 -
498 -		irq_set_msi_desc(irq, desc);
435 +		struct mtk_pcie_port *port = irq_data_get_irq_chip_data(data);
436 +		phys_addr_t addr;
499 437
500 438		/* MT2712/MT7622 only support 32-bit MSI addresses */
501 -		msg_addr = virt_to_phys(port->base + PCIE_MSI_VECTOR);
502 -		msg.address_hi = 0;
503 -		msg.address_lo = lower_32_bits(msg_addr);
504 -		msg.data = hwirq;
439 +		addr = virt_to_phys(port->base + PCIE_MSI_VECTOR);
440 +		msg->address_hi = 0;
441 +		msg->address_lo = lower_32_bits(addr);
505 442
506 -		pci_write_msi_msg(irq, &msg);
443 +		msg->data = data->hwirq;
444 +
445 +		dev_dbg(port->pcie->dev, "msi#%d address_hi %#x address_lo %#x\n",
446 +			(int)data->hwirq, msg->address_hi, msg->address_lo);
447 +	}
448 +
449 +	static int mtk_msi_set_affinity(struct irq_data *irq_data,
450 +					const struct cpumask *mask, bool force)
451 +	{
452 +		return -EINVAL;
453 +	}
454 +
455 +	static void mtk_msi_ack_irq(struct irq_data *data)
456 +	{
457 +		struct mtk_pcie_port *port = irq_data_get_irq_chip_data(data);
458 +		u32 hwirq = data->hwirq;
459 +
460 +		writel(1 << hwirq, port->base + PCIE_IMSI_STATUS);
461 +	}
462 +
463 +	static struct irq_chip mtk_msi_bottom_irq_chip = {
464 +		.name			= "MTK MSI",
465 +		.irq_compose_msi_msg	= mtk_compose_msi_msg,
466 +		.irq_set_affinity	= mtk_msi_set_affinity,
467 +		.irq_ack		= mtk_msi_ack_irq,
468 +	};
469 +
470 +	static int mtk_pcie_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
471 +					     unsigned int nr_irqs, void *args)
472 +	{
473 +		struct mtk_pcie_port *port = domain->host_data;
474 +		unsigned long bit;
475 +
476 +		WARN_ON(nr_irqs != 1);
477 +		mutex_lock(&port->lock);
478 +
479 +		bit = find_first_zero_bit(port->msi_irq_in_use, MTK_MSI_IRQS_NUM);
480 +		if (bit >= MTK_MSI_IRQS_NUM) {
481 +			mutex_unlock(&port->lock);
482 +			return -ENOSPC;
483 +		}
484 +
485 +		__set_bit(bit, port->msi_irq_in_use);
486 +
487 +		mutex_unlock(&port->lock);
488 +
489 +		irq_domain_set_info(domain, virq, bit, &mtk_msi_bottom_irq_chip,
490 +				    domain->host_data, handle_edge_irq,
491 +				    NULL, NULL);
507 492
508 493		return 0;
509 494	}
510 495
511 -	static void mtk_msi_teardown_irq(struct msi_controller *chip, unsigned int irq)
496 +	static void mtk_pcie_irq_domain_free(struct irq_domain *domain,
497 +					     unsigned int virq, unsigned int nr_irqs)
512 498	{
513 -		struct pci_dev *pdev = to_pci_dev(chip->dev);
514 -		struct irq_data *d = irq_get_irq_data(irq);
515 -		irq_hw_number_t hwirq = irqd_to_hwirq(d);
516 -		struct mtk_pcie_port *port;
499 +		struct irq_data *d = irq_domain_get_irq_data(domain, virq);
500 +		struct mtk_pcie_port *port = irq_data_get_irq_chip_data(d);
517 501
518 -		port = mtk_pcie_find_port(pdev->bus, pdev->devfn);
519 -		if (!port)
520 -			return;
502 +		mutex_lock(&port->lock);
521 503
522 -		irq_dispose_mapping(irq);
523 -		mtk_pcie_msi_free(port, hwirq);
524 -	}
504 +		if (!test_bit(d->hwirq, port->msi_irq_in_use))
505 +			dev_err(port->pcie->dev, "trying to free unused MSI#%lu\n",
506 +				d->hwirq);
507 +		else
508 +			__clear_bit(d->hwirq, port->msi_irq_in_use);
525 509
526 -	static struct msi_controller mtk_pcie_msi_chip = {
527 -		.setup_irq = mtk_pcie_msi_setup_irq,
528 -		.teardown_irq = mtk_msi_teardown_irq,
529 -	};
510 +		mutex_unlock(&port->lock);
530 511
531 -	static struct irq_chip mtk_msi_irq_chip = {
532 -		.name = "MTK PCIe MSI",
533 -		.irq_enable = pci_msi_unmask_irq,
534 -		.irq_disable = pci_msi_mask_irq,
535 -		.irq_mask = pci_msi_mask_irq,
536 -		.irq_unmask = pci_msi_unmask_irq,
537 -	};
538 -
539 -	static int mtk_pcie_msi_map(struct irq_domain *domain, unsigned int irq,
540 -				    irq_hw_number_t hwirq)
541 -	{
542 -		irq_set_chip_and_handler(irq, &mtk_msi_irq_chip, handle_simple_irq);
543 -		irq_set_chip_data(irq, domain->host_data);
544 -
545 -		return 0;
512 +		irq_domain_free_irqs_parent(domain, virq, nr_irqs);
546 513	}
547 514
548 515	static const struct irq_domain_ops msi_domain_ops = {
549 -		.map = mtk_pcie_msi_map,
516 +		.alloc	= mtk_pcie_irq_domain_alloc,
517 +		.free	= mtk_pcie_irq_domain_free,
550 518	};
519 +
520 +	static struct irq_chip mtk_msi_irq_chip = {
521 +		.name		= "MTK PCIe MSI",
522 +		.irq_ack	= irq_chip_ack_parent,
523 +		.irq_mask	= pci_msi_mask_irq,
524 +		.irq_unmask	= pci_msi_unmask_irq,
525 +	};
526 +
527 +	static struct msi_domain_info mtk_msi_domain_info = {
528 +		.flags	= (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
529 +			   MSI_FLAG_PCI_MSIX),
530 +		.chip	= &mtk_msi_irq_chip,
531 +	};
532 +
533 +	static int mtk_pcie_allocate_msi_domains(struct mtk_pcie_port *port)
534 +	{
535 +		struct fwnode_handle *fwnode = of_node_to_fwnode(port->pcie->dev->of_node);
536 +
537 +		mutex_init(&port->lock);
538 +
539 +		port->inner_domain = irq_domain_create_linear(fwnode, MTK_MSI_IRQS_NUM,
540 +							      &msi_domain_ops, port);
541 +		if (!port->inner_domain) {
542 +			dev_err(port->pcie->dev, "failed to create IRQ domain\n");
543 +			return -ENOMEM;
544 +		}
545 +
546 +		port->msi_domain = pci_msi_create_irq_domain(fwnode, &mtk_msi_domain_info,
547 +							     port->inner_domain);
548 +		if (!port->msi_domain) {
549 +			dev_err(port->pcie->dev, "failed to create MSI domain\n");
550 +			irq_domain_remove(port->inner_domain);
551 +			return -ENOMEM;
552 +		}
553 +
554 +		return 0;
555 +	}
551 556
552 557	static void mtk_pcie_enable_msi(struct mtk_pcie_port *port)
553 558	{
···
608 559	{
609 560		struct device *dev = port->pcie->dev;
610 561		struct device_node *pcie_intc_node;
562 +		int ret;
611 563
612 564		/* Setup INTx */
613 565		pcie_intc_node = of_get_next_child(node, NULL);
···
625 575		}
626 576
627 577		if (IS_ENABLED(CONFIG_PCI_MSI)) {
628 -			port->msi_domain = irq_domain_add_linear(node, MTK_MSI_IRQS_NUM,
629 -								 &msi_domain_ops,
630 -								 &mtk_pcie_msi_chip);
631 -			if (!port->msi_domain) {
632 -				dev_err(dev, "failed to create MSI IRQ domain\n");
633 -				return -ENODEV;
634 -			}
578 +			ret = mtk_pcie_allocate_msi_domains(port);
579 +			if (ret)
580 +				return ret;
581 +
635 582			mtk_pcie_enable_msi(port);
636 583		}
637 584
638 585		return 0;
639 586	}
640 587
641 -	static irqreturn_t mtk_pcie_intr_handler(int irq, void *data)
588 +	static void mtk_pcie_intr_handler(struct irq_desc *desc)
642 589	{
643 -		struct mtk_pcie_port *port = (struct mtk_pcie_port *)data;
590 +		struct mtk_pcie_port *port = irq_desc_get_handler_data(desc);
591 +		struct irq_chip *irqchip = irq_desc_get_chip(desc);
644 592		unsigned long status;
645 593		u32 virq;
646 594		u32 bit = INTX_SHIFT;
647 595
648 -		while ((status = readl(port->base + PCIE_INT_STATUS)) & INTX_MASK) {
596 +		chained_irq_enter(irqchip, desc);
597 +
598 +		status = readl(port->base + PCIE_INT_STATUS);
599 +		if (status & INTX_MASK) {
649 600			for_each_set_bit_from(bit, &status, PCI_NUM_INTX + INTX_SHIFT) {
650 601				/* Clear the INTx */
651 602				writel(1 << bit, port->base + PCIE_INT_STATUS);
···
657 606		}
658 607
659 608		if (IS_ENABLED(CONFIG_PCI_MSI)) {
660 -			while ((status = readl(port->base + PCIE_INT_STATUS)) & MSI_STATUS) {
609 +			if (status & MSI_STATUS){
661 610				unsigned long imsi_status;
662 611
663 612				while ((imsi_status = readl(port->base + PCIE_IMSI_STATUS))) {
664 613					for_each_set_bit(bit, &imsi_status, MTK_MSI_IRQS_NUM) {
665 -						/* Clear the MSI */
666 -						writel(1 << bit, port->base + PCIE_IMSI_STATUS);
667 -						virq = irq_find_mapping(port->msi_domain, bit);
614 +						virq = irq_find_mapping(port->inner_domain, bit);
668 615						generic_handle_irq(virq);
669 616					}
670 617				}
···
671 622			}
672 623		}
673 624
674 -		return IRQ_HANDLED;
625 +		chained_irq_exit(irqchip, desc);
626 +
627 +		return;
675 628	}
676 629
677 630	static int mtk_pcie_setup_irq(struct mtk_pcie_port *port,
···
684 633		struct platform_device *pdev = to_platform_device(dev);
685 634		int err, irq;
686 635
687 -		irq = platform_get_irq(pdev, port->slot);
688 -		err = devm_request_irq(dev, irq, mtk_pcie_intr_handler,
689 -				       IRQF_SHARED, "mtk-pcie", port);
690 -		if (err) {
691 -			dev_err(dev, "unable to request IRQ %d\n", irq);
692 -			return err;
693 -		}
694 -
695 636		err = mtk_pcie_init_irq_domain(port, node);
696 637		if (err) {
697 638			dev_err(dev, "failed to init PCIe IRQ domain\n");
698 639			return err;
699 640		}
641 +
642 +		irq = platform_get_irq(pdev, port->slot);
643 +		irq_set_chained_handler_and_data(irq, mtk_pcie_intr_handler, port);
700 644
701 645		return 0;
702 646	}
···
1126 1080		host->map_irq = of_irq_parse_and_map_pci;
1127 1081		host->swizzle_irq = pci_common_swizzle;
1128 1082		host->sysdata = pcie;
1129 -		if (IS_ENABLED(CONFIG_PCI_MSI) && pcie->soc->has_msi)
1130 -			host->msi = &mtk_pcie_msi_chip;
1131 1083
1132 1084		err = pci_scan_root_bus_bridge(host);
1133 1085		if (err < 0)
···
1186 1142		.startup = mtk_pcie_startup_port,
1187 1143	};
1188 1144
1189 -	static const struct mtk_pcie_soc mtk_pcie_soc_v2 = {
1190 -		.has_msi = true,
1145 +	static const struct mtk_pcie_soc mtk_pcie_soc_mt2712 = {
1146 +		.ops = &mtk_pcie_ops_v2,
1147 +		.startup = mtk_pcie_startup_port_v2,
1148 +		.setup_irq = mtk_pcie_setup_irq,
1149 +	};
1150 +
1151 +	static const struct mtk_pcie_soc mtk_pcie_soc_mt7622 = {
1152 +		.need_fix_class_id = true,
1191 1153		.ops = &mtk_pcie_ops_v2,
1192 1154		.startup = mtk_pcie_startup_port_v2,
1193 1155		.setup_irq = mtk_pcie_setup_irq,
···
1202 1152	static const struct of_device_id mtk_pcie_ids[] = {
1203 1153		{ .compatible = "mediatek,mt2701-pcie", .data = &mtk_pcie_soc_v1 },
1204 1154		{ .compatible = "mediatek,mt7623-pcie", .data = &mtk_pcie_soc_v1 },
1205 -		{ .compatible = "mediatek,mt2712-pcie", .data = &mtk_pcie_soc_v2 },
1206 -		{ .compatible = "mediatek,mt7622-pcie", .data = &mtk_pcie_soc_v2 },
1155 +		{ .compatible = "mediatek,mt2712-pcie", .data = &mtk_pcie_soc_mt2712 },
1156 +		{ .compatible = "mediatek,mt7622-pcie", .data = &mtk_pcie_soc_mt7622 },
1207 1157		{},
1208 1158	};
1209 1159
+866
drivers/pci/host/pcie-mobiveil.c
···
1 +	// SPDX-License-Identifier: GPL-2.0
2 +	/*
3 +	 * PCIe host controller driver for Mobiveil PCIe Host controller
4 +	 *
5 +	 * Copyright (c) 2018 Mobiveil Inc.
6 +	 * Author: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
7 +	 */
8 +
9 +	#include <linux/delay.h>
10 +	#include <linux/init.h>
11 +	#include <linux/interrupt.h>
12 +	#include <linux/irq.h>
13 +	#include <linux/irqchip/chained_irq.h>
14 +	#include <linux/irqdomain.h>
15 +	#include <linux/kernel.h>
16 +	#include <linux/module.h>
17 +	#include <linux/msi.h>
18 +	#include <linux/of_address.h>
19 +	#include <linux/of_irq.h>
20 +	#include <linux/of_platform.h>
21 +	#include <linux/of_pci.h>
22 +	#include <linux/pci.h>
23 +	#include <linux/platform_device.h>
24 +	#include <linux/slab.h>
25 +
26 +	/* register offsets and bit positions */
27 +
28 +	/*
29 +	 * translation tables are grouped into windows, each window registers are
30 +	 * grouped into blocks of 4 or 16 registers each
31 +	 */
32 +	#define PAB_REG_BLOCK_SIZE	16
33 +	#define PAB_EXT_REG_BLOCK_SIZE	4
34 +
35 +	#define PAB_REG_ADDR(offset, win)	(offset + (win * PAB_REG_BLOCK_SIZE))
36 +	#define PAB_EXT_REG_ADDR(offset, win)	(offset + (win * PAB_EXT_REG_BLOCK_SIZE))
37 +
38 +	#define LTSSM_STATUS		0x0404
39 +	#define LTSSM_STATUS_L0_MASK	0x3f
40 +	#define LTSSM_STATUS_L0		0x2d
41 +
42 +	#define PAB_CTRL		0x0808
43 +	#define AMBA_PIO_ENABLE_SHIFT	0
44 +	#define PEX_PIO_ENABLE_SHIFT	1
45 +	#define PAGE_SEL_SHIFT		13
46 +	#define PAGE_SEL_MASK		0x3f
47 +	#define PAGE_LO_MASK		0x3ff
48 +	#define PAGE_SEL_EN		0xc00
49 +	#define PAGE_SEL_OFFSET_SHIFT	10
50 +
51 +	#define PAB_AXI_PIO_CTRL	0x0840
52 +	#define APIO_EN_MASK		0xf
53 +
54 +	#define PAB_PEX_PIO_CTRL	0x08c0
55 +	#define PIO_ENABLE_SHIFT	0
56 +
57 +	#define PAB_INTP_AMBA_MISC_ENB	0x0b0c
58 +	#define PAB_INTP_AMBA_MISC_STAT	0x0b1c
59 +	#define PAB_INTP_INTX_MASK	0x01e0
60 +	#define PAB_INTP_MSI_MASK	0x8
61 +
62 +	#define PAB_AXI_AMAP_CTRL(win)	PAB_REG_ADDR(0x0ba0, win)
63 +	#define WIN_ENABLE_SHIFT	0
64 +	#define WIN_TYPE_SHIFT		1
65 +
66 +	#define PAB_EXT_AXI_AMAP_SIZE(win)	PAB_EXT_REG_ADDR(0xbaf0, win)
67 +
68 +	#define PAB_AXI_AMAP_AXI_WIN(win)	PAB_REG_ADDR(0x0ba4, win)
69 +	#define AXI_WINDOW_ALIGN_MASK		3
70 +
71 +	#define PAB_AXI_AMAP_PEX_WIN_L(win)	PAB_REG_ADDR(0x0ba8, win)
72 +	#define PAB_BUS_SHIFT			24
73 +	#define PAB_DEVICE_SHIFT		19
74 +	#define PAB_FUNCTION_SHIFT		16
75 +
76 +	#define PAB_AXI_AMAP_PEX_WIN_H(win)	PAB_REG_ADDR(0x0bac, win)
77 +	#define PAB_INTP_AXI_PIO_CLASS		0x474
78 +
79 +	#define PAB_PEX_AMAP_CTRL(win)	PAB_REG_ADDR(0x4ba0, win)
80 +	#define AMAP_CTRL_EN_SHIFT	0
81 +	#define AMAP_CTRL_TYPE_SHIFT	1
82 +
83 +	#define PAB_EXT_PEX_AMAP_SIZEN(win)	PAB_EXT_REG_ADDR(0xbef0, win)
84 +	#define PAB_PEX_AMAP_AXI_WIN(win)	PAB_REG_ADDR(0x4ba4, win)
85 +	#define PAB_PEX_AMAP_PEX_WIN_L(win)	PAB_REG_ADDR(0x4ba8, win)
86 +	#define PAB_PEX_AMAP_PEX_WIN_H(win)	PAB_REG_ADDR(0x4bac, win)
87 +
88 +	/* starting offset of INTX bits in status register */
89 +	#define PAB_INTX_START	5
90 +
91 +	/* supported number of MSI interrupts */
92 +	#define PCI_NUM_MSI	16
93 +
94 +	/* MSI registers */
95 +	#define MSI_BASE_LO_OFFSET	0x04
96 +	#define MSI_BASE_HI_OFFSET	0x08
97 +	#define MSI_SIZE_OFFSET		0x0c
98 +	#define MSI_ENABLE_OFFSET	0x14
99 +	#define MSI_STATUS_OFFSET	0x18
100 +	#define MSI_DATA_OFFSET		0x20
101 +	#define MSI_ADDR_L_OFFSET	0x24
102 +	#define MSI_ADDR_H_OFFSET	0x28
103 +
104 +	/* outbound and inbound window definitions */
105 +	#define WIN_NUM_0		0
106 +	#define WIN_NUM_1		1
107 +	#define CFG_WINDOW_TYPE		0
108 +	#define IO_WINDOW_TYPE		1
109 +	#define MEM_WINDOW_TYPE		2
110 +	#define IB_WIN_SIZE		(256 * 1024 * 1024 * 1024)
111 +	#define MAX_PIO_WINDOWS		8
112 +
113 +	/* Parameters for the waiting for link up routine */
114 +	#define LINK_WAIT_MAX_RETRIES	10
115 +	#define LINK_WAIT_MIN		90000
116 +	#define LINK_WAIT_MAX		100000
117 +
118 +	struct mobiveil_msi {			/* MSI information */
119 +		struct mutex lock;		/* protect bitmap variable */
120 +		struct irq_domain *msi_domain;
121 +		struct irq_domain *dev_domain;
122 +		phys_addr_t msi_pages_phys;
123 +		int num_of_vectors;
124 +		DECLARE_BITMAP(msi_irq_in_use, PCI_NUM_MSI);
125 +	};
126 +
127 +	struct mobiveil_pcie {
128 +		struct platform_device *pdev;
129 +		struct list_head resources;
130 +		void __iomem *config_axi_slave_base;	/* endpoint config base */
131 +		void __iomem *csr_axi_slave_base;	/* root port config base */
132 +		void __iomem *apb_csr_base;	/* MSI register base */
133 +		void __iomem *pcie_reg_base;	/* Physical PCIe Controller Base */
134 +		struct irq_domain *intx_domain;
135 +		raw_spinlock_t intx_mask_lock;
136 +		int irq;
137 +		int apio_wins;
138 +		int ppio_wins;
139 +		int ob_wins_configured;		/* configured outbound windows */
140 +		int ib_wins_configured;		/* configured inbound windows */
141 +		struct resource *ob_io_res;
142 +		char root_bus_nr;
143 +		struct mobiveil_msi msi;
144 +	};
145 +
146 +	static inline void csr_writel(struct mobiveil_pcie *pcie, const u32 value,
147 +				      const u32 reg)
148 +	{
149 +		writel_relaxed(value, pcie->csr_axi_slave_base + reg);
150 +	}
151 +
152 +	static inline u32 csr_readl(struct mobiveil_pcie *pcie, const u32 reg)
153 +	{
154 +		return readl_relaxed(pcie->csr_axi_slave_base + reg);
155 +	}
156 +
157 +	static bool mobiveil_pcie_link_up(struct mobiveil_pcie *pcie)
158 +	{
159 +		return (csr_readl(pcie, LTSSM_STATUS) &
160 +			LTSSM_STATUS_L0_MASK) == LTSSM_STATUS_L0;
161 +	}
162 +
163 +	static bool mobiveil_pcie_valid_device(struct pci_bus *bus, unsigned int devfn)
164 +	{
165 +		struct mobiveil_pcie *pcie = bus->sysdata;
166 +
167 +		/* Only one device down on each root port */
168 +		if ((bus->number == pcie->root_bus_nr) && (devfn > 0))
169 +			return false;
170 +
171 +		/*
172 +		 * Do not read more than one device on the bus directly
173 +		 * attached to RC
174 +		 */
175 +		if ((bus->primary == pcie->root_bus_nr) && (devfn > 0))
176 +			return false;
177 +
178 +		return true;
179 +	}
180 +
181 +	/*
182 +	 * mobiveil_pcie_map_bus - routine to get the configuration base of either
183 +	 * root port or endpoint
184 +	 */
185 +	static void __iomem *mobiveil_pcie_map_bus(struct pci_bus *bus,
186 +						   unsigned int devfn, int where)
187 +	{
188 +		struct mobiveil_pcie *pcie = bus->sysdata;
189 +
190 +		if (!mobiveil_pcie_valid_device(bus, devfn))
191 +			return NULL;
192 +
193 +		if (bus->number == pcie->root_bus_nr) {
194 +			/* RC config access */
195 +			return pcie->csr_axi_slave_base + where;
196 +		}
197 +
198 +		/*
199 +		 * EP config access (in Config/APIO space)
200 +		 * Program PEX Address base (31..16 bits) with appropriate value
201 +		 * (BDF) in PAB_AXI_AMAP_PEX_WIN_L0 Register.
202 +		 * Relies on pci_lock serialization
203 +		 */
204 +		csr_writel(pcie, bus->number << PAB_BUS_SHIFT |
205 +			   PCI_SLOT(devfn) << PAB_DEVICE_SHIFT |
206 +			   PCI_FUNC(devfn) << PAB_FUNCTION_SHIFT,
207 +			   PAB_AXI_AMAP_PEX_WIN_L(WIN_NUM_0));
208 +		return pcie->config_axi_slave_base + where;
209 +	}
210 +
211 +	static struct pci_ops mobiveil_pcie_ops = {
212 +		.map_bus = mobiveil_pcie_map_bus,
213 +		.read = pci_generic_config_read,
214 +		.write = pci_generic_config_write,
215 +	};
216 +
217 +	static void mobiveil_pcie_isr(struct irq_desc *desc)
218 +	{
219 +		struct irq_chip *chip = irq_desc_get_chip(desc);
220 +		struct mobiveil_pcie *pcie = irq_desc_get_handler_data(desc);
221 +		struct device *dev = &pcie->pdev->dev;
222 +		struct mobiveil_msi *msi = &pcie->msi;
223 +		u32 msi_data, msi_addr_lo, msi_addr_hi;
224 +		u32 intr_status, msi_status;
225 +		unsigned long shifted_status;
226 +		u32 bit, virq, val, mask;
227 +
228 +		/*
229 +		 * The core provides a single interrupt for both INTx/MSI messages.
230 +		 * So we'll read both INTx and MSI status
231 +		 */
232 +
233 +		chained_irq_enter(chip, desc);
234 +
235 +		/* read INTx status */
236 +		val = csr_readl(pcie, PAB_INTP_AMBA_MISC_STAT);
237 +		mask = csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB);
238 +		intr_status = val & mask;
239 +
240 +		/* Handle INTx */
241 +		if (intr_status & PAB_INTP_INTX_MASK) {
242 +			shifted_status = csr_readl(pcie, PAB_INTP_AMBA_MISC_STAT) >>
243 +					 PAB_INTX_START;
244 +			do {
245 +				for_each_set_bit(bit, &shifted_status, PCI_NUM_INTX) {
246 +					virq = irq_find_mapping(pcie->intx_domain,
247 +								bit + 1);
248 +					if (virq)
249 +						generic_handle_irq(virq);
250 +					else
251 +						dev_err_ratelimited(dev,
252 +							"unexpected IRQ, INT%d\n", bit);
253 +
254 +					/* clear interrupt */
255 +					csr_writel(pcie,
256 +						   shifted_status << PAB_INTX_START,
257 +						   PAB_INTP_AMBA_MISC_STAT);
258 +				}
259 +			} while ((shifted_status >> PAB_INTX_START) != 0);
260 +		}
261 +
262 +		/* read extra MSI status register */
263 +		msi_status = readl_relaxed(pcie->apb_csr_base + MSI_STATUS_OFFSET);
264 +
265 +		/* handle MSI interrupts */
266 +		while (msi_status & 1) {
267 +			msi_data = readl_relaxed(pcie->apb_csr_base
268 +						 + MSI_DATA_OFFSET);
269 +
270 +			/*
271 +			 * MSI_STATUS_OFFSET register gets updated to zero
272 +			 * once we pop not only the MSI data but also address
273 +			 * from MSI hardware FIFO. So keeping these following
274 +			 * two dummy reads.
275 +			 */
276 +			msi_addr_lo = readl_relaxed(pcie->apb_csr_base +
277 +						    MSI_ADDR_L_OFFSET);
278 +			msi_addr_hi = readl_relaxed(pcie->apb_csr_base +
279 +						    MSI_ADDR_H_OFFSET);
280 +			dev_dbg(dev, "MSI registers, data: %08x, addr: %08x:%08x\n",
281 +				msi_data, msi_addr_hi, msi_addr_lo);
282 +
283 +			virq = irq_find_mapping(msi->dev_domain, msi_data);
284 +			if (virq)
285 +				generic_handle_irq(virq);
286 +
287 +			msi_status = readl_relaxed(pcie->apb_csr_base +
288 +						   MSI_STATUS_OFFSET);
289 +		}
290 +
291 +		/* Clear the interrupt status */
292 +		csr_writel(pcie, intr_status, PAB_INTP_AMBA_MISC_STAT);
293 +		chained_irq_exit(chip, desc);
294 +	}
295 +
296 +	static int mobiveil_pcie_parse_dt(struct mobiveil_pcie *pcie)
297 +	{
298 +		struct device *dev = &pcie->pdev->dev;
299 +		struct platform_device *pdev = pcie->pdev;
300 +		struct device_node *node = dev->of_node;
301 +		struct resource *res;
302 +		const char *type;
303 +
304 +		type = of_get_property(node, "device_type", NULL);
305 +		if (!type || strcmp(type, "pci")) {
306 +			dev_err(dev, "invalid \"device_type\" %s\n", type);
307 +			return -EINVAL;
308 +		}
309 +
310 +		/* map config resource */
311 +		res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
312 +						   "config_axi_slave");
313 +		pcie->config_axi_slave_base = devm_pci_remap_cfg_resource(dev, res);
314 +		if (IS_ERR(pcie->config_axi_slave_base))
315 +			return PTR_ERR(pcie->config_axi_slave_base);
316 +		pcie->ob_io_res = res;
317 +
318 +		/* map csr resource */
319 +		res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
320 +						   "csr_axi_slave");
321 +		pcie->csr_axi_slave_base = devm_pci_remap_cfg_resource(dev, res);
322 +		if (IS_ERR(pcie->csr_axi_slave_base))
323 +			return PTR_ERR(pcie->csr_axi_slave_base);
324 +		pcie->pcie_reg_base = res->start;
325 +
326 +		/* map MSI config resource */
327 +		res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "apb_csr");
328 +		pcie->apb_csr_base = devm_pci_remap_cfg_resource(dev, res);
329 +		if (IS_ERR(pcie->apb_csr_base))
330 +			return PTR_ERR(pcie->apb_csr_base);
331 +
332 +		/* read the number of windows requested */
333 +		if (of_property_read_u32(node, "apio-wins", &pcie->apio_wins))
334 +			pcie->apio_wins = MAX_PIO_WINDOWS;
335 +
336 +		if (of_property_read_u32(node, "ppio-wins", &pcie->ppio_wins))
337 +			pcie->ppio_wins = MAX_PIO_WINDOWS;
338 +
339 +		pcie->irq = platform_get_irq(pdev, 0);
340 +		if (pcie->irq <= 0) {
341 +			dev_err(dev, "failed to map IRQ: %d\n", pcie->irq);
342 +			return -ENODEV;
343 +		}
344 +
345 +		irq_set_chained_handler_and_data(pcie->irq, mobiveil_pcie_isr, pcie);
346 +
347 +		return 0;
348 +	}
349 +
350 +	/*
351 +	 * select_paged_register - routine to access paged register of root complex
352 +	 *
353 +	 * registers of RC are paged, for this scheme to work
354 +	 * extracted higher 6 bits of the offset will be written to pg_sel
355 +	 * field of PAB_CTRL register and rest of the lower 10 bits enabled with
356 +	 * PAGE_SEL_EN are used as offset of the register.
357 +	 */
358 +	static void select_paged_register(struct mobiveil_pcie *pcie, u32 offset)
359 +	{
360 +		int pab_ctrl_dw, pg_sel;
361 +
362 +		/* clear pg_sel field */
363 +		pab_ctrl_dw = csr_readl(pcie, PAB_CTRL);
364 +		pab_ctrl_dw = (pab_ctrl_dw & ~(PAGE_SEL_MASK << PAGE_SEL_SHIFT));
365 +
366 +		/* set pg_sel field */
367 +		pg_sel = (offset >> PAGE_SEL_OFFSET_SHIFT) & PAGE_SEL_MASK;
368 +		pab_ctrl_dw |= ((pg_sel << PAGE_SEL_SHIFT));
369 +		csr_writel(pcie, pab_ctrl_dw, PAB_CTRL);
370 +	}
371 +
372 +	static void write_paged_register(struct mobiveil_pcie *pcie,
373 +					 u32 val, u32 offset)
374 +	{
375 +		u32 off = (offset & PAGE_LO_MASK) | PAGE_SEL_EN;
376 +
377 +		select_paged_register(pcie, offset);
378 +		csr_writel(pcie, val, off);
379 +	}
380 +
381 +	static u32 read_paged_register(struct mobiveil_pcie *pcie, u32 offset)
382 +	{
383 +		u32 off = (offset & PAGE_LO_MASK) | PAGE_SEL_EN;
384 +
385 +		select_paged_register(pcie, offset);
386 +		return csr_readl(pcie, off);
387 +	}
388 +
389 +	static void program_ib_windows(struct mobiveil_pcie *pcie, int win_num,
390 +				       int pci_addr, u32 type, u64 size)
391 +	{
392 +		int pio_ctrl_val;
393 +		int amap_ctrl_dw;
394 +		u64 size64 = ~(size - 1);
395 +
396 +		if ((pcie->ib_wins_configured + 1) > pcie->ppio_wins) {
397 +			dev_err(&pcie->pdev->dev,
398 +				"ERROR: max inbound windows reached !\n");
399 +			return;
400 +		}
401 +
402 +		pio_ctrl_val = csr_readl(pcie, PAB_PEX_PIO_CTRL);
403 +		csr_writel(pcie,
404 +			   pio_ctrl_val | (1 << PIO_ENABLE_SHIFT), PAB_PEX_PIO_CTRL);
405 +		amap_ctrl_dw = read_paged_register(pcie, PAB_PEX_AMAP_CTRL(win_num));
406 +		amap_ctrl_dw = (amap_ctrl_dw | (type << AMAP_CTRL_TYPE_SHIFT));
407 +		amap_ctrl_dw = (amap_ctrl_dw | (1 << AMAP_CTRL_EN_SHIFT));
408 +
409 +		write_paged_register(pcie, amap_ctrl_dw | lower_32_bits(size64),
410 +				     PAB_PEX_AMAP_CTRL(win_num));
411 +
412 +		write_paged_register(pcie, upper_32_bits(size64),
413 +				     PAB_EXT_PEX_AMAP_SIZEN(win_num));
414 +
415 +		write_paged_register(pcie, pci_addr, PAB_PEX_AMAP_AXI_WIN(win_num));
416 +		write_paged_register(pcie, pci_addr, PAB_PEX_AMAP_PEX_WIN_L(win_num));
417 +		write_paged_register(pcie, 0, PAB_PEX_AMAP_PEX_WIN_H(win_num));
418 +	}
419 +
420 +	/*
421 +	 * routine to program the outbound windows
422 +	 */
423 +	static void program_ob_windows(struct mobiveil_pcie *pcie, int win_num,
424 +				       u64 cpu_addr, u64 pci_addr, u32 config_io_bit, u64 size)
425 +	{
426 +
427 +		u32 value, type;
428 +		u64 size64 = ~(size - 1);
429 +
430 +		if ((pcie->ob_wins_configured + 1) > pcie->apio_wins) {
431 +			dev_err(&pcie->pdev->dev,
432 +				"ERROR: max outbound windows reached !\n");
433 +			return;
434 +		}
435 +
436 +		/*
437 +		 * program Enable Bit to 1, Type Bit to (00) base 2, AXI Window Size Bit
438 +		 * to 4 KB in PAB_AXI_AMAP_CTRL register
439 +		 */
440 +		type = config_io_bit;
441 +		value = csr_readl(pcie, PAB_AXI_AMAP_CTRL(win_num));
442 +		csr_writel(pcie, 1 << WIN_ENABLE_SHIFT | type << WIN_TYPE_SHIFT |
443 +			   lower_32_bits(size64), PAB_AXI_AMAP_CTRL(win_num));
444 +
445 +		write_paged_register(pcie, upper_32_bits(size64),
446 +				     PAB_EXT_AXI_AMAP_SIZE(win_num));
447 +
448 +		/*
449 +		 * program AXI window base with appropriate value in
450 +		 * PAB_AXI_AMAP_AXI_WIN0 register
451 +		 */
452 +		value = csr_readl(pcie, PAB_AXI_AMAP_AXI_WIN(win_num));
453 +		csr_writel(pcie, cpu_addr & (~AXI_WINDOW_ALIGN_MASK),
454 +			   PAB_AXI_AMAP_AXI_WIN(win_num));
455 +
456 +		value = csr_readl(pcie, PAB_AXI_AMAP_PEX_WIN_H(win_num));
457 +
458 +		csr_writel(pcie, lower_32_bits(pci_addr),
459 +			   PAB_AXI_AMAP_PEX_WIN_L(win_num));
460 +		csr_writel(pcie, upper_32_bits(pci_addr),
461 +			   PAB_AXI_AMAP_PEX_WIN_H(win_num));
462 +
463 +		pcie->ob_wins_configured++;
464 +	}
465 +
466 +	static int mobiveil_bringup_link(struct mobiveil_pcie *pcie)
467 +	{
468 +		int retries;
469 +
470 +		/* check if the link is up or not */
471 +		for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
472 +			if (mobiveil_pcie_link_up(pcie))
473 +				return 0;
474 +
475 +			usleep_range(LINK_WAIT_MIN, LINK_WAIT_MAX);
476 +		}
477 +		dev_err(&pcie->pdev->dev, "link never came up\n");
478 +		return -ETIMEDOUT;
479 +	}
480 +
481 +	static void mobiveil_pcie_enable_msi(struct mobiveil_pcie *pcie)
482 +	{
483 +		phys_addr_t msg_addr = pcie->pcie_reg_base;
484 +		struct mobiveil_msi *msi = &pcie->msi;
485 +
486 +		pcie->msi.num_of_vectors = PCI_NUM_MSI;
487 +		msi->msi_pages_phys = (phys_addr_t)msg_addr;
488 +
489 +		writel_relaxed(lower_32_bits(msg_addr),
490 +			       pcie->apb_csr_base + MSI_BASE_LO_OFFSET);
491 +		writel_relaxed(upper_32_bits(msg_addr),
492 +			       pcie->apb_csr_base + MSI_BASE_HI_OFFSET);
493 +		writel_relaxed(4096, pcie->apb_csr_base + MSI_SIZE_OFFSET);
494 +		writel_relaxed(1, pcie->apb_csr_base + MSI_ENABLE_OFFSET);
495 +	}
496 +
497 +	static int mobiveil_host_init(struct mobiveil_pcie *pcie)
498 +	{
499 +		u32 value, pab_ctrl, type = 0;
500 +		int err;
501 +		struct resource_entry *win, *tmp;
502 +
503 +		err = mobiveil_bringup_link(pcie);
504 +		if (err) {
505 +			dev_info(&pcie->pdev->dev, "link bring-up failed\n");
506 +			return err;
507 +		}
508 +
509 +		/*
510 +		 * program Bus Master Enable Bit in Command Register in PAB Config
511 +		 * Space
512 +		 */
513 +		value = csr_readl(pcie, PCI_COMMAND);
514 +		csr_writel(pcie, value | PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
515 +			   PCI_COMMAND_MASTER, PCI_COMMAND);
516 +
517 +		/*
518 +		 * program PIO Enable Bit to 1 (and PEX PIO Enable to 1) in PAB_CTRL
519 +		 * register
520 +		 */
521 +		pab_ctrl = csr_readl(pcie, PAB_CTRL);
522 +		csr_writel(pcie, pab_ctrl | (1 << AMBA_PIO_ENABLE_SHIFT) |
523 +			   (1 << PEX_PIO_ENABLE_SHIFT), PAB_CTRL);
524 +
525 +		csr_writel(pcie, (PAB_INTP_INTX_MASK | PAB_INTP_MSI_MASK),
526 +			   PAB_INTP_AMBA_MISC_ENB);
527 +
528 +		/*
529 +		 * program PIO Enable Bit to 1 and Config Window Enable Bit to 1 in
530 +		 * PAB_AXI_PIO_CTRL Register
531 +		 */
532 +		value = csr_readl(pcie, PAB_AXI_PIO_CTRL);
533 +		csr_writel(pcie, value | APIO_EN_MASK, PAB_AXI_PIO_CTRL);
534 +
535 +		/*
536 +		 * we'll program one outbound window for config reads and
537 +		 * another default inbound window for all the upstream traffic
538 +		 * rest of the outbound windows will be configured according to
539 +		 * the "ranges" field defined in device tree
540 +		 */
541 +
542 +		/* config outbound translation window */
543 +		program_ob_windows(pcie, pcie->ob_wins_configured,
544 +				   pcie->ob_io_res->start, 0, CFG_WINDOW_TYPE,
545 +				   resource_size(pcie->ob_io_res));
546 +
547 +		/* memory inbound translation window */
548 +		program_ib_windows(pcie, WIN_NUM_1, 0, MEM_WINDOW_TYPE, IB_WIN_SIZE);
549 +
550 +		/* Get the I/O and memory ranges from DT */
551 +		resource_list_for_each_entry_safe(win, tmp, &pcie->resources) {
552 +			type = 0;
553 +			if (resource_type(win->res) == IORESOURCE_MEM)
554 +				type = MEM_WINDOW_TYPE;
555 +			if (resource_type(win->res) == IORESOURCE_IO)
556 +				type = IO_WINDOW_TYPE;
557 +			if (type) {
558 +				/* configure outbound translation window */
559 +				program_ob_windows(pcie, pcie->ob_wins_configured,
560 +						   win->res->start, 0, type,
561 +						   resource_size(win->res));
562 +			}
563 +		}
564 +
565 +		/* setup MSI hardware registers */
566 +		mobiveil_pcie_enable_msi(pcie);
567 +
568 +		return err;
569 +	}
570 +
571 +	static void mobiveil_mask_intx_irq(struct irq_data *data)
572 +	{
573 +		struct irq_desc *desc = irq_to_desc(data->irq);
574 +		struct mobiveil_pcie *pcie;
575 +		unsigned long flags;
576 +		u32 mask, shifted_val;
577 +
578 +		pcie = irq_desc_get_chip_data(desc);
579 +		mask = 1 << ((data->hwirq + PAB_INTX_START) - 1);
580 +		raw_spin_lock_irqsave(&pcie->intx_mask_lock, flags);
581 +		shifted_val = csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB);
582 +		csr_writel(pcie, (shifted_val & (~mask)), PAB_INTP_AMBA_MISC_ENB);
583 +		raw_spin_unlock_irqrestore(&pcie->intx_mask_lock, flags);
584 +	}
585 +
586 +	static void mobiveil_unmask_intx_irq(struct irq_data *data)
587 +	{
588 +		struct irq_desc *desc = irq_to_desc(data->irq);
589 +		struct mobiveil_pcie *pcie;
590 +		unsigned long flags;
591 +		u32 shifted_val, mask;
592 +
593 +		pcie = irq_desc_get_chip_data(desc);
594 +		mask = 1 << ((data->hwirq + PAB_INTX_START) - 1);
595 +		raw_spin_lock_irqsave(&pcie->intx_mask_lock, flags);
596 +		shifted_val = csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB);
597 +		csr_writel(pcie, (shifted_val | mask), PAB_INTP_AMBA_MISC_ENB);
598 +		raw_spin_unlock_irqrestore(&pcie->intx_mask_lock, flags);
599 +	}
600 +
601 +	static struct irq_chip intx_irq_chip = {
602 +		.name = "mobiveil_pcie:intx",
603 +		.irq_enable = mobiveil_unmask_intx_irq,
604 +		.irq_disable = mobiveil_mask_intx_irq,
605 +		.irq_mask = mobiveil_mask_intx_irq,
606 +		.irq_unmask = mobiveil_unmask_intx_irq,
607 +	};
608 +
609 +	/* routine to setup the INTx related data */
610 +	static int mobiveil_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
611 +					  irq_hw_number_t hwirq)
612 +	{
613 +		irq_set_chip_and_handler(irq, &intx_irq_chip, handle_level_irq);
614 +		irq_set_chip_data(irq, domain->host_data);
615 +		return 0;
616 +	}
617 +
618 +	/* INTx domain operations structure */
619 +
static const struct irq_domain_ops intx_domain_ops = { 620 + .map = mobiveil_pcie_intx_map, 621 + }; 622 + 623 + static struct irq_chip mobiveil_msi_irq_chip = { 624 + .name = "Mobiveil PCIe MSI", 625 + .irq_mask = pci_msi_mask_irq, 626 + .irq_unmask = pci_msi_unmask_irq, 627 + }; 628 + 629 + static struct msi_domain_info mobiveil_msi_domain_info = { 630 + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | 631 + MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX), 632 + .chip = &mobiveil_msi_irq_chip, 633 + }; 634 + 635 + static void mobiveil_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) 636 + { 637 + struct mobiveil_pcie *pcie = irq_data_get_irq_chip_data(data); 638 + phys_addr_t addr = pcie->pcie_reg_base + (data->hwirq * sizeof(int)); 639 + 640 + msg->address_lo = lower_32_bits(addr); 641 + msg->address_hi = upper_32_bits(addr); 642 + msg->data = data->hwirq; 643 + 644 + dev_dbg(&pcie->pdev->dev, "msi#%d address_hi %#x address_lo %#x\n", 645 + (int)data->hwirq, msg->address_hi, msg->address_lo); 646 + } 647 + 648 + static int mobiveil_msi_set_affinity(struct irq_data *irq_data, 649 + const struct cpumask *mask, bool force) 650 + { 651 + return -EINVAL; 652 + } 653 + 654 + static struct irq_chip mobiveil_msi_bottom_irq_chip = { 655 + .name = "Mobiveil MSI", 656 + .irq_compose_msi_msg = mobiveil_compose_msi_msg, 657 + .irq_set_affinity = mobiveil_msi_set_affinity, 658 + }; 659 + 660 + static int mobiveil_irq_msi_domain_alloc(struct irq_domain *domain, 661 + unsigned int virq, unsigned int nr_irqs, void *args) 662 + { 663 + struct mobiveil_pcie *pcie = domain->host_data; 664 + struct mobiveil_msi *msi = &pcie->msi; 665 + unsigned long bit; 666 + 667 + WARN_ON(nr_irqs != 1); 668 + mutex_lock(&msi->lock); 669 + 670 + bit = find_first_zero_bit(msi->msi_irq_in_use, msi->num_of_vectors); 671 + if (bit >= msi->num_of_vectors) { 672 + mutex_unlock(&msi->lock); 673 + return -ENOSPC; 674 + } 675 + 676 + set_bit(bit, msi->msi_irq_in_use); 677 + 678 + 
mutex_unlock(&msi->lock); 679 + 680 + irq_domain_set_info(domain, virq, bit, &mobiveil_msi_bottom_irq_chip, 681 + domain->host_data, handle_level_irq, 682 + NULL, NULL); 683 + return 0; 684 + } 685 + 686 + static void mobiveil_irq_msi_domain_free(struct irq_domain *domain, 687 + unsigned int virq, unsigned int nr_irqs) 688 + { 689 + struct irq_data *d = irq_domain_get_irq_data(domain, virq); 690 + struct mobiveil_pcie *pcie = irq_data_get_irq_chip_data(d); 691 + struct mobiveil_msi *msi = &pcie->msi; 692 + 693 + mutex_lock(&msi->lock); 694 + 695 + if (!test_bit(d->hwirq, msi->msi_irq_in_use)) { 696 + dev_err(&pcie->pdev->dev, "trying to free unused MSI#%lu\n", 697 + d->hwirq); 698 + } else { 699 + __clear_bit(d->hwirq, msi->msi_irq_in_use); 700 + } 701 + 702 + mutex_unlock(&msi->lock); 703 + } 704 + static const struct irq_domain_ops msi_domain_ops = { 705 + .alloc = mobiveil_irq_msi_domain_alloc, 706 + .free = mobiveil_irq_msi_domain_free, 707 + }; 708 + 709 + static int mobiveil_allocate_msi_domains(struct mobiveil_pcie *pcie) 710 + { 711 + struct device *dev = &pcie->pdev->dev; 712 + struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node); 713 + struct mobiveil_msi *msi = &pcie->msi; 714 + 715 + mutex_init(&pcie->msi.lock); 716 + msi->dev_domain = irq_domain_add_linear(NULL, msi->num_of_vectors, 717 + &msi_domain_ops, pcie); 718 + if (!msi->dev_domain) { 719 + dev_err(dev, "failed to create IRQ domain\n"); 720 + return -ENOMEM; 721 + } 722 + 723 + msi->msi_domain = pci_msi_create_irq_domain(fwnode, 724 + &mobiveil_msi_domain_info, msi->dev_domain); 725 + if (!msi->msi_domain) { 726 + dev_err(dev, "failed to create MSI domain\n"); 727 + irq_domain_remove(msi->dev_domain); 728 + return -ENOMEM; 729 + } 730 + return 0; 731 + } 732 + 733 + static int mobiveil_pcie_init_irq_domain(struct mobiveil_pcie *pcie) 734 + { 735 + struct device *dev = &pcie->pdev->dev; 736 + struct device_node *node = dev->of_node; 737 + int ret; 738 + 739 + /* setup INTx */ 740 + 
	pcie->intx_domain = irq_domain_add_linear(node,
		PCI_NUM_INTX, &intx_domain_ops, pcie);

	if (!pcie->intx_domain) {
		dev_err(dev, "Failed to get a INTx IRQ domain\n");
		return -ENODEV;
	}

	raw_spin_lock_init(&pcie->intx_mask_lock);

	/* setup MSI */
	ret = mobiveil_allocate_msi_domains(pcie);
	if (ret)
		return ret;

	return 0;
}

static int mobiveil_pcie_probe(struct platform_device *pdev)
{
	struct mobiveil_pcie *pcie;
	struct pci_bus *bus;
	struct pci_bus *child;
	struct pci_host_bridge *bridge;
	struct device *dev = &pdev->dev;
	resource_size_t iobase;
	int ret;

	/* allocate the PCIe port */
	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
	if (!bridge)
		return -ENODEV;

	pcie = pci_host_bridge_priv(bridge);
	if (!pcie)
		return -ENOMEM;

	pcie->pdev = pdev;

	ret = mobiveil_pcie_parse_dt(pcie);
	if (ret) {
		dev_err(dev, "Parsing DT failed, ret: %x\n", ret);
		return ret;
	}

	INIT_LIST_HEAD(&pcie->resources);

	/* parse the host bridge base addresses from the device tree file */
	ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff,
						    &pcie->resources, &iobase);
	if (ret) {
		dev_err(dev, "Getting bridge resources failed\n");
		return -ENOMEM;
	}

	/*
	 * configure all inbound and outbound windows and prepare the RC for
	 * config access
	 */
	ret = mobiveil_host_init(pcie);
	if (ret) {
		dev_err(dev, "Failed to initialize host\n");
		goto error;
	}

	/* fixup for PCIe class register */
	csr_writel(pcie, 0x060402ab, PAB_INTP_AXI_PIO_CLASS);

	/* initialize the IRQ domains */
	ret = mobiveil_pcie_init_irq_domain(pcie);
	if (ret) {
		dev_err(dev, "Failed creating IRQ Domain\n");
		goto error;
	}

	ret = devm_request_pci_bus_resources(dev, &pcie->resources);
	if (ret)
		goto error;

	/* Initialize bridge */
	list_splice_init(&pcie->resources, &bridge->windows);
	bridge->dev.parent = dev;
	bridge->sysdata = pcie;
	bridge->busnr = pcie->root_bus_nr;
	bridge->ops = &mobiveil_pcie_ops;
	bridge->map_irq = of_irq_parse_and_map_pci;
	bridge->swizzle_irq = pci_common_swizzle;

	/* setup the kernel resources for the newly added PCIe root bus */
	ret = pci_scan_root_bus_bridge(bridge);
	if (ret)
		goto error;

	bus = bridge->bus;

	pci_assign_unassigned_bus_resources(bus);
	list_for_each_entry(child, &bus->children, node)
		pcie_bus_configure_settings(child);
	pci_bus_add_devices(bus);

	return 0;
error:
	pci_free_resource_list(&pcie->resources);
	return ret;
}

static const struct of_device_id mobiveil_pcie_of_match[] = {
	{.compatible = "mbvl,gpex40-pcie",},
	{},
};

MODULE_DEVICE_TABLE(of, mobiveil_pcie_of_match);

static struct platform_driver mobiveil_pcie_driver = {
	.probe = mobiveil_pcie_probe,
	.driver = {
		.name = "mobiveil-pcie",
		.of_match_table = mobiveil_pcie_of_match,
		.suppress_bind_attrs = true,
	},
};

builtin_platform_driver(mobiveil_pcie_driver);

MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Mobiveil PCIe host controller driver");
MODULE_AUTHOR("Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>");
+156 -128
drivers/pci/host/pcie-rcar.c
··· 11 11 * Author: Phil Edworthy <phil.edworthy@renesas.com> 12 12 */ 13 13 14 + #include <linux/bitops.h> 14 15 #include <linux/clk.h> 15 16 #include <linux/delay.h> 16 17 #include <linux/interrupt.h> ··· 25 24 #include <linux/of_pci.h> 26 25 #include <linux/of_platform.h> 27 26 #include <linux/pci.h> 27 + #include <linux/phy/phy.h> 28 28 #include <linux/platform_device.h> 29 29 #include <linux/pm_runtime.h> 30 30 #include <linux/slab.h> 31 31 32 + #include "../pci.h" 33 + 32 34 #define PCIECAR 0x000010 33 35 #define PCIECCTLR 0x000018 34 - #define CONFIG_SEND_ENABLE (1 << 31) 36 + #define CONFIG_SEND_ENABLE BIT(31) 35 37 #define TYPE0 (0 << 8) 36 - #define TYPE1 (1 << 8) 38 + #define TYPE1 BIT(8) 37 39 #define PCIECDR 0x000020 38 40 #define PCIEMSR 0x000028 39 41 #define PCIEINTXR 0x000400 42 + #define PCIEPHYSR 0x0007f0 43 + #define PHYRDY BIT(0) 40 44 #define PCIEMSITXR 0x000840 41 45 42 46 /* Transfer control */ ··· 50 44 #define PCIETSTR 0x02004 51 45 #define DATA_LINK_ACTIVE 1 52 46 #define PCIEERRFR 0x02020 53 - #define UNSUPPORTED_REQUEST (1 << 4) 47 + #define UNSUPPORTED_REQUEST BIT(4) 54 48 #define PCIEMSIFR 0x02044 55 49 #define PCIEMSIALR 0x02048 56 50 #define MSIFE 1 ··· 63 57 /* local address reg & mask */ 64 58 #define PCIELAR(x) (0x02200 + ((x) * 0x20)) 65 59 #define PCIELAMR(x) (0x02208 + ((x) * 0x20)) 66 - #define LAM_PREFETCH (1 << 3) 67 - #define LAM_64BIT (1 << 2) 68 - #define LAR_ENABLE (1 << 1) 60 + #define LAM_PREFETCH BIT(3) 61 + #define LAM_64BIT BIT(2) 62 + #define LAR_ENABLE BIT(1) 69 63 70 64 /* PCIe address reg & mask */ 71 65 #define PCIEPALR(x) (0x03400 + ((x) * 0x20)) 72 66 #define PCIEPAUR(x) (0x03404 + ((x) * 0x20)) 73 67 #define PCIEPAMR(x) (0x03408 + ((x) * 0x20)) 74 68 #define PCIEPTCTLR(x) (0x0340c + ((x) * 0x20)) 75 - #define PAR_ENABLE (1 << 31) 76 - #define IO_SPACE (1 << 8) 69 + #define PAR_ENABLE BIT(31) 70 + #define IO_SPACE BIT(8) 77 71 78 72 /* Configuration */ 79 73 #define PCICONF(x) (0x010000 + ((x) * 0x4)) ··· 85 
79 #define IDSETR1 0x011004 86 80 #define TLCTLR 0x011048 87 81 #define MACSR 0x011054 88 - #define SPCHGFIN (1 << 4) 89 - #define SPCHGFAIL (1 << 6) 90 - #define SPCHGSUC (1 << 7) 82 + #define SPCHGFIN BIT(4) 83 + #define SPCHGFAIL BIT(6) 84 + #define SPCHGSUC BIT(7) 91 85 #define LINK_SPEED (0xf << 16) 92 86 #define LINK_SPEED_2_5GTS (1 << 16) 93 87 #define LINK_SPEED_5_0GTS (2 << 16) 94 88 #define MACCTLR 0x011058 95 - #define SPEED_CHANGE (1 << 24) 96 - #define SCRAMBLE_DISABLE (1 << 27) 89 + #define SPEED_CHANGE BIT(24) 90 + #define SCRAMBLE_DISABLE BIT(27) 97 91 #define MACS2R 0x011078 98 92 #define MACCGSPSETR 0x011084 99 - #define SPCNGRSN (1 << 31) 93 + #define SPCNGRSN BIT(31) 100 94 101 95 /* R-Car H1 PHY */ 102 96 #define H1_PCIEPHYADRR 0x04000c 103 - #define WRITE_CMD (1 << 16) 104 - #define PHY_ACK (1 << 24) 97 + #define WRITE_CMD BIT(16) 98 + #define PHY_ACK BIT(24) 105 99 #define RATE_POS 12 106 100 #define LANE_POS 8 107 101 #define ADR_POS 0 108 102 #define H1_PCIEPHYDOUTR 0x040014 109 - #define H1_PCIEPHYSR 0x040018 110 103 111 104 /* R-Car Gen2 PHY */ 112 105 #define GEN2_PCIEPHYADDR 0x780 113 106 #define GEN2_PCIEPHYDATA 0x784 114 107 #define GEN2_PCIEPHYCTRL 0x78c 115 108 116 - #define INT_PCI_MSI_NR 32 109 + #define INT_PCI_MSI_NR 32 117 110 118 - #define RCONF(x) (PCICONF(0)+(x)) 119 - #define RPMCAP(x) (PMCAP(0)+(x)) 120 - #define REXPCAP(x) (EXPCAP(0)+(x)) 121 - #define RVCCAP(x) (VCCAP(0)+(x)) 111 + #define RCONF(x) (PCICONF(0) + (x)) 112 + #define RPMCAP(x) (PMCAP(0) + (x)) 113 + #define REXPCAP(x) (EXPCAP(0) + (x)) 114 + #define RVCCAP(x) (VCCAP(0) + (x)) 122 115 123 - #define PCIE_CONF_BUS(b) (((b) & 0xff) << 24) 124 - #define PCIE_CONF_DEV(d) (((d) & 0x1f) << 19) 125 - #define PCIE_CONF_FUNC(f) (((f) & 0x7) << 16) 116 + #define PCIE_CONF_BUS(b) (((b) & 0xff) << 24) 117 + #define PCIE_CONF_DEV(d) (((d) & 0x1f) << 19) 118 + #define PCIE_CONF_FUNC(f) (((f) & 0x7) << 16) 126 119 127 - #define RCAR_PCI_MAX_RESOURCES 4 128 - #define 
MAX_NR_INBOUND_MAPS 6 120 + #define RCAR_PCI_MAX_RESOURCES 4 121 + #define MAX_NR_INBOUND_MAPS 6 129 122 130 123 struct rcar_msi { 131 124 DECLARE_BITMAP(used, INT_PCI_MSI_NR); ··· 144 139 /* Structure representing the PCIe interface */ 145 140 struct rcar_pcie { 146 141 struct device *dev; 142 + struct phy *phy; 147 143 void __iomem *base; 148 144 struct list_head resources; 149 145 int root_bus_nr; 150 - struct clk *clk; 151 146 struct clk *bus_clk; 152 147 struct rcar_msi msi; 153 148 }; ··· 532 527 phy_wait_for_ack(pcie); 533 528 } 534 529 535 - static int rcar_pcie_wait_for_dl(struct rcar_pcie *pcie) 530 + static int rcar_pcie_wait_for_phyrdy(struct rcar_pcie *pcie) 536 531 { 537 532 unsigned int timeout = 10; 533 + 534 + while (timeout--) { 535 + if (rcar_pci_read_reg(pcie, PCIEPHYSR) & PHYRDY) 536 + return 0; 537 + 538 + msleep(5); 539 + } 540 + 541 + return -ETIMEDOUT; 542 + } 543 + 544 + static int rcar_pcie_wait_for_dl(struct rcar_pcie *pcie) 545 + { 546 + unsigned int timeout = 10000; 538 547 539 548 while (timeout--) { 540 549 if ((rcar_pci_read_reg(pcie, PCIETSTR) & DATA_LINK_ACTIVE)) 541 550 return 0; 542 551 543 - msleep(5); 552 + udelay(5); 553 + cpu_relax(); 544 554 } 545 555 546 556 return -ETIMEDOUT; ··· 570 550 571 551 /* Set mode */ 572 552 rcar_pci_write_reg(pcie, 1, PCIEMSR); 553 + 554 + err = rcar_pcie_wait_for_phyrdy(pcie); 555 + if (err) 556 + return err; 573 557 574 558 /* 575 559 * Initial header for port config space is type 1, set the device ··· 629 605 return 0; 630 606 } 631 607 632 - static int rcar_pcie_hw_init_h1(struct rcar_pcie *pcie) 608 + static int rcar_pcie_phy_init_h1(struct rcar_pcie *pcie) 633 609 { 634 - unsigned int timeout = 10; 635 - 636 610 /* Initialize the phy */ 637 611 phy_write_reg(pcie, 0, 0x42, 0x1, 0x0EC34191); 638 612 phy_write_reg(pcie, 1, 0x42, 0x1, 0x0EC34180); ··· 649 627 phy_write_reg(pcie, 0, 0x64, 0x1, 0x3F0F1F0F); 650 628 phy_write_reg(pcie, 0, 0x66, 0x1, 0x00008000); 651 629 652 - while (timeout--) 
{ 653 - if (rcar_pci_read_reg(pcie, H1_PCIEPHYSR)) 654 - return rcar_pcie_hw_init(pcie); 655 - 656 - msleep(5); 657 - } 658 - 659 - return -ETIMEDOUT; 630 + return 0; 660 631 } 661 632 662 - static int rcar_pcie_hw_init_gen2(struct rcar_pcie *pcie) 633 + static int rcar_pcie_phy_init_gen2(struct rcar_pcie *pcie) 663 634 { 664 635 /* 665 636 * These settings come from the R-Car Series, 2nd Generation User's ··· 669 654 rcar_pci_write_reg(pcie, 0x00000001, GEN2_PCIEPHYCTRL); 670 655 rcar_pci_write_reg(pcie, 0x00000006, GEN2_PCIEPHYCTRL); 671 656 672 - return rcar_pcie_hw_init(pcie); 657 + return 0; 658 + } 659 + 660 + static int rcar_pcie_phy_init_gen3(struct rcar_pcie *pcie) 661 + { 662 + int err; 663 + 664 + err = phy_init(pcie->phy); 665 + if (err) 666 + return err; 667 + 668 + return phy_power_on(pcie->phy); 673 669 } 674 670 675 671 static int rcar_msi_alloc(struct rcar_msi *chip) ··· 868 842 .map = rcar_msi_map, 869 843 }; 870 844 845 + static void rcar_pcie_unmap_msi(struct rcar_pcie *pcie) 846 + { 847 + struct rcar_msi *msi = &pcie->msi; 848 + int i, irq; 849 + 850 + for (i = 0; i < INT_PCI_MSI_NR; i++) { 851 + irq = irq_find_mapping(msi->domain, i); 852 + if (irq > 0) 853 + irq_dispose_mapping(irq); 854 + } 855 + 856 + irq_domain_remove(msi->domain); 857 + } 858 + 871 859 static int rcar_pcie_enable_msi(struct rcar_pcie *pcie) 872 860 { 873 861 struct device *dev = pcie->dev; ··· 936 896 return 0; 937 897 938 898 err: 939 - irq_domain_remove(msi->domain); 899 + rcar_pcie_unmap_msi(pcie); 940 900 return err; 901 + } 902 + 903 + static void rcar_pcie_teardown_msi(struct rcar_pcie *pcie) 904 + { 905 + struct rcar_msi *msi = &pcie->msi; 906 + 907 + /* Disable all MSI interrupts */ 908 + rcar_pci_write_reg(pcie, 0, PCIEMSIIER); 909 + 910 + /* Disable address decoding of the MSI interrupt, MSIFE */ 911 + rcar_pci_write_reg(pcie, 0, PCIEMSIALR); 912 + 913 + free_pages(msi->pages, 0); 914 + 915 + rcar_pcie_unmap_msi(pcie); 941 916 } 942 917 943 918 static int 
rcar_pcie_get_resources(struct rcar_pcie *pcie) ··· 960 905 struct device *dev = pcie->dev; 961 906 struct resource res; 962 907 int err, i; 908 + 909 + pcie->phy = devm_phy_optional_get(dev, "pcie"); 910 + if (IS_ERR(pcie->phy)) 911 + return PTR_ERR(pcie->phy); 963 912 964 913 err = of_address_to_resource(dev->of_node, 0, &res); 965 914 if (err) ··· 973 914 if (IS_ERR(pcie->base)) 974 915 return PTR_ERR(pcie->base); 975 916 976 - pcie->clk = devm_clk_get(dev, "pcie"); 977 - if (IS_ERR(pcie->clk)) { 978 - dev_err(dev, "cannot get platform clock\n"); 979 - return PTR_ERR(pcie->clk); 980 - } 981 - err = clk_prepare_enable(pcie->clk); 982 - if (err) 983 - return err; 984 - 985 917 pcie->bus_clk = devm_clk_get(dev, "pcie_bus"); 986 918 if (IS_ERR(pcie->bus_clk)) { 987 919 dev_err(dev, "cannot get pcie bus clock\n"); 988 - err = PTR_ERR(pcie->bus_clk); 989 - goto fail_clk; 920 + return PTR_ERR(pcie->bus_clk); 990 921 } 991 - err = clk_prepare_enable(pcie->bus_clk); 992 - if (err) 993 - goto fail_clk; 994 922 995 923 i = irq_of_parse_and_map(dev->of_node, 0); 996 924 if (!i) { 997 925 dev_err(dev, "cannot get platform resources for msi interrupt\n"); 998 926 err = -ENOENT; 999 - goto err_map_reg; 927 + goto err_irq1; 1000 928 } 1001 929 pcie->msi.irq1 = i; 1002 930 ··· 991 945 if (!i) { 992 946 dev_err(dev, "cannot get platform resources for msi interrupt\n"); 993 947 err = -ENOENT; 994 - goto err_map_reg; 948 + goto err_irq2; 995 949 } 996 950 pcie->msi.irq2 = i; 997 951 998 952 return 0; 999 953 1000 - err_map_reg: 1001 - clk_disable_unprepare(pcie->bus_clk); 1002 - fail_clk: 1003 - clk_disable_unprepare(pcie->clk); 1004 - 954 + err_irq2: 955 + irq_dispose_mapping(pcie->msi.irq1); 956 + err_irq1: 1005 957 return err; 1006 958 } 1007 959 ··· 1095 1051 } 1096 1052 1097 1053 static const struct of_device_id rcar_pcie_of_match[] = { 1098 - { .compatible = "renesas,pcie-r8a7779", .data = rcar_pcie_hw_init_h1 }, 1054 + { .compatible = "renesas,pcie-r8a7779", 1055 + .data = 
rcar_pcie_phy_init_h1 }, 1099 1056 { .compatible = "renesas,pcie-r8a7790", 1100 - .data = rcar_pcie_hw_init_gen2 }, 1057 + .data = rcar_pcie_phy_init_gen2 }, 1101 1058 { .compatible = "renesas,pcie-r8a7791", 1102 - .data = rcar_pcie_hw_init_gen2 }, 1059 + .data = rcar_pcie_phy_init_gen2 }, 1103 1060 { .compatible = "renesas,pcie-rcar-gen2", 1104 - .data = rcar_pcie_hw_init_gen2 }, 1105 - { .compatible = "renesas,pcie-r8a7795", .data = rcar_pcie_hw_init }, 1106 - { .compatible = "renesas,pcie-rcar-gen3", .data = rcar_pcie_hw_init }, 1061 + .data = rcar_pcie_phy_init_gen2 }, 1062 + { .compatible = "renesas,pcie-r8a7795", 1063 + .data = rcar_pcie_phy_init_gen3 }, 1064 + { .compatible = "renesas,pcie-rcar-gen3", 1065 + .data = rcar_pcie_phy_init_gen3 }, 1107 1066 {}, 1108 1067 }; 1109 - 1110 - static int rcar_pcie_parse_request_of_pci_ranges(struct rcar_pcie *pci) 1111 - { 1112 - int err; 1113 - struct device *dev = pci->dev; 1114 - struct device_node *np = dev->of_node; 1115 - resource_size_t iobase; 1116 - struct resource_entry *win, *tmp; 1117 - 1118 - err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pci->resources, 1119 - &iobase); 1120 - if (err) 1121 - return err; 1122 - 1123 - err = devm_request_pci_bus_resources(dev, &pci->resources); 1124 - if (err) 1125 - goto out_release_res; 1126 - 1127 - resource_list_for_each_entry_safe(win, tmp, &pci->resources) { 1128 - struct resource *res = win->res; 1129 - 1130 - if (resource_type(res) == IORESOURCE_IO) { 1131 - err = pci_remap_iospace(res, iobase); 1132 - if (err) { 1133 - dev_warn(dev, "error %d: failed to map resource %pR\n", 1134 - err, res); 1135 - 1136 - resource_list_destroy_entry(win); 1137 - } 1138 - } 1139 - } 1140 - 1141 - return 0; 1142 - 1143 - out_release_res: 1144 - pci_free_resource_list(&pci->resources); 1145 - return err; 1146 - } 1147 1068 1148 1069 static int rcar_pcie_probe(struct platform_device *pdev) 1149 1070 { ··· 1116 1107 struct rcar_pcie *pcie; 1117 1108 unsigned int data; 1118 1109 
int err; 1119 - int (*hw_init_fn)(struct rcar_pcie *); 1110 + int (*phy_init_fn)(struct rcar_pcie *); 1120 1111 struct pci_host_bridge *bridge; 1121 1112 1122 1113 bridge = pci_alloc_host_bridge(sizeof(*pcie)); ··· 1127 1118 1128 1119 pcie->dev = dev; 1129 1120 1130 - INIT_LIST_HEAD(&pcie->resources); 1131 - 1132 - err = rcar_pcie_parse_request_of_pci_ranges(pcie); 1121 + err = pci_parse_request_of_pci_ranges(dev, &pcie->resources, NULL); 1133 1122 if (err) 1134 1123 goto err_free_bridge; 1124 + 1125 + pm_runtime_enable(pcie->dev); 1126 + err = pm_runtime_get_sync(pcie->dev); 1127 + if (err < 0) { 1128 + dev_err(pcie->dev, "pm_runtime_get_sync failed\n"); 1129 + goto err_pm_disable; 1130 + } 1135 1131 1136 1132 err = rcar_pcie_get_resources(pcie); 1137 1133 if (err < 0) { 1138 1134 dev_err(dev, "failed to request resources: %d\n", err); 1139 - goto err_free_resource_list; 1135 + goto err_pm_put; 1136 + } 1137 + 1138 + err = clk_prepare_enable(pcie->bus_clk); 1139 + if (err) { 1140 + dev_err(dev, "failed to enable bus clock: %d\n", err); 1141 + goto err_unmap_msi_irqs; 1140 1142 } 1141 1143 1142 1144 err = rcar_pcie_parse_map_dma_ranges(pcie, dev->of_node); 1143 1145 if (err) 1144 - goto err_free_resource_list; 1146 + goto err_clk_disable; 1145 1147 1146 - pm_runtime_enable(dev); 1147 - err = pm_runtime_get_sync(dev); 1148 - if (err < 0) { 1149 - dev_err(dev, "pm_runtime_get_sync failed\n"); 1150 - goto err_pm_disable; 1148 + phy_init_fn = of_device_get_match_data(dev); 1149 + err = phy_init_fn(pcie); 1150 + if (err) { 1151 + dev_err(dev, "failed to init PCIe PHY\n"); 1152 + goto err_clk_disable; 1151 1153 } 1152 1154 1153 1155 /* Failure to get a link might just be that no cards are inserted */ 1154 - hw_init_fn = of_device_get_match_data(dev); 1155 - err = hw_init_fn(pcie); 1156 - if (err) { 1156 + if (rcar_pcie_hw_init(pcie)) { 1157 1157 dev_info(dev, "PCIe link down\n"); 1158 1158 err = -ENODEV; 1159 - goto err_pm_put; 1159 + goto err_clk_disable; 1160 1160 } 
1161 1161 1162 1162 data = rcar_pci_read_reg(pcie, MACSR); ··· 1177 1159 dev_err(dev, 1178 1160 "failed to enable MSI support: %d\n", 1179 1161 err); 1180 - goto err_pm_put; 1162 + goto err_clk_disable; 1181 1163 } 1182 1164 } 1183 1165 1184 1166 err = rcar_pcie_enable(pcie); 1185 1167 if (err) 1186 - goto err_pm_put; 1168 + goto err_msi_teardown; 1187 1169 1188 1170 return 0; 1171 + 1172 + err_msi_teardown: 1173 + if (IS_ENABLED(CONFIG_PCI_MSI)) 1174 + rcar_pcie_teardown_msi(pcie); 1175 + 1176 + err_clk_disable: 1177 + clk_disable_unprepare(pcie->bus_clk); 1178 + 1179 + err_unmap_msi_irqs: 1180 + irq_dispose_mapping(pcie->msi.irq2); 1181 + irq_dispose_mapping(pcie->msi.irq1); 1189 1182 1190 1183 err_pm_put: 1191 1184 pm_runtime_put(dev); 1192 1185 1193 1186 err_pm_disable: 1194 1187 pm_runtime_disable(dev); 1195 - 1196 - err_free_resource_list: 1197 1188 pci_free_resource_list(&pcie->resources); 1189 + 1198 1190 err_free_bridge: 1199 1191 pci_free_host_bridge(bridge); 1200 1192
+642
drivers/pci/host/pcie-rockchip-ep.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Rockchip AXI PCIe endpoint controller driver 4 + * 5 + * Copyright (c) 2018 Rockchip, Inc. 6 + * 7 + * Author: Shawn Lin <shawn.lin@rock-chips.com> 8 + * Simon Xue <xxm@rock-chips.com> 9 + */ 10 + 11 + #include <linux/configfs.h> 12 + #include <linux/delay.h> 13 + #include <linux/kernel.h> 14 + #include <linux/of.h> 15 + #include <linux/pci-epc.h> 16 + #include <linux/platform_device.h> 17 + #include <linux/pci-epf.h> 18 + #include <linux/sizes.h> 19 + 20 + #include "pcie-rockchip.h" 21 + 22 + /** 23 + * struct rockchip_pcie_ep - private data for PCIe endpoint controller driver 24 + * @rockchip: Rockchip PCIe controller 25 + * @max_regions: maximum number of regions supported by hardware 26 + * @ob_region_map: bitmask of mapped outbound regions 27 + * @ob_addr: base addresses in the AXI bus where the outbound regions start 28 + * @irq_phys_addr: base address on the AXI bus where the MSI/legacy IRQ 29 + * dedicated outbound regions is mapped. 30 + * @irq_cpu_addr: base address in the CPU space where a write access triggers 31 + * the sending of a memory write (MSI) / normal message (legacy 32 + * IRQ) TLP through the PCIe bus. 33 + * @irq_pci_addr: used to save the current mapping of the MSI/legacy IRQ 34 + * dedicated outbound region. 35 + * @irq_pci_fn: the latest PCI function that has updated the mapping of 36 + * the MSI/legacy IRQ dedicated outbound region. 37 + * @irq_pending: bitmask of asserted legacy IRQs. 
38 + */ 39 + struct rockchip_pcie_ep { 40 + struct rockchip_pcie rockchip; 41 + struct pci_epc *epc; 42 + u32 max_regions; 43 + unsigned long ob_region_map; 44 + phys_addr_t *ob_addr; 45 + phys_addr_t irq_phys_addr; 46 + void __iomem *irq_cpu_addr; 47 + u64 irq_pci_addr; 48 + u8 irq_pci_fn; 49 + u8 irq_pending; 50 + }; 51 + 52 + static void rockchip_pcie_clear_ep_ob_atu(struct rockchip_pcie *rockchip, 53 + u32 region) 54 + { 55 + rockchip_pcie_write(rockchip, 0, 56 + ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(region)); 57 + rockchip_pcie_write(rockchip, 0, 58 + ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(region)); 59 + rockchip_pcie_write(rockchip, 0, 60 + ROCKCHIP_PCIE_AT_OB_REGION_DESC0(region)); 61 + rockchip_pcie_write(rockchip, 0, 62 + ROCKCHIP_PCIE_AT_OB_REGION_DESC1(region)); 63 + rockchip_pcie_write(rockchip, 0, 64 + ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(region)); 65 + rockchip_pcie_write(rockchip, 0, 66 + ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(region)); 67 + } 68 + 69 + static void rockchip_pcie_prog_ep_ob_atu(struct rockchip_pcie *rockchip, u8 fn, 70 + u32 r, u32 type, u64 cpu_addr, 71 + u64 pci_addr, size_t size) 72 + { 73 + u64 sz = 1ULL << fls64(size - 1); 74 + int num_pass_bits = ilog2(sz); 75 + u32 addr0, addr1, desc0, desc1; 76 + bool is_nor_msg = (type == AXI_WRAPPER_NOR_MSG); 77 + 78 + /* The minimal region size is 1MB */ 79 + if (num_pass_bits < 8) 80 + num_pass_bits = 8; 81 + 82 + cpu_addr -= rockchip->mem_res->start; 83 + addr0 = ((is_nor_msg ? 0x10 : (num_pass_bits - 1)) & 84 + PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) | 85 + (lower_32_bits(cpu_addr) & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR); 86 + addr1 = upper_32_bits(is_nor_msg ? 
cpu_addr : pci_addr); 87 + desc0 = ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN(fn) | type; 88 + desc1 = 0; 89 + 90 + if (is_nor_msg) { 91 + rockchip_pcie_write(rockchip, 0, 92 + ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r)); 93 + rockchip_pcie_write(rockchip, 0, 94 + ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r)); 95 + rockchip_pcie_write(rockchip, desc0, 96 + ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r)); 97 + rockchip_pcie_write(rockchip, desc1, 98 + ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r)); 99 + } else { 100 + /* PCI bus address region */ 101 + rockchip_pcie_write(rockchip, addr0, 102 + ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r)); 103 + rockchip_pcie_write(rockchip, addr1, 104 + ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r)); 105 + rockchip_pcie_write(rockchip, desc0, 106 + ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r)); 107 + rockchip_pcie_write(rockchip, desc1, 108 + ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r)); 109 + 110 + addr0 = 111 + ((num_pass_bits - 1) & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) | 112 + (lower_32_bits(cpu_addr) & 113 + PCIE_CORE_OB_REGION_ADDR0_LO_ADDR); 114 + addr1 = upper_32_bits(cpu_addr); 115 + } 116 + 117 + /* CPU bus address region */ 118 + rockchip_pcie_write(rockchip, addr0, 119 + ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(r)); 120 + rockchip_pcie_write(rockchip, addr1, 121 + ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(r)); 122 + } 123 + 124 + static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn, 125 + struct pci_epf_header *hdr) 126 + { 127 + struct rockchip_pcie_ep *ep = epc_get_drvdata(epc); 128 + struct rockchip_pcie *rockchip = &ep->rockchip; 129 + 130 + /* All functions share the same vendor ID with function 0 */ 131 + if (fn == 0) { 132 + u32 vid_regs = (hdr->vendorid & GENMASK(15, 0)) | 133 + (hdr->subsys_vendor_id & GENMASK(31, 16)) << 16; 134 + 135 + rockchip_pcie_write(rockchip, vid_regs, 136 + PCIE_CORE_CONFIG_VENDOR); 137 + } 138 + 139 + rockchip_pcie_write(rockchip, hdr->deviceid << 16, 140 + ROCKCHIP_PCIE_EP_FUNC_BASE(fn) + PCI_VENDOR_ID); 141 + 142 + 
rockchip_pcie_write(rockchip, 143 + hdr->revid | 144 + hdr->progif_code << 8 | 145 + hdr->subclass_code << 16 | 146 + hdr->baseclass_code << 24, 147 + ROCKCHIP_PCIE_EP_FUNC_BASE(fn) + PCI_REVISION_ID); 148 + rockchip_pcie_write(rockchip, hdr->cache_line_size, 149 + ROCKCHIP_PCIE_EP_FUNC_BASE(fn) + 150 + PCI_CACHE_LINE_SIZE); 151 + rockchip_pcie_write(rockchip, hdr->subsys_id << 16, 152 + ROCKCHIP_PCIE_EP_FUNC_BASE(fn) + 153 + PCI_SUBSYSTEM_VENDOR_ID); 154 + rockchip_pcie_write(rockchip, hdr->interrupt_pin << 8, 155 + ROCKCHIP_PCIE_EP_FUNC_BASE(fn) + 156 + PCI_INTERRUPT_LINE); 157 + 158 + return 0; 159 + } 160 + 161 + static int rockchip_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, 162 + struct pci_epf_bar *epf_bar) 163 + { 164 + struct rockchip_pcie_ep *ep = epc_get_drvdata(epc); 165 + struct rockchip_pcie *rockchip = &ep->rockchip; 166 + dma_addr_t bar_phys = epf_bar->phys_addr; 167 + enum pci_barno bar = epf_bar->barno; 168 + int flags = epf_bar->flags; 169 + u32 addr0, addr1, reg, cfg, b, aperture, ctrl; 170 + u64 sz; 171 + 172 + /* BAR size is 2^(aperture + 7) */ 173 + sz = max_t(size_t, epf_bar->size, MIN_EP_APERTURE); 174 + 175 + /* 176 + * roundup_pow_of_two() returns an unsigned long, which is not suited 177 + * for 64bit values. 178 + */ 179 + sz = 1ULL << fls64(sz - 1); 180 + aperture = ilog2(sz) - 7; /* 128B -> 0, 256B -> 1, 512B -> 2, ... 
*/ 181 + 182 + if ((flags & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_IO) { 183 + ctrl = ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_IO_32BITS; 184 + } else { 185 + bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH); 186 + bool is_64bits = sz > SZ_2G; 187 + 188 + if (is_64bits && (bar & 1)) 189 + return -EINVAL; 190 + 191 + if (is_64bits && is_prefetch) 192 + ctrl = 193 + ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_PREFETCH_MEM_64BITS; 194 + else if (is_prefetch) 195 + ctrl = 196 + ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_PREFETCH_MEM_32BITS; 197 + else if (is_64bits) 198 + ctrl = ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_MEM_64BITS; 199 + else 200 + ctrl = ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_MEM_32BITS; 201 + } 202 + 203 + if (bar < BAR_4) { 204 + reg = ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG0(fn); 205 + b = bar; 206 + } else { 207 + reg = ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG1(fn); 208 + b = bar - BAR_4; 209 + } 210 + 211 + addr0 = lower_32_bits(bar_phys); 212 + addr1 = upper_32_bits(bar_phys); 213 + 214 + cfg = rockchip_pcie_read(rockchip, reg); 215 + cfg &= ~(ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) | 216 + ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b)); 217 + cfg |= (ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) | 218 + ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl)); 219 + 220 + rockchip_pcie_write(rockchip, cfg, reg); 221 + rockchip_pcie_write(rockchip, addr0, 222 + ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar)); 223 + rockchip_pcie_write(rockchip, addr1, 224 + ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar)); 225 + 226 + return 0; 227 + } 228 + 229 + static void rockchip_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, 230 + struct pci_epf_bar *epf_bar) 231 + { 232 + struct rockchip_pcie_ep *ep = epc_get_drvdata(epc); 233 + struct rockchip_pcie *rockchip = &ep->rockchip; 234 + u32 reg, cfg, b, ctrl; 235 + enum pci_barno bar = epf_bar->barno; 236 + 237 + if (bar < BAR_4) { 238 + reg = ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG0(fn); 239 + b = bar; 240 + } else { 
		reg = ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG1(fn);
		b = bar - BAR_4;
	}

	ctrl = ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_DISABLED;
	cfg = rockchip_pcie_read(rockchip, reg);
	cfg &= ~(ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
		 ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
	cfg |= ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl);

	rockchip_pcie_write(rockchip, cfg, reg);
	rockchip_pcie_write(rockchip, 0x0,
			    ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar));
	rockchip_pcie_write(rockchip, 0x0,
			    ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar));
}

static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn,
				     phys_addr_t addr, u64 pci_addr,
				     size_t size)
{
	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
	struct rockchip_pcie *pcie = &ep->rockchip;
	u32 r;

	r = find_first_zero_bit(&ep->ob_region_map,
				sizeof(ep->ob_region_map) * BITS_PER_LONG);
	/*
	 * Region 0 is reserved for configuration space and shouldn't
	 * be used elsewhere per TRM, so leave it out.
	 */
	if (r >= ep->max_regions - 1) {
		dev_err(&epc->dev, "no free outbound region\n");
		return -EINVAL;
	}

	rockchip_pcie_prog_ep_ob_atu(pcie, fn, r, AXI_WRAPPER_MEM_WRITE, addr,
				     pci_addr, size);

	set_bit(r, &ep->ob_region_map);
	ep->ob_addr[r] = addr;

	return 0;
}

static void rockchip_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn,
					phys_addr_t addr)
{
	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
	struct rockchip_pcie *rockchip = &ep->rockchip;
	u32 r;

	for (r = 0; r < ep->max_regions - 1; r++)
		if (ep->ob_addr[r] == addr)
			break;

	/*
	 * Region 0 is reserved for configuration space and shouldn't
	 * be used elsewhere per TRM, so leave it out.
	 */
	if (r == ep->max_regions - 1)
		return;

	rockchip_pcie_clear_ep_ob_atu(rockchip, r);

	ep->ob_addr[r] = 0;
	clear_bit(r, &ep->ob_region_map);
}

static int rockchip_pcie_ep_set_msi(struct pci_epc *epc, u8 fn,
				    u8 multi_msg_cap)
{
	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
	struct rockchip_pcie *rockchip = &ep->rockchip;
	u16 flags;

	flags = rockchip_pcie_read(rockchip,
				   ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
				   ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
	flags &= ~ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_MASK;
	flags |=
	    ((multi_msg_cap << 1) << ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET) |
	    PCI_MSI_FLAGS_64BIT;
	flags &= ~ROCKCHIP_PCIE_EP_MSI_CTRL_MASK_MSI_CAP;
	rockchip_pcie_write(rockchip, flags,
			    ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
			    ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
	return 0;
}

static int rockchip_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
{
	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
	struct rockchip_pcie *rockchip = &ep->rockchip;
	u16 flags;

	flags = rockchip_pcie_read(rockchip,
				   ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
				   ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
	if (!(flags & ROCKCHIP_PCIE_EP_MSI_CTRL_ME))
		return -EINVAL;

	return ((flags & ROCKCHIP_PCIE_EP_MSI_CTRL_MME_MASK) >>
		ROCKCHIP_PCIE_EP_MSI_CTRL_MME_OFFSET);
}

static void rockchip_pcie_ep_assert_intx(struct rockchip_pcie_ep *ep, u8 fn,
					 u8 intx, bool is_asserted)
{
	struct rockchip_pcie *rockchip = &ep->rockchip;
	u32 r = ep->max_regions - 1;
	u32 offset;
	u16 status;
	u8 msg_code;

	if (unlikely(ep->irq_pci_addr != ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR ||
		     ep->irq_pci_fn != fn)) {
		rockchip_pcie_prog_ep_ob_atu(rockchip, fn, r,
					     AXI_WRAPPER_NOR_MSG,
					     ep->irq_phys_addr, 0, 0);
		ep->irq_pci_addr = ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR;
		ep->irq_pci_fn = fn;
	}

	intx &= 3;
	if (is_asserted) {
		ep->irq_pending |= BIT(intx);
		msg_code = ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTA + intx;
	} else {
		ep->irq_pending &= ~BIT(intx);
		msg_code = ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTA + intx;
	}

	status = rockchip_pcie_read(rockchip,
				    ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
				    ROCKCHIP_PCIE_EP_CMD_STATUS);
	status &= ROCKCHIP_PCIE_EP_CMD_STATUS_IS;

	if ((status != 0) ^ (ep->irq_pending != 0)) {
		status ^= ROCKCHIP_PCIE_EP_CMD_STATUS_IS;
		rockchip_pcie_write(rockchip, status,
				    ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
				    ROCKCHIP_PCIE_EP_CMD_STATUS);
	}

	offset =
	    ROCKCHIP_PCIE_MSG_ROUTING(ROCKCHIP_PCIE_MSG_ROUTING_LOCAL_INTX) |
	    ROCKCHIP_PCIE_MSG_CODE(msg_code) | ROCKCHIP_PCIE_MSG_NO_DATA;
	writel(0, ep->irq_cpu_addr + offset);
}

static int rockchip_pcie_ep_send_legacy_irq(struct rockchip_pcie_ep *ep, u8 fn,
					    u8 intx)
{
	u16 cmd;

	cmd = rockchip_pcie_read(&ep->rockchip,
				 ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
				 ROCKCHIP_PCIE_EP_CMD_STATUS);

	if (cmd & PCI_COMMAND_INTX_DISABLE)
		return -EINVAL;

	/*
	 * Per the TRM, a delay of some AHB bus clock cycles is needed
	 * between asserting and deasserting INTx, so insert a generous
	 * 1 ms here.
	 */
	rockchip_pcie_ep_assert_intx(ep, fn, intx, true);
	mdelay(1);
	rockchip_pcie_ep_assert_intx(ep, fn, intx, false);
	return 0;
}

static int rockchip_pcie_ep_send_msi_irq(struct rockchip_pcie_ep *ep, u8 fn,
					 u8 interrupt_num)
{
	struct rockchip_pcie *rockchip = &ep->rockchip;
	u16 flags, mme, data, data_mask;
	u8 msi_count;
	u64 pci_addr, pci_addr_mask = 0xff;

	/* Check MSI enable bit */
	flags = rockchip_pcie_read(&ep->rockchip,
				   ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
				   ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
	if (!(flags & ROCKCHIP_PCIE_EP_MSI_CTRL_ME))
		return -EINVAL;

	/* Get MSI numbers from MME */
	mme = ((flags & ROCKCHIP_PCIE_EP_MSI_CTRL_MME_MASK) >>
	       ROCKCHIP_PCIE_EP_MSI_CTRL_MME_OFFSET);
	msi_count = 1 << mme;
	if (!interrupt_num || interrupt_num > msi_count)
		return -EINVAL;

	/* Set MSI private data */
	data_mask = msi_count - 1;
	data = rockchip_pcie_read(rockchip,
				  ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
				  ROCKCHIP_PCIE_EP_MSI_CTRL_REG +
				  PCI_MSI_DATA_64);
	data = (data & ~data_mask) | ((interrupt_num - 1) & data_mask);

	/* Get MSI PCI address */
	pci_addr = rockchip_pcie_read(rockchip,
				      ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
				      ROCKCHIP_PCIE_EP_MSI_CTRL_REG +
				      PCI_MSI_ADDRESS_HI);
	pci_addr <<= 32;
	pci_addr |= rockchip_pcie_read(rockchip,
				       ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
				       ROCKCHIP_PCIE_EP_MSI_CTRL_REG +
				       PCI_MSI_ADDRESS_LO);
	pci_addr &= GENMASK_ULL(63, 2);

	/* Set the outbound region if needed. */
	if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) ||
		     ep->irq_pci_fn != fn)) {
		rockchip_pcie_prog_ep_ob_atu(rockchip, fn, ep->max_regions - 1,
					     AXI_WRAPPER_MEM_WRITE,
					     ep->irq_phys_addr,
					     pci_addr & ~pci_addr_mask,
					     pci_addr_mask + 1);
		ep->irq_pci_addr = (pci_addr & ~pci_addr_mask);
		ep->irq_pci_fn = fn;
	}

	writew(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask));
	return 0;
}

static int rockchip_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn,
				      enum pci_epc_irq_type type,
				      u8 interrupt_num)
{
	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);

	switch (type) {
	case PCI_EPC_IRQ_LEGACY:
		return rockchip_pcie_ep_send_legacy_irq(ep, fn, 0);
	case PCI_EPC_IRQ_MSI:
		return rockchip_pcie_ep_send_msi_irq(ep, fn, interrupt_num);
	default:
		return -EINVAL;
	}
}

static int rockchip_pcie_ep_start(struct pci_epc *epc)
{
	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
	struct rockchip_pcie *rockchip = &ep->rockchip;
	struct pci_epf *epf;
	u32 cfg;

	cfg = BIT(0);
	list_for_each_entry(epf, &epc->pci_epf, list)
		cfg |= BIT(epf->func_no);

	rockchip_pcie_write(rockchip, cfg, PCIE_CORE_PHY_FUNC_CFG);

	list_for_each_entry(epf, &epc->pci_epf, list)
		pci_epf_linkup(epf);

	return 0;
}

static const struct pci_epc_ops rockchip_pcie_epc_ops = {
	.write_header	= rockchip_pcie_ep_write_header,
	.set_bar	= rockchip_pcie_ep_set_bar,
	.clear_bar	= rockchip_pcie_ep_clear_bar,
	.map_addr	= rockchip_pcie_ep_map_addr,
	.unmap_addr	= rockchip_pcie_ep_unmap_addr,
	.set_msi	= rockchip_pcie_ep_set_msi,
	.get_msi	= rockchip_pcie_ep_get_msi,
	.raise_irq	= rockchip_pcie_ep_raise_irq,
	.start		= rockchip_pcie_ep_start,
};

static int rockchip_pcie_parse_ep_dt(struct rockchip_pcie *rockchip,
				     struct rockchip_pcie_ep *ep)
{
	struct device *dev = rockchip->dev;
	int err;

	err = rockchip_pcie_parse_dt(rockchip);
	if (err)
		return err;

	err = rockchip_pcie_get_phys(rockchip);
	if (err)
		return err;

	err = of_property_read_u32(dev->of_node,
				   "rockchip,max-outbound-regions",
				   &ep->max_regions);
	if (err < 0 || ep->max_regions > MAX_REGION_LIMIT)
		ep->max_regions = MAX_REGION_LIMIT;

	err = of_property_read_u8(dev->of_node, "max-functions",
				  &ep->epc->max_functions);
	if (err < 0)
		ep->epc->max_functions = 1;

	return 0;
}

static const struct of_device_id rockchip_pcie_ep_of_match[] = {
	{ .compatible = "rockchip,rk3399-pcie-ep"},
	{},
};

static int rockchip_pcie_ep_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct rockchip_pcie_ep *ep;
	struct rockchip_pcie *rockchip;
	struct pci_epc *epc;
	size_t max_regions;
	int err;

	ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
	if (!ep)
		return -ENOMEM;

	rockchip = &ep->rockchip;
	rockchip->is_rc = false;
	rockchip->dev = dev;

	epc = devm_pci_epc_create(dev, &rockchip_pcie_epc_ops);
	if (IS_ERR(epc)) {
		dev_err(dev, "failed to create epc device\n");
		return PTR_ERR(epc);
	}

	ep->epc = epc;
	epc_set_drvdata(epc, ep);

	err = rockchip_pcie_parse_ep_dt(rockchip, ep);
	if (err)
		return err;

	err = rockchip_pcie_enable_clocks(rockchip);
	if (err)
		return err;

	err = rockchip_pcie_init_port(rockchip);
	if (err)
		goto err_disable_clocks;

	/* Establish the link automatically */
	rockchip_pcie_write(rockchip, PCIE_CLIENT_LINK_TRAIN_ENABLE,
			    PCIE_CLIENT_CONFIG);
	max_regions = ep->max_regions;
	ep->ob_addr = devm_kzalloc(dev, max_regions * sizeof(*ep->ob_addr),
				   GFP_KERNEL);

	if (!ep->ob_addr) {
		err = -ENOMEM;
		goto err_uninit_port;
	}

	/* Only enable function 0 by default */
	rockchip_pcie_write(rockchip, BIT(0), PCIE_CORE_PHY_FUNC_CFG);

	err = pci_epc_mem_init(epc, rockchip->mem_res->start,
			       resource_size(rockchip->mem_res));
	if (err < 0) {
		dev_err(dev, "failed to initialize the memory space\n");
		goto err_uninit_port;
	}

	ep->irq_cpu_addr = pci_epc_mem_alloc_addr(epc, &ep->irq_phys_addr,
						  SZ_128K);
	if (!ep->irq_cpu_addr) {
		dev_err(dev, "failed to reserve memory space for MSI\n");
		err = -ENOMEM;
		goto err_epc_mem_exit;
	}

	ep->irq_pci_addr = ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR;

	return 0;
err_epc_mem_exit:
	pci_epc_mem_exit(epc);
err_uninit_port:
	rockchip_pcie_deinit_phys(rockchip);
err_disable_clocks:
	rockchip_pcie_disable_clocks(rockchip);
	return err;
}

static struct platform_driver rockchip_pcie_ep_driver = {
	.driver = {
		.name = "rockchip-pcie-ep",
		.of_match_table = rockchip_pcie_ep_of_match,
	},
	.probe = rockchip_pcie_ep_probe,
};

builtin_platform_driver(rockchip_pcie_ep_driver);
drivers/pci/host/pcie-rockchip-host.c | 1142 ++++++++++++++++++++++++++++++++
// SPDX-License-Identifier: GPL-2.0+
/*
 * Rockchip AXI PCIe host controller driver
 *
 * Copyright (c) 2016 Rockchip, Inc.
 *
 * Author: Shawn Lin <shawn.lin@rock-chips.com>
 *         Wenrui Li <wenrui.li@rock-chips.com>
 *
 * Bits taken from Synopsys DesignWare Host controller driver and
 * ARM PCI Host generic driver.
 */

#include <linux/bitrev.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/iopoll.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_pci.h>
#include <linux/of_platform.h>
#include <linux/of_irq.h>
#include <linux/pci.h>
#include <linux/pci_ids.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/reset.h>
#include <linux/regmap.h>

#include "../pci.h"
#include "pcie-rockchip.h"

static void rockchip_pcie_enable_bw_int(struct rockchip_pcie *rockchip)
{
	u32 status;

	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
	status |= (PCI_EXP_LNKCTL_LBMIE | PCI_EXP_LNKCTL_LABIE);
	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
}

static void rockchip_pcie_clr_bw_int(struct rockchip_pcie *rockchip)
{
	u32 status;

	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
	status |= (PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_LABS) << 16;
	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
}

static void rockchip_pcie_update_txcredit_mui(struct rockchip_pcie *rockchip)
{
	u32 val;

	/* Update Tx credit maximum update interval */
	val = rockchip_pcie_read(rockchip, PCIE_CORE_TXCREDIT_CFG1);
	val &= ~PCIE_CORE_TXCREDIT_CFG1_MUI_MASK;
	val |= PCIE_CORE_TXCREDIT_CFG1_MUI_ENCODE(24000);	/* ns */
	rockchip_pcie_write(rockchip, val, PCIE_CORE_TXCREDIT_CFG1);
}

static int rockchip_pcie_valid_device(struct rockchip_pcie *rockchip,
				      struct pci_bus *bus, int dev)
{
	/* access only one slot on each root port */
	if (bus->number == rockchip->root_bus_nr && dev > 0)
		return 0;

	/*
	 * do not read more than one device on the bus directly attached
	 * to RC's downstream side.
	 */
	if (bus->primary == rockchip->root_bus_nr && dev > 0)
		return 0;

	return 1;
}

static u8 rockchip_pcie_lane_map(struct rockchip_pcie *rockchip)
{
	u32 val;
	u8 map;

	if (rockchip->legacy_phy)
		return GENMASK(MAX_LANE_NUM - 1, 0);

	val = rockchip_pcie_read(rockchip, PCIE_CORE_LANE_MAP);
	map = val & PCIE_CORE_LANE_MAP_MASK;

	/* The link may be using a reverse-indexed mapping. */
	if (val & PCIE_CORE_LANE_MAP_REVERSE)
		map = bitrev8(map) >> 4;

	return map;
}

static int rockchip_pcie_rd_own_conf(struct rockchip_pcie *rockchip,
				     int where, int size, u32 *val)
{
	void __iomem *addr;

	addr = rockchip->apb_base + PCIE_RC_CONFIG_NORMAL_BASE + where;

	if (!IS_ALIGNED((uintptr_t)addr, size)) {
		*val = 0;
		return PCIBIOS_BAD_REGISTER_NUMBER;
	}

	if (size == 4) {
		*val = readl(addr);
	} else if (size == 2) {
		*val = readw(addr);
	} else if (size == 1) {
		*val = readb(addr);
	} else {
		*val = 0;
		return PCIBIOS_BAD_REGISTER_NUMBER;
	}
	return PCIBIOS_SUCCESSFUL;
}

static int rockchip_pcie_wr_own_conf(struct rockchip_pcie *rockchip,
				     int where, int size, u32 val)
{
	u32 mask, tmp, offset;
	void __iomem *addr;

	offset = where & ~0x3;
	addr = rockchip->apb_base + PCIE_RC_CONFIG_NORMAL_BASE + offset;

	if (size == 4) {
		writel(val, addr);
		return PCIBIOS_SUCCESSFUL;
	}

	mask = ~(((1 << (size * 8)) - 1) << ((where & 0x3) * 8));

	/*
	 * N.B. This read/modify/write isn't safe in general because it can
	 * corrupt RW1C bits in adjacent registers.  But the hardware
	 * doesn't support smaller writes.
	 */
	tmp = readl(addr) & mask;
	tmp |= val << ((where & 0x3) * 8);
	writel(tmp, addr);

	return PCIBIOS_SUCCESSFUL;
}

static int rockchip_pcie_rd_other_conf(struct rockchip_pcie *rockchip,
				       struct pci_bus *bus, u32 devfn,
				       int where, int size, u32 *val)
{
	u32 busdev;

	busdev = PCIE_ECAM_ADDR(bus->number, PCI_SLOT(devfn),
				PCI_FUNC(devfn), where);

	if (!IS_ALIGNED(busdev, size)) {
		*val = 0;
		return PCIBIOS_BAD_REGISTER_NUMBER;
	}

	if (bus->parent->number == rockchip->root_bus_nr)
		rockchip_pcie_cfg_configuration_accesses(rockchip,
						AXI_WRAPPER_TYPE0_CFG);
	else
		rockchip_pcie_cfg_configuration_accesses(rockchip,
						AXI_WRAPPER_TYPE1_CFG);

	if (size == 4) {
		*val = readl(rockchip->reg_base + busdev);
	} else if (size == 2) {
		*val = readw(rockchip->reg_base + busdev);
	} else if (size == 1) {
		*val = readb(rockchip->reg_base + busdev);
	} else {
		*val = 0;
		return PCIBIOS_BAD_REGISTER_NUMBER;
	}
	return PCIBIOS_SUCCESSFUL;
}

static int rockchip_pcie_wr_other_conf(struct rockchip_pcie *rockchip,
				       struct pci_bus *bus, u32 devfn,
				       int where, int size, u32 val)
{
	u32 busdev;

	busdev = PCIE_ECAM_ADDR(bus->number, PCI_SLOT(devfn),
				PCI_FUNC(devfn), where);
	if (!IS_ALIGNED(busdev, size))
		return PCIBIOS_BAD_REGISTER_NUMBER;

	if (bus->parent->number == rockchip->root_bus_nr)
		rockchip_pcie_cfg_configuration_accesses(rockchip,
						AXI_WRAPPER_TYPE0_CFG);
	else
		rockchip_pcie_cfg_configuration_accesses(rockchip,
						AXI_WRAPPER_TYPE1_CFG);

	if (size == 4)
		writel(val, rockchip->reg_base + busdev);
	else if (size == 2)
		writew(val, rockchip->reg_base + busdev);
	else if (size == 1)
		writeb(val, rockchip->reg_base + busdev);
	else
		return PCIBIOS_BAD_REGISTER_NUMBER;

	return PCIBIOS_SUCCESSFUL;
}

static int rockchip_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where,
				 int size, u32 *val)
{
	struct rockchip_pcie *rockchip = bus->sysdata;

	if (!rockchip_pcie_valid_device(rockchip, bus, PCI_SLOT(devfn))) {
		*val = 0xffffffff;
		return PCIBIOS_DEVICE_NOT_FOUND;
	}

	if (bus->number == rockchip->root_bus_nr)
		return rockchip_pcie_rd_own_conf(rockchip, where, size, val);

	return rockchip_pcie_rd_other_conf(rockchip, bus, devfn, where, size,
					   val);
}

static int rockchip_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
				 int where, int size, u32 val)
{
	struct rockchip_pcie *rockchip = bus->sysdata;

	if (!rockchip_pcie_valid_device(rockchip, bus, PCI_SLOT(devfn)))
		return PCIBIOS_DEVICE_NOT_FOUND;

	if (bus->number == rockchip->root_bus_nr)
		return rockchip_pcie_wr_own_conf(rockchip, where, size, val);

	return rockchip_pcie_wr_other_conf(rockchip, bus, devfn, where, size,
					   val);
}

static struct pci_ops rockchip_pcie_ops = {
	.read = rockchip_pcie_rd_conf,
	.write = rockchip_pcie_wr_conf,
};

static void rockchip_pcie_set_power_limit(struct rockchip_pcie *rockchip)
{
	int curr;
	u32 status, scale, power;

	if (IS_ERR(rockchip->vpcie3v3))
		return;

	/*
	 * Set RC's captured slot power limit and scale if
	 * vpcie3v3 available.  The default values are both zero
	 * which means the software should set these two according
	 * to the actual power supply.
	 */
	curr = regulator_get_current_limit(rockchip->vpcie3v3);
	if (curr <= 0)
		return;

	scale = 3; /* 0.001x */
	curr = curr / 1000;  /* convert to mA */
	power = (curr * 3300) / 1000;  /* milliwatt */
	while (power > PCIE_RC_CONFIG_DCR_CSPL_LIMIT) {
		if (!scale) {
			dev_warn(rockchip->dev, "invalid power supply\n");
			return;
		}
		scale--;
		power = power / 10;
	}

	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_DCR);
	status |= (power << PCIE_RC_CONFIG_DCR_CSPL_SHIFT) |
		  (scale << PCIE_RC_CONFIG_DCR_CPLS_SHIFT);
	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCR);
}

/**
 * rockchip_pcie_host_init_port - Initialize hardware
 * @rockchip: PCIe port information
 */
static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
{
	struct device *dev = rockchip->dev;
	int err, i = MAX_LANE_NUM;
	u32 status;

	gpiod_set_value_cansleep(rockchip->ep_gpio, 0);

	err = rockchip_pcie_init_port(rockchip);
	if (err)
		return err;

	/* Fix the transmitted FTS count desired to exit from L0s. */
	status = rockchip_pcie_read(rockchip, PCIE_CORE_CTRL_PLC1);
	status = (status & ~PCIE_CORE_CTRL_PLC1_FTS_MASK) |
		 (PCIE_CORE_CTRL_PLC1_FTS_CNT << PCIE_CORE_CTRL_PLC1_FTS_SHIFT);
	rockchip_pcie_write(rockchip, status, PCIE_CORE_CTRL_PLC1);

	rockchip_pcie_set_power_limit(rockchip);

	/* Set RC's clock architecture as common clock */
	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
	status |= PCI_EXP_LNKSTA_SLC << 16;
	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);

	/* Set RC's RCB to 128 */
	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
	status |= PCI_EXP_LNKCTL_RCB;
	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);

	/* Enable Gen1 training */
	rockchip_pcie_write(rockchip, PCIE_CLIENT_LINK_TRAIN_ENABLE,
			    PCIE_CLIENT_CONFIG);

	gpiod_set_value_cansleep(rockchip->ep_gpio, 1);

	/* 500ms timeout value should be enough for Gen1/2 training */
	err = readl_poll_timeout(rockchip->apb_base + PCIE_CLIENT_BASIC_STATUS1,
				 status, PCIE_LINK_UP(status), 20,
				 500 * USEC_PER_MSEC);
	if (err) {
		dev_err(dev, "PCIe link training gen1 timeout!\n");
		goto err_power_off_phy;
	}

	if (rockchip->link_gen == 2) {
		/*
		 * Enable retrain for gen2.  This should be configured only
		 * after gen1 finished.
		 */
		status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
		status |= PCI_EXP_LNKCTL_RL;
		rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);

		err = readl_poll_timeout(rockchip->apb_base + PCIE_CORE_CTRL,
					 status, PCIE_LINK_IS_GEN2(status), 20,
					 500 * USEC_PER_MSEC);
		if (err)
			dev_dbg(dev, "PCIe link training gen2 timeout, fall back to gen1!\n");
	}

	/* Check the final link width from negotiated lane counter from MGMT */
	status = rockchip_pcie_read(rockchip, PCIE_CORE_CTRL);
	status = 0x1 << ((status & PCIE_CORE_PL_CONF_LANE_MASK) >>
			 PCIE_CORE_PL_CONF_LANE_SHIFT);
	dev_dbg(dev, "current link width is x%d\n", status);

	/* Power off unused lane(s) */
	rockchip->lanes_map = rockchip_pcie_lane_map(rockchip);
	for (i = 0; i < MAX_LANE_NUM; i++) {
		if (!(rockchip->lanes_map & BIT(i))) {
			dev_dbg(dev, "idling lane %d\n", i);
			phy_power_off(rockchip->phys[i]);
		}
	}

	rockchip_pcie_write(rockchip, ROCKCHIP_VENDOR_ID,
			    PCIE_CORE_CONFIG_VENDOR);
	rockchip_pcie_write(rockchip,
			    PCI_CLASS_BRIDGE_PCI << PCIE_RC_CONFIG_SCC_SHIFT,
			    PCIE_RC_CONFIG_RID_CCR);

	/* Clear THP cap's next cap pointer to remove L1 substate cap */
	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_THP_CAP);
	status &= ~PCIE_RC_CONFIG_THP_CAP_NEXT_MASK;
	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_THP_CAP);

	/* Clear L0s from RC's link cap */
	if (of_property_read_bool(dev->of_node, "aspm-no-l0s")) {
		status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LINK_CAP);
		status &= ~PCIE_RC_CONFIG_LINK_CAP_L0S;
		rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LINK_CAP);
	}

	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_DCSR);
	status &= ~PCIE_RC_CONFIG_DCSR_MPS_MASK;
	status |= PCIE_RC_CONFIG_DCSR_MPS_256;
	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCSR);

	return 0;
err_power_off_phy:
	while (i--)
		phy_power_off(rockchip->phys[i]);
	i = MAX_LANE_NUM;
	while (i--)
		phy_exit(rockchip->phys[i]);
	return err;
}

static irqreturn_t rockchip_pcie_subsys_irq_handler(int irq, void *arg)
{
	struct rockchip_pcie *rockchip = arg;
	struct device *dev = rockchip->dev;
	u32 reg;
	u32 sub_reg;

	reg = rockchip_pcie_read(rockchip, PCIE_CLIENT_INT_STATUS);
	if (reg & PCIE_CLIENT_INT_LOCAL) {
		dev_dbg(dev, "local interrupt received\n");
		sub_reg = rockchip_pcie_read(rockchip, PCIE_CORE_INT_STATUS);
		if (sub_reg & PCIE_CORE_INT_PRFPE)
			dev_dbg(dev, "parity error detected while reading from the PNP receive FIFO RAM\n");

		if (sub_reg & PCIE_CORE_INT_CRFPE)
			dev_dbg(dev, "parity error detected while reading from the Completion Receive FIFO RAM\n");

		if (sub_reg & PCIE_CORE_INT_RRPE)
			dev_dbg(dev, "parity error detected while reading from replay buffer RAM\n");

		if (sub_reg & PCIE_CORE_INT_PRFO)
			dev_dbg(dev, "overflow occurred in the PNP receive FIFO\n");

		if (sub_reg & PCIE_CORE_INT_CRFO)
			dev_dbg(dev, "overflow occurred in the completion receive FIFO\n");

		if (sub_reg & PCIE_CORE_INT_RT)
			dev_dbg(dev, "replay timer timed out\n");

		if (sub_reg & PCIE_CORE_INT_RTR)
			dev_dbg(dev, "replay timer rolled over after 4 transmissions of the same TLP\n");

		if (sub_reg & PCIE_CORE_INT_PE)
			dev_dbg(dev, "phy error detected on receive side\n");

		if (sub_reg & PCIE_CORE_INT_MTR)
			dev_dbg(dev, "malformed TLP received from the link\n");

		if (sub_reg & PCIE_CORE_INT_UCR)
			dev_dbg(dev, "unexpected completion received from the link\n");

		if (sub_reg & PCIE_CORE_INT_FCE)
			dev_dbg(dev, "an error was observed in the flow control advertisements from the other side\n");

		if (sub_reg & PCIE_CORE_INT_CT)
			dev_dbg(dev, "a request timed out waiting for completion\n");

		if (sub_reg & PCIE_CORE_INT_UTC)
			dev_dbg(dev, "unmapped TC error\n");

		if (sub_reg & PCIE_CORE_INT_MMVC)
			dev_dbg(dev, "MSI mask register changes\n");

		rockchip_pcie_write(rockchip, sub_reg, PCIE_CORE_INT_STATUS);
	} else if (reg & PCIE_CLIENT_INT_PHY) {
		dev_dbg(dev, "phy link changes\n");
		rockchip_pcie_update_txcredit_mui(rockchip);
		rockchip_pcie_clr_bw_int(rockchip);
	}

	rockchip_pcie_write(rockchip, reg & PCIE_CLIENT_INT_LOCAL,
			    PCIE_CLIENT_INT_STATUS);

	return IRQ_HANDLED;
}

static irqreturn_t rockchip_pcie_client_irq_handler(int irq, void *arg)
{
	struct rockchip_pcie *rockchip = arg;
	struct device *dev = rockchip->dev;
	u32 reg;

	reg = rockchip_pcie_read(rockchip, PCIE_CLIENT_INT_STATUS);
	if (reg & PCIE_CLIENT_INT_LEGACY_DONE)
		dev_dbg(dev, "legacy done interrupt received\n");

	if (reg & PCIE_CLIENT_INT_MSG)
		dev_dbg(dev, "message done interrupt received\n");

	if (reg & PCIE_CLIENT_INT_HOT_RST)
		dev_dbg(dev, "hot reset interrupt received\n");

	if (reg & PCIE_CLIENT_INT_DPA)
		dev_dbg(dev, "dpa interrupt received\n");

	if (reg & PCIE_CLIENT_INT_FATAL_ERR)
		dev_dbg(dev, "fatal error interrupt received\n");

	if (reg & PCIE_CLIENT_INT_NFATAL_ERR)
		dev_dbg(dev, "non-fatal error interrupt received\n");

	if (reg & PCIE_CLIENT_INT_CORR_ERR)
		dev_dbg(dev, "correctable error interrupt received\n");

	if (reg & PCIE_CLIENT_INT_PHY)
		dev_dbg(dev, "phy interrupt received\n");

	rockchip_pcie_write(rockchip, reg & (PCIE_CLIENT_INT_LEGACY_DONE |
			      PCIE_CLIENT_INT_MSG | PCIE_CLIENT_INT_HOT_RST |
			      PCIE_CLIENT_INT_DPA | PCIE_CLIENT_INT_FATAL_ERR |
			      PCIE_CLIENT_INT_NFATAL_ERR |
			      PCIE_CLIENT_INT_CORR_ERR |
			      PCIE_CLIENT_INT_PHY),
			    PCIE_CLIENT_INT_STATUS);

	return IRQ_HANDLED;
}

static void rockchip_pcie_legacy_int_handler(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct rockchip_pcie *rockchip = irq_desc_get_handler_data(desc);
	struct device *dev = rockchip->dev;
	u32 reg;
	u32 hwirq;
	u32 virq;

	chained_irq_enter(chip, desc);

	reg = rockchip_pcie_read(rockchip, PCIE_CLIENT_INT_STATUS);
	reg = (reg & PCIE_CLIENT_INTR_MASK) >> PCIE_CLIENT_INTR_SHIFT;

	while (reg) {
		hwirq = ffs(reg) - 1;
		reg &= ~BIT(hwirq);

		virq = irq_find_mapping(rockchip->irq_domain, hwirq);
		if (virq)
			generic_handle_irq(virq);
		else
			dev_err(dev, "unexpected IRQ, INT%d\n", hwirq);
	}

	chained_irq_exit(chip, desc);
}

static int rockchip_pcie_setup_irq(struct rockchip_pcie *rockchip)
{
	int irq, err;
	struct device *dev = rockchip->dev;
	struct platform_device *pdev = to_platform_device(dev);

	irq = platform_get_irq_byname(pdev, "sys");
	if (irq < 0) {
		dev_err(dev, "missing sys IRQ resource\n");
		return irq;
	}

	err = devm_request_irq(dev, irq, rockchip_pcie_subsys_irq_handler,
			       IRQF_SHARED, "pcie-sys", rockchip);
	if (err) {
		dev_err(dev, "failed to request PCIe subsystem IRQ\n");
		return err;
	}

	irq = platform_get_irq_byname(pdev, "legacy");
	if (irq < 0) {
		dev_err(dev, "missing legacy IRQ resource\n");
		return irq;
	}

	irq_set_chained_handler_and_data(irq,
					 rockchip_pcie_legacy_int_handler,
					 rockchip);

	irq = platform_get_irq_byname(pdev, "client");
	if (irq < 0) {
		dev_err(dev, "missing client IRQ resource\n");
		return irq;
	}

	err = devm_request_irq(dev, irq, rockchip_pcie_client_irq_handler,
			       IRQF_SHARED, "pcie-client", rockchip);
	if (err) {
		dev_err(dev, "failed to request PCIe client IRQ\n");
		return err;
	}

	return 0;
}

/**
 * rockchip_pcie_parse_host_dt - Parse Device Tree
 * @rockchip: PCIe port information
 *
 * Return: '0' on success and error value on failure
 */
static int rockchip_pcie_parse_host_dt(struct rockchip_pcie *rockchip)
{
	struct device *dev = rockchip->dev;
	int err;

	err = rockchip_pcie_parse_dt(rockchip);
	if (err)
		return err;

	err = rockchip_pcie_setup_irq(rockchip);
	if (err)
		return err;

	rockchip->vpcie12v = devm_regulator_get_optional(dev, "vpcie12v");
	if (IS_ERR(rockchip->vpcie12v)) {
		if (PTR_ERR(rockchip->vpcie12v) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
		dev_info(dev, "no vpcie12v regulator found\n");
	}

	rockchip->vpcie3v3 = devm_regulator_get_optional(dev, "vpcie3v3");
	if (IS_ERR(rockchip->vpcie3v3)) {
		if (PTR_ERR(rockchip->vpcie3v3) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
		dev_info(dev, "no vpcie3v3 regulator found\n");
	}

	rockchip->vpcie1v8 = devm_regulator_get_optional(dev, "vpcie1v8");
	if (IS_ERR(rockchip->vpcie1v8)) {
		if (PTR_ERR(rockchip->vpcie1v8) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
		dev_info(dev, "no vpcie1v8 regulator found\n");
	}

	rockchip->vpcie0v9 = devm_regulator_get_optional(dev, "vpcie0v9");
	if (IS_ERR(rockchip->vpcie0v9)) {
		if (PTR_ERR(rockchip->vpcie0v9) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
		dev_info(dev, "no vpcie0v9 regulator found\n");
	}

	return 0;
}

static int rockchip_pcie_set_vpcie(struct rockchip_pcie *rockchip)
{
	struct device *dev = rockchip->dev;
	int err;

	if (!IS_ERR(rockchip->vpcie12v)) {
		err = regulator_enable(rockchip->vpcie12v);
		if (err) {
			dev_err(dev, "fail to enable vpcie12v regulator\n");
			goto err_out;
		}
	}

	if (!IS_ERR(rockchip->vpcie3v3)) {
		err = regulator_enable(rockchip->vpcie3v3);
		if (err) {
			dev_err(dev, "fail to enable vpcie3v3 regulator\n");
			goto err_disable_12v;
		}
	}

	if (!IS_ERR(rockchip->vpcie1v8)) {
		err = regulator_enable(rockchip->vpcie1v8);
		if (err) {
			dev_err(dev, "fail to enable vpcie1v8 regulator\n");
			goto err_disable_3v3;
		}
	}

	if (!IS_ERR(rockchip->vpcie0v9)) {
		err = regulator_enable(rockchip->vpcie0v9);
		if (err) {
			dev_err(dev, "fail to enable vpcie0v9 regulator\n");
			goto err_disable_1v8;
		}
	}

	return 0;

err_disable_1v8:
	if (!IS_ERR(rockchip->vpcie1v8))
		regulator_disable(rockchip->vpcie1v8);
err_disable_3v3:
	if (!IS_ERR(rockchip->vpcie3v3))
		regulator_disable(rockchip->vpcie3v3);
err_disable_12v:
	if (!IS_ERR(rockchip->vpcie12v))
		regulator_disable(rockchip->vpcie12v);
err_out:
	return err;
}

static void rockchip_pcie_enable_interrupts(struct rockchip_pcie *rockchip)
{
	rockchip_pcie_write(rockchip, (PCIE_CLIENT_INT_CLI << 16) &
			    (~PCIE_CLIENT_INT_CLI), PCIE_CLIENT_INT_MASK);
	rockchip_pcie_write(rockchip, (u32)(~PCIE_CORE_INT),
			    PCIE_CORE_INT_MASK);

	rockchip_pcie_enable_bw_int(rockchip);
}

static int rockchip_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
				  irq_hw_number_t hwirq)
{
	irq_set_chip_and_handler(irq, &dummy_irq_chip, handle_simple_irq);
	irq_set_chip_data(irq, domain->host_data);

	return 0;
}

static const struct irq_domain_ops intx_domain_ops = {
	.map = rockchip_pcie_intx_map,
};

static int rockchip_pcie_init_irq_domain(struct rockchip_pcie *rockchip)
{
	struct device *dev = rockchip->dev;
	struct device_node *intc = of_get_next_child(dev->of_node, NULL);

	if (!intc) {
		dev_err(dev, "missing child interrupt-controller node\n");
		return -EINVAL;
	}

	rockchip->irq_domain = irq_domain_add_linear(intc, PCI_NUM_INTX,
						     &intx_domain_ops,
						     rockchip);
	if (!rockchip->irq_domain) {
		dev_err(dev, "failed to get a INTx IRQ domain\n");
		return -EINVAL;
	}

	return 0;
}

static int rockchip_pcie_prog_ob_atu(struct rockchip_pcie *rockchip,
				     int region_no, int type, u8 num_pass_bits,
				     u32 lower_addr, u32 upper_addr)
{
	u32 ob_addr_0;
	u32 ob_addr_1;
	u32 ob_desc_0;
	u32 aw_offset;

	if (region_no >= MAX_AXI_WRAPPER_REGION_NUM)
		return -EINVAL;
	if (num_pass_bits + 1 < 8)
		return -EINVAL;
	if (num_pass_bits > 63)
		return -EINVAL;
	if (region_no == 0) {
		if (AXI_REGION_0_SIZE < (2ULL << num_pass_bits))
			return -EINVAL;
	}
	if (region_no != 0) {
		if (AXI_REGION_SIZE < (2ULL << num_pass_bits))
			return -EINVAL;
	}

	aw_offset = (region_no << OB_REG_SIZE_SHIFT);

	ob_addr_0 = num_pass_bits & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS;
	ob_addr_0 |= lower_addr & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR;
	ob_addr_1 = upper_addr;
	ob_desc_0 = (1 << 23 | type);

	rockchip_pcie_write(rockchip, ob_addr_0,
			    PCIE_CORE_OB_REGION_ADDR0 + aw_offset);
	rockchip_pcie_write(rockchip, ob_addr_1,
			    PCIE_CORE_OB_REGION_ADDR1 + aw_offset);
	rockchip_pcie_write(rockchip, ob_desc_0,
			    PCIE_CORE_OB_REGION_DESC0 + aw_offset);
	rockchip_pcie_write(rockchip, 0,
			    PCIE_CORE_OB_REGION_DESC1 + aw_offset);

	return 0;
}

static int
rockchip_pcie_prog_ib_atu(struct rockchip_pcie *rockchip, 779 + int region_no, u8 num_pass_bits, 780 + u32 lower_addr, u32 upper_addr) 781 + { 782 + u32 ib_addr_0; 783 + u32 ib_addr_1; 784 + u32 aw_offset; 785 + 786 + if (region_no > MAX_AXI_IB_ROOTPORT_REGION_NUM) 787 + return -EINVAL; 788 + if (num_pass_bits + 1 < MIN_AXI_ADDR_BITS_PASSED) 789 + return -EINVAL; 790 + if (num_pass_bits > 63) 791 + return -EINVAL; 792 + 793 + aw_offset = (region_no << IB_ROOT_PORT_REG_SIZE_SHIFT); 794 + 795 + ib_addr_0 = num_pass_bits & PCIE_CORE_IB_REGION_ADDR0_NUM_BITS; 796 + ib_addr_0 |= (lower_addr << 8) & PCIE_CORE_IB_REGION_ADDR0_LO_ADDR; 797 + ib_addr_1 = upper_addr; 798 + 799 + rockchip_pcie_write(rockchip, ib_addr_0, PCIE_RP_IB_ADDR0 + aw_offset); 800 + rockchip_pcie_write(rockchip, ib_addr_1, PCIE_RP_IB_ADDR1 + aw_offset); 801 + 802 + return 0; 803 + } 804 + 805 + static int rockchip_pcie_cfg_atu(struct rockchip_pcie *rockchip) 806 + { 807 + struct device *dev = rockchip->dev; 808 + int offset; 809 + int err; 810 + int reg_no; 811 + 812 + rockchip_pcie_cfg_configuration_accesses(rockchip, 813 + AXI_WRAPPER_TYPE0_CFG); 814 + 815 + for (reg_no = 0; reg_no < (rockchip->mem_size >> 20); reg_no++) { 816 + err = rockchip_pcie_prog_ob_atu(rockchip, reg_no + 1, 817 + AXI_WRAPPER_MEM_WRITE, 818 + 20 - 1, 819 + rockchip->mem_bus_addr + 820 + (reg_no << 20), 821 + 0); 822 + if (err) { 823 + dev_err(dev, "program RC mem outbound ATU failed\n"); 824 + return err; 825 + } 826 + } 827 + 828 + err = rockchip_pcie_prog_ib_atu(rockchip, 2, 32 - 1, 0x0, 0); 829 + if (err) { 830 + dev_err(dev, "program RC mem inbound ATU failed\n"); 831 + return err; 832 + } 833 + 834 + offset = rockchip->mem_size >> 20; 835 + for (reg_no = 0; reg_no < (rockchip->io_size >> 20); reg_no++) { 836 + err = rockchip_pcie_prog_ob_atu(rockchip, 837 + reg_no + 1 + offset, 838 + AXI_WRAPPER_IO_WRITE, 839 + 20 - 1, 840 + rockchip->io_bus_addr + 841 + (reg_no << 20), 842 + 0); 843 + if (err) { 844 + dev_err(dev, 
"program RC io outbound ATU failed\n"); 845 + return err; 846 + } 847 + } 848 + 849 + /* assign message regions */ 850 + rockchip_pcie_prog_ob_atu(rockchip, reg_no + 1 + offset, 851 + AXI_WRAPPER_NOR_MSG, 852 + 20 - 1, 0, 0); 853 + 854 + rockchip->msg_bus_addr = rockchip->mem_bus_addr + 855 + ((reg_no + offset) << 20); 856 + return err; 857 + } 858 + 859 + static int rockchip_pcie_wait_l2(struct rockchip_pcie *rockchip) 860 + { 861 + u32 value; 862 + int err; 863 + 864 + /* send PME_TURN_OFF message */ 865 + writel(0x0, rockchip->msg_region + PCIE_RC_SEND_PME_OFF); 866 + 867 + /* read LTSSM and wait for falling into L2 link state */ 868 + err = readl_poll_timeout(rockchip->apb_base + PCIE_CLIENT_DEBUG_OUT_0, 869 + value, PCIE_LINK_IS_L2(value), 20, 870 + jiffies_to_usecs(5 * HZ)); 871 + if (err) { 872 + dev_err(rockchip->dev, "PCIe link enter L2 timeout!\n"); 873 + return err; 874 + } 875 + 876 + return 0; 877 + } 878 + 879 + static int __maybe_unused rockchip_pcie_suspend_noirq(struct device *dev) 880 + { 881 + struct rockchip_pcie *rockchip = dev_get_drvdata(dev); 882 + int ret; 883 + 884 + /* disable core and cli int since we don't need to ack PME_ACK */ 885 + rockchip_pcie_write(rockchip, (PCIE_CLIENT_INT_CLI << 16) | 886 + PCIE_CLIENT_INT_CLI, PCIE_CLIENT_INT_MASK); 887 + rockchip_pcie_write(rockchip, (u32)PCIE_CORE_INT, PCIE_CORE_INT_MASK); 888 + 889 + ret = rockchip_pcie_wait_l2(rockchip); 890 + if (ret) { 891 + rockchip_pcie_enable_interrupts(rockchip); 892 + return ret; 893 + } 894 + 895 + rockchip_pcie_deinit_phys(rockchip); 896 + 897 + rockchip_pcie_disable_clocks(rockchip); 898 + 899 + if (!IS_ERR(rockchip->vpcie0v9)) 900 + regulator_disable(rockchip->vpcie0v9); 901 + 902 + return ret; 903 + } 904 + 905 + static int __maybe_unused rockchip_pcie_resume_noirq(struct device *dev) 906 + { 907 + struct rockchip_pcie *rockchip = dev_get_drvdata(dev); 908 + int err; 909 + 910 + if (!IS_ERR(rockchip->vpcie0v9)) { 911 + err = 
regulator_enable(rockchip->vpcie0v9); 912 + if (err) { 913 + dev_err(dev, "fail to enable vpcie0v9 regulator\n"); 914 + return err; 915 + } 916 + } 917 + 918 + err = rockchip_pcie_enable_clocks(rockchip); 919 + if (err) 920 + goto err_disable_0v9; 921 + 922 + err = rockchip_pcie_host_init_port(rockchip); 923 + if (err) 924 + goto err_pcie_resume; 925 + 926 + err = rockchip_pcie_cfg_atu(rockchip); 927 + if (err) 928 + goto err_err_deinit_port; 929 + 930 + /* Need this to enter L1 again */ 931 + rockchip_pcie_update_txcredit_mui(rockchip); 932 + rockchip_pcie_enable_interrupts(rockchip); 933 + 934 + return 0; 935 + 936 + err_err_deinit_port: 937 + rockchip_pcie_deinit_phys(rockchip); 938 + err_pcie_resume: 939 + rockchip_pcie_disable_clocks(rockchip); 940 + err_disable_0v9: 941 + if (!IS_ERR(rockchip->vpcie0v9)) 942 + regulator_disable(rockchip->vpcie0v9); 943 + return err; 944 + } 945 + 946 + static int rockchip_pcie_probe(struct platform_device *pdev) 947 + { 948 + struct rockchip_pcie *rockchip; 949 + struct device *dev = &pdev->dev; 950 + struct pci_bus *bus, *child; 951 + struct pci_host_bridge *bridge; 952 + struct resource_entry *win; 953 + resource_size_t io_base; 954 + struct resource *mem; 955 + struct resource *io; 956 + int err; 957 + 958 + LIST_HEAD(res); 959 + 960 + if (!dev->of_node) 961 + return -ENODEV; 962 + 963 + bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rockchip)); 964 + if (!bridge) 965 + return -ENOMEM; 966 + 967 + rockchip = pci_host_bridge_priv(bridge); 968 + 969 + platform_set_drvdata(pdev, rockchip); 970 + rockchip->dev = dev; 971 + rockchip->is_rc = true; 972 + 973 + err = rockchip_pcie_parse_host_dt(rockchip); 974 + if (err) 975 + return err; 976 + 977 + err = rockchip_pcie_enable_clocks(rockchip); 978 + if (err) 979 + return err; 980 + 981 + err = rockchip_pcie_set_vpcie(rockchip); 982 + if (err) { 983 + dev_err(dev, "failed to set vpcie regulator\n"); 984 + goto err_set_vpcie; 985 + } 986 + 987 + err = 
rockchip_pcie_host_init_port(rockchip); 988 + if (err) 989 + goto err_vpcie; 990 + 991 + rockchip_pcie_enable_interrupts(rockchip); 992 + 993 + err = rockchip_pcie_init_irq_domain(rockchip); 994 + if (err < 0) 995 + goto err_deinit_port; 996 + 997 + err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, 998 + &res, &io_base); 999 + if (err) 1000 + goto err_remove_irq_domain; 1001 + 1002 + err = devm_request_pci_bus_resources(dev, &res); 1003 + if (err) 1004 + goto err_free_res; 1005 + 1006 + /* Get the I/O and memory ranges from DT */ 1007 + resource_list_for_each_entry(win, &res) { 1008 + switch (resource_type(win->res)) { 1009 + case IORESOURCE_IO: 1010 + io = win->res; 1011 + io->name = "I/O"; 1012 + rockchip->io_size = resource_size(io); 1013 + rockchip->io_bus_addr = io->start - win->offset; 1014 + err = pci_remap_iospace(io, io_base); 1015 + if (err) { 1016 + dev_warn(dev, "error %d: failed to map resource %pR\n", 1017 + err, io); 1018 + continue; 1019 + } 1020 + rockchip->io = io; 1021 + break; 1022 + case IORESOURCE_MEM: 1023 + mem = win->res; 1024 + mem->name = "MEM"; 1025 + rockchip->mem_size = resource_size(mem); 1026 + rockchip->mem_bus_addr = mem->start - win->offset; 1027 + break; 1028 + case IORESOURCE_BUS: 1029 + rockchip->root_bus_nr = win->res->start; 1030 + break; 1031 + default: 1032 + continue; 1033 + } 1034 + } 1035 + 1036 + err = rockchip_pcie_cfg_atu(rockchip); 1037 + if (err) 1038 + goto err_unmap_iospace; 1039 + 1040 + rockchip->msg_region = devm_ioremap(dev, rockchip->msg_bus_addr, SZ_1M); 1041 + if (!rockchip->msg_region) { 1042 + err = -ENOMEM; 1043 + goto err_unmap_iospace; 1044 + } 1045 + 1046 + list_splice_init(&res, &bridge->windows); 1047 + bridge->dev.parent = dev; 1048 + bridge->sysdata = rockchip; 1049 + bridge->busnr = 0; 1050 + bridge->ops = &rockchip_pcie_ops; 1051 + bridge->map_irq = of_irq_parse_and_map_pci; 1052 + bridge->swizzle_irq = pci_common_swizzle; 1053 + 1054 + err = pci_scan_root_bus_bridge(bridge); 1055 + if 
(err < 0) 1056 + goto err_unmap_iospace; 1057 + 1058 + bus = bridge->bus; 1059 + 1060 + rockchip->root_bus = bus; 1061 + 1062 + pci_bus_size_bridges(bus); 1063 + pci_bus_assign_resources(bus); 1064 + list_for_each_entry(child, &bus->children, node) 1065 + pcie_bus_configure_settings(child); 1066 + 1067 + pci_bus_add_devices(bus); 1068 + return 0; 1069 + 1070 + err_unmap_iospace: 1071 + pci_unmap_iospace(rockchip->io); 1072 + err_free_res: 1073 + pci_free_resource_list(&res); 1074 + err_remove_irq_domain: 1075 + irq_domain_remove(rockchip->irq_domain); 1076 + err_deinit_port: 1077 + rockchip_pcie_deinit_phys(rockchip); 1078 + err_vpcie: 1079 + if (!IS_ERR(rockchip->vpcie12v)) 1080 + regulator_disable(rockchip->vpcie12v); 1081 + if (!IS_ERR(rockchip->vpcie3v3)) 1082 + regulator_disable(rockchip->vpcie3v3); 1083 + if (!IS_ERR(rockchip->vpcie1v8)) 1084 + regulator_disable(rockchip->vpcie1v8); 1085 + if (!IS_ERR(rockchip->vpcie0v9)) 1086 + regulator_disable(rockchip->vpcie0v9); 1087 + err_set_vpcie: 1088 + rockchip_pcie_disable_clocks(rockchip); 1089 + return err; 1090 + } 1091 + 1092 + static int rockchip_pcie_remove(struct platform_device *pdev) 1093 + { 1094 + struct device *dev = &pdev->dev; 1095 + struct rockchip_pcie *rockchip = dev_get_drvdata(dev); 1096 + 1097 + pci_stop_root_bus(rockchip->root_bus); 1098 + pci_remove_root_bus(rockchip->root_bus); 1099 + pci_unmap_iospace(rockchip->io); 1100 + irq_domain_remove(rockchip->irq_domain); 1101 + 1102 + rockchip_pcie_deinit_phys(rockchip); 1103 + 1104 + rockchip_pcie_disable_clocks(rockchip); 1105 + 1106 + if (!IS_ERR(rockchip->vpcie12v)) 1107 + regulator_disable(rockchip->vpcie12v); 1108 + if (!IS_ERR(rockchip->vpcie3v3)) 1109 + regulator_disable(rockchip->vpcie3v3); 1110 + if (!IS_ERR(rockchip->vpcie1v8)) 1111 + regulator_disable(rockchip->vpcie1v8); 1112 + if (!IS_ERR(rockchip->vpcie0v9)) 1113 + regulator_disable(rockchip->vpcie0v9); 1114 + 1115 + return 0; 1116 + } 1117 + 1118 + static const struct dev_pm_ops 
rockchip_pcie_pm_ops = { 1119 + SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(rockchip_pcie_suspend_noirq, 1120 + rockchip_pcie_resume_noirq) 1121 + }; 1122 + 1123 + static const struct of_device_id rockchip_pcie_of_match[] = { 1124 + { .compatible = "rockchip,rk3399-pcie", }, 1125 + {} 1126 + }; 1127 + MODULE_DEVICE_TABLE(of, rockchip_pcie_of_match); 1128 + 1129 + static struct platform_driver rockchip_pcie_driver = { 1130 + .driver = { 1131 + .name = "rockchip-pcie", 1132 + .of_match_table = rockchip_pcie_of_match, 1133 + .pm = &rockchip_pcie_pm_ops, 1134 + }, 1135 + .probe = rockchip_pcie_probe, 1136 + .remove = rockchip_pcie_remove, 1137 + }; 1138 + module_platform_driver(rockchip_pcie_driver); 1139 + 1140 + MODULE_AUTHOR("Rockchip Inc"); 1141 + MODULE_DESCRIPTION("Rockchip AXI PCIe driver"); 1142 + MODULE_LICENSE("GPL v2");
drivers/pci/host/pcie-rockchip.c | +154 -1438
···
  * ARM PCI Host generic driver.
  */
 
-#include <linux/bitrev.h>
 #include <linux/clk.h>
 #include <linux/delay.h>
 #include <linux/gpio/consumer.h>
-#include <linux/init.h>
-#include <linux/interrupt.h>
-#include <linux/iopoll.h>
-#include <linux/irq.h>
-#include <linux/irqchip/chained_irq.h>
-#include <linux/irqdomain.h>
-#include <linux/kernel.h>
-#include <linux/mfd/syscon.h>
-#include <linux/module.h>
-#include <linux/of_address.h>
-#include <linux/of_device.h>
 #include <linux/of_pci.h>
-#include <linux/of_platform.h>
-#include <linux/of_irq.h>
-#include <linux/pci.h>
-#include <linux/pci_ids.h>
 #include <linux/phy/phy.h>
 #include <linux/platform_device.h>
 #include <linux/reset.h>
-#include <linux/regmap.h>
 
-/*
- * The upper 16 bits of PCIE_CLIENT_CONFIG are a write mask for the lower 16
- * bits. This allows atomic updates of the register without locking.
- */
-#define HIWORD_UPDATE(mask, val)	(((mask) << 16) | (val))
-#define HIWORD_UPDATE_BIT(val)	HIWORD_UPDATE(val, val)
+#include "../pci.h"
+#include "pcie-rockchip.h"
 
-#define ENCODE_LANES(x)	((((x) >> 1) & 3) << 4)
-#define MAX_LANE_NUM	4
-
-#define PCIE_CLIENT_BASE	0x0
-#define PCIE_CLIENT_CONFIG	(PCIE_CLIENT_BASE + 0x00)
-#define PCIE_CLIENT_CONF_ENABLE	HIWORD_UPDATE_BIT(0x0001)
-#define PCIE_CLIENT_LINK_TRAIN_ENABLE	HIWORD_UPDATE_BIT(0x0002)
-#define PCIE_CLIENT_ARI_ENABLE	HIWORD_UPDATE_BIT(0x0008)
-#define PCIE_CLIENT_CONF_LANE_NUM(x)	HIWORD_UPDATE(0x0030, ENCODE_LANES(x))
-#define PCIE_CLIENT_MODE_RC	HIWORD_UPDATE_BIT(0x0040)
-#define PCIE_CLIENT_GEN_SEL_1	HIWORD_UPDATE(0x0080, 0)
-#define PCIE_CLIENT_GEN_SEL_2	HIWORD_UPDATE_BIT(0x0080)
-#define PCIE_CLIENT_DEBUG_OUT_0	(PCIE_CLIENT_BASE + 0x3c)
-#define PCIE_CLIENT_DEBUG_LTSSM_MASK	GENMASK(5, 0)
-#define PCIE_CLIENT_DEBUG_LTSSM_L1	0x18
-#define PCIE_CLIENT_DEBUG_LTSSM_L2	0x19
-#define PCIE_CLIENT_BASIC_STATUS1	(PCIE_CLIENT_BASE + 0x48)
-#define PCIE_CLIENT_LINK_STATUS_UP	0x00300000
-#define PCIE_CLIENT_LINK_STATUS_MASK	0x00300000
-#define PCIE_CLIENT_INT_MASK	(PCIE_CLIENT_BASE + 0x4c)
-#define PCIE_CLIENT_INT_STATUS	(PCIE_CLIENT_BASE + 0x50)
-#define PCIE_CLIENT_INTR_MASK	GENMASK(8, 5)
-#define PCIE_CLIENT_INTR_SHIFT	5
-#define PCIE_CLIENT_INT_LEGACY_DONE	BIT(15)
-#define PCIE_CLIENT_INT_MSG	BIT(14)
-#define PCIE_CLIENT_INT_HOT_RST	BIT(13)
-#define PCIE_CLIENT_INT_DPA	BIT(12)
-#define PCIE_CLIENT_INT_FATAL_ERR	BIT(11)
-#define PCIE_CLIENT_INT_NFATAL_ERR	BIT(10)
-#define PCIE_CLIENT_INT_CORR_ERR	BIT(9)
-#define PCIE_CLIENT_INT_INTD	BIT(8)
-#define PCIE_CLIENT_INT_INTC	BIT(7)
-#define PCIE_CLIENT_INT_INTB	BIT(6)
-#define PCIE_CLIENT_INT_INTA	BIT(5)
-#define PCIE_CLIENT_INT_LOCAL	BIT(4)
-#define PCIE_CLIENT_INT_UDMA	BIT(3)
-#define PCIE_CLIENT_INT_PHY	BIT(2)
-#define PCIE_CLIENT_INT_HOT_PLUG	BIT(1)
-#define PCIE_CLIENT_INT_PWR_STCG	BIT(0)
-
-#define PCIE_CLIENT_INT_LEGACY \
-	(PCIE_CLIENT_INT_INTA | PCIE_CLIENT_INT_INTB | \
-	PCIE_CLIENT_INT_INTC | PCIE_CLIENT_INT_INTD)
-
-#define PCIE_CLIENT_INT_CLI \
-	(PCIE_CLIENT_INT_CORR_ERR | PCIE_CLIENT_INT_NFATAL_ERR | \
-	PCIE_CLIENT_INT_FATAL_ERR | PCIE_CLIENT_INT_DPA | \
-	PCIE_CLIENT_INT_HOT_RST | PCIE_CLIENT_INT_MSG | \
-	PCIE_CLIENT_INT_LEGACY_DONE | PCIE_CLIENT_INT_LEGACY | \
-	PCIE_CLIENT_INT_PHY)
-
-#define PCIE_CORE_CTRL_MGMT_BASE	0x900000
-#define PCIE_CORE_CTRL	(PCIE_CORE_CTRL_MGMT_BASE + 0x000)
-#define PCIE_CORE_PL_CONF_SPEED_5G	0x00000008
-#define PCIE_CORE_PL_CONF_SPEED_MASK	0x00000018
-#define PCIE_CORE_PL_CONF_LANE_MASK	0x00000006
-#define PCIE_CORE_PL_CONF_LANE_SHIFT	1
-#define PCIE_CORE_CTRL_PLC1	(PCIE_CORE_CTRL_MGMT_BASE + 0x004)
-#define PCIE_CORE_CTRL_PLC1_FTS_MASK	GENMASK(23, 8)
-#define PCIE_CORE_CTRL_PLC1_FTS_SHIFT	8
-#define PCIE_CORE_CTRL_PLC1_FTS_CNT	0xffff
-#define PCIE_CORE_TXCREDIT_CFG1	(PCIE_CORE_CTRL_MGMT_BASE + 0x020)
-#define PCIE_CORE_TXCREDIT_CFG1_MUI_MASK	0xFFFF0000
-#define PCIE_CORE_TXCREDIT_CFG1_MUI_SHIFT	16
-#define PCIE_CORE_TXCREDIT_CFG1_MUI_ENCODE(x) \
-	(((x) >> 3) << PCIE_CORE_TXCREDIT_CFG1_MUI_SHIFT)
-#define PCIE_CORE_LANE_MAP	(PCIE_CORE_CTRL_MGMT_BASE + 0x200)
-#define PCIE_CORE_LANE_MAP_MASK	0x0000000f
-#define PCIE_CORE_LANE_MAP_REVERSE	BIT(16)
-#define PCIE_CORE_INT_STATUS	(PCIE_CORE_CTRL_MGMT_BASE + 0x20c)
-#define PCIE_CORE_INT_PRFPE	BIT(0)
-#define PCIE_CORE_INT_CRFPE	BIT(1)
-#define PCIE_CORE_INT_RRPE	BIT(2)
-#define PCIE_CORE_INT_PRFO	BIT(3)
-#define PCIE_CORE_INT_CRFO	BIT(4)
-#define PCIE_CORE_INT_RT	BIT(5)
-#define PCIE_CORE_INT_RTR	BIT(6)
-#define PCIE_CORE_INT_PE	BIT(7)
-#define PCIE_CORE_INT_MTR	BIT(8)
-#define PCIE_CORE_INT_UCR	BIT(9)
-#define PCIE_CORE_INT_FCE	BIT(10)
-#define PCIE_CORE_INT_CT	BIT(11)
-#define PCIE_CORE_INT_UTC	BIT(18)
-#define PCIE_CORE_INT_MMVC	BIT(19)
-#define PCIE_CORE_CONFIG_VENDOR	(PCIE_CORE_CTRL_MGMT_BASE + 0x44)
-#define PCIE_CORE_INT_MASK	(PCIE_CORE_CTRL_MGMT_BASE + 0x210)
-#define PCIE_RC_BAR_CONF	(PCIE_CORE_CTRL_MGMT_BASE + 0x300)
-
-#define PCIE_CORE_INT \
-	(PCIE_CORE_INT_PRFPE | PCIE_CORE_INT_CRFPE | \
-	PCIE_CORE_INT_RRPE | PCIE_CORE_INT_CRFO | \
-	PCIE_CORE_INT_RT | PCIE_CORE_INT_RTR | \
-	PCIE_CORE_INT_PE | PCIE_CORE_INT_MTR | \
-	PCIE_CORE_INT_UCR | PCIE_CORE_INT_FCE | \
-	PCIE_CORE_INT_CT | PCIE_CORE_INT_UTC | \
-	PCIE_CORE_INT_MMVC)
-
-#define PCIE_RC_CONFIG_NORMAL_BASE	0x800000
-#define PCIE_RC_CONFIG_BASE	0xa00000
-#define PCIE_RC_CONFIG_RID_CCR	(PCIE_RC_CONFIG_BASE + 0x08)
-#define PCIE_RC_CONFIG_SCC_SHIFT	16
-#define PCIE_RC_CONFIG_DCR	(PCIE_RC_CONFIG_BASE + 0xc4)
-#define PCIE_RC_CONFIG_DCR_CSPL_SHIFT	18
-#define PCIE_RC_CONFIG_DCR_CSPL_LIMIT	0xff
-#define PCIE_RC_CONFIG_DCR_CPLS_SHIFT	26
-#define PCIE_RC_CONFIG_DCSR	(PCIE_RC_CONFIG_BASE + 0xc8)
-#define PCIE_RC_CONFIG_DCSR_MPS_MASK	GENMASK(7, 5)
-#define PCIE_RC_CONFIG_DCSR_MPS_256	(0x1 << 5)
-#define PCIE_RC_CONFIG_LINK_CAP	(PCIE_RC_CONFIG_BASE + 0xcc)
-#define PCIE_RC_CONFIG_LINK_CAP_L0S	BIT(10)
-#define PCIE_RC_CONFIG_LCS	(PCIE_RC_CONFIG_BASE + 0xd0)
-#define PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2	(PCIE_RC_CONFIG_BASE + 0x90c)
-#define PCIE_RC_CONFIG_THP_CAP	(PCIE_RC_CONFIG_BASE + 0x274)
-#define PCIE_RC_CONFIG_THP_CAP_NEXT_MASK	GENMASK(31, 20)
-
-#define PCIE_CORE_AXI_CONF_BASE	0xc00000
-#define PCIE_CORE_OB_REGION_ADDR0	(PCIE_CORE_AXI_CONF_BASE + 0x0)
-#define PCIE_CORE_OB_REGION_ADDR0_NUM_BITS	0x3f
-#define PCIE_CORE_OB_REGION_ADDR0_LO_ADDR	0xffffff00
-#define PCIE_CORE_OB_REGION_ADDR1	(PCIE_CORE_AXI_CONF_BASE + 0x4)
-#define PCIE_CORE_OB_REGION_DESC0	(PCIE_CORE_AXI_CONF_BASE + 0x8)
-#define PCIE_CORE_OB_REGION_DESC1	(PCIE_CORE_AXI_CONF_BASE + 0xc)
-
-#define PCIE_CORE_AXI_INBOUND_BASE	0xc00800
-#define PCIE_RP_IB_ADDR0	(PCIE_CORE_AXI_INBOUND_BASE + 0x0)
-#define PCIE_CORE_IB_REGION_ADDR0_NUM_BITS	0x3f
-#define PCIE_CORE_IB_REGION_ADDR0_LO_ADDR	0xffffff00
-#define PCIE_RP_IB_ADDR1	(PCIE_CORE_AXI_INBOUND_BASE + 0x4)
-
-/* Size of one AXI Region (not Region 0) */
-#define AXI_REGION_SIZE	BIT(20)
-/* Size of Region 0, equal to sum of sizes of other regions */
-#define AXI_REGION_0_SIZE	(32 * (0x1 << 20))
-#define OB_REG_SIZE_SHIFT	5
-#define IB_ROOT_PORT_REG_SIZE_SHIFT	3
-#define AXI_WRAPPER_IO_WRITE	0x6
-#define AXI_WRAPPER_MEM_WRITE	0x2
-#define AXI_WRAPPER_TYPE0_CFG	0xa
-#define AXI_WRAPPER_TYPE1_CFG	0xb
-#define AXI_WRAPPER_NOR_MSG	0xc
-
-#define MAX_AXI_IB_ROOTPORT_REGION_NUM	3
-#define MIN_AXI_ADDR_BITS_PASSED	8
-#define PCIE_RC_SEND_PME_OFF	0x11960
-#define ROCKCHIP_VENDOR_ID	0x1d87
-#define PCIE_ECAM_BUS(x)	(((x) & 0xff) << 20)
-#define PCIE_ECAM_DEV(x)	(((x) & 0x1f) << 15)
-#define PCIE_ECAM_FUNC(x)	(((x) & 0x7) << 12)
-#define PCIE_ECAM_REG(x)	(((x) & 0xfff) << 0)
-#define PCIE_ECAM_ADDR(bus, dev, func, reg) \
-	(PCIE_ECAM_BUS(bus) | PCIE_ECAM_DEV(dev) | \
-	 PCIE_ECAM_FUNC(func) | PCIE_ECAM_REG(reg))
-#define PCIE_LINK_IS_L2(x) \
-	(((x) & PCIE_CLIENT_DEBUG_LTSSM_MASK) == PCIE_CLIENT_DEBUG_LTSSM_L2)
-#define PCIE_LINK_UP(x) \
-	(((x) & PCIE_CLIENT_LINK_STATUS_MASK) == PCIE_CLIENT_LINK_STATUS_UP)
-#define PCIE_LINK_IS_GEN2(x) \
-	(((x) & PCIE_CORE_PL_CONF_SPEED_MASK) == PCIE_CORE_PL_CONF_SPEED_5G)
-
-#define RC_REGION_0_ADDR_TRANS_H	0x00000000
-#define RC_REGION_0_ADDR_TRANS_L	0x00000000
-#define RC_REGION_0_PASS_BITS	(25 - 1)
-#define RC_REGION_0_TYPE_MASK	GENMASK(3, 0)
-#define MAX_AXI_WRAPPER_REGION_NUM	33
-
-struct rockchip_pcie {
-	void	__iomem *reg_base;	/* DT axi-base */
-	void	__iomem *apb_base;	/* DT apb-base */
-	bool	legacy_phy;
-	struct	phy *phys[MAX_LANE_NUM];
-	struct	reset_control *core_rst;
-	struct	reset_control *mgmt_rst;
-	struct	reset_control *mgmt_sticky_rst;
-	struct	reset_control *pipe_rst;
-	struct	reset_control *pm_rst;
-	struct	reset_control *aclk_rst;
-	struct	reset_control *pclk_rst;
-	struct	clk *aclk_pcie;
-	struct	clk *aclk_perf_pcie;
-	struct	clk *hclk_pcie;
-	struct	clk *clk_pcie_pm;
-	struct	regulator *vpcie12v; /* 12V power supply */
-	struct	regulator *vpcie3v3; /* 3.3V power supply */
-	struct	regulator *vpcie1v8; /* 1.8V power supply */
-	struct	regulator *vpcie0v9; /* 0.9V power supply */
-	struct	gpio_desc *ep_gpio;
-	u32	lanes;
-	u8	lanes_map;
-	u8	root_bus_nr;
-	int	link_gen;
-	struct	device *dev;
-	struct	irq_domain *irq_domain;
-	int	offset;
-	struct	pci_bus *root_bus;
-	struct	resource *io;
-	phys_addr_t io_bus_addr;
-	u32	io_size;
-	void	__iomem *msg_region;
-	u32	mem_size;
-	phys_addr_t msg_bus_addr;
-	phys_addr_t mem_bus_addr;
-};
-
-static u32 rockchip_pcie_read(struct rockchip_pcie *rockchip, u32 reg)
+int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
 {
-	return readl(rockchip->apb_base + reg);
-}
+	struct device *dev = rockchip->dev;
+	struct platform_device *pdev = to_platform_device(dev);
+	struct device_node *node = dev->of_node;
+	struct resource *regs;
+	int err;
 
-static void rockchip_pcie_write(struct rockchip_pcie *rockchip, u32 val,
-				u32 reg)
-{
-	writel(val, rockchip->apb_base + reg);
-}
-
-static void rockchip_pcie_enable_bw_int(struct rockchip_pcie *rockchip)
-{
-	u32 status;
-
-	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
-	status |= (PCI_EXP_LNKCTL_LBMIE | PCI_EXP_LNKCTL_LABIE);
-	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
-}
-
-static void rockchip_pcie_clr_bw_int(struct rockchip_pcie *rockchip)
-{
-	u32 status;
-
-	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
-	status |= (PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_LABS) << 16;
-	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
-}
-
-static void rockchip_pcie_update_txcredit_mui(struct rockchip_pcie *rockchip)
-{
-	u32 val;
-
-	/* Update Tx credit maximum update interval */
-	val = rockchip_pcie_read(rockchip, PCIE_CORE_TXCREDIT_CFG1);
-	val &= ~PCIE_CORE_TXCREDIT_CFG1_MUI_MASK;
-	val |= PCIE_CORE_TXCREDIT_CFG1_MUI_ENCODE(24000);	/* ns */
-	rockchip_pcie_write(rockchip, val, PCIE_CORE_TXCREDIT_CFG1);
-}
-
-static int rockchip_pcie_valid_device(struct rockchip_pcie *rockchip,
-				      struct pci_bus *bus, int dev)
-{
-	/* access only one slot on each root port */
-	if (bus->number == rockchip->root_bus_nr && dev > 0)
-		return 0;
-
-	/*
-	 * do not read more than one device on the bus directly attached
-	 * to RC's downstream side.
-	 */
-	if (bus->primary == rockchip->root_bus_nr && dev > 0)
-		return 0;
-
-	return 1;
-}
-
-static u8 rockchip_pcie_lane_map(struct rockchip_pcie *rockchip)
-{
-	u32 val;
-	u8 map;
-
-	if (rockchip->legacy_phy)
-		return GENMASK(MAX_LANE_NUM - 1, 0);
-
-	val = rockchip_pcie_read(rockchip, PCIE_CORE_LANE_MAP);
-	map = val & PCIE_CORE_LANE_MAP_MASK;
-
-	/* The link may be using a reverse-indexed mapping. */
-	if (val & PCIE_CORE_LANE_MAP_REVERSE)
-		map = bitrev8(map) >> 4;
-
-	return map;
-}
-
-static int rockchip_pcie_rd_own_conf(struct rockchip_pcie *rockchip,
-				     int where, int size, u32 *val)
-{
-	void __iomem *addr;
-
-	addr = rockchip->apb_base + PCIE_RC_CONFIG_NORMAL_BASE + where;
-
-	if (!IS_ALIGNED((uintptr_t)addr, size)) {
-		*val = 0;
-		return PCIBIOS_BAD_REGISTER_NUMBER;
-	}
-
-	if (size == 4) {
-		*val = readl(addr);
-	} else if (size == 2) {
-		*val = readw(addr);
-	} else if (size == 1) {
-		*val = readb(addr);
+	if (rockchip->is_rc) {
+		regs = platform_get_resource_byname(pdev,
+						    IORESOURCE_MEM,
+						    "axi-base");
+		rockchip->reg_base = devm_pci_remap_cfg_resource(dev, regs);
+		if (IS_ERR(rockchip->reg_base))
+			return PTR_ERR(rockchip->reg_base);
 	} else {
-		*val = 0;
-		return PCIBIOS_BAD_REGISTER_NUMBER;
-	}
-	return PCIBIOS_SUCCESSFUL;
-}
-
-static int rockchip_pcie_wr_own_conf(struct rockchip_pcie *rockchip,
-				     int where, int size, u32 val)
-{
-	u32 mask, tmp, offset;
-	void __iomem *addr;
-
-	offset = where & ~0x3;
-	addr = rockchip->apb_base + PCIE_RC_CONFIG_NORMAL_BASE + offset;
-
-	if (size == 4) {
-		writel(val, addr);
-		return PCIBIOS_SUCCESSFUL;
+		rockchip->mem_res =
+			platform_get_resource_byname(pdev, IORESOURCE_MEM,
+						     "mem-base");
+		if (!rockchip->mem_res)
+			return -EINVAL;
 	}
 
-	mask = ~(((1 << (size * 8)) - 1) << ((where & 0x3) * 8));
+	regs = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+					    "apb-base");
+	rockchip->apb_base = devm_ioremap_resource(dev, regs);
+	if (IS_ERR(rockchip->apb_base))
+		return PTR_ERR(rockchip->apb_base);
 
-	/*
-	 * N.B. This read/modify/write isn't safe in general because it can
-	 * corrupt RW1C bits in adjacent registers.  But the hardware
-	 * doesn't support smaller writes.
-	 */
-	tmp = readl(addr) & mask;
-	tmp |= val << ((where & 0x3) * 8);
-	writel(tmp, addr);
+	err = rockchip_pcie_get_phys(rockchip);
+	if (err)
+		return err;
 
-	return PCIBIOS_SUCCESSFUL;
-}
-
-static void rockchip_pcie_cfg_configuration_accesses(
-		struct rockchip_pcie *rockchip, u32 type)
-{
-	u32 ob_desc_0;
-
-	/* Configuration Accesses for region 0 */
-	rockchip_pcie_write(rockchip, 0x0, PCIE_RC_BAR_CONF);
-
-	rockchip_pcie_write(rockchip,
-			    (RC_REGION_0_ADDR_TRANS_L + RC_REGION_0_PASS_BITS),
-			    PCIE_CORE_OB_REGION_ADDR0);
-	rockchip_pcie_write(rockchip, RC_REGION_0_ADDR_TRANS_H,
-			    PCIE_CORE_OB_REGION_ADDR1);
-	ob_desc_0 = rockchip_pcie_read(rockchip, PCIE_CORE_OB_REGION_DESC0);
-	ob_desc_0 &= ~(RC_REGION_0_TYPE_MASK);
-	ob_desc_0 |= (type | (0x1 << 23));
-	rockchip_pcie_write(rockchip, ob_desc_0, PCIE_CORE_OB_REGION_DESC0);
-	rockchip_pcie_write(rockchip, 0x0, PCIE_CORE_OB_REGION_DESC1);
-}
-
-static int rockchip_pcie_rd_other_conf(struct rockchip_pcie *rockchip,
-				       struct pci_bus *bus, u32 devfn,
-				       int where, int size, u32 *val)
-{
-	u32 busdev;
-
-	busdev = PCIE_ECAM_ADDR(bus->number, PCI_SLOT(devfn),
-				PCI_FUNC(devfn), where);
-
-	if (!IS_ALIGNED(busdev, size)) {
-		*val = 0;
-		return PCIBIOS_BAD_REGISTER_NUMBER;
+	rockchip->lanes = 1;
+	err = of_property_read_u32(node, "num-lanes", &rockchip->lanes);
+	if (!err && (rockchip->lanes == 0 ||
+		     rockchip->lanes == 3 ||
+		     rockchip->lanes > 4)) {
+		dev_warn(dev, "invalid num-lanes, default to use one lane\n");
+		rockchip->lanes = 1;
 	}
 
-	if (bus->parent->number == rockchip->root_bus_nr)
-		rockchip_pcie_cfg_configuration_accesses(rockchip,
-						AXI_WRAPPER_TYPE0_CFG);
-	else
-		rockchip_pcie_cfg_configuration_accesses(rockchip,
-						AXI_WRAPPER_TYPE1_CFG);
+	rockchip->link_gen = of_pci_get_max_link_speed(node);
+	if (rockchip->link_gen < 0 || rockchip->link_gen > 2)
+		rockchip->link_gen = 2;
 
-	if (size == 4) {
-		*val = readl(rockchip->reg_base + busdev);
-	} else if (size == 2) {
-		*val = readw(rockchip->reg_base + busdev);
-	} else if (size == 1) {
-		*val = readb(rockchip->reg_base + busdev);
-	} else {
-		*val = 0;
-		return PCIBIOS_BAD_REGISTER_NUMBER;
-	}
-	return PCIBIOS_SUCCESSFUL;
-}
-
-static int rockchip_pcie_wr_other_conf(struct rockchip_pcie *rockchip,
-				       struct pci_bus *bus, u32 devfn,
-				       int where, int size, u32 val)
-{
-	u32 busdev;
-
-	busdev = PCIE_ECAM_ADDR(bus->number, PCI_SLOT(devfn),
-				PCI_FUNC(devfn), where);
-	if (!IS_ALIGNED(busdev, size))
-		return PCIBIOS_BAD_REGISTER_NUMBER;
-
-	if (bus->parent->number == rockchip->root_bus_nr)
-		rockchip_pcie_cfg_configuration_accesses(rockchip,
-						AXI_WRAPPER_TYPE0_CFG);
-	else
-		rockchip_pcie_cfg_configuration_accesses(rockchip,
-						AXI_WRAPPER_TYPE1_CFG);
-
-	if (size == 4)
-		writel(val, rockchip->reg_base + busdev);
-	else if (size == 2)
-		writew(val, rockchip->reg_base + busdev);
-	else if (size == 1)
-		writeb(val, rockchip->reg_base + busdev);
-	else
-		return PCIBIOS_BAD_REGISTER_NUMBER;
-
-	return PCIBIOS_SUCCESSFUL;
-}
-
-static int rockchip_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where,
-				 int size, u32 *val)
-{
-	struct rockchip_pcie *rockchip = bus->sysdata;
-
-	if (!rockchip_pcie_valid_device(rockchip, bus, PCI_SLOT(devfn))) {
-		*val = 0xffffffff;
-		return PCIBIOS_DEVICE_NOT_FOUND;
+	rockchip->core_rst = devm_reset_control_get_exclusive(dev, "core");
+	if (IS_ERR(rockchip->core_rst)) {
+		if (PTR_ERR(rockchip->core_rst) != -EPROBE_DEFER)
+			dev_err(dev, "missing core reset property in node\n");
+		return PTR_ERR(rockchip->core_rst);
 	}
 
-	if (bus->number == rockchip->root_bus_nr)
-		return rockchip_pcie_rd_own_conf(rockchip, where, size, val);
+	rockchip->mgmt_rst = devm_reset_control_get_exclusive(dev, "mgmt");
+	if (IS_ERR(rockchip->mgmt_rst)) {
+		if (PTR_ERR(rockchip->mgmt_rst) != -EPROBE_DEFER)
+			dev_err(dev, "missing mgmt reset property in node\n");
+		return PTR_ERR(rockchip->mgmt_rst);
+	}
 
-	return rockchip_pcie_rd_other_conf(rockchip, bus, devfn, where, size, val);
-}
+	rockchip->mgmt_sticky_rst = devm_reset_control_get_exclusive(dev,
+								     "mgmt-sticky");
+	if (IS_ERR(rockchip->mgmt_sticky_rst)) {
+		if (PTR_ERR(rockchip->mgmt_sticky_rst) != -EPROBE_DEFER)
+			dev_err(dev, "missing mgmt-sticky reset property in node\n");
+		return PTR_ERR(rockchip->mgmt_sticky_rst);
+	}
 
-static int rockchip_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
-				 int where, int size, u32 val)
-{
-	struct rockchip_pcie *rockchip = bus->sysdata;
+	rockchip->pipe_rst = devm_reset_control_get_exclusive(dev, "pipe");
+	if (IS_ERR(rockchip->pipe_rst)) {
+		if (PTR_ERR(rockchip->pipe_rst) != -EPROBE_DEFER)
+			dev_err(dev, "missing pipe reset property in node\n");
+		return PTR_ERR(rockchip->pipe_rst);
+	}
 
-	if (!rockchip_pcie_valid_device(rockchip, bus, PCI_SLOT(devfn)))
-		return PCIBIOS_DEVICE_NOT_FOUND;
+	rockchip->pm_rst = devm_reset_control_get_exclusive(dev, "pm");
+	if (IS_ERR(rockchip->pm_rst)) {
+		if (PTR_ERR(rockchip->pm_rst) != -EPROBE_DEFER)
+			dev_err(dev, "missing pm reset property in node\n");
+		return PTR_ERR(rockchip->pm_rst);
+	}
 
-	if (bus->number == rockchip->root_bus_nr)
-		return rockchip_pcie_wr_own_conf(rockchip, where, size, val);
+	rockchip->pclk_rst = devm_reset_control_get_exclusive(dev, "pclk");
+	if (IS_ERR(rockchip->pclk_rst)) {
+		if (PTR_ERR(rockchip->pclk_rst) != -EPROBE_DEFER)
+			dev_err(dev, "missing pclk reset property in node\n");
+		return PTR_ERR(rockchip->pclk_rst);
+	}
 
-	return rockchip_pcie_wr_other_conf(rockchip, bus, devfn, where, size, val);
-}
+	rockchip->aclk_rst = devm_reset_control_get_exclusive(dev, "aclk");
+	if (IS_ERR(rockchip->aclk_rst)) {
+		if (PTR_ERR(rockchip->aclk_rst) != -EPROBE_DEFER)
+			dev_err(dev, "missing aclk reset property in node\n");
+		return PTR_ERR(rockchip->aclk_rst);
+	}
 
-static struct pci_ops rockchip_pcie_ops = {
-	.read = rockchip_pcie_rd_conf,
-	.write = rockchip_pcie_wr_conf,
-};
-
-static void rockchip_pcie_set_power_limit(struct rockchip_pcie *rockchip)
-{
-	int curr;
-	u32 status, scale, power;
-
-	if (IS_ERR(rockchip->vpcie3v3))
-		return;
-
-	/*
-	 * Set RC's captured slot power limit and scale if
-	 * vpcie3v3 available. The default values are both zero
-	 * which means the software should set these two according
-	 * to the actual power supply.
509 - */ 510 - curr = regulator_get_current_limit(rockchip->vpcie3v3); 511 - if (curr <= 0) 512 - return; 513 - 514 - scale = 3; /* 0.001x */ 515 - curr = curr / 1000; /* convert to mA */ 516 - power = (curr * 3300) / 1000; /* milliwatt */ 517 - while (power > PCIE_RC_CONFIG_DCR_CSPL_LIMIT) { 518 - if (!scale) { 519 - dev_warn(rockchip->dev, "invalid power supply\n"); 520 - return; 121 + if (rockchip->is_rc) { 122 + rockchip->ep_gpio = devm_gpiod_get(dev, "ep", GPIOD_OUT_HIGH); 123 + if (IS_ERR(rockchip->ep_gpio)) { 124 + dev_err(dev, "missing ep-gpios property in node\n"); 125 + return PTR_ERR(rockchip->ep_gpio); 521 126 } 522 - scale--; 523 - power = power / 10; 524 127 } 525 128 526 - status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_DCR); 527 - status |= (power << PCIE_RC_CONFIG_DCR_CSPL_SHIFT) | 528 - (scale << PCIE_RC_CONFIG_DCR_CPLS_SHIFT); 529 - rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCR); 530 - } 129 + rockchip->aclk_pcie = devm_clk_get(dev, "aclk"); 130 + if (IS_ERR(rockchip->aclk_pcie)) { 131 + dev_err(dev, "aclk clock not found\n"); 132 + return PTR_ERR(rockchip->aclk_pcie); 133 + } 531 134 532 - /** 533 - * rockchip_pcie_init_port - Initialize hardware 534 - * @rockchip: PCIe port information 535 - */ 536 - static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip) 135 + rockchip->aclk_perf_pcie = devm_clk_get(dev, "aclk-perf"); 136 + if (IS_ERR(rockchip->aclk_perf_pcie)) { 137 + dev_err(dev, "aclk_perf clock not found\n"); 138 + return PTR_ERR(rockchip->aclk_perf_pcie); 139 + } 140 + 141 + rockchip->hclk_pcie = devm_clk_get(dev, "hclk"); 142 + if (IS_ERR(rockchip->hclk_pcie)) { 143 + dev_err(dev, "hclk clock not found\n"); 144 + return PTR_ERR(rockchip->hclk_pcie); 145 + } 146 + 147 + rockchip->clk_pcie_pm = devm_clk_get(dev, "pm"); 148 + if (IS_ERR(rockchip->clk_pcie_pm)) { 149 + dev_err(dev, "pm clock not found\n"); 150 + return PTR_ERR(rockchip->clk_pcie_pm); 151 + } 152 + 153 + return 0; 154 + } 155 + 
EXPORT_SYMBOL_GPL(rockchip_pcie_parse_dt); 156 + 157 + int rockchip_pcie_init_port(struct rockchip_pcie *rockchip) 537 158 { 538 159 struct device *dev = rockchip->dev; 539 160 int err, i; 540 - u32 status; 541 - 542 - gpiod_set_value_cansleep(rockchip->ep_gpio, 0); 161 + u32 regs; 543 162 544 163 err = reset_control_assert(rockchip->aclk_rst); 545 164 if (err) { ··· 237 618 rockchip_pcie_write(rockchip, PCIE_CLIENT_GEN_SEL_1, 238 619 PCIE_CLIENT_CONFIG); 239 620 240 - rockchip_pcie_write(rockchip, 241 - PCIE_CLIENT_CONF_ENABLE | 242 - PCIE_CLIENT_LINK_TRAIN_ENABLE | 243 - PCIE_CLIENT_ARI_ENABLE | 244 - PCIE_CLIENT_CONF_LANE_NUM(rockchip->lanes) | 245 - PCIE_CLIENT_MODE_RC, 246 - PCIE_CLIENT_CONFIG); 621 + regs = PCIE_CLIENT_LINK_TRAIN_ENABLE | PCIE_CLIENT_ARI_ENABLE | 622 + PCIE_CLIENT_CONF_LANE_NUM(rockchip->lanes); 623 + 624 + if (rockchip->is_rc) 625 + regs |= PCIE_CLIENT_CONF_ENABLE | PCIE_CLIENT_MODE_RC; 626 + else 627 + regs |= PCIE_CLIENT_CONF_DISABLE | PCIE_CLIENT_MODE_EP; 628 + 629 + rockchip_pcie_write(rockchip, regs, PCIE_CLIENT_CONFIG); 247 630 248 631 for (i = 0; i < MAX_LANE_NUM; i++) { 249 632 err = phy_power_on(rockchip->phys[i]); ··· 283 662 goto err_power_off_phy; 284 663 } 285 664 286 - /* Fix the transmitted FTS count desired to exit from L0s. 
*/ 287 - status = rockchip_pcie_read(rockchip, PCIE_CORE_CTRL_PLC1); 288 - status = (status & ~PCIE_CORE_CTRL_PLC1_FTS_MASK) | 289 - (PCIE_CORE_CTRL_PLC1_FTS_CNT << PCIE_CORE_CTRL_PLC1_FTS_SHIFT); 290 - rockchip_pcie_write(rockchip, status, PCIE_CORE_CTRL_PLC1); 291 - 292 - rockchip_pcie_set_power_limit(rockchip); 293 - 294 - /* Set RC's clock architecture as common clock */ 295 - status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS); 296 - status |= PCI_EXP_LNKSTA_SLC << 16; 297 - rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS); 298 - 299 - /* Set RC's RCB to 128 */ 300 - status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS); 301 - status |= PCI_EXP_LNKCTL_RCB; 302 - rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS); 303 - 304 - /* Enable Gen1 training */ 305 - rockchip_pcie_write(rockchip, PCIE_CLIENT_LINK_TRAIN_ENABLE, 306 - PCIE_CLIENT_CONFIG); 307 - 308 - gpiod_set_value_cansleep(rockchip->ep_gpio, 1); 309 - 310 - /* 500ms timeout value should be enough for Gen1/2 training */ 311 - err = readl_poll_timeout(rockchip->apb_base + PCIE_CLIENT_BASIC_STATUS1, 312 - status, PCIE_LINK_UP(status), 20, 313 - 500 * USEC_PER_MSEC); 314 - if (err) { 315 - dev_err(dev, "PCIe link training gen1 timeout!\n"); 316 - goto err_power_off_phy; 317 - } 318 - 319 - if (rockchip->link_gen == 2) { 320 - /* 321 - * Enable retrain for gen2. This should be configured only after 322 - * gen1 finished. 
323 - */ 324 - status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS); 325 - status |= PCI_EXP_LNKCTL_RL; 326 - rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS); 327 - 328 - err = readl_poll_timeout(rockchip->apb_base + PCIE_CORE_CTRL, 329 - status, PCIE_LINK_IS_GEN2(status), 20, 330 - 500 * USEC_PER_MSEC); 331 - if (err) 332 - dev_dbg(dev, "PCIe link training gen2 timeout, fall back to gen1!\n"); 333 - } 334 - 335 - /* Check the final link width from negotiated lane counter from MGMT */ 336 - status = rockchip_pcie_read(rockchip, PCIE_CORE_CTRL); 337 - status = 0x1 << ((status & PCIE_CORE_PL_CONF_LANE_MASK) >> 338 - PCIE_CORE_PL_CONF_LANE_SHIFT); 339 - dev_dbg(dev, "current link width is x%d\n", status); 340 - 341 - /* Power off unused lane(s) */ 342 - rockchip->lanes_map = rockchip_pcie_lane_map(rockchip); 343 - for (i = 0; i < MAX_LANE_NUM; i++) { 344 - if (!(rockchip->lanes_map & BIT(i))) { 345 - dev_dbg(dev, "idling lane %d\n", i); 346 - phy_power_off(rockchip->phys[i]); 347 - } 348 - } 349 - 350 - rockchip_pcie_write(rockchip, ROCKCHIP_VENDOR_ID, 351 - PCIE_CORE_CONFIG_VENDOR); 352 - rockchip_pcie_write(rockchip, 353 - PCI_CLASS_BRIDGE_PCI << PCIE_RC_CONFIG_SCC_SHIFT, 354 - PCIE_RC_CONFIG_RID_CCR); 355 - 356 - /* Clear THP cap's next cap pointer to remove L1 substate cap */ 357 - status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_THP_CAP); 358 - status &= ~PCIE_RC_CONFIG_THP_CAP_NEXT_MASK; 359 - rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_THP_CAP); 360 - 361 - /* Clear L0s from RC's link cap */ 362 - if (of_property_read_bool(dev->of_node, "aspm-no-l0s")) { 363 - status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LINK_CAP); 364 - status &= ~PCIE_RC_CONFIG_LINK_CAP_L0S; 365 - rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LINK_CAP); 366 - } 367 - 368 - status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_DCSR); 369 - status &= ~PCIE_RC_CONFIG_DCSR_MPS_MASK; 370 - status |= PCIE_RC_CONFIG_DCSR_MPS_256; 371 - 
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCSR); 372 - 373 665 return 0; 374 666 err_power_off_phy: 375 667 while (i--) ··· 293 759 phy_exit(rockchip->phys[i]); 294 760 return err; 295 761 } 762 + EXPORT_SYMBOL_GPL(rockchip_pcie_init_port); 296 763 297 - static void rockchip_pcie_deinit_phys(struct rockchip_pcie *rockchip) 298 - { 299 - int i; 300 - 301 - for (i = 0; i < MAX_LANE_NUM; i++) { 302 - /* inactive lanes are already powered off */ 303 - if (rockchip->lanes_map & BIT(i)) 304 - phy_power_off(rockchip->phys[i]); 305 - phy_exit(rockchip->phys[i]); 306 - } 307 - } 308 - 309 - static irqreturn_t rockchip_pcie_subsys_irq_handler(int irq, void *arg) 310 - { 311 - struct rockchip_pcie *rockchip = arg; 312 - struct device *dev = rockchip->dev; 313 - u32 reg; 314 - u32 sub_reg; 315 - 316 - reg = rockchip_pcie_read(rockchip, PCIE_CLIENT_INT_STATUS); 317 - if (reg & PCIE_CLIENT_INT_LOCAL) { 318 - dev_dbg(dev, "local interrupt received\n"); 319 - sub_reg = rockchip_pcie_read(rockchip, PCIE_CORE_INT_STATUS); 320 - if (sub_reg & PCIE_CORE_INT_PRFPE) 321 - dev_dbg(dev, "parity error detected while reading from the PNP receive FIFO RAM\n"); 322 - 323 - if (sub_reg & PCIE_CORE_INT_CRFPE) 324 - dev_dbg(dev, "parity error detected while reading from the Completion Receive FIFO RAM\n"); 325 - 326 - if (sub_reg & PCIE_CORE_INT_RRPE) 327 - dev_dbg(dev, "parity error detected while reading from replay buffer RAM\n"); 328 - 329 - if (sub_reg & PCIE_CORE_INT_PRFO) 330 - dev_dbg(dev, "overflow occurred in the PNP receive FIFO\n"); 331 - 332 - if (sub_reg & PCIE_CORE_INT_CRFO) 333 - dev_dbg(dev, "overflow occurred in the completion receive FIFO\n"); 334 - 335 - if (sub_reg & PCIE_CORE_INT_RT) 336 - dev_dbg(dev, "replay timer timed out\n"); 337 - 338 - if (sub_reg & PCIE_CORE_INT_RTR) 339 - dev_dbg(dev, "replay timer rolled over after 4 transmissions of the same TLP\n"); 340 - 341 - if (sub_reg & PCIE_CORE_INT_PE) 342 - dev_dbg(dev, "phy error detected on receive side\n"); 
-
-		if (sub_reg & PCIE_CORE_INT_MTR)
-			dev_dbg(dev, "malformed TLP received from the link\n");
-
-		if (sub_reg & PCIE_CORE_INT_UCR)
-			dev_dbg(dev, "malformed TLP received from the link\n");
-
-		if (sub_reg & PCIE_CORE_INT_FCE)
-			dev_dbg(dev, "an error was observed in the flow control advertisements from the other side\n");
-
-		if (sub_reg & PCIE_CORE_INT_CT)
-			dev_dbg(dev, "a request timed out waiting for completion\n");
-
-		if (sub_reg & PCIE_CORE_INT_UTC)
-			dev_dbg(dev, "unmapped TC error\n");
-
-		if (sub_reg & PCIE_CORE_INT_MMVC)
-			dev_dbg(dev, "MSI mask register changes\n");
-
-		rockchip_pcie_write(rockchip, sub_reg, PCIE_CORE_INT_STATUS);
-	} else if (reg & PCIE_CLIENT_INT_PHY) {
-		dev_dbg(dev, "phy link changes\n");
-		rockchip_pcie_update_txcredit_mui(rockchip);
-		rockchip_pcie_clr_bw_int(rockchip);
-	}
-
-	rockchip_pcie_write(rockchip, reg & PCIE_CLIENT_INT_LOCAL,
-			    PCIE_CLIENT_INT_STATUS);
-
-	return IRQ_HANDLED;
-}
-
-static irqreturn_t rockchip_pcie_client_irq_handler(int irq, void *arg)
-{
-	struct rockchip_pcie *rockchip = arg;
-	struct device *dev = rockchip->dev;
-	u32 reg;
-
-	reg = rockchip_pcie_read(rockchip, PCIE_CLIENT_INT_STATUS);
-	if (reg & PCIE_CLIENT_INT_LEGACY_DONE)
-		dev_dbg(dev, "legacy done interrupt received\n");
-
-	if (reg & PCIE_CLIENT_INT_MSG)
-		dev_dbg(dev, "message done interrupt received\n");
-
-	if (reg & PCIE_CLIENT_INT_HOT_RST)
-		dev_dbg(dev, "hot reset interrupt received\n");
-
-	if (reg & PCIE_CLIENT_INT_DPA)
-		dev_dbg(dev, "dpa interrupt received\n");
-
-	if (reg & PCIE_CLIENT_INT_FATAL_ERR)
-		dev_dbg(dev, "fatal error interrupt received\n");
-
-	if (reg & PCIE_CLIENT_INT_NFATAL_ERR)
-		dev_dbg(dev, "no fatal error interrupt received\n");
-
-	if (reg & PCIE_CLIENT_INT_CORR_ERR)
-		dev_dbg(dev, "correctable error interrupt received\n");
-
-	if (reg & PCIE_CLIENT_INT_PHY)
-		dev_dbg(dev, "phy interrupt received\n");
-
-	rockchip_pcie_write(rockchip, reg & (PCIE_CLIENT_INT_LEGACY_DONE |
-			    PCIE_CLIENT_INT_MSG | PCIE_CLIENT_INT_HOT_RST |
-			    PCIE_CLIENT_INT_DPA | PCIE_CLIENT_INT_FATAL_ERR |
-			    PCIE_CLIENT_INT_NFATAL_ERR |
-			    PCIE_CLIENT_INT_CORR_ERR |
-			    PCIE_CLIENT_INT_PHY),
-			    PCIE_CLIENT_INT_STATUS);
-
-	return IRQ_HANDLED;
-}
-
-static void rockchip_pcie_legacy_int_handler(struct irq_desc *desc)
-{
-	struct irq_chip *chip = irq_desc_get_chip(desc);
-	struct rockchip_pcie *rockchip = irq_desc_get_handler_data(desc);
-	struct device *dev = rockchip->dev;
-	u32 reg;
-	u32 hwirq;
-	u32 virq;
-
-	chained_irq_enter(chip, desc);
-
-	reg = rockchip_pcie_read(rockchip, PCIE_CLIENT_INT_STATUS);
-	reg = (reg & PCIE_CLIENT_INTR_MASK) >> PCIE_CLIENT_INTR_SHIFT;
-
-	while (reg) {
-		hwirq = ffs(reg) - 1;
-		reg &= ~BIT(hwirq);
-
-		virq = irq_find_mapping(rockchip->irq_domain, hwirq);
-		if (virq)
-			generic_handle_irq(virq);
-		else
-			dev_err(dev, "unexpected IRQ, INT%d\n", hwirq);
-	}
-
-	chained_irq_exit(chip, desc);
-}
-
-static int rockchip_pcie_get_phys(struct rockchip_pcie *rockchip)
+int rockchip_pcie_get_phys(struct rockchip_pcie *rockchip)
 {
 	struct device *dev = rockchip->dev;
 	struct phy *phy;
···
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(rockchip_pcie_get_phys);
 
-static int rockchip_pcie_setup_irq(struct rockchip_pcie *rockchip)
+void rockchip_pcie_deinit_phys(struct rockchip_pcie *rockchip)
 {
-	int irq, err;
-	struct device *dev = rockchip->dev;
-	struct platform_device *pdev = to_platform_device(dev);
+	int i;
 
-	irq = platform_get_irq_byname(pdev, "sys");
-	if (irq < 0) {
-		dev_err(dev, "missing sys IRQ resource\n");
-		return irq;
+	for (i = 0; i < MAX_LANE_NUM; i++) {
+		/* inactive lanes are already powered off */
+		if (rockchip->lanes_map & BIT(i))
+			phy_power_off(rockchip->phys[i]);
+		phy_exit(rockchip->phys[i]);
 	}
-
-	err = devm_request_irq(dev, irq, rockchip_pcie_subsys_irq_handler,
-			       IRQF_SHARED, "pcie-sys", rockchip);
-	if (err) {
-		dev_err(dev, "failed to request PCIe subsystem IRQ\n");
-		return err;
-	}
-
-	irq = platform_get_irq_byname(pdev, "legacy");
-	if (irq < 0) {
-		dev_err(dev, "missing legacy IRQ resource\n");
-		return irq;
-	}
-
-	irq_set_chained_handler_and_data(irq,
-					 rockchip_pcie_legacy_int_handler,
-					 rockchip);
-
-	irq = platform_get_irq_byname(pdev, "client");
-	if (irq < 0) {
-		dev_err(dev, "missing client IRQ resource\n");
-		return irq;
-	}
-
-	err = devm_request_irq(dev, irq, rockchip_pcie_client_irq_handler,
-			       IRQF_SHARED, "pcie-client", rockchip);
-	if (err) {
-		dev_err(dev, "failed to request PCIe client IRQ\n");
-		return err;
-	}
-
-	return 0;
 }
+EXPORT_SYMBOL_GPL(rockchip_pcie_deinit_phys);
 
-/**
- * rockchip_pcie_parse_dt - Parse Device Tree
- * @rockchip: PCIe port information
- *
- * Return: '0' on success and error value on failure
- */
-static int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
-{
-	struct device *dev = rockchip->dev;
-	struct platform_device *pdev = to_platform_device(dev);
-	struct device_node *node = dev->of_node;
-	struct resource *regs;
-	int err;
-
-	regs = platform_get_resource_byname(pdev,
-					    IORESOURCE_MEM,
-					    "axi-base");
-	rockchip->reg_base = devm_pci_remap_cfg_resource(dev, regs);
-	if (IS_ERR(rockchip->reg_base))
-		return PTR_ERR(rockchip->reg_base);
-
-	regs = platform_get_resource_byname(pdev,
-					    IORESOURCE_MEM,
-					    "apb-base");
-	rockchip->apb_base = devm_ioremap_resource(dev, regs);
-	if (IS_ERR(rockchip->apb_base))
-		return PTR_ERR(rockchip->apb_base);
-
-	err = rockchip_pcie_get_phys(rockchip);
-	if (err)
-		return err;
-
-	rockchip->lanes = 1;
-	err = of_property_read_u32(node, "num-lanes", &rockchip->lanes);
-	if (!err && (rockchip->lanes == 0 ||
-		     rockchip->lanes == 3 ||
-		     rockchip->lanes > 4)) {
-		dev_warn(dev, "invalid num-lanes, default to use one lane\n");
-		rockchip->lanes = 1;
-	}
-
-	rockchip->link_gen = of_pci_get_max_link_speed(node);
-	if (rockchip->link_gen < 0 || rockchip->link_gen > 2)
-		rockchip->link_gen = 2;
-
-	rockchip->core_rst = devm_reset_control_get_exclusive(dev, "core");
-	if (IS_ERR(rockchip->core_rst)) {
-		if (PTR_ERR(rockchip->core_rst) != -EPROBE_DEFER)
-			dev_err(dev, "missing core reset property in node\n");
-		return PTR_ERR(rockchip->core_rst);
-	}
-
-	rockchip->mgmt_rst = devm_reset_control_get_exclusive(dev, "mgmt");
-	if (IS_ERR(rockchip->mgmt_rst)) {
-		if (PTR_ERR(rockchip->mgmt_rst) != -EPROBE_DEFER)
-			dev_err(dev, "missing mgmt reset property in node\n");
-		return PTR_ERR(rockchip->mgmt_rst);
-	}
-
-	rockchip->mgmt_sticky_rst = devm_reset_control_get_exclusive(dev,
-								     "mgmt-sticky");
-	if (IS_ERR(rockchip->mgmt_sticky_rst)) {
-		if (PTR_ERR(rockchip->mgmt_sticky_rst) != -EPROBE_DEFER)
-			dev_err(dev, "missing mgmt-sticky reset property in node\n");
-		return PTR_ERR(rockchip->mgmt_sticky_rst);
-	}
-
-	rockchip->pipe_rst = devm_reset_control_get_exclusive(dev, "pipe");
-	if (IS_ERR(rockchip->pipe_rst)) {
-		if (PTR_ERR(rockchip->pipe_rst) != -EPROBE_DEFER)
-			dev_err(dev, "missing pipe reset property in node\n");
-		return PTR_ERR(rockchip->pipe_rst);
-	}
-
-	rockchip->pm_rst = devm_reset_control_get_exclusive(dev, "pm");
-	if (IS_ERR(rockchip->pm_rst)) {
-		if (PTR_ERR(rockchip->pm_rst) != -EPROBE_DEFER)
-			dev_err(dev, "missing pm reset property in node\n");
-		return PTR_ERR(rockchip->pm_rst);
-	}
-
-	rockchip->pclk_rst = devm_reset_control_get_exclusive(dev, "pclk");
-	if (IS_ERR(rockchip->pclk_rst)) {
-		if (PTR_ERR(rockchip->pclk_rst) != -EPROBE_DEFER)
-			dev_err(dev, "missing pclk reset property in node\n");
-		return PTR_ERR(rockchip->pclk_rst);
-	}
-
-	rockchip->aclk_rst = devm_reset_control_get_exclusive(dev, "aclk");
-	if (IS_ERR(rockchip->aclk_rst)) {
-		if (PTR_ERR(rockchip->aclk_rst) != -EPROBE_DEFER)
-			dev_err(dev, "missing aclk reset property in node\n");
-		return PTR_ERR(rockchip->aclk_rst);
-	}
-
-	rockchip->ep_gpio = devm_gpiod_get(dev, "ep", GPIOD_OUT_HIGH);
-	if (IS_ERR(rockchip->ep_gpio)) {
-		dev_err(dev, "missing ep-gpios property in node\n");
-		return PTR_ERR(rockchip->ep_gpio);
-	}
-
-	rockchip->aclk_pcie = devm_clk_get(dev, "aclk");
-	if (IS_ERR(rockchip->aclk_pcie)) {
-		dev_err(dev, "aclk clock not found\n");
-		return PTR_ERR(rockchip->aclk_pcie);
-	}
-
-	rockchip->aclk_perf_pcie = devm_clk_get(dev, "aclk-perf");
-	if (IS_ERR(rockchip->aclk_perf_pcie)) {
-		dev_err(dev, "aclk_perf clock not found\n");
-		return PTR_ERR(rockchip->aclk_perf_pcie);
-	}
-
-	rockchip->hclk_pcie = devm_clk_get(dev, "hclk");
-	if (IS_ERR(rockchip->hclk_pcie)) {
-		dev_err(dev, "hclk clock not found\n");
-		return PTR_ERR(rockchip->hclk_pcie);
-	}
-
-	rockchip->clk_pcie_pm = devm_clk_get(dev, "pm");
-	if (IS_ERR(rockchip->clk_pcie_pm)) {
-		dev_err(dev, "pm clock not found\n");
-		return PTR_ERR(rockchip->clk_pcie_pm);
-	}
-
-	err = rockchip_pcie_setup_irq(rockchip);
-	if (err)
-		return err;
-
-	rockchip->vpcie12v = devm_regulator_get_optional(dev, "vpcie12v");
-	if (IS_ERR(rockchip->vpcie12v)) {
-		if (PTR_ERR(rockchip->vpcie12v) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-		dev_info(dev, "no vpcie12v regulator found\n");
-	}
-
-	rockchip->vpcie3v3 = devm_regulator_get_optional(dev, "vpcie3v3");
-	if (IS_ERR(rockchip->vpcie3v3)) {
-		if (PTR_ERR(rockchip->vpcie3v3) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-		dev_info(dev, "no vpcie3v3 regulator found\n");
-	}
-
-	rockchip->vpcie1v8 = devm_regulator_get_optional(dev, "vpcie1v8");
-	if (IS_ERR(rockchip->vpcie1v8)) {
-		if (PTR_ERR(rockchip->vpcie1v8) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-		dev_info(dev, "no vpcie1v8 regulator found\n");
-	}
-
-	rockchip->vpcie0v9 = devm_regulator_get_optional(dev, "vpcie0v9");
-	if (IS_ERR(rockchip->vpcie0v9)) {
-		if (PTR_ERR(rockchip->vpcie0v9) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-		dev_info(dev, "no vpcie0v9 regulator found\n");
-	}
-
-	return 0;
-}
-
-static int rockchip_pcie_set_vpcie(struct rockchip_pcie *rockchip)
-{
-	struct device *dev = rockchip->dev;
-	int err;
-
-	if (!IS_ERR(rockchip->vpcie12v)) {
-		err = regulator_enable(rockchip->vpcie12v);
-		if (err) {
-			dev_err(dev, "fail to enable vpcie12v regulator\n");
-			goto err_out;
-		}
-	}
-
-	if (!IS_ERR(rockchip->vpcie3v3)) {
-		err = regulator_enable(rockchip->vpcie3v3);
-		if (err) {
-			dev_err(dev, "fail to enable vpcie3v3 regulator\n");
-			goto err_disable_12v;
-		}
-	}
-
-	if (!IS_ERR(rockchip->vpcie1v8)) {
-		err = regulator_enable(rockchip->vpcie1v8);
-		if (err) {
-			dev_err(dev, "fail to enable vpcie1v8 regulator\n");
-			goto err_disable_3v3;
-		}
-	}
-
-	if (!IS_ERR(rockchip->vpcie0v9)) {
-		err = regulator_enable(rockchip->vpcie0v9);
-		if (err) {
-			dev_err(dev, "fail to enable vpcie0v9 regulator\n");
-			goto err_disable_1v8;
-		}
-	}
-
-	return 0;
-
-err_disable_1v8:
-	if (!IS_ERR(rockchip->vpcie1v8))
-		regulator_disable(rockchip->vpcie1v8);
-err_disable_3v3:
-	if (!IS_ERR(rockchip->vpcie3v3))
-		regulator_disable(rockchip->vpcie3v3);
-err_disable_12v:
-	if (!IS_ERR(rockchip->vpcie12v))
-		regulator_disable(rockchip->vpcie12v);
-err_out:
-	return err;
-}
-
-static void rockchip_pcie_enable_interrupts(struct rockchip_pcie *rockchip)
-{
-	rockchip_pcie_write(rockchip, (PCIE_CLIENT_INT_CLI << 16) &
-			    (~PCIE_CLIENT_INT_CLI), PCIE_CLIENT_INT_MASK);
-	rockchip_pcie_write(rockchip, (u32)(~PCIE_CORE_INT),
-			    PCIE_CORE_INT_MASK);
-
-	rockchip_pcie_enable_bw_int(rockchip);
-}
-
-static int rockchip_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
-				  irq_hw_number_t hwirq)
-{
-	irq_set_chip_and_handler(irq, &dummy_irq_chip, handle_simple_irq);
-	irq_set_chip_data(irq, domain->host_data);
-
-	return 0;
-}
-
-static const struct irq_domain_ops intx_domain_ops = {
-	.map = rockchip_pcie_intx_map,
-};
-
-static int rockchip_pcie_init_irq_domain(struct rockchip_pcie *rockchip)
-{
-	struct device *dev = rockchip->dev;
-	struct device_node *intc = of_get_next_child(dev->of_node, NULL);
-
-	if (!intc) {
-		dev_err(dev, "missing child interrupt-controller node\n");
-		return -EINVAL;
-	}
-
-	rockchip->irq_domain = irq_domain_add_linear(intc, PCI_NUM_INTX,
-						     &intx_domain_ops, rockchip);
-	if (!rockchip->irq_domain) {
-		dev_err(dev, "failed to get a INTx IRQ domain\n");
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static int rockchip_pcie_prog_ob_atu(struct rockchip_pcie *rockchip,
-				     int region_no, int type, u8 num_pass_bits,
-				     u32 lower_addr, u32 upper_addr)
-{
-	u32 ob_addr_0;
-	u32 ob_addr_1;
-	u32 ob_desc_0;
-	u32 aw_offset;
-
-	if (region_no >= MAX_AXI_WRAPPER_REGION_NUM)
-		return -EINVAL;
-	if (num_pass_bits + 1 < 8)
-		return -EINVAL;
-	if (num_pass_bits > 63)
-		return -EINVAL;
-	if (region_no == 0) {
-		if (AXI_REGION_0_SIZE < (2ULL << num_pass_bits))
-			return -EINVAL;
-	}
-	if (region_no != 0) {
-		if (AXI_REGION_SIZE < (2ULL << num_pass_bits))
-			return -EINVAL;
-	}
-
-	aw_offset = (region_no << OB_REG_SIZE_SHIFT);
-
-	ob_addr_0 = num_pass_bits & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS;
-	ob_addr_0 |= lower_addr & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR;
-	ob_addr_1 = upper_addr;
-	ob_desc_0 = (1 << 23 | type);
-
-	rockchip_pcie_write(rockchip, ob_addr_0,
-			    PCIE_CORE_OB_REGION_ADDR0 + aw_offset);
-	rockchip_pcie_write(rockchip, ob_addr_1,
-			    PCIE_CORE_OB_REGION_ADDR1 + aw_offset);
-	rockchip_pcie_write(rockchip, ob_desc_0,
-			    PCIE_CORE_OB_REGION_DESC0 + aw_offset);
-	rockchip_pcie_write(rockchip, 0,
-			    PCIE_CORE_OB_REGION_DESC1 + aw_offset);
-
-	return 0;
-}
-
-static int rockchip_pcie_prog_ib_atu(struct rockchip_pcie *rockchip,
-				     int region_no, u8 num_pass_bits,
-				     u32 lower_addr, u32 upper_addr)
-{
-	u32 ib_addr_0;
-	u32 ib_addr_1;
-	u32 aw_offset;
-
-	if (region_no > MAX_AXI_IB_ROOTPORT_REGION_NUM)
-		return -EINVAL;
-	if (num_pass_bits + 1 < MIN_AXI_ADDR_BITS_PASSED)
-		return -EINVAL;
-	if (num_pass_bits > 63)
-		return -EINVAL;
-
-	aw_offset = (region_no << IB_ROOT_PORT_REG_SIZE_SHIFT);
-
-	ib_addr_0 = num_pass_bits & PCIE_CORE_IB_REGION_ADDR0_NUM_BITS;
-	ib_addr_0 |= (lower_addr << 8) & PCIE_CORE_IB_REGION_ADDR0_LO_ADDR;
-	ib_addr_1 = upper_addr;
-
-	rockchip_pcie_write(rockchip, ib_addr_0, PCIE_RP_IB_ADDR0 + aw_offset);
-	rockchip_pcie_write(rockchip, ib_addr_1, PCIE_RP_IB_ADDR1 + aw_offset);
-
-	return 0;
-}
-
-static int rockchip_pcie_cfg_atu(struct rockchip_pcie *rockchip)
-{
-	struct device *dev = rockchip->dev;
-	int offset;
-	int err;
-	int reg_no;
-
-	rockchip_pcie_cfg_configuration_accesses(rockchip,
-						 AXI_WRAPPER_TYPE0_CFG);
-
-	for (reg_no = 0; reg_no < (rockchip->mem_size >> 20); reg_no++) {
-		err = rockchip_pcie_prog_ob_atu(rockchip, reg_no + 1,
-						AXI_WRAPPER_MEM_WRITE,
-						20 - 1,
-						rockchip->mem_bus_addr +
-						(reg_no << 20),
-						0);
-		if (err) {
-			dev_err(dev, "program RC mem outbound ATU failed\n");
-			return err;
-		}
-	}
-
-	err = rockchip_pcie_prog_ib_atu(rockchip, 2, 32 - 1, 0x0, 0);
-	if (err) {
-		dev_err(dev, "program RC mem inbound ATU failed\n");
-		return err;
-	}
-
-	offset = rockchip->mem_size >> 20;
-	for (reg_no = 0; reg_no < (rockchip->io_size >> 20); reg_no++) {
-		err = rockchip_pcie_prog_ob_atu(rockchip,
-						reg_no + 1 + offset,
-						AXI_WRAPPER_IO_WRITE,
-						20 - 1,
-						rockchip->io_bus_addr +
-						(reg_no << 20),
-						0);
-		if (err) {
-			dev_err(dev, "program RC io outbound ATU failed\n");
-			return err;
-		}
-	}
-
-	/* assign message regions */
-	rockchip_pcie_prog_ob_atu(rockchip, reg_no + 1 + offset,
-				  AXI_WRAPPER_NOR_MSG,
-				  20 - 1, 0, 0);
-
-	rockchip->msg_bus_addr = rockchip->mem_bus_addr +
-				 ((reg_no + offset) << 20);
-	return err;
-}
-
-static int rockchip_pcie_wait_l2(struct rockchip_pcie *rockchip)
-{
-	u32 value;
-	int err;
-
-	/* send PME_TURN_OFF message */
-	writel(0x0, rockchip->msg_region + PCIE_RC_SEND_PME_OFF);
-
-	/* read LTSSM and wait for falling into L2 link state */
-	err = readl_poll_timeout(rockchip->apb_base + PCIE_CLIENT_DEBUG_OUT_0,
-				 value, PCIE_LINK_IS_L2(value), 20,
-				 jiffies_to_usecs(5 * HZ));
-	if (err) {
-		dev_err(rockchip->dev, "PCIe link enter L2 timeout!\n");
-		return err;
-	}
-
-	return 0;
-}
-
-static int rockchip_pcie_enable_clocks(struct rockchip_pcie *rockchip)
+int rockchip_pcie_enable_clocks(struct rockchip_pcie *rockchip)
 {
 	struct device *dev = rockchip->dev;
 	int err;
···
 	clk_disable_unprepare(rockchip->aclk_pcie);
 	return err;
 }
+EXPORT_SYMBOL_GPL(rockchip_pcie_enable_clocks);
 
-static void rockchip_pcie_disable_clocks(void *data)
+void rockchip_pcie_disable_clocks(void *data)
 {
 	struct rockchip_pcie *rockchip = data;
 
···
 	clk_disable_unprepare(rockchip->aclk_perf_pcie);
 	clk_disable_unprepare(rockchip->aclk_pcie);
 }
+EXPORT_SYMBOL_GPL(rockchip_pcie_disable_clocks);
 
-static int __maybe_unused rockchip_pcie_suspend_noirq(struct device *dev)
+void rockchip_pcie_cfg_configuration_accesses(
+		struct rockchip_pcie *rockchip, u32 type)
 {
-	struct rockchip_pcie *rockchip = dev_get_drvdata(dev);
-	int ret;
+	u32 ob_desc_0;
 
-	/* disable core and cli int since we don't need to ack PME_ACK */
-	rockchip_pcie_write(rockchip, (PCIE_CLIENT_INT_CLI << 16) |
-			    PCIE_CLIENT_INT_CLI, PCIE_CLIENT_INT_MASK);
-	rockchip_pcie_write(rockchip, (u32)PCIE_CORE_INT, PCIE_CORE_INT_MASK);
+	/* Configuration Accesses for region 0 */
+	rockchip_pcie_write(rockchip, 0x0, PCIE_RC_BAR_CONF);
 
-	ret = rockchip_pcie_wait_l2(rockchip);
-	if (ret) {
-		rockchip_pcie_enable_interrupts(rockchip);
-		return ret;
-	}
-
-	rockchip_pcie_deinit_phys(rockchip);
-
-	rockchip_pcie_disable_clocks(rockchip);
-
-	if (!IS_ERR(rockchip->vpcie0v9))
-		regulator_disable(rockchip->vpcie0v9);
-
-	return ret;
+	rockchip_pcie_write(rockchip,
+			    (RC_REGION_0_ADDR_TRANS_L + RC_REGION_0_PASS_BITS),
+			    PCIE_CORE_OB_REGION_ADDR0);
+	rockchip_pcie_write(rockchip, RC_REGION_0_ADDR_TRANS_H,
+			    PCIE_CORE_OB_REGION_ADDR1);
+	ob_desc_0 = rockchip_pcie_read(rockchip, PCIE_CORE_OB_REGION_DESC0);
+	ob_desc_0 &= ~(RC_REGION_0_TYPE_MASK);
+	ob_desc_0 |= (type | (0x1 << 23));
+	rockchip_pcie_write(rockchip, ob_desc_0, PCIE_CORE_OB_REGION_DESC0);
+	rockchip_pcie_write(rockchip, 0x0, PCIE_CORE_OB_REGION_DESC1);
 }
-
-static int __maybe_unused rockchip_pcie_resume_noirq(struct device *dev)
-{
-	struct rockchip_pcie *rockchip = dev_get_drvdata(dev);
-	int err;
-
-	if (!IS_ERR(rockchip->vpcie0v9)) {
-		err = regulator_enable(rockchip->vpcie0v9);
-		if (err) {
-			dev_err(dev, "fail to enable vpcie0v9 regulator\n");
-			return err;
-		}
-	}
-
-	err = rockchip_pcie_enable_clocks(rockchip);
-	if (err)
-		goto err_disable_0v9;
-
-	err = rockchip_pcie_init_port(rockchip);
-	if (err)
-		goto err_pcie_resume;
-
-	err = rockchip_pcie_cfg_atu(rockchip);
-	if (err)
-		goto err_err_deinit_port;
-
-	/* Need this to enter L1 again */
-	rockchip_pcie_update_txcredit_mui(rockchip);
-	rockchip_pcie_enable_interrupts(rockchip);
-
-	return 0;
-
-err_err_deinit_port:
-	rockchip_pcie_deinit_phys(rockchip);
-err_pcie_resume:
-	rockchip_pcie_disable_clocks(rockchip);
-err_disable_0v9:
-	if (!IS_ERR(rockchip->vpcie0v9))
-		regulator_disable(rockchip->vpcie0v9);
-	return err;
-}
-
-static int rockchip_pcie_probe(struct platform_device *pdev)
-{
-	struct rockchip_pcie *rockchip;
-	struct device *dev = &pdev->dev;
-	struct pci_bus *bus, *child;
-	struct pci_host_bridge *bridge;
-	struct resource_entry *win;
-	resource_size_t io_base;
-	struct resource *mem;
-	struct resource *io;
-	int err;
-
-	LIST_HEAD(res);
-
-	if (!dev->of_node)
-		return -ENODEV;
-
-	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rockchip));
-	if (!bridge)
-		return -ENOMEM;
-
-	rockchip = pci_host_bridge_priv(bridge);
-
-	platform_set_drvdata(pdev, rockchip);
-	rockchip->dev = dev;
-
-	err = rockchip_pcie_parse_dt(rockchip);
-	if (err)
-		return err;
-
-	err = rockchip_pcie_enable_clocks(rockchip);
-	if (err)
-		return err;
-
-	err = rockchip_pcie_set_vpcie(rockchip);
-	if (err) {
-		dev_err(dev, "failed to set vpcie regulator\n");
-		goto err_set_vpcie;
-	}
-
-	err = rockchip_pcie_init_port(rockchip);
-	if (err)
-		goto err_vpcie;
-
-	rockchip_pcie_enable_interrupts(rockchip);
-
-	err = rockchip_pcie_init_irq_domain(rockchip);
-	if (err < 0)
-		goto err_deinit_port;
-
-	err = of_pci_get_host_bridge_resources(dev->of_node, 0, 0xff,
-					       &res, &io_base);
-	if (err)
-		goto err_remove_irq_domain;
-
-	err = devm_request_pci_bus_resources(dev, &res);
-	if (err)
-		goto err_free_res;
-
-	/* Get the I/O and memory ranges from DT */
-	resource_list_for_each_entry(win, &res) {
-		switch (resource_type(win->res)) {
-		case IORESOURCE_IO:
-			io = win->res;
-			io->name = "I/O";
-			rockchip->io_size = resource_size(io);
-			rockchip->io_bus_addr = io->start - win->offset;
-			err = pci_remap_iospace(io, io_base);
-			if (err) {
-				dev_warn(dev, "error %d: failed to map resource %pR\n",
-					 err, io);
-				continue;
-			}
-			rockchip->io = io;
-			break;
-		case IORESOURCE_MEM:
-			mem = win->res;
-			mem->name = "MEM";
-			rockchip->mem_size = resource_size(mem);
-			rockchip->mem_bus_addr = mem->start - win->offset;
-			break;
-		case IORESOURCE_BUS:
-			rockchip->root_bus_nr = win->res->start;
-			break;
-		default:
-			continue;
-		}
-	}
-
-	err = rockchip_pcie_cfg_atu(rockchip);
-	if (err)
-		goto err_unmap_iospace;
-
-	rockchip->msg_region = devm_ioremap(dev, rockchip->msg_bus_addr, SZ_1M);
-	if (!rockchip->msg_region) {
-		err = -ENOMEM;
-		goto err_unmap_iospace;
-	}
-
-	list_splice_init(&res, &bridge->windows);
-	bridge->dev.parent = dev;
-	bridge->sysdata = rockchip;
-	bridge->busnr = 0;
-	bridge->ops = &rockchip_pcie_ops;
-	bridge->map_irq = of_irq_parse_and_map_pci;
-	bridge->swizzle_irq = pci_common_swizzle;
-
-	err = pci_scan_root_bus_bridge(bridge);
-	if (err < 0)
-		goto err_unmap_iospace;
-
-	bus = bridge->bus;
-
-	rockchip->root_bus = bus;
-
-	pci_bus_size_bridges(bus);
-	pci_bus_assign_resources(bus);
-	list_for_each_entry(child, &bus->children, node)
-		pcie_bus_configure_settings(child);
-
-	pci_bus_add_devices(bus);
-	return 0;
-
-err_unmap_iospace:
-	pci_unmap_iospace(rockchip->io);
-err_free_res:
-	pci_free_resource_list(&res);
-err_remove_irq_domain:
-	irq_domain_remove(rockchip->irq_domain);
-err_deinit_port:
-	rockchip_pcie_deinit_phys(rockchip);
-err_vpcie:
-	if (!IS_ERR(rockchip->vpcie12v))
-		regulator_disable(rockchip->vpcie12v);
-	if (!IS_ERR(rockchip->vpcie3v3))
-		regulator_disable(rockchip->vpcie3v3);
-	if (!IS_ERR(rockchip->vpcie1v8))
-		regulator_disable(rockchip->vpcie1v8);
-	if (!IS_ERR(rockchip->vpcie0v9))
-		regulator_disable(rockchip->vpcie0v9);
-err_set_vpcie:
-	rockchip_pcie_disable_clocks(rockchip);
-	return err;
-}
-
-static int rockchip_pcie_remove(struct platform_device *pdev)
-{
-	struct device *dev = &pdev->dev;
-	struct rockchip_pcie *rockchip = dev_get_drvdata(dev);
-
-	pci_stop_root_bus(rockchip->root_bus);
-	pci_remove_root_bus(rockchip->root_bus);
-	pci_unmap_iospace(rockchip->io);
-	irq_domain_remove(rockchip->irq_domain);
-
-	rockchip_pcie_deinit_phys(rockchip);
-
rockchip_pcie_disable_clocks(rockchip); 629 - 630 - if (!IS_ERR(rockchip->vpcie12v)) 631 - regulator_disable(rockchip->vpcie12v); 632 - if (!IS_ERR(rockchip->vpcie3v3)) 633 - regulator_disable(rockchip->vpcie3v3); 634 - if (!IS_ERR(rockchip->vpcie1v8)) 635 - regulator_disable(rockchip->vpcie1v8); 636 - if (!IS_ERR(rockchip->vpcie0v9)) 637 - regulator_disable(rockchip->vpcie0v9); 638 - 639 - return 0; 640 - } 641 - 642 - static const struct dev_pm_ops rockchip_pcie_pm_ops = { 643 - SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(rockchip_pcie_suspend_noirq, 644 - rockchip_pcie_resume_noirq) 645 - }; 646 - 647 - static const struct of_device_id rockchip_pcie_of_match[] = { 648 - { .compatible = "rockchip,rk3399-pcie", }, 649 - {} 650 - }; 651 - MODULE_DEVICE_TABLE(of, rockchip_pcie_of_match); 652 - 653 - static struct platform_driver rockchip_pcie_driver = { 654 - .driver = { 655 - .name = "rockchip-pcie", 656 - .of_match_table = rockchip_pcie_of_match, 657 - .pm = &rockchip_pcie_pm_ops, 658 - }, 659 - .probe = rockchip_pcie_probe, 660 - .remove = rockchip_pcie_remove, 661 - }; 662 - module_platform_driver(rockchip_pcie_driver); 663 - 664 - MODULE_AUTHOR("Rockchip Inc"); 665 - MODULE_DESCRIPTION("Rockchip AXI PCIe driver"); 666 - MODULE_LICENSE("GPL v2"); 1466 + EXPORT_SYMBOL_GPL(rockchip_pcie_cfg_configuration_accesses);
drivers/pci/host/pcie-rockchip.h (+338)
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Rockchip AXI PCIe controller driver 4 + * 5 + * Copyright (c) 2018 Rockchip, Inc. 6 + * 7 + * Author: Shawn Lin <shawn.lin@rock-chips.com> 8 + * 9 + */ 10 + 11 + #ifndef _PCIE_ROCKCHIP_H 12 + #define _PCIE_ROCKCHIP_H 13 + 14 + #include <linux/kernel.h> 15 + #include <linux/pci.h> 16 + 17 + /* 18 + * The upper 16 bits of PCIE_CLIENT_CONFIG are a write mask for the lower 16 19 + * bits. This allows atomic updates of the register without locking. 20 + */ 21 + #define HIWORD_UPDATE(mask, val) (((mask) << 16) | (val)) 22 + #define HIWORD_UPDATE_BIT(val) HIWORD_UPDATE(val, val) 23 + 24 + #define ENCODE_LANES(x) ((((x) >> 1) & 3) << 4) 25 + #define MAX_LANE_NUM 4 26 + #define MAX_REGION_LIMIT 32 27 + #define MIN_EP_APERTURE 28 28 + 29 + #define PCIE_CLIENT_BASE 0x0 30 + #define PCIE_CLIENT_CONFIG (PCIE_CLIENT_BASE + 0x00) 31 + #define PCIE_CLIENT_CONF_ENABLE HIWORD_UPDATE_BIT(0x0001) 32 + #define PCIE_CLIENT_CONF_DISABLE HIWORD_UPDATE(0x0001, 0) 33 + #define PCIE_CLIENT_LINK_TRAIN_ENABLE HIWORD_UPDATE_BIT(0x0002) 34 + #define PCIE_CLIENT_ARI_ENABLE HIWORD_UPDATE_BIT(0x0008) 35 + #define PCIE_CLIENT_CONF_LANE_NUM(x) HIWORD_UPDATE(0x0030, ENCODE_LANES(x)) 36 + #define PCIE_CLIENT_MODE_RC HIWORD_UPDATE_BIT(0x0040) 37 + #define PCIE_CLIENT_MODE_EP HIWORD_UPDATE(0x0040, 0) 38 + #define PCIE_CLIENT_GEN_SEL_1 HIWORD_UPDATE(0x0080, 0) 39 + #define PCIE_CLIENT_GEN_SEL_2 HIWORD_UPDATE_BIT(0x0080) 40 + #define PCIE_CLIENT_DEBUG_OUT_0 (PCIE_CLIENT_BASE + 0x3c) 41 + #define PCIE_CLIENT_DEBUG_LTSSM_MASK GENMASK(5, 0) 42 + #define PCIE_CLIENT_DEBUG_LTSSM_L1 0x18 43 + #define PCIE_CLIENT_DEBUG_LTSSM_L2 0x19 44 + #define PCIE_CLIENT_BASIC_STATUS1 (PCIE_CLIENT_BASE + 0x48) 45 + #define PCIE_CLIENT_LINK_STATUS_UP 0x00300000 46 + #define PCIE_CLIENT_LINK_STATUS_MASK 0x00300000 47 + #define PCIE_CLIENT_INT_MASK (PCIE_CLIENT_BASE + 0x4c) 48 + #define PCIE_CLIENT_INT_STATUS (PCIE_CLIENT_BASE + 0x50) 49 + #define PCIE_CLIENT_INTR_MASK 
GENMASK(8, 5) 50 + #define PCIE_CLIENT_INTR_SHIFT 5 51 + #define PCIE_CLIENT_INT_LEGACY_DONE BIT(15) 52 + #define PCIE_CLIENT_INT_MSG BIT(14) 53 + #define PCIE_CLIENT_INT_HOT_RST BIT(13) 54 + #define PCIE_CLIENT_INT_DPA BIT(12) 55 + #define PCIE_CLIENT_INT_FATAL_ERR BIT(11) 56 + #define PCIE_CLIENT_INT_NFATAL_ERR BIT(10) 57 + #define PCIE_CLIENT_INT_CORR_ERR BIT(9) 58 + #define PCIE_CLIENT_INT_INTD BIT(8) 59 + #define PCIE_CLIENT_INT_INTC BIT(7) 60 + #define PCIE_CLIENT_INT_INTB BIT(6) 61 + #define PCIE_CLIENT_INT_INTA BIT(5) 62 + #define PCIE_CLIENT_INT_LOCAL BIT(4) 63 + #define PCIE_CLIENT_INT_UDMA BIT(3) 64 + #define PCIE_CLIENT_INT_PHY BIT(2) 65 + #define PCIE_CLIENT_INT_HOT_PLUG BIT(1) 66 + #define PCIE_CLIENT_INT_PWR_STCG BIT(0) 67 + 68 + #define PCIE_CLIENT_INT_LEGACY \ 69 + (PCIE_CLIENT_INT_INTA | PCIE_CLIENT_INT_INTB | \ 70 + PCIE_CLIENT_INT_INTC | PCIE_CLIENT_INT_INTD) 71 + 72 + #define PCIE_CLIENT_INT_CLI \ 73 + (PCIE_CLIENT_INT_CORR_ERR | PCIE_CLIENT_INT_NFATAL_ERR | \ 74 + PCIE_CLIENT_INT_FATAL_ERR | PCIE_CLIENT_INT_DPA | \ 75 + PCIE_CLIENT_INT_HOT_RST | PCIE_CLIENT_INT_MSG | \ 76 + PCIE_CLIENT_INT_LEGACY_DONE | PCIE_CLIENT_INT_LEGACY | \ 77 + PCIE_CLIENT_INT_PHY) 78 + 79 + #define PCIE_CORE_CTRL_MGMT_BASE 0x900000 80 + #define PCIE_CORE_CTRL (PCIE_CORE_CTRL_MGMT_BASE + 0x000) 81 + #define PCIE_CORE_PL_CONF_SPEED_5G 0x00000008 82 + #define PCIE_CORE_PL_CONF_SPEED_MASK 0x00000018 83 + #define PCIE_CORE_PL_CONF_LANE_MASK 0x00000006 84 + #define PCIE_CORE_PL_CONF_LANE_SHIFT 1 85 + #define PCIE_CORE_CTRL_PLC1 (PCIE_CORE_CTRL_MGMT_BASE + 0x004) 86 + #define PCIE_CORE_CTRL_PLC1_FTS_MASK GENMASK(23, 8) 87 + #define PCIE_CORE_CTRL_PLC1_FTS_SHIFT 8 88 + #define PCIE_CORE_CTRL_PLC1_FTS_CNT 0xffff 89 + #define PCIE_CORE_TXCREDIT_CFG1 (PCIE_CORE_CTRL_MGMT_BASE + 0x020) 90 + #define PCIE_CORE_TXCREDIT_CFG1_MUI_MASK 0xFFFF0000 91 + #define PCIE_CORE_TXCREDIT_CFG1_MUI_SHIFT 16 92 + #define PCIE_CORE_TXCREDIT_CFG1_MUI_ENCODE(x) \ 93 + (((x) >> 3) << 
PCIE_CORE_TXCREDIT_CFG1_MUI_SHIFT) 94 + #define PCIE_CORE_LANE_MAP (PCIE_CORE_CTRL_MGMT_BASE + 0x200) 95 + #define PCIE_CORE_LANE_MAP_MASK 0x0000000f 96 + #define PCIE_CORE_LANE_MAP_REVERSE BIT(16) 97 + #define PCIE_CORE_INT_STATUS (PCIE_CORE_CTRL_MGMT_BASE + 0x20c) 98 + #define PCIE_CORE_INT_PRFPE BIT(0) 99 + #define PCIE_CORE_INT_CRFPE BIT(1) 100 + #define PCIE_CORE_INT_RRPE BIT(2) 101 + #define PCIE_CORE_INT_PRFO BIT(3) 102 + #define PCIE_CORE_INT_CRFO BIT(4) 103 + #define PCIE_CORE_INT_RT BIT(5) 104 + #define PCIE_CORE_INT_RTR BIT(6) 105 + #define PCIE_CORE_INT_PE BIT(7) 106 + #define PCIE_CORE_INT_MTR BIT(8) 107 + #define PCIE_CORE_INT_UCR BIT(9) 108 + #define PCIE_CORE_INT_FCE BIT(10) 109 + #define PCIE_CORE_INT_CT BIT(11) 110 + #define PCIE_CORE_INT_UTC BIT(18) 111 + #define PCIE_CORE_INT_MMVC BIT(19) 112 + #define PCIE_CORE_CONFIG_VENDOR (PCIE_CORE_CTRL_MGMT_BASE + 0x44) 113 + #define PCIE_CORE_INT_MASK (PCIE_CORE_CTRL_MGMT_BASE + 0x210) 114 + #define PCIE_CORE_PHY_FUNC_CFG (PCIE_CORE_CTRL_MGMT_BASE + 0x2c0) 115 + #define PCIE_RC_BAR_CONF (PCIE_CORE_CTRL_MGMT_BASE + 0x300) 116 + #define ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_DISABLED 0x0 117 + #define ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_IO_32BITS 0x1 118 + #define ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_MEM_32BITS 0x4 119 + #define ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_PREFETCH_MEM_32BITS 0x5 120 + #define ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_MEM_64BITS 0x6 121 + #define ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_PREFETCH_MEM_64BITS 0x7 122 + 123 + #define PCIE_CORE_INT \ 124 + (PCIE_CORE_INT_PRFPE | PCIE_CORE_INT_CRFPE | \ 125 + PCIE_CORE_INT_RRPE | PCIE_CORE_INT_CRFO | \ 126 + PCIE_CORE_INT_RT | PCIE_CORE_INT_RTR | \ 127 + PCIE_CORE_INT_PE | PCIE_CORE_INT_MTR | \ 128 + PCIE_CORE_INT_UCR | PCIE_CORE_INT_FCE | \ 129 + PCIE_CORE_INT_CT | PCIE_CORE_INT_UTC | \ 130 + PCIE_CORE_INT_MMVC) 131 + 132 + #define PCIE_RC_RP_ATS_BASE 0x400000 133 + #define PCIE_RC_CONFIG_NORMAL_BASE 0x800000 134 + #define PCIE_RC_CONFIG_BASE 0xa00000 135 + #define 
PCIE_RC_CONFIG_RID_CCR (PCIE_RC_CONFIG_BASE + 0x08) 136 + #define PCIE_RC_CONFIG_SCC_SHIFT 16 137 + #define PCIE_RC_CONFIG_DCR (PCIE_RC_CONFIG_BASE + 0xc4) 138 + #define PCIE_RC_CONFIG_DCR_CSPL_SHIFT 18 139 + #define PCIE_RC_CONFIG_DCR_CSPL_LIMIT 0xff 140 + #define PCIE_RC_CONFIG_DCR_CPLS_SHIFT 26 141 + #define PCIE_RC_CONFIG_DCSR (PCIE_RC_CONFIG_BASE + 0xc8) 142 + #define PCIE_RC_CONFIG_DCSR_MPS_MASK GENMASK(7, 5) 143 + #define PCIE_RC_CONFIG_DCSR_MPS_256 (0x1 << 5) 144 + #define PCIE_RC_CONFIG_LINK_CAP (PCIE_RC_CONFIG_BASE + 0xcc) 145 + #define PCIE_RC_CONFIG_LINK_CAP_L0S BIT(10) 146 + #define PCIE_RC_CONFIG_LCS (PCIE_RC_CONFIG_BASE + 0xd0) 147 + #define PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2 (PCIE_RC_CONFIG_BASE + 0x90c) 148 + #define PCIE_RC_CONFIG_THP_CAP (PCIE_RC_CONFIG_BASE + 0x274) 149 + #define PCIE_RC_CONFIG_THP_CAP_NEXT_MASK GENMASK(31, 20) 150 + 151 + #define PCIE_CORE_AXI_CONF_BASE 0xc00000 152 + #define PCIE_CORE_OB_REGION_ADDR0 (PCIE_CORE_AXI_CONF_BASE + 0x0) 153 + #define PCIE_CORE_OB_REGION_ADDR0_NUM_BITS 0x3f 154 + #define PCIE_CORE_OB_REGION_ADDR0_LO_ADDR 0xffffff00 155 + #define PCIE_CORE_OB_REGION_ADDR1 (PCIE_CORE_AXI_CONF_BASE + 0x4) 156 + #define PCIE_CORE_OB_REGION_DESC0 (PCIE_CORE_AXI_CONF_BASE + 0x8) 157 + #define PCIE_CORE_OB_REGION_DESC1 (PCIE_CORE_AXI_CONF_BASE + 0xc) 158 + 159 + #define PCIE_CORE_AXI_INBOUND_BASE 0xc00800 160 + #define PCIE_RP_IB_ADDR0 (PCIE_CORE_AXI_INBOUND_BASE + 0x0) 161 + #define PCIE_CORE_IB_REGION_ADDR0_NUM_BITS 0x3f 162 + #define PCIE_CORE_IB_REGION_ADDR0_LO_ADDR 0xffffff00 163 + #define PCIE_RP_IB_ADDR1 (PCIE_CORE_AXI_INBOUND_BASE + 0x4) 164 + 165 + /* Size of one AXI Region (not Region 0) */ 166 + #define AXI_REGION_SIZE BIT(20) 167 + /* Size of Region 0, equal to sum of sizes of other regions */ 168 + #define AXI_REGION_0_SIZE (32 * (0x1 << 20)) 169 + #define OB_REG_SIZE_SHIFT 5 170 + #define IB_ROOT_PORT_REG_SIZE_SHIFT 3 171 + #define AXI_WRAPPER_IO_WRITE 0x6 172 + #define AXI_WRAPPER_MEM_WRITE 0x2 173 + #define 
AXI_WRAPPER_TYPE0_CFG 0xa 174 + #define AXI_WRAPPER_TYPE1_CFG 0xb 175 + #define AXI_WRAPPER_NOR_MSG 0xc 176 + 177 + #define MAX_AXI_IB_ROOTPORT_REGION_NUM 3 178 + #define MIN_AXI_ADDR_BITS_PASSED 8 179 + #define PCIE_RC_SEND_PME_OFF 0x11960 180 + #define ROCKCHIP_VENDOR_ID 0x1d87 181 + #define PCIE_ECAM_BUS(x) (((x) & 0xff) << 20) 182 + #define PCIE_ECAM_DEV(x) (((x) & 0x1f) << 15) 183 + #define PCIE_ECAM_FUNC(x) (((x) & 0x7) << 12) 184 + #define PCIE_ECAM_REG(x) (((x) & 0xfff) << 0) 185 + #define PCIE_ECAM_ADDR(bus, dev, func, reg) \ 186 + (PCIE_ECAM_BUS(bus) | PCIE_ECAM_DEV(dev) | \ 187 + PCIE_ECAM_FUNC(func) | PCIE_ECAM_REG(reg)) 188 + #define PCIE_LINK_IS_L2(x) \ 189 + (((x) & PCIE_CLIENT_DEBUG_LTSSM_MASK) == PCIE_CLIENT_DEBUG_LTSSM_L2) 190 + #define PCIE_LINK_UP(x) \ 191 + (((x) & PCIE_CLIENT_LINK_STATUS_MASK) == PCIE_CLIENT_LINK_STATUS_UP) 192 + #define PCIE_LINK_IS_GEN2(x) \ 193 + (((x) & PCIE_CORE_PL_CONF_SPEED_MASK) == PCIE_CORE_PL_CONF_SPEED_5G) 194 + 195 + #define RC_REGION_0_ADDR_TRANS_H 0x00000000 196 + #define RC_REGION_0_ADDR_TRANS_L 0x00000000 197 + #define RC_REGION_0_PASS_BITS (25 - 1) 198 + #define RC_REGION_0_TYPE_MASK GENMASK(3, 0) 199 + #define MAX_AXI_WRAPPER_REGION_NUM 33 200 + 201 + #define ROCKCHIP_PCIE_MSG_ROUTING_TO_RC 0x0 202 + #define ROCKCHIP_PCIE_MSG_ROUTING_VIA_ADDR 0x1 203 + #define ROCKCHIP_PCIE_MSG_ROUTING_VIA_ID 0x2 204 + #define ROCKCHIP_PCIE_MSG_ROUTING_BROADCAST 0x3 205 + #define ROCKCHIP_PCIE_MSG_ROUTING_LOCAL_INTX 0x4 206 + #define ROCKCHIP_PCIE_MSG_ROUTING_PME_ACK 0x5 207 + #define ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTA 0x20 208 + #define ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTB 0x21 209 + #define ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTC 0x22 210 + #define ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTD 0x23 211 + #define ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTA 0x24 212 + #define ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTB 0x25 213 + #define ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTC 0x26 214 + #define ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTD 0x27 215 + #define 
ROCKCHIP_PCIE_MSG_ROUTING_MASK GENMASK(7, 5) 216 + #define ROCKCHIP_PCIE_MSG_ROUTING(route) \ 217 + (((route) << 5) & ROCKCHIP_PCIE_MSG_ROUTING_MASK) 218 + #define ROCKCHIP_PCIE_MSG_CODE_MASK GENMASK(15, 8) 219 + #define ROCKCHIP_PCIE_MSG_CODE(code) \ 220 + (((code) << 8) & ROCKCHIP_PCIE_MSG_CODE_MASK) 221 + #define ROCKCHIP_PCIE_MSG_NO_DATA BIT(16) 222 + 223 + #define ROCKCHIP_PCIE_EP_CMD_STATUS 0x4 224 + #define ROCKCHIP_PCIE_EP_CMD_STATUS_IS BIT(19) 225 + #define ROCKCHIP_PCIE_EP_MSI_CTRL_REG 0x90 226 + #define ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET 17 227 + #define ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_MASK GENMASK(19, 17) 228 + #define ROCKCHIP_PCIE_EP_MSI_CTRL_MME_OFFSET 20 229 + #define ROCKCHIP_PCIE_EP_MSI_CTRL_MME_MASK GENMASK(22, 20) 230 + #define ROCKCHIP_PCIE_EP_MSI_CTRL_ME BIT(16) 231 + #define ROCKCHIP_PCIE_EP_MSI_CTRL_MASK_MSI_CAP BIT(24) 232 + #define ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR 0x1 233 + #define ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR 0x3 234 + #define ROCKCHIP_PCIE_EP_FUNC_BASE(fn) (((fn) << 12) & GENMASK(19, 12)) 235 + #define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \ 236 + (PCIE_RC_RP_ATS_BASE + 0x0840 + (fn) * 0x0040 + (bar) * 0x0008) 237 + #define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \ 238 + (PCIE_RC_RP_ATS_BASE + 0x0844 + (fn) * 0x0040 + (bar) * 0x0008) 239 + #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r) \ 240 + (PCIE_RC_RP_ATS_BASE + 0x0000 + ((r) & 0x1f) * 0x0020) 241 + #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(19, 12) 242 + #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \ 243 + (((devfn) << 12) & \ 244 + ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK) 245 + #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK GENMASK(27, 20) 246 + #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(bus) \ 247 + (((bus) << 20) & ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK) 248 + #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r) \ 249 + (PCIE_RC_RP_ATS_BASE + 0x0004 + ((r) & 0x1f) * 0x0020) 250 + #define 
ROCKCHIP_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID BIT(23) 251 + #define ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK GENMASK(31, 24) 252 + #define ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN(devfn) \ 253 + (((devfn) << 24) & ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK) 254 + #define ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r) \ 255 + (PCIE_RC_RP_ATS_BASE + 0x0008 + ((r) & 0x1f) * 0x0020) 256 + #define ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r) \ 257 + (PCIE_RC_RP_ATS_BASE + 0x000c + ((r) & 0x1f) * 0x0020) 258 + #define ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(r) \ 259 + (PCIE_RC_RP_ATS_BASE + 0x0018 + ((r) & 0x1f) * 0x0020) 260 + #define ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(r) \ 261 + (PCIE_RC_RP_ATS_BASE + 0x001c + ((r) & 0x1f) * 0x0020) 262 + 263 + #define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG0(fn) \ 264 + (PCIE_CORE_CTRL_MGMT_BASE + 0x0240 + (fn) * 0x0008) 265 + #define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG1(fn) \ 266 + (PCIE_CORE_CTRL_MGMT_BASE + 0x0244 + (fn) * 0x0008) 267 + #define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) \ 268 + (GENMASK(4, 0) << ((b) * 8)) 269 + #define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \ 270 + (((a) << ((b) * 8)) & \ 271 + ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b)) 272 + #define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b) \ 273 + (GENMASK(7, 5) << ((b) * 8)) 274 + #define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \ 275 + (((c) << ((b) * 8 + 5)) & \ 276 + ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b)) 277 + 278 + struct rockchip_pcie { 279 + void __iomem *reg_base; /* DT axi-base */ 280 + void __iomem *apb_base; /* DT apb-base */ 281 + bool legacy_phy; 282 + struct phy *phys[MAX_LANE_NUM]; 283 + struct reset_control *core_rst; 284 + struct reset_control *mgmt_rst; 285 + struct reset_control *mgmt_sticky_rst; 286 + struct reset_control *pipe_rst; 287 + struct reset_control *pm_rst; 288 + struct reset_control *aclk_rst; 289 + struct reset_control *pclk_rst; 290 + struct clk *aclk_pcie; 291 + struct clk 
*aclk_perf_pcie; 292 + struct clk *hclk_pcie; 293 + struct clk *clk_pcie_pm; 294 + struct regulator *vpcie12v; /* 12V power supply */ 295 + struct regulator *vpcie3v3; /* 3.3V power supply */ 296 + struct regulator *vpcie1v8; /* 1.8V power supply */ 297 + struct regulator *vpcie0v9; /* 0.9V power supply */ 298 + struct gpio_desc *ep_gpio; 299 + u32 lanes; 300 + u8 lanes_map; 301 + u8 root_bus_nr; 302 + int link_gen; 303 + struct device *dev; 304 + struct irq_domain *irq_domain; 305 + int offset; 306 + struct pci_bus *root_bus; 307 + struct resource *io; 308 + phys_addr_t io_bus_addr; 309 + u32 io_size; 310 + void __iomem *msg_region; 311 + u32 mem_size; 312 + phys_addr_t msg_bus_addr; 313 + phys_addr_t mem_bus_addr; 314 + bool is_rc; 315 + struct resource *mem_res; 316 + }; 317 + 318 + static u32 rockchip_pcie_read(struct rockchip_pcie *rockchip, u32 reg) 319 + { 320 + return readl(rockchip->apb_base + reg); 321 + } 322 + 323 + static void rockchip_pcie_write(struct rockchip_pcie *rockchip, u32 val, 324 + u32 reg) 325 + { 326 + writel(val, rockchip->apb_base + reg); 327 + } 328 + 329 + int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip); 330 + int rockchip_pcie_init_port(struct rockchip_pcie *rockchip); 331 + int rockchip_pcie_get_phys(struct rockchip_pcie *rockchip); 332 + void rockchip_pcie_deinit_phys(struct rockchip_pcie *rockchip); 333 + int rockchip_pcie_enable_clocks(struct rockchip_pcie *rockchip); 334 + void rockchip_pcie_disable_clocks(void *data); 335 + void rockchip_pcie_cfg_configuration_accesses( 336 + struct rockchip_pcie *rockchip, u32 type); 337 + 338 + #endif /* _PCIE_ROCKCHIP_H */
drivers/pci/host/pcie-xilinx-nwl.c (+4 -2)
···
 #include <linux/platform_device.h>
 #include <linux/irqchip/chained_irq.h>
 
+#include "../pci.h"
+
 /* Bridge core config registers */
 #define BRCFG_PCIE_RX0			0x00000000
 #define BRCFG_INTERRUPT			0x00000010
···
 static int nwl_pcie_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
-	struct device_node *node = dev->of_node;
 	struct nwl_pcie *pcie;
 	struct pci_bus *bus;
 	struct pci_bus *child;
···
 		return err;
 	}
 
-	err = of_pci_get_host_bridge_resources(node, 0, 0xff, &res, &iobase);
+	err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &res,
+						    &iobase);
 	if (err) {
 		dev_err(dev, "Getting bridge resources failed\n");
 		return err;
drivers/pci/host/pcie-xilinx.c (+4 -2)
···
 #include <linux/pci.h>
 #include <linux/platform_device.h>
 
+#include "../pci.h"
+
 /* Register definitions */
 #define XILINX_PCIE_REG_BIR		0x00000130
 #define XILINX_PCIE_REG_IDR		0x00000138
···
 		return err;
 	}
 
-	err = of_pci_get_host_bridge_resources(dev->of_node, 0, 0xff, &res,
-					       &iobase);
+	err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &res,
+						    &iobase);
 	if (err) {
 		dev_err(dev, "Getting bridge resources failed\n");
 		return err;
drivers/pci/host/vmd.c (+81 -10)
··· 24 24 #define VMD_MEMBAR1 2 25 25 #define VMD_MEMBAR2 4 26 26 27 + #define PCI_REG_VMCAP 0x40 28 + #define BUS_RESTRICT_CAP(vmcap) (vmcap & 0x1) 29 + #define PCI_REG_VMCONFIG 0x44 30 + #define BUS_RESTRICT_CFG(vmcfg) ((vmcfg >> 8) & 0x3) 31 + #define PCI_REG_VMLOCK 0x70 32 + #define MB2_SHADOW_EN(vmlock) (vmlock & 0x2) 33 + 34 + enum vmd_features { 35 + /* 36 + * Device may contain registers which hint the physical location of the 37 + * membars, in order to allow proper address translation during 38 + * resource assignment to enable guest virtualization 39 + */ 40 + VMD_FEAT_HAS_MEMBAR_SHADOW = (1 << 0), 41 + 42 + /* 43 + * Device may provide root port configuration information which limits 44 + * bus numbering 45 + */ 46 + VMD_FEAT_HAS_BUS_RESTRICTIONS = (1 << 1), 47 + }; 48 + 27 49 /* 28 50 * Lock for manipulating VMD IRQ lists. 29 51 */ ··· 568 546 return domain + 1; 569 547 } 570 548 571 - static int vmd_enable_domain(struct vmd_dev *vmd) 549 + static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) 572 550 { 573 551 struct pci_sysdata *sd = &vmd->sysdata; 574 552 struct fwnode_handle *fn; ··· 576 554 u32 upper_bits; 577 555 unsigned long flags; 578 556 LIST_HEAD(resources); 557 + resource_size_t offset[2] = {0}; 558 + resource_size_t membar2_offset = 0x2000, busn_start = 0; 559 + 560 + /* 561 + * Shadow registers may exist in certain VMD device ids which allow 562 + * guests to correctly assign host physical addresses to the root ports 563 + * and child devices. These registers will either return the host value 564 + * or 0, depending on an enable bit in the VMD device. 
565 + */ 566 + if (features & VMD_FEAT_HAS_MEMBAR_SHADOW) { 567 + u32 vmlock; 568 + int ret; 569 + 570 + membar2_offset = 0x2018; 571 + ret = pci_read_config_dword(vmd->dev, PCI_REG_VMLOCK, &vmlock); 572 + if (ret || vmlock == ~0) 573 + return -ENODEV; 574 + 575 + if (MB2_SHADOW_EN(vmlock)) { 576 + void __iomem *membar2; 577 + 578 + membar2 = pci_iomap(vmd->dev, VMD_MEMBAR2, 0); 579 + if (!membar2) 580 + return -ENOMEM; 581 + offset[0] = vmd->dev->resource[VMD_MEMBAR1].start - 582 + readq(membar2 + 0x2008); 583 + offset[1] = vmd->dev->resource[VMD_MEMBAR2].start - 584 + readq(membar2 + 0x2010); 585 + pci_iounmap(vmd->dev, membar2); 586 + } 587 + } 588 + 589 + /* 590 + * Certain VMD devices may have a root port configuration option which 591 + * limits the bus range to between 0-127 or 128-255 592 + */ 593 + if (features & VMD_FEAT_HAS_BUS_RESTRICTIONS) { 594 + u32 vmcap, vmconfig; 595 + 596 + pci_read_config_dword(vmd->dev, PCI_REG_VMCAP, &vmcap); 597 + pci_read_config_dword(vmd->dev, PCI_REG_VMCONFIG, &vmconfig); 598 + if (BUS_RESTRICT_CAP(vmcap) && 599 + (BUS_RESTRICT_CFG(vmconfig) == 0x1)) 600 + busn_start = 128; 601 + } 579 602 580 603 res = &vmd->dev->resource[VMD_CFGBAR]; 581 604 vmd->resources[0] = (struct resource) { 582 605 .name = "VMD CFGBAR", 583 - .start = 0, 584 - .end = (resource_size(res) >> 20) - 1, 606 + .start = busn_start, 607 + .end = busn_start + (resource_size(res) >> 20) - 1, 585 608 .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED, 586 609 }; 587 610 ··· 667 600 flags &= ~IORESOURCE_MEM_64; 668 601 vmd->resources[2] = (struct resource) { 669 602 .name = "VMD MEMBAR2", 670 - .start = res->start + 0x2000, 603 + .start = res->start + membar2_offset, 671 604 .end = res->end, 672 605 .flags = flags, 673 606 .parent = res, ··· 691 624 return -ENODEV; 692 625 693 626 pci_add_resource(&resources, &vmd->resources[0]); 694 - pci_add_resource(&resources, &vmd->resources[1]); 695 - pci_add_resource(&resources, &vmd->resources[2]); 696 - vmd->bus = 
pci_create_root_bus(&vmd->dev->dev, 0, &vmd_ops, sd, 697 - &resources); 627 + pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]); 628 + pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]); 629 + 630 + vmd->bus = pci_create_root_bus(&vmd->dev->dev, busn_start, &vmd_ops, 631 + sd, &resources); 698 632 if (!vmd->bus) { 699 633 pci_free_resource_list(&resources); 700 634 irq_domain_remove(vmd->irq_domain); ··· 781 713 782 714 spin_lock_init(&vmd->cfg_lock); 783 715 pci_set_drvdata(dev, vmd); 784 - err = vmd_enable_domain(vmd); 716 + err = vmd_enable_domain(vmd, (unsigned long) id->driver_data); 785 717 if (err) 786 718 return err; 787 719 ··· 846 778 static SIMPLE_DEV_PM_OPS(vmd_dev_pm_ops, vmd_suspend, vmd_resume); 847 779 848 780 static const struct pci_device_id vmd_ids[] = { 849 - {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x201d),}, 781 + {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_201D),}, 782 + {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_28C0), 783 + .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW | 784 + VMD_FEAT_HAS_BUS_RESTRICTIONS,}, 850 785 {0,} 851 786 }; 852 787 MODULE_DEVICE_TABLE(pci, vmd_ids);
drivers/pci/hotplug/Kconfig (+1 -4)
···
 	  When in doubt, say N.
 
 config HOTPLUG_PCI_SHPC
-	tristate "SHPC PCI Hotplug driver"
+	bool "SHPC PCI Hotplug driver"
 	help
 	  Say Y here if you have a motherboard with a SHPC PCI Hotplug
 	  controller.
-
-	  To compile this driver as a module, choose M here: the
-	  module will be called shpchp.
 
 	  When in doubt, say N.
drivers/pci/hotplug/acpi_pcihp.c (+17 -28)
···
 /**
  * acpi_get_hp_hw_control_from_firmware
  * @dev: the pci_dev of the bridge that has a hotplug controller
- * @flags: requested control bits for _OSC
  *
  * Attempt to take hotplug control from firmware.
  */
-int acpi_get_hp_hw_control_from_firmware(struct pci_dev *pdev, u32 flags)
+int acpi_get_hp_hw_control_from_firmware(struct pci_dev *pdev)
 {
+	const struct pci_host_bridge *host;
+	const struct acpi_pci_root *root;
 	acpi_status status;
 	acpi_handle chandle, handle;
 	struct acpi_buffer string = { ACPI_ALLOCATE_BUFFER, NULL };
-
-	flags &= OSC_PCI_SHPC_NATIVE_HP_CONTROL;
-	if (!flags) {
-		err("Invalid flags %u specified!\n", flags);
-		return -EINVAL;
-	}
 
 	/*
 	 * Per PCI firmware specification, we should run the ACPI _OSC
···
 	 * OSHP within the scope of the hotplug controller and its parents,
 	 * up to the host bridge under which this controller exists.
 	 */
-	handle = acpi_find_root_bridge_handle(pdev);
-	if (handle) {
-		acpi_get_name(handle, ACPI_FULL_PATHNAME, &string);
-		dbg("Trying to get hotplug control for %s\n",
-		    (char *)string.pointer);
-		status = acpi_pci_osc_control_set(handle, &flags, flags);
-		if (ACPI_SUCCESS(status))
-			goto got_one;
-		if (status == AE_SUPPORT)
-			goto no_control;
-		kfree(string.pointer);
-		string = (struct acpi_buffer){ ACPI_ALLOCATE_BUFFER, NULL };
-	}
+	if (shpchp_is_native(pdev))
+		return 0;
+
+	/* If _OSC exists, we should not evaluate OSHP */
+	host = pci_find_host_bridge(pdev->bus);
+	root = acpi_pci_find_root(ACPI_HANDLE(&host->dev));
+	if (root->osc_support_set)
+		goto no_control;
 
 	handle = ACPI_HANDLE(&pdev->dev);
 	if (!handle) {
 		/*
 		 * This hotplug controller was not listed in the ACPI name
-		 * space at all.  Try to get acpi handle of parent pci bus.
+		 * space at all.  Try to get ACPI handle of parent PCI bus.
 		 */
 		struct pci_bus *pbus;
 		for (pbus = pdev->bus; pbus; pbus = pbus->parent) {
···
 
 	while (handle) {
 		acpi_get_name(handle, ACPI_FULL_PATHNAME, &string);
-		dbg("Trying to get hotplug control for %s\n",
-		    (char *)string.pointer);
+		pci_info(pdev, "Requesting control of SHPC hotplug via OSHP (%s)\n",
+			 (char *)string.pointer);
 		status = acpi_run_oshp(handle);
 		if (ACPI_SUCCESS(status))
 			goto got_one;
···
 		break;
 	}
 no_control:
-	dbg("Cannot get control of hotplug hardware for pci %s\n",
-	    pci_name(pdev));
+	pci_info(pdev, "Cannot get control of SHPC hotplug\n");
 	kfree(string.pointer);
 	return -ENODEV;
 got_one:
-	dbg("Gained control for hotplug HW for pci %s (%s)\n",
-	    pci_name(pdev), (char *)string.pointer);
+	pci_info(pdev, "Gained control of SHPC hotplug (%s)\n",
+		 (char *)string.pointer);
 	kfree(string.pointer);
 	return 0;
 }
drivers/pci/hotplug/acpiphp_glue.c (+64 -18)
···
 	/*
 	 * Expose slots to user space for functions that have _EJ0 or _RMV or
 	 * are located in dock stations. Do not expose them for devices handled
-	 * by the native PCIe hotplug (PCIeHP), becuase that code is supposed to
-	 * expose slots to user space in those cases.
+	 * by the native PCIe hotplug (PCIeHP) or standard PCI hotplug
+	 * (SHPCHP), because that code is supposed to expose slots to user
+	 * space in those cases.
 	 */
 	if ((acpi_pci_check_ejectable(pbus, handle) || is_dock_device(adev))
-	    && !(pdev && pdev->is_hotplug_bridge && pciehp_is_native(pdev))) {
+	    && !(pdev && hotplug_is_native(pdev))) {
 		unsigned long long sun;
 		int retval;
···
 	return pci_scan_slot(slot->bus, PCI_DEVFN(slot->device, 0));
 }
 
+static void acpiphp_native_scan_bridge(struct pci_dev *bridge)
+{
+	struct pci_bus *bus = bridge->subordinate;
+	struct pci_dev *dev;
+	int max;
+
+	if (!bus)
+		return;
+
+	max = bus->busn_res.start;
+	/* Scan already configured non-hotplug bridges */
+	for_each_pci_bridge(dev, bus) {
+		if (!hotplug_is_native(dev))
+			max = pci_scan_bridge(bus, dev, max, 0);
+	}
+
+	/* Scan non-hotplug bridges that need to be reconfigured */
+	for_each_pci_bridge(dev, bus) {
+		if (!hotplug_is_native(dev))
+			max = pci_scan_bridge(bus, dev, max, 1);
+	}
+}
+
 /**
  * enable_slot - enable, configure a slot
  * @slot: slot to be enabled
···
 	struct pci_dev *dev;
 	struct pci_bus *bus = slot->bus;
 	struct acpiphp_func *func;
-	int max, pass;
-	LIST_HEAD(add_list);
 
-	acpiphp_rescan_slot(slot);
-	max = acpiphp_max_busnr(bus);
-	for (pass = 0; pass < 2; pass++) {
+	if (bus->self && hotplug_is_native(bus->self)) {
+		/*
+		 * If native hotplug is used, it will take care of hotplug
+		 * slot management and resource allocation for hotplug
+		 * bridges. However, ACPI hotplug may still be used for
+		 * non-hotplug bridges to bring in additional devices such
+		 * as a Thunderbolt host controller.
+		 */
 		for_each_pci_bridge(dev, bus) {
-			if (PCI_SLOT(dev->devfn) != slot->device)
-				continue;
+			if (PCI_SLOT(dev->devfn) == slot->device)
+				acpiphp_native_scan_bridge(dev);
+		}
+		pci_assign_unassigned_bridge_resources(bus->self);
+	} else {
+		LIST_HEAD(add_list);
+		int max, pass;
 
-			max = pci_scan_bridge(bus, dev, max, pass);
-			if (pass && dev->subordinate) {
-				check_hotplug_bridge(slot, dev);
-				pcibios_resource_survey_bus(dev->subordinate);
-				__pci_bus_size_bridges(dev->subordinate, &add_list);
+		acpiphp_rescan_slot(slot);
+		max = acpiphp_max_busnr(bus);
+		for (pass = 0; pass < 2; pass++) {
+			for_each_pci_bridge(dev, bus) {
+				if (PCI_SLOT(dev->devfn) != slot->device)
+					continue;
+
+				max = pci_scan_bridge(bus, dev, max, pass);
+				if (pass && dev->subordinate) {
+					check_hotplug_bridge(slot, dev);
+					pcibios_resource_survey_bus(dev->subordinate);
+					__pci_bus_size_bridges(dev->subordinate,
+							       &add_list);
+				}
 			}
 		}
+		__pci_bus_assign_resources(bus, &add_list, NULL);
 	}
-	__pci_bus_assign_resources(bus, &add_list, NULL);
 
 	acpiphp_sanitize_bus(bus);
 	pcie_bus_configure_settings(bus);
···
 		if (!dev) {
 			/* Do not set SLOT_ENABLED flag if some funcs
 			   are not added. */
-			slot->flags &= (~SLOT_ENABLED);
+			slot->flags &= ~SLOT_ENABLED;
 			continue;
 		}
 	}
···
 	list_for_each_entry(func, &slot->funcs, sibling)
 		acpi_bus_trim(func_to_acpi_device(func));
 
-	slot->flags &= (~SLOT_ENABLED);
+	slot->flags &= ~SLOT_ENABLED;
 }
 
 static bool slot_no_hotplug(struct acpiphp_slot *slot)
···
 	alive = pci_device_is_present(dev);
 
 	if (!alive) {
+		pci_dev_set_disconnected(dev, NULL);
+		if (pci_has_subordinate(dev))
+			pci_walk_bus(dev->subordinate, pci_dev_set_disconnected,
+				     NULL);
+
 		pci_stop_and_remove_bus_device(dev);
 		if (adev)
 			acpi_bus_trim(adev);
+1 -1
drivers/pci/hotplug/ibmphp_core.c
···
 
 static int get_max_bus_speed(struct slot *slot)
 {
-	int rc;
+	int rc = 0;
 	u8 mode = 0;
 	enum pci_bus_speed speed;
 	struct pci_bus *bus = slot->hotplug_slot->pci_slot->bus;
+1 -1
drivers/pci/hotplug/pciehp.h
···
 int pcie_init_notification(struct controller *ctrl);
 int pciehp_enable_slot(struct slot *p_slot);
 int pciehp_disable_slot(struct slot *p_slot);
-void pcie_enable_notification(struct controller *ctrl);
+void pcie_reenable_notification(struct controller *ctrl);
 int pciehp_power_on_slot(struct slot *slot);
 void pciehp_power_off_slot(struct slot *slot);
 void pciehp_get_power_status(struct slot *slot, u8 *status);
+1 -1
drivers/pci/hotplug/pciehp_core.c
···
 	ctrl = get_service_data(dev);
 
 	/* reinitialize the chipset's event detection logic */
-	pcie_enable_notification(ctrl);
+	pcie_reenable_notification(ctrl);
 
 	slot = ctrl->slot;
 
+54 -30
drivers/pci/hotplug/pciehp_hpc.c
···
  * All rights reserved.
  *
  * Send feedback to <greg@kroah.com>,<kristen.c.accardi@intel.com>
- *
  */
 
 #include <linux/kernel.h>
···
 	else
 		rc = pcie_poll_cmd(ctrl, jiffies_to_msecs(timeout));
 
-	/*
-	 * Controllers with errata like Intel CF118 don't generate
-	 * completion notifications unless the power/indicator/interlock
-	 * control bits are changed. On such controllers, we'll emit this
-	 * timeout message when we wait for completion of commands that
-	 * don't change those bits, e.g., commands that merely enable
-	 * interrupts.
-	 */
 	if (!rc)
 		ctrl_info(ctrl, "Timeout on hotplug command %#06x (issued %u msec ago)\n",
 			  ctrl->slot_ctrl,
 			  jiffies_to_msecs(jiffies - ctrl->cmd_started));
 }
 
+#define CC_ERRATUM_MASK		(PCI_EXP_SLTCTL_PCC |	\
+				 PCI_EXP_SLTCTL_PIC |	\
+				 PCI_EXP_SLTCTL_AIC |	\
+				 PCI_EXP_SLTCTL_EIC)
+
 static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd,
 			      u16 mask, bool wait)
 {
 	struct pci_dev *pdev = ctrl_dev(ctrl);
-	u16 slot_ctrl;
+	u16 slot_ctrl_orig, slot_ctrl;
 
 	mutex_lock(&ctrl->ctrl_lock);
···
 		goto out;
 	}
 
+	slot_ctrl_orig = slot_ctrl;
 	slot_ctrl &= ~mask;
 	slot_ctrl |= (cmd & mask);
 	ctrl->cmd_busy = 1;
···
 	pcie_capability_write_word(pdev, PCI_EXP_SLTCTL, slot_ctrl);
 	ctrl->cmd_started = jiffies;
 	ctrl->slot_ctrl = slot_ctrl;
+
+	/*
+	 * Controllers with the Intel CF118 and similar errata advertise
+	 * Command Completed support, but they only set Command Completed
+	 * if we change the "Control" bits for power, power indicator,
+	 * attention indicator, or interlock. If we only change the
+	 * "Enable" bits, they never set the Command Completed bit.
+	 */
+	if (pdev->broken_cmd_compl &&
+	    (slot_ctrl_orig & CC_ERRATUM_MASK) == (slot_ctrl & CC_ERRATUM_MASK))
+		ctrl->cmd_busy = 0;
 
 	/*
 	 * Optionally wait for the hardware to be ready for a new command,
···
 	return ret;
 }
 
-static void __pcie_wait_link_active(struct controller *ctrl, bool active)
-{
-	int timeout = 1000;
-
-	if (pciehp_check_link_active(ctrl) == active)
-		return;
-	while (timeout > 0) {
-		msleep(10);
-		timeout -= 10;
-		if (pciehp_check_link_active(ctrl) == active)
-			return;
-	}
-	ctrl_dbg(ctrl, "Data Link Layer Link Active not %s in 1000 msec\n",
-		 active ? "set" : "cleared");
-}
-
 static void pcie_wait_link_active(struct controller *ctrl)
 {
-	__pcie_wait_link_active(ctrl, true);
+	struct pci_dev *pdev = ctrl_dev(ctrl);
+
+	pcie_wait_for_link(pdev, true);
 }
 
 static bool pci_bus_check_dev(struct pci_bus *bus, int devfn)
···
 	return handled;
 }
 
-void pcie_enable_notification(struct controller *ctrl)
+static void pcie_enable_notification(struct controller *ctrl)
 {
 	u16 cmd, mask;
···
 	pcie_write_cmd_nowait(ctrl, cmd, mask);
 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, cmd);
+}
+
+void pcie_reenable_notification(struct controller *ctrl)
+{
+	/*
+	 * Clear both Presence and Data Link Layer Changed to make sure
+	 * those events still fire after we have re-enabled them.
+	 */
+	pcie_capability_write_word(ctrl->pcie->port, PCI_EXP_SLTSTA,
+				   PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC);
+	pcie_enable_notification(ctrl);
 }
 
 static void pcie_disable_notification(struct controller *ctrl)
···
 		      PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_CC |
 		      PCI_EXP_SLTSTA_DLLSC);
 
-	ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c LLActRep%c\n",
+	ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c LLActRep%c%s\n",
 		(slot_cap & PCI_EXP_SLTCAP_PSN) >> 19,
 		FLAG(slot_cap, PCI_EXP_SLTCAP_ABP),
 		FLAG(slot_cap, PCI_EXP_SLTCAP_PCP),
···
 		FLAG(slot_cap, PCI_EXP_SLTCAP_HPS),
 		FLAG(slot_cap, PCI_EXP_SLTCAP_EIP),
 		FLAG(slot_cap, PCI_EXP_SLTCAP_NCCS),
-		FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC));
+		FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC),
+		pdev->broken_cmd_compl ? " (with Cmd Compl erratum)" : "");
 
 	if (pcie_init_slot(ctrl))
 		goto abort_ctrl;
···
 	pcie_cleanup_slot(ctrl);
 	kfree(ctrl);
 }
+
+static void quirk_cmd_compl(struct pci_dev *pdev)
+{
+	u32 slot_cap;
+
+	if (pci_is_pcie(pdev)) {
+		pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap);
+		if (slot_cap & PCI_EXP_SLTCAP_HPC &&
+		    !(slot_cap & PCI_EXP_SLTCAP_NCCS))
+			pdev->broken_cmd_compl = 1;
+	}
+}
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
+			      PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0400,
+			      PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0401,
+			      PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
+6 -2
drivers/pci/hotplug/pnv_php.c
···
 
 	for_each_child_of_node(dn, child) {
 		ret = of_changeset_attach_node(ocs, child);
-		if (ret)
+		if (ret) {
+			of_node_put(child);
 			break;
+		}
 
 		ret = pnv_php_populate_changeset(ocs, child);
-		if (ret)
+		if (ret) {
+			of_node_put(child);
 			break;
+		}
 	}
 
 	return ret;
-12
drivers/pci/hotplug/shpchp.h
···
 };
 
 /* Define AMD SHPC ID  */
-#define PCI_DEVICE_ID_AMD_GOLAM_7450	0x7450
 #define PCI_DEVICE_ID_AMD_POGO_7458	0x7458
 
 /* AMD PCI-X bridge registers */
···
 {
 	return hotplug_slot_name(slot->hotplug_slot);
 }
-
-#ifdef CONFIG_ACPI
-#include <linux/pci-acpi.h>
-static inline int get_hp_hw_control_from_firmware(struct pci_dev *dev)
-{
-	u32 flags = OSC_PCI_SHPC_NATIVE_HP_CONTROL;
-	return acpi_get_hp_hw_control_from_firmware(dev, flags);
-}
-#else
-#define get_hp_hw_control_from_firmware(dev) (0)
-#endif
 
 struct ctrl_reg {
 	volatile u32 base_offset;
+1 -13
drivers/pci/hotplug/shpchp_core.c
···
 	return 0;
 }
 
-static int is_shpc_capable(struct pci_dev *dev)
-{
-	if (dev->vendor == PCI_VENDOR_ID_AMD &&
-	    dev->device == PCI_DEVICE_ID_AMD_GOLAM_7450)
-		return 1;
-	if (!pci_find_capability(dev, PCI_CAP_ID_SHPC))
-		return 0;
-	if (get_hp_hw_control_from_firmware(dev))
-		return 0;
-	return 1;
-}
-
 static int shpc_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
 	int rc;
 	struct controller *ctrl;
 
-	if (!is_shpc_capable(pdev))
+	if (acpi_get_hp_hw_control_from_firmware(pdev))
 		return -ENODEV;
 
 	ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);
+4 -4
drivers/pci/hotplug/shpchp_ctrl.c
···
 	ctrl_dbg(ctrl, "%s: p_slot->pwr_save %x\n", __func__, p_slot->pwr_save);
 	p_slot->hpc_ops->get_latch_status(p_slot, &getstatus);
 
-	if (((p_slot->ctrl->pci_dev->vendor == PCI_VENDOR_ID_AMD) ||
-	    (p_slot->ctrl->pci_dev->device == PCI_DEVICE_ID_AMD_POGO_7458))
+	if ((p_slot->ctrl->pci_dev->vendor == PCI_VENDOR_ID_AMD &&
+	     p_slot->ctrl->pci_dev->device == PCI_DEVICE_ID_AMD_POGO_7458)
 	     && p_slot->ctrl->num_slots == 1) {
-		/* handle amd pogo errata; this must be done before enable  */
+		/* handle AMD POGO errata; this must be done before enable  */
 		amd_pogo_errata_save_misc_reg(p_slot);
 		retval = board_added(p_slot);
-		/* handle amd pogo errata; this must be done after enable  */
+		/* handle AMD POGO errata; this must be done after enable  */
 		amd_pogo_errata_restore_misc_reg(p_slot);
 	} else
 		retval = board_added(p_slot);
+38 -4
drivers/pci/iov.c
···
 	iov->nres = nres;
 	iov->ctrl = ctrl;
 	iov->total_VFs = total;
+	iov->driver_max_VFs = total;
 	pci_read_config_word(dev, pos + PCI_SRIOV_VF_DID, &iov->vf_device);
 	iov->pgsz = pgsz;
 	iov->self = dev;
···
 	if (!dev->is_physfn)
 		return 0;
 
-	if (dev->sriov->driver_max_VFs)
-		return dev->sriov->driver_max_VFs;
-
-	return dev->sriov->total_VFs;
+	return dev->sriov->driver_max_VFs;
 }
 EXPORT_SYMBOL_GPL(pci_sriov_get_totalvfs);
+
+/**
+ * pci_sriov_configure_simple - helper to configure SR-IOV
+ * @dev: the PCI device
+ * @nr_virtfn: number of virtual functions to enable, 0 to disable
+ *
+ * Enable or disable SR-IOV for devices that don't require any PF setup
+ * before enabling SR-IOV.  Return value is negative on error, or number of
+ * VFs allocated on success.
+ */
+int pci_sriov_configure_simple(struct pci_dev *dev, int nr_virtfn)
+{
+	int rc;
+
+	might_sleep();
+
+	if (!dev->is_physfn)
+		return -ENODEV;
+
+	if (pci_vfs_assigned(dev)) {
+		pci_warn(dev, "Cannot modify SR-IOV while VFs are assigned\n");
+		return -EPERM;
+	}
+
+	if (nr_virtfn == 0) {
+		sriov_disable(dev);
+		return 0;
+	}
+
+	rc = sriov_enable(dev, nr_virtfn);
+	if (rc < 0)
+		return rc;
+
+	return nr_virtfn;
+}
+EXPORT_SYMBOL_GPL(pci_sriov_configure_simple);
+29 -34
drivers/pci/of.c
···
 
 #if defined(CONFIG_OF_ADDRESS)
 /**
- * of_pci_get_host_bridge_resources - Parse PCI host bridge resources from DT
- * @dev: device node of the host bridge having the range property
+ * devm_of_pci_get_host_bridge_resources() - Resource-managed parsing of PCI
+ *                                           host bridge resources from DT
+ * @dev: host bridge device
  * @busno: bus number associated with the bridge root bus
  * @bus_max: maximum number of buses for this bridge
  * @resources: list where the range of resources will be added after DT parsing
  * @io_base: pointer to a variable that will contain on return the physical
  * address for the start of the I/O range. Can be NULL if the caller doesn't
  * expect I/O ranges to be present in the device tree.
- *
- * It is the caller's job to free the @resources list.
  *
  * This function will parse the "ranges" property of a PCI host bridge device
  * node and setup the resource mapping based on its content. It is expected
···
  * It returns zero if the range parsing has been successful or a standard error
  * value if it failed.
  */
-int of_pci_get_host_bridge_resources(struct device_node *dev,
+int devm_of_pci_get_host_bridge_resources(struct device *dev,
 			unsigned char busno, unsigned char bus_max,
 			struct list_head *resources, resource_size_t *io_base)
 {
-	struct resource_entry *window;
+	struct device_node *dev_node = dev->of_node;
 	struct resource *res;
 	struct resource *bus_range;
 	struct of_pci_range range;
···
 	if (io_base)
 		*io_base = (resource_size_t)OF_BAD_ADDR;
 
-	bus_range = kzalloc(sizeof(*bus_range), GFP_KERNEL);
+	bus_range = devm_kzalloc(dev, sizeof(*bus_range), GFP_KERNEL);
 	if (!bus_range)
 		return -ENOMEM;
 
-	pr_info("host bridge %pOF ranges:\n", dev);
+	dev_info(dev, "host bridge %pOF ranges:\n", dev_node);
 
-	err = of_pci_parse_bus_range(dev, bus_range);
+	err = of_pci_parse_bus_range(dev_node, bus_range);
 	if (err) {
 		bus_range->start = busno;
 		bus_range->end = bus_max;
 		bus_range->flags = IORESOURCE_BUS;
-		pr_info("  No bus range found for %pOF, using %pR\n",
-			dev, bus_range);
+		dev_info(dev, "  No bus range found for %pOF, using %pR\n",
+			 dev_node, bus_range);
 	} else {
 		if (bus_range->end > bus_range->start + bus_max)
 			bus_range->end = bus_range->start + bus_max;
···
 	pci_add_resource(resources, bus_range);
 
 	/* Check for ranges property */
-	err = of_pci_range_parser_init(&parser, dev);
+	err = of_pci_range_parser_init(&parser, dev_node);
 	if (err)
-		goto parse_failed;
+		goto failed;
 
-	pr_debug("Parsing ranges property...\n");
+	dev_dbg(dev, "Parsing ranges property...\n");
 	for_each_of_pci_range(&parser, &range) {
 		/* Read next ranges element */
 		if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_IO)
···
 			snprintf(range_type, 4, "MEM");
 		else
 			snprintf(range_type, 4, "err");
-		pr_info("  %s %#010llx..%#010llx -> %#010llx\n", range_type,
-			range.cpu_addr, range.cpu_addr + range.size - 1,
-			range.pci_addr);
+		dev_info(dev, "  %s %#010llx..%#010llx -> %#010llx\n",
+			 range_type, range.cpu_addr,
+			 range.cpu_addr + range.size - 1, range.pci_addr);
 
 		/*
 		 * If we failed translation or got a zero-sized region
···
 		if (range.cpu_addr == OF_BAD_ADDR || range.size == 0)
 			continue;
 
-		res = kzalloc(sizeof(struct resource), GFP_KERNEL);
+		res = devm_kzalloc(dev, sizeof(struct resource), GFP_KERNEL);
 		if (!res) {
 			err = -ENOMEM;
-			goto parse_failed;
+			goto failed;
 		}
 
-		err = of_pci_range_to_resource(&range, dev, res);
+		err = of_pci_range_to_resource(&range, dev_node, res);
 		if (err) {
-			kfree(res);
+			devm_kfree(dev, res);
 			continue;
 		}
 
 		if (resource_type(res) == IORESOURCE_IO) {
 			if (!io_base) {
-				pr_err("I/O range found for %pOF. Please provide an io_base pointer to save CPU base address\n",
-					dev);
+				dev_err(dev, "I/O range found for %pOF. Please provide an io_base pointer to save CPU base address\n",
+					dev_node);
 				err = -EINVAL;
-				goto conversion_failed;
+				goto failed;
 			}
 			if (*io_base != (resource_size_t)OF_BAD_ADDR)
-				pr_warn("More than one I/O resource converted for %pOF. CPU base address for old range lost!\n",
-					dev);
+				dev_warn(dev, "More than one I/O resource converted for %pOF. CPU base address for old range lost!\n",
+					 dev_node);
 			*io_base = range.cpu_addr;
 		}
 
···
 
 	return 0;
 
-conversion_failed:
-	kfree(res);
-parse_failed:
-	resource_list_for_each_entry(window, resources)
-		kfree(window->res);
+failed:
 	pci_free_resource_list(resources);
 	return err;
 }
-EXPORT_SYMBOL_GPL(of_pci_get_host_bridge_resources);
+EXPORT_SYMBOL_GPL(devm_of_pci_get_host_bridge_resources);
 #endif /* CONFIG_OF_ADDRESS */
 
 /**
···
 			struct resource **bus_range)
 {
 	int err, res_valid = 0;
-	struct device_node *np = dev->of_node;
 	resource_size_t iobase;
 	struct resource_entry *win, *tmp;
 
 	INIT_LIST_HEAD(resources);
-	err = of_pci_get_host_bridge_resources(np, 0, 0xff, resources, &iobase);
+	err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, resources,
+						    &iobase);
 	if (err)
 		return err;
 
+43 -12
drivers/pci/pci-acpi.c
···
 
 /**
  * pciehp_is_native - Check whether a hotplug port is handled by the OS
- * @pdev: Hotplug port to check
+ * @bridge: Hotplug port to check
  *
- * Walk up from @pdev to the host bridge, obtain its cached _OSC Control Field
- * and return the value of the "PCI Express Native Hot Plug control" bit.
- * On failure to obtain the _OSC Control Field return %false.
+ * Returns true if the given @bridge is handled by the native PCIe hotplug
+ * driver.
  */
-bool pciehp_is_native(struct pci_dev *pdev)
+bool pciehp_is_native(struct pci_dev *bridge)
 {
-	struct acpi_pci_root *root;
-	acpi_handle handle;
+	const struct pci_host_bridge *host;
+	u32 slot_cap;
 
-	handle = acpi_find_root_bridge_handle(pdev);
-	if (!handle)
+	if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
 		return false;
 
-	root = acpi_pci_find_root(handle);
-	if (!root)
+	pcie_capability_read_dword(bridge, PCI_EXP_SLTCAP, &slot_cap);
+	if (!(slot_cap & PCI_EXP_SLTCAP_HPC))
 		return false;
 
-	return root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL;
+	if (pcie_ports_native)
+		return true;
+
+	host = pci_find_host_bridge(bridge->bus);
+	return host->native_pcie_hotplug;
+}
+
+/**
+ * shpchp_is_native - Check whether a hotplug port is handled by the OS
+ * @bridge: Hotplug port to check
+ *
+ * Returns true if the given @bridge is handled by the native SHPC hotplug
+ * driver.
+ */
+bool shpchp_is_native(struct pci_dev *bridge)
+{
+	const struct pci_host_bridge *host;
+
+	if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_SHPC))
+		return false;
+
+	/*
+	 * It is assumed that AMD GOLAM chips support SHPC but they do not
+	 * have SHPC capability.
+	 */
+	if (bridge->vendor == PCI_VENDOR_ID_AMD &&
+	    bridge->device == PCI_DEVICE_ID_AMD_GOLAM_7450)
+		return true;
+
+	if (!pci_find_capability(bridge, PCI_CAP_ID_SHPC))
+		return false;
+
+	host = pci_find_host_bridge(bridge->bus);
+	return host->native_shpc_hotplug;
 }
 
 /**
+1 -1
drivers/pci/pci-driver.c
···
 	return 0;
 }
 
-#if defined(CONFIG_PCIEAER) || defined(CONFIG_EEH)
+#if defined(CONFIG_PCIEPORTBUS) || defined(CONFIG_EEH)
 /**
  * pci_uevent_ers - emit a uevent during recovery path of PCI device
  * @pdev: PCI device undergoing error recovery
+54
drivers/pci/pci-pf-stub.c
···
+// SPDX-License-Identifier: GPL-2.0
+/* pci-pf-stub - simple stub driver for PCI SR-IOV PF device
+ *
+ * This driver is meant to act as a "whitelist" for devices that provide
+ * SR-IOV functionality while at the same time not actually needing a
+ * driver of their own.
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+
+/**
+ * pci_pf_stub_whitelist - White list of devices to bind pci-pf-stub onto
+ *
+ * This table provides the list of IDs this driver is supposed to bind
+ * onto.  You could think of this as a list of "quirked" devices where we
+ * are adding support for SR-IOV here since there are no other drivers
+ * that they would be running under.
+ */
+static const struct pci_device_id pci_pf_stub_whitelist[] = {
+	{ PCI_VDEVICE(AMAZON, 0x0053) },
+	/* required last entry */
+	{ 0 }
+};
+MODULE_DEVICE_TABLE(pci, pci_pf_stub_whitelist);
+
+static int pci_pf_stub_probe(struct pci_dev *dev,
+			     const struct pci_device_id *id)
+{
+	pci_info(dev, "claimed by pci-pf-stub\n");
+	return 0;
+}
+
+static struct pci_driver pf_stub_driver = {
+	.name			= "pci-pf-stub",
+	.id_table		= pci_pf_stub_whitelist,
+	.probe			= pci_pf_stub_probe,
+	.sriov_configure	= pci_sriov_configure_simple,
+};
+
+static int __init pci_pf_stub_init(void)
+{
+	return pci_register_driver(&pf_stub_driver);
+}
+
+static void __exit pci_pf_stub_exit(void)
+{
+	pci_unregister_driver(&pf_stub_driver);
+}
+
+module_init(pci_pf_stub_init);
+module_exit(pci_pf_stub_exit);
+
+MODULE_LICENSE("GPL");
+9 -6
drivers/pci/pci-sysfs.c
···
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
-	if (!val) {
-		if (pci_is_enabled(pdev))
-			pci_disable_device(pdev);
-		else
-			result = -EIO;
-	} else
+	device_lock(dev);
+	if (dev->driver)
+		result = -EBUSY;
+	else if (val)
 		result = pci_enable_device(pdev);
+	else if (pci_is_enabled(pdev))
+		pci_disable_device(pdev);
+	else
+		result = -EIO;
+	device_unlock(dev);
 
 	return result < 0 ? result : count;
 }
+42 -47
drivers/pci/pci.c
···
 /* If set, the PCIe ARI capability will not be used. */
 static bool pcie_ari_disabled;
 
+/* If set, the PCIe ATS capability will not be used. */
+static bool pcie_ats_disabled;
+
+bool pci_ats_disabled(void)
+{
+	return pcie_ats_disabled;
+}
+
 /* Disable bridge_d3 for all PCIe ports */
 static bool pci_bridge_d3_disable;
 /* Force bridge_d3 for all PCIe ports */
···
 
 	return pci_dev_wait(dev, "PM D3->D0", PCIE_RESET_READY_POLL_MS);
 }
+/**
+ * pcie_wait_for_link - Wait until link is active or inactive
+ * @pdev: Bridge device
+ * @active: waiting for active or inactive?
+ *
+ * Use this to wait till link becomes active or inactive.
+ */
+bool pcie_wait_for_link(struct pci_dev *pdev, bool active)
+{
+	int timeout = 1000;
+	bool ret;
+	u16 lnk_status;
+
+	for (;;) {
+		pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
+		ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
+		if (ret == active)
+			return true;
+		if (timeout <= 0)
+			break;
+		msleep(10);
+		timeout -= 10;
+	}
+
+	pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
+		 active ? "set" : "cleared");
+
+	return false;
+}
 
 void pci_reset_secondary_bus(struct pci_dev *dev)
 {
···
 EXPORT_SYMBOL(pcie_set_mps);
 
 /**
- * pcie_get_minimum_link - determine minimum link settings of a PCI device
- * @dev: PCI device to query
- * @speed: storage for minimum speed
- * @width: storage for minimum width
- *
- * This function will walk up the PCI device chain and determine the minimum
- * link width and speed of the device.
- */
-int pcie_get_minimum_link(struct pci_dev *dev, enum pci_bus_speed *speed,
-			  enum pcie_link_width *width)
-{
-	int ret;
-
-	*speed = PCI_SPEED_UNKNOWN;
-	*width = PCIE_LNK_WIDTH_UNKNOWN;
-
-	while (dev) {
-		u16 lnksta;
-		enum pci_bus_speed next_speed;
-		enum pcie_link_width next_width;
-
-		ret = pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
-		if (ret)
-			return ret;
-
-		next_speed = pcie_link_speed[lnksta & PCI_EXP_LNKSTA_CLS];
-		next_width = (lnksta & PCI_EXP_LNKSTA_NLW) >>
-			PCI_EXP_LNKSTA_NLW_SHIFT;
-
-		if (next_speed < *speed)
-			*speed = next_speed;
-
-		if (next_width < *width)
-			*width = next_width;
-
-		dev = dev->bus->self;
-	}
-
-	return 0;
-}
-EXPORT_SYMBOL(pcie_get_minimum_link);
-
-/**
  * pcie_bandwidth_available - determine minimum link settings of a PCIe
  *			      device and its bandwidth limitation
  * @dev: PCI device to query
···
 #endif
 }
 
-#ifdef CONFIG_PCI_DOMAINS
+#ifdef CONFIG_PCI_DOMAINS_GENERIC
 static atomic_t __domain_nr = ATOMIC_INIT(-1);
 
-int pci_get_new_domain_nr(void)
+static int pci_get_new_domain_nr(void)
 {
 	return atomic_inc_return(&__domain_nr);
 }
 
-#ifdef CONFIG_PCI_DOMAINS_GENERIC
 static int of_pci_bus_find_domain_nr(struct device *parent)
 {
 	static int use_dt_domains = -1;
···
 		       acpi_pci_bus_find_domain_nr(bus);
 }
 #endif
-#endif
 
 /**
  * pci_ext_cfg_avail - can we access extended PCI config space?
···
 	if (*str && (str = pcibios_setup(str)) && *str) {
 		if (!strcmp(str, "nomsi")) {
 			pci_no_msi();
+		} else if (!strncmp(str, "noats", 5)) {
+			pr_info("PCIe: ATS is disabled\n");
+			pcie_ats_disabled = true;
 		} else if (!strcmp(str, "noaer")) {
 			pci_no_aer();
 		} else if (!strncmp(str, "realloc=", 8)) {
+45
drivers/pci/pci.h
···
 
 void pci_enable_acs(struct pci_dev *dev);
 
+/* PCI error reporting and recovery */
+void pcie_do_fatal_recovery(struct pci_dev *dev, u32 service);
+void pcie_do_nonfatal_recovery(struct pci_dev *dev);
+
+bool pcie_wait_for_link(struct pci_dev *pdev, bool active);
 #ifdef CONFIG_PCIEASPM
 void pcie_aspm_init_link_state(struct pci_dev *pdev);
 void pcie_aspm_exit_link_state(struct pci_dev *pdev);
···
 {
 	return 1ULL << (size + 20);
 }
+
+struct device_node;
+
+#ifdef CONFIG_OF
+int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
+int of_get_pci_domain_nr(struct device_node *node);
+int of_pci_get_max_link_speed(struct device_node *node);
+
+#else
+static inline int
+of_pci_parse_bus_range(struct device_node *node, struct resource *res)
+{
+	return -EINVAL;
+}
+
+static inline int
+of_get_pci_domain_nr(struct device_node *node)
+{
+	return -1;
+}
+
+static inline int
+of_pci_get_max_link_speed(struct device_node *node)
+{
+	return -EINVAL;
+}
+#endif /* CONFIG_OF */
+
+#if defined(CONFIG_OF_ADDRESS)
+int devm_of_pci_get_host_bridge_resources(struct device *dev,
+			unsigned char busno, unsigned char bus_max,
+			struct list_head *resources, resource_size_t *io_base);
+#else
+static inline int devm_of_pci_get_host_bridge_resources(struct device *dev,
+			unsigned char busno, unsigned char bus_max,
+			struct list_head *resources, resource_size_t *io_base)
+{
+	return -EINVAL;
+}
+#endif
 
 #endif /* DRIVERS_PCI_H */
+1 -1
drivers/pci/pcie/Makefile
···
 #
 # Makefile for PCI Express features and port driver
 
-pcieportdrv-y			:= portdrv_core.o portdrv_pci.o
+pcieportdrv-y			:= portdrv_core.o portdrv_pci.o err.o
 
 obj-$(CONFIG_PCIEPORTBUS)	+= pcieportdrv.o
 
+4 -7
drivers/pci/pcie/aer/aerdrv.c
···
  */
 static void aer_enable_rootport(struct aer_rpc *rpc)
 {
-	struct pci_dev *pdev = rpc->rpd->port;
+	struct pci_dev *pdev = rpc->rpd;
 	int aer_pos;
 	u16 reg16;
 	u32 reg32;
···
  */
 static void aer_disable_rootport(struct aer_rpc *rpc)
 {
-	struct pci_dev *pdev = rpc->rpd->port;
+	struct pci_dev *pdev = rpc->rpd;
 	u32 reg32;
 	int pos;
 
···
 	/* Initialize Root lock access, e_lock, to Root Error Status Reg */
 	spin_lock_init(&rpc->e_lock);
 
-	rpc->rpd = dev;
+	rpc->rpd = dev->port;
 	INIT_WORK(&rpc->dpc_handler, aer_isr);
 	mutex_init(&rpc->rpc_mutex);
 
···
 	pos = dev->aer_cap;
 	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status);
 	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &mask);
-	if (dev->error_state == pci_channel_io_normal)
-		status &= ~mask; /* Clear corresponding nonfatal bits */
-	else
-		status &= mask; /* Clear corresponding fatal bits */
+	status &= ~mask; /* Clear corresponding nonfatal bits */
 	pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status);
 }
 
+1 -31
drivers/pci/pcie/aer/aerdrv.h
···
 };
 
 struct aer_rpc {
-	struct pcie_device *rpd;	/* Root Port device */
+	struct pci_dev *rpd;		/* Root Port device */
 	struct work_struct dpc_handler;
 	struct aer_err_source e_sources[AER_ERROR_SOURCES_MAX];
 	struct aer_err_info e_info;
···
 	 * root port hierarchy
 	 */
 };
-
-struct aer_broadcast_data {
-	enum pci_channel_state state;
-	enum pci_ers_result result;
-};
-
-static inline pci_ers_result_t merge_result(enum pci_ers_result orig,
-		enum pci_ers_result new)
-{
-	if (new == PCI_ERS_RESULT_NO_AER_DRIVER)
-		return PCI_ERS_RESULT_NO_AER_DRIVER;
-
-	if (new == PCI_ERS_RESULT_NONE)
-		return orig;
-
-	switch (orig) {
-	case PCI_ERS_RESULT_CAN_RECOVER:
-	case PCI_ERS_RESULT_RECOVERED:
-		orig = new;
-		break;
-	case PCI_ERS_RESULT_DISCONNECT:
-		if (new == PCI_ERS_RESULT_NEED_RESET)
-			orig = PCI_ERS_RESULT_NEED_RESET;
-		break;
-	default:
-		break;
-	}
-
-	return orig;
-}
 
 extern struct bus_type pcie_port_bus_type;
 void aer_isr(struct work_struct *work);
+42 -355
drivers/pci/pcie/aer/aerdrv_core.c
···
 #include <linux/slab.h>
 #include <linux/kfifo.h>
 #include "aerdrv.h"
+#include "../../pci.h"
 
 #define PCI_EXP_AER_FLAGS	(PCI_EXP_DEVCTL_CERE | PCI_EXP_DEVCTL_NFERE | \
 				 PCI_EXP_DEVCTL_FERE | PCI_EXP_DEVCTL_URRE)
···
 	return true;
 }
 
-static int report_error_detected(struct pci_dev *dev, void *data)
-{
-	pci_ers_result_t vote;
-	const struct pci_error_handlers *err_handler;
-	struct aer_broadcast_data *result_data;
-	result_data = (struct aer_broadcast_data *) data;
-
-	device_lock(&dev->dev);
-	dev->error_state = result_data->state;
-
-	if (!dev->driver ||
-		!dev->driver->err_handler ||
-		!dev->driver->err_handler->error_detected) {
-		if (result_data->state == pci_channel_io_frozen &&
-			dev->hdr_type != PCI_HEADER_TYPE_BRIDGE) {
-			/*
-			 * In case of fatal recovery, if one of down-
-			 * stream device has no driver. We might be
-			 * unable to recover because a later insmod
-			 * of a driver for this device is unaware of
-			 * its hw state.
-			 */
-			pci_printk(KERN_DEBUG, dev, "device has %s\n",
-				   dev->driver ?
-				   "no AER-aware driver" : "no driver");
-		}
-
-		/*
-		 * If there's any device in the subtree that does not
-		 * have an error_detected callback, returning
-		 * PCI_ERS_RESULT_NO_AER_DRIVER prevents calling of
-		 * the subsequent mmio_enabled/slot_reset/resume
-		 * callbacks of "any" device in the subtree. All the
-		 * devices in the subtree are left in the error state
-		 * without recovery.
-		 */
-
-		if (dev->hdr_type != PCI_HEADER_TYPE_BRIDGE)
-			vote = PCI_ERS_RESULT_NO_AER_DRIVER;
-		else
-			vote = PCI_ERS_RESULT_NONE;
-	} else {
-		err_handler = dev->driver->err_handler;
-		vote = err_handler->error_detected(dev, result_data->state);
-		pci_uevent_ers(dev, PCI_ERS_RESULT_NONE);
-	}
-
-	result_data->result = merge_result(result_data->result, vote);
-	device_unlock(&dev->dev);
-	return 0;
-}
-
-static int report_mmio_enabled(struct pci_dev *dev, void *data)
-{
-	pci_ers_result_t vote;
-	const struct pci_error_handlers *err_handler;
-	struct aer_broadcast_data *result_data;
-	result_data = (struct aer_broadcast_data *) data;
-
-	device_lock(&dev->dev);
-	if (!dev->driver ||
-		!dev->driver->err_handler ||
-		!dev->driver->err_handler->mmio_enabled)
-		goto out;
-
-	err_handler = dev->driver->err_handler;
-	vote = err_handler->mmio_enabled(dev);
-	result_data->result = merge_result(result_data->result, vote);
-out:
-	device_unlock(&dev->dev);
-	return 0;
-}
-
-static int report_slot_reset(struct pci_dev *dev, void *data)
-{
-	pci_ers_result_t vote;
-	const struct pci_error_handlers *err_handler;
-	struct aer_broadcast_data *result_data;
-	result_data = (struct aer_broadcast_data *) data;
-
-	device_lock(&dev->dev);
-	if (!dev->driver ||
-		!dev->driver->err_handler ||
-		!dev->driver->err_handler->slot_reset)
-		goto out;
-
-	err_handler = dev->driver->err_handler;
-	vote = err_handler->slot_reset(dev);
-	result_data->result = merge_result(result_data->result, vote);
-out:
-	device_unlock(&dev->dev);
-	return 0;
-}
-
-static int report_resume(struct pci_dev *dev, void *data)
-{
-	const struct pci_error_handlers *err_handler;
-
-	device_lock(&dev->dev);
-	dev->error_state = pci_channel_io_normal;
-
-	if (!dev->driver ||
-		!dev->driver->err_handler ||
-		!dev->driver->err_handler->resume)
-		goto out;
-
-	err_handler = dev->driver->err_handler;
-	err_handler->resume(dev);
-	pci_uevent_ers(dev, PCI_ERS_RESULT_RECOVERED);
-out:
-	device_unlock(&dev->dev);
-	return 0;
-}
-
-/**
- * broadcast_error_message - handle message broadcast to downstream drivers
- * @dev: pointer to from where in a hierarchy message is broadcasted down
- * @state: error state
- * @error_mesg: message to print
- * @cb: callback to be broadcasted
- *
- * Invoked during error recovery process. Once being invoked, the content
- * of error severity will be broadcasted to all downstream drivers in a
- * hierarchy in question.
- */
-static pci_ers_result_t broadcast_error_message(struct pci_dev *dev,
-	enum pci_channel_state state,
-	char *error_mesg,
-	int (*cb)(struct pci_dev *, void *))
-{
-	struct aer_broadcast_data result_data;
-
-	pci_printk(KERN_DEBUG, dev, "broadcast %s message\n", error_mesg);
-	result_data.state = state;
-	if (cb == report_error_detected)
-		result_data.result = PCI_ERS_RESULT_CAN_RECOVER;
-	else
-		result_data.result = PCI_ERS_RESULT_RECOVERED;
-
-	if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) {
-		/*
-		 * If the error is reported by a bridge, we think this error
-		 * is related to the downstream link of the bridge, so we
-		 * do error recovery on all subordinates of the bridge instead
-		 * of the bridge and clear the error status of the bridge.
-		 */
-		if (cb == report_error_detected)
-			dev->error_state = state;
-		pci_walk_bus(dev->subordinate, cb, &result_data);
-		if (cb == report_resume) {
-			pci_cleanup_aer_uncorrect_error_status(dev);
-			dev->error_state = pci_channel_io_normal;
-		}
-	} else {
-		/*
-		 * If the error is reported by an end point, we think this
-		 * error is related to the upstream link of the end point.
-		 */
-		if (state == pci_channel_io_normal)
-			/*
-			 * the error is non fatal so the bus is ok, just invoke
-			 * the callback for the function that logged the error.
-			 */
-			cb(dev, &result_data);
-		else
-			pci_walk_bus(dev->bus, cb, &result_data);
-	}
-
-	return result_data.result;
-}
-
-/**
- * default_reset_link - default reset function
- * @dev: pointer to pci_dev data structure
- *
- * Invoked when performing link reset on a Downstream Port or a
- * Root Port with no aer driver.
- */
-static pci_ers_result_t default_reset_link(struct pci_dev *dev)
-{
-	pci_reset_bridge_secondary_bus(dev);
-	pci_printk(KERN_DEBUG, dev, "downstream link has been reset\n");
-	return PCI_ERS_RESULT_RECOVERED;
-}
-
-static int find_aer_service_iter(struct device *device, void *data)
-{
-	struct pcie_port_service_driver *service_driver, **drv;
-
-	drv = (struct pcie_port_service_driver **) data;
-
-	if (device->bus == &pcie_port_bus_type && device->driver) {
-		service_driver = to_service_driver(device->driver);
-		if (service_driver->service == PCIE_PORT_SERVICE_AER) {
-			*drv = service_driver;
-			return 1;
-		}
-	}
-
-	return 0;
-}
-
-static struct pcie_port_service_driver *find_aer_service(struct pci_dev *dev)
-{
-	struct pcie_port_service_driver *drv = NULL;
-
-	device_for_each_child(&dev->dev, &drv, find_aer_service_iter);
-
-	return drv;
-}
-
-static pci_ers_result_t reset_link(struct pci_dev *dev)
-{
-	struct pci_dev *udev;
-	pci_ers_result_t status;
-	struct pcie_port_service_driver *driver;
-
-	if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) {
-		/* Reset this port for all subordinates */
-		udev = dev;
-	} else {
-		/* Reset the upstream component (likely downstream port) */
-		udev = dev->bus->self;
-	}
-
-	/* Use the aer driver of the component firstly */
-	driver = find_aer_service(udev);
-
-	if (driver && driver->reset_link) {
-		status = driver->reset_link(udev);
-	} else if (udev->has_secondary_link) {
-		status = default_reset_link(udev);
-	} else {
-		pci_printk(KERN_DEBUG, dev, "no link-reset support at upstream device %s\n",
-			   pci_name(udev));
-		return PCI_ERS_RESULT_DISCONNECT;
-	}
-
-	if (status != PCI_ERS_RESULT_RECOVERED) {
-		pci_printk(KERN_DEBUG, dev, "link reset at upstream device %s failed\n",
-			   pci_name(udev));
-		return PCI_ERS_RESULT_DISCONNECT;
-	}
-
-	return status;
-}
-
-/**
- * do_recovery - handle nonfatal/fatal error recovery process
- * @dev: pointer to a pci_dev data structure of agent detecting an error
- * @severity: error severity type
- *
- * Invoked when an error is nonfatal/fatal. Once being invoked, broadcast
- * error detected message to all downstream drivers within a hierarchy in
- * question and return the returned code.
- */
-static void do_recovery(struct pci_dev *dev, int severity)
-{
-	pci_ers_result_t status, result = PCI_ERS_RESULT_RECOVERED;
-	enum pci_channel_state state;
-
-	if (severity == AER_FATAL)
-		state = pci_channel_io_frozen;
-	else
-		state = pci_channel_io_normal;
-
-	status = broadcast_error_message(dev,
-			state,
-			"error_detected",
-			report_error_detected);
-
-	if (severity == AER_FATAL) {
-		result = reset_link(dev);
-		if (result != PCI_ERS_RESULT_RECOVERED)
-			goto failed;
-	}
-
-	if (status == PCI_ERS_RESULT_CAN_RECOVER)
-		status = broadcast_error_message(dev,
-				state,
-				"mmio_enabled",
-				report_mmio_enabled);
-
-	if (status == PCI_ERS_RESULT_NEED_RESET) {
-		/*
-		 * TODO: Should call platform-specific
-		 * functions to reset slot before calling
-		 * drivers' slot_reset callbacks?
-		 */
-		status = broadcast_error_message(dev,
-				state,
-				"slot_reset",
-				report_slot_reset);
-	}
-
-	if (status != PCI_ERS_RESULT_RECOVERED)
-		goto failed;
-
-	broadcast_error_message(dev,
-			state,
-			"resume",
-			report_resume);
-
-	pci_info(dev, "AER: Device recovery successful\n");
-	return;
-
-failed:
-	pci_uevent_ers(dev, PCI_ERS_RESULT_DISCONNECT);
-	/* TODO: Should kernel panic here? */
-	pci_info(dev, "AER: Device recovery failed\n");
-}
-
 /**
  * handle_error_source - handle logging error into an event log
- * @aerdev: pointer to pcie_device data structure of the root port
  * @dev: pointer to pci_dev data structure of error source device
  * @info: comprehensive error information
  *
  * Invoked when an error being detected by Root Port.
  */
-static void handle_error_source(struct pcie_device *aerdev,
-	struct pci_dev *dev,
-	struct aer_err_info *info)
+static void handle_error_source(struct pci_dev *dev, struct aer_err_info *info)
 {
 	int pos;
 
···
 		if (pos)
 			pci_write_config_dword(dev, pos + PCI_ERR_COR_STATUS,
 					info->status);
-	} else
-		do_recovery(dev, info->severity);
+	} else if (info->severity == AER_NONFATAL)
+		pcie_do_nonfatal_recovery(dev);
+	else if (info->severity == AER_FATAL)
+		pcie_do_fatal_recovery(dev, PCIE_PORT_SERVICE_AER);
 }
 
 #ifdef CONFIG_ACPI_APEI_PCIEAER
-static void aer_recover_work_func(struct work_struct *work);
 
 #define AER_RECOVER_RING_ORDER		4
 #define AER_RECOVER_RING_SIZE		(1 << AER_RECOVER_RING_ORDER)
···
 
 static DEFINE_KFIFO(aer_recover_ring, struct aer_recover_entry,
 		    AER_RECOVER_RING_SIZE);
+
+static void aer_recover_work_func(struct work_struct *work)
+{
+	struct aer_recover_entry entry;
+	struct pci_dev *pdev;
+
+	while (kfifo_get(&aer_recover_ring, &entry)) {
+		pdev = pci_get_domain_bus_and_slot(entry.domain, entry.bus,
+						   entry.devfn);
+		if (!pdev) {
+			pr_err("AER recover: Can not find pci_dev for %04x:%02x:%02x:%x\n",
+			       entry.domain, entry.bus,
+			       PCI_SLOT(entry.devfn), PCI_FUNC(entry.devfn));
+			continue;
+		}
+		cper_print_aer(pdev, entry.severity, entry.regs);
+		if (entry.severity == AER_NONFATAL)
+			pcie_do_nonfatal_recovery(pdev);
+		else if (entry.severity == AER_FATAL)
+			pcie_do_fatal_recovery(pdev, PCIE_PORT_SERVICE_AER);
+		pci_dev_put(pdev);
+	}
+}
+
 /*
  * Mutual exclusion for writers of aer_recover_ring, reader side don't
  * need lock, because there is only one reader and lock is not needed
···
 	spin_unlock_irqrestore(&aer_recover_ring_lock, flags);
 }
 EXPORT_SYMBOL_GPL(aer_recover_queue);
-
-static void aer_recover_work_func(struct work_struct *work)
-{
-	struct aer_recover_entry entry;
-	struct pci_dev *pdev;
-
-	while (kfifo_get(&aer_recover_ring, &entry)) {
-		pdev = pci_get_domain_bus_and_slot(entry.domain, entry.bus,
-						   entry.devfn);
-		if (!pdev) {
-			pr_err("AER recover: Can not find pci_dev for %04x:%02x:%02x:%x\n",
-			       entry.domain, entry.bus,
-			       PCI_SLOT(entry.devfn), PCI_FUNC(entry.devfn));
-			continue;
-		}
-		cper_print_aer(pdev, entry.severity, entry.regs);
-		if (entry.severity != AER_CORRECTABLE)
-			do_recovery(pdev, entry.severity);
-		pci_dev_put(pdev);
-	}
-}
 #endif
 
 /**
···
 	return 1;
 }
 
-static inline void aer_process_err_devices(struct pcie_device *p_device,
-		struct aer_err_info *e_info)
+static inline void aer_process_err_devices(struct aer_err_info *e_info)
 {
 	int i;
 
···
 	}
 	for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
 		if (get_device_error_info(e_info->dev[i], e_info))
-			handle_error_source(p_device, e_info->dev[i], e_info);
+			handle_error_source(e_info->dev[i], e_info);
 	}
 }
 
 /**
  * aer_isr_one_error - consume an error detected by root port
- * @p_device: pointer to error root port service device
+ * @rpc: pointer to the root port which holds an error
  * @e_src: pointer to an error source
  */
-static void aer_isr_one_error(struct pcie_device *p_device,
+static void aer_isr_one_error(struct aer_rpc *rpc,
 		struct aer_err_source *e_src)
 {
-	struct aer_rpc *rpc = get_service_data(p_device);
+	struct pci_dev *pdev = rpc->rpd;
 	struct aer_err_info *e_info = &rpc->e_info;
 
 	/*
···
 			e_info->multi_error_valid = 1;
 		else
 			e_info->multi_error_valid = 0;
+		aer_print_port_info(pdev, e_info);
 
-		aer_print_port_info(p_device->port, e_info);
-
-		if (find_source_device(p_device->port, e_info))
-			aer_process_err_devices(p_device, e_info);
+		if (find_source_device(pdev, e_info))
+			aer_process_err_devices(e_info);
 	}
 
 	if (e_src->status & PCI_ERR_ROOT_UNCOR_RCV) {
···
 		else
 			e_info->multi_error_valid = 0;
 
-		aer_print_port_info(p_device->port, e_info);
+		aer_print_port_info(pdev, e_info);
 
-		if (find_source_device(p_device->port, e_info))
-			aer_process_err_devices(p_device, e_info);
+		if (find_source_device(pdev, e_info))
+			aer_process_err_devices(e_info);
 	}
 }
 
···
 void aer_isr(struct work_struct *work)
 {
 	struct aer_rpc *rpc = container_of(work, struct aer_rpc, dpc_handler);
-	struct pcie_device *p_device = rpc->rpd;
 	struct aer_err_source uninitialized_var(e_src);
 
 	mutex_lock(&rpc->rpc_mutex);
 	while (get_e_source(rpc, &e_src))
-		aer_isr_one_error(p_device, &e_src);
+		aer_isr_one_error(rpc, &e_src);
 	mutex_unlock(&rpc->rpc_mutex);
 }
+22 -16
drivers/pci/pcie/aer/aerdrv_errprint.c
···
 	int id = ((dev->bus->number << 8) | dev->devfn);
 
 	if (!info->status) {
-		pci_err(dev, "PCIe Bus Error: severity=%s, type=Unaccessible, id=%04x(Unregistered Agent ID)\n",
-			aer_error_severity_string[info->severity], id);
+		pci_err(dev, "PCIe Bus Error: severity=%s, type=Inaccessible, (Unregistered Agent ID)\n",
+			aer_error_severity_string[info->severity]);
 		goto out;
 	}
 
 	layer = AER_GET_LAYER_ERROR(info->severity, info->status);
 	agent = AER_GET_AGENT(info->severity, info->status);
 
-	pci_err(dev, "PCIe Bus Error: severity=%s, type=%s, id=%04x(%s)\n",
+	pci_err(dev, "PCIe Bus Error: severity=%s, type=%s, (%s)\n",
 		aer_error_severity_string[info->severity],
-		aer_error_layer[layer], id, aer_agent_string[agent]);
+		aer_error_layer[layer], aer_agent_string[agent]);
 
 	pci_err(dev, "  device [%04x:%04x] error status/mask=%08x/%08x\n",
 		dev->vendor, dev->device,
···
 
 out:
 	if (info->id && info->error_dev_num > 1 && info->id == id)
-		pci_err(dev, "  Error of this Agent(%04x) is reported first\n", id);
+		pci_err(dev, "  Error of this Agent is reported first\n");
 
 	trace_aer_event(dev_name(&dev->dev), (info->status & ~info->mask),
-			info->severity);
+			info->severity, info->tlp_header_valid, &info->tlp);
 }
 
 void aer_print_port_info(struct pci_dev *dev, struct aer_err_info *info)
 {
-	pci_info(dev, "AER: %s%s error received: id=%04x\n",
+	u8 bus = info->id >> 8;
+	u8 devfn = info->id & 0xff;
+
+	pci_info(dev, "AER: %s%s error received: %04x:%02x:%02x.%d\n",
 		info->multi_error_valid ? "Multiple " : "",
-		aer_error_severity_string[info->severity], info->id);
+		aer_error_severity_string[info->severity],
+		pci_domain_nr(dev->bus), bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
 }
 
 #ifdef CONFIG_ACPI_APEI_PCIEAER
···
 void cper_print_aer(struct pci_dev *dev, int aer_severity,
 		    struct aer_capability_regs *aer)
 {
-	int layer, agent, status_strs_size, tlp_header_valid = 0;
+	int layer, agent, tlp_header_valid = 0;
 	u32 status, mask;
-	const char **status_strs;
+	struct aer_err_info info;
 
 	if (aer_severity == AER_CORRECTABLE) {
 		status = aer->cor_status;
 		mask = aer->cor_mask;
-		status_strs = aer_correctable_error_string;
-		status_strs_size = ARRAY_SIZE(aer_correctable_error_string);
 	} else {
 		status = aer->uncor_status;
 		mask = aer->uncor_mask;
-		status_strs = aer_uncorrectable_error_string;
-		status_strs_size = ARRAY_SIZE(aer_uncorrectable_error_string);
 		tlp_header_valid = status & AER_LOG_TLP_MASKS;
 	}
 
 	layer = AER_GET_LAYER_ERROR(aer_severity, status);
 	agent = AER_GET_AGENT(aer_severity, status);
 
+	memset(&info, 0, sizeof(info));
+	info.severity = aer_severity;
+	info.status = status;
+	info.mask = mask;
+	info.first_error = PCI_ERR_CAP_FEP(aer->cap_control);
+
 	pci_err(dev, "aer_status: 0x%08x, aer_mask: 0x%08x\n", status, mask);
-	cper_print_bits("", status, status_strs, status_strs_size);
+	__aer_print_error(dev, &info);
 	pci_err(dev, "aer_layer=%s, aer_agent=%s\n",
 		aer_error_layer[layer], aer_agent_string[agent]);
 
···
 		__print_tlp_header(dev, &aer->header_log);
 
 	trace_aer_event(dev_name(&dev->dev), (status & ~mask),
-			aer_severity);
+			aer_severity, tlp_header_valid, &aer->header_log);
 }
 #endif
+9
drivers/pci/pcie/aspm.c
···
 		info->l1ss_cap = 0;
 		return;
 	}
+
+	/*
+	 * If we don't have LTR for the entire path from the Root Complex
+	 * to this device, we can't use ASPM L1.2 because it relies on the
+	 * LTR_L1.2_THRESHOLD.  See PCIe r4.0, secs 5.5.4, 6.18.
+	 */
+	if (!pdev->ltr_path)
+		info->l1ss_cap &= ~PCI_L1SS_CAP_ASPM_L1_2;
+
 	pci_read_config_dword(pdev, info->l1ss_cap_ptr + PCI_L1SS_CTL1,
 			      &info->l1ss_ctl1);
 	pci_read_config_dword(pdev, info->l1ss_cap_ptr + PCI_L1SS_CTL2,
+38 -32
drivers/pci/pcie/dpc.c
···
 
 static void dpc_wait_link_inactive(struct dpc_dev *dpc)
 {
-	unsigned long timeout = jiffies + HZ;
 	struct pci_dev *pdev = dpc->dev->port;
-	struct device *dev = &dpc->dev->device;
-	u16 lnk_status;
 
-	pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
-	while (lnk_status & PCI_EXP_LNKSTA_DLLLA &&
-					!time_after(jiffies, timeout)) {
-		msleep(10);
-		pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
-	}
-	if (lnk_status & PCI_EXP_LNKSTA_DLLLA)
-		dev_warn(dev, "Link state not disabled for DPC event\n");
+	pcie_wait_for_link(pdev, false);
 }
 
-static void dpc_work(struct work_struct *work)
+static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
 {
-	struct dpc_dev *dpc = container_of(work, struct dpc_dev, work);
-	struct pci_dev *dev, *temp, *pdev = dpc->dev->port;
-	struct pci_bus *parent = pdev->subordinate;
-	u16 cap = dpc->cap_pos, ctl;
+	struct dpc_dev *dpc;
+	struct pcie_device *pciedev;
+	struct device *devdpc;
+	u16 cap, ctl;
 
-	pci_lock_rescan_remove();
-	list_for_each_entry_safe_reverse(dev, temp, &parent->devices,
-					 bus_list) {
-		pci_dev_get(dev);
-		pci_dev_set_disconnected(dev, NULL);
-		if (pci_has_subordinate(dev))
-			pci_walk_bus(dev->subordinate,
-				     pci_dev_set_disconnected, NULL);
-		pci_stop_and_remove_bus_device(dev);
-		pci_dev_put(dev);
-	}
-	pci_unlock_rescan_remove();
+	/*
+	 * DPC disables the Link automatically in hardware, so it has
+	 * already been reset by the time we get here.
+	 */
+	devdpc = pcie_port_find_device(pdev, PCIE_PORT_SERVICE_DPC);
+	pciedev = to_pcie_device(devdpc);
+	dpc = get_service_data(pciedev);
+	cap = dpc->cap_pos;
 
+	/*
+	 * Wait until the Link is inactive, then clear DPC Trigger Status
+	 * to allow the Port to leave DPC.
+	 */
 	dpc_wait_link_inactive(dpc);
+
 	if (dpc->rp_extensions && dpc_wait_rp_inactive(dpc))
-		return;
+		return PCI_ERS_RESULT_DISCONNECT;
 	if (dpc->rp_extensions && dpc->rp_pio_status) {
 		pci_write_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS,
 				       dpc->rp_pio_status);
···
 	}
 
 	pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
-		PCI_EXP_DPC_STATUS_TRIGGER | PCI_EXP_DPC_STATUS_INTERRUPT);
+			      PCI_EXP_DPC_STATUS_TRIGGER);
 
 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_CTL, &ctl);
 	pci_write_config_word(pdev, cap + PCI_EXP_DPC_CTL,
 			      ctl | PCI_EXP_DPC_CTL_INT_EN);
+
+	return PCI_ERS_RESULT_RECOVERED;
+}
+
+static void dpc_work(struct work_struct *work)
+{
+	struct dpc_dev *dpc = container_of(work, struct dpc_dev, work);
+	struct pci_dev *pdev = dpc->dev->port;
+
+	/* We configure DPC so it only triggers on ERR_FATAL */
+	pcie_do_fatal_recovery(pdev, PCIE_PORT_SERVICE_DPC);
 }
 
 static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
···
 	if (dpc->rp_extensions && reason == 3 && ext_reason == 0)
 		dpc_process_rp_pio_error(dpc);
 
+	pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
+			      PCI_EXP_DPC_STATUS_INTERRUPT);
+
 	schedule_work(&dpc->work);
 
 	return IRQ_HANDLED;
···
 		}
 	}
 
-	ctl = (ctl & 0xfff4) | PCI_EXP_DPC_CTL_EN_NONFATAL | PCI_EXP_DPC_CTL_INT_EN;
+	ctl = (ctl & 0xfff4) | PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN;
 	pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl);
 
 	dev_info(device, "DPC error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n",
···
 	u16 ctl;
 
 	pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, &ctl);
-	ctl &= ~(PCI_EXP_DPC_CTL_EN_NONFATAL | PCI_EXP_DPC_CTL_INT_EN);
+	ctl &= ~(PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN);
 	pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl);
 }
 
···
 	.service	= PCIE_PORT_SERVICE_DPC,
 	.probe		= dpc_probe,
 	.remove		= dpc_remove,
+	.reset_link	= dpc_reset_link,
 };
 
 static int __init dpc_service_init(void)
+388
drivers/pci/pcie/err.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * This file implements the error recovery as a core part of PCIe error
+ * reporting. When a PCIe error is delivered, an error message will be
+ * collected and printed to console, then, an error recovery procedure
+ * will be executed by following the PCI error recovery rules.
+ *
+ * Copyright (C) 2006 Intel Corp.
+ *	Tom Long Nguyen (tom.l.nguyen@intel.com)
+ *	Zhang Yanmin (yanmin.zhang@intel.com)
+ */
+
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/aer.h>
+#include "portdrv.h"
+#include "../pci.h"
+
+struct aer_broadcast_data {
+	enum pci_channel_state state;
+	enum pci_ers_result result;
+};
+
+static pci_ers_result_t merge_result(enum pci_ers_result orig,
+				     enum pci_ers_result new)
+{
+	if (new == PCI_ERS_RESULT_NO_AER_DRIVER)
+		return PCI_ERS_RESULT_NO_AER_DRIVER;
+
+	if (new == PCI_ERS_RESULT_NONE)
+		return orig;
+
+	switch (orig) {
+	case PCI_ERS_RESULT_CAN_RECOVER:
+	case PCI_ERS_RESULT_RECOVERED:
+		orig = new;
+		break;
+	case PCI_ERS_RESULT_DISCONNECT:
+		if (new == PCI_ERS_RESULT_NEED_RESET)
+			orig = PCI_ERS_RESULT_NEED_RESET;
+		break;
+	default:
+		break;
+	}
+
+	return orig;
+}
+
+static int report_error_detected(struct pci_dev *dev, void *data)
+{
+	pci_ers_result_t vote;
+	const struct pci_error_handlers *err_handler;
+	struct aer_broadcast_data *result_data;
+
+	result_data = (struct aer_broadcast_data *) data;
+
+	device_lock(&dev->dev);
+	dev->error_state = result_data->state;
+
+	if (!dev->driver ||
+		!dev->driver->err_handler ||
+		!dev->driver->err_handler->error_detected) {
+		if (result_data->state == pci_channel_io_frozen &&
+			dev->hdr_type != PCI_HEADER_TYPE_BRIDGE) {
+			/*
+			 * In case of fatal recovery, if one of down-
+			 * stream device has no driver. We might be
+			 * unable to recover because a later insmod
+			 * of a driver for this device is unaware of
+			 * its hw state.
+			 */
+			pci_printk(KERN_DEBUG, dev, "device has %s\n",
+				   dev->driver ?
+				   "no AER-aware driver" : "no driver");
+		}
+
+		/*
+		 * If there's any device in the subtree that does not
+		 * have an error_detected callback, returning
+		 * PCI_ERS_RESULT_NO_AER_DRIVER prevents calling of
+		 * the subsequent mmio_enabled/slot_reset/resume
+		 * callbacks of "any" device in the subtree. All the
+		 * devices in the subtree are left in the error state
+		 * without recovery.
+		 */
+
+		if (dev->hdr_type != PCI_HEADER_TYPE_BRIDGE)
+			vote = PCI_ERS_RESULT_NO_AER_DRIVER;
+		else
+			vote = PCI_ERS_RESULT_NONE;
+	} else {
+		err_handler = dev->driver->err_handler;
+		vote = err_handler->error_detected(dev, result_data->state);
+		pci_uevent_ers(dev, PCI_ERS_RESULT_NONE);
+	}
+
+	result_data->result = merge_result(result_data->result, vote);
+	device_unlock(&dev->dev);
+	return 0;
+}
+
+static int report_mmio_enabled(struct pci_dev *dev, void *data)
+{
+	pci_ers_result_t vote;
+	const struct pci_error_handlers *err_handler;
+	struct aer_broadcast_data *result_data;
+
+	result_data = (struct aer_broadcast_data *) data;
+
+	device_lock(&dev->dev);
+	if (!dev->driver ||
+		!dev->driver->err_handler ||
+		!dev->driver->err_handler->mmio_enabled)
+		goto out;
+
+	err_handler = dev->driver->err_handler;
+	vote = err_handler->mmio_enabled(dev);
+	result_data->result = merge_result(result_data->result, vote);
+out:
+	device_unlock(&dev->dev);
+	return 0;
+}
+
+static int report_slot_reset(struct pci_dev *dev, void *data)
+{
+	pci_ers_result_t vote;
+	const struct pci_error_handlers *err_handler;
+	struct aer_broadcast_data *result_data;
+
+	result_data = (struct aer_broadcast_data *) data;
+
+	device_lock(&dev->dev);
+	if (!dev->driver ||
+		!dev->driver->err_handler ||
+		!dev->driver->err_handler->slot_reset)
+		goto out;
+
+	err_handler = dev->driver->err_handler;
+	vote = err_handler->slot_reset(dev);
+	result_data->result = merge_result(result_data->result, vote);
+out:
+	device_unlock(&dev->dev);
+	return 0;
+}
+
+static int report_resume(struct pci_dev *dev, void *data)
+{
+	const struct pci_error_handlers *err_handler;
+
+	device_lock(&dev->dev);
+	dev->error_state = pci_channel_io_normal;
+
+	if (!dev->driver ||
+		!dev->driver->err_handler ||
+		!dev->driver->err_handler->resume)
+		goto out;
+
+	err_handler = dev->driver->err_handler;
+	err_handler->resume(dev);
+	pci_uevent_ers(dev, PCI_ERS_RESULT_RECOVERED);
+out:
+	device_unlock(&dev->dev);
+	return 0;
+}
+
+/**
+ * default_reset_link - default reset function
+ * @dev: pointer to pci_dev data structure
+ *
+ * Invoked when performing link reset on a Downstream Port or a
+ * Root Port with no aer driver.
+ */
+static pci_ers_result_t default_reset_link(struct pci_dev *dev)
+{
+	pci_reset_bridge_secondary_bus(dev);
+	pci_printk(KERN_DEBUG, dev, "downstream link has been reset\n");
+	return PCI_ERS_RESULT_RECOVERED;
+}
+
+static pci_ers_result_t reset_link(struct pci_dev *dev, u32 service)
+{
+	struct pci_dev *udev;
+	pci_ers_result_t status;
+	struct pcie_port_service_driver *driver = NULL;
+
+	if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) {
+		/* Reset this port for all subordinates */
+		udev = dev;
+	} else {
+		/* Reset the upstream component (likely downstream port) */
+		udev = dev->bus->self;
+	}
+
+	/* Use the aer driver of the component firstly */
+	driver = pcie_port_find_service(udev, service);
+
+	if (driver && driver->reset_link) {
+		status = driver->reset_link(udev);
+	} else if (udev->has_secondary_link) {
+		status = default_reset_link(udev);
+	} else {
+		pci_printk(KERN_DEBUG, dev, "no link-reset support at upstream device %s\n",
+			   pci_name(udev));
+		return PCI_ERS_RESULT_DISCONNECT;
+	}
+
+	if (status != PCI_ERS_RESULT_RECOVERED) {
+		pci_printk(KERN_DEBUG, dev, "link reset at upstream device %s failed\n",
+			   pci_name(udev));
+		return PCI_ERS_RESULT_DISCONNECT;
+	}
+
+	return status;
+}
+
+/**
+ * broadcast_error_message - handle message broadcast to downstream drivers
+ * @dev: pointer to from where in a hierarchy message is broadcasted down
+ * @state: error state
+ * @error_mesg: message to print
+ * @cb: callback to be broadcasted
+ *
+ * Invoked during error recovery process. Once being invoked, the content
+ * of error severity will be broadcasted to all downstream drivers in a
+ * hierarchy in question.
+ */
+static pci_ers_result_t broadcast_error_message(struct pci_dev *dev,
+	enum pci_channel_state state,
+	char *error_mesg,
+	int (*cb)(struct pci_dev *, void *))
+{
+	struct aer_broadcast_data result_data;
+
+	pci_printk(KERN_DEBUG, dev, "broadcast %s message\n", error_mesg);
+	result_data.state = state;
+	if (cb == report_error_detected)
+		result_data.result = PCI_ERS_RESULT_CAN_RECOVER;
+	else
+		result_data.result = PCI_ERS_RESULT_RECOVERED;
+
+	if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) {
+		/*
+		 * If the error is reported by a bridge, we think this error
+		 * is related to the downstream link of the bridge, so we
+		 * do error recovery on all subordinates of the bridge instead
+		 * of the bridge and clear the error status of the bridge.
+		 */
+		if (cb == report_error_detected)
+			dev->error_state = state;
+		pci_walk_bus(dev->subordinate, cb, &result_data);
+		if (cb == report_resume) {
+			pci_cleanup_aer_uncorrect_error_status(dev);
+			dev->error_state = pci_channel_io_normal;
+		}
+	} else {
+		/*
+		 * If the error is reported by an end point, we think this
+		 * error is related to the upstream link of the end point.
+		 */
+		if (state == pci_channel_io_normal)
+			/*
+			 * the error is non fatal so the bus is ok, just invoke
+			 * the callback for the function that logged the error.
+			 */
+			cb(dev, &result_data);
+		else
+			pci_walk_bus(dev->bus, cb, &result_data);
+	}
+
+	return result_data.result;
+}
+
+/**
+ * pcie_do_fatal_recovery - handle fatal error recovery process
+ * @dev: pointer to a pci_dev data structure of agent detecting an error
+ *
+ * Invoked when an error is fatal. Once being invoked, removes the devices
+ * beneath this AER agent, followed by reset link e.g. secondary bus reset
+ * followed by re-enumeration of devices.
+ */
+void pcie_do_fatal_recovery(struct pci_dev *dev, u32 service)
+{
+	struct pci_dev *udev;
+	struct pci_bus *parent;
+	struct pci_dev *pdev, *temp;
+	pci_ers_result_t result;
+
+	if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE)
+		udev = dev;
+	else
+		udev = dev->bus->self;
+
+	parent = udev->subordinate;
+	pci_lock_rescan_remove();
+	list_for_each_entry_safe_reverse(pdev, temp, &parent->devices,
+					 bus_list) {
+		pci_dev_get(pdev);
+		pci_dev_set_disconnected(pdev, NULL);
+		if (pci_has_subordinate(pdev))
+			pci_walk_bus(pdev->subordinate,
+				     pci_dev_set_disconnected, NULL);
+		pci_stop_and_remove_bus_device(pdev);
+		pci_dev_put(pdev);
+	}
+
+	result = reset_link(udev, service);
+
+	if ((service == PCIE_PORT_SERVICE_AER) &&
+	    (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE)) {
+		/*
+		 * If the error is reported by a bridge, we think this error
+		 * is related to the downstream link of the bridge, so we
+		 * do error recovery on all subordinates of the bridge instead
+		 * of the bridge and clear the error status of the bridge.
+		 */
+		pci_cleanup_aer_uncorrect_error_status(dev);
+	}
+
+	if (result == PCI_ERS_RESULT_RECOVERED) {
+		if (pcie_wait_for_link(udev, true))
+			pci_rescan_bus(udev->bus);
+		pci_info(dev, "Device recovery from fatal error successful\n");
+	} else {
+		pci_uevent_ers(dev, PCI_ERS_RESULT_DISCONNECT);
+		pci_info(dev, "Device recovery from fatal error failed\n");
+	}
+
+	pci_unlock_rescan_remove();
+}
+
+/**
+ * pcie_do_nonfatal_recovery - handle nonfatal error recovery process
+ * @dev: pointer to a pci_dev data structure of agent detecting an error
+ *
+ * Invoked when an error is nonfatal/fatal. Once being invoked, broadcast
+ * error detected message to all downstream drivers within a hierarchy in
+ * question and return the returned code.
+ */
+void pcie_do_nonfatal_recovery(struct pci_dev *dev)
+{
+	pci_ers_result_t status;
+	enum pci_channel_state state;
+
+	state = pci_channel_io_normal;
+
+	status = broadcast_error_message(dev,
+			state,
+			"error_detected",
+			report_error_detected);
+
+	if (status == PCI_ERS_RESULT_CAN_RECOVER)
+		status = broadcast_error_message(dev,
+				state,
+				"mmio_enabled",
+				report_mmio_enabled);
+
+	if (status == PCI_ERS_RESULT_NEED_RESET) {
+		/*
+		 * TODO: Should call platform-specific
+		 * functions to reset slot before calling
+		 * drivers' slot_reset callbacks?
+		 */
+		status = broadcast_error_message(dev,
+				state,
+				"slot_reset",
+				report_slot_reset);
+	}
+
+	if (status != PCI_ERS_RESULT_RECOVERED)
+		goto failed;
+
+	broadcast_error_message(dev,
+			state,
+			"resume",
+			report_resume);
+
+	pci_info(dev, "AER: Device recovery successful\n");
+	return;
+
+failed:
+	pci_uevent_ers(dev, PCI_ERS_RESULT_DISCONNECT);
+
+	/* TODO: Should kernel panic here? */
+	pci_info(dev, "AER: Device recovery failed\n");
+}
+3 -2
drivers/pci/pcie/portdrv.h
···
  11   11
  12   12  #include <linux/compiler.h>
  13   13
  14      - extern bool pcie_ports_native;
  15      -
  16   14  /* Service Type */
  17   15  #define PCIE_PORT_SERVICE_PME_SHIFT 0 /* Power Management Event */
  18   16  #define PCIE_PORT_SERVICE_PME (1 << PCIE_PORT_SERVICE_PME_SHIFT)
···
 110  112  static inline void pcie_pme_interrupt_enable(struct pci_dev *dev, bool en) {}
 111  113  #endif /* !CONFIG_PCIE_PME */
 112  114
      115 + struct pcie_port_service_driver *pcie_port_find_service(struct pci_dev *dev,
      116 +                                                         u32 service);
      117 + struct device *pcie_port_find_device(struct pci_dev *dev, u32 service);
 113  118  #endif /* _PORTDRV_H_ */
-57
drivers/pci/pcie/portdrv_acpi.c
···
   1  - // SPDX-License-Identifier: GPL-2.0
   2  - /*
   3  -  * PCIe Port Native Services Support, ACPI-Related Part
   4  -  *
   5  -  * Copyright (C) 2010 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
   6  -  */
   7  -
   8  - #include <linux/pci.h>
   9  - #include <linux/kernel.h>
  10  - #include <linux/errno.h>
  11  - #include <linux/acpi.h>
  12  - #include <linux/pci-acpi.h>
  13  -
  14  - #include "aer/aerdrv.h"
  15  - #include "../pci.h"
  16  - #include "portdrv.h"
  17  -
  18  - /**
  19  -  * pcie_port_acpi_setup - Request the BIOS to release control of PCIe services.
  20  -  * @port: PCIe Port service for a root port or event collector.
  21  -  * @srv_mask: Bit mask of services that can be enabled for @port.
  22  -  *
  23  -  * Invoked when @port is identified as a PCIe port device. To avoid conflicts
  24  -  * with the BIOS PCIe port native services support requires the BIOS to yield
  25  -  * control of these services to the kernel. The mask of services that the BIOS
  26  -  * allows to be enabled for @port is written to @srv_mask.
  27  -  *
  28  -  * NOTE: It turns out that we cannot do that for individual port services
  29  -  * separately, because that would make some systems work incorrectly.
  30  -  */
  31  - void pcie_port_acpi_setup(struct pci_dev *port, int *srv_mask)
  32  - {
  33  -     struct acpi_pci_root *root;
  34  -     acpi_handle handle;
  35  -     u32 flags;
  36  -
  37  -     if (acpi_pci_disabled)
  38  -         return;
  39  -
  40  -     handle = acpi_find_root_bridge_handle(port);
  41  -     if (!handle)
  42  -         return;
  43  -
  44  -     root = acpi_pci_find_root(handle);
  45  -     if (!root)
  46  -         return;
  47  -
  48  -     flags = root->osc_control_set;
  49  -
  50  -     *srv_mask = 0;
  51  -     if (flags & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL)
  52  -         *srv_mask |= PCIE_PORT_SERVICE_HP;
  53  -     if (flags & OSC_PCI_EXPRESS_PME_CONTROL)
  54  -         *srv_mask |= PCIE_PORT_SERVICE_PME;
  55  -     if (flags & OSC_PCI_EXPRESS_AER_CONTROL)
  56  -         *srv_mask |= PCIE_PORT_SERVICE_AER | PCIE_PORT_SERVICE_DPC;
  57  - }
+70 -1
drivers/pci/pcie/portdrv_core.c
··· 19 19 #include "../pci.h" 20 20 #include "portdrv.h" 21 21 22 + struct portdrv_service_data { 23 + struct pcie_port_service_driver *drv; 24 + struct device *dev; 25 + u32 service; 26 + }; 27 + 22 28 /** 23 29 * release_pcie_device - free PCI Express port service device structure 24 30 * @dev: Port service device to release ··· 205 199 int services = 0; 206 200 207 201 if (dev->is_hotplug_bridge && 208 - (pcie_ports_native || host->native_hotplug)) { 202 + (pcie_ports_native || host->native_pcie_hotplug)) { 209 203 services |= PCIE_PORT_SERVICE_HP; 210 204 211 205 /* ··· 402 396 if (dev->bus == &pcie_port_bus_type) 403 397 device_unregister(dev); 404 398 return 0; 399 + } 400 + 401 + static int find_service_iter(struct device *device, void *data) 402 + { 403 + struct pcie_port_service_driver *service_driver; 404 + struct portdrv_service_data *pdrvs; 405 + u32 service; 406 + 407 + pdrvs = (struct portdrv_service_data *) data; 408 + service = pdrvs->service; 409 + 410 + if (device->bus == &pcie_port_bus_type && device->driver) { 411 + service_driver = to_service_driver(device->driver); 412 + if (service_driver->service == service) { 413 + pdrvs->drv = service_driver; 414 + pdrvs->dev = device; 415 + return 1; 416 + } 417 + } 418 + 419 + return 0; 420 + } 421 + 422 + /** 423 + * pcie_port_find_service - find the service driver 424 + * @dev: PCI Express port the service is associated with 425 + * @service: Service to find 426 + * 427 + * Find PCI Express port service driver associated with given service 428 + */ 429 + struct pcie_port_service_driver *pcie_port_find_service(struct pci_dev *dev, 430 + u32 service) 431 + { 432 + struct pcie_port_service_driver *drv; 433 + struct portdrv_service_data pdrvs; 434 + 435 + pdrvs.drv = NULL; 436 + pdrvs.service = service; 437 + device_for_each_child(&dev->dev, &pdrvs, find_service_iter); 438 + 439 + drv = pdrvs.drv; 440 + return drv; 441 + } 442 + 443 + /** 444 + * pcie_port_find_device - find the struct device 445 + * @dev: 
PCI Express port the service is associated with 446 + * @service: For the service to find 447 + * 448 + * Find the struct device associated with given service on a pci_dev 449 + */ 450 + struct device *pcie_port_find_device(struct pci_dev *dev, 451 + u32 service) 452 + { 453 + struct device *device; 454 + struct portdrv_service_data pdrvs; 455 + 456 + pdrvs.dev = NULL; 457 + pdrvs.service = service; 458 + device_for_each_child(&dev->dev, &pdrvs, find_service_iter); 459 + 460 + device = pdrvs.dev; 461 + return device; 405 462 } 406 463 407 464 /**
+82 -14
drivers/pci/probe.c
··· 526 526 527 527 if (bridge->release_fn) 528 528 bridge->release_fn(bridge); 529 + 530 + pci_free_resource_list(&bridge->windows); 529 531 } 530 532 531 533 static void pci_release_host_bridge_dev(struct device *dev) 532 534 { 533 535 devm_pci_release_host_bridge_dev(dev); 534 - pci_free_host_bridge(to_pci_host_bridge(dev)); 536 + kfree(to_pci_host_bridge(dev)); 535 537 } 536 538 537 539 struct pci_host_bridge *pci_alloc_host_bridge(size_t priv) ··· 554 552 * OS from interfering. 555 553 */ 556 554 bridge->native_aer = 1; 557 - bridge->native_hotplug = 1; 555 + bridge->native_pcie_hotplug = 1; 556 + bridge->native_shpc_hotplug = 1; 558 557 bridge->native_pme = 1; 558 + bridge->native_ltr = 1; 559 559 560 560 return bridge; 561 561 } ··· 886 882 return err; 887 883 } 888 884 885 + static bool pci_bridge_child_ext_cfg_accessible(struct pci_dev *bridge) 886 + { 887 + int pos; 888 + u32 status; 889 + 890 + /* 891 + * If extended config space isn't accessible on a bridge's primary 892 + * bus, we certainly can't access it on the secondary bus. 893 + */ 894 + if (bridge->bus->bus_flags & PCI_BUS_FLAGS_NO_EXTCFG) 895 + return false; 896 + 897 + /* 898 + * PCIe Root Ports and switch ports are PCIe on both sides, so if 899 + * extended config space is accessible on the primary, it's also 900 + * accessible on the secondary. 901 + */ 902 + if (pci_is_pcie(bridge) && 903 + (pci_pcie_type(bridge) == PCI_EXP_TYPE_ROOT_PORT || 904 + pci_pcie_type(bridge) == PCI_EXP_TYPE_UPSTREAM || 905 + pci_pcie_type(bridge) == PCI_EXP_TYPE_DOWNSTREAM)) 906 + return true; 907 + 908 + /* 909 + * For the other bridge types: 910 + * - PCI-to-PCI bridges 911 + * - PCIe-to-PCI/PCI-X forward bridges 912 + * - PCI/PCI-X-to-PCIe reverse bridges 913 + * extended config space on the secondary side is only accessible 914 + * if the bridge supports PCI-X Mode 2. 
915 + */ 916 + pos = pci_find_capability(bridge, PCI_CAP_ID_PCIX); 917 + if (!pos) 918 + return false; 919 + 920 + pci_read_config_dword(bridge, pos + PCI_X_STATUS, &status); 921 + return status & (PCI_X_STATUS_266MHZ | PCI_X_STATUS_533MHZ); 922 + } 923 + 889 924 static struct pci_bus *pci_alloc_child_bus(struct pci_bus *parent, 890 925 struct pci_dev *bridge, int busnr) 891 926 { ··· 965 922 child->dev.parent = child->bridge; 966 923 pci_set_bus_of_node(child); 967 924 pci_set_bus_speed(child); 925 + 926 + /* 927 + * Check whether extended config space is accessible on the child 928 + * bus. Note that we currently assume it is always accessible on 929 + * the root bus. 930 + */ 931 + if (!pci_bridge_child_ext_cfg_accessible(bridge)) { 932 + child->bus_flags |= PCI_BUS_FLAGS_NO_EXTCFG; 933 + pci_info(child, "extended config space not accessible\n"); 934 + } 968 935 969 936 /* Set up default resource pointers and names */ 970 937 for (i = 0; i < PCI_BRIDGE_RESOURCE_NUM; i++) { ··· 1051 998 * already configured by the BIOS and after we are done with all of 1052 999 * them, we proceed to assigning numbers to the remaining buses in 1053 1000 * order to avoid overlaps between old and new bus numbers. 1001 + * 1002 + * Return: New subordinate number covering all buses behind this bridge. 1054 1003 */ 1055 1004 static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev, 1056 1005 int max, unsigned int available_buses, ··· 1243 1188 (is_cardbus ? 
"PCI CardBus %04x:%02x" : "PCI Bus %04x:%02x"), 1244 1189 pci_domain_nr(bus), child->number); 1245 1190 1246 - /* Has only triggered on CardBus, fixup is in yenta_socket */ 1191 + /* Check that all devices are accessible */ 1247 1192 while (bus->parent) { 1248 1193 if ((child->busn_res.end > bus->busn_res.end) || 1249 1194 (child->number > bus->busn_res.end) || 1250 1195 (child->number < bus->number) || 1251 1196 (child->busn_res.end < bus->number)) { 1252 - dev_info(&child->dev, "%pR %s hidden behind%s bridge %s %pR\n", 1253 - &child->busn_res, 1254 - (bus->number > child->busn_res.end && 1255 - bus->busn_res.end < child->number) ? 1256 - "wholly" : "partially", 1257 - bus->self->transparent ? " transparent" : "", 1258 - dev_name(&bus->dev), 1259 - &bus->busn_res); 1197 + dev_info(&dev->dev, "devices behind bridge are unusable because %pR cannot be assigned for them\n", 1198 + &child->busn_res); 1199 + break; 1260 1200 } 1261 1201 bus = bus->parent; 1262 1202 } ··· 1280 1230 * already configured by the BIOS and after we are done with all of 1281 1231 * them, we proceed to assigning numbers to the remaining buses in 1282 1232 * order to avoid overlaps between old and new bus numbers. 1233 + * 1234 + * Return: New subordinate number covering all buses behind this bridge. 
1283 1235 */ 1284 1236 int pci_scan_bridge(struct pci_bus *bus, struct pci_dev *dev, int max, int pass) 1285 1237 { ··· 1444 1392 int pos; 1445 1393 u32 status; 1446 1394 u16 class; 1395 + 1396 + if (dev->bus->bus_flags & PCI_BUS_FLAGS_NO_EXTCFG) 1397 + return PCI_CFG_SPACE_SIZE; 1447 1398 1448 1399 class = dev->class >> 8; 1449 1400 if (class == PCI_CLASS_BRIDGE_HOST) ··· 2009 1954 static void pci_configure_ltr(struct pci_dev *dev) 2010 1955 { 2011 1956 #ifdef CONFIG_PCIEASPM 1957 + struct pci_host_bridge *host = pci_find_host_bridge(dev->bus); 2012 1958 u32 cap; 2013 1959 struct pci_dev *bridge; 1960 + 1961 + if (!host->native_ltr) 1962 + return; 2014 1963 2015 1964 if (!pci_is_pcie(dev)) 2016 1965 return; ··· 2697 2638 for_each_pci_bridge(dev, bus) { 2698 2639 cmax = max; 2699 2640 max = pci_scan_bridge_extend(bus, dev, max, 0, 0); 2700 - used_buses += cmax - max; 2641 + 2642 + /* 2643 + * Reserve one bus for each bridge now to avoid extending 2644 + * hotplug bridges too much during the second scan below. 2645 + */ 2646 + used_buses++; 2647 + if (cmax - max > 1) 2648 + used_buses += cmax - max - 1; 2701 2649 } 2702 2650 2703 2651 /* Scan bridges that need to be reconfigured */ ··· 2727 2661 * bridges if any. 2728 2662 */ 2729 2663 buses = available_buses / hotplug_bridges; 2730 - buses = min(buses, available_buses - used_buses); 2664 + buses = min(buses, available_buses - used_buses + 1); 2731 2665 } 2732 2666 2733 2667 cmax = max; 2734 2668 max = pci_scan_bridge_extend(bus, dev, cmax, buses, 1); 2735 - used_buses += max - cmax; 2669 + /* One bus is already accounted so don't add it again */ 2670 + if (max - cmax > 1) 2671 + used_buses += max - cmax - 1; 2736 2672 } 2737 2673 2738 2674 /*
+507 -495
drivers/pci/quirks.c
··· 30 30 #include <asm/dma.h> /* isa_dma_bridge_buggy */ 31 31 #include "pci.h" 32 32 33 + static ktime_t fixup_debug_start(struct pci_dev *dev, 34 + void (*fn)(struct pci_dev *dev)) 35 + { 36 + if (initcall_debug) 37 + pci_info(dev, "calling %pF @ %i\n", fn, task_pid_nr(current)); 38 + 39 + return ktime_get(); 40 + } 41 + 42 + static void fixup_debug_report(struct pci_dev *dev, ktime_t calltime, 43 + void (*fn)(struct pci_dev *dev)) 44 + { 45 + ktime_t delta, rettime; 46 + unsigned long long duration; 47 + 48 + rettime = ktime_get(); 49 + delta = ktime_sub(rettime, calltime); 50 + duration = (unsigned long long) ktime_to_ns(delta) >> 10; 51 + if (initcall_debug || duration > 10000) 52 + pci_info(dev, "%pF took %lld usecs\n", fn, duration); 53 + } 54 + 55 + static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f, 56 + struct pci_fixup *end) 57 + { 58 + ktime_t calltime; 59 + 60 + for (; f < end; f++) 61 + if ((f->class == (u32) (dev->class >> f->class_shift) || 62 + f->class == (u32) PCI_ANY_ID) && 63 + (f->vendor == dev->vendor || 64 + f->vendor == (u16) PCI_ANY_ID) && 65 + (f->device == dev->device || 66 + f->device == (u16) PCI_ANY_ID)) { 67 + calltime = fixup_debug_start(dev, f->hook); 68 + f->hook(dev); 69 + fixup_debug_report(dev, calltime, f->hook); 70 + } 71 + } 72 + 73 + extern struct pci_fixup __start_pci_fixups_early[]; 74 + extern struct pci_fixup __end_pci_fixups_early[]; 75 + extern struct pci_fixup __start_pci_fixups_header[]; 76 + extern struct pci_fixup __end_pci_fixups_header[]; 77 + extern struct pci_fixup __start_pci_fixups_final[]; 78 + extern struct pci_fixup __end_pci_fixups_final[]; 79 + extern struct pci_fixup __start_pci_fixups_enable[]; 80 + extern struct pci_fixup __end_pci_fixups_enable[]; 81 + extern struct pci_fixup __start_pci_fixups_resume[]; 82 + extern struct pci_fixup __end_pci_fixups_resume[]; 83 + extern struct pci_fixup __start_pci_fixups_resume_early[]; 84 + extern struct pci_fixup __end_pci_fixups_resume_early[]; 
85 + extern struct pci_fixup __start_pci_fixups_suspend[]; 86 + extern struct pci_fixup __end_pci_fixups_suspend[]; 87 + extern struct pci_fixup __start_pci_fixups_suspend_late[]; 88 + extern struct pci_fixup __end_pci_fixups_suspend_late[]; 89 + 90 + static bool pci_apply_fixup_final_quirks; 91 + 92 + void pci_fixup_device(enum pci_fixup_pass pass, struct pci_dev *dev) 93 + { 94 + struct pci_fixup *start, *end; 95 + 96 + switch (pass) { 97 + case pci_fixup_early: 98 + start = __start_pci_fixups_early; 99 + end = __end_pci_fixups_early; 100 + break; 101 + 102 + case pci_fixup_header: 103 + start = __start_pci_fixups_header; 104 + end = __end_pci_fixups_header; 105 + break; 106 + 107 + case pci_fixup_final: 108 + if (!pci_apply_fixup_final_quirks) 109 + return; 110 + start = __start_pci_fixups_final; 111 + end = __end_pci_fixups_final; 112 + break; 113 + 114 + case pci_fixup_enable: 115 + start = __start_pci_fixups_enable; 116 + end = __end_pci_fixups_enable; 117 + break; 118 + 119 + case pci_fixup_resume: 120 + start = __start_pci_fixups_resume; 121 + end = __end_pci_fixups_resume; 122 + break; 123 + 124 + case pci_fixup_resume_early: 125 + start = __start_pci_fixups_resume_early; 126 + end = __end_pci_fixups_resume_early; 127 + break; 128 + 129 + case pci_fixup_suspend: 130 + start = __start_pci_fixups_suspend; 131 + end = __end_pci_fixups_suspend; 132 + break; 133 + 134 + case pci_fixup_suspend_late: 135 + start = __start_pci_fixups_suspend_late; 136 + end = __end_pci_fixups_suspend_late; 137 + break; 138 + 139 + default: 140 + /* stupid compiler warning, you would think with an enum... 
*/ 141 + return; 142 + } 143 + pci_do_fixups(dev, start, end); 144 + } 145 + EXPORT_SYMBOL(pci_fixup_device); 146 + 147 + static int __init pci_apply_final_quirks(void) 148 + { 149 + struct pci_dev *dev = NULL; 150 + u8 cls = 0; 151 + u8 tmp; 152 + 153 + if (pci_cache_line_size) 154 + printk(KERN_DEBUG "PCI: CLS %u bytes\n", 155 + pci_cache_line_size << 2); 156 + 157 + pci_apply_fixup_final_quirks = true; 158 + for_each_pci_dev(dev) { 159 + pci_fixup_device(pci_fixup_final, dev); 160 + /* 161 + * If arch hasn't set it explicitly yet, use the CLS 162 + * value shared by all PCI devices. If there's a 163 + * mismatch, fall back to the default value. 164 + */ 165 + if (!pci_cache_line_size) { 166 + pci_read_config_byte(dev, PCI_CACHE_LINE_SIZE, &tmp); 167 + if (!cls) 168 + cls = tmp; 169 + if (!tmp || cls == tmp) 170 + continue; 171 + 172 + printk(KERN_DEBUG "PCI: CLS mismatch (%u != %u), using %u bytes\n", 173 + cls << 2, tmp << 2, 174 + pci_dfl_cache_line_size << 2); 175 + pci_cache_line_size = pci_dfl_cache_line_size; 176 + } 177 + } 178 + 179 + if (!pci_cache_line_size) { 180 + printk(KERN_DEBUG "PCI: CLS %u bytes, default %u\n", 181 + cls << 2, pci_dfl_cache_line_size << 2); 182 + pci_cache_line_size = cls ? cls : pci_dfl_cache_line_size; 183 + } 184 + 185 + return 0; 186 + } 187 + fs_initcall_sync(pci_apply_final_quirks); 188 + 33 189 /* 34 190 * Decoding should be disabled for a PCI device during BAR sizing to avoid 35 191 * conflict. But doing so may cause problems on host bridge and perhaps other ··· 199 43 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_ANY_ID, PCI_ANY_ID, 200 44 PCI_CLASS_BRIDGE_HOST, 8, quirk_mmio_always_on); 201 45 202 - /* The Mellanox Tavor device gives false positive parity errors 203 - * Mark this device with a broken_parity_status, to allow 204 - * PCI scanning code to "skip" this now blacklisted device. 46 + /* 47 + * The Mellanox Tavor device gives false positive parity errors. 
Mark this 48 + * device with a broken_parity_status to allow PCI scanning code to "skip" 49 + * this now blacklisted device. 205 50 */ 206 51 static void quirk_mellanox_tavor(struct pci_dev *dev) 207 52 { ··· 211 54 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_TAVOR, quirk_mellanox_tavor); 212 55 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_TAVOR_BRIDGE, quirk_mellanox_tavor); 213 56 214 - /* Deal with broken BIOSes that neglect to enable passive release, 215 - which can cause problems in combination with the 82441FX/PPro MTRRs */ 57 + /* 58 + * Deal with broken BIOSes that neglect to enable passive release, 59 + * which can cause problems in combination with the 82441FX/PPro MTRRs 60 + */ 216 61 static void quirk_passive_release(struct pci_dev *dev) 217 62 { 218 63 struct pci_dev *d = NULL; 219 64 unsigned char dlc; 220 65 221 - /* We have to make sure a particular bit is set in the PIIX3 222 - ISA bridge, so we have to go out and find it. */ 66 + /* 67 + * We have to make sure a particular bit is set in the PIIX3 68 + * ISA bridge, so we have to go out and find it. 69 + */ 223 70 while ((d = pci_get_device(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371SB_0, d))) { 224 71 pci_read_config_byte(d, 0x82, &dlc); 225 72 if (!(dlc & 1<<1)) { ··· 236 75 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_passive_release); 237 76 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_passive_release); 238 77 239 - /* The VIA VP2/VP3/MVP3 seem to have some 'features'. There may be a workaround 240 - but VIA don't answer queries. If you happen to have good contacts at VIA 241 - ask them for me please -- Alan 242 - 243 - This appears to be BIOS not version dependent. So presumably there is a 244 - chipset level fix */ 245 - 78 + /* 79 + * The VIA VP2/VP3/MVP3 seem to have some 'features'. There may be a 80 + * workaround but VIA don't answer queries. 
If you happen to have good 81 + * contacts at VIA ask them for me please -- Alan 82 + * 83 + * This appears to be BIOS not version dependent. So presumably there is a 84 + * chipset level fix. 85 + */ 246 86 static void quirk_isa_dma_hangs(struct pci_dev *dev) 247 87 { 248 88 if (!isa_dma_bridge_buggy) { ··· 251 89 pci_info(dev, "Activating ISA DMA hang workarounds\n"); 252 90 } 253 91 } 254 - /* 255 - * Its not totally clear which chipsets are the problematic ones 256 - * We know 82C586 and 82C596 variants are affected. 257 - */ 92 + /* 93 + * It's not totally clear which chipsets are the problematic ones. We know 94 + * 82C586 and 82C596 variants are affected. 95 + */ 258 96 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_0, quirk_isa_dma_hangs); 259 97 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C596, quirk_isa_dma_hangs); 260 98 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371SB_0, quirk_isa_dma_hangs); ··· 283 121 } 284 122 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_TGP_LPC, quirk_tigerpoint_bm_sts); 285 123 286 - /* 287 - * Chipsets where PCI->PCI transfers vanish or hang 288 - */ 124 + /* Chipsets where PCI->PCI transfers vanish or hang */ 289 125 static void quirk_nopcipci(struct pci_dev *dev) 290 126 { 291 127 if ((pci_pci_problems & PCIPCI_FAIL) == 0) { ··· 306 146 } 307 147 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8151_0, quirk_nopciamd); 308 148 309 - /* 310 - * Triton requires workarounds to be used by the drivers 311 - */ 149 + /* Triton requires workarounds to be used by the drivers */ 312 150 static void quirk_triton(struct pci_dev *dev) 313 151 { 314 152 if ((pci_pci_problems&PCIPCI_TRITON) == 0) { ··· 320 162 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82439TX, quirk_triton); 321 163 322 164 /* 323 - * VIA Apollo KT133 needs PCI latency patch 324 - * Made according to a windows driver based patch by George E. 
Breese 325 - * see PCI Latency Adjust on http://www.viahardware.com/download/viatweak.shtm 326 - * Also see http://www.au-ja.org/review-kt133a-1-en.phtml for 327 - * the info on which Mr Breese based his work. 165 + * VIA Apollo KT133 needs PCI latency patch 166 + * Made according to a Windows driver-based patch by George E. Breese; 167 + * see PCI Latency Adjust on http://www.viahardware.com/download/viatweak.shtm 168 + * Also see http://www.au-ja.org/review-kt133a-1-en.phtml for the info on 169 + * which Mr Breese based his work. 328 170 * 329 - * Updated based on further information from the site and also on 330 - * information provided by VIA 171 + * Updated based on further information from the site and also on 172 + * information provided by VIA 331 173 */ 332 174 static void quirk_vialatency(struct pci_dev *dev) 333 175 { 334 176 struct pci_dev *p; 335 177 u8 busarb; 336 - /* Ok we have a potential problem chipset here. Now see if we have 337 - a buggy southbridge */ 338 178 179 + /* 180 + * Ok, we have a potential problem chipset here. Now see if we have 181 + * a buggy southbridge. 182 + */ 339 183 p = pci_get_device(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, NULL); 340 184 if (p != NULL) { 341 - /* 0x40 - 0x4f == 686B, 0x10 - 0x2f == 686A; thanks Dan Hollis */ 342 - /* Check for buggy part revisions */ 185 + 186 + /* 187 + * 0x40 - 0x4f == 686B, 0x10 - 0x2f == 686A; 188 + * thanks Dan Hollis. 189 + * Check for buggy part revisions 190 + */ 343 191 if (p->revision < 0x40 || p->revision > 0x42) 344 192 goto exit; 345 193 } else { 346 194 p = pci_get_device(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8231, NULL); 347 195 if (p == NULL) /* No problem parts */ 348 196 goto exit; 197 + 349 198 /* Check for buggy part revisions */ 350 199 if (p->revision < 0x10 || p->revision > 0x12) 351 200 goto exit; 352 201 } 353 202 354 203 /* 355 - * Ok we have the problem. Now set the PCI master grant to 356 - * occur every master grant. 
The apparent bug is that under high 357 - * PCI load (quite common in Linux of course) you can get data 358 - * loss when the CPU is held off the bus for 3 bus master requests 359 - * This happens to include the IDE controllers.... 204 + * Ok we have the problem. Now set the PCI master grant to occur 205 + * every master grant. The apparent bug is that under high PCI load 206 + * (quite common in Linux of course) you can get data loss when the 207 + * CPU is held off the bus for 3 bus master requests. This happens 208 + * to include the IDE controllers.... 360 209 * 361 - * VIA only apply this fix when an SB Live! is present but under 362 - * both Linux and Windows this isn't enough, and we have seen 363 - * corruption without SB Live! but with things like 3 UDMA IDE 364 - * controllers. So we ignore that bit of the VIA recommendation.. 210 + * VIA only apply this fix when an SB Live! is present but under 211 + * both Linux and Windows this isn't enough, and we have seen 212 + * corruption without SB Live! but with things like 3 UDMA IDE 213 + * controllers. So we ignore that bit of the VIA recommendation.. 
365 214 */ 366 - 367 215 pci_read_config_byte(dev, 0x76, &busarb); 368 - /* Set bit 4 and bi 5 of byte 76 to 0x01 369 - "Master priority rotation on every PCI master grant */ 216 + 217 + /* 218 + * Set bit 4 and bit 5 of byte 76 to 0x01 219 + * "Master priority rotation on every PCI master grant" 220 + */ 370 221 busarb &= ~(1<<5); 371 222 busarb |= (1<<4); 372 223 pci_write_config_byte(dev, 0x76, busarb); ··· 391 224 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8371_1, quirk_vialatency); 392 225 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8361, quirk_vialatency); 393 226 394 - /* 395 - * VIA Apollo VP3 needs ETBF on BT848/878 396 - */ 227 + /* VIA Apollo VP3 needs ETBF on BT848/878 */ 397 228 static void quirk_viaetbf(struct pci_dev *dev) 398 229 { 399 230 if ((pci_pci_problems&PCIPCI_VIAETBF) == 0) { ··· 411 246 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C576, quirk_vsfx); 412 247 413 248 /* 414 - * Ali Magik requires workarounds to be used by the drivers 415 - * that DMA to AGP space. Latency must be set to 0xA and triton 416 - * workaround applied too 417 - * [Info kindly provided by ALi] 249 + * ALi Magik requires workarounds to be used by the drivers that DMA to AGP 250 + * space. Latency must be set to 0xA and Triton workaround applied too. 
251 + * [Info kindly provided by ALi] 418 252 */ 419 253 static void quirk_alimagik(struct pci_dev *dev) 420 254 { ··· 425 261 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1647, quirk_alimagik); 426 262 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1651, quirk_alimagik); 427 263 428 - /* 429 - * Natoma has some interesting boundary conditions with Zoran stuff 430 - * at least 431 - */ 264 + /* Natoma has some interesting boundary conditions with Zoran stuff at least */ 432 265 static void quirk_natoma(struct pci_dev *dev) 433 266 { 434 267 if ((pci_pci_problems&PCIPCI_NATOMA) == 0) { ··· 441 280 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_2, quirk_natoma); 442 281 443 282 /* 444 - * This chip can cause PCI parity errors if config register 0xA0 is read 445 - * while DMAs are occurring. 283 + * This chip can cause PCI parity errors if config register 0xA0 is read 284 + * while DMAs are occurring. 446 285 */ 447 286 static void quirk_citrine(struct pci_dev *dev) 448 287 { ··· 482 321 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, 0x034a, quirk_extend_bar_to_page); 483 322 484 323 /* 485 - * S3 868 and 968 chips report region size equal to 32M, but they decode 64M. 486 - * If it's needed, re-allocate the region. 324 + * S3 868 and 968 chips report region size equal to 32M, but they decode 64M. 325 + * If it's needed, re-allocate the region. 
487 326 */ 488 327 static void quirk_s3_64M(struct pci_dev *dev) 489 328 { ··· 574 413 } 575 414 576 415 /* 577 - * ATI Northbridge setups MCE the processor if you even 578 - * read somewhere between 0x3b0->0x3bb or read 0x3d3 416 + * ATI Northbridge setups MCE the processor if you even read somewhere 417 + * between 0x3b0->0x3bb or read 0x3d3 579 418 */ 580 419 static void quirk_ati_exploding_mce(struct pci_dev *dev) 581 420 { ··· 590 429 * In the AMD NL platform, this device ([1022:7912]) has a class code of 591 430 * PCI_CLASS_SERIAL_USB_XHCI (0x0c0330), which means the xhci driver will 592 431 * claim it. 432 + * 593 433 * But the dwc3 driver is a more specific driver for this device, and we'd 594 434 * prefer to use it instead of xhci. To prevent xhci from claiming the 595 435 * device, change the class code to 0x0c03fe, which the PCI r3.0 spec ··· 610 448 quirk_amd_nl_class); 611 449 612 450 /* 613 - * Let's make the southbridge information explicit instead 614 - * of having to worry about people probing the ACPI areas, 615 - * for example.. (Yes, it happens, and if you read the wrong 616 - * ACPI register it will put the machine to sleep with no 617 - * way of waking it up again. Bummer). 451 + * Let's make the southbridge information explicit instead of having to 452 + * worry about people probing the ACPI areas, for example.. (Yes, it 453 + * happens, and if you read the wrong ACPI register it will put the machine 454 + * to sleep with no way of waking it up again. Bummer). 618 455 * 619 456 * ALI M7101: Two IO regions pointed to by words at 620 457 * 0xE0 (64 bytes of ACPI registers) ··· 669 508 break; 670 509 size = bit; 671 510 } 511 + 672 512 /* 673 513 * For now we only print it out. Eventually we'll want to 674 514 * reserve it, but let's get enough confirmation reports first. ··· 741 579 * priority and can't tell whether the legacy device or the one created 742 580 * here is really at that address. 
This happens on boards with broken 743 581 * BIOSes. 744 - */ 745 - 582 + */ 746 583 pci_read_config_byte(dev, ICH_ACPI_CNTL, &enable); 747 584 if (enable & ICH4_ACPI_EN) 748 585 quirk_io_region(dev, ICH_PMBASE, 128, PCI_BRIDGE_RESOURCES, ··· 778 617 "ICH6 GPIO"); 779 618 } 780 619 781 - static void ich6_lpc_generic_decode(struct pci_dev *dev, unsigned reg, const char *name, int dynsize) 620 + static void ich6_lpc_generic_decode(struct pci_dev *dev, unsigned reg, 621 + const char *name, int dynsize) 782 622 { 783 623 u32 val; 784 624 u32 size, base; ··· 803 641 } 804 642 base &= ~(size-1); 805 643 806 - /* Just print it out for now. We should reserve it after more debugging */ 644 + /* 645 + * Just print it out for now. We should reserve it after more 646 + * debugging. 647 + */ 807 648 pci_info(dev, "%s PIO at %04x-%04x\n", name, base, base+size-1); 808 649 } 809 650 ··· 822 657 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_0, quirk_ich6_lpc); 823 658 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, quirk_ich6_lpc); 824 659 825 - static void ich7_lpc_generic_decode(struct pci_dev *dev, unsigned reg, const char *name) 660 + static void ich7_lpc_generic_decode(struct pci_dev *dev, unsigned reg, 661 + const char *name) 826 662 { 827 663 u32 val; 828 664 u32 mask, base; ··· 834 668 if (!(val & 1)) 835 669 return; 836 670 837 - /* 838 - * IO base in bits 15:2, mask in bits 23:18, both 839 - * are dword-based 840 - */ 671 + /* IO base in bits 15:2, mask in bits 23:18, both are dword-based */ 841 672 base = val & 0xfffc; 842 673 mask = (val >> 16) & 0xfc; 843 674 mask |= 3; 844 675 845 - /* Just print it out for now. We should reserve it after more debugging */ 676 + /* 677 + * Just print it out for now. We should reserve it after more 678 + * debugging. 
679 + */ 846 680 pci_info(dev, "%s PIO at %04x (mask %04x)\n", name, base, mask); 847 681 } 848 682 ··· 914 748 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8235, quirk_vt8235_acpi); 915 749 916 750 /* 917 - * TI XIO2000a PCIe-PCI Bridge erroneously reports it supports fast back-to-back: 918 - * Disable fast back-to-back on the secondary bus segment 751 + * TI XIO2000a PCIe-PCI Bridge erroneously reports it supports fast 752 + * back-to-back: Disable fast back-to-back on the secondary bus segment 919 753 */ 920 754 static void quirk_xio2000a(struct pci_dev *dev) 921 755 { ··· 940 774 * VIA 686A/B: If an IO-APIC is active, we need to route all on-chip 941 775 * devices to the external APIC. 942 776 * 943 - * TODO: When we have device-specific interrupt routers, 944 - * this code will go away from quirks. 777 + * TODO: When we have device-specific interrupt routers, this code will go 778 + * away from quirks. 945 779 */ 946 780 static void quirk_via_ioapic(struct pci_dev *dev) 947 781 { ··· 982 816 DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237, quirk_via_vt8237_bypass_apic_deassert); 983 817 984 818 /* 985 - * The AMD io apic can hang the box when an apic irq is masked. 819 + * The AMD IO-APIC can hang the box when an APIC IRQ is masked. 986 820 * We check all revs >= B0 (yet not in the pre production!) as the bug 987 821 * is currently marked NoFix 988 822 * 989 823 * We have multiple reports of hangs with this chipset that went away with 990 824 * noapic specified. For the moment we assume it's the erratum. We may be wrong 991 - * of course. However the advice is demonstrably good even if so.. 825 + * of course. However the advice is demonstrably good even if so. 
992 826 */ 993 827 static void quirk_amd_ioapic(struct pci_dev *dev) 994 828 { ··· 1004 838 1005 839 static void quirk_cavium_sriov_rnm_link(struct pci_dev *dev) 1006 840 { 1007 - /* Fix for improper SRIOV configuration on Cavium cn88xx RNM device */ 841 + /* Fix for improper SR-IOV configuration on Cavium cn88xx RNM device */ 1008 842 if (dev->subsystem_device == 0xa118) 1009 843 dev->sriov->link = dev->devfn; 1010 844 } ··· 1026 860 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8131_BRIDGE, quirk_amd_8131_mmrbc); 1027 861 1028 862 /* 1029 - * FIXME: it is questionable that quirk_via_acpi 1030 - * is needed. It shows up as an ISA bridge, and does not 1031 - * support the PCI_INTERRUPT_LINE register at all. Therefore 1032 - * it seems like setting the pci_dev's 'irq' to the 1033 - * value of the ACPI SCI interrupt is only done for convenience. 863 + * FIXME: it is questionable that quirk_via_acpi() is needed. It shows up 864 + * as an ISA bridge, and does not support the PCI_INTERRUPT_LINE register 865 + * at all. Therefore it seems like setting the pci_dev's IRQ to the value 866 + * of the ACPI SCI interrupt is only done for convenience. 
1034 867 * -jgarzik 1035 868 */ 1036 869 static void quirk_via_acpi(struct pci_dev *d) 1037 870 { 1038 - /* 1039 - * VIA ACPI device: SCI IRQ line in PCI config byte 0x42 1040 - */ 1041 871 u8 irq; 872 + 873 + /* VIA ACPI device: SCI IRQ line in PCI config byte 0x42 */ 1042 874 pci_read_config_byte(d, 0x42, &irq); 1043 875 irq &= 0xf; 1044 876 if (irq && (irq != 2)) ··· 1045 881 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_3, quirk_via_acpi); 1046 882 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686_4, quirk_via_acpi); 1047 883 1048 - 1049 - /* 1050 - * VIA bridges which have VLink 1051 - */ 1052 - 884 + /* VIA bridges which have VLink */ 1053 885 static int via_vlink_dev_lo = -1, via_vlink_dev_hi = 18; 1054 886 1055 887 static void quirk_via_bridge(struct pci_dev *dev) ··· 1053 893 /* See what bridge we have and find the device ranges */ 1054 894 switch (dev->device) { 1055 895 case PCI_DEVICE_ID_VIA_82C686: 1056 - /* The VT82C686 is special, it attaches to PCI and can have 1057 - any device number. All its subdevices are functions of 1058 - that single device. */ 896 + /* 897 + * The VT82C686 is special; it attaches to PCI and can have 898 + * any device number. All its subdevices are functions of 899 + * that single device. 
900 + */ 1059 901 via_vlink_dev_lo = PCI_SLOT(dev->devfn); 1060 902 via_vlink_dev_hi = PCI_SLOT(dev->devfn); 1061 903 break; ··· 1085 923 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237, quirk_via_bridge); 1086 924 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237A, quirk_via_bridge); 1087 925 1088 - /** 1089 - * quirk_via_vlink - VIA VLink IRQ number update 1090 - * @dev: PCI device 926 + /* 927 + * quirk_via_vlink - VIA VLink IRQ number update 928 + * @dev: PCI device 1091 929 * 1092 - * If the device we are dealing with is on a PIC IRQ we need to 1093 - * ensure that the IRQ line register which usually is not relevant 1094 - * for PCI cards, is actually written so that interrupts get sent 1095 - * to the right place. 1096 - * We only do this on systems where a VIA south bridge was detected, 1097 - * and only for VIA devices on the motherboard (see quirk_via_bridge 1098 - * above). 930 + * If the device we are dealing with is on a PIC IRQ we need to ensure that 931 + * the IRQ line register which usually is not relevant for PCI cards, is 932 + * actually written so that interrupts get sent to the right place. 933 + * 934 + * We only do this on systems where a VIA south bridge was detected, and 935 + * only for VIA devices on the motherboard (see quirk_via_bridge above). 1099 936 */ 1100 - 1101 937 static void quirk_via_vlink(struct pci_dev *dev) 1102 938 { 1103 939 u8 irq, new_irq; ··· 1115 955 PCI_SLOT(dev->devfn) < via_vlink_dev_lo) 1116 956 return; 1117 957 1118 - /* This is an internal VLink device on a PIC interrupt. The BIOS 1119 - ought to have set this but may not have, so we redo it */ 1120 - 958 + /* 959 + * This is an internal VLink device on a PIC interrupt. The BIOS 960 + * ought to have set this but may not have, so we redo it. 
961 + */ 1121 962 pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &irq); 1122 963 if (new_irq != irq) { 1123 964 pci_info(dev, "VIA VLink IRQ fixup, from %d to %d\n", ··· 1130 969 DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_VIA, PCI_ANY_ID, quirk_via_vlink); 1131 970 1132 971 /* 1133 - * VIA VT82C598 has its device ID settable and many BIOSes 1134 - * set it to the ID of VT82C597 for backward compatibility. 1135 - * We need to switch it off to be able to recognize the real 1136 - * type of the chip. 972 + * VIA VT82C598 has its device ID settable and many BIOSes set it to the ID 973 + * of VT82C597 for backward compatibility. We need to switch it off to be 974 + * able to recognize the real type of the chip. 1137 975 */ 1138 976 static void quirk_vt82c598_id(struct pci_dev *dev) 1139 977 { ··· 1142 982 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C597_0, quirk_vt82c598_id); 1143 983 1144 984 /* 1145 - * CardBus controllers have a legacy base address that enables them 1146 - * to respond as i82365 pcmcia controllers. We don't want them to 1147 - * do this even if the Linux CardBus driver is not loaded, because 1148 - * the Linux i82365 driver does not (and should not) handle CardBus. 985 + * CardBus controllers have a legacy base address that enables them to 986 + * respond as i82365 pcmcia controllers. We don't want them to do this 987 + * even if the Linux CardBus driver is not loaded, because the Linux i82365 988 + * driver does not (and should not) handle CardBus. 1149 989 */ 1150 990 static void quirk_cardbus_legacy(struct pci_dev *dev) 1151 991 { ··· 1157 997 PCI_CLASS_BRIDGE_CARDBUS, 8, quirk_cardbus_legacy); 1158 998 1159 999 /* 1160 - * Following the PCI ordering rules is optional on the AMD762. I'm not 1161 - * sure what the designers were smoking but let's not inhale... 1000 + * Following the PCI ordering rules is optional on the AMD762. I'm not sure 1001 + * what the designers were smoking but let's not inhale... 
1162 1002 * 1163 - * To be fair to AMD, it follows the spec by default, its BIOS people 1164 - * who turn it off! 1003 + * To be fair to AMD, it follows the spec by default, it's BIOS people who 1004 + * turn it off! 1165 1005 */ 1166 1006 static void quirk_amd_ordering(struct pci_dev *dev) 1167 1007 { ··· 1180 1020 DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_FE_GATE_700C, quirk_amd_ordering); 1181 1021 1182 1022 /* 1183 - * DreamWorks provided workaround for Dunord I-3000 problem 1023 + * DreamWorks-provided workaround for Dunord I-3000 problem 1184 1024 * 1185 - * This card decodes and responds to addresses not apparently 1186 - * assigned to it. We force a larger allocation to ensure that 1187 - * nothing gets put too close to it. 1025 + * This card decodes and responds to addresses not apparently assigned to 1026 + * it. We force a larger allocation to ensure that nothing gets put too 1027 + * close to it. 1188 1028 */ 1189 1029 static void quirk_dunord(struct pci_dev *dev) 1190 1030 { ··· 1197 1037 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_DUNORD, PCI_DEVICE_ID_DUNORD_I3000, quirk_dunord); 1198 1038 1199 1039 /* 1200 - * i82380FB mobile docking controller: its PCI-to-PCI bridge 1201 - * is subtractive decoding (transparent), and does indicate this 1202 - * in the ProgIf. Unfortunately, the ProgIf value is wrong - 0x80 1203 - * instead of 0x01. 1040 + * i82380FB mobile docking controller: its PCI-to-PCI bridge is subtractive 1041 + * decoding (transparent), and does indicate this in the ProgIf. 1042 + * Unfortunately, the ProgIf value is wrong - 0x80 instead of 0x01. 1204 1043 */ 1205 1044 static void quirk_transparent_bridge(struct pci_dev *dev) 1206 1045 { ··· 1209 1050 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TOSHIBA, 0x605, quirk_transparent_bridge); 1210 1051 1211 1052 /* 1212 - * Common misconfiguration of the MediaGX/Geode PCI master that will 1213 - * reduce PCI bandwidth from 70MB/s to 25MB/s. 
See the GXM/GXLV/GX1 1214 - * datasheets found at http://www.national.com/analog for info on what 1215 - * these bits do. <christer@weinigel.se> 1053 + * Common misconfiguration of the MediaGX/Geode PCI master that will reduce 1054 + * PCI bandwidth from 70MB/s to 25MB/s. See the GXM/GXLV/GX1 datasheets 1055 + * found at http://www.national.com/analog for info on what these bits do. 1056 + * <christer@weinigel.se> 1216 1057 */ 1217 1058 static void quirk_mediagx_master(struct pci_dev *dev) 1218 1059 { ··· 1230 1071 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_PCI_MASTER, quirk_mediagx_master); 1231 1072 1232 1073 /* 1233 - * Ensure C0 rev restreaming is off. This is normally done by 1234 - * the BIOS but in the odd case it is not the results are corruption 1235 - * hence the presence of a Linux check 1074 + * Ensure C0 rev restreaming is off. This is normally done by the BIOS but 1075 + * in the odd case it is not the results are corruption hence the presence 1076 + * of a Linux check. 
1236 1077 */ 1237 1078 static void quirk_disable_pxb(struct pci_dev *pdev) 1238 1079 { ··· 1276 1117 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, 0x7900, quirk_amd_ide_mode); 1277 1118 DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_AMD, 0x7900, quirk_amd_ide_mode); 1278 1119 1279 - /* 1280 - * Serverworks CSB5 IDE does not fully support native mode 1281 - */ 1120 + /* Serverworks CSB5 IDE does not fully support native mode */ 1282 1121 static void quirk_svwks_csb5ide(struct pci_dev *pdev) 1283 1122 { 1284 1123 u8 prog; ··· 1290 1133 } 1291 1134 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_CSB5IDE, quirk_svwks_csb5ide); 1292 1135 1293 - /* 1294 - * Intel 82801CAM ICH3-M datasheet says IDE modes must be the same 1295 - */ 1136 + /* Intel 82801CAM ICH3-M datasheet says IDE modes must be the same */ 1296 1137 static void quirk_ide_samemode(struct pci_dev *pdev) 1297 1138 { 1298 1139 u8 prog; ··· 1306 1151 } 1307 1152 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_10, quirk_ide_samemode); 1308 1153 1309 - /* 1310 - * Some ATA devices break if put into D3 1311 - */ 1312 - 1154 + /* Some ATA devices break if put into D3 */ 1313 1155 static void quirk_no_ata_d3(struct pci_dev *pdev) 1314 1156 { 1315 1157 pdev->dev_flags |= PCI_DEV_FLAGS_NO_D3; ··· 1324 1172 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_VIA, PCI_ANY_ID, 1325 1173 PCI_CLASS_STORAGE_IDE, 8, quirk_no_ata_d3); 1326 1174 1327 - /* This was originally an Alpha specific thing, but it really fits here. 1175 + /* 1176 + * This was originally an Alpha-specific thing, but it really fits here. 1328 1177 * The i82375 PCI/EISA bridge appears as non-classified. Fix that. 
1329 1178 */ 1330 1179 static void quirk_eisa_bridge(struct pci_dev *dev) ··· 1333 1180 dev->class = PCI_CLASS_BRIDGE_EISA << 8; 1334 1181 } 1335 1182 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82375, quirk_eisa_bridge); 1336 - 1337 1183 1338 1184 /* 1339 1185 * On ASUS P4B boards, the SMBus PCI Device within the ICH2/4 southbridge ··· 1550 1398 1551 1399 if (likely(!asus_hides_smbus || !asus_rcba_base)) 1552 1400 return; 1401 + 1553 1402 /* read the Function Disable register, dword mode only */ 1554 1403 val = readl(asus_rcba_base + 0x3418); 1555 - writel(val & 0xFFFFFFF7, asus_rcba_base + 0x3418); /* enable the SMBus device */ 1404 + 1405 + /* enable the SMBus device */ 1406 + writel(val & 0xFFFFFFF7, asus_rcba_base + 0x3418); 1556 1407 } 1557 1408 1558 1409 static void asus_hides_smbus_lpc_ich6_resume(struct pci_dev *dev) 1559 1410 { 1560 1411 if (likely(!asus_hides_smbus || !asus_rcba_base)) 1561 1412 return; 1413 + 1562 1414 iounmap(asus_rcba_base); 1563 1415 asus_rcba_base = NULL; 1564 1416 pci_info(dev, "Enabled ICH6/i801 SMBus device\n"); ··· 1579 1423 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, asus_hides_smbus_lpc_ich6_resume); 1580 1424 DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, asus_hides_smbus_lpc_ich6_resume_early); 1581 1425 1582 - /* 1583 - * SiS 96x south bridge: BIOS typically hides SMBus device... 1584 - */ 1426 + /* SiS 96x south bridge: BIOS typically hides SMBus device... */ 1585 1427 static void quirk_sis_96x_smbus(struct pci_dev *dev) 1586 1428 { 1587 1429 u8 val = 0; ··· 1602 1448 * ... This is further complicated by the fact that some SiS96x south 1603 1449 * bridges pretend to be 85C503/5513 instead. In that case see if we 1604 1450 * spotted a compatible north bridge to make sure. 
1605 - * (pci_find_device doesn't work yet) 1451 + * (pci_find_device() doesn't work yet) 1606 1452 * 1607 1453 * We can also enable the sis96x bit in the discovery register.. 1608 1454 */ ··· 1622 1468 } 1623 1469 1624 1470 /* 1625 - * Ok, it now shows up as a 96x.. run the 96x quirk by 1626 - * hand in case it has already been processed. 1627 - * (depends on link order, which is apparently not guaranteed) 1471 + * Ok, it now shows up as a 96x. Run the 96x quirk by hand in case 1472 + * it has already been processed. (Depends on link order, which is 1473 + * apparently not guaranteed) 1628 1474 */ 1629 1475 dev->device = devid; 1630 1476 quirk_sis_96x_smbus(dev); 1631 1477 } 1632 1478 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_503, quirk_sis_503); 1633 1479 DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_503, quirk_sis_503); 1634 - 1635 1480 1636 1481 /* 1637 1482 * On ASUS A8V and A8V Deluxe boards, the onboard AC97 audio controller ··· 1668 1515 #if defined(CONFIG_ATA) || defined(CONFIG_ATA_MODULE) 1669 1516 1670 1517 /* 1671 - * If we are using libata we can drive this chip properly but must 1672 - * do this early on to make the additional device appear during 1673 - * the PCI scanning. 1518 + * If we are using libata we can drive this chip properly but must do this 1519 + * early on to make the additional device appear during the PCI scanning. 1674 1520 */ 1675 1521 static void quirk_jmicron_ata(struct pci_dev *pdev) 1676 1522 { ··· 1765 1613 if ((pdev->class >> 8) != 0xff00) 1766 1614 return; 1767 1615 1768 - /* the first BAR is the location of the IO APIC...we must 1616 + /* 1617 + * The first BAR is the location of the IO-APIC... we must 1769 1618 * not touch this (and it's already covered by the fixmap), so 1770 - * forcibly insert it into the resource tree */ 1619 + * forcibly insert it into the resource tree. 
1620 + */ 1771 1621 if (pci_resource_start(pdev, 0) && pci_resource_len(pdev, 0)) 1772 1622 insert_resource(&iomem_resource, &pdev->resource[0]); 1773 1623 1774 - /* The next five BARs all seem to be rubbish, so just clean 1775 - * them out */ 1624 + /* 1625 + * The next five BARs all seem to be rubbish, so just clean 1626 + * them out. 1627 + */ 1776 1628 for (i = 1; i < 6; i++) 1777 1629 memset(&pdev->resource[i], 0, sizeof(pdev->resource[i])); 1778 1630 } ··· 1794 1638 DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_HUAWEI, 0x1610, PCI_CLASS_BRIDGE_PCI, 8, quirk_pcie_mch); 1795 1639 1796 1640 /* 1797 - * It's possible for the MSI to get corrupted if shpc and acpi 1798 - * are used together on certain PXH-based systems. 1641 + * It's possible for the MSI to get corrupted if SHPC and ACPI are used 1642 + * together on certain PXH-based systems. 1799 1643 */ 1800 1644 static void quirk_pcie_pxh(struct pci_dev *dev) 1801 1645 { ··· 1809 1653 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PXHV, quirk_pcie_pxh); 1810 1654 1811 1655 /* 1812 - * Some Intel PCI Express chipsets have trouble with downstream 1813 - * device power management. 1656 + * Some Intel PCI Express chipsets have trouble with downstream device 1657 + * power management. 1814 1658 */ 1815 1659 static void quirk_intel_pcie_pm(struct pci_dev *dev) 1816 1660 { 1817 1661 pci_pm_d3_delay = 120; 1818 1662 dev->no_d1d2 = 1; 1819 1663 } 1820 - 1821 1664 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x25e2, quirk_intel_pcie_pm); 1822 1665 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x25e3, quirk_intel_pcie_pm); 1823 1666 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x25e4, quirk_intel_pcie_pm); ··· 1878 1723 1879 1724 /* 1880 1725 * Boot interrupts on some chipsets cannot be turned off. 
For these chipsets, 1881 - * remap the original interrupt in the linux kernel to the boot interrupt, so 1726 + * remap the original interrupt in the Linux kernel to the boot interrupt, so 1882 1727 * that a PCI device's interrupt handler is installed on the boot interrupt 1883 1728 * line instead. 1884 1729 */ ··· 1915 1760 */ 1916 1761 1917 1762 /* 1918 - * IO-APIC1 on 6300ESB generates boot interrupts, see intel order no 1763 + * IO-APIC1 on 6300ESB generates boot interrupts, see Intel order no 1919 1764 * 300641-004US, section 5.7.3. 1920 1765 */ 1921 1766 #define INTEL_6300_IOAPIC_ABAR 0x40 ··· 1938 1783 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, quirk_disable_intel_boot_interrupt); 1939 1784 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, quirk_disable_intel_boot_interrupt); 1940 1785 1941 - /* 1942 - * disable boot interrupts on HT-1000 1943 - */ 1786 + /* Disable boot interrupts on HT-1000 */ 1944 1787 #define BC_HT1000_FEATURE_REG 0x64 1945 1788 #define BC_HT1000_PIC_REGS_ENABLE (1<<0) 1946 1789 #define BC_HT1000_MAP_IDX 0xC00 ··· 1969 1816 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT1000SB, quirk_disable_broadcom_boot_interrupt); 1970 1817 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT1000SB, quirk_disable_broadcom_boot_interrupt); 1971 1818 1972 - /* 1973 - * disable boot interrupts on AMD and ATI chipsets 1974 - */ 1819 + /* Disable boot interrupts on AMD and ATI chipsets */ 1820 + 1975 1821 /* 1976 1822 * NOIOAMODE needs to be disabled to disable "boot interrupts". For AMD 8131 1977 1823 * rev. 
A0 and B0, NOIOAMODE needs to be disabled anyway to fix IO-APIC mode ··· 2046 1894 quirk_tc86c001_ide); 2047 1895 2048 1896 /* 2049 - * PLX PCI 9050 PCI Target bridge controller has an errata that prevents the 1897 + * PLX PCI 9050 PCI Target bridge controller has an erratum that prevents the 2050 1898 * local configuration registers accessible via BAR0 (memory) or BAR1 (i/o) 2051 1899 * being read correctly if bit 7 of the base address is set. 2052 1900 * The BAR0 or BAR1 region may be disabled (size 0) or enabled (size 128). ··· 2239 2087 dev->io_window_1k = 1; 2240 2088 } 2241 2089 } 2242 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1460, quirk_p64h2_1k_io); 2090 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1460, quirk_p64h2_1k_io); 2243 2091 2244 - /* Under some circumstances, AER is not linked with extended capabilities. 2092 + /* 2093 + * Under some circumstances, AER is not linked with extended capabilities. 2245 2094 * Force it to be linked by setting the corresponding control bit in the 2246 2095 * config space. 2247 2096 */ 2248 2097 static void quirk_nvidia_ck804_pcie_aer_ext_cap(struct pci_dev *dev) 2249 2098 { 2250 2099 uint8_t b; 2100 + 2251 2101 if (pci_read_config_byte(dev, 0xf41, &b) == 0) { 2252 2102 if (!(b & 0x20)) { 2253 2103 pci_write_config_byte(dev, 0xf41, b | 0x20); ··· 2279 2125 PCI_DEVICE_ID_VIA_8235_USB_2, NULL); 2280 2126 uint8_t b; 2281 2127 2282 - /* p should contain the first (internal) VT6212L -- see if we have 2283 - an external one by searching again */ 2128 + /* 2129 + * p should contain the first (internal) VT6212L -- see if we have 2130 + * an external one by searching again. 
2131 + */ 2284 2132 p = pci_get_device(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8235_USB_2, p); 2285 2133 if (!p) 2286 2134 return; ··· 2327 2171 pcie_set_readrq(dev, 2048); 2328 2172 } 2329 2173 } 2330 - 2331 2174 DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_BROADCOM, 2332 2175 PCI_DEVICE_ID_TIGON3_5719, 2333 2176 quirk_brcm_5719_limit_mrrs); ··· 2334 2179 #ifdef CONFIG_PCIE_IPROC_PLATFORM 2335 2180 static void quirk_paxc_bridge(struct pci_dev *pdev) 2336 2181 { 2337 - /* The PCI config space is shared with the PAXC root port and the first 2182 + /* 2183 + * The PCI config space is shared with the PAXC root port and the first 2338 2184 * Ethernet device. So, we need to workaround this by telling the PCI 2339 2185 * code that the bridge is not an Ethernet device. 2340 2186 */ 2341 2187 if (pdev->hdr_type == PCI_HEADER_TYPE_BRIDGE) 2342 2188 pdev->class = PCI_CLASS_BRIDGE_PCI << 8; 2343 2189 2344 - /* MPSS is not being set properly (as it is currently 0). This is 2190 + /* 2191 + * MPSS is not being set properly (as it is currently 0). This is 2345 2192 * because that area of the PCI config space is hard coded to zero, and 2346 2193 * is not modifiable by firmware. Set this to 2 (e.g., 512 byte MPS) 2347 2194 * so that the MPS can be set to the real max value. ··· 2354 2197 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x16f0, quirk_paxc_bridge); 2355 2198 #endif 2356 2199 2357 - /* Originally in EDAC sources for i82875P: 2358 - * Intel tells BIOS developers to hide device 6 which 2359 - * configures the overflow device access containing 2360 - * the DRBs - this is where we expose device 6. 2200 + /* 2201 + * Originally in EDAC sources for i82875P: Intel tells BIOS developers to 2202 + * hide device 6 which configures the overflow device access containing the 2203 + * DRBs - this is where we expose device 6. 
2361 2204 * http://www.x86-secret.com/articles/tweak/pat/patsecrets-2.htm 2362 2205 */ 2363 2206 static void quirk_unhide_mch_dev6(struct pci_dev *dev) ··· 2369 2212 pci_write_config_byte(dev, 0xF4, reg | 0x02); 2370 2213 } 2371 2214 } 2372 - 2373 2215 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82865_HB, 2374 2216 quirk_unhide_mch_dev6); 2375 2217 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82875_HB, 2376 2218 quirk_unhide_mch_dev6); 2377 2219 2378 2220 #ifdef CONFIG_PCI_MSI 2379 - /* Some chipsets do not support MSI. We cannot easily rely on setting 2380 - * PCI_BUS_FLAGS_NO_MSI in its bus flags because there are actually 2381 - * some other buses controlled by the chipset even if Linux is not 2382 - * aware of it. Instead of setting the flag on all buses in the 2383 - * machine, simply disable MSI globally. 2221 + /* 2222 + * Some chipsets do not support MSI. We cannot easily rely on setting 2223 + * PCI_BUS_FLAGS_NO_MSI in its bus flags because there are actually some 2224 + * other buses controlled by the chipset even if Linux is not aware of it. 2225 + * Instead of setting the flag on all buses in the machine, simply disable 2226 + * MSI globally. 2384 2227 */ 2385 2228 static void quirk_disable_all_msi(struct pci_dev *dev) 2386 2229 { ··· 2428 2271 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x9600, quirk_amd_780_apc_msi); 2429 2272 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x9601, quirk_amd_780_apc_msi); 2430 2273 2431 - /* Go through the list of Hypertransport capabilities and 2432 - * return 1 if a HT MSI capability is found and enabled */ 2274 + /* 2275 + * Go through the list of HyperTransport capabilities and return 1 if a HT 2276 + * MSI capability is found and enabled. 
2277 + */ 2433 2278 static int msi_ht_cap_enabled(struct pci_dev *dev) 2434 2279 { 2435 2280 int pos, ttl = PCI_FIND_CAP_TTL; ··· 2454 2295 return 0; 2455 2296 } 2456 2297 2457 - /* Check the hypertransport MSI mapping to know whether MSI is enabled or not */ 2298 + /* Check the HyperTransport MSI mapping to know whether MSI is enabled or not */ 2458 2299 static void quirk_msi_ht_cap(struct pci_dev *dev) 2459 2300 { 2460 2301 if (dev->subordinate && !msi_ht_cap_enabled(dev)) { ··· 2465 2306 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT2000_PCIE, 2466 2307 quirk_msi_ht_cap); 2467 2308 2468 - /* The nVidia CK804 chipset may have 2 HT MSI mappings. 2469 - * MSI are supported if the MSI capability set in any of these mappings. 2309 + /* 2310 + * The nVidia CK804 chipset may have 2 HT MSI mappings. MSI is supported 2311 + * if the MSI capability is set in any of these mappings. 2470 2312 */ 2471 2313 static void quirk_nvidia_ck804_msi_ht_cap(struct pci_dev *dev) 2472 2314 { ··· 2476 2316 if (!dev->subordinate) 2477 2317 return; 2478 2318 2479 - /* check HT MSI cap on this chipset and the root one. 2480 - * a single one having MSI is enough to be sure that MSI are supported. 2319 + /* 2320 + * Check HT MSI cap on this chipset and the root one. A single one 2321 + * having MSI is enough to be sure that MSI is supported. 2481 2322 */ 2482 2323 pdev = pci_get_slot(dev->bus, 0); 2483 2324 if (!pdev) ··· 2515 2354 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SERVERWORKS, 2516 2355 PCI_DEVICE_ID_SERVERWORKS_HT1000_PXB, 2517 2356 ht_enable_msi_mapping); 2518 - 2519 2357 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8132_BRIDGE, 2520 2358 ht_enable_msi_mapping); 2521 2359 2522 - /* The P5N32-SLI motherboards from Asus have a problem with msi 2523 - * for the MCP55 NIC. It is not yet determined whether the msi problem 2524 - * also affects other devices. As for now, turn off msi for this device. 
2360 + /* 2361 + * The P5N32-SLI motherboards from Asus have a problem with MSI 2362 + * for the MCP55 NIC. It is not yet determined whether the MSI problem 2363 + * also affects other devices. As for now, turn off MSI for this device. 2525 2364 */ 2526 2365 static void nvenet_msi_disable(struct pci_dev *dev) 2527 2366 { ··· 2558 2397 pci_read_config_dword(dev, 0x74, &cfg); 2559 2398 2560 2399 if (cfg & ((1 << 2) | (1 << 15))) { 2561 - printk(KERN_INFO "Rewriting irq routing register on MCP55\n"); 2400 + printk(KERN_INFO "Rewriting IRQ routing register on MCP55\n"); 2562 2401 cfg &= ~((1 << 2) | (1 << 15)); 2563 2402 pci_write_config_dword(dev, 0x74, cfg); 2564 2403 } 2565 2404 } 2566 - 2567 2405 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA, 2568 2406 PCI_DEVICE_ID_NVIDIA_MCP55_BRIDGE_V0, 2569 2407 nvbridge_check_legacy_irq_routing); 2570 - 2571 2408 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA, 2572 2409 PCI_DEVICE_ID_NVIDIA_MCP55_BRIDGE_V4, 2573 2410 nvbridge_check_legacy_irq_routing); ··· 2575 2416 int pos, ttl = PCI_FIND_CAP_TTL; 2576 2417 int found = 0; 2577 2418 2578 - /* check if there is HT MSI cap or enabled on this device */ 2419 + /* Check if there is HT MSI cap or enabled on this device */ 2579 2420 pos = pci_find_ht_capability(dev, HT_CAPTYPE_MSI_MAPPING); 2580 2421 while (pos && ttl--) { 2581 2422 u8 flags; ··· 2611 2452 if (!dev) 2612 2453 continue; 2613 2454 2614 - /* found next host bridge ?*/ 2455 + /* found next host bridge? 
*/ 2615 2456 pos = pci_find_ht_capability(dev, HT_CAPTYPE_SLAVE); 2616 2457 if (pos != 0) { 2617 2458 pci_dev_put(dev); ··· 2770 2611 { 2771 2612 return __nv_msi_ht_cap_quirk(dev, 1); 2772 2613 } 2614 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_ANY_ID, nv_msi_ht_cap_quirk_all); 2615 + DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_AL, PCI_ANY_ID, nv_msi_ht_cap_quirk_all); 2773 2616 2774 2617 static void nv_msi_ht_cap_quirk_leaf(struct pci_dev *dev) 2775 2618 { 2776 2619 return __nv_msi_ht_cap_quirk(dev, 0); 2777 2620 } 2778 - 2779 2621 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, nv_msi_ht_cap_quirk_leaf); 2780 2622 DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, nv_msi_ht_cap_quirk_leaf); 2781 - 2782 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_ANY_ID, nv_msi_ht_cap_quirk_all); 2783 - DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_AL, PCI_ANY_ID, nv_msi_ht_cap_quirk_all); 2784 2623 2785 2624 static void quirk_msi_intx_disable_bug(struct pci_dev *dev) 2786 2625 { 2787 2626 dev->dev_flags |= PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG; 2788 2627 } 2628 + 2789 2629 static void quirk_msi_intx_disable_ati_bug(struct pci_dev *dev) 2790 2630 { 2791 2631 struct pci_dev *p; 2792 2632 2793 - /* SB700 MSI issue will be fixed at HW level from revision A21, 2633 + /* 2634 + * SB700 MSI issue will be fixed at HW level from revision A21; 2794 2635 * we need check PCI REVISION ID of SMBus controller to get SB700 2795 2636 * revision. 2796 2637 */ ··· 2803 2644 dev->dev_flags |= PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG; 2804 2645 pci_dev_put(p); 2805 2646 } 2647 + 2806 2648 static void quirk_msi_intx_disable_qca_bug(struct pci_dev *dev) 2807 2649 { 2808 2650 /* AR816X/AR817X/E210X MSI is fixed at HW level from revision 0x18 */ ··· 2873 2713 quirk_msi_intx_disable_qca_bug); 2874 2714 #endif /* CONFIG_PCI_MSI */ 2875 2715 2876 - /* Allow manual resource allocation for PCI hotplug bridges 2877 - * via pci=hpmemsize=nnM and pci=hpiosize=nnM parameters. 
For 2878 - * some PCI-PCI hotplug bridges, like PLX 6254 (former HINT HB6), 2879 - * kernel fails to allocate resources when hotplug device is 2880 - * inserted and PCI bus is rescanned. 2716 + /* 2717 + * Allow manual resource allocation for PCI hotplug bridges via 2718 + * pci=hpmemsize=nnM and pci=hpiosize=nnM parameters. For some PCI-PCI 2719 + * hotplug bridges, like PLX 6254 (former HINT HB6), kernel fails to 2720 + * allocate resources when hotplug device is inserted and PCI bus is 2721 + * rescanned. 2881 2722 */ 2882 2723 static void quirk_hotplug_bridge(struct pci_dev *dev) 2883 2724 { 2884 2725 dev->is_hotplug_bridge = 1; 2885 2726 } 2886 - 2887 2727 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_HINT, 0x0020, quirk_hotplug_bridge); 2888 2728 2889 2729 /* 2890 - * This is a quirk for the Ricoh MMC controller found as a part of 2891 - * some mulifunction chips. 2892 - 2730 + * This is a quirk for the Ricoh MMC controller found as a part of some 2731 + * multifunction chips. 2732 + * 2893 2733 * This is very similar and based on the ricoh_mmc driver written by 2894 2734 * Philip Langdale. Thank you for these magic sequences. 2895 2735 * 2896 - * These chips implement the four main memory card controllers (SD, MMC, MS, xD) 2897 - * and one or both of cardbus or firewire. 2736 + * These chips implement the four main memory card controllers (SD, MMC, 2737 + * MS, xD) and one or both of CardBus or FireWire. 2898 2738 * 2899 - * It happens that they implement SD and MMC 2900 - * support as separate controllers (and PCI functions). The linux SDHCI 2901 - * driver supports MMC cards but the chip detects MMC cards in hardware 2902 - * and directs them to the MMC controller - so the SDHCI driver never sees 2903 - * them. 2739 + * It happens that they implement SD and MMC support as separate 2740 + * controllers (and PCI functions). 
The Linux SDHCI driver supports MMC 2741 + * cards but the chip detects MMC cards in hardware and directs them to the 2742 + * MMC controller - so the SDHCI driver never sees them. 2904 2743 * 2905 - * To get around this, we must disable the useless MMC controller. 2906 - * At that point, the SDHCI controller will start seeing them 2907 - * It seems to be the case that the relevant PCI registers to deactivate the 2908 - * MMC controller live on PCI function 0, which might be the cardbus controller 2909 - * or the firewire controller, depending on the particular chip in question 2744 + * To get around this, we must disable the useless MMC controller. At that 2745 + * point, the SDHCI controller will start seeing them. It seems to be the 2746 + * case that the relevant PCI registers to deactivate the MMC controller 2747 + * live on PCI function 0, which might be the CardBus controller or the 2748 + * FireWire controller, depending on the particular chip in question 2910 2749 * 2911 2750 * This has to be done early, because as soon as we disable the MMC controller 2912 - * other pci functions shift up one level, e.g. function #2 becomes function 2913 - * #1, and this will confuse the pci core. 2751 + * other PCI functions shift up one level, e.g. function #2 becomes function 2752 + * #1, and this will confuse the PCI core. 
2914 2753 */ 2915 - 2916 2754 #ifdef CONFIG_MMC_RICOH_MMC 2917 2755 static void ricoh_mmc_fixup_rl5c476(struct pci_dev *dev) 2918 2756 { 2919 - /* disable via cardbus interface */ 2920 2757 u8 write_enable; 2921 2758 u8 write_target; 2922 2759 u8 disable; 2923 2760 2924 - /* disable must be done via function #0 */ 2761 + /* 2762 + * Disable via CardBus interface 2763 + * 2764 + * This must be done via function #0 2765 + */ 2925 2766 if (PCI_FUNC(dev->devfn)) 2926 2767 return; 2927 2768 ··· 2938 2777 pci_write_config_byte(dev, 0x8E, write_enable); 2939 2778 pci_write_config_byte(dev, 0x8D, write_target); 2940 2779 2941 - pci_notice(dev, "proprietary Ricoh MMC controller disabled (via cardbus function)\n"); 2780 + pci_notice(dev, "proprietary Ricoh MMC controller disabled (via CardBus function)\n"); 2942 2781 pci_notice(dev, "MMC cards are now supported by standard SDHCI controller\n"); 2943 2782 } 2944 2783 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_RICOH, PCI_DEVICE_ID_RICOH_RL5C476, ricoh_mmc_fixup_rl5c476); ··· 2946 2785 2947 2786 static void ricoh_mmc_fixup_r5c832(struct pci_dev *dev) 2948 2787 { 2949 - /* disable via firewire interface */ 2950 2788 u8 write_enable; 2951 2789 u8 disable; 2952 2790 2953 - /* disable must be done via function #0 */ 2791 + /* 2792 + * Disable via FireWire interface 2793 + * 2794 + * This must be done via function #0 2795 + */ 2954 2796 if (PCI_FUNC(dev->devfn)) 2955 2797 return; 2956 2798 /* 2957 2799 * RICOH 0xe822 and 0xe823 SD/MMC card readers fail to recognize 2958 - * certain types of SD/MMC cards. Lowering the SD base 2959 - * clock frequency from 200Mhz to 50Mhz fixes this issue. 2800 + * certain types of SD/MMC cards. Lowering the SD base clock 2801 + * frequency from 200Mhz to 50Mhz fixes this issue. 
2960 2802 * 2961 2803 * 0x150 - SD2.0 mode enable for changing base clock 2962 2804 * frequency to 50Mhz ··· 2990 2826 pci_write_config_byte(dev, 0xCB, disable | 0x02); 2991 2827 pci_write_config_byte(dev, 0xCA, write_enable); 2992 2828 2993 - pci_notice(dev, "proprietary Ricoh MMC controller disabled (via firewire function)\n"); 2829 + pci_notice(dev, "proprietary Ricoh MMC controller disabled (via FireWire function)\n"); 2994 2830 pci_notice(dev, "MMC cards are now supported by standard SDHCI controller\n"); 2995 2831 2996 2832 } ··· 3006 2842 #define VTUNCERRMSK_REG 0x1ac 3007 2843 #define VTD_MSK_SPEC_ERRORS (1 << 31) 3008 2844 /* 3009 - * This is a quirk for masking vt-d spec defined errors to platform error 3010 - * handling logic. With out this, platforms using Intel 7500, 5500 chipsets 2845 + * This is a quirk for masking VT-d spec-defined errors to platform error 2846 + * handling logic. Without this, platforms using Intel 7500, 5500 chipsets 3011 2847 * (and the derivative chipsets like X58 etc) seem to generate NMI/SMI (based 3012 - * on the RAS config settings of the platform) when a vt-d fault happens. 2848 + * on the RAS config settings of the platform) when a VT-d fault happens. 3013 2849 * The resulting SMI caused the system to hang. 3014 2850 * 3015 - * VT-d spec related errors are already handled by the VT-d OS code, so no 2851 + * VT-d spec-related errors are already handled by the VT-d OS code, so no 3016 2852 * need to report the same error through other channels. 3017 2853 */ 3018 2854 static void vtd_mask_spec_errors(struct pci_dev *dev) ··· 3038 2874 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_TI, 0xb800, 3039 2875 PCI_CLASS_NOT_DEFINED, 8, fixup_ti816x_class); 3040 2876 3041 - /* Some PCIe devices do not work reliably with the claimed maximum 2877 + /* 2878 + * Some PCIe devices do not work reliably with the claimed maximum 3042 2879 * payload size supported. 
3043 2880 */ 3044 2881 static void fixup_mpss_256(struct pci_dev *dev) ··· 3053 2888 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE, 3054 2889 PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256); 3055 2890 3056 - /* Intel 5000 and 5100 Memory controllers have an errata with read completion 2891 + /* 2892 + * Intel 5000 and 5100 Memory controllers have an erratum with read completion 3057 2893 * coalescing (which is enabled by default on some BIOSes) and MPS of 256B. 3058 - * Since there is no way of knowing what the PCIE MPS on each fabric will be 2894 + * Since there is no way of knowing what the PCIe MPS on each fabric will be 3059 2895 * until all of the devices are discovered and buses walked, read completion 3060 2896 * coalescing must be disabled. Unfortunately, it cannot be re-enabled because 3061 2897 * it is possible to hotplug a device with MPS of 256B. ··· 3070 2904 pcie_bus_config == PCIE_BUS_DEFAULT) 3071 2905 return; 3072 2906 3073 - /* Intel errata specifies bits to change but does not say what they are. 3074 - * Keeping them magical until such time as the registers and values can 3075 - * be explained. 2907 + /* 2908 + * Intel erratum specifies bits to change but does not say what 2909 + * they are. Keeping them magical until such time as the registers 2910 + * and values can be explained. 
3076 2911 */ 3077 2912 err = pci_read_config_word(dev, 0x48, &rcc); 3078 2913 if (err) { ··· 3092 2925 return; 3093 2926 } 3094 2927 3095 - pr_info_once("Read completion coalescing disabled due to hardware errata relating to 256B MPS\n"); 2928 + pr_info_once("Read completion coalescing disabled due to hardware erratum relating to 256B MPS\n"); 3096 2929 } 3097 2930 /* Intel 5000 series memory controllers and ports 2-7 */ 3098 2931 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x25c0, quirk_intel_mc_errata); ··· 3122 2955 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x65f9, quirk_intel_mc_errata); 3123 2956 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x65fa, quirk_intel_mc_errata); 3124 2957 3125 - 3126 2958 /* 3127 - * Ivytown NTB BAR sizes are misreported by the hardware due to an erratum. To 3128 - * work around this, query the size it should be configured to by the device and 3129 - * modify the resource end to correspond to this new size. 2959 + * Ivytown NTB BAR sizes are misreported by the hardware due to an erratum. 2960 + * To work around this, query the size it should be configured to by the 2961 + * device and modify the resource end to correspond to this new size. 
3130 2962 */ 3131 2963 static void quirk_intel_ntb(struct pci_dev *dev) 3132 2964 { ··· 3147 2981 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0e08, quirk_intel_ntb); 3148 2982 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0e0d, quirk_intel_ntb); 3149 2983 3150 - static ktime_t fixup_debug_start(struct pci_dev *dev, 3151 - void (*fn)(struct pci_dev *dev)) 3152 - { 3153 - if (initcall_debug) 3154 - pci_info(dev, "calling %pF @ %i\n", fn, task_pid_nr(current)); 3155 - 3156 - return ktime_get(); 3157 - } 3158 - 3159 - static void fixup_debug_report(struct pci_dev *dev, ktime_t calltime, 3160 - void (*fn)(struct pci_dev *dev)) 3161 - { 3162 - ktime_t delta, rettime; 3163 - unsigned long long duration; 3164 - 3165 - rettime = ktime_get(); 3166 - delta = ktime_sub(rettime, calltime); 3167 - duration = (unsigned long long) ktime_to_ns(delta) >> 10; 3168 - if (initcall_debug || duration > 10000) 3169 - pci_info(dev, "%pF took %lld usecs\n", fn, duration); 3170 - } 3171 - 3172 2984 /* 3173 - * Some BIOS implementations leave the Intel GPU interrupts enabled, 3174 - * even though no one is handling them (f.e. i915 driver is never loaded). 3175 - * Additionally the interrupt destination is not set up properly 2985 + * Some BIOS implementations leave the Intel GPU interrupts enabled, even 2986 + * though no one is handling them (e.g., if the i915 driver is never 2987 + * loaded). Additionally the interrupt destination is not set up properly 3176 2988 * and the interrupt ends up -somewhere-. 3177 2989 * 3178 - * These spurious interrupts are "sticky" and the kernel disables 3179 - * the (shared) interrupt line after 100.000+ generated interrupts. 2990 + * These spurious interrupts are "sticky" and the kernel disables the 2991 + * (shared) interrupt line after 100,000+ generated interrupts. 3180 2992 * 3181 - * Fix it by disabling the still enabled interrupts. 3182 - * This resolves crashes often seen on monitor unplug. 
2993 + * Fix it by disabling the still enabled interrupts. This resolves crashes 2994 + * often seen on monitor unplug. 3183 2995 */ 3184 2996 #define I915_DEIER_REG 0x4400c 3185 2997 static void disable_igfx_irq(struct pci_dev *dev) ··· 3245 3101 * Intel i40e (XL710/X710) 10/20/40GbE NICs all have broken INTx masking, 3246 3102 * DisINTx can be set but the interrupt status bit is non-functional. 3247 3103 */ 3248 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1572, 3249 - quirk_broken_intx_masking); 3250 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1574, 3251 - quirk_broken_intx_masking); 3252 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1580, 3253 - quirk_broken_intx_masking); 3254 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1581, 3255 - quirk_broken_intx_masking); 3256 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1583, 3257 - quirk_broken_intx_masking); 3258 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1584, 3259 - quirk_broken_intx_masking); 3260 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1585, 3261 - quirk_broken_intx_masking); 3262 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1586, 3263 - quirk_broken_intx_masking); 3264 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1587, 3265 - quirk_broken_intx_masking); 3266 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1588, 3267 - quirk_broken_intx_masking); 3268 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1589, 3269 - quirk_broken_intx_masking); 3270 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x158a, 3271 - quirk_broken_intx_masking); 3272 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x158b, 3273 - quirk_broken_intx_masking); 3274 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x37d0, 3275 - quirk_broken_intx_masking); 3276 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x37d1, 3277 - quirk_broken_intx_masking); 3278 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x37d2, 3279 - quirk_broken_intx_masking); 3104 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1572, 
quirk_broken_intx_masking); 3105 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1574, quirk_broken_intx_masking); 3106 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1580, quirk_broken_intx_masking); 3107 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1581, quirk_broken_intx_masking); 3108 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1583, quirk_broken_intx_masking); 3109 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1584, quirk_broken_intx_masking); 3110 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1585, quirk_broken_intx_masking); 3111 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1586, quirk_broken_intx_masking); 3112 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1587, quirk_broken_intx_masking); 3113 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1588, quirk_broken_intx_masking); 3114 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1589, quirk_broken_intx_masking); 3115 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x158a, quirk_broken_intx_masking); 3116 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x158b, quirk_broken_intx_masking); 3117 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x37d0, quirk_broken_intx_masking); 3118 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x37d1, quirk_broken_intx_masking); 3119 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x37d2, quirk_broken_intx_masking); 3280 3120 3281 3121 static u16 mellanox_broken_intx_devs[] = { 3282 3122 PCI_DEVICE_ID_MELLANOX_HERMON_SDR, ··· 3305 3177 } 3306 3178 } 3307 3179 3308 - /* Getting here means Connect-IB cards and up. Connect-IB has no INTx 3180 + /* 3181 + * Getting here means Connect-IB cards and up. Connect-IB has no INTx 3309 3182 * support so shouldn't be checked further 3310 3183 */ 3311 3184 if (pdev->device == PCI_DEVICE_ID_MELLANOX_CONNECTIB) ··· 3426 3297 * shutdown before suspend. Otherwise the native host interface (NHI) will not 3427 3298 * be present after resume if a device was plugged in before suspend. 
3428 3299 * 3429 - * The thunderbolt controller consists of a pcie switch with downstream 3430 - * bridges leading to the NHI and to the tunnel pci bridges. 3300 + * The Thunderbolt controller consists of a PCIe switch with downstream 3301 + * bridges leading to the NHI and to the tunnel PCI bridges. 3431 3302 * 3432 3303 * This quirk cuts power to the whole chip. Therefore we have to apply it 3433 3304 * during suspend_noirq of the upstream bridge. ··· 3445 3316 bridge = ACPI_HANDLE(&dev->dev); 3446 3317 if (!bridge) 3447 3318 return; 3319 + 3448 3320 /* 3449 3321 * SXIO and SXLV are present only on machines requiring this quirk. 3450 - * TB bridges in external devices might have the same device id as those 3451 - * on the host, but they will not have the associated ACPI methods. This 3452 - * implicitly checks that we are at the right bridge. 3322 + * Thunderbolt bridges in external devices might have the same 3323 + * device ID as those on the host, but they will not have the 3324 + * associated ACPI methods. This implicitly checks that we are at 3325 + * the right bridge. 3453 3326 */ 3454 3327 if (ACPI_FAILURE(acpi_get_handle(bridge, "DSB0.NHI0.SXIO", &SXIO)) 3455 3328 || ACPI_FAILURE(acpi_get_handle(bridge, "DSB0.NHI0.SXFP", &SXFP)) 3456 3329 || ACPI_FAILURE(acpi_get_handle(bridge, "DSB0.NHI0.SXLV", &SXLV))) 3457 3330 return; 3458 - pci_info(dev, "quirk: cutting power to thunderbolt controller...\n"); 3331 + pci_info(dev, "quirk: cutting power to Thunderbolt controller...\n"); 3459 3332 3460 3333 /* magic sequence */ 3461 3334 acpi_execute_simple_method(SXIO, NULL, 1); ··· 3472 3341 quirk_apple_poweroff_thunderbolt); 3473 3342 3474 3343 /* 3475 - * Apple: Wait for the thunderbolt controller to reestablish pci tunnels. 
3344 + * Apple: Wait for the Thunderbolt controller to reestablish PCI tunnels 3476 3345 * 3477 - * During suspend the thunderbolt controller is reset and all pci 3346 + * During suspend the Thunderbolt controller is reset and all PCI 3478 3347 * tunnels are lost. The NHI driver will try to reestablish all tunnels 3479 3348 * during resume. We have to manually wait for the NHI since there is 3480 3349 * no parent child relationship between the NHI and the tunneled ··· 3489 3358 return; 3490 3359 if (pci_pcie_type(dev) != PCI_EXP_TYPE_DOWNSTREAM) 3491 3360 return; 3361 + 3492 3362 /* 3493 - * Find the NHI and confirm that we are a bridge on the tb host 3494 - * controller and not on a tb endpoint. 3363 + * Find the NHI and confirm that we are a bridge on the Thunderbolt 3364 + * host controller and not on a Thunderbolt endpoint. 3495 3365 */ 3496 3366 sibling = pci_get_slot(dev->bus, 0x0); 3497 3367 if (sibling == dev) ··· 3509 3377 nhi->device != PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI) 3510 3378 || nhi->class != PCI_CLASS_SYSTEM_OTHER << 8) 3511 3379 goto out; 3512 - pci_info(dev, "quirk: waiting for thunderbolt to reestablish PCI tunnels...\n"); 3380 + pci_info(dev, "quirk: waiting for Thunderbolt to reestablish PCI tunnels...\n"); 3513 3381 device_pm_wait_for_dev(&dev->dev, &nhi->dev); 3514 3382 out: 3515 3383 pci_dev_put(nhi); ··· 3528 3396 PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_BRIDGE, 3529 3397 quirk_apple_wait_for_thunderbolt); 3530 3398 #endif 3531 - 3532 - static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f, 3533 - struct pci_fixup *end) 3534 - { 3535 - ktime_t calltime; 3536 - 3537 - for (; f < end; f++) 3538 - if ((f->class == (u32) (dev->class >> f->class_shift) || 3539 - f->class == (u32) PCI_ANY_ID) && 3540 - (f->vendor == dev->vendor || 3541 - f->vendor == (u16) PCI_ANY_ID) && 3542 - (f->device == dev->device || 3543 - f->device == (u16) PCI_ANY_ID)) { 3544 - calltime = fixup_debug_start(dev, f->hook); 3545 - f->hook(dev); 3546 - 
fixup_debug_report(dev, calltime, f->hook); 3547 - } 3548 - } 3549 - 3550 - extern struct pci_fixup __start_pci_fixups_early[]; 3551 - extern struct pci_fixup __end_pci_fixups_early[]; 3552 - extern struct pci_fixup __start_pci_fixups_header[]; 3553 - extern struct pci_fixup __end_pci_fixups_header[]; 3554 - extern struct pci_fixup __start_pci_fixups_final[]; 3555 - extern struct pci_fixup __end_pci_fixups_final[]; 3556 - extern struct pci_fixup __start_pci_fixups_enable[]; 3557 - extern struct pci_fixup __end_pci_fixups_enable[]; 3558 - extern struct pci_fixup __start_pci_fixups_resume[]; 3559 - extern struct pci_fixup __end_pci_fixups_resume[]; 3560 - extern struct pci_fixup __start_pci_fixups_resume_early[]; 3561 - extern struct pci_fixup __end_pci_fixups_resume_early[]; 3562 - extern struct pci_fixup __start_pci_fixups_suspend[]; 3563 - extern struct pci_fixup __end_pci_fixups_suspend[]; 3564 - extern struct pci_fixup __start_pci_fixups_suspend_late[]; 3565 - extern struct pci_fixup __end_pci_fixups_suspend_late[]; 3566 - 3567 - static bool pci_apply_fixup_final_quirks; 3568 - 3569 - void pci_fixup_device(enum pci_fixup_pass pass, struct pci_dev *dev) 3570 - { 3571 - struct pci_fixup *start, *end; 3572 - 3573 - switch (pass) { 3574 - case pci_fixup_early: 3575 - start = __start_pci_fixups_early; 3576 - end = __end_pci_fixups_early; 3577 - break; 3578 - 3579 - case pci_fixup_header: 3580 - start = __start_pci_fixups_header; 3581 - end = __end_pci_fixups_header; 3582 - break; 3583 - 3584 - case pci_fixup_final: 3585 - if (!pci_apply_fixup_final_quirks) 3586 - return; 3587 - start = __start_pci_fixups_final; 3588 - end = __end_pci_fixups_final; 3589 - break; 3590 - 3591 - case pci_fixup_enable: 3592 - start = __start_pci_fixups_enable; 3593 - end = __end_pci_fixups_enable; 3594 - break; 3595 - 3596 - case pci_fixup_resume: 3597 - start = __start_pci_fixups_resume; 3598 - end = __end_pci_fixups_resume; 3599 - break; 3600 - 3601 - case pci_fixup_resume_early: 3602 - 
start = __start_pci_fixups_resume_early; 3603 - end = __end_pci_fixups_resume_early; 3604 - break; 3605 - 3606 - case pci_fixup_suspend: 3607 - start = __start_pci_fixups_suspend; 3608 - end = __end_pci_fixups_suspend; 3609 - break; 3610 - 3611 - case pci_fixup_suspend_late: 3612 - start = __start_pci_fixups_suspend_late; 3613 - end = __end_pci_fixups_suspend_late; 3614 - break; 3615 - 3616 - default: 3617 - /* stupid compiler warning, you would think with an enum... */ 3618 - return; 3619 - } 3620 - pci_do_fixups(dev, start, end); 3621 - } 3622 - EXPORT_SYMBOL(pci_fixup_device); 3623 - 3624 - 3625 - static int __init pci_apply_final_quirks(void) 3626 - { 3627 - struct pci_dev *dev = NULL; 3628 - u8 cls = 0; 3629 - u8 tmp; 3630 - 3631 - if (pci_cache_line_size) 3632 - printk(KERN_DEBUG "PCI: CLS %u bytes\n", 3633 - pci_cache_line_size << 2); 3634 - 3635 - pci_apply_fixup_final_quirks = true; 3636 - for_each_pci_dev(dev) { 3637 - pci_fixup_device(pci_fixup_final, dev); 3638 - /* 3639 - * If arch hasn't set it explicitly yet, use the CLS 3640 - * value shared by all PCI devices. If there's a 3641 - * mismatch, fall back to the default value. 3642 - */ 3643 - if (!pci_cache_line_size) { 3644 - pci_read_config_byte(dev, PCI_CACHE_LINE_SIZE, &tmp); 3645 - if (!cls) 3646 - cls = tmp; 3647 - if (!tmp || cls == tmp) 3648 - continue; 3649 - 3650 - printk(KERN_DEBUG "PCI: CLS mismatch (%u != %u), using %u bytes\n", 3651 - cls << 2, tmp << 2, 3652 - pci_dfl_cache_line_size << 2); 3653 - pci_cache_line_size = pci_dfl_cache_line_size; 3654 - } 3655 - } 3656 - 3657 - if (!pci_cache_line_size) { 3658 - printk(KERN_DEBUG "PCI: CLS %u bytes, default %u\n", 3659 - cls << 2, pci_dfl_cache_line_size << 2); 3660 - pci_cache_line_size = cls ? 
cls : pci_dfl_cache_line_size; 3661 - } 3662 - 3663 - return 0; 3664 - } 3665 - 3666 - fs_initcall_sync(pci_apply_final_quirks); 3667 3399 3668 3400 /* 3669 3401 * Following are device-specific reset methods which can be used to ··· 3598 3602 return 0; 3599 3603 } 3600 3604 3601 - /* 3602 - * Device-specific reset method for Chelsio T4-based adapters. 3603 - */ 3605 + /* Device-specific reset method for Chelsio T4-based adapters */ 3604 3606 static int reset_chelsio_generic_dev(struct pci_dev *dev, int probe) 3605 3607 { 3606 3608 u16 old_command; ··· 3881 3887 /* 3882 3888 * Some devices have problems with Transaction Layer Packets with the Relaxed 3883 3889 * Ordering Attribute set. Such devices should mark themselves and other 3884 - * Device Drivers should check before sending TLPs with RO set. 3890 + * device drivers should check before sending TLPs with RO set. 3885 3891 */ 3886 3892 static void quirk_relaxedordering_disable(struct pci_dev *dev) 3887 3893 { ··· 3891 3897 3892 3898 /* 3893 3899 * Intel Xeon processors based on Broadwell/Haswell microarchitecture Root 3894 - * Complex has a Flow Control Credit issue which can cause performance 3900 + * Complex have a Flow Control Credit issue which can cause performance 3895 3901 * problems with Upstream Transaction Layer Packets with Relaxed Ordering set. 3896 3902 */ 3897 3903 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f01, PCI_CLASS_NOT_DEFINED, 8, ··· 3952 3958 quirk_relaxedordering_disable); 3953 3959 3954 3960 /* 3955 - * The AMD ARM A1100 (AKA "SEATTLE") SoC has a bug in its PCIe Root Complex 3961 + * The AMD ARM A1100 (aka "SEATTLE") SoC has a bug in its PCIe Root Complex 3956 3962 * where Upstream Transaction Layer Packets with the Relaxed Ordering 3957 3963 * Attribute clear are allowed to bypass earlier TLPs with Relaxed Ordering 3958 3964 * set. 
This is a violation of the PCIe 3.0 Transaction Ordering Rules ··· 4016 4022 * This mask/compare operation selects for Physical Function 4 on a 4017 4023 * T5. We only need to fix up the Root Port once for any of the 4018 4024 * PFs. PF[0..3] have PCI Device IDs of 0x50xx, but PF4 is uniquely 4019 - * 0x54xx so we use that one, 4025 + * 0x54xx so we use that one. 4020 4026 */ 4021 4027 if ((pdev->device & 0xff00) == 0x5400) 4022 4028 quirk_disable_root_port_attributes(pdev); ··· 4107 4113 static int pci_quirk_xgene_acs(struct pci_dev *dev, u16 acs_flags) 4108 4114 { 4109 4115 /* 4110 - * X-Gene root matching this quirk do not allow peer-to-peer 4116 + * X-Gene Root Ports matching this quirk do not allow peer-to-peer 4111 4117 * transactions with others, allowing masking out these bits as if they 4112 4118 * were unimplemented in the ACS capability. 4113 4119 */ ··· 4224 4230 * 0xa290-0xa29f PCI Express Root port #{0-16} 4225 4231 * 0xa2e7-0xa2ee PCI Express Root port #{17-24} 4226 4232 * 4233 + * Mobile chipsets are also affected, 7th & 8th Generation 4234 + * Specification update confirms ACS errata 22, status no fix: (7th Generation 4235 + * Intel Processor Family I/O for U/Y Platforms and 8th Generation Intel 4236 + * Processor Family I/O for U Quad Core Platforms Specification Update, 4237 + * August 2017, Revision 002, Document#: 334660-002)[6] 4238 + * Device IDs from I/O datasheet: (7th Generation Intel Processor Family I/O 4239 + * for U/Y Platforms and 8th Generation Intel ® Processor Family I/O for U 4240 + * Quad Core Platforms, Vol 1 of 2, August 2017, Document#: 334658-003)[7] 4241 + * 4242 + * 0x9d10-0x9d1b PCI Express Root port #{1-12} 4243 + * 4244 + * The 300 series chipset suffers from the same bug so include those root 4245 + * ports here as well. 
4246 + * 4247 + * 0xa32c-0xa343 PCI Express Root port #{0-24} 4248 + * 4227 4249 * [1] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-2.html 4228 4250 * [2] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-1.html 4229 4251 * [3] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-spec-update.html 4230 4252 * [4] http://www.intel.com/content/www/us/en/chipsets/200-series-chipset-pch-spec-update.html 4231 4253 * [5] http://www.intel.com/content/www/us/en/chipsets/200-series-chipset-pch-datasheet-vol-1.html 4254 + * [6] https://www.intel.com/content/www/us/en/processors/core/7th-gen-core-family-mobile-u-y-processor-lines-i-o-spec-update.html 4255 + * [7] https://www.intel.com/content/www/us/en/processors/core/7th-gen-core-family-mobile-u-y-processor-lines-i-o-datasheet-vol-1.html 4232 4256 */ 4233 4257 static bool pci_quirk_intel_spt_pch_acs_match(struct pci_dev *dev) 4234 4258 { ··· 4256 4244 switch (dev->device) { 4257 4245 case 0xa110 ... 0xa11f: case 0xa167 ... 0xa16a: /* Sunrise Point */ 4258 4246 case 0xa290 ... 0xa29f: case 0xa2e7 ... 0xa2ee: /* Union Point */ 4247 + case 0x9d10 ... 0x9d1b: /* 7th & 8th Gen Mobile */ 4248 + case 0xa32c ... 0xa343: /* 300 series */ 4259 4249 return true; 4260 4250 } 4261 4251 ··· 4375 4361 { PCI_VENDOR_ID_INTEL, 0x15b7, pci_quirk_mf_endpoint_acs }, 4376 4362 { PCI_VENDOR_ID_INTEL, 0x15b8, pci_quirk_mf_endpoint_acs }, 4377 4363 /* QCOM QDF2xxx root ports */ 4378 - { 0x17cb, 0x400, pci_quirk_qcom_rp_acs }, 4379 - { 0x17cb, 0x401, pci_quirk_qcom_rp_acs }, 4364 + { PCI_VENDOR_ID_QCOM, 0x0400, pci_quirk_qcom_rp_acs }, 4365 + { PCI_VENDOR_ID_QCOM, 0x0401, pci_quirk_qcom_rp_acs }, 4380 4366 /* Intel PCH root ports */ 4381 4367 { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_intel_pch_acs }, 4382 4368 { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_intel_spt_pch_acs }, ··· 4450 4436 /* 4451 4437 * Read the RCBA register from the LPC (D31:F0). 
PCH root ports 4452 4438 * are D28:F* and therefore get probed before LPC, thus we can't 4453 - * use pci_get_slot/pci_read_config_dword here. 4439 + * use pci_get_slot()/pci_read_config_dword() here. 4454 4440 */ 4455 4441 pci_bus_read_config_dword(dev->bus, PCI_DEVFN(31, 0), 4456 4442 INTEL_LPC_RCBA_REG, &rcba); ··· 4583 4569 } 4584 4570 4585 4571 /* 4586 - * The PCI capabilities list for Intel DH895xCC VFs (device id 0x0443) with 4572 + * The PCI capabilities list for Intel DH895xCC VFs (device ID 0x0443) with 4587 4573 * QuickAssist Technology (QAT) is prematurely terminated in hardware. The 4588 4574 * Next Capability pointer in the MSI Capability Structure should point to 4589 4575 * the PCIe Capability Structure but is incorrectly hardwired as 0 terminating ··· 4644 4630 if (pci_find_saved_cap(pdev, PCI_CAP_ID_EXP)) 4645 4631 return; 4646 4632 4647 - /* 4648 - * Save PCIE cap 4649 - */ 4633 + /* Save PCIe cap */ 4650 4634 state = kzalloc(sizeof(*state) + size, GFP_KERNEL); 4651 4635 if (!state) 4652 4636 return; ··· 4665 4653 } 4666 4654 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x443, quirk_intel_qat_vf_cap); 4667 4655 4668 - /* FLR may cause some 82579 devices to hang. */ 4656 + /* FLR may cause some 82579 devices to hang */ 4669 4657 static void quirk_intel_no_flr(struct pci_dev *dev) 4670 4658 { 4671 4659 dev->dev_flags |= PCI_DEV_FLAGS_NO_FLR_RESET;
+38 -38
drivers/pci/setup-bus.c
··· 1943 1943 } 1944 1944 1945 1945 /* 1946 + * There is only one bridge on the bus so it gets all available 1947 + * resources which it can then distribute to the possible 1948 + * hotplug bridges below. 1949 + */ 1950 + if (hotplug_bridges + normal_bridges == 1) { 1951 + dev = list_first_entry(&bus->devices, struct pci_dev, bus_list); 1952 + if (dev->subordinate) { 1953 + pci_bus_distribute_available_resources(dev->subordinate, 1954 + add_list, available_io, available_mmio, 1955 + available_mmio_pref); 1956 + } 1957 + return; 1958 + } 1959 + 1960 + /* 1946 1961 * Go over devices on this bus and distribute the remaining 1947 1962 * resource space between hotplug bridges. 1948 1963 */ 1949 1964 for_each_pci_bridge(dev, bus) { 1965 + resource_size_t align, io, mmio, mmio_pref; 1950 1966 struct pci_bus *b; 1951 1967 1952 1968 b = dev->subordinate; 1953 - if (!b) 1969 + if (!b || !dev->is_hotplug_bridge) 1954 1970 continue; 1955 1971 1956 - if (!hotplug_bridges && normal_bridges == 1) { 1957 - /* 1958 - * There is only one bridge on the bus (upstream 1959 - * port) so it gets all available resources 1960 - * which it can then distribute to the possible 1961 - * hotplug bridges below. 1962 - */ 1963 - pci_bus_distribute_available_resources(b, add_list, 1964 - available_io, available_mmio, 1965 - available_mmio_pref); 1966 - } else if (dev->is_hotplug_bridge) { 1967 - resource_size_t align, io, mmio, mmio_pref; 1972 + /* 1973 + * Distribute available extra resources equally between 1974 + * hotplug-capable downstream ports taking alignment into 1975 + * account. 1976 + * 1977 + * Here hotplug_bridges is always != 0. 1978 + */ 1979 + align = pci_resource_alignment(bridge, io_res); 1980 + io = div64_ul(available_io, hotplug_bridges); 1981 + io = min(ALIGN(io, align), remaining_io); 1982 + remaining_io -= io; 1968 1983 1969 - /* 1970 - * Distribute available extra resources equally 1971 - * between hotplug-capable downstream ports 1972 - * taking alignment into account. 
1973 - * 1974 - * Here hotplug_bridges is always != 0. 1975 - */ 1976 - align = pci_resource_alignment(bridge, io_res); 1977 - io = div64_ul(available_io, hotplug_bridges); 1978 - io = min(ALIGN(io, align), remaining_io); 1979 - remaining_io -= io; 1984 + align = pci_resource_alignment(bridge, mmio_res); 1985 + mmio = div64_ul(available_mmio, hotplug_bridges); 1986 + mmio = min(ALIGN(mmio, align), remaining_mmio); 1987 + remaining_mmio -= mmio; 1980 1988 1981 - align = pci_resource_alignment(bridge, mmio_res); 1982 - mmio = div64_ul(available_mmio, hotplug_bridges); 1983 - mmio = min(ALIGN(mmio, align), remaining_mmio); 1984 - remaining_mmio -= mmio; 1989 + align = pci_resource_alignment(bridge, mmio_pref_res); 1990 + mmio_pref = div64_ul(available_mmio_pref, hotplug_bridges); 1991 + mmio_pref = min(ALIGN(mmio_pref, align), remaining_mmio_pref); 1992 + remaining_mmio_pref -= mmio_pref; 1985 1993 1986 - align = pci_resource_alignment(bridge, mmio_pref_res); 1987 - mmio_pref = div64_ul(available_mmio_pref, 1988 - hotplug_bridges); 1989 - mmio_pref = min(ALIGN(mmio_pref, align), 1990 - remaining_mmio_pref); 1991 - remaining_mmio_pref -= mmio_pref; 1992 - 1993 - pci_bus_distribute_available_resources(b, add_list, io, 1994 - mmio, mmio_pref); 1995 - } 1994 + pci_bus_distribute_available_resources(b, add_list, io, mmio, 1995 + mmio_pref); 1996 1996 } 1997 1997 } 1998 1998
+2 -1
include/linux/acpi.h
···
 #define OSC_PCI_EXPRESS_PME_CONTROL		0x00000004
 #define OSC_PCI_EXPRESS_AER_CONTROL		0x00000008
 #define OSC_PCI_EXPRESS_CAPABILITY_CONTROL	0x00000010
-#define OSC_PCI_CONTROL_MASKS			0x0000001f
+#define OSC_PCI_EXPRESS_LTR_CONTROL		0x00000020
+#define OSC_PCI_CONTROL_MASKS			0x0000003f

 #define ACPI_GSB_ACCESS_ATTRIB_QUICK		0x00000002
 #define ACPI_GSB_ACCESS_ATTRIB_SEND_RCV	0x00000004
+1
include/linux/aer.h
···
 #define AER_NONFATAL			0
 #define AER_FATAL			1
 #define AER_CORRECTABLE			2
+#define DPC_FATAL			3

 struct pci_dev;
-34
include/linux/of_pci.h
···
 struct device_node *of_pci_find_child_device(struct device_node *parent,
 					     unsigned int devfn);
 int of_pci_get_devfn(struct device_node *np);
-int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
-int of_get_pci_domain_nr(struct device_node *node);
-int of_pci_get_max_link_speed(struct device_node *node);
 void of_pci_check_probe_only(void);
 int of_pci_map_rid(struct device_node *np, u32 rid,
 		   const char *map_name, const char *map_mask_name,
···
 	return -EINVAL;
 }

-static inline int
-of_pci_parse_bus_range(struct device_node *node, struct resource *res)
-{
-	return -EINVAL;
-}
-
-static inline int
-of_get_pci_domain_nr(struct device_node *node)
-{
-	return -1;
-}
-
 static inline int of_pci_map_rid(struct device_node *np, u32 rid,
 		   const char *map_name, const char *map_mask_name,
 		   struct device_node **target, u32 *id_out)
-{
-	return -EINVAL;
-}
-
-static inline int
-of_pci_get_max_link_speed(struct device_node *node)
 {
 	return -EINVAL;
 }
···
 of_irq_parse_and_map_pci(const struct pci_dev *dev, u8 slot, u8 pin)
 {
 	return 0;
-}
-#endif
-
-#if defined(CONFIG_OF_ADDRESS)
-int of_pci_get_host_bridge_resources(struct device_node *dev,
-			unsigned char busno, unsigned char bus_max,
-			struct list_head *resources, resource_size_t *io_base);
-#else
-static inline int of_pci_get_host_bridge_resources(struct device_node *dev,
-			unsigned char busno, unsigned char bus_max,
-			struct list_head *resources, resource_size_t *io_base)
-{
-	return -EINVAL;
 }
 #endif
+1
include/linux/pci-ecam.h
···
 /* for DT-based PCI controllers that support ECAM */
 int pci_host_common_probe(struct platform_device *pdev,
 			  struct pci_ecam_ops *ops);
+int pci_host_common_remove(struct platform_device *pdev);
 #endif
 #endif
+8
include/linux/pci-epc.h
···
 	struct config_group		*group;
 	/* spinlock to protect against concurrent access of EP controller */
 	spinlock_t			lock;
+	unsigned int			features;
 };
+
+#define EPC_FEATURE_NO_LINKUP_NOTIFIER	BIT(0)
+#define EPC_FEATURE_BAR_MASK		(BIT(1) | BIT(2) | BIT(3))
+#define EPC_FEATURE_SET_BAR(features, bar) \
+		(features |= (EPC_FEATURE_BAR_MASK & (bar << 1)))
+#define EPC_FEATURE_GET_BAR(features) \
+		((features & EPC_FEATURE_BAR_MASK) >> 1)

 #define to_pci_epc(device) container_of((device), struct pci_epc, dev)
+2 -2
include/linux/pci-epf.h
···
  * @driver: PCI EPF driver
  * @ops: set of function pointers for performing EPF operations
  * @owner: the owner of the module that registers the PCI EPF driver
- * @group: configfs group corresponding to the PCI EPF driver
+ * @epf_group: list of configfs group corresponding to the PCI EPF driver
  * @id_table: identifies EPF devices for probing
  */
 struct pci_epf_driver {
···
 	struct device_driver	driver;
 	struct pci_epf_ops	*ops;
 	struct module		*owner;
-	struct config_group	*group;
+	struct list_head	epf_group;
 	const struct pci_epf_device_id	*id_table;
 };
+14 -7
include/linux/pci.h
···
 	PCI_BUS_FLAGS_NO_MSI	= (__force pci_bus_flags_t) 1,
 	PCI_BUS_FLAGS_NO_MMRBC	= (__force pci_bus_flags_t) 2,
 	PCI_BUS_FLAGS_NO_AERSID	= (__force pci_bus_flags_t) 4,
+	PCI_BUS_FLAGS_NO_EXTCFG	= (__force pci_bus_flags_t) 8,
 };

 /* Values from Link Status register, PCIe r3.1, sec 7.8.8 */
···
 	struct bin_attribute *res_attr[DEVICE_COUNT_RESOURCE]; /* sysfs file for resources */
 	struct bin_attribute *res_attr_wc[DEVICE_COUNT_RESOURCE]; /* sysfs file for WC mapping of resources */

+#ifdef CONFIG_HOTPLUG_PCI_PCIE
+	unsigned int	broken_cmd_compl:1;	/* No compl for some cmds */
+#endif
 #ifdef CONFIG_PCIE_PTM
 	unsigned int	ptm_root:1;
 	unsigned int	ptm_enabled:1;
···
 	unsigned int	ignore_reset_delay:1;	/* For entire hierarchy */
 	unsigned int	no_ext_tags:1;		/* No Extended Tags */
 	unsigned int	native_aer:1;		/* OS may use PCIe AER */
-	unsigned int	native_hotplug:1;	/* OS may use PCIe hotplug */
+	unsigned int	native_pcie_hotplug:1;	/* OS may use PCIe hotplug */
+	unsigned int	native_shpc_hotplug:1;	/* OS may use SHPC hotplug */
 	unsigned int	native_pme:1;		/* OS may use PCIe PME */
+	unsigned int	native_ltr:1;		/* OS may use PCIe LTR */
 	/* Resource alignment requirements */
 	resource_size_t (*align_resource)(struct pci_dev *dev,
 			const struct resource *res,
···
 int pcie_set_readrq(struct pci_dev *dev, int rq);
 int pcie_get_mps(struct pci_dev *dev);
 int pcie_set_mps(struct pci_dev *dev, int mps);
-int pcie_get_minimum_link(struct pci_dev *dev, enum pci_bus_speed *speed,
-			  enum pcie_link_width *width);
 u32 pcie_bandwidth_available(struct pci_dev *dev, struct pci_dev **limiting_dev,
 			     enum pci_bus_speed *speed,
 			     enum pcie_link_width *width);
···
 #ifdef CONFIG_PCIEPORTBUS
 extern bool pcie_ports_disabled;
+extern bool pcie_ports_native;
 #else
 #define pcie_ports_disabled	true
+#define pcie_ports_native	false
 #endif
···
 static inline void pcie_set_ecrc_checking(struct pci_dev *dev) { }
 static inline void pcie_ecrc_get_policy(char *str) { }
 #endif
+
+bool pci_ats_disabled(void);

 #ifdef CONFIG_PCI_ATS
 /* Address Translation Service */
···
  */
 #ifdef CONFIG_PCI_DOMAINS
 extern int pci_domains_supported;
-int pci_get_new_domain_nr(void);
 #else
 enum { pci_domains_supported = 0 };
 static inline int pci_domain_nr(struct pci_bus *bus) { return 0; }
 static inline int pci_proc_domain(struct pci_bus *bus) { return 0; }
-static inline int pci_get_new_domain_nr(void) { return -ENOSYS; }
 #endif /* CONFIG_PCI_DOMAINS */
···
 static inline int pci_domain_nr(struct pci_bus *bus) { return 0; }
 static inline struct pci_dev *pci_dev_get(struct pci_dev *dev) { return NULL; }
-static inline int pci_get_new_domain_nr(void) { return -ENOSYS; }

 #define dev_is_pci(d) (false)
 #define dev_is_pf(d) (false)
···
 int pci_vfs_assigned(struct pci_dev *dev);
 int pci_sriov_set_totalvfs(struct pci_dev *dev, u16 numvfs);
 int pci_sriov_get_totalvfs(struct pci_dev *dev);
+int pci_sriov_configure_simple(struct pci_dev *dev, int nr_virtfn);
 resource_size_t pci_iov_resource_size(struct pci_dev *dev, int resno);
 void pci_vf_drivers_autoprobe(struct pci_dev *dev, bool probe);
···
 { return 0; }
 static inline int pci_sriov_get_totalvfs(struct pci_dev *dev)
 { return 0; }
+#define pci_sriov_configure_simple	NULL
 static inline resource_size_t pci_iov_resource_size(struct pci_dev *dev,
 						    int resno)
 { return 0; }
 static inline void pci_vf_drivers_autoprobe(struct pci_dev *dev, bool probe) { }
···
 	return false;
 }

-#if defined(CONFIG_PCIEAER) || defined(CONFIG_EEH)
+#if defined(CONFIG_PCIEPORTBUS) || defined(CONFIG_EEH)
 void pci_uevent_ers(struct pci_dev *pdev, enum pci_ers_result err_type);
 #endif
+15 -3
include/linux/pci_hotplug.h
···
 #ifdef CONFIG_ACPI
 #include <linux/acpi.h>
 int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp);
-bool pciehp_is_native(struct pci_dev *pdev);
-int acpi_get_hp_hw_control_from_firmware(struct pci_dev *dev, u32 flags);
+bool pciehp_is_native(struct pci_dev *bridge);
+int acpi_get_hp_hw_control_from_firmware(struct pci_dev *bridge);
+bool shpchp_is_native(struct pci_dev *bridge);
 int acpi_pci_check_ejectable(struct pci_bus *pbus, acpi_handle handle);
 int acpi_pci_detect_ejectable(acpi_handle handle);
 #else
···
 {
 	return -ENODEV;
 }
-static inline bool pciehp_is_native(struct pci_dev *pdev) { return true; }
+
+static inline int acpi_get_hp_hw_control_from_firmware(struct pci_dev *bridge)
+{
+	return 0;
+}
+static inline bool pciehp_is_native(struct pci_dev *bridge) { return true; }
+static inline bool shpchp_is_native(struct pci_dev *bridge) { return true; }
 #endif
+
+static inline bool hotplug_is_native(struct pci_dev *bridge)
+{
+	return pciehp_is_native(bridge) || shpchp_is_native(bridge);
+}
 #endif
+9
include/linux/pci_ids.h
···
 #define PCI_DEVICE_ID_AMD_OPUS_7443	0x7443
 #define PCI_DEVICE_ID_AMD_VIPER_7443	0x7443
 #define PCI_DEVICE_ID_AMD_OPUS_7445	0x7445
+#define PCI_DEVICE_ID_AMD_GOLAM_7450	0x7450
 #define PCI_DEVICE_ID_AMD_8111_PCI	0x7460
 #define PCI_DEVICE_ID_AMD_8111_LPC	0x7468
 #define PCI_DEVICE_ID_AMD_8111_IDE	0x7469
···
 #define PCI_VENDOR_ID_MYRICOM		0x14c1

+#define PCI_VENDOR_ID_MEDIATEK		0x14c3
+
 #define PCI_VENDOR_ID_TITAN		0x14D2
 #define PCI_DEVICE_ID_TITAN_010L	0x8001
 #define PCI_DEVICE_ID_TITAN_100L	0x8010
···
 #define PCI_VENDOR_ID_LENOVO		0x17aa

+#define PCI_VENDOR_ID_QCOM		0x17cb
+
 #define PCI_VENDOR_ID_CDNS		0x17cd

 #define PCI_VENDOR_ID_ARECA		0x17d3
···
 #define PCI_VENDOR_ID_CIRCUITCO		0x1cc8
 #define PCI_SUBSYSTEM_ID_CIRCUITCO_MINNOWBOARD	0x0001

+#define PCI_VENDOR_ID_AMAZON		0x1d0f
+
 #define PCI_VENDOR_ID_TEKRAM		0x1de1
 #define PCI_DEVICE_ID_TEKRAM_DC290	0xdc29
···
 #define PCI_DEVICE_ID_INTEL_PANTHERPOINT_XHCI	0x1e31
 #define PCI_DEVICE_ID_INTEL_PANTHERPOINT_LPC_MIN	0x1e40
 #define PCI_DEVICE_ID_INTEL_PANTHERPOINT_LPC_MAX	0x1e5f
+#define PCI_DEVICE_ID_INTEL_VMD_201D	0x201d
 #define PCI_DEVICE_ID_INTEL_DH89XXCC_LPC_MIN	0x2310
 #define PCI_DEVICE_ID_INTEL_DH89XXCC_LPC_MAX	0x231f
 #define PCI_DEVICE_ID_INTEL_82801AA_0	0x2410
···
 #define PCI_DEVICE_ID_INTEL_ICH8_4	0x2815
 #define PCI_DEVICE_ID_INTEL_ICH8_5	0x283e
 #define PCI_DEVICE_ID_INTEL_ICH8_6	0x2850
+#define PCI_DEVICE_ID_INTEL_VMD_28C0	0x28c0
 #define PCI_DEVICE_ID_INTEL_ICH9_0	0x2910
 #define PCI_DEVICE_ID_INTEL_ICH9_1	0x2917
 #define PCI_DEVICE_ID_INTEL_ICH9_2	0x2912
+18 -4
include/ras/ras_event.h
···
 TRACE_EVENT(aer_event,
 	TP_PROTO(const char *dev_name,
 		 const u32 status,
-		 const u8 severity),
+		 const u8 severity,
+		 const u8 tlp_header_valid,
+		 struct aer_header_log_regs *tlp),

-	TP_ARGS(dev_name, status, severity),
+	TP_ARGS(dev_name, status, severity, tlp_header_valid, tlp),

 	TP_STRUCT__entry(
 		__string(	dev_name,	dev_name	)
 		__field(	u32,		status		)
 		__field(	u8,		severity	)
+		__field(	u8,		tlp_header_valid)
+		__array(	u32,		tlp_header, 4	)
 	),

 	TP_fast_assign(
 		__assign_str(dev_name, dev_name);
 		__entry->status		= status;
 		__entry->severity	= severity;
+		__entry->tlp_header_valid = tlp_header_valid;
+		if (tlp_header_valid) {
+			__entry->tlp_header[0] = tlp->dw0;
+			__entry->tlp_header[1] = tlp->dw1;
+			__entry->tlp_header[2] = tlp->dw2;
+			__entry->tlp_header[3] = tlp->dw3;
+		}
 	),

-	TP_printk("%s PCIe Bus Error: severity=%s, %s\n",
+	TP_printk("%s PCIe Bus Error: severity=%s, %s, TLP Header=%s\n",
 		__get_str(dev_name),
 		__entry->severity == AER_CORRECTABLE ? "Corrected" :
 			__entry->severity == AER_FATAL ?
 			"Fatal" : "Uncorrected, non-fatal",
 		__entry->severity == AER_CORRECTABLE ?
 		__print_flags(__entry->status, "|", aer_correctable_errors) :
-		__print_flags(__entry->status, "|", aer_uncorrectable_errors))
+		__print_flags(__entry->status, "|", aer_uncorrectable_errors),
+		__entry->tlp_header_valid ?
+			__print_array(__entry->tlp_header, 4, 4) :
+			"Not available")
 );

 /*
+6
include/uapi/linux/pci_regs.h
···
 #define  PCI_EXP_LNKCAP2_SLS_16_0GB	0x00000010 /* Supported Speed 16GT/s */
 #define  PCI_EXP_LNKCAP2_CROSSLINK	0x00000100 /* Crosslink supported */
 #define PCI_EXP_LNKCTL2		48	/* Link Control 2 */
+#define  PCI_EXP_LNKCTL2_TLS		0x000f
+#define  PCI_EXP_LNKCTL2_TLS_2_5GT	0x0001 /* Supported Speed 2.5GT/s */
+#define  PCI_EXP_LNKCTL2_TLS_5_0GT	0x0002 /* Supported Speed 5GT/s */
+#define  PCI_EXP_LNKCTL2_TLS_8_0GT	0x0003 /* Supported Speed 8GT/s */
+#define  PCI_EXP_LNKCTL2_TLS_16_0GT	0x0004 /* Supported Speed 16GT/s */
 #define PCI_EXP_LNKSTA2		50	/* Link Status 2 */
 #define PCI_CAP_EXP_ENDPOINT_SIZEOF_V2	52	/* v2 endpoints with link end here */
 #define PCI_EXP_SLTCAP2		52	/* Slot Capabilities 2 */
···
 #define  PCI_EXP_DPC_CAP_DL_ACTIVE	0x1000	/* ERR_COR signal on DL_Active supported */

 #define PCI_EXP_DPC_CTL		6	/* DPC control */
+#define  PCI_EXP_DPC_CTL_EN_FATAL	0x0001	/* Enable trigger on ERR_FATAL message */
 #define  PCI_EXP_DPC_CTL_EN_NONFATAL	0x0002	/* Enable trigger on ERR_NONFATAL message */
 #define  PCI_EXP_DPC_CTL_INT_EN	0x0008	/* DPC Interrupt Enable */