
Merge tag 'pci-v5.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:
- Move the VGA arbiter from drivers/gpu to drivers/pci because it's
PCI-specific, not GPU-specific (Bjorn Helgaas)
- Select the default VGA device consistently whether it's enumerated
before or after VGA arbiter init, which fixes arches that enumerate
PCI devices late (Huacai Chen)

Resource management:
- Support BAR sizes up to 8TB (Dongdong Liu)

PCIe native device hotplug:
- Fix "Command Completed" tracking to avoid spurious timeouts when
powering off empty slots (Liguang Zhang)
- Quirk Qualcomm devices that don't implement Command Completed
correctly, again to avoid spurious timeouts (Manivannan Sadhasivam)

Peer-to-peer DMA:
- Add 3rd Gen Intel Xeon Scalable Processors to whitelist
(Michael J. Ruhl)

APM X-Gene PCIe controller driver:
- Revert generic DT parsing changes that broke some machines in the
field (Marc Zyngier)

Freescale i.MX6 PCIe controller driver:
- Allow controller probe to succeed even when no devices are
currently present, to allow hot-add later (Fabio Estevam)
- Enable power management on i.MX6QP (Richard Zhu)
- Assert CLKREQ# on i.MX8MM so enumeration doesn't hang when no
device is connected (Richard Zhu)

Marvell Aardvark PCIe controller driver:
- Fix MSI and MSI-X support (Marek Behún, Pali Rohár)
- Add support for ERR and PME interrupts (Pali Rohár)

Marvell MVEBU PCIe controller driver:
- Add DT binding and support for "num-lanes" (Pali Rohár)
- Add support for INTx interrupts (Pali Rohár)

Microsoft Hyper-V host bridge driver:
- Avoid unnecessary hypercalls when unmasking IRQs on ARM64 (Boqun
Feng)

Qualcomm PCIe controller driver:
- Add SM8450 DT binding and driver support (Dmitry Baryshkov)

Renesas R-Car PCIe controller driver:
- Help the controller get to the L1 state since the hardware can't do
it on its own (Marek Vasut)
- Return PCI_ERROR_RESPONSE (~0) for reads that fail on PCIe (Marek
Vasut)

SiFive FU740 PCIe controller driver:
- Drop redundant '-gpios' from DT GPIO lookup (Ben Dooks)
- Force 2.5GT/s for initial device probe (Ben Dooks)

Socionext UniPhier Pro5 PCIe controller driver:
- Add NX1 DT binding and driver support (Kunihiko Hayashi)

Synopsys DesignWare PCIe controller driver:
- Restore MSI configuration so MSI works after resume (Jisheng
Zhang)"

* tag 'pci-v5.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (94 commits)
x86/PCI: Add #includes to asm/pci_x86.h
PCI: ibmphp: Remove unused assignments
PCI: cpqphp: Remove unused assignments
PCI: fu740: Remove unused assignments
PCI: kirin: Remove unused assignments
PCI: Remove unused assignments
PCI: Declare pci_filp_private only when HAVE_PCI_MMAP
PCI: Avoid broken MSI on SB600 USB devices
PCI: fu740: Force 2.5GT/s for initial device probe
PCI: xgene: Revert "PCI: xgene: Fix IB window setup"
PCI: xgene: Revert "PCI: xgene: Use inbound resources for setup"
PCI: imx6: Assert i.MX8MM CLKREQ# even if no device present
PCI: imx6: Invoke the PHY exit function after PHY power off
PCI: rcar: Use PCI_SET_ERROR_RESPONSE after read which triggered an exception
PCI: rcar: Finish transition to L1 state in rcar_pcie_config_access()
PCI: dwc: Restore MSI Receiver mask during resume
PCI: fu740: Drop redundant '-gpios' from DT GPIO lookup
PCI/VGA: Replace full MIT license text with SPDX identifier
PCI/VGA: Use unsigned format string to print lock counts
PCI/VGA: Log bridge control messages when adding devices
...

+1591 -764
+16
Documentation/devicetree/bindings/pci/mvebu-pci.txt
··· 77 77 - marvell,pcie-lane: the physical PCIe lane number, for ports having 78 78 multiple lanes. If this property is not found, we assume that the 79 79 value is 0. 80 + - num-lanes: number of SerDes PCIe lanes for this link (1 or 4) 80 81 - reset-gpios: optional GPIO to PERST# 81 82 - reset-delay-us: delay in us to wait after reset de-assertion, if not 82 83 specified will default to 100ms, as required by the PCIe specification. 84 + - interrupt-names: list of interrupt names, supported are: 85 + - "intx" - interrupt line triggered by one of the legacy interrupt 86 + - interrupts or interrupts-extended: List of the interrupt sources which 87 + corresponding to the "interrupt-names". If non-empty then also additional 88 + 'interrupt-controller' subnode must be defined. 83 89 84 90 Example: 85 91 ··· 147 141 interrupt-map = <0 0 0 0 &mpic 58>; 148 142 marvell,pcie-port = <0>; 149 143 marvell,pcie-lane = <0>; 144 + num-lanes = <1>; 150 145 /* low-active PERST# reset on GPIO 25 */ 151 146 reset-gpios = <&gpio0 25 1>; 152 147 /* wait 20ms for device settle after reset deassertion */ ··· 168 161 interrupt-map = <0 0 0 0 &mpic 59>; 169 162 marvell,pcie-port = <0>; 170 163 marvell,pcie-lane = <1>; 164 + num-lanes = <1>; 171 165 clocks = <&gateclk 6>; 172 166 }; 173 167 ··· 185 177 interrupt-map = <0 0 0 0 &mpic 60>; 186 178 marvell,pcie-port = <0>; 187 179 marvell,pcie-lane = <2>; 180 + num-lanes = <1>; 188 181 clocks = <&gateclk 7>; 189 182 }; 190 183 ··· 202 193 interrupt-map = <0 0 0 0 &mpic 61>; 203 194 marvell,pcie-port = <0>; 204 195 marvell,pcie-lane = <3>; 196 + num-lanes = <1>; 205 197 clocks = <&gateclk 8>; 206 198 }; 207 199 ··· 219 209 interrupt-map = <0 0 0 0 &mpic 62>; 220 210 marvell,pcie-port = <1>; 221 211 marvell,pcie-lane = <0>; 212 + num-lanes = <1>; 222 213 clocks = <&gateclk 9>; 223 214 }; 224 215 ··· 236 225 interrupt-map = <0 0 0 0 &mpic 63>; 237 226 marvell,pcie-port = <1>; 238 227 marvell,pcie-lane = <1>; 228 + num-lanes = <1>; 239 229 clocks 
= <&gateclk 10>; 240 230 }; 241 231 ··· 253 241 interrupt-map = <0 0 0 0 &mpic 64>; 254 242 marvell,pcie-port = <1>; 255 243 marvell,pcie-lane = <2>; 244 + num-lanes = <1>; 256 245 clocks = <&gateclk 11>; 257 246 }; 258 247 ··· 270 257 interrupt-map = <0 0 0 0 &mpic 65>; 271 258 marvell,pcie-port = <1>; 272 259 marvell,pcie-lane = <3>; 260 + num-lanes = <1>; 273 261 clocks = <&gateclk 12>; 274 262 }; 275 263 ··· 287 273 interrupt-map = <0 0 0 0 &mpic 99>; 288 274 marvell,pcie-port = <2>; 289 275 marvell,pcie-lane = <0>; 276 + num-lanes = <1>; 290 277 clocks = <&gateclk 26>; 291 278 }; 292 279 ··· 304 289 interrupt-map = <0 0 0 0 &mpic 103>; 305 290 marvell,pcie-port = <3>; 306 291 marvell,pcie-lane = <0>; 292 + num-lanes = <1>; 307 293 clocks = <&gateclk 27>; 308 294 }; 309 295 };
+21 -1
Documentation/devicetree/bindings/pci/qcom,pcie.txt
··· 15 15 - "qcom,pcie-sc8180x" for sc8180x 16 16 - "qcom,pcie-sdm845" for sdm845 17 17 - "qcom,pcie-sm8250" for sm8250 18 + - "qcom,pcie-sm8450-pcie0" for PCIe0 on sm8450 19 + - "qcom,pcie-sm8450-pcie1" for PCIe1 on sm8450 18 20 - "qcom,pcie-ipq6018" for ipq6018 19 21 20 22 - reg: ··· 171 169 - "ddrss_sf_tbu" PCIe SF TBU clock 172 170 - "pipe" PIPE clock 173 171 172 + - clock-names: 173 + Usage: required for sm8450-pcie0 and sm8450-pcie1 174 + Value type: <stringlist> 175 + Definition: Should contain the following entries 176 + - "aux" Auxiliary clock 177 + - "cfg" Configuration clock 178 + - "bus_master" Master AXI clock 179 + - "bus_slave" Slave AXI clock 180 + - "slave_q2a" Slave Q2A clock 181 + - "tbu" PCIe TBU clock 182 + - "ddrss_sf_tbu" PCIe SF TBU clock 183 + - "pipe" PIPE clock 184 + - "pipe_mux" PIPE MUX 185 + - "phy_pipe" PIPE output clock 186 + - "ref" REFERENCE clock 187 + - "aggre0" Aggre NoC PCIe0 AXI clock, only for sm8450-pcie0 188 + - "aggre1" Aggre NoC PCIe1 AXI clock 189 + 174 190 - resets: 175 191 Usage: required 176 192 Value type: <prop-encoded-array> ··· 266 246 - "ahb" AHB reset 267 247 268 248 - reset-names: 269 - Usage: required for sc8180x, sdm845 and sm8250 249 + Usage: required for sc8180x, sdm845, sm8250 and sm8450 270 250 Value type: <stringlist> 271 251 Definition: Should contain the following entries 272 252 - "pci" PCIe core reset
+15 -7
Documentation/devicetree/bindings/pci/socionext,uniphier-pcie-ep.yaml
··· 20 20 21 21 properties: 22 22 compatible: 23 - const: socionext,uniphier-pro5-pcie-ep 23 + enum: 24 + - socionext,uniphier-pro5-pcie-ep 25 + - socionext,uniphier-nx1-pcie-ep 24 26 25 27 reg: 26 28 minItems: 4 ··· 43 41 - const: atu 44 42 45 43 clocks: 44 + minItems: 1 46 45 maxItems: 2 47 46 48 47 clock-names: 49 - items: 50 - - const: gio 51 - - const: link 48 + oneOf: 49 + - items: # for Pro5 50 + - const: gio 51 + - const: link 52 + - const: link # for NX1 52 53 53 54 resets: 55 + minItems: 1 54 56 maxItems: 2 55 57 56 58 reset-names: 57 - items: 58 - - const: gio 59 - - const: link 59 + oneOf: 60 + - items: # for Pro5 61 + - const: gio 62 + - const: link 63 + - const: link # for NX1 60 64 61 65 num-ib-windows: 62 66 const: 16
+1 -1
Documentation/gpu/vgaarbiter.rst
···
     .. kernel-doc:: include/linux/vgaarb.h
        :internal:

-    .. kernel-doc:: drivers/gpu/vga/vgaarb.c
+    .. kernel-doc:: drivers/pci/vgaarb.c
        :export:

     libpciaccess
+1
MAINTAINERS
···
 PCI DRIVER FOR MVEBU (Marvell Armada 370 and Armada XP SOC support)
 M:	Thomas Petazzoni <thomas.petazzoni@bootlin.com>
+M:	Pali Rohár <pali@kernel.org>
 L:	linux-pci@vger.kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
-2
arch/mips/include/asm/mach-bcm63xx/bcm63xx_regs.h
···
 #define PCIE_IDVAL3_REG			0x43c
 #define IDVAL3_CLASS_CODE_MASK		0xffffff
-#define IDVAL3_SUBCLASS_SHIFT		8
-#define IDVAL3_CLASS_SHIFT		16

 #define PCIE_DLSTATUS_REG		0x1048
 #define DLSTATUS_PHYLINKUP		(1 << 13)
+1 -1
arch/mips/pci/fixup-sb1250.c
···
  */
 static void quirk_sb1250_ht(struct pci_dev *dev)
 {
-	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+	dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL;
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIBYTE, PCI_DEVICE_ID_BCM1250_HT,
 			quirk_sb1250_ht);
+1 -1
arch/mips/pci/pci-bcm63xx.c
···
 	/* setup class code as bridge */
 	val = bcm_pcie_readl(PCIE_IDVAL3_REG);
 	val &= ~IDVAL3_CLASS_CODE_MASK;
-	val |= (PCI_CLASS_BRIDGE_PCI << IDVAL3_SUBCLASS_SHIFT);
+	val |= PCI_CLASS_BRIDGE_PCI_NORMAL;
 	bcm_pcie_writel(val, PCIE_IDVAL3_REG);

 	/* disable bar1 size */
+1 -1
arch/powerpc/platforms/powernv/pci.c
···
 /* Fixup wrong class code in p7ioc and p8 root complex */
 static void pnv_p7ioc_rc_quirk(struct pci_dev *dev)
 {
-	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+	dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL;
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_IBM, 0x3b9, pnv_p7ioc_rc_quirk);

+1 -1
arch/powerpc/sysdev/fsl_pci.c
···
 	if ((hdr_type & 0x7f) != PCI_HEADER_TYPE_BRIDGE)
 		return;

-	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+	dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL;
 	fsl_pcie_bus_fixup = 1;
 	return;
 }
+1 -1
arch/sh/drivers/pci/pcie-sh7786.c
···
 	 * class to match. Hardware takes care of propagating the IDSETR
 	 * settings, so there is no need to bother with a quirk.
 	 */
-	pci_write_reg(chan, PCI_CLASS_BRIDGE_PCI << 16, SH4A_PCIEIDSETR1);
+	pci_write_reg(chan, PCI_CLASS_BRIDGE_PCI_NORMAL << 8, SH4A_PCIEIDSETR1);

 	/* Initialize default capabilities. */
 	data = pci_read_reg(chan, SH4A_PCIEEXPCAP0);
+3
arch/x86/include/asm/pci_x86.h
···
  * (c) 1999 Martin Mares <mj@ucw.cz>
  */

+#include <linux/errno.h>
+#include <linux/init.h>
 #include <linux/ioport.h>
+#include <linux/spinlock.h>

 #undef DEBUG

-19
drivers/gpu/vga/Kconfig
···
 # SPDX-License-Identifier: GPL-2.0-only
-config VGA_ARB
-	bool "VGA Arbitration" if EXPERT
-	default y
-	depends on (PCI && !S390)
-	help
-	  Some "legacy" VGA devices implemented on PCI typically have the same
-	  hard-decoded addresses as they did on ISA. When multiple PCI devices
-	  are accessed at same time they need some kind of coordination. Please
-	  see Documentation/gpu/vgaarbiter.rst for more details. Select this to
-	  enable VGA arbiter.
-
-config VGA_ARB_MAX_GPUS
-	int "Maximum number of GPUs"
-	default 16
-	depends on VGA_ARB
-	help
-	  Reserves space in the kernel to maintain resource locking for
-	  multiple GPUS. The overhead for each GPU is very small.
-
 config VGA_SWITCHEROO
 	bool "Laptop Hybrid Graphics - GPU switching support"
 	depends on X86
-1
drivers/gpu/vga/Makefile
···
 # SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_VGA_ARB) += vgaarb.o
 obj-$(CONFIG_VGA_SWITCHEROO) += vga_switcheroo.o
+155 -158
drivers/gpu/vga/vgaarb.c drivers/pci/vgaarb.c
··· 1 + // SPDX-License-Identifier: MIT 1 2 /* 2 3 * vgaarb.c: Implements the VGA arbitration. For details refer to 3 4 * Documentation/gpu/vgaarbiter.rst 4 5 * 5 - * 6 6 * (C) Copyright 2005 Benjamin Herrenschmidt <benh@kernel.crashing.org> 7 7 * (C) Copyright 2007 Paulo R. Zanoni <przanoni@gmail.com> 8 8 * (C) Copyright 2007, 2009 Tiago Vignatti <vignatti@freedesktop.org> 9 - * 10 - * Permission is hereby granted, free of charge, to any person obtaining a 11 - * copy of this software and associated documentation files (the "Software"), 12 - * to deal in the Software without restriction, including without limitation 13 - * the rights to use, copy, modify, merge, publish, distribute, sublicense, 14 - * and/or sell copies of the Software, and to permit persons to whom the 15 - * Software is furnished to do so, subject to the following conditions: 16 - * 17 - * The above copyright notice and this permission notice (including the next 18 - * paragraph) shall be included in all copies or substantial portions of the 19 - * Software. 20 - * 21 - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 22 - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 23 - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 24 - * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 25 - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 26 - * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER 27 - * DEALINGS 28 - * IN THE SOFTWARE. 
29 - * 30 9 */ 31 10 32 11 #define pr_fmt(fmt) "vgaarb: " fmt ··· 51 72 unsigned int io_norm_cnt; /* normal IO count */ 52 73 unsigned int mem_norm_cnt; /* normal MEM count */ 53 74 bool bridge_has_one_vga; 75 + bool is_firmware_default; /* device selected by firmware */ 54 76 unsigned int (*set_decode)(struct pci_dev *pdev, bool decode); 55 77 }; 56 78 ··· 101 121 102 122 /* this is only used a cookie - it should not be dereferenced */ 103 123 static struct pci_dev *vga_default; 104 - 105 - static void vga_arb_device_card_gone(struct pci_dev *pdev); 106 124 107 125 /* Find somebody in our list */ 108 126 static struct vga_device *vgadev_find(struct pci_dev *pdev) ··· 543 565 } 544 566 EXPORT_SYMBOL(vga_put); 545 567 568 + static bool vga_is_firmware_default(struct pci_dev *pdev) 569 + { 570 + #if defined(CONFIG_X86) || defined(CONFIG_IA64) 571 + u64 base = screen_info.lfb_base; 572 + u64 size = screen_info.lfb_size; 573 + u64 limit; 574 + resource_size_t start, end; 575 + unsigned long flags; 576 + int i; 577 + 578 + /* Select the device owning the boot framebuffer if there is one */ 579 + 580 + if (screen_info.capabilities & VIDEO_CAPABILITY_64BIT_BASE) 581 + base |= (u64)screen_info.ext_lfb_base << 32; 582 + 583 + limit = base + size; 584 + 585 + /* Does firmware framebuffer belong to us? 
*/ 586 + for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) { 587 + flags = pci_resource_flags(pdev, i); 588 + 589 + if ((flags & IORESOURCE_MEM) == 0) 590 + continue; 591 + 592 + start = pci_resource_start(pdev, i); 593 + end = pci_resource_end(pdev, i); 594 + 595 + if (!start || !end) 596 + continue; 597 + 598 + if (base < start || limit >= end) 599 + continue; 600 + 601 + return true; 602 + } 603 + #endif 604 + return false; 605 + } 606 + 607 + static bool vga_arb_integrated_gpu(struct device *dev) 608 + { 609 + #if defined(CONFIG_ACPI) 610 + struct acpi_device *adev = ACPI_COMPANION(dev); 611 + 612 + return adev && !strcmp(acpi_device_hid(adev), ACPI_VIDEO_HID); 613 + #else 614 + return false; 615 + #endif 616 + } 617 + 618 + /* 619 + * Return true if vgadev is a better default VGA device than the best one 620 + * we've seen so far. 621 + */ 622 + static bool vga_is_boot_device(struct vga_device *vgadev) 623 + { 624 + struct vga_device *boot_vga = vgadev_find(vga_default_device()); 625 + struct pci_dev *pdev = vgadev->pdev; 626 + u16 cmd, boot_cmd; 627 + 628 + /* 629 + * We select the default VGA device in this order: 630 + * Firmware framebuffer (see vga_arb_select_default_device()) 631 + * Legacy VGA device (owns VGA_RSRC_LEGACY_MASK) 632 + * Non-legacy integrated device (see vga_arb_select_default_device()) 633 + * Non-legacy discrete device (see vga_arb_select_default_device()) 634 + * Other device (see vga_arb_select_default_device()) 635 + */ 636 + 637 + /* 638 + * We always prefer a firmware default device, so if we've already 639 + * found one, there's no need to consider vgadev. 
640 + */ 641 + if (boot_vga && boot_vga->is_firmware_default) 642 + return false; 643 + 644 + if (vga_is_firmware_default(pdev)) { 645 + vgadev->is_firmware_default = true; 646 + return true; 647 + } 648 + 649 + /* 650 + * A legacy VGA device has MEM and IO enabled and any bridges 651 + * leading to it have PCI_BRIDGE_CTL_VGA enabled so the legacy 652 + * resources ([mem 0xa0000-0xbffff], [io 0x3b0-0x3bb], etc) are 653 + * routed to it. 654 + * 655 + * We use the first one we find, so if we've already found one, 656 + * vgadev is no better. 657 + */ 658 + if (boot_vga && 659 + (boot_vga->owns & VGA_RSRC_LEGACY_MASK) == VGA_RSRC_LEGACY_MASK) 660 + return false; 661 + 662 + if ((vgadev->owns & VGA_RSRC_LEGACY_MASK) == VGA_RSRC_LEGACY_MASK) 663 + return true; 664 + 665 + /* 666 + * If we haven't found a legacy VGA device, accept a non-legacy 667 + * device. It may have either IO or MEM enabled, and bridges may 668 + * not have PCI_BRIDGE_CTL_VGA enabled, so it may not be able to 669 + * use legacy VGA resources. Prefer an integrated GPU over others. 670 + */ 671 + pci_read_config_word(pdev, PCI_COMMAND, &cmd); 672 + if (cmd & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) { 673 + 674 + /* 675 + * An integrated GPU overrides a previous non-legacy 676 + * device. We expect only a single integrated GPU, but if 677 + * there are more, we use the *last* because that was the 678 + * previous behavior. 679 + */ 680 + if (vga_arb_integrated_gpu(&pdev->dev)) 681 + return true; 682 + 683 + /* 684 + * We prefer the first non-legacy discrete device we find. 685 + * If we already found one, vgadev is no better. 686 + */ 687 + if (boot_vga) { 688 + pci_read_config_word(boot_vga->pdev, PCI_COMMAND, 689 + &boot_cmd); 690 + if (boot_cmd & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) 691 + return false; 692 + } 693 + return true; 694 + } 695 + 696 + /* 697 + * vgadev has neither IO nor MEM enabled. If we haven't found any 698 + * other VGA devices, it is the best candidate so far. 
699 + */ 700 + if (!boot_vga) 701 + return true; 702 + 703 + return false; 704 + } 705 + 546 706 /* 547 707 * Rules for using a bridge to control a VGA descendant decoding: if a bridge 548 708 * has only one VGA descendant then it can be used to control the VGA routing ··· 698 582 699 583 vgadev->bridge_has_one_vga = true; 700 584 701 - if (list_empty(&vga_list)) 585 + if (list_empty(&vga_list)) { 586 + vgaarb_info(&vgadev->pdev->dev, "bridge control possible\n"); 702 587 return; 588 + } 703 589 704 590 /* okay iterate the new devices bridge hierarachy */ 705 591 new_bus = vgadev->pdev->bus; ··· 740 622 } 741 623 new_bus = new_bus->parent; 742 624 } 625 + 626 + if (vgadev->bridge_has_one_vga) 627 + vgaarb_info(&vgadev->pdev->dev, "bridge control possible\n"); 628 + else 629 + vgaarb_info(&vgadev->pdev->dev, "no bridge control possible\n"); 743 630 } 744 631 745 632 /* ··· 815 692 bus = bus->parent; 816 693 } 817 694 818 - /* Deal with VGA default device. Use first enabled one 819 - * by default if arch doesn't have it's own hook 820 - */ 821 - if (vga_default == NULL && 822 - ((vgadev->owns & VGA_RSRC_LEGACY_MASK) == VGA_RSRC_LEGACY_MASK)) { 823 - vgaarb_info(&pdev->dev, "setting as boot VGA device\n"); 695 + if (vga_is_boot_device(vgadev)) { 696 + vgaarb_info(&pdev->dev, "setting as boot VGA device%s\n", 697 + vga_default_device() ? 
698 + " (overriding previous)" : ""); 824 699 vga_set_default_device(pdev); 825 700 } 826 701 ··· 862 741 /* Remove entry from list */ 863 742 list_del(&vgadev->list); 864 743 vga_count--; 865 - /* Notify userland driver that the device is gone so it discards 866 - * it's copies of the pci_dev pointer 867 - */ 868 - vga_arb_device_card_gone(pdev); 869 744 870 745 /* Wake up all possible waiters */ 871 746 wake_up_all(&vga_wait_queue); ··· 1111 994 if (lbuf == NULL) 1112 995 return -ENOMEM; 1113 996 1114 - /* Shields against vga_arb_device_card_gone (pci_dev going 1115 - * away), and allows access to vga list 1116 - */ 997 + /* Protects vga_list */ 1117 998 spin_lock_irqsave(&vga_lock, flags); 1118 999 1119 1000 /* If we are targeting the default, use it */ ··· 1128 1013 /* Wow, it's not in the list, that shouldn't happen, 1129 1014 * let's fix us up and return invalid card 1130 1015 */ 1131 - if (pdev == priv->target) 1132 - vga_arb_device_card_gone(pdev); 1133 1016 spin_unlock_irqrestore(&vga_lock, flags); 1134 1017 len = sprintf(lbuf, "invalid"); 1135 1018 goto done; ··· 1135 1022 1136 1023 /* Fill the buffer with infos */ 1137 1024 len = snprintf(lbuf, 1024, 1138 - "count:%d,PCI:%s,decodes=%s,owns=%s,locks=%s(%d:%d)\n", 1025 + "count:%d,PCI:%s,decodes=%s,owns=%s,locks=%s(%u:%u)\n", 1139 1026 vga_decode_count, pci_name(pdev), 1140 1027 vga_iostate_to_str(vgadev->decodes), 1141 1028 vga_iostate_to_str(vgadev->owns), ··· 1471 1358 return 0; 1472 1359 } 1473 1360 1474 - static void vga_arb_device_card_gone(struct pci_dev *pdev) 1475 - { 1476 - } 1477 - 1478 1361 /* 1479 1362 * callback any registered clients to let them know we have a 1480 1363 * change in VGA cards ··· 1539 1430 MISC_DYNAMIC_MINOR, "vga_arbiter", &vga_arb_device_fops 1540 1431 }; 1541 1432 1542 - #if defined(CONFIG_ACPI) 1543 - static bool vga_arb_integrated_gpu(struct device *dev) 1544 - { 1545 - struct acpi_device *adev = ACPI_COMPANION(dev); 1546 - 1547 - return adev && 
!strcmp(acpi_device_hid(adev), ACPI_VIDEO_HID); 1548 - } 1549 - #else 1550 - static bool vga_arb_integrated_gpu(struct device *dev) 1551 - { 1552 - return false; 1553 - } 1554 - #endif 1555 - 1556 - static void __init vga_arb_select_default_device(void) 1557 - { 1558 - struct pci_dev *pdev, *found = NULL; 1559 - struct vga_device *vgadev; 1560 - 1561 - #if defined(CONFIG_X86) || defined(CONFIG_IA64) 1562 - u64 base = screen_info.lfb_base; 1563 - u64 size = screen_info.lfb_size; 1564 - u64 limit; 1565 - resource_size_t start, end; 1566 - unsigned long flags; 1567 - int i; 1568 - 1569 - if (screen_info.capabilities & VIDEO_CAPABILITY_64BIT_BASE) 1570 - base |= (u64)screen_info.ext_lfb_base << 32; 1571 - 1572 - limit = base + size; 1573 - 1574 - list_for_each_entry(vgadev, &vga_list, list) { 1575 - struct device *dev = &vgadev->pdev->dev; 1576 - /* 1577 - * Override vga_arbiter_add_pci_device()'s I/O based detection 1578 - * as it may take the wrong device (e.g. on Apple system under 1579 - * EFI). 1580 - * 1581 - * Select the device owning the boot framebuffer if there is 1582 - * one. 1583 - */ 1584 - 1585 - /* Does firmware framebuffer belong to us? 
*/ 1586 - for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) { 1587 - flags = pci_resource_flags(vgadev->pdev, i); 1588 - 1589 - if ((flags & IORESOURCE_MEM) == 0) 1590 - continue; 1591 - 1592 - start = pci_resource_start(vgadev->pdev, i); 1593 - end = pci_resource_end(vgadev->pdev, i); 1594 - 1595 - if (!start || !end) 1596 - continue; 1597 - 1598 - if (base < start || limit >= end) 1599 - continue; 1600 - 1601 - if (!vga_default_device()) 1602 - vgaarb_info(dev, "setting as boot device\n"); 1603 - else if (vgadev->pdev != vga_default_device()) 1604 - vgaarb_info(dev, "overriding boot device\n"); 1605 - vga_set_default_device(vgadev->pdev); 1606 - } 1607 - } 1608 - #endif 1609 - 1610 - if (!vga_default_device()) { 1611 - list_for_each_entry_reverse(vgadev, &vga_list, list) { 1612 - struct device *dev = &vgadev->pdev->dev; 1613 - u16 cmd; 1614 - 1615 - pdev = vgadev->pdev; 1616 - pci_read_config_word(pdev, PCI_COMMAND, &cmd); 1617 - if (cmd & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) { 1618 - found = pdev; 1619 - if (vga_arb_integrated_gpu(dev)) 1620 - break; 1621 - } 1622 - } 1623 - } 1624 - 1625 - if (found) { 1626 - vgaarb_info(&found->dev, "setting as boot device (VGA legacy resources not available)\n"); 1627 - vga_set_default_device(found); 1628 - return; 1629 - } 1630 - 1631 - if (!vga_default_device()) { 1632 - vgadev = list_first_entry_or_null(&vga_list, 1633 - struct vga_device, list); 1634 - if (vgadev) { 1635 - struct device *dev = &vgadev->pdev->dev; 1636 - vgaarb_info(dev, "setting as boot device (VGA legacy resources not available)\n"); 1637 - vga_set_default_device(vgadev->pdev); 1638 - } 1639 - } 1640 - } 1641 - 1642 1433 static int __init vga_arb_device_init(void) 1643 1434 { 1644 1435 int rc; 1645 1436 struct pci_dev *pdev; 1646 - struct vga_device *vgadev; 1647 1437 1648 1438 rc = misc_register(&vga_arb_device); 1649 1439 if (rc < 0) ··· 1558 1550 PCI_ANY_ID, pdev)) != NULL) 1559 1551 vga_arbiter_add_pci_device(pdev); 1560 1552 1561 - 
list_for_each_entry(vgadev, &vga_list, list) { 1562 - struct device *dev = &vgadev->pdev->dev; 1563 - 1564 - if (vgadev->bridge_has_one_vga) 1565 - vgaarb_info(dev, "bridge control possible\n"); 1566 - else 1567 - vgaarb_info(dev, "no bridge control possible\n"); 1568 - } 1569 - 1570 - vga_arb_select_default_device(); 1571 - 1572 1553 pr_info("loaded\n"); 1573 1554 return rc; 1574 1555 } 1575 - subsys_initcall(vga_arb_device_init); 1556 + subsys_initcall_sync(vga_arb_device_init);
+19
drivers/pci/Kconfig
···

 endchoice

+config VGA_ARB
+	bool "VGA Arbitration" if EXPERT
+	default y
+	depends on (PCI && !S390)
+	help
+	  Some "legacy" VGA devices implemented on PCI typically have the same
+	  hard-decoded addresses as they did on ISA. When multiple PCI devices
+	  are accessed at same time they need some kind of coordination. Please
+	  see Documentation/gpu/vgaarbiter.rst for more details. Select this to
+	  enable VGA arbiter.
+
+config VGA_ARB_MAX_GPUS
+	int "Maximum number of GPUs"
+	default 16
+	depends on VGA_ARB
+	help
+	  Reserves space in the kernel to maintain resource locking for
+	  multiple GPUS. The overhead for each GPU is very small.
+
 source "drivers/pci/hotplug/Kconfig"
 source "drivers/pci/controller/Kconfig"
 source "drivers/pci/endpoint/Kconfig"
+1
drivers/pci/Makefile
···
 obj-$(CONFIG_PCI_ECAM)		+= ecam.o
 obj-$(CONFIG_PCI_P2PDMA)	+= p2pdma.o
 obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o
+obj-$(CONFIG_VGA_ARB)		+= vgaarb.o

 # Endpoint library must be initialized before its users
 obj-$(CONFIG_PCI_ENDPOINT)	+= endpoint/
+6 -3
drivers/pci/access.c
···
 	 * write happen to have any RW1C (write-one-to-clear) bits set, we
 	 * just inadvertently cleared something we shouldn't have.
 	 */
-	dev_warn_ratelimited(&bus->dev, "%d-byte config write to %04x:%02x:%02x.%d offset %#x may corrupt adjacent RW1C bits\n",
-			     size, pci_domain_nr(bus), bus->number,
-			     PCI_SLOT(devfn), PCI_FUNC(devfn), where);
+	if (!bus->unsafe_warn) {
+		dev_warn(&bus->dev, "%d-byte config write to %04x:%02x:%02x.%d offset %#x may corrupt adjacent RW1C bits\n",
+			 size, pci_domain_nr(bus), bus->number,
+			 PCI_SLOT(devfn), PCI_FUNC(devfn), where);
+		bus->unsafe_warn = 1;
+	}

 	mask = ~(((1 << (size * 8)) - 1) << ((where & 0x3) * 8));
 	tmp = readl(addr) & mask;
+4
drivers/pci/controller/Kconfig
···
 	depends on ARM
 	depends on OF
 	select PCI_BRIDGE_EMUL
+	help
+	  Add support for Marvell EBU PCIe controller. This PCIe controller
+	  is used on 32-bit Marvell ARM SoCs: Dove, Kirkwood, Armada 370,
+	  Armada XP, Armada 375, Armada 38x and Armada 39x.

 config PCI_AARDVARK
 	tristate "Aardvark PCIe controller"
+6 -13
drivers/pci/controller/dwc/pci-imx6.c
··· 453 453 case IMX7D: 454 454 break; 455 455 case IMX8MM: 456 - ret = clk_prepare_enable(imx6_pcie->pcie_aux); 457 - if (ret) 458 - dev_err(dev, "unable to enable pcie_aux clock\n"); 459 - break; 460 456 case IMX8MQ: 461 457 ret = clk_prepare_enable(imx6_pcie->pcie_aux); 462 458 if (ret) { ··· 805 809 /* Start LTSSM. */ 806 810 imx6_pcie_ltssm_enable(dev); 807 811 808 - ret = dw_pcie_wait_for_link(pci); 809 - if (ret) 810 - goto err_reset_phy; 812 + dw_pcie_wait_for_link(pci); 811 813 812 814 if (pci->link_gen == 2) { 813 815 /* Allow Gen2 mode after the link is up. */ ··· 841 847 } 842 848 843 849 /* Make sure link training is finished as well! */ 844 - ret = dw_pcie_wait_for_link(pci); 845 - if (ret) { 846 - dev_err(dev, "Failed to bring link up!\n"); 847 - goto err_reset_phy; 848 - } 850 + dw_pcie_wait_for_link(pci); 849 851 } else { 850 852 dev_info(dev, "Link: Gen2 disabled\n"); 851 853 } ··· 913 923 /* Others poke directly at IOMUXC registers */ 914 924 switch (imx6_pcie->drvdata->variant) { 915 925 case IMX6SX: 926 + case IMX6QP: 916 927 regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, 917 928 IMX6SX_GPR12_PCIE_PM_TURN_OFF, 918 929 IMX6SX_GPR12_PCIE_PM_TURN_OFF); ··· 974 983 case IMX8MM: 975 984 if (phy_power_off(imx6_pcie->phy)) 976 985 dev_err(dev, "unable to power off PHY\n"); 986 + phy_exit(imx6_pcie->phy); 977 987 break; 978 988 default: 979 989 break; ··· 1244 1252 [IMX6QP] = { 1245 1253 .variant = IMX6QP, 1246 1254 .flags = IMX6_PCIE_FLAG_IMX6_PHY | 1247 - IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE, 1255 + IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE | 1256 + IMX6_PCIE_FLAG_SUPPORTS_SUSPEND, 1248 1257 .dbi_length = 0x200, 1249 1258 }, 1250 1259 [IMX7D] = {
+4 -4
drivers/pci/controller/dwc/pci-keystone.c
···
 	struct pci_dev *bridge;
 	static const struct pci_device_id rc_pci_devids[] = {
 		{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2HK),
-		 .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
+		 .class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, },
 		{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2E),
-		 .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
+		 .class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, },
 		{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2L),
-		 .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
+		 .class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, },
 		{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2G),
-		 .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
+		 .class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, },
 		{ 0, },
 	};

+8 -8
drivers/pci/controller/dwc/pci-meson.c
···
 	 * cannot program the PCI_CLASS_DEVICE register, so we must fabricate
 	 * the return value in the config accessors.
 	 */
-	if (where == PCI_CLASS_REVISION && size == 4)
-		*val = (PCI_CLASS_BRIDGE_PCI << 16) | (*val & 0xffff);
-	else if (where == PCI_CLASS_DEVICE && size == 2)
-		*val = PCI_CLASS_BRIDGE_PCI;
-	else if (where == PCI_CLASS_DEVICE && size == 1)
-		*val = PCI_CLASS_BRIDGE_PCI & 0xff;
-	else if (where == PCI_CLASS_DEVICE + 1 && size == 1)
-		*val = (PCI_CLASS_BRIDGE_PCI >> 8) & 0xff;
+	if ((where & ~3) == PCI_CLASS_REVISION) {
+		if (size <= 2)
+			*val = (*val & ((1 << (size * 8)) - 1)) << (8 * (where & 3));
+		*val &= ~0xffffff00;
+		*val |= PCI_CLASS_BRIDGE_PCI_NORMAL << 8;
+		if (size <= 2)
+			*val = (*val >> (8 * (where & 3))) & ((1 << (size * 8)) - 1);
+	}

 	return PCIBIOS_SUCCESSFUL;
 }
+6 -1
drivers/pci/controller/dwc/pcie-designware-host.c
···
 		if (ret < 0)
 			return ret;
 	} else if (pp->has_msi_ctrl) {
+		u32 ctrl, num_ctrls;
+
+		num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
+		for (ctrl = 0; ctrl < num_ctrls; ctrl++)
+			pp->irq_mask[ctrl] = ~0;
+
 		if (!pp->msi_irq) {
 			pp->msi_irq = platform_get_irq_byname_optional(pdev, "msi");
 			if (pp->msi_irq < 0) {
···
 
 		/* Initialize IRQ Status array */
 		for (ctrl = 0; ctrl < num_ctrls; ctrl++) {
-			pp->irq_mask[ctrl] = ~0;
 			dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK +
 					    (ctrl * MSI_REG_CTRL_BLOCK_SIZE),
 					    pp->irq_mask[ctrl]);
+53 -4
drivers/pci/controller/dwc/pcie-fu740.c
···
 {
 	struct device *dev = pci->dev;
 	struct fu740_pcie *afp = dev_get_drvdata(dev);
+	u8 cap_exp = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
+	int ret;
+	u32 orig, tmp;
+
+	/*
+	 * Force 2.5GT/s when starting the link, due to some devices not
+	 * probing at higher speeds. This happens with the PCIe switch
+	 * on the Unmatched board when U-Boot has not initialised the PCIe.
+	 * The fix in U-Boot is to force 2.5GT/s, which then gets cleared
+	 * by the soft reset done by this driver.
+	 */
+	dev_dbg(dev, "cap_exp at %x\n", cap_exp);
+	dw_pcie_dbi_ro_wr_en(pci);
+
+	tmp = dw_pcie_readl_dbi(pci, cap_exp + PCI_EXP_LNKCAP);
+	orig = tmp & PCI_EXP_LNKCAP_SLS;
+	tmp &= ~PCI_EXP_LNKCAP_SLS;
+	tmp |= PCI_EXP_LNKCAP_SLS_2_5GB;
+	dw_pcie_writel_dbi(pci, cap_exp + PCI_EXP_LNKCAP, tmp);
 
 	/* Enable LTSSM */
 	writel_relaxed(0x1, afp->mgmt_base + PCIEX8MGMT_APP_LTSSM_ENABLE);
-	return 0;
+
+	ret = dw_pcie_wait_for_link(pci);
+	if (ret) {
+		dev_err(dev, "error: link did not start\n");
+		goto err;
+	}
+
+	tmp = dw_pcie_readl_dbi(pci, cap_exp + PCI_EXP_LNKCAP);
+	if ((tmp & PCI_EXP_LNKCAP_SLS) != orig) {
+		dev_dbg(dev, "changing speed back to original\n");
+
+		tmp &= ~PCI_EXP_LNKCAP_SLS;
+		tmp |= orig;
+		dw_pcie_writel_dbi(pci, cap_exp + PCI_EXP_LNKCAP, tmp);
+
+		tmp = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
+		tmp |= PORT_LOGIC_SPEED_CHANGE;
+		dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, tmp);
+
+		ret = dw_pcie_wait_for_link(pci);
+		if (ret) {
+			dev_err(dev, "error: link did not start at new speed\n");
+			goto err;
+		}
+	}
+
+	ret = 0;
+err:
+	WARN_ON(ret);	/* we assume that errors will be very rare */
+	dw_pcie_dbi_ro_wr_dis(pci);
+	return ret;
 }
 
 static int fu740_pcie_host_init(struct pcie_port *pp)
···
 	/* Clear hold_phy_rst */
 	writel_relaxed(0x0, afp->mgmt_base + PCIEX8MGMT_APP_HOLD_PHY_RST);
 	/* Enable pcieauxclk */
-	ret = clk_prepare_enable(afp->pcie_aux);
+	clk_prepare_enable(afp->pcie_aux);
 	/* Set RC mode */
 	writel_relaxed(0x4, afp->mgmt_base + PCIEX8MGMT_DEVICE_TYPE);
 
···
 		return PTR_ERR(afp->mgmt_base);
 
 	/* Fetch GPIOs */
-	afp->reset = devm_gpiod_get_optional(dev, "reset-gpios", GPIOD_OUT_LOW);
+	afp->reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
 	if (IS_ERR(afp->reset))
 		return dev_err_probe(dev, PTR_ERR(afp->reset), "unable to get reset-gpios\n");
 
-	afp->pwren = devm_gpiod_get_optional(dev, "pwren-gpios", GPIOD_OUT_LOW);
+	afp->pwren = devm_gpiod_get_optional(dev, "pwren", GPIOD_OUT_LOW);
 	if (IS_ERR(afp->pwren))
 		return dev_err_probe(dev, PTR_ERR(afp->pwren), "unable to get pwren-gpios\n");
 
-3
drivers/pci/controller/dwc/pcie-kirin.c
···
 	pcie->phy_priv = phy;
 	phy->dev = dev;
 
-	/* registers */
-	pdev = container_of(dev, struct platform_device, dev);
-
 	ret = hi3660_pcie_phy_get_clk(phy);
 	if (ret)
 		return ret;
+63 -32
drivers/pci/controller/dwc/pcie-qcom.c
···
 
 /* 6 clocks typically, 7 for sm8250 */
 struct qcom_pcie_resources_2_7_0 {
-	struct clk_bulk_data clks[7];
+	struct clk_bulk_data clks[9];
 	int num_clks;
 	struct regulator_bulk_data supplies[2];
 	struct reset_control *pci_reset;
···
 struct qcom_pcie_cfg {
 	const struct qcom_pcie_ops *ops;
 	unsigned int pipe_clk_need_muxing:1;
+	unsigned int has_tbu_clk:1;
+	unsigned int has_ddrss_sf_tbu_clk:1;
+	unsigned int has_aggre0_clk:1;
+	unsigned int has_aggre1_clk:1;
 };
 
 struct qcom_pcie {
···
 	union qcom_pcie_resources res;
 	struct phy *phy;
 	struct gpio_desc *reset;
-	const struct qcom_pcie_ops *ops;
-	unsigned int pipe_clk_need_muxing:1;
+	const struct qcom_pcie_cfg *cfg;
 };
 
 #define to_qcom_pcie(x)	dev_get_drvdata((x)->dev)
···
 	struct qcom_pcie *pcie = to_qcom_pcie(pci);
 
 	/* Enable Link Training state machine */
-	if (pcie->ops->ltssm_enable)
-		pcie->ops->ltssm_enable(pcie);
+	if (pcie->cfg->ops->ltssm_enable)
+		pcie->cfg->ops->ltssm_enable(pcie);
 
 	return 0;
 }
···
 	struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
 	struct dw_pcie *pci = pcie->pci;
 	struct device *dev = pci->dev;
+	unsigned int idx;
 	int ret;
 
 	res->pci_reset = devm_reset_control_get_exclusive(dev, "pci");
···
 	if (ret)
 		return ret;
 
-	res->clks[0].id = "aux";
-	res->clks[1].id = "cfg";
-	res->clks[2].id = "bus_master";
-	res->clks[3].id = "bus_slave";
-	res->clks[4].id = "slave_q2a";
-	res->clks[5].id = "tbu";
-	if (of_device_is_compatible(dev->of_node, "qcom,pcie-sm8250")) {
-		res->clks[6].id = "ddrss_sf_tbu";
-		res->num_clks = 7;
-	} else {
-		res->num_clks = 6;
-	}
+	idx = 0;
+	res->clks[idx++].id = "aux";
+	res->clks[idx++].id = "cfg";
+	res->clks[idx++].id = "bus_master";
+	res->clks[idx++].id = "bus_slave";
+	res->clks[idx++].id = "slave_q2a";
+	if (pcie->cfg->has_tbu_clk)
+		res->clks[idx++].id = "tbu";
+	if (pcie->cfg->has_ddrss_sf_tbu_clk)
+		res->clks[idx++].id = "ddrss_sf_tbu";
+	if (pcie->cfg->has_aggre0_clk)
+		res->clks[idx++].id = "aggre0";
+	if (pcie->cfg->has_aggre1_clk)
+		res->clks[idx++].id = "aggre1";
+
+	res->num_clks = idx;
 
 	ret = devm_clk_bulk_get(dev, res->num_clks, res->clks);
 	if (ret < 0)
 		return ret;
 
-	if (pcie->pipe_clk_need_muxing) {
+	if (pcie->cfg->pipe_clk_need_muxing) {
 		res->pipe_clk_src = devm_clk_get(dev, "pipe_mux");
 		if (IS_ERR(res->pipe_clk_src))
 			return PTR_ERR(res->pipe_clk_src);
···
 	}
 
 	/* Set TCXO as clock source for pcie_pipe_clk_src */
-	if (pcie->pipe_clk_need_muxing)
+	if (pcie->cfg->pipe_clk_need_muxing)
 		clk_set_parent(res->pipe_clk_src, res->ref_clk_src);
 
 	ret = clk_bulk_prepare_enable(res->num_clks, res->clks);
···
 		dev_err(dev, "cannot prepare/enable pipe clock\n");
 		goto err_disable_clocks;
 	}
+
+	/* Wait for reset to complete, required on SM8450 */
+	usleep_range(1000, 1500);
 
 	/* configure PCIe to RC mode */
 	writel(DEVICE_TYPE_RC, pcie->parf + PCIE20_PARF_DEVICE_TYPE);
···
 	struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
 
 	/* Set pipe clock as clock source for pcie_pipe_clk_src */
-	if (pcie->pipe_clk_need_muxing)
+	if (pcie->cfg->pipe_clk_need_muxing)
 		clk_set_parent(res->pipe_clk_src, res->phy_pipe_clk);
 
 	return clk_prepare_enable(res->pipe_clk);
···
 
 	qcom_ep_reset_assert(pcie);
 
-	ret = pcie->ops->init(pcie);
+	ret = pcie->cfg->ops->init(pcie);
 	if (ret)
 		return ret;
···
 	if (ret)
 		goto err_deinit;
 
-	if (pcie->ops->post_init) {
-		ret = pcie->ops->post_init(pcie);
+	if (pcie->cfg->ops->post_init) {
+		ret = pcie->cfg->ops->post_init(pcie);
 		if (ret)
 			goto err_disable_phy;
 	}
 
 	qcom_ep_reset_deassert(pcie);
 
-	if (pcie->ops->config_sid) {
-		ret = pcie->ops->config_sid(pcie);
+	if (pcie->cfg->ops->config_sid) {
+		ret = pcie->cfg->ops->config_sid(pcie);
 		if (ret)
 			goto err;
 	}
···
 
 err:
 	qcom_ep_reset_assert(pcie);
-	if (pcie->ops->post_deinit)
-		pcie->ops->post_deinit(pcie);
+	if (pcie->cfg->ops->post_deinit)
+		pcie->cfg->ops->post_deinit(pcie);
 err_disable_phy:
 	phy_power_off(pcie->phy);
 err_deinit:
-	pcie->ops->deinit(pcie);
+	pcie->cfg->ops->deinit(pcie);
 
 	return ret;
 }
···
 
 static const struct qcom_pcie_cfg sdm845_cfg = {
 	.ops = &ops_2_7_0,
+	.has_tbu_clk = true,
 };
 
 static const struct qcom_pcie_cfg sm8250_cfg = {
 	.ops = &ops_1_9_0,
+	.has_tbu_clk = true,
+	.has_ddrss_sf_tbu_clk = true,
+};
+
+static const struct qcom_pcie_cfg sm8450_pcie0_cfg = {
+	.ops = &ops_1_9_0,
+	.has_ddrss_sf_tbu_clk = true,
+	.pipe_clk_need_muxing = true,
+	.has_aggre0_clk = true,
+	.has_aggre1_clk = true,
+};
+
+static const struct qcom_pcie_cfg sm8450_pcie1_cfg = {
+	.ops = &ops_1_9_0,
+	.has_ddrss_sf_tbu_clk = true,
+	.pipe_clk_need_muxing = true,
+	.has_aggre1_clk = true,
 };
 
 static const struct qcom_pcie_cfg sc7280_cfg = {
 	.ops = &ops_1_9_0,
+	.has_tbu_clk = true,
 	.pipe_clk_need_muxing = true,
 };
···
 
 	pcie->pci = pci;
 
-	pcie->ops = pcie_cfg->ops;
-	pcie->pipe_clk_need_muxing = pcie_cfg->pipe_clk_need_muxing;
+	pcie->cfg = pcie_cfg;
 
 	pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH);
 	if (IS_ERR(pcie->reset)) {
···
 		goto err_pm_runtime_put;
 	}
 
-	ret = pcie->ops->get_resources(pcie);
+	ret = pcie->cfg->ops->get_resources(pcie);
 	if (ret)
 		goto err_pm_runtime_put;
···
 	{ .compatible = "qcom,pcie-sdm845", .data = &sdm845_cfg },
 	{ .compatible = "qcom,pcie-sm8250", .data = &sm8250_cfg },
 	{ .compatible = "qcom,pcie-sc8180x", .data = &sm8250_cfg },
+	{ .compatible = "qcom,pcie-sm8450-pcie0", .data = &sm8450_pcie0_cfg },
+	{ .compatible = "qcom,pcie-sm8450-pcie1", .data = &sm8450_pcie1_cfg },
 	{ .compatible = "qcom,pcie-sc7280", .data = &sc7280_cfg },
 	{ }
 };
 
 static void qcom_fixup_class(struct pci_dev *dev)
 {
-	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+	dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL;
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0101, qcom_fixup_class);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0104, qcom_fixup_class);
+123 -19
drivers/pci/controller/dwc/pcie-uniphier-ep.c
···
 #include <linux/clk.h>
 #include <linux/delay.h>
 #include <linux/init.h>
+#include <linux/iopoll.h>
 #include <linux/of_device.h>
 #include <linux/pci.h>
 #include <linux/phy/phy.h>
···
 #define PCL_RSTCTRL2			0x0024
 #define PCL_RSTCTRL_PHY_RESET		BIT(0)
 
+#define PCL_PINCTRL0			0x002c
+#define PCL_PERST_PLDN_REGEN		BIT(12)
+#define PCL_PERST_NOE_REGEN		BIT(11)
+#define PCL_PERST_OUT_REGEN		BIT(8)
+#define PCL_PERST_PLDN_REGVAL		BIT(4)
+#define PCL_PERST_NOE_REGVAL		BIT(3)
+#define PCL_PERST_OUT_REGVAL		BIT(0)
+
+#define PCL_PIPEMON			0x0044
+#define PCL_PCLK_ALIVE			BIT(15)
+
 #define PCL_MODE			0x8000
 #define PCL_MODE_REGEN			BIT(8)
 #define PCL_MODE_REGVAL			BIT(0)
···
 #define PCL_APP_INTX			0x8074
 #define PCL_APP_INTX_SYS_INT		BIT(0)
 
+#define PCL_APP_PM0			0x8078
+#define PCL_SYS_AUX_PWR_DET		BIT(8)
+
 /* assertion time of INTx in usec */
 #define PCL_INTX_WIDTH_USEC		30
 
···
 	struct clk *clk, *clk_gio;
 	struct reset_control *rst, *rst_gio;
 	struct phy *phy;
-	const struct pci_epc_features *features;
+	const struct uniphier_pcie_ep_soc_data *data;
+};
+
+struct uniphier_pcie_ep_soc_data {
+	bool has_gio;
+	void (*init)(struct uniphier_pcie_ep_priv *priv);
+	int (*wait)(struct uniphier_pcie_ep_priv *priv);
+	const struct pci_epc_features features;
 };
 
 #define to_uniphier_pcie(x)	dev_get_drvdata((x)->dev)
···
 	writel(val, priv->base + PCL_RSTCTRL2);
 }
 
-static void uniphier_pcie_init_ep(struct uniphier_pcie_ep_priv *priv)
+static void uniphier_pcie_pro5_init_ep(struct uniphier_pcie_ep_priv *priv)
 {
 	u32 val;
 
···
 	uniphier_pcie_ltssm_enable(priv, false);
 
 	msleep(100);
+}
+
+static void uniphier_pcie_nx1_init_ep(struct uniphier_pcie_ep_priv *priv)
+{
+	u32 val;
+
+	/* set EP mode */
+	val = readl(priv->base + PCL_MODE);
+	val |= PCL_MODE_REGEN | PCL_MODE_REGVAL;
+	writel(val, priv->base + PCL_MODE);
+
+	/* use auxiliary power detection */
+	val = readl(priv->base + PCL_APP_PM0);
+	val |= PCL_SYS_AUX_PWR_DET;
+	writel(val, priv->base + PCL_APP_PM0);
+
+	/* assert PERST# */
+	val = readl(priv->base + PCL_PINCTRL0);
+	val &= ~(PCL_PERST_NOE_REGVAL | PCL_PERST_OUT_REGVAL
+		 | PCL_PERST_PLDN_REGVAL);
+	val |= PCL_PERST_NOE_REGEN | PCL_PERST_OUT_REGEN
+		| PCL_PERST_PLDN_REGEN;
+	writel(val, priv->base + PCL_PINCTRL0);
+
+	uniphier_pcie_ltssm_enable(priv, false);
+
+	usleep_range(100000, 200000);
+
+	/* deassert PERST# */
+	val = readl(priv->base + PCL_PINCTRL0);
+	val |= PCL_PERST_OUT_REGVAL | PCL_PERST_OUT_REGEN;
+	writel(val, priv->base + PCL_PINCTRL0);
+}
+
+static int uniphier_pcie_nx1_wait_ep(struct uniphier_pcie_ep_priv *priv)
+{
+	u32 status;
+	int ret;
+
+	/* wait PIPE clock */
+	ret = readl_poll_timeout(priv->base + PCL_PIPEMON, status,
+				 status & PCL_PCLK_ALIVE, 100000, 1000000);
+	if (ret) {
+		dev_err(priv->pci.dev,
+			"Failed to initialize controller in EP mode\n");
+		return ret;
+	}
+
+	return 0;
 }
 
 static int uniphier_pcie_start_link(struct dw_pcie *pci)
···
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	struct uniphier_pcie_ep_priv *priv = to_uniphier_pcie(pci);
 
-	return priv->features;
+	return &priv->data->features;
 }
 
 static const struct dw_pcie_ep_ops uniphier_pcie_ep_ops = {
···
 	if (ret)
 		goto out_rst_assert;
 
-	uniphier_pcie_init_ep(priv);
+	if (priv->data->init)
+		priv->data->init(priv);
 
 	uniphier_pcie_phy_reset(priv, true);
 
···
 
 	uniphier_pcie_phy_reset(priv, false);
 
+	if (priv->data->wait) {
+		ret = priv->data->wait(priv);
+		if (ret)
+			goto out_phy_exit;
+	}
+
 	return 0;
 
+out_phy_exit:
+	phy_exit(priv->phy);
 out_rst_gio_assert:
 	reset_control_assert(priv->rst_gio);
 out_rst_assert:
···
 	if (!priv)
 		return -ENOMEM;
 
-	priv->features = of_device_get_match_data(dev);
-	if (WARN_ON(!priv->features))
+	priv->data = of_device_get_match_data(dev);
+	if (WARN_ON(!priv->data))
 		return -EINVAL;
 
 	priv->pci.dev = dev;
···
 	if (IS_ERR(priv->base))
 		return PTR_ERR(priv->base);
 
-	priv->clk_gio = devm_clk_get(dev, "gio");
-	if (IS_ERR(priv->clk_gio))
-		return PTR_ERR(priv->clk_gio);
+	if (priv->data->has_gio) {
+		priv->clk_gio = devm_clk_get(dev, "gio");
+		if (IS_ERR(priv->clk_gio))
+			return PTR_ERR(priv->clk_gio);
 
-	priv->rst_gio = devm_reset_control_get_shared(dev, "gio");
-	if (IS_ERR(priv->rst_gio))
-		return PTR_ERR(priv->rst_gio);
+		priv->rst_gio = devm_reset_control_get_shared(dev, "gio");
+		if (IS_ERR(priv->rst_gio))
+			return PTR_ERR(priv->rst_gio);
+	}
 
 	priv->clk = devm_clk_get(dev, "link");
 	if (IS_ERR(priv->clk))
···
 	return dw_pcie_ep_init(&priv->pci.ep);
 }
 
-static const struct pci_epc_features uniphier_pro5_data = {
-	.linkup_notifier = false,
-	.msi_capable = true,
-	.msix_capable = false,
-	.align = 1 << 16,
-	.bar_fixed_64bit = BIT(BAR_0) | BIT(BAR_2) | BIT(BAR_4),
-	.reserved_bar = BIT(BAR_4),
+static const struct uniphier_pcie_ep_soc_data uniphier_pro5_data = {
+	.has_gio = true,
+	.init = uniphier_pcie_pro5_init_ep,
+	.wait = NULL,
+	.features = {
+		.linkup_notifier = false,
+		.msi_capable = true,
+		.msix_capable = false,
+		.align = 1 << 16,
+		.bar_fixed_64bit = BIT(BAR_0) | BIT(BAR_2) | BIT(BAR_4),
+		.reserved_bar = BIT(BAR_4),
+	},
+};
+
+static const struct uniphier_pcie_ep_soc_data uniphier_nx1_data = {
+	.has_gio = false,
+	.init = uniphier_pcie_nx1_init_ep,
+	.wait = uniphier_pcie_nx1_wait_ep,
+	.features = {
+		.linkup_notifier = false,
+		.msi_capable = true,
+		.msix_capable = false,
+		.align = 1 << 12,
+		.bar_fixed_64bit = BIT(BAR_0) | BIT(BAR_2) | BIT(BAR_4),
+	},
 };
 
 static const struct of_device_id uniphier_pcie_ep_match[] = {
 	{
 		.compatible = "socionext,uniphier-pro5-pcie-ep",
 		.data = &uniphier_pro5_data,
+	},
+	{
+		.compatible = "socionext,uniphier-nx1-pcie-ep",
+		.data = &uniphier_nx1_data,
 	},
 	{ /* sentinel */ },
 };
+1 -1
drivers/pci/controller/mobiveil/pcie-mobiveil-host.c
···
 	/* fixup for PCIe class register */
 	value = mobiveil_csr_readl(pcie, PAB_INTP_AXI_PIO_CLASS);
 	value &= 0xff;
-	value |= (PCI_CLASS_BRIDGE_PCI << 16);
+	value |= PCI_CLASS_BRIDGE_PCI_NORMAL << 8;
 	mobiveil_csr_writel(pcie, value, PAB_INTP_AXI_PIO_CLASS);
 
 	return 0;
+276 -118
drivers/pci/controller/pci-aardvark.c
···
 #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN		BIT(6)
 #define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK			BIT(7)
 #define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK_RCV		BIT(8)
-#define PCIE_CORE_INT_A_ASSERT_ENABLE			1
-#define PCIE_CORE_INT_B_ASSERT_ENABLE			2
-#define PCIE_CORE_INT_C_ASSERT_ENABLE			3
-#define PCIE_CORE_INT_D_ASSERT_ENABLE			4
 /* PIO registers base address and register offsets */
 #define PIO_BASE_ADDR				0x4000
 #define PIO_CTRL				(PIO_BASE_ADDR + 0x0)
···
 #define PCIE_MSG_PM_PME_MASK			BIT(7)
 #define PCIE_ISR0_MASK_REG			(CONTROL_BASE_ADDR + 0x44)
 #define PCIE_ISR0_MSI_INT_PENDING		BIT(24)
+#define PCIE_ISR0_CORR_ERR			BIT(11)
+#define PCIE_ISR0_NFAT_ERR			BIT(12)
+#define PCIE_ISR0_FAT_ERR			BIT(13)
+#define PCIE_ISR0_ERR_MASK			GENMASK(13, 11)
 #define PCIE_ISR0_INTX_ASSERT(val)		BIT(16 + (val))
 #define PCIE_ISR0_INTX_DEASSERT(val)		BIT(20 + (val))
 #define PCIE_ISR0_ALL_MASK			GENMASK(31, 0)
···
 		u32 actions;
 	} wins[OB_WIN_COUNT];
 	u8 wins_count;
+	int irq;
+	struct irq_domain *rp_irq_domain;
 	struct irq_domain *irq_domain;
 	struct irq_chip irq_chip;
 	raw_spinlock_t irq_lock;
 	struct irq_domain *msi_domain;
 	struct irq_domain *msi_inner_domain;
-	struct irq_chip msi_bottom_irq_chip;
-	struct irq_chip msi_irq_chip;
-	struct msi_domain_info msi_domain_info;
+	raw_spinlock_t msi_irq_lock;
 	DECLARE_BITMAP(msi_used, MSI_IRQ_NUM);
 	struct mutex msi_used_lock;
-	u16 msi_msg;
 	int link_gen;
 	struct pci_bridge_emul bridge;
 	struct gpio_desc *reset_gpio;
···
 
 static void advk_pcie_setup_hw(struct advk_pcie *pcie)
 {
+	phys_addr_t msi_addr;
 	u32 reg;
 	int i;
 
···
 	 */
 	reg = advk_readl(pcie, PCIE_CORE_DEV_REV_REG);
 	reg &= ~0xffffff00;
-	reg |= (PCI_CLASS_BRIDGE_PCI << 8) << 8;
+	reg |= PCI_CLASS_BRIDGE_PCI_NORMAL << 8;
 	advk_writel(pcie, reg, PCIE_CORE_DEV_REV_REG);
 
 	/* Disable Root Bridge I/O space, memory space and bus mastering */
···
 	reg |= LANE_COUNT_1;
 	advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
 
+	/* Set MSI address */
+	msi_addr = virt_to_phys(pcie);
+	advk_writel(pcie, lower_32_bits(msi_addr), PCIE_MSI_ADDR_LOW_REG);
+	advk_writel(pcie, upper_32_bits(msi_addr), PCIE_MSI_ADDR_HIGH_REG);
+
 	/* Enable MSI */
 	reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG);
 	reg |= PCIE_CORE_CTRL2_MSI_ENABLE;
···
 	advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_REG);
 	advk_writel(pcie, PCIE_IRQ_ALL_MASK, HOST_CTRL_INT_STATUS_REG);
 
-	/* Disable All ISR0/1 Sources */
-	reg = PCIE_ISR0_ALL_MASK;
+	/* Disable All ISR0/1 and MSI Sources */
+	advk_writel(pcie, PCIE_ISR0_ALL_MASK, PCIE_ISR0_MASK_REG);
+	advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_MASK_REG);
+	advk_writel(pcie, PCIE_MSI_ALL_MASK, PCIE_MSI_MASK_REG);
+
+	/* Unmask summary MSI interrupt */
+	reg = advk_readl(pcie, PCIE_ISR0_MASK_REG);
 	reg &= ~PCIE_ISR0_MSI_INT_PENDING;
 	advk_writel(pcie, reg, PCIE_ISR0_MASK_REG);
 
-	advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_MASK_REG);
-
-	/* Unmask all MSIs */
-	advk_writel(pcie, ~(u32)PCIE_MSI_ALL_MASK, PCIE_MSI_MASK_REG);
+	/* Unmask PME interrupt for processing of PME requester */
+	reg = advk_readl(pcie, PCIE_ISR0_MASK_REG);
+	reg &= ~PCIE_MSG_PM_PME_MASK;
+	advk_writel(pcie, reg, PCIE_ISR0_MASK_REG);
 
 	/* Enable summary interrupt for GIC SPI source */
 	reg = PCIE_IRQ_ALL_MASK & (~PCIE_IRQ_ENABLE_INTS_MASK);
···
 	case PCI_INTERRUPT_LINE: {
 		/*
 		 * From the whole 32bit register we support reading from HW only
-		 * one bit: PCI_BRIDGE_CTL_BUS_RESET.
+		 * two bits: PCI_BRIDGE_CTL_BUS_RESET and PCI_BRIDGE_CTL_SERR.
 		 * Other bits are retrieved only from emulated config buffer.
 		 */
 		__le32 *cfgspace = (__le32 *)&bridge->conf;
 		u32 val = le32_to_cpu(cfgspace[PCI_INTERRUPT_LINE / 4]);
+		if (advk_readl(pcie, PCIE_ISR0_MASK_REG) & PCIE_ISR0_ERR_MASK)
+			val &= ~(PCI_BRIDGE_CTL_SERR << 16);
+		else
+			val |= PCI_BRIDGE_CTL_SERR << 16;
 		if (advk_readl(pcie, PCIE_CORE_CTRL1_REG) & HOT_RESET_GEN)
 			val |= PCI_BRIDGE_CTL_BUS_RESET << 16;
 		else
···
 		break;
 
 	case PCI_INTERRUPT_LINE:
+		/*
+		 * According to Figure 6-3: Pseudo Logic Diagram for Error
+		 * Message Controls in PCIe base specification, SERR# Enable bit
+		 * in Bridge Control register enable receiving of ERR_* messages
+		 */
+		if (mask & (PCI_BRIDGE_CTL_SERR << 16)) {
+			u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG);
+			if (new & (PCI_BRIDGE_CTL_SERR << 16))
+				val &= ~PCIE_ISR0_ERR_MASK;
+			else
+				val |= PCIE_ISR0_ERR_MASK;
+			advk_writel(pcie, val, PCIE_ISR0_MASK_REG);
+		}
 		if (mask & (PCI_BRIDGE_CTL_BUS_RESET << 16)) {
 			u32 val = advk_readl(pcie, PCIE_CORE_CTRL1_REG);
 			if (new & (PCI_BRIDGE_CTL_BUS_RESET << 16))
···
 		*value = PCI_EXP_SLTSTA_PDS << 16;
 		return PCI_BRIDGE_EMUL_HANDLED;
 
-	case PCI_EXP_RTCTL: {
-		u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG);
-		*value = (val & PCIE_MSG_PM_PME_MASK) ? 0 : PCI_EXP_RTCTL_PMEIE;
-		*value |= le16_to_cpu(bridge->pcie_conf.rootctl) & PCI_EXP_RTCTL_CRSSVE;
-		*value |= PCI_EXP_RTCAP_CRSVIS << 16;
-		return PCI_BRIDGE_EMUL_HANDLED;
-	}
-
-	case PCI_EXP_RTSTA: {
-		u32 isr0 = advk_readl(pcie, PCIE_ISR0_REG);
-		u32 msglog = advk_readl(pcie, PCIE_MSG_LOG_REG);
-		*value = (isr0 & PCIE_MSG_PM_PME_MASK) << 16 | (msglog >> 16);
-		return PCI_BRIDGE_EMUL_HANDLED;
-	}
+	/*
+	 * PCI_EXP_RTCTL and PCI_EXP_RTSTA are also supported, but do not need
+	 * to be handled here, because their values are stored in emulated
+	 * config space buffer, and we read them from there when needed.
+	 */
 
 	case PCI_EXP_LNKCAP: {
 		u32 val = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
···
 		break;
 
 	case PCI_EXP_RTCTL: {
-		/* Only mask/unmask PME interrupt */
-		u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG) &
-			~PCIE_MSG_PM_PME_MASK;
-		if ((new & PCI_EXP_RTCTL_PMEIE) == 0)
-			val |= PCIE_MSG_PM_PME_MASK;
-		advk_writel(pcie, val, PCIE_ISR0_MASK_REG);
+		u16 rootctl = le16_to_cpu(bridge->pcie_conf.rootctl);
+		/* Only emulation of PMEIE and CRSSVE bits is provided */
+		rootctl &= PCI_EXP_RTCTL_PMEIE | PCI_EXP_RTCTL_CRSSVE;
+		bridge->pcie_conf.rootctl = cpu_to_le16(rootctl);
 		break;
 	}
 
-	case PCI_EXP_RTSTA:
-		new = (new & PCI_EXP_RTSTA_PME) >> 9;
-		advk_writel(pcie, new, PCIE_ISR0_REG);
-		break;
+	/*
+	 * PCI_EXP_RTSTA is also supported, but does not need to be handled
+	 * here, because its value is stored in emulated config space buffer,
+	 * and we write it there when needed.
+	 */
 
 	case PCI_EXP_DEVCTL:
 	case PCI_EXP_DEVCTL2:
···
 	}
 }
 
-static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
+static const struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
 	.read_base = advk_pci_bridge_emul_base_conf_read,
 	.write_base = advk_pci_bridge_emul_base_conf_write,
 	.read_pcie = advk_pci_bridge_emul_pcie_conf_read,
···
 	bridge->conf.pref_mem_limit = cpu_to_le16(PCI_PREF_RANGE_TYPE_64);
 
 	/* Support interrupt A for MSI feature */
-	bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE;
+	bridge->conf.intpin = PCI_INTERRUPT_INTA;
 
 	/* Aardvark HW provides PCIe Capability structure in version 2 */
 	bridge->pcie_conf.cap = cpu_to_le16(2);
···
 		return false;
 
 	/*
-	 * If the link goes down after we check for link-up, nothing bad
-	 * happens but the config access times out.
+	 * If the link goes down after we check for link-up, we have a problem:
+	 * if a PIO request is executed while link-down, the whole controller
+	 * gets stuck in a non-functional state, and even after link comes up
+	 * again, PIO requests won't work anymore, and a reset of the whole PCIe
+	 * controller is needed. Therefore we need to prevent sending PIO
+	 * requests while the link is down.
 	 */
 	if (!pci_is_root_bus(bus) && !advk_pcie_link_up(pcie))
 		return false;
···
 					 struct msi_msg *msg)
 {
 	struct advk_pcie *pcie = irq_data_get_irq_chip_data(data);
-	phys_addr_t msi_msg = virt_to_phys(&pcie->msi_msg);
+	phys_addr_t msi_addr = virt_to_phys(pcie);
 
-	msg->address_lo = lower_32_bits(msi_msg);
-	msg->address_hi = upper_32_bits(msi_msg);
-	msg->data = data->irq;
+	msg->address_lo = lower_32_bits(msi_addr);
+	msg->address_hi = upper_32_bits(msi_addr);
+	msg->data = data->hwirq;
 }
 
 static int advk_msi_set_affinity(struct irq_data *irq_data,
···
 {
 	return -EINVAL;
 }
+
+static void advk_msi_irq_mask(struct irq_data *d)
+{
+	struct advk_pcie *pcie = d->domain->host_data;
+	irq_hw_number_t hwirq = irqd_to_hwirq(d);
+	unsigned long flags;
+	u32 mask;
+
+	raw_spin_lock_irqsave(&pcie->msi_irq_lock, flags);
+	mask = advk_readl(pcie, PCIE_MSI_MASK_REG);
+	mask |= BIT(hwirq);
+	advk_writel(pcie, mask, PCIE_MSI_MASK_REG);
+	raw_spin_unlock_irqrestore(&pcie->msi_irq_lock, flags);
+}
+
+static void advk_msi_irq_unmask(struct irq_data *d)
+{
+	struct advk_pcie *pcie = d->domain->host_data;
+	irq_hw_number_t hwirq = irqd_to_hwirq(d);
+	unsigned long flags;
+	u32 mask;
+
+	raw_spin_lock_irqsave(&pcie->msi_irq_lock, flags);
+	mask = advk_readl(pcie, PCIE_MSI_MASK_REG);
+	mask &= ~BIT(hwirq);
+	advk_writel(pcie, mask, PCIE_MSI_MASK_REG);
+	raw_spin_unlock_irqrestore(&pcie->msi_irq_lock, flags);
+}
+
+static void advk_msi_top_irq_mask(struct irq_data *d)
+{
+	pci_msi_mask_irq(d);
+	irq_chip_mask_parent(d);
+}
+
+static void advk_msi_top_irq_unmask(struct irq_data *d)
+{
+	pci_msi_unmask_irq(d);
+	irq_chip_unmask_parent(d);
+}
+
+static struct irq_chip advk_msi_bottom_irq_chip = {
+	.name			= "MSI",
+	.irq_compose_msi_msg	= advk_msi_irq_compose_msi_msg,
+	.irq_set_affinity	= advk_msi_set_affinity,
+	.irq_mask		= advk_msi_irq_mask,
+	.irq_unmask		= advk_msi_irq_unmask,
+};
 
 static int advk_msi_irq_domain_alloc(struct irq_domain *domain,
 				     unsigned int virq,
···
 	int hwirq, i;
 
 	mutex_lock(&pcie->msi_used_lock);
-	hwirq = bitmap_find_next_zero_area(pcie->msi_used, MSI_IRQ_NUM,
-					   0, nr_irqs, 0);
-	if (hwirq >= MSI_IRQ_NUM) {
-		mutex_unlock(&pcie->msi_used_lock);
-		return -ENOSPC;
-	}
-
-	bitmap_set(pcie->msi_used, hwirq, nr_irqs);
+	hwirq = bitmap_find_free_region(pcie->msi_used, MSI_IRQ_NUM,
+					order_base_2(nr_irqs));
 	mutex_unlock(&pcie->msi_used_lock);
+	if (hwirq < 0)
+		return -ENOSPC;
 
 	for (i = 0; i < nr_irqs; i++)
 		irq_domain_set_info(domain, virq + i, hwirq + i,
-				    &pcie->msi_bottom_irq_chip,
+				    &advk_msi_bottom_irq_chip,
 				    domain->host_data, handle_simple_irq,
 				    NULL, NULL);
···
 	struct advk_pcie *pcie = domain->host_data;
 
 	mutex_lock(&pcie->msi_used_lock);
-	bitmap_clear(pcie->msi_used, d->hwirq, nr_irqs);
+	bitmap_release_region(pcie->msi_used, d->hwirq, order_base_2(nr_irqs));
 	mutex_unlock(&pcie->msi_used_lock);
 }
···
 {
 	struct advk_pcie *pcie = h->host_data;
 
-	advk_pcie_irq_mask(irq_get_irq_data(virq));
 	irq_set_status_flags(virq, IRQ_LEVEL);
 	irq_set_chip_and_handler(virq, &pcie->irq_chip,
 				 handle_level_irq);
···
 	.xlate = irq_domain_xlate_onecell,
 };
 
+static struct irq_chip advk_msi_irq_chip = {
+	.name		= "advk-MSI",
+	.irq_mask	= advk_msi_top_irq_mask,
+	.irq_unmask	= advk_msi_top_irq_unmask,
+};
+
+static struct msi_domain_info advk_msi_domain_info = {
+	.flags	= MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
+		  MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX,
+	.chip	= &advk_msi_irq_chip,
+};
+
 static int advk_pcie_init_msi_irq_domain(struct advk_pcie *pcie)
 {
 	struct device *dev = &pcie->pdev->dev;
-	struct device_node *node = dev->of_node;
-	struct irq_chip *bottom_ic, *msi_ic;
-	struct msi_domain_info *msi_di;
-	phys_addr_t msi_msg_phys;
 
+	raw_spin_lock_init(&pcie->msi_irq_lock);
 	mutex_init(&pcie->msi_used_lock);
-
-	bottom_ic = &pcie->msi_bottom_irq_chip;
-
-	bottom_ic->name = "MSI";
-	bottom_ic->irq_compose_msi_msg = advk_msi_irq_compose_msi_msg;
-	bottom_ic->irq_set_affinity = advk_msi_set_affinity;
-
-	msi_ic = &pcie->msi_irq_chip;
-	msi_ic->name = "advk-MSI";
-
-	msi_di = &pcie->msi_domain_info;
-	msi_di->flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
-			MSI_FLAG_MULTI_PCI_MSI;
-	msi_di->chip = msi_ic;
-
-	msi_msg_phys = virt_to_phys(&pcie->msi_msg);
-
-	advk_writel(pcie, lower_32_bits(msi_msg_phys),
-		    PCIE_MSI_ADDR_LOW_REG);
-	advk_writel(pcie, upper_32_bits(msi_msg_phys),
-		    PCIE_MSI_ADDR_HIGH_REG);
 
 	pcie->msi_inner_domain =
 		irq_domain_add_linear(NULL, MSI_IRQ_NUM,
···
 		return -ENOMEM;
 
 	pcie->msi_domain =
-		pci_msi_create_irq_domain(of_node_to_fwnode(node),
-					  msi_di, pcie->msi_inner_domain);
+		pci_msi_create_irq_domain(dev_fwnode(dev),
+					  &advk_msi_domain_info,
+					  pcie->msi_inner_domain);
 	if (!pcie->msi_domain) {
 		irq_domain_remove(pcie->msi_inner_domain);
 		return -ENOMEM;
···
 	}
 
 	irq_chip->irq_mask = advk_pcie_irq_mask;
-	irq_chip->irq_mask_ack = advk_pcie_irq_mask;
 	irq_chip->irq_unmask = advk_pcie_irq_unmask;
 
 	pcie->irq_domain =
···
 	irq_domain_remove(pcie->irq_domain);
 }
 
+static struct irq_chip advk_rp_irq_chip = {
+	.name = "advk-RP",
+};
+
+static int advk_pcie_rp_irq_map(struct irq_domain *h,
+				unsigned int virq, irq_hw_number_t hwirq)
+{
+	struct advk_pcie *pcie = h->host_data;
+
+	irq_set_chip_and_handler(virq, &advk_rp_irq_chip, handle_simple_irq);
+	irq_set_chip_data(virq, pcie);
+
+	return 0;
+}
+
+static const struct irq_domain_ops advk_pcie_rp_irq_domain_ops = {
+	.map = advk_pcie_rp_irq_map,
+	.xlate = irq_domain_xlate_onecell,
+};
+
+static int advk_pcie_init_rp_irq_domain(struct advk_pcie *pcie)
+{
+	pcie->rp_irq_domain = irq_domain_add_linear(NULL, 1,
+						    &advk_pcie_rp_irq_domain_ops,
+						    pcie);
+	if (!pcie->rp_irq_domain) {
+		dev_err(&pcie->pdev->dev, "Failed to add Root Port IRQ domain\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void advk_pcie_remove_rp_irq_domain(struct advk_pcie *pcie)
+{
+	irq_domain_remove(pcie->rp_irq_domain);
+}
+
+static void advk_pcie_handle_pme(struct advk_pcie *pcie)
+{
+	u32 requester = advk_readl(pcie, PCIE_MSG_LOG_REG) >> 16;
+
+	advk_writel(pcie, PCIE_MSG_PM_PME_MASK, PCIE_ISR0_REG);
+
+	/*
+	 * PCIE_MSG_LOG_REG contains the last inbound message, so store
+	 * the requester ID only when PME was not asserted yet.
+	 * Also do not trigger PME interrupt when PME is still asserted.
+	 */
+	if (!(le32_to_cpu(pcie->bridge.pcie_conf.rootsta) & PCI_EXP_RTSTA_PME)) {
+		pcie->bridge.pcie_conf.rootsta = cpu_to_le32(requester | PCI_EXP_RTSTA_PME);
+
+		/*
+		 * Trigger PME interrupt only if PMEIE bit in Root Control is set.
+		 * Aardvark HW returns zero for PCI_EXP_FLAGS_IRQ, so use PCIe interrupt 0.
+		 */
+		if (!(le16_to_cpu(pcie->bridge.pcie_conf.rootctl) & PCI_EXP_RTCTL_PMEIE))
+			return;
+
+		if (generic_handle_domain_irq(pcie->rp_irq_domain, 0) == -EINVAL)
+			dev_err_ratelimited(&pcie->pdev->dev, "unhandled PME IRQ\n");
+	}
+}
+
 static void advk_pcie_handle_msi(struct advk_pcie *pcie)
 {
 	u32 msi_val, msi_mask, msi_status, msi_idx;
-	u16 msi_data;
 
 	msi_mask = advk_readl(pcie, PCIE_MSI_MASK_REG);
 	msi_val = advk_readl(pcie, PCIE_MSI_STATUS_REG);
···
 		if (!(BIT(msi_idx) & msi_status))
 			continue;
 
-		/*
-		 * msi_idx contains bits [4:0] of the msi_data and msi_data
-		 * contains 16bit MSI interrupt number
-		 */
 		advk_writel(pcie, BIT(msi_idx), PCIE_MSI_STATUS_REG);
-		msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & PCIE_MSI_DATA_MASK;
-		generic_handle_irq(msi_data);
+		if (generic_handle_domain_irq(pcie->msi_inner_domain, msi_idx) == -EINVAL)
+			dev_err_ratelimited(&pcie->pdev->dev, "unexpected MSI 0x%02x\n", msi_idx);
 	}
 
 	advk_writel(pcie, PCIE_ISR0_MSI_INT_PENDING,
···
 	isr1_mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
 	isr1_status = isr1_val & ((~isr1_mask) & PCIE_ISR1_ALL_MASK);
 
+	/* Process PME interrupt as the first one to do not miss PME requester id */
+	if (isr0_status & PCIE_MSG_PM_PME_MASK)
+		advk_pcie_handle_pme(pcie);
+
+	/* Process ERR interrupt */
+	if (isr0_status & PCIE_ISR0_ERR_MASK) {
+		advk_writel(pcie, PCIE_ISR0_ERR_MASK, PCIE_ISR0_REG
1435 + 1436 + /* 1437 + * Aardvark HW returns zero for PCI_ERR_ROOT_AER_IRQ, so use 1438 + * PCIe interrupt 0 1439 + */ 1440 + if (generic_handle_domain_irq(pcie->rp_irq_domain, 0) == -EINVAL) 1441 + dev_err_ratelimited(&pcie->pdev->dev, "unhandled ERR IRQ\n"); 1442 + } 1443 + 1539 1444 /* Process MSI interrupts */ 1540 1445 if (isr0_status & PCIE_ISR0_MSI_INT_PENDING) 1541 1446 advk_pcie_handle_msi(pcie); ··· 1564 1437 advk_writel(pcie, PCIE_ISR1_INTX_ASSERT(i), 1565 1438 PCIE_ISR1_REG); 1566 1439 1567 - generic_handle_domain_irq(pcie->irq_domain, i); 1440 + if (generic_handle_domain_irq(pcie->irq_domain, i) == -EINVAL) 1441 + dev_err_ratelimited(&pcie->pdev->dev, "unexpected INT%c IRQ\n", 1442 + (char)i + 'A'); 1568 1443 } 1569 1444 } 1570 1445 1571 - static irqreturn_t advk_pcie_irq_handler(int irq, void *arg) 1446 + static void advk_pcie_irq_handler(struct irq_desc *desc) 1572 1447 { 1573 - struct advk_pcie *pcie = arg; 1574 - u32 status; 1448 + struct advk_pcie *pcie = irq_desc_get_handler_data(desc); 1449 + struct irq_chip *chip = irq_desc_get_chip(desc); 1450 + u32 val, mask, status; 1575 1451 1576 - status = advk_readl(pcie, HOST_CTRL_INT_STATUS_REG); 1577 - if (!(status & PCIE_IRQ_CORE_INT)) 1578 - return IRQ_NONE; 1452 + chained_irq_enter(chip, desc); 1579 1453 1580 - advk_pcie_handle_int(pcie); 1454 + val = advk_readl(pcie, HOST_CTRL_INT_STATUS_REG); 1455 + mask = advk_readl(pcie, HOST_CTRL_INT_MASK_REG); 1456 + status = val & ((~mask) & PCIE_IRQ_ALL_MASK); 1581 1457 1582 - /* Clear interrupt */ 1583 - advk_writel(pcie, PCIE_IRQ_CORE_INT, HOST_CTRL_INT_STATUS_REG); 1458 + if (status & PCIE_IRQ_CORE_INT) { 1459 + advk_pcie_handle_int(pcie); 1584 1460 1585 - return IRQ_HANDLED; 1461 + /* Clear interrupt */ 1462 + advk_writel(pcie, PCIE_IRQ_CORE_INT, HOST_CTRL_INT_STATUS_REG); 1463 + } 1464 + 1465 + chained_irq_exit(chip, desc); 1586 1466 } 1587 1467 1588 - static void __maybe_unused advk_pcie_disable_phy(struct advk_pcie *pcie) 1468 + static int 
advk_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 1469 + { 1470 + struct advk_pcie *pcie = dev->bus->sysdata; 1471 + 1472 + /* 1473 + * Emulated root bridge has its own emulated irq chip and irq domain. 1474 + * Argument pin is the INTx pin (1=INTA, 2=INTB, 3=INTC, 4=INTD) and 1475 + * hwirq for irq_create_mapping() is indexed from zero. 1476 + */ 1477 + if (pci_is_root_bus(dev->bus)) 1478 + return irq_create_mapping(pcie->rp_irq_domain, pin - 1); 1479 + else 1480 + return of_irq_parse_and_map_pci(dev, slot, pin); 1481 + } 1482 + 1483 + static void advk_pcie_disable_phy(struct advk_pcie *pcie) 1589 1484 { 1590 1485 phy_power_off(pcie->phy); 1591 1486 phy_exit(pcie->phy); ··· 1671 1522 struct advk_pcie *pcie; 1672 1523 struct pci_host_bridge *bridge; 1673 1524 struct resource_entry *entry; 1674 - int ret, irq; 1525 + int ret; 1675 1526 1676 1527 bridge = devm_pci_alloc_host_bridge(dev, sizeof(struct advk_pcie)); 1677 1528 if (!bridge) ··· 1757 1608 if (IS_ERR(pcie->base)) 1758 1609 return PTR_ERR(pcie->base); 1759 1610 1760 - irq = platform_get_irq(pdev, 0); 1761 - if (irq < 0) 1762 - return irq; 1763 - 1764 - ret = devm_request_irq(dev, irq, advk_pcie_irq_handler, 1765 - IRQF_SHARED | IRQF_NO_THREAD, "advk-pcie", 1766 - pcie); 1767 - if (ret) { 1768 - dev_err(dev, "Failed to register interrupt\n"); 1769 - return ret; 1770 - } 1611 + pcie->irq = platform_get_irq(pdev, 0); 1612 + if (pcie->irq < 0) 1613 + return pcie->irq; 1771 1614 1772 1615 pcie->reset_gpio = devm_gpiod_get_from_of_node(dev, dev->of_node, 1773 1616 "reset-gpios", 0, ··· 1808 1667 return ret; 1809 1668 } 1810 1669 1670 + ret = advk_pcie_init_rp_irq_domain(pcie); 1671 + if (ret) { 1672 + dev_err(dev, "Failed to initialize irq\n"); 1673 + advk_pcie_remove_msi_irq_domain(pcie); 1674 + advk_pcie_remove_irq_domain(pcie); 1675 + return ret; 1676 + } 1677 + 1678 + irq_set_chained_handler_and_data(pcie->irq, advk_pcie_irq_handler, pcie); 1679 + 1811 1680 bridge->sysdata = pcie; 1812 1681 
bridge->ops = &advk_pcie_ops; 1682 + bridge->map_irq = advk_pcie_map_irq; 1813 1683 1814 1684 ret = pci_host_probe(bridge); 1815 1685 if (ret < 0) { 1686 + irq_set_chained_handler_and_data(pcie->irq, NULL, NULL); 1687 + advk_pcie_remove_rp_irq_domain(pcie); 1816 1688 advk_pcie_remove_msi_irq_domain(pcie); 1817 1689 advk_pcie_remove_irq_domain(pcie); 1818 1690 return ret; ··· 1873 1719 advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_REG); 1874 1720 advk_writel(pcie, PCIE_IRQ_ALL_MASK, HOST_CTRL_INT_STATUS_REG); 1875 1721 1722 + /* Remove IRQ handler */ 1723 + irq_set_chained_handler_and_data(pcie->irq, NULL, NULL); 1724 + 1876 1725 /* Remove IRQ domains */ 1726 + advk_pcie_remove_rp_irq_domain(pcie); 1877 1727 advk_pcie_remove_msi_irq_domain(pcie); 1878 1728 advk_pcie_remove_irq_domain(pcie); 1879 1729
+122 -111
drivers/pci/controller/pci-hyperv.c
··· 616 616 { 617 617 return pci_msi_prepare(domain, dev, nvec, info); 618 618 } 619 + 620 + /** 621 + * hv_arch_irq_unmask() - "Unmask" the IRQ by setting its current 622 + * affinity. 623 + * @data: Describes the IRQ 624 + * 625 + * Build new a destination for the MSI and make a hypercall to 626 + * update the Interrupt Redirection Table. "Device Logical ID" 627 + * is built out of this PCI bus's instance GUID and the function 628 + * number of the device. 629 + */ 630 + static void hv_arch_irq_unmask(struct irq_data *data) 631 + { 632 + struct msi_desc *msi_desc = irq_data_get_msi_desc(data); 633 + struct hv_retarget_device_interrupt *params; 634 + struct hv_pcibus_device *hbus; 635 + struct cpumask *dest; 636 + cpumask_var_t tmp; 637 + struct pci_bus *pbus; 638 + struct pci_dev *pdev; 639 + unsigned long flags; 640 + u32 var_size = 0; 641 + int cpu, nr_bank; 642 + u64 res; 643 + 644 + dest = irq_data_get_effective_affinity_mask(data); 645 + pdev = msi_desc_to_pci_dev(msi_desc); 646 + pbus = pdev->bus; 647 + hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata); 648 + 649 + spin_lock_irqsave(&hbus->retarget_msi_interrupt_lock, flags); 650 + 651 + params = &hbus->retarget_msi_interrupt_params; 652 + memset(params, 0, sizeof(*params)); 653 + params->partition_id = HV_PARTITION_ID_SELF; 654 + params->int_entry.source = HV_INTERRUPT_SOURCE_MSI; 655 + hv_set_msi_entry_from_desc(&params->int_entry.msi_entry, msi_desc); 656 + params->device_id = (hbus->hdev->dev_instance.b[5] << 24) | 657 + (hbus->hdev->dev_instance.b[4] << 16) | 658 + (hbus->hdev->dev_instance.b[7] << 8) | 659 + (hbus->hdev->dev_instance.b[6] & 0xf8) | 660 + PCI_FUNC(pdev->devfn); 661 + params->int_target.vector = hv_msi_get_int_vector(data); 662 + 663 + /* 664 + * Honoring apic->delivery_mode set to APIC_DELIVERY_MODE_FIXED by 665 + * setting the HV_DEVICE_INTERRUPT_TARGET_MULTICAST flag results in a 666 + * spurious interrupt storm. 
Not doing so does not seem to have a 667 + * negative effect (yet?). 668 + */ 669 + 670 + if (hbus->protocol_version >= PCI_PROTOCOL_VERSION_1_2) { 671 + /* 672 + * PCI_PROTOCOL_VERSION_1_2 supports the VP_SET version of the 673 + * HVCALL_RETARGET_INTERRUPT hypercall, which also coincides 674 + * with >64 VP support. 675 + * ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED 676 + * is not sufficient for this hypercall. 677 + */ 678 + params->int_target.flags |= 679 + HV_DEVICE_INTERRUPT_TARGET_PROCESSOR_SET; 680 + 681 + if (!alloc_cpumask_var(&tmp, GFP_ATOMIC)) { 682 + res = 1; 683 + goto exit_unlock; 684 + } 685 + 686 + cpumask_and(tmp, dest, cpu_online_mask); 687 + nr_bank = cpumask_to_vpset(&params->int_target.vp_set, tmp); 688 + free_cpumask_var(tmp); 689 + 690 + if (nr_bank <= 0) { 691 + res = 1; 692 + goto exit_unlock; 693 + } 694 + 695 + /* 696 + * var-sized hypercall, var-size starts after vp_mask (thus 697 + * vp_set.format does not count, but vp_set.valid_bank_mask 698 + * does). 699 + */ 700 + var_size = 1 + nr_bank; 701 + } else { 702 + for_each_cpu_and(cpu, dest, cpu_online_mask) { 703 + params->int_target.vp_mask |= 704 + (1ULL << hv_cpu_number_to_vp_number(cpu)); 705 + } 706 + } 707 + 708 + res = hv_do_hypercall(HVCALL_RETARGET_INTERRUPT | (var_size << 17), 709 + params, NULL); 710 + 711 + exit_unlock: 712 + spin_unlock_irqrestore(&hbus->retarget_msi_interrupt_lock, flags); 713 + 714 + /* 715 + * During hibernation, when a CPU is offlined, the kernel tries 716 + * to move the interrupt to the remaining CPUs that haven't 717 + * been offlined yet. In this case, the below hv_do_hypercall() 718 + * always fails since the vmbus channel has been closed: 719 + * refer to cpu_disable_common() -> fixup_irqs() -> 720 + * irq_migrate_all_off_this_cpu() -> migrate_one_irq(). 721 + * 722 + * Suppress the error message for hibernation because the failure 723 + * during hibernation does not matter (at this time all the devices 724 + * have been frozen). 
Note: the correct affinity info is still updated 725 + * into the irqdata data structure in migrate_one_irq() -> 726 + * irq_do_set_affinity() -> hv_set_affinity(), so later when the VM 727 + * resumes, hv_pci_restore_msi_state() is able to correctly restore 728 + * the interrupt with the correct affinity. 729 + */ 730 + if (!hv_result_success(res) && hbus->state != hv_pcibus_removing) 731 + dev_err(&hbus->hdev->device, 732 + "%s() failed: %#llx", __func__, res); 733 + } 619 734 #elif defined(CONFIG_ARM64) 620 735 /* 621 736 * SPI vectors to use for vPCI; arch SPIs range is [32, 1019], but leaving a bit ··· 954 839 { 955 840 return hv_msi_gic_irq_domain; 956 841 } 842 + 843 + /* 844 + * SPIs are used for interrupts of PCI devices and SPIs is managed via GICD 845 + * registers which Hyper-V already supports, so no hypercall needed. 846 + */ 847 + static void hv_arch_irq_unmask(struct irq_data *data) { } 957 848 #endif /* CONFIG_ARM64 */ 958 849 959 850 /** ··· 1577 1456 irq_chip_mask_parent(data); 1578 1457 } 1579 1458 1580 - /** 1581 - * hv_irq_unmask() - "Unmask" the IRQ by setting its current 1582 - * affinity. 1583 - * @data: Describes the IRQ 1584 - * 1585 - * Build new a destination for the MSI and make a hypercall to 1586 - * update the Interrupt Redirection Table. "Device Logical ID" 1587 - * is built out of this PCI bus's instance GUID and the function 1588 - * number of the device. 
1589 - */ 1590 1459 static void hv_irq_unmask(struct irq_data *data) 1591 1460 { 1592 - struct msi_desc *msi_desc = irq_data_get_msi_desc(data); 1593 - struct hv_retarget_device_interrupt *params; 1594 - struct hv_pcibus_device *hbus; 1595 - struct cpumask *dest; 1596 - cpumask_var_t tmp; 1597 - struct pci_bus *pbus; 1598 - struct pci_dev *pdev; 1599 - unsigned long flags; 1600 - u32 var_size = 0; 1601 - int cpu, nr_bank; 1602 - u64 res; 1603 - 1604 - dest = irq_data_get_effective_affinity_mask(data); 1605 - pdev = msi_desc_to_pci_dev(msi_desc); 1606 - pbus = pdev->bus; 1607 - hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata); 1608 - 1609 - spin_lock_irqsave(&hbus->retarget_msi_interrupt_lock, flags); 1610 - 1611 - params = &hbus->retarget_msi_interrupt_params; 1612 - memset(params, 0, sizeof(*params)); 1613 - params->partition_id = HV_PARTITION_ID_SELF; 1614 - params->int_entry.source = HV_INTERRUPT_SOURCE_MSI; 1615 - hv_set_msi_entry_from_desc(&params->int_entry.msi_entry, msi_desc); 1616 - params->device_id = (hbus->hdev->dev_instance.b[5] << 24) | 1617 - (hbus->hdev->dev_instance.b[4] << 16) | 1618 - (hbus->hdev->dev_instance.b[7] << 8) | 1619 - (hbus->hdev->dev_instance.b[6] & 0xf8) | 1620 - PCI_FUNC(pdev->devfn); 1621 - params->int_target.vector = hv_msi_get_int_vector(data); 1622 - 1623 - /* 1624 - * Honoring apic->delivery_mode set to APIC_DELIVERY_MODE_FIXED by 1625 - * setting the HV_DEVICE_INTERRUPT_TARGET_MULTICAST flag results in a 1626 - * spurious interrupt storm. Not doing so does not seem to have a 1627 - * negative effect (yet?). 1628 - */ 1629 - 1630 - if (hbus->protocol_version >= PCI_PROTOCOL_VERSION_1_2) { 1631 - /* 1632 - * PCI_PROTOCOL_VERSION_1_2 supports the VP_SET version of the 1633 - * HVCALL_RETARGET_INTERRUPT hypercall, which also coincides 1634 - * with >64 VP support. 1635 - * ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED 1636 - * is not sufficient for this hypercall. 
1637 - */ 1638 - params->int_target.flags |= 1639 - HV_DEVICE_INTERRUPT_TARGET_PROCESSOR_SET; 1640 - 1641 - if (!alloc_cpumask_var(&tmp, GFP_ATOMIC)) { 1642 - res = 1; 1643 - goto exit_unlock; 1644 - } 1645 - 1646 - cpumask_and(tmp, dest, cpu_online_mask); 1647 - nr_bank = cpumask_to_vpset(&params->int_target.vp_set, tmp); 1648 - free_cpumask_var(tmp); 1649 - 1650 - if (nr_bank <= 0) { 1651 - res = 1; 1652 - goto exit_unlock; 1653 - } 1654 - 1655 - /* 1656 - * var-sized hypercall, var-size starts after vp_mask (thus 1657 - * vp_set.format does not count, but vp_set.valid_bank_mask 1658 - * does). 1659 - */ 1660 - var_size = 1 + nr_bank; 1661 - } else { 1662 - for_each_cpu_and(cpu, dest, cpu_online_mask) { 1663 - params->int_target.vp_mask |= 1664 - (1ULL << hv_cpu_number_to_vp_number(cpu)); 1665 - } 1666 - } 1667 - 1668 - res = hv_do_hypercall(HVCALL_RETARGET_INTERRUPT | (var_size << 17), 1669 - params, NULL); 1670 - 1671 - exit_unlock: 1672 - spin_unlock_irqrestore(&hbus->retarget_msi_interrupt_lock, flags); 1673 - 1674 - /* 1675 - * During hibernation, when a CPU is offlined, the kernel tries 1676 - * to move the interrupt to the remaining CPUs that haven't 1677 - * been offlined yet. In this case, the below hv_do_hypercall() 1678 - * always fails since the vmbus channel has been closed: 1679 - * refer to cpu_disable_common() -> fixup_irqs() -> 1680 - * irq_migrate_all_off_this_cpu() -> migrate_one_irq(). 1681 - * 1682 - * Suppress the error message for hibernation because the failure 1683 - * during hibernation does not matter (at this time all the devices 1684 - * have been frozen). Note: the correct affinity info is still updated 1685 - * into the irqdata data structure in migrate_one_irq() -> 1686 - * irq_do_set_affinity() -> hv_set_affinity(), so later when the VM 1687 - * resumes, hv_pci_restore_msi_state() is able to correctly restore 1688 - * the interrupt with the correct affinity. 
1689 - */ 1690 - if (!hv_result_success(res) && hbus->state != hv_pcibus_removing) 1691 - dev_err(&hbus->hdev->device, 1692 - "%s() failed: %#llx", __func__, res); 1461 + hv_arch_irq_unmask(data); 1693 1462 1694 1463 if (data->parent_data->chip->irq_unmask) 1695 1464 irq_chip_unmask_parent(data);
+1 -1
drivers/pci/controller/pci-loongson.c
··· 35 35 /* Fixup wrong class code in PCIe bridges */ 36 36 static void bridge_class_quirk(struct pci_dev *dev) 37 37 { 38 - dev->class = PCI_CLASS_BRIDGE_PCI << 8; 38 + dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL; 39 39 } 40 40 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, 41 41 DEV_PCIE_PORT_0, bridge_class_quirk);
+334 -82
drivers/pci/controller/pci-mvebu.c
··· 32 32 #define PCIE_DEV_REV_OFF 0x0008 33 33 #define PCIE_BAR_LO_OFF(n) (0x0010 + ((n) << 3)) 34 34 #define PCIE_BAR_HI_OFF(n) (0x0014 + ((n) << 3)) 35 + #define PCIE_SSDEV_ID_OFF 0x002c 35 36 #define PCIE_CAP_PCIEXP 0x0060 36 - #define PCIE_HEADER_LOG_4_OFF 0x0128 37 + #define PCIE_CAP_PCIERR_OFF 0x0100 37 38 #define PCIE_BAR_CTRL_OFF(n) (0x1804 + (((n) - 1) * 4)) 38 39 #define PCIE_WIN04_CTRL_OFF(n) (0x1820 + ((n) << 4)) 39 40 #define PCIE_WIN04_BASE_OFF(n) (0x1824 + ((n) << 4)) ··· 54 53 PCIE_CONF_ADDR_EN) 55 54 #define PCIE_CONF_DATA_OFF 0x18fc 56 55 #define PCIE_INT_CAUSE_OFF 0x1900 56 + #define PCIE_INT_UNMASK_OFF 0x1910 57 + #define PCIE_INT_INTX(i) BIT(24+i) 57 58 #define PCIE_INT_PM_PME BIT(28) 58 - #define PCIE_MASK_OFF 0x1910 59 - #define PCIE_MASK_ENABLE_INTS 0x0f000000 59 + #define PCIE_INT_ALL_MASK GENMASK(31, 0) 60 60 #define PCIE_CTRL_OFF 0x1a00 61 61 #define PCIE_CTRL_X1_MODE 0x0001 62 62 #define PCIE_CTRL_RC_MODE BIT(1) ··· 95 93 void __iomem *base; 96 94 u32 port; 97 95 u32 lane; 96 + bool is_x4; 98 97 int devfn; 99 98 unsigned int mem_target; 100 99 unsigned int mem_attr; ··· 111 108 struct mvebu_pcie_window iowin; 112 109 u32 saved_pcie_stat; 113 110 struct resource regs; 111 + struct irq_domain *intx_irq_domain; 112 + raw_spinlock_t irq_lock; 113 + int intx_irq; 114 114 }; 115 115 116 116 static inline void mvebu_writel(struct mvebu_pcie_port *port, u32 val, u32 reg) ··· 239 233 240 234 static void mvebu_pcie_setup_hw(struct mvebu_pcie_port *port) 241 235 { 242 - u32 ctrl, cmd, dev_rev, mask; 236 + u32 ctrl, lnkcap, cmd, dev_rev, unmask; 243 237 244 238 /* Setup PCIe controller to Root Complex mode. */ 245 239 ctrl = mvebu_readl(port, PCIE_CTRL_OFF); 246 240 ctrl |= PCIE_CTRL_RC_MODE; 247 241 mvebu_writel(port, ctrl, PCIE_CTRL_OFF); 242 + 243 + /* 244 + * Set Maximum Link Width to X1 or X4 in Root Port's PCIe Link 245 + * Capability register. 
This register is defined by PCIe specification 246 + * as read-only but this mvebu controller has it as read-write and must 247 + * be set to number of SerDes PCIe lanes (1 or 4). If this register is 248 + * not set correctly then link with endpoint card is not established. 249 + */ 250 + lnkcap = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP); 251 + lnkcap &= ~PCI_EXP_LNKCAP_MLW; 252 + lnkcap |= (port->is_x4 ? 4 : 1) << 4; 253 + mvebu_writel(port, lnkcap, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP); 248 254 249 255 /* Disable Root Bridge I/O space, memory space and bus mastering. */ 250 256 cmd = mvebu_readl(port, PCIE_CMD_OFF); ··· 286 268 */ 287 269 dev_rev = mvebu_readl(port, PCIE_DEV_REV_OFF); 288 270 dev_rev &= ~0xffffff00; 289 - dev_rev |= (PCI_CLASS_BRIDGE_PCI << 8) << 8; 271 + dev_rev |= PCI_CLASS_BRIDGE_PCI_NORMAL << 8; 290 272 mvebu_writel(port, dev_rev, PCIE_DEV_REV_OFF); 291 273 292 274 /* Point PCIe unit MBUS decode windows to DRAM space. */ 293 275 mvebu_pcie_setup_wins(port); 294 276 295 - /* Enable interrupt lines A-D. */ 296 - mask = mvebu_readl(port, PCIE_MASK_OFF); 297 - mask |= PCIE_MASK_ENABLE_INTS; 298 - mvebu_writel(port, mask, PCIE_MASK_OFF); 277 + /* Mask all interrupt sources. */ 278 + mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_UNMASK_OFF); 279 + 280 + /* Clear all interrupt causes. */ 281 + mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF); 282 + 283 + /* Check if "intx" interrupt was specified in DT. */ 284 + if (port->intx_irq > 0) 285 + return; 286 + 287 + /* 288 + * Fallback code when "intx" interrupt was not specified in DT: 289 + * Unmask all legacy INTx interrupts as driver does not provide a way 290 + * for masking and unmasking of individual legacy INTx interrupts. 291 + * Legacy INTx are reported via one shared GIC source and therefore 292 + * kernel cannot distinguish which individual legacy INTx was triggered. 293 + * These interrupts are shared, so it should not cause any issue. 
Just 294 + * performance penalty as every PCIe interrupt handler needs to be 295 + * called when some interrupt is triggered. 296 + */ 297 + unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF); 298 + unmask |= PCIE_INT_INTX(0) | PCIE_INT_INTX(1) | 299 + PCIE_INT_INTX(2) | PCIE_INT_INTX(3); 300 + mvebu_writel(port, unmask, PCIE_INT_UNMASK_OFF); 299 301 } 300 302 301 - static int mvebu_pcie_hw_rd_conf(struct mvebu_pcie_port *port, 302 - struct pci_bus *bus, 303 - u32 devfn, int where, int size, u32 *val) 303 + static struct mvebu_pcie_port *mvebu_pcie_find_port(struct mvebu_pcie *pcie, 304 + struct pci_bus *bus, 305 + int devfn); 306 + 307 + static int mvebu_pcie_child_rd_conf(struct pci_bus *bus, u32 devfn, int where, 308 + int size, u32 *val) 304 309 { 305 - void __iomem *conf_data = port->base + PCIE_CONF_DATA_OFF; 310 + struct mvebu_pcie *pcie = bus->sysdata; 311 + struct mvebu_pcie_port *port; 312 + void __iomem *conf_data; 313 + 314 + port = mvebu_pcie_find_port(pcie, bus, devfn); 315 + if (!port) 316 + return PCIBIOS_DEVICE_NOT_FOUND; 317 + 318 + if (!mvebu_pcie_link_up(port)) 319 + return PCIBIOS_DEVICE_NOT_FOUND; 320 + 321 + conf_data = port->base + PCIE_CONF_DATA_OFF; 306 322 307 323 mvebu_writel(port, PCIE_CONF_ADDR(bus->number, devfn, where), 308 324 PCIE_CONF_ADDR_OFF); ··· 352 300 *val = readl_relaxed(conf_data); 353 301 break; 354 302 default: 355 - *val = 0xffffffff; 356 303 return PCIBIOS_BAD_REGISTER_NUMBER; 357 304 } 358 305 359 306 return PCIBIOS_SUCCESSFUL; 360 307 } 361 308 362 - static int mvebu_pcie_hw_wr_conf(struct mvebu_pcie_port *port, 363 - struct pci_bus *bus, 364 - u32 devfn, int where, int size, u32 val) 309 + static int mvebu_pcie_child_wr_conf(struct pci_bus *bus, u32 devfn, 310 + int where, int size, u32 val) 365 311 { 366 - void __iomem *conf_data = port->base + PCIE_CONF_DATA_OFF; 312 + struct mvebu_pcie *pcie = bus->sysdata; 313 + struct mvebu_pcie_port *port; 314 + void __iomem *conf_data; 315 + 316 + port = 
mvebu_pcie_find_port(pcie, bus, devfn); 317 + if (!port) 318 + return PCIBIOS_DEVICE_NOT_FOUND; 319 + 320 + if (!mvebu_pcie_link_up(port)) 321 + return PCIBIOS_DEVICE_NOT_FOUND; 322 + 323 + conf_data = port->base + PCIE_CONF_DATA_OFF; 367 324 368 325 mvebu_writel(port, PCIE_CONF_ADDR(bus->number, devfn, where), 369 326 PCIE_CONF_ADDR_OFF); ··· 393 332 394 333 return PCIBIOS_SUCCESSFUL; 395 334 } 335 + 336 + static struct pci_ops mvebu_pcie_child_ops = { 337 + .read = mvebu_pcie_child_rd_conf, 338 + .write = mvebu_pcie_child_wr_conf, 339 + }; 396 340 397 341 /* 398 342 * Remove windows, starting from the largest ones to the smallest ··· 503 437 conf->iolimitupper < conf->iobaseupper) 504 438 return mvebu_pcie_set_window(port, port->io_target, port->io_attr, 505 439 &desired, &port->iowin); 506 - 507 - if (!mvebu_has_ioport(port)) { 508 - dev_WARN(&port->pcie->pdev->dev, 509 - "Attempt to set IO when IO is disabled\n"); 510 - return -EOPNOTSUPP; 511 - } 512 440 513 441 /* 514 442 * We read the PCI-to-PCI bridge emulated registers, and ··· 612 552 613 553 case PCI_EXP_LNKCAP: 614 554 /* 615 - * PCIe requires the clock power management capability to be 616 - * hard-wired to zero for downstream ports 555 + * PCIe requires that the Clock Power Management capability bit 556 + * is hard-wired to zero for downstream ports but HW returns 1. 557 + * Additionally enable Data Link Layer Link Active Reporting 558 + * Capable bit as DL_Active indication is provided too. 617 559 */ 618 - *value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP) & 619 - ~PCI_EXP_LNKCAP_CLKPM; 560 + *value = (mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP) & 561 + ~PCI_EXP_LNKCAP_CLKPM) | PCI_EXP_LNKCAP_DLLLARC; 620 562 break; 621 563 622 564 case PCI_EXP_LNKCTL: 623 - *value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL); 565 + /* DL_Active indication is provided via PCIE_STAT_OFF */ 566 + *value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL) | 567 + (mvebu_pcie_link_up(port) ? 
568 + (PCI_EXP_LNKSTA_DLLLA << 16) : 0); 624 569 break; 625 570 626 571 case PCI_EXP_SLTCTL: ··· 655 590 return PCI_BRIDGE_EMUL_HANDLED; 656 591 } 657 592 593 + static pci_bridge_emul_read_status_t 594 + mvebu_pci_bridge_emul_ext_conf_read(struct pci_bridge_emul *bridge, 595 + int reg, u32 *value) 596 + { 597 + struct mvebu_pcie_port *port = bridge->data; 598 + 599 + switch (reg) { 600 + case 0: 601 + case PCI_ERR_UNCOR_STATUS: 602 + case PCI_ERR_UNCOR_MASK: 603 + case PCI_ERR_UNCOR_SEVER: 604 + case PCI_ERR_COR_STATUS: 605 + case PCI_ERR_COR_MASK: 606 + case PCI_ERR_CAP: 607 + case PCI_ERR_HEADER_LOG+0: 608 + case PCI_ERR_HEADER_LOG+4: 609 + case PCI_ERR_HEADER_LOG+8: 610 + case PCI_ERR_HEADER_LOG+12: 611 + case PCI_ERR_ROOT_COMMAND: 612 + case PCI_ERR_ROOT_STATUS: 613 + case PCI_ERR_ROOT_ERR_SRC: 614 + *value = mvebu_readl(port, PCIE_CAP_PCIERR_OFF + reg); 615 + break; 616 + 617 + default: 618 + return PCI_BRIDGE_EMUL_NOT_HANDLED; 619 + } 620 + 621 + return PCI_BRIDGE_EMUL_HANDLED; 622 + } 623 + 658 624 static void 659 625 mvebu_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge, 660 626 int reg, u32 old, u32 new, u32 mask) ··· 695 599 696 600 switch (reg) { 697 601 case PCI_COMMAND: 698 - if (!mvebu_has_ioport(port)) { 699 - conf->command = cpu_to_le16( 700 - le16_to_cpu(conf->command) & ~PCI_COMMAND_IO); 701 - new &= ~PCI_COMMAND_IO; 702 - } 703 - 704 602 mvebu_writel(port, new, PCIE_CMD_OFF); 705 603 break; 706 604 707 605 case PCI_IO_BASE: 708 - if ((mask & 0xffff) && mvebu_pcie_handle_iobase_change(port)) { 606 + if ((mask & 0xffff) && mvebu_has_ioport(port) && 607 + mvebu_pcie_handle_iobase_change(port)) { 709 608 /* On error disable IO range */ 710 609 conf->iobase &= ~0xf0; 711 610 conf->iolimit &= ~0xf0; 611 + conf->iobase |= 0xf0; 712 612 conf->iobaseupper = cpu_to_le16(0x0000); 713 613 conf->iolimitupper = cpu_to_le16(0x0000); 714 - if (mvebu_has_ioport(port)) 715 - conf->iobase |= 0xf0; 716 614 } 717 615 break; 718 616 ··· 720 630 break; 
721 631 722 632 case PCI_IO_BASE_UPPER16: 723 - if (mvebu_pcie_handle_iobase_change(port)) { 633 + if (mvebu_has_ioport(port) && 634 + mvebu_pcie_handle_iobase_change(port)) { 724 635 /* On error disable IO range */ 725 636 conf->iobase &= ~0xf0; 726 637 conf->iolimit &= ~0xf0; 638 + conf->iobase |= 0xf0; 727 639 conf->iobaseupper = cpu_to_le16(0x0000); 728 640 conf->iolimitupper = cpu_to_le16(0x0000); 729 - if (mvebu_has_ioport(port)) 730 - conf->iobase |= 0xf0; 731 641 } 732 642 break; 733 643 ··· 765 675 766 676 case PCI_EXP_LNKCTL: 767 677 /* 768 - * If we don't support CLKREQ, we must ensure that the 769 - * CLKREQ enable bit always reads zero. Since we haven't 770 - * had this capability, and it's dependent on board wiring, 771 - * disable it for the time being. 678 + * PCIe requires that the Enable Clock Power Management bit 679 + * is hard-wired to zero for downstream ports but HW allows 680 + * to change it. 772 681 */ 773 682 new &= ~PCI_EXP_LNKCTL_CLKREQ_EN; 774 683 ··· 798 709 } 799 710 } 800 711 801 - static struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = { 712 + static void 713 + mvebu_pci_bridge_emul_ext_conf_write(struct pci_bridge_emul *bridge, 714 + int reg, u32 old, u32 new, u32 mask) 715 + { 716 + struct mvebu_pcie_port *port = bridge->data; 717 + 718 + switch (reg) { 719 + /* These are W1C registers, so clear other bits */ 720 + case PCI_ERR_UNCOR_STATUS: 721 + case PCI_ERR_COR_STATUS: 722 + case PCI_ERR_ROOT_STATUS: 723 + new &= mask; 724 + fallthrough; 725 + 726 + case PCI_ERR_UNCOR_MASK: 727 + case PCI_ERR_UNCOR_SEVER: 728 + case PCI_ERR_COR_MASK: 729 + case PCI_ERR_CAP: 730 + case PCI_ERR_HEADER_LOG+0: 731 + case PCI_ERR_HEADER_LOG+4: 732 + case PCI_ERR_HEADER_LOG+8: 733 + case PCI_ERR_HEADER_LOG+12: 734 + case PCI_ERR_ROOT_COMMAND: 735 + case PCI_ERR_ROOT_ERR_SRC: 736 + mvebu_writel(port, new, PCIE_CAP_PCIERR_OFF + reg); 737 + break; 738 + 739 + default: 740 + break; 741 + } 742 + } 743 + 744 + static const struct 
pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = { 802 745 .read_base = mvebu_pci_bridge_emul_base_conf_read, 803 746 .write_base = mvebu_pci_bridge_emul_base_conf_write, 804 747 .read_pcie = mvebu_pci_bridge_emul_pcie_conf_read, 805 748 .write_pcie = mvebu_pci_bridge_emul_pcie_conf_write, 749 + .read_ext = mvebu_pci_bridge_emul_ext_conf_read, 750 + .write_ext = mvebu_pci_bridge_emul_ext_conf_write, 806 751 }; 807 752 808 753 /* ··· 845 722 */ 846 723 static int mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port) 847 724 { 725 + unsigned int bridge_flags = PCI_BRIDGE_EMUL_NO_PREFMEM_FORWARD; 848 726 struct pci_bridge_emul *bridge = &port->bridge; 727 + u32 dev_id = mvebu_readl(port, PCIE_DEV_ID_OFF); 728 + u32 dev_rev = mvebu_readl(port, PCIE_DEV_REV_OFF); 729 + u32 ssdev_id = mvebu_readl(port, PCIE_SSDEV_ID_OFF); 849 730 u32 pcie_cap = mvebu_readl(port, PCIE_CAP_PCIEXP); 850 731 u8 pcie_cap_ver = ((pcie_cap >> 16) & PCI_EXP_FLAGS_VERS); 851 732 852 - bridge->conf.vendor = PCI_VENDOR_ID_MARVELL; 853 - bridge->conf.device = mvebu_readl(port, PCIE_DEV_ID_OFF) >> 16; 854 - bridge->conf.class_revision = 855 - mvebu_readl(port, PCIE_DEV_REV_OFF) & 0xff; 733 + bridge->conf.vendor = cpu_to_le16(dev_id & 0xffff); 734 + bridge->conf.device = cpu_to_le16(dev_id >> 16); 735 + bridge->conf.class_revision = cpu_to_le32(dev_rev & 0xff); 856 736 857 737 if (mvebu_has_ioport(port)) { 858 738 /* We support 32 bits I/O addressing */ 859 739 bridge->conf.iobase = PCI_IO_RANGE_TYPE_32; 860 740 bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32; 741 + } else { 742 + bridge_flags |= PCI_BRIDGE_EMUL_NO_IO_FORWARD; 861 743 } 862 744 863 745 /* ··· 871 743 */ 872 744 bridge->pcie_conf.cap = cpu_to_le16(pcie_cap_ver); 873 745 746 + bridge->subsystem_vendor_id = ssdev_id & 0xffff; 747 + bridge->subsystem_id = ssdev_id >> 16; 874 748 bridge->has_pcie = true; 875 749 bridge->data = port; 876 750 bridge->ops = &mvebu_pci_bridge_emul_ops; 877 751 878 - return pci_bridge_emul_init(bridge, 
PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR); 752 + return pci_bridge_emul_init(bridge, bridge_flags); 879 753 } 880 754 881 755 static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys) ··· 914 784 { 915 785 struct mvebu_pcie *pcie = bus->sysdata; 916 786 struct mvebu_pcie_port *port; 917 - int ret; 918 787 919 788 port = mvebu_pcie_find_port(pcie, bus, devfn); 920 789 if (!port) 921 790 return PCIBIOS_DEVICE_NOT_FOUND; 922 791 923 - /* Access the emulated PCI-to-PCI bridge */ 924 - if (bus->number == 0) 925 - return pci_bridge_emul_conf_write(&port->bridge, where, 926 - size, val); 927 - 928 - if (!mvebu_pcie_link_up(port)) 929 - return PCIBIOS_DEVICE_NOT_FOUND; 930 - 931 - /* Access the real PCIe interface */ 932 - ret = mvebu_pcie_hw_wr_conf(port, bus, devfn, 933 - where, size, val); 934 - 935 - return ret; 792 + return pci_bridge_emul_conf_write(&port->bridge, where, size, val); 936 793 } 937 794 938 795 /* PCI configuration space read function */ ··· 928 811 { 929 812 struct mvebu_pcie *pcie = bus->sysdata; 930 813 struct mvebu_pcie_port *port; 931 - int ret; 932 814 933 815 port = mvebu_pcie_find_port(pcie, bus, devfn); 934 816 if (!port) 935 817 return PCIBIOS_DEVICE_NOT_FOUND; 936 818 937 - /* Access the emulated PCI-to-PCI bridge */ 938 - if (bus->number == 0) 939 - return pci_bridge_emul_conf_read(&port->bridge, where, 940 - size, val); 941 - 942 - if (!mvebu_pcie_link_up(port)) 943 - return PCIBIOS_DEVICE_NOT_FOUND; 944 - 945 - /* Access the real PCIe interface */ 946 - ret = mvebu_pcie_hw_rd_conf(port, bus, devfn, 947 - where, size, val); 948 - 949 - return ret; 819 + return pci_bridge_emul_conf_read(&port->bridge, where, size, val); 950 820 } 951 821 952 822 static struct pci_ops mvebu_pcie_ops = { 953 823 .read = mvebu_pcie_rd_conf, 954 824 .write = mvebu_pcie_wr_conf, 955 825 }; 826 + 827 + static void mvebu_pcie_intx_irq_mask(struct irq_data *d) 828 + { 829 + struct mvebu_pcie_port *port = d->domain->host_data; 830 + irq_hw_number_t hwirq = 
irqd_to_hwirq(d); 831 + unsigned long flags; 832 + u32 unmask; 833 + 834 + raw_spin_lock_irqsave(&port->irq_lock, flags); 835 + unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF); 836 + unmask &= ~PCIE_INT_INTX(hwirq); 837 + mvebu_writel(port, unmask, PCIE_INT_UNMASK_OFF); 838 + raw_spin_unlock_irqrestore(&port->irq_lock, flags); 839 + } 840 + 841 + static void mvebu_pcie_intx_irq_unmask(struct irq_data *d) 842 + { 843 + struct mvebu_pcie_port *port = d->domain->host_data; 844 + irq_hw_number_t hwirq = irqd_to_hwirq(d); 845 + unsigned long flags; 846 + u32 unmask; 847 + 848 + raw_spin_lock_irqsave(&port->irq_lock, flags); 849 + unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF); 850 + unmask |= PCIE_INT_INTX(hwirq); 851 + mvebu_writel(port, unmask, PCIE_INT_UNMASK_OFF); 852 + raw_spin_unlock_irqrestore(&port->irq_lock, flags); 853 + } 854 + 855 + static struct irq_chip intx_irq_chip = { 856 + .name = "mvebu-INTx", 857 + .irq_mask = mvebu_pcie_intx_irq_mask, 858 + .irq_unmask = mvebu_pcie_intx_irq_unmask, 859 + }; 860 + 861 + static int mvebu_pcie_intx_irq_map(struct irq_domain *h, 862 + unsigned int virq, irq_hw_number_t hwirq) 863 + { 864 + struct mvebu_pcie_port *port = h->host_data; 865 + 866 + irq_set_status_flags(virq, IRQ_LEVEL); 867 + irq_set_chip_and_handler(virq, &intx_irq_chip, handle_level_irq); 868 + irq_set_chip_data(virq, port); 869 + 870 + return 0; 871 + } 872 + 873 + static const struct irq_domain_ops mvebu_pcie_intx_irq_domain_ops = { 874 + .map = mvebu_pcie_intx_irq_map, 875 + .xlate = irq_domain_xlate_onecell, 876 + }; 877 + 878 + static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port) 879 + { 880 + struct device *dev = &port->pcie->pdev->dev; 881 + struct device_node *pcie_intc_node; 882 + 883 + raw_spin_lock_init(&port->irq_lock); 884 + 885 + pcie_intc_node = of_get_next_child(port->dn, NULL); 886 + if (!pcie_intc_node) { 887 + dev_err(dev, "No PCIe Intc node found for %s\n", port->name); 888 + return -ENODEV; 889 + } 890 + 891 + 
port->intx_irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX, 892 + &mvebu_pcie_intx_irq_domain_ops, 893 + port); 894 + of_node_put(pcie_intc_node); 895 + if (!port->intx_irq_domain) { 896 + dev_err(dev, "Failed to get INTx IRQ domain for %s\n", port->name); 897 + return -ENOMEM; 898 + } 899 + 900 + return 0; 901 + } 902 + 903 + static void mvebu_pcie_irq_handler(struct irq_desc *desc) 904 + { 905 + struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc); 906 + struct irq_chip *chip = irq_desc_get_chip(desc); 907 + struct device *dev = &port->pcie->pdev->dev; 908 + u32 cause, unmask, status; 909 + int i; 910 + 911 + chained_irq_enter(chip, desc); 912 + 913 + cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF); 914 + unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF); 915 + status = cause & unmask; 916 + 917 + /* Process legacy INTx interrupts */ 918 + for (i = 0; i < PCI_NUM_INTX; i++) { 919 + if (!(status & PCIE_INT_INTX(i))) 920 + continue; 921 + 922 + if (generic_handle_domain_irq(port->intx_irq_domain, i) == -EINVAL) 923 + dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A'); 924 + } 925 + 926 + chained_irq_exit(chip, desc); 927 + } 956 928 957 929 static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 958 930 { ··· 1192 986 struct device *dev = &pcie->pdev->dev; 1193 987 enum of_gpio_flags flags; 1194 988 int reset_gpio, ret; 989 + u32 num_lanes; 1195 990 1196 991 port->pcie = pcie; 1197 992 ··· 1204 997 1205 998 if (of_property_read_u32(child, "marvell,pcie-lane", &port->lane)) 1206 999 port->lane = 0; 1000 + 1001 + if (!of_property_read_u32(child, "num-lanes", &num_lanes) && num_lanes == 4) 1002 + port->is_x4 = true; 1207 1003 1208 1004 port->name = devm_kasprintf(dev, GFP_KERNEL, "pcie%d.%d", port->port, 1209 1005 port->lane); ··· 1238 1028 } else { 1239 1029 port->io_target = -1; 1240 1030 port->io_attr = -1; 1031 + } 1032 + 1033 + /* 1034 + * Old DT bindings do not contain "intx" interrupt 1035 + * so do not fail 
probing driver when interrupt does not exist. 1036 + */ 1037 + port->intx_irq = of_irq_get_byname(child, "intx"); 1038 + if (port->intx_irq == -EPROBE_DEFER) { 1039 + ret = port->intx_irq; 1040 + goto err; 1041 + } 1042 + if (port->intx_irq <= 0) { 1043 + dev_warn(dev, "%s: legacy INTx interrupts cannot be masked individually, " 1044 + "%pOF does not contain intx interrupt\n", 1045 + port->name, child); 1241 1046 } 1242 1047 1243 1048 reset_gpio = of_get_named_gpio_flags(child, "reset-gpios", 0, &flags); ··· 1451 1226 1452 1227 for (i = 0; i < pcie->nports; i++) { 1453 1228 struct mvebu_pcie_port *port = &pcie->ports[i]; 1229 + int irq = port->intx_irq; 1454 1230 1455 1231 child = port->dn; 1456 1232 if (!child) ··· 1477 1251 port->base = NULL; 1478 1252 mvebu_pcie_powerdown(port); 1479 1253 continue; 1254 + } 1255 + 1256 + if (irq > 0) { 1257 + ret = mvebu_pcie_init_irq_domain(port); 1258 + if (ret) { 1259 + dev_err(dev, "%s: cannot init irq domain\n", 1260 + port->name); 1261 + pci_bridge_emul_cleanup(&port->bridge); 1262 + devm_iounmap(dev, port->base); 1263 + port->base = NULL; 1264 + mvebu_pcie_powerdown(port); 1265 + continue; 1266 + } 1267 + irq_set_chained_handler_and_data(irq, 1268 + mvebu_pcie_irq_handler, 1269 + port); 1480 1270 } 1481 1271 1482 1272 /* ··· 1575 1333 mvebu_pcie_set_local_bus_nr(port, 0); 1576 1334 } 1577 1335 1578 - pcie->nports = i; 1579 - 1580 1336 bridge->sysdata = pcie; 1581 1337 bridge->ops = &mvebu_pcie_ops; 1338 + bridge->child_ops = &mvebu_pcie_child_ops; 1582 1339 bridge->align_resource = mvebu_pcie_align_resource; 1583 1340 bridge->map_irq = mvebu_pcie_map_irq; 1584 1341 ··· 1599 1358 1600 1359 for (i = 0; i < pcie->nports; i++) { 1601 1360 struct mvebu_pcie_port *port = &pcie->ports[i]; 1361 + int irq = port->intx_irq; 1602 1362 1603 1363 if (!port->base) 1604 1364 continue; ··· 1610 1368 mvebu_writel(port, cmd, PCIE_CMD_OFF); 1611 1369 1612 1370 /* Mask all interrupt sources. 
*/ 1613 - mvebu_writel(port, 0, PCIE_MASK_OFF); 1371 + mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_UNMASK_OFF); 1372 + 1373 + /* Clear all interrupt causes. */ 1374 + mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF); 1375 + 1376 + if (irq > 0) 1377 + irq_set_chained_handler_and_data(irq, NULL, NULL); 1378 + 1379 + /* Remove IRQ domains. */ 1380 + if (port->intx_irq_domain) 1381 + irq_domain_remove(port->intx_irq_domain); 1614 1382 1615 1383 /* Free config space for emulated root bridge. */ 1616 1384 pci_bridge_emul_cleanup(&port->bridge);
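The new `mvebu_pci_bridge_emul_ext_conf_write()` above treats the AER status registers (`PCI_ERR_UNCOR_STATUS` and friends) as write-1-to-clear: the driver masks the value down to the bytes actually covered by the access (`new &= mask`) before forwarding it, so only bits explicitly written as 1 can clear. A minimal userspace model of W1C semantics, with the hardware's clearing step folded into one pure function (names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Write-1-to-clear: bits written as 1 are cleared, all other bits are
 * left unchanged. 'mask' limits the effect to the bytes covered by the
 * config access, mirroring the 'new &= mask' step in the driver. */
static uint32_t w1c_write(uint32_t status, uint32_t new, uint32_t mask)
{
	return status & ~(new & mask);
}
```

Writing back the value just read (`status = w1c_write(status, status, ~0u)`) clears every latched bit, which is why the driver must mask sub-word accesses rather than echo the whole register.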
+1 -1
drivers/pci/controller/pci-tegra.c
··· 726 726 /* Tegra PCIE root complex wrongly reports device class */ 727 727 static void tegra_pcie_fixup_class(struct pci_dev *dev) 728 728 { 729 - dev->class = PCI_CLASS_BRIDGE_PCI << 8; 729 + dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL; 730 730 } 731 731 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0bf0, tegra_pcie_fixup_class); 732 732 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0bf1, tegra_pcie_fixup_class);
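The Tegra fixup (and the matching hunks in the other controller drivers below) replaces the open-coded `PCI_CLASS_BRIDGE_PCI << 8` with the new `PCI_CLASS_BRIDGE_PCI_NORMAL` constant. The two are numerically identical: `dev->class` holds the full 24-bit class code, so appending the 0x00 programming interface to the 16-bit base/sub-class pair is a left shift by 8. A quick check, using the values as defined in `include/linux/pci_ids.h`:

```c
#include <assert.h>
#include <stdint.h>

/* Values from include/linux/pci_ids.h: the 16-bit base/sub-class pair
 * and the 24-bit class code with the 0x00 programming interface. */
#define PCI_CLASS_BRIDGE_PCI		0x0604
#define PCI_CLASS_BRIDGE_PCI_NORMAL	0x060400

/* dev->class stores base class, sub-class, and programming interface,
 * so appending the prog-if byte is a left shift by 8. */
static uint32_t class_with_prog_if(uint16_t base_sub, uint8_t prog_if)
{
	return ((uint32_t)base_sub << 8) | prog_if;
}
```

Using the named 24-bit constant avoids repeating the shift at every call site and makes the prog-if byte explicit.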
+23 -13
drivers/pci/controller/pci-xgene.c
··· 49 49 #define EN_REG 0x00000001 50 50 #define OB_LO_IO 0x00000002 51 51 #define XGENE_PCIE_DEVICEID 0xE004 52 - #define SZ_1T (SZ_1G*1024ULL) 53 52 #define PIPE_PHY_RATE_RD(src) ((0xc000 & (u32)(src)) >> 0xe) 54 53 55 54 #define XGENE_V1_PCI_EXP_CAP 0x40 ··· 464 465 return 1; 465 466 } 466 467 467 - if ((size > SZ_1K) && (size < SZ_4G) && !(*ib_reg_mask & (1 << 0))) { 468 + if ((size > SZ_1K) && (size < SZ_1T) && !(*ib_reg_mask & (1 << 0))) { 468 469 *ib_reg_mask |= (1 << 0); 469 470 return 0; 470 471 } ··· 478 479 } 479 480 480 481 static void xgene_pcie_setup_ib_reg(struct xgene_pcie *port, 481 - struct resource_entry *entry, 482 - u8 *ib_reg_mask) 482 + struct of_pci_range *range, u8 *ib_reg_mask) 483 483 { 484 484 void __iomem *cfg_base = port->cfg_base; 485 485 struct device *dev = port->dev; 486 486 void __iomem *bar_addr; 487 487 u32 pim_reg; 488 - u64 cpu_addr = entry->res->start; 489 - u64 pci_addr = cpu_addr - entry->offset; 490 - u64 size = resource_size(entry->res); 488 + u64 cpu_addr = range->cpu_addr; 489 + u64 pci_addr = range->pci_addr; 490 + u64 size = range->size; 491 491 u64 mask = ~(size - 1) | EN_REG; 492 492 u32 flags = PCI_BASE_ADDRESS_MEM_TYPE_64; 493 493 u32 bar_low; 494 494 int region; 495 495 496 - region = xgene_pcie_select_ib_reg(ib_reg_mask, size); 496 + region = xgene_pcie_select_ib_reg(ib_reg_mask, range->size); 497 497 if (region < 0) { 498 498 dev_warn(dev, "invalid pcie dma-range config\n"); 499 499 return; 500 500 } 501 501 502 - if (entry->res->flags & IORESOURCE_PREFETCH) 502 + if (range->flags & IORESOURCE_PREFETCH) 503 503 flags |= PCI_BASE_ADDRESS_MEM_PREFETCH; 504 504 505 505 bar_low = pcie_bar_low_val((u32)cpu_addr, flags); ··· 529 531 530 532 static int xgene_pcie_parse_map_dma_ranges(struct xgene_pcie *port) 531 533 { 532 - struct pci_host_bridge *bridge = pci_host_bridge_from_priv(port); 533 - struct resource_entry *entry; 534 + struct device_node *np = port->node; 535 + struct of_pci_range range; 536 + struct 
of_pci_range_parser parser; 537 + struct device *dev = port->dev; 534 538 u8 ib_reg_mask = 0; 535 539 536 - resource_list_for_each_entry(entry, &bridge->dma_ranges) 537 - xgene_pcie_setup_ib_reg(port, entry, &ib_reg_mask); 540 + if (of_pci_dma_range_parser_init(&parser, np)) { 541 + dev_err(dev, "missing dma-ranges property\n"); 542 + return -EINVAL; 543 + } 538 544 545 + /* Get the dma-ranges from DT */ 546 + for_each_of_pci_range(&parser, &range) { 547 + u64 end = range.cpu_addr + range.size - 1; 548 + 549 + dev_dbg(dev, "0x%08x 0x%016llx..0x%016llx -> 0x%016llx\n", 550 + range.flags, range.cpu_addr, end, range.pci_addr); 551 + xgene_pcie_setup_ib_reg(port, &range, &ib_reg_mask); 552 + } 539 553 return 0; 540 554 } 541 555
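The X-Gene inbound-window setup above derives its address-match mask as `~(size - 1) | EN_REG`, which assumes a naturally aligned, power-of-two window size: all address bits above the size are compared, and bit 0 enables the window. A small model of that computation (assuming the same power-of-two precondition; `EN_REG` copied from the driver):

```c
#include <assert.h>
#include <stdint.h>

#define EN_REG 0x00000001ULL

/* Mask for a naturally aligned power-of-two inbound window: the upper
 * address bits select the region, bit 0 enables it. 'size' is assumed
 * to be a power of two, as the dma-ranges parser provides. */
static uint64_t ib_window_mask(uint64_t size)
{
	return ~(size - 1) | EN_REG;
}
```

For a 1 GiB window this yields `0xffffffffc0000001`: the top 34 bits match, the low 30 address bits are ignored, and the enable bit is set.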
+1 -1
drivers/pci/controller/pcie-iproc-bcma.c
··· 18 18 /* NS: CLASS field is R/O, and set to wrong 0x200 value */ 19 19 static void bcma_pcie2_fixup_class(struct pci_dev *dev) 20 20 { 21 - dev->class = PCI_CLASS_BRIDGE_PCI << 8; 21 + dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL; 22 22 } 23 23 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x8011, bcma_pcie2_fixup_class); 24 24 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x8012, bcma_pcie2_fixup_class);
+5 -6
drivers/pci/controller/pcie-iproc.c
··· 789 789 return -EFAULT; 790 790 } 791 791 792 - /* force class to PCI_CLASS_BRIDGE_PCI (0x0604) */ 792 + /* force class to PCI_CLASS_BRIDGE_PCI_NORMAL (0x060400) */ 793 793 #define PCI_BRIDGE_CTRL_REG_OFFSET 0x43c 794 - #define PCI_CLASS_BRIDGE_MASK 0xffff00 795 - #define PCI_CLASS_BRIDGE_SHIFT 8 794 + #define PCI_BRIDGE_CTRL_REG_CLASS_MASK 0xffffff 796 795 iproc_pci_raw_config_read32(pcie, 0, PCI_BRIDGE_CTRL_REG_OFFSET, 797 796 4, &class); 798 - class &= ~PCI_CLASS_BRIDGE_MASK; 799 - class |= (PCI_CLASS_BRIDGE_PCI << PCI_CLASS_BRIDGE_SHIFT); 797 + class &= ~PCI_BRIDGE_CTRL_REG_CLASS_MASK; 798 + class |= PCI_CLASS_BRIDGE_PCI_NORMAL; 800 799 iproc_pci_raw_config_write32(pcie, 0, PCI_BRIDGE_CTRL_REG_OFFSET, 801 800 4, class); 802 801 ··· 1580 1581 * code that the bridge is not an Ethernet device. 1581 1582 */ 1582 1583 if (pdev->hdr_type == PCI_HEADER_TYPE_BRIDGE) 1583 - pdev->class = PCI_CLASS_BRIDGE_PCI << 8; 1584 + pdev->class = PCI_CLASS_BRIDGE_PCI_NORMAL; 1584 1585 1585 1586 /* 1586 1587 * MPSS is not being set properly (as it is currently 0). This is
+1 -1
drivers/pci/controller/pcie-mediatek-gen3.c
··· 292 292 /* Set class code */ 293 293 val = readl_relaxed(pcie->base + PCIE_PCI_IDS_1); 294 294 val &= ~GENMASK(31, 8); 295 - val |= PCI_CLASS(PCI_CLASS_BRIDGE_PCI << 8); 295 + val |= PCI_CLASS(PCI_CLASS_BRIDGE_PCI_NORMAL); 296 296 writel_relaxed(val, pcie->base + PCIE_PCI_IDS_1); 297 297 298 298 /* Mask all INTx interrupts */
+97 -35
drivers/pci/controller/pcie-rcar-host.c
··· 65 65 int (*phy_init_fn)(struct rcar_pcie_host *host); 66 66 }; 67 67 68 + static DEFINE_SPINLOCK(pmsr_lock); 69 + 70 + static int rcar_pcie_wakeup(struct device *pcie_dev, void __iomem *pcie_base) 71 + { 72 + unsigned long flags; 73 + u32 pmsr, val; 74 + int ret = 0; 75 + 76 + spin_lock_irqsave(&pmsr_lock, flags); 77 + 78 + if (!pcie_base || pm_runtime_suspended(pcie_dev)) { 79 + ret = -EINVAL; 80 + goto unlock_exit; 81 + } 82 + 83 + pmsr = readl(pcie_base + PMSR); 84 + 85 + /* 86 + * Test if the PCIe controller received PM_ENTER_L1 DLLP and 87 + * the PCIe controller is not in L1 link state. If true, apply 88 + * fix, which will put the controller into L1 link state, from 89 + * which it can return to L0s/L0 on its own. 90 + */ 91 + if ((pmsr & PMEL1RX) && ((pmsr & PMSTATE) != PMSTATE_L1)) { 92 + writel(L1IATN, pcie_base + PMCTLR); 93 + ret = readl_poll_timeout_atomic(pcie_base + PMSR, val, 94 + val & L1FAEG, 10, 1000); 95 + WARN(ret, "Timeout waiting for L1 link state, ret=%d\n", ret); 96 + writel(L1FAEG | PMEL1RX, pcie_base + PMSR); 97 + } 98 + 99 + unlock_exit: 100 + spin_unlock_irqrestore(&pmsr_lock, flags); 101 + return ret; 102 + } 103 + 68 104 static struct rcar_pcie_host *msi_to_host(struct rcar_msi *msi) 69 105 { 70 106 return container_of(msi, struct rcar_pcie_host, msi); ··· 114 78 return val >> shift; 115 79 } 116 80 81 + #ifdef CONFIG_ARM 82 + #define __rcar_pci_rw_reg_workaround(instr) \ 83 + " .arch armv7-a\n" \ 84 + "1: " instr " %1, [%2]\n" \ 85 + "2: isb\n" \ 86 + "3: .pushsection .text.fixup,\"ax\"\n" \ 87 + " .align 2\n" \ 88 + "4: mov %0, #" __stringify(PCIBIOS_SET_FAILED) "\n" \ 89 + " b 3b\n" \ 90 + " .popsection\n" \ 91 + " .pushsection __ex_table,\"a\"\n" \ 92 + " .align 3\n" \ 93 + " .long 1b, 4b\n" \ 94 + " .long 2b, 4b\n" \ 95 + " .popsection\n" 96 + #endif 97 + 98 + static int rcar_pci_write_reg_workaround(struct rcar_pcie *pcie, u32 val, 99 + unsigned int reg) 100 + { 101 + int error = PCIBIOS_SUCCESSFUL; 102 + #ifdef CONFIG_ARM 
103 + asm volatile( 104 + __rcar_pci_rw_reg_workaround("str") 105 + : "+r"(error):"r"(val), "r"(pcie->base + reg) : "memory"); 106 + #else 107 + rcar_pci_write_reg(pcie, val, reg); 108 + #endif 109 + return error; 110 + } 111 + 112 + static int rcar_pci_read_reg_workaround(struct rcar_pcie *pcie, u32 *val, 113 + unsigned int reg) 114 + { 115 + int error = PCIBIOS_SUCCESSFUL; 116 + #ifdef CONFIG_ARM 117 + asm volatile( 118 + __rcar_pci_rw_reg_workaround("ldr") 119 + : "+r"(error), "=r"(*val) : "r"(pcie->base + reg) : "memory"); 120 + 121 + if (error != PCIBIOS_SUCCESSFUL) 122 + PCI_SET_ERROR_RESPONSE(val); 123 + #else 124 + *val = rcar_pci_read_reg(pcie, reg); 125 + #endif 126 + return error; 127 + } 128 + 117 129 /* Serialization is provided by 'pci_lock' in drivers/pci/access.c */ 118 130 static int rcar_pcie_config_access(struct rcar_pcie_host *host, 119 131 unsigned char access_type, struct pci_bus *bus, ··· 169 85 { 170 86 struct rcar_pcie *pcie = &host->pcie; 171 87 unsigned int dev, func, reg, index; 88 + int ret; 89 + 90 + /* Wake the bus up in case it is in L1 state. */ 91 + ret = rcar_pcie_wakeup(pcie->dev, pcie->base); 92 + if (ret) { 93 + PCI_SET_ERROR_RESPONSE(data); 94 + return PCIBIOS_SET_FAILED; 95 + } 172 96 173 97 dev = PCI_SLOT(devfn); 174 98 func = PCI_FUNC(devfn); ··· 233 141 return PCIBIOS_DEVICE_NOT_FOUND; 234 142 235 143 if (access_type == RCAR_PCI_ACCESS_READ) 236 - *data = rcar_pci_read_reg(pcie, PCIECDR); 144 + ret = rcar_pci_read_reg_workaround(pcie, data, PCIECDR); 237 145 else 238 - rcar_pci_write_reg(pcie, *data, PCIECDR); 146 + ret = rcar_pci_write_reg_workaround(pcie, *data, PCIECDR); 239 147 240 148 /* Disable the configuration access */ 241 149 rcar_pci_write_reg(pcie, 0, PCIECCTLR); 242 150 243 - return PCIBIOS_SUCCESSFUL; 151 + return ret; 244 152 } 245 153 246 154 static int rcar_pcie_read_conf(struct pci_bus *bus, unsigned int devfn, ··· 462 370 * class to match. 
Hardware takes care of propagating the IDSETR 463 371 * settings, so there is no need to bother with a quirk. 464 372 */ 465 - rcar_pci_write_reg(pcie, PCI_CLASS_BRIDGE_PCI << 16, IDSETR1); 373 + rcar_pci_write_reg(pcie, PCI_CLASS_BRIDGE_PCI_NORMAL << 8, IDSETR1); 466 374 467 375 /* 468 376 * Setup Secondary Bus Number & Subordinate Bus Number, even though ··· 1142 1050 }; 1143 1051 1144 1052 #ifdef CONFIG_ARM 1145 - static DEFINE_SPINLOCK(pmsr_lock); 1146 1053 static int rcar_pcie_aarch32_abort_handler(unsigned long addr, 1147 1054 unsigned int fsr, struct pt_regs *regs) 1148 1055 { 1149 - unsigned long flags; 1150 - u32 pmsr, val; 1151 - int ret = 0; 1152 - 1153 - spin_lock_irqsave(&pmsr_lock, flags); 1154 - 1155 - if (!pcie_base || pm_runtime_suspended(pcie_dev)) { 1156 - ret = 1; 1157 - goto unlock_exit; 1158 - } 1159 - 1160 - pmsr = readl(pcie_base + PMSR); 1161 - 1162 - /* 1163 - * Test if the PCIe controller received PM_ENTER_L1 DLLP and 1164 - * the PCIe controller is not in L1 link state. If true, apply 1165 - * fix, which will put the controller into L1 link state, from 1166 - * which it can return to L0s/L0 on its own. 1167 - */ 1168 - if ((pmsr & PMEL1RX) && ((pmsr & PMSTATE) != PMSTATE_L1)) { 1169 - writel(L1IATN, pcie_base + PMCTLR); 1170 - ret = readl_poll_timeout_atomic(pcie_base + PMSR, val, 1171 - val & L1FAEG, 10, 1000); 1172 - WARN(ret, "Timeout waiting for L1 link state, ret=%d\n", ret); 1173 - writel(L1FAEG | PMEL1RX, pcie_base + PMSR); 1174 - } 1175 - 1176 - unlock_exit: 1177 - spin_unlock_irqrestore(&pmsr_lock, flags); 1178 - return ret; 1056 + return !fixup_exception(regs); 1179 1057 } 1180 1058 1181 1059 static const struct of_device_id rcar_pcie_abort_handler_of_match[] __initconst = {
+1 -1
drivers/pci/controller/pcie-rockchip-host.c
··· 370 370 rockchip_pcie_write(rockchip, ROCKCHIP_VENDOR_ID, 371 371 PCIE_CORE_CONFIG_VENDOR); 372 372 rockchip_pcie_write(rockchip, 373 - PCI_CLASS_BRIDGE_PCI << PCIE_RC_CONFIG_SCC_SHIFT, 373 + PCI_CLASS_BRIDGE_PCI_NORMAL << 8, 374 374 PCIE_RC_CONFIG_RID_CCR); 375 375 376 376 /* Clear THP cap's next cap pointer to remove L1 substate cap */
-1
drivers/pci/controller/pcie-rockchip.h
··· 134 134 #define PCIE_RC_CONFIG_NORMAL_BASE 0x800000 135 135 #define PCIE_RC_CONFIG_BASE 0xa00000 136 136 #define PCIE_RC_CONFIG_RID_CCR (PCIE_RC_CONFIG_BASE + 0x08) 137 - #define PCIE_RC_CONFIG_SCC_SHIFT 16 138 137 #define PCIE_RC_CONFIG_DCR (PCIE_RC_CONFIG_BASE + 0xc4) 139 138 #define PCIE_RC_CONFIG_DCR_CSPL_SHIFT 18 140 139 #define PCIE_RC_CONFIG_DCR_CSPL_LIMIT 0xff
+12 -2
drivers/pci/endpoint/functions/pci-epf-test.c
··· 285 285 if (ret) 286 286 dev_err(dev, "Data transfer failed\n"); 287 287 } else { 288 - memcpy(dst_addr, src_addr, reg->size); 288 + void *buf; 289 + 290 + buf = kzalloc(reg->size, GFP_KERNEL); 291 + if (!buf) { 292 + ret = -ENOMEM; 293 + goto err_map_addr; 294 + } 295 + 296 + memcpy_fromio(buf, src_addr, reg->size); 297 + memcpy_toio(dst_addr, buf, reg->size); 298 + kfree(buf); 289 299 } 290 300 ktime_get_ts64(&end); 291 301 pci_epf_test_print_rate("COPY", reg->size, &start, &end, use_dma); ··· 451 441 if (!epf_test->dma_supported) { 452 442 dev_err(dev, "Cannot transfer data using DMA\n"); 453 443 ret = -EINVAL; 454 - goto err_map_addr; 444 + goto err_dma_map; 455 445 } 456 446 457 447 src_phys_addr = dma_map_single(dma_dev, buf, reg->size,
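The pci-epf-test hunk above stops `memcpy()`ing directly between two mapped BAR windows and instead bounces through an ordinary kernel buffer with `memcpy_fromio()`/`memcpy_toio()`, since plain `memcpy()` is not valid on `__iomem` regions. A userspace sketch of the same bounce pattern, with plain memory standing in for the I/O windows (the real driver uses the io-access helpers, not `memcpy()`):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Copy src -> dst via an intermediate buffer, mirroring the
 * memcpy_fromio()/memcpy_toio() bounce added to pci_epf_test_copy().
 * Returns 0 on success, -1 on allocation failure. */
static int bounce_copy(void *dst, const void *src, size_t size)
{
	void *buf = calloc(1, size);

	if (!buf)
		return -1;
	memcpy(buf, src, size);	/* memcpy_fromio() stand-in */
	memcpy(dst, buf, size);	/* memcpy_toio() stand-in */
	free(buf);
	return 0;
}
```

The extra allocation costs a copy, but keeps each transfer on the I/O side going through an accessor with well-defined semantics for device memory.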
+4 -3
drivers/pci/hotplug/acpiphp_glue.c
··· 226 226 static acpi_status acpiphp_add_context(acpi_handle handle, u32 lvl, void *data, 227 227 void **rv) 228 228 { 229 + struct acpi_device *adev = acpi_fetch_acpi_dev(handle); 229 230 struct acpiphp_bridge *bridge = data; 230 231 struct acpiphp_context *context; 231 - struct acpi_device *adev; 232 232 struct acpiphp_slot *slot; 233 233 struct acpiphp_func *newfunc; 234 234 acpi_status status = AE_OK; ··· 238 238 struct pci_dev *pdev = bridge->pci_dev; 239 239 u32 val; 240 240 241 + if (!adev) 242 + return AE_OK; 243 + 241 244 status = acpi_evaluate_integer(handle, "_ADR", NULL, &adr); 242 245 if (ACPI_FAILURE(status)) { 243 246 if (status != AE_NOT_FOUND) ··· 248 245 "can't evaluate _ADR (%#x)\n", status); 249 246 return AE_OK; 250 247 } 251 - if (acpi_bus_get_device(handle, &adev)) 252 - return AE_OK; 253 248 254 249 device = (adr >> 16) & 0xffff; 255 250 function = adr & 0xffff;
+3 -2
drivers/pci/hotplug/acpiphp_ibm.c
··· 433 433 goto init_return; 434 434 } 435 435 pr_debug("%s: found IBM aPCI device\n", __func__); 436 - if (acpi_bus_get_device(ibm_acpi_handle, &device)) { 437 - pr_err("%s: acpi_bus_get_device failed\n", __func__); 436 + device = acpi_fetch_acpi_dev(ibm_acpi_handle); 437 + if (!device) { 438 + pr_err("%s: acpi_fetch_acpi_dev failed\n", __func__); 438 439 retval = -ENODEV; 439 440 goto init_return; 440 441 }
+1 -1
drivers/pci/hotplug/cpqphp_core.c
··· 1254 1254 struct pci_resource *res; 1255 1255 struct pci_resource *tres; 1256 1256 1257 - rc = compaq_nvram_store(cpqhp_rom_start); 1257 + compaq_nvram_store(cpqhp_rom_start); 1258 1258 1259 1259 ctrl = cpqhp_ctrl_list; 1260 1260
+5 -17
drivers/pci/hotplug/cpqphp_ctrl.c
··· 881 881 u8 reset; 882 882 u16 misc; 883 883 u32 Diff; 884 - u32 temp_dword; 885 884 886 885 887 886 misc = readw(ctrl->hpc_reg + MISC); ··· 916 917 writel(Diff, ctrl->hpc_reg + INT_INPUT_CLEAR); 917 918 918 919 /* Read it back to clear any posted writes */ 919 - temp_dword = readl(ctrl->hpc_reg + INT_INPUT_CLEAR); 920 + readl(ctrl->hpc_reg + INT_INPUT_CLEAR); 920 921 921 922 if (!Diff) 922 923 /* Clear all interrupts */ ··· 1411 1412 u32 rc = 0; 1412 1413 struct pci_func *new_slot = NULL; 1413 1414 struct pci_bus *bus = ctrl->pci_bus; 1414 - struct slot *p_slot; 1415 1415 struct resource_lists res_lists; 1416 1416 1417 1417 hp_slot = func->device - ctrl->slot_device_offset; ··· 1457 1459 if (rc) 1458 1460 return rc; 1459 1461 1460 - p_slot = cpqhp_find_slot(ctrl, hp_slot + ctrl->slot_device_offset); 1462 + cpqhp_find_slot(ctrl, hp_slot + ctrl->slot_device_offset); 1461 1463 1462 1464 /* turn on board and blink green LED */ 1463 1465 ··· 1612 1614 u8 device; 1613 1615 u8 hp_slot; 1614 1616 u8 temp_byte; 1615 - u32 rc; 1616 1617 struct resource_lists res_lists; 1617 1618 struct pci_func *temp_func; 1618 1619 ··· 1626 1629 /* When we get here, it is safe to change base address registers. 
1627 1630 * We will attempt to save the base address register lengths */ 1628 1631 if (replace_flag || !ctrl->add_support) 1629 - rc = cpqhp_save_base_addr_length(ctrl, func); 1632 + cpqhp_save_base_addr_length(ctrl, func); 1630 1633 else if (!func->bus_head && !func->mem_head && 1631 1634 !func->p_mem_head && !func->io_head) { 1632 1635 /* Here we check to see if we've saved any of the board's ··· 1644 1647 } 1645 1648 1646 1649 if (!skip) 1647 - rc = cpqhp_save_used_resources(ctrl, func); 1650 + cpqhp_save_used_resources(ctrl, func); 1648 1651 } 1649 1652 /* Change status to shutdown */ 1650 1653 if (func->is_a_board) ··· 1764 1767 1765 1768 static void interrupt_event_handler(struct controller *ctrl) 1766 1769 { 1767 - int loop = 0; 1770 + int loop; 1768 1771 int change = 1; 1769 1772 struct pci_func *func; 1770 1773 u8 hp_slot; ··· 1882 1885 void cpqhp_pushbutton_thread(struct timer_list *t) 1883 1886 { 1884 1887 u8 hp_slot; 1885 - u8 device; 1886 1888 struct pci_func *func; 1887 1889 struct slot *p_slot = from_timer(p_slot, t, task_event); 1888 1890 struct controller *ctrl = (struct controller *) p_slot->ctrl; 1889 1891 1890 1892 pushbutton_pending = NULL; 1891 1893 hp_slot = p_slot->hp_slot; 1892 - 1893 - device = p_slot->device; 1894 1894 1895 1895 if (is_slot_enabled(ctrl, hp_slot)) { 1896 1896 p_slot->state = POWEROFF_STATE; ··· 1945 1951 u32 tempdword; 1946 1952 int rc; 1947 1953 struct slot *p_slot; 1948 - int physical_slot = 0; 1949 1954 1950 1955 tempdword = 0; 1951 1956 1952 1957 device = func->device; 1953 1958 hp_slot = device - ctrl->slot_device_offset; 1954 1959 p_slot = cpqhp_find_slot(ctrl, device); 1955 - if (p_slot) 1956 - physical_slot = p_slot->number; 1957 1960 1958 1961 /* Check to see if the interlock is closed */ 1959 1962 tempdword = readl(ctrl->hpc_reg + INT_INPUT_CLEAR); ··· 2034 2043 unsigned int devfn; 2035 2044 struct slot *p_slot; 2036 2045 struct pci_bus *pci_bus = ctrl->pci_bus; 2037 - int physical_slot = 0; 2038 2046 2039 2047 
device = func->device; 2040 2048 func = cpqhp_slot_find(ctrl->bus, device, index++); 2041 2049 p_slot = cpqhp_find_slot(ctrl, device); 2042 - if (p_slot) 2043 - physical_slot = p_slot->number; 2044 2050 2045 2051 /* Make sure there are no video controllers here */ 2046 2052 while (func && !rc) {
+1 -1
drivers/pci/hotplug/cpqphp_pci.c
··· 473 473 int sub_bus; 474 474 int max_functions; 475 475 int function = 0; 476 - int cloop = 0; 476 + int cloop; 477 477 int stop_it; 478 478 479 479 ID = 0xFFFFFFFF;
-2
drivers/pci/hotplug/ibmphp_hpc.c
··· 325 325 static u8 isa_ctrl_read(struct controller *ctlr_ptr, u8 offset) 326 326 { 327 327 u16 start_address; 328 - u16 end_address; 329 328 u8 data; 330 329 331 330 start_address = ctlr_ptr->u.isa_ctlr.io_start; 332 - end_address = ctlr_ptr->u.isa_ctlr.io_end; 333 331 data = inb(start_address + offset); 334 332 return data; 335 333 }
+1 -2
drivers/pci/hotplug/ibmphp_res.c
··· 1955 1955 bus_sec = find_bus_wprev(sec_busno, NULL, 0); 1956 1956 /* this bus structure doesn't exist yet, PPB was configured during previous loading of ibmphp */ 1957 1957 if (!bus_sec) { 1958 - bus_sec = alloc_error_bus(NULL, sec_busno, 1); 1958 + alloc_error_bus(NULL, sec_busno, 1); 1959 1959 /* the rest will be populated during NVRAM call */ 1960 1960 return 0; 1961 1961 } ··· 2114 2114 } /* end for function */ 2115 2115 } /* end for device */ 2116 2116 2117 - bus = &bus_cur; 2118 2117 return 0; 2119 2118 }
+4
drivers/pci/hotplug/pciehp_hpc.c
··· 98 98 if (slot_status & PCI_EXP_SLTSTA_CC) { 99 99 pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, 100 100 PCI_EXP_SLTSTA_CC); 101 + ctrl->cmd_busy = 0; 102 + smp_mb(); 101 103 return 1; 102 104 } 103 105 msleep(10); ··· 1085 1083 } 1086 1084 } 1087 1085 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID, 1086 + PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl); 1087 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0110, 1088 1088 PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl); 1089 1089 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0400, 1090 1090 PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
+1 -1
drivers/pci/hotplug/shpchp_core.c
··· 312 312 } 313 313 314 314 static const struct pci_device_id shpcd_pci_tbl[] = { 315 - {PCI_DEVICE_CLASS(((PCI_CLASS_BRIDGE_PCI << 8) | 0x00), ~0)}, 315 + {PCI_DEVICE_CLASS(PCI_CLASS_BRIDGE_PCI_NORMAL, ~0)}, 316 316 { /* end: all zeroes */ } 317 317 }; 318 318 MODULE_DEVICE_TABLE(pci, shpcd_pci_tbl);
+1
drivers/pci/p2pdma.c
··· 321 321 {PCI_VENDOR_ID_INTEL, 0x2032, 0}, 322 322 {PCI_VENDOR_ID_INTEL, 0x2033, 0}, 323 323 {PCI_VENDOR_ID_INTEL, 0x2020, 0}, 324 + {PCI_VENDOR_ID_INTEL, 0x09a2, 0}, 324 325 {} 325 326 }; 326 327
+3 -3
drivers/pci/pci-acpi.c
··· 89 89 return -ENODEV; 90 90 } 91 91 92 - ret = acpi_bus_get_device(handle, &adev); 93 - if (ret) 94 - return ret; 92 + adev = acpi_fetch_acpi_dev(handle); 93 + if (!adev) 94 + return -ENODEV; 95 95 96 96 ret = acpi_get_rc_addr(adev, res); 97 97 if (ret) {
+127 -59
drivers/pci/pci-bridge-emul.c
···
 #include "pci-bridge-emul.h"
 
 #define PCI_BRIDGE_CONF_END	PCI_STD_HEADER_SIZEOF
+#define PCI_CAP_SSID_SIZEOF	(PCI_SSVID_DEVICE_ID + 2)
+#define PCI_CAP_SSID_START	PCI_BRIDGE_CONF_END
+#define PCI_CAP_SSID_END	(PCI_CAP_SSID_START + PCI_CAP_SSID_SIZEOF)
 #define PCI_CAP_PCIE_SIZEOF	(PCI_EXP_SLTSTA2 + 2)
-#define PCI_CAP_PCIE_START	PCI_BRIDGE_CONF_END
+#define PCI_CAP_PCIE_START	PCI_CAP_SSID_END
 #define PCI_CAP_PCIE_END	(PCI_CAP_PCIE_START + PCI_CAP_PCIE_SIZEOF)
 
 /**
···
 	},
 };
 
+static pci_bridge_emul_read_status_t
+pci_bridge_emul_read_ssid(struct pci_bridge_emul *bridge, int reg, u32 *value)
+{
+	switch (reg) {
+	case PCI_CAP_LIST_ID:
+		*value = PCI_CAP_ID_SSVID |
+			 (bridge->has_pcie ? (PCI_CAP_PCIE_START << 8) : 0);
+		return PCI_BRIDGE_EMUL_HANDLED;
+
+	case PCI_SSVID_VENDOR_ID:
+		*value = bridge->subsystem_vendor_id |
+			 (bridge->subsystem_id << 16);
+		return PCI_BRIDGE_EMUL_HANDLED;
+
+	default:
+		return PCI_BRIDGE_EMUL_NOT_HANDLED;
+	}
+}
+
 /*
  * Initialize a pci_bridge_emul structure to represent a fake PCI
  * bridge configuration space. The caller needs to have initialized
···
 	BUILD_BUG_ON(sizeof(bridge->conf) != PCI_BRIDGE_CONF_END);
 
 	/*
-	 * class_revision: Class is high 24 bits and revision is low 8 bit of this member,
-	 * while class for PCI Bridge Normal Decode has the 24-bit value: PCI_CLASS_BRIDGE_PCI << 8
+	 * class_revision: Class is high 24 bits and revision is low 8 bit
+	 * of this member, while class for PCI Bridge Normal Decode has the
+	 * 24-bit value: PCI_CLASS_BRIDGE_PCI_NORMAL
 	 */
-	bridge->conf.class_revision |= cpu_to_le32((PCI_CLASS_BRIDGE_PCI << 8) << 8);
+	bridge->conf.class_revision |=
+		cpu_to_le32(PCI_CLASS_BRIDGE_PCI_NORMAL << 8);
 	bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
 	bridge->conf.cache_line_size = 0x10;
 	bridge->conf.status = cpu_to_le16(PCI_STATUS_CAP_LIST);
···
 	if (!bridge->pci_regs_behavior)
 		return -ENOMEM;
 
-	if (bridge->has_pcie) {
+	if (bridge->subsystem_vendor_id)
+		bridge->conf.capabilities_pointer = PCI_CAP_SSID_START;
+	else if (bridge->has_pcie)
 		bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
+	else
+		bridge->conf.capabilities_pointer = 0;
+
+	if (bridge->conf.capabilities_pointer)
 		bridge->conf.status |= cpu_to_le16(PCI_STATUS_CAP_LIST);
+
+	if (bridge->has_pcie) {
 		bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
 		bridge->pcie_conf.cap |= cpu_to_le16(PCI_EXP_TYPE_ROOT_PORT << 4);
 		bridge->pcie_cap_regs_behavior =
···
 			~(BIT(10) << 16);
 	}
 
-	if (flags & PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR) {
+	if (flags & PCI_BRIDGE_EMUL_NO_PREFMEM_FORWARD) {
 		bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].ro = ~0;
 		bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].rw = 0;
+	}
+
+	if (flags & PCI_BRIDGE_EMUL_NO_IO_FORWARD) {
+		bridge->pci_regs_behavior[PCI_COMMAND / 4].ro |= PCI_COMMAND_IO;
+		bridge->pci_regs_behavior[PCI_COMMAND / 4].rw &= ~PCI_COMMAND_IO;
+		bridge->pci_regs_behavior[PCI_IO_BASE / 4].ro |= GENMASK(15, 0);
+		bridge->pci_regs_behavior[PCI_IO_BASE / 4].rw &= ~GENMASK(15, 0);
+		bridge->pci_regs_behavior[PCI_IO_BASE_UPPER16 / 4].ro = ~0;
+		bridge->pci_regs_behavior[PCI_IO_BASE_UPPER16 / 4].rw = 0;
 	}
 
 	return 0;
···
 	__le32 *cfgspace;
 	const struct pci_bridge_reg_behavior *behavior;
 
-	if (bridge->has_pcie && reg >= PCI_CAP_PCIE_END) {
-		*value = 0;
-		return PCIBIOS_SUCCESSFUL;
-	}
-
-	if (!bridge->has_pcie && reg >= PCI_BRIDGE_CONF_END) {
-		*value = 0;
-		return PCIBIOS_SUCCESSFUL;
-	}
-
-	if (bridge->has_pcie && reg >= PCI_CAP_PCIE_START) {
+	if (reg < PCI_BRIDGE_CONF_END) {
+		/* Emulated PCI space */
+		read_op = bridge->ops->read_base;
+		cfgspace = (__le32 *) &bridge->conf;
+		behavior = bridge->pci_regs_behavior;
+	} else if (reg >= PCI_CAP_SSID_START && reg < PCI_CAP_SSID_END && bridge->subsystem_vendor_id) {
+		/* Emulated PCI Bridge Subsystem Vendor ID capability */
+		reg -= PCI_CAP_SSID_START;
+		read_op = pci_bridge_emul_read_ssid;
+		cfgspace = NULL;
+		behavior = NULL;
+	} else if (reg >= PCI_CAP_PCIE_START && reg < PCI_CAP_PCIE_END && bridge->has_pcie) {
+		/* Our emulated PCIe capability */
 		reg -= PCI_CAP_PCIE_START;
 		read_op = bridge->ops->read_pcie;
 		cfgspace = (__le32 *) &bridge->pcie_conf;
 		behavior = bridge->pcie_cap_regs_behavior;
+	} else if (reg >= PCI_CFG_SPACE_SIZE && bridge->has_pcie) {
+		/* PCIe extended capability space */
+		reg -= PCI_CFG_SPACE_SIZE;
+		read_op = bridge->ops->read_ext;
+		cfgspace = NULL;
+		behavior = NULL;
 	} else {
-		read_op = bridge->ops->read_base;
-		cfgspace = (__le32 *) &bridge->conf;
-		behavior = bridge->pci_regs_behavior;
+		/* Not implemented */
+		*value = 0;
+		return PCIBIOS_SUCCESSFUL;
 	}
 
 	if (read_op)
···
 	else
 		ret = PCI_BRIDGE_EMUL_NOT_HANDLED;
 
-	if (ret == PCI_BRIDGE_EMUL_NOT_HANDLED)
-		*value = le32_to_cpu(cfgspace[reg / 4]);
+	if (ret == PCI_BRIDGE_EMUL_NOT_HANDLED) {
+		if (cfgspace)
+			*value = le32_to_cpu(cfgspace[reg / 4]);
+		else
+			*value = 0;
+	}
 
 	/*
 	 * Make sure we never return any reserved bit with a value
 	 * different from 0.
 	 */
-	*value &= behavior[reg / 4].ro | behavior[reg / 4].rw |
-		  behavior[reg / 4].w1c;
+	if (behavior)
+		*value &= behavior[reg / 4].ro | behavior[reg / 4].rw |
+			  behavior[reg / 4].w1c;
 
 	if (size == 1)
 		*value = (*value >> (8 * (where & 3))) & 0xff;
···
 	__le32 *cfgspace;
 	const struct pci_bridge_reg_behavior *behavior;
 
-	if (bridge->has_pcie && reg >= PCI_CAP_PCIE_END)
-		return PCIBIOS_SUCCESSFUL;
+	ret = pci_bridge_emul_conf_read(bridge, reg, 4, &old);
+	if (ret != PCIBIOS_SUCCESSFUL)
+		return ret;
 
-	if (!bridge->has_pcie && reg >= PCI_BRIDGE_CONF_END)
+	if (reg < PCI_BRIDGE_CONF_END) {
+		/* Emulated PCI space */
+		write_op = bridge->ops->write_base;
+		cfgspace = (__le32 *) &bridge->conf;
+		behavior = bridge->pci_regs_behavior;
+	} else if (reg >= PCI_CAP_PCIE_START && reg < PCI_CAP_PCIE_END && bridge->has_pcie) {
+		/* Our emulated PCIe capability */
+		reg -= PCI_CAP_PCIE_START;
+		write_op = bridge->ops->write_pcie;
+		cfgspace = (__le32 *) &bridge->pcie_conf;
+		behavior = bridge->pcie_cap_regs_behavior;
+	} else if (reg >= PCI_CFG_SPACE_SIZE && bridge->has_pcie) {
+		/* PCIe extended capability space */
+		reg -= PCI_CFG_SPACE_SIZE;
+		write_op = bridge->ops->write_ext;
+		cfgspace = NULL;
+		behavior = NULL;
+	} else {
+		/* Not implemented */
 		return PCIBIOS_SUCCESSFUL;
+	}
 
 	shift = (where & 0x3) * 8;
···
 	else
 		return PCIBIOS_BAD_REGISTER_NUMBER;
 
-	ret = pci_bridge_emul_conf_read(bridge, reg, 4, &old);
-	if (ret != PCIBIOS_SUCCESSFUL)
-		return ret;
+	if (behavior) {
+		/* Keep all bits, except the RW bits */
+		new = old & (~mask | ~behavior[reg / 4].rw);
 
-	if (bridge->has_pcie && reg >= PCI_CAP_PCIE_START) {
-		reg -= PCI_CAP_PCIE_START;
-		write_op = bridge->ops->write_pcie;
-		cfgspace = (__le32 *) &bridge->pcie_conf;
-		behavior = bridge->pcie_cap_regs_behavior;
+		/* Update the value of the RW bits */
+		new |= (value << shift) & (behavior[reg / 4].rw & mask);
+
+		/* Clear the W1C bits */
+		new &= ~((value << shift) & (behavior[reg / 4].w1c & mask));
 	} else {
-		write_op = bridge->ops->write_base;
-		cfgspace = (__le32 *) &bridge->conf;
-		behavior = bridge->pci_regs_behavior;
+		new = old & ~mask;
+		new |= (value << shift) & mask;
 	}
 
-	/* Keep all bits, except the RW bits */
-	new = old & (~mask | ~behavior[reg / 4].rw);
+	if (cfgspace) {
+		/* Save the new value with the cleared W1C bits into the cfgspace */
+		cfgspace[reg / 4] = cpu_to_le32(new);
+	}
 
-	/* Update the value of the RW bits */
-	new |= (value << shift) & (behavior[reg / 4].rw & mask);
+	if (behavior) {
+		/*
+		 * Clear the W1C bits not specified by the write mask, so that the
+		 * write_op() does not clear them.
+		 */
+		new &= ~(behavior[reg / 4].w1c & ~mask);
 
-	/* Clear the W1C bits */
-	new &= ~((value << shift) & (behavior[reg / 4].w1c & mask));
-
-	/* Save the new value with the cleared W1C bits into the cfgspace */
-	cfgspace[reg / 4] = cpu_to_le32(new);
-
-	/*
-	 * Clear the W1C bits not specified by the write mask, so that the
-	 * write_op() does not clear them.
-	 */
-	new &= ~(behavior[reg / 4].w1c & ~mask);
-
-	/*
-	 * Set the W1C bits specified by the write mask, so that write_op()
-	 * knows about that they are to be cleared.
-	 */
-	new |= (value << shift) & (behavior[reg / 4].w1c & mask);
+		/*
+		 * Set the W1C bits specified by the write mask, so that write_op()
+		 * knows about that they are to be cleared.
+		 */
+		new |= (value << shift) & (behavior[reg / 4].w1c & mask);
+	}
 
 	if (write_op)
 		write_op(bridge, reg, old, new, mask);
+29 -2
drivers/pci/pci-bridge-emul.h
···
 	 */
 	pci_bridge_emul_read_status_t (*read_pcie)(struct pci_bridge_emul *bridge,
 						   int reg, u32 *value);
+
+	/*
+	 * Same as ->read_base(), except it is for reading from the
+	 * PCIe extended capability configuration space.
+	 */
+	pci_bridge_emul_read_status_t (*read_ext)(struct pci_bridge_emul *bridge,
+						  int reg, u32 *value);
+
 	/*
 	 * Called when writing to the regular PCI bridge configuration
 	 * space. old is the current value, new is the new value being
···
 	 */
 	void (*write_pcie)(struct pci_bridge_emul *bridge, int reg,
 			   u32 old, u32 new, u32 mask);
+
+	/*
+	 * Same as ->write_base(), except it is for writing from the
+	 * PCIe extended capability configuration space.
+	 */
+	void (*write_ext)(struct pci_bridge_emul *bridge, int reg,
+			  u32 old, u32 new, u32 mask);
 };
 
 struct pci_bridge_reg_behavior;
···
 struct pci_bridge_emul {
 	struct pci_bridge_emul_conf conf;
 	struct pci_bridge_emul_pcie_conf pcie_conf;
-	struct pci_bridge_emul_ops *ops;
+	const struct pci_bridge_emul_ops *ops;
 	struct pci_bridge_reg_behavior *pci_regs_behavior;
 	struct pci_bridge_reg_behavior *pcie_cap_regs_behavior;
 	void *data;
 	bool has_pcie;
+	u16 subsystem_vendor_id;
+	u16 subsystem_id;
 };
 
 enum {
-	PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR = BIT(0),
+	/*
+	 * PCI bridge does not support forwarding of prefetchable memory
+	 * requests between primary and secondary buses.
+	 */
+	PCI_BRIDGE_EMUL_NO_PREFMEM_FORWARD = BIT(0),
+
+	/*
+	 * PCI bridge does not support forwarding of IO requests between
+	 * primary and secondary buses.
+	 */
+	PCI_BRIDGE_EMUL_NO_IO_FORWARD = BIT(1),
 };
 
 int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
+1 -6
drivers/pci/pci-sysfs.c
···
 		u8 val;
 		pci_user_read_config_byte(dev, off, &val);
 		data[off - init_off] = val;
-		off++;
-		--size;
 	}
 
 	pci_config_pm_runtime_put(dev);
···
 		size -= 2;
 	}
 
-	if (size) {
+	if (size)
 		pci_user_write_config_byte(dev, off, data[off - init_off]);
-		off++;
-		--size;
-	}
 
 	pci_config_pm_runtime_put(dev);
+1 -1
drivers/pci/pcie/Kconfig
···
 	  error injection can fake almost all kinds of errors with the
 	  help of a user space helper tool aer-inject, which can be
 	  gotten from:
-	     https://www.kernel.org/pub/linux/utils/pci/aer-inject/
+	     https://git.kernel.org/cgit/linux/kernel/git/gong.chen/aer-inject.git/
 
 #
 # PCI Express ECRC
+1 -1
drivers/pci/pcie/aer_inject.c
···
  * trigger various real hardware errors. Software based error
  * injection can fake almost all kinds of errors with the help of a
  * user space helper tool aer-inject, which can be gotten from:
- *   https://www.kernel.org/pub/linux/utils/pci/aer-inject/
+ *   https://git.kernel.org/cgit/linux/kernel/git/gong.chen/aer-inject.git/
  *
  * Copyright 2009 Intel Corporation.
  *     Huang Ying <ying.huang@intel.com>
+2 -2
drivers/pci/pcie/portdrv_pci.c
···
  */
 static const struct pci_device_id port_pci_ids[] = {
 	/* handle any PCI-Express port */
-	{ PCI_DEVICE_CLASS(((PCI_CLASS_BRIDGE_PCI << 8) | 0x00), ~0) },
+	{ PCI_DEVICE_CLASS(PCI_CLASS_BRIDGE_PCI_NORMAL, ~0) },
 	/* subtractive decode PCI-to-PCI bridge, class type is 060401h */
-	{ PCI_DEVICE_CLASS(((PCI_CLASS_BRIDGE_PCI << 8) | 0x01), ~0) },
+	{ PCI_DEVICE_CLASS(PCI_CLASS_BRIDGE_PCI_SUBTRACTIVE, ~0) },
 	/* handle any Root Complex Event Collector */
 	{ PCI_DEVICE_CLASS(((PCI_CLASS_SYSTEM_RCEC << 8) | 0x00), ~0) },
 	{ },
+2 -4
drivers/pci/proc.c
···
 		unsigned char val;
 		pci_user_read_config_byte(dev, pos, &val);
 		__put_user(val, buf);
-		buf++;
 		pos++;
-		cnt--;
 	}
 
 	pci_config_pm_runtime_put(dev);
···
 		unsigned char val;
 		__get_user(val, buf);
 		pci_user_write_config_byte(dev, pos, val);
-		buf++;
 		pos++;
-		cnt--;
 	}
 
 	pci_config_pm_runtime_put(dev);
···
 	return nbytes;
 }
 
+#ifdef HAVE_PCI_MMAP
 struct pci_filp_private {
 	enum pci_mmap_state mmap_state;
 	int write_combine;
 };
+#endif /* HAVE_PCI_MMAP */
 
 static long proc_bus_pci_ioctl(struct file *file, unsigned int cmd,
 			       unsigned long arg)
+12
drivers/pci/quirks.c
···
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_EESSC, quirk_alder_ioapic);
 #endif
 
+static void quirk_no_msi(struct pci_dev *dev)
+{
+	pci_info(dev, "avoiding MSI to work around a hardware defect\n");
+	dev->no_msi = 1;
+}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4386, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4387, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4388, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4389, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438a, quirk_no_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438b, quirk_no_msi);
+
 static void quirk_pcie_mch(struct pci_dev *pdev)
 {
 	pdev->no_msi = 1;
+2 -2
drivers/pci/setup-bus.c
···
 {
 	struct pci_dev *dev;
 	resource_size_t min_align, align, size, size0, size1;
-	resource_size_t aligns[18]; /* Alignments from 1MB to 128GB */
+	resource_size_t aligns[24]; /* Alignments from 1MB to 8TB */
 	int order, max_order;
 	struct resource *b_res = find_bus_resource_of_type(bus,
 			mask | IORESOURCE_PREFETCH, type);
···
 {
 	struct pci_dev *dev = bus->self;
 	struct resource *r;
-	unsigned int old_flags = 0;
+	unsigned int old_flags;
 	struct resource *b_res;
 	int idx = 1;
+1
include/linux/pci.h
···
 	struct bin_attribute *legacy_io;	/* Legacy I/O for this bus */
 	struct bin_attribute *legacy_mem;	/* Legacy mem */
 	unsigned int is_added:1;
+	unsigned int unsafe_warn:1;	/* warned about RW1C config write */
 };
 
 #define to_pci_bus(n) container_of(n, struct pci_bus, dev)
+2
include/linux/pci_ids.h
···
 #define PCI_CLASS_BRIDGE_EISA		0x0602
 #define PCI_CLASS_BRIDGE_MC		0x0603
 #define PCI_CLASS_BRIDGE_PCI		0x0604
+#define PCI_CLASS_BRIDGE_PCI_NORMAL		0x060400
+#define PCI_CLASS_BRIDGE_PCI_SUBTRACTIVE	0x060401
 #define PCI_CLASS_BRIDGE_PCMCIA		0x0605
 #define PCI_CLASS_BRIDGE_NUBUS		0x0606
 #define PCI_CLASS_BRIDGE_CARDBUS	0x0607
+2
include/linux/sizes.h
···
 #define SZ_8G				_AC(0x200000000, ULL)
 #define SZ_16G				_AC(0x400000000, ULL)
 #define SZ_32G				_AC(0x800000000, ULL)
+
+#define SZ_1T				_AC(0x10000000000, ULL)
 #define SZ_64T				_AC(0x400000000000, ULL)
 
 #endif /* __LINUX_SIZES_H__ */