
Merge tag 'pci-v5.2-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration changes:

- Add _HPX Type 3 settings support, which gives firmware more
influence over device configuration (Alexandru Gagniuc)

- Support fixed bus numbers from bridge Enhanced Allocation
capabilities (Subbaraya Sundeep)

- Add "external-facing" DT property to identify cases where we
require IOMMU protection against untrusted devices (Jean-Philippe
Brucker)

- Enable PCIe services for host controller drivers that use managed
host bridge alloc (Jean-Philippe Brucker)

- Log PCIe port service messages with pci_dev, not the pcie_device
(Frederick Lawler)

- Convert pciehp from pciehp_debug module parameter to generic
dynamic debug (Frederick Lawler)

Peer-to-peer DMA:

- Add whitelist of Root Complexes that support peer-to-peer DMA
between Root Ports (Christian König)

Native controller drivers:

- Add PCI host bridge DMA ranges for bridges that can't DMA
everywhere, e.g., iProc (Srinath Mannam)

- Add Amazon Annapurna Labs PCIe host controller driver (Jonathan
Chocron)

- Fix Tegra MSI target allocation so DMA doesn't generate unwanted
MSIs (Vidya Sagar)

- Fix of_node reference leaks (Wen Yang)

- Fix Hyper-V module unload & device removal issues (Dexuan Cui)

- Cleanup R-Car driver (Marek Vasut)

- Cleanup Keystone driver (Kishon Vijay Abraham I)

- Cleanup i.MX6 driver (Andrey Smirnov)

Significant bug fixes:

- Reset Lenovo ThinkPad P50 GPU so nouveau works after reboot (Lyude
Paul)

- Fix Switchtec firmware update performance issue (Wesley Sheng)

- Work around Pericom switch link retraining erratum (Stefan Mätje)"

* tag 'pci-v5.2-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (141 commits)
MAINTAINERS: Add Karthikeyan Mitran and Hou Zhiqiang for Mobiveil PCI
PCI: pciehp: Remove pointless MY_NAME definition
PCI: pciehp: Remove pointless PCIE_MODULE_NAME definition
PCI: pciehp: Remove unused dbg/err/info/warn() wrappers
PCI: pciehp: Log messages with pci_dev, not pcie_device
PCI: pciehp: Replace pciehp_debug module param with dyndbg
PCI: pciehp: Remove pciehp_debug uses
PCI/AER: Log messages with pci_dev, not pcie_device
PCI/DPC: Log messages with pci_dev, not pcie_device
PCI/PME: Replace dev_printk(KERN_DEBUG) with dev_info()
PCI/AER: Replace dev_printk(KERN_DEBUG) with dev_info()
PCI: Replace dev_printk(KERN_DEBUG) with dev_info(), etc
PCI: Replace printk(KERN_INFO) with pr_info(), etc
PCI: Use dev_printk() when possible
PCI: Cleanup setup-bus.c comments and whitespace
PCI: imx6: Allow asynchronous probing
PCI: dwc: Save root bus for driver remove hooks
PCI: dwc: Use devm_pci_alloc_host_bridge() to simplify code
PCI: dwc: Free MSI in dw_pcie_host_init() error path
PCI: dwc: Free MSI IRQ page in dw_pcie_free_msi()
...

+2925 -1635
+5 -2
Documentation/devicetree/bindings/pci/designware-pcie.txt
···
 - compatible:
 	"snps,dw-pcie" for RC mode;
 	"snps,dw-pcie-ep" for EP mode;
-- reg: Should contain the configuration address space.
-- reg-names: Must be "config" for the PCIe configuration space.
+- reg: For designware cores version < 4.80 contains the configuration
+       address space. For designware core version >= 4.80, contains
+       the configuration and ATU address space
+- reg-names: Must be "config" for the PCIe configuration space and "atu" for
+             the ATU address space.
 (The old way of getting the configuration address space from "ranges"
 is deprecated and should be avoided.)
 - num-lanes: number of lanes to use
+55 -3
Documentation/devicetree/bindings/pci/pci-keystone.txt
···
 Required Properties:-

-	compatibility: "ti,keystone-pcie"
-	reg:	index 1 is the base address and length of DW application registers.
-		index 2 is the base address and length of PCI device ID register.
+	compatibility: Should be "ti,keystone-pcie" for RC on Keystone2 SoC
+		       Should be "ti,am654-pcie-rc" for RC on AM654x SoC
+	reg:	Three register ranges as listed in the reg-names property
+	reg-names: "dbics" for the DesignWare PCIe registers, "app" for the
+		   TI specific application registers, "config" for the
+		   configuration space address

 	pcie_msi_intc : Interrupt controller device node for MSI IRQ chip
 		interrupt-cells: should be set to 1
 		interrupts: GIC interrupt lines connected to PCI MSI interrupt lines
+		(required if the compatible is "ti,keystone-pcie")
+	msi-map: As specified in Documentation/devicetree/bindings/pci/pci-msi.txt
+		 (required if the compatible is "ti,am654-pcie-rc".

 	ti,syscon-pcie-id : phandle to the device control module required to set device
 			    id and vendor id.
+	ti,syscon-pcie-mode : phandle to the device control module required to configure
+			      PCI in either RC mode or EP mode.

 Example:
 	pcie_msi_intc: msi-interrupt-controller {
···
 DesignWare DT Properties not applicable for Keystone PCI

 1. pcie_bus clock-names not used. Instead, a phandle to phys is used.
+
+AM654 PCIe Endpoint
+===================
+
+Required Properties:-
+
+	compatibility: Should be "ti,am654-pcie-ep" for EP on AM654x SoC
+	reg: Four register ranges as listed in the reg-names property
+	reg-names: "dbics" for the DesignWare PCIe registers, "app" for the
+		   TI specific application registers, "atu" for the
+		   Address Translation Unit configuration registers and
+		   "addr_space" used to map remote RC address space
+	num-ib-windows: As specified in
+			Documentation/devicetree/bindings/pci/designware-pcie.txt
+	num-ob-windows: As specified in
+			Documentation/devicetree/bindings/pci/designware-pcie.txt
+	num-lanes: As specified in
+		   Documentation/devicetree/bindings/pci/designware-pcie.txt
+	power-domains: As documented by the generic PM domain bindings in
+		       Documentation/devicetree/bindings/power/power_domain.txt.
+	ti,syscon-pcie-mode: phandle to the device control module required to configure
+			     PCI in either RC mode or EP mode.
+
+Optional properties:-
+
+	phys: list of PHY specifiers (used by generic PHY framework)
+	phy-names: must be "pcie-phy0", "pcie-phy1", "pcie-phyN".. based on the
+		   number of lanes as specified in *num-lanes* property.
+		   ("phys" and "phy-names" DT bindings are specified in
+		   Documentation/devicetree/bindings/phy/phy-bindings.txt)
+	interrupts: platform interrupt for error interrupts.
+
+pcie-ep {
+	compatible = "ti,am654-pcie-ep";
+	reg = <0x5500000 0x1000>, <0x5501000 0x1000>,
+	      <0x10000000 0x8000000>, <0x5506000 0x1000>;
+	reg-names = "app", "dbics", "addr_space", "atu";
+	power-domains = <&k3_pds 120>;
+	ti,syscon-pcie-mode = <&pcie0_mode>;
+	num-lanes = <1>;
+	num-ib-windows = <16>;
+	num-ob-windows = <16>;
+	interrupts = <GIC_SPI 340 IRQ_TYPE_EDGE_RISING>;
+};
+50
Documentation/devicetree/bindings/pci/pci.txt
···
 unsupported link speed, for instance, trying to do training for
 unsupported link speed, etc. Must be '4' for gen4, '3' for gen3, '2'
 for gen2, and '1' for gen1. Any other values are invalid.
+
+PCI-PCI Bridge properties
+-------------------------
+
+PCIe root ports and switch ports may be described explicitly in the device
+tree, as children of the host bridge node. Even though those devices are
+discoverable by probing, it might be necessary to describe properties that
+aren't provided by standard PCIe capabilities.
+
+Required properties:
+
+- reg:
+   Identifies the PCI-PCI bridge. As defined in the IEEE Std 1275-1994
+   document, it is a five-cell address encoded as (phys.hi phys.mid
+   phys.lo size.hi size.lo). phys.hi should contain the device's BDF as
+   0b00000000 bbbbbbbb dddddfff 00000000. The other cells should be zero.
+
+   The bus number is defined by firmware, through the standard bridge
+   configuration mechanism. If this port is a switch port, then firmware
+   allocates the bus number and writes it into the Secondary Bus Number
+   register of the bridge directly above this port. Otherwise, the bus
+   number of a root port is the first number in the bus-range property,
+   defaulting to zero.
+
+   If firmware leaves the ARI Forwarding Enable bit set in the bridge
+   above this port, then phys.hi contains the 8-bit function number as
+   0b00000000 bbbbbbbb ffffffff 00000000. Note that the PCIe specification
+   recommends that firmware only leaves ARI enabled when it knows that the
+   OS is ARI-aware.
+
+Optional properties:
+
+- external-facing:
+   When present, the port is external-facing. All bridges and endpoints
+   downstream of this port are external to the machine. The OS can, for
+   example, use this information to identify devices that cannot be
+   trusted with relaxed DMA protection, as users could easily attach
+   malicious devices to this port.
+
+Example:
+
+pcie@10000000 {
+	compatible = "pci-host-ecam-generic";
+	...
+	pcie@0008 {
+		/* Root port 00:01.0 is external-facing */
+		reg = <0x00000800 0 0 0 0>;
+		external-facing;
+	};
+};
+8 -1
MAINTAINERS
···
 F:	drivers/ntb/hw/mscc/

 PCI DRIVER FOR MOBIVEIL PCIE IP
-M:	Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
+M:	Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>
+M:	Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
 L:	linux-pci@vger.kernel.org
 S:	Supported
 F:	Documentation/devicetree/bindings/pci/mobiveil-pcie.txt
···
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git/
 S:	Supported
 F:	drivers/pci/controller/
+
+PCIE DRIVER FOR ANNAPURNA LABS
+M:	Jonathan Chocron <jonnyc@amazon.com>
+L:	linux-pci@vger.kernel.org
+S:	Maintained
+F:	drivers/pci/controller/dwc/pcie-al.c

 PCIE DRIVER FOR AMLOGIC MESON
 M:	Yue Wang <yue.wang@Amlogic.com>
-2
arch/arm64/boot/dts/mediatek/mt2712e.dtsi
···
 	#size-cells = <2>;
 	#interrupt-cells = <1>;
 	ranges;
-	num-lanes = <1>;
 	interrupt-map-mask = <0 0 0 7>;
 	interrupt-map = <0 0 0 1 &pcie_intc0 0>,
 			<0 0 0 2 &pcie_intc0 1>,
···
 	#size-cells = <2>;
 	#interrupt-cells = <1>;
 	ranges;
-	num-lanes = <1>;
 	interrupt-map-mask = <0 0 0 7>;
 	interrupt-map = <0 0 0 1 &pcie_intc1 0>,
 			<0 0 0 2 &pcie_intc1 1>,
+6 -8
arch/powerpc/platforms/powernv/npu-dma.c
···
 	 * Currently we only support radix and non-zero LPCR only makes sense
 	 * for hash tables so skiboot expects the LPCR parameter to be a zero.
 	 */
-	ret = opal_npu_map_lpar(nphb->opal_id,
-		PCI_DEVID(gpdev->bus->number, gpdev->devfn), lparid,
-		0 /* LPCR bits */);
+	ret = opal_npu_map_lpar(nphb->opal_id, pci_dev_id(gpdev), lparid,
+				0 /* LPCR bits */);
 	if (ret) {
 		dev_err(&gpdev->dev, "Error %d mapping device to LPAR\n", ret);
 		return ret;
···
 	dev_dbg(&gpdev->dev, "init context opalid=%llu msr=%lx\n",
 		nphb->opal_id, msr);
 	ret = opal_npu_init_context(nphb->opal_id, 0/*__unused*/, msr,
-				    PCI_DEVID(gpdev->bus->number, gpdev->devfn));
+				    pci_dev_id(gpdev));
 	if (ret < 0)
 		dev_err(&gpdev->dev, "Failed to init context: %d\n", ret);
 	else
···
 	dev_dbg(&gpdev->dev, "destroy context opalid=%llu\n",
 		nphb->opal_id);
 	ret = opal_npu_destroy_context(nphb->opal_id, 0/*__unused*/,
-				       PCI_DEVID(gpdev->bus->number, gpdev->devfn));
+				       pci_dev_id(gpdev));
 	if (ret < 0) {
 		dev_err(&gpdev->dev, "Failed to destroy context: %d\n", ret);
 		return ret;
···
 	/* Set LPID to 0 anyway, just to be safe */
 	dev_dbg(&gpdev->dev, "Map LPAR opalid=%llu lparid=0\n", nphb->opal_id);
-	ret = opal_npu_map_lpar(nphb->opal_id,
-		PCI_DEVID(gpdev->bus->number, gpdev->devfn), 0 /*LPID*/,
-		0 /* LPCR bits */);
+	ret = opal_npu_map_lpar(nphb->opal_id, pci_dev_id(gpdev), 0 /*LPID*/,
+				0 /* LPCR bits */);
 	if (ret)
 		dev_err(&gpdev->dev, "Error %d mapping device to LPAR\n", ret);
+8 -2
arch/x86/pci/irq.c
···
 void __init pcibios_irq_init(void)
 {
+	struct irq_routing_table *rtable = NULL;
+
 	DBG(KERN_DEBUG "PCI: IRQ init\n");

 	if (raw_pci_ops == NULL)
···
 	pirq_table = pirq_find_routing_table();

 #ifdef CONFIG_PCI_BIOS
-	if (!pirq_table && (pci_probe & PCI_BIOS_IRQ_SCAN))
+	if (!pirq_table && (pci_probe & PCI_BIOS_IRQ_SCAN)) {
 		pirq_table = pcibios_get_irq_routing_table();
+		rtable = pirq_table;
+	}
 #endif
 	if (pirq_table) {
 		pirq_peer_trick();
···
 		 * If we're using the I/O APIC, avoid using the PCI IRQ
 		 * routing table
 		 */
-		if (io_apic_assign_pci_irqs)
+		if (io_apic_assign_pci_irqs) {
+			kfree(rtable);
 			pirq_table = NULL;
+		}
 	}

 	x86_init.pci.fixup_irqs();
+12
drivers/acpi/pci_mcfg.c
···
 static struct mcfg_fixup mcfg_quirks[] = {
 /*	{ OEM_ID, OEM_TABLE_ID, REV, SEGMENT, BUS_RANGE, ops, cfgres }, */

+#define AL_ECAM(table_id, rev, seg, ops) \
+	{ "AMAZON", table_id, rev, seg, MCFG_BUS_ANY, ops }
+
+	AL_ECAM("GRAVITON", 0, 0, &al_pcie_ops),
+	AL_ECAM("GRAVITON", 0, 1, &al_pcie_ops),
+	AL_ECAM("GRAVITON", 0, 2, &al_pcie_ops),
+	AL_ECAM("GRAVITON", 0, 3, &al_pcie_ops),
+	AL_ECAM("GRAVITON", 0, 4, &al_pcie_ops),
+	AL_ECAM("GRAVITON", 0, 5, &al_pcie_ops),
+	AL_ECAM("GRAVITON", 0, 6, &al_pcie_ops),
+	AL_ECAM("GRAVITON", 0, 7, &al_pcie_ops),
+
 #define QCOM_ECAM32(seg) \
 	{ "QCOM ", "QDF2432 ", 1, seg, MCFG_BUS_ANY, &pci_32b_ops }
+2
drivers/acpi/pci_root.c
···
 	{ OSC_PCI_CLOCK_PM_SUPPORT, "ClockPM" },
 	{ OSC_PCI_SEGMENT_GROUPS_SUPPORT, "Segments" },
 	{ OSC_PCI_MSI_SUPPORT, "MSI" },
+	{ OSC_PCI_HPX_TYPE_3_SUPPORT, "HPX-Type3" },
 };

 static struct pci_osc_bit_struct pci_osc_control_bit[] = {
···
 	 * PCI domains, so we indicate this in _OSC support capabilities.
 	 */
 	support = OSC_PCI_SEGMENT_GROUPS_SUPPORT;
+	support |= OSC_PCI_HPX_TYPE_3_SUPPORT;
 	if (pci_ext_cfg_avail())
 		support |= OSC_PCI_EXT_CONFIG_SUPPORT;
 	if (pcie_aspm_support_enabled())
+1 -2
drivers/gpu/drm/amd/amdkfd/kfd_topology.c
···
 	dev->node_props.vendor_id = gpu->pdev->vendor;
 	dev->node_props.device_id = gpu->pdev->device;
-	dev->node_props.location_id = PCI_DEVID(gpu->pdev->bus->number,
-			gpu->pdev->devfn);
+	dev->node_props.location_id = pci_dev_id(gpu->pdev);
 	dev->node_props.max_engine_clk_fcompute =
 		amdgpu_amdkfd_get_max_engine_clock_in_mhz(dev->gpu->kgd);
 	dev->node_props.max_engine_clk_ccompute =
+1 -1
drivers/iommu/amd_iommu.c
···
 {
 	struct pci_dev *pdev = to_pci_dev(dev);

-	return PCI_DEVID(pdev->bus->number, pdev->devfn);
+	return pci_dev_id(pdev);
 }

 static inline int get_acpihid_device_id(struct device *dev,
+32 -3
drivers/iommu/dma-iommu.c
···
 	return 0;
 }

-static void iova_reserve_pci_windows(struct pci_dev *dev,
+static int iova_reserve_pci_windows(struct pci_dev *dev,
 		struct iova_domain *iovad)
 {
 	struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
 	struct resource_entry *window;
 	unsigned long lo, hi;
+	phys_addr_t start = 0, end;

 	resource_list_for_each_entry(window, &bridge->windows) {
 		if (resource_type(window->res) != IORESOURCE_MEM)
···
 		hi = iova_pfn(iovad, window->res->end - window->offset);
 		reserve_iova(iovad, lo, hi);
 	}
+
+	/* Get reserved DMA windows from host bridge */
+	resource_list_for_each_entry(window, &bridge->dma_ranges) {
+		end = window->res->start - window->offset;
+resv_iova:
+		if (end > start) {
+			lo = iova_pfn(iovad, start);
+			hi = iova_pfn(iovad, end);
+			reserve_iova(iovad, lo, hi);
+		} else {
+			/* dma_ranges list should be sorted */
+			dev_err(&dev->dev, "Failed to reserve IOVA\n");
+			return -EINVAL;
+		}
+
+		start = window->res->end - window->offset + 1;
+		/* If window is last entry */
+		if (window->node.next == &bridge->dma_ranges &&
+		    end != ~(dma_addr_t)0) {
+			end = ~(dma_addr_t)0;
+			goto resv_iova;
+		}
+	}
+
+	return 0;
 }

 static int iova_reserve_iommu_regions(struct device *dev,
···
 	LIST_HEAD(resv_regions);
 	int ret = 0;

-	if (dev_is_pci(dev))
-		iova_reserve_pci_windows(to_pci_dev(dev), iovad);
+	if (dev_is_pci(dev)) {
+		ret = iova_reserve_pci_windows(to_pci_dev(dev), iovad);
+		if (ret)
+			return ret;
+	}

 	iommu_get_resv_regions(dev, &resv_regions);
 	list_for_each_entry(region, &resv_regions, list) {
+1 -1
drivers/iommu/intel-iommu.c
···
 		/* pdev will be returned if device is not a vf */
 		pf_pdev = pci_physfn(pdev);
-		info->pfsid = PCI_DEVID(pf_pdev->bus->number, pf_pdev->devfn);
+		info->pfsid = pci_dev_id(pf_pdev);
 	}

 #ifdef CONFIG_INTEL_IOMMU_SVM
+1 -1
drivers/iommu/intel_irq_remapping.c
···
 		set_irte_sid(irte, SVT_VERIFY_SID_SQ, SQ_ALL_16, data.alias);
 	else
 		set_irte_sid(irte, SVT_VERIFY_SID_SQ, SQ_ALL_16,
-			     PCI_DEVID(dev->bus->number, dev->devfn));
+			     pci_dev_id(dev));

 	return 0;
 }
+18
drivers/misc/pci_endpoint_test.c
···
 #define PCI_ENDPOINT_TEST_IRQ_TYPE		0x24
 #define PCI_ENDPOINT_TEST_IRQ_NUMBER		0x28

+#define PCI_DEVICE_ID_TI_AM654			0xb00c
+
+#define is_am654_pci_dev(pdev)		\
+		((pdev)->device == PCI_DEVICE_ID_TI_AM654)
+
 static DEFINE_IDA(pci_endpoint_test_ida);

 #define to_endpoint_test(priv) container_of((priv), struct pci_endpoint_test, \
···
 	int ret = -EINVAL;
 	enum pci_barno bar;
 	struct pci_endpoint_test *test = to_endpoint_test(file->private_data);
+	struct pci_dev *pdev = test->pdev;

 	mutex_lock(&test->mutex);
 	switch (cmd) {
 	case PCITEST_BAR:
 		bar = arg;
 		if (bar < 0 || bar > 5)
 			goto ret;
+		if (is_am654_pci_dev(pdev) && bar == BAR_0)
+			goto ret;
 		ret = pci_endpoint_test_bar(test, bar);
 		break;
···
 	data = (struct pci_endpoint_test_data *)ent->driver_data;
 	if (data) {
 		test_reg_bar = data->test_reg_bar;
+		test->test_reg_bar = test_reg_bar;
 		test->alignment = data->alignment;
 		irq_type = data->irq_type;
 	}
···
 	pci_disable_device(pdev);
 }

+static const struct pci_endpoint_test_data am654_data = {
+	.test_reg_bar = BAR_2,
+	.alignment = SZ_64K,
+	.irq_type = IRQ_TYPE_MSI,
+};
+
 static const struct pci_device_id pci_endpoint_test_tbl[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA74x) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA72x) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_FREESCALE, 0x81c0) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_SYNOPSYS, 0xedda) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM654),
+	  .driver_data = (kernel_ulong_t)&am654_data
+	},
 	{ }
 };
 MODULE_DEVICE_TABLE(pci, pci_endpoint_test_tbl);
+1 -2
drivers/net/ethernet/realtek/r8169.c
···
 	new_bus->priv = tp;
 	new_bus->parent = &pdev->dev;
 	new_bus->irq[0] = PHY_IGNORE_INTERRUPT;
-	snprintf(new_bus->id, MII_BUS_ID_SIZE, "r8169-%x",
-		 PCI_DEVID(pdev->bus->number, pdev->devfn));
+	snprintf(new_bus->id, MII_BUS_ID_SIZE, "r8169-%x", pci_dev_id(pdev));

 	new_bus->read = r8169_mdio_read_reg;
 	new_bus->write = r8169_mdio_write_reg;
+1 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
···
 		ret = 1;
 	}

-	plat->bus_id = PCI_DEVID(pdev->bus->number, pdev->devfn);
+	plat->bus_id = pci_dev_id(pdev);
 	plat->phy_addr = ret;
 	plat->interface = PHY_INTERFACE_MODE_RMII;
+1 -1
drivers/pci/Makefile
···
 ifdef CONFIG_PCI
 obj-$(CONFIG_PROC_FS)		+= proc.o
 obj-$(CONFIG_SYSFS)		+= slot.o
-obj-$(CONFIG_OF)		+= of.o
 obj-$(CONFIG_ACPI)		+= pci-acpi.o
 endif

+obj-$(CONFIG_OF)		+= of.o
 obj-$(CONFIG_PCI_QUIRKS)	+= quirks.o
 obj-$(CONFIG_PCIEPORTBUS)	+= pcie/
 obj-$(CONFIG_HOTPLUG_PCI)	+= hotplug/
+2 -3
drivers/pci/bus.c
···
 	entry = resource_list_create_entry(res, 0);
 	if (!entry) {
-		printk(KERN_ERR "PCI: can't add host bridge window %pR\n", res);
+		pr_err("PCI: can't add host bridge window %pR\n", res);
 		return;
 	}
···
 	res->end = end;
 	res->flags &= ~IORESOURCE_UNSET;
 	orig_res.flags &= ~IORESOURCE_UNSET;
-	pci_printk(KERN_DEBUG, dev, "%pR clipped to %pR\n",
-			&orig_res, res);
+	pci_info(dev, "%pR clipped to %pR\n", &orig_res, res);

 	return true;
 }
+23 -6
drivers/pci/controller/dwc/Kconfig
···
 	  Say Y here if you want PCIe support on SPEAr13XX SoCs.

 config PCI_KEYSTONE
-	bool "TI Keystone PCIe controller"
-	depends on ARCH_KEYSTONE || (ARM && COMPILE_TEST)
+	bool
+
+config PCI_KEYSTONE_HOST
+	bool "PCI Keystone Host Mode"
+	depends on ARCH_KEYSTONE || ARCH_K3 || ((ARM || ARM64) && COMPILE_TEST)
 	depends on PCI_MSI_IRQ_DOMAIN
 	select PCIE_DW_HOST
+	select PCI_KEYSTONE
+	default y
 	help
-	  Say Y here if you want to enable PCI controller support on Keystone
-	  SoCs. The PCI controller on Keystone is based on DesignWare hardware
-	  and therefore the driver re-uses the DesignWare core functions to
-	  implement the driver.
+	  Enables support for the PCIe controller in the Keystone SoC to
+	  work in host mode. The PCI controller on Keystone is based on
+	  DesignWare hardware and therefore the driver re-uses the
+	  DesignWare core functions to implement the driver.
+
+config PCI_KEYSTONE_EP
+	bool "PCI Keystone Endpoint Mode"
+	depends on ARCH_KEYSTONE || ARCH_K3 || ((ARM || ARM64) && COMPILE_TEST)
+	depends on PCI_ENDPOINT
+	select PCIE_DW_EP
+	select PCI_KEYSTONE
+	help
+	  Enables support for the PCIe controller in the Keystone SoC to
+	  work in endpoint mode. The PCI controller on Keystone is based
+	  on DesignWare hardware and therefore the driver re-uses the
+	  DesignWare core functions to implement the driver.

 config PCI_LAYERSCAPE
 	bool "Freescale Layerscape PCIe controller"
+1
drivers/pci/controller/dwc/Makefile
···
 # depending on whether ACPI, the DT driver, or both are enabled.

 ifdef CONFIG_PCI
+obj-$(CONFIG_ARM64) += pcie-al.o
 obj-$(CONFIG_ARM64) += pcie-hisi.o
 endif
+2 -1
drivers/pci/controller/dwc/pci-dra7xx.c
···
 	dra7xx->irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
 						   &intx_domain_ops, pp);
+	of_node_put(pcie_intc_node);
 	if (!dra7xx->irq_domain) {
 		dev_err(dev, "Failed to get a INTx IRQ domain\n");
 		return -ENODEV;
···
 	return &dra7xx_pcie_epc_features;
 }

-static struct dw_pcie_ep_ops pcie_ep_ops = {
+static const struct dw_pcie_ep_ops pcie_ep_ops = {
 	.ep_init = dra7xx_pcie_ep_init,
 	.raise_irq = dra7xx_pcie_raise_irq,
 	.get_features = dra7xx_pcie_get_features,
+57 -87
drivers/pci/controller/dwc/pci-imx6.c
···
 #define IMX6_PCIE_FLAG_IMX6_PHY			BIT(0)
 #define IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE	BIT(1)
+#define IMX6_PCIE_FLAG_SUPPORTS_SUSPEND		BIT(2)

 struct imx6_pcie_drvdata {
 	enum imx6_pcie_variants variant;
···
 };

 /* Parameters for the waiting for PCIe PHY PLL to lock on i.MX7 */
-#define PHY_PLL_LOCK_WAIT_MAX_RETRIES	2000
-#define PHY_PLL_LOCK_WAIT_USLEEP_MIN	50
 #define PHY_PLL_LOCK_WAIT_USLEEP_MAX	200
+#define PHY_PLL_LOCK_WAIT_TIMEOUT	(2000 * PHY_PLL_LOCK_WAIT_USLEEP_MAX)

 /* PCIe Root Complex registers (memory-mapped) */
 #define PCIE_RC_IMX6_MSI_CAP	0x50
···
 /* PCIe Port Logic registers (memory-mapped) */
 #define PL_OFFSET 0x700
-#define PCIE_PL_PFLR (PL_OFFSET + 0x08)
-#define PCIE_PL_PFLR_LINK_STATE_MASK	(0x3f << 16)
-#define PCIE_PL_PFLR_FORCE_LINK		(1 << 15)
-#define PCIE_PHY_DEBUG_R0 (PL_OFFSET + 0x28)
-#define PCIE_PHY_DEBUG_R1 (PL_OFFSET + 0x2c)

 #define PCIE_PHY_CTRL (PL_OFFSET + 0x114)
-#define PCIE_PHY_CTRL_DATA_LOC 0
-#define PCIE_PHY_CTRL_CAP_ADR_LOC 16
-#define PCIE_PHY_CTRL_CAP_DAT_LOC 17
-#define PCIE_PHY_CTRL_WR_LOC 18
-#define PCIE_PHY_CTRL_RD_LOC 19
+#define PCIE_PHY_CTRL_DATA(x)		FIELD_PREP(GENMASK(15, 0), (x))
+#define PCIE_PHY_CTRL_CAP_ADR		BIT(16)
+#define PCIE_PHY_CTRL_CAP_DAT		BIT(17)
+#define PCIE_PHY_CTRL_WR		BIT(18)
+#define PCIE_PHY_CTRL_RD		BIT(19)

 #define PCIE_PHY_STAT (PL_OFFSET + 0x110)
-#define PCIE_PHY_STAT_ACK_LOC 16
+#define PCIE_PHY_STAT_ACK		BIT(16)

 #define PCIE_LINK_WIDTH_SPEED_CONTROL	0x80C

 /* PHY registers (not memory-mapped) */
 #define PCIE_PHY_ATEOVRD			0x10
-#define PCIE_PHY_ATEOVRD_EN			(0x1 << 2)
+#define PCIE_PHY_ATEOVRD_EN			BIT(2)
 #define PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT	0
 #define PCIE_PHY_ATEOVRD_REF_CLKDIV_MASK	0x1

 #define PCIE_PHY_MPLL_OVRD_IN_LO		0x11
 #define PCIE_PHY_MPLL_MULTIPLIER_SHIFT		2
 #define PCIE_PHY_MPLL_MULTIPLIER_MASK		0x7f
-#define PCIE_PHY_MPLL_MULTIPLIER_OVRD		(0x1 << 9)
+#define PCIE_PHY_MPLL_MULTIPLIER_OVRD		BIT(9)

 #define PCIE_PHY_RX_ASIC_OUT 0x100D
 #define PCIE_PHY_RX_ASIC_OUT_VALID	(1 << 0)
···
 #define PCIE_PHY_CMN_REG26_ATT_MODE	0xBC

 #define PHY_RX_OVRD_IN_LO 0x1005
-#define PHY_RX_OVRD_IN_LO_RX_DATA_EN (1 << 5)
-#define PHY_RX_OVRD_IN_LO_RX_PLL_EN (1 << 3)
+#define PHY_RX_OVRD_IN_LO_RX_DATA_EN		BIT(5)
+#define PHY_RX_OVRD_IN_LO_RX_PLL_EN		BIT(3)

-static int pcie_phy_poll_ack(struct imx6_pcie *imx6_pcie, int exp_val)
+static int pcie_phy_poll_ack(struct imx6_pcie *imx6_pcie, bool exp_val)
 {
 	struct dw_pcie *pci = imx6_pcie->pci;
-	u32 val;
+	bool val;
 	u32 max_iterations = 10;
 	u32 wait_counter = 0;

 	do {
-		val = dw_pcie_readl_dbi(pci, PCIE_PHY_STAT);
-		val = (val >> PCIE_PHY_STAT_ACK_LOC) & 0x1;
+		val = dw_pcie_readl_dbi(pci, PCIE_PHY_STAT) &
+			PCIE_PHY_STAT_ACK;
 		wait_counter++;

 		if (val == exp_val)
···
 	u32 val;
 	int ret;

-	val = addr << PCIE_PHY_CTRL_DATA_LOC;
+	val = PCIE_PHY_CTRL_DATA(addr);
 	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, val);

-	val |= (0x1 << PCIE_PHY_CTRL_CAP_ADR_LOC);
+	val |= PCIE_PHY_CTRL_CAP_ADR;
 	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, val);

-	ret = pcie_phy_poll_ack(imx6_pcie, 1);
+	ret = pcie_phy_poll_ack(imx6_pcie, true);
 	if (ret)
 		return ret;

-	val = addr << PCIE_PHY_CTRL_DATA_LOC;
+	val = PCIE_PHY_CTRL_DATA(addr);
 	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, val);

-	return pcie_phy_poll_ack(imx6_pcie, 0);
+	return pcie_phy_poll_ack(imx6_pcie, false);
 }

 /* Read from the 16-bit PCIe PHY control registers (not memory-mapped) */
-static int pcie_phy_read(struct imx6_pcie *imx6_pcie, int addr, int *data)
+static int pcie_phy_read(struct imx6_pcie *imx6_pcie, int addr, u16 *data)
 {
 	struct dw_pcie *pci = imx6_pcie->pci;
-	u32 val, phy_ctl;
+	u32 phy_ctl;
 	int ret;

 	ret = pcie_phy_wait_ack(imx6_pcie, addr);
···
 		return ret;

 	/* assert Read signal */
-	phy_ctl = 0x1 << PCIE_PHY_CTRL_RD_LOC;
+	phy_ctl = PCIE_PHY_CTRL_RD;
 	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, phy_ctl);

-	ret = pcie_phy_poll_ack(imx6_pcie, 1);
+	ret = pcie_phy_poll_ack(imx6_pcie, true);
 	if (ret)
 		return ret;

-	val = dw_pcie_readl_dbi(pci, PCIE_PHY_STAT);
-	*data = val & 0xffff;
+	*data = dw_pcie_readl_dbi(pci, PCIE_PHY_STAT);

 	/* deassert Read signal */
 	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, 0x00);

-	return pcie_phy_poll_ack(imx6_pcie, 0);
+	return pcie_phy_poll_ack(imx6_pcie, false);
 }

-static int pcie_phy_write(struct imx6_pcie *imx6_pcie, int addr, int data)
+static int pcie_phy_write(struct imx6_pcie *imx6_pcie, int addr, u16 data)
 {
 	struct dw_pcie *pci = imx6_pcie->pci;
 	u32 var;
···
 	if (ret)
 		return ret;

-	var = data << PCIE_PHY_CTRL_DATA_LOC;
+	var = PCIE_PHY_CTRL_DATA(data);
 	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);

 	/* capture data */
-	var |= (0x1 << PCIE_PHY_CTRL_CAP_DAT_LOC);
+	var |= PCIE_PHY_CTRL_CAP_DAT;
 	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);

-	ret = pcie_phy_poll_ack(imx6_pcie, 1);
+	ret = pcie_phy_poll_ack(imx6_pcie, true);
 	if (ret)
 		return ret;

 	/* deassert cap data */
-	var = data << PCIE_PHY_CTRL_DATA_LOC;
+	var = PCIE_PHY_CTRL_DATA(data);
 	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);

 	/* wait for ack de-assertion */
-	ret = pcie_phy_poll_ack(imx6_pcie, 0);
+	ret = pcie_phy_poll_ack(imx6_pcie, false);
 	if (ret)
 		return ret;

 	/* assert wr signal */
-	var = 0x1 << PCIE_PHY_CTRL_WR_LOC;
+	var = PCIE_PHY_CTRL_WR;
 	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);

 	/* wait for ack */
-	ret = pcie_phy_poll_ack(imx6_pcie, 1);
+	ret = pcie_phy_poll_ack(imx6_pcie, true);
 	if (ret)
 		return ret;

 	/* deassert wr signal */
-	var = data << PCIE_PHY_CTRL_DATA_LOC;
+	var = PCIE_PHY_CTRL_DATA(data);
 	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);

 	/* wait for ack de-assertion */
-	ret = pcie_phy_poll_ack(imx6_pcie, 0);
+	ret = pcie_phy_poll_ack(imx6_pcie, false);
 	if (ret)
 		return ret;
···
 static void imx6_pcie_reset_phy(struct imx6_pcie *imx6_pcie)
 {
-	u32 tmp;
+	u16 tmp;

 	if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_IMX6_PHY))
 		return;
···
 		 * reset time is too short, cannot meet the requirement.
 		 * add one ~10us delay here.
 		 */
-		udelay(10);
+		usleep_range(10, 100);
 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
 				   IMX6Q_GPR1_PCIE_REF_CLK_EN, 1 << 16);
 		break;
···
 static void imx7d_pcie_wait_for_phy_pll_lock(struct imx6_pcie *imx6_pcie)
 {
 	u32 val;
-	unsigned int retries;
 	struct device *dev = imx6_pcie->pci->dev;

-	for (retries = 0; retries < PHY_PLL_LOCK_WAIT_MAX_RETRIES; retries++) {
-		regmap_read(imx6_pcie->iomuxc_gpr, IOMUXC_GPR22, &val);
-
-		if (val & IMX7D_GPR22_PCIE_PHY_PLL_LOCKED)
-			return;
-
-		usleep_range(PHY_PLL_LOCK_WAIT_USLEEP_MIN,
-			     PHY_PLL_LOCK_WAIT_USLEEP_MAX);
-	}
-
-	dev_err(dev, "PCIe PLL lock timeout\n");
+	if (regmap_read_poll_timeout(imx6_pcie->iomuxc_gpr,
+				     IOMUXC_GPR22, val,
+				     val & IMX7D_GPR22_PCIE_PHY_PLL_LOCKED,
+				     PHY_PLL_LOCK_WAIT_USLEEP_MAX,
+				     PHY_PLL_LOCK_WAIT_TIMEOUT))
+		dev_err(dev, "PCIe PLL lock timeout\n");
 }

 static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
···
 {
 	unsigned long phy_rate = clk_get_rate(imx6_pcie->pcie_phy);
 	int mult, div;
-	u32 val;
+	u16 val;

 	if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_IMX6_PHY))
 		return 0;
···
 	return 0;
 }

-static int imx6_pcie_wait_for_link(struct imx6_pcie *imx6_pcie)
-{
-	struct dw_pcie *pci = imx6_pcie->pci;
-	struct device *dev = pci->dev;
-
-	/* check if the link is up or not */
-	if (!dw_pcie_wait_for_link(pci))
-		return 0;
-
-	dev_dbg(dev, "DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n",
-		dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R0),
-		dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R1));
-	return -ETIMEDOUT;
-}
-
 static int imx6_pcie_wait_for_speed_change(struct imx6_pcie *imx6_pcie)
 {
 	struct dw_pcie *pci = imx6_pcie->pci;
···
 	}

 	dev_err(dev, "Speed change timeout\n");
-	return -EINVAL;
+	return -ETIMEDOUT;
 }

 static void imx6_pcie_ltssm_enable(struct device *dev)
···
 	/* Start LTSSM. */
 	imx6_pcie_ltssm_enable(dev);

-	ret = imx6_pcie_wait_for_link(imx6_pcie);
+	ret = dw_pcie_wait_for_link(pci);
 	if (ret)
 		goto err_reset_phy;
···
 	}

 	/* Make sure link training is finished as well! */
-	ret = imx6_pcie_wait_for_link(imx6_pcie);
+	ret = dw_pcie_wait_for_link(pci);
 	if (ret) {
 		dev_err(dev, "Failed to bring link up!\n");
 		goto err_reset_phy;
···
 err_reset_phy:
 	dev_dbg(dev, "PHY DEBUG_R0=0x%08x DEBUG_R1=0x%08x\n",
-		dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R0),
-		dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R1));
+		dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG0),
+		dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG1));
 	imx6_pcie_reset_phy(imx6_pcie);
 	return ret;
 }
···
 	}
 }

-static inline bool imx6_pcie_supports_suspend(struct imx6_pcie *imx6_pcie)
-{
-	return (imx6_pcie->drvdata->variant == IMX7D ||
-		imx6_pcie->drvdata->variant == IMX6SX);
-}
-
 static int imx6_pcie_suspend_noirq(struct device *dev)
 {
 	struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);

-	if (!imx6_pcie_supports_suspend(imx6_pcie))
+	if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_SUPPORTS_SUSPEND))
 		return 0;

 	imx6_pcie_pm_turnoff(imx6_pcie);
···
 	struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
 	struct pcie_port *pp = &imx6_pcie->pci->pp;

-	if (!imx6_pcie_supports_suspend(imx6_pcie))
+	if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_SUPPORTS_SUSPEND))
 		return 0;

 	imx6_pcie_assert_core_reset(imx6_pcie);
···
 	[IMX6SX] = {
 		.variant = IMX6SX,
 		.flags = IMX6_PCIE_FLAG_IMX6_PHY |
-			 IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE,
+			 IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE |
+			 IMX6_PCIE_FLAG_SUPPORTS_SUSPEND,
 	},
 	[IMX6QP] = {
 		.variant = IMX6QP,
···
 	[IMX7D] = {
 		.variant = IMX7D,
+		.flags = IMX6_PCIE_FLAG_SUPPORTS_SUSPEND,
 	},
 	[IMX8MQ] = {
 		.variant = IMX8MQ,
···
 		.of_match_table = imx6_pcie_of_match,
 		.suppress_bind_attrs = true,
 		.pm = &imx6_pcie_pm_ops,
+		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
 	},
 	.probe = imx6_pcie_probe,
 	.shutdown = imx6_pcie_shutdown,
+684 -272
drivers/pci/controller/dwc/pci-keystone.c
··· 11 11 12 12 #include <linux/clk.h> 13 13 #include <linux/delay.h> 14 + #include <linux/gpio/consumer.h> 14 15 #include <linux/init.h> 15 16 #include <linux/interrupt.h> 16 17 #include <linux/irqchip/chained_irq.h> ··· 19 18 #include <linux/mfd/syscon.h> 20 19 #include <linux/msi.h> 21 20 #include <linux/of.h> 21 + #include <linux/of_device.h> 22 22 #include <linux/of_irq.h> 23 23 #include <linux/of_pci.h> 24 24 #include <linux/phy/phy.h> ··· 28 26 #include <linux/resource.h> 29 27 #include <linux/signal.h> 30 28 29 + #include "../../pci.h" 31 30 #include "pcie-designware.h" 32 31 33 32 #define PCIE_VENDORID_MASK 0xffff ··· 47 44 #define CFG_TYPE1 BIT(24) 48 45 49 46 #define OB_SIZE 0x030 50 - #define SPACE0_REMOTE_CFG_OFFSET 0x1000 51 47 #define OB_OFFSET_INDEX(n) (0x200 + (8 * (n))) 52 48 #define OB_OFFSET_HI(n) (0x204 + (8 * (n))) 53 49 #define OB_ENABLEN BIT(0) 54 50 #define OB_WIN_SIZE 8 /* 8MB */ 55 51 52 + #define PCIE_LEGACY_IRQ_ENABLE_SET(n) (0x188 + (0x10 * ((n) - 1))) 53 + #define PCIE_LEGACY_IRQ_ENABLE_CLR(n) (0x18c + (0x10 * ((n) - 1))) 54 + #define PCIE_EP_IRQ_SET 0x64 55 + #define PCIE_EP_IRQ_CLR 0x68 56 + #define INT_ENABLE BIT(0) 57 + 56 58 /* IRQ register defines */ 57 59 #define IRQ_EOI 0x050 58 - #define IRQ_STATUS 0x184 59 - #define IRQ_ENABLE_SET 0x188 60 - #define IRQ_ENABLE_CLR 0x18c 61 60 62 61 #define MSI_IRQ 0x054 63 - #define MSI0_IRQ_STATUS 0x104 64 - #define MSI0_IRQ_ENABLE_SET 0x108 65 - #define MSI0_IRQ_ENABLE_CLR 0x10c 66 - #define IRQ_STATUS 0x184 62 + #define MSI_IRQ_STATUS(n) (0x104 + ((n) << 4)) 63 + #define MSI_IRQ_ENABLE_SET(n) (0x108 + ((n) << 4)) 64 + #define MSI_IRQ_ENABLE_CLR(n) (0x10c + ((n) << 4)) 67 65 #define MSI_IRQ_OFFSET 4 66 + 67 + #define IRQ_STATUS(n) (0x184 + ((n) << 4)) 68 + #define IRQ_ENABLE_SET(n) (0x188 + ((n) << 4)) 69 + #define INTx_EN BIT(0) 68 70 69 71 #define ERR_IRQ_STATUS 0x1c4 70 72 #define ERR_IRQ_ENABLE_SET 0x1c8 71 73 #define ERR_AER BIT(5) /* ECRC error */ 74 + #define AM6_ERR_AER BIT(4) /* 
AM6 ECRC error */ 72 75 #define ERR_AXI BIT(4) /* AXI tag lookup fatal error */ 73 76 #define ERR_CORR BIT(3) /* Correctable error */ 74 77 #define ERR_NONFATAL BIT(2) /* Non-fatal error */ ··· 83 74 #define ERR_IRQ_ALL (ERR_AER | ERR_AXI | ERR_CORR | \ 84 75 ERR_NONFATAL | ERR_FATAL | ERR_SYS) 85 76 86 - #define MAX_MSI_HOST_IRQS 8 87 77 /* PCIE controller device IDs */ 88 78 #define PCIE_RC_K2HK 0xb008 89 79 #define PCIE_RC_K2E 0xb009 90 80 #define PCIE_RC_K2L 0xb00a 91 81 #define PCIE_RC_K2G 0xb00b 92 82 83 + #define KS_PCIE_DEV_TYPE_MASK (0x3 << 1) 84 + #define KS_PCIE_DEV_TYPE(mode) ((mode) << 1) 85 + 86 + #define EP 0x0 87 + #define LEG_EP 0x1 88 + #define RC 0x2 89 + 90 + #define EXP_CAP_ID_OFFSET 0x70 91 + 92 + #define KS_PCIE_SYSCLOCKOUTEN BIT(0) 93 + 94 + #define AM654_PCIE_DEV_TYPE_MASK 0x3 95 + #define AM654_WIN_SIZE SZ_64K 96 + 97 + #define APP_ADDR_SPACE_0 (16 * SZ_1K) 98 + 93 99 #define to_keystone_pcie(x) dev_get_drvdata((x)->dev) 100 + 101 + struct ks_pcie_of_data { 102 + enum dw_pcie_device_mode mode; 103 + const struct dw_pcie_host_ops *host_ops; 104 + const struct dw_pcie_ep_ops *ep_ops; 105 + unsigned int version; 106 + }; 94 107 95 108 struct keystone_pcie { 96 109 struct dw_pcie *pci; 97 110 /* PCI Device ID */ 98 111 u32 device_id; 99 - int num_legacy_host_irqs; 100 112 int legacy_host_irqs[PCI_NUM_INTX]; 101 113 struct device_node *legacy_intc_np; 102 114 103 - int num_msi_host_irqs; 104 - int msi_host_irqs[MAX_MSI_HOST_IRQS]; 115 + int msi_host_irq; 105 116 int num_lanes; 106 117 u32 num_viewport; 107 118 struct phy **phy; ··· 130 101 struct irq_domain *legacy_irq_domain; 131 102 struct device_node *np; 132 103 133 - int error_irq; 134 - 135 104 /* Application register space */ 136 105 void __iomem *va_app_base; /* DT 1st resource */ 137 106 struct resource app; 107 + bool is_am6; 138 108 }; 139 - 140 - static inline void update_reg_offset_bit_pos(u32 offset, u32 *reg_offset, 141 - u32 *bit_pos) 142 - { 143 - *reg_offset = offset % 8; 144 
- *bit_pos = offset >> 3; 145 - } 146 - 147 - static phys_addr_t ks_pcie_get_msi_addr(struct pcie_port *pp) 148 - { 149 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 150 - struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 151 - 152 - return ks_pcie->app.start + MSI_IRQ; 153 - } 154 109 155 110 static u32 ks_pcie_app_readl(struct keystone_pcie *ks_pcie, u32 offset) 156 111 { ··· 147 134 writel(val, ks_pcie->va_app_base + offset); 148 135 } 149 136 150 - static void ks_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset) 137 + static void ks_pcie_msi_irq_ack(struct irq_data *data) 151 138 { 152 - struct dw_pcie *pci = ks_pcie->pci; 153 - struct pcie_port *pp = &pci->pp; 154 - struct device *dev = pci->dev; 155 - u32 pending, vector; 156 - int src, virq; 157 - 158 - pending = ks_pcie_app_readl(ks_pcie, MSI0_IRQ_STATUS + (offset << 4)); 159 - 160 - /* 161 - * MSI0 status bit 0-3 shows vectors 0, 8, 16, 24, MSI1 status bit 162 - * shows 1, 9, 17, 25 and so forth 163 - */ 164 - for (src = 0; src < 4; src++) { 165 - if (BIT(src) & pending) { 166 - vector = offset + (src << 3); 167 - virq = irq_linear_revmap(pp->irq_domain, vector); 168 - dev_dbg(dev, "irq: bit %d, vector %d, virq %d\n", 169 - src, vector, virq); 170 - generic_handle_irq(virq); 171 - } 172 - } 173 - } 174 - 175 - static void ks_pcie_msi_irq_ack(int irq, struct pcie_port *pp) 176 - { 177 - u32 reg_offset, bit_pos; 139 + struct pcie_port *pp = irq_data_get_irq_chip_data(data); 178 140 struct keystone_pcie *ks_pcie; 141 + u32 irq = data->hwirq; 179 142 struct dw_pcie *pci; 143 + u32 reg_offset; 144 + u32 bit_pos; 180 145 181 146 pci = to_dw_pcie_from_pp(pp); 182 147 ks_pcie = to_keystone_pcie(pci); 183 - update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos); 184 148 185 - ks_pcie_app_writel(ks_pcie, MSI0_IRQ_STATUS + (reg_offset << 4), 149 + reg_offset = irq % 8; 150 + bit_pos = irq >> 3; 151 + 152 + ks_pcie_app_writel(ks_pcie, MSI_IRQ_STATUS(reg_offset), 186 153 BIT(bit_pos)); 187 154 
ks_pcie_app_writel(ks_pcie, IRQ_EOI, reg_offset + MSI_IRQ_OFFSET); 188 155 } 189 156 190 - static void ks_pcie_msi_set_irq(struct pcie_port *pp, int irq) 157 + static void ks_pcie_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) 191 158 { 192 - u32 reg_offset, bit_pos; 193 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 194 - struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 159 + struct pcie_port *pp = irq_data_get_irq_chip_data(data); 160 + struct keystone_pcie *ks_pcie; 161 + struct dw_pcie *pci; 162 + u64 msi_target; 195 163 196 - update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos); 197 - ks_pcie_app_writel(ks_pcie, MSI0_IRQ_ENABLE_SET + (reg_offset << 4), 198 - BIT(bit_pos)); 164 + pci = to_dw_pcie_from_pp(pp); 165 + ks_pcie = to_keystone_pcie(pci); 166 + 167 + msi_target = ks_pcie->app.start + MSI_IRQ; 168 + msg->address_lo = lower_32_bits(msi_target); 169 + msg->address_hi = upper_32_bits(msi_target); 170 + msg->data = data->hwirq; 171 + 172 + dev_dbg(pci->dev, "msi#%d address_hi %#x address_lo %#x\n", 173 + (int)data->hwirq, msg->address_hi, msg->address_lo); 199 174 } 200 175 201 - static void ks_pcie_msi_clear_irq(struct pcie_port *pp, int irq) 176 + static int ks_pcie_msi_set_affinity(struct irq_data *irq_data, 177 + const struct cpumask *mask, bool force) 202 178 { 203 - u32 reg_offset, bit_pos; 204 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 205 - struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 206 - 207 - update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos); 208 - ks_pcie_app_writel(ks_pcie, MSI0_IRQ_ENABLE_CLR + (reg_offset << 4), 209 - BIT(bit_pos)); 179 + return -EINVAL; 210 180 } 181 + 182 + static void ks_pcie_msi_mask(struct irq_data *data) 183 + { 184 + struct pcie_port *pp = irq_data_get_irq_chip_data(data); 185 + struct keystone_pcie *ks_pcie; 186 + u32 irq = data->hwirq; 187 + struct dw_pcie *pci; 188 + unsigned long flags; 189 + u32 reg_offset; 190 + u32 bit_pos; 191 + 192 + raw_spin_lock_irqsave(&pp->lock, 
flags); 193 + 194 + pci = to_dw_pcie_from_pp(pp); 195 + ks_pcie = to_keystone_pcie(pci); 196 + 197 + reg_offset = irq % 8; 198 + bit_pos = irq >> 3; 199 + 200 + ks_pcie_app_writel(ks_pcie, MSI_IRQ_ENABLE_CLR(reg_offset), 201 + BIT(bit_pos)); 202 + 203 + raw_spin_unlock_irqrestore(&pp->lock, flags); 204 + } 205 + 206 + static void ks_pcie_msi_unmask(struct irq_data *data) 207 + { 208 + struct pcie_port *pp = irq_data_get_irq_chip_data(data); 209 + struct keystone_pcie *ks_pcie; 210 + u32 irq = data->hwirq; 211 + struct dw_pcie *pci; 212 + unsigned long flags; 213 + u32 reg_offset; 214 + u32 bit_pos; 215 + 216 + raw_spin_lock_irqsave(&pp->lock, flags); 217 + 218 + pci = to_dw_pcie_from_pp(pp); 219 + ks_pcie = to_keystone_pcie(pci); 220 + 221 + reg_offset = irq % 8; 222 + bit_pos = irq >> 3; 223 + 224 + ks_pcie_app_writel(ks_pcie, MSI_IRQ_ENABLE_SET(reg_offset), 225 + BIT(bit_pos)); 226 + 227 + raw_spin_unlock_irqrestore(&pp->lock, flags); 228 + } 229 + 230 + static struct irq_chip ks_pcie_msi_irq_chip = { 231 + .name = "KEYSTONE-PCI-MSI", 232 + .irq_ack = ks_pcie_msi_irq_ack, 233 + .irq_compose_msi_msg = ks_pcie_compose_msi_msg, 234 + .irq_set_affinity = ks_pcie_msi_set_affinity, 235 + .irq_mask = ks_pcie_msi_mask, 236 + .irq_unmask = ks_pcie_msi_unmask, 237 + }; 211 238 212 239 static int ks_pcie_msi_host_init(struct pcie_port *pp) 213 240 { 241 + pp->msi_irq_chip = &ks_pcie_msi_irq_chip; 214 242 return dw_pcie_allocate_domains(pp); 215 - } 216 - 217 - static void ks_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie) 218 - { 219 - int i; 220 - 221 - for (i = 0; i < PCI_NUM_INTX; i++) 222 - ks_pcie_app_writel(ks_pcie, IRQ_ENABLE_SET + (i << 4), 0x1); 223 243 } 224 244 225 245 static void ks_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, ··· 263 217 u32 pending; 264 218 int virq; 265 219 266 - pending = ks_pcie_app_readl(ks_pcie, IRQ_STATUS + (offset << 4)); 220 + pending = ks_pcie_app_readl(ks_pcie, IRQ_STATUS(offset)); 267 221 268 222 if (BIT(0) & 
pending) { 269 223 virq = irq_linear_revmap(ks_pcie->legacy_irq_domain, offset); ··· 273 227 274 228 /* EOI the INTx interrupt */ 275 229 ks_pcie_app_writel(ks_pcie, IRQ_EOI, offset); 230 + } 231 + 232 + /* 233 + * Dummy function so that DW core doesn't configure MSI 234 + */ 235 + static int ks_pcie_am654_msi_host_init(struct pcie_port *pp) 236 + { 237 + return 0; 276 238 } 277 239 278 240 static void ks_pcie_enable_error_irq(struct keystone_pcie *ks_pcie) ··· 309 255 if (reg & ERR_CORR) 310 256 dev_dbg(dev, "Correctable Error\n"); 311 257 312 - if (reg & ERR_AXI) 258 + if (!ks_pcie->is_am6 && (reg & ERR_AXI)) 313 259 dev_err(dev, "AXI tag lookup fatal Error\n"); 314 260 315 - if (reg & ERR_AER) 261 + if (reg & ERR_AER || (ks_pcie->is_am6 && (reg & AM6_ERR_AER))) 316 262 dev_err(dev, "ECRC Error\n"); 317 263 318 264 ks_pcie_app_writel(ks_pcie, ERR_IRQ_STATUS, reg); ··· 410 356 dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0); 411 357 ks_pcie_clear_dbi_mode(ks_pcie); 412 358 359 + if (ks_pcie->is_am6) 360 + return; 361 + 413 362 val = ilog2(OB_WIN_SIZE); 414 363 ks_pcie_app_writel(ks_pcie, OB_SIZE, val); 415 364 ··· 502 445 return (val == PORT_LOGIC_LTSSM_STATE_L0); 503 446 } 504 447 505 - static void ks_pcie_initiate_link_train(struct keystone_pcie *ks_pcie) 448 + static void ks_pcie_stop_link(struct dw_pcie *pci) 506 449 { 450 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 507 451 u32 val; 508 452 509 453 /* Disable Link training */ 510 454 val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 511 455 val &= ~LTSSM_EN_VAL; 512 456 ks_pcie_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val); 457 + } 458 + 459 + static int ks_pcie_start_link(struct dw_pcie *pci) 460 + { 461 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 462 + struct device *dev = pci->dev; 463 + u32 val; 464 + 465 + if (dw_pcie_link_up(pci)) { 466 + dev_dbg(dev, "link is already up\n"); 467 + return 0; 468 + } 513 469 514 470 /* Initiate Link Training */ 515 471 val = 
ks_pcie_app_readl(ks_pcie, CMD_STATUS); 516 472 ks_pcie_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val); 517 - } 518 473 519 - /** 520 - * ks_pcie_dw_host_init() - initialize host for v3_65 dw hardware 521 - * 522 - * Ioremap the register resources, initialize legacy irq domain 523 - * and call dw_pcie_v3_65_host_init() API to initialize the Keystone 524 - * PCI host controller. 525 - */ 526 - static int __init ks_pcie_dw_host_init(struct keystone_pcie *ks_pcie) 527 - { 528 - struct dw_pcie *pci = ks_pcie->pci; 529 - struct pcie_port *pp = &pci->pp; 530 - struct device *dev = pci->dev; 531 - struct platform_device *pdev = to_platform_device(dev); 532 - struct resource *res; 533 - 534 - /* Index 0 is the config reg. space address */ 535 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 536 - pci->dbi_base = devm_pci_remap_cfg_resource(dev, res); 537 - if (IS_ERR(pci->dbi_base)) 538 - return PTR_ERR(pci->dbi_base); 539 - 540 - /* 541 - * We set these same and is used in pcie rd/wr_other_conf 542 - * functions 543 - */ 544 - pp->va_cfg0_base = pci->dbi_base + SPACE0_REMOTE_CFG_OFFSET; 545 - pp->va_cfg1_base = pp->va_cfg0_base; 546 - 547 - /* Index 1 is the application reg. 
space address */ 548 - res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 549 - ks_pcie->va_app_base = devm_ioremap_resource(dev, res); 550 - if (IS_ERR(ks_pcie->va_app_base)) 551 - return PTR_ERR(ks_pcie->va_app_base); 552 - 553 - ks_pcie->app = *res; 554 - 555 - /* Create legacy IRQ domain */ 556 - ks_pcie->legacy_irq_domain = 557 - irq_domain_add_linear(ks_pcie->legacy_intc_np, 558 - PCI_NUM_INTX, 559 - &ks_pcie_legacy_irq_domain_ops, 560 - NULL); 561 - if (!ks_pcie->legacy_irq_domain) { 562 - dev_err(dev, "Failed to add irq domain for legacy irqs\n"); 563 - return -EINVAL; 564 - } 565 - 566 - return dw_pcie_host_init(pp); 474 + return 0; 567 475 } 568 476 569 477 static void ks_pcie_quirk(struct pci_dev *dev) ··· 574 552 } 575 553 DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, ks_pcie_quirk); 576 554 577 - static int ks_pcie_establish_link(struct keystone_pcie *ks_pcie) 578 - { 579 - struct dw_pcie *pci = ks_pcie->pci; 580 - struct device *dev = pci->dev; 581 - 582 - if (dw_pcie_link_up(pci)) { 583 - dev_info(dev, "Link already up\n"); 584 - return 0; 585 - } 586 - 587 - ks_pcie_initiate_link_train(ks_pcie); 588 - 589 - /* check if the link is up or not */ 590 - if (!dw_pcie_wait_for_link(pci)) 591 - return 0; 592 - 593 - dev_err(dev, "phy link never came up\n"); 594 - return -ETIMEDOUT; 595 - } 596 - 597 555 static void ks_pcie_msi_irq_handler(struct irq_desc *desc) 598 556 { 599 - unsigned int irq = irq_desc_get_irq(desc); 557 + unsigned int irq = desc->irq_data.hwirq; 600 558 struct keystone_pcie *ks_pcie = irq_desc_get_handler_data(desc); 601 - u32 offset = irq - ks_pcie->msi_host_irqs[0]; 559 + u32 offset = irq - ks_pcie->msi_host_irq; 602 560 struct dw_pcie *pci = ks_pcie->pci; 561 + struct pcie_port *pp = &pci->pp; 603 562 struct device *dev = pci->dev; 604 563 struct irq_chip *chip = irq_desc_get_chip(desc); 564 + u32 vector, virq, reg, pos; 605 565 606 566 dev_dbg(dev, "%s, irq %d\n", __func__, irq); 607 567 ··· 593 589 * ack operation. 
594 590 */ 595 591 chained_irq_enter(chip, desc); 596 - ks_pcie_handle_msi_irq(ks_pcie, offset); 592 + 593 + reg = ks_pcie_app_readl(ks_pcie, MSI_IRQ_STATUS(offset)); 594 + /* 595 + * MSI0 status bit 0-3 shows vectors 0, 8, 16, 24, MSI1 status bit 596 + * shows 1, 9, 17, 25 and so forth 597 + */ 598 + for (pos = 0; pos < 4; pos++) { 599 + if (!(reg & BIT(pos))) 600 + continue; 601 + 602 + vector = offset + (pos << 3); 603 + virq = irq_linear_revmap(pp->irq_domain, vector); 604 + dev_dbg(dev, "irq: bit %d, vector %d, virq %d\n", pos, vector, 605 + virq); 606 + generic_handle_irq(virq); 607 + } 608 + 597 609 chained_irq_exit(chip, desc); 598 610 } 599 611 ··· 642 622 chained_irq_exit(chip, desc); 643 623 } 644 624 645 - static int ks_pcie_get_irq_controller_info(struct keystone_pcie *ks_pcie, 646 - char *controller, int *num_irqs) 625 + static int ks_pcie_config_msi_irq(struct keystone_pcie *ks_pcie) 647 626 { 648 - int temp, max_host_irqs, legacy = 1, *host_irqs; 649 627 struct device *dev = ks_pcie->pci->dev; 650 - struct device_node *np_pcie = dev->of_node, **np_temp; 628 + struct device_node *np = ks_pcie->np; 629 + struct device_node *intc_np; 630 + struct irq_data *irq_data; 631 + int irq_count, irq, ret, i; 651 632 652 - if (!strcmp(controller, "msi-interrupt-controller")) 653 - legacy = 0; 654 - 655 - if (legacy) { 656 - np_temp = &ks_pcie->legacy_intc_np; 657 - max_host_irqs = PCI_NUM_INTX; 658 - host_irqs = &ks_pcie->legacy_host_irqs[0]; 659 - } else { 660 - np_temp = &ks_pcie->msi_intc_np; 661 - max_host_irqs = MAX_MSI_HOST_IRQS; 662 - host_irqs = &ks_pcie->msi_host_irqs[0]; 663 - } 664 - 665 - /* interrupt controller is in a child node */ 666 - *np_temp = of_get_child_by_name(np_pcie, controller); 667 - if (!(*np_temp)) { 668 - dev_err(dev, "Node for %s is absent\n", controller); 669 - return -EINVAL; 670 - } 671 - 672 - temp = of_irq_count(*np_temp); 673 - if (!temp) { 674 - dev_err(dev, "No IRQ entries in %s\n", controller); 675 - of_node_put(*np_temp); 
676 - return -EINVAL; 677 - } 678 - 679 - if (temp > max_host_irqs) 680 - dev_warn(dev, "Too many %s interrupts defined %u\n", 681 - (legacy ? "legacy" : "MSI"), temp); 682 - 683 - /* 684 - * support upto max_host_irqs. In dt from index 0 to 3 (legacy) or 0 to 685 - * 7 (MSI) 686 - */ 687 - for (temp = 0; temp < max_host_irqs; temp++) { 688 - host_irqs[temp] = irq_of_parse_and_map(*np_temp, temp); 689 - if (!host_irqs[temp]) 690 - break; 691 - } 692 - 693 - of_node_put(*np_temp); 694 - 695 - if (temp) { 696 - *num_irqs = temp; 633 + if (!IS_ENABLED(CONFIG_PCI_MSI)) 697 634 return 0; 635 + 636 + intc_np = of_get_child_by_name(np, "msi-interrupt-controller"); 637 + if (!intc_np) { 638 + if (ks_pcie->is_am6) 639 + return 0; 640 + dev_warn(dev, "msi-interrupt-controller node is absent\n"); 641 + return -EINVAL; 698 642 } 699 643 700 - return -EINVAL; 644 + irq_count = of_irq_count(intc_np); 645 + if (!irq_count) { 646 + dev_err(dev, "No IRQ entries in msi-interrupt-controller\n"); 647 + ret = -EINVAL; 648 + goto err; 649 + } 650 + 651 + for (i = 0; i < irq_count; i++) { 652 + irq = irq_of_parse_and_map(intc_np, i); 653 + if (!irq) { 654 + ret = -EINVAL; 655 + goto err; 656 + } 657 + 658 + if (!ks_pcie->msi_host_irq) { 659 + irq_data = irq_get_irq_data(irq); 660 + if (!irq_data) { 661 + ret = -EINVAL; 662 + goto err; 663 + } 664 + ks_pcie->msi_host_irq = irq_data->hwirq; 665 + } 666 + 667 + irq_set_chained_handler_and_data(irq, ks_pcie_msi_irq_handler, 668 + ks_pcie); 669 + } 670 + 671 + of_node_put(intc_np); 672 + return 0; 673 + 674 + err: 675 + of_node_put(intc_np); 676 + return ret; 701 677 } 702 678 703 - static void ks_pcie_setup_interrupts(struct keystone_pcie *ks_pcie) 679 + static int ks_pcie_config_legacy_irq(struct keystone_pcie *ks_pcie) 704 680 { 705 - int i; 681 + struct device *dev = ks_pcie->pci->dev; 682 + struct irq_domain *legacy_irq_domain; 683 + struct device_node *np = ks_pcie->np; 684 + struct device_node *intc_np; 685 + int irq_count, irq, ret = 
0, i; 706 686 707 - /* Legacy IRQ */ 708 - for (i = 0; i < ks_pcie->num_legacy_host_irqs; i++) { 709 - irq_set_chained_handler_and_data(ks_pcie->legacy_host_irqs[i], 687 + intc_np = of_get_child_by_name(np, "legacy-interrupt-controller"); 688 + if (!intc_np) { 689 + /* 690 + * Since legacy interrupts are modeled as edge-interrupts in 691 + * AM6, keep it disabled for now. 692 + */ 693 + if (ks_pcie->is_am6) 694 + return 0; 695 + dev_warn(dev, "legacy-interrupt-controller node is absent\n"); 696 + return -EINVAL; 697 + } 698 + 699 + irq_count = of_irq_count(intc_np); 700 + if (!irq_count) { 701 + dev_err(dev, "No IRQ entries in legacy-interrupt-controller\n"); 702 + ret = -EINVAL; 703 + goto err; 704 + } 705 + 706 + for (i = 0; i < irq_count; i++) { 707 + irq = irq_of_parse_and_map(intc_np, i); 708 + if (!irq) { 709 + ret = -EINVAL; 710 + goto err; 711 + } 712 + ks_pcie->legacy_host_irqs[i] = irq; 713 + 714 + irq_set_chained_handler_and_data(irq, 710 715 ks_pcie_legacy_irq_handler, 711 716 ks_pcie); 712 717 } 713 - ks_pcie_enable_legacy_irqs(ks_pcie); 714 718 715 - /* MSI IRQ */ 716 - if (IS_ENABLED(CONFIG_PCI_MSI)) { 717 - for (i = 0; i < ks_pcie->num_msi_host_irqs; i++) { 718 - irq_set_chained_handler_and_data(ks_pcie->msi_host_irqs[i], 719 - ks_pcie_msi_irq_handler, 720 - ks_pcie); 721 - } 719 + legacy_irq_domain = 720 + irq_domain_add_linear(intc_np, PCI_NUM_INTX, 721 + &ks_pcie_legacy_irq_domain_ops, NULL); 722 + if (!legacy_irq_domain) { 723 + dev_err(dev, "Failed to add irq domain for legacy irqs\n"); 724 + ret = -EINVAL; 725 + goto err; 722 726 } 727 + ks_pcie->legacy_irq_domain = legacy_irq_domain; 723 728 724 - if (ks_pcie->error_irq > 0) 725 - ks_pcie_enable_error_irq(ks_pcie); 729 + for (i = 0; i < PCI_NUM_INTX; i++) 730 + ks_pcie_app_writel(ks_pcie, IRQ_ENABLE_SET(i), INTx_EN); 731 + 732 + err: 733 + of_node_put(intc_np); 734 + return ret; 726 735 } 727 736 737 + #ifdef CONFIG_ARM 728 738 /* 729 739 * When a PCI device does not exist during config 
cycles, keystone host gets a 730 740 * bus error instead of returning 0xffffffff. This handler always returns 0 ··· 774 724 775 725 return 0; 776 726 } 727 + #endif 777 728 778 729 static int __init ks_pcie_init_id(struct keystone_pcie *ks_pcie) 779 730 { ··· 793 742 if (ret) 794 743 return ret; 795 744 745 + dw_pcie_dbi_ro_wr_en(pci); 796 746 dw_pcie_writew_dbi(pci, PCI_VENDOR_ID, id & PCIE_VENDORID_MASK); 797 747 dw_pcie_writew_dbi(pci, PCI_DEVICE_ID, id >> PCIE_DEVICEID_SHIFT); 748 + dw_pcie_dbi_ro_wr_dis(pci); 798 749 799 750 return 0; 800 751 } ··· 807 754 struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 808 755 int ret; 809 756 757 + ret = ks_pcie_config_legacy_irq(ks_pcie); 758 + if (ret) 759 + return ret; 760 + 761 + ret = ks_pcie_config_msi_irq(ks_pcie); 762 + if (ret) 763 + return ret; 764 + 810 765 dw_pcie_setup_rc(pp); 811 766 812 - ks_pcie_establish_link(ks_pcie); 767 + ks_pcie_stop_link(pci); 813 768 ks_pcie_setup_rc_app_regs(ks_pcie); 814 - ks_pcie_setup_interrupts(ks_pcie); 815 769 writew(PCI_IO_RANGE_TYPE_32 | (PCI_IO_RANGE_TYPE_32 << 8), 816 770 pci->dbi_base + PCI_IO_BASE); 817 771 ··· 826 766 if (ret < 0) 827 767 return ret; 828 768 769 + #ifdef CONFIG_ARM 829 770 /* 830 771 * PCIe access errors that result into OCP errors are caught by ARM as 831 772 * "External aborts" 832 773 */ 833 774 hook_fault_code(17, ks_pcie_fault, SIGBUS, 0, 834 775 "Asynchronous external abort"); 776 + #endif 777 + 778 + ks_pcie_start_link(pci); 779 + dw_pcie_wait_for_link(pci); 835 780 836 781 return 0; 837 782 } ··· 845 780 .rd_other_conf = ks_pcie_rd_other_conf, 846 781 .wr_other_conf = ks_pcie_wr_other_conf, 847 782 .host_init = ks_pcie_host_init, 848 - .msi_set_irq = ks_pcie_msi_set_irq, 849 - .msi_clear_irq = ks_pcie_msi_clear_irq, 850 - .get_msi_addr = ks_pcie_get_msi_addr, 851 783 .msi_host_init = ks_pcie_msi_host_init, 852 - .msi_irq_ack = ks_pcie_msi_irq_ack, 853 784 .scan_bus = ks_pcie_v3_65_scan_bus, 785 + }; 786 + 787 + static const struct 
dw_pcie_host_ops ks_pcie_am654_host_ops = { 788 + .host_init = ks_pcie_host_init, 789 + .msi_host_init = ks_pcie_am654_msi_host_init, 854 790 }; 855 791 856 792 static irqreturn_t ks_pcie_err_irq_handler(int irq, void *priv) ··· 867 801 struct dw_pcie *pci = ks_pcie->pci; 868 802 struct pcie_port *pp = &pci->pp; 869 803 struct device *dev = &pdev->dev; 804 + struct resource *res; 870 805 int ret; 871 806 872 - ret = ks_pcie_get_irq_controller_info(ks_pcie, 873 - "legacy-interrupt-controller", 874 - &ks_pcie->num_legacy_host_irqs); 875 - if (ret) 876 - return ret; 807 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config"); 808 + pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, res); 809 + if (IS_ERR(pp->va_cfg0_base)) 810 + return PTR_ERR(pp->va_cfg0_base); 877 811 878 - if (IS_ENABLED(CONFIG_PCI_MSI)) { 879 - ret = ks_pcie_get_irq_controller_info(ks_pcie, 880 - "msi-interrupt-controller", 881 - &ks_pcie->num_msi_host_irqs); 882 - if (ret) 883 - return ret; 884 - } 812 + pp->va_cfg1_base = pp->va_cfg0_base; 885 813 886 - /* 887 - * Index 0 is the platform interrupt for error interrupt 888 - * from RC. This is optional. 
889 - */ 890 - ks_pcie->error_irq = irq_of_parse_and_map(ks_pcie->np, 0); 891 - if (ks_pcie->error_irq <= 0) 892 - dev_info(dev, "no error IRQ defined\n"); 893 - else { 894 - ret = request_irq(ks_pcie->error_irq, ks_pcie_err_irq_handler, 895 - IRQF_SHARED, "pcie-error-irq", ks_pcie); 896 - if (ret < 0) { 897 - dev_err(dev, "failed to request error IRQ %d\n", 898 - ks_pcie->error_irq); 899 - return ret; 900 - } 901 - } 902 - 903 - pp->ops = &ks_pcie_host_ops; 904 - ret = ks_pcie_dw_host_init(ks_pcie); 814 + ret = dw_pcie_host_init(pp); 905 815 if (ret) { 906 816 dev_err(dev, "failed to initialize host\n"); 907 817 return ret; ··· 886 844 return 0; 887 845 } 888 846 889 - static const struct of_device_id ks_pcie_of_match[] = { 890 - { 891 - .type = "pci", 892 - .compatible = "ti,keystone-pcie", 893 - }, 894 - { }, 895 - }; 847 + static u32 ks_pcie_am654_read_dbi2(struct dw_pcie *pci, void __iomem *base, 848 + u32 reg, size_t size) 849 + { 850 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 851 + u32 val; 852 + 853 + ks_pcie_set_dbi_mode(ks_pcie); 854 + dw_pcie_read(base + reg, size, &val); 855 + ks_pcie_clear_dbi_mode(ks_pcie); 856 + return val; 857 + } 858 + 859 + static void ks_pcie_am654_write_dbi2(struct dw_pcie *pci, void __iomem *base, 860 + u32 reg, size_t size, u32 val) 861 + { 862 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 863 + 864 + ks_pcie_set_dbi_mode(ks_pcie); 865 + dw_pcie_write(base + reg, size, val); 866 + ks_pcie_clear_dbi_mode(ks_pcie); 867 + } 896 868 897 869 static const struct dw_pcie_ops ks_pcie_dw_pcie_ops = { 870 + .start_link = ks_pcie_start_link, 871 + .stop_link = ks_pcie_stop_link, 898 872 .link_up = ks_pcie_link_up, 873 + .read_dbi2 = ks_pcie_am654_read_dbi2, 874 + .write_dbi2 = ks_pcie_am654_write_dbi2, 899 875 }; 876 + 877 + static void ks_pcie_am654_ep_init(struct dw_pcie_ep *ep) 878 + { 879 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 880 + int flags; 881 + 882 + ep->page_size = AM654_WIN_SIZE; 883 + flags = 
+		PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_32;
+	dw_pcie_writel_dbi2(pci, PCI_BASE_ADDRESS_0, APP_ADDR_SPACE_0 - 1);
+	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, flags);
+}
+
+static void ks_pcie_am654_raise_legacy_irq(struct keystone_pcie *ks_pcie)
+{
+	struct dw_pcie *pci = ks_pcie->pci;
+	u8 int_pin;
+
+	int_pin = dw_pcie_readb_dbi(pci, PCI_INTERRUPT_PIN);
+	if (int_pin == 0 || int_pin > 4)
+		return;
+
+	ks_pcie_app_writel(ks_pcie, PCIE_LEGACY_IRQ_ENABLE_SET(int_pin),
+			   INT_ENABLE);
+	ks_pcie_app_writel(ks_pcie, PCIE_EP_IRQ_SET, INT_ENABLE);
+	mdelay(1);
+	ks_pcie_app_writel(ks_pcie, PCIE_EP_IRQ_CLR, INT_ENABLE);
+	ks_pcie_app_writel(ks_pcie, PCIE_LEGACY_IRQ_ENABLE_CLR(int_pin),
+			   INT_ENABLE);
+}
+
+static int ks_pcie_am654_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
+				   enum pci_epc_irq_type type,
+				   u16 interrupt_num)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
+
+	switch (type) {
+	case PCI_EPC_IRQ_LEGACY:
+		ks_pcie_am654_raise_legacy_irq(ks_pcie);
+		break;
+	case PCI_EPC_IRQ_MSI:
+		dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
+		break;
+	default:
+		dev_err(pci->dev, "UNKNOWN IRQ type\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static const struct pci_epc_features ks_pcie_am654_epc_features = {
+	.linkup_notifier = false,
+	.msi_capable = true,
+	.msix_capable = false,
+	.reserved_bar = 1 << BAR_0 | 1 << BAR_1,
+	.bar_fixed_64bit = 1 << BAR_0,
+	.bar_fixed_size[2] = SZ_1M,
+	.bar_fixed_size[3] = SZ_64K,
+	.bar_fixed_size[4] = 256,
+	.bar_fixed_size[5] = SZ_1M,
+	.align = SZ_1M,
+};
+
+static const struct pci_epc_features*
+ks_pcie_am654_get_features(struct dw_pcie_ep *ep)
+{
+	return &ks_pcie_am654_epc_features;
+}
+
+static const struct dw_pcie_ep_ops ks_pcie_am654_ep_ops = {
+	.ep_init = ks_pcie_am654_ep_init,
+	.raise_irq = ks_pcie_am654_raise_irq,
+	.get_features = &ks_pcie_am654_get_features,
+};
+
+static int __init ks_pcie_add_pcie_ep(struct keystone_pcie *ks_pcie,
+				      struct platform_device *pdev)
+{
+	int ret;
+	struct dw_pcie_ep *ep;
+	struct resource *res;
+	struct device *dev = &pdev->dev;
+	struct dw_pcie *pci = ks_pcie->pci;
+
+	ep = &pci->ep;
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space");
+	if (!res)
+		return -EINVAL;
+
+	ep->phys_base = res->start;
+	ep->addr_size = resource_size(res);
+
+	ret = dw_pcie_ep_init(ep);
+	if (ret) {
+		dev_err(dev, "failed to initialize endpoint\n");
+		return ret;
+	}
+
+	return 0;
+}
 
 static void ks_pcie_disable_phy(struct keystone_pcie *ks_pcie)
 {
···
 	int num_lanes = ks_pcie->num_lanes;
 
 	for (i = 0; i < num_lanes; i++) {
+		ret = phy_reset(ks_pcie->phy[i]);
+		if (ret < 0)
+			goto err_phy;
+
 		ret = phy_init(ks_pcie->phy[i]);
 		if (ret < 0)
 			goto err_phy;
···
 	return ret;
 }
 
+static int ks_pcie_set_mode(struct device *dev)
+{
+	struct device_node *np = dev->of_node;
+	struct regmap *syscon;
+	u32 val;
+	u32 mask;
+	int ret = 0;
+
+	syscon = syscon_regmap_lookup_by_phandle(np, "ti,syscon-pcie-mode");
+	if (IS_ERR(syscon))
+		return 0;
+
+	mask = KS_PCIE_DEV_TYPE_MASK | KS_PCIE_SYSCLOCKOUTEN;
+	val = KS_PCIE_DEV_TYPE(RC) | KS_PCIE_SYSCLOCKOUTEN;
+
+	ret = regmap_update_bits(syscon, 0, mask, val);
+	if (ret) {
+		dev_err(dev, "failed to set pcie mode\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static int ks_pcie_am654_set_mode(struct device *dev,
+				  enum dw_pcie_device_mode mode)
+{
+	struct device_node *np = dev->of_node;
+	struct regmap *syscon;
+	u32 val;
+	u32 mask;
+	int ret = 0;
+
+	syscon = syscon_regmap_lookup_by_phandle(np, "ti,syscon-pcie-mode");
+	if (IS_ERR(syscon))
+		return 0;
+
+	mask = AM654_PCIE_DEV_TYPE_MASK;
+
+	switch (mode) {
+	case DW_PCIE_RC_TYPE:
+		val = RC;
+		break;
+	case DW_PCIE_EP_TYPE:
+		val = EP;
+		break;
+	default:
+		dev_err(dev, "INVALID device type %d\n", mode);
+		return -EINVAL;
+	}
+
+	ret = regmap_update_bits(syscon, 0, mask, val);
+	if (ret) {
+		dev_err(dev, "failed to set pcie mode\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static void ks_pcie_set_link_speed(struct dw_pcie *pci, int link_speed)
+{
+	u32 val;
+
+	dw_pcie_dbi_ro_wr_en(pci);
+
+	val = dw_pcie_readl_dbi(pci, EXP_CAP_ID_OFFSET + PCI_EXP_LNKCAP);
+	if ((val & PCI_EXP_LNKCAP_SLS) != link_speed) {
+		val &= ~((u32)PCI_EXP_LNKCAP_SLS);
+		val |= link_speed;
+		dw_pcie_writel_dbi(pci, EXP_CAP_ID_OFFSET + PCI_EXP_LNKCAP,
+				   val);
+	}
+
+	val = dw_pcie_readl_dbi(pci, EXP_CAP_ID_OFFSET + PCI_EXP_LNKCTL2);
+	if ((val & PCI_EXP_LNKCAP_SLS) != link_speed) {
+		val &= ~((u32)PCI_EXP_LNKCAP_SLS);
+		val |= link_speed;
+		dw_pcie_writel_dbi(pci, EXP_CAP_ID_OFFSET + PCI_EXP_LNKCTL2,
+				   val);
+	}
+
+	dw_pcie_dbi_ro_wr_dis(pci);
+}
+
+static const struct ks_pcie_of_data ks_pcie_rc_of_data = {
+	.host_ops = &ks_pcie_host_ops,
+	.version = 0x365A,
+};
+
+static const struct ks_pcie_of_data ks_pcie_am654_rc_of_data = {
+	.host_ops = &ks_pcie_am654_host_ops,
+	.mode = DW_PCIE_RC_TYPE,
+	.version = 0x490A,
+};
+
+static const struct ks_pcie_of_data ks_pcie_am654_ep_of_data = {
+	.ep_ops = &ks_pcie_am654_ep_ops,
+	.mode = DW_PCIE_EP_TYPE,
+	.version = 0x490A,
+};
+
+static const struct of_device_id ks_pcie_of_match[] = {
+	{
+		.type = "pci",
+		.data = &ks_pcie_rc_of_data,
+		.compatible = "ti,keystone-pcie",
+	},
+	{
+		.data = &ks_pcie_am654_rc_of_data,
+		.compatible = "ti,am654-pcie-rc",
+	},
+	{
+		.data = &ks_pcie_am654_ep_of_data,
+		.compatible = "ti,am654-pcie-ep",
+	},
+	{ },
+};
+
 static int __init ks_pcie_probe(struct platform_device *pdev)
 {
+	const struct dw_pcie_host_ops *host_ops;
+	const struct dw_pcie_ep_ops *ep_ops;
 	struct device *dev = &pdev->dev;
 	struct device_node *np = dev->of_node;
+	const struct ks_pcie_of_data *data;
+	const struct of_device_id *match;
+	enum dw_pcie_device_mode mode;
 	struct dw_pcie *pci;
 	struct keystone_pcie *ks_pcie;
 	struct device_link **link;
+	struct gpio_desc *gpiod;
+	void __iomem *atu_base;
+	struct resource *res;
+	unsigned int version;
+	void __iomem *base;
 	u32 num_viewport;
 	struct phy **phy;
+	int link_speed;
 	u32 num_lanes;
 	char name[10];
 	int ret;
+	int irq;
 	int i;
+
+	match = of_match_device(of_match_ptr(ks_pcie_of_match), dev);
+	data = (struct ks_pcie_of_data *)match->data;
+	if (!data)
+		return -EINVAL;
+
+	version = data->version;
+	host_ops = data->host_ops;
+	ep_ops = data->ep_ops;
+	mode = data->mode;
 
 	ks_pcie = devm_kzalloc(dev, sizeof(*ks_pcie), GFP_KERNEL);
 	if (!ks_pcie)
···
 	if (!pci)
 		return -ENOMEM;
 
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
+	ks_pcie->va_app_base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(ks_pcie->va_app_base))
+		return PTR_ERR(ks_pcie->va_app_base);
+
+	ks_pcie->app = *res;
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbics");
+	base = devm_pci_remap_cfg_resource(dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+
+	if (of_device_is_compatible(np, "ti,am654-pcie-rc"))
+		ks_pcie->is_am6 = true;
+
+	pci->dbi_base = base;
+	pci->dbi_base2 = base;
 	pci->dev = dev;
 	pci->ops = &ks_pcie_dw_pcie_ops;
+	pci->version = version;
 
-	ret = of_property_read_u32(np, "num-viewport", &num_viewport);
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(dev, "missing IRQ resource: %d\n", irq);
+		return irq;
+	}
+
+	ret = request_irq(irq, ks_pcie_err_irq_handler, IRQF_SHARED,
+			  "ks-pcie-error-irq", ks_pcie);
 	if (ret < 0) {
-		dev_err(dev, "unable to read *num-viewport* property\n");
+		dev_err(dev, "failed to request error IRQ %d\n",
+			irq);
 		return ret;
 	}
 
···
 	ks_pcie->pci = pci;
 	ks_pcie->link = link;
 	ks_pcie->num_lanes = num_lanes;
-	ks_pcie->num_viewport = num_viewport;
 	ks_pcie->phy = phy;
+
+	gpiod = devm_gpiod_get_optional(dev, "reset",
+					GPIOD_OUT_LOW);
+	if (IS_ERR(gpiod)) {
+		ret = PTR_ERR(gpiod);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "Failed to get reset GPIO\n");
+		goto err_link;
+	}
 
 	ret = ks_pcie_enable_phy(ks_pcie);
 	if (ret) {
···
 		goto err_get_sync;
 	}
 
-	ret = ks_pcie_add_pcie_port(ks_pcie, pdev);
-	if (ret < 0)
-		goto err_get_sync;
+	if (pci->version >= 0x480A) {
+		res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "atu");
+		atu_base = devm_ioremap_resource(dev, res);
+		if (IS_ERR(atu_base)) {
+			ret = PTR_ERR(atu_base);
+			goto err_get_sync;
+		}
+
+		pci->atu_base = atu_base;
+
+		ret = ks_pcie_am654_set_mode(dev, mode);
+		if (ret < 0)
+			goto err_get_sync;
+	} else {
+		ret = ks_pcie_set_mode(dev);
+		if (ret < 0)
+			goto err_get_sync;
+	}
+
+	link_speed = of_pci_get_max_link_speed(np);
+	if (link_speed < 0)
+		link_speed = 2;
+
+	ks_pcie_set_link_speed(pci, link_speed);
+
+	switch (mode) {
+	case DW_PCIE_RC_TYPE:
+		if (!IS_ENABLED(CONFIG_PCI_KEYSTONE_HOST)) {
+			ret = -ENODEV;
+			goto err_get_sync;
+		}
+
+		ret = of_property_read_u32(np, "num-viewport", &num_viewport);
+		if (ret < 0) {
+			dev_err(dev, "unable to read *num-viewport* property\n");
+			return ret;
+		}
+
+		/*
+		 * "Power Sequencing and Reset Signal Timings" table in
+		 * PCI EXPRESS CARD ELECTROMECHANICAL SPECIFICATION, REV. 2.0
+		 * indicates PERST# should be deasserted after minimum of 100us
+		 * once REFCLK is stable. The REFCLK to the connector in RC
+		 * mode is selected while enabling the PHY. So deassert PERST#
+		 * after 100 us.
+		 */
+		if (gpiod) {
+			usleep_range(100, 200);
+			gpiod_set_value_cansleep(gpiod, 1);
+		}
+
+		ks_pcie->num_viewport = num_viewport;
+		pci->pp.ops = host_ops;
+		ret = ks_pcie_add_pcie_port(ks_pcie, pdev);
+		if (ret < 0)
+			goto err_get_sync;
+		break;
+	case DW_PCIE_EP_TYPE:
+		if (!IS_ENABLED(CONFIG_PCI_KEYSTONE_EP)) {
+			ret = -ENODEV;
+			goto err_get_sync;
+		}
+
+		pci->ep.ops = ep_ops;
+		ret = ks_pcie_add_pcie_ep(ks_pcie, pdev);
+		if (ret < 0)
+			goto err_get_sync;
+		break;
+	default:
+		dev_err(dev, "INVALID device type %d\n", mode);
+	}
+
+	ks_pcie_enable_error_irq(ks_pcie);
 
 	return 0;
 
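The ks_pcie_set_link_speed() hunk above is a plain read-modify-write of the 4-bit Supported/Target Link Speeds field (PCI_EXP_LNKCAP_SLS is the low nibble mask in the kernel's uapi headers). A minimal user-space sketch of that field update; the helper name is ours, only the mask value comes from the kernel:

```c
#include <assert.h>
#include <stdint.h>

#define PCI_EXP_LNKCAP_SLS 0x0000000f /* link speed field, low 4 bits */

/* Clamp the speed field of a link register to `speed`, leaving other bits. */
static uint32_t set_link_speed_field(uint32_t reg, uint32_t speed)
{
	if ((reg & PCI_EXP_LNKCAP_SLS) != speed) {
		reg &= ~(uint32_t)PCI_EXP_LNKCAP_SLS;
		reg |= speed;
	}
	return reg;
}
```

The driver applies the same transform to both LNKCAP and LNKCTL2, bracketed by dw_pcie_dbi_ro_wr_en()/dis() because LNKCAP is read-only from the host side.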
+1 -1
drivers/pci/controller/dwc/pci-layerscape-ep.c
···
 	}
 }
 
-static struct dw_pcie_ep_ops pcie_ep_ops = {
+static const struct dw_pcie_ep_ops pcie_ep_ops = {
 	.ep_init = ls_pcie_ep_init,
 	.raise_irq = ls_pcie_ep_raise_irq,
 	.get_features = ls_pcie_ep_get_features,
+1
drivers/pci/controller/dwc/pci-layerscape.c
···
 		return -EINVAL;
 	}
 
+	of_node_put(msi_node);
 	return 0;
 }
 
+93
drivers/pci/controller/dwc/pcie-al.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PCIe host controller driver for Amazon's Annapurna Labs IP (used in chips
+ * such as Graviton and Alpine)
+ *
+ * Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+ *
+ * Author: Jonathan Chocron <jonnyc@amazon.com>
+ */
+
+#include <linux/pci.h>
+#include <linux/pci-ecam.h>
+#include <linux/pci-acpi.h>
+#include "../../pci.h"
+
+#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
+
+struct al_pcie_acpi {
+	void __iomem *dbi_base;
+};
+
+static void __iomem *al_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
+				     int where)
+{
+	struct pci_config_window *cfg = bus->sysdata;
+	struct al_pcie_acpi *pcie = cfg->priv;
+	void __iomem *dbi_base = pcie->dbi_base;
+
+	if (bus->number == cfg->busr.start) {
+		/*
+		 * The DW PCIe core doesn't filter out transactions to other
+		 * devices/functions on the root bus num, so we do this here.
+		 */
+		if (PCI_SLOT(devfn) > 0)
+			return NULL;
+		else
+			return dbi_base + where;
+	}
+
+	return pci_ecam_map_bus(bus, devfn, where);
+}
+
+static int al_pcie_init(struct pci_config_window *cfg)
+{
+	struct device *dev = cfg->parent;
+	struct acpi_device *adev = to_acpi_device(dev);
+	struct acpi_pci_root *root = acpi_driver_data(adev);
+	struct al_pcie_acpi *al_pcie;
+	struct resource *res;
+	int ret;
+
+	al_pcie = devm_kzalloc(dev, sizeof(*al_pcie), GFP_KERNEL);
+	if (!al_pcie)
+		return -ENOMEM;
+
+	res = devm_kzalloc(dev, sizeof(*res), GFP_KERNEL);
+	if (!res)
+		return -ENOMEM;
+
+	ret = acpi_get_rc_resources(dev, "AMZN0001", root->segment, res);
+	if (ret) {
+		dev_err(dev, "can't get rc dbi base address for SEG %d\n",
+			root->segment);
+		return ret;
+	}
+
+	dev_dbg(dev, "Root port dbi res: %pR\n", res);
+
+	al_pcie->dbi_base = devm_pci_remap_cfg_resource(dev, res);
+	if (IS_ERR(al_pcie->dbi_base)) {
+		long err = PTR_ERR(al_pcie->dbi_base);
+
+		dev_err(dev, "couldn't remap dbi base %pR (err:%ld)\n",
+			res, err);
+		return err;
+	}
+
+	cfg->priv = al_pcie;
+
+	return 0;
+}
+
+struct pci_ecam_ops al_pcie_ops = {
+	.bus_shift    = 20,
+	.init         = al_pcie_init,
+	.pci_ops      = {
+		.map_bus    = al_pcie_map_bus,
+		.read       = pci_generic_config_read,
+		.write      = pci_generic_config_write,
+	}
+};
+
+#endif /* defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) */
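The new driver's map_bus hook routes root-bus config accesses for device 0 to the DBI window and rejects every other root-bus device, falling back to ECAM for downstream buses. A user-space model of just that routing decision (the fake base pointers and the route_cfg() name are ours; PCI_SLOT matches the kernel macro):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PCI_SLOT(devfn) (((devfn) >> 3) & 0x1f)

static uint8_t fake_dbi[4096], fake_ecam[4096]; /* stand-in windows */

/*
 * Mirrors al_pcie_map_bus(): on the root bus only device 0 reaches the
 * DBI registers; other root-bus devfns are filtered; downstream buses
 * go through ECAM.
 */
static uint8_t *route_cfg(int bus, int root_bus, unsigned int devfn, int where)
{
	if (bus == root_bus) {
		if (PCI_SLOT(devfn) > 0)
			return NULL; /* filter phantom root-bus devices */
		return fake_dbi + where;
	}
	return fake_ecam + where;
}
```

The filtering matters because the DW core would otherwise alias every root-bus devfn onto the same root-port registers, making one device appear 32 times during enumeration.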
+1 -1
drivers/pci/controller/dwc/pcie-artpec6.c
···
 	return 0;
 }
 
-static struct dw_pcie_ep_ops pcie_ep_ops = {
+static const struct dw_pcie_ep_ops pcie_ep_ops = {
 	.ep_init = artpec6_pcie_ep_init,
 	.raise_irq = artpec6_pcie_raise_irq,
 };
+44 -11
drivers/pci/controller/dwc/pcie-designware-ep.c
···
 	u8 cap_id, next_cap_ptr;
 	u16 reg;
 
+	if (!cap_ptr)
+		return 0;
+
 	reg = dw_pcie_readw_dbi(pci, cap_ptr);
-	next_cap_ptr = (reg & 0xff00) >> 8;
 	cap_id = (reg & 0x00ff);
 
-	if (!next_cap_ptr || cap_id > PCI_CAP_ID_MAX)
+	if (cap_id > PCI_CAP_ID_MAX)
 		return 0;
 
 	if (cap_id == cap)
 		return cap_ptr;
 
+	next_cap_ptr = (reg & 0xff00) >> 8;
 	return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
 }
 
···
 
 	reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
 	next_cap_ptr = (reg & 0x00ff);
-
-	if (!next_cap_ptr)
-		return 0;
 
 	return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
 }
···
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	struct pci_epc *epc = ep->epc;
+	unsigned int aligned_offset;
 	u16 msg_ctrl, msg_data;
 	u32 msg_addr_lower, msg_addr_upper, reg;
 	u64 msg_addr;
···
 		reg = ep->msi_cap + PCI_MSI_DATA_32;
 		msg_data = dw_pcie_readw_dbi(pci, reg);
 	}
-	msg_addr = ((u64) msg_addr_upper) << 32 | msg_addr_lower;
+	aligned_offset = msg_addr_lower & (epc->mem->page_size - 1);
+	msg_addr = ((u64)msg_addr_upper) << 32 |
+		   (msg_addr_lower & ~aligned_offset);
 	ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr,
 				  epc->mem->page_size);
 	if (ret)
 		return ret;
 
-	writel(msg_data | (interrupt_num - 1), ep->msi_mem);
+	writel(msg_data | (interrupt_num - 1), ep->msi_mem + aligned_offset);
 
 	dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phys);
 
···
 	pci_epc_mem_exit(epc);
 }
 
+static unsigned int dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap)
+{
+	u32 header;
+	int pos = PCI_CFG_SPACE_SIZE;
+
+	while (pos) {
+		header = dw_pcie_readl_dbi(pci, pos);
+		if (PCI_EXT_CAP_ID(header) == cap)
+			return pos;
+
+		pos = PCI_EXT_CAP_NEXT(header);
+		if (!pos)
+			break;
+	}
+
+	return 0;
+}
+
 int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 {
+	int i;
 	int ret;
+	u32 reg;
 	void *addr;
+	unsigned int nbars;
+	unsigned int offset;
 	struct pci_epc *epc;
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	struct device *dev = pci->dev;
···
 
 	if (!pci->dbi_base || !pci->dbi_base2) {
 		dev_err(dev, "dbi_base/dbi_base2 is not populated\n");
-		return -EINVAL;
-	}
-	if (pci->iatu_unroll_enabled && !pci->atu_base) {
-		dev_err(dev, "atu_base is not populated\n");
 		return -EINVAL;
 	}
 
···
 	ep->msi_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSI);
 
 	ep->msix_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSIX);
+
+	offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
+	if (offset) {
+		reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
+		nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >>
+			PCI_REBAR_CTRL_NBAR_SHIFT;
+
+		dw_pcie_dbi_ro_wr_en(pci);
+		for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
+			dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0);
+		dw_pcie_dbi_ro_wr_dis(pci);
+	}
 
 	dw_pcie_setup(pci);
 
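The new dw_pcie_ep_find_ext_capability() walks the PCIe extended-capability list, which starts at offset 0x100 (PCI_CFG_SPACE_SIZE) and chains 12-bit next-pointers in bits [31:20] of each 32-bit header. A self-contained model of that walk over a fake config space (the fake_cfg array and plant_cap() helper are ours; the macros match the kernel's definitions):

```c
#include <assert.h>
#include <stdint.h>

#define PCI_CFG_SPACE_SIZE	256
#define PCI_EXT_CAP_ID(hdr)	((hdr) & 0x0000ffff)
#define PCI_EXT_CAP_NEXT(hdr)	(((hdr) >> 20) & 0xffc)

static uint32_t fake_cfg[4096 / 4]; /* stand-in for the DBI config space */

static uint32_t cfg_readl(int pos) { return fake_cfg[pos / 4]; }

/* Same loop as dw_pcie_ep_find_ext_capability(), over the fake space. */
static int find_ext_cap(int cap)
{
	int pos = PCI_CFG_SPACE_SIZE;

	while (pos) {
		uint32_t header = cfg_readl(pos);

		if ((int)PCI_EXT_CAP_ID(header) == cap)
			return pos;
		pos = PCI_EXT_CAP_NEXT(header);
		if (!pos)
			break;
	}
	return 0;
}

/* Plant a capability header: ID in bits [15:0], next pointer in [31:20]. */
static void plant_cap(int pos, int id, int next)
{
	fake_cfg[pos / 4] = (uint32_t)id | ((uint32_t)next << 20);
}
```

The driver uses the result to locate the Resizable BAR capability (ID 0x15) and zero out each PCI_REBAR_CAP, so fixed-size endpoint BARs aren't advertised as resizable.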
+61 -94
drivers/pci/controller/dwc/pcie-designware-host.c
···
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	u64 msi_target;
 
-	if (pp->ops->get_msi_addr)
-		msi_target = pp->ops->get_msi_addr(pp);
-	else
-		msi_target = (u64)pp->msi_data;
+	msi_target = (u64)pp->msi_data;
 
 	msg->address_lo = lower_32_bits(msi_target);
 	msg->address_hi = upper_32_bits(msi_target);
 
-	if (pp->ops->get_msi_data)
-		msg->data = pp->ops->get_msi_data(pp, d->hwirq);
-	else
-		msg->data = d->hwirq;
+	msg->data = d->hwirq;
 
 	dev_dbg(pci->dev, "msi#%d address_hi %#x address_lo %#x\n",
 		(int)d->hwirq, msg->address_hi, msg->address_lo);
···
 
 	raw_spin_lock_irqsave(&pp->lock, flags);
 
-	if (pp->ops->msi_clear_irq) {
-		pp->ops->msi_clear_irq(pp, d->hwirq);
-	} else {
-		ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
-		res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
-		bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
+	ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
+	res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
+	bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
 
-		pp->irq_mask[ctrl] |= BIT(bit);
-		dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4,
-				    pp->irq_mask[ctrl]);
-	}
+	pp->irq_mask[ctrl] |= BIT(bit);
+	dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4,
+			    pp->irq_mask[ctrl]);
 
 	raw_spin_unlock_irqrestore(&pp->lock, flags);
 }
···
 
 	raw_spin_lock_irqsave(&pp->lock, flags);
 
-	if (pp->ops->msi_set_irq) {
-		pp->ops->msi_set_irq(pp, d->hwirq);
-	} else {
-		ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
-		res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
-		bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
+	ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
+	res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
+	bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
 
-		pp->irq_mask[ctrl] &= ~BIT(bit);
-		dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4,
-				    pp->irq_mask[ctrl]);
-	}
+	pp->irq_mask[ctrl] &= ~BIT(bit);
+	dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4,
+			    pp->irq_mask[ctrl]);
 
 	raw_spin_unlock_irqrestore(&pp->lock, flags);
 }
···
 {
 	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
 	unsigned int res, bit, ctrl;
-	unsigned long flags;
 
 	ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
 	res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
 	bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
 
-	raw_spin_lock_irqsave(&pp->lock, flags);
-
 	dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + res, 4, BIT(bit));
-
-	if (pp->ops->msi_irq_ack)
-		pp->ops->msi_irq_ack(d->hwirq, pp);
-
-	raw_spin_unlock_irqrestore(&pp->lock, flags);
 }
 
 static struct irq_chip dw_pci_msi_bottom_irq_chip = {
···
 
 	for (i = 0; i < nr_irqs; i++)
 		irq_domain_set_info(domain, virq + i, bit + i,
-				    &dw_pci_msi_bottom_irq_chip,
+				    pp->msi_irq_chip,
 				    pp, handle_edge_irq,
 				    NULL, NULL);
 
···
 
 void dw_pcie_free_msi(struct pcie_port *pp)
 {
-	irq_set_chained_handler(pp->msi_irq, NULL);
-	irq_set_handler_data(pp->msi_irq, NULL);
+	if (pp->msi_irq) {
+		irq_set_chained_handler(pp->msi_irq, NULL);
+		irq_set_handler_data(pp->msi_irq, NULL);
+	}
 
 	irq_domain_remove(pp->msi_domain);
 	irq_domain_remove(pp->irq_domain);
+
+	if (pp->msi_page)
+		__free_page(pp->msi_page);
 }
 
 void dw_pcie_msi_init(struct pcie_port *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct device *dev = pci->dev;
-	struct page *page;
 	u64 msi_target;
 
-	page = alloc_page(GFP_KERNEL);
-	pp->msi_data = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
+	pp->msi_page = alloc_page(GFP_KERNEL);
+	pp->msi_data = dma_map_page(dev, pp->msi_page, 0, PAGE_SIZE,
+				    DMA_FROM_DEVICE);
 	if (dma_mapping_error(dev, pp->msi_data)) {
 		dev_err(dev, "Failed to map MSI data\n");
-		__free_page(page);
+		__free_page(pp->msi_page);
+		pp->msi_page = NULL;
 		return;
 	}
 	msi_target = (u64)pp->msi_data;
···
 	struct device_node *np = dev->of_node;
 	struct platform_device *pdev = to_platform_device(dev);
 	struct resource_entry *win, *tmp;
-	struct pci_bus *bus, *child;
+	struct pci_bus *child;
 	struct pci_host_bridge *bridge;
 	struct resource *cfg_res;
 	int ret;
···
 		dev_err(dev, "Missing *config* reg space\n");
 	}
 
-	bridge = pci_alloc_host_bridge(0);
+	bridge = devm_pci_alloc_host_bridge(dev, 0);
 	if (!bridge)
 		return -ENOMEM;
 
···
 
 	ret = devm_request_pci_bus_resources(dev, &bridge->windows);
 	if (ret)
-		goto error;
+		return ret;
 
 	/* Get the I/O and memory ranges from DT */
 	resource_list_for_each_entry_safe(win, tmp, &bridge->windows) {
···
 					resource_size(pp->cfg));
 		if (!pci->dbi_base) {
 			dev_err(dev, "Error with ioremap\n");
-			ret = -ENOMEM;
-			goto error;
+			return -ENOMEM;
 		}
 	}
 
···
 						pp->cfg0_base, pp->cfg0_size);
 		if (!pp->va_cfg0_base) {
 			dev_err(dev, "Error with ioremap in function\n");
-			ret = -ENOMEM;
-			goto error;
+			return -ENOMEM;
 		}
 	}
 
···
 					pp->cfg1_size);
 		if (!pp->va_cfg1_base) {
 			dev_err(dev, "Error with ioremap\n");
-			ret = -ENOMEM;
-			goto error;
+			return -ENOMEM;
 		}
 	}
 
···
 	if (ret)
 		pci->num_viewport = 2;
 
-	if (IS_ENABLED(CONFIG_PCI_MSI) && pci_msi_enabled()) {
+	if (pci_msi_enabled()) {
 		/*
 		 * If a specific SoC driver needs to change the
 		 * default number of vectors, it needs to implement
···
 			    pp->num_vectors == 0) {
 				dev_err(dev,
 					"Invalid number of vectors\n");
-				goto error;
+				return -EINVAL;
 			}
 		}
 
 		if (!pp->ops->msi_host_init) {
+			pp->msi_irq_chip = &dw_pci_msi_bottom_irq_chip;
+
 			ret = dw_pcie_allocate_domains(pp);
 			if (ret)
-				goto error;
+				return ret;
 
 			if (pp->msi_irq)
 				irq_set_chained_handler_and_data(pp->msi_irq,
···
 		} else {
 			ret = pp->ops->msi_host_init(pp);
 			if (ret < 0)
-				goto error;
+				return ret;
 		}
 	}
 
 	if (pp->ops->host_init) {
 		ret = pp->ops->host_init(pp);
 		if (ret)
-			goto error;
+			goto err_free_msi;
 	}
 
 	pp->root_bus_nr = pp->busn->start;
···
 
 	ret = pci_scan_root_bus_bridge(bridge);
 	if (ret)
-		goto error;
+		goto err_free_msi;
 
-	bus = bridge->bus;
+	pp->root_bus = bridge->bus;
 
 	if (pp->ops->scan_bus)
 		pp->ops->scan_bus(pp);
 
-	pci_bus_size_bridges(bus);
-	pci_bus_assign_resources(bus);
+	pci_bus_size_bridges(pp->root_bus);
+	pci_bus_assign_resources(pp->root_bus);
 
-	list_for_each_entry(child, &bus->children, node)
+	list_for_each_entry(child, &pp->root_bus->children, node)
 		pcie_bus_configure_settings(child);
 
-	pci_bus_add_devices(bus);
+	pci_bus_add_devices(pp->root_bus);
 	return 0;
 
-error:
-	pci_free_host_bridge(bridge);
+err_free_msi:
+	if (pci_msi_enabled() && !pp->ops->msi_host_init)
+		dw_pcie_free_msi(pp);
 	return ret;
 }
 
···
 	.write = dw_pcie_wr_conf,
 };
 
-static u8 dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci)
-{
-	u32 val;
-
-	val = dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT);
-	if (val == 0xffffffff)
-		return 1;
-
-	return 0;
-}
-
 void dw_pcie_setup_rc(struct pcie_port *pp)
 {
 	u32 val, ctrl, num_ctrls;
···
 
 	dw_pcie_setup(pci);
 
-	num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
+	if (!pp->ops->msi_host_init) {
+		num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
 
-	/* Initialize IRQ Status array */
-	for (ctrl = 0; ctrl < num_ctrls; ctrl++) {
-		pp->irq_mask[ctrl] = ~0;
-		dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK +
-				    (ctrl * MSI_REG_CTRL_BLOCK_SIZE),
-				    4, pp->irq_mask[ctrl]);
-		dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE +
-				    (ctrl * MSI_REG_CTRL_BLOCK_SIZE),
-				    4, ~0);
+		/* Initialize IRQ Status array */
+		for (ctrl = 0; ctrl < num_ctrls; ctrl++) {
+			pp->irq_mask[ctrl] = ~0;
+			dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK +
+					    (ctrl * MSI_REG_CTRL_BLOCK_SIZE),
+					    4, pp->irq_mask[ctrl]);
+			dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE +
+					    (ctrl * MSI_REG_CTRL_BLOCK_SIZE),
+					    4, ~0);
+		}
 	}
 
 	/* Setup RC BARs */
···
 	 * we should not program the ATU here.
 	 */
 	if (!pp->ops->rd_other_conf) {
-		/* Get iATU unroll support */
-		pci->iatu_unroll_enabled = dw_pcie_iatu_unroll_enabled(pci);
-		dev_dbg(pci->dev, "iATU unroll: %s\n",
-			pci->iatu_unroll_enabled ? "enabled" : "disabled");
-
-		if (pci->iatu_unroll_enabled && !pci->atu_base)
-			pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET;
-
 		dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX0,
 					  PCIE_ATU_TYPE_MEM, pp->mem_base,
 					  pp->mem_bus_addr, pp->mem_size);
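The mask/unmask/ack hunks above all decompose an MSI hwirq into (controller, register-block offset, bit) the same way: 32 vectors per controller, with each controller's MASK/ENABLE/STATUS registers spaced 12 bytes apart. A standalone sketch of that arithmetic (struct and function names are ours; the two constants match the driver's):

```c
#include <assert.h>

#define MAX_MSI_IRQS_PER_CTRL	32
#define MSI_REG_CTRL_BLOCK_SIZE	12

struct msi_pos {
	unsigned int ctrl;	/* which MSI controller */
	unsigned int res;	/* byte offset of its register block */
	unsigned int bit;	/* bit within that controller's registers */
};

/* The hwirq decomposition used by dw_pci_bottom_mask/unmask/ack(). */
static struct msi_pos msi_decompose(unsigned long hwirq)
{
	struct msi_pos p;

	p.ctrl = hwirq / MAX_MSI_IRQS_PER_CTRL;
	p.res = p.ctrl * MSI_REG_CTRL_BLOCK_SIZE;
	p.bit = hwirq % MAX_MSI_IRQS_PER_CTRL;
	return p;
}
```

For example, hwirq 37 lands in controller 1 (register block at offset 12) as bit 5.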
+1 -1
drivers/pci/controller/dwc/pcie-designware-plat.c
···
 	return &dw_plat_pcie_epc_features;
 }
 
-static struct dw_pcie_ep_ops pcie_ep_ops = {
+static const struct dw_pcie_ep_ops pcie_ep_ops = {
 	.ep_init = dw_plat_pcie_ep_init,
 	.raise_irq = dw_plat_pcie_ep_raise_irq,
 	.get_features = dw_plat_pcie_get_features,
+55 -9
drivers/pci/controller/dwc/pcie-designware.c
···
 
 #include "pcie-designware.h"
 
-/* PCIe Port Logic registers */
-#define PLR_OFFSET			0x700
-#define PCIE_PHY_DEBUG_R1		(PLR_OFFSET + 0x2c)
-#define PCIE_PHY_DEBUG_R1_LINK_UP	(0x1 << 4)
-#define PCIE_PHY_DEBUG_R1_LINK_IN_TRAINING	(0x1 << 29)
-
 int dw_pcie_read(void __iomem *addr, int size, u32 *val)
 {
 	if (!IS_ALIGNED((uintptr_t)addr, size)) {
···
 	ret = dw_pcie_write(base + reg, size, val);
 	if (ret)
 		dev_err(pci->dev, "Write DBI address failed\n");
+}
+
+u32 __dw_pcie_read_dbi2(struct dw_pcie *pci, void __iomem *base, u32 reg,
+			size_t size)
+{
+	int ret;
+	u32 val;
+
+	if (pci->ops->read_dbi2)
+		return pci->ops->read_dbi2(pci, base, reg, size);
+
+	ret = dw_pcie_read(base + reg, size, &val);
+	if (ret)
+		dev_err(pci->dev, "read DBI address failed\n");
+
+	return val;
+}
+
+void __dw_pcie_write_dbi2(struct dw_pcie *pci, void __iomem *base, u32 reg,
+			  size_t size, u32 val)
+{
+	int ret;
+
+	if (pci->ops->write_dbi2) {
+		pci->ops->write_dbi2(pci, base, reg, size, val);
+		return;
+	}
+
+	ret = dw_pcie_write(base + reg, size, val);
+	if (ret)
+		dev_err(pci->dev, "write DBI address failed\n");
 }
 
 static u32 dw_pcie_readl_ob_unroll(struct dw_pcie *pci, u32 index, u32 reg)
···
 	if (pci->ops->link_up)
 		return pci->ops->link_up(pci);
 
-	val = readl(pci->dbi_base + PCIE_PHY_DEBUG_R1);
-	return ((val & PCIE_PHY_DEBUG_R1_LINK_UP) &&
-		(!(val & PCIE_PHY_DEBUG_R1_LINK_IN_TRAINING)));
+	val = readl(pci->dbi_base + PCIE_PORT_DEBUG1);
+	return ((val & PCIE_PORT_DEBUG1_LINK_UP) &&
+		(!(val & PCIE_PORT_DEBUG1_LINK_IN_TRAINING)));
+}
+
+static u8 dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci)
+{
+	u32 val;
+
+	val = dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT);
+	if (val == 0xffffffff)
+		return 1;
+
+	return 0;
 }
 
 void dw_pcie_setup(struct dw_pcie *pci)
···
 	u32 lanes;
 	struct device *dev = pci->dev;
 	struct device_node *np = dev->of_node;
+
+	if (pci->version >= 0x480A || (!pci->version &&
+				       dw_pcie_iatu_unroll_enabled(pci))) {
+		pci->iatu_unroll_enabled = true;
+		if (!pci->atu_base)
+			pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET;
+	}
+	dev_dbg(pci->dev, "iATU unroll: %s\n", pci->iatu_unroll_enabled ?
+		"enabled" : "disabled");
+
 
 	ret = of_property_read_u32(np, "num-lanes", &lanes);
 	if (ret)
+18 -8
drivers/pci/controller/dwc/pcie-designware.h
···
 #define PCIE_PORT_DEBUG0		0x728
 #define PORT_LOGIC_LTSSM_STATE_MASK	0x1f
 #define PORT_LOGIC_LTSSM_STATE_L0	0x11
+#define PCIE_PORT_DEBUG1		0x72C
+#define PCIE_PORT_DEBUG1_LINK_UP	BIT(4)
+#define PCIE_PORT_DEBUG1_LINK_IN_TRAINING	BIT(29)
 
 #define PCIE_LINK_WIDTH_SPEED_CONTROL	0x80C
 #define PORT_LOGIC_SPEED_CHANGE		BIT(17)
···
 	int (*wr_other_conf)(struct pcie_port *pp, struct pci_bus *bus,
 			     unsigned int devfn, int where, int size, u32 val);
 	int (*host_init)(struct pcie_port *pp);
-	void (*msi_set_irq)(struct pcie_port *pp, int irq);
-	void (*msi_clear_irq)(struct pcie_port *pp, int irq);
-	phys_addr_t (*get_msi_addr)(struct pcie_port *pp);
-	u32 (*get_msi_data)(struct pcie_port *pp, int pos);
 	void (*scan_bus)(struct pcie_port *pp);
 	void (*set_num_vectors)(struct pcie_port *pp);
 	int (*msi_host_init)(struct pcie_port *pp);
-	void (*msi_irq_ack)(int irq, struct pcie_port *pp);
 };
 
 struct pcie_port {
···
 	struct irq_domain *irq_domain;
 	struct irq_domain *msi_domain;
 	dma_addr_t msi_data;
+	struct page *msi_page;
+	struct irq_chip *msi_irq_chip;
 	u32 num_vectors;
 	u32 irq_mask[MAX_MSI_CTRLS];
+	struct pci_bus *root_bus;
 	raw_spinlock_t lock;
 	DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
 };
···
 
 struct dw_pcie_ep {
 	struct pci_epc *epc;
-	struct dw_pcie_ep_ops *ops;
+	const struct dw_pcie_ep_ops *ops;
 	phys_addr_t phys_base;
 	size_t addr_size;
 	size_t page_size;
···
 			size_t size);
 	void	(*write_dbi)(struct dw_pcie *pcie, void __iomem *base, u32 reg,
 			     size_t size, u32 val);
+	u32	(*read_dbi2)(struct dw_pcie *pcie, void __iomem *base, u32 reg,
+			     size_t size);
+	void	(*write_dbi2)(struct dw_pcie *pcie, void __iomem *base, u32 reg,
+			      size_t size, u32 val);
 	int	(*link_up)(struct dw_pcie *pcie);
 	int	(*start_link)(struct dw_pcie *pcie);
 	void	(*stop_link)(struct dw_pcie *pcie);
···
 	struct pcie_port pp;
 	struct dw_pcie_ep ep;
 	const struct dw_pcie_ops *ops;
+	unsigned int version;
 };
 
 #define to_dw_pcie_from_pp(port) container_of((port), struct dw_pcie, pp)
···
 		      size_t size);
 void __dw_pcie_write_dbi(struct dw_pcie *pci, void __iomem *base, u32 reg,
 			 size_t size, u32 val);
+u32 __dw_pcie_read_dbi2(struct dw_pcie *pci, void __iomem *base, u32 reg,
+			size_t size);
+void __dw_pcie_write_dbi2(struct dw_pcie *pci, void __iomem *base, u32 reg,
+			  size_t size, u32 val);
 int dw_pcie_link_up(struct dw_pcie *pci);
 int dw_pcie_wait_for_link(struct dw_pcie *pci);
 void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index,
···
 
 static inline void dw_pcie_writel_dbi2(struct dw_pcie *pci, u32 reg, u32 val)
 {
-	__dw_pcie_write_dbi(pci, pci->dbi_base2, reg, 0x4, val);
+	__dw_pcie_write_dbi2(pci, pci->dbi_base2, reg, 0x4, val);
 }
 
 static inline u32 dw_pcie_readl_dbi2(struct dw_pcie *pci, u32 reg)
 {
-	return __dw_pcie_read_dbi(pci, pci->dbi_base2, reg, 0x4);
+	return __dw_pcie_read_dbi2(pci, pci->dbi_base2, reg, 0x4);
 }
 
 static inline void dw_pcie_writel_atu(struct dw_pcie *pci, u32 reg, u32 val)
+6 -17
drivers/pci/controller/dwc/pcie-qcom.c
···
 	return ret;
 }
 
-static int qcom_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
-				 u32 *val)
-{
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-
-	/* the device class is not reported correctly from the register */
-	if (where == PCI_CLASS_REVISION && size == 4) {
-		*val = readl(pci->dbi_base + PCI_CLASS_REVISION);
-		*val &= 0xff;	/* keep revision id */
-		*val |= PCI_CLASS_BRIDGE_PCI << 16;
-		return PCIBIOS_SUCCESSFUL;
-	}
-
-	return dw_pcie_read(pci->dbi_base + where, size, val);
-}
-
 static const struct dw_pcie_host_ops qcom_pcie_dw_ops = {
 	.host_init = qcom_pcie_host_init,
-	.rd_own_conf = qcom_pcie_rd_own_conf,
 };
 
 /* Qcom IP rev.: 2.1.0 Synopsys IP rev.: 4.01a */
···
 	{ .compatible = "qcom,pcie-ipq4019", .data = &ops_2_4_0 },
 	{ }
 };
+
+static void qcom_fixup_class(struct pci_dev *dev)
+{
+	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+}
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, PCI_ANY_ID, qcom_fixup_class);
 
 static struct platform_driver qcom_pcie_driver = {
 	.probe = qcom_pcie_probe,
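The qcom change replaces a per-read config-space intercept with a one-time early fixup: `dev->class` stores base class, sub-class, and programming interface in its low 24 bits, so shifting the 16-bit class code PCI_CLASS_BRIDGE_PCI (0x0604) left by 8 yields 0x060400 with prog-if zero. A tiny sketch of that encoding (only the constant comes from the kernel; the helper name is ours):

```c
#include <assert.h>
#include <stdint.h>

#define PCI_CLASS_BRIDGE_PCI	0x0604	/* base class 0x06, sub-class 0x04 */

/* What the qcom_fixup_class() hunk stores into dev->class. */
static uint32_t fixup_class_value(void)
{
	return (uint32_t)PCI_CLASS_BRIDGE_PCI << 8;
}
```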
+8 -3
drivers/pci/controller/dwc/pcie-uniphier.c
···
 	struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
 	struct device_node *np = pci->dev->of_node;
 	struct device_node *np_intc;
+	int ret = 0;

 	np_intc = of_get_child_by_name(np, "legacy-interrupt-controller");
 	if (!np_intc) {
···
 	pp->irq = irq_of_parse_and_map(np_intc, 0);
 	if (!pp->irq) {
 		dev_err(pci->dev, "Failed to get an IRQ entry in legacy-interrupt-controller\n");
-		return -EINVAL;
+		ret = -EINVAL;
+		goto out_put_node;
 	}

 	priv->legacy_irq_domain = irq_domain_add_linear(np_intc, PCI_NUM_INTX,
						&uniphier_intx_domain_ops, pp);
 	if (!priv->legacy_irq_domain) {
 		dev_err(pci->dev, "Failed to get INTx domain\n");
-		return -ENODEV;
+		ret = -ENODEV;
+		goto out_put_node;
 	}

 	irq_set_chained_handler_and_data(pp->irq, uniphier_pcie_irq_handler,
					 pp);

-	return 0;
+out_put_node:
+	of_node_put(np_intc);
+	return ret;
 }

 static int uniphier_pcie_host_init(struct pcie_port *pp)
+8 -5
drivers/pci/controller/pci-aardvark.c
···
 	struct device_node *node = dev->of_node;
 	struct device_node *pcie_intc_node;
 	struct irq_chip *irq_chip;
+	int ret = 0;

 	pcie_intc_node = of_get_next_child(node, NULL);
 	if (!pcie_intc_node) {
···
 	irq_chip->name = devm_kasprintf(dev, GFP_KERNEL, "%s-irq",
					dev_name(dev));
 	if (!irq_chip->name) {
-		of_node_put(pcie_intc_node);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto out_put_node;
 	}

 	irq_chip->irq_mask = advk_pcie_irq_mask;
···
					      &advk_pcie_irq_domain_ops, pcie);
 	if (!pcie->irq_domain) {
 		dev_err(dev, "Failed to get a INTx IRQ domain\n");
-		of_node_put(pcie_intc_node);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto out_put_node;
 	}

-	return 0;
+out_put_node:
+	of_node_put(pcie_intc_node);
+	return ret;
 }

 static void advk_pcie_remove_irq_domain(struct advk_pcie *pcie)
+1 -1
drivers/pci/controller/pci-host-generic.c
···
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Simple, generic PCI host controller driver targetting firmware-initialised
+ * Simple, generic PCI host controller driver targeting firmware-initialised
  * systems and virtual machines (e.g. the PCI emulation provided by kvmtool).
  *
  * Copyright (C) 2014 ARM Limited
+23
drivers/pci/controller/pci-hyperv.c
···
 	}
 }

+/*
+ * Remove entries in sysfs pci slot directory.
+ */
+static void hv_pci_remove_slots(struct hv_pcibus_device *hbus)
+{
+	struct hv_pci_dev *hpdev;
+
+	list_for_each_entry(hpdev, &hbus->children, list_entry) {
+		if (!hpdev->pci_slot)
+			continue;
+		pci_destroy_slot(hpdev->pci_slot);
+		hpdev->pci_slot = NULL;
+	}
+}
+
 /**
  * create_root_hv_pci_bus() - Expose a new root PCI bus
  * @hbus:	Root PCI bus, as understood by this driver
···
 		hpdev = list_first_entry(&removed, struct hv_pci_dev,
					 list_entry);
 		list_del(&hpdev->list_entry);
+
+		if (hpdev->pci_slot)
+			pci_destroy_slot(hpdev->pci_slot);
+
 		put_pcichild(hpdev);
 	}

···
			 sizeof(*ejct_pkt), (unsigned long)&ctxt.pkt,
			 VM_PKT_DATA_INBAND, 0);

+	/* For the get_pcichild() in hv_pci_eject_device() */
+	put_pcichild(hpdev);
+	/* For the two refs got in new_pcichild_device() */
 	put_pcichild(hpdev);
 	put_pcichild(hpdev);
 	put_hvpcibus(hpdev->hbus);
···
 	pci_lock_rescan_remove();
 	pci_stop_root_bus(hbus->pci_bus);
 	pci_remove_root_bus(hbus->pci_bus);
+	hv_pci_remove_slots(hbus);
 	pci_unlock_rescan_remove();
 	hbus->state = hv_pcibus_removed;
 }
+28 -9
drivers/pci/controller/pci-tegra.c
···
 	struct msi_controller chip;
 	DECLARE_BITMAP(used, INT_PCI_MSI_NR);
 	struct irq_domain *domain;
-	unsigned long pages;
 	struct mutex lock;
-	u64 phys;
+	void *virt;
+	dma_addr_t phys;
 	int irq;
 };

···
 	err = platform_get_irq_byname(pdev, "msi");
 	if (err < 0) {
 		dev_err(dev, "failed to get IRQ: %d\n", err);
-		goto err;
+		goto free_irq_domain;
 	}

 	msi->irq = err;
···
			  tegra_msi_irq_chip.name, pcie);
 	if (err < 0) {
 		dev_err(dev, "failed to request IRQ: %d\n", err);
-		goto err;
+		goto free_irq_domain;
 	}

-	/* setup AFI/FPCI range */
-	msi->pages = __get_free_pages(GFP_KERNEL, 0);
-	msi->phys = virt_to_phys((void *)msi->pages);
+	/* Though the PCIe controller can address >32-bit address space, to
+	 * facilitate endpoints that support only 32-bit MSI target address,
+	 * the mask is set to 32-bit to make sure that MSI target address is
+	 * always a 32-bit address
+	 */
+	err = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
+	if (err < 0) {
+		dev_err(dev, "failed to set DMA coherent mask: %d\n", err);
+		goto free_irq;
+	}
+
+	msi->virt = dma_alloc_attrs(dev, PAGE_SIZE, &msi->phys, GFP_KERNEL,
+				    DMA_ATTR_NO_KERNEL_MAPPING);
+	if (!msi->virt) {
+		dev_err(dev, "failed to allocate DMA memory for MSI\n");
+		err = -ENOMEM;
+		goto free_irq;
+	}
+
 	host->msi = &msi->chip;

 	return 0;

-err:
+free_irq:
+	free_irq(msi->irq, pcie);
+free_irq_domain:
 	irq_domain_remove(msi->domain);
 	return err;
 }
···
 	struct tegra_msi *msi = &pcie->msi;
 	unsigned int i, irq;

-	free_pages(msi->pages, 0);
+	dma_free_attrs(pcie->dev, PAGE_SIZE, msi->virt, msi->phys,
+		       DMA_ATTR_NO_KERNEL_MAPPING);

 	if (msi->irq > 0)
 		free_irq(msi->irq, pcie);
+1 -1
drivers/pci/controller/pcie-iproc-msi.c
···
 	/*
	 * Now go read the tail pointer again to see if there are new
-	 * oustanding events that came in during the above window.
+	 * outstanding events that came in during the above window.
	 */
 } while (true);

+90 -8
drivers/pci/controller/pcie-iproc.c
···
 #define APB_ERR_EN_SHIFT		0
 #define APB_ERR_EN			BIT(APB_ERR_EN_SHIFT)

+#define CFG_RD_SUCCESS			0
+#define CFG_RD_UR			1
+#define CFG_RD_CRS			2
+#define CFG_RD_CA			3
 #define CFG_RETRY_STATUS		0xffff0001
 #define CFG_RETRY_STATUS_TIMEOUT_US	500000 /* 500 milliseconds */
···
 	IPROC_PCIE_IARR4,
 	IPROC_PCIE_IMAP4,

+	/* config read status */
+	IPROC_PCIE_CFG_RD_STATUS,
+
 	/* link status */
 	IPROC_PCIE_LINK_STATUS,
···
 	[IPROC_PCIE_IMAP3]		= 0xe08,
 	[IPROC_PCIE_IARR4]		= 0xe68,
 	[IPROC_PCIE_IMAP4]		= 0xe70,
+	[IPROC_PCIE_CFG_RD_STATUS]	= 0xee0,
 	[IPROC_PCIE_LINK_STATUS]	= 0xf0c,
 	[IPROC_PCIE_APB_ERR_EN]		= 0xf40,
 };
···
 	return (pcie->base + offset);
 }

-static unsigned int iproc_pcie_cfg_retry(void __iomem *cfg_data_p)
+static unsigned int iproc_pcie_cfg_retry(struct iproc_pcie *pcie,
+					 void __iomem *cfg_data_p)
 {
 	int timeout = CFG_RETRY_STATUS_TIMEOUT_US;
 	unsigned int data;
+	u32 status;

 	/*
	 * As per PCIe spec r3.1, sec 2.3.2, CRS Software Visibility only
···
	 */
 	data = readl(cfg_data_p);
 	while (data == CFG_RETRY_STATUS && timeout--) {
+		/*
+		 * CRS state is set in CFG_RD status register
+		 * This will handle the case where CFG_RETRY_STATUS is
+		 * valid config data.
+		 */
+		status = iproc_pcie_read_reg(pcie, IPROC_PCIE_CFG_RD_STATUS);
+		if (status != CFG_RD_CRS)
+			return data;
+
 		udelay(1);
 		data = readl(cfg_data_p);
 	}
···
 	if (!cfg_data_p)
 		return PCIBIOS_DEVICE_NOT_FOUND;

-	data = iproc_pcie_cfg_retry(cfg_data_p);
+	data = iproc_pcie_cfg_retry(pcie, cfg_data_p);

 	*val = data;
 	if (size <= 2)
···
 		resource_size_t window_size =
			ob_map->window_sizes[size_idx] * SZ_1M;

-		if (size < window_size)
-			continue;
+		/*
+		 * Keep iterating until we reach the last window and
+		 * with the minimal window size at index zero. In this
+		 * case, we take a compromise by mapping it using the
+		 * minimum window size that can be supported
+		 */
+		if (size < window_size) {
+			if (size_idx > 0 || window_idx > 0)
+				continue;
+
+			/*
+			 * For the corner case of reaching the minimal
+			 * window size that can be supported on the
+			 * last window
+			 */
+			axi_addr = ALIGN_DOWN(axi_addr, window_size);
+			pci_addr = ALIGN_DOWN(pci_addr, window_size);
+			size = window_size;
+		}

 		if (!IS_ALIGNED(axi_addr, window_size) ||
		    !IS_ALIGNED(pci_addr, window_size)) {
···
 	return ret;
 }

+static int iproc_pcie_add_dma_range(struct device *dev,
+				    struct list_head *resources,
+				    struct of_pci_range *range)
+{
+	struct resource *res;
+	struct resource_entry *entry, *tmp;
+	struct list_head *head = resources;
+
+	res = devm_kzalloc(dev, sizeof(struct resource), GFP_KERNEL);
+	if (!res)
+		return -ENOMEM;
+
+	resource_list_for_each_entry(tmp, resources) {
+		if (tmp->res->start < range->cpu_addr)
+			head = &tmp->node;
+	}
+
+	res->start = range->cpu_addr;
+	res->end = res->start + range->size - 1;
+
+	entry = resource_list_create_entry(res, 0);
+	if (!entry)
+		return -ENOMEM;
+
+	entry->offset = res->start - range->cpu_addr;
+	resource_list_add(entry, head);
+
+	return 0;
+}
+
 static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie)
 {
+	struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
 	struct of_pci_range range;
 	struct of_pci_range_parser parser;
 	int ret;
+	LIST_HEAD(resources);

 	/* Get the dma-ranges from DT */
 	ret = of_pci_dma_range_parser_init(&parser, pcie->dev->of_node);
···
 		return ret;

 	for_each_of_pci_range(&parser, &range) {
+		ret = iproc_pcie_add_dma_range(pcie->dev,
+					       &resources,
+					       &range);
+		if (ret)
+			goto out;
 		/* Each range entry corresponds to an inbound mapping region */
 		ret = iproc_pcie_setup_ib(pcie, &range, IPROC_PCIE_IB_MAP_MEM);
 		if (ret)
-			return ret;
+			goto out;
 	}

+	list_splice_init(&resources, &host->dma_ranges);
+
 	return 0;
+out:
+	pci_free_resource_list(&resources);
+	return ret;
 }

 static int iproce_pcie_get_msi(struct iproc_pcie *pcie,
···
 	if (pcie->need_msi_steer) {
 		ret = iproc_pcie_msi_steer(pcie, msi_node);
 		if (ret)
-			return ret;
+			goto out_put_node;
 	}

 	/*
	 * If another MSI controller is being used, the call below should fail
	 * but that is okay
	 */
-	return iproc_msi_init(pcie, msi_node);
+	ret = iproc_msi_init(pcie, msi_node);
+
+out_put_node:
+	of_node_put(msi_node);
+	return ret;
 }

 static void iproc_pcie_msi_disable(struct iproc_pcie *pcie)
···
 		break;
 	case IPROC_PCIE_PAXB:
 		regs = iproc_pcie_reg_paxb;
-		pcie->iproc_cfg_read = true;
 		pcie->has_apb_err_disable = true;
 		if (pcie->need_ob_cfg) {
			pcie->ob_map = paxb_ob_map;
···
 		break;
 	case IPROC_PCIE_PAXB_V2:
 		regs = iproc_pcie_reg_paxb_v2;
+		pcie->iproc_cfg_read = true;
 		pcie->has_apb_err_disable = true;
 		if (pcie->need_ob_cfg) {
			pcie->ob_map = paxb_v2_ob_map;
+16 -35
drivers/pci/controller/pcie-mediatek.c
···
 	port->irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
						 &intx_domain_ops, port);
+	of_node_put(pcie_intc_node);
 	if (!port->irq_domain) {
 		dev_err(dev, "failed to get INTx IRQ domain\n");
 		return -ENODEV;
···
 	/* sys_ck might be divided into the following parts in some chips */
 	snprintf(name, sizeof(name), "ahb_ck%d", slot);
-	port->ahb_ck = devm_clk_get(dev, name);
-	if (IS_ERR(port->ahb_ck)) {
-		if (PTR_ERR(port->ahb_ck) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-
-		port->ahb_ck = NULL;
-	}
+	port->ahb_ck = devm_clk_get_optional(dev, name);
+	if (IS_ERR(port->ahb_ck))
+		return PTR_ERR(port->ahb_ck);

 	snprintf(name, sizeof(name), "axi_ck%d", slot);
-	port->axi_ck = devm_clk_get(dev, name);
-	if (IS_ERR(port->axi_ck)) {
-		if (PTR_ERR(port->axi_ck) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-
-		port->axi_ck = NULL;
-	}
+	port->axi_ck = devm_clk_get_optional(dev, name);
+	if (IS_ERR(port->axi_ck))
+		return PTR_ERR(port->axi_ck);

 	snprintf(name, sizeof(name), "aux_ck%d", slot);
-	port->aux_ck = devm_clk_get(dev, name);
-	if (IS_ERR(port->aux_ck)) {
-		if (PTR_ERR(port->aux_ck) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-
-		port->aux_ck = NULL;
-	}
+	port->aux_ck = devm_clk_get_optional(dev, name);
+	if (IS_ERR(port->aux_ck))
+		return PTR_ERR(port->aux_ck);

 	snprintf(name, sizeof(name), "obff_ck%d", slot);
-	port->obff_ck = devm_clk_get(dev, name);
-	if (IS_ERR(port->obff_ck)) {
-		if (PTR_ERR(port->obff_ck) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-
-		port->obff_ck = NULL;
-	}
+	port->obff_ck = devm_clk_get_optional(dev, name);
+	if (IS_ERR(port->obff_ck))
+		return PTR_ERR(port->obff_ck);

 	snprintf(name, sizeof(name), "pipe_ck%d", slot);
-	port->pipe_ck = devm_clk_get(dev, name);
-	if (IS_ERR(port->pipe_ck)) {
-		if (PTR_ERR(port->pipe_ck) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-
-		port->pipe_ck = NULL;
-	}
+	port->pipe_ck = devm_clk_get_optional(dev, name);
+	if (IS_ERR(port->pipe_ck))
+		return PTR_ERR(port->pipe_ck);

 	snprintf(name, sizeof(name), "pcie-rst%d", slot);
 	port->reset = devm_reset_control_get_optional_exclusive(dev, name);
+55 -30
drivers/pci/controller/pcie-rcar.c
···
 /* Transfer control */
 #define PCIETCTLR		0x02000
-#define  CFINIT			1
+#define  DL_DOWN		BIT(3)
+#define  CFINIT			BIT(0)
 #define PCIETSTR		0x02004
-#define  DATA_LINK_ACTIVE	1
+#define  DATA_LINK_ACTIVE	BIT(0)
 #define PCIEERRFR		0x02020
 #define  UNSUPPORTED_REQUEST	BIT(4)
 #define PCIEMSIFR		0x02044
 #define PCIEMSIALR		0x02048
-#define  MSIFE			1
+#define  MSIFE			BIT(0)
 #define PCIEMSIAUR		0x0204c
 #define PCIEMSIIER		0x02050
···
 #define MACCTLR			0x011058
 #define  SPEED_CHANGE		BIT(24)
 #define  SCRAMBLE_DISABLE	BIT(27)
+#define PMSR			0x01105c
 #define MACS2R			0x011078
 #define MACCGSPSETR		0x011084
 #define  SPCNGRSN		BIT(31)
···
 	struct rcar_msi msi;
 };

-static void rcar_pci_write_reg(struct rcar_pcie *pcie, unsigned long val,
-			       unsigned long reg)
+static void rcar_pci_write_reg(struct rcar_pcie *pcie, u32 val,
+			       unsigned int reg)
 {
 	writel(val, pcie->base + reg);
 }

-static unsigned long rcar_pci_read_reg(struct rcar_pcie *pcie,
-				       unsigned long reg)
+static u32 rcar_pci_read_reg(struct rcar_pcie *pcie, unsigned int reg)
 {
 	return readl(pcie->base + reg);
 }
···
 static void rcar_rmw32(struct rcar_pcie *pcie, int where, u32 mask, u32 data)
 {
-	int shift = 8 * (where & 3);
+	unsigned int shift = BITS_PER_BYTE * (where & 3);
 	u32 val = rcar_pci_read_reg(pcie, where & ~3);

 	val &= ~(mask << shift);
···
 static u32 rcar_read_conf(struct rcar_pcie *pcie, int where)
 {
-	int shift = 8 * (where & 3);
+	unsigned int shift = BITS_PER_BYTE * (where & 3);
 	u32 val = rcar_pci_read_reg(pcie, where & ~3);

 	return val >> shift;
···
		unsigned char access_type, struct pci_bus *bus,
		unsigned int devfn, int where, u32 *data)
 {
-	int dev, func, reg, index;
+	unsigned int dev, func, reg, index;

 	dev = PCI_SLOT(devfn);
 	func = PCI_FUNC(devfn);
···
 	}

 	if (size == 1)
-		*val = (*val >> (8 * (where & 3))) & 0xff;
+		*val = (*val >> (BITS_PER_BYTE * (where & 3))) & 0xff;
 	else if (size == 2)
-		*val = (*val >> (8 * (where & 2))) & 0xffff;
+		*val = (*val >> (BITS_PER_BYTE * (where & 2))) & 0xffff;

-	dev_dbg(&bus->dev, "pcie-config-read: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08lx\n",
-		bus->number, devfn, where, size, (unsigned long)*val);
+	dev_dbg(&bus->dev, "pcie-config-read: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08x\n",
+		bus->number, devfn, where, size, *val);

 	return ret;
 }
···
				int where, int size, u32 val)
 {
 	struct rcar_pcie *pcie = bus->sysdata;
-	int shift, ret;
+	unsigned int shift;
 	u32 data;
+	int ret;

 	ret = rcar_pcie_config_access(pcie, RCAR_PCI_ACCESS_READ,
				      bus, devfn, where, &data);
 	if (ret != PCIBIOS_SUCCESSFUL)
 		return ret;

-	dev_dbg(&bus->dev, "pcie-config-write: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08lx\n",
-		bus->number, devfn, where, size, (unsigned long)val);
+	dev_dbg(&bus->dev, "pcie-config-write: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08x\n",
+		bus->number, devfn, where, size, val);

 	if (size == 1) {
-		shift = 8 * (where & 3);
+		shift = BITS_PER_BYTE * (where & 3);
 		data &= ~(0xff << shift);
 		data |= ((val & 0xff) << shift);
 	} else if (size == 2) {
-		shift = 8 * (where & 2);
+		shift = BITS_PER_BYTE * (where & 2);
 		data &= ~(0xffff << shift);
 		data |= ((val & 0xffff) << shift);
 	} else
···
 }

 static void phy_write_reg(struct rcar_pcie *pcie,
-			  unsigned int rate, unsigned int addr,
-			  unsigned int lane, unsigned int data)
+			  unsigned int rate, u32 addr,
+			  unsigned int lane, u32 data)
 {
-	unsigned long phyaddr;
+	u32 phyaddr;

 	phyaddr = WRITE_CMD |
		  ((rate & 1) << RATE_POS) |
···
 	while (reg) {
 		unsigned int index = find_first_bit(&reg, 32);
-		unsigned int irq;
+		unsigned int msi_irq;

 		/* clear the interrupt */
 		rcar_pci_write_reg(pcie, 1 << index, PCIEMSIFR);

-		irq = irq_find_mapping(msi->domain, index);
-		if (irq) {
+		msi_irq = irq_find_mapping(msi->domain, index);
+		if (msi_irq) {
 			if (test_bit(index, msi->used))
-				generic_handle_irq(irq);
+				generic_handle_irq(msi_irq);
			else
				dev_info(dev, "unhandled MSI\n");
 		} else {
···
 {
 	struct device *dev = pcie->dev;
 	struct rcar_msi *msi = &pcie->msi;
-	unsigned long base;
+	phys_addr_t base;
 	int err, i;

 	mutex_init(&msi->lock);
···
 	/* setup MSI data target */
 	msi->pages = __get_free_pages(GFP_KERNEL, 0);
+	if (!msi->pages) {
+		err = -ENOMEM;
+		goto err;
+	}
 	base = virt_to_phys((void *)msi->pages);

-	rcar_pci_write_reg(pcie, base | MSIFE, PCIEMSIALR);
-	rcar_pci_write_reg(pcie, 0, PCIEMSIAUR);
+	rcar_pci_write_reg(pcie, lower_32_bits(base) | MSIFE, PCIEMSIALR);
+	rcar_pci_write_reg(pcie, upper_32_bits(base), PCIEMSIAUR);

 	/* enable all MSI interrupts */
 	rcar_pci_write_reg(pcie, 0xffffffff, PCIEMSIIER);
···
 {
 	struct device *dev = &pdev->dev;
 	struct rcar_pcie *pcie;
-	unsigned int data;
+	u32 data;
 	int err;
 	int (*phy_init_fn)(struct rcar_pcie *);
 	struct pci_host_bridge *bridge;
···
 	pcie = pci_host_bridge_priv(bridge);

 	pcie->dev = dev;
+	platform_set_drvdata(pdev, pcie);

 	err = pci_parse_request_of_pci_ranges(dev, &pcie->resources, NULL);
 	if (err)
···
 	return err;
 }

+static int rcar_pcie_resume_noirq(struct device *dev)
+{
+	struct rcar_pcie *pcie = dev_get_drvdata(dev);
+
+	if (rcar_pci_read_reg(pcie, PMSR) &&
+	    !(rcar_pci_read_reg(pcie, PCIETCTLR) & DL_DOWN))
+		return 0;
+
+	/* Re-establish the PCIe link */
+	rcar_pci_write_reg(pcie, CFINIT, PCIETCTLR);
+	return rcar_pcie_wait_for_dl(pcie);
+}
+
+static const struct dev_pm_ops rcar_pcie_pm_ops = {
+	.resume_noirq = rcar_pcie_resume_noirq,
+};
+
 static struct platform_driver rcar_pcie_driver = {
 	.driver = {
		.name = "rcar-pcie",
		.of_match_table = rcar_pcie_of_match,
+		.pm = &rcar_pcie_pm_ops,
		.suppress_bind_attrs = true,
 	},
 	.probe = rcar_pcie_probe,
+1 -1
drivers/pci/controller/pcie-rockchip-ep.c
···
 	struct rockchip_pcie *rockchip = &ep->rockchip;
 	u32 r = ep->max_regions - 1;
 	u32 offset;
-	u16 status;
+	u32 status;
 	u8 msg_code;

 	if (unlikely(ep->irq_pci_addr != ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR ||
+1
drivers/pci/controller/pcie-rockchip-host.c
···
 	rockchip->irq_domain = irq_domain_add_linear(intc, PCI_NUM_INTX,
						     &intx_domain_ops, rockchip);
+	of_node_put(intc);
 	if (!rockchip->irq_domain) {
 		dev_err(dev, "failed to get a INTx IRQ domain\n");
 		return -EINVAL;
+4 -5
drivers/pci/controller/pcie-xilinx-nwl.c
···
 #ifdef CONFIG_PCI_MSI
 static struct irq_chip nwl_msi_irq_chip = {
 	.name = "nwl_pcie:msi",
-	.irq_enable = unmask_msi_irq,
-	.irq_disable = mask_msi_irq,
-	.irq_mask = mask_msi_irq,
-	.irq_unmask = unmask_msi_irq,
-
+	.irq_enable = pci_msi_unmask_irq,
+	.irq_disable = pci_msi_mask_irq,
+	.irq_mask = pci_msi_mask_irq,
+	.irq_unmask = pci_msi_unmask_irq,
 };

 static struct msi_domain_info nwl_msi_domain_info = {
+10 -2
drivers/pci/controller/pcie-xilinx.c
···
 /**
  * xilinx_pcie_enable_msi - Enable MSI support
  * @port: PCIe port information
  */
-static void xilinx_pcie_enable_msi(struct xilinx_pcie_port *port)
+static int xilinx_pcie_enable_msi(struct xilinx_pcie_port *port)
 {
 	phys_addr_t msg_addr;

 	port->msi_pages = __get_free_pages(GFP_KERNEL, 0);
+	if (!port->msi_pages)
+		return -ENOMEM;
+
 	msg_addr = virt_to_phys((void *)port->msi_pages);
 	pcie_write(port, 0x0, XILINX_PCIE_REG_MSIBASE1);
 	pcie_write(port, msg_addr, XILINX_PCIE_REG_MSIBASE2);
+
+	return 0;
 }

 /* INTx Functions */
···
 	struct device *dev = port->dev;
 	struct device_node *node = dev->of_node;
 	struct device_node *pcie_intc_node;
+	int ret;

 	/* Setup INTx */
 	pcie_intc_node = of_get_next_child(node, NULL);
···
 			return -ENODEV;
 		}

-		xilinx_pcie_enable_msi(port);
+		ret = xilinx_pcie_enable_msi(port);
+		if (ret)
+			return ret;
 	}

 	return 0;
+8 -2
drivers/pci/endpoint/functions/pci-epf-test.c
···
 	epc_features = epf_test->epc_features;

 	base = pci_epf_alloc_space(epf, sizeof(struct pci_epf_test_reg),
-				   test_reg_bar);
+				   test_reg_bar, epc_features->align);
 	if (!base) {
 		dev_err(dev, "Failed to allocated register space\n");
 		return -ENOMEM;
···
 		if (!!(epc_features->reserved_bar & (1 << bar)))
 			continue;

-		base = pci_epf_alloc_space(epf, bar_size[bar], bar);
+		base = pci_epf_alloc_space(epf, bar_size[bar], bar,
+					   epc_features->align);
 		if (!base)
 			dev_err(dev, "Failed to allocate space for BAR%d\n",
				bar);
···
 	kpcitest_workqueue = alloc_workqueue("kpcitest",
					     WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+	if (!kpcitest_workqueue) {
+		pr_err("Failed to allocate the kpcitest work queue\n");
+		return -ENOMEM;
+	}
+
 	ret = pci_epf_register_driver(&test_driver);
 	if (ret) {
 		pr_err("Failed to register pci epf test driver --> %d\n", ret);
+8 -2
drivers/pci/endpoint/pci-epf-core.c
···
  * pci_epf_alloc_space() - allocate memory for the PCI EPF register space
  * @size: the size of the memory that has to be allocated
  * @bar: the BAR number corresponding to the allocated register space
+ * @align: alignment size for the allocation region
  *
  * Invoke to allocate memory for the PCI EPF register space.
  */
-void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar)
+void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
+			  size_t align)
 {
 	void *space;
 	struct device *dev = epf->epc->dev.parent;
···
 	if (size < 128)
 		size = 128;
-	size = roundup_pow_of_two(size);
+
+	if (align)
+		size = ALIGN(size, align);
+	else
+		size = roundup_pow_of_two(size);

 	space = dma_alloc_coherent(dev, size, &phys_addr, GFP_KERNEL);
 	if (!space) {
+8 -23
drivers/pci/hotplug/pciehp.h
···
 #include "../pcie/portdrv.h"

-#define MY_NAME	"pciehp"
-
 extern bool pciehp_poll_mode;
 extern int pciehp_poll_time;
-extern bool pciehp_debug;

-#define dbg(format, arg...)						\
-do {									\
-	if (pciehp_debug)						\
-		printk(KERN_DEBUG "%s: " format, MY_NAME, ## arg);	\
-} while (0)
-#define err(format, arg...)						\
-	printk(KERN_ERR "%s: " format, MY_NAME, ## arg)
-#define info(format, arg...)						\
-	printk(KERN_INFO "%s: " format, MY_NAME, ## arg)
-#define warn(format, arg...)						\
-	printk(KERN_WARNING "%s: " format, MY_NAME, ## arg)
-
+/*
+ * Set CONFIG_DYNAMIC_DEBUG=y and boot with 'dyndbg="file pciehp* +p"' to
+ * enable debug messages.
+ */
 #define ctrl_dbg(ctrl, format, arg...)					\
-	do {								\
-		if (pciehp_debug)					\
-			dev_printk(KERN_DEBUG, &ctrl->pcie->device,	\
-					format, ## arg);		\
-	} while (0)
+	pci_dbg(ctrl->pcie->port, format, ## arg)
 #define ctrl_err(ctrl, format, arg...)					\
-	dev_err(&ctrl->pcie->device, format, ## arg)
+	pci_err(ctrl->pcie->port, format, ## arg)
 #define ctrl_info(ctrl, format, arg...)					\
-	dev_info(&ctrl->pcie->device, format, ## arg)
+	pci_info(ctrl->pcie->port, format, ## arg)
 #define ctrl_warn(ctrl, format, arg...)					\
-	dev_warn(&ctrl->pcie->device, format, ## arg)
+	pci_warn(ctrl->pcie->port, format, ## arg)

 #define SLOT_NAME_SIZE 10
+8 -10
drivers/pci/hotplug/pciehp_core.c
···
  * Dely Sy <dely.l.sy@intel.com>"
  */

+#define pr_fmt(fmt) "pciehp: " fmt
+#define dev_fmt pr_fmt
+
 #include <linux/moduleparam.h>
 #include <linux/kernel.h>
 #include <linux/slab.h>
···
 #include "../pci.h"

 /* Global variables */
-bool pciehp_debug;
 bool pciehp_poll_mode;
 int pciehp_poll_time;
···
  * not really modular, but the easiest way to keep compat with existing
  * bootargs behaviour is to continue using module_param here.
  */
-module_param(pciehp_debug, bool, 0644);
 module_param(pciehp_poll_mode, bool, 0644);
 module_param(pciehp_poll_time, int, 0644);
-MODULE_PARM_DESC(pciehp_debug, "Debugging mode enabled or not");
 MODULE_PARM_DESC(pciehp_poll_mode, "Using polling mechanism for hot-plug events or not");
 MODULE_PARM_DESC(pciehp_poll_time, "Polling mechanism frequency, in seconds");
-
-#define PCIE_MODULE_NAME "pciehp"

 static int set_attention_status(struct hotplug_slot *slot, u8 value);
 static int get_power_status(struct hotplug_slot *slot, u8 *value);
···
 	if (!dev->port->subordinate) {
 		/* Can happen if we run out of bus numbers during probe */
-		dev_err(&dev->device,
+		pci_err(dev->port,
			"Hotplug bridge without secondary bus, ignoring\n");
 		return -ENODEV;
 	}

 	ctrl = pcie_init(dev);
 	if (!ctrl) {
-		dev_err(&dev->device, "Controller initialization failed\n");
+		pci_err(dev->port, "Controller initialization failed\n");
 		return -ENODEV;
 	}
 	set_service_data(dev, ctrl);
···
 #endif	/* PM */

 static struct pcie_port_service_driver hpdriver_portdrv = {
-	.name		= PCIE_MODULE_NAME,
+	.name		= "pciehp",
 	.port_type	= PCIE_ANY_PORT,
 	.service	= PCIE_PORT_SERVICE_HP,
···
 	int retval = 0;

 	retval = pcie_port_service_register(&hpdriver_portdrv);
-	dbg("pcie_port_service_register = %d\n", retval);
+	pr_debug("pcie_port_service_register = %d\n", retval);
 	if (retval)
-		dbg("Failure to register service\n");
+		pr_debug("Failure to register service\n");

 	return retval;
 }
+2
drivers/pci/hotplug/pciehp_ctrl.c
···
  *
  */

+#define dev_fmt(fmt) "pciehp: " fmt
+
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <linux/pm_runtime.h>
+8 -9
drivers/pci/hotplug/pciehp_hpc.c
···
  * Send feedback to <greg@kroah.com>,<kristen.c.accardi@intel.com>
  */

+#define dev_fmt(fmt) "pciehp: " fmt
+
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <linux/jiffies.h>
···
 	/* Installs the interrupt handler */
 	retval = request_threaded_irq(irq, pciehp_isr, pciehp_ist,
-				      IRQF_SHARED, MY_NAME, ctrl);
+				      IRQF_SHARED, "pciehp", ctrl);
 	if (retval)
 		ctrl_err(ctrl, "Cannot get irq %d for the hotplug controller\n",
			 irq);
···
 		delay -= step;
 	} while (delay > 0);

-	if (count > 1 && pciehp_debug)
-		printk(KERN_DEBUG "pci %04x:%02x:%02x.%d id reading try %d times with interval %d ms to get %08x\n",
+	if (count > 1)
+		pr_debug("pci %04x:%02x:%02x.%d id reading try %d times with interval %d ms to get %08x\n",
			pci_domain_nr(bus), bus->number, PCI_SLOT(devfn),
			PCI_FUNC(devfn), count, step, l);
···
 	struct pci_dev *pdev = ctrl->pcie->port;
 	u16 reg16;

-	if (!pciehp_debug)
-		return;
-
-	ctrl_info(ctrl, "Slot Capabilities      : 0x%08x\n", ctrl->slot_cap);
+	ctrl_dbg(ctrl, "Slot Capabilities      : 0x%08x\n", ctrl->slot_cap);
 	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &reg16);
-	ctrl_info(ctrl, "Slot Status            : 0x%04x\n", reg16);
+	ctrl_dbg(ctrl, "Slot Status            : 0x%04x\n", reg16);
 	pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &reg16);
-	ctrl_info(ctrl, "Slot Control           : 0x%04x\n", reg16);
+	ctrl_dbg(ctrl, "Slot Control           : 0x%04x\n", reg16);
 }

 #define FLAG(x, y)	(((x) & (y)) ? '+' : '-')
+2
drivers/pci/hotplug/pciehp_pci.c
···
  *
  */

+#define dev_fmt(fmt) "pciehp: " fmt
+
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <linux/pci.h>
drivers/pci/hotplug/rpadlpar_core.c (+4)

···
 		if (rc == 0)
 			break;
 	}
+	of_node_put(parent);
 
 	return dn;
 }
···
 	return np;
 }
 
+/* Returns a device_node with its reference count incremented */
 static struct device_node *find_dlpar_node(char *drc_name, int *node_type)
 {
 	struct device_node *dn;
···
 		rc = dlpar_add_phb(drc_name, dn);
 		break;
 	}
+	of_node_put(dn);
 
 	printk(KERN_INFO "%s: slot %s added\n", DLPAR_MODULE_NAME, drc_name);
 exit:
···
 		rc = dlpar_remove_pci_slot(drc_name, dn);
 		break;
 	}
+	of_node_put(dn);
 	vm_unmap_aliases();
 
 	printk(KERN_INFO "%s: slot %s removed\n", DLPAR_MODULE_NAME, drc_name);
drivers/pci/hotplug/rpaphp_slot.c (+2 -1)

···
 /* free up the memory used by a slot */
 void dealloc_slot_struct(struct slot *slot)
 {
+	of_node_put(slot->dn);
 	kfree(slot->name);
 	kfree(slot);
 }
···
 	slot->name = kstrdup(drc_name, GFP_KERNEL);
 	if (!slot->name)
 		goto error_slot;
-	slot->dn = dn;
+	slot->dn = of_node_get(dn);
 	slot->index = drc_index;
 	slot->power_domain = power_domain;
 	slot->hotplug_slot.ops = &rpaphp_hotplug_slot_ops;
drivers/pci/msi.c (+3 -3)

···
 					       struct msi_desc *desc)
 {
 	return (irq_hw_number_t)desc->msi_attrib.entry_nr |
-		PCI_DEVID(dev->bus->number, dev->devfn) << 11 |
+		pci_dev_id(dev) << 11 |
 		(pci_domain_nr(dev->bus) & 0xFFFFFFFF) << 27;
 }
···
 u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev)
 {
 	struct device_node *of_node;
-	u32 rid = PCI_DEVID(pdev->bus->number, pdev->devfn);
+	u32 rid = pci_dev_id(pdev);
 
 	pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid);
···
 struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev)
 {
 	struct irq_domain *dom;
-	u32 rid = PCI_DEVID(pdev->bus->number, pdev->devfn);
+	u32 rid = pci_dev_id(pdev);
 
 	pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid);
 	dom = of_msi_map_get_device_domain(&pdev->dev, rid);
drivers/pci/of.c (+33 -25)

···
 #include <linux/of_pci.h>
 #include "pci.h"
 
+#ifdef CONFIG_PCI
 void pci_set_of_node(struct pci_dev *dev)
 {
 	if (!dev->bus->dev.of_node)
···
 
 void pci_set_bus_of_node(struct pci_bus *bus)
 {
-	if (bus->self == NULL)
-		bus->dev.of_node = pcibios_get_phb_of_node(bus);
-	else
-		bus->dev.of_node = of_node_get(bus->self->dev.of_node);
+	struct device_node *node;
+
+	if (bus->self == NULL) {
+		node = pcibios_get_phb_of_node(bus);
+	} else {
+		node = of_node_get(bus->self->dev.of_node);
+		if (node && of_property_read_bool(node, "external-facing"))
+			bus->self->untrusted = true;
+	}
+	bus->dev.of_node = node;
 }
 
 void pci_release_bus_of_node(struct pci_bus *bus)
···
 	return (u16)domain;
 }
 EXPORT_SYMBOL_GPL(of_get_pci_domain_nr);
-
-/**
- * This function will try to find the limitation of link speed by finding
- * a property called "max-link-speed" of the given device node.
- *
- * @node: device tree node with the max link speed information
- *
- * Returns the associated max link speed from DT, or a negative value if the
- * required property is not found or is invalid.
- */
-int of_pci_get_max_link_speed(struct device_node *node)
-{
-	u32 max_link_speed;
-
-	if (of_property_read_u32(node, "max-link-speed", &max_link_speed) ||
-	    max_link_speed > 4)
-		return -EINVAL;
-
-	return max_link_speed;
-}
-EXPORT_SYMBOL_GPL(of_pci_get_max_link_speed);
 
 /**
  * of_pci_check_probe_only - Setup probe only mode if linux,pci-probe-only
···
 	return err;
 }
 
+#endif /* CONFIG_PCI */
+
+/**
+ * This function will try to find the limitation of link speed by finding
+ * a property called "max-link-speed" of the given device node.
+ *
+ * @node: device tree node with the max link speed information
+ *
+ * Returns the associated max link speed from DT, or a negative value if the
+ * required property is not found or is invalid.
+ */
+int of_pci_get_max_link_speed(struct device_node *node)
+{
+	u32 max_link_speed;
+
+	if (of_property_read_u32(node, "max-link-speed", &max_link_speed) ||
+	    max_link_speed > 4)
+		return -EINVAL;
+
+	return max_link_speed;
+}
+EXPORT_SYMBOL_GPL(of_pci_get_max_link_speed);
drivers/pci/p2pdma.c (+35 -3)

···
 }
 
 /*
+ * If we can't find a common upstream bridge take a look at the root
+ * complex and compare it to a whitelist of known good hardware.
+ */
+static bool root_complex_whitelist(struct pci_dev *dev)
+{
+	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
+	struct pci_dev *root = pci_get_slot(host->bus, PCI_DEVFN(0, 0));
+	unsigned short vendor, device;
+
+	if (!root)
+		return false;
+
+	vendor = root->vendor;
+	device = root->device;
+	pci_dev_put(root);
+
+	/* AMD ZEN host bridges can do peer to peer */
+	if (vendor == PCI_VENDOR_ID_AMD && device == 0x1450)
+		return true;
+
+	return false;
+}
+
+/*
  * Find the distance through the nearest common upstream bridge between
  * two PCI devices.
  *
···
  * In this case, a list of all infringing bridge addresses will be
  * populated in acs_list (assuming it's non-null) for printk purposes.
  */
-static int upstream_bridge_distance(struct pci_dev *a,
-				    struct pci_dev *b,
+static int upstream_bridge_distance(struct pci_dev *provider,
+				    struct pci_dev *client,
 				    struct seq_buf *acs_list)
 {
+	struct pci_dev *a = provider, *b = client, *bb;
 	int dist_a = 0;
 	int dist_b = 0;
-	struct pci_dev *bb = NULL;
 	int acs_cnt = 0;
···
 		a = pci_upstream_bridge(a);
 		dist_a++;
 	}
+
+	/*
+	 * Allow the connection if both devices are on a whitelisted root
+	 * complex, but add an arbitary large value to the distance.
+	 */
+	if (root_complex_whitelist(provider) &&
+	    root_complex_whitelist(client))
+		return 0x1000 + dist_a + dist_b;
 
 	return -1;
drivers/pci/pci-acpi.c (+125 -58)

···
 }
 
 static acpi_status decode_type0_hpx_record(union acpi_object *record,
-					   struct hotplug_params *hpx)
+					   struct hpp_type0 *hpx0)
 {
 	int i;
 	union acpi_object *fields = record->package.elements;
···
 		for (i = 2; i < 6; i++)
 			if (fields[i].type != ACPI_TYPE_INTEGER)
 				return AE_ERROR;
-		hpx->t0 = &hpx->type0_data;
-		hpx->t0->revision = revision;
-		hpx->t0->cache_line_size = fields[2].integer.value;
-		hpx->t0->latency_timer = fields[3].integer.value;
-		hpx->t0->enable_serr = fields[4].integer.value;
-		hpx->t0->enable_perr = fields[5].integer.value;
+		hpx0->revision = revision;
+		hpx0->cache_line_size = fields[2].integer.value;
+		hpx0->latency_timer = fields[3].integer.value;
+		hpx0->enable_serr = fields[4].integer.value;
+		hpx0->enable_perr = fields[5].integer.value;
 		break;
 	default:
-		printk(KERN_WARNING
-		       "%s: Type 0 Revision %d record not supported\n",
+		pr_warn("%s: Type 0 Revision %d record not supported\n",
 		       __func__, revision);
 		return AE_ERROR;
 	}
···
 }
 
 static acpi_status decode_type1_hpx_record(union acpi_object *record,
-					   struct hotplug_params *hpx)
+					   struct hpp_type1 *hpx1)
 {
 	int i;
 	union acpi_object *fields = record->package.elements;
···
 		for (i = 2; i < 5; i++)
 			if (fields[i].type != ACPI_TYPE_INTEGER)
 				return AE_ERROR;
-		hpx->t1 = &hpx->type1_data;
-		hpx->t1->revision = revision;
-		hpx->t1->max_mem_read = fields[2].integer.value;
-		hpx->t1->avg_max_split = fields[3].integer.value;
-		hpx->t1->tot_max_split = fields[4].integer.value;
+		hpx1->revision = revision;
+		hpx1->max_mem_read = fields[2].integer.value;
+		hpx1->avg_max_split = fields[3].integer.value;
+		hpx1->tot_max_split = fields[4].integer.value;
 		break;
 	default:
-		printk(KERN_WARNING
-		       "%s: Type 1 Revision %d record not supported\n",
+		pr_warn("%s: Type 1 Revision %d record not supported\n",
 		       __func__, revision);
 		return AE_ERROR;
 	}
···
 }
 
 static acpi_status decode_type2_hpx_record(union acpi_object *record,
-					   struct hotplug_params *hpx)
+					   struct hpp_type2 *hpx2)
 {
 	int i;
 	union acpi_object *fields = record->package.elements;
···
 		for (i = 2; i < 18; i++)
 			if (fields[i].type != ACPI_TYPE_INTEGER)
 				return AE_ERROR;
-		hpx->t2 = &hpx->type2_data;
-		hpx->t2->revision = revision;
-		hpx->t2->unc_err_mask_and = fields[2].integer.value;
-		hpx->t2->unc_err_mask_or = fields[3].integer.value;
-		hpx->t2->unc_err_sever_and = fields[4].integer.value;
-		hpx->t2->unc_err_sever_or = fields[5].integer.value;
-		hpx->t2->cor_err_mask_and = fields[6].integer.value;
-		hpx->t2->cor_err_mask_or = fields[7].integer.value;
-		hpx->t2->adv_err_cap_and = fields[8].integer.value;
-		hpx->t2->adv_err_cap_or = fields[9].integer.value;
-		hpx->t2->pci_exp_devctl_and = fields[10].integer.value;
-		hpx->t2->pci_exp_devctl_or = fields[11].integer.value;
-		hpx->t2->pci_exp_lnkctl_and = fields[12].integer.value;
-		hpx->t2->pci_exp_lnkctl_or = fields[13].integer.value;
-		hpx->t2->sec_unc_err_sever_and = fields[14].integer.value;
-		hpx->t2->sec_unc_err_sever_or = fields[15].integer.value;
-		hpx->t2->sec_unc_err_mask_and = fields[16].integer.value;
-		hpx->t2->sec_unc_err_mask_or = fields[17].integer.value;
+		hpx2->revision = revision;
+		hpx2->unc_err_mask_and = fields[2].integer.value;
+		hpx2->unc_err_mask_or = fields[3].integer.value;
+		hpx2->unc_err_sever_and = fields[4].integer.value;
+		hpx2->unc_err_sever_or = fields[5].integer.value;
+		hpx2->cor_err_mask_and = fields[6].integer.value;
+		hpx2->cor_err_mask_or = fields[7].integer.value;
+		hpx2->adv_err_cap_and = fields[8].integer.value;
+		hpx2->adv_err_cap_or = fields[9].integer.value;
+		hpx2->pci_exp_devctl_and = fields[10].integer.value;
+		hpx2->pci_exp_devctl_or = fields[11].integer.value;
+		hpx2->pci_exp_lnkctl_and = fields[12].integer.value;
+		hpx2->pci_exp_lnkctl_or = fields[13].integer.value;
+		hpx2->sec_unc_err_sever_and = fields[14].integer.value;
+		hpx2->sec_unc_err_sever_or = fields[15].integer.value;
+		hpx2->sec_unc_err_mask_and = fields[16].integer.value;
+		hpx2->sec_unc_err_mask_or = fields[17].integer.value;
 		break;
 	default:
-		printk(KERN_WARNING
-		       "%s: Type 2 Revision %d record not supported\n",
+		pr_warn("%s: Type 2 Revision %d record not supported\n",
 		       __func__, revision);
 		return AE_ERROR;
 	}
 	return AE_OK;
 }
 
-static acpi_status acpi_run_hpx(acpi_handle handle, struct hotplug_params *hpx)
+static void parse_hpx3_register(struct hpx_type3 *hpx3_reg,
+				union acpi_object *reg_fields)
+{
+	hpx3_reg->device_type = reg_fields[0].integer.value;
+	hpx3_reg->function_type = reg_fields[1].integer.value;
+	hpx3_reg->config_space_location = reg_fields[2].integer.value;
+	hpx3_reg->pci_exp_cap_id = reg_fields[3].integer.value;
+	hpx3_reg->pci_exp_cap_ver = reg_fields[4].integer.value;
+	hpx3_reg->pci_exp_vendor_id = reg_fields[5].integer.value;
+	hpx3_reg->dvsec_id = reg_fields[6].integer.value;
+	hpx3_reg->dvsec_rev = reg_fields[7].integer.value;
+	hpx3_reg->match_offset = reg_fields[8].integer.value;
+	hpx3_reg->match_mask_and = reg_fields[9].integer.value;
+	hpx3_reg->match_value = reg_fields[10].integer.value;
+	hpx3_reg->reg_offset = reg_fields[11].integer.value;
+	hpx3_reg->reg_mask_and = reg_fields[12].integer.value;
+	hpx3_reg->reg_mask_or = reg_fields[13].integer.value;
+}
+
+static acpi_status program_type3_hpx_record(struct pci_dev *dev,
+					    union acpi_object *record,
+					    const struct hotplug_program_ops *hp_ops)
+{
+	union acpi_object *fields = record->package.elements;
+	u32 desc_count, expected_length, revision;
+	union acpi_object *reg_fields;
+	struct hpx_type3 hpx3;
+	int i;
+
+	revision = fields[1].integer.value;
+	switch (revision) {
+	case 1:
+		desc_count = fields[2].integer.value;
+		expected_length = 3 + desc_count * 14;
+
+		if (record->package.count != expected_length)
+			return AE_ERROR;
+
+		for (i = 2; i < expected_length; i++)
+			if (fields[i].type != ACPI_TYPE_INTEGER)
+				return AE_ERROR;
+
+		for (i = 0; i < desc_count; i++) {
+			reg_fields = fields + 3 + i * 14;
+			parse_hpx3_register(&hpx3, reg_fields);
+			hp_ops->program_type3(dev, &hpx3);
+		}
+
+		break;
+	default:
+		printk(KERN_WARNING
+		       "%s: Type 3 Revision %d record not supported\n",
+		       __func__, revision);
+		return AE_ERROR;
+	}
+	return AE_OK;
+}
+
+static acpi_status acpi_run_hpx(struct pci_dev *dev, acpi_handle handle,
+				const struct hotplug_program_ops *hp_ops)
 {
 	acpi_status status;
 	struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
 	union acpi_object *package, *record, *fields;
+	struct hpp_type0 hpx0;
+	struct hpp_type1 hpx1;
+	struct hpp_type2 hpx2;
 	u32 type;
 	int i;
-
-	/* Clear the return buffer with zeros */
-	memset(hpx, 0, sizeof(struct hotplug_params));
 
 	status = acpi_evaluate_object(handle, "_HPX", NULL, &buffer);
 	if (ACPI_FAILURE(status))
···
 	type = fields[0].integer.value;
 	switch (type) {
 	case 0:
-		status = decode_type0_hpx_record(record, hpx);
+		memset(&hpx0, 0, sizeof(hpx0));
+		status = decode_type0_hpx_record(record, &hpx0);
 		if (ACPI_FAILURE(status))
 			goto exit;
+		hp_ops->program_type0(dev, &hpx0);
 		break;
 	case 1:
-		status = decode_type1_hpx_record(record, hpx);
+		memset(&hpx1, 0, sizeof(hpx1));
+		status = decode_type1_hpx_record(record, &hpx1);
 		if (ACPI_FAILURE(status))
 			goto exit;
+		hp_ops->program_type1(dev, &hpx1);
 		break;
 	case 2:
-		status = decode_type2_hpx_record(record, hpx);
+		memset(&hpx2, 0, sizeof(hpx2));
+		status = decode_type2_hpx_record(record, &hpx2);
+		if (ACPI_FAILURE(status))
+			goto exit;
+		hp_ops->program_type2(dev, &hpx2);
+		break;
+	case 3:
+		status = program_type3_hpx_record(dev, record, hp_ops);
 		if (ACPI_FAILURE(status))
 			goto exit;
 		break;
 	default:
-		printk(KERN_ERR "%s: Type %d record not supported\n",
+		pr_err("%s: Type %d record not supported\n",
 		       __func__, type);
 		status = AE_ERROR;
 		goto exit;
···
 	return status;
 }
 
-static acpi_status acpi_run_hpp(acpi_handle handle, struct hotplug_params *hpp)
+static acpi_status acpi_run_hpp(struct pci_dev *dev, acpi_handle handle,
+				const struct hotplug_program_ops *hp_ops)
 {
 	acpi_status status;
 	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
 	union acpi_object *package, *fields;
+	struct hpp_type0 hpp0;
 	int i;
 
-	memset(hpp, 0, sizeof(struct hotplug_params));
+	memset(&hpp0, 0, sizeof(hpp0));
 
 	status = acpi_evaluate_object(handle, "_HPP", NULL, &buffer);
 	if (ACPI_FAILURE(status))
···
 		}
 	}
 
-	hpp->t0 = &hpp->type0_data;
-	hpp->t0->revision = 1;
-	hpp->t0->cache_line_size = fields[0].integer.value;
-	hpp->t0->latency_timer = fields[1].integer.value;
-	hpp->t0->enable_serr = fields[2].integer.value;
-	hpp->t0->enable_perr = fields[3].integer.value;
+	hpp0.revision = 1;
+	hpp0.cache_line_size = fields[0].integer.value;
+	hpp0.latency_timer = fields[1].integer.value;
+	hpp0.enable_serr = fields[2].integer.value;
+	hpp0.enable_perr = fields[3].integer.value;
+
+	hp_ops->program_type0(dev, &hpp0);
 
 exit:
 	kfree(buffer.pointer);
···
  * @dev - the pci_dev for which we want parameters
  * @hpp - allocated by the caller
  */
-int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp)
+int pci_acpi_program_hp_params(struct pci_dev *dev,
+			       const struct hotplug_program_ops *hp_ops)
 {
 	acpi_status status;
 	acpi_handle handle, phandle;
···
 	 * this pci dev.
 	 */
 	while (handle) {
-		status = acpi_run_hpx(handle, hpp);
+		status = acpi_run_hpx(dev, handle, hp_ops);
 		if (ACPI_SUCCESS(status))
 			return 0;
-		status = acpi_run_hpp(handle, hpp);
+		status = acpi_run_hpp(dev, handle, hp_ops);
 		if (ACPI_SUCCESS(status))
 			return 0;
 		if (acpi_is_root_bridge(handle))
···
 	}
 	return -ENODEV;
 }
-EXPORT_SYMBOL_GPL(pci_get_hp_params);
 
 /**
  * pciehp_is_native - Check whether a hotplug port is handled by the OS
drivers/pci/pci-stub.c (+4 -6)

···
 				&class, &class_mask);
 
 		if (fields < 2) {
-			printk(KERN_WARNING
-			       "pci-stub: invalid id string \"%s\"\n", id);
+			pr_warn("pci-stub: invalid ID string \"%s\"\n", id);
 			continue;
 		}
 
-		printk(KERN_INFO
-		       "pci-stub: add %04X:%04X sub=%04X:%04X cls=%08X/%08X\n",
+		pr_info("pci-stub: add %04X:%04X sub=%04X:%04X cls=%08X/%08X\n",
 			vendor, device, subvendor, subdevice, class, class_mask);
 
 		rc = pci_add_dynid(&stub_driver, vendor, device,
 				   subvendor, subdevice, class, class_mask, 0);
 		if (rc)
-			printk(KERN_WARNING
-			       "pci-stub: failed to add dynamic id (%d)\n", rc);
+			pr_warn("pci-stub: failed to add dynamic ID (%d)\n",
+				rc);
 	}
 
 	return 0;
drivers/pci/pci-sysfs.c (+1 -2)

···
 	kfree(b->legacy_io);
 	b->legacy_io = NULL;
 kzalloc_err:
-	printk(KERN_WARNING "pci: warning: could not create legacy I/O port and ISA memory resources to sysfs\n");
-	return;
+	dev_warn(&b->dev, "could not create legacy I/O port and ISA memory resources in sysfs\n");
 }
 
 void pci_remove_legacy_files(struct pci_bus *b)
drivers/pci/pci.c (+165 -179)

···
 
 /**
  * pci_dev_str_match_path - test if a path string matches a device
- * @dev:	the PCI device to test
- * @path:	string to match the device against
+ * @dev: the PCI device to test
+ * @path: string to match the device against
  * @endptr: pointer to the string after the match
  *
  * Test if a string (typically from a kernel parameter) formatted as a
···
 
 /**
  * pci_dev_str_match - test if a string matches a device
- * @dev:	the PCI device to test
- * @p:		string to match the device against
+ * @dev: the PCI device to test
+ * @p: string to match the device against
  * @endptr: pointer to the string after the match
  *
  * Test if a string (typically from a kernel parameter) matches a specified
···
 	} else {
 		/*
 		 * PCI Bus, Device, Function IDs are specified
-		 *  (optionally, may include a path of devfns following it)
+		 * (optionally, may include a path of devfns following it)
 		 */
 		ret = pci_dev_str_match_path(dev, p, &p);
 		if (ret < 0)
···
 * Tell if a device supports a given PCI capability.
 * Returns the address of the requested capability structure within the
 * device's PCI configuration space or 0 in case the device does not
- * support it.  Possible values for @cap:
+ * support it.  Possible values for @cap include:
 *
 *  %PCI_CAP_ID_PM   Power Management
 *  %PCI_CAP_ID_AGP  Accelerated Graphics Port
···
 /**
  * pci_bus_find_capability - query for devices' capabilities
- * @bus:	the PCI bus to query
+ * @bus: the PCI bus to query
  * @devfn: PCI device to query
- * @cap:	capability code
+ * @cap: capability code
 *
- * Like pci_find_capability() but works for pci devices that do not have a
+ * Like pci_find_capability() but works for PCI devices that do not have a
 * pci_dev structure set up yet.
 *
 * Returns the address of the requested capability structure within the
···
 *
 * Returns the address of the requested extended capability structure
 * within the device's PCI configuration space or 0 if the device does
- * not support it.  Possible values for @cap:
+ * not support it.  Possible values for @cap include:
 *
 *  %PCI_EXT_CAP_ID_ERR  Advanced Error Reporting
 *  %PCI_EXT_CAP_ID_VC   Virtual Channel
···
 EXPORT_SYMBOL_GPL(pci_find_ht_capability);
 
 /**
- * pci_find_parent_resource - return resource region of parent bus of given region
+ * pci_find_parent_resource - return resource region of parent bus of given
+ *			      region
  * @dev: PCI device structure contains resources to be searched
  * @res: child resource record for which parent is sought
  *
- * For given resource region of given device, return the resource
- * region of parent bus the given region is contained in.
+ * For given resource region of given device, return the resource region of
+ * parent bus the given region is contained in.
 */
 struct resource *pci_find_parent_resource(const struct pci_dev *dev,
 					  struct resource *res)
···
 /**
  * pci_raw_set_power_state - Use PCI PM registers to set the power state of
- *                          given PCI device
+ *			     given PCI device
  * @dev: PCI device to handle.
  * @state: PCI power state (D0, D1, D2, D3hot) to put the device into.
  *
···
 	if (state < PCI_D0 || state > PCI_D3hot)
 		return -EINVAL;
 
-	/* Validate current state:
+	/*
+	 * Validate current state:
 	 * Can enter D0 from any state, but if we can only go deeper
 	 * to sleep if we're already in a low power state
 	 */
···
 		return -EINVAL;
 	}
 
-	/* check if this device supports the desired state */
+	/* Check if this device supports the desired state */
 	if ((state == PCI_D1 && !dev->d1_support)
 	   || (state == PCI_D2 && !dev->d2_support))
 		return -EIO;
 
 	pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
 
-	/* If we're (effectively) in D3, force entire word to 0.
+	/*
+	 * If we're (effectively) in D3, force entire word to 0.
 	 * This doesn't affect PME_Status, disables PME_En, and
 	 * sets PowerState to 0.
 	 */
···
 		break;
 	}
 
-	/* enter specified state */
+	/* Enter specified state */
 	pci_write_config_word(dev, dev->pm_cap + PCI_PM_CTRL, pmcsr);
 
-	/* Mandatory power management transition delays */
-	/* see PCI PM 1.1 5.6.1 table 18 */
+	/*
+	 * Mandatory power management transition delays; see PCI PM 1.1
+	 * 5.6.1 table 18
+	 */
 	if (state == PCI_D3hot || dev->current_state == PCI_D3hot)
 		pci_dev_d3_sleep(dev);
 	else if (state == PCI_D2 || dev->current_state == PCI_D2)
···
 {
 	int error;
 
-	/* bound the state we're entering */
+	/* Bound the state we're entering */
 	if (state > PCI_D3cold)
 		state = PCI_D3cold;
 	else if (state < PCI_D0)
 		state = PCI_D0;
 	else if ((state == PCI_D1 || state == PCI_D2) && pci_no_d1d2(dev))
+
 		/*
-		 * If the device or the parent bridge do not support PCI PM,
-		 * ignore the request if we're doing anything other than putting
-		 * it into D0 (which would only happen on boot).
+		 * If the device or the parent bridge do not support PCI
+		 * PM, ignore the request if we're doing anything other
+		 * than putting it into D0 (which would only happen on
+		 * boot).
 		 */
 		return 0;
···
 
 	__pci_start_power_transition(dev, state);
 
-	/* This device is quirked not to be put into D3, so
-	   don't put it in D3 */
+	/*
+	 * This device is quirked not to be put into D3, so don't put it in
+	 * D3
+	 */
 	if (state >= PCI_D3hot && (dev->dev_flags & PCI_DEV_FLAGS_NO_D3))
 		return 0;
···
  * pci_choose_state - Choose the power state of a PCI device
  * @dev: PCI device to be suspended
  * @state: target sleep state for the whole system. This is the value
- *	that is passed to suspend() function.
+ *	  that is passed to suspend() function.
  *
  * Returns PCI power state suitable for given device and given system
  * message.
  */
-
 pci_power_t pci_choose_state(struct pci_dev *dev, pm_message_t state)
 {
 	pci_power_t ret;
···
 /**
- * pci_save_state - save the PCI configuration space of a device before suspending
- * @dev: - PCI device that we're dealing with
+ * pci_save_state - save the PCI configuration space of a device before
+ *		    suspending
+ * @dev: PCI device that we're dealing with
 */
 int pci_save_state(struct pci_dev *dev)
 {
···
 /**
 * pci_restore_state - Restore the saved state of a PCI device
- * @dev: - PCI device that we're dealing with
+ * @dev: PCI device that we're dealing with
 */
 void pci_restore_state(struct pci_dev *dev)
 {
···
 * pci_reenable_device - Resume abandoned device
 * @dev: PCI device to be resumed
 *
- * Note this function is a backend of pci_default_resume and is not supposed
- * to be called by normal code, write proper resume handler and use it instead.
+ * NOTE: This function is a backend of pci_default_resume() and is not supposed
+ * to be called by normal code, write proper resume handler and use it instead.
 */
 int pci_reenable_device(struct pci_dev *dev)
 {
···
 * pci_enable_device_io - Initialize a device for use with IO space
 * @dev: PCI device to be initialized
 *
- *  Initialize device before it's used by a driver. Ask low-level code
- *  to enable I/O resources. Wake up the device if it was suspended.
- *  Beware, this function can fail.
+ * Initialize device before it's used by a driver.  Ask low-level code
+ * to enable I/O resources.  Wake up the device if it was suspended.
+ * Beware, this function can fail.
 */
 int pci_enable_device_io(struct pci_dev *dev)
 {
···
 * pci_enable_device_mem - Initialize a device for use with Memory space
 * @dev: PCI device to be initialized
 *
- *  Initialize device before it's used by a driver. Ask low-level code
- *  to enable Memory resources. Wake up the device if it was suspended.
- *  Beware, this function can fail.
+ * Initialize device before it's used by a driver.  Ask low-level code
+ * to enable Memory resources.  Wake up the device if it was suspended.
+ * Beware, this function can fail.
 */
 int pci_enable_device_mem(struct pci_dev *dev)
 {
···
 * pci_enable_device - Initialize device before it's used by a driver.
 * @dev: PCI device to be initialized
 *
- *  Initialize device before it's used by a driver. Ask low-level code
- *  to enable I/O and memory. Wake up the device if it was suspended.
- *  Beware, this function can fail.
+ * Initialize device before it's used by a driver.  Ask low-level code
+ * to enable I/O and memory.  Wake up the device if it was suspended.
+ * Beware, this function can fail.
 *
- *  Note we don't actually enable the device many times if we call
- *  this function repeatedly (we just increment the count).
+ * Note we don't actually enable the device many times if we call
+ * this function repeatedly (we just increment the count).
 */
 int pci_enable_device(struct pci_dev *dev)
 {
···
 EXPORT_SYMBOL(pci_enable_device);
 
 /*
- * Managed PCI resources.  This manages device on/off, intx/msi/msix
- * on/off and BAR regions.  pci_dev itself records msi/msix status, so
+ * Managed PCI resources.  This manages device on/off, INTx/MSI/MSI-X
+ * on/off and BAR regions.  pci_dev itself records MSI/MSI-X status, so
 * there's no need to track it separately.  pci_devres is initialized
 * when a device is enabled using managed PCI device enable interface.
 */
···
 }
 
 /**
- * pcibios_release_device - provide arch specific hooks when releasing device dev
+ * pcibios_release_device - provide arch specific hooks when releasing
+ *			    device dev
 * @dev: the PCI device being released
 *
 * Permits the platform to provide architecture specific functionality when
···
 * @dev: the PCIe device reset
 * @state: Reset state to enter into
 *
- *
- * Sets the PCIe reset state for the device. This is the default
+ * Set the PCIe reset state for the device. This is the default
 * implementation. Architecture implementations can override this.
 */
 int __weak pcibios_set_pcie_reset_state(struct pci_dev *dev,
···
 * pci_set_pcie_reset_state - set reset state for device dev
 * @dev: the PCIe device reset
 * @state: Reset state to enter into
- *
 *
 * Sets the PCI reset state for the device.
 */
···
 }
 
 /**
- * pci_prepare_to_sleep - prepare PCI device for system-wide transition into a sleep state
+ * pci_prepare_to_sleep - prepare PCI device for system-wide transition
+ *			  into a sleep state
 * @dev: Device to handle.
 *
 * Choose the power state appropriate for the device depending on whether
···
 EXPORT_SYMBOL(pci_prepare_to_sleep);
 
 /**
- * pci_back_from_sleep - turn PCI device on during system-wide transition into working state
+ * pci_back_from_sleep - turn PCI device on during system-wide transition
+ *			 into working state
 * @dev: Device to handle.
 *
 * Disable device's system wake-up capability and put it into D0.
···
 			dev->d2_support = true;
 
 		if (dev->d1_support || dev->d2_support)
-			pci_printk(KERN_DEBUG, dev, "supports%s%s\n",
+			pci_info(dev, "supports%s%s\n",
 				   dev->d1_support ? " D1" : "",
 				   dev->d2_support ? " D2" : "");
 	}
 
 	pmc &= PCI_PM_CAP_PME_MASK;
 	if (pmc) {
-		pci_printk(KERN_DEBUG, dev, "PME# supported from%s%s%s%s%s\n",
+		pci_info(dev, "PME# supported from%s%s%s%s%s\n",
 			 (pmc & PCI_PM_CAP_PME_D0) ? " D0" : "",
 			 (pmc & PCI_PM_CAP_PME_D1) ? " D1" : "",
 			 (pmc & PCI_PM_CAP_PME_D2) ? " D2" : "",
···
 	res->flags = flags;
 
 	if (bei <= PCI_EA_BEI_BAR5)
-		pci_printk(KERN_DEBUG, dev, "BAR %d: %pR (from Enhanced Allocation, properties %#02x)\n",
+		pci_info(dev, "BAR %d: %pR (from Enhanced Allocation, properties %#02x)\n",
 			 bei, res, prop);
 	else if (bei == PCI_EA_BEI_ROM)
-		pci_printk(KERN_DEBUG, dev, "ROM: %pR (from Enhanced Allocation, properties %#02x)\n",
+		pci_info(dev, "ROM: %pR (from Enhanced Allocation, properties %#02x)\n",
 			 res, prop);
 	else if (bei >= PCI_EA_BEI_VF_BAR0 && bei <= PCI_EA_BEI_VF_BAR5)
-		pci_printk(KERN_DEBUG, dev, "VF BAR %d: %pR (from Enhanced Allocation, properties %#02x)\n",
+		pci_info(dev, "VF BAR %d: %pR (from Enhanced Allocation, properties %#02x)\n",
 			 bei - PCI_EA_BEI_VF_BAR0, res, prop);
 	else
-		pci_printk(KERN_DEBUG, dev, "BEI %d res: %pR (from Enhanced Allocation, properties %#02x)\n",
+		pci_info(dev, "BEI %d res: %pR (from Enhanced Allocation, properties %#02x)\n",
 			 bei, res, prop);
 
 out:
···
 
 /**
  * _pci_add_cap_save_buffer - allocate buffer for saving given
- *                            capability registers
+ *			      capability registers
  * @dev: the PCI device
  * @cap: the capability to allocate the buffer for
  * @extended: Standard or Extended capability ID
···
 }
 
 /**
- * pci_std_enable_acs - enable ACS on devices using standard ACS capabilites
+ * pci_std_enable_acs - enable ACS on devices using standard ACS capabilities
 * @dev: the PCI device
 */
 static void pci_std_enable_acs(struct pci_dev *dev)
···
 EXPORT_SYMBOL_GPL(pci_common_swizzle);
 
 /**
- *	pci_release_region - Release a PCI bar
- *	@pdev: PCI device whose resources were previously reserved by pci_request_region
- *	@bar: BAR to release
+ * pci_release_region - Release a PCI bar
+ * @pdev: PCI device whose resources were previously reserved by
+ *	  pci_request_region()
+ * @bar: BAR to release
 *
- *	Releases the PCI I/O and memory resources previously reserved by a
- *	successful call to pci_request_region.  Call this function only
- *	after all use of the PCI regions has ceased.
+ * Releases the PCI I/O and memory resources previously reserved by a
+ * successful call to pci_request_region().  Call this function only
+ * after all use of the PCI regions has ceased.
 */
 void pci_release_region(struct pci_dev *pdev, int bar)
 {
···
 EXPORT_SYMBOL(pci_release_region);
 
 /**
- *	__pci_request_region - Reserved PCI I/O and memory resource
- *	@pdev: PCI device whose resources are to be reserved
- *	@bar: BAR to be reserved
- *	@res_name: Name to be associated with resource.
- *	@exclusive: whether the region access is exclusive or not
+ * __pci_request_region - Reserved PCI I/O and memory resource
+ * @pdev: PCI device whose resources are to be reserved
+ * @bar: BAR to be reserved
+ * @res_name: Name to be associated with resource.
+ * @exclusive: whether the region access is exclusive or not
 *
- *	Mark the PCI region associated with PCI device @pdev BR @bar as
- *	being reserved by owner @res_name.  Do not access any
- *	address inside the PCI regions unless this call returns
- *	successfully.
+ * Mark the PCI region associated with PCI device @pdev BAR @bar as
+ * being reserved by owner @res_name.  Do not access any
+ * address inside the PCI regions unless this call returns
+ * successfully.
 *
- *	If @exclusive is set, then the region is marked so that userspace
- *	is explicitly not allowed to map the resource via /dev/mem or
- *	sysfs MMIO access.
+ * If @exclusive is set, then the region is marked so that userspace
+ * is explicitly not allowed to map the resource via /dev/mem or
+ * sysfs MMIO access.
 *
- *	Returns 0 on success, or %EBUSY on error.  A warning
- *	message is also printed on failure.
+ * Returns 0 on success, or %EBUSY on error.  A warning
+ * message is also printed on failure.
 */
 static int __pci_request_region(struct pci_dev *pdev, int bar,
 				const char *res_name, int exclusive)
···
 }
 
 /**
- *	pci_request_region - Reserve PCI I/O and memory resource
- *	@pdev: PCI device whose resources are to be reserved
- *	@bar: BAR to be reserved
- *	@res_name: Name to be associated with resource
+ * pci_request_region - Reserve PCI I/O and memory resource
+ * @pdev: PCI device whose resources are to be reserved
+ * @bar: BAR to be reserved
+ * @res_name: Name to be associated with resource
 *
- *	Mark the PCI region associated with PCI device @pdev BAR @bar as
- *	being reserved by owner @res_name.  Do not access any
- *	address inside the PCI regions unless this call returns
- *	successfully.
3695 + * Mark the PCI region associated with PCI device @pdev BAR @bar as 3696 + * being reserved by owner @res_name. Do not access any 3697 + * address inside the PCI regions unless this call returns 3698 + * successfully. 3710 3699 * 3711 - * Returns 0 on success, or %EBUSY on error. A warning 3712 - * message is also printed on failure. 3700 + * Returns 0 on success, or %EBUSY on error. A warning 3701 + * message is also printed on failure. 3713 3702 */ 3714 3703 int pci_request_region(struct pci_dev *pdev, int bar, const char *res_name) 3715 3704 { 3716 3705 return __pci_request_region(pdev, bar, res_name, 0); 3717 3706 } 3718 3707 EXPORT_SYMBOL(pci_request_region); 3719 - 3720 - /** 3721 - * pci_request_region_exclusive - Reserved PCI I/O and memory resource 3722 - * @pdev: PCI device whose resources are to be reserved 3723 - * @bar: BAR to be reserved 3724 - * @res_name: Name to be associated with resource. 3725 - * 3726 - * Mark the PCI region associated with PCI device @pdev BR @bar as 3727 - * being reserved by owner @res_name. Do not access any 3728 - * address inside the PCI regions unless this call returns 3729 - * successfully. 3730 - * 3731 - * Returns 0 on success, or %EBUSY on error. A warning 3732 - * message is also printed on failure. 3733 - * 3734 - * The key difference that _exclusive makes it that userspace is 3735 - * explicitly not allowed to map the resource via /dev/mem or 3736 - * sysfs. 
3737 - */ 3738 - int pci_request_region_exclusive(struct pci_dev *pdev, int bar, 3739 - const char *res_name) 3740 - { 3741 - return __pci_request_region(pdev, bar, res_name, IORESOURCE_EXCLUSIVE); 3742 - } 3743 - EXPORT_SYMBOL(pci_request_region_exclusive); 3744 3708 3745 3709 /** 3746 3710 * pci_release_selected_regions - Release selected PCI I/O and memory resources ··· 3777 3791 EXPORT_SYMBOL(pci_request_selected_regions_exclusive); 3778 3792 3779 3793 /** 3780 - * pci_release_regions - Release reserved PCI I/O and memory resources 3781 - * @pdev: PCI device whose resources were previously reserved by pci_request_regions 3794 + * pci_release_regions - Release reserved PCI I/O and memory resources 3795 + * @pdev: PCI device whose resources were previously reserved by 3796 + * pci_request_regions() 3782 3797 * 3783 - * Releases all PCI I/O and memory resources previously reserved by a 3784 - * successful call to pci_request_regions. Call this function only 3785 - * after all use of the PCI regions has ceased. 3798 + * Releases all PCI I/O and memory resources previously reserved by a 3799 + * successful call to pci_request_regions(). Call this function only 3800 + * after all use of the PCI regions has ceased. 3786 3801 */ 3787 3802 3788 3803 void pci_release_regions(struct pci_dev *pdev) ··· 3793 3806 EXPORT_SYMBOL(pci_release_regions); 3794 3807 3795 3808 /** 3796 - * pci_request_regions - Reserved PCI I/O and memory resources 3797 - * @pdev: PCI device whose resources are to be reserved 3798 - * @res_name: Name to be associated with resource. 3809 + * pci_request_regions - Reserve PCI I/O and memory resources 3810 + * @pdev: PCI device whose resources are to be reserved 3811 + * @res_name: Name to be associated with resource. 3799 3812 * 3800 - * Mark all PCI regions associated with PCI device @pdev as 3801 - * being reserved by owner @res_name. Do not access any 3802 - * address inside the PCI regions unless this call returns 3803 - * successfully. 
3813 + * Mark all PCI regions associated with PCI device @pdev as 3814 + * being reserved by owner @res_name. Do not access any 3815 + * address inside the PCI regions unless this call returns 3816 + * successfully. 3804 3817 * 3805 - * Returns 0 on success, or %EBUSY on error. A warning 3806 - * message is also printed on failure. 3818 + * Returns 0 on success, or %EBUSY on error. A warning 3819 + * message is also printed on failure. 3807 3820 */ 3808 3821 int pci_request_regions(struct pci_dev *pdev, const char *res_name) 3809 3822 { ··· 3812 3825 EXPORT_SYMBOL(pci_request_regions); 3813 3826 3814 3827 /** 3815 - * pci_request_regions_exclusive - Reserved PCI I/O and memory resources 3816 - * @pdev: PCI device whose resources are to be reserved 3817 - * @res_name: Name to be associated with resource. 3828 + * pci_request_regions_exclusive - Reserve PCI I/O and memory resources 3829 + * @pdev: PCI device whose resources are to be reserved 3830 + * @res_name: Name to be associated with resource. 3818 3831 * 3819 - * Mark all PCI regions associated with PCI device @pdev as 3820 - * being reserved by owner @res_name. Do not access any 3821 - * address inside the PCI regions unless this call returns 3822 - * successfully. 3832 + * Mark all PCI regions associated with PCI device @pdev as being reserved 3833 + * by owner @res_name. Do not access any address inside the PCI regions 3834 + * unless this call returns successfully. 3823 3835 * 3824 - * pci_request_regions_exclusive() will mark the region so that 3825 - * /dev/mem and the sysfs MMIO access will not be allowed. 3836 + * pci_request_regions_exclusive() will mark the region so that /dev/mem 3837 + * and the sysfs MMIO access will not be allowed. 3826 3838 * 3827 - * Returns 0 on success, or %EBUSY on error. A warning 3828 - * message is also printed on failure. 3839 + * Returns 0 on success, or %EBUSY on error. A warning message is also 3840 + * printed on failure. 
3829 3841 */ 3830 3842 int pci_request_regions_exclusive(struct pci_dev *pdev, const char *res_name) 3831 3843 { ··· 3835 3849 3836 3850 /* 3837 3851 * Record the PCI IO range (expressed as CPU physical address + size). 3838 - * Return a negative value if an error has occured, zero otherwise 3852 + * Return a negative value if an error has occurred, zero otherwise 3839 3853 */ 3840 3854 int pci_register_io_range(struct fwnode_handle *fwnode, phys_addr_t addr, 3841 3855 resource_size_t size) ··· 3891 3905 } 3892 3906 3893 3907 /** 3894 - * pci_remap_iospace - Remap the memory mapped I/O space 3895 - * @res: Resource describing the I/O space 3896 - * @phys_addr: physical address of range to be mapped 3908 + * pci_remap_iospace - Remap the memory mapped I/O space 3909 + * @res: Resource describing the I/O space 3910 + * @phys_addr: physical address of range to be mapped 3897 3911 * 3898 - * Remap the memory mapped I/O space described by the @res 3899 - * and the CPU physical address @phys_addr into virtual address space. 3900 - * Only architectures that have memory mapped IO functions defined 3901 - * (and the PCI_IOBASE value defined) should call this function. 3912 + * Remap the memory mapped I/O space described by the @res and the CPU 3913 + * physical address @phys_addr into virtual address space. Only 3914 + * architectures that have memory mapped IO functions defined (and the 3915 + * PCI_IOBASE value defined) should call this function. 
3902 3916 */ 3903 3917 int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr) 3904 3918 { ··· 3914 3928 return ioremap_page_range(vaddr, vaddr + resource_size(res), phys_addr, 3915 3929 pgprot_device(PAGE_KERNEL)); 3916 3930 #else 3917 - /* this architecture does not have memory mapped I/O space, 3918 - so this function should never be called */ 3931 + /* 3932 + * This architecture does not have memory mapped I/O space, 3933 + * so this function should never be called 3934 + */ 3919 3935 WARN_ONCE(1, "This architecture does not support memory mapped I/O\n"); 3920 3936 return -ENODEV; 3921 3937 #endif ··· 3925 3937 EXPORT_SYMBOL(pci_remap_iospace); 3926 3938 3927 3939 /** 3928 - * pci_unmap_iospace - Unmap the memory mapped I/O space 3929 - * @res: resource to be unmapped 3940 + * pci_unmap_iospace - Unmap the memory mapped I/O space 3941 + * @res: resource to be unmapped 3930 3942 * 3931 - * Unmap the CPU virtual address @res from virtual address space. 3932 - * Only architectures that have memory mapped IO functions defined 3933 - * (and the PCI_IOBASE value defined) should call this function. 3943 + * Unmap the CPU virtual address @res from virtual address space. Only 3944 + * architectures that have memory mapped IO functions defined (and the 3945 + * PCI_IOBASE value defined) should call this function. 
3934 3946 */ 3935 3947 void pci_unmap_iospace(struct resource *res) 3936 3948 { ··· 4173 4185 if (cacheline_size == pci_cache_line_size) 4174 4186 return 0; 4175 4187 4176 - pci_printk(KERN_DEBUG, dev, "cache line size of %d is not supported\n", 4188 + pci_info(dev, "cache line size of %d is not supported\n", 4177 4189 pci_cache_line_size << 2); 4178 4190 4179 4191 return -EINVAL; ··· 4276 4288 * @pdev: the PCI device to operate on 4277 4289 * @enable: boolean: whether to enable or disable PCI INTx 4278 4290 * 4279 - * Enables/disables PCI INTx for device dev 4291 + * Enables/disables PCI INTx for device @pdev 4280 4292 */ 4281 4293 void pci_intx(struct pci_dev *pdev, int enable) 4282 4294 { ··· 4352 4364 * pci_check_and_mask_intx - mask INTx on pending interrupt 4353 4365 * @dev: the PCI device to operate on 4354 4366 * 4355 - * Check if the device dev has its INTx line asserted, mask it and 4356 - * return true in that case. False is returned if no interrupt was 4357 - * pending. 4367 + * Check if the device dev has its INTx line asserted, mask it and return 4368 + * true in that case. False is returned if no interrupt was pending. 4358 4369 */ 4359 4370 bool pci_check_and_mask_intx(struct pci_dev *dev) 4360 4371 { ··· 4365 4378 * pci_check_and_unmask_intx - unmask INTx if no interrupt is pending 4366 4379 * @dev: the PCI device to operate on 4367 4380 * 4368 - * Check if the device dev has its INTx line asserted, unmask it if not 4369 - * and return true. False is returned and the mask remains active if 4370 - * there was still an interrupt pending. 4381 + * Check if the device dev has its INTx line asserted, unmask it if not and 4382 + * return true. False is returned and the mask remains active if there was 4383 + * still an interrupt pending. 
4371 4384 */ 4372 4385 bool pci_check_and_unmask_intx(struct pci_dev *dev) 4373 4386 { ··· 4376 4389 EXPORT_SYMBOL_GPL(pci_check_and_unmask_intx); 4377 4390 4378 4391 /** 4379 - * pci_wait_for_pending_transaction - waits for pending transaction 4392 + * pci_wait_for_pending_transaction - wait for pending transaction 4380 4393 * @dev: the PCI device to operate on 4381 4394 * 4382 4395 * Return 0 if transaction is pending 1 otherwise. ··· 4434 4447 4435 4448 /** 4436 4449 * pcie_has_flr - check if a device supports function level resets 4437 - * @dev: device to check 4450 + * @dev: device to check 4438 4451 * 4439 4452 * Returns true if the device advertises support for PCIe function level 4440 4453 * resets. ··· 4453 4466 4454 4467 /** 4455 4468 * pcie_flr - initiate a PCIe function level reset 4456 - * @dev: device to reset 4469 + * @dev: device to reset 4457 4470 * 4458 4471 * Initiate a function level reset on @dev. The caller should ensure the 4459 4472 * device supports FLR before calling this function, e.g. by using the ··· 4797 4810 * 4798 4811 * The device function is presumed to be unused and the caller is holding 4799 4812 * the device mutex lock when this function is called. 4813 + * 4800 4814 * Resetting the device will make the contents of PCI configuration space 4801 4815 * random, so any caller of this must be prepared to reinitialise the 4802 4816 * device including MSI, bus mastering, BARs, decoding IO and memory spaces, ··· 5361 5373 * pcix_get_max_mmrbc - get PCI-X maximum designed memory read byte count 5362 5374 * @dev: PCI device to query 5363 5375 * 5364 - * Returns mmrbc: maximum designed memory read count in bytes 5365 - * or appropriate error value. 5376 + * Returns mmrbc: maximum designed memory read count in bytes or 5377 + * appropriate error value. 
5366 5378 */ 5367 5379 int pcix_get_max_mmrbc(struct pci_dev *dev) 5368 5380 { ··· 5384 5396 * pcix_get_mmrbc - get PCI-X maximum memory read byte count 5385 5397 * @dev: PCI device to query 5386 5398 * 5387 - * Returns mmrbc: maximum memory read count in bytes 5388 - * or appropriate error value. 5399 + * Returns mmrbc: maximum memory read count in bytes or appropriate error 5400 + * value. 5389 5401 */ 5390 5402 int pcix_get_mmrbc(struct pci_dev *dev) 5391 5403 { ··· 5409 5421 * @mmrbc: maximum memory read count in bytes 5410 5422 * valid values are 512, 1024, 2048, 4096 5411 5423 * 5412 - * If possible sets maximum memory read byte count, some bridges have erratas 5424 + * If possible sets maximum memory read byte count, some bridges have errata 5413 5425 * that prevent this. 5414 5426 */ 5415 5427 int pcix_set_mmrbc(struct pci_dev *dev, int mmrbc) ··· 5454 5466 * pcie_get_readrq - get PCI Express read request size 5455 5467 * @dev: PCI device to query 5456 5468 * 5457 - * Returns maximum memory read request in bytes 5458 - * or appropriate error value. 5469 + * Returns maximum memory read request in bytes or appropriate error value. 5459 5470 */ 5460 5471 int pcie_get_readrq(struct pci_dev *dev) 5461 5472 { ··· 5482 5495 return -EINVAL; 5483 5496 5484 5497 /* 5485 - * If using the "performance" PCIe config, we clamp the 5486 - * read rq size to the max packet size to prevent the 5487 - * host bridge generating requests larger than we can 5488 - * cope with 5498 + * If using the "performance" PCIe config, we clamp the read rq 5499 + * size to the max packet size to keep the host bridge from 5500 + * generating requests larger than we can cope with. 5489 5501 */ 5490 5502 if (pcie_bus_config == PCIE_BUS_PERFORMANCE) { 5491 5503 int mps = pcie_get_mps(dev); ··· 6130 6144 6131 6145 if (parent) 6132 6146 domain = of_get_pci_domain_nr(parent->of_node); 6147 + 6133 6148 /* 6134 6149 * Check DT domain and use_dt_domains values. 
6135 6150 * ··· 6251 6264 } else if (!strncmp(str, "disable_acs_redir=", 18)) { 6252 6265 disable_acs_redir_param = str + 18; 6253 6266 } else { 6254 - printk(KERN_ERR "PCI: Unknown option `%s'\n", 6255 - str); 6267 + pr_err("PCI: Unknown option `%s'\n", str); 6256 6268 } 6257 6269 } 6258 6270 str = k;
drivers/pci/pci.h (+1 -1)

···
 void pci_aer_clear_device_status(struct pci_dev *dev);
 #else
 static inline void pci_no_aer(void) { }
-static inline int pci_aer_init(struct pci_dev *d) { return -ENODEV; }
+static inline void pci_aer_init(struct pci_dev *d) { }
 static inline void pci_aer_exit(struct pci_dev *d) { }
 static inline void pci_aer_clear_fatal_status(struct pci_dev *dev) { }
 static inline void pci_aer_clear_device_status(struct pci_dev *dev) { }
drivers/pci/pcie/aer.c (+16 -14)

···
  * Andrew Patterson <andrew.patterson@hp.com>
  */
 
+#define pr_fmt(fmt) "AER: " fmt
+#define dev_fmt pr_fmt
+
 #include <linux/cper.h>
 #include <linux/pci.h>
 #include <linux/pci-acpi.h>
···
 	u8 bus = info->id >> 8;
 	u8 devfn = info->id & 0xff;
 
-	pci_info(dev, "AER: %s%s error received: %04x:%02x:%02x.%d\n",
-		info->multi_error_valid ? "Multiple " : "",
-		aer_error_severity_string[info->severity],
-		pci_domain_nr(dev->bus), bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+	pci_info(dev, "%s%s error received: %04x:%02x:%02x.%d\n",
+		 info->multi_error_valid ? "Multiple " : "",
+		 aer_error_severity_string[info->severity],
+		 pci_domain_nr(dev->bus), bus, PCI_SLOT(devfn),
+		 PCI_FUNC(devfn));
 }
 
 #ifdef CONFIG_ACPI_APEI_PCIEAER
···
 	pci_walk_bus(parent->subordinate, find_device_iter, e_info);
 
 	if (!e_info->error_dev_num) {
-		pci_printk(KERN_DEBUG, parent, "can't find device of ID%04x\n",
-			   e_info->id);
+		pci_info(parent, "can't find device of ID%04x\n", e_info->id);
 		return false;
 	}
 	return true;
···
 	int status;
 	struct aer_rpc *rpc;
 	struct device *device = &dev->device;
+	struct pci_dev *port = dev->port;
 
 	rpc = devm_kzalloc(device, sizeof(struct aer_rpc), GFP_KERNEL);
-	if (!rpc) {
-		dev_printk(KERN_DEBUG, device, "alloc AER rpc failed\n");
+	if (!rpc)
 		return -ENOMEM;
-	}
-	rpc->rpd = dev->port;
+
+	rpc->rpd = port;
 	set_service_data(dev, rpc);
 
 	status = devm_request_threaded_irq(device, dev->irq, aer_irq, aer_isr,
 					   IRQF_SHARED, "aerdrv", dev);
 	if (status) {
-		dev_printk(KERN_DEBUG, device, "request AER IRQ %d failed\n",
-			   dev->irq);
+		pci_err(port, "request AER IRQ %d failed\n", dev->irq);
 		return status;
 	}
 
 	aer_enable_rootport(rpc);
-	dev_info(device, "AER enabled with IRQ %d\n", dev->irq);
+	pci_info(port, "enabled with IRQ %d\n", dev->irq);
 	return 0;
 }
···
 	pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32);
 
 	rc = pci_bus_error_reset(dev);
-	pci_printk(KERN_DEBUG, dev, "Root Port link has been reset\n");
+	pci_info(dev, "Root Port link has been reset\n");
 
 	/* Clear Root Error Status */
 	pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &reg32);
drivers/pci/pcie/aer_inject.c (+10 -10)

···
  * Huang Ying <ying.huang@intel.com>
  */
 
+#define dev_fmt(fmt) "aer_inject: " fmt
+
 #include <linux/module.h>
 #include <linux/init.h>
 #include <linux/irq.h>
···
 		return -ENODEV;
 	rpdev = pcie_find_root_port(dev);
 	if (!rpdev) {
-		pci_err(dev, "aer_inject: Root port not found\n");
+		pci_err(dev, "Root port not found\n");
 		ret = -ENODEV;
 		goto out_put;
 	}
 
 	pos_cap_err = dev->aer_cap;
 	if (!pos_cap_err) {
-		pci_err(dev, "aer_inject: Device doesn't support AER\n");
+		pci_err(dev, "Device doesn't support AER\n");
 		ret = -EPROTONOSUPPORT;
 		goto out_put;
 	}
···
 	rp_pos_cap_err = rpdev->aer_cap;
 	if (!rp_pos_cap_err) {
-		pci_err(rpdev, "aer_inject: Root port doesn't support AER\n");
+		pci_err(rpdev, "Root port doesn't support AER\n");
 		ret = -EPROTONOSUPPORT;
 		goto out_put;
 	}
···
 	if (!aer_mask_override && einj->cor_status &&
 	    !(einj->cor_status & ~cor_mask)) {
 		ret = -EINVAL;
-		pci_warn(dev, "aer_inject: The correctable error(s) is masked by device\n");
+		pci_warn(dev, "The correctable error(s) is masked by device\n");
 		spin_unlock_irqrestore(&inject_lock, flags);
 		goto out_put;
 	}
 	if (!aer_mask_override && einj->uncor_status &&
 	    !(einj->uncor_status & ~uncor_mask)) {
 		ret = -EINVAL;
-		pci_warn(dev, "aer_inject: The uncorrectable error(s) is masked by device\n");
+		pci_warn(dev, "The uncorrectable error(s) is masked by device\n");
 		spin_unlock_irqrestore(&inject_lock, flags);
 		goto out_put;
 	}
···
 	if (device) {
 		edev = to_pcie_device(device);
 		if (!get_service_data(edev)) {
-			dev_warn(&edev->device,
-				 "aer_inject: AER service is not initialized\n");
+			pci_warn(edev->port, "AER service is not initialized\n");
 			ret = -EPROTONOSUPPORT;
 			goto out_put;
 		}
-		dev_info(&edev->device,
-			 "aer_inject: Injecting errors %08x/%08x into device %s\n",
+		pci_info(edev->port, "Injecting errors %08x/%08x into device %s\n",
 			 einj->cor_status, einj->uncor_status, pci_name(dev));
 		local_irq_disable();
 		generic_handle_irq(edev->irq);
 		local_irq_enable();
 	} else {
-		pci_err(rpdev, "aer_inject: AER device not found\n");
+		pci_err(rpdev, "AER device not found\n");
 		ret = -ENODEV;
 	}
 out_put:
drivers/pci/pcie/aspm.c (+31 -16)

···
 	link->clkpm_capable = (blacklist) ? 0 : capable;
 }
 
+static bool pcie_retrain_link(struct pcie_link_state *link)
+{
+	struct pci_dev *parent = link->pdev;
+	unsigned long end_jiffies;
+	u16 reg16;
+
+	pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
+	reg16 |= PCI_EXP_LNKCTL_RL;
+	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
+	if (parent->clear_retrain_link) {
+		/*
+		 * Due to an erratum in some devices the Retrain Link bit
+		 * needs to be cleared again manually to allow the link
+		 * training to succeed.
+		 */
+		reg16 &= ~PCI_EXP_LNKCTL_RL;
+		pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
+	}
+
+	/* Wait for link training end. Break out after waiting for timeout */
+	end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
+	do {
+		pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &reg16);
+		if (!(reg16 & PCI_EXP_LNKSTA_LT))
+			break;
+		msleep(1);
+	} while (time_before(jiffies, end_jiffies));
+	return !(reg16 & PCI_EXP_LNKSTA_LT);
+}
+
 /*
  * pcie_aspm_configure_common_clock: check if the 2 ends of a link
  * could use common clock. If they are, configure them to use the
···
 {
 	int same_clock = 1;
 	u16 reg16, parent_reg, child_reg[8];
-	unsigned long start_jiffies;
 	struct pci_dev *child, *parent = link->pdev;
 	struct pci_bus *linkbus = parent->subordinate;
 	/*
···
 		reg16 &= ~PCI_EXP_LNKCTL_CCC;
 	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
 
-	/* Retrain link */
-	reg16 |= PCI_EXP_LNKCTL_RL;
-	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
-
-	/* Wait for link training end. Break out after waiting for timeout */
-	start_jiffies = jiffies;
-	for (;;) {
-		pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &reg16);
-		if (!(reg16 & PCI_EXP_LNKSTA_LT))
-			break;
-		if (time_after(jiffies, start_jiffies + LINK_RETRAIN_TIMEOUT))
-			break;
-		msleep(1);
-	}
-	if (!(reg16 & PCI_EXP_LNKSTA_LT))
+	if (pcie_retrain_link(link))
 		return;
 
 	/* Training failed. Restore common clock configurations */
drivers/pci/pcie/bw_notification.c (+14)

···
 	free_irq(srv->irq, srv);
 }
 
+static int pcie_bandwidth_notification_suspend(struct pcie_device *srv)
+{
+	pcie_disable_link_bandwidth_notification(srv->port);
+	return 0;
+}
+
+static int pcie_bandwidth_notification_resume(struct pcie_device *srv)
+{
+	pcie_enable_link_bandwidth_notification(srv->port);
+	return 0;
+}
+
 static struct pcie_port_service_driver pcie_bandwidth_notification_driver = {
 	.name		= "pcie_bw_notification",
 	.port_type	= PCIE_ANY_PORT,
 	.service	= PCIE_PORT_SERVICE_BWNOTIF,
 	.probe		= pcie_bandwidth_notification_probe,
+	.suspend	= pcie_bandwidth_notification_suspend,
+	.resume		= pcie_bandwidth_notification_resume,
 	.remove		= pcie_bandwidth_notification_remove,
 };
drivers/pci/pcie/dpc.c (+18 -19)

···
  * Copyright (C) 2016 Intel Corp.
  */
 
+#define dev_fmt(fmt) "DPC: " fmt
+
 #include <linux/aer.h>
 #include <linux/delay.h>
 #include <linux/interrupt.h>
···
 {
 	unsigned long timeout = jiffies + HZ;
 	struct pci_dev *pdev = dpc->dev->port;
-	struct device *dev = &dpc->dev->device;
 	u16 cap = dpc->cap_pos, status;
 
 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
···
 		pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
 	}
 	if (status & PCI_EXP_DPC_RP_BUSY) {
-		dev_warn(dev, "DPC root port still busy\n");
+		pci_warn(pdev, "root port still busy\n");
 		return -EBUSY;
 	}
 	return 0;
···
 
 static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
 {
-	struct device *dev = &dpc->dev->device;
 	struct pci_dev *pdev = dpc->dev->port;
 	u16 cap = dpc->cap_pos, dpc_status, first_error;
 	u32 status, mask, sev, syserr, exc, dw0, dw1, dw2, dw3, log, prefix;
···
 	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS, &status);
 	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_MASK, &mask);
-	dev_err(dev, "rp_pio_status: %#010x, rp_pio_mask: %#010x\n",
+	pci_err(pdev, "rp_pio_status: %#010x, rp_pio_mask: %#010x\n",
 		status, mask);
 
 	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_SEVERITY, &sev);
 	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_SYSERROR, &syserr);
 	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_EXCEPTION, &exc);
-	dev_err(dev, "RP PIO severity=%#010x, syserror=%#010x, exception=%#010x\n",
+	pci_err(pdev, "RP PIO severity=%#010x, syserror=%#010x, exception=%#010x\n",
 		sev, syserr, exc);
 
 	/* Get First Error Pointer */
···
 	for (i = 0; i < ARRAY_SIZE(rp_pio_error_string); i++) {
 		if ((status & ~mask) & (1 << i))
-			dev_err(dev, "[%2d] %s%s\n", i, rp_pio_error_string[i],
+			pci_err(pdev, "[%2d] %s%s\n", i, rp_pio_error_string[i],
 				first_error == i ? " (First)" : "");
 	}
 
···
 			      &dw2);
 	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_HEADER_LOG + 12,
 			      &dw3);
-	dev_err(dev, "TLP Header: %#010x %#010x %#010x %#010x\n",
+	pci_err(pdev, "TLP Header: %#010x %#010x %#010x %#010x\n",
 		dw0, dw1, dw2, dw3);
 
 	if (dpc->rp_log_size < 5)
 		goto clear_status;
 	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_IMPSPEC_LOG, &log);
-	dev_err(dev, "RP PIO ImpSpec Log %#010x\n", log);
+	pci_err(pdev, "RP PIO ImpSpec Log %#010x\n", log);
 
 	for (i = 0; i < dpc->rp_log_size - 5; i++) {
 		pci_read_config_dword(pdev,
 			cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG, &prefix);
-		dev_err(dev, "TLP Prefix Header: dw%d, %#010x\n", i, prefix);
+		pci_err(pdev, "TLP Prefix Header: dw%d, %#010x\n", i, prefix);
 	}
 clear_status:
 	pci_write_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS, status);
···
 	struct aer_err_info info;
 	struct dpc_dev *dpc = context;
 	struct pci_dev *pdev = dpc->dev->port;
-	struct device *dev = &dpc->dev->device;
 	u16 cap = dpc->cap_pos, status, source, reason, ext_reason;
 
 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, &source);
 
-	dev_info(dev, "DPC containment event, status:%#06x source:%#06x\n",
+	pci_info(pdev, "containment event, status:%#06x source:%#06x\n",
 		 status, source);
 
 	reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN) >> 1;
 	ext_reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT) >> 5;
-	dev_warn(dev, "DPC %s detected\n",
+	pci_warn(pdev, "%s detected\n",
 		 (reason == 0) ? "unmasked uncorrectable error" :
 		 (reason == 1) ? "ERR_NONFATAL" :
 		 (reason == 2) ? "ERR_FATAL" :
···
 				   dpc_handler, IRQF_SHARED,
 				   "pcie-dpc", dpc);
 	if (status) {
-		dev_warn(device, "request IRQ%d failed: %d\n", dev->irq,
+		pci_warn(pdev, "request IRQ%d failed: %d\n", dev->irq,
 			 status);
 		return status;
 	}
···
 	if (dpc->rp_extensions) {
 		dpc->rp_log_size = (cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8;
 		if (dpc->rp_log_size < 4 || dpc->rp_log_size > 9) {
-			dev_err(device, "RP PIO log size %u is invalid\n",
+			pci_err(pdev, "RP PIO log size %u is invalid\n",
 				dpc->rp_log_size);
 			dpc->rp_log_size = 0;
 		}
···
 	ctl = (ctl & 0xfff4) | PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN;
 	pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl);
 
-	dev_info(device, "DPC error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n",
-		 cap & PCI_EXP_DPC_IRQ, FLAG(cap, PCI_EXP_DPC_CAP_RP_EXT),
-		 FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP),
-		 FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), dpc->rp_log_size,
-		 FLAG(cap, PCI_EXP_DPC_CAP_DL_ACTIVE));
+	pci_info(pdev, "error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n",
+		 cap & PCI_EXP_DPC_IRQ, FLAG(cap, PCI_EXP_DPC_CAP_RP_EXT),
+		 FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP),
+		 FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), dpc->rp_log_size,
+		 FLAG(cap, PCI_EXP_DPC_CAP_DL_ACTIVE));
 
 	pci_add_ext_cap_save_buffer(pdev, PCI_EXT_CAP_ID_DPC, sizeof(u16));
 	return status;
drivers/pci/pcie/pme.c (+6 -4)
···
  * Copyright (C) 2009 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
  */
 
+#define dev_fmt(fmt) "PME: " fmt
+
 #include <linux/pci.h>
 #include <linux/kernel.h>
 #include <linux/errno.h>
···
 	 * assuming that the PME was reported by a PCIe-PCI bridge that
 	 * used devfn different from zero.
 	 */
-	pci_dbg(port, "PME interrupt generated for non-existent device %02x:%02x.%d\n",
-		busnr, PCI_SLOT(devfn), PCI_FUNC(devfn));
+	pci_info(port, "interrupt generated for non-existent device %02x:%02x.%d\n",
+		 busnr, PCI_SLOT(devfn), PCI_FUNC(devfn));
 	found = pcie_pme_from_pci_bridge(bus, 0);
 }
 
 out:
 	if (!found)
-		pci_dbg(port, "Spurious native PME interrupt!\n");
+		pci_info(port, "Spurious native interrupt!\n");
 }
 
 /**
···
 		return ret;
 	}
 
-	pci_info(port, "Signaling PME with IRQ %d\n", srv->irq);
+	pci_info(port, "Signaling with IRQ %d\n", srv->irq);
 
 	pcie_pme_mark_devices(port);
 	pcie_pme_interrupt_enable(port, true);
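The pme.c change works because defining `dev_fmt()` before the device-logging helpers are included pastes the `"PME: "` prefix onto every format string by compile-time string-literal concatenation, which is why the individual messages can drop the word "PME". A simplified userspace model of that mechanism (the `pci_info()` macro here is a mock, not the kernel helper):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* File-local prefix, as pme.c now defines it */
#define dev_fmt(fmt) "PME: " fmt

static char log_buf[64];

/* Mock logger: the format argument must be a string literal so that
 * dev_fmt() can concatenate the prefix onto it at compile time */
#define pci_info(port, fmt) \
	snprintf(log_buf, sizeof(log_buf), dev_fmt(fmt))
```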
drivers/pci/probe.c (+195 -35)
···
 		res->flags = 0;
 out:
 	if (res->flags)
-		pci_printk(KERN_DEBUG, dev, "reg 0x%x: %pR\n", pos, res);
+		pci_info(dev, "reg 0x%x: %pR\n", pos, res);
 
 	return (res->flags & IORESOURCE_MEM_64) ? 1 : 0;
 }
···
 		region.start = base;
 		region.end = limit + io_granularity - 1;
 		pcibios_bus_to_resource(dev->bus, res, &region);
-		pci_printk(KERN_DEBUG, dev, "  bridge window %pR\n", res);
+		pci_info(dev, "  bridge window %pR\n", res);
 	}
 }
···
 		region.start = base;
 		region.end = limit + 0xfffff;
 		pcibios_bus_to_resource(dev->bus, res, &region);
-		pci_printk(KERN_DEBUG, dev, "  bridge window %pR\n", res);
+		pci_info(dev, "  bridge window %pR\n", res);
 	}
 }
···
 		region.start = base;
 		region.end = limit + 0xfffff;
 		pcibios_bus_to_resource(dev->bus, res, &region);
-		pci_printk(KERN_DEBUG, dev, "  bridge window %pR\n", res);
+		pci_info(dev, "  bridge window %pR\n", res);
 	}
 }
···
 	if (res && res->flags) {
 		pci_bus_add_resource(child, res,
 				     PCI_SUBTRACTIVE_DECODE);
-		pci_printk(KERN_DEBUG, dev,
-			   "  bridge window %pR (subtractive decode)\n",
+		pci_info(dev, "  bridge window %pR (subtractive decode)\n",
 			   res);
 	}
 }
···
 	kfree(to_pci_host_bridge(dev));
 }
 
-struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
+static void pci_init_host_bridge(struct pci_host_bridge *bridge)
 {
-	struct pci_host_bridge *bridge;
-
-	bridge = kzalloc(sizeof(*bridge) + priv, GFP_KERNEL);
-	if (!bridge)
-		return NULL;
-
 	INIT_LIST_HEAD(&bridge->windows);
-	bridge->dev.release = pci_release_host_bridge_dev;
+	INIT_LIST_HEAD(&bridge->dma_ranges);
 
 	/*
 	 * We assume we can manage these PCIe features.  Some systems may
···
 	bridge->native_shpc_hotplug = 1;
 	bridge->native_pme = 1;
 	bridge->native_ltr = 1;
+}
+
+struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
+{
+	struct pci_host_bridge *bridge;
+
+	bridge = kzalloc(sizeof(*bridge) + priv, GFP_KERNEL);
+	if (!bridge)
+		return NULL;
+
+	pci_init_host_bridge(bridge);
+	bridge->dev.release = pci_release_host_bridge_dev;
 
 	return bridge;
 }
···
 	if (!bridge)
 		return NULL;
 
-	INIT_LIST_HEAD(&bridge->windows);
+	pci_init_host_bridge(bridge);
 	bridge->dev.release = devm_pci_release_host_bridge_dev;
 
 	return bridge;
···
 void pci_free_host_bridge(struct pci_host_bridge *bridge)
 {
 	pci_free_resource_list(&bridge->windows);
+	pci_free_resource_list(&bridge->dma_ranges);
 
 	kfree(bridge);
 }
···
 
 static unsigned int pci_scan_child_bus_extend(struct pci_bus *bus,
 					      unsigned int available_buses);
+/**
+ * pci_ea_fixed_busnrs() - Read fixed Secondary and Subordinate bus
+ * numbers from EA capability.
+ * @dev: Bridge
+ * @sec: updated with secondary bus number from EA
+ * @sub: updated with subordinate bus number from EA
+ *
+ * If @dev is a bridge with EA capability, update @sec and @sub with
+ * fixed bus numbers from the capability and return true.  Otherwise,
+ * return false.
+ */
+static bool pci_ea_fixed_busnrs(struct pci_dev *dev, u8 *sec, u8 *sub)
+{
+	int ea, offset;
+	u32 dw;
+
+	if (dev->hdr_type != PCI_HEADER_TYPE_BRIDGE)
+		return false;
+
+	/* find PCI EA capability in list */
+	ea = pci_find_capability(dev, PCI_CAP_ID_EA);
+	if (!ea)
+		return false;
+
+	offset = ea + PCI_EA_FIRST_ENT;
+	pci_read_config_dword(dev, offset, &dw);
+	*sec = dw & PCI_EA_SEC_BUS_MASK;
+	*sub = (dw & PCI_EA_SUB_BUS_MASK) >> PCI_EA_SUB_BUS_SHIFT;
+	return true;
+}
 
 /*
  * pci_scan_bridge_extend() - Scan buses behind a bridge
···
 	u16 bctl;
 	u8 primary, secondary, subordinate;
 	int broken = 0;
+	bool fixed_buses;
+	u8 fixed_sec, fixed_sub;
+	int next_busnr;
 
 	/*
 	 * Make sure the bridge is powered on to be able to access config
···
 	/* Clear errors */
 	pci_write_config_word(dev, PCI_STATUS, 0xffff);
 
+	/* Read bus numbers from EA Capability (if present) */
+	fixed_buses = pci_ea_fixed_busnrs(dev, &fixed_sec, &fixed_sub);
+	if (fixed_buses)
+		next_busnr = fixed_sec;
+	else
+		next_busnr = max + 1;
+
 	/*
 	 * Prevent assigning a bus number that already exists.
 	 * This can happen when a bridge is hot-plugged, so in this
 	 * case we only re-scan this bus.
 	 */
-	child = pci_find_bus(pci_domain_nr(bus), max+1);
+	child = pci_find_bus(pci_domain_nr(bus), next_busnr);
 	if (!child) {
-		child = pci_add_new_bus(bus, dev, max+1);
+		child = pci_add_new_bus(bus, dev, next_busnr);
 		if (!child)
 			goto out;
-		pci_bus_insert_busn_res(child, max+1,
+		pci_bus_insert_busn_res(child, next_busnr,
 					bus->busn_res.end);
 	}
 	max++;
···
 		max += i;
 	}
 
-	/* Set subordinate bus number to its real value */
+	/*
+	 * Set subordinate bus number to its real value.
+	 * If fixed subordinate bus number exists from EA
+	 * capability then use it.
+	 */
+	if (fixed_buses)
+		max = fixed_sub;
 	pci_bus_update_busn_res_end(child, max);
 	pci_write_config_byte(dev, PCI_SUBORDINATE_BUS, max);
 }
···
 	dev->revision = class & 0xff;
 	dev->class = class >> 8;		/* upper 3 bytes */
 
-	pci_printk(KERN_DEBUG, dev, "[%04x:%04x] type %02x class %#08x\n",
+	pci_info(dev, "[%04x:%04x] type %02x class %#08x\n",
 		   dev->vendor, dev->device, dev->hdr_type, dev->class);
 
 	if (pci_early_dump)
···
 	 */
 }
 
+static u16 hpx3_device_type(struct pci_dev *dev)
+{
+	u16 pcie_type = pci_pcie_type(dev);
+	const int pcie_to_hpx3_type[] = {
+		[PCI_EXP_TYPE_ENDPOINT]    = HPX_TYPE_ENDPOINT,
+		[PCI_EXP_TYPE_LEG_END]     = HPX_TYPE_LEG_END,
+		[PCI_EXP_TYPE_RC_END]      = HPX_TYPE_RC_END,
+		[PCI_EXP_TYPE_RC_EC]       = HPX_TYPE_RC_EC,
+		[PCI_EXP_TYPE_ROOT_PORT]   = HPX_TYPE_ROOT_PORT,
+		[PCI_EXP_TYPE_UPSTREAM]    = HPX_TYPE_UPSTREAM,
+		[PCI_EXP_TYPE_DOWNSTREAM]  = HPX_TYPE_DOWNSTREAM,
+		[PCI_EXP_TYPE_PCI_BRIDGE]  = HPX_TYPE_PCI_BRIDGE,
+		[PCI_EXP_TYPE_PCIE_BRIDGE] = HPX_TYPE_PCIE_BRIDGE,
+	};
+
+	if (pcie_type >= ARRAY_SIZE(pcie_to_hpx3_type))
+		return 0;
+
+	return pcie_to_hpx3_type[pcie_type];
+}
+
+static u8 hpx3_function_type(struct pci_dev *dev)
+{
+	if (dev->is_virtfn)
+		return HPX_FN_SRIOV_VIRT;
+	else if (pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV) > 0)
+		return HPX_FN_SRIOV_PHYS;
+	else
+		return HPX_FN_NORMAL;
+}
+
+static bool hpx3_cap_ver_matches(u8 pcie_cap_id, u8 hpx3_cap_id)
+{
+	u8 cap_ver = hpx3_cap_id & 0xf;
+
+	if ((hpx3_cap_id & BIT(4)) && cap_ver >= pcie_cap_id)
+		return true;
+	else if (cap_ver == pcie_cap_id)
+		return true;
+
+	return false;
+}
+
+static void program_hpx_type3_register(struct pci_dev *dev,
+				       const struct hpx_type3 *reg)
+{
+	u32 match_reg, write_reg, header, orig_value;
+	u16 pos;
+
+	if (!(hpx3_device_type(dev) & reg->device_type))
+		return;
+
+	if (!(hpx3_function_type(dev) & reg->function_type))
+		return;
+
+	switch (reg->config_space_location) {
+	case HPX_CFG_PCICFG:
+		pos = 0;
+		break;
+	case HPX_CFG_PCIE_CAP:
+		pos = pci_find_capability(dev, reg->pci_exp_cap_id);
+		if (pos == 0)
+			return;
+
+		break;
+	case HPX_CFG_PCIE_CAP_EXT:
+		pos = pci_find_ext_capability(dev, reg->pci_exp_cap_id);
+		if (pos == 0)
+			return;
+
+		pci_read_config_dword(dev, pos, &header);
+		if (!hpx3_cap_ver_matches(PCI_EXT_CAP_VER(header),
+					  reg->pci_exp_cap_ver))
+			return;
+
+		break;
+	case HPX_CFG_VEND_CAP:	/* Fall through */
+	case HPX_CFG_DVSEC:	/* Fall through */
+	default:
+		pci_warn(dev, "Encountered _HPX type 3 with unsupported config space location");
+		return;
+	}
+
+	pci_read_config_dword(dev, pos + reg->match_offset, &match_reg);
+
+	if ((match_reg & reg->match_mask_and) != reg->match_value)
+		return;
+
+	pci_read_config_dword(dev, pos + reg->reg_offset, &write_reg);
+	orig_value = write_reg;
+	write_reg &= reg->reg_mask_and;
+	write_reg |= reg->reg_mask_or;
+
+	if (orig_value == write_reg)
+		return;
+
+	pci_write_config_dword(dev, pos + reg->reg_offset, write_reg);
+
+	pci_dbg(dev, "Applied _HPX3 at [0x%x]: 0x%08x -> 0x%08x",
+		pos, orig_value, write_reg);
+}
+
+static void program_hpx_type3(struct pci_dev *dev, struct hpx_type3 *hpx3)
+{
+	if (!hpx3)
+		return;
+
+	if (!pci_is_pcie(dev))
+		return;
+
+	program_hpx_type3_register(dev, hpx3);
+}
+
 int pci_configure_extended_tags(struct pci_dev *dev, void *ign)
 {
 	struct pci_host_bridge *host;
···
 
 static void pci_configure_device(struct pci_dev *dev)
 {
-	struct hotplug_params hpp;
-	int ret;
+	static const struct hotplug_program_ops hp_ops = {
+		.program_type0 = program_hpp_type0,
+		.program_type1 = program_hpp_type1,
+		.program_type2 = program_hpp_type2,
+		.program_type3 = program_hpx_type3,
+	};
 
 	pci_configure_mps(dev);
 	pci_configure_extended_tags(dev, NULL);
···
 	pci_configure_eetlp_prefix(dev);
 	pci_configure_serr(dev);
 
-	memset(&hpp, 0, sizeof(hpp));
-	ret = pci_get_hp_params(dev, &hpp);
-	if (ret)
-		return;
-
-	program_hpp_type2(dev, hpp.t2);
-	program_hpp_type1(dev, hpp.t1);
-	program_hpp_type0(dev, hpp.t0);
+	pci_acpi_program_hp_params(dev, &hp_ops);
 }
 
 static void pci_release_capabilities(struct pci_dev *dev)
···
 	conflict = request_resource_conflict(parent_res, res);
 
 	if (conflict)
-		dev_printk(KERN_DEBUG, &b->dev,
+		dev_info(&b->dev,
 			   "busn_res: can not insert %pR under %s%pR (conflicts with %s %pR)\n",
 			   res, pci_is_root_bus(b) ? "domain " : "",
 			   parent_res, conflict->name, conflict);
···
 
 	size = bus_max - res->start + 1;
 	ret = adjust_resource(res, res->start, size);
-	dev_printk(KERN_DEBUG, &b->dev,
-		   "busn_res: %pR end %s updated to %02x\n",
+	dev_info(&b->dev, "busn_res: %pR end %s updated to %02x\n",
 		   &old_res, ret ? "can not be" : "is", bus_max);
 
 	if (!ret && !res->parent)
···
 		return;
 
 	ret = release_resource(res);
-	dev_printk(KERN_DEBUG, &b->dev,
-		   "busn_res: %pR %s released\n",
+	dev_info(&b->dev, "busn_res: %pR %s released\n",
 		   res, ret ? "can not be" : "is");
 }
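The new `pci_ea_fixed_busnrs()` above boils down to a mask-and-shift over the first dword of a bridge's Enhanced Allocation capability: fixed secondary bus in bits 7:0, fixed subordinate bus in bits 15:8. A standalone sketch of just that extraction, with mask values mirroring `include/uapi/linux/pci_regs.h`:

```c
#include <assert.h>
#include <stdint.h>

/* Mask/shift values as defined in pci_regs.h for the EA type-1 header */
#define PCI_EA_SEC_BUS_MASK	0xff
#define PCI_EA_SUB_BUS_MASK	0xff00
#define PCI_EA_SUB_BUS_SHIFT	8

/* Extract the fixed secondary/subordinate bus numbers from the first
 * dword of the EA capability (the config-space read is omitted here) */
static void ea_fixed_busnrs(uint32_t dw, uint8_t *sec, uint8_t *sub)
{
	*sec = dw & PCI_EA_SEC_BUS_MASK;
	*sub = (dw & PCI_EA_SUB_BUS_MASK) >> PCI_EA_SUB_BUS_SHIFT;
}
```

With `dw = 0x00000504`, the secondary bus is 4 and the subordinate bus is 5, which is exactly what `pci_scan_bridge_extend()` then feeds into `next_busnr` and the final `PCI_SUBORDINATE_BUS` write.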
drivers/pci/proc.c (+1)
···
 	}
 	/* If arch decided it can't, fall through... */
 #endif /* HAVE_PCI_MMAP */
+		/* fall through */
 	default:
 		ret = -EINVAL;
 		break;
drivers/pci/quirks.c (+84 -8)
···
 	u8 tmp;
 
 	if (pci_cache_line_size)
-		printk(KERN_DEBUG "PCI: CLS %u bytes\n",
-		       pci_cache_line_size << 2);
+		pr_info("PCI: CLS %u bytes\n", pci_cache_line_size << 2);
 
 	pci_apply_fixup_final_quirks = true;
 	for_each_pci_dev(dev) {
···
 		if (!tmp || cls == tmp)
 			continue;
 
-		printk(KERN_DEBUG "PCI: CLS mismatch (%u != %u), using %u bytes\n",
-		       cls << 2, tmp << 2,
-		       pci_dfl_cache_line_size << 2);
+		pci_info(dev, "CLS mismatch (%u != %u), using %u bytes\n",
+			 cls << 2, tmp << 2,
+			 pci_dfl_cache_line_size << 2);
 		pci_cache_line_size = pci_dfl_cache_line_size;
 	}
 }
 
 	if (!pci_cache_line_size) {
-		printk(KERN_DEBUG "PCI: CLS %u bytes, default %u\n",
-		       cls << 2, pci_dfl_cache_line_size << 2);
+		pr_info("PCI: CLS %u bytes, default %u\n", cls << 2,
+			pci_dfl_cache_line_size << 2);
 		pci_cache_line_size = cls ? cls : pci_dfl_cache_line_size;
 	}
···
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10f4, quirk_disable_aspm_l0s);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1508, quirk_disable_aspm_l0s);
 
+/*
+ * Some Pericom PCIe-to-PCI bridges in reverse mode need the PCIe Retrain
+ * Link bit cleared after starting the link retrain process to allow this
+ * process to finish.
+ *
+ * Affected devices: PI7C9X110, PI7C9X111SL, PI7C9X130.  See also the
+ * Pericom Errata Sheet PI7C9X111SLB_errata_rev1.2_102711.pdf.
+ */
+static void quirk_enable_clear_retrain_link(struct pci_dev *dev)
+{
+	dev->clear_retrain_link = 1;
+	pci_info(dev, "Enable PCIe Retrain Link quirk\n");
+}
+DECLARE_PCI_FIXUP_HEADER(0x12d8, 0xe110, quirk_enable_clear_retrain_link);
+DECLARE_PCI_FIXUP_HEADER(0x12d8, 0xe111, quirk_enable_clear_retrain_link);
+DECLARE_PCI_FIXUP_HEADER(0x12d8, 0xe130, quirk_enable_clear_retrain_link);
+
 static void fixup_rev1_53c810(struct pci_dev *dev)
 {
 	u32 class = dev->class;
···
 	pci_read_config_dword(dev, 0x74, &cfg);
 
 	if (cfg & ((1 << 2) | (1 << 15))) {
-		printk(KERN_INFO "Rewriting IRQ routing register on MCP55\n");
+		pr_info("Rewriting IRQ routing register on MCP55\n");
 		cfg &= ~((1 << 2) | (1 << 15));
 		pci_write_config_dword(dev, 0x74, cfg);
 	}
···
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0032, quirk_no_bus_reset);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003c, quirk_no_bus_reset);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0033, quirk_no_bus_reset);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0034, quirk_no_bus_reset);
 
 /*
  * Root port on some Cavium CN8xxx chips do not successfully complete a bus
···
 
 /* AMD Stoney platform GPU */
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x98e4, quirk_no_ats);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x6900, quirk_no_ats);
 #endif /* CONFIG_PCI_ATS */
 
 /* Freescale PCIe doesn't support MSI in RC mode */
···
 SWITCHTEC_QUIRK(0x8574);  /* PFXI 64XG3 */
 SWITCHTEC_QUIRK(0x8575);  /* PFXI 80XG3 */
 SWITCHTEC_QUIRK(0x8576);  /* PFXI 96XG3 */
+
+/*
+ * On Lenovo Thinkpad P50 SKUs with a Nvidia Quadro M1000M, the BIOS does
+ * not always reset the secondary Nvidia GPU between reboots if the system
+ * is configured to use Hybrid Graphics mode.  This results in the GPU
+ * being left in whatever state it was in during the *previous* boot, which
+ * causes spurious interrupts from the GPU, which in turn causes us to
+ * disable the wrong IRQ and end up breaking the touchpad.  Unsurprisingly,
+ * this also completely breaks nouveau.
+ *
+ * Luckily, it seems a simple reset of the Nvidia GPU brings it back to a
+ * clean state and fixes all these issues.
+ *
+ * When the machine is configured in Dedicated display mode, the issue
+ * doesn't occur.  Fortunately the GPU advertises NoReset+ when in this
+ * mode, so we can detect that and avoid resetting it.
+ */
+static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
+{
+	void __iomem *map;
+	int ret;
+
+	if (pdev->subsystem_vendor != PCI_VENDOR_ID_LENOVO ||
+	    pdev->subsystem_device != 0x222e ||
+	    !pdev->reset_fn)
+		return;
+
+	if (pci_enable_device_mem(pdev))
+		return;
+
+	/*
+	 * Based on nvkm_device_ctor() in
+	 * drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+	 */
+	map = pci_iomap(pdev, 0, 0x23000);
+	if (!map) {
+		pci_err(pdev, "Can't map MMIO space\n");
+		goto out_disable;
+	}
+
+	/*
+	 * Make sure the GPU looks like it's been POSTed before resetting
+	 * it.
+	 */
+	if (ioread32(map + 0x2240c) & 0x2) {
+		pci_info(pdev, FW_BUG "GPU left initialized by EFI, resetting\n");
+		ret = pci_reset_function(pdev);
+		if (ret < 0)
+			pci_err(pdev, "Failed to reset GPU: %d\n", ret);
+	}
+
+	iounmap(map);
+out_disable:
+	pci_disable_device(pdev);
+}
+DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
+			      PCI_CLASS_DISPLAY_VGA, 8,
+			      quirk_reset_lenovo_thinkpad_p50_nvgpu);
drivers/pci/search.c (+3 -7)
···
 	struct pci_bus *bus;
 	int ret;
 
-	ret = fn(pdev, PCI_DEVID(pdev->bus->number, pdev->devfn), data);
+	ret = fn(pdev, pci_dev_id(pdev), data);
 	if (ret)
 		return ret;
···
 				return ret;
 			continue;
 		case PCI_EXP_TYPE_PCIE_BRIDGE:
-			ret = fn(tmp,
-				 PCI_DEVID(tmp->bus->number,
-					   tmp->devfn), data);
+			ret = fn(tmp, pci_dev_id(tmp), data);
 			if (ret)
 				return ret;
 			continue;
···
 				 PCI_DEVID(tmp->subordinate->number,
 					   PCI_DEVFN(0, 0)), data);
 		else
-			ret = fn(tmp,
-				 PCI_DEVID(tmp->bus->number,
-					   tmp->devfn), data);
+			ret = fn(tmp, pci_dev_id(tmp), data);
 		if (ret)
 			return ret;
 	}
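The search.c hunks replace the open-coded `PCI_DEVID(bus->number, devfn)` pattern with the `pci_dev_id()` helper. Both compute the same 16-bit requester ID: bus number in bits 15:8, devfn in bits 7:0. A userspace sketch of the arithmetic, with reduced stand-in structs:

```c
#include <assert.h>
#include <stdint.h>

/* Reduced stand-ins for the kernel structures */
struct pci_bus { uint8_t number; };
struct pci_dev { struct pci_bus *bus; uint8_t devfn; };

/* Same packing as the kernel macro: bus in 15:8, devfn in 7:0 */
#define PCI_DEVID(bus, devfn)	((((uint16_t)(bus)) << 8) | (devfn))

/* The helper simply applies PCI_DEVID to the device's own bus/devfn,
 * so every call site shrinks to one line */
static uint16_t pci_dev_id(struct pci_dev *dev)
{
	return PCI_DEVID(dev->bus->number, dev->devfn);
}
```

For a device at bus 0x3a, devfn 0x10, both forms yield 0x3a10, which is why the conversion is purely mechanical.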
drivers/pci/setup-bus.c (+263 -263)
··· 49 49 } 50 50 51 51 /** 52 - * add_to_list() - add a new resource tracker to the list 52 + * add_to_list() - Add a new resource tracker to the list 53 53 * @head: Head of the list 54 - * @dev: device corresponding to which the resource 55 - * belongs 56 - * @res: The resource to be tracked 57 - * @add_size: additional size to be optionally added 58 - * to the resource 54 + * @dev: Device to which the resource belongs 55 + * @res: Resource to be tracked 56 + * @add_size: Additional size to be optionally added to the resource 59 57 */ 60 - static int add_to_list(struct list_head *head, 61 - struct pci_dev *dev, struct resource *res, 62 - resource_size_t add_size, resource_size_t min_align) 58 + static int add_to_list(struct list_head *head, struct pci_dev *dev, 59 + struct resource *res, resource_size_t add_size, 60 + resource_size_t min_align) 63 61 { 64 62 struct pci_dev_resource *tmp; 65 63 ··· 78 80 return 0; 79 81 } 80 82 81 - static void remove_from_list(struct list_head *head, 82 - struct resource *res) 83 + static void remove_from_list(struct list_head *head, struct resource *res) 83 84 { 84 85 struct pci_dev_resource *dev_res, *tmp; 85 86 ··· 155 158 tmp->res = r; 156 159 tmp->dev = dev; 157 160 158 - /* fallback is smallest one or list is empty*/ 161 + /* Fallback is smallest one or list is empty */ 159 162 n = head; 160 163 list_for_each_entry(dev_res, head, list) { 161 164 resource_size_t align; ··· 168 171 break; 169 172 } 170 173 } 171 - /* Insert it just before n*/ 174 + /* Insert it just before n */ 172 175 list_add_tail(&tmp->list, n); 173 176 } 174 177 } 175 178 176 - static void __dev_sort_resources(struct pci_dev *dev, 177 - struct list_head *head) 179 + static void __dev_sort_resources(struct pci_dev *dev, struct list_head *head) 178 180 { 179 181 u16 class = dev->class >> 8; 180 182 181 - /* Don't touch classless devices or host bridges or ioapics. 
*/ 183 + /* Don't touch classless devices or host bridges or IOAPICs */ 182 184 if (class == PCI_CLASS_NOT_DEFINED || class == PCI_CLASS_BRIDGE_HOST) 183 185 return; 184 186 185 - /* Don't touch ioapic devices already enabled by firmware */ 187 + /* Don't touch IOAPIC devices already enabled by firmware */ 186 188 if (class == PCI_CLASS_SYSTEM_PIC) { 187 189 u16 command; 188 190 pci_read_config_word(dev, PCI_COMMAND, &command); ··· 200 204 } 201 205 202 206 /** 203 - * reassign_resources_sorted() - satisfy any additional resource requests 207 + * reassign_resources_sorted() - Satisfy any additional resource requests 204 208 * 205 - * @realloc_head : head of the list tracking requests requiring additional 206 - * resources 207 - * @head : head of the list tracking requests with allocated 208 - * resources 209 + * @realloc_head: Head of the list tracking requests requiring 210 + * additional resources 211 + * @head: Head of the list tracking requests with allocated 212 + * resources 209 213 * 210 - * Walk through each element of the realloc_head and try to procure 211 - * additional resources for the element, provided the element 212 - * is in the head list. 214 + * Walk through each element of the realloc_head and try to procure additional 215 + * resources for the element, provided the element is in the head list. 
213 216 */ 214 217 static void reassign_resources_sorted(struct list_head *realloc_head, 215 - struct list_head *head) 218 + struct list_head *head) 216 219 { 217 220 struct resource *res; 218 221 struct pci_dev_resource *add_res, *tmp; ··· 223 228 bool found_match = false; 224 229 225 230 res = add_res->res; 226 - /* skip resource that has been reset */ 231 + /* Skip resource that has been reset */ 227 232 if (!res->flags) 228 233 goto out; 229 234 230 - /* skip this resource if not found in head list */ 235 + /* Skip this resource if not found in head list */ 231 236 list_for_each_entry(dev_res, head, list) { 232 237 if (dev_res->res == res) { 233 238 found_match = true; 234 239 break; 235 240 } 236 241 } 237 - if (!found_match)/* just skip */ 242 + if (!found_match) /* Just skip */ 238 243 continue; 239 244 240 245 idx = res - &add_res->dev->resource[0]; ··· 250 255 (IORESOURCE_STARTALIGN|IORESOURCE_SIZEALIGN); 251 256 if (pci_reassign_resource(add_res->dev, idx, 252 257 add_size, align)) 253 - pci_printk(KERN_DEBUG, add_res->dev, 254 - "failed to add %llx res[%d]=%pR\n", 255 - (unsigned long long)add_size, 256 - idx, res); 258 + pci_info(add_res->dev, "failed to add %llx res[%d]=%pR\n", 259 + (unsigned long long) add_size, idx, 260 + res); 257 261 } 258 262 out: 259 263 list_del(&add_res->list); ··· 261 267 } 262 268 263 269 /** 264 - * assign_requested_resources_sorted() - satisfy resource requests 270 + * assign_requested_resources_sorted() - Satisfy resource requests 265 271 * 266 - * @head : head of the list tracking requests for resources 267 - * @fail_head : head of the list tracking requests that could 268 - * not be allocated 272 + * @head: Head of the list tracking requests for resources 273 + * @fail_head: Head of the list tracking requests that could not be 274 + * allocated 269 275 * 270 - * Satisfy resource requests of each element in the list. Add 271 - * requests that could not satisfied to the failed_list. 
276 + * Satisfy resource requests of each element in the list. Add requests that 277 + * could not be satisfied to the failed_list. 272 278 */ 273 279 static void assign_requested_resources_sorted(struct list_head *head, 274 280 struct list_head *fail_head) ··· 284 290 pci_assign_resource(dev_res->dev, idx)) { 285 291 if (fail_head) { 286 292 /* 287 - * if the failed res is for ROM BAR, and it will 288 - * be enabled later, don't add it to the list 293 + * If the failed resource is a ROM BAR and 294 + * it will be enabled later, don't add it 295 + * to the list. 289 296 */ 290 297 if (!((idx == PCI_ROM_RESOURCE) && 291 298 (!(res->flags & IORESOURCE_ROM_ENABLE)))) ··· 305 310 struct pci_dev_resource *fail_res; 306 311 unsigned long mask = 0; 307 312 308 - /* check failed type */ 313 + /* Check failed type */ 309 314 list_for_each_entry(fail_res, fail_head, list) 310 315 mask |= fail_res->flags; 311 316 312 317 /* 313 - * one pref failed resource will set IORESOURCE_MEM, 314 - * as we can allocate pref in non-pref range. 315 - * Will release all assigned non-pref sibling resources 316 - * according to that bit. 318 + * One pref failed resource will set IORESOURCE_MEM, as we can 319 + * allocate pref in non-pref range. Will release all assigned 320 + * non-pref sibling resources according to that bit. 
317 321 */ 318 322 return mask & (IORESOURCE_IO | IORESOURCE_MEM | IORESOURCE_PREFETCH); 319 323 } ··· 322 328 if (res->flags & IORESOURCE_IO) 323 329 return !!(mask & IORESOURCE_IO); 324 330 325 - /* check pref at first */ 331 + /* Check pref at first */ 326 332 if (res->flags & IORESOURCE_PREFETCH) { 327 333 if (mask & IORESOURCE_PREFETCH) 328 334 return true; 329 - /* count pref if its parent is non-pref */ 335 + /* Count pref if its parent is non-pref */ 330 336 else if ((mask & IORESOURCE_MEM) && 331 337 !(res->parent->flags & IORESOURCE_PREFETCH)) 332 338 return true; ··· 337 343 if (res->flags & IORESOURCE_MEM) 338 344 return !!(mask & IORESOURCE_MEM); 339 345 340 - return false; /* should not get here */ 346 + return false; /* Should not get here */ 341 347 } 342 348 343 349 static void __assign_resources_sorted(struct list_head *head, 344 - struct list_head *realloc_head, 345 - struct list_head *fail_head) 350 + struct list_head *realloc_head, 351 + struct list_head *fail_head) 346 352 { 347 353 /* 348 - * Should not assign requested resources at first. 349 - * they could be adjacent, so later reassign can not reallocate 350 - * them one by one in parent resource window. 351 - * Try to assign requested + add_size at beginning 352 - * if could do that, could get out early. 353 - * if could not do that, we still try to assign requested at first, 354 - * then try to reassign add_size for some resources. 354 + * Should not assign requested resources at first. They could be 355 + * adjacent, so later reassign can not reallocate them one by one in 356 + * parent resource window. 357 + * 358 + * Try to assign requested + add_size at beginning. If could do that, 359 + * could get out early. If could not do that, we still try to assign 360 + * requested at first, then try to reassign add_size for some resources. 355 361 * 356 362 * Separate three resource type checking if we need to release 357 363 * assigned resource after requested + add_size try. 358 - * 1. 
if there is io port assign fail, will release assigned 359 - * io port. 360 - * 2. if there is pref mmio assign fail, release assigned 361 - * pref mmio. 362 - * if assigned pref mmio's parent is non-pref mmio and there 363 - * is non-pref mmio assign fail, will release that assigned 364 - * pref mmio. 365 - * 3. if there is non-pref mmio assign fail or pref mmio 366 - * assigned fail, will release assigned non-pref mmio. 364 + * 365 + * 1. If IO port assignment fails, will release assigned IO 366 + * port. 367 + * 2. If pref MMIO assignment fails, release assigned pref 368 + * MMIO. If assigned pref MMIO's parent is non-pref MMIO 369 + * and non-pref MMIO assignment fails, will release that 370 + * assigned pref MMIO. 371 + * 3. If non-pref MMIO assignment fails or pref MMIO 372 + * assignment fails, will release assigned non-pref MMIO. 367 373 */ 368 374 LIST_HEAD(save_head); 369 375 LIST_HEAD(local_fail_head); ··· 392 398 /* 393 399 * There are two kinds of additional resources in the list: 394 400 * 1. bridge resource -- IORESOURCE_STARTALIGN 395 - * 2. SR-IOV resource -- IORESOURCE_SIZEALIGN 401 + * 2. SR-IOV resource -- IORESOURCE_SIZEALIGN 396 402 * Here just fix the additional alignment for bridge 397 403 */ 398 404 if (!(dev_res->res->flags & IORESOURCE_STARTALIGN)) ··· 401 407 add_align = get_res_add_align(realloc_head, dev_res->res); 402 408 403 409 /* 404 - * The "head" list is sorted by the alignment to make sure 405 - * resources with bigger alignment will be assigned first. 406 - * After we change the alignment of a dev_res in "head" list, 407 - * we need to reorder the list by alignment to make it 410 + * The "head" list is sorted by alignment so resources with 411 + * bigger alignment will be assigned first. After we 412 + * change the alignment of a dev_res in "head" list, we 413 + * need to reorder the list by alignment to make it 408 414 * consistent. 
409 415 */ 410 416 if (add_align > dev_res->res->start) { ··· 429 435 /* Try updated head list with add_size added */ 430 436 assign_requested_resources_sorted(head, &local_fail_head); 431 437 432 - /* all assigned with add_size ? */ 438 + /* All assigned with add_size? */ 433 439 if (list_empty(&local_fail_head)) { 434 440 /* Remove head list from realloc_head list */ 435 441 list_for_each_entry(dev_res, head, list) ··· 439 445 return; 440 446 } 441 447 442 - /* check failed type */ 448 + /* Check failed type */ 443 449 fail_type = pci_fail_res_type_mask(&local_fail_head); 444 - /* remove not need to be released assigned res from head list etc */ 450 + /* Remove not need to be released assigned res from head list etc */ 445 451 list_for_each_entry_safe(dev_res, tmp_res, head, list) 446 452 if (dev_res->res->parent && 447 453 !pci_need_to_release(fail_type, dev_res->res)) { 448 - /* remove it from realloc_head list */ 454 + /* Remove it from realloc_head list */ 449 455 remove_from_list(realloc_head, dev_res->res); 450 456 remove_from_list(&save_head, dev_res->res); 451 457 list_del(&dev_res->list); ··· 471 477 /* Satisfy the must-have resource requests */ 472 478 assign_requested_resources_sorted(head, fail_head); 473 479 474 - /* Try to satisfy any additional optional resource 475 - requests */ 480 + /* Try to satisfy any additional optional resource requests */ 476 481 if (realloc_head) 477 482 reassign_resources_sorted(realloc_head, head); 478 483 free_list(head); 479 484 } 480 485 481 486 static void pdev_assign_resources_sorted(struct pci_dev *dev, 482 - struct list_head *add_head, 483 - struct list_head *fail_head) 487 + struct list_head *add_head, 488 + struct list_head *fail_head) 484 489 { 485 490 LIST_HEAD(head); 486 491 ··· 556 563 } 557 564 EXPORT_SYMBOL(pci_setup_cardbus); 558 565 559 - /* Initialize bridges with base/limit values we have collected. 560 - PCI-to-PCI Bridge Architecture Specification rev. 
1.1 (1998) 561 - requires that if there is no I/O ports or memory behind the 562 - bridge, corresponding range must be turned off by writing base 563 - value greater than limit to the bridge's base/limit registers. 564 - 565 - Note: care must be taken when updating I/O base/limit registers 566 - of bridges which support 32-bit I/O. This update requires two 567 - config space writes, so it's quite possible that an I/O window of 568 - the bridge will have some undesirable address (e.g. 0) after the 569 - first write. Ditto 64-bit prefetchable MMIO. */ 566 + /* 567 + * Initialize bridges with base/limit values we have collected. PCI-to-PCI 568 + * Bridge Architecture Specification rev. 1.1 (1998) requires that if there 569 + * are no I/O ports or memory behind the bridge, the corresponding range 570 + * must be turned off by writing base value greater than limit to the 571 + * bridge's base/limit registers. 572 + * 573 + * Note: care must be taken when updating I/O base/limit registers of 574 + * bridges which support 32-bit I/O. This update requires two config space 575 + * writes, so it's quite possible that an I/O window of the bridge will 576 + * have some undesirable address (e.g. 0) after the first write. Ditto 577 + * 64-bit prefetchable MMIO. 578 + */ 570 579 static void pci_setup_bridge_io(struct pci_dev *bridge) 571 580 { 572 581 struct resource *res; ··· 582 587 if (bridge->io_window_1k) 583 588 io_mask = PCI_IO_1K_RANGE_MASK; 584 589 585 - /* Set up the top and bottom of the PCI I/O segment for this bus. */ 590 + /* Set up the top and bottom of the PCI I/O segment for this bus */ 586 591 res = &bridge->resource[PCI_BRIDGE_RESOURCES + 0]; 587 592 pcibios_resource_to_bus(bridge->bus, &region, res); 588 593 if (res->flags & IORESOURCE_IO) { ··· 590 595 io_base_lo = (region.start >> 8) & io_mask; 591 596 io_limit_lo = (region.end >> 8) & io_mask; 592 597 l = ((u16) io_limit_lo << 8) | io_base_lo; 593 - /* Set up upper 16 bits of I/O base/limit. 
*/ 598 + /* Set up upper 16 bits of I/O base/limit */ 594 599 io_upper16 = (region.end & 0xffff0000) | (region.start >> 16); 595 600 pci_info(bridge, " bridge window %pR\n", res); 596 601 } else { 597 - /* Clear upper 16 bits of I/O base/limit. */ 602 + /* Clear upper 16 bits of I/O base/limit */ 598 603 io_upper16 = 0; 599 604 l = 0x00f0; 600 605 } 601 - /* Temporarily disable the I/O range before updating PCI_IO_BASE. */ 606 + /* Temporarily disable the I/O range before updating PCI_IO_BASE */ 602 607 pci_write_config_dword(bridge, PCI_IO_BASE_UPPER16, 0x0000ffff); 603 - /* Update lower 16 bits of I/O base/limit. */ 608 + /* Update lower 16 bits of I/O base/limit */ 604 609 pci_write_config_word(bridge, PCI_IO_BASE, l); 605 - /* Update upper 16 bits of I/O base/limit. */ 610 + /* Update upper 16 bits of I/O base/limit */ 606 611 pci_write_config_dword(bridge, PCI_IO_BASE_UPPER16, io_upper16); 607 612 } 608 613 ··· 612 617 struct pci_bus_region region; 613 618 u32 l; 614 619 615 - /* Set up the top and bottom of the PCI Memory segment for this bus. */ 620 + /* Set up the top and bottom of the PCI Memory segment for this bus */ 616 621 res = &bridge->resource[PCI_BRIDGE_RESOURCES + 1]; 617 622 pcibios_resource_to_bus(bridge->bus, &region, res); 618 623 if (res->flags & IORESOURCE_MEM) { ··· 631 636 struct pci_bus_region region; 632 637 u32 l, bu, lu; 633 638 634 - /* Clear out the upper 32 bits of PREF limit. 635 - If PCI_PREF_BASE_UPPER32 was non-zero, this temporarily 636 - disables PREF range, which is ok. */ 639 + /* 640 + * Clear out the upper 32 bits of PREF limit. If 641 + * PCI_PREF_BASE_UPPER32 was non-zero, this temporarily disables 642 + * PREF range, which is ok. 643 + */ 637 644 pci_write_config_dword(bridge, PCI_PREF_LIMIT_UPPER32, 0); 638 645 639 - /* Set up PREF base/limit. 
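The register arithmetic in pci_setup_bridge_io() above can be checked in isolation. This is a userspace sketch of the 16-bit I/O base/limit encoding only; the macro names here are illustrative stand-ins for the kernel's PCI_IO_RANGE_MASK / PCI_IO_1K_RANGE_MASK, and the disable value mirrors the `l = 0x00f0` fallback in the hunk:

```c
#include <stdint.h>

/* 4K-granularity windows keep address bits [15:12] in the high nibble of
 * each register byte; bridges with the 1K extension keep bits [15:10]. */
#define IO_RANGE_MASK    0xf0u   /* stand-in for PCI_IO_RANGE_MASK */
#define IO_1K_RANGE_MASK 0xfcu   /* stand-in for PCI_IO_1K_RANGE_MASK */

/* Build the combined base/limit word written to PCI_IO_BASE, as in
 * l = ((u16) io_limit_lo << 8) | io_base_lo above. */
uint16_t io_base_limit(uint32_t start, uint32_t end, int window_1k)
{
	uint32_t mask = window_1k ? IO_1K_RANGE_MASK : IO_RANGE_MASK;
	uint8_t base_lo  = (start >> 8) & mask;
	uint8_t limit_lo = (end >> 8) & mask;

	return (uint16_t)(((uint16_t)limit_lo << 8) | base_lo);
}

/* Base > limit disables the range, per the spec comment above; the
 * driver writes 0x00f0 (base 0xf0, limit 0x00) for a closed window. */
uint16_t io_window_disabled(void)
{
	return 0x00f0;
}
```

For a 4K window 0x1000-0x1fff both fields encode 0x10, giving 0x1010; a disabled window reads back with base above limit, which is exactly what the two-write hazard in the comment is protecting against.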
*/ 646 + /* Set up PREF base/limit */ 640 647 bu = lu = 0; 641 648 res = &bridge->resource[PCI_BRIDGE_RESOURCES + 2]; 642 649 pcibios_resource_to_bus(bridge->bus, &region, res); ··· 655 658 } 656 659 pci_write_config_dword(bridge, PCI_PREF_MEMORY_BASE, l); 657 660 658 - /* Set the upper 32 bits of PREF base & limit. */ 661 + /* Set the upper 32 bits of PREF base & limit */ 659 662 pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32, bu); 660 663 pci_write_config_dword(bridge, PCI_PREF_LIMIT_UPPER32, lu); 661 664 } ··· 699 702 return 0; 700 703 701 704 if (pci_claim_resource(bridge, i) == 0) 702 - return 0; /* claimed the window */ 705 + return 0; /* Claimed the window */ 703 706 704 707 if ((bridge->class >> 8) != PCI_CLASS_BRIDGE_PCI) 705 708 return 0; 706 709 707 710 if (!pci_bus_clip_resource(bridge, i)) 708 - return -EINVAL; /* clipping didn't change anything */ 711 + return -EINVAL; /* Clipping didn't change anything */ 709 712 710 713 switch (i - PCI_BRIDGE_RESOURCES) { 711 714 case 0: ··· 722 725 } 723 726 724 727 if (pci_claim_resource(bridge, i) == 0) 725 - return 0; /* claimed a smaller window */ 728 + return 0; /* Claimed a smaller window */ 726 729 727 730 return -EINVAL; 728 731 } 729 732 730 - /* Check whether the bridge supports optional I/O and 731 - prefetchable memory ranges. If not, the respective 732 - base/limit registers must be read-only and read as 0. */ 733 + /* 734 + * Check whether the bridge supports optional I/O and prefetchable memory 735 + * ranges. If not, the respective base/limit registers must be read-only 736 + * and read as 0. 737 + */ 733 738 static void pci_bridge_check_ranges(struct pci_bus *bus) 734 739 { 735 740 struct pci_dev *bridge = bus->self; ··· 751 752 } 752 753 } 753 754 754 - /* Helper function for sizing routines: find first available 755 - bus resource of a given type. Note: we intentionally skip 756 - the bus resources which have already been assigned (that is, 757 - have non-NULL parent resource). 
*/ 755 + /* 756 + * Helper function for sizing routines: find first available bus resource 757 + * of a given type. Note: we intentionally skip the bus resources which 758 + * have already been assigned (that is, have non-NULL parent resource). 759 + */ 758 760 static struct resource *find_free_bus_resource(struct pci_bus *bus, 759 - unsigned long type_mask, unsigned long type) 761 + unsigned long type_mask, 762 + unsigned long type) 760 763 { 761 764 int i; 762 765 struct resource *r; ··· 773 772 } 774 773 775 774 static resource_size_t calculate_iosize(resource_size_t size, 776 - resource_size_t min_size, 777 - resource_size_t size1, 778 - resource_size_t add_size, 779 - resource_size_t children_add_size, 780 - resource_size_t old_size, 781 - resource_size_t align) 775 + resource_size_t min_size, 776 + resource_size_t size1, 777 + resource_size_t add_size, 778 + resource_size_t children_add_size, 779 + resource_size_t old_size, 780 + resource_size_t align) 782 781 { 783 782 if (size < min_size) 784 783 size = min_size; 785 784 if (old_size == 1) 786 785 old_size = 0; 787 - /* To be fixed in 2.5: we should have sort of HAVE_ISA 788 - flag in the struct pci_bus. */ 786 + /* 787 + * To be fixed in 2.5: we should have sort of HAVE_ISA flag in the 788 + * struct pci_bus. 
789 + */ 789 790 #if defined(CONFIG_ISA) || defined(CONFIG_EISA) 790 791 size = (size & 0xff) + ((size & ~0xffUL) << 2); 791 792 #endif ··· 800 797 } 801 798 802 799 static resource_size_t calculate_memsize(resource_size_t size, 803 - resource_size_t min_size, 804 - resource_size_t add_size, 805 - resource_size_t children_add_size, 806 - resource_size_t old_size, 807 - resource_size_t align) 800 + resource_size_t min_size, 801 + resource_size_t add_size, 802 + resource_size_t children_add_size, 803 + resource_size_t old_size, 804 + resource_size_t align) 808 805 { 809 806 if (size < min_size) 810 807 size = min_size; ··· 827 824 #define PCI_P2P_DEFAULT_IO_ALIGN 0x1000 /* 4KiB */ 828 825 #define PCI_P2P_DEFAULT_IO_ALIGN_1K 0x400 /* 1KiB */ 829 826 830 - static resource_size_t window_alignment(struct pci_bus *bus, 831 - unsigned long type) 827 + static resource_size_t window_alignment(struct pci_bus *bus, unsigned long type) 832 828 { 833 829 resource_size_t align = 1, arch_align; 834 830 ··· 835 833 align = PCI_P2P_DEFAULT_MEM_ALIGN; 836 834 else if (type & IORESOURCE_IO) { 837 835 /* 838 - * Per spec, I/O windows are 4K-aligned, but some 839 - * bridges have an extension to support 1K alignment. 836 + * Per spec, I/O windows are 4K-aligned, but some bridges have 837 + * an extension to support 1K alignment. 
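The CONFIG_ISA/CONFIG_EISA branch of calculate_iosize() above encodes the ISA aliasing constraint: only the first 256 ports of every 1K block are safely usable, so the low 256 bytes of a request stay as-is and every further 256-byte chunk is scaled to a full 1K. A standalone re-derivation of that one expression:

```c
/* Inflate an I/O size request for ISA aliasing, mirroring
 * size = (size & 0xff) + ((size & ~0xffUL) << 2) from the hunk above:
 * keep the low 256 bytes, quadruple everything beyond them. */
unsigned long isa_alias_iosize(unsigned long size)
{
	return (size & 0xff) + ((size & ~0xffUL) << 2);
}
```

So a 0x180-byte request becomes 0x480: the 0x80 remainder is kept and the 0x100 chunk expands to 0x400.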
840 838 */ 841 839 if (bus->self->io_window_1k) 842 840 align = PCI_P2P_DEFAULT_IO_ALIGN_1K; ··· 849 847 } 850 848 851 849 /** 852 - * pbus_size_io() - size the io window of a given bus 850 + * pbus_size_io() - Size the I/O window of a given bus 853 851 * 854 - * @bus : the bus 855 - * @min_size : the minimum io window that must to be allocated 856 - * @add_size : additional optional io window 857 - * @realloc_head : track the additional io window on this list 852 + * @bus: The bus 853 + * @min_size: The minimum I/O window that must be allocated 854 + * @add_size: Additional optional I/O window 855 + * @realloc_head: Track the additional I/O window on this list 858 856 * 859 - * Sizing the IO windows of the PCI-PCI bridge is trivial, 860 - * since these windows have 1K or 4K granularity and the IO ranges 861 - * of non-bridge PCI devices are limited to 256 bytes. 862 - * We must be careful with the ISA aliasing though. 857 + * Sizing the I/O windows of the PCI-PCI bridge is trivial, since these 858 + * windows have 1K or 4K granularity and the I/O ranges of non-bridge PCI 859 + * devices are limited to 256 bytes. We must be careful with the ISA 860 + * aliasing though. 
863 861 */ 864 862 static void pbus_size_io(struct pci_bus *bus, resource_size_t min_size, 865 - resource_size_t add_size, struct list_head *realloc_head) 863 + resource_size_t add_size, 864 + struct list_head *realloc_head) 866 865 { 867 866 struct pci_dev *dev; 868 867 struct resource *b_res = find_free_bus_resource(bus, IORESOURCE_IO, ··· 921 918 if (size1 > size0 && realloc_head) { 922 919 add_to_list(realloc_head, bus->self, b_res, size1-size0, 923 920 min_align); 924 - pci_printk(KERN_DEBUG, bus->self, "bridge window %pR to %pR add_size %llx\n", 925 - b_res, &bus->busn_res, 926 - (unsigned long long)size1-size0); 921 + pci_info(bus->self, "bridge window %pR to %pR add_size %llx\n", 922 + b_res, &bus->busn_res, 923 + (unsigned long long) size1 - size0); 927 924 } 928 925 } 929 926 ··· 950 947 } 951 948 952 949 /** 953 - * pbus_size_mem() - size the memory window of a given bus 950 + * pbus_size_mem() - Size the memory window of a given bus 954 951 * 955 - * @bus : the bus 956 - * @mask: mask the resource flag, then compare it with type 957 - * @type: the type of free resource from bridge 958 - * @type2: second match type 959 - * @type3: third match type 960 - * @min_size : the minimum memory window that must to be allocated 961 - * @add_size : additional optional memory window 962 - * @realloc_head : track the additional memory window on this list 952 + * @bus: The bus 953 + * @mask: Mask the resource flag, then compare it with type 954 + * @type: The type of free resource from bridge 955 + * @type2: Second match type 956 + * @type3: Third match type 957 + * @min_size: The minimum memory window that must be allocated 958 + * @add_size: Additional optional memory window 959 + * @realloc_head: Track the additional memory window on this list 963 960 * 964 - * Calculate the size of the bus and minimal alignment which 965 - * guarantees that all child resources fit in this size. 
961 + * Calculate the size of the bus and minimal alignment which guarantees 962 + * that all child resources fit in this size. 966 963 * 967 - * Returns -ENOSPC if there's no available bus resource of the desired type. 968 - * Otherwise, sets the bus resource start/end to indicate the required 969 - * size, adds things to realloc_head (if supplied), and returns 0. 964 + * Return -ENOSPC if there's no available bus resource of the desired 965 + * type. Otherwise, set the bus resource start/end to indicate the 966 + * required size, add things to realloc_head (if supplied), and return 0. 970 967 */ 971 968 static int pbus_size_mem(struct pci_bus *bus, unsigned long mask, 972 969 unsigned long type, unsigned long type2, 973 - unsigned long type3, 974 - resource_size_t min_size, resource_size_t add_size, 970 + unsigned long type3, resource_size_t min_size, 971 + resource_size_t add_size, 975 972 struct list_head *realloc_head) 976 973 { 977 974 struct pci_dev *dev; 978 975 resource_size_t min_align, align, size, size0, size1; 979 - resource_size_t aligns[18]; /* Alignments from 1Mb to 128Gb */ 976 + resource_size_t aligns[18]; /* Alignments from 1MB to 128GB */ 980 977 int order, max_order; 981 978 struct resource *b_res = find_free_bus_resource(bus, 982 979 mask | IORESOURCE_PREFETCH, type); ··· 1005 1002 continue; 1006 1003 r_size = resource_size(r); 1007 1004 #ifdef CONFIG_PCI_IOV 1008 - /* put SRIOV requested res to the optional list */ 1005 + /* Put SRIOV requested res to the optional list */ 1009 1006 if (realloc_head && i >= PCI_IOV_RESOURCES && 1010 1007 i <= PCI_IOV_RESOURCE_END) { 1011 1008 add_align = max(pci_resource_alignment(dev, r), add_align); 1012 1009 r->end = r->start - 1; 1013 - add_to_list(realloc_head, dev, r, r_size, 0/* don't care */); 1010 + add_to_list(realloc_head, dev, r, r_size, 0 /* Don't care */); 1014 1011 children_add_size += r_size; 1015 1012 continue; 1016 1013 } ··· 1032 1029 continue; 1033 1030 } 1034 1031 size += max(r_size, 
align); 1035 - /* Exclude ranges with size > align from 1036 - calculation of the alignment. */ 1032 + /* 1033 + * Exclude ranges with size > align from calculation of 1034 + * the alignment. 1035 + */ 1037 1036 if (r_size <= align) 1038 1037 aligns[order] += align; 1039 1038 if (order > max_order) ··· 1068 1063 b_res->flags |= IORESOURCE_STARTALIGN; 1069 1064 if (size1 > size0 && realloc_head) { 1070 1065 add_to_list(realloc_head, bus->self, b_res, size1-size0, add_align); 1071 - pci_printk(KERN_DEBUG, bus->self, "bridge window %pR to %pR add_size %llx add_align %llx\n", 1066 + pci_info(bus->self, "bridge window %pR to %pR add_size %llx add_align %llx\n", 1072 1067 b_res, &bus->busn_res, 1073 1068 (unsigned long long) (size1 - size0), 1074 1069 (unsigned long long) add_align); ··· 1086 1081 } 1087 1082 1088 1083 static void pci_bus_size_cardbus(struct pci_bus *bus, 1089 - struct list_head *realloc_head) 1084 + struct list_head *realloc_head) 1090 1085 { 1091 1086 struct pci_dev *bridge = bus->self; 1092 1087 struct resource *b_res = &bridge->resource[PCI_BRIDGE_RESOURCES]; ··· 1096 1091 if (b_res[0].parent) 1097 1092 goto handle_b_res_1; 1098 1093 /* 1099 - * Reserve some resources for CardBus. We reserve 1100 - * a fixed amount of bus space for CardBus bridges. 1094 + * Reserve some resources for CardBus. We reserve a fixed amount 1095 + * of bus space for CardBus bridges. 
1101 1096 */ 1102 1097 b_res[0].start = pci_cardbus_io_size; 1103 1098 b_res[0].end = b_res[0].start + pci_cardbus_io_size - 1; ··· 1121 1116 } 1122 1117 1123 1118 handle_b_res_2: 1124 - /* MEM1 must not be pref mmio */ 1119 + /* MEM1 must not be pref MMIO */ 1125 1120 pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl); 1126 1121 if (ctrl & PCI_CB_BRIDGE_CTL_PREFETCH_MEM1) { 1127 1122 ctrl &= ~PCI_CB_BRIDGE_CTL_PREFETCH_MEM1; ··· 1129 1124 pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl); 1130 1125 } 1131 1126 1132 - /* 1133 - * Check whether prefetchable memory is supported 1134 - * by this bridge. 1135 - */ 1127 + /* Check whether prefetchable memory is supported by this bridge. */ 1136 1128 pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl); 1137 1129 if (!(ctrl & PCI_CB_BRIDGE_CTL_PREFETCH_MEM0)) { 1138 1130 ctrl |= PCI_CB_BRIDGE_CTL_PREFETCH_MEM0; ··· 1140 1138 if (b_res[2].parent) 1141 1139 goto handle_b_res_3; 1142 1140 /* 1143 - * If we have prefetchable memory support, allocate 1144 - * two regions. Otherwise, allocate one region of 1145 - * twice the size. 1141 + * If we have prefetchable memory support, allocate two regions. 1142 + * Otherwise, allocate one region of twice the size. 1146 1143 */ 1147 1144 if (ctrl & PCI_CB_BRIDGE_CTL_PREFETCH_MEM0) { 1148 1145 b_res[2].start = pci_cardbus_mem_size; ··· 1154 1153 pci_cardbus_mem_size, pci_cardbus_mem_size); 1155 1154 } 1156 1155 1157 - /* reduce that to half */ 1156 + /* Reduce that to half */ 1158 1157 b_res_3_size = pci_cardbus_mem_size; 1159 1158 } 1160 1159 ··· 1205 1204 1206 1205 switch (bus->self->hdr_type) { 1207 1206 case PCI_HEADER_TYPE_CARDBUS: 1208 - /* don't size cardbuses yet. */ 1207 + /* Don't size CardBuses yet */ 1209 1208 break; 1210 1209 1211 1210 case PCI_HEADER_TYPE_BRIDGE: ··· 1272 1271 1273 1272 /* 1274 1273 * Compute the size required to put everything else in the 1275 - * non-prefetchable window. This includes: 1274 + * non-prefetchable window. 
This includes: 1276 1275 * 1277 1276 * - all non-prefetchable resources 1278 1277 * - 32-bit prefetchable resources if there's a 64-bit 1279 1278 * prefetchable window or no prefetchable window at all 1280 - * - 64-bit prefetchable resources if there's no 1281 - * prefetchable window at all 1279 + * - 64-bit prefetchable resources if there's no prefetchable 1280 + * window at all 1282 1281 * 1283 - * Note that the strategy in __pci_assign_resource() must 1284 - * match that used here. Specifically, we cannot put a 1285 - * 32-bit prefetchable resource in a 64-bit prefetchable 1286 - * window. 1282 + * Note that the strategy in __pci_assign_resource() must match 1283 + * that used here. Specifically, we cannot put a 32-bit 1284 + * prefetchable resource in a 64-bit prefetchable window. 1287 1285 */ 1288 1286 pbus_size_mem(bus, mask, IORESOURCE_MEM, type2, type3, 1289 1287 realloc_head ? 0 : additional_mem_size, ··· 1315 1315 } 1316 1316 1317 1317 /* 1318 - * Try to assign any resources marked as IORESOURCE_PCI_FIXED, as they 1319 - * are skipped by pbus_assign_resources_sorted(). 1318 + * Try to assign any resources marked as IORESOURCE_PCI_FIXED, as they are 1319 + * skipped by pbus_assign_resources_sorted(). 1320 1320 */ 1321 1321 static void pdev_assign_fixed_resources(struct pci_dev *dev) 1322 1322 { ··· 1427 1427 struct pci_bus *child; 1428 1428 1429 1429 /* 1430 - * Carry out a depth-first search on the PCI bus 1431 - * tree to allocate bridge apertures. Read the 1432 - * programmed bridge bases and recursively claim 1433 - * the respective bridge resources. 1430 + * Carry out a depth-first search on the PCI bus tree to allocate 1431 + * bridge apertures. Read the programmed bridge bases and 1432 + * recursively claim the respective bridge resources. 
1434 1433 */ 1435 1434 if (b->self) { 1436 1435 pci_read_bridge_bases(b); ··· 1483 1484 IORESOURCE_MEM_64) 1484 1485 1485 1486 static void pci_bridge_release_resources(struct pci_bus *bus, 1486 - unsigned long type) 1487 + unsigned long type) 1487 1488 { 1488 1489 struct pci_dev *dev = bus->self; 1489 1490 struct resource *r; ··· 1494 1495 b_res = &dev->resource[PCI_BRIDGE_RESOURCES]; 1495 1496 1496 1497 /* 1497 - * 1. if there is io port assign fail, will release bridge 1498 - * io port. 1499 - * 2. if there is non pref mmio assign fail, release bridge 1500 - * nonpref mmio. 1501 - * 3. if there is 64bit pref mmio assign fail, and bridge pref 1502 - * is 64bit, release bridge pref mmio. 1503 - * 4. if there is pref mmio assign fail, and bridge pref is 1504 - * 32bit mmio, release bridge pref mmio 1505 - * 5. if there is pref mmio assign fail, and bridge pref is not 1506 - * assigned, release bridge nonpref mmio. 1498 + * 1. If IO port assignment fails, release bridge IO port. 1499 + * 2. If non pref MMIO assignment fails, release bridge nonpref MMIO. 1500 + * 3. If 64bit pref MMIO assignment fails, and bridge pref is 64bit, 1501 + * release bridge pref MMIO. 1502 + * 4. If pref MMIO assignment fails, and bridge pref is 32bit, 1503 + * release bridge pref MMIO. 1504 + * 5. If pref MMIO assignment fails, and bridge pref is not 1505 + * assigned, release bridge nonpref MMIO. 
1507 1506 */ 1508 1507 if (type & IORESOURCE_IO) 1509 1508 idx = 0; ··· 1521 1524 if (!r->parent) 1522 1525 return; 1523 1526 1524 - /* 1525 - * if there are children under that, we should release them 1526 - * all 1527 - */ 1527 + /* If there are children, release them all */ 1528 1528 release_child_resources(r); 1529 1529 if (!release_resource(r)) { 1530 1530 type = old_flags = r->flags & PCI_RES_TYPE_MASK; 1531 - pci_printk(KERN_DEBUG, dev, "resource %d %pR released\n", 1532 - PCI_BRIDGE_RESOURCES + idx, r); 1533 - /* keep the old size */ 1531 + pci_info(dev, "resource %d %pR released\n", 1532 + PCI_BRIDGE_RESOURCES + idx, r); 1533 + /* Keep the old size */ 1534 1534 r->end = resource_size(r) - 1; 1535 1535 r->start = 0; 1536 1536 r->flags = 0; 1537 1537 1538 - /* avoiding touch the one without PREF */ 1538 + /* Avoiding touch the one without PREF */ 1539 1539 if (type & IORESOURCE_PREFETCH) 1540 1540 type = IORESOURCE_PREFETCH; 1541 1541 __pci_setup_bridge(bus, type); 1542 - /* for next child res under same bridge */ 1542 + /* For next child res under same bridge */ 1543 1543 r->flags = old_flags; 1544 1544 } 1545 1545 } ··· 1545 1551 leaf_only, 1546 1552 whole_subtree, 1547 1553 }; 1554 + 1548 1555 /* 1549 - * try to release pci bridge resources that is from leaf bridge, 1550 - * so we can allocate big new one later 1556 + * Try to release PCI bridge resources from leaf bridge, so we can allocate 1557 + * a larger window later. 
1551 1558 */ 1552 1559 static void pci_bus_release_bridge_resources(struct pci_bus *bus, 1553 1560 unsigned long type, ··· 1591 1596 if (!res || !res->end || !res->flags) 1592 1597 continue; 1593 1598 1594 - dev_printk(KERN_DEBUG, &bus->dev, "resource %d %pR\n", i, res); 1599 + dev_info(&bus->dev, "resource %d %pR\n", i, res); 1595 1600 } 1596 1601 } 1597 1602 ··· 1673 1678 pcibios_resource_to_bus(dev->bus, &region, r); 1674 1679 if (!region.start) { 1675 1680 *unassigned = true; 1676 - return 1; /* return early from pci_walk_bus() */ 1681 + return 1; /* Return early from pci_walk_bus() */ 1677 1682 } 1678 1683 } 1679 1684 ··· 1681 1686 } 1682 1687 1683 1688 static enum enable_type pci_realloc_detect(struct pci_bus *bus, 1684 - enum enable_type enable_local) 1689 + enum enable_type enable_local) 1685 1690 { 1686 1691 bool unassigned = false; 1687 1692 ··· 1696 1701 } 1697 1702 #else 1698 1703 static enum enable_type pci_realloc_detect(struct pci_bus *bus, 1699 - enum enable_type enable_local) 1704 + enum enable_type enable_local) 1700 1705 { 1701 1706 return enable_local; 1702 1707 } 1703 1708 #endif 1704 1709 1705 1710 /* 1706 - * first try will not touch pci bridge res 1707 - * second and later try will clear small leaf bridge res 1708 - * will stop till to the max depth if can not find good one 1711 + * First try will not touch PCI bridge res. 1712 + * Second and later try will clear small leaf bridge res. 1713 + * Will stop till to the max depth if can not find good one. 
1709 1714 */ 1710 1715 void pci_assign_unassigned_root_bus_resources(struct pci_bus *bus) 1711 1716 { 1712 - LIST_HEAD(realloc_head); /* list of resources that 1713 - want additional resources */ 1717 + LIST_HEAD(realloc_head); 1718 + /* List of resources that want additional resources */ 1714 1719 struct list_head *add_list = NULL; 1715 1720 int tried_times = 0; 1716 1721 enum release_type rel_type = leaf_only; ··· 1719 1724 int pci_try_num = 1; 1720 1725 enum enable_type enable_local; 1721 1726 1722 - /* don't realloc if asked to do so */ 1727 + /* Don't realloc if asked to do so */ 1723 1728 enable_local = pci_realloc_detect(bus, pci_realloc_enable); 1724 1729 if (pci_realloc_enabled(enable_local)) { 1725 1730 int max_depth = pci_bus_get_depth(bus); 1726 1731 1727 1732 pci_try_num = max_depth + 1; 1728 - dev_printk(KERN_DEBUG, &bus->dev, 1729 - "max bus depth: %d pci_try_num: %d\n", 1730 - max_depth, pci_try_num); 1733 + dev_info(&bus->dev, "max bus depth: %d pci_try_num: %d\n", 1734 + max_depth, pci_try_num); 1731 1735 } 1732 1736 1733 1737 again: 1734 1738 /* 1735 - * last try will use add_list, otherwise will try good to have as 1736 - * must have, so can realloc parent bridge resource 1739 + * Last try will use add_list, otherwise will try good to have as must 1740 + * have, so can realloc parent bridge resource 1737 1741 */ 1738 1742 if (tried_times + 1 == pci_try_num) 1739 1743 add_list = &realloc_head; 1740 - /* Depth first, calculate sizes and alignments of all 1741 - subordinate buses. */ 1744 + /* 1745 + * Depth first, calculate sizes and alignments of all subordinate buses. 1746 + */ 1742 1747 __pci_bus_size_bridges(bus, add_list); 1743 1748 1744 1749 /* Depth last, allocate resources and update the hardware. */ ··· 1747 1752 BUG_ON(!list_empty(add_list)); 1748 1753 tried_times++; 1749 1754 1750 - /* any device complain? */ 1755 + /* Any device complain? 
*/ 1751 1756 if (list_empty(&fail_head)) 1752 1757 goto dump; 1753 1758 ··· 1761 1766 goto dump; 1762 1767 } 1763 1768 1764 - dev_printk(KERN_DEBUG, &bus->dev, 1765 - "No. %d try to assign unassigned res\n", tried_times + 1); 1769 + dev_info(&bus->dev, "No. %d try to assign unassigned res\n", 1770 + tried_times + 1); 1766 1771 1767 - /* third times and later will not check if it is leaf */ 1772 + /* Third times and later will not check if it is leaf */ 1768 1773 if ((tried_times + 1) > 2) 1769 1774 rel_type = whole_subtree; 1770 1775 1771 1776 /* 1772 1777 * Try to release leaf bridge's resources that doesn't fit resource of 1773 - * child device under that bridge 1778 + * child device under that bridge. 1774 1779 */ 1775 1780 list_for_each_entry(fail_res, &fail_head, list) 1776 1781 pci_bus_release_bridge_resources(fail_res->dev->bus, 1777 1782 fail_res->flags & PCI_RES_TYPE_MASK, 1778 1783 rel_type); 1779 1784 1780 - /* restore size and flags */ 1785 + /* Restore size and flags */ 1781 1786 list_for_each_entry(fail_res, &fail_head, list) { 1782 1787 struct resource *res = fail_res->res; 1783 1788 ··· 1792 1797 goto again; 1793 1798 1794 1799 dump: 1795 - /* dump the resource on buses */ 1800 + /* Dump the resource on buses */ 1796 1801 pci_bus_dump_resources(bus); 1797 1802 } 1798 1803 ··· 1803 1808 list_for_each_entry(root_bus, &pci_root_buses, node) { 1804 1809 pci_assign_unassigned_root_bus_resources(root_bus); 1805 1810 1806 - /* Make sure the root bridge has a companion ACPI device: */ 1811 + /* Make sure the root bridge has a companion ACPI device */ 1807 1812 if (ACPI_HANDLE(root_bus->bridge)) 1808 1813 acpi_ioapic_add(ACPI_HANDLE(root_bus->bridge)); 1809 1814 } 1810 1815 } 1811 1816 1812 1817 static void extend_bridge_window(struct pci_dev *bridge, struct resource *res, 1813 - struct list_head *add_list, resource_size_t available) 1818 + struct list_head *add_list, 1819 + resource_size_t available) 1814 1820 { 1815 1821 struct pci_dev_resource *dev_res; 
1816 1822 ··· 1835 1839 } 1836 1840 1837 1841 static void pci_bus_distribute_available_resources(struct pci_bus *bus, 1838 - struct list_head *add_list, resource_size_t available_io, 1839 - resource_size_t available_mmio, resource_size_t available_mmio_pref) 1842 + struct list_head *add_list, 1843 + resource_size_t available_io, 1844 + resource_size_t available_mmio, 1845 + resource_size_t available_mmio_pref) 1840 1846 { 1841 1847 resource_size_t remaining_io, remaining_mmio, remaining_mmio_pref; 1842 1848 unsigned int normal_bridges = 0, hotplug_bridges = 0; ··· 1862 1864 1863 1865 /* 1864 1866 * Calculate the total amount of extra resource space we can 1865 - * pass to bridges below this one. This is basically the 1867 + * pass to bridges below this one. This is basically the 1866 1868 * extra space reduced by the minimal required space for the 1867 1869 * non-hotplug bridges. 1868 1870 */ ··· 1872 1874 1873 1875 /* 1874 1876 * Calculate how many hotplug bridges and normal bridges there 1875 - * are on this bus. We will distribute the additional available 1877 + * are on this bus. We will distribute the additional available 1876 1878 * resources between hotplug bridges. 1877 1879 */ 1878 1880 for_each_pci_bridge(dev, bus) { ··· 1907 1909 1908 1910 /* 1909 1911 * There is only one bridge on the bus so it gets all available 1910 - * resources which it can then distribute to the possible 1911 - * hotplug bridges below. 1912 + * resources which it can then distribute to the possible hotplug 1913 + * bridges below. 
1912 1914 */ 1913 1915 if (hotplug_bridges + normal_bridges == 1) { 1914 1916 dev = list_first_entry(&bus->devices, struct pci_dev, bus_list); ··· 1959 1961 } 1960 1962 } 1961 1963 1962 - static void 1963 - pci_bridge_distribute_available_resources(struct pci_dev *bridge, 1964 - struct list_head *add_list) 1964 + static void pci_bridge_distribute_available_resources(struct pci_dev *bridge, 1965 + struct list_head *add_list) 1965 1966 { 1966 1967 resource_size_t available_io, available_mmio, available_mmio_pref; 1967 1968 const struct resource *res; ··· 1977 1980 available_mmio_pref = resource_size(res); 1978 1981 1979 1982 pci_bus_distribute_available_resources(bridge->subordinate, 1980 - add_list, available_io, available_mmio, available_mmio_pref); 1983 + add_list, available_io, 1984 + available_mmio, 1985 + available_mmio_pref); 1981 1986 } 1982 1987 1983 1988 void pci_assign_unassigned_bridge_resources(struct pci_dev *bridge) 1984 1989 { 1985 1990 struct pci_bus *parent = bridge->subordinate; 1986 - LIST_HEAD(add_list); /* list of resources that 1987 - want additional resources */ 1991 + /* List of resources that want additional resources */ 1992 + LIST_HEAD(add_list); 1993 + 1988 1994 int tried_times = 0; 1989 1995 LIST_HEAD(fail_head); 1990 1996 struct pci_dev_resource *fail_res; ··· 1997 1997 __pci_bus_size_bridges(parent, &add_list); 1998 1998 1999 1999 /* 2000 - * Distribute remaining resources (if any) equally between 2001 - * hotplug bridges below. This makes it possible to extend the 2002 - * hierarchy later without running out of resources. 2000 + * Distribute remaining resources (if any) equally between hotplug 2001 + * bridges below. This makes it possible to extend the hierarchy 2002 + * later without running out of resources. 
2003 2003 */ 2004 2004 pci_bridge_distribute_available_resources(bridge, &add_list); 2005 2005 ··· 2011 2011 goto enable_all; 2012 2012 2013 2013 if (tried_times >= 2) { 2014 - /* still fail, don't need to try more */ 2014 + /* Still fail, don't need to try more */ 2015 2015 free_list(&fail_head); 2016 2016 goto enable_all; 2017 2017 } ··· 2020 2020 tried_times + 1); 2021 2021 2022 2022 /* 2023 - * Try to release leaf bridge's resources that doesn't fit resource of 2024 - * child device under that bridge 2023 + * Try to release leaf bridge's resources that aren't big enough 2024 + * to contain child device resources. 2025 2025 */ 2026 2026 list_for_each_entry(fail_res, &fail_head, list) 2027 2027 pci_bus_release_bridge_resources(fail_res->dev->bus, 2028 2028 fail_res->flags & PCI_RES_TYPE_MASK, 2029 2029 whole_subtree); 2030 2030 2031 - /* restore size and flags */ 2031 + /* Restore size and flags */ 2032 2032 list_for_each_entry(fail_res, &fail_head, list) { 2033 2033 struct resource *res = fail_res->res; 2034 2034 ··· 2107 2107 } 2108 2108 2109 2109 list_for_each_entry(dev_res, &saved, list) { 2110 - /* Skip the bridge we just assigned resources for. */ 2110 + /* Skip the bridge we just assigned resources for */ 2111 2111 if (bridge == dev_res->dev) 2112 2112 continue; 2113 2113 ··· 2119 2119 return 0; 2120 2120 2121 2121 cleanup: 2122 - /* restore size and flags */ 2122 + /* Restore size and flags */ 2123 2123 list_for_each_entry(dev_res, &failed, list) { 2124 2124 struct resource *res = dev_res->res; 2125 2125 ··· 2151 2151 void pci_assign_unassigned_bus_resources(struct pci_bus *bus) 2152 2152 { 2153 2153 struct pci_dev *dev; 2154 - LIST_HEAD(add_list); /* list of resources that 2155 - want additional resources */ 2154 + /* List of resources that want additional resources */ 2155 + LIST_HEAD(add_list); 2156 2156 2157 2157 down_read(&pci_bus_sem); 2158 2158 for_each_pci_bridge(dev, bus)
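The distribution pass above reserves each non-hotplug bridge's minimum and splits what is left evenly across the hotplug bridges. A loose arithmetic sketch of just that split, under the simplifying assumption of a single resource type (the real pci_bus_distribute_available_resources() does this per I/O, MMIO, and prefetchable MMIO window and also rounds to window alignment):

```c
#include <stdint.h>

/* Extra space each hotplug bridge receives: total available space,
 * minus the minimum reserved for non-hotplug bridges, divided evenly.
 * Parameter names are illustrative, not the kernel's. */
uint64_t per_hotplug_share(uint64_t available, uint64_t reserved_min,
			   unsigned int hotplug_bridges)
{
	uint64_t remaining = available > reserved_min ?
			     available - reserved_min : 0;

	return hotplug_bridges ? remaining / hotplug_bridges : 0;
}
```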
+1 -1
drivers/pci/slot.c
··· 403 403 pci_slots_kset = kset_create_and_add("slots", NULL, 404 404 &pci_bus_kset->kobj); 405 405 if (!pci_slots_kset) { 406 - printk(KERN_ERR "PCI: Slot initialization failure\n"); 406 + pr_err("PCI: Slot initialization failure\n"); 407 407 return -ENOMEM; 408 408 } 409 409 return 0;
+29 -13
drivers/pci/switch/switchtec.c
···
 static int ioctl_event_summary(struct switchtec_dev *stdev,
 			       struct switchtec_user *stuser,
-			       struct switchtec_ioctl_event_summary __user *usum)
+			       struct switchtec_ioctl_event_summary __user *usum,
+			       size_t size)
 {
-	struct switchtec_ioctl_event_summary s = {0};
+	struct switchtec_ioctl_event_summary *s;
 	int i;
 	u32 reg;
+	int ret = 0;
 
-	s.global = ioread32(&stdev->mmio_sw_event->global_summary);
-	s.part_bitmap = ioread32(&stdev->mmio_sw_event->part_event_bitmap);
-	s.local_part = ioread32(&stdev->mmio_part_cfg->part_event_summary);
+	s = kzalloc(sizeof(*s), GFP_KERNEL);
+	if (!s)
+		return -ENOMEM;
+
+	s->global = ioread32(&stdev->mmio_sw_event->global_summary);
+	s->part_bitmap = ioread32(&stdev->mmio_sw_event->part_event_bitmap);
+	s->local_part = ioread32(&stdev->mmio_part_cfg->part_event_summary);
 
 	for (i = 0; i < stdev->partition_count; i++) {
 		reg = ioread32(&stdev->mmio_part_cfg_all[i].part_event_summary);
-		s.part[i] = reg;
+		s->part[i] = reg;
 	}
 
 	for (i = 0; i < SWITCHTEC_MAX_PFF_CSR; i++) {
···
 			break;
 
 		reg = ioread32(&stdev->mmio_pff_csr[i].pff_event_summary);
-		s.pff[i] = reg;
+		s->pff[i] = reg;
 	}
 
-	if (copy_to_user(usum, &s, sizeof(s)))
-		return -EFAULT;
+	if (copy_to_user(usum, s, size)) {
+		ret = -EFAULT;
+		goto error_case;
+	}
 
 	stuser->event_cnt = atomic_read(&stdev->event_cnt);
 
-	return 0;
+error_case:
+	kfree(s);
+	return ret;
 }
 
 static u32 __iomem *global_ev_reg(struct switchtec_dev *stdev,
···
 	case SWITCHTEC_IOCTL_FLASH_PART_INFO:
 		rc = ioctl_flash_part_info(stdev, argp);
 		break;
-	case SWITCHTEC_IOCTL_EVENT_SUMMARY:
-		rc = ioctl_event_summary(stdev, stuser, argp);
+	case SWITCHTEC_IOCTL_EVENT_SUMMARY_LEGACY:
+		rc = ioctl_event_summary(stdev, stuser, argp,
+					 sizeof(struct switchtec_ioctl_event_summary_legacy));
 		break;
 	case SWITCHTEC_IOCTL_EVENT_CTL:
 		rc = ioctl_event_ctl(stdev, argp);
···
 		break;
 	case SWITCHTEC_IOCTL_PORT_TO_PFF:
 		rc = ioctl_port_to_pff(stdev, argp);
+		break;
+	case SWITCHTEC_IOCTL_EVENT_SUMMARY:
+		rc = ioctl_event_summary(stdev, stuser, argp,
+					 sizeof(struct switchtec_ioctl_event_summary));
 		break;
 	default:
 		rc = -ENOTTY;
···
 	if (!(hdr & SWITCHTEC_EVENT_OCCURRED && hdr & SWITCHTEC_EVENT_EN_IRQ))
 		return 0;
 
-	if (eid == SWITCHTEC_IOCTL_EVENT_LINK_STATE)
+	if (eid == SWITCHTEC_IOCTL_EVENT_LINK_STATE ||
+	    eid == SWITCHTEC_IOCTL_EVENT_MRPC_COMP)
 		return 0;
 
 	dev_dbg(&stdev->dev, "%s: %d %d %x\n", __func__, eid, idx, hdr);
drivers/pci/xen-pcifront.c (+4 -5)
···
 			vector[i] = op.msix_entries[i].vector;
 		}
 	} else {
-		printk(KERN_DEBUG "enable msix get value %x\n",
-		       op.value);
+		pr_info("enable msix get value %x\n", op.value);
 		err = op.value;
 	}
 } else {
···
 	err = do_pci_op(pdev, &op);
 	if (err == XEN_PCI_ERR_dev_not_found) {
 		/* XXX No response from backend, what shall we do? */
-		printk(KERN_DEBUG "get no response from backend for disable MSI\n");
+		pr_info("get no response from backend for disable MSI\n");
 		return;
 	}
 	if (err)
 		/* how can pciback notify us fail? */
-		printk(KERN_DEBUG "get fake response frombackend\n");
+		pr_info("get fake response from backend\n");
 }
 
 static struct xen_pci_frontend_ops pci_frontend_ops = {
···
 	case XenbusStateClosed:
 		if (xdev->state == XenbusStateClosed)
 			break;
-		/* Missed the backend's CLOSING state -- fallthrough */
+		/* fall through - Missed the backend's CLOSING state. */
 	case XenbusStateClosing:
 		dev_warn(&xdev->dev, "backend going away!\n");
 		pcifront_try_disconnect(pdev);
drivers/platform/chrome/chromeos_laptop.c (+1 -1)
···
 		return false;
 
 	pdev = to_pci_dev(dev);
-	return devid == PCI_DEVID(pdev->bus->number, pdev->devfn);
+	return devid == pci_dev_id(pdev);
}
 
 static void chromeos_laptop_check_adapter(struct i2c_adapter *adapter)
include/linux/acpi.h (+2 -1)
···
 #define OSC_PCI_CLOCK_PM_SUPPORT		0x00000004
 #define OSC_PCI_SEGMENT_GROUPS_SUPPORT		0x00000008
 #define OSC_PCI_MSI_SUPPORT			0x00000010
-#define OSC_PCI_SUPPORT_MASKS			0x0000001f
+#define OSC_PCI_HPX_TYPE_3_SUPPORT		0x00000100
+#define OSC_PCI_SUPPORT_MASKS			0x0000011f
 
 /* PCI Host Bridge _OSC: Capabilities DWORD 3: Control Field */
 #define OSC_PCI_EXPRESS_NATIVE_HP_CONTROL	0x00000001
include/linux/cper.h (+167 -169)
···
  */
 #define CPER_REC_LEN					256
 /*
- * Severity difinition for error_severity in struct cper_record_header
+ * Severity definition for error_severity in struct cper_record_header
  * and section_severity in struct cper_section_descriptor
  */
 enum {
···
 };
 
 /*
- * Validation bits difinition for validation_bits in struct
+ * Validation bits definition for validation_bits in struct
  * cper_record_header. If set, corresponding fields in struct
  * cper_record_header contain valid information.
- *
- * corresponds platform_id
  */
 #define CPER_VALID_PLATFORM_ID			0x0001
-/* corresponds timestamp */
 #define CPER_VALID_TIMESTAMP			0x0002
-/* corresponds partition_id */
 #define CPER_VALID_PARTITION_ID			0x0004
 
 /*
  * Notification type used to generate error record, used in
- * notification_type in struct cper_record_header
- *
- * Corrected Machine Check
+ * notification_type in struct cper_record_header. These UUIDs are defined
+ * in the UEFI spec v2.7, sec N.2.1.
  */
+
+/* Corrected Machine Check */
 #define CPER_NOTIFY_CMC							\
 	GUID_INIT(0x2DCE8BB1, 0xBDD7, 0x450e, 0xB9, 0xAD, 0x9C, 0xF4,	\
 		  0xEB, 0xD4, 0xF8, 0x90)
···
 #define CPER_SEC_REV				0x0100
 
 /*
- * Validation bits difinition for validation_bits in struct
+ * Validation bits definition for validation_bits in struct
  * cper_section_descriptor. If set, corresponding fields in struct
  * cper_section_descriptor contain valid information.
- *
- * corresponds fru_id
  */
 #define CPER_SEC_VALID_FRU_ID			0x1
-/* corresponds fru_text */
 #define CPER_SEC_VALID_FRU_TEXT			0x2
 
 /*
···
 
 /*
  * Section type definitions, used in section_type field in struct
- * cper_section_descriptor
- *
- * Processor Generic
+ * cper_section_descriptor. These UUIDs are defined in the UEFI spec
+ * v2.7, sec N.2.2.
  */
+
+/* Processor Generic */
 #define CPER_SEC_PROC_GENERIC						\
 	GUID_INIT(0x9876CCAD, 0x47B4, 0x4bdb, 0xB6, 0x5E, 0x16, 0xF1,	\
 		  0x93, 0xC4, 0xF3, 0xDB)
···
  */
 #pragma pack(1)
 
+/* Record Header, UEFI v2.7 sec N.2.1 */
 struct cper_record_header {
 	char	signature[CPER_SIG_SIZE];	/* must be CPER_SIG_RECORD */
-	__u16	revision;			/* must be CPER_RECORD_REV */
-	__u32	signature_end;			/* must be CPER_SIG_END */
-	__u16	section_count;
-	__u32	error_severity;
-	__u32	validation_bits;
-	__u32	record_length;
-	__u64	timestamp;
+	u16	revision;			/* must be CPER_RECORD_REV */
+	u32	signature_end;			/* must be CPER_SIG_END */
+	u16	section_count;
+	u32	error_severity;
+	u32	validation_bits;
+	u32	record_length;
+	u64	timestamp;
 	guid_t	platform_id;
 	guid_t	partition_id;
 	guid_t	creator_id;
 	guid_t	notification_type;
-	__u64	record_id;
-	__u32	flags;
-	__u64	persistence_information;
-	__u8	reserved[12];			/* must be zero */
+	u64	record_id;
+	u32	flags;
+	u64	persistence_information;
+	u8	reserved[12];			/* must be zero */
 };
 
+/* Section Descriptor, UEFI v2.7 sec N.2.2 */
 struct cper_section_descriptor {
-	__u32	section_offset;		/* Offset in bytes of the
+	u32	section_offset;		/* Offset in bytes of the
 					 * section body from the base
 					 * of the record header */
-	__u32	section_length;
-	__u16	revision;		/* must be CPER_RECORD_REV */
-	__u8	validation_bits;
-	__u8	reserved;		/* must be zero */
-	__u32	flags;
+	u32	section_length;
+	u16	revision;		/* must be CPER_RECORD_REV */
+	u8	validation_bits;
+	u8	reserved;		/* must be zero */
+	u32	flags;
 	guid_t	section_type;
 	guid_t	fru_id;
-	__u32	section_severity;
-	__u8	fru_text[20];
+	u32	section_severity;
+	u8	fru_text[20];
 };
 
-/* Generic Processor Error Section */
+/* Generic Processor Error Section, UEFI v2.7 sec N.2.4.1 */
 struct cper_sec_proc_generic {
-	__u64	validation_bits;
-	__u8	proc_type;
-	__u8	proc_isa;
-	__u8	proc_error_type;
-	__u8	operation;
-	__u8	flags;
-	__u8	level;
-	__u16	reserved;
-	__u64	cpu_version;
+	u64	validation_bits;
+	u8	proc_type;
+	u8	proc_isa;
+	u8	proc_error_type;
+	u8	operation;
+	u8	flags;
+	u8	level;
+	u16	reserved;
+	u64	cpu_version;
 	char	cpu_brand[128];
-	__u64	proc_id;
-	__u64	target_addr;
-	__u64	requestor_id;
-	__u64	responder_id;
-	__u64	ip;
+	u64	proc_id;
+	u64	target_addr;
+	u64	requestor_id;
+	u64	responder_id;
+	u64	ip;
 };
 
-/* IA32/X64 Processor Error Section */
+/* IA32/X64 Processor Error Section, UEFI v2.7 sec N.2.4.2 */
 struct cper_sec_proc_ia {
-	__u64	validation_bits;
-	__u64	lapic_id;
-	__u8	cpuid[48];
+	u64	validation_bits;
+	u64	lapic_id;
+	u8	cpuid[48];
 };
 
-/* IA32/X64 Processor Error Information Structure */
+/* IA32/X64 Processor Error Information Structure, UEFI v2.7 sec N.2.4.2.1 */
 struct cper_ia_err_info {
 	guid_t	err_type;
-	__u64	validation_bits;
-	__u64	check_info;
-	__u64	target_id;
-	__u64	requestor_id;
-	__u64	responder_id;
-	__u64	ip;
+	u64	validation_bits;
+	u64	check_info;
+	u64	target_id;
+	u64	requestor_id;
+	u64	responder_id;
+	u64	ip;
 };
 
-/* IA32/X64 Processor Context Information Structure */
+/* IA32/X64 Processor Context Information Structure, UEFI v2.7 sec N.2.4.2.2 */
 struct cper_ia_proc_ctx {
-	__u16	reg_ctx_type;
-	__u16	reg_arr_size;
-	__u32	msr_addr;
-	__u64	mm_reg_addr;
+	u16	reg_ctx_type;
+	u16	reg_arr_size;
+	u32	msr_addr;
+	u64	mm_reg_addr;
 };
 
-/* ARM Processor Error Section */
+/* ARM Processor Error Section, UEFI v2.7 sec N.2.4.4 */
 struct cper_sec_proc_arm {
-	__u32	validation_bits;
-	__u16	err_info_num;		/* Number of Processor Error Info */
-	__u16	context_info_num;	/* Number of Processor Context Info Records*/
-	__u32	section_length;
-	__u8	affinity_level;
-	__u8	reserved[3];		/* must be zero */
-	__u64	mpidr;
-	__u64	midr;
-	__u32	running_state;		/* Bit 0 set - Processor running. PSCI = 0 */
-	__u32	psci_state;
+	u32	validation_bits;
+	u16	err_info_num;		/* Number of Processor Error Info */
+	u16	context_info_num;	/* Number of Processor Context Info Records*/
+	u32	section_length;
+	u8	affinity_level;
+	u8	reserved[3];		/* must be zero */
+	u64	mpidr;
+	u64	midr;
+	u32	running_state;		/* Bit 0 set - Processor running. PSCI = 0 */
+	u32	psci_state;
 };
 
-/* ARM Processor Error Information Structure */
+/* ARM Processor Error Information Structure, UEFI v2.7 sec N.2.4.4.1 */
 struct cper_arm_err_info {
-	__u8	version;
-	__u8	length;
-	__u16	validation_bits;
-	__u8	type;
-	__u16	multiple_error;
-	__u8	flags;
-	__u64	error_info;
-	__u64	virt_fault_addr;
-	__u64	physical_fault_addr;
+	u8	version;
+	u8	length;
+	u16	validation_bits;
+	u8	type;
+	u16	multiple_error;
+	u8	flags;
+	u64	error_info;
+	u64	virt_fault_addr;
+	u64	physical_fault_addr;
 };
 
-/* ARM Processor Context Information Structure */
+/* ARM Processor Context Information Structure, UEFI v2.7 sec N.2.4.4.2 */
 struct cper_arm_ctx_info {
-	__u16	version;
-	__u16	type;
-	__u32	size;
+	u16	version;
+	u16	type;
+	u32	size;
 };
 
-/* Old Memory Error Section UEFI 2.1, 2.2 */
+/* Old Memory Error Section, UEFI v2.1, v2.2 */
 struct cper_sec_mem_err_old {
-	__u64	validation_bits;
-	__u64	error_status;
-	__u64	physical_addr;
-	__u64	physical_addr_mask;
-	__u16	node;
-	__u16	card;
-	__u16	module;
-	__u16	bank;
-	__u16	device;
-	__u16	row;
-	__u16	column;
-	__u16	bit_pos;
-	__u64	requestor_id;
-	__u64	responder_id;
-	__u64	target_id;
-	__u8	error_type;
+	u64	validation_bits;
+	u64	error_status;
+	u64	physical_addr;
+	u64	physical_addr_mask;
+	u16	node;
+	u16	card;
+	u16	module;
+	u16	bank;
+	u16	device;
+	u16	row;
+	u16	column;
+	u16	bit_pos;
+	u64	requestor_id;
+	u64	responder_id;
+	u64	target_id;
+	u8	error_type;
 };
 
-/* Memory Error Section UEFI >= 2.3 */
+/* Memory Error Section (UEFI >= v2.3), UEFI v2.7 sec N.2.5 */
 struct cper_sec_mem_err {
-	__u64	validation_bits;
-	__u64	error_status;
-	__u64	physical_addr;
-	__u64	physical_addr_mask;
-	__u16	node;
-	__u16	card;
-	__u16	module;
-	__u16	bank;
-	__u16	device;
-	__u16	row;
-	__u16	column;
-	__u16	bit_pos;
-	__u64	requestor_id;
-	__u64	responder_id;
-	__u64	target_id;
-	__u8	error_type;
-	__u8	reserved;
-	__u16	rank;
-	__u16	mem_array_handle;	/* card handle in UEFI 2.4 */
-	__u16	mem_dev_handle;		/* module handle in UEFI 2.4 */
+	u64	validation_bits;
+	u64	error_status;
+	u64	physical_addr;
+	u64	physical_addr_mask;
+	u16	node;
+	u16	card;
+	u16	module;
+	u16	bank;
+	u16	device;
+	u16	row;
+	u16	column;
+	u16	bit_pos;
+	u64	requestor_id;
+	u64	responder_id;
+	u64	target_id;
+	u8	error_type;
+	u8	reserved;
+	u16	rank;
+	u16	mem_array_handle;	/* "card handle" in UEFI 2.4 */
+	u16	mem_dev_handle;		/* "module handle" in UEFI 2.4 */
 };
 
 struct cper_mem_err_compact {
-	__u64	validation_bits;
-	__u16	node;
-	__u16	card;
-	__u16	module;
-	__u16	bank;
-	__u16	device;
-	__u16	row;
-	__u16	column;
-	__u16	bit_pos;
-	__u64	requestor_id;
-	__u64	responder_id;
-	__u64	target_id;
-	__u16	rank;
-	__u16	mem_array_handle;
-	__u16	mem_dev_handle;
+	u64	validation_bits;
+	u16	node;
+	u16	card;
+	u16	module;
+	u16	bank;
+	u16	device;
+	u16	row;
+	u16	column;
+	u16	bit_pos;
+	u64	requestor_id;
+	u64	responder_id;
+	u64	target_id;
+	u16	rank;
+	u16	mem_array_handle;
+	u16	mem_dev_handle;
 };
 
+/* PCI Express Error Section, UEFI v2.7 sec N.2.7 */
 struct cper_sec_pcie {
-	__u64		validation_bits;
-	__u32		port_type;
+	u64		validation_bits;
+	u32		port_type;
 	struct {
-		__u8	minor;
-		__u8	major;
-		__u8	reserved[2];
+		u8	minor;
+		u8	major;
+		u8	reserved[2];
 	}		version;
-	__u16		command;
-	__u16		status;
-	__u32		reserved;
+	u16		command;
+	u16		status;
+	u32		reserved;
 	struct {
-		__u16	vendor_id;
-		__u16	device_id;
-		__u8	class_code[3];
-		__u8	function;
-		__u8	device;
-		__u16	segment;
-		__u8	bus;
-		__u8	secondary_bus;
-		__u16	slot;
-		__u8	reserved;
+		u16	vendor_id;
+		u16	device_id;
+		u8	class_code[3];
+		u8	function;
+		u8	device;
+		u16	segment;
+		u8	bus;
+		u8	secondary_bus;
+		u16	slot;
+		u8	reserved;
 	}		device_id;
 	struct {
-		__u32	lower;
-		__u32	upper;
+		u32	lower;
+		u32	upper;
 	}		serial_number;
 	struct {
-		__u16	secondary_status;
-		__u16	control;
+		u16	secondary_status;
+		u16	control;
 	}		bridge;
-	__u8	capability[60];
-	__u8	aer_info[96];
+	u8	capability[60];
+	u8	aer_info[96];
 };
 
 /* Reset to default packing */
 #pragma pack()
 
-extern const char * const cper_proc_error_type_strs[4];
+extern const char *const cper_proc_error_type_strs[4];
 
 u64 cper_next_record_id(void);
 const char *cper_severity_str(unsigned int);
include/linux/msi.h (-18)
···
 void pci_msi_mask_irq(struct irq_data *data);
 void pci_msi_unmask_irq(struct irq_data *data);
 
-/* Conversion helpers. Should be removed after merging */
-static inline void __write_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
-{
-	__pci_write_msi_msg(entry, msg);
-}
-static inline void write_msi_msg(int irq, struct msi_msg *msg)
-{
-	pci_write_msi_msg(irq, msg);
-}
-static inline void mask_msi_irq(struct irq_data *data)
-{
-	pci_msi_mask_irq(data);
-}
-static inline void unmask_msi_irq(struct irq_data *data)
-{
-	pci_msi_unmask_irq(data);
-}
-
 /*
  * The arch hooks to setup up msi irqs. Those functions are
  * implemented as weak symbols so that they /can/ be overriden by
include/linux/pci-ecam.h (+1)
···
 extern struct pci_ecam_ops pci_thunder_ecam_ops; /* Cavium ThunderX 1.x */
 extern struct pci_ecam_ops xgene_v1_pcie_ecam_ops; /* APM X-Gene PCIe v1 */
 extern struct pci_ecam_ops xgene_v2_pcie_ecam_ops; /* APM X-Gene PCIe v2.x */
+extern struct pci_ecam_ops al_pcie_ops;	/* Amazon Annapurna Labs PCIe */
 #endif
 
 #ifdef CONFIG_PCI_HOST_COMMON
include/linux/pci-epc.h (+2)
···
  * @reserved_bar: bitmap to indicate reserved BAR unavailable to function driver
  * @bar_fixed_64bit: bitmap to indicate fixed 64bit BARs
  * @bar_fixed_size: Array specifying the size supported by each BAR
+ * @align: alignment size required for BAR buffer allocation
  */
 struct pci_epc_features {
 	unsigned int	linkup_notifier : 1;
···
 	u8	reserved_bar;
 	u8	bar_fixed_64bit;
 	u64	bar_fixed_size[BAR_5 + 1];
+	size_t	align;
 };
 
 #define to_pci_epc(device) container_of((device), struct pci_epc, dev)
include/linux/pci-epf.h (+2 -1)
···
 int __pci_epf_register_driver(struct pci_epf_driver *driver,
 			      struct module *owner);
 void pci_epf_unregister_driver(struct pci_epf_driver *driver);
-void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar);
+void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
+			  size_t align);
 void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar);
 int pci_epf_bind(struct pci_epf *epf);
 int pci_epf_unbind(struct pci_epf *epf);
include/linux/pci.h (+8 -1)
···
 	unsigned int	hotplug_user_indicators:1;	/* SlotCtl indicators
 							   controlled exclusively by
 							   user sysfs */
+	unsigned int	clear_retrain_link:1;	/* Need to clear Retrain Link
+						   bit manually */
 	unsigned int	d3_delay;	/* D3->D0 transition time in ms */
 	unsigned int	d3cold_delay;	/* D3cold->D0 transition time in ms */
 
···
 	void		*sysdata;
 	int		busnr;
 	struct list_head windows;	/* resource_entry */
+	struct list_head dma_ranges;	/* dma ranges resource list */
 	u8 (*swizzle_irq)(struct pci_dev *, u8 *); /* Platform IRQ swizzler */
 	int (*map_irq)(const struct pci_dev *, u8, u8);
 	void (*release_fn)(struct pci_host_bridge *);
···
 };
 
 #define to_pci_bus(n)	container_of(n, struct pci_bus, dev)
+
+static inline u16 pci_dev_id(struct pci_dev *dev)
+{
+	return PCI_DEVID(dev->bus->number, dev->devfn);
+}
 
 /*
  * Returns true if the PCI bus is root (behind host-PCI bridge),
···
 int __must_check pci_request_regions_exclusive(struct pci_dev *, const char *);
 void pci_release_regions(struct pci_dev *);
 int __must_check pci_request_region(struct pci_dev *, int, const char *);
-int __must_check pci_request_region_exclusive(struct pci_dev *, int, const char *);
 void pci_release_region(struct pci_dev *, int);
 int pci_request_selected_regions(struct pci_dev *, int, const char *);
 int pci_request_selected_regions_exclusive(struct pci_dev *, int, const char *);
include/linux/pci_hotplug.h (+56 -10)
···
 	u32 sec_unc_err_mask_or;
 };
 
-struct hotplug_params {
-	struct hpp_type0 *t0;		/* Type0: NULL if not available */
-	struct hpp_type1 *t1;		/* Type1: NULL if not available */
-	struct hpp_type2 *t2;		/* Type2: NULL if not available */
-	struct hpp_type0 type0_data;
-	struct hpp_type1 type1_data;
-	struct hpp_type2 type2_data;
+/*
+ * _HPX PCI Express Setting Record (Type 3)
+ */
+struct hpx_type3 {
+	u16 device_type;
+	u16 function_type;
+	u16 config_space_location;
+	u16 pci_exp_cap_id;
+	u16 pci_exp_cap_ver;
+	u16 pci_exp_vendor_id;
+	u16 dvsec_id;
+	u16 dvsec_rev;
+	u16 match_offset;
+	u32 match_mask_and;
+	u32 match_value;
+	u16 reg_offset;
+	u32 reg_mask_and;
+	u32 reg_mask_or;
+};
+
+struct hotplug_program_ops {
+	void (*program_type0)(struct pci_dev *dev, struct hpp_type0 *hpp);
+	void (*program_type1)(struct pci_dev *dev, struct hpp_type1 *hpp);
+	void (*program_type2)(struct pci_dev *dev, struct hpp_type2 *hpp);
+	void (*program_type3)(struct pci_dev *dev, struct hpx_type3 *hpp);
+};
+
+enum hpx_type3_dev_type {
+	HPX_TYPE_ENDPOINT	= BIT(0),
+	HPX_TYPE_LEG_END	= BIT(1),
+	HPX_TYPE_RC_END		= BIT(2),
+	HPX_TYPE_RC_EC		= BIT(3),
+	HPX_TYPE_ROOT_PORT	= BIT(4),
+	HPX_TYPE_UPSTREAM	= BIT(5),
+	HPX_TYPE_DOWNSTREAM	= BIT(6),
+	HPX_TYPE_PCI_BRIDGE	= BIT(7),
+	HPX_TYPE_PCIE_BRIDGE	= BIT(8),
+};
+
+enum hpx_type3_fn_type {
+	HPX_FN_NORMAL		= BIT(0),
+	HPX_FN_SRIOV_PHYS	= BIT(1),
+	HPX_FN_SRIOV_VIRT	= BIT(2),
+};
+
+enum hpx_type3_cfg_loc {
+	HPX_CFG_PCICFG		= 0,
+	HPX_CFG_PCIE_CAP	= 1,
+	HPX_CFG_PCIE_CAP_EXT	= 2,
+	HPX_CFG_VEND_CAP	= 3,
+	HPX_CFG_DVSEC		= 4,
+	HPX_CFG_MAX,
 };
 
 #ifdef CONFIG_ACPI
 #include <linux/acpi.h>
-int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp);
+int pci_acpi_program_hp_params(struct pci_dev *dev,
+			       const struct hotplug_program_ops *hp_ops);
 bool pciehp_is_native(struct pci_dev *bridge);
 int acpi_get_hp_hw_control_from_firmware(struct pci_dev *bridge);
 bool shpchp_is_native(struct pci_dev *bridge);
 int acpi_pci_check_ejectable(struct pci_bus *pbus, acpi_handle handle);
 int acpi_pci_detect_ejectable(acpi_handle handle);
 #else
-static inline int pci_get_hp_params(struct pci_dev *dev,
-				    struct hotplug_params *hpp)
+static inline int pci_acpi_program_hp_params(struct pci_dev *dev,
+					     const struct hotplug_program_ops *hp_ops)
 {
 	return -ENODEV;
 }
include/linux/switchtec.h (+1 -1)
···
 #include <linux/cdev.h>
 
 #define SWITCHTEC_MRPC_PAYLOAD_SIZE	1024
-#define SWITCHTEC_MAX_PFF_CSR		48
+#define SWITCHTEC_MAX_PFF_CSR		255
 
 #define SWITCHTEC_EVENT_OCCURRED	BIT(0)
 #define SWITCHTEC_EVENT_CLEAR		BIT(0)
include/uapi/linux/pci_regs.h (+71 -67)
···
 /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
 /*
- *	pci_regs.h
- *
  *	PCI standard defines
  *	Copyright 1994, Drew Eckhardt
  *	Copyright 1997--1999 Martin Mares <mj@ucw.cz>
···
  *	PCI System Design Guide
  *
  *	For HyperTransport information, please consult the following manuals
- *	from http://www.hypertransport.org
+ *	from http://www.hypertransport.org :
  *
  *	The HyperTransport I/O Link Specification
  */
···
 #define  PCI_SID_ESR_FIC	0x20	/* First In Chassis Flag */
 #define PCI_SID_CHASSIS_NR	3	/* Chassis Number */
 
-/* Message Signalled Interrupts registers */
+/* Message Signalled Interrupt registers */
 
 #define PCI_MSI_FLAGS		2	/* Message Control */
 #define  PCI_MSI_FLAGS_ENABLE	0x0001	/* MSI feature enabled */
···
 #define PCI_MSI_MASK_64		16	/* Mask bits register for 64-bit devices */
 #define PCI_MSI_PENDING_64	20	/* Pending intrs for 64-bit devices */
 
-/* MSI-X registers */
+/* MSI-X registers (in MSI-X capability) */
 #define PCI_MSIX_FLAGS		2	/* Message Control */
 #define  PCI_MSIX_FLAGS_QSIZE	0x07FF	/* Table size */
 #define  PCI_MSIX_FLAGS_MASKALL	0x4000	/* Mask all vectors for this function */
···
 #define  PCI_MSIX_FLAGS_BIRMASK	PCI_MSIX_PBA_BIR /* deprecated */
 #define PCI_CAP_MSIX_SIZEOF	12	/* size of MSIX registers */
 
-/* MSI-X Table entry format */
+/* MSI-X Table entry format (in memory mapped by a BAR) */
 #define PCI_MSIX_ENTRY_SIZE		16
-#define  PCI_MSIX_ENTRY_LOWER_ADDR	0
-#define  PCI_MSIX_ENTRY_UPPER_ADDR	4
-#define  PCI_MSIX_ENTRY_DATA		8
-#define  PCI_MSIX_ENTRY_VECTOR_CTRL	12
-#define   PCI_MSIX_ENTRY_CTRL_MASKBIT	1
+#define  PCI_MSIX_ENTRY_LOWER_ADDR	0  /* Message Address */
+#define  PCI_MSIX_ENTRY_UPPER_ADDR	4  /* Message Upper Address */
+#define  PCI_MSIX_ENTRY_DATA		8  /* Message Data */
+#define  PCI_MSIX_ENTRY_VECTOR_CTRL	12 /* Vector Control */
+#define   PCI_MSIX_ENTRY_CTRL_MASKBIT	0x00000001
 
 /* CompactPCI Hotswap Register */
 
···
 #define PCI_EA_FIRST_ENT_BRIDGE	8	/* First EA Entry for Bridges */
 #define  PCI_EA_ES		0x00000007 /* Entry Size */
 #define  PCI_EA_BEI		0x000000f0 /* BAR Equivalent Indicator */
+
+/* EA fixed Secondary and Subordinate bus numbers for Bridge */
+#define PCI_EA_SEC_BUS_MASK	0xff
+#define PCI_EA_SUB_BUS_MASK	0xff00
+#define PCI_EA_SUB_BUS_SHIFT	8
+
 /* 0-5 map to BARs 0-5 respectively */
 #define   PCI_EA_BEI_BAR0	0
 #define   PCI_EA_BEI_BAR5	5
···
 /* PCI Express capability registers */
 
 #define PCI_EXP_FLAGS		2	/* Capabilities register */
-#define PCI_EXP_FLAGS_VERS	0x000f	/* Capability version */
-#define PCI_EXP_FLAGS_TYPE	0x00f0	/* Device/Port type */
-#define  PCI_EXP_TYPE_ENDPOINT	0x0	/* Express Endpoint */
-#define  PCI_EXP_TYPE_LEG_END	0x1	/* Legacy Endpoint */
-#define  PCI_EXP_TYPE_ROOT_PORT 0x4	/* Root Port */
-#define  PCI_EXP_TYPE_UPSTREAM	0x5	/* Upstream Port */
-#define  PCI_EXP_TYPE_DOWNSTREAM 0x6	/* Downstream Port */
-#define  PCI_EXP_TYPE_PCI_BRIDGE 0x7	/* PCIe to PCI/PCI-X Bridge */
-#define  PCI_EXP_TYPE_PCIE_BRIDGE 0x8	/* PCI/PCI-X to PCIe Bridge */
-#define  PCI_EXP_TYPE_RC_END	0x9	/* Root Complex Integrated Endpoint */
-#define  PCI_EXP_TYPE_RC_EC	0xa	/* Root Complex Event Collector */
-#define PCI_EXP_FLAGS_SLOT	0x0100	/* Slot implemented */
-#define PCI_EXP_FLAGS_IRQ	0x3e00	/* Interrupt message number */
+#define  PCI_EXP_FLAGS_VERS	0x000f	/* Capability version */
+#define  PCI_EXP_FLAGS_TYPE	0x00f0	/* Device/Port type */
+#define   PCI_EXP_TYPE_ENDPOINT		0x0	/* Express Endpoint */
+#define   PCI_EXP_TYPE_LEG_END		0x1	/* Legacy Endpoint */
+#define   PCI_EXP_TYPE_ROOT_PORT	0x4	/* Root Port */
+#define   PCI_EXP_TYPE_UPSTREAM		0x5	/* Upstream Port */
+#define   PCI_EXP_TYPE_DOWNSTREAM	0x6	/* Downstream Port */
+#define   PCI_EXP_TYPE_PCI_BRIDGE	0x7	/* PCIe to PCI/PCI-X Bridge */
+#define   PCI_EXP_TYPE_PCIE_BRIDGE	0x8	/* PCI/PCI-X to PCIe Bridge */
+#define   PCI_EXP_TYPE_RC_END		0x9	/* Root Complex Integrated Endpoint */
+#define   PCI_EXP_TYPE_RC_EC		0xa	/* Root Complex Event Collector */
+#define  PCI_EXP_FLAGS_SLOT	0x0100	/* Slot implemented */
+#define  PCI_EXP_FLAGS_IRQ	0x3e00	/* Interrupt message number */
 #define PCI_EXP_DEVCAP		4	/* Device capabilities */
 #define  PCI_EXP_DEVCAP_PAYLOAD	0x00000007 /* Max_Payload_Size */
 #define  PCI_EXP_DEVCAP_PHANTOM	0x00000018 /* Phantom functions */
···
 #define PCI_EXP_RTCAP		30	/* Root Capabilities */
 #define  PCI_EXP_RTCAP_CRSVIS	0x0001	/* CRS Software Visibility capability */
 #define PCI_EXP_RTSTA		32	/* Root Status */
-#define PCI_EXP_RTSTA_PME	0x00010000 /* PME status */
-#define PCI_EXP_RTSTA_PENDING	0x00020000 /* PME pending */
+#define  PCI_EXP_RTSTA_PME	0x00010000 /* PME status */
+#define  PCI_EXP_RTSTA_PENDING	0x00020000 /* PME pending */
 /*
  * The Device Capabilities 2, Device Status 2, Device Control 2,
  * Link Capabilities 2, Link Status 2, Link Control 2,
···
 #define  PCI_EXP_DEVCAP2_OBFF_MASK	0x000c0000 /* OBFF support mechanism */
 #define  PCI_EXP_DEVCAP2_OBFF_MSG	0x00040000 /* New message signaling */
 #define  PCI_EXP_DEVCAP2_OBFF_WAKE	0x00080000 /* Re-use WAKE# for OBFF */
-#define PCI_EXP_DEVCAP2_EE_PREFIX	0x00200000 /* End-End TLP Prefix */
+#define  PCI_EXP_DEVCAP2_EE_PREFIX	0x00200000 /* End-End TLP Prefix */
 #define PCI_EXP_DEVCTL2		40	/* Device Control 2 */
 #define  PCI_EXP_DEVCTL2_COMP_TIMEOUT	0x000f	/* Completion Timeout Value */
 #define  PCI_EXP_DEVCTL2_COMP_TMOUT_DIS	0x0010	/* Completion Timeout Disable */
 #define  PCI_EXP_DEVCTL2_ARI		0x0020	/* Alternative Routing-ID */
-#define PCI_EXP_DEVCTL2_ATOMIC_REQ	0x0040	/* Set Atomic requests */
-#define PCI_EXP_DEVCTL2_ATOMIC_EGRESS_BLOCK 0x0080 /* Block atomic egress */
+#define  PCI_EXP_DEVCTL2_ATOMIC_REQ	0x0040	/* Set Atomic requests */
+#define  PCI_EXP_DEVCTL2_ATOMIC_EGRESS_BLOCK 0x0080 /* Block atomic egress */
 #define  PCI_EXP_DEVCTL2_IDO_REQ_EN	0x0100	/* Allow IDO for requests */
 #define  PCI_EXP_DEVCTL2_IDO_CMP_EN	0x0200	/* Allow IDO for completions */
 #define  PCI_EXP_DEVCTL2_LTR_EN		0x0400	/* Enable LTR mechanism */
···
 #define  PCI_EXP_LNKCAP2_SLS_16_0GB	0x00000010 /* Supported Speed 16GT/s */
 #define  PCI_EXP_LNKCAP2_CROSSLINK	0x00000100 /* Crosslink supported */
 #define PCI_EXP_LNKCTL2		48	/* Link Control 2 */
-#define PCI_EXP_LNKCTL2_TLS		0x000f
-#define PCI_EXP_LNKCTL2_TLS_2_5GT	0x0001 /* Supported Speed 2.5GT/s */
-#define PCI_EXP_LNKCTL2_TLS_5_0GT	0x0002 /* Supported Speed 5GT/s */
-#define PCI_EXP_LNKCTL2_TLS_8_0GT	0x0003 /* Supported Speed 8GT/s */
-#define PCI_EXP_LNKCTL2_TLS_16_0GT	0x0004 /* Supported Speed 16GT/s */
+#define  PCI_EXP_LNKCTL2_TLS		0x000f
+#define  PCI_EXP_LNKCTL2_TLS_2_5GT	0x0001 /* Supported Speed 2.5GT/s */
+#define  PCI_EXP_LNKCTL2_TLS_5_0GT	0x0002 /* Supported Speed 5GT/s */
+#define  PCI_EXP_LNKCTL2_TLS_8_0GT	0x0003 /* Supported Speed 8GT/s */
+#define  PCI_EXP_LNKCTL2_TLS_16_0GT	0x0004 /* Supported Speed 16GT/s */
 #define PCI_EXP_LNKSTA2		50	/* Link Status 2 */
 #define PCI_CAP_EXP_ENDPOINT_SIZEOF_V2	52	/* v2 endpoints with link end here */
 #define PCI_EXP_SLTCAP2		52	/* Slot Capabilities 2 */
···
 #define  PCI_ERR_CAP_ECRC_CHKE	0x00000100	/* ECRC Check Enable */
 #define PCI_ERR_HEADER_LOG	28	/* Header Log Register (16 bytes) */
 #define PCI_ERR_ROOT_COMMAND	44	/* Root Error Command */
-#define PCI_ERR_ROOT_CMD_COR_EN		0x00000001 /* Correctable Err Reporting Enable */
-#define PCI_ERR_ROOT_CMD_NONFATAL_EN	0x00000002 /* Non-Fatal Err Reporting Enable */
-#define PCI_ERR_ROOT_CMD_FATAL_EN	0x00000004 /* Fatal Err Reporting Enable */
+#define  PCI_ERR_ROOT_CMD_COR_EN	0x00000001 /* Correctable Err Reporting Enable */
+#define  PCI_ERR_ROOT_CMD_NONFATAL_EN	0x00000002 /* Non-Fatal Err Reporting Enable */
+#define  PCI_ERR_ROOT_CMD_FATAL_EN	0x00000004 /* Fatal Err Reporting Enable */
 #define PCI_ERR_ROOT_STATUS	48
-#define PCI_ERR_ROOT_COR_RCV		0x00000001 /* ERR_COR Received */
-#define PCI_ERR_ROOT_MULTI_COR_RCV	0x00000002 /* Multiple ERR_COR */
-#define PCI_ERR_ROOT_UNCOR_RCV		0x00000004 /* ERR_FATAL/NONFATAL */
-#define PCI_ERR_ROOT_MULTI_UNCOR_RCV	0x00000008 /* Multiple FATAL/NONFATAL */
-#define PCI_ERR_ROOT_FIRST_FATAL	0x00000010 /* First UNC is Fatal */
-#define PCI_ERR_ROOT_NONFATAL_RCV	0x00000020 /* Non-Fatal Received */
-#define PCI_ERR_ROOT_FATAL_RCV		0x00000040 /* Fatal Received */
-#define PCI_ERR_ROOT_AER_IRQ		0xf8000000 /* Advanced Error Interrupt Message Number */
+#define  PCI_ERR_ROOT_COR_RCV		0x00000001 /* ERR_COR Received */
+#define  PCI_ERR_ROOT_MULTI_COR_RCV	0x00000002 /* Multiple ERR_COR */
+#define  PCI_ERR_ROOT_UNCOR_RCV		0x00000004 /* ERR_FATAL/NONFATAL */
+#define  PCI_ERR_ROOT_MULTI_UNCOR_RCV	0x00000008 /* Multiple FATAL/NONFATAL */
+#define  PCI_ERR_ROOT_FIRST_FATAL	0x00000010 /* First UNC is Fatal */
+#define  PCI_ERR_ROOT_NONFATAL_RCV	0x00000020 /* Non-Fatal Received */
+#define  PCI_ERR_ROOT_FATAL_RCV		0x00000040 /* Fatal Received */
+#define  PCI_ERR_ROOT_AER_IRQ		0xf8000000 /* Advanced Error Interrupt Message Number */
 #define PCI_ERR_ROOT_ERR_SRC	52	/* Error Source Identification */
 
 /* Virtual Channel */
···
 
 /* Page Request Interface */
 #define PCI_PRI_CTRL		0x04	/* PRI control register */
-#define  PCI_PRI_CTRL_ENABLE	0x01	/* Enable */
-#define  PCI_PRI_CTRL_RESET	0x02	/* Reset */
+#define  PCI_PRI_CTRL_ENABLE	0x0001	/* Enable */
+#define  PCI_PRI_CTRL_RESET	0x0002	/* Reset */
 #define PCI_PRI_STATUS		0x06	/* PRI status register */
-#define  PCI_PRI_STATUS_RF	0x001	/* Response Failure */
-#define  PCI_PRI_STATUS_UPRGI	0x002	/* Unexpected PRG index */
-#define  PCI_PRI_STATUS_STOPPED	0x100	/* PRI Stopped */
+#define  PCI_PRI_STATUS_RF	0x0001	/* Response Failure */
+#define  PCI_PRI_STATUS_UPRGI	0x0002	/* Unexpected PRG index */
+#define  PCI_PRI_STATUS_STOPPED	0x0100	/* PRI Stopped */
 #define  PCI_PRI_STATUS_PASID	0x8000	/* PRG Response PASID Required */
 #define PCI_PRI_MAX_REQ		0x08	/* PRI max reqs supported */
 #define PCI_PRI_ALLOC_REQ	0x0c	/* PRI max reqs allowed */
···
 
 /* Single Root I/O Virtualization */
 #define PCI_SRIOV_CAP		0x04	/* SR-IOV Capabilities */
-#define  PCI_SRIOV_CAP_VFM	0x01	/* VF Migration Capable */
+#define  PCI_SRIOV_CAP_VFM	0x00000001 /* VF Migration Capable */
 #define  PCI_SRIOV_CAP_INTR(x)	((x) >> 21) /* Interrupt Message Number */
 #define PCI_SRIOV_CTRL		0x08	/* SR-IOV Control */
-#define  PCI_SRIOV_CTRL_VFE	0x01	/* VF Enable */
-#define  PCI_SRIOV_CTRL_VFM	0x02	/* VF Migration Enable */
-#define  PCI_SRIOV_CTRL_INTR	0x04	/* VF Migration Interrupt Enable */
-#define  PCI_SRIOV_CTRL_MSE	0x08	/* VF Memory Space Enable */
-#define  PCI_SRIOV_CTRL_ARI	0x10	/* ARI Capable Hierarchy */
+#define  PCI_SRIOV_CTRL_VFE	0x0001	/* VF Enable */
+#define  PCI_SRIOV_CTRL_VFM	0x0002	/* VF Migration Enable */
+#define  PCI_SRIOV_CTRL_INTR	0x0004	/* VF Migration Interrupt Enable */
+#define  PCI_SRIOV_CTRL_MSE	0x0008	/* VF Memory Space Enable */
+#define  PCI_SRIOV_CTRL_ARI	0x0010	/* ARI Capable Hierarchy */
 #define PCI_SRIOV_STATUS	0x0a	/* SR-IOV Status */
-#define  PCI_SRIOV_STATUS_VFM	0x01	/* VF Migration Status */
+#define  PCI_SRIOV_STATUS_VFM	0x0001	/* VF Migration Status */
 #define PCI_SRIOV_INITIAL_VF	0x0c	/* Initial VFs */
 #define PCI_SRIOV_TOTAL_VF	0x0e	/* Total VFs */
 #define PCI_SRIOV_NUM_VF	0x10	/* Number of VFs */
···
 
 /* Access Control Service */
 #define PCI_ACS_CAP		0x04	/* ACS Capability Register */
-#define  PCI_ACS_SV		0x01	/* Source Validation */
-#define  PCI_ACS_TB		0x02	/* Translation Blocking */
-#define  PCI_ACS_RR		0x04	/* P2P Request Redirect */
-#define  PCI_ACS_CR		0x08	/* P2P Completion Redirect */
-#define  PCI_ACS_UF		0x10	/* Upstream Forwarding */
-#define  PCI_ACS_EC		0x20	/* P2P Egress Control */
-#define  PCI_ACS_DT		0x40	/* Direct Translated P2P */
+#define  PCI_ACS_SV		0x0001	/* Source Validation */
+#define  PCI_ACS_TB		0x0002	/* Translation Blocking */
+#define  PCI_ACS_RR		0x0004	/* P2P Request Redirect */
+#define  PCI_ACS_CR		0x0008	/* P2P Completion Redirect */
+#define  PCI_ACS_UF		0x0010	/* Upstream Forwarding */
+#define  PCI_ACS_EC		0x0020	/* P2P Egress Control */
+#define  PCI_ACS_DT		0x0040	/* Direct Translated P2P */
 #define PCI_ACS_EGRESS_BITS	0x05	/* ACS Egress Control Vector Size */
 #define PCI_ACS_CTRL		0x06	/* ACS Control Register */
 #define PCI_ACS_EGRESS_CTL_V	0x08	/* ACS Egress Control Vector */
···
 #define  PCI_EXP_DPC_CAP_DL_ACTIVE	0x1000	/* ERR_COR signal on DL_Active supported */
 
 #define PCI_EXP_DPC_CTL			6	/* DPC control */
-#define PCI_EXP_DPC_CTL_EN_FATAL	0x0001	/* Enable trigger on ERR_FATAL message */
-#define PCI_EXP_DPC_CTL_EN_NONFATAL	0x0002	/* Enable trigger on ERR_NONFATAL message */
-#define PCI_EXP_DPC_CTL_INT_EN		0x0008	/* DPC Interrupt Enable */
+#define  PCI_EXP_DPC_CTL_EN_FATAL	0x0001	/* Enable trigger on ERR_FATAL message */
+#define  PCI_EXP_DPC_CTL_EN_NONFATAL	0x0002	/* Enable trigger on ERR_NONFATAL message */
+#define  PCI_EXP_DPC_CTL_INT_EN		0x0008
/* DPC Interrupt Enable */ 1003 999 1004 1000 #define PCI_EXP_DPC_STATUS 8 /* DPC Status */ 1005 1001 #define PCI_EXP_DPC_STATUS_TRIGGER 0x0001 /* Trigger Status */
include/uapi/linux/switchtec_ioctl.h  (+12 -1)

 	__u32 active;
 };

-struct switchtec_ioctl_event_summary {
+struct switchtec_ioctl_event_summary_legacy {
 	__u64 global;
 	__u64 part_bitmap;
 	__u32 local_part;
 	__u32 padding;
 	__u32 part[48];
 	__u32 pff[48];
+};
+
+struct switchtec_ioctl_event_summary {
+	__u64 global;
+	__u64 part_bitmap;
+	__u32 local_part;
+	__u32 padding;
+	__u32 part[48];
+	__u32 pff[255];
 };

 #define SWITCHTEC_IOCTL_EVENT_STACK_ERROR	0
···
 	_IOWR('W', 0x41, struct switchtec_ioctl_flash_part_info)
 #define SWITCHTEC_IOCTL_EVENT_SUMMARY \
 	_IOR('W', 0x42, struct switchtec_ioctl_event_summary)
+#define SWITCHTEC_IOCTL_EVENT_SUMMARY_LEGACY \
+	_IOR('W', 0x42, struct switchtec_ioctl_event_summary_legacy)
 #define SWITCHTEC_IOCTL_EVENT_CTL \
 	_IOWR('W', 0x43, struct switchtec_ioctl_event_ctl)
 #define SWITCHTEC_IOCTL_PFF_TO_PORT \
tools/pci/Makefile  (+7 -1)

 CFLAGS += -O2 -Wall -g -D_GNU_SOURCE -I$(OUTPUT)include

-ALL_TARGETS := pcitest pcitest.sh
+ALL_TARGETS := pcitest
 ALL_PROGRAMS := $(patsubst %,$(OUTPUT)%,$(ALL_TARGETS))
+
+SCRIPTS := pcitest.sh
+ALL_SCRIPTS := $(patsubst %,$(OUTPUT)%,$(SCRIPTS))

 all: $(ALL_PROGRAMS)
···
 	install -d -m 755 $(DESTDIR)$(bindir);		\
 	for program in $(ALL_PROGRAMS); do		\
 		install $$program $(DESTDIR)$(bindir);	\
+	done;						\
+	for script in $(ALL_SCRIPTS); do		\
+		install $$script $(DESTDIR)$(bindir);	\
 	done

 FORCE:
tools/pci/pcitest.c  (+4 -4)

 	}

 	fflush(stdout);
+	return (ret < 0) ? ret : 1 - ret; /* return 0 if test succeeded */
 }

 int main(int argc, char **argv)
···
 	/* set default endpoint device */
 	test->device = "/dev/pci-endpoint-test.0";

-	while ((c = getopt(argc, argv, "D:b:m:x:i:Ilrwcs:")) != EOF)
+	while ((c = getopt(argc, argv, "D:b:m:x:i:Ilhrwcs:")) != EOF)
 		switch (c) {
 		case 'D':
 			test->device = optarg;
···
 		case 's':
 			test->size = strtoul(optarg, NULL, 0);
 			continue;
-		case '?':
 		case 'h':
 		default:
 usage:
···
 			"\t-w			Write buffer test\n"
 			"\t-c			Copy buffer test\n"
 			"\t-s <size>		Size of buffer {default: 100KB}\n",
+			"\t-h			Print this help message\n",
 			argv[0]);
 		return -EINVAL;
 	}

-	run_test(test);
-	return 0;
+	return run_test(test);
}