
Merge tag 'pci-v3.15-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI changes from Bjorn Helgaas:
"Enumeration
- Increment max correctly in pci_scan_bridge() (Andreas Noever)
- Clarify the "scan anyway" comment in pci_scan_bridge() (Andreas Noever)
- Assign CardBus bus number only during the second pass (Andreas Noever)
- Use request_resource_conflict() instead of insert_ for bus numbers (Andreas Noever)
- Make sure bus number resources stay within their parent's bounds (Andreas Noever)
- Remove pci_fixup_parent_subordinate_busnr() (Andreas Noever)
- Check for child busses which use more bus numbers than allocated (Andreas Noever)
- Don't scan random busses in pci_scan_bridge() (Andreas Noever)
- x86: Drop pcibios_scan_root() check for bus already scanned (Bjorn Helgaas)
- x86: Use pcibios_scan_root() instead of pci_scan_bus_with_sysdata() (Bjorn Helgaas)
- x86: Use pcibios_scan_root() instead of pci_scan_bus_on_node() (Bjorn Helgaas)
- x86: Merge pci_scan_bus_on_node() into pcibios_scan_root() (Bjorn Helgaas)
- x86: Drop return value of pcibios_scan_root() (Bjorn Helgaas)

NUMA
- x86: Add x86_pci_root_bus_node() to look up NUMA node from PCI bus (Bjorn Helgaas)
- x86: Use x86_pci_root_bus_node() instead of get_mp_bus_to_node() (Bjorn Helgaas)
- x86: Remove mp_bus_to_node[], set_mp_bus_to_node(), get_mp_bus_to_node() (Bjorn Helgaas)
- x86: Use NUMA_NO_NODE, not -1, for unknown node (Bjorn Helgaas)
- x86: Remove acpi_get_pxm() usage (Bjorn Helgaas)
- ia64: Use NUMA_NO_NODE, not MAX_NUMNODES, for unknown node (Bjorn Helgaas)
- ia64: Remove acpi_get_pxm() usage (Bjorn Helgaas)
- ACPI: Fix acpi_get_node() prototype (Bjorn Helgaas)

Resource management
- i2o: Fix and refactor PCI space allocation (Bjorn Helgaas)
- Add resource_contains() (Bjorn Helgaas)
- Add %pR support for IORESOURCE_UNSET (Bjorn Helgaas)
- Mark resources as IORESOURCE_UNSET if we can't assign them (Bjorn Helgaas)
- Don't clear IORESOURCE_UNSET when updating BAR (Bjorn Helgaas)
- Check IORESOURCE_UNSET before updating BAR (Bjorn Helgaas)
- Don't try to claim IORESOURCE_UNSET resources (Bjorn Helgaas)
- Mark 64-bit resource as IORESOURCE_UNSET if we only support 32-bit (Bjorn Helgaas)
- Don't enable decoding if BAR hasn't been assigned an address (Bjorn Helgaas)
- Add "weak" generic pcibios_enable_device() implementation (Bjorn Helgaas)
- alpha, microblaze, sh, sparc, tile: Use default pcibios_enable_device() (Bjorn Helgaas)
- s390: Use generic pci_enable_resources() (Bjorn Helgaas)
- Don't check resource_size() in pci_bus_alloc_resource() (Bjorn Helgaas)
- Set type in __request_region() (Bjorn Helgaas)
- Check all IORESOURCE_TYPE_BITS in pci_bus_alloc_from_region() (Bjorn Helgaas)
- Change pci_bus_alloc_resource() type_mask to unsigned long (Bjorn Helgaas)
- Log IDE resource quirk in dmesg (Bjorn Helgaas)
- Revert "[PATCH] Insert GART region into resource map" (Bjorn Helgaas)

PCI device hotplug
- Make check_link_active() non-static (Rajat Jain)
- Use link change notifications for hot-plug and removal (Rajat Jain)
- Enable link state change notifications (Rajat Jain)
- Don't disable the link permanently during removal (Rajat Jain)
- Don't check adapter or latch status while disabling (Rajat Jain)
- Disable link notification across slot reset (Rajat Jain)
- Ensure very fast hotplug events are also processed (Rajat Jain)
- Add hotplug_lock to serialize hotplug events (Rajat Jain)
- Remove a non-existent card, regardless of "surprise" capability (Rajat Jain)
- Don't turn slot off when hot-added device already exists (Yijing Wang)

MSI
- Keep pci_enable_msi() documentation (Alexander Gordeev)
- ahci: Fix broken single MSI fallback (Alexander Gordeev)
- ahci, vfio: Use pci_enable_msi_range() (Alexander Gordeev)
- Check kmalloc() return value, fix leak of name (Greg Kroah-Hartman)
- Fix leak of msi_attrs (Greg Kroah-Hartman)
- Fix pci_msix_vec_count() htmldocs failure (Masanari Iida)

Virtualization
- Device-specific ACS support (Alex Williamson)

Freescale i.MX6
- Wait for retraining (Marek Vasut)

Marvell MVEBU
- Use Device ID and revision from underlying endpoint (Andrew Lunn)
- Fix incorrect size for PCI aperture resources (Jason Gunthorpe)
- Call request_resource() on the apertures (Jason Gunthorpe)
- Fix potential issue in range parsing (Jean-Jacques Hiblot)

Renesas R-Car
- Check platform_get_irq() return code (Ben Dooks)
- Add error interrupt handling (Ben Dooks)
- Fix bridge logic configuration accesses (Ben Dooks)
- Register each instance independently (Magnus Damm)
- Break out window size handling (Magnus Damm)
- Make the Kconfig dependencies more generic (Magnus Damm)

Synopsys DesignWare
- Fix RC BAR to be single 64-bit non-prefetchable memory (Mohit Kumar)

Miscellaneous
- Remove unused SR-IOV VF Migration support (Bjorn Helgaas)
- Enable INTx if BIOS left them disabled (Bjorn Helgaas)
- Fix hex vs decimal typo in cpqhpc_probe() (Dan Carpenter)
- Clean up par-arch object file list (Liviu Dudau)
- Set IORESOURCE_ROM_SHADOW only for the default VGA device (Sander Eikelenboom)
- ACPI, ARM, drm, powerpc, pcmcia, PCI: Use list_for_each_entry() for bus traversal (Yijing Wang)
- Fix pci_bus_b() build failure (Paul Gortmaker)"

* tag 'pci-v3.15-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (108 commits)
Revert "[PATCH] Insert GART region into resource map"
PCI: Log IDE resource quirk in dmesg
PCI: Change pci_bus_alloc_resource() type_mask to unsigned long
PCI: Check all IORESOURCE_TYPE_BITS in pci_bus_alloc_from_region()
resources: Set type in __request_region()
PCI: Don't check resource_size() in pci_bus_alloc_resource()
s390/PCI: Use generic pci_enable_resources()
tile PCI RC: Use default pcibios_enable_device()
sparc/PCI: Use default pcibios_enable_device() (Leon only)
sh/PCI: Use default pcibios_enable_device()
microblaze/PCI: Use default pcibios_enable_device()
alpha/PCI: Use default pcibios_enable_device()
PCI: Add "weak" generic pcibios_enable_device() implementation
PCI: Don't enable decoding if BAR hasn't been assigned an address
PCI: Enable INTx in pci_reenable_device() only when MSI/MSI-X not enabled
PCI: Mark 64-bit resource as IORESOURCE_UNSET if we only support 32-bit
PCI: Don't try to claim IORESOURCE_UNSET resources
PCI: Check IORESOURCE_UNSET before updating BAR
PCI: Don't clear IORESOURCE_UNSET when updating BAR
PCI: Mark resources as IORESOURCE_UNSET if we can't assign them
...

Conflicts:
arch/x86/include/asm/topology.h
drivers/ata/ahci.c

Total diffstat (abridged): +942 -862
Documentation/PCI/pci-iov-howto.txt (-4)
···
 	echo 0 > \
 	/sys/bus/pci/devices/<DOMAIN:BUS:DEVICE.FUNCTION>/sriov_numvfs
 
-To notify SR-IOV core of Virtual Function Migration:
-	(a) In the driver:
-		irqreturn_t pci_sriov_migration(struct pci_dev *dev);
-
 3.2 Usage example
 
 Following piece of code illustrates the usage of the SR-IOV API.

arch/alpha/kernel/pci.c (-6)
···
 	}
 }
 
-int
-pcibios_enable_device(struct pci_dev *dev, int mask)
-{
-	return pci_enable_resources(dev, mask);
-}
-
 /*
  * If we set up a device for bus mastering, we need to check the latency
  * timer as certain firmware forgets to set it properly, as seen

arch/arm/kernel/bios32.c (+3 -6)
···
 static int debug_pci;
 
 /*
- * We can't use pci_find_device() here since we are
+ * We can't use pci_get_device() here since we are
  * called from interrupt context.
  */
 static void pcibios_bus_report_status(struct pci_bus *bus, u_int status_mask, int warn)
···
 
 void pcibios_report_status(u_int status_mask, int warn)
 {
-	struct list_head *l;
+	struct pci_bus *bus;
 
-	list_for_each(l, &pci_root_buses) {
-		struct pci_bus *bus = pci_bus_b(l);
-
+	list_for_each_entry(bus, &pci_root_buses, node)
 		pcibios_bus_report_status(bus, status_mask, warn);
-	}
 }
 
 /*

arch/frv/mb93090-mb00/pci-frv.c (+1 -1)
···
 
 	/* Depth-First Search on bus tree */
 	for (ln=bus_list->next; ln != bus_list; ln=ln->next) {
-		bus = pci_bus_b(ln);
+		bus = list_entry(ln, struct pci_bus, node);
 		if ((dev = bus->self)) {
 			for (idx = PCI_BRIDGE_RESOURCES; idx < PCI_NUM_RESOURCES; idx++) {
 				r = &dev->resource[idx];

arch/ia64/hp/common/sba_iommu.c (+11 -21)
···
 
 #ifdef CONFIG_NUMA
 	{
+		int node = ioc->node;
 		struct page *page;
-		page = alloc_pages_exact_node(ioc->node == MAX_NUMNODES ?
-					      numa_node_id() : ioc->node, flags,
-					      get_order(size));
 
+		if (node == NUMA_NO_NODE)
+			node = numa_node_id();
+
+		page = alloc_pages_exact_node(node, flags, get_order(size));
 		if (unlikely(!page))
 			return NULL;
 
···
 	seq_printf(s, "Hewlett Packard %s IOC rev %d.%d\n",
 		   ioc->name, ((ioc->rev >> 4) & 0xF), (ioc->rev & 0xF));
 #ifdef CONFIG_NUMA
-	if (ioc->node != MAX_NUMNODES)
+	if (ioc->node != NUMA_NO_NODE)
 		seq_printf(s, "NUMA node       : %d\n", ioc->node);
 #endif
 	seq_printf(s, "IOVA size       : %ld MB\n", ((ioc->pdir_size >> 3) * iovp_size)/(1024*1024));
···
 		printk(KERN_WARNING "No IOC for PCI Bus %04x:%02x in ACPI\n", pci_domain_nr(bus), bus->number);
 }
 
-#ifdef CONFIG_NUMA
 static void __init
 sba_map_ioc_to_node(struct ioc *ioc, acpi_handle handle)
 {
+#ifdef CONFIG_NUMA
 	unsigned int node;
-	int pxm;
 
-	ioc->node = MAX_NUMNODES;
-
-	pxm = acpi_get_pxm(handle);
-
-	if (pxm < 0)
-		return;
-
-	node = pxm_to_node(pxm);
-
-	if (node >= MAX_NUMNODES || !node_online(node))
-		return;
+	node = acpi_get_node(handle);
+	if (node != NUMA_NO_NODE && !node_online(node))
+		node = NUMA_NO_NODE;
 
 	ioc->node = node;
-	return;
-}
-#else
-#define sba_map_ioc_to_node(ioc, handle)
 #endif
+}
 
 static int
 acpi_sba_ioc_add(struct acpi_device *device,

arch/ia64/include/asm/pci.h (+1 -1)
···
 	struct acpi_device *companion;
 	void *iommu;
 	int segment;
-	int node;	/* nearest node with memory or -1 for global allocation */
+	int node;	/* nearest node with memory or NUMA_NO_NODE for global allocation */
 
 	void *platform_data;
 };

arch/ia64/kernel/acpi.c (+7 -21)
···
  *  ACPI based hotplug CPU support
  */
 #ifdef CONFIG_ACPI_HOTPLUG_CPU
-static
-int acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
+static int acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
 {
 #ifdef CONFIG_ACPI_NUMA
-	int pxm_id;
-	int nid;
-
-	pxm_id = acpi_get_pxm(handle);
 	/*
 	 * We don't have cpu-only-node hotadd. But if the system equips
 	 * SRAT table, pxm is already found and node is ready.
 	 * This code here is for the system which doesn't have full SRAT
 	 * table for possible cpus.
 	 */
-	nid = acpi_map_pxm_to_node(pxm_id);
 	node_cpuid[cpu].phys_id = physid;
-	node_cpuid[cpu].nid = nid;
+	node_cpuid[cpu].nid = acpi_get_node(handle);
 #endif
-	return (0);
+	return 0;
 }
 
 int additional_cpus __initdata = -1;
···
 	union acpi_object *obj;
 	struct acpi_madt_io_sapic *iosapic;
 	unsigned int gsi_base;
-	int pxm, node;
+	int node;
 
 	/* Only care about objects w/ a method that returns the MADT */
 	if (ACPI_FAILURE(acpi_evaluate_object(handle, "_MAT", NULL, &buffer)))
···
 
 	kfree(buffer.pointer);
 
-	/*
-	 * OK, it's an IOSAPIC MADT entry, look for a _PXM value to tell
-	 * us which node to associate this with.
-	 */
-	pxm = acpi_get_pxm(handle);
-	if (pxm < 0)
-		return AE_OK;
-
-	node = pxm_to_node(pxm);
-
-	if (node >= MAX_NUMNODES || !node_online(node) ||
+	/* OK, it's an IOSAPIC MADT entry; associate it with a node */
+	node = acpi_get_node(handle);
+	if (node == NUMA_NO_NODE || !node_online(node) ||
 	    cpumask_empty(cpumask_of_node(node)))
 		return AE_OK;
 

arch/ia64/pci/fixup.c (+14 -11)
···
 
 #include <linux/pci.h>
 #include <linux/init.h>
+#include <linux/vgaarb.h>
 
 #include <asm/machvec.h>
 
···
  * IORESOURCE_ROM_SHADOW is used to associate the boot video
  * card with this copy. On laptops this copy has to be used since
  * the main ROM may be compressed or combined with another image.
- * See pci_map_rom() for use of this flag. IORESOURCE_ROM_SHADOW
- * is marked here since the boot video device will be the only enabled
- * video device at this point.
+ * See pci_map_rom() for use of this flag. Before marking the device
+ * with IORESOURCE_ROM_SHADOW check if a vga_default_device is already set
+ * by either arch code or vga-arbitration, if so only apply the fixup to this
+ * already determined primary video card.
  */
 
 static void pci_fixup_video(struct pci_dev *pdev)
···
 	    && (strcmp(ia64_platform_name, "hpzx1")  != 0))
 		return;
 	/* Maybe, this machine supports legacy memory map. */
-
-	if ((pdev->class >> 8) != PCI_CLASS_DISPLAY_VGA)
-		return;
 
 	/* Is VGA routed to us? */
 	bus = pdev->bus;
···
 		}
 		bus = bus->parent;
 	}
-	pci_read_config_word(pdev, PCI_COMMAND, &config);
-	if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
-		pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW;
-		dev_printk(KERN_DEBUG, &pdev->dev, "Boot video device\n");
+	if (!vga_default_device() || pdev == vga_default_device()) {
+		pci_read_config_word(pdev, PCI_COMMAND, &config);
+		if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
+			pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW;
+			dev_printk(KERN_DEBUG, &pdev->dev, "Boot video device\n");
+			vga_set_default_device(pdev);
+		}
 	}
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, pci_fixup_video);
+DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_ANY_ID, PCI_ANY_ID,
+			      PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_video);

arch/ia64/pci/pci.c (+2 -8)
···
 		return NULL;
 
 	controller->segment = seg;
-	controller->node = -1;
 	return controller;
 }
···
 	struct pci_root_info *info = NULL;
 	int busnum = root->secondary.start;
 	struct pci_bus *pbus;
-	int pxm, ret;
+	int ret;
 
 	controller = alloc_pci_controller(domain);
 	if (!controller)
 		return NULL;
 
 	controller->companion = device;
-
-	pxm = acpi_get_pxm(device->handle);
-#ifdef CONFIG_NUMA
-	if (pxm >= 0)
-		controller->node = pxm_to_node(pxm);
-#endif
+	controller->node = acpi_get_node(device->handle);
 
 	info = kzalloc(sizeof(*info), GFP_KERNEL);
 	if (!info) {

arch/microblaze/pci/pci-common.c (-5)
···
 }
 EXPORT_SYMBOL_GPL(pcibios_finish_adding_to_bus);
 
-int pcibios_enable_device(struct pci_dev *dev, int mask)
-{
-	return pci_enable_resources(dev, mask);
-}
-
 static void pcibios_setup_phb_resources(struct pci_controller *hose,
 					struct list_head *resources)
 {

arch/powerpc/kernel/pci_64.c (+1 -3)
···
 			       unsigned long in_devfn)
 {
 	struct pci_controller* hose;
-	struct list_head *ln;
 	struct pci_bus *bus = NULL;
 	struct device_node *hose_node;
 
···
 	 * used on pre-domains setup. We return the first match
 	 */
 
-	for (ln = pci_root_buses.next; ln != &pci_root_buses; ln = ln->next) {
-		bus = pci_bus_b(ln);
+	list_for_each_entry(bus, &pci_root_buses, node) {
 		if (in_bus >= bus->number && in_bus <= bus->busn_res.end)
 			break;
 		bus = NULL;

arch/powerpc/platforms/pseries/pci_dlpar.c (+3 -3)
···
 					  struct device_node *dn)
 {
 	struct pci_bus *child = NULL;
-	struct list_head *tmp;
+	struct pci_bus *tmp;
 	struct device_node *busdn;
 
 	busdn = pci_bus_to_OF_node(bus);
 	if (busdn == dn)
 		return bus;
 
-	list_for_each(tmp, &bus->children) {
-		child = find_bus_among_children(pci_bus_b(tmp), dn);
+	list_for_each_entry(tmp, &bus->children, node) {
+		child = find_bus_among_children(tmp, dn);
 		if (child)
 			break;
 	};

arch/s390/pci/pci.c (+1 -15)
···
 int pcibios_enable_device(struct pci_dev *pdev, int mask)
 {
 	struct zpci_dev *zdev = get_zdev(pdev);
-	struct resource *res;
-	u16 cmd;
-	int i;
 
 	zdev->pdev = pdev;
 	zpci_debug_init_device(zdev);
 	zpci_fmb_enable_device(zdev);
 	zpci_map_resources(zdev);
 
-	pci_read_config_word(pdev, PCI_COMMAND, &cmd);
-	for (i = 0; i < PCI_BAR_COUNT; i++) {
-		res = &pdev->resource[i];
-
-		if (res->flags & IORESOURCE_IO)
-			return -EINVAL;
-
-		if (res->flags & IORESOURCE_MEM)
-			cmd |= PCI_COMMAND_MEMORY;
-	}
-	pci_write_config_word(pdev, PCI_COMMAND, cmd);
-	return 0;
+	return pci_enable_resources(pdev, mask);
 }
 
 void pcibios_disable_device(struct pci_dev *pdev)

arch/sh/drivers/pci/pci.c (-5)
···
 	return start;
 }
 
-int pcibios_enable_device(struct pci_dev *dev, int mask)
-{
-	return pci_enable_resources(dev, mask);
-}
-
 static void __init
 pcibios_bus_report_status_early(struct pci_channel *hose,
 				int top_bus, int current_bus,

arch/sparc/kernel/leon_pci.c (-5)
···
 	return res->start;
 }
 
-int pcibios_enable_device(struct pci_dev *dev, int mask)
-{
-	return pci_enable_resources(dev, mask);
-}
-
 /* in/out routines taken from pcic.c
  *
  * This probably belongs here rather than ioport.c because

arch/tile/kernel/pci_gx.c (-12)
···
 }
 
 /*
- * Enable memory address decoding, as appropriate, for the
- * device described by the 'dev' struct.
- *
- * This is called from the generic PCI layer, and can be called
- * for bridges or endpoints.
- */
-int pcibios_enable_device(struct pci_dev *dev, int mask)
-{
-	return pci_enable_resources(dev, mask);
-}
-
-/*
  * Called for each device after PCI setup is done.
  * We initialize the PCI device capabilities conservatively, assuming that
  * all devices can only address the 32-bit DMA space. The exception here is

arch/x86/include/asm/pci.h (+1 -6)
···
 extern int noioapicquirk;
 extern int noioapicreroute;
 
-/* scan a bus after allocating a pci_sysdata for it */
-extern struct pci_bus *pci_scan_bus_on_node(int busno, struct pci_ops *ops,
-					    int node);
-extern struct pci_bus *pci_scan_bus_with_sysdata(int busno);
-
 #ifdef CONFIG_PCI
 
 #ifdef CONFIG_PCI_DOMAINS
···
 
 extern int pcibios_enabled;
 void pcibios_config_init(void);
-struct pci_bus *pcibios_scan_root(int bus);
+void pcibios_scan_root(int bus);
 
 void pcibios_set_master(struct pci_dev *dev);
 void pcibios_penalize_isa_irq(int irq, int active);

arch/x86/include/asm/topology.h (+1 -13)
···
 }
 
 struct pci_bus;
+int x86_pci_root_bus_node(int bus);
 void x86_pci_root_bus_resources(int bus, struct list_head *resources);
-
-#ifdef CONFIG_NUMA
-extern int get_mp_bus_to_node(int busnum);
-extern void set_mp_bus_to_node(int busnum, int node);
-#else
-static inline int get_mp_bus_to_node(int busnum)
-{
-	return 0;
-}
-static inline void set_mp_bus_to_node(int busnum, int node)
-{
-}
-#endif
 
 #endif /* _ASM_X86_TOPOLOGY_H */

arch/x86/pci/acpi.c (+18 -41)
···
 }
 #endif
 
-static acpi_status
-resource_to_addr(struct acpi_resource *resource,
-		 struct acpi_resource_address64 *addr)
+static acpi_status resource_to_addr(struct acpi_resource *resource,
+				    struct acpi_resource_address64 *addr)
 {
 	acpi_status status;
 	struct acpi_resource_memory24 *memory24;
···
 	return AE_ERROR;
 }
 
-static acpi_status
-count_resource(struct acpi_resource *acpi_res, void *data)
+static acpi_status count_resource(struct acpi_resource *acpi_res, void *data)
 {
 	struct pci_root_info *info = data;
 	struct acpi_resource_address64 addr;
···
 	return AE_OK;
 }
 
-static acpi_status
-setup_resource(struct acpi_resource *acpi_res, void *data)
+static acpi_status setup_resource(struct acpi_resource *acpi_res, void *data)
 {
 	struct pci_root_info *info = data;
 	struct resource *res;
···
 	__release_pci_root_info(info);
 }
 
-static void
-probe_pci_root_info(struct pci_root_info *info, struct acpi_device *device,
-		    int busnum, int domain)
+static void probe_pci_root_info(struct pci_root_info *info,
+				struct acpi_device *device,
+				int busnum, int domain)
 {
 	size_t size;
 
···
 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
 {
 	struct acpi_device *device = root->device;
-	struct pci_root_info *info = NULL;
+	struct pci_root_info *info;
 	int domain = root->segment;
 	int busnum = root->secondary.start;
 	LIST_HEAD(resources);
-	struct pci_bus *bus = NULL;
+	struct pci_bus *bus;
 	struct pci_sysdata *sd;
 	int node;
-#ifdef CONFIG_ACPI_NUMA
-	int pxm;
-#endif
 
 	if (pci_ignore_seg)
 		domain = 0;
···
 		return NULL;
 	}
 
-	node = -1;
-#ifdef CONFIG_ACPI_NUMA
-	pxm = acpi_get_pxm(device->handle);
-	if (pxm >= 0)
-		node = pxm_to_node(pxm);
-	if (node != -1)
-		set_mp_bus_to_node(busnum, node);
-	else
-#endif
-		node = get_mp_bus_to_node(busnum);
+	node = acpi_get_node(device->handle);
+	if (node == NUMA_NO_NODE)
+		node = x86_pci_root_bus_node(busnum);
 
-	if (node != -1 && !node_online(node))
-		node = -1;
+	if (node != NUMA_NO_NODE && !node_online(node))
+		node = NUMA_NO_NODE;
 
 	info = kzalloc(sizeof(*info), GFP_KERNEL);
 	if (!info) {
···
 	sd->domain = domain;
 	sd->node = node;
 	sd->companion = device;
-	/*
-	 * Maybe the desired pci bus has been already scanned. In such case
-	 * it is unnecessary to scan the pci bus with the given domain,busnum.
-	 */
+
 	bus = pci_find_bus(domain, busnum);
 	if (bus) {
 		/*
-		 * If the desired bus exits, the content of bus->sysdata will
-		 * be replaced by sd.
+		 * If the desired bus has been scanned already, replace
+		 * its bus->sysdata.
 		 */
 		memcpy(bus->sysdata, sd, sizeof(*sd));
 		kfree(info);
···
 			pcie_bus_configure_settings(child);
 	}
 
-	if (bus && node != -1) {
-#ifdef CONFIG_ACPI_NUMA
-		if (pxm >= 0)
-			dev_printk(KERN_DEBUG, &bus->dev,
-				   "on NUMA node %d (pxm %d)\n", node, pxm);
-#else
+	if (bus && node != NUMA_NO_NODE)
 		dev_printk(KERN_DEBUG, &bus->dev, "on NUMA node %d\n", node);
-#endif
-	}
 
 	return bus;
 }

arch/x86/pci/amd_bus.c (-10)
···
 	return NULL;
 }
 
-static void __init set_mp_bus_range_to_node(int min_bus, int max_bus, int node)
-{
-#ifdef CONFIG_NUMA
-	int j;
-
-	for (j = min_bus; j <= max_bus; j++)
-		set_mp_bus_to_node(j, node);
-#endif
-}
 /**
  * early_fill_mp_bus_to_node()
  * called before pcibios_scan_root and pci_scan_bus
···
 		min_bus = (reg >> 16) & 0xff;
 		max_bus = (reg >> 24) & 0xff;
 		node = (reg >> 4) & 0x07;
-		set_mp_bus_range_to_node(min_bus, max_bus, node);
 		link = (reg >> 8) & 0x03;
 
 		info = alloc_pci_root_info(min_bus, max_bus, node, link);

arch/x86/pci/bus_numa.c (+10 -3)
···
 {
 	struct pci_root_info *info;
 
-	if (list_empty(&pci_root_infos))
-		return NULL;
-
 	list_for_each_entry(info, &pci_root_infos, list)
 		if (info->busn.start == bus)
 			return info;
 
 	return NULL;
+}
+
+int x86_pci_root_bus_node(int bus)
+{
+	struct pci_root_info *info = x86_find_pci_root_info(bus);
+
+	if (!info)
+		return NUMA_NO_NODE;
+
+	return info->node;
 }
 
 void x86_pci_root_bus_resources(int bus, struct list_head *resources)

arch/x86/pci/common.c (+16 -112)
···
 	dmi_check_system(pciprobe_dmi_table);
 }
 
-struct pci_bus *pcibios_scan_root(int busnum)
+void pcibios_scan_root(int busnum)
 {
-	struct pci_bus *bus = NULL;
+	struct pci_bus *bus;
+	struct pci_sysdata *sd;
+	LIST_HEAD(resources);
 
-	while ((bus = pci_find_next_bus(bus)) != NULL) {
-		if (bus->number == busnum) {
-			/* Already scanned */
-			return bus;
-		}
+	sd = kzalloc(sizeof(*sd), GFP_KERNEL);
+	if (!sd) {
+		printk(KERN_ERR "PCI: OOM, skipping PCI bus %02x\n", busnum);
+		return;
 	}
-
-	return pci_scan_bus_on_node(busnum, &pci_root_ops,
-				    get_mp_bus_to_node(busnum));
+	sd->node = x86_pci_root_bus_node(busnum);
+	x86_pci_root_bus_resources(busnum, &resources);
+	printk(KERN_DEBUG "PCI: Probing PCI hardware (bus %02x)\n", busnum);
+	bus = pci_scan_root_bus(NULL, busnum, &pci_root_ops, sd, &resources);
+	if (!bus) {
+		pci_free_resource_list(&resources);
+		kfree(sd);
+	}
 }
 
 void __init pcibios_set_cache_line_size(void)
···
 	else
 		return 0;
 }
-
-struct pci_bus *pci_scan_bus_on_node(int busno, struct pci_ops *ops, int node)
-{
-	LIST_HEAD(resources);
-	struct pci_bus *bus = NULL;
-	struct pci_sysdata *sd;
-
-	/*
-	 * Allocate per-root-bus (not per bus) arch-specific data.
-	 * TODO: leak; this memory is never freed.
-	 * It's arguable whether it's worth the trouble to care.
-	 */
-	sd = kzalloc(sizeof(*sd), GFP_KERNEL);
-	if (!sd) {
-		printk(KERN_ERR "PCI: OOM, skipping PCI bus %02x\n", busno);
-		return NULL;
-	}
-	sd->node = node;
-	x86_pci_root_bus_resources(busno, &resources);
-	printk(KERN_DEBUG "PCI: Probing PCI hardware (bus %02x)\n", busno);
-	bus = pci_scan_root_bus(NULL, busno, ops, sd, &resources);
-	if (!bus) {
-		pci_free_resource_list(&resources);
-		kfree(sd);
-	}
-
-	return bus;
-}
-
-struct pci_bus *pci_scan_bus_with_sysdata(int busno)
-{
-	return pci_scan_bus_on_node(busno, &pci_root_ops, -1);
-}
-
-/*
- * NUMA info for PCI busses
- *
- * Early arch code is responsible for filling in reasonable values here.
- * A node id of "-1" means "use current node".  In other words, if a bus
- * has a -1 node id, it's not tightly coupled to any particular chunk
- * of memory (as is the case on some Nehalem systems).
- */
-#ifdef CONFIG_NUMA
-
-#define BUS_NR 256
-
-#ifdef CONFIG_X86_64
-
-static int mp_bus_to_node[BUS_NR] = {
-	[0 ... BUS_NR - 1] = -1
-};
-
-void set_mp_bus_to_node(int busnum, int node)
-{
-	if (busnum >= 0 && busnum < BUS_NR)
-		mp_bus_to_node[busnum] = node;
-}
-
-int get_mp_bus_to_node(int busnum)
-{
-	int node = -1;
-
-	if (busnum < 0 || busnum > (BUS_NR - 1))
-		return node;
-
-	node = mp_bus_to_node[busnum];
-
-	/*
-	 * let numa_node_id to decide it later in dma_alloc_pages
-	 * if there is no ram on that node
-	 */
-	if (node != -1 && !node_online(node))
-		node = -1;
-
-	return node;
-}
-
-#else /* CONFIG_X86_32 */
-
-static int mp_bus_to_node[BUS_NR] = {
-	[0 ... BUS_NR - 1] = -1
-};
-
-void set_mp_bus_to_node(int busnum, int node)
-{
-	if (busnum >= 0 && busnum < BUS_NR)
-		mp_bus_to_node[busnum] = (unsigned char) node;
-}
-
-int get_mp_bus_to_node(int busnum)
-{
-	int node;
-
-	if (busnum < 0 || busnum > (BUS_NR - 1))
-		return 0;
-	node = mp_bus_to_node[busnum];
-	return node;
-}
-
-#endif /* CONFIG_X86_32 */
-
-#endif /* CONFIG_NUMA */

arch/x86/pci/fixup.c (+13 -11)
···
 		dev_dbg(&d->dev, "i450NX PXB %d: %02x/%02x/%02x\n", pxb, busno,
 			suba, subb);
 		if (busno)
-			pci_scan_bus_with_sysdata(busno);	/* Bus A */
+			pcibios_scan_root(busno);	/* Bus A */
 		if (suba < subb)
-			pci_scan_bus_with_sysdata(suba+1);	/* Bus B */
+			pcibios_scan_root(suba+1);	/* Bus B */
 	}
 	pcibios_last_bus = -1;
 }
···
 	u8 busno;
 	pci_read_config_byte(d, 0x4a, &busno);
 	dev_info(&d->dev, "i440KX/GX host bridge; secondary bus %02x\n", busno);
-	pci_scan_bus_with_sysdata(busno);
+	pcibios_scan_root(busno);
 	pcibios_last_bus = -1;
 }
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82454GX, pci_fixup_i450gx);
···
  * IORESOURCE_ROM_SHADOW is used to associate the boot video
  * card with this copy. On laptops this copy has to be used since
  * the main ROM may be compressed or combined with another image.
- * See pci_map_rom() for use of this flag. IORESOURCE_ROM_SHADOW
- * is marked here since the boot video device will be the only enabled
- * video device at this point.
+ * See pci_map_rom() for use of this flag. Before marking the device
+ * with IORESOURCE_ROM_SHADOW check if a vga_default_device is already set
+ * by either arch code or vga-arbitration, if so only apply the fixup to this
+ * already determined primary video card.
  */
 
 static void pci_fixup_video(struct pci_dev *pdev)
···
 		}
 		bus = bus->parent;
 	}
-	pci_read_config_word(pdev, PCI_COMMAND, &config);
-	if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
-		pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW;
-		dev_printk(KERN_DEBUG, &pdev->dev, "Boot video device\n");
-		if (!vga_default_device())
+	if (!vga_default_device() || pdev == vga_default_device()) {
+		pci_read_config_word(pdev, PCI_COMMAND, &config);
+		if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
+			pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW;
+			dev_printk(KERN_DEBUG, &pdev->dev, "Boot video device\n");
 			vga_set_default_device(pdev);
+		}
 	}
 }
 DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_ANY_ID, PCI_ANY_ID,

arch/x86/pci/irq.c (+1 -5)
···
 			busmap[e->bus] = 1;
 	}
 	for (i = 1; i < 256; i++) {
-		int node;
 		if (!busmap[i] || pci_find_bus(0, i))
 			continue;
-		node = get_mp_bus_to_node(i);
-		if (pci_scan_bus_on_node(i, &pci_root_ops, node))
-			printk(KERN_INFO "PCI: Discovered primary peer "
-			       "bus %02x [IRQ]\n", i);
+		pcibios_scan_root(i);
 	}
 	pcibios_last_bus = -1;
 }

arch/x86/pci/legacy.c (+1 -3)
···
 void pcibios_scan_specific_bus(int busn)
 {
 	int devfn;
-	long node;
 	u32 l;
 
 	if (pci_find_bus(0, busn))
 		return;
 
-	node = get_mp_bus_to_node(busn);
 	for (devfn = 0; devfn < 256; devfn += 8) {
 		if (!raw_pci_read(0, busn, devfn, PCI_VENDOR_ID, 2, &l) &&
 		    l != 0x0000 && l != 0xffff) {
 			DBG("Found device at %02x:%02x [%04x]\n", busn, devfn, l);
 			printk(KERN_INFO "PCI: Discovered peer bus %02x\n", busn);
-			pci_scan_bus_on_node(busn, &pci_root_ops, node);
+			pcibios_scan_root(busn);
 			return;
 		}
 	}

+3 -3
arch/x86/pci/numaq_32.c
··· 135 135 pxb, busno, suba, subb); 136 136 if (busno) { 137 137 /* Bus A */ 138 - pci_scan_bus_with_sysdata(QUADLOCAL2BUS(quad, busno)); 138 + pcibios_scan_root(QUADLOCAL2BUS(quad, busno)); 139 139 } 140 140 if (suba < subb) { 141 141 /* Bus B */ 142 - pci_scan_bus_with_sysdata(QUADLOCAL2BUS(quad, suba+1)); 142 + pcibios_scan_root(QUADLOCAL2BUS(quad, suba+1)); 143 143 } 144 144 } 145 145 pcibios_last_bus = -1; ··· 159 159 continue; 160 160 printk("Scanning PCI bus %d for quad %d\n", 161 161 QUADLOCAL2BUS(quad,0), quad); 162 - pci_scan_bus_with_sysdata(QUADLOCAL2BUS(quad, 0)); 162 + pcibios_scan_root(QUADLOCAL2BUS(quad, 0)); 163 163 } 164 164 return 0; 165 165 }
+2 -2
arch/x86/pci/visws.c
··· 78 78 "bridge B (PIIX4) bus: %u\n", pci_bus1, pci_bus0); 79 79 80 80 raw_pci_ops = &pci_direct_conf1; 81 - pci_scan_bus_with_sysdata(pci_bus0); 82 - pci_scan_bus_with_sysdata(pci_bus1); 81 + pcibios_scan_root(pci_bus0); 82 + pcibios_scan_root(pci_bus1); 83 83 pci_fixup_irqs(pci_common_swizzle, visws_map_irq); 84 84 pcibios_resource_survey(); 85 85 /* Request bus scan */
+8 -8
drivers/acpi/numa.c
··· 60 60 return node_to_pxm_map[node]; 61 61 } 62 62 63 - void __acpi_map_pxm_to_node(int pxm, int node) 63 + static void __acpi_map_pxm_to_node(int pxm, int node) 64 64 { 65 65 if (pxm_to_node_map[pxm] == NUMA_NO_NODE || node < pxm_to_node_map[pxm]) 66 66 pxm_to_node_map[pxm] = node; ··· 193 193 return 0; 194 194 } 195 195 196 - void __init __attribute__ ((weak)) 196 + void __init __weak 197 197 acpi_numa_x2apic_affinity_init(struct acpi_srat_x2apic_cpu_affinity *pa) 198 198 { 199 199 printk(KERN_WARNING PREFIX ··· 314 314 return 0; 315 315 } 316 316 317 - int acpi_get_pxm(acpi_handle h) 317 + static int acpi_get_pxm(acpi_handle h) 318 318 { 319 319 unsigned long long pxm; 320 320 acpi_status status; ··· 331 331 return -1; 332 332 } 333 333 334 - int acpi_get_node(acpi_handle *handle) 334 + int acpi_get_node(acpi_handle handle) 335 335 { 336 - int pxm, node = NUMA_NO_NODE; 336 + int pxm; 337 337 338 338 pxm = acpi_get_pxm(handle); 339 - if (pxm >= 0 && pxm < MAX_PXM_DOMAINS) 340 - node = acpi_map_pxm_to_node(pxm); 339 + if (pxm < 0 || pxm >= MAX_PXM_DOMAINS) 340 + return NUMA_NO_NODE; 341 341 342 - return node; 342 + return acpi_map_pxm_to_node(pxm); 343 343 } 344 344 EXPORT_SYMBOL(acpi_get_node);
+9 -11
drivers/ata/ahci.c
··· 1166 1166 static int ahci_init_interrupts(struct pci_dev *pdev, unsigned int n_ports, 1167 1167 struct ahci_host_priv *hpriv) 1168 1168 { 1169 - int rc, nvec; 1169 + int nvec; 1170 1170 1171 1171 if (hpriv->flags & AHCI_HFLAG_NO_MSI) 1172 1172 goto intx; 1173 1173 1174 - rc = pci_msi_vec_count(pdev); 1175 - if (rc < 0) 1174 + nvec = pci_msi_vec_count(pdev); 1175 + if (nvec < 0) 1176 1176 goto intx; 1177 1177 1178 1178 /* ··· 1180 1180 * Message mode could be enforced. In this case assume that advantage 1181 1181 * of multiple MSIs is negated and use single MSI mode instead. 1182 1182 */ 1183 - if (rc < n_ports) 1183 + if (nvec < n_ports) 1184 1184 goto single_msi; 1185 1185 1186 - nvec = rc; 1187 - rc = pci_enable_msi_block(pdev, nvec); 1188 - if (rc < 0) 1189 - goto intx; 1190 - else if (rc > 0) 1186 + nvec = pci_enable_msi_range(pdev, nvec, nvec); 1187 + if (nvec == -ENOSPC) 1191 1188 goto single_msi; 1189 + else if (nvec < 0) 1190 + goto intx; 1192 1191 1193 1192 return nvec; 1194 1193 1195 1194 single_msi: 1196 - rc = pci_enable_msi(pdev); 1197 - if (rc) 1195 + if (pci_enable_msi(pdev)) 1198 1196 goto intx; 1199 1197 return 1; 1200 1198
+2 -2
drivers/bus/mvebu-mbus.c
··· 870 870 ret = of_property_read_u32_array(np, "pcie-mem-aperture", reg, ARRAY_SIZE(reg)); 871 871 if (!ret) { 872 872 mem->start = reg[0]; 873 - mem->end = mem->start + reg[1]; 873 + mem->end = mem->start + reg[1] - 1; 874 874 mem->flags = IORESOURCE_MEM; 875 875 } 876 876 877 877 ret = of_property_read_u32_array(np, "pcie-io-aperture", reg, ARRAY_SIZE(reg)); 878 878 if (!ret) { 879 879 io->start = reg[0]; 880 - io->end = io->start + reg[1]; 880 + io->end = io->start + reg[1] - 1; 881 881 io->flags = IORESOURCE_IO; 882 882 } 883 883 }
+2 -1
drivers/gpu/drm/drm_fops.c
··· 319 319 pci_dev_put(pci_dev); 320 320 } 321 321 if (!dev->hose) { 322 - struct pci_bus *b = pci_bus_b(pci_root_buses.next); 322 + struct pci_bus *b = list_entry(pci_root_buses.next, 323 + struct pci_bus, node); 323 324 if (b) 324 325 dev->hose = b->sysdata; 325 326 }
+1
drivers/iommu/amd_iommu_types.h
··· 25 25 #include <linux/list.h> 26 26 #include <linux/spinlock.h> 27 27 #include <linux/pci.h> 28 + #include <linux/irqreturn.h> 28 29 29 30 /* 30 31 * Maximum number of IOMMUs supported
+42 -43
drivers/message/i2o/iop.c
··· 652 652 return i2o_hrt_get(c); 653 653 }; 654 654 655 + static void i2o_res_alloc(struct i2o_controller *c, unsigned long flags) 656 + { 657 + i2o_status_block *sb = c->status_block.virt; 658 + struct resource *res = &c->mem_resource; 659 + resource_size_t size, align; 660 + int err; 661 + 662 + res->name = c->pdev->bus->name; 663 + res->flags = flags; 664 + res->start = 0; 665 + res->end = 0; 666 + osm_info("%s: requires private memory resources.\n", c->name); 667 + 668 + if (flags & IORESOURCE_MEM) { 669 + size = sb->desired_mem_size; 670 + align = 1 << 20; /* unspecified, use 1Mb and play safe */ 671 + } else { 672 + size = sb->desired_io_size; 673 + align = 1 << 12; /* unspecified, use 4Kb and play safe */ 674 + } 675 + 676 + err = pci_bus_alloc_resource(c->pdev->bus, res, size, align, 0, 0, 677 + NULL, NULL); 678 + if (err < 0) 679 + return; 680 + 681 + if (flags & IORESOURCE_MEM) { 682 + c->mem_alloc = 1; 683 + sb->current_mem_size = resource_size(res); 684 + sb->current_mem_base = res->start; 685 + } else if (flags & IORESOURCE_IO) { 686 + c->io_alloc = 1; 687 + sb->current_io_size = resource_size(res); 688 + sb->current_io_base = res->start; 689 + } 690 + osm_info("%s: allocated PCI space %pR\n", c->name, res); 691 + } 692 + 655 693 /** 656 694 * i2o_iop_systab_set - Set the I2O System Table of the specified IOP 657 695 * @c: I2O controller to which the system table should be send ··· 703 665 struct i2o_message *msg; 704 666 i2o_status_block *sb = c->status_block.virt; 705 667 struct device *dev = &c->pdev->dev; 706 - struct resource *root; 707 668 int rc; 708 669 709 - if (sb->current_mem_size < sb->desired_mem_size) { 710 - struct resource *res = &c->mem_resource; 711 - res->name = c->pdev->bus->name; 712 - res->flags = IORESOURCE_MEM; 713 - res->start = 0; 714 - res->end = 0; 715 - osm_info("%s: requires private memory resources.\n", c->name); 716 - root = pci_find_parent_resource(c->pdev, res); 717 - if (root == NULL) 718 - osm_warn("%s: Can't find 
parent resource!\n", c->name); 719 - if (root && allocate_resource(root, res, sb->desired_mem_size, sb->desired_mem_size, sb->desired_mem_size, 1 << 20, /* Unspecified, so use 1Mb and play safe */ 720 - NULL, NULL) >= 0) { 721 - c->mem_alloc = 1; 722 - sb->current_mem_size = resource_size(res); 723 - sb->current_mem_base = res->start; 724 - osm_info("%s: allocated %llu bytes of PCI memory at " 725 - "0x%016llX.\n", c->name, 726 - (unsigned long long)resource_size(res), 727 - (unsigned long long)res->start); 728 - } 729 - } 670 + if (sb->current_mem_size < sb->desired_mem_size) 671 + i2o_res_alloc(c, IORESOURCE_MEM); 730 672 731 - if (sb->current_io_size < sb->desired_io_size) { 732 - struct resource *res = &c->io_resource; 733 - res->name = c->pdev->bus->name; 734 - res->flags = IORESOURCE_IO; 735 - res->start = 0; 736 - res->end = 0; 737 - osm_info("%s: requires private memory resources.\n", c->name); 738 - root = pci_find_parent_resource(c->pdev, res); 739 - if (root == NULL) 740 - osm_warn("%s: Can't find parent resource!\n", c->name); 741 - if (root && allocate_resource(root, res, sb->desired_io_size, sb->desired_io_size, sb->desired_io_size, 1 << 20, /* Unspecified, so use 1Mb and play safe */ 742 - NULL, NULL) >= 0) { 743 - c->io_alloc = 1; 744 - sb->current_io_size = resource_size(res); 745 - sb->current_mem_base = res->start; 746 - osm_info("%s: allocated %llu bytes of PCI I/O at " 747 - "0x%016llX.\n", c->name, 748 - (unsigned long long)resource_size(res), 749 - (unsigned long long)res->start); 750 - } 751 - } 673 + if (sb->current_io_size < sb->desired_io_size) 674 + i2o_res_alloc(c, IORESOURCE_IO); 752 675 753 676 msg = i2o_msg_get_wait(c, I2O_TIMEOUT_MESSAGE_GET); 754 677 if (IS_ERR(msg))
+1
drivers/misc/mei/hw-me.h
··· 20 20 #define _MEI_INTERFACE_H_ 21 21 22 22 #include <linux/mei.h> 23 + #include <linux/irqreturn.h> 23 24 #include "mei_dev.h" 24 25 #include "client.h" 25 26
+1
drivers/misc/mic/card/mic_device.h
··· 29 29 30 30 #include <linux/workqueue.h> 31 31 #include <linux/io.h> 32 + #include <linux/irqreturn.h> 32 33 33 34 /** 34 35 * struct mic_intr_info - Contains h/w specific interrupt sources info
+1
drivers/misc/mic/host/mic_device.h
··· 24 24 #include <linux/cdev.h> 25 25 #include <linux/idr.h> 26 26 #include <linux/notifier.h> 27 + #include <linux/irqreturn.h> 27 28 28 29 #include "mic_intr.h" 29 30
+8 -14
drivers/pci/Makefile
··· 33 33 # 34 34 # Some architectures use the generic PCI setup functions 35 35 # 36 - obj-$(CONFIG_X86) += setup-bus.o 37 - obj-$(CONFIG_ALPHA) += setup-bus.o setup-irq.o 38 - obj-$(CONFIG_ARM) += setup-bus.o setup-irq.o 39 - obj-$(CONFIG_UNICORE32) += setup-bus.o setup-irq.o 40 - obj-$(CONFIG_PARISC) += setup-bus.o 41 - obj-$(CONFIG_SUPERH) += setup-bus.o setup-irq.o 42 - obj-$(CONFIG_PPC) += setup-bus.o 43 - obj-$(CONFIG_FRV) += setup-bus.o 44 - obj-$(CONFIG_MIPS) += setup-bus.o setup-irq.o 36 + obj-$(CONFIG_ALPHA) += setup-irq.o 37 + obj-$(CONFIG_ARM) += setup-irq.o 38 + obj-$(CONFIG_UNICORE32) += setup-irq.o 39 + obj-$(CONFIG_SUPERH) += setup-irq.o 40 + obj-$(CONFIG_MIPS) += setup-irq.o 45 41 obj-$(CONFIG_X86_VISWS) += setup-irq.o 46 - obj-$(CONFIG_MN10300) += setup-bus.o 47 - obj-$(CONFIG_MICROBLAZE) += setup-bus.o 48 - obj-$(CONFIG_TILE) += setup-bus.o setup-irq.o 49 - obj-$(CONFIG_SPARC_LEON) += setup-bus.o setup-irq.o 50 - obj-$(CONFIG_M68K) += setup-bus.o setup-irq.o 42 + obj-$(CONFIG_TILE) += setup-irq.o 43 + obj-$(CONFIG_SPARC_LEON) += setup-irq.o 44 + obj-$(CONFIG_M68K) += setup-irq.o 51 45 52 46 # 53 47 # ACPI Related PCI FW Functions
+3 -3
drivers/pci/bus.c
··· 132 132 133 133 static int pci_bus_alloc_from_region(struct pci_bus *bus, struct resource *res, 134 134 resource_size_t size, resource_size_t align, 135 - resource_size_t min, unsigned int type_mask, 135 + resource_size_t min, unsigned long type_mask, 136 136 resource_size_t (*alignf)(void *, 137 137 const struct resource *, 138 138 resource_size_t, ··· 144 144 struct resource *r, avail; 145 145 resource_size_t max; 146 146 147 - type_mask |= IORESOURCE_IO | IORESOURCE_MEM; 147 + type_mask |= IORESOURCE_TYPE_BITS; 148 148 149 149 pci_bus_for_each_resource(bus, r, i) { 150 150 if (!r) ··· 200 200 */ 201 201 int pci_bus_alloc_resource(struct pci_bus *bus, struct resource *res, 202 202 resource_size_t size, resource_size_t align, 203 - resource_size_t min, unsigned int type_mask, 203 + resource_size_t min, unsigned long type_mask, 204 204 resource_size_t (*alignf)(void *, 205 205 const struct resource *, 206 206 resource_size_t,
-8
drivers/pci/host-bridge.c
··· 32 32 bridge->release_data = release_data; 33 33 } 34 34 35 - static bool resource_contains(struct resource *res1, struct resource *res2) 36 - { 37 - return res1->start <= res2->start && res1->end >= res2->end; 38 - } 39 - 40 35 void pcibios_resource_to_bus(struct pci_bus *bus, struct pci_bus_region *region, 41 36 struct resource *res) 42 37 { ··· 40 45 resource_size_t offset = 0; 41 46 42 47 list_for_each_entry(window, &bridge->windows, list) { 43 - if (resource_type(res) != resource_type(window->res)) 44 - continue; 45 - 46 48 if (resource_contains(window->res, res)) { 47 49 offset = window->offset; 48 50 break;
+1 -1
drivers/pci/host/Kconfig
··· 27 27 28 28 config PCI_RCAR_GEN2 29 29 bool "Renesas R-Car Gen2 Internal PCI controller" 30 - depends on ARM && (ARCH_R8A7790 || ARCH_R8A7791 || COMPILE_TEST) 30 + depends on ARCH_SHMOBILE || (ARM && COMPILE_TEST) 31 31 help 32 32 Say Y here if you want internal PCI support on R-Car Gen2 SoC. 33 33 There are 3 internal PCI controllers available with a single
+34 -13
drivers/pci/host/pci-imx6.c
··· 424 424 425 425 static int imx6_pcie_link_up(struct pcie_port *pp) 426 426 { 427 - u32 rc, ltssm, rx_valid; 427 + u32 rc, debug_r0, rx_valid; 428 + int count = 5; 428 429 429 430 /* 430 - * Test if the PHY reports that the link is up and also that 431 - * the link training finished. It might happen that the PHY 432 - * reports the link is already up, but the link training bit 433 - * is still set, so make sure to check the training is done 434 - * as well here. 431 + * Test if the PHY reports that the link is up and also that the LTSSM 432 + * training finished. There are three possible states of the link when 433 + * this code is called: 434 + * 1) The link is DOWN (unlikely) 435 + * The link didn't come up yet for some reason. This usually means 436 + * we have a real problem somewhere. Reset the PHY and exit. This 437 + * state calls for inspection of the DEBUG registers. 438 + * 2) The link is UP, but still in LTSSM training 439 + * Wait for the training to finish, which should take a very short 440 + * time. If the training does not finish, we have a problem and we 441 + * need to inspect the DEBUG registers. If the training does finish, 442 + * the link is up and operating correctly. 443 + * 3) The link is UP and no longer in LTSSM training 444 + * The link is up and operating correctly. 435 445 */ 436 - rc = readl(pp->dbi_base + PCIE_PHY_DEBUG_R1); 437 - if ((rc & PCIE_PHY_DEBUG_R1_XMLH_LINK_UP) && 438 - !(rc & PCIE_PHY_DEBUG_R1_XMLH_LINK_IN_TRAINING)) 439 - return 1; 440 - 446 + while (1) { 447 + rc = readl(pp->dbi_base + PCIE_PHY_DEBUG_R1); 448 + if (!(rc & PCIE_PHY_DEBUG_R1_XMLH_LINK_UP)) 449 + break; 450 + if (!(rc & PCIE_PHY_DEBUG_R1_XMLH_LINK_IN_TRAINING)) 451 + return 1; 452 + if (!count--) 453 + break; 454 + dev_dbg(pp->dev, "Link is up, but still in training\n"); 455 + /* 456 + * Wait a little bit, then re-check if the link finished 457 + * the training. 
458 + */ 459 + usleep_range(1000, 2000); 460 + } 441 461 /* 442 462 * From L0, initiate MAC entry to gen2 if EP/RC supports gen2. 443 463 * Wait 2ms (LTSSM timeout is 24ms, PHY lock is ~5us in gen2). ··· 466 446 * to gen2 is stuck 467 447 */ 468 448 pcie_phy_read(pp->dbi_base, PCIE_PHY_RX_ASIC_OUT, &rx_valid); 469 - ltssm = readl(pp->dbi_base + PCIE_PHY_DEBUG_R0) & 0x3F; 449 + debug_r0 = readl(pp->dbi_base + PCIE_PHY_DEBUG_R0); 470 450 471 451 if (rx_valid & 0x01) 472 452 return 0; 473 453 474 - if (ltssm != 0x0d) 454 + if ((debug_r0 & 0x3f) != 0x0d) 475 455 return 0; 476 456 477 457 dev_err(pp->dev, "transition to gen2 is stuck, reset PHY!\n"); 458 + dev_dbg(pp->dev, "debug_r0=%08x debug_r1=%08x\n", debug_r0, rc); 478 459 479 460 imx6_pcie_reset_phy(pp); 480 461
+24 -2
drivers/pci/host/pci-mvebu.c
··· 101 101 struct mvebu_pcie_port *ports; 102 102 struct msi_chip *msi; 103 103 struct resource io; 104 + char io_name[30]; 104 105 struct resource realio; 106 + char mem_name[30]; 105 107 struct resource mem; 106 108 struct resource busn; 107 109 int nports; ··· 674 672 { 675 673 struct mvebu_pcie *pcie = sys_to_pcie(sys); 676 674 int i; 675 + int domain = 0; 677 676 678 - if (resource_size(&pcie->realio) != 0) 677 + #ifdef CONFIG_PCI_DOMAINS 678 + domain = sys->domain; 679 + #endif 680 + 681 + snprintf(pcie->mem_name, sizeof(pcie->mem_name), "PCI MEM %04x", 682 + domain); 683 + pcie->mem.name = pcie->mem_name; 684 + 685 + snprintf(pcie->io_name, sizeof(pcie->io_name), "PCI I/O %04x", domain); 686 + pcie->realio.name = pcie->io_name; 687 + 688 + if (request_resource(&iomem_resource, &pcie->mem)) 689 + return 0; 690 + 691 + if (resource_size(&pcie->realio) != 0) { 692 + if (request_resource(&ioport_resource, &pcie->realio)) { 693 + release_resource(&pcie->mem); 694 + return 0; 695 + } 679 696 pci_add_resource_offset(&sys->resources, &pcie->realio, 680 697 sys->io_offset); 698 + } 681 699 pci_add_resource_offset(&sys->resources, &pcie->mem, sys->mem_offset); 682 700 pci_add_resource(&sys->resources, &pcie->busn); 683 701 ··· 819 797 820 798 for (i = 0; i < nranges; i++) { 821 799 u32 flags = of_read_number(range, 1); 822 - u32 slot = of_read_number(range, 2); 800 + u32 slot = of_read_number(range + 1, 1); 823 801 u64 cpuaddr = of_read_number(range + na, pna); 824 802 unsigned long rtype; 825 803
+117 -63
drivers/pci/host/pci-rcar-gen2.c
··· 18 18 #include <linux/pci.h> 19 19 #include <linux/platform_device.h> 20 20 #include <linux/pm_runtime.h> 21 + #include <linux/sizes.h> 21 22 #include <linux/slab.h> 22 23 23 24 /* AHB-PCI Bridge PCI communication registers */ ··· 40 39 41 40 #define RCAR_PCI_INT_ENABLE_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x20) 42 41 #define RCAR_PCI_INT_STATUS_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x24) 42 + #define RCAR_PCI_INT_SIGTABORT (1 << 0) 43 + #define RCAR_PCI_INT_SIGRETABORT (1 << 1) 44 + #define RCAR_PCI_INT_REMABORT (1 << 2) 45 + #define RCAR_PCI_INT_PERR (1 << 3) 46 + #define RCAR_PCI_INT_SIGSERR (1 << 4) 47 + #define RCAR_PCI_INT_RESERR (1 << 5) 48 + #define RCAR_PCI_INT_WIN1ERR (1 << 12) 49 + #define RCAR_PCI_INT_WIN2ERR (1 << 13) 43 50 #define RCAR_PCI_INT_A (1 << 16) 44 51 #define RCAR_PCI_INT_B (1 << 17) 45 52 #define RCAR_PCI_INT_PME (1 << 19) 53 + #define RCAR_PCI_INT_ALLERRORS (RCAR_PCI_INT_SIGTABORT | \ 54 + RCAR_PCI_INT_SIGRETABORT | \ 55 + RCAR_PCI_INT_SIGRETABORT | \ 56 + RCAR_PCI_INT_REMABORT | \ 57 + RCAR_PCI_INT_PERR | \ 58 + RCAR_PCI_INT_SIGSERR | \ 59 + RCAR_PCI_INT_RESERR | \ 60 + RCAR_PCI_INT_WIN1ERR | \ 61 + RCAR_PCI_INT_WIN2ERR) 46 62 47 63 #define RCAR_AHB_BUS_CTR_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x30) 48 64 #define RCAR_AHB_BUS_MMODE_HTRANS (1 << 0) ··· 92 74 93 75 #define RCAR_PCI_UNIT_REV_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x48) 94 76 95 - /* Number of internal PCI controllers */ 96 - #define RCAR_PCI_NR_CONTROLLERS 3 97 - 98 77 struct rcar_pci_priv { 99 78 struct device *dev; 100 79 void __iomem *reg; ··· 99 84 struct resource mem_res; 100 85 struct resource *cfg_res; 101 86 int irq; 87 + unsigned long window_size; 102 88 }; 103 89 104 90 /* PCI configuration space operations */ ··· 116 100 /* Only one EHCI/OHCI device built-in */ 117 101 slot = PCI_SLOT(devfn); 118 102 if (slot > 2) 103 + return NULL; 104 + 105 + /* bridge logic only has registers to 0x40 */ 106 + if (slot == 0x0 && where >= 0x40) 119 107 return NULL; 120 108 121 109 val = slot ? 
RCAR_AHBPCI_WIN1_DEVICE | RCAR_AHBPCI_WIN_CTR_CFG : ··· 176 156 } 177 157 178 158 /* PCI interrupt mapping */ 179 - static int __init rcar_pci_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 159 + static int rcar_pci_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 180 160 { 181 161 struct pci_sys_data *sys = dev->bus->sysdata; 182 162 struct rcar_pci_priv *priv = sys->private_data; ··· 184 164 return priv->irq; 185 165 } 186 166 167 + #ifdef CONFIG_PCI_DEBUG 168 + /* if debug enabled, then attach an error handler irq to the bridge */ 169 + 170 + static irqreturn_t rcar_pci_err_irq(int irq, void *pw) 171 + { 172 + struct rcar_pci_priv *priv = pw; 173 + u32 status = ioread32(priv->reg + RCAR_PCI_INT_STATUS_REG); 174 + 175 + if (status & RCAR_PCI_INT_ALLERRORS) { 176 + dev_err(priv->dev, "error irq: status %08x\n", status); 177 + 178 + /* clear the error(s) */ 179 + iowrite32(status & RCAR_PCI_INT_ALLERRORS, 180 + priv->reg + RCAR_PCI_INT_STATUS_REG); 181 + return IRQ_HANDLED; 182 + } 183 + 184 + return IRQ_NONE; 185 + } 186 + 187 + static void rcar_pci_setup_errirq(struct rcar_pci_priv *priv) 188 + { 189 + int ret; 190 + u32 val; 191 + 192 + ret = devm_request_irq(priv->dev, priv->irq, rcar_pci_err_irq, 193 + IRQF_SHARED, "error irq", priv); 194 + if (ret) { 195 + dev_err(priv->dev, "cannot claim IRQ for error handling\n"); 196 + return; 197 + } 198 + 199 + val = ioread32(priv->reg + RCAR_PCI_INT_ENABLE_REG); 200 + val |= RCAR_PCI_INT_ALLERRORS; 201 + iowrite32(val, priv->reg + RCAR_PCI_INT_ENABLE_REG); 202 + } 203 + #else 204 + static inline void rcar_pci_setup_errirq(struct rcar_pci_priv *priv) { } 205 + #endif 206 + 187 207 /* PCI host controller setup */ 188 - static int __init rcar_pci_setup(int nr, struct pci_sys_data *sys) 208 + static int rcar_pci_setup(int nr, struct pci_sys_data *sys) 189 209 { 190 210 struct rcar_pci_priv *priv = sys->private_data; 191 211 void __iomem *reg = priv->reg; ··· 243 183 iowrite32(val, reg + RCAR_USBCTR_REG); 244 184 
udelay(4); 245 185 246 - /* De-assert reset and set PCIAHB window1 size to 1GB */ 186 + /* De-assert reset and reset PCIAHB window1 size */ 247 187 val &= ~(RCAR_USBCTR_PCIAHB_WIN1_MASK | RCAR_USBCTR_PCICLK_MASK | 248 188 RCAR_USBCTR_USBH_RST | RCAR_USBCTR_PLL_RST); 249 - iowrite32(val | RCAR_USBCTR_PCIAHB_WIN1_1G, reg + RCAR_USBCTR_REG); 189 + 190 + /* Setup PCIAHB window1 size */ 191 + switch (priv->window_size) { 192 + case SZ_2G: 193 + val |= RCAR_USBCTR_PCIAHB_WIN1_2G; 194 + break; 195 + case SZ_1G: 196 + val |= RCAR_USBCTR_PCIAHB_WIN1_1G; 197 + break; 198 + case SZ_512M: 199 + val |= RCAR_USBCTR_PCIAHB_WIN1_512M; 200 + break; 201 + default: 202 + pr_warn("unknown window size %ld - defaulting to 256M\n", 203 + priv->window_size); 204 + priv->window_size = SZ_256M; 205 + /* fall-through */ 206 + case SZ_256M: 207 + val |= RCAR_USBCTR_PCIAHB_WIN1_256M; 208 + break; 209 + } 210 + iowrite32(val, reg + RCAR_USBCTR_REG); 250 211 251 212 /* Configure AHB master and slave modes */ 252 213 iowrite32(RCAR_AHB_BUS_MODE, reg + RCAR_AHB_BUS_CTR_REG); ··· 278 197 RCAR_PCI_ARBITER_PCIBP_MODE; 279 198 iowrite32(val, reg + RCAR_PCI_ARBITER_CTR_REG); 280 199 281 - /* PCI-AHB mapping: 0x40000000-0x80000000 */ 200 + /* PCI-AHB mapping: 0x40000000 base */ 282 201 iowrite32(0x40000000 | RCAR_PCIAHB_PREFETCH16, 283 202 reg + RCAR_PCIAHB_WIN1_CTR_REG); 284 203 ··· 305 224 iowrite32(RCAR_PCI_INT_A | RCAR_PCI_INT_B | RCAR_PCI_INT_PME, 306 225 reg + RCAR_PCI_INT_ENABLE_REG); 307 226 227 + if (priv->irq > 0) 228 + rcar_pci_setup_errirq(priv); 229 + 308 230 /* Add PCI resources */ 309 231 pci_add_resource(&sys->resources, &priv->io_res); 310 232 pci_add_resource(&sys->resources, &priv->mem_res); 311 233 234 + /* Setup bus number based on platform device id */ 235 + sys->busnr = to_platform_device(priv->dev)->id; 312 236 return 1; 313 237 } 314 238 ··· 322 236 .write = rcar_pci_write_config, 323 237 }; 324 238 325 - static struct hw_pci rcar_hw_pci __initdata = { 326 - .map_irq = 
rcar_pci_map_irq, 327 - .ops = &rcar_pci_ops, 328 - .setup = rcar_pci_setup, 329 - }; 330 - 331 - static int rcar_pci_count __initdata; 332 - 333 - static int __init rcar_pci_add_controller(struct rcar_pci_priv *priv) 334 - { 335 - void **private_data; 336 - int count; 337 - 338 - if (rcar_hw_pci.nr_controllers < rcar_pci_count) 339 - goto add_priv; 340 - 341 - /* (Re)allocate private data pointer array if needed */ 342 - count = rcar_pci_count + RCAR_PCI_NR_CONTROLLERS; 343 - private_data = kzalloc(count * sizeof(void *), GFP_KERNEL); 344 - if (!private_data) 345 - return -ENOMEM; 346 - 347 - rcar_pci_count = count; 348 - if (rcar_hw_pci.private_data) { 349 - memcpy(private_data, rcar_hw_pci.private_data, 350 - rcar_hw_pci.nr_controllers * sizeof(void *)); 351 - kfree(rcar_hw_pci.private_data); 352 - } 353 - 354 - rcar_hw_pci.private_data = private_data; 355 - 356 - add_priv: 357 - /* Add private data pointer to the array */ 358 - rcar_hw_pci.private_data[rcar_hw_pci.nr_controllers++] = priv; 359 - return 0; 360 - } 361 - 362 - static int __init rcar_pci_probe(struct platform_device *pdev) 239 + static int rcar_pci_probe(struct platform_device *pdev) 363 240 { 364 241 struct resource *cfg_res, *mem_res; 365 242 struct rcar_pci_priv *priv; 366 243 void __iomem *reg; 244 + struct hw_pci hw; 245 + void *hw_private[1]; 367 246 368 247 cfg_res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 369 248 reg = devm_ioremap_resource(&pdev->dev, cfg_res); ··· 359 308 priv->reg = reg; 360 309 priv->dev = &pdev->dev; 361 310 362 - return rcar_pci_add_controller(priv); 311 + if (priv->irq < 0) { 312 + dev_err(&pdev->dev, "no valid irq found\n"); 313 + return priv->irq; 314 + } 315 + 316 + priv->window_size = SZ_1G; 317 + 318 + hw_private[0] = priv; 319 + memset(&hw, 0, sizeof(hw)); 320 + hw.nr_controllers = ARRAY_SIZE(hw_private); 321 + hw.private_data = hw_private; 322 + hw.map_irq = rcar_pci_map_irq; 323 + hw.ops = &rcar_pci_ops; 324 + hw.setup = rcar_pci_setup; 325 + 
pci_common_init_dev(&pdev->dev, &hw); 326 + return 0; 363 327 } 364 328 365 329 static struct platform_driver rcar_pci_driver = { 366 330 .driver = { 367 331 .name = "pci-rcar-gen2", 332 + .owner = THIS_MODULE, 333 + .suppress_bind_attrs = true, 368 334 }, 335 + .probe = rcar_pci_probe, 369 336 }; 370 337 371 - static int __init rcar_pci_init(void) 372 - { 373 - int retval; 374 - 375 - retval = platform_driver_probe(&rcar_pci_driver, rcar_pci_probe); 376 - if (!retval) 377 - pci_common_init(&rcar_hw_pci); 378 - 379 - /* Private data pointer array is not needed any more */ 380 - kfree(rcar_hw_pci.private_data); 381 - rcar_hw_pci.private_data = NULL; 382 - 383 - return retval; 384 - } 385 - 386 - subsys_initcall(rcar_pci_init); 338 + module_platform_driver(rcar_pci_driver); 387 339 388 340 MODULE_LICENSE("GPL v2"); 389 341 MODULE_DESCRIPTION("Renesas R-Car Gen2 internal PCI");
+1 -1
drivers/pci/host/pcie-designware.c
··· 798 798 799 799 /* setup RC BARs */ 800 800 dw_pcie_writel_rc(pp, 0x00000004, PCI_BASE_ADDRESS_0); 801 - dw_pcie_writel_rc(pp, 0x00000004, PCI_BASE_ADDRESS_1); 801 + dw_pcie_writel_rc(pp, 0x00000000, PCI_BASE_ADDRESS_1); 802 802 803 803 /* setup interrupt pins */ 804 804 dw_pcie_readl_rc(pp, PCI_INTERRUPT_LINE, &val);
+3 -3
drivers/pci/hotplug/acpiphp_glue.c
··· 424 424 */ 425 425 static unsigned char acpiphp_max_busnr(struct pci_bus *bus) 426 426 { 427 - struct list_head *tmp; 427 + struct pci_bus *tmp; 428 428 unsigned char max, n; 429 429 430 430 /* ··· 437 437 */ 438 438 max = bus->busn_res.start; 439 439 440 - list_for_each(tmp, &bus->children) { 441 - n = pci_bus_max_busnr(pci_bus_b(tmp)); 440 + list_for_each_entry(tmp, &bus->children, node) { 441 + n = pci_bus_max_busnr(tmp); 442 442 if (n > max) 443 443 max = n; 444 444 }
+2 -2
drivers/pci/hotplug/cpqphp_core.c
··· 920 920 bus->max_bus_speed = PCI_SPEED_100MHz_PCIX; 921 921 break; 922 922 } 923 - if (bus_cap & 20) { 923 + if (bus_cap & 0x20) { 924 924 dbg("bus max supports 66MHz PCI-X\n"); 925 925 bus->max_bus_speed = PCI_SPEED_66MHz_PCIX; 926 926 break; 927 927 } 928 - if (bus_cap & 10) { 928 + if (bus_cap & 0x10) { 929 929 dbg("bus max supports 66MHz PCI\n"); 930 930 bus->max_bus_speed = PCI_SPEED_66MHz; 931 931 break;
+5
drivers/pci/hotplug/pciehp.h
··· 76 76 struct hotplug_slot *hotplug_slot; 77 77 struct delayed_work work; /* work for button event */ 78 78 struct mutex lock; 79 + struct mutex hotplug_lock; 79 80 struct workqueue_struct *wq; 80 81 }; 81 82 ··· 110 109 #define INT_BUTTON_PRESS 7 111 110 #define INT_BUTTON_RELEASE 8 112 111 #define INT_BUTTON_CANCEL 9 112 + #define INT_LINK_UP 10 113 + #define INT_LINK_DOWN 11 113 114 114 115 #define STATIC_STATE 0 115 116 #define BLINKINGON_STATE 1 ··· 135 132 u8 pciehp_handle_switch_change(struct slot *p_slot); 136 133 u8 pciehp_handle_presence_change(struct slot *p_slot); 137 134 u8 pciehp_handle_power_fault(struct slot *p_slot); 135 + void pciehp_handle_linkstate_change(struct slot *p_slot); 138 136 int pciehp_configure_device(struct slot *p_slot); 139 137 int pciehp_unconfigure_device(struct slot *p_slot); 140 138 void pciehp_queue_pushbutton_work(struct work_struct *work); ··· 157 153 void pciehp_green_led_off(struct slot *slot); 158 154 void pciehp_green_led_blink(struct slot *slot); 159 155 int pciehp_check_link_status(struct controller *ctrl); 156 + bool pciehp_check_link_active(struct controller *ctrl); 160 157 void pciehp_release_ctrl(struct controller *ctrl); 161 158 int pciehp_reset_slot(struct slot *slot, int probe); 162 159
+1
drivers/pci/hotplug/pciehp_acpi.c
··· 112 112 static int __init select_detection_mode(void) 113 113 { 114 114 struct dummy_slot *slot, *tmp; 115 + 115 116 if (pcie_port_service_register(&dummy_driver)) 116 117 return PCIEHP_DETECT_ACPI; 117 118 pcie_port_service_unregister(&dummy_driver);
+7 -1
drivers/pci/hotplug/pciehp_core.c
··· 108 108 ops = kzalloc(sizeof(*ops), GFP_KERNEL); 109 109 if (!ops) 110 110 goto out; 111 + 111 112 ops->enable_slot = enable_slot; 112 113 ops->disable_slot = disable_slot; 113 114 ops->get_power_status = get_power_status; ··· 284 283 slot = ctrl->slot; 285 284 pciehp_get_adapter_status(slot, &occupied); 286 285 pciehp_get_power_status(slot, &poweron); 287 - if (occupied && pciehp_force) 286 + if (occupied && pciehp_force) { 287 + mutex_lock(&slot->hotplug_lock); 288 288 pciehp_enable_slot(slot); 289 + mutex_unlock(&slot->hotplug_lock); 290 + } 289 291 /* If empty slot's power status is on, turn power off */ 290 292 if (!occupied && poweron && POWER_CTRL(ctrl)) 291 293 pciehp_power_off_slot(slot); ··· 332 328 333 329 /* Check if slot is occupied */ 334 330 pciehp_get_adapter_status(slot, &status); 331 + mutex_lock(&slot->hotplug_lock); 335 332 if (status) 336 333 pciehp_enable_slot(slot); 337 334 else 338 335 pciehp_disable_slot(slot); 336 + mutex_unlock(&slot->hotplug_lock); 339 337 return 0; 340 338 } 341 339 #endif /* PM */
+137 -36
drivers/pci/hotplug/pciehp_ctrl.c
···
 	return 1;
 }
 
+void pciehp_handle_linkstate_change(struct slot *p_slot)
+{
+	u32 event_type;
+	struct controller *ctrl = p_slot->ctrl;
+
+	/* Link Status Change */
+	ctrl_dbg(ctrl, "Data Link Layer State change\n");
+
+	if (pciehp_check_link_active(ctrl)) {
+		ctrl_info(ctrl, "slot(%s): Link Up event\n",
+			  slot_name(p_slot));
+		event_type = INT_LINK_UP;
+	} else {
+		ctrl_info(ctrl, "slot(%s): Link Down event\n",
+			  slot_name(p_slot));
+		event_type = INT_LINK_DOWN;
+	}
+
+	queue_interrupt_event(p_slot, event_type);
+}
+
 /* The following routines constitute the bulk of the
    hotplug controller logic
  */
···
 	if (retval) {
 		ctrl_err(ctrl, "Cannot add device at %04x:%02x:00\n",
 			 pci_domain_nr(parent), parent->number);
-		goto err_exit;
+		if (retval != -EEXIST)
+			goto err_exit;
 	}
 
 	pciehp_green_led_on(p_slot);
···
 struct power_work_info {
 	struct slot *p_slot;
 	struct work_struct work;
+	unsigned int req;
+#define DISABLE_REQ 0
+#define ENABLE_REQ 1
 };
 
 /**
···
 	struct power_work_info *info =
 		container_of(work, struct power_work_info, work);
 	struct slot *p_slot = info->p_slot;
+	int ret;
 
-	mutex_lock(&p_slot->lock);
-	switch (p_slot->state) {
-	case POWEROFF_STATE:
-		mutex_unlock(&p_slot->lock);
+	switch (info->req) {
+	case DISABLE_REQ:
 		ctrl_dbg(p_slot->ctrl,
 			 "Disabling domain:bus:device=%04x:%02x:00\n",
 			 pci_domain_nr(p_slot->ctrl->pcie->port->subordinate),
 			 p_slot->ctrl->pcie->port->subordinate->number);
+		mutex_lock(&p_slot->hotplug_lock);
 		pciehp_disable_slot(p_slot);
+		mutex_unlock(&p_slot->hotplug_lock);
 		mutex_lock(&p_slot->lock);
 		p_slot->state = STATIC_STATE;
-		break;
-	case POWERON_STATE:
 		mutex_unlock(&p_slot->lock);
-		if (pciehp_enable_slot(p_slot))
+		break;
+	case ENABLE_REQ:
+		ctrl_dbg(p_slot->ctrl,
+			 "Enabling domain:bus:device=%04x:%02x:00\n",
+			 pci_domain_nr(p_slot->ctrl->pcie->port->subordinate),
+			 p_slot->ctrl->pcie->port->subordinate->number);
+		mutex_lock(&p_slot->hotplug_lock);
+		ret = pciehp_enable_slot(p_slot);
+		mutex_unlock(&p_slot->hotplug_lock);
+		if (ret)
 			pciehp_green_led_off(p_slot);
 		mutex_lock(&p_slot->lock);
 		p_slot->state = STATIC_STATE;
+		mutex_unlock(&p_slot->lock);
 		break;
 	default:
 		break;
 	}
-	mutex_unlock(&p_slot->lock);
 
 	kfree(info);
 }
···
 	switch (p_slot->state) {
 	case BLINKINGOFF_STATE:
 		p_slot->state = POWEROFF_STATE;
+		info->req = DISABLE_REQ;
 		break;
 	case BLINKINGON_STATE:
 		p_slot->state = POWERON_STATE;
+		info->req = ENABLE_REQ;
 		break;
 	default:
 		kfree(info);
···
 	 */
 	ctrl_info(ctrl, "Button cancel on Slot(%s)\n", slot_name(p_slot));
 	cancel_delayed_work(&p_slot->work);
-	if (p_slot->state == BLINKINGOFF_STATE) {
+	if (p_slot->state == BLINKINGOFF_STATE)
 		pciehp_green_led_on(p_slot);
-	} else {
+	else
 		pciehp_green_led_off(p_slot);
-	}
 	pciehp_set_attention_status(p_slot, 0);
 	ctrl_info(ctrl, "PCI slot #%s - action canceled "
 		  "due to button press\n", slot_name(p_slot));
···
 	INIT_WORK(&info->work, pciehp_power_thread);
 
 	pciehp_get_adapter_status(p_slot, &getstatus);
-	if (!getstatus)
+	if (!getstatus) {
 		p_slot->state = POWEROFF_STATE;
-	else
+		info->req = DISABLE_REQ;
+	} else {
 		p_slot->state = POWERON_STATE;
+		info->req = ENABLE_REQ;
+	}
 
 	queue_work(p_slot->wq, &info->work);
+}
+
+/*
+ * Note: This function must be called with slot->lock held
+ */
+static void handle_link_event(struct slot *p_slot, u32 event)
+{
+	struct controller *ctrl = p_slot->ctrl;
+	struct power_work_info *info;
+
+	info = kmalloc(sizeof(*info), GFP_KERNEL);
+	if (!info) {
+		ctrl_err(p_slot->ctrl, "%s: Cannot allocate memory\n",
+			 __func__);
+		return;
+	}
+	info->p_slot = p_slot;
+	info->req = event == INT_LINK_UP ? ENABLE_REQ : DISABLE_REQ;
+	INIT_WORK(&info->work, pciehp_power_thread);
+
+	switch (p_slot->state) {
+	case BLINKINGON_STATE:
+	case BLINKINGOFF_STATE:
+		cancel_delayed_work(&p_slot->work);
+		/* Fall through */
+	case STATIC_STATE:
+		p_slot->state = event == INT_LINK_UP ?
+			POWERON_STATE : POWEROFF_STATE;
+		queue_work(p_slot->wq, &info->work);
+		break;
+	case POWERON_STATE:
+		if (event == INT_LINK_UP) {
+			ctrl_info(ctrl,
+				  "Link Up event ignored on slot(%s): already powering on\n",
+				  slot_name(p_slot));
+			kfree(info);
+		} else {
+			ctrl_info(ctrl,
+				  "Link Down event queued on slot(%s): currently getting powered on\n",
+				  slot_name(p_slot));
+			p_slot->state = POWEROFF_STATE;
+			queue_work(p_slot->wq, &info->work);
+		}
+		break;
+	case POWEROFF_STATE:
+		if (event == INT_LINK_UP) {
+			ctrl_info(ctrl,
+				  "Link Up event queued on slot(%s): currently getting powered off\n",
+				  slot_name(p_slot));
+			p_slot->state = POWERON_STATE;
+			queue_work(p_slot->wq, &info->work);
+		} else {
+			ctrl_info(ctrl,
+				  "Link Down event ignored on slot(%s): already powering off\n",
+				  slot_name(p_slot));
+			kfree(info);
+		}
+		break;
+	default:
+		ctrl_err(ctrl, "Not a valid state on slot(%s)\n",
+			 slot_name(p_slot));
+		kfree(info);
+		break;
+	}
 }
 
 static void interrupt_event_handler(struct work_struct *work)
···
 		pciehp_green_led_off(p_slot);
 		break;
 	case INT_PRESENCE_ON:
-	case INT_PRESENCE_OFF:
 		if (!HP_SUPR_RM(ctrl))
 			break;
+		ctrl_dbg(ctrl, "Surprise Insertion\n");
+		handle_surprise_event(p_slot);
+		break;
+	case INT_PRESENCE_OFF:
+		/*
+		 * Regardless of surprise capability, we need to
+		 * definitely remove a card that has been pulled out!
+		 */
 		ctrl_dbg(ctrl, "Surprise Removal\n");
 		handle_surprise_event(p_slot);
+		break;
+	case INT_LINK_UP:
+	case INT_LINK_DOWN:
+		handle_link_event(p_slot, info->event_type);
 		break;
 	default:
 		break;
···
 	kfree(info);
 }
 
+/*
+ * Note: This function must be called with slot->hotplug_lock held
+ */
 int pciehp_enable_slot(struct slot *p_slot)
 {
 	u8 getstatus = 0;
···
 	pciehp_get_latch_status(p_slot, &getstatus);
 
 	rc = board_added(p_slot);
-	if (rc) {
+	if (rc)
 		pciehp_get_latch_status(p_slot, &getstatus);
-	}
+
 	return rc;
 }
 
-
+/*
+ * Note: This function must be called with slot->hotplug_lock held
+ */
 int pciehp_disable_slot(struct slot *p_slot)
 {
 	u8 getstatus = 0;
···
 	if (!p_slot->ctrl)
 		return 1;
-
-	if (!HP_SUPR_RM(p_slot->ctrl)) {
-		pciehp_get_adapter_status(p_slot, &getstatus);
-		if (!getstatus) {
-			ctrl_info(ctrl, "No adapter on slot(%s)\n",
-				  slot_name(p_slot));
-			return -ENODEV;
-		}
-	}
-
-	if (MRL_SENS(p_slot->ctrl)) {
-		pciehp_get_latch_status(p_slot, &getstatus);
-		if (getstatus) {
-			ctrl_info(ctrl, "Latch open on slot(%s)\n",
-				  slot_name(p_slot));
-			return -ENODEV;
-		}
-	}
 
 	if (POWER_CTRL(p_slot->ctrl)) {
 		pciehp_get_power_status(p_slot, &getstatus);
···
 	case STATIC_STATE:
 		p_slot->state = POWERON_STATE;
 		mutex_unlock(&p_slot->lock);
+		mutex_lock(&p_slot->hotplug_lock);
 		retval = pciehp_enable_slot(p_slot);
+		mutex_unlock(&p_slot->hotplug_lock);
 		mutex_lock(&p_slot->lock);
 		p_slot->state = STATIC_STATE;
 		break;
drivers/pci/hotplug/pciehp_hpc.c | +38 -37
···
 	mutex_unlock(&ctrl->ctrl_lock);
 }
 
-static bool check_link_active(struct controller *ctrl)
+bool pciehp_check_link_active(struct controller *ctrl)
 {
 	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 lnk_status;
···
 {
 	int timeout = 1000;
 
-	if (check_link_active(ctrl) == active)
+	if (pciehp_check_link_active(ctrl) == active)
 		return;
 	while (timeout > 0) {
 		msleep(10);
 		timeout -= 10;
-		if (check_link_active(ctrl) == active)
+		if (pciehp_check_link_active(ctrl) == active)
 			return;
 	}
 	ctrl_dbg(ctrl, "Data Link Layer Link Active not %s in 1000 msec\n",
···
 static void pcie_wait_link_active(struct controller *ctrl)
 {
 	__pcie_wait_link_active(ctrl, true);
-}
-
-static void pcie_wait_link_not_active(struct controller *ctrl)
-{
-	__pcie_wait_link_active(ctrl, false);
 }
 
 static bool pci_bus_check_dev(struct pci_bus *bus, int devfn)
···
 static int pciehp_link_enable(struct controller *ctrl)
 {
 	return __pciehp_link_set(ctrl, true);
-}
-
-static int pciehp_link_disable(struct controller *ctrl)
-{
-	return __pciehp_link_set(ctrl, false);
 }
 
 void pciehp_get_attention_status(struct slot *slot, u8 *status)
···
 {
 	struct controller *ctrl = slot->ctrl;
 
-	/* Disable the link at first */
-	pciehp_link_disable(ctrl);
-	/* wait the link is down */
-	if (ctrl->link_active_reporting)
-		pcie_wait_link_not_active(ctrl);
-	else
-		msleep(1000);
-
 	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_OFF, PCI_EXP_SLTCTL_PCC);
 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
···
 	detected &= (PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD |
 		     PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_PDC |
-		     PCI_EXP_SLTSTA_CC);
+		     PCI_EXP_SLTSTA_CC | PCI_EXP_SLTSTA_DLLSC);
 	detected &= ~intr_loc;
 	intr_loc |= detected;
 	if (!intr_loc)
···
 		ctrl->power_fault_detected = 1;
 		pciehp_handle_power_fault(slot);
 	}
+
+	if (intr_loc & PCI_EXP_SLTSTA_DLLSC)
+		pciehp_handle_linkstate_change(slot);
+
 	return IRQ_HANDLED;
 }
···
 	 * when it is cleared in the interrupt service routine, and
 	 * next power fault detected interrupt was notified again.
 	 */
-	cmd = PCI_EXP_SLTCTL_PDCE;
+
+	/*
+	 * Always enable link events: thus link-up and link-down shall
+	 * always be treated as hotplug and unplug respectively. Enable
+	 * presence detect only if Attention Button is not present.
+	 */
+	cmd = PCI_EXP_SLTCTL_DLLSCE;
 	if (ATTN_BUTTN(ctrl))
 		cmd |= PCI_EXP_SLTCTL_ABPE;
+	else
+		cmd |= PCI_EXP_SLTCTL_PDCE;
 	if (MRL_SENS(ctrl))
 		cmd |= PCI_EXP_SLTCTL_MRLSCE;
 	if (!pciehp_poll_mode)
···
 	mask = (PCI_EXP_SLTCTL_PDCE | PCI_EXP_SLTCTL_ABPE |
 		PCI_EXP_SLTCTL_MRLSCE | PCI_EXP_SLTCTL_PFDE |
-		PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_CCIE);
+		PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_CCIE |
+		PCI_EXP_SLTCTL_DLLSCE);
 
 	pcie_write_cmd(ctrl, cmd, mask);
 }
···
 /*
  * pciehp has a 1:1 bus:slot relationship so we ultimately want a secondary
- * bus reset of the bridge, but if the slot supports surprise removal we need
- * to disable presence detection around the bus reset and clear any spurious
+ * bus reset of the bridge, but at the same time we want to ensure that it is
+ * not seen as a hot-unplug, followed by the hot-plug of the device. Thus,
+ * disable link state notification and presence detection change notification
+ * momentarily, if we see that they could interfere. Also, clear any spurious
  * events after.
  */
 int pciehp_reset_slot(struct slot *slot, int probe)
 {
 	struct controller *ctrl = slot->ctrl;
 	struct pci_dev *pdev = ctrl_dev(ctrl);
+	u16 stat_mask = 0, ctrl_mask = 0;
 
 	if (probe)
 		return 0;
 
-	if (HP_SUPR_RM(ctrl)) {
-		pcie_write_cmd(ctrl, 0, PCI_EXP_SLTCTL_PDCE);
-		if (pciehp_poll_mode)
-			del_timer_sync(&ctrl->poll_timer);
+	if (!ATTN_BUTTN(ctrl)) {
+		ctrl_mask |= PCI_EXP_SLTCTL_PDCE;
+		stat_mask |= PCI_EXP_SLTSTA_PDC;
 	}
+	ctrl_mask |= PCI_EXP_SLTCTL_DLLSCE;
+	stat_mask |= PCI_EXP_SLTSTA_DLLSC;
+
+	pcie_write_cmd(ctrl, 0, ctrl_mask);
+	if (pciehp_poll_mode)
+		del_timer_sync(&ctrl->poll_timer);
 
 	pci_reset_bridge_secondary_bus(ctrl->pcie->port);
 
-	if (HP_SUPR_RM(ctrl)) {
-		pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
-					   PCI_EXP_SLTSTA_PDC);
-		pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PDCE, PCI_EXP_SLTCTL_PDCE);
-		if (pciehp_poll_mode)
-			int_poll_timeout(ctrl->poll_timer.data);
-	}
+	pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, stat_mask);
+	pcie_write_cmd(ctrl, ctrl_mask, ctrl_mask);
+	if (pciehp_poll_mode)
+		int_poll_timeout(ctrl->poll_timer.data);
 
 	return 0;
 }
···
 	slot->ctrl = ctrl;
 	mutex_init(&slot->lock);
+	mutex_init(&slot->hotplug_lock);
 	INIT_DELAYED_WORK(&slot->work, pciehp_queue_pushbutton_work);
 	ctrl->slot = slot;
 	return 0;
drivers/pci/hotplug/pciehp_pci.c | +1 -1
···
 		       "at %04x:%02x:00, cannot hot-add\n", pci_name(dev),
 		       pci_domain_nr(parent), parent->number);
 		pci_dev_put(dev);
-		ret = -EINVAL;
+		ret = -EEXIST;
 		goto out;
 	}
drivers/pci/iov.c | -119
···
 	pci_dev_put(dev);
 }
 
-static int sriov_migration(struct pci_dev *dev)
-{
-	u16 status;
-	struct pci_sriov *iov = dev->sriov;
-
-	if (!iov->num_VFs)
-		return 0;
-
-	if (!(iov->cap & PCI_SRIOV_CAP_VFM))
-		return 0;
-
-	pci_read_config_word(dev, iov->pos + PCI_SRIOV_STATUS, &status);
-	if (!(status & PCI_SRIOV_STATUS_VFM))
-		return 0;
-
-	schedule_work(&iov->mtask);
-
-	return 1;
-}
-
-static void sriov_migration_task(struct work_struct *work)
-{
-	int i;
-	u8 state;
-	u16 status;
-	struct pci_sriov *iov = container_of(work, struct pci_sriov, mtask);
-
-	for (i = iov->initial_VFs; i < iov->num_VFs; i++) {
-		state = readb(iov->mstate + i);
-		if (state == PCI_SRIOV_VFM_MI) {
-			writeb(PCI_SRIOV_VFM_AV, iov->mstate + i);
-			state = readb(iov->mstate + i);
-			if (state == PCI_SRIOV_VFM_AV)
-				virtfn_add(iov->self, i, 1);
-		} else if (state == PCI_SRIOV_VFM_MO) {
-			virtfn_remove(iov->self, i, 1);
-			writeb(PCI_SRIOV_VFM_UA, iov->mstate + i);
-			state = readb(iov->mstate + i);
-			if (state == PCI_SRIOV_VFM_AV)
-				virtfn_add(iov->self, i, 0);
-		}
-	}
-
-	pci_read_config_word(iov->self, iov->pos + PCI_SRIOV_STATUS, &status);
-	status &= ~PCI_SRIOV_STATUS_VFM;
-	pci_write_config_word(iov->self, iov->pos + PCI_SRIOV_STATUS, status);
-}
-
-static int sriov_enable_migration(struct pci_dev *dev, int nr_virtfn)
-{
-	int bir;
-	u32 table;
-	resource_size_t pa;
-	struct pci_sriov *iov = dev->sriov;
-
-	if (nr_virtfn <= iov->initial_VFs)
-		return 0;
-
-	pci_read_config_dword(dev, iov->pos + PCI_SRIOV_VFM, &table);
-	bir = PCI_SRIOV_VFM_BIR(table);
-	if (bir > PCI_STD_RESOURCE_END)
-		return -EIO;
-
-	table = PCI_SRIOV_VFM_OFFSET(table);
-	if (table + nr_virtfn > pci_resource_len(dev, bir))
-		return -EIO;
-
-	pa = pci_resource_start(dev, bir) + table;
-	iov->mstate = ioremap(pa, nr_virtfn);
-	if (!iov->mstate)
-		return -ENOMEM;
-
-	INIT_WORK(&iov->mtask, sriov_migration_task);
-
-	iov->ctrl |= PCI_SRIOV_CTRL_VFM | PCI_SRIOV_CTRL_INTR;
-	pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, iov->ctrl);
-
-	return 0;
-}
-
-static void sriov_disable_migration(struct pci_dev *dev)
-{
-	struct pci_sriov *iov = dev->sriov;
-
-	iov->ctrl &= ~(PCI_SRIOV_CTRL_VFM | PCI_SRIOV_CTRL_INTR);
-	pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, iov->ctrl);
-
-	cancel_work_sync(&iov->mtask);
-	iounmap(iov->mstate);
-}
-
 static int sriov_enable(struct pci_dev *dev, int nr_virtfn)
 {
 	int rc;
···
 		goto failed;
 	}
 
-	if (iov->cap & PCI_SRIOV_CAP_VFM) {
-		rc = sriov_enable_migration(dev, nr_virtfn);
-		if (rc)
-			goto failed;
-	}
-
 	kobject_uevent(&dev->dev.kobj, KOBJ_CHANGE);
 	iov->num_VFs = nr_virtfn;
···
 	if (!iov->num_VFs)
 		return;
-
-	if (iov->cap & PCI_SRIOV_CAP_VFM)
-		sriov_disable_migration(dev);
 
 	for (i = 0; i < iov->num_VFs; i++)
 		virtfn_remove(dev, i, 0);
···
 	sriov_disable(dev);
 }
 EXPORT_SYMBOL_GPL(pci_disable_sriov);
-
-/**
- * pci_sriov_migration - notify SR-IOV core of Virtual Function Migration
- * @dev: the PCI device
- *
- * Returns IRQ_HANDLED if the IRQ is handled, or IRQ_NONE if not.
- *
- * Physical Function driver is responsible to register IRQ handler using
- * VF Migration Interrupt Message Number, and call this function when the
- * interrupt is generated by the hardware.
- */
-irqreturn_t pci_sriov_migration(struct pci_dev *dev)
-{
-	if (!dev->is_physfn)
-		return IRQ_NONE;
-
-	return sriov_migration(dev) ? IRQ_HANDLED : IRQ_NONE;
-}
-EXPORT_SYMBOL_GPL(pci_sriov_migration);
 
 /**
  * pci_num_vf - return number of VFs associated with a PF
drivers/pci/pci.c | +71 -45
···
  */
 unsigned char pci_bus_max_busnr(struct pci_bus* bus)
 {
-	struct list_head *tmp;
+	struct pci_bus *tmp;
 	unsigned char max, n;
 
 	max = bus->busn_res.end;
-	list_for_each(tmp, &bus->children) {
-		n = pci_bus_max_busnr(pci_bus_b(tmp));
+	list_for_each_entry(tmp, &bus->children, node) {
+		n = pci_bus_max_busnr(tmp);
 		if(n > max)
 			max = n;
 	}
···
 * @res: child resource record for which parent is sought
 *
 * For given resource region of given device, return the resource
- * region of parent bus the given region is contained in or where
- * it should be allocated from.
+ * region of parent bus the given region is contained in.
 */
 struct resource *
 pci_find_parent_resource(const struct pci_dev *dev, struct resource *res)
 {
 	const struct pci_bus *bus = dev->bus;
+	struct resource *r;
 	int i;
-	struct resource *best = NULL, *r;
 
 	pci_bus_for_each_resource(bus, r, i) {
 		if (!r)
 			continue;
-		if (res->start && !(res->start >= r->start && res->end <= r->end))
-			continue;	/* Not contained */
-		if ((res->flags ^ r->flags) & (IORESOURCE_IO | IORESOURCE_MEM))
-			continue;	/* Wrong type */
-		if (!((res->flags ^ r->flags) & IORESOURCE_PREFETCH))
-			return r;	/* Exact match */
-		/* We can't insert a non-prefetch resource inside a prefetchable parent .. */
-		if (r->flags & IORESOURCE_PREFETCH)
-			continue;
-		/* .. but we can put a prefetchable resource inside a non-prefetchable one */
-		if (!best)
-			best = r;
+		if (res->start && resource_contains(r, res)) {
+
+			/*
+			 * If the window is prefetchable but the BAR is
+			 * not, the allocator made a mistake.
+			 */
+			if (r->flags & IORESOURCE_PREFETCH &&
+			    !(res->flags & IORESOURCE_PREFETCH))
+				return NULL;
+
+			/*
+			 * If we're below a transparent bridge, there may
+			 * be both a positively-decoded aperture and a
+			 * subtractively-decoded region that contain the BAR.
+			 * We want the positively-decoded one, so this depends
+			 * on pci_bus_for_each_resource() giving us those
+			 * first.
+			 */
+			return r;
+		}
 	}
-	return best;
+	return NULL;
 }
···
 }
 EXPORT_SYMBOL_GPL(pci_load_and_free_saved_state);
 
+int __weak pcibios_enable_device(struct pci_dev *dev, int bars)
+{
+	return pci_enable_resources(dev, bars);
+}
+
 static int do_pci_enable_device(struct pci_dev *dev, int bars)
 {
 	int err;
···
 	struct pci_pme_device *pme_dev, *n;
 
 	mutex_lock(&pci_pme_list_mutex);
-	if (!list_empty(&pci_pme_list)) {
-		list_for_each_entry_safe(pme_dev, n, &pci_pme_list, list) {
-			if (pme_dev->dev->pme_poll) {
-				struct pci_dev *bridge;
+	list_for_each_entry_safe(pme_dev, n, &pci_pme_list, list) {
+		if (pme_dev->dev->pme_poll) {
+			struct pci_dev *bridge;
 
-				bridge = pme_dev->dev->bus->self;
-				/*
-				 * If bridge is in low power state, the
-				 * configuration space of subordinate devices
-				 * may be not accessible
-				 */
-				if (bridge && bridge->current_state != PCI_D0)
-					continue;
-				pci_pme_wakeup(pme_dev->dev, NULL);
-			} else {
-				list_del(&pme_dev->list);
-				kfree(pme_dev);
-			}
+			bridge = pme_dev->dev->bus->self;
+			/*
+			 * If bridge is in low power state, the
+			 * configuration space of subordinate devices
+			 * may be not accessible
+			 */
+			if (bridge && bridge->current_state != PCI_D0)
+				continue;
+			pci_pme_wakeup(pme_dev->dev, NULL);
+		} else {
+			list_del(&pme_dev->list);
+			kfree(pme_dev);
 		}
-		if (!list_empty(&pci_pme_list))
-			schedule_delayed_work(&pci_pme_work,
-					      msecs_to_jiffies(PME_TIMEOUT));
 	}
+	if (!list_empty(&pci_pme_list))
+		schedule_delayed_work(&pci_pme_work,
+				      msecs_to_jiffies(PME_TIMEOUT));
 	mutex_unlock(&pci_pme_list_mutex);
 }
···
 }
 
 /**
- * pci_enable_acs - enable ACS if hardware support it
+ * pci_std_enable_acs - enable ACS on devices using standard ACS capabilites
 * @dev: the PCI device
 */
-void pci_enable_acs(struct pci_dev *dev)
+static int pci_std_enable_acs(struct pci_dev *dev)
 {
 	int pos;
 	u16 cap;
 	u16 ctrl;
 
-	if (!pci_acs_enable)
-		return;
-
 	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS);
 	if (!pos)
-		return;
+		return -ENODEV;
 
 	pci_read_config_word(dev, pos + PCI_ACS_CAP, &cap);
 	pci_read_config_word(dev, pos + PCI_ACS_CTRL, &ctrl);
···
 	ctrl |= (cap & PCI_ACS_UF);
 
 	pci_write_config_word(dev, pos + PCI_ACS_CTRL, ctrl);
+
+	return 0;
+}
+
+/**
+ * pci_enable_acs - enable ACS if hardware support it
+ * @dev: the PCI device
+ */
+void pci_enable_acs(struct pci_dev *dev)
+{
+	if (!pci_acs_enable)
+		return;
+
+	if (!pci_std_enable_acs(dev))
+		return;
+
+	pci_dev_specific_enable_acs(dev);
 }
 
 static bool pci_acs_flags_enabled(struct pci_dev *pdev, u16 acs_flags)
···
 			"Rounding up size of resource #%d to %#llx.\n",
 			i, (unsigned long long)size);
 	}
+	r->flags |= IORESOURCE_UNSET;
 	r->end = size - 1;
 	r->start = 0;
 }
···
 		r = &dev->resource[i];
 		if (!(r->flags & IORESOURCE_MEM))
 			continue;
+		r->flags |= IORESOURCE_UNSET;
 		r->end = resource_size(r) - 1;
 		r->start = 0;
 	}
drivers/pci/pci.h | -4
···
 #ifndef DRIVERS_PCI_H
 #define DRIVERS_PCI_H
 
-#include <linux/workqueue.h>
-
 #define PCI_CFG_SPACE_SIZE	256
 #define PCI_CFG_SPACE_EXP_SIZE	4096
···
 	struct pci_dev *dev;	/* lowest numbered PF */
 	struct pci_dev *self;	/* this PF */
 	struct mutex lock;	/* lock for VF bus */
-	struct work_struct mtask; /* VF Migration task */
-	u8 __iomem *mstate;	/* VF Migration State Array */
 };
 
 #ifdef CONFIG_PCI_ATS
drivers/pci/probe.c | +42 -51
···
 		/* Address above 32-bit boundary; disable the BAR */
 		pci_write_config_dword(dev, pos, 0);
 		pci_write_config_dword(dev, pos + 4, 0);
+		res->flags |= IORESOURCE_UNSET;
 		region.start = 0;
 		region.end = sz64;
 		bar_disabled = true;
···
 	return child;
 }
 
-static void pci_fixup_parent_subordinate_busnr(struct pci_bus *child, int max)
-{
-	struct pci_bus *parent = child->parent;
-
-	/* Attempts to fix that up are really dangerous unless
-	   we're going to re-assign all bus numbers. */
-	if (!pcibios_assign_all_busses())
-		return;
-
-	while (parent->parent && parent->busn_res.end < max) {
-		parent->busn_res.end = max;
-		pci_write_config_byte(parent->self, PCI_SUBORDINATE_BUS, max);
-		parent = parent->parent;
-	}
-}
-
 /*
  * If it's a bridge, configure it and scan the bus behind it.
  * For CardBus bridges, we don't scan behind as the devices will
···
 	/* Check if setup is sensible at all */
 	if (!pass &&
 	    (primary != bus->number || secondary <= bus->number ||
-	     secondary > subordinate)) {
+	     secondary > subordinate || subordinate > bus->busn_res.end)) {
 		dev_info(&dev->dev, "bridge configuration invalid ([bus %02x-%02x]), reconfiguring\n",
 			 secondary, subordinate);
 		broken = 1;
···
 			goto out;
 
 		/*
-		 * If we already got to this bus through a different bridge,
-		 * don't re-add it. This can happen with the i450NX chipset.
-		 *
-		 * However, we continue to descend down the hierarchy and
-		 * scan remaining child buses.
+		 * The bus might already exist for two reasons: Either we are
+		 * rescanning the bus or the bus is reachable through more than
+		 * one bridge. The second case can happen with the i450NX
+		 * chipset.
 		 */
 		child = pci_find_bus(pci_domain_nr(bus), secondary);
 		if (!child) {
···
 		}
 
 		cmax = pci_scan_child_bus(child);
-		if (cmax > max)
-			max = cmax;
-		if (child->busn_res.end > max)
-			max = child->busn_res.end;
+		if (cmax > subordinate)
+			dev_warn(&dev->dev, "bridge has subordinate %02x but max busn %02x\n",
+				 subordinate, cmax);
+		/* subordinate should equal child->busn_res.end */
+		if (subordinate > max)
+			max = subordinate;
 	} else {
 		/*
 		 * We need to assign a number to this bus which we always
 		 * do in the second pass.
 		 */
 		if (!pass) {
-			if (pcibios_assign_all_busses() || broken)
+			if (pcibios_assign_all_busses() || broken || is_cardbus)
 				/* Temporarily disable forwarding of the
 				   configuration cycles on all bridges in
 				   this bus segment to avoid possible
···
 			goto out;
 		}
 
+		if (max >= bus->busn_res.end) {
+			dev_warn(&dev->dev, "can't allocate child bus %02x from %pR\n",
+				 max, &bus->busn_res);
+			goto out;
+		}
+
 		/* Clear errors */
 		pci_write_config_word(dev, PCI_STATUS, 0xffff);
 
-		/* Prevent assigning a bus number that already exists.
-		 * This can happen when a bridge is hot-plugged, so in
-		 * this case we only re-scan this bus. */
+		/* The bus will already exist if we are rescanning */
 		child = pci_find_bus(pci_domain_nr(bus), max+1);
 		if (!child) {
-			child = pci_add_new_bus(bus, dev, ++max);
+			child = pci_add_new_bus(bus, dev, max+1);
 			if (!child)
 				goto out;
-			pci_bus_insert_busn_res(child, max, 0xff);
+			pci_bus_insert_busn_res(child, max+1,
+						bus->busn_res.end);
 		}
+		max++;
 		buses = (buses & 0xff000000)
 			| ((unsigned int)(child->primary) << 0)
 			| ((unsigned int)(child->busn_res.start) << 8)
···
 		if (!is_cardbus) {
 			child->bridge_ctl = bctl;
-			/*
-			 * Adjust subordinate busnr in parent buses.
-			 * We do this before scanning for children because
-			 * some devices may not be detected if the bios
-			 * was lazy.
-			 */
-			pci_fixup_parent_subordinate_busnr(child, max);
-			/* Now we can scan all subordinate buses... */
 			max = pci_scan_child_bus(child);
-			/*
-			 * now fix it up again since we have found
-			 * the real value of max.
-			 */
-			pci_fixup_parent_subordinate_busnr(child, max);
 		} else {
 			/*
 			 * For CardBus bridges, we leave 4 bus numbers
···
 				}
 			}
 			max += i;
-			pci_fixup_parent_subordinate_busnr(child, max);
 		}
 		/*
 		 * Set the subordinate bus number to its real value.
 		 */
+		if (max > bus->busn_res.end) {
+			dev_warn(&dev->dev, "max busn %02x is outside %pR\n",
+				 max, &bus->busn_res);
+			max = bus->busn_res.end;
+		}
 		pci_bus_update_busn_res_end(child, max);
 		pci_write_config_byte(dev, PCI_SUBORDINATE_BUS, max);
 	}
···
 	pci_read_config_word(dev, PCI_SUBSYSTEM_ID, &dev->subsystem_device);
 
 	/*
-	 *	Do the ugly legacy mode stuff here rather than broken chip
-	 *	quirk code. Legacy mode ATA controllers have fixed
-	 *	addresses. These are not always echoed in BAR0-3, and
-	 *	BAR0-3 in a few cases contain junk!
+	 * Do the ugly legacy mode stuff here rather than broken chip
+	 * quirk code. Legacy mode ATA controllers have fixed
+	 * addresses. These are not always echoed in BAR0-3, and
+	 * BAR0-3 in a few cases contain junk!
 	 */
 	if (class == PCI_CLASS_STORAGE_IDE) {
 		u8 progif;
···
 			res = &dev->resource[0];
 			res->flags = LEGACY_IO_RESOURCE;
 			pcibios_bus_to_resource(dev->bus, res, &region);
+			dev_info(&dev->dev, "legacy IDE quirk: reg 0x10: %pR\n",
+				 res);
 			region.start = 0x3F6;
 			region.end = 0x3F6;
 			res = &dev->resource[1];
 			res->flags = LEGACY_IO_RESOURCE;
 			pcibios_bus_to_resource(dev->bus, res, &region);
+			dev_info(&dev->dev, "legacy IDE quirk: reg 0x14: %pR\n",
+				 res);
 		}
 		if ((progif & 4) == 0) {
 			region.start = 0x170;
···
 			res = &dev->resource[2];
 			res->flags = LEGACY_IO_RESOURCE;
 			pcibios_bus_to_resource(dev->bus, res, &region);
+			dev_info(&dev->dev, "legacy IDE quirk: reg 0x18: %pR\n",
+				 res);
 			region.start = 0x376;
 			region.end = 0x376;
 			res = &dev->resource[3];
 			res->flags = LEGACY_IO_RESOURCE;
 			pcibios_bus_to_resource(dev->bus, res, &region);
+			dev_info(&dev->dev, "legacy IDE quirk: reg 0x1c: %pR\n",
+				 res);
 		}
 	}
 	break;
···
 		res->flags |= IORESOURCE_PCI_FIXED;
 	}
 
-	conflict = insert_resource_conflict(parent_res, res);
+	conflict = request_resource_conflict(parent_res, res);
 
 	if (conflict)
 		dev_printk(KERN_DEBUG, &b->dev,
drivers/pci/quirks.c | +190
···
 	struct resource *r = &dev->resource[0];
 
 	if ((r->start & 0x3ffffff) || r->end != r->start + 0x3ffffff) {
+		r->flags |= IORESOURCE_UNSET;
 		r->start = 0;
 		r->end = 0x3ffffff;
 	}
···
 static void quirk_dunord(struct pci_dev *dev)
 {
 	struct resource *r = &dev->resource [1];
+
+	r->flags |= IORESOURCE_UNSET;
 	r->start = 0;
 	r->end = 0xffffff;
 }
···
 	struct resource *r = &dev->resource[0];
 
 	if (r->start & 0x8) {
+		r->flags |= IORESOURCE_UNSET;
 		r->start = 0;
 		r->end = 0xf;
 	}
···
 		dev_info(&dev->dev,
 			 "Re-allocating PLX PCI 9050 BAR %u to length 256 to avoid bit 7 bug\n",
 			 bar);
+		r->flags |= IORESOURCE_UNSET;
 		r->start = 0;
 		r->end = 0xff;
 	}
···
 #endif
 }
 
+/*
+ * Many Intel PCH root ports do provide ACS-like features to disable peer
+ * transactions and validate bus numbers in requests, but do not provide an
+ * actual PCIe ACS capability. This is the list of device IDs known to fall
+ * into that category as provided by Intel in Red Hat bugzilla 1037684.
+ */
+static const u16 pci_quirk_intel_pch_acs_ids[] = {
+	/* Ibexpeak PCH */
+	0x3b42, 0x3b43, 0x3b44, 0x3b45, 0x3b46, 0x3b47, 0x3b48, 0x3b49,
+	0x3b4a, 0x3b4b, 0x3b4c, 0x3b4d, 0x3b4e, 0x3b4f, 0x3b50, 0x3b51,
+	/* Cougarpoint PCH */
+	0x1c10, 0x1c11, 0x1c12, 0x1c13, 0x1c14, 0x1c15, 0x1c16, 0x1c17,
+	0x1c18, 0x1c19, 0x1c1a, 0x1c1b, 0x1c1c, 0x1c1d, 0x1c1e, 0x1c1f,
+	/* Pantherpoint PCH */
+	0x1e10, 0x1e11, 0x1e12, 0x1e13, 0x1e14, 0x1e15, 0x1e16, 0x1e17,
+	0x1e18, 0x1e19, 0x1e1a, 0x1e1b, 0x1e1c, 0x1e1d, 0x1e1e, 0x1e1f,
+	/* Lynxpoint-H PCH */
+	0x8c10, 0x8c11, 0x8c12, 0x8c13, 0x8c14, 0x8c15, 0x8c16, 0x8c17,
+	0x8c18, 0x8c19, 0x8c1a, 0x8c1b, 0x8c1c, 0x8c1d, 0x8c1e, 0x8c1f,
+	/* Lynxpoint-LP PCH */
+	0x9c10, 0x9c11, 0x9c12, 0x9c13, 0x9c14, 0x9c15, 0x9c16, 0x9c17,
+	0x9c18, 0x9c19, 0x9c1a, 0x9c1b,
+	/* Wildcat PCH */
+	0x9c90, 0x9c91, 0x9c92, 0x9c93, 0x9c94, 0x9c95, 0x9c96, 0x9c97,
+	0x9c98, 0x9c99, 0x9c9a, 0x9c9b,
+};
+
+static bool pci_quirk_intel_pch_acs_match(struct pci_dev *dev)
+{
+	int i;
+
+	/* Filter out a few obvious non-matches first */
+	if (!pci_is_pcie(dev) || pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT)
+		return false;
+
+	for (i = 0; i < ARRAY_SIZE(pci_quirk_intel_pch_acs_ids); i++)
+		if (pci_quirk_intel_pch_acs_ids[i] == dev->device)
+			return true;
+
+	return false;
+}
+
+#define INTEL_PCH_ACS_FLAGS (PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_SV)
+
+static int pci_quirk_intel_pch_acs(struct pci_dev *dev, u16 acs_flags)
+{
+	u16 flags = dev->dev_flags & PCI_DEV_FLAGS_ACS_ENABLED_QUIRK ?
+		    INTEL_PCH_ACS_FLAGS : 0;
+
+	if (!pci_quirk_intel_pch_acs_match(dev))
+		return -ENOTTY;
+
+	return acs_flags & ~flags ? 0 : 1;
+}
+
 static const struct pci_dev_acs_enabled {
 	u16 vendor;
 	u16 device;
···
 	{ PCI_VENDOR_ID_ATI, 0x439d, pci_quirk_amd_sb_acs },
 	{ PCI_VENDOR_ID_ATI, 0x4384, pci_quirk_amd_sb_acs },
 	{ PCI_VENDOR_ID_ATI, 0x4399, pci_quirk_amd_sb_acs },
+	{ PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_intel_pch_acs },
 	{ 0 }
 };
···
 	}
 
 	return -ENOTTY;
+}
+
+/* Config space offset of Root Complex Base Address register */
+#define INTEL_LPC_RCBA_REG 0xf0
+/* 31:14 RCBA address */
+#define INTEL_LPC_RCBA_MASK 0xffffc000
+/* RCBA Enable */
+#define INTEL_LPC_RCBA_ENABLE (1 << 0)
+
+/* Backbone Scratch Pad Register */
+#define INTEL_BSPR_REG 0x1104
+/* Backbone Peer Non-Posted Disable */
+#define INTEL_BSPR_REG_BPNPD (1 << 8)
+/* Backbone Peer Posted Disable */
+#define INTEL_BSPR_REG_BPPD (1 << 9)
+
+/* Upstream Peer Decode Configuration Register */
+#define INTEL_UPDCR_REG 0x1114
+/* 5:0 Peer Decode Enable bits */
+#define INTEL_UPDCR_REG_MASK 0x3f
+
+static int pci_quirk_enable_intel_lpc_acs(struct pci_dev *dev)
+{
+	u32 rcba, bspr, updcr;
+	void __iomem *rcba_mem;
+
+	/*
+	 * Read the RCBA register from the LPC (D31:F0). PCH root ports
+	 * are D28:F* and therefore get probed before LPC, thus we can't
+	 * use pci_get_slot/pci_read_config_dword here.
+	 */
+	pci_bus_read_config_dword(dev->bus, PCI_DEVFN(31, 0),
+				  INTEL_LPC_RCBA_REG, &rcba);
+	if (!(rcba & INTEL_LPC_RCBA_ENABLE))
+		return -EINVAL;
+
+	rcba_mem = ioremap_nocache(rcba & INTEL_LPC_RCBA_MASK,
+				   PAGE_ALIGN(INTEL_UPDCR_REG));
+	if (!rcba_mem)
+		return -ENOMEM;
+
+	/*
+	 * The BSPR can disallow peer cycles, but it's set by soft strap and
+	 * therefore read-only. If both posted and non-posted peer cycles are
+	 * disallowed, we're ok. If either are allowed, then we need to use
+	 * the UPDCR to disable peer decodes for each port. This provides the
+	 * PCIe ACS equivalent of PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF
+	 */
+	bspr = readl(rcba_mem + INTEL_BSPR_REG);
+	bspr &= INTEL_BSPR_REG_BPNPD | INTEL_BSPR_REG_BPPD;
+	if (bspr != (INTEL_BSPR_REG_BPNPD | INTEL_BSPR_REG_BPPD)) {
+		updcr = readl(rcba_mem + INTEL_UPDCR_REG);
+		if (updcr & INTEL_UPDCR_REG_MASK) {
+			dev_info(&dev->dev, "Disabling UPDCR peer decodes\n");
+			updcr &= ~INTEL_UPDCR_REG_MASK;
+			writel(updcr, rcba_mem + INTEL_UPDCR_REG);
+		}
+	}
+
+	iounmap(rcba_mem);
+	return 0;
+}
+
+/* Miscellaneous Port Configuration register */
+#define INTEL_MPC_REG 0xd8
+/* MPC: Invalid Receive Bus Number Check Enable */
+#define INTEL_MPC_REG_IRBNCE (1 << 26)
+
+static void pci_quirk_enable_intel_rp_mpc_acs(struct pci_dev *dev)
+{
+	u32 mpc;
+
+	/*
+	 * When enabled, the IRBNCE bit of the MPC register enables the
+	 * equivalent of PCI ACS Source Validation (PCI_ACS_SV), which
+	 * ensures that requester IDs fall within the bus number range
+	 * of the bridge. Enable if not already.
+	 */
+	pci_read_config_dword(dev, INTEL_MPC_REG, &mpc);
+	if (!(mpc & INTEL_MPC_REG_IRBNCE)) {
+		dev_info(&dev->dev, "Enabling MPC IRBNCE\n");
+		mpc |= INTEL_MPC_REG_IRBNCE;
+		pci_write_config_word(dev, INTEL_MPC_REG, mpc);
+	}
+}
+
+static int pci_quirk_enable_intel_pch_acs(struct pci_dev *dev)
+{
+	if (!pci_quirk_intel_pch_acs_match(dev))
+		return -ENOTTY;
+
+	if (pci_quirk_enable_intel_lpc_acs(dev)) {
+		dev_warn(&dev->dev, "Failed to enable Intel PCH ACS quirk\n");
+		return 0;
+	}
+
+	pci_quirk_enable_intel_rp_mpc_acs(dev);
+
+	dev->dev_flags |= PCI_DEV_FLAGS_ACS_ENABLED_QUIRK;
+
+	dev_info(&dev->dev, "Intel PCH root port ACS workaround enabled\n");
+
+	return 0;
+}
+
+static const struct pci_dev_enable_acs {
+	u16 vendor;
+	u16 device;
+	int (*enable_acs)(struct pci_dev *dev);
+} pci_dev_enable_acs[] = {
+	{ PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_enable_intel_pch_acs },
+	{ 0 }
+};
+
+void pci_dev_specific_enable_acs(struct pci_dev *dev)
+{
+	const struct pci_dev_enable_acs *i;
+	int ret;
+
+	for (i = pci_dev_enable_acs; i->enable_acs; i++) {
+		if ((i->vendor == dev->vendor ||
+		     i->vendor == (u16)PCI_ANY_ID) &&
+		    (i->device == dev->device ||
+		     i->device == (u16)PCI_ANY_ID)) {
+			ret = i->enable_acs(dev);
+			if (ret >= 0)
+				return;
+		}
+	}
 }
drivers/pci/rom.c  +2
···
 void pci_cleanup_rom(struct pci_dev *pdev)
 {
 	struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
+
 	if (res->flags & IORESOURCE_ROM_COPY) {
 		kfree((void*)(unsigned long)res->start);
+		res->flags |= IORESOURCE_UNSET;
 		res->flags &= ~IORESOURCE_ROM_COPY;
 		res->start = 0;
 		res->end = 0;
drivers/pci/search.c  +5 -5
···
 static struct pci_bus *pci_do_find_bus(struct pci_bus *bus, unsigned char busnr)
 {
-	struct pci_bus* child;
-	struct list_head *tmp;
+	struct pci_bus *child;
+	struct pci_bus *tmp;
 
 	if(bus->number == busnr)
 		return bus;
 
-	list_for_each(tmp, &bus->children) {
-		child = pci_do_find_bus(pci_bus_b(tmp), busnr);
+	list_for_each_entry(tmp, &bus->children, node) {
+		child = pci_do_find_bus(tmp, busnr);
 		if(child)
 			return child;
 	}
···
 	down_read(&pci_bus_sem);
 	n = from ? from->node.next : pci_root_buses.next;
 	if (n != &pci_root_buses)
-		b = pci_bus_b(n);
+		b = list_entry(n, struct pci_bus, node);
 	up_read(&pci_bus_sem);
 	return b;
 }
drivers/pci/setup-res.c  +25 -12
···
 	if (!res->flags)
 		return;
 
+	if (res->flags & IORESOURCE_UNSET)
+		return;
+
 	/*
 	 * Ignore non-moveable resources.  This might be legacy resources for
 	 * which no functional BAR register exists or another important
···
 
 	if (disable)
 		pci_write_config_word(dev, PCI_COMMAND, cmd);
-
-	res->flags &= ~IORESOURCE_UNSET;
-	dev_dbg(&dev->dev, "BAR %d: set to %pR (PCI address [%#llx-%#llx])\n",
-		resno, res, (unsigned long long)region.start,
-		(unsigned long long)region.end);
 }
 
 int pci_claim_resource(struct pci_dev *dev, int resource)
···
 	struct resource *res = &dev->resource[resource];
 	struct resource *root, *conflict;
 
+	if (res->flags & IORESOURCE_UNSET) {
+		dev_info(&dev->dev, "can't claim BAR %d %pR: no address assigned\n",
+			 resource, res);
+		return -EINVAL;
+	}
+
 	root = pci_find_parent_resource(dev, res);
 	if (!root) {
-		dev_info(&dev->dev, "no compatible bridge window for %pR\n",
-			 res);
+		dev_info(&dev->dev, "can't claim BAR %d %pR: no compatible bridge window\n",
+			 resource, res);
 		return -EINVAL;
 	}
 
 	conflict = request_resource_conflict(root, res);
 	if (conflict) {
-		dev_info(&dev->dev,
-			 "address space collision: %pR conflicts with %s %pR\n",
-			 res, conflict->name, conflict);
+		dev_info(&dev->dev, "can't claim BAR %d %pR: address conflict with %s %pR\n",
+			 resource, res, conflict->name, conflict);
 		return -EBUSY;
 	}
···
 	resource_size_t align, size;
 	int ret;
 
+	res->flags |= IORESOURCE_UNSET;
 	align = pci_resource_alignment(dev, res);
 	if (!align) {
 		dev_info(&dev->dev, "BAR %d: can't assign %pR "
···
 		ret = pci_revert_fw_address(res, dev, resno, size);
 
 	if (!ret) {
+		res->flags &= ~IORESOURCE_UNSET;
 		res->flags &= ~IORESOURCE_STARTALIGN;
 		dev_info(&dev->dev, "BAR %d: assigned %pR\n", resno, res);
 		if (resno < PCI_BRIDGE_RESOURCES)
···
 	resource_size_t new_size;
 	int ret;
 
+	res->flags |= IORESOURCE_UNSET;
 	if (!res->parent) {
 		dev_info(&dev->dev, "BAR %d: can't reassign an unassigned resource %pR "
 			 "\n", resno, res);
···
 	new_size = resource_size(res) + addsize;
 	ret = _pci_assign_resource(dev, resno, new_size, min_align);
 	if (!ret) {
+		res->flags &= ~IORESOURCE_UNSET;
 		res->flags &= ~IORESOURCE_STARTALIGN;
 		dev_info(&dev->dev, "BAR %d: reassigned %pR\n", resno, res);
 		if (resno < PCI_BRIDGE_RESOURCES)
···
 		    (!(r->flags & IORESOURCE_ROM_ENABLE)))
 			continue;
 
+		if (r->flags & IORESOURCE_UNSET) {
+			dev_err(&dev->dev, "can't enable device: BAR %d %pR not assigned\n",
+				i, r);
+			return -EINVAL;
+		}
+
 		if (!r->parent) {
-			dev_err(&dev->dev, "device not available "
-				"(can't reserve %pR)\n", r);
+			dev_err(&dev->dev, "can't enable device: BAR %d %pR not claimed\n",
+				i, r);
 			return -EINVAL;
 		}
 
drivers/pcmcia/yenta_socket.c  +9 -9
···
  */
 static void yenta_fixup_parent_bridge(struct pci_bus *cardbus_bridge)
 {
-	struct list_head *tmp;
+	struct pci_bus *sibling;
 	unsigned char upper_limit;
 	/*
 	 * We only check and fix the parent bridge: All systems which need
···
 	/* stay within the limits of the bus range of the parent: */
 	upper_limit = bridge_to_fix->parent->busn_res.end;
 
-	/* check the bus ranges of all silbling bridges to prevent overlap */
-	list_for_each(tmp, &bridge_to_fix->parent->children) {
-		struct pci_bus *silbling = pci_bus_b(tmp);
+	/* check the bus ranges of all sibling bridges to prevent overlap */
+	list_for_each_entry(sibling, &bridge_to_fix->parent->children,
+			    node) {
 		/*
-		 * If the silbling has a higher secondary bus number
+		 * If the sibling has a higher secondary bus number
 		 * and it's secondary is equal or smaller than our
 		 * current upper limit, set the new upper limit to
-		 * the bus number below the silbling's range:
+		 * the bus number below the sibling's range:
 		 */
-		if (silbling->busn_res.start > bridge_to_fix->busn_res.end
-		    && silbling->busn_res.start <= upper_limit)
-			upper_limit = silbling->busn_res.start - 1;
+		if (sibling->busn_res.start > bridge_to_fix->busn_res.end
+		    && sibling->busn_res.start <= upper_limit)
+			upper_limit = sibling->busn_res.start - 1;
 	}
 
 	/* Show that the wanted subordinate number is not possible: */
drivers/vfio/pci/vfio_pci_intrs.c  +8 -4
···
 		for (i = 0; i < nvec; i++)
 			vdev->msix[i].entry = i;
 
-		ret = pci_enable_msix(pdev, vdev->msix, nvec);
-		if (ret) {
+		ret = pci_enable_msix_range(pdev, vdev->msix, 1, nvec);
+		if (ret < nvec) {
+			if (ret > 0)
+				pci_disable_msix(pdev);
 			kfree(vdev->msix);
 			kfree(vdev->ctx);
 			return ret;
 		}
 	} else {
-		ret = pci_enable_msi_block(pdev, nvec);
-		if (ret) {
+		ret = pci_enable_msi_range(pdev, 1, nvec);
+		if (ret < nvec) {
+			if (ret > 0)
+				pci_disable_msi(pdev);
 			kfree(vdev->ctx);
 			return ret;
 		}
include/acpi/acpi_numa.h  -1
···
 
 extern int pxm_to_node(int);
 extern int node_to_pxm(int);
-extern void __acpi_map_pxm_to_node(int, int);
 extern int acpi_map_pxm_to_node(int);
 extern unsigned char acpi_srat_revision;
 
include/linux/acpi.h  +2 -7
···
 extern void acpi_osi_setup(char *str);
 
 #ifdef CONFIG_ACPI_NUMA
-int acpi_get_pxm(acpi_handle handle);
-int acpi_get_node(acpi_handle *handle);
+int acpi_get_node(acpi_handle handle);
 #else
-static inline int acpi_get_pxm(acpi_handle handle)
-{
-	return 0;
-}
-static inline int acpi_get_node(acpi_handle *handle)
+static inline int acpi_get_node(acpi_handle handle)
 {
 	return 0;
 }
include/linux/ioport.h  +11 -1
···
 
 #define IORESOURCE_EXCLUSIVE	0x08000000	/* Userland may not map this resource */
 #define IORESOURCE_DISABLED	0x10000000
-#define IORESOURCE_UNSET	0x20000000
+#define IORESOURCE_UNSET	0x20000000	/* No address assigned yet */
 #define IORESOURCE_AUTO		0x40000000
 #define IORESOURCE_BUSY		0x80000000	/* Driver has marked this resource busy */
 
···
 {
 	return res->flags & IORESOURCE_TYPE_BITS;
 }
+/* True iff r1 completely contains r2 */
+static inline bool resource_contains(struct resource *r1, struct resource *r2)
+{
+	if (resource_type(r1) != resource_type(r2))
+		return false;
+	if (r1->flags & IORESOURCE_UNSET || r2->flags & IORESOURCE_UNSET)
+		return false;
+	return r1->start <= r2->start && r1->end >= r2->end;
+}
+
 
 /* Convenience shorthand with allocation */
 #define request_region(start,n,name)	__request_region(&ioport_resource, (start), (n), (name), 0)
include/linux/pci.h  +5 -6
···
 #include <linux/atomic.h>
 #include <linux/device.h>
 #include <linux/io.h>
-#include <linux/irqreturn.h>
 #include <uapi/linux/pci.h>
 
 #include <linux/pci_ids.h>
···
 	PCI_DEV_FLAGS_NO_D3 = (__force pci_dev_flags_t) 2,
 	/* Provide indication device is assigned by a Virtual Machine Manager */
 	PCI_DEV_FLAGS_ASSIGNED = (__force pci_dev_flags_t) 4,
+	/* Flag for quirk use to store if quirk-specific ACS is enabled */
+	PCI_DEV_FLAGS_ACS_ENABLED_QUIRK = (__force pci_dev_flags_t) 8,
 };
 
 enum pci_irq_reroute_variant {
···
 	unsigned int		is_added:1;
 };
 
-#define pci_bus_b(n)	list_entry(n, struct pci_bus, node)
 #define to_pci_bus(n)	container_of(n, struct pci_bus, dev)
 
 /*
···
 int __must_check pci_bus_alloc_resource(struct pci_bus *bus,
 			struct resource *res, resource_size_t size,
 			resource_size_t align, resource_size_t min,
-			unsigned int type_mask,
+			unsigned long type_mask,
 			resource_size_t (*alignf)(void *,
 						  const struct resource *,
 						  resource_size_t,
···
 void pci_fixup_device(enum pci_fixup_pass pass, struct pci_dev *dev);
 struct pci_dev *pci_get_dma_source(struct pci_dev *dev);
 int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags);
+void pci_dev_specific_enable_acs(struct pci_dev *dev);
 #else
 static inline void pci_fixup_device(enum pci_fixup_pass pass,
 				    struct pci_dev *dev) { }
···
 {
 	return -ENOTTY;
 }
+static inline void pci_dev_specific_enable_acs(struct pci_dev *dev) { }
 #endif
 
 void __iomem *pcim_iomap(struct pci_dev *pdev, int bar, unsigned long maxlen);
···
 #ifdef CONFIG_PCI_IOV
 int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn);
 void pci_disable_sriov(struct pci_dev *dev);
-irqreturn_t pci_sriov_migration(struct pci_dev *dev);
 int pci_num_vf(struct pci_dev *dev);
 int pci_vfs_assigned(struct pci_dev *dev);
 int pci_sriov_set_totalvfs(struct pci_dev *dev, u16 numvfs);
···
 static inline int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn)
 { return -ENODEV; }
 static inline void pci_disable_sriov(struct pci_dev *dev) { }
-static inline irqreturn_t pci_sriov_migration(struct pci_dev *dev)
-{ return IRQ_NONE; }
 static inline int pci_num_vf(struct pci_dev *dev) { return 0; }
 static inline int pci_vfs_assigned(struct pci_dev *dev)
 { return 0; }
kernel/resource.c  +4 -8
···
 		res->end = max;
 }
 
-static bool resource_contains(struct resource *res1, struct resource *res2)
-{
-	return res1->start <= res2->start && res1->end >= res2->end;
-}
-
 /*
  * Find empty slot in the resource tree with the given range and
  * alignment constraints
···
 		arch_remove_reservations(&tmp);
 
 		/* Check for overflow after ALIGN() */
-		avail = *new;
 		avail.start = ALIGN(tmp.start, constraint->align);
 		avail.end = tmp.end;
+		avail.flags = new->flags & ~IORESOURCE_UNSET;
 		if (avail.start >= tmp.start) {
+			alloc.flags = avail.flags;
 			alloc.start = constraint->alignf(constraint->alignf_data, &avail,
 					size, constraint->align);
 			alloc.end = alloc.start + size - 1;
···
 	res->name = name;
 	res->start = start;
 	res->end = start + n - 1;
-	res->flags = IORESOURCE_BUSY;
-	res->flags |= flags;
+	res->flags = resource_type(parent);
+	res->flags |= IORESOURCE_BUSY | flags;
 
 	write_lock(&resource_lock);
 
lib/vsprintf.c  +9 -4
···
 		specp = &mem_spec;
 		decode = 0;
 	}
-	p = number(p, pend, res->start, *specp);
-	if (res->start != res->end) {
-		*p++ = '-';
-		p = number(p, pend, res->end, *specp);
+	if (decode && res->flags & IORESOURCE_UNSET) {
+		p = string(p, pend, "size ", str_spec);
+		p = number(p, pend, resource_size(res), *specp);
+	} else {
+		p = number(p, pend, res->start, *specp);
+		if (res->start != res->end) {
+			*p++ = '-';
+			p = number(p, pend, res->end, *specp);
+		}
 	}
 	if (decode) {
 		if (res->flags & IORESOURCE_MEM_64)