Merge git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/pci-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/pci-2.6: (64 commits)
PCI: make pci_bus a struct device
PCI: fix codingstyle issues in include/linux/pci.h
PCI: fix codingstyle issues in drivers/pci/pci.h
PCI: PCIE ASPM support
PCI: Fix fakephp deadlock
PCI: modify SB700 SATA MSI quirk
PCI: Run ACPI _OSC method on root bridges only
PCI ACPI: AER driver should only register PCIe devices with _OSC
PCI ACPI: Added a function to register _OSC with only PCIe devices.
PCI: constify function pointer tables
PCI: Convert drivers/pci/proc.c to use unlocked_ioctl
pciehp: block new requests from the device before power off
pciehp: workaround against Bad DLLP during power off
pciehp: wait for 1000ms before LED operation after power off
PCI: Remove pci_enable_device_bars() from documentation
PCI: Remove pci_enable_device_bars()
PCI: Remove users of pci_enable_device_bars()
PCI: Add pci_enable_device_{io,mem} interfaces
PCI: avoid save the same type of cap multiple times
PCI: correctly initialize a structure for pcie_save_pcix_state()
...

+2101 -897
+1 -36
Documentation/pci.txt
··· 274 274 o allocate an IRQ (if BIOS did not). 275 275 276 276 NOTE: pci_enable_device() can fail! Check the return value. 277 - NOTE2: Also see pci_enable_device_bars() below. Drivers can 278 - attempt to enable only a subset of BARs they need. 279 277 280 278 [ OS BUG: we don't check resource allocations before enabling those 281 279 resources. The sequence would make more sense if we called ··· 603 605 604 606 605 607 606 - 10. pci_enable_device_bars() and Legacy I/O Port space 607 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 608 - 609 - Large servers may not be able to provide I/O port resources to all PCI 610 - devices. I/O Port space is only 64KB on Intel Architecture[1] and is 611 - likely also fragmented since the I/O base register of PCI-to-PCI 612 - bridge will usually be aligned to a 4KB boundary[2]. On such systems, 613 - pci_enable_device() and pci_request_region() will fail when 614 - attempting to enable I/O Port regions that don't have I/O Port 615 - resources assigned. 616 - 617 - Fortunately, many PCI devices which request I/O Port resources also 618 - provide access to the same registers via MMIO BARs. These devices can 619 - be handled without using I/O port space and the drivers typically 620 - offer a CONFIG_ option to only use MMIO regions 621 - (e.g. CONFIG_TULIP_MMIO). PCI devices typically provide I/O port 622 - interface for legacy OSes and will work when I/O port resources are not 623 - assigned. The "PCI Local Bus Specification Revision 3.0" discusses 624 - this on p.44, "IMPLEMENTATION NOTE". 625 - 626 - If your PCI device driver doesn't need I/O port resources assigned to 627 - I/O Port BARs, you should use pci_enable_device_bars() instead of 628 - pci_enable_device() in order not to enable I/O port regions for the 629 - corresponding devices. In addition, you should use 630 - pci_request_selected_regions() and pci_release_selected_regions() 631 - instead of pci_request_regions()/pci_release_regions() in order not to 632 - request/release I/O port regions for the corresponding devices. 633 - 634 - [1] Some systems support 64KB I/O port space per PCI segment. 635 - [2] Some PCI-to-PCI bridges support optional 1KB aligned I/O base. 636 - 637 - 638 - 639 - 11. MMIO Space and "Write Posting" 608 + 10. MMIO Space and "Write Posting" 640 609 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 641 610 642 611 Converting a driver from using I/O Port space to using MMIO space
-5
arch/alpha/Kconfig
··· 318 318 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 319 319 VESA. If you have PCI, say Y, otherwise N. 320 320 321 - The PCI-HOWTO, available from 322 - <http://www.tldp.org/docs.html#howto>, contains valuable 323 - information about which PCI hardware does work under Linux and which 324 - doesn't. 325 - 326 321 config PCI_DOMAINS 327 322 bool 328 323 default y
-5
arch/arm/Kconfig
··· 577 577 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 578 578 VESA. If you have PCI, say Y, otherwise N. 579 579 580 - The PCI-HOWTO, available from 581 - <http://www.tldp.org/docs.html#howto>, contains valuable 582 - information about which PCI hardware does work under Linux and which 583 - doesn't. 584 - 585 580 config PCI_SYSCALL 586 581 def_bool PCI 587 582
-5
arch/frv/Kconfig
··· 322 322 onboard. If you have one of these boards and you wish to use the PCI 323 323 facilities, say Y here. 324 324 325 - The PCI-HOWTO, available from 326 - <http://www.tldp.org/docs.html#howto>, contains valuable 327 - information about which PCI hardware does work under Linux and which 328 - doesn't. 329 - 330 325 config RESERVE_DMA_COHERENT 331 326 bool "Reserve DMA coherent memory" 332 327 depends on PCI && !MMU
-5
arch/m32r/Kconfig
··· 359 359 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 360 360 VESA. If you have PCI, say Y, otherwise N. 361 361 362 - The PCI-HOWTO, available from 363 - <http://www.linuxdoc.org/docs.html#howto>, contains valuable 364 - information about which PCI hardware does work under Linux and which 365 - doesn't. 366 - 367 362 choice 368 363 prompt "PCI access mode" 369 364 depends on PCI
-5
arch/m68k/Kconfig
··· 145 145 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 146 146 VESA. If you have PCI, say Y, otherwise N. 147 147 148 - The PCI-HOWTO, available from 149 - <http://www.tldp.org/docs.html#howto>, contains valuable 150 - information about which PCI hardware does work under Linux and which 151 - doesn't. 152 - 153 148 config MAC 154 149 bool "Macintosh support" 155 150 depends on !MMU_SUN3
-5
arch/mips/Kconfig
··· 1961 1961 your box. Other bus systems are ISA, EISA, or VESA. If you have PCI, 1962 1962 say Y, otherwise N. 1963 1963 1964 - The PCI-HOWTO, available from 1965 - <http://www.tldp.org/docs.html#howto>, contains valuable 1966 - information about which PCI hardware does work under Linux and which 1967 - doesn't. 1968 - 1969 1964 config PCI_DOMAINS 1970 1965 bool 1971 1966
-5
arch/sh/drivers/pci/Kconfig
··· 6 6 bus system, i.e. the way the CPU talks to the other stuff inside 7 7 your box. If you have PCI, say Y, otherwise N. 8 8 9 - The PCI-HOWTO, available from 10 - <http://www.tldp.org/docs.html#howto>, contains valuable 11 - information about which PCI hardware does work under Linux and which 12 - doesn't. 13 - 14 9 config SH_PCIDMA_NONCOHERENT 15 10 bool "Cache and PCI noncoherent" 16 11 depends on PCI
-5
arch/sparc64/Kconfig
··· 351 351 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 352 352 VESA. If you have PCI, say Y, otherwise N. 353 353 354 - The PCI-HOWTO, available from 355 - <http://www.tldp.org/docs.html#howto>, contains valuable 356 - information about which PCI hardware does work under Linux and which 357 - doesn't. 358 - 359 354 config PCI_DOMAINS 360 355 def_bool PCI 361 356
-5
arch/x86/Kconfig
··· 1369 1369 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 1370 1370 VESA. If you have PCI, say Y, otherwise N. 1371 1371 1372 - The PCI-HOWTO, available from 1373 - <http://www.tldp.org/docs.html#howto>, contains valuable 1374 - information about which PCI hardware does work under Linux and which 1375 - doesn't. 1376 - 1377 1372 choice 1378 1373 prompt "PCI access mode" 1379 1374 depends on X86_32 && PCI && !X86_VISWS
+23 -20
arch/x86/kernel/quirks.c
··· 30 30 raw_pci_ops->read(0, 0, 0x40, 0x4c, 2, &word); 31 31 32 32 if (!(word & (1 << 13))) { 33 - printk(KERN_INFO "Intel E7520/7320/7525 detected. " 34 - "Disabling irq balancing and affinity\n"); 33 + dev_info(&dev->dev, "Intel E7520/7320/7525 detected; " 34 + "disabling irq balancing and affinity\n"); 35 35 #ifdef CONFIG_IRQBALANCE 36 36 irqbalance_disable(""); 37 37 #endif ··· 104 104 pci_read_config_dword(dev, 0xF0, &rcba); 105 105 rcba &= 0xFFFFC000; 106 106 if (rcba == 0) { 107 - printk(KERN_DEBUG "RCBA disabled. Cannot force enable HPET\n"); 107 + dev_printk(KERN_DEBUG, &dev->dev, "RCBA disabled; " 108 + "cannot force enable HPET\n"); 108 109 return; 109 110 } 110 111 111 112 /* use bits 31:14, 16 kB aligned */ 112 113 rcba_base = ioremap_nocache(rcba, 0x4000); 113 114 if (rcba_base == NULL) { 114 - printk(KERN_DEBUG "ioremap failed. Cannot force enable HPET\n"); 115 + dev_printk(KERN_DEBUG, &dev->dev, "ioremap failed; " 116 + "cannot force enable HPET\n"); 115 117 return; 116 118 } 117 119 ··· 124 122 /* HPET is enabled in HPTC. Just not reported by BIOS */ 125 123 val = val & 0x3; 126 124 force_hpet_address = 0xFED00000 | (val << 12); 127 - printk(KERN_DEBUG "Force enabled HPET at base address 0x%lx\n", 128 - force_hpet_address); 125 + dev_printk(KERN_DEBUG, &dev->dev, "Force enabled HPET at " 126 + "0x%lx\n", force_hpet_address); 129 127 iounmap(rcba_base); 130 128 return; 131 129 } ··· 144 142 if (err) { 145 143 force_hpet_address = 0; 146 144 iounmap(rcba_base); 147 - printk(KERN_DEBUG "Failed to force enable HPET\n"); 145 + dev_printk(KERN_DEBUG, &dev->dev, 146 + "Failed to force enable HPET\n"); 148 147 } else { 149 148 force_hpet_resume_type = ICH_FORCE_HPET_RESUME; 150 - printk(KERN_DEBUG "Force enabled HPET at base address 0x%lx\n", 151 - force_hpet_address); 149 + dev_printk(KERN_DEBUG, &dev->dev, "Force enabled HPET at " 150 + "0x%lx\n", force_hpet_address); 152 151 } 153 152 } 154 153 ··· 211 208 if (val & 0x4) { 212 209 val &= 0x3; 213 210 force_hpet_address = 0xFED00000 | (val << 12); 214 - printk(KERN_DEBUG "HPET at base address 0x%lx\n", 215 - force_hpet_address); 211 + dev_printk(KERN_DEBUG, &dev->dev, "HPET at 0x%lx\n", 212 + force_hpet_address); 216 213 return; 217 214 } 218 215 ··· 232 229 /* HPET is enabled in HPTC. Just not reported by BIOS */ 233 230 val &= 0x3; 234 231 force_hpet_address = 0xFED00000 | (val << 12); 235 - printk(KERN_DEBUG "Force enabled HPET at base address 0x%lx\n", 236 - force_hpet_address); 232 + dev_printk(KERN_DEBUG, &dev->dev, "Force enabled HPET at " 233 + "0x%lx\n", force_hpet_address); 237 234 cached_dev = dev; 238 235 force_hpet_resume_type = OLD_ICH_FORCE_HPET_RESUME; 239 236 return; 240 237 } 241 238 242 - printk(KERN_DEBUG "Failed to force enable HPET\n"); 239 + dev_printk(KERN_DEBUG, &dev->dev, "Failed to force enable HPET\n"); 243 240 } 244 241 245 242 /* ··· 297 294 */ 298 295 if (val & 0x80) { 299 296 force_hpet_address = (val & ~0x3ff); 300 - printk(KERN_DEBUG "HPET at base address 0x%lx\n", 301 - force_hpet_address); 297 + dev_printk(KERN_DEBUG, &dev->dev, "HPET at 0x%lx\n", 298 + force_hpet_address); 302 299 return; 303 300 } 304 301 ··· 312 309 pci_read_config_dword(dev, 0x68, &val); 313 310 if (val & 0x80) { 314 311 force_hpet_address = (val & ~0x3ff); 315 - printk(KERN_DEBUG "Force enabled HPET at base address 0x%lx\n", 316 - force_hpet_address); 312 + dev_printk(KERN_DEBUG, &dev->dev, "Force enabled HPET at " 313 + "0x%lx\n", force_hpet_address); 317 314 cached_dev = dev; 318 315 force_hpet_resume_type = VT8237_FORCE_HPET_RESUME; 319 316 return; 320 317 } 321 318 322 - printk(KERN_DEBUG "Failed to force enable HPET\n"); 319 + dev_printk(KERN_DEBUG, &dev->dev, "Failed to force enable HPET\n"); 323 320 } 324 321 325 322 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8235, ··· 347 344 pci_read_config_dword(dev, 0x44, &val); 348 345 force_hpet_address = val & 0xfffffffe; 349 346 force_hpet_resume_type = NVIDIA_FORCE_HPET_RESUME; 350 - printk(KERN_DEBUG "Force enabled HPET at base address 0x%lx\n", 347 + dev_printk(KERN_DEBUG, &dev->dev, "Force enabled HPET at 0x%lx\n", 351 348 force_hpet_address); 352 349 cached_dev = dev; 353 350 return;
+11 -11
arch/x86/pci/fixup.c
··· 17 17 int pxb, reg; 18 18 u8 busno, suba, subb; 19 19 20 - printk(KERN_WARNING "PCI: Searching for i450NX host bridges on %s\n", pci_name(d)); 20 + dev_warn(&d->dev, "Searching for i450NX host bridges\n"); 21 21 reg = 0xd0; 22 22 for(pxb = 0; pxb < 2; pxb++) { 23 23 pci_read_config_byte(d, reg++, &busno); ··· 41 41 */ 42 42 u8 busno; 43 43 pci_read_config_byte(d, 0x4a, &busno); 44 - printk(KERN_INFO "PCI: i440KX/GX host bridge %s: secondary bus %02x\n", pci_name(d), busno); 44 + dev_info(&d->dev, "i440KX/GX host bridge; secondary bus %02x\n", busno); 45 45 pci_scan_bus_with_sysdata(busno); 46 46 pcibios_last_bus = -1; 47 47 } ··· 55 55 */ 56 56 int i; 57 57 58 - printk(KERN_WARNING "PCI: Fixing base address flags for device %s\n", pci_name(d)); 58 + dev_warn(&d->dev, "Fixing base address flags\n"); 59 59 for(i = 0; i < 4; i++) 60 60 d->resource[i].flags |= PCI_BASE_ADDRESS_SPACE_IO; 61 61 } ··· 68 68 * Fix class to be PCI_CLASS_STORAGE_SCSI 69 69 */ 70 70 if (!d->class) { 71 - printk(KERN_WARNING "PCI: fixing NCR 53C810 class code for %s\n", pci_name(d)); 71 + dev_warn(&d->dev, "Fixing NCR 53C810 class code\n"); 72 72 d->class = PCI_CLASS_STORAGE_SCSI << 8; 73 73 } 74 74 } ··· 80 80 * SiS 5597 and 5598 chipsets require latency timer set to 81 81 * at most 32 to avoid lockups. */ 83 - DBG("PCI: Setting max latency to 32\n"); 83 + dev_dbg(&d->dev, "Setting max latency to 32\n"); 84 84 pcibios_max_latency = 32; 85 85 } 86 86 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_5597, pci_fixup_latency); ··· 138 138 139 139 pci_read_config_byte(d, where, &v); 140 140 if (v & ~mask) { 141 - printk(KERN_WARNING "Disabling VIA memory write queue (PCI ID %04x, rev %02x): [%02x] %02x & %02x -> %02x\n", \ 141 + dev_warn(&d->dev, "Disabling VIA memory write queue (PCI ID %04x, rev %02x): [%02x] %02x & %02x -> %02x\n", \ 142 142 d->device, d->revision, where, v, mask, v & mask); 143 143 v &= mask; 144 144 pci_write_config_byte(d, where, v); ··· 200 200 * Apply fixup if needed, but don't touch disconnect state 201 201 */ 202 202 if ((val & 0x00FF0000) != 0x00010000) { 203 - printk(KERN_WARNING "PCI: nForce2 C1 Halt Disconnect fixup\n"); 203 + dev_warn(&dev->dev, "nForce2 C1 Halt Disconnect fixup\n"); 204 204 pci_write_config_dword(dev, 0x6c, (val & 0xFF00FFFF) | 0x00010000); 205 205 } 206 206 } ··· 348 348 pci_read_config_word(pdev, PCI_COMMAND, &config); 349 349 if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) { 350 350 pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW; 351 - printk(KERN_DEBUG "Boot video device is %s\n", pci_name(pdev)); 351 + dev_printk(KERN_DEBUG, &pdev->dev, "Boot video device\n"); 352 352 } 353 353 } 354 354 DECLARE_PCI_FIXUP_FINAL(PCI_ANY_ID, PCI_ANY_ID, pci_fixup_video); ··· 388 388 /* verify the change for status output */ 389 389 pci_read_config_byte(dev, 0x50, &val); 390 390 if (val & 0x40) 391 - printk(KERN_INFO "PCI: Detected MSI K8T Neo2-FIR, " 391 + dev_info(&dev->dev, "Detected MSI K8T Neo2-FIR; " 392 392 "can't enable onboard soundcard!\n") 393 393 else 394 - printk(KERN_INFO "PCI: Detected MSI K8T Neo2-FIR, " 395 - "enabled onboard soundcard.\n"); 394 + dev_info(&dev->dev, "Detected MSI K8T Neo2-FIR; " 395 + "enabled onboard soundcard\n"); 396 396 } 397 397 } 398 398 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237,
-5
arch/xtensa/Kconfig
··· 174 174 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 175 175 VESA. If you have PCI, say Y, otherwise N. 176 176 177 - The PCI-HOWTO, available from 178 - <http://www.linuxdoc.org/docs.html#howto>, contains valuable 179 - information about which PCI hardware does work under Linux and which 180 - doesn't 181 - 182 177 source "drivers/pci/Kconfig" 183 178 184 179 config HOTPLUG
+1 -1
drivers/ata/pata_cs5520.c
··· 229 229 return -ENOMEM; 230 230 231 231 /* Perform set up for DMA */ 232 - if (pci_enable_device_bars(pdev, 1<<2)) { 232 + if (pci_enable_device_io(pdev)) { 233 233 printk(KERN_ERR DRV_NAME ": unable to configure BAR2.\n"); 234 234 return -ENODEV; 235 235 }
+1 -1
drivers/i2c/busses/scx200_acb.c
··· 492 492 iface->pdev = pdev; 493 493 iface->bar = bar; 494 494 495 - rc = pci_enable_device_bars(iface->pdev, 1 << iface->bar); 495 + rc = pci_enable_device_io(iface->pdev); 496 496 if (rc) 497 497 goto errout_free; 498 498
+8 -2
drivers/ide/pci/cs5520.c
··· 156 156 ide_setup_pci_noise(dev, d); 157 157 158 158 /* We must not grab the entire device, it has 'ISA' space in its 159 - BARS too and we will freak out other bits of the kernel */ 160 - if (pci_enable_device_bars(dev, 1<<2)) { 159 + * BARS too and we will freak out other bits of the kernel 160 + * 161 + * pci_enable_device_bars() is going away. I replaced it with 162 + * IO only enable for now but I'll need confirmation this is 163 + * allright for that device. If not, it will need some kind of 164 + * quirk. --BenH. 165 + */ 166 + if (pci_enable_device_io(dev)) { 161 167 printk(KERN_WARNING "%s: Unable to enable 55x0.\n", d->name); 162 168 return -ENODEV; 163 169 }
+4 -2
drivers/ide/setup-pci.c
··· 228 228 * @d: IDE port info 229 229 * 230 230 * Enable the IDE PCI device. We attempt to enable the device in full 231 - * but if that fails then we only need BAR4 so we will enable that. 231 + * but if that fails then we only need IO space. The PCI code should 232 + * have setup the proper resources for us already for controllers in 233 + * legacy mode. 232 234 * 233 235 * Returns zero on success or an error code 234 236 */ ··· 240 238 int ret; 241 239 242 240 if (pci_enable_device(dev)) { 243 - ret = pci_enable_device_bars(dev, 1 << 4); 241 + ret = pci_enable_device_io(dev); 244 242 if (ret < 0) { 245 243 printk(KERN_WARNING "%s: (ide_setup_pci_device:) " 246 244 "Could not enable device.\n", d->name);
+13 -5
drivers/pci/bus.c
··· 108 108 void pci_bus_add_devices(struct pci_bus *bus) 109 109 { 110 110 struct pci_dev *dev; 111 + struct pci_bus *child_bus; 111 112 int retval; 112 113 113 114 list_for_each_entry(dev, &bus->devices, bus_list) { ··· 139 138 up_write(&pci_bus_sem); 140 139 } 141 140 pci_bus_add_devices(dev->subordinate); 142 - retval = sysfs_create_link(&dev->subordinate->class_dev.kobj, 143 - &dev->dev.kobj, "bridge"); 141 + 142 + /* register the bus with sysfs as the parent is now 143 + * properly registered. */ 144 + child_bus = dev->subordinate; 145 + child_bus->dev.parent = child_bus->bridge; 146 + retval = device_register(&child_bus->dev); 147 + if (!retval) 148 + retval = device_create_file(&child_bus->dev, 149 + &dev_attr_cpuaffinity); 144 150 if (retval) 145 - dev_err(&dev->dev, "Error creating sysfs " 146 - "bridge symlink, continuing...\n"); 151 + dev_err(&dev->dev, "Error registering pci_bus" 152 + " device bridge symlink," 153 + " continuing...\n"); 147 154 } 148 155 } 149 156 } ··· 213 204 } 214 205 up_read(&pci_bus_sem); 215 206 } 216 - EXPORT_SYMBOL_GPL(pci_walk_bus); 217 207 218 208 EXPORT_SYMBOL(pci_bus_alloc_resource); 219 209 EXPORT_SYMBOL_GPL(pci_bus_add_device);
+17 -3
drivers/pci/dmar.c
··· 25 25 26 26 #include <linux/pci.h> 27 27 #include <linux/dmar.h> 28 + #include "iova.h" 28 29 29 30 #undef PREFIX 30 31 #define PREFIX "DMAR:" ··· 264 263 if (!dmar) 265 264 return -ENODEV; 266 265 267 - if (!dmar->width) { 268 - printk (KERN_WARNING PREFIX "Zero: Invalid DMAR haw\n"); 266 + if (dmar->width < PAGE_SHIFT_4K - 1) { 267 + printk(KERN_WARNING PREFIX "Invalid DMAR haw\n"); 269 268 return -EINVAL; 270 269 } 271 270 ··· 302 301 int __init dmar_table_init(void) 303 302 { 304 303 305 - parse_dmar_table(); 304 + int ret; 305 + 306 + ret = parse_dmar_table(); 307 + if (ret) { 308 + printk(KERN_INFO PREFIX "parse DMAR table failure.\n"); 309 + return ret; 310 + } 311 + 306 312 if (list_empty(&dmar_drhd_units)) { 307 313 printk(KERN_INFO PREFIX "No DMAR devices found\n"); 308 314 return -ENODEV; 309 315 } 316 + 317 + if (list_empty(&dmar_rmrr_units)) { 318 + printk(KERN_INFO PREFIX "No RMRR found\n"); 319 + return -ENODEV; 320 + } 321 + 310 322 return 0; 311 323 } 312 324
+2 -2
drivers/pci/hotplug/Kconfig
··· 3 3 # 4 4 5 5 menuconfig HOTPLUG_PCI 6 - tristate "Support for PCI Hotplug (EXPERIMENTAL)" 7 - depends on PCI && EXPERIMENTAL && HOTPLUG 6 + tristate "Support for PCI Hotplug" 7 + depends on PCI && HOTPLUG 8 8 ---help--- 9 9 Say Y here if you have a motherboard with a PCI Hotplug controller. 10 10 This allows you to add and remove PCI cards while the machine is
+3 -1
drivers/pci/hotplug/Makefile
··· 3 3 # 4 4 5 5 obj-$(CONFIG_HOTPLUG_PCI) += pci_hotplug.o 6 - obj-$(CONFIG_HOTPLUG_PCI_FAKE) += fakephp.o 7 6 obj-$(CONFIG_HOTPLUG_PCI_COMPAQ) += cpqphp.o 8 7 obj-$(CONFIG_HOTPLUG_PCI_IBM) += ibmphp.o 9 8 obj-$(CONFIG_HOTPLUG_PCI_ACPI) += acpiphp.o ··· 14 15 obj-$(CONFIG_HOTPLUG_PCI_RPA) += rpaphp.o 15 16 obj-$(CONFIG_HOTPLUG_PCI_RPA_DLPAR) += rpadlpar_io.o 16 17 obj-$(CONFIG_HOTPLUG_PCI_SGI) += sgi_hotplug.o 18 + 19 + # Link this last so it doesn't claim devices that have a real hotplug driver 20 + obj-$(CONFIG_HOTPLUG_PCI_FAKE) += fakephp.o 17 21 18 22 pci_hotplug-objs := pci_hotplug_core.o 19 23
-1
drivers/pci/hotplug/acpiphp.h
··· 113 113 u8 device; /* pci device# */ 114 114 115 115 u32 sun; /* ACPI _SUN (slot unique number) */ 116 - u32 slotno; /* slot number relative to bridge */ 117 116 u32 flags; /* see below */ 118 117 }; 119 118
+2 -3
drivers/pci/hotplug/acpiphp_glue.c
··· 102 102 } 103 103 104 104 105 - /* callback routine to check the existence of ejectable slots */ 105 + /* callback routine to check for the existence of ejectable slots */ 106 106 static acpi_status 107 107 is_ejectable_slot(acpi_handle handle, u32 lvl, void *context, void **rv) 108 108 { ··· 117 117 } 118 118 } 119 119 120 - /* callback routine to check for the existance of a pci dock device */ 120 + /* callback routine to check for the existence of a pci dock device */ 121 121 static acpi_status 122 122 is_pci_dock_device(acpi_handle handle, u32 lvl, void *context, void **rv) 123 123 { ··· 1528 1528 acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer); 1529 1529 dbg("%s: re-enumerating slots under %s\n", 1530 1530 __FUNCTION__, objname); 1531 - acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer); 1532 1531 acpiphp_check_bridge(bridge); 1533 1532 } 1534 1533 return AE_OK ;
+35 -4
drivers/pci/hotplug/fakephp.c
··· 39 39 #include <linux/init.h> 40 40 #include <linux/string.h> 41 41 #include <linux/slab.h> 42 + #include <linux/workqueue.h> 42 43 #include "../pci.h" 43 44 44 45 #if !defined(MODULE) ··· 64 63 struct list_head node; 65 64 struct hotplug_slot *slot; 66 65 struct pci_dev *dev; 66 + struct work_struct remove_work; 67 + unsigned long removed; 67 68 }; 68 69 69 70 static int debug; 70 71 static LIST_HEAD(slot_list); 72 + static struct workqueue_struct *dummyphp_wq; 73 + 74 + static void pci_rescan_worker(struct work_struct *work); 75 + static DECLARE_WORK(pci_rescan_work, pci_rescan_worker); 71 76 72 77 static int enable_slot (struct hotplug_slot *slot); 73 78 static int disable_slot (struct hotplug_slot *slot); ··· 116 109 slot->name = &dev->dev.bus_id[0]; 117 110 dbg("slot->name = %s\n", slot->name); 118 111 119 - dslot = kmalloc(sizeof(struct dummy_slot), GFP_KERNEL); 112 + dslot = kzalloc(sizeof(struct dummy_slot), GFP_KERNEL); 120 113 if (!dslot) 121 114 goto error_info; 122 115 ··· 169 162 retval = pci_hp_deregister(dslot->slot); 170 163 if (retval) 171 164 err("Problem unregistering a slot %s\n", dslot->slot->name); 165 + } 166 + 167 + /* called from the single-threaded workqueue handler to remove a slot */ 168 + static void remove_slot_worker(struct work_struct *work) 169 + { 170 + struct dummy_slot *dslot = 171 + container_of(work, struct dummy_slot, remove_work); 172 + remove_slot(dslot); 172 173 } 173 174 174 175 /** ··· 282 267 pci_rescan_buses(&pci_root_buses); 283 268 } 284 269 270 + /* called from the single-threaded workqueue handler to rescan all pci buses */ 271 + static void pci_rescan_worker(struct work_struct *work) 272 + { 273 + pci_rescan(); 274 + } 285 275 286 276 static int enable_slot(struct hotplug_slot *hotplug_slot) 287 277 { 288 278 /* mis-use enable_slot for rescanning of the pci bus */ 289 - pci_rescan(); 279 + cancel_work_sync(&pci_rescan_work); 280 + queue_work(dummyphp_wq, &pci_rescan_work); 290 281 return -ENODEV; 291 282 } 292 283 ··· 327 306 err("Can't remove PCI devices with other PCI devices behind it yet.\n"); 328 307 return -ENODEV; 329 308 } 309 + if (test_and_set_bit(0, &dslot->removed)) { 310 + dbg("Slot already scheduled for removal\n"); 311 + return -ENODEV; 312 + } 330 313 /* search for subfunctions and disable them first */ 331 314 if (!(dslot->dev->devfn & 7)) { 332 315 for (func = 1; func < 8; func++) { ··· 353 328 /* remove the device from the pci core */ 354 329 pci_remove_bus_device(dslot->dev); 355 330 356 - /* blow away this sysfs entry and other parts. */ 357 - remove_slot(dslot); 331 + /* queue work item to blow away this sysfs entry and other parts. */ 332 + INIT_WORK(&dslot->remove_work, remove_slot_worker); 333 + queue_work(dummyphp_wq, &dslot->remove_work); 358 334 359 335 return 0; 360 336 } ··· 366 340 struct list_head *next; 367 341 struct dummy_slot *dslot; 368 342 343 + destroy_workqueue(dummyphp_wq); 369 344 list_for_each_safe (tmp, next, &slot_list) { 370 345 dslot = list_entry (tmp, struct dummy_slot, node); 371 346 remove_slot(dslot); ··· 377 350 static int __init dummyphp_init(void) 378 351 { 379 352 info(DRIVER_DESC "\n"); 353 + 354 + dummyphp_wq = create_singlethread_workqueue(MY_NAME); 355 + if (!dummyphp_wq) 356 + return -ENOMEM; 380 357 381 358 return pci_scan_buses(); 382 359 }
+7 -4
drivers/pci/hotplug/ibmphp_core.c
··· 761 761 debug("func->device << 3 | 0x0 = %x\n", func->device << 3 | 0x0); 762 762 763 763 for (j = 0; j < 0x08; j++) { 764 - temp = pci_find_slot(func->busno, (func->device << 3) | j); 765 - if (temp) 764 + temp = pci_get_bus_and_slot(func->busno, (func->device << 3) | j); 765 + if (temp) { 766 766 pci_remove_bus_device(temp); 767 + pci_dev_put(temp); 768 + } 767 769 } 770 + pci_dev_put(func->dev); 768 771 } 769 772 770 773 /* ··· 826 823 if (!(bus_structure_fixup(func->busno))) 827 824 flag = 1; 828 825 if (func->dev == NULL) 829 - func->dev = pci_find_slot(func->busno, 826 + func->dev = pci_get_bus_and_slot(func->busno, 830 827 PCI_DEVFN(func->device, func->function)); 831 828 832 829 if (func->dev == NULL) { ··· 839 836 if (num) 840 837 pci_bus_add_devices(bus); 841 838 842 - func->dev = pci_find_slot(func->busno, 839 + func->dev = pci_get_bus_and_slot(func->busno, 843 840 PCI_DEVFN(func->device, func->function)); 844 841 if (func->dev == NULL) { 845 842 err("ERROR... : pci_dev still NULL\n");
+2 -2
drivers/pci/hotplug/pci_hotplug_core.c
··· 137 137 int retval = 0; \ 138 138 if (try_module_get(ops->owner)) { \ 139 139 if (ops->get_##name) \ 140 - retval = ops->get_##name (slot, value); \ 140 + retval = ops->get_##name(slot, value); \ 141 141 else \ 142 142 *value = slot->info->name; \ 143 143 module_put(ops->owner); \ ··· 625 625 if ((slot->info == NULL) || (slot->ops == NULL)) 626 626 return -EINVAL; 627 627 if (slot->release == NULL) { 628 - dbg("Why are you trying to register a hotplug slot" 628 + dbg("Why are you trying to register a hotplug slot " 629 629 "without a proper release function?\n"); 630 630 return -EINVAL; 631 631 }
+3 -6
drivers/pci/hotplug/pciehp.h
··· 82 82 }; 83 83 84 84 struct controller { 85 - struct controller *next; 86 85 struct mutex crit_sect; /* critical section mutex */ 87 86 struct mutex ctrl_lock; /* controller lock */ 88 87 int num_slots; /* Number of slots on ctlr */ 89 88 int slot_num_inc; /* 1 or -1 */ 90 89 struct pci_dev *pci_dev; 91 90 struct list_head slot_list; 92 - struct slot *slot; 93 91 struct hpc_ops *hpc_ops; 94 92 wait_queue_head_t queue; /* sleep & wake process */ 95 - u8 bus; 96 - u8 device; 97 - u8 function; 98 93 u8 slot_device_offset; 99 94 u32 first_slot; /* First physical slot number */ /* PCIE only has 1 slot */ 100 95 u8 slot_bus; /* Bus where the slots handled by this controller sit */ 101 96 u8 ctrlcap; 102 - u16 vendor_id; 103 97 u8 cap_base; 104 98 struct timer_list poll_timer; 105 99 volatile int cmd_busy; ··· 155 161 extern int pciehp_unconfigure_device(struct slot *p_slot); 156 162 extern void pciehp_queue_pushbutton_work(struct work_struct *work); 157 163 int pcie_init(struct controller *ctrl, struct pcie_device *dev); 164 + int pciehp_enable_slot(struct slot *p_slot); 165 + int pciehp_disable_slot(struct slot *p_slot); 166 + int pcie_init_hardware_part2(struct controller *ctrl, struct pcie_device *dev); 158 167 159 168 static inline struct slot *pciehp_find_slot(struct controller *ctrl, u8 device) 160 169 {
+26 -7
drivers/pci/hotplug/pciehp_core.c
··· 453 453 454 454 pci_set_drvdata(pdev, ctrl); 455 455 456 - ctrl->bus = pdev->bus->number; /* ctrl bus */ 457 - ctrl->slot_bus = pdev->subordinate->number; /* bus controlled by this HPC */ 458 - 459 - ctrl->device = PCI_SLOT(pdev->devfn); 460 - ctrl->function = PCI_FUNC(pdev->devfn); 461 - dbg("%s: ctrl bus=0x%x, device=%x, function=%x, irq=%x\n", __FUNCTION__, 462 - ctrl->bus, ctrl->device, ctrl->function, pdev->irq); 456 + dbg("%s: ctrl bus=0x%x, device=%x, function=%x, irq=%x\n", 457 + __FUNCTION__, pdev->bus->number, PCI_SLOT(pdev->devfn), 458 + PCI_FUNC(pdev->devfn), pdev->irq); 463 459 464 460 /* Setup the slot information structures */ 465 461 rc = init_slots(ctrl); ··· 467 471 t_slot = pciehp_find_slot(ctrl, ctrl->slot_device_offset); 468 472 469 473 t_slot->hpc_ops->get_adapter_status(t_slot, &value); /* Check if slot is occupied */ 474 + if (value) { 475 + rc = pciehp_enable_slot(t_slot); 476 + if (rc) /* -ENODEV: shouldn't happen, but deal with it */ 477 + value = 0; 478 + } 470 479 if ((POWER_CTRL(ctrl->ctrlcap)) && !value) { 471 480 rc = t_slot->hpc_ops->power_off_slot(t_slot); /* Power off slot if not occupied*/ 472 481 if (rc) ··· 510 509 static int pciehp_resume (struct pcie_device *dev) 511 510 { 512 511 printk("%s ENTRY\n", __FUNCTION__); 512 + if (pciehp_force) { 513 + struct pci_dev *pdev = dev->port; 514 + struct controller *ctrl = pci_get_drvdata(pdev); 515 + struct slot *t_slot; 516 + u8 status; 517 + 518 + /* reinitialize the chipset's event detection logic */ 519 + pcie_init_hardware_part2(ctrl, dev); 520 + 521 + t_slot = pciehp_find_slot(ctrl, ctrl->slot_device_offset); 522 + 523 + /* Check if slot is occupied */ 524 + t_slot->hpc_ops->get_adapter_status(t_slot, &status); 525 + if (status) 526 + pciehp_enable_slot(t_slot); 527 + else 528 + pciehp_disable_slot(t_slot); 529 + } 513 530 return 0; 514 531 } 515 532 #endif
+2 -25
drivers/pci/hotplug/pciehp_ctrl.c
··· 37 37 #include "pciehp.h" 38 38 39 39 static void interrupt_event_handler(struct work_struct *work); 40 - static int pciehp_enable_slot(struct slot *p_slot); 41 - static int pciehp_disable_slot(struct slot *p_slot); 42 40 43 41 static int queue_interrupt_event(struct slot *p_slot, u32 event_type) 44 42 { ··· 195 197 __FUNCTION__); 196 198 return; 197 199 } 198 - /* 199 - * After turning power off, we must wait for at least 200 - * 1 second before taking any action that relies on 201 - * power having been removed from the slot/adapter. 202 - */ 203 - msleep(1000); 204 200 } 205 201 } 206 202 ··· 207 215 */ 208 216 static int board_added(struct slot *p_slot) 209 217 { 210 - u8 hp_slot; 211 218 int retval = 0; 212 219 struct controller *ctrl = p_slot->ctrl; 213 220 214 - hp_slot = p_slot->device - ctrl->slot_device_offset; 215 - 216 221 dbg("%s: slot device, slot offset, hp slot = %d, %d ,%d\n", 217 222 __FUNCTION__, p_slot->device, 218 - ctrl->slot_device_offset, hp_slot); 223 + ctrl->slot_device_offset, p_slot->hp_slot); 219 224 220 225 if (POWER_CTRL(ctrl->ctrlcap)) { 221 226 /* Power on slot */ ··· 270 281 */ 271 282 static int remove_board(struct slot *p_slot) 272 283 { 273 - u8 device; 274 - u8 hp_slot; 275 284 int retval = 0; 276 285 struct controller *ctrl = p_slot->ctrl; 277 286 ··· 277 290 if (retval) 278 291 return retval; 279 292 280 - device = p_slot->device; 281 - hp_slot = p_slot->device - ctrl->slot_device_offset; 282 - p_slot = pciehp_find_slot(ctrl, hp_slot + ctrl->slot_device_offset); 283 - 284 - dbg("In %s, hp_slot = %d\n", __FUNCTION__, hp_slot); 293 + dbg("In %s, hp_slot = %d\n", __FUNCTION__, p_slot->hp_slot); 285 294 286 295 if (POWER_CTRL(ctrl->ctrlcap)) { 287 296 /* power off slot */ ··· 604 621 mutex_unlock(&p_slot->ctrl->crit_sect); 605 622 return -EINVAL; 606 623 } 607 - /* 608 - * After turning power off, we must wait for at least 609 - * 1 second before taking any action that relies on 610 - * power having been removed from the slot/adapter. 611 - */ 612 - msleep(1000); 613 624 } 614 625 615 626 ret = remove_board(p_slot);
+202 -112
drivers/pci/hotplug/pciehp_hpc.c
··· 636 636 return retval; 637 637 } 638 638 639 + static inline int pcie_mask_bad_dllp(struct controller *ctrl) 640 + { 641 + struct pci_dev *dev = ctrl->pci_dev; 642 + int pos; 643 + u32 reg; 644 + 645 + pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); 646 + if (!pos) 647 + return 0; 648 + pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, &reg); 649 + if (reg & PCI_ERR_COR_BAD_DLLP) 650 + return 0; 651 + reg |= PCI_ERR_COR_BAD_DLLP; 652 + pci_write_config_dword(dev, pos + PCI_ERR_COR_MASK, reg); 653 + return 1; 654 + } 655 + 656 + static inline void pcie_unmask_bad_dllp(struct controller *ctrl) 657 + { 658 + struct pci_dev *dev = ctrl->pci_dev; 659 + u32 reg; 660 + int pos; 661 + 662 + pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); 663 + if (!pos) 664 + return; 665 + pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, &reg); 666 + if (!(reg & PCI_ERR_COR_BAD_DLLP)) 667 + return; 668 + reg &= ~PCI_ERR_COR_BAD_DLLP; 669 + pci_write_config_dword(dev, pos + PCI_ERR_COR_MASK, reg); 670 + } 671 + 639 672 static int hpc_power_off_slot(struct slot * slot) 640 673 { 641 674 struct controller *ctrl = slot->ctrl; 642 675 u16 slot_cmd; 643 676 u16 cmd_mask; 644 677 int retval = 0; 678 + int changed; 645 679 646 680 dbg("%s: slot->hp_slot %x\n", __FUNCTION__, slot->hp_slot); 681 + 682 + /* 683 + * Set Bad DLLP Mask bit in Correctable Error Mask 684 + * Register. This is the workaround against Bad DLLP error 685 + * that sometimes happens during turning power off the slot 686 + * which conforms to PCI Express 1.0a spec. 687 + */ 688 + changed = pcie_mask_bad_dllp(ctrl); 647 689 648 690 slot_cmd = POWER_OFF; 649 691 cmd_mask = PWR_CTRL; ··· 715 673 } 716 674 dbg("%s: SLOTCTRL %x write cmd %x\n", 717 675 __FUNCTION__, ctrl->cap_base + SLOTCTRL, slot_cmd); 676 + 677 + /* 678 + * After turning power off, we must wait for at least 1 second 679 + * before taking any action that relies on power having been 680 + * removed from the slot/adapter. 
681 + */ 682 + msleep(1000); 683 + 684 + if (changed) 685 + pcie_unmask_bad_dllp(ctrl); 718 686 719 687 return retval; 720 688 } ··· 1119 1067 } 1120 1068 #endif 1121 1069 1122 - int pcie_init(struct controller * ctrl, struct pcie_device *dev) 1070 + static int pcie_init_hardware_part1(struct controller *ctrl, 1071 + struct pcie_device *dev) 1123 1072 { 1124 1073 int rc; 1125 1074 u16 temp_word; 1126 - u16 cap_reg; 1075 + u32 slot_cap; 1076 + u16 slot_status; 1077 + 1078 + rc = pciehp_readl(ctrl, SLOTCAP, &slot_cap); 1079 + if (rc) { 1080 + err("%s: Cannot read SLOTCAP register\n", __FUNCTION__); 1081 + return -1; 1082 + } 1083 + 1084 + /* Mask Hot-plug Interrupt Enable */ 1085 + rc = pciehp_readw(ctrl, SLOTCTRL, &temp_word); 1086 + if (rc) { 1087 + err("%s: Cannot read SLOTCTRL register\n", __FUNCTION__); 1088 + return -1; 1089 + } 1090 + 1091 + dbg("%s: SLOTCTRL %x value read %x\n", 1092 + __FUNCTION__, ctrl->cap_base + SLOTCTRL, temp_word); 1093 + temp_word = (temp_word & ~HP_INTR_ENABLE & ~CMD_CMPL_INTR_ENABLE) | 1094 + 0x00; 1095 + 1096 + rc = pciehp_writew(ctrl, SLOTCTRL, temp_word); 1097 + if (rc) { 1098 + err("%s: Cannot write to SLOTCTRL register\n", __FUNCTION__); 1099 + return -1; 1100 + } 1101 + 1102 + rc = pciehp_readw(ctrl, SLOTSTATUS, &slot_status); 1103 + if (rc) { 1104 + err("%s: Cannot read SLOTSTATUS register\n", __FUNCTION__); 1105 + return -1; 1106 + } 1107 + 1108 + temp_word = 0x1F; /* Clear all events */ 1109 + rc = pciehp_writew(ctrl, SLOTSTATUS, temp_word); 1110 + if (rc) { 1111 + err("%s: Cannot write to SLOTSTATUS register\n", __FUNCTION__); 1112 + return -1; 1113 + } 1114 + return 0; 1115 + } 1116 + 1117 + int pcie_init_hardware_part2(struct controller *ctrl, struct pcie_device *dev) 1118 + { 1119 + int rc; 1120 + u16 temp_word; 1127 1121 u16 intr_enable = 0; 1122 + u32 slot_cap; 1123 + u16 slot_status; 1124 + 1125 + rc = pciehp_readw(ctrl, SLOTCTRL, &temp_word); 1126 + if (rc) { 1127 + err("%s: Cannot read SLOTCTRL register\n", 
__FUNCTION__); 1128 + goto abort; 1129 + } 1130 + 1131 + intr_enable = intr_enable | PRSN_DETECT_ENABLE; 1132 + 1133 + rc = pciehp_readl(ctrl, SLOTCAP, &slot_cap); 1134 + if (rc) { 1135 + err("%s: Cannot read SLOTCAP register\n", __FUNCTION__); 1136 + goto abort; 1137 + } 1138 + 1139 + if (ATTN_BUTTN(slot_cap)) 1140 + intr_enable = intr_enable | ATTN_BUTTN_ENABLE; 1141 + 1142 + if (POWER_CTRL(slot_cap)) 1143 + intr_enable = intr_enable | PWR_FAULT_DETECT_ENABLE; 1144 + 1145 + if (MRL_SENS(slot_cap)) 1146 + intr_enable = intr_enable | MRL_DETECT_ENABLE; 1147 + 1148 + temp_word = (temp_word & ~intr_enable) | intr_enable; 1149 + 1150 + if (pciehp_poll_mode) { 1151 + temp_word = (temp_word & ~HP_INTR_ENABLE) | 0x0; 1152 + } else { 1153 + temp_word = (temp_word & ~HP_INTR_ENABLE) | HP_INTR_ENABLE; 1154 + } 1155 + 1156 + /* 1157 + * Unmask Hot-plug Interrupt Enable for the interrupt 1158 + * notification mechanism case. 1159 + */ 1160 + rc = pciehp_writew(ctrl, SLOTCTRL, temp_word); 1161 + if (rc) { 1162 + err("%s: Cannot write to SLOTCTRL register\n", __FUNCTION__); 1163 + goto abort; 1164 + } 1165 + rc = pciehp_readw(ctrl, SLOTSTATUS, &slot_status); 1166 + if (rc) { 1167 + err("%s: Cannot read SLOTSTATUS register\n", __FUNCTION__); 1168 + goto abort_disable_intr; 1169 + } 1170 + 1171 + temp_word = 0x1F; /* Clear all events */ 1172 + rc = pciehp_writew(ctrl, SLOTSTATUS, temp_word); 1173 + if (rc) { 1174 + err("%s: Cannot write to SLOTSTATUS register\n", __FUNCTION__); 1175 + goto abort_disable_intr; 1176 + } 1177 + 1178 + if (pciehp_force) { 1179 + dbg("Bypassing BIOS check for pciehp use on %s\n", 1180 + pci_name(ctrl->pci_dev)); 1181 + } else { 1182 + rc = pciehp_get_hp_hw_control_from_firmware(ctrl->pci_dev); 1183 + if (rc) 1184 + goto abort_disable_intr; 1185 + } 1186 + 1187 + return 0; 1188 + 1189 + /* We end up here for the many possible ways to fail this API. 
*/ 1190 + abort_disable_intr: 1191 + rc = pciehp_readw(ctrl, SLOTCTRL, &temp_word); 1192 + if (!rc) { 1193 + temp_word &= ~(intr_enable | HP_INTR_ENABLE); 1194 + rc = pciehp_writew(ctrl, SLOTCTRL, temp_word); 1195 + } 1196 + if (rc) 1197 + err("%s : disabling interrupts failed\n", __FUNCTION__); 1198 + abort: 1199 + return -1; 1200 + } 1201 + 1202 + int pcie_init(struct controller *ctrl, struct pcie_device *dev) 1203 + { 1204 + int rc; 1205 + u16 cap_reg; 1128 1206 u32 slot_cap; 1129 1207 int cap_base; 1130 1208 u16 slot_status, slot_ctrl; ··· 1266 1084 dbg("%s: hotplug controller vendor id 0x%x device id 0x%x\n", 1267 1085 __FUNCTION__, pdev->vendor, pdev->device); 1268 1086 1269 - if ((cap_base = pci_find_capability(pdev, PCI_CAP_ID_EXP)) == 0) { 1087 + cap_base = pci_find_capability(pdev, PCI_CAP_ID_EXP); 1088 + if (cap_base == 0) { 1270 1089 dbg("%s: Can't find PCI_CAP_ID_EXP (0x10)\n", __FUNCTION__); 1271 - goto abort_free_ctlr; 1090 + goto abort; 1272 1091 } 1273 1092 1274 1093 ctrl->cap_base = cap_base; ··· 1279 1096 rc = pciehp_readw(ctrl, CAPREG, &cap_reg); 1280 1097 if (rc) { 1281 1098 err("%s: Cannot read CAPREG register\n", __FUNCTION__); 1282 - goto abort_free_ctlr; 1099 + goto abort; 1283 1100 } 1284 1101 dbg("%s: CAPREG offset %x cap_reg %x\n", 1285 1102 __FUNCTION__, ctrl->cap_base + CAPREG, cap_reg); ··· 1289 1106 && ((cap_reg & DEV_PORT_TYPE) != 0x0060))) { 1290 1107 dbg("%s : This is not a root port or the port is not " 1291 1108 "connected to a slot\n", __FUNCTION__); 1292 - goto abort_free_ctlr; 1109 + goto abort; 1293 1110 } 1294 1111 1295 1112 rc = pciehp_readl(ctrl, SLOTCAP, &slot_cap); 1296 1113 if (rc) { 1297 1114 err("%s: Cannot read SLOTCAP register\n", __FUNCTION__); 1298 - goto abort_free_ctlr; 1115 + goto abort; 1299 1116 } 1300 1117 dbg("%s: SLOTCAP offset %x slot_cap %x\n", 1301 1118 __FUNCTION__, ctrl->cap_base + SLOTCAP, slot_cap); 1302 1119 1303 1120 if (!(slot_cap & HP_CAP)) { 1304 1121 dbg("%s : This slot is not hot-plug 
capable\n", __FUNCTION__); 1305 - goto abort_free_ctlr; 1122 + goto abort; 1306 1123 } 1307 1124 /* For debugging purpose */ 1308 1125 rc = pciehp_readw(ctrl, SLOTSTATUS, &slot_status); 1309 1126 if (rc) { 1310 1127 err("%s: Cannot read SLOTSTATUS register\n", __FUNCTION__); 1311 - goto abort_free_ctlr; 1128 + goto abort; 1312 1129 } 1313 1130 dbg("%s: SLOTSTATUS offset %x slot_status %x\n", 1314 1131 __FUNCTION__, ctrl->cap_base + SLOTSTATUS, slot_status); ··· 1316 1133 rc = pciehp_readw(ctrl, SLOTCTRL, &slot_ctrl); 1317 1134 if (rc) { 1318 1135 err("%s: Cannot read SLOTCTRL register\n", __FUNCTION__); 1319 - goto abort_free_ctlr; 1136 + goto abort; 1320 1137 } 1321 1138 dbg("%s: SLOTCTRL offset %x slot_ctrl %x\n", 1322 1139 __FUNCTION__, ctrl->cap_base + SLOTCTRL, slot_ctrl); ··· 1344 1161 ctrl->first_slot = slot_cap >> 19; 1345 1162 ctrl->ctrlcap = slot_cap & 0x0000007f; 1346 1163 1347 - /* Mask Hot-plug Interrupt Enable */ 1348 - rc = pciehp_readw(ctrl, SLOTCTRL, &temp_word); 1349 - if (rc) { 1350 - err("%s: Cannot read SLOTCTRL register\n", __FUNCTION__); 1351 - goto abort_free_ctlr; 1352 - } 1353 - 1354 - dbg("%s: SLOTCTRL %x value read %x\n", 1355 - __FUNCTION__, ctrl->cap_base + SLOTCTRL, temp_word); 1356 - temp_word = (temp_word & ~HP_INTR_ENABLE & ~CMD_CMPL_INTR_ENABLE) | 1357 - 0x00; 1358 - 1359 - rc = pciehp_writew(ctrl, SLOTCTRL, temp_word); 1360 - if (rc) { 1361 - err("%s: Cannot write to SLOTCTRL register\n", __FUNCTION__); 1362 - goto abort_free_ctlr; 1363 - } 1364 - 1365 - rc = pciehp_readw(ctrl, SLOTSTATUS, &slot_status); 1366 - if (rc) { 1367 - err("%s: Cannot read SLOTSTATUS register\n", __FUNCTION__); 1368 - goto abort_free_ctlr; 1369 - } 1370 - 1371 - temp_word = 0x1F; /* Clear all events */ 1372 - rc = pciehp_writew(ctrl, SLOTSTATUS, temp_word); 1373 - if (rc) { 1374 - err("%s: Cannot write to SLOTSTATUS register\n", __FUNCTION__); 1375 - goto abort_free_ctlr; 1376 - } 1164 + rc = pcie_init_hardware_part1(ctrl, dev); 1165 + if (rc) 1166 + 
goto abort; 1377 1167 1378 1168 if (pciehp_poll_mode) { 1379 1169 /* Install interrupt polling timer. Start with 10 sec delay */ ··· 1362 1206 if (rc) { 1363 1207 err("Can't get irq %d for the hotplug controller\n", 1364 1208 ctrl->pci_dev->irq); 1365 - goto abort_free_ctlr; 1209 + goto abort; 1366 1210 } 1367 1211 } 1368 1212 dbg("pciehp ctrl b:d:f:irq=0x%x:%x:%x:%x\n", pdev->bus->number, ··· 1380 1224 } 1381 1225 } 1382 1226 1383 - rc = pciehp_readw(ctrl, SLOTCTRL, &temp_word); 1384 - if (rc) { 1385 - err("%s: Cannot read SLOTCTRL register\n", __FUNCTION__); 1386 - goto abort_free_irq; 1227 + rc = pcie_init_hardware_part2(ctrl, dev); 1228 + if (rc == 0) { 1229 + ctrl->hpc_ops = &pciehp_hpc_ops; 1230 + return 0; 1387 1231 } 1388 - 1389 - intr_enable = intr_enable | PRSN_DETECT_ENABLE; 1390 - 1391 - if (ATTN_BUTTN(slot_cap)) 1392 - intr_enable = intr_enable | ATTN_BUTTN_ENABLE; 1393 - 1394 - if (POWER_CTRL(slot_cap)) 1395 - intr_enable = intr_enable | PWR_FAULT_DETECT_ENABLE; 1396 - 1397 - if (MRL_SENS(slot_cap)) 1398 - intr_enable = intr_enable | MRL_DETECT_ENABLE; 1399 - 1400 - temp_word = (temp_word & ~intr_enable) | intr_enable; 1401 - 1402 - if (pciehp_poll_mode) { 1403 - temp_word = (temp_word & ~HP_INTR_ENABLE) | 0x0; 1404 - } else { 1405 - temp_word = (temp_word & ~HP_INTR_ENABLE) | HP_INTR_ENABLE; 1406 - } 1407 - 1408 - /* 1409 - * Unmask Hot-plug Interrupt Enable for the interrupt 1410 - * notification mechanism case. 
1411 - */ 1412 - rc = pciehp_writew(ctrl, SLOTCTRL, temp_word); 1413 - if (rc) { 1414 - err("%s: Cannot write to SLOTCTRL register\n", __FUNCTION__); 1415 - goto abort_free_irq; 1416 - } 1417 - rc = pciehp_readw(ctrl, SLOTSTATUS, &slot_status); 1418 - if (rc) { 1419 - err("%s: Cannot read SLOTSTATUS register\n", __FUNCTION__); 1420 - goto abort_disable_intr; 1421 - } 1422 - 1423 - temp_word = 0x1F; /* Clear all events */ 1424 - rc = pciehp_writew(ctrl, SLOTSTATUS, temp_word); 1425 - if (rc) { 1426 - err("%s: Cannot write to SLOTSTATUS register\n", __FUNCTION__); 1427 - goto abort_disable_intr; 1428 - } 1429 - 1430 - if (pciehp_force) { 1431 - dbg("Bypassing BIOS check for pciehp use on %s\n", 1432 - pci_name(ctrl->pci_dev)); 1433 - } else { 1434 - rc = pciehp_get_hp_hw_control_from_firmware(ctrl->pci_dev); 1435 - if (rc) 1436 - goto abort_disable_intr; 1437 - } 1438 - 1439 - ctrl->hpc_ops = &pciehp_hpc_ops; 1440 - 1441 - return 0; 1442 - 1443 - /* We end up here for the many possible ways to fail this API. */ 1444 - abort_disable_intr: 1445 - rc = pciehp_readw(ctrl, SLOTCTRL, &temp_word); 1446 - if (!rc) { 1447 - temp_word &= ~(intr_enable | HP_INTR_ENABLE); 1448 - rc = pciehp_writew(ctrl, SLOTCTRL, temp_word); 1449 - } 1450 - if (rc) 1451 - err("%s : disabling interrupts failed\n", __FUNCTION__); 1452 - 1453 1232 abort_free_irq: 1454 1233 if (pciehp_poll_mode) 1455 1234 del_timer_sync(&ctrl->poll_timer); 1456 1235 else 1457 1236 free_irq(ctrl->pci_dev->irq, ctrl); 1458 - 1459 - abort_free_ctlr: 1237 + abort: 1460 1238 return -1; 1461 1239 }
+23 -20
drivers/pci/hotplug/pciehp_pci.c
··· 105 105 } 106 106 107 107 /* Find Advanced Error Reporting Enhanced Capability */ 108 - pos = 256; 109 - do { 110 - pci_read_config_dword(dev, pos, &reg32); 111 - if (PCI_EXT_CAP_ID(reg32) == PCI_EXT_CAP_ID_ERR) 112 - break; 113 - } while ((pos = PCI_EXT_CAP_NEXT(reg32))); 108 + pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); 114 109 if (!pos) 115 110 return; 116 111 ··· 243 248 u8 bctl = 0; 244 249 u8 presence = 0; 245 250 struct pci_bus *parent = p_slot->ctrl->pci_dev->subordinate; 251 + u16 command; 246 252 247 253 dbg("%s: bus/dev = %x/%x\n", __FUNCTION__, p_slot->bus, 248 254 p_slot->device); 255 + ret = p_slot->hpc_ops->get_adapter_status(p_slot, &presence); 256 + if (ret) 257 + presence = 0; 249 258 250 - for (j=0; j<8 ; j++) { 259 + for (j = 0; j < 8; j++) { 251 260 struct pci_dev* temp = pci_get_slot(parent, 252 261 (p_slot->device << 3) | j); 253 262 if (!temp) ··· 262 263 pci_dev_put(temp); 263 264 continue; 264 265 } 265 - if (temp->hdr_type == PCI_HEADER_TYPE_BRIDGE) { 266 - ret = p_slot->hpc_ops->get_adapter_status(p_slot, 267 - &presence); 268 - if (!ret && presence) { 269 - pci_read_config_byte(temp, PCI_BRIDGE_CONTROL, 270 - &bctl); 271 - if (bctl & PCI_BRIDGE_CTL_VGA) { 272 - err("Cannot remove display device %s\n", 273 - pci_name(temp)); 274 - pci_dev_put(temp); 275 - continue; 276 - } 266 + if (temp->hdr_type == PCI_HEADER_TYPE_BRIDGE && presence) { 267 + pci_read_config_byte(temp, PCI_BRIDGE_CONTROL, &bctl); 268 + if (bctl & PCI_BRIDGE_CTL_VGA) { 269 + err("Cannot remove display device %s\n", 270 + pci_name(temp)); 271 + pci_dev_put(temp); 272 + continue; 277 273 } 278 274 } 279 275 pci_remove_bus_device(temp); 276 + /* 277 + * Ensure that no new Requests will be generated from 278 + * the device. 
279 + */ 280 + if (presence) { 281 + pci_read_config_word(temp, PCI_COMMAND, &command); 282 + command &= ~(PCI_COMMAND_MASTER | PCI_COMMAND_SERR); 283 + command |= PCI_COMMAND_INTX_DISABLE; 284 + pci_write_config_word(temp, PCI_COMMAND, command); 285 + } 280 286 pci_dev_put(temp); 281 287 } 282 288 /* ··· 292 288 293 289 return rc; 294 290 } 295 -
-1
drivers/pci/hotplug/rpaphp.h
··· 74 74 u32 type; 75 75 u32 power_domain; 76 76 char *name; 77 - char *location; 78 77 struct device_node *dn; 79 78 struct pci_bus *bus; 80 79 struct list_head *pci_devs;
-14
drivers/pci/hotplug/rpaphp_pci.c
··· 64 64 return rc; 65 65 } 66 66 67 - static void set_slot_name(struct slot *slot) 68 - { 69 - struct pci_bus *bus = slot->bus; 70 - struct pci_dev *bridge; 71 - 72 - bridge = bus->self; 73 - if (bridge) 74 - strcpy(slot->name, pci_name(bridge)); 75 - else 76 - sprintf(slot->name, "%04x:%02x:00.0", pci_domain_nr(bus), 77 - bus->number); 78 - } 79 - 80 67 /** 81 68 * rpaphp_enable_slot - record slot state, config pci device 82 69 * @slot: target &slot ··· 102 115 info->adapter_status = EMPTY; 103 116 slot->bus = bus; 104 117 slot->pci_devs = &bus->devices; 105 - set_slot_name(slot); 106 118 107 119 /* if there's an adapter in the slot, go add the pci devices */ 108 120 if (state == PRESENT) {
+24 -23
drivers/pci/hotplug/rpaphp_slot.c
··· 33 33 #include <asm/rtas.h> 34 34 #include "rpaphp.h" 35 35 36 - static ssize_t location_read_file (struct hotplug_slot *php_slot, char *buf) 36 + static ssize_t address_read_file (struct hotplug_slot *php_slot, char *buf) 37 37 { 38 - char *value; 39 - int retval = -ENOENT; 38 + int retval; 40 39 struct slot *slot = (struct slot *)php_slot->private; 40 + struct pci_bus *bus; 41 41 42 42 if (!slot) 43 - return retval; 43 + return -ENOENT; 44 44 45 - value = slot->location; 46 - retval = sprintf (buf, "%s\n", value); 45 + bus = slot->bus; 46 + if (!bus) 47 + return -ENOENT; 48 + 49 + if (bus->self) 50 + retval = sprintf(buf, pci_name(bus->self)); 51 + else 52 + retval = sprintf(buf, "%04x:%02x:00.0", 53 + pci_domain_nr(bus), bus->number); 54 + 47 55 return retval; 48 56 } 49 57 50 - static struct hotplug_slot_attribute php_attr_location = { 51 - .attr = {.name = "phy_location", .mode = S_IFREG | S_IRUGO}, 52 - .show = location_read_file, 58 + static struct hotplug_slot_attribute php_attr_address = { 59 + .attr = {.name = "address", .mode = S_IFREG | S_IRUGO}, 60 + .show = address_read_file, 53 61 }; 54 62 55 63 /* free up the memory used by a slot */ ··· 72 64 kfree(slot->hotplug_slot->info); 73 65 kfree(slot->hotplug_slot->name); 74 66 kfree(slot->hotplug_slot); 75 - kfree(slot->location); 76 67 kfree(slot); 77 68 } 78 69 ··· 90 83 GFP_KERNEL); 91 84 if (!slot->hotplug_slot->info) 92 85 goto error_hpslot; 93 - slot->hotplug_slot->name = kmalloc(BUS_ID_SIZE + 1, GFP_KERNEL); 86 + slot->hotplug_slot->name = kmalloc(strlen(drc_name) + 1, GFP_KERNEL); 94 87 if (!slot->hotplug_slot->name) 95 88 goto error_info; 96 - slot->location = kmalloc(strlen(drc_name) + 1, GFP_KERNEL); 97 - if (!slot->location) 98 - goto error_name; 99 89 slot->name = slot->hotplug_slot->name; 90 + strcpy(slot->name, drc_name); 100 91 slot->dn = dn; 101 92 slot->index = drc_index; 102 - strcpy(slot->location, drc_name); 103 93 slot->power_domain = power_domain; 104 94 
slot->hotplug_slot->private = slot; 105 95 slot->hotplug_slot->ops = &rpaphp_hotplug_slot_ops; ··· 104 100 105 101 return (slot); 106 102 107 - error_name: 108 - kfree(slot->hotplug_slot->name); 109 103 error_info: 110 104 kfree(slot->hotplug_slot->info); 111 105 error_hpslot: ··· 135 133 136 134 list_del(&slot->rpaphp_slot_list); 137 135 138 - /* remove "phy_location" file */ 139 - sysfs_remove_file(&php_slot->kobj, &php_attr_location.attr); 136 + /* remove "address" file */ 137 + sysfs_remove_file(&php_slot->kobj, &php_attr_address.attr); 140 138 141 139 retval = pci_hp_deregister(php_slot); 142 140 if (retval) ··· 168 166 return retval; 169 167 } 170 168 171 - /* create "phy_location" file */ 172 - retval = sysfs_create_file(&php_slot->kobj, &php_attr_location.attr); 169 + /* create "address" file */ 170 + retval = sysfs_create_file(&php_slot->kobj, &php_attr_address.attr); 173 171 if (retval) { 174 172 err("sysfs_create_file failed with error %d\n", retval); 175 173 goto sysfs_fail; ··· 177 175 178 176 /* add slot to our internal list */ 179 177 list_add(&slot->rpaphp_slot_list, &rpaphp_slot_head); 180 - info("Slot [%s](PCI location=%s) registered\n", slot->name, 181 - slot->location); 178 + info("Slot [%s] registered\n", slot->name); 182 179 return 0; 183 180 184 181 sysfs_fail:
+1 -1
drivers/pci/hotplug/shpchp_hpc.c
··· 597 597 cleanup_slots(ctrl); 598 598 599 599 /* 600 - * Mask SERR and System Interrut generation 600 + * Mask SERR and System Interrupt generation 601 601 */ 602 602 serr_int = shpc_readl(ctrl, SERR_INTR_ENABLE); 603 603 serr_int |= (GLOBAL_INTR_MASK | GLOBAL_SERR_MASK |
+1 -1
drivers/pci/intel-iommu.c
··· 1781 1781 /* 1782 1782 * First try to allocate an io virtual address in 1783 1783 * DMA_32BIT_MASK and if that fails then try allocating 1784 - * from higer range 1784 + * from higher range 1785 1785 */ 1786 1786 iova = iommu_alloc_iova(domain, size, DMA_32BIT_MASK); 1787 1787 if (!iova)
+46 -48
drivers/pci/msi.c
··· 25 25 26 26 static int pci_msi_enable = 1; 27 27 28 + /* Arch hooks */ 29 + 30 + int __attribute__ ((weak)) 31 + arch_msi_check_device(struct pci_dev *dev, int nvec, int type) 32 + { 33 + return 0; 34 + } 35 + 36 + int __attribute__ ((weak)) 37 + arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *entry) 38 + { 39 + return 0; 40 + } 41 + 42 + int __attribute__ ((weak)) 43 + arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) 44 + { 45 + struct msi_desc *entry; 46 + int ret; 47 + 48 + list_for_each_entry(entry, &dev->msi_list, list) { 49 + ret = arch_setup_msi_irq(dev, entry); 50 + if (ret) 51 + return ret; 52 + } 53 + 54 + return 0; 55 + } 56 + 57 + void __attribute__ ((weak)) arch_teardown_msi_irq(unsigned int irq) 58 + { 59 + return; 60 + } 61 + 62 + void __attribute__ ((weak)) 63 + arch_teardown_msi_irqs(struct pci_dev *dev) 64 + { 65 + struct msi_desc *entry; 66 + 67 + list_for_each_entry(entry, &dev->msi_list, list) { 68 + if (entry->irq != 0) 69 + arch_teardown_msi_irq(entry->irq); 70 + } 71 + } 72 + 28 73 static void msi_set_enable(struct pci_dev *dev, int enable) 29 74 { 30 75 int pos; ··· 275 230 pci_intx(dev, enable); 276 231 } 277 232 278 - #ifdef CONFIG_PM 279 233 static void __pci_restore_msi_state(struct pci_dev *dev) 280 234 { 281 235 int pos; ··· 332 288 __pci_restore_msi_state(dev); 333 289 __pci_restore_msix_state(dev); 334 290 } 335 - #endif /* CONFIG_PM */ 291 + EXPORT_SYMBOL_GPL(pci_restore_msi_state); 336 292 337 293 /** 338 294 * msi_capability_init - configure device's MSI capability structure ··· 726 682 void pci_msi_init_pci_dev(struct pci_dev *dev) 727 683 { 728 684 INIT_LIST_HEAD(&dev->msi_list); 729 - } 730 - 731 - 732 - /* Arch hooks */ 733 - 734 - int __attribute__ ((weak)) 735 - arch_msi_check_device(struct pci_dev* dev, int nvec, int type) 736 - { 737 - return 0; 738 - } 739 - 740 - int __attribute__ ((weak)) 741 - arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *entry) 742 - { 743 - return 0; 744 - } 745 - 
746 - int __attribute__ ((weak)) 747 - arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) 748 - { 749 - struct msi_desc *entry; 750 - int ret; 751 - 752 - list_for_each_entry(entry, &dev->msi_list, list) { 753 - ret = arch_setup_msi_irq(dev, entry); 754 - if (ret) 755 - return ret; 756 - } 757 - 758 - return 0; 759 - } 760 - 761 - void __attribute__ ((weak)) arch_teardown_msi_irq(unsigned int irq) 762 - { 763 - return; 764 - } 765 - 766 - void __attribute__ ((weak)) 767 - arch_teardown_msi_irqs(struct pci_dev *dev) 768 - { 769 - struct msi_desc *entry; 770 - 771 - list_for_each_entry(entry, &dev->msi_list, list) { 772 - if (entry->irq != 0) 773 - arch_teardown_msi_irq(entry->irq); 774 - } 775 685 }
+3 -4
drivers/pci/pci-acpi.c
··· 156 156 } 157 157 158 158 /** 159 - * pci_osc_support_set - register OS support to Firmware 159 + * __pci_osc_support_set - register OS support to Firmware 160 160 * @flags: OS support bits 161 161 * 162 162 * Update OS support fields and doing a _OSC Query to obtain an update 163 163 * from Firmware on supported control bits. 164 164 **/ 165 - acpi_status pci_osc_support_set(u32 flags) 165 + acpi_status __pci_osc_support_set(u32 flags, const char *hid) 166 166 { 167 167 u32 temp; 168 168 acpi_status retval; ··· 176 176 temp = ctrlset_buf[OSC_CONTROL_TYPE]; 177 177 ctrlset_buf[OSC_QUERY_TYPE] = OSC_QUERY_ENABLE; 178 178 ctrlset_buf[OSC_CONTROL_TYPE] = OSC_CONTROL_MASKS; 179 - acpi_get_devices ( PCI_ROOT_HID_STRING, 179 + acpi_get_devices(hid, 180 180 acpi_query_osc, 181 181 ctrlset_buf, 182 182 (void **) &retval ); ··· 188 188 } 189 189 return AE_OK; 190 190 } 191 - EXPORT_SYMBOL(pci_osc_support_set); 192 191 193 192 /** 194 193 * pci_osc_control_set - commit requested control to Firmware
+1 -3
drivers/pci/pci-driver.c
··· 186 186 set_cpus_allowed(current, node_to_cpumask(node)); 187 187 /* And set default memory allocation policy */ 188 188 oldpol = current->mempolicy; 189 - current->mempolicy = &default_policy; 190 - mpol_get(current->mempolicy); 189 + current->mempolicy = NULL; /* fall back to system default policy */ 191 190 #endif 192 191 error = drv->probe(dev, id); 193 192 #ifdef CONFIG_NUMA 194 193 set_cpus_allowed(current, oldmask); 195 - mpol_free(current->mempolicy); 196 194 current->mempolicy = oldpol; 197 195 #endif 198 196 return error;
+8 -3
drivers/pci/pci-sysfs.c
··· 21 21 #include <linux/topology.h> 22 22 #include <linux/mm.h> 23 23 #include <linux/capability.h> 24 + #include <linux/aspm.h> 24 25 #include "pci.h" 25 26 26 27 static int sysfs_initialized; /* = 0 */ ··· 359 358 char *buf, loff_t off, size_t count) 360 359 { 361 360 struct pci_bus *bus = to_pci_bus(container_of(kobj, 362 - struct class_device, 361 + struct device, 363 362 kobj)); 364 363 365 364 /* Only support 1, 2 or 4 byte accesses */ ··· 384 383 char *buf, loff_t off, size_t count) 385 384 { 386 385 struct pci_bus *bus = to_pci_bus(container_of(kobj, 387 - struct class_device, 386 + struct device, 388 387 kobj)); 389 388 /* Only support 1, 2 or 4 byte accesses */ 390 389 if (count != 1 && count != 2 && count != 4) ··· 408 407 struct vm_area_struct *vma) 409 408 { 410 409 struct pci_bus *bus = to_pci_bus(container_of(kobj, 411 - struct class_device, 410 + struct device, 412 411 kobj)); 413 412 414 413 return pci_mmap_legacy_page_range(bus, vma); ··· 651 650 if (pcibios_add_platform_entries(pdev)) 652 651 goto err_rom_file; 653 652 653 + pcie_aspm_create_sysfs_dev_files(pdev); 654 + 654 655 return 0; 655 656 656 657 err_rom_file: ··· 681 678 { 682 679 if (!sysfs_initialized) 683 680 return; 681 + 682 + pcie_aspm_remove_sysfs_dev_files(pdev); 684 683 685 684 if (pdev->cfg_size < 4096) 686 685 sysfs_remove_bin_file(&pdev->dev.kobj, &pci_config_attr);
+75 -18
drivers/pci/pci.c
··· 18 18 #include <linux/spinlock.h> 19 19 #include <linux/string.h> 20 20 #include <linux/log2.h> 21 + #include <linux/aspm.h> 21 22 #include <asm/dma.h> /* isa_dma_bridge_buggy */ 22 23 #include "pci.h" 23 24 ··· 315 314 } 316 315 EXPORT_SYMBOL_GPL(pci_find_ht_capability); 317 316 317 + void pcie_wait_pending_transaction(struct pci_dev *dev) 318 + { 319 + int pos; 320 + u16 reg16; 321 + 322 + pos = pci_find_capability(dev, PCI_CAP_ID_EXP); 323 + if (!pos) 324 + return; 325 + while (1) { 326 + pci_read_config_word(dev, pos + PCI_EXP_DEVSTA, &reg16); 327 + if (!(reg16 & PCI_EXP_DEVSTA_TRPND)) 328 + break; 329 + cpu_relax(); 330 + } 331 + 332 + } 333 + EXPORT_SYMBOL_GPL(pcie_wait_pending_transaction); 334 + 318 335 /** 319 336 * pci_find_parent_resource - return resource region of parent bus of given region 320 337 * @dev: PCI device structure contains resources to be searched ··· 372 353 * Restore the BAR values for a given device, so as to make it 373 354 * accessible by its driver. 374 355 */ 375 - void 356 + static void 376 357 pci_restore_bars(struct pci_dev *dev) 377 358 { 378 359 int i, numres; ··· 520 501 if (need_restore) 521 502 pci_restore_bars(dev); 522 503 504 + if (dev->bus->self) 505 + pcie_aspm_pm_state_change(dev->bus->self); 506 + 523 507 return 0; 524 508 } 525 509 ··· 573 551 int pos, i = 0; 574 552 struct pci_cap_saved_state *save_state; 575 553 u16 *cap; 554 + int found = 0; 576 555 577 556 pos = pci_find_capability(dev, PCI_CAP_ID_EXP); 578 557 if (pos <= 0) ··· 582 559 save_state = pci_find_saved_cap(dev, PCI_CAP_ID_EXP); 583 560 if (!save_state) 584 561 save_state = kzalloc(sizeof(*save_state) + sizeof(u16) * 4, GFP_KERNEL); 562 + else 563 + found = 1; 585 564 if (!save_state) { 586 565 dev_err(&dev->dev, "Out of memory in pci_save_pcie_state\n"); 587 566 return -ENOMEM; ··· 594 569 pci_read_config_word(dev, pos + PCI_EXP_LNKCTL, &cap[i++]); 595 570 pci_read_config_word(dev, pos + PCI_EXP_SLTCTL, &cap[i++]); 596 571 
pci_read_config_word(dev, pos + PCI_EXP_RTCTL, &cap[i++]); 597 - pci_add_saved_cap(dev, save_state); 572 + save_state->cap_nr = PCI_CAP_ID_EXP; 573 + if (!found) 574 + pci_add_saved_cap(dev, save_state); 598 575 return 0; 599 576 } 600 577 ··· 624 597 int pos, i = 0; 625 598 struct pci_cap_saved_state *save_state; 626 599 u16 *cap; 600 + int found = 0; 627 601 628 602 pos = pci_find_capability(dev, PCI_CAP_ID_PCIX); 629 603 if (pos <= 0) 630 604 return 0; 631 605 632 - save_state = pci_find_saved_cap(dev, PCI_CAP_ID_EXP); 606 + save_state = pci_find_saved_cap(dev, PCI_CAP_ID_PCIX); 633 607 if (!save_state) 634 608 save_state = kzalloc(sizeof(*save_state) + sizeof(u16), GFP_KERNEL); 609 + else 610 + found = 1; 635 611 if (!save_state) { 636 612 dev_err(&dev->dev, "Out of memory in pci_save_pcie_state\n"); 637 613 return -ENOMEM; ··· 642 612 cap = (u16 *)&save_state->data[0]; 643 613 644 614 pci_read_config_word(dev, pos + PCI_X_CMD, &cap[i++]); 645 - pci_add_saved_cap(dev, save_state); 615 + save_state->cap_nr = PCI_CAP_ID_PCIX; 616 + if (!found) 617 + pci_add_saved_cap(dev, save_state); 646 618 return 0; 647 619 } 648 620 ··· 745 713 return 0; 746 714 } 747 715 748 - /** 749 - * pci_enable_device_bars - Initialize some of a device for use 750 - * @dev: PCI device to be initialized 751 - * @bars: bitmask of BAR's that must be configured 752 - * 753 - * Initialize device before it's used by a driver. Ask low-level code 754 - * to enable selected I/O and memory resources. Wake up the device if it 755 - * was suspended. Beware, this function can fail. 
756 - */ 757 - int 758 - pci_enable_device_bars(struct pci_dev *dev, int bars) 716 + static int __pci_enable_device_flags(struct pci_dev *dev, 717 + resource_size_t flags) 759 718 { 760 719 int err; 720 + int i, bars = 0; 761 721 762 722 if (atomic_add_return(1, &dev->enable_cnt) > 1) 763 723 return 0; /* already enabled */ 724 + 725 + for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) 726 + if (dev->resource[i].flags & flags) 727 + bars |= (1 << i); 764 728 765 729 err = do_pci_enable_device(dev, bars); 766 730 if (err < 0) 767 731 atomic_dec(&dev->enable_cnt); 768 732 return err; 733 + } 734 + 735 + /** 736 + * pci_enable_device_io - Initialize a device for use with IO space 737 + * @dev: PCI device to be initialized 738 + * 739 + * Initialize device before it's used by a driver. Ask low-level code 740 + * to enable I/O resources. Wake up the device if it was suspended. 741 + * Beware, this function can fail. 742 + */ 743 + int pci_enable_device_io(struct pci_dev *dev) 744 + { 745 + return __pci_enable_device_flags(dev, IORESOURCE_IO); 746 + } 747 + 748 + /** 749 + * pci_enable_device_mem - Initialize a device for use with Memory space 750 + * @dev: PCI device to be initialized 751 + * 752 + * Initialize device before it's used by a driver. Ask low-level code 753 + * to enable Memory resources. Wake up the device if it was suspended. 754 + * Beware, this function can fail. 
755 + */ 756 + int pci_enable_device_mem(struct pci_dev *dev) 757 + { 758 + return __pci_enable_device_flags(dev, IORESOURCE_MEM); 769 759 } 770 760 771 761 /** ··· 803 749 */ 804 750 int pci_enable_device(struct pci_dev *dev) 805 751 { 806 - return pci_enable_device_bars(dev, (1 << PCI_NUM_RESOURCES) - 1); 752 + return __pci_enable_device_flags(dev, IORESOURCE_MEM | IORESOURCE_IO); 807 753 } 808 754 809 755 /* ··· 938 884 939 885 if (atomic_sub_return(1, &dev->enable_cnt) != 0) 940 886 return; 887 + 888 + /* Wait for all transactions are finished before disabling the device */ 889 + pcie_wait_pending_transaction(dev); 941 890 942 891 pci_read_config_word(dev, PCI_COMMAND, &pci_command); 943 892 if (pci_command & PCI_COMMAND_MASTER) { ··· 1676 1619 1677 1620 device_initcall(pci_init); 1678 1621 1679 - EXPORT_SYMBOL_GPL(pci_restore_bars); 1680 1622 EXPORT_SYMBOL(pci_reenable_device); 1681 - EXPORT_SYMBOL(pci_enable_device_bars); 1623 + EXPORT_SYMBOL(pci_enable_device_io); 1624 + EXPORT_SYMBOL(pci_enable_device_mem); 1682 1625 EXPORT_SYMBOL(pci_enable_device); 1683 1626 EXPORT_SYMBOL(pcim_enable_device); 1684 1627 EXPORT_SYMBOL(pcim_pin_device);
+6 -10
drivers/pci/pci.h
··· 6 6 extern void pci_cleanup_rom(struct pci_dev *dev); 7 7 8 8 /* Firmware callbacks */ 9 - extern pci_power_t (*platform_pci_choose_state)(struct pci_dev *dev, pm_message_t state); 10 - extern int (*platform_pci_set_power_state)(struct pci_dev *dev, pci_power_t state); 9 + extern pci_power_t (*platform_pci_choose_state)(struct pci_dev *dev, 10 + pm_message_t state); 11 + extern int (*platform_pci_set_power_state)(struct pci_dev *dev, 12 + pci_power_t state); 11 13 12 14 extern int pci_user_read_config_byte(struct pci_dev *dev, int where, u8 *val); 13 15 extern int pci_user_read_config_word(struct pci_dev *dev, int where, u16 *val); ··· 47 45 static inline void pci_msi_init_pci_dev(struct pci_dev *dev) { } 48 46 #endif 49 47 50 - #if defined(CONFIG_PCI_MSI) && defined(CONFIG_PM) 51 - void pci_restore_msi_state(struct pci_dev *dev); 52 - #else 53 - static inline void pci_restore_msi_state(struct pci_dev *dev) {} 54 - #endif 55 - 56 48 #ifdef CONFIG_PCIEAER 57 49 void pci_no_aer(void); 58 50 #else ··· 64 68 } 65 69 extern int pcie_mch_quirk; 66 70 extern struct device_attribute pci_dev_attrs[]; 67 - extern struct class_device_attribute class_device_attr_cpuaffinity; 71 + extern struct device_attribute dev_attr_cpuaffinity; 68 72 69 73 /** 70 74 * pci_match_one_device - Tell if a PCI device structure has a matching 71 75 * PCI device id structure 72 76 * @id: single PCI device id structure to match 73 77 * @dev: the PCI device structure to match against 74 - * 78 + * 75 79 * Returns the matching pci_device_id structure or %NULL if there is no match. 76 80 */ 77 81 static inline const struct pci_device_id *
+20
drivers/pci/pcie/Kconfig
··· 26 26 When in doubt, say N. 27 27 28 28 source "drivers/pci/pcie/aer/Kconfig" 29 + 30 + # 31 + # PCI Express ASPM 32 + # 33 + config PCIEASPM 34 + bool "PCI Express ASPM support (Experimental)" 35 + depends on PCI && EXPERIMENTAL 36 + default y 37 + help 38 + This enables PCI Express ASPM (Active State Power Management) and 39 + Clock Power Management. ASPM supports states L0s and L1. 40 + 41 + When in doubt, say N. 42 + config PCIEASPM_DEBUG 43 + bool "Debug PCI Express ASPM" 44 + depends on PCIEASPM 45 + default n 46 + help 47 + This enables PCI Express ASPM debug support. It will add a per-device 48 + interface to control ASPM.
+3
drivers/pci/pcie/Makefile
··· 2 2 # Makefile for PCI-Express PORT Driver 3 3 # 4 4 5 + # Build PCI Express ASPM if needed 6 + obj-$(CONFIG_PCIEASPM) += aspm.o 7 + 5 8 pcieportdrv-y := portdrv_core.o portdrv_pci.o portdrv_bus.o 6 9 7 10 obj-$(CONFIG_PCIEPORTBUS) += pcieportdrv.o
+7 -17
drivers/pci/pcie/aer/aerdrv_acpi.c
··· 31 31 { 32 32 acpi_status status = AE_NOT_FOUND; 33 33 struct pci_dev *pdev = pciedev->port; 34 - acpi_handle handle = DEVICE_ACPI_HANDLE(&pdev->dev); 35 - struct pci_bus *parent; 34 + acpi_handle handle = 0; 36 35 37 - while (!handle) { 38 - if (!pdev || !pdev->bus->parent) 39 - break; 40 - parent = pdev->bus->parent; 41 - if (!parent->self) 42 - /* Parent must be a host bridge */ 43 - handle = acpi_get_pci_rootbridge_handle( 44 - pci_domain_nr(parent), 45 - parent->number); 46 - else 47 - handle = DEVICE_ACPI_HANDLE( 48 - &(parent->self->dev)); 49 - pdev = parent->self; 50 - } 36 + /* Find root host bridge */ 37 + while (pdev->bus && pdev->bus->self) 38 + pdev = pdev->bus->self; 39 + handle = acpi_get_pci_rootbridge_handle( 40 + pci_domain_nr(pdev->bus), pdev->bus->number); 51 41 52 42 if (handle) { 53 - pci_osc_support_set(OSC_EXT_PCI_CONFIG_SUPPORT); 43 + pcie_osc_support_set(OSC_EXT_PCI_CONFIG_SUPPORT); 54 44 status = pci_osc_control_set(handle, 55 45 OSC_PCI_EXPRESS_AER_CONTROL | 56 46 OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL);
+802
drivers/pci/pcie/aspm.c
··· 1 + /* 2 + * File: drivers/pci/pcie/aspm.c 3 + * Enabling PCIE link L0s/L1 state and Clock Power Management 4 + * 5 + * Copyright (C) 2007 Intel 6 + * Copyright (C) Zhang Yanmin (yanmin.zhang@intel.com) 7 + * Copyright (C) Shaohua Li (shaohua.li@intel.com) 8 + */ 9 + 10 + #include <linux/kernel.h> 11 + #include <linux/module.h> 12 + #include <linux/moduleparam.h> 13 + #include <linux/pci.h> 14 + #include <linux/pci_regs.h> 15 + #include <linux/errno.h> 16 + #include <linux/pm.h> 17 + #include <linux/init.h> 18 + #include <linux/slab.h> 19 + #include <linux/aspm.h> 20 + #include <acpi/acpi_bus.h> 21 + #include <linux/pci-acpi.h> 22 + #include "../pci.h" 23 + 24 + #ifdef MODULE_PARAM_PREFIX 25 + #undef MODULE_PARAM_PREFIX 26 + #endif 27 + #define MODULE_PARAM_PREFIX "pcie_aspm." 28 + 29 + struct endpoint_state { 30 + unsigned int l0s_acceptable_latency; 31 + unsigned int l1_acceptable_latency; 32 + }; 33 + 34 + struct pcie_link_state { 35 + struct list_head sibiling; 36 + struct pci_dev *pdev; 37 + 38 + /* ASPM state */ 39 + unsigned int support_state; 40 + unsigned int enabled_state; 41 + unsigned int bios_aspm_state; 42 + /* upstream component */ 43 + unsigned int l0s_upper_latency; 44 + unsigned int l1_upper_latency; 45 + /* downstream component */ 46 + unsigned int l0s_down_latency; 47 + unsigned int l1_down_latency; 48 + /* Clock PM state*/ 49 + unsigned int clk_pm_capable; 50 + unsigned int clk_pm_enabled; 51 + unsigned int bios_clk_state; 52 + 53 + /* 54 + * A pcie downstream port only has one slot under it, so at most there 55 + * are 8 functions 56 + */ 57 + struct endpoint_state endpoints[8]; 58 + }; 59 + 60 + static int aspm_disabled; 61 + static DEFINE_MUTEX(aspm_lock); 62 + static LIST_HEAD(link_list); 63 + 64 + #define POLICY_DEFAULT 0 /* BIOS default setting */ 65 + #define POLICY_PERFORMANCE 1 /* high performance */ 66 + #define POLICY_POWERSAVE 2 /* high power saving */ 67 + static int aspm_policy; 68 + static const char *policy_str[] = { 69 + 
[POLICY_DEFAULT] = "default", 70 + [POLICY_PERFORMANCE] = "performance", 71 + [POLICY_POWERSAVE] = "powersave" 72 + }; 73 + 74 + static int policy_to_aspm_state(struct pci_dev *pdev) 75 + { 76 + struct pcie_link_state *link_state = pdev->link_state; 77 + 78 + switch (aspm_policy) { 79 + case POLICY_PERFORMANCE: 80 + /* Disable ASPM and Clock PM */ 81 + return 0; 82 + case POLICY_POWERSAVE: 83 + /* Enable ASPM L0s/L1 */ 84 + return PCIE_LINK_STATE_L0S|PCIE_LINK_STATE_L1; 85 + case POLICY_DEFAULT: 86 + return link_state->bios_aspm_state; 87 + } 88 + return 0; 89 + } 90 + 91 + static int policy_to_clkpm_state(struct pci_dev *pdev) 92 + { 93 + struct pcie_link_state *link_state = pdev->link_state; 94 + 95 + switch (aspm_policy) { 96 + case POLICY_PERFORMANCE: 97 + /* Disable ASPM and Clock PM */ 98 + return 0; 99 + case POLICY_POWERSAVE: 100 + /* Enable Clock PM */ 101 + return 1; 102 + case POLICY_DEFAULT: 103 + return link_state->bios_clk_state; 104 + } 105 + return 0; 106 + } 107 + 108 + static void pcie_set_clock_pm(struct pci_dev *pdev, int enable) 109 + { 110 + struct pci_dev *child_dev; 111 + int pos; 112 + u16 reg16; 113 + struct pcie_link_state *link_state = pdev->link_state; 114 + 115 + list_for_each_entry(child_dev, &pdev->subordinate->devices, bus_list) { 116 + pos = pci_find_capability(child_dev, PCI_CAP_ID_EXP); 117 + if (!pos) 118 + return; 119 + pci_read_config_word(child_dev, pos + PCI_EXP_LNKCTL, &reg16); 120 + if (enable) 121 + reg16 |= PCI_EXP_LNKCTL_CLKREQ_EN; 122 + else 123 + reg16 &= ~PCI_EXP_LNKCTL_CLKREQ_EN; 124 + pci_write_config_word(child_dev, pos + PCI_EXP_LNKCTL, reg16); 125 + } 126 + link_state->clk_pm_enabled = !!enable; 127 + } 128 + 129 + static void pcie_check_clock_pm(struct pci_dev *pdev) 130 + { 131 + int pos; 132 + u32 reg32; 133 + u16 reg16; 134 + int capable = 1, enabled = 1; 135 + struct pci_dev *child_dev; 136 + struct pcie_link_state *link_state = pdev->link_state; 137 + 138 + /* All functions should have the same cap and
state, take the worst */ 139 + list_for_each_entry(child_dev, &pdev->subordinate->devices, bus_list) { 140 + pos = pci_find_capability(child_dev, PCI_CAP_ID_EXP); 141 + if (!pos) 142 + return; 143 + pci_read_config_dword(child_dev, pos + PCI_EXP_LNKCAP, &reg32); 144 + if (!(reg32 & PCI_EXP_LNKCAP_CLKPM)) { 145 + capable = 0; 146 + enabled = 0; 147 + break; 148 + } 149 + pci_read_config_word(child_dev, pos + PCI_EXP_LNKCTL, &reg16); 150 + if (!(reg16 & PCI_EXP_LNKCTL_CLKREQ_EN)) 151 + enabled = 0; 152 + } 153 + link_state->clk_pm_capable = capable; 154 + link_state->clk_pm_enabled = enabled; 155 + link_state->bios_clk_state = enabled; 156 + pcie_set_clock_pm(pdev, policy_to_clkpm_state(pdev)); 157 + } 158 + 159 + /* 160 + * pcie_aspm_configure_common_clock: check whether the two ends of a link 161 + * can use a common clock. If they can, configure them to use the 162 + * common clock. That will reduce the ASPM state exit latency. 163 + */ 164 + static void pcie_aspm_configure_common_clock(struct pci_dev *pdev) 165 + { 166 + int pos, child_pos; 167 + u16 reg16 = 0; 168 + struct pci_dev *child_dev; 169 + int same_clock = 1; 170 + 171 + /* 172 + * all functions of a slot should have the same Slot Clock 173 + * Configuration, so just check one function 174 + */ 175 + child_dev = list_entry(pdev->subordinate->devices.next, struct pci_dev, 176 + bus_list); 177 + BUG_ON(!child_dev->is_pcie); 178 + 179 + /* Check downstream component if bit Slot Clock Configuration is 1 */ 180 + child_pos = pci_find_capability(child_dev, PCI_CAP_ID_EXP); 181 + pci_read_config_word(child_dev, child_pos + PCI_EXP_LNKSTA, &reg16); 182 + if (!(reg16 & PCI_EXP_LNKSTA_SLC)) 183 + same_clock = 0; 184 + 185 + /* Check upstream component if bit Slot Clock Configuration is 1 */ 186 + pos = pci_find_capability(pdev, PCI_CAP_ID_EXP); 187 + pci_read_config_word(pdev, pos + PCI_EXP_LNKSTA, &reg16); 188 + if (!(reg16 & PCI_EXP_LNKSTA_SLC)) 189 + same_clock = 0; 190 + 191 + /* Configure downstream component,
all functions */ 192 + list_for_each_entry(child_dev, &pdev->subordinate->devices, bus_list) { 193 + child_pos = pci_find_capability(child_dev, PCI_CAP_ID_EXP); 194 + pci_read_config_word(child_dev, child_pos + PCI_EXP_LNKCTL, 195 + &reg16); 196 + if (same_clock) 197 + reg16 |= PCI_EXP_LNKCTL_CCC; 198 + else 199 + reg16 &= ~PCI_EXP_LNKCTL_CCC; 200 + pci_write_config_word(child_dev, child_pos + PCI_EXP_LNKCTL, 201 + reg16); 202 + } 203 + 204 + /* Configure upstream component */ 205 + pci_read_config_word(pdev, pos + PCI_EXP_LNKCTL, &reg16); 206 + if (same_clock) 207 + reg16 |= PCI_EXP_LNKCTL_CCC; 208 + else 209 + reg16 &= ~PCI_EXP_LNKCTL_CCC; 210 + pci_write_config_word(pdev, pos + PCI_EXP_LNKCTL, reg16); 211 + 212 + /* retrain link */ 213 + reg16 |= PCI_EXP_LNKCTL_RL; 214 + pci_write_config_word(pdev, pos + PCI_EXP_LNKCTL, reg16); 215 + 216 + /* Wait for link training end */ 217 + while (1) { 218 + pci_read_config_word(pdev, pos + PCI_EXP_LNKSTA, &reg16); 219 + if (!(reg16 & PCI_EXP_LNKSTA_LT)) 220 + break; 221 + cpu_relax(); 222 + } 223 + } 224 + 225 + /* 226 + * calc_L0S_latency: Convert L0s latency encoding to ns 227 + */ 228 + static unsigned int calc_L0S_latency(unsigned int latency_encoding, int ac) 229 + { 230 + unsigned int ns = 64; 231 + 232 + if (latency_encoding == 0x7) { 233 + if (ac) 234 + ns = -1U; 235 + else 236 + ns = 5*1000; /* > 4us */ 237 + } else 238 + ns *= (1 << latency_encoding); 239 + return ns; 240 + } 241 + 242 + /* 243 + * calc_L1_latency: Convert L1 latency encoding to ns 244 + */ 245 + static unsigned int calc_L1_latency(unsigned int latency_encoding, int ac) 246 + { 247 + unsigned int ns = 1000; 248 + 249 + if (latency_encoding == 0x7) { 250 + if (ac) 251 + ns = -1U; 252 + else 253 + ns = 65*1000; /* > 64us */ 254 + } else 255 + ns *= (1 << latency_encoding); 256 + return ns; 257 + } 258 + 259 + static void pcie_aspm_get_cap_device(struct pci_dev *pdev, u32 *state, 260 + unsigned int *l0s, unsigned int *l1, unsigned int *enabled) 261 + 
{ 262 + int pos; 263 + u16 reg16; 264 + u32 reg32; 265 + unsigned int latency; 266 + 267 + pos = pci_find_capability(pdev, PCI_CAP_ID_EXP); 268 + pci_read_config_dword(pdev, pos + PCI_EXP_LNKCAP, &reg32); 269 + *state = (reg32 & PCI_EXP_LNKCAP_ASPMS) >> 10; 270 + if (*state != PCIE_LINK_STATE_L0S && 271 + *state != (PCIE_LINK_STATE_L1|PCIE_LINK_STATE_L0S)) 272 + * state = 0; 273 + if (*state == 0) 274 + return; 275 + 276 + latency = (reg32 & PCI_EXP_LNKCAP_L0SEL) >> 12; 277 + *l0s = calc_L0S_latency(latency, 0); 278 + if (*state & PCIE_LINK_STATE_L1) { 279 + latency = (reg32 & PCI_EXP_LNKCAP_L1EL) >> 15; 280 + *l1 = calc_L1_latency(latency, 0); 281 + } 282 + pci_read_config_word(pdev, pos + PCI_EXP_LNKCTL, &reg16); 283 + *enabled = reg16 & (PCIE_LINK_STATE_L0S|PCIE_LINK_STATE_L1); 284 + } 285 + 286 + static void pcie_aspm_cap_init(struct pci_dev *pdev) 287 + { 288 + struct pci_dev *child_dev; 289 + u32 state, tmp; 290 + struct pcie_link_state *link_state = pdev->link_state; 291 + 292 + /* upstream component states */ 293 + pcie_aspm_get_cap_device(pdev, &link_state->support_state, 294 + &link_state->l0s_upper_latency, 295 + &link_state->l1_upper_latency, 296 + &link_state->enabled_state); 297 + /* downstream component states, all functions have the same setting */ 298 + child_dev = list_entry(pdev->subordinate->devices.next, struct pci_dev, 299 + bus_list); 300 + pcie_aspm_get_cap_device(child_dev, &state, 301 + &link_state->l0s_down_latency, 302 + &link_state->l1_down_latency, 303 + &tmp); 304 + link_state->support_state &= state; 305 + if (!link_state->support_state) 306 + return; 307 + link_state->enabled_state &= link_state->support_state; 308 + link_state->bios_aspm_state = link_state->enabled_state; 309 + 310 + /* ENDPOINT states*/ 311 + list_for_each_entry(child_dev, &pdev->subordinate->devices, bus_list) { 312 + int pos; 313 + u32 reg32; 314 + unsigned int latency; 315 + struct endpoint_state *ep_state = 316 + 
&link_state->endpoints[PCI_FUNC(child_dev->devfn)]; 317 + 318 + if (child_dev->pcie_type != PCI_EXP_TYPE_ENDPOINT && 319 + child_dev->pcie_type != PCI_EXP_TYPE_LEG_END) 320 + continue; 321 + 322 + pos = pci_find_capability(child_dev, PCI_CAP_ID_EXP); 323 + pci_read_config_dword(child_dev, pos + PCI_EXP_DEVCAP, &reg32); 324 + latency = (reg32 & PCI_EXP_DEVCAP_L0S) >> 6; 325 + latency = calc_L0S_latency(latency, 1); 326 + ep_state->l0s_acceptable_latency = latency; 327 + if (link_state->support_state & PCIE_LINK_STATE_L1) { 328 + latency = (reg32 & PCI_EXP_DEVCAP_L1) >> 9; 329 + latency = calc_L1_latency(latency, 1); 330 + ep_state->l1_acceptable_latency = latency; 331 + } 332 + } 333 + } 334 + 335 + static unsigned int __pcie_aspm_check_state_one(struct pci_dev *pdev, 336 + unsigned int state) 337 + { 338 + struct pci_dev *parent_dev, *tmp_dev; 339 + unsigned int latency, l1_latency = 0; 340 + struct pcie_link_state *link_state; 341 + struct endpoint_state *ep_state; 342 + 343 + parent_dev = pdev->bus->self; 344 + link_state = parent_dev->link_state; 345 + state &= link_state->support_state; 346 + if (state == 0) 347 + return 0; 348 + ep_state = &link_state->endpoints[PCI_FUNC(pdev->devfn)]; 349 + 350 + /* 351 + * Check latency for endpoint device. 352 + * TBD: The latency from the endpoint to root complex varies per 353 + * switch's upstream link state above the device. Here we just do a 354 + * simple check which assumes all links above the device can be in L1 355 + * state; that is, we just consider the worst case. If a switch's upstream 356 + * link can't be put into L0S/L1, then our check is too strict.
357 + */ 358 + tmp_dev = pdev; 359 + while (state & (PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1)) { 360 + parent_dev = tmp_dev->bus->self; 361 + link_state = parent_dev->link_state; 362 + if (state & PCIE_LINK_STATE_L0S) { 363 + latency = max_t(unsigned int, 364 + link_state->l0s_upper_latency, 365 + link_state->l0s_down_latency); 366 + if (latency > ep_state->l0s_acceptable_latency) 367 + state &= ~PCIE_LINK_STATE_L0S; 368 + } 369 + if (state & PCIE_LINK_STATE_L1) { 370 + latency = max_t(unsigned int, 371 + link_state->l1_upper_latency, 372 + link_state->l1_down_latency); 373 + if (latency + l1_latency > 374 + ep_state->l1_acceptable_latency) 375 + state &= ~PCIE_LINK_STATE_L1; 376 + } 377 + if (!parent_dev->bus->self) /* parent_dev is a root port */ 378 + break; 379 + else { 380 + /* 381 + * parent_dev is the downstream port of a switch, make 382 + * tmp_dev the upstream port of the switch 383 + */ 384 + tmp_dev = parent_dev->bus->self; 385 + /* 386 + * every switch on the path to the root complex needs 1 more 387 + * microsecond for L1. Spec doesn't mention L0S.
388 + */ 389 + if (state & PCIE_LINK_STATE_L1) 390 + l1_latency += 1000; 391 + } 392 + } 393 + return state; 394 + } 395 + 396 + static unsigned int pcie_aspm_check_state(struct pci_dev *pdev, 397 + unsigned int state) 398 + { 399 + struct pci_dev *child_dev; 400 + 401 + /* If no child, disable the link */ 402 + if (list_empty(&pdev->subordinate->devices)) 403 + return 0; 404 + list_for_each_entry(child_dev, &pdev->subordinate->devices, bus_list) { 405 + if (child_dev->pcie_type == PCI_EXP_TYPE_PCI_BRIDGE) { 406 + /* 407 + * If the downstream component of a link is a PCI bridge, 408 + * disable ASPM for the link for now 409 + */ 410 + state = 0; 411 + break; 412 + } 413 + if ((child_dev->pcie_type != PCI_EXP_TYPE_ENDPOINT && 414 + child_dev->pcie_type != PCI_EXP_TYPE_LEG_END)) 415 + continue; 416 + /* Devices not in D0 don't need a latency check */ 417 + if (child_dev->current_state == PCI_D1 || 418 + child_dev->current_state == PCI_D2 || 419 + child_dev->current_state == PCI_D3hot || 420 + child_dev->current_state == PCI_D3cold) 421 + continue; 422 + state = __pcie_aspm_check_state_one(child_dev, state); 423 + } 424 + return state; 425 + } 426 + 427 + static void __pcie_aspm_config_one_dev(struct pci_dev *pdev, unsigned int state) 428 + { 429 + u16 reg16; 430 + int pos = pci_find_capability(pdev, PCI_CAP_ID_EXP); 431 + 432 + pci_read_config_word(pdev, pos + PCI_EXP_LNKCTL, &reg16); 433 + reg16 &= ~0x3; 434 + reg16 |= state; 435 + pci_write_config_word(pdev, pos + PCI_EXP_LNKCTL, reg16); 436 + } 437 + 438 + static void __pcie_aspm_config_link(struct pci_dev *pdev, unsigned int state) 439 + { 440 + struct pci_dev *child_dev; 441 + int valid = 1; 442 + struct pcie_link_state *link_state = pdev->link_state; 443 + 444 + /* 445 + * if the downstream component has a PCI bridge function, don't do ASPM 446 + * for now 447 + */ 448 + list_for_each_entry(child_dev, &pdev->subordinate->devices, bus_list) { 449 + if (child_dev->pcie_type == PCI_EXP_TYPE_PCI_BRIDGE) { 450 + valid = 0;
451 + break; 452 + } 453 + } 454 + if (!valid) 455 + return; 456 + 457 + /* 458 + * spec 2.0 suggests all functions should be configured with the same 459 + * setting for ASPM. Enabling ASPM L1 should be done in the upstream 460 + * component first and then downstream, and vice versa for disabling 461 + * ASPM L1. Spec doesn't mention L0S. 462 + */ 463 + if (state & PCIE_LINK_STATE_L1) 464 + __pcie_aspm_config_one_dev(pdev, state); 465 + 466 + list_for_each_entry(child_dev, &pdev->subordinate->devices, bus_list) 467 + __pcie_aspm_config_one_dev(child_dev, state); 468 + 469 + if (!(state & PCIE_LINK_STATE_L1)) 470 + __pcie_aspm_config_one_dev(pdev, state); 471 + 472 + link_state->enabled_state = state; 473 + } 474 + 475 + static void __pcie_aspm_configure_link_state(struct pci_dev *pdev, 476 + unsigned int state) 477 + { 478 + struct pcie_link_state *link_state = pdev->link_state; 479 + 480 + if (link_state->support_state == 0) 481 + return; 482 + state &= PCIE_LINK_STATE_L0S|PCIE_LINK_STATE_L1; 483 + 484 + /* state 0 means disabling aspm */ 485 + state = pcie_aspm_check_state(pdev, state); 486 + if (link_state->enabled_state == state) 487 + return; 488 + __pcie_aspm_config_link(pdev, state); 489 + } 490 + 491 + /* 492 + * pcie_aspm_configure_link_state: enable/disable PCI express link state 493 + * @pdev: the root port or switch downstream port 494 + */ 495 + static void pcie_aspm_configure_link_state(struct pci_dev *pdev, 496 + unsigned int state) 497 + { 498 + down_read(&pci_bus_sem); 499 + mutex_lock(&aspm_lock); 500 + __pcie_aspm_configure_link_state(pdev, state); 501 + mutex_unlock(&aspm_lock); 502 + up_read(&pci_bus_sem); 503 + } 504 + 505 + static void free_link_state(struct pci_dev *pdev) 506 + { 507 + kfree(pdev->link_state); 508 + pdev->link_state = NULL; 509 + } 510 + 511 + /* 512 + * pcie_aspm_init_link_state: Initialize PCI express link state. 513 + * It is called after the PCIe port and its child devices are scanned.
514 + * @pdev: the root port or switch downstream port 515 + */ 516 + void pcie_aspm_init_link_state(struct pci_dev *pdev) 517 + { 518 + unsigned int state; 519 + struct pcie_link_state *link_state; 520 + int error = 0; 521 + 522 + if (aspm_disabled || !pdev->is_pcie || pdev->link_state) 523 + return; 524 + if (pdev->pcie_type != PCI_EXP_TYPE_ROOT_PORT && 525 + pdev->pcie_type != PCI_EXP_TYPE_DOWNSTREAM) 526 + return; 527 + down_read(&pci_bus_sem); 528 + if (list_empty(&pdev->subordinate->devices)) 529 + goto out; 530 + 531 + mutex_lock(&aspm_lock); 532 + 533 + link_state = kzalloc(sizeof(*link_state), GFP_KERNEL); 534 + if (!link_state) 535 + goto unlock_out; 536 + pdev->link_state = link_state; 537 + 538 + pcie_aspm_configure_common_clock(pdev); 539 + 540 + pcie_aspm_cap_init(pdev); 541 + 542 + /* configure link state to avoid a BIOS error */ 543 + state = pcie_aspm_check_state(pdev, policy_to_aspm_state(pdev)); 544 + __pcie_aspm_config_link(pdev, state); 545 + 546 + pcie_check_clock_pm(pdev); 547 + 548 + link_state->pdev = pdev; 549 + list_add(&link_state->sibiling, &link_list); 550 + 551 + unlock_out: 552 + if (error) 553 + free_link_state(pdev); 554 + mutex_unlock(&aspm_lock); 555 + out: 556 + up_read(&pci_bus_sem); 557 + } 558 + 559 + /* @pdev: the endpoint device */ 560 + void pcie_aspm_exit_link_state(struct pci_dev *pdev) 561 + { 562 + struct pci_dev *parent = pdev->bus->self; 563 + struct pcie_link_state *link_state = parent->link_state; 564 + 565 + if (aspm_disabled || !pdev->is_pcie || !parent || !link_state) 566 + return; 567 + if (parent->pcie_type != PCI_EXP_TYPE_ROOT_PORT && 568 + parent->pcie_type != PCI_EXP_TYPE_DOWNSTREAM) 569 + return; 570 + down_read(&pci_bus_sem); 571 + mutex_lock(&aspm_lock); 572 + 573 + /* 574 + * All PCIe functions are in one slot; removing one function will remove 575 + * the whole slot, so just wait 576 + */ 577 + if (!list_empty(&parent->subordinate->devices)) 578 + goto out; 579 + 580 + /* All functions are removed, so
just disable ASPM for the link */ 581 + __pcie_aspm_config_one_dev(parent, 0); 582 + list_del(&link_state->sibiling); 583 + /* Clock PM is for endpoint device */ 584 + 585 + free_link_state(parent); 586 + out: 587 + mutex_unlock(&aspm_lock); 588 + up_read(&pci_bus_sem); 589 + } 590 + 591 + /* @pdev: the root port or switch downstream port */ 592 + void pcie_aspm_pm_state_change(struct pci_dev *pdev) 593 + { 594 + struct pcie_link_state *link_state = pdev->link_state; 595 + 596 + if (aspm_disabled || !pdev->is_pcie || !pdev->link_state) 597 + return; 598 + if (pdev->pcie_type != PCI_EXP_TYPE_ROOT_PORT && 599 + pdev->pcie_type != PCI_EXP_TYPE_DOWNSTREAM) 600 + return; 601 + /* 602 + * devices changed PM state, we should recheck if latency meets all 603 + * functions' requirement 604 + */ 605 + pcie_aspm_configure_link_state(pdev, link_state->enabled_state); 606 + } 607 + 608 + /* 609 + * pci_disable_link_state - disable pci device's link state, so the link will 610 + * never enter specific states 611 + */ 612 + void pci_disable_link_state(struct pci_dev *pdev, int state) 613 + { 614 + struct pci_dev *parent = pdev->bus->self; 615 + struct pcie_link_state *link_state; 616 + 617 + if (aspm_disabled || !pdev->is_pcie) 618 + return; 619 + if (pdev->pcie_type == PCI_EXP_TYPE_ROOT_PORT || 620 + pdev->pcie_type == PCI_EXP_TYPE_DOWNSTREAM) 621 + parent = pdev; 622 + if (!parent) 623 + return; 624 + 625 + down_read(&pci_bus_sem); 626 + mutex_lock(&aspm_lock); 627 + link_state = parent->link_state; 628 + link_state->support_state &= 629 + ~(state & (PCIE_LINK_STATE_L0S|PCIE_LINK_STATE_L1)); 630 + if (state & PCIE_LINK_STATE_CLKPM) 631 + link_state->clk_pm_capable = 0; 632 + 633 + __pcie_aspm_configure_link_state(parent, link_state->enabled_state); 634 + if (!link_state->clk_pm_capable && link_state->clk_pm_enabled) 635 + pcie_set_clock_pm(parent, 0); 636 + mutex_unlock(&aspm_lock); 637 + up_read(&pci_bus_sem); 638 + } 639 + EXPORT_SYMBOL(pci_disable_link_state); 640 + 641 + 
static int pcie_aspm_set_policy(const char *val, struct kernel_param *kp) 642 + { 643 + int i; 644 + struct pci_dev *pdev; 645 + struct pcie_link_state *link_state; 646 + 647 + for (i = 0; i < ARRAY_SIZE(policy_str); i++) 648 + if (!strncmp(val, policy_str[i], strlen(policy_str[i]))) 649 + break; 650 + if (i >= ARRAY_SIZE(policy_str)) 651 + return -EINVAL; 652 + if (i == aspm_policy) 653 + return 0; 654 + 655 + down_read(&pci_bus_sem); 656 + mutex_lock(&aspm_lock); 657 + aspm_policy = i; 658 + list_for_each_entry(link_state, &link_list, sibiling) { 659 + pdev = link_state->pdev; 660 + __pcie_aspm_configure_link_state(pdev, 661 + policy_to_aspm_state(pdev)); 662 + if (link_state->clk_pm_capable && 663 + link_state->clk_pm_enabled != policy_to_clkpm_state(pdev)) 664 + pcie_set_clock_pm(pdev, policy_to_clkpm_state(pdev)); 665 + 666 + } 667 + mutex_unlock(&aspm_lock); 668 + up_read(&pci_bus_sem); 669 + return 0; 670 + } 671 + 672 + static int pcie_aspm_get_policy(char *buffer, struct kernel_param *kp) 673 + { 674 + int i, cnt = 0; 675 + for (i = 0; i < ARRAY_SIZE(policy_str); i++) 676 + if (i == aspm_policy) 677 + cnt += sprintf(buffer + cnt, "[%s] ", policy_str[i]); 678 + else 679 + cnt += sprintf(buffer + cnt, "%s ", policy_str[i]); 680 + return cnt; 681 + } 682 + 683 + module_param_call(policy, pcie_aspm_set_policy, pcie_aspm_get_policy, 684 + NULL, 0644); 685 + 686 + #ifdef CONFIG_PCIEASPM_DEBUG 687 + static ssize_t link_state_show(struct device *dev, 688 + struct device_attribute *attr, 689 + char *buf) 690 + { 691 + struct pci_dev *pci_device = to_pci_dev(dev); 692 + struct pcie_link_state *link_state = pci_device->link_state; 693 + 694 + return sprintf(buf, "%d\n", link_state->enabled_state); 695 + } 696 + 697 + static ssize_t link_state_store(struct device *dev, 698 + struct device_attribute *attr, 699 + const char *buf, 700 + size_t n) 701 + { 702 + struct pci_dev *pci_device = to_pci_dev(dev); 703 + int state; 704 + 705 + if (n < 1) 706 + return -EINVAL; 707 
+ state = buf[0]-'0'; 708 + if (state >= 0 && state <= 3) { 709 + /* setup link aspm state */ 710 + pcie_aspm_configure_link_state(pci_device, state); 711 + return n; 712 + } 713 + 714 + return -EINVAL; 715 + } 716 + 717 + static ssize_t clk_ctl_show(struct device *dev, 718 + struct device_attribute *attr, 719 + char *buf) 720 + { 721 + struct pci_dev *pci_device = to_pci_dev(dev); 722 + struct pcie_link_state *link_state = pci_device->link_state; 723 + 724 + return sprintf(buf, "%d\n", link_state->clk_pm_enabled); 725 + } 726 + 727 + static ssize_t clk_ctl_store(struct device *dev, 728 + struct device_attribute *attr, 729 + const char *buf, 730 + size_t n) 731 + { 732 + struct pci_dev *pci_device = to_pci_dev(dev); 733 + int state; 734 + 735 + if (n < 1) 736 + return -EINVAL; 737 + state = buf[0]-'0'; 738 + 739 + down_read(&pci_bus_sem); 740 + mutex_lock(&aspm_lock); 741 + pcie_set_clock_pm(pci_device, !!state); 742 + mutex_unlock(&aspm_lock); 743 + up_read(&pci_bus_sem); 744 + 745 + return n; 746 + } 747 + 748 + static DEVICE_ATTR(link_state, 0644, link_state_show, link_state_store); 749 + static DEVICE_ATTR(clk_ctl, 0644, clk_ctl_show, clk_ctl_store); 750 + 751 + static char power_group[] = "power"; 752 + void pcie_aspm_create_sysfs_dev_files(struct pci_dev *pdev) 753 + { 754 + struct pcie_link_state *link_state = pdev->link_state; 755 + 756 + if (!pdev->is_pcie || (pdev->pcie_type != PCI_EXP_TYPE_ROOT_PORT && 757 + pdev->pcie_type != PCI_EXP_TYPE_DOWNSTREAM)) 758 + return; 759 + 760 + if (link_state->support_state) 761 + sysfs_add_file_to_group(&pdev->dev.kobj, 762 + &dev_attr_link_state.attr, power_group); 763 + if (link_state->clk_pm_capable) 764 + sysfs_add_file_to_group(&pdev->dev.kobj, 765 + &dev_attr_clk_ctl.attr, power_group); 766 + } 767 + 768 + void pcie_aspm_remove_sysfs_dev_files(struct pci_dev *pdev) 769 + { 770 + struct pcie_link_state *link_state = pdev->link_state; 771 + 772 + if (!pdev->is_pcie || (pdev->pcie_type != PCI_EXP_TYPE_ROOT_PORT && 
773 + pdev->pcie_type != PCI_EXP_TYPE_DOWNSTREAM)) 774 + return; 775 + 776 + if (link_state->support_state) 777 + sysfs_remove_file_from_group(&pdev->dev.kobj, 778 + &dev_attr_link_state.attr, power_group); 779 + if (link_state->clk_pm_capable) 780 + sysfs_remove_file_from_group(&pdev->dev.kobj, 781 + &dev_attr_clk_ctl.attr, power_group); 782 + } 783 + #endif 784 + 785 + static int __init pcie_aspm_disable(char *str) 786 + { 787 + aspm_disabled = 1; 788 + return 1; 789 + } 790 + 791 + __setup("pcie_noaspm", pcie_aspm_disable); 792 + 793 + static int __init pcie_aspm_init(void) 794 + { 795 + if (aspm_disabled) 796 + return 0; 797 + pci_osc_support_set(OSC_ACTIVE_STATE_PWR_SUPPORT| 798 + OSC_CLOCK_PWR_CAPABILITY_SUPPORT); 799 + return 0; 800 + } 801 + 802 + fs_initcall(pcie_aspm_init);
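The L0s/L1 exit latencies decoded in aspm.c above come from 3-bit register encodings that scale a base unit (64 ns for L0s, 1 µs for L1), with 0x7 meaning "beyond the maximum". The conversion can be exercised standalone; this is the same logic as the calc_L0S_latency()/calc_L1_latency() helpers in the diff, lifted into a userspace sketch:

```c
#include <assert.h>

/* L0s exit latency: encoding scales a 64 ns base; 0x7 means "> 4us"
 * for a link capability, or "any latency acceptable" (-1U) when
 * decoding an endpoint's acceptable latency (ac != 0). */
static unsigned int calc_L0S_latency(unsigned int latency_encoding, int ac)
{
	unsigned int ns = 64;

	if (latency_encoding == 0x7) {
		if (ac)
			ns = -1U;	/* endpoint accepts any latency */
		else
			ns = 5 * 1000;	/* > 4us */
	} else
		ns *= (1 << latency_encoding);
	return ns;
}

/* L1 exit latency: same scheme with a 1 us base and "> 64us" cap. */
static unsigned int calc_L1_latency(unsigned int latency_encoding, int ac)
{
	unsigned int ns = 1000;

	if (latency_encoding == 0x7) {
		if (ac)
			ns = -1U;
		else
			ns = 65 * 1000;	/* > 64us */
	} else
		ns *= (1 << latency_encoding);
	return ns;
}
```

An ASPM state is then rejected whenever the decoded link latency exceeds the endpoint's decoded acceptable latency.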
+2 -3
drivers/pci/pcie/portdrv_core.c
··· 192 192 if (reg32 & SLOT_HP_CAPABLE_MASK) 193 193 services |= PCIE_PORT_SERVICE_HP; 194 194 } 195 - /* PME Capable */ 196 - pos = pci_find_capability(dev, PCI_CAP_ID_PME); 197 - if (pos) 195 + /* PME Capable - root port capability */ 196 + if (((reg16 >> 4) & PORT_TYPE_MASK) == PCIE_RC_PORT) 198 197 services |= PCIE_PORT_SERVICE_PME; 199 198 200 199 pos = PCI_CFG_SPACE_SIZE;
+34 -46
drivers/pci/probe.c
··· 9 9 #include <linux/slab.h> 10 10 #include <linux/module.h> 11 11 #include <linux/cpumask.h> 12 + #include <linux/aspm.h> 12 13 #include "pci.h" 13 14 14 15 #define CARDBUS_LATENCY_TIMER 176 /* secondary latency timer */ ··· 54 53 b->legacy_io->attr.mode = S_IRUSR | S_IWUSR; 55 54 b->legacy_io->read = pci_read_legacy_io; 56 55 b->legacy_io->write = pci_write_legacy_io; 57 - class_device_create_bin_file(&b->class_dev, b->legacy_io); 56 + device_create_bin_file(&b->dev, b->legacy_io); 58 57 59 58 /* Allocated above after the legacy_io struct */ 60 59 b->legacy_mem = b->legacy_io + 1; ··· 62 61 b->legacy_mem->size = 1024*1024; 63 62 b->legacy_mem->attr.mode = S_IRUSR | S_IWUSR; 64 63 b->legacy_mem->mmap = pci_mmap_legacy_mem; 65 - class_device_create_bin_file(&b->class_dev, b->legacy_mem); 64 + device_create_bin_file(&b->dev, b->legacy_mem); 66 65 } 67 66 } 68 67 69 68 void pci_remove_legacy_files(struct pci_bus *b) 70 69 { 71 70 if (b->legacy_io) { 72 - class_device_remove_bin_file(&b->class_dev, b->legacy_io); 73 - class_device_remove_bin_file(&b->class_dev, b->legacy_mem); 71 + device_remove_bin_file(&b->dev, b->legacy_io); 72 + device_remove_bin_file(&b->dev, b->legacy_mem); 74 73 kfree(b->legacy_io); /* both are allocated here */ 75 74 } 76 75 } ··· 82 81 /* 83 82 * PCI Bus Class Devices 84 83 */ 85 - static ssize_t pci_bus_show_cpuaffinity(struct class_device *class_dev, 84 + static ssize_t pci_bus_show_cpuaffinity(struct device *dev, 85 + struct device_attribute *attr, 86 86 char *buf) 87 87 { 88 88 int ret; 89 89 cpumask_t cpumask; 90 90 91 - cpumask = pcibus_to_cpumask(to_pci_bus(class_dev)); 91 + cpumask = pcibus_to_cpumask(to_pci_bus(dev)); 92 92 ret = cpumask_scnprintf(buf, PAGE_SIZE, cpumask); 93 93 if (ret < PAGE_SIZE) 94 94 buf[ret++] = '\n'; 95 95 return ret; 96 96 } 97 - CLASS_DEVICE_ATTR(cpuaffinity, S_IRUGO, pci_bus_show_cpuaffinity, NULL); 97 + DEVICE_ATTR(cpuaffinity, S_IRUGO, pci_bus_show_cpuaffinity, NULL); 98 98 99 99 /* 100 100 * PCI Bus 
 Class
  */
-static void release_pcibus_dev(struct class_device *class_dev)
+static void release_pcibus_dev(struct device *dev)
 {
-	struct pci_bus *pci_bus = to_pci_bus(class_dev);
+	struct pci_bus *pci_bus = to_pci_bus(dev);
 
 	if (pci_bus->bridge)
 		put_device(pci_bus->bridge);
···
 
 static struct class pcibus_class = {
 	.name		= "pci_bus",
-	.release	= &release_pcibus_dev,
+	.dev_release	= &release_pcibus_dev,
 };
 
 static int __init pcibus_class_init(void)
···
 {
 	struct pci_bus *child;
 	int i;
-	int retval;
 
 	/*
 	 * Allocate a new bus, and inherit stuff from the parent..
···
 	child->bus_flags = parent->bus_flags;
 	child->bridge = get_device(&bridge->dev);
 
-	child->class_dev.class = &pcibus_class;
-	sprintf(child->class_dev.class_id, "%04x:%02x", pci_domain_nr(child), busnr);
-	retval = class_device_register(&child->class_dev);
-	if (retval)
-		goto error_register;
-	retval = class_device_create_file(&child->class_dev,
-					  &class_device_attr_cpuaffinity);
-	if (retval)
-		goto error_file_create;
+	/* initialize some portions of the bus device, but don't register it
+	 * now as the parent is not properly set up yet. This device will get
+	 * registered later in pci_bus_add_devices()
+	 */
+	child->dev.class = &pcibus_class;
+	sprintf(child->dev.bus_id, "%04x:%02x", pci_domain_nr(child), busnr);
 
 	/*
 	 * Set up the primary, secondary and subordinate
···
 	bridge->subordinate = child;
 
 	return child;
-
-error_file_create:
-	class_device_unregister(&child->class_dev);
-error_register:
-	kfree(child);
-	return NULL;
 }
 
 struct pci_bus *pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev, int busnr)
···
 		parent = parent->parent;
 	}
 }
-
-unsigned int pci_scan_child_bus(struct pci_bus *bus);
 
 /*
  * If it's a bridge, configure it and scan the bus behind it.
···
 		    (child->number > bus->subordinate) ||
 		    (child->number < bus->number) ||
 		    (child->subordinate < bus->number)) {
-			pr_debug("PCI: Bus #%02x (-#%02x) is %s"
+			pr_debug("PCI: Bus #%02x (-#%02x) is %s "
 				"hidden behind%s bridge #%02x (-#%02x)\n",
 				child->number, child->subordinate,
 				(bus->number > child->subordinate &&
 				 bus->subordinate < child->number) ?
-					"wholly " : " partially",
-				bus->self->transparent ? " transparent" : " ",
+					"wholly" : "partially",
+				bus->self->transparent ? " transparent" : "",
 				bus->number, bus->subordinate);
 		}
 		bus = bus->parent;
···
 
 	return dev;
 }
+EXPORT_SYMBOL(pci_scan_single_device);
 
 /**
  * pci_scan_slot - scan a PCI slot on a bus for devices.
···
 			break;
 		}
 	}
+
+	if (bus->self)
+		pcie_aspm_init_link_state(bus->self);
+
 	return nr;
 }
···
 		goto dev_reg_err;
 	b->bridge = get_device(dev);
 
-	b->class_dev.class = &pcibus_class;
-	sprintf(b->class_dev.class_id, "%04x:%02x", pci_domain_nr(b), bus);
-	error = class_device_register(&b->class_dev);
+	b->dev.class = &pcibus_class;
+	b->dev.parent = b->bridge;
+	sprintf(b->dev.bus_id, "%04x:%02x", pci_domain_nr(b), bus);
+	error = device_register(&b->dev);
 	if (error)
 		goto class_dev_reg_err;
-	error = class_device_create_file(&b->class_dev, &class_device_attr_cpuaffinity);
+	error = device_create_file(&b->dev, &dev_attr_cpuaffinity);
 	if (error)
-		goto class_dev_create_file_err;
+		goto dev_create_file_err;
 
 	/* Create legacy_io and legacy_mem files for this bus */
 	pci_create_legacy_files(b);
-
-	error = sysfs_create_link(&b->class_dev.kobj, &b->bridge->kobj, "bridge");
-	if (error)
-		goto sys_create_link_err;
 
 	b->number = b->secondary = bus;
 	b->resource[0] = &ioport_resource;
···
 
 	return b;
 
-sys_create_link_err:
-	class_device_remove_file(&b->class_dev, &class_device_attr_cpuaffinity);
-class_dev_create_file_err:
-	class_device_unregister(&b->class_dev);
+dev_create_file_err:
+	device_unregister(&b->dev);
 class_dev_reg_err:
 	device_unregister(dev);
 dev_reg_err:
···
 	kfree(b);
 	return NULL;
 }
-EXPORT_SYMBOL_GPL(pci_create_bus);
 
 struct pci_bus *pci_scan_bus_parented(struct device *parent,
 		int bus, struct pci_ops *ops, void *sysdata)
···
 EXPORT_SYMBOL(pci_do_scan_bus);
 EXPORT_SYMBOL(pci_scan_slot);
 EXPORT_SYMBOL(pci_scan_bridge);
-EXPORT_SYMBOL(pci_scan_single_device);
 EXPORT_SYMBOL_GPL(pci_scan_child_bus);
 #endif
+9 -8
drivers/pci/proc.c
···
 #include <linux/module.h>
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
+#include <linux/smp_lock.h>
 #include <linux/capability.h>
 #include <asm/uaccess.h>
 #include <asm/byteorder.h>
···
 	int write_combine;
 };
 
-static int proc_bus_pci_ioctl(struct inode *inode, struct file *file, unsigned int cmd, unsigned long arg)
+static long proc_bus_pci_ioctl(struct file *file, unsigned int cmd,
+			       unsigned long arg)
 {
-	const struct proc_dir_entry *dp = PDE(inode);
+	const struct proc_dir_entry *dp = PDE(file->f_dentry->d_inode);
 	struct pci_dev *dev = dp->data;
 #ifdef HAVE_PCI_MMAP
 	struct pci_filp_private *fpriv = file->private_data;
 #endif /* HAVE_PCI_MMAP */
 	int ret = 0;
+
+	lock_kernel();
 
 	switch (cmd) {
 	case PCIIOC_CONTROLLER:
···
 		break;
 	};
 
+	unlock_kernel();
 	return ret;
 }
···
 	.llseek		= proc_bus_pci_lseek,
 	.read		= proc_bus_pci_read,
 	.write		= proc_bus_pci_write,
-	.ioctl		= proc_bus_pci_ioctl,
+	.unlocked_ioctl	= proc_bus_pci_ioctl,
 #ifdef HAVE_PCI_MMAP
 	.open		= proc_bus_pci_open,
 	.release	= proc_bus_pci_release,
···
 	return 0;
 }
 
-static struct seq_operations proc_bus_pci_devices_op = {
+static const struct seq_operations proc_bus_pci_devices_op = {
 	.start	= pci_seq_start,
 	.next	= pci_seq_next,
 	.stop	= pci_seq_stop,
···
 }
 
 __initcall(pci_proc_init);
-
-#ifdef CONFIG_HOTPLUG
-EXPORT_SYMBOL(pci_proc_detach_bus);
-#endif
+281 -202
drivers/pci/quirks.c
···
 #include <linux/init.h>
 #include <linux/delay.h>
 #include <linux/acpi.h>
+#include <linux/kallsyms.h>
 #include "pci.h"
 
 /* The Mellanox Tavor device gives false positive parity errors
···
 	while ((d = pci_get_device(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371SB_0, d))) {
 		pci_read_config_byte(d, 0x82, &dlc);
 		if (!(dlc & 1<<1)) {
-			printk(KERN_ERR "PCI: PIIX3: Enabling Passive Release on %s\n", pci_name(d));
+			dev_err(&d->dev, "PIIX3: Enabling Passive Release\n");
 			dlc |= 1<<1;
 			pci_write_config_byte(d, 0x82, dlc);
 		}
 	}
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_passive_release );
-DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_passive_release );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_passive_release)
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_passive_release)
 
 /* The VIA VP2/VP3/MVP3 seem to have some 'features'. There may be a workaround
    but VIA don't answer queries. If you happen to have good contacts at VIA
···
 {
 	if (!isa_dma_bridge_buggy) {
 		isa_dma_bridge_buggy=1;
-		printk(KERN_INFO "Activating ISA DMA hang workarounds.\n");
+		dev_info(&dev->dev, "Activating ISA DMA hang workarounds\n");
 	}
 }
 /*
  * Its not totally clear which chipsets are the problematic ones
  * We know 82C586 and 82C596 variants are affected.
 */
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_0, quirk_isa_dma_hangs );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C596, quirk_isa_dma_hangs );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371SB_0, quirk_isa_dma_hangs );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1533, quirk_isa_dma_hangs );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_1, quirk_isa_dma_hangs );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_2, quirk_isa_dma_hangs );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_3, quirk_isa_dma_hangs );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_0, quirk_isa_dma_hangs);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C596, quirk_isa_dma_hangs);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371SB_0, quirk_isa_dma_hangs);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1533, quirk_isa_dma_hangs);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_1, quirk_isa_dma_hangs);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_2, quirk_isa_dma_hangs);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_3, quirk_isa_dma_hangs);
 
 int pci_pci_problems;
 EXPORT_SYMBOL(pci_pci_problems);
···
 static void __devinit quirk_nopcipci(struct pci_dev *dev)
 {
 	if ((pci_pci_problems & PCIPCI_FAIL)==0) {
-		printk(KERN_INFO "Disabling direct PCI/PCI transfers.\n");
+		dev_info(&dev->dev, "Disabling direct PCI/PCI transfers\n");
 		pci_pci_problems |= PCIPCI_FAIL;
 	}
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_5597, quirk_nopcipci );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_496, quirk_nopcipci );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_5597, quirk_nopcipci);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_496, quirk_nopcipci);
 
 static void __devinit quirk_nopciamd(struct pci_dev *dev)
 {
···
 	pci_read_config_byte(dev, 0x08, &rev);
 	if (rev == 0x13) {
 		/* Erratum 24 */
-		printk(KERN_INFO "Chipset erratum: Disabling direct PCI/AGP transfers.\n");
+		dev_info(&dev->dev, "Chipset erratum: Disabling direct PCI/AGP transfers\n");
 		pci_pci_problems |= PCIAGP_FAIL;
 	}
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8151_0, quirk_nopciamd );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8151_0, quirk_nopciamd);
 
 /*
  * Triton requires workarounds to be used by the drivers
···
 static void __devinit quirk_triton(struct pci_dev *dev)
 {
 	if ((pci_pci_problems&PCIPCI_TRITON)==0) {
-		printk(KERN_INFO "Limiting direct PCI/PCI transfers.\n");
+		dev_info(&dev->dev, "Limiting direct PCI/PCI transfers\n");
 		pci_pci_problems |= PCIPCI_TRITON;
 	}
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82437, quirk_triton );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82437VX, quirk_triton );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82439, quirk_triton );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82439TX, quirk_triton );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82437, quirk_triton);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82437VX, quirk_triton);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82439, quirk_triton);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82439TX, quirk_triton);
 
 /*
  * VIA Apollo KT133 needs PCI latency patch
···
 static void quirk_vialatency(struct pci_dev *dev)
 {
 	struct pci_dev *p;
-	u8 rev;
 	u8 busarb;
 	/* Ok we have a potential problem chipset here. Now see if we have
 	   a buggy southbridge */
 
 	p = pci_get_device(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, NULL);
 	if (p!=NULL) {
-		pci_read_config_byte(p, PCI_CLASS_REVISION, &rev);
 		/* 0x40 - 0x4f == 686B, 0x10 - 0x2f == 686A; thanks Dan Hollis */
 		/* Check for buggy part revisions */
-		if (rev < 0x40 || rev > 0x42)
+		if (p->revision < 0x40 || p->revision > 0x42)
 			goto exit;
 	} else {
 		p = pci_get_device(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8231, NULL);
 		if (p==NULL)	/* No problem parts */
 			goto exit;
-		pci_read_config_byte(p, PCI_CLASS_REVISION, &rev);
 		/* Check for buggy part revisions */
-		if (rev < 0x10 || rev > 0x12)
+		if (p->revision < 0x10 || p->revision > 0x12)
 			goto exit;
 	}
···
 	busarb &= ~(1<<5);
 	busarb |= (1<<4);
 	pci_write_config_byte(dev, 0x76, busarb);
-	printk(KERN_INFO "Applying VIA southbridge workaround.\n");
+	dev_info(&dev->dev, "Applying VIA southbridge workaround\n");
 exit:
 	pci_dev_put(p);
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8363_0, quirk_vialatency );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8371_1, quirk_vialatency );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8361, quirk_vialatency );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8363_0, quirk_vialatency);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8371_1, quirk_vialatency);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8361, quirk_vialatency);
 /* Must restore this on a resume from RAM */
-DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8363_0, quirk_vialatency );
-DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8371_1, quirk_vialatency );
-DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8361, quirk_vialatency );
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8363_0, quirk_vialatency);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8371_1, quirk_vialatency);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8361, quirk_vialatency);
 
 /*
  * VIA Apollo VP3 needs ETBF on BT848/878
···
 static void __devinit quirk_viaetbf(struct pci_dev *dev)
 {
 	if ((pci_pci_problems&PCIPCI_VIAETBF)==0) {
-		printk(KERN_INFO "Limiting direct PCI/PCI transfers.\n");
+		dev_info(&dev->dev, "Limiting direct PCI/PCI transfers\n");
 		pci_pci_problems |= PCIPCI_VIAETBF;
 	}
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C597_0, quirk_viaetbf );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C597_0, quirk_viaetbf);
 
 static void __devinit quirk_vsfx(struct pci_dev *dev)
 {
 	if ((pci_pci_problems&PCIPCI_VSFX)==0) {
-		printk(KERN_INFO "Limiting direct PCI/PCI transfers.\n");
+		dev_info(&dev->dev, "Limiting direct PCI/PCI transfers\n");
 		pci_pci_problems |= PCIPCI_VSFX;
 	}
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C576, quirk_vsfx );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C576, quirk_vsfx);
 
 /*
  * Ali Magik requires workarounds to be used by the drivers
···
 static void __init quirk_alimagik(struct pci_dev *dev)
 {
 	if ((pci_pci_problems&PCIPCI_ALIMAGIK)==0) {
-		printk(KERN_INFO "Limiting direct PCI/PCI transfers.\n");
+		dev_info(&dev->dev, "Limiting direct PCI/PCI transfers\n");
 		pci_pci_problems |= PCIPCI_ALIMAGIK|PCIPCI_TRITON;
 	}
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1647, quirk_alimagik );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1651, quirk_alimagik );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1647, quirk_alimagik);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1651, quirk_alimagik);
 
 /*
  * Natoma has some interesting boundary conditions with Zoran stuff
···
 static void __devinit quirk_natoma(struct pci_dev *dev)
 {
 	if ((pci_pci_problems&PCIPCI_NATOMA)==0) {
-		printk(KERN_INFO "Limiting direct PCI/PCI transfers.\n");
+		dev_info(&dev->dev, "Limiting direct PCI/PCI transfers\n");
 		pci_pci_problems |= PCIPCI_NATOMA;
 	}
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_natoma );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443LX_0, quirk_natoma );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443LX_1, quirk_natoma );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_0, quirk_natoma );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_1, quirk_natoma );
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_2, quirk_natoma );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_natoma);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443LX_0, quirk_natoma);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443LX_1, quirk_natoma);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_0, quirk_natoma);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_1, quirk_natoma);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_2, quirk_natoma);
 
 /*
  * This chip can cause PCI parity errors if config register 0xA0 is read
···
 {
 	dev->cfg_size = 0xA0;
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CITRINE, quirk_citrine );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CITRINE, quirk_citrine);
 
 /*
  * S3 868 and 968 chips report region size equal to 32M, but they decode 64M.
···
 		r->end = 0x3ffffff;
 	}
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_S3, PCI_DEVICE_ID_S3_868, quirk_s3_64M );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_S3, PCI_DEVICE_ID_S3_968, quirk_s3_64M );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_S3, PCI_DEVICE_ID_S3_868, quirk_s3_64M);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_S3, PCI_DEVICE_ID_S3_968, quirk_s3_64M);
 
 static void __devinit quirk_io_region(struct pci_dev *dev, unsigned region,
 	unsigned size, int nr, const char *name)
···
 		pcibios_bus_to_resource(dev, res, &bus_region);
 
 		pci_claim_resource(dev, nr);
-		printk("PCI quirk: region %04x-%04x claimed by %s\n", region, region + size - 1, name);
+		dev_info(&dev->dev, "quirk: region %04x-%04x claimed by %s\n", region, region + size - 1, name);
 	}
 }
···
 */
 static void __devinit quirk_ati_exploding_mce(struct pci_dev *dev)
 {
-	printk(KERN_INFO "ATI Northbridge, reserving I/O ports 0x3b0 to 0x3bb.\n");
+	dev_info(&dev->dev, "ATI Northbridge, reserving I/O ports 0x3b0 to 0x3bb\n");
 	/* Mae rhaid i ni beidio ag edrych ar y lleoliadiau I/O hyn */
 	request_region(0x3b0, 0x0C, "RadeonIGP");
 	request_region(0x3d3, 0x01, "RadeonIGP");
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_RS100, quirk_ati_exploding_mce );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_RS100, quirk_ati_exploding_mce);
 
 /*
  * Let's make the southbridge information explicit instead
···
 	pci_read_config_word(dev, 0xE2, &region);
 	quirk_io_region(dev, region, 32, PCI_BRIDGE_RESOURCES+1, "ali7101 SMB");
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M7101, quirk_ali7101_acpi );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M7101, quirk_ali7101_acpi);
 
 static void piix4_io_quirk(struct pci_dev *dev, const char *name, unsigned int port, unsigned int enable)
 {
···
 	 * let's get enough confirmation reports first.
 	 */
 	base &= -size;
-	printk("%s PIO at %04x-%04x\n", name, base, base + size - 1);
+	dev_info(&dev->dev, "%s PIO at %04x-%04x\n", name, base, base + size - 1);
 }
 
 static void piix4_mem_quirk(struct pci_dev *dev, const char *name, unsigned int port, unsigned int enable)
···
 	 * reserve it, but let's get enough confirmation reports first.
 	 */
 	base &= -size;
-	printk("%s MMIO at %04x-%04x\n", name, base, base + size - 1);
+	dev_info(&dev->dev, "%s MMIO at %04x-%04x\n", name, base, base + size - 1);
 }
 
 /*
···
 	piix4_io_quirk(dev, "PIIX4 devres I", 0x78, 1 << 20);
 	piix4_io_quirk(dev, "PIIX4 devres J", 0x7c, 1 << 20);
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371AB_3, quirk_piix4_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443MX_3, quirk_piix4_acpi );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371AB_3, quirk_piix4_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443MX_3, quirk_piix4_acpi);
 
 /*
  * ICH4, ICH4-M, ICH5, ICH5-M ACPI: Three IO regions pointed to by longwords at
···
 	pci_read_config_dword(dev, 0x58, &region);
 	quirk_io_region(dev, region, 64, PCI_BRIDGE_RESOURCES+1, "ICH4 GPIO");
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AA_0, quirk_ich4_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AB_0, quirk_ich4_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0, quirk_ich4_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_10, quirk_ich4_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_0, quirk_ich4_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_12, quirk_ich4_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_0, quirk_ich4_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_12, quirk_ich4_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0, quirk_ich4_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_1, quirk_ich4_lpc_acpi );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AA_0, quirk_ich4_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AB_0, quirk_ich4_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0, quirk_ich4_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_10, quirk_ich4_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_0, quirk_ich4_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_12, quirk_ich4_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_0, quirk_ich4_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_12, quirk_ich4_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0, quirk_ich4_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_1, quirk_ich4_lpc_acpi);
 
 static void __devinit quirk_ich6_lpc_acpi(struct pci_dev *dev)
 {
···
 	pci_read_config_dword(dev, 0x48, &region);
 	quirk_io_region(dev, region, 64, PCI_BRIDGE_RESOURCES+1, "ICH6 GPIO");
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_0, quirk_ich6_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, quirk_ich6_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH7_0, quirk_ich6_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH7_1, quirk_ich6_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH7_31, quirk_ich6_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_0, quirk_ich6_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_2, quirk_ich6_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_3, quirk_ich6_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_1, quirk_ich6_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_4, quirk_ich6_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_2, quirk_ich6_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_4, quirk_ich6_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_7, quirk_ich6_lpc_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_8, quirk_ich6_lpc_acpi );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_0, quirk_ich6_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, quirk_ich6_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH7_0, quirk_ich6_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH7_1, quirk_ich6_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH7_31, quirk_ich6_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_0, quirk_ich6_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_2, quirk_ich6_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_3, quirk_ich6_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_1, quirk_ich6_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_4, quirk_ich6_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_2, quirk_ich6_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_4, quirk_ich6_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_7, quirk_ich6_lpc_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_8, quirk_ich6_lpc_acpi);
 
 /*
  * VIA ACPI: One IO region pointed to by longword at
···
 		quirk_io_region(dev, region, 256, PCI_BRIDGE_RESOURCES, "vt82c586 ACPI");
 	}
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_3, quirk_vt82c586_acpi );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_3, quirk_vt82c586_acpi);
 
 /*
  * VIA VT82C686 ACPI: Three IO region pointed to by (long)words at
···
 	smb &= PCI_BASE_ADDRESS_IO_MASK;
 	quirk_io_region(dev, smb, 16, PCI_BRIDGE_RESOURCES + 2, "vt82c686 SMB");
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686_4, quirk_vt82c686_acpi );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686_4, quirk_vt82c686_acpi);
 
 /*
  * VIA VT8235 ISA Bridge: Two IO regions pointed to by words at
···
 	else
 		tmp = 0x1f; /* all known bits (4-0) routed to external APIC */
 
-	printk(KERN_INFO "PCI: %sbling Via external APIC routing\n",
+	dev_info(&dev->dev, "%sbling VIA external APIC routing\n",
 		tmp == 0 ? "Disa" : "Ena");
 
 	/* Offset 0x58: External APIC IRQ output control */
 	pci_write_config_byte (dev, 0x58, tmp);
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, quirk_via_ioapic );
-DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, quirk_via_ioapic );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, quirk_via_ioapic);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, quirk_via_ioapic);
 
 /*
  * VIA 8237: Some BIOSs don't set the 'Bypass APIC De-Assert Message' Bit.
···
 
 	pci_read_config_byte(dev, 0x5B, &misc_control2);
 	if (!(misc_control2 & BYPASS_APIC_DEASSERT)) {
-		printk(KERN_INFO "PCI: Bypassing VIA 8237 APIC De-Assert Message\n");
+		dev_info(&dev->dev, "Bypassing VIA 8237 APIC De-Assert Message\n");
 		pci_write_config_byte(dev, 0x5B, misc_control2|BYPASS_APIC_DEASSERT);
 	}
 }
···
 static void __devinit quirk_amd_ioapic(struct pci_dev *dev)
 {
 	if (dev->revision >= 0x02) {
-		printk(KERN_WARNING "I/O APIC: AMD Erratum #22 may be present. In the event of instability try\n");
-		printk(KERN_WARNING "        : booting with the \"noapic\" option.\n");
+		dev_warn(&dev->dev, "I/O APIC: AMD Erratum #22 may be present. In the event of instability try\n");
+		dev_warn(&dev->dev, "        : booting with the \"noapic\" option\n");
 	}
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_VIPER_7410, quirk_amd_ioapic );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_VIPER_7410, quirk_amd_ioapic);
 
 static void __init quirk_ioapic_rmw(struct pci_dev *dev)
 {
 	if (dev->devfn == 0 && dev->bus->number == 0)
 		sis_apic_bug = 1;
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SI, PCI_ANY_ID, quirk_ioapic_rmw );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SI, PCI_ANY_ID, quirk_ioapic_rmw);
 
 #define AMD8131_revA0        0x01
 #define AMD8131_revB0        0x11
···
 		return;
 
 	if (dev->revision == AMD8131_revA0 || dev->revision == AMD8131_revB0) {
-		printk(KERN_INFO "Fixing up AMD8131 IOAPIC mode\n");
+		dev_info(&dev->dev, "Fixing up AMD8131 IOAPIC mode\n");
 		pci_read_config_byte( dev, AMD8131_MISC, &tmp);
 		tmp &= ~(1 << AMD8131_NIOAMODE_BIT);
 		pci_write_config_byte( dev, AMD8131_MISC, tmp);
···
 static void __init quirk_amd_8131_mmrbc(struct pci_dev *dev)
 {
 	if (dev->subordinate && dev->revision <= 0x12) {
-		printk(KERN_INFO "AMD8131 rev %x detected, disabling PCI-X "
-				"MMRBC\n", dev->revision);
+		dev_info(&dev->dev, "AMD8131 rev %x detected; "
+			"disabling PCI-X MMRBC\n", dev->revision);
 		dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MMRBC;
 	}
 }
···
 	if (irq && (irq != 2))
 		d->irq = irq;
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_3, quirk_via_acpi );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686_4, quirk_via_acpi );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_3, quirk_via_acpi);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686_4, quirk_via_acpi);
 
 
 /*
···
 
 	pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &irq);
 	if (new_irq != irq) {
-		printk(KERN_INFO "PCI: VIA VLink IRQ fixup for %s, from %d to %d\n",
-			pci_name(dev), irq, new_irq);
+		dev_info(&dev->dev, "VIA VLink IRQ fixup, from %d to %d\n",
+			irq, new_irq);
 		udelay(15);	/* unknown if delay really needed */
 		pci_write_config_byte(dev, PCI_INTERRUPT_LINE, new_irq);
 	}
···
 	pci_write_config_byte(dev, 0xfc, 0);
 	pci_read_config_word(dev, PCI_DEVICE_ID, &dev->device);
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C597_0, quirk_vt82c598_id );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C597_0, quirk_vt82c598_id);
 
 /*
  * CardBus controllers have a legacy base address that enables them
···
 	pci_read_config_dword(dev, 0x4C, &pcic);
 	if ((pcic&6)!=6) {
 		pcic |= 6;
-		printk(KERN_WARNING "BIOS failed to enable PCI standards compliance, fixing this error.\n");
+		dev_warn(&dev->dev, "BIOS failed to enable PCI standards compliance; fixing this error\n");
 		pci_write_config_dword(dev, 0x4C, pcic);
 		pci_read_config_dword(dev, 0x84, &pcic);
 		pcic |= (1<<23);	/* Required in this mode */
 		pci_write_config_dword(dev, 0x84, pcic);
 	}
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_FE_GATE_700C, quirk_amd_ordering );
-DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_FE_GATE_700C, quirk_amd_ordering );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_FE_GATE_700C, quirk_amd_ordering);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_FE_GATE_700C, quirk_amd_ordering);
 
 /*
  * DreamWorks provided workaround for Dunord I-3000 problem
···
 	r->start = 0;
 	r->end = 0xffffff;
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_DUNORD, PCI_DEVICE_ID_DUNORD_I3000, quirk_dunord );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_DUNORD, PCI_DEVICE_ID_DUNORD_I3000, quirk_dunord);
 
 /*
  * i82380FB mobile docking controller: its PCI-to-PCI bridge
···
 {
 	dev->transparent = 1;
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82380FB, quirk_transparent_bridge );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TOSHIBA, 0x605, quirk_transparent_bridge );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82380FB, quirk_transparent_bridge);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TOSHIBA, 0x605, quirk_transparent_bridge);
 
 /*
  * Common misconfiguration of the MediaGX/Geode PCI master that will
···
 	pci_read_config_byte(dev, 0x41, &reg);
 	if (reg & 2) {
 		reg &= ~2;
-		printk(KERN_INFO "PCI: Fixup for MediaGX/Geode Slave Disconnect Boundary (0x41=0x%02x)\n", reg);
+		dev_info(&dev->dev, "Fixup for MediaGX/Geode Slave Disconnect Boundary (0x41=0x%02x)\n", reg);
 		pci_write_config_byte(dev, 0x41, reg);
 	}
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_PCI_MASTER, quirk_mediagx_master );
-DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_PCI_MASTER, quirk_mediagx_master );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_PCI_MASTER, quirk_mediagx_master);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_PCI_MASTER, quirk_mediagx_master);
 
 /*
  * Ensure C0 rev restreaming is off. This is normally done by
···
 	if (config & (1<<6)) {
 		config &= ~(1<<6);
 		pci_write_config_word(pdev, 0x40, config);
-		printk(KERN_INFO "PCI: C0 revision 450NX. Disabling PCI restreaming.\n");
+		dev_info(&pdev->dev, "C0 revision 450NX. Disabling PCI restreaming\n");
 	}
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82454NX, quirk_disable_pxb );
-DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82454NX, quirk_disable_pxb );
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82454NX, quirk_disable_pxb);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82454NX, quirk_disable_pxb);
 
 
 static void __devinit quirk_sb600_sata(struct pci_dev *pdev)
···
 		/* PCI layer will sort out resources */
 	}
 }
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_CSB5IDE, quirk_svwks_csb5ide );
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_CSB5IDE, quirk_svwks_csb5ide);
 
 /*
  * Intel 82801CAM ICH3-M datasheet says IDE modes must be the same
···
 	pci_read_config_byte(pdev, PCI_CLASS_PROG, &prog);
 
 	if (((prog & 1) && !(prog & 4)) || ((prog & 4) && !(prog & 1))) {
-		printk(KERN_INFO "PCI: IDE mode mismatch; forcing legacy mode\n");
+		dev_info(&pdev->dev, "IDE mode mismatch; forcing legacy mode\n");
 		prog &= ~5;
 		pdev->class &= ~5;
 		pci_write_config_byte(pdev, PCI_CLASS_PROG, prog);
···
 {
 	dev->class = PCI_CLASS_BRIDGE_EISA << 8;
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82375, quirk_eisa_bridge );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82375, quirk_eisa_bridge);
 
 
 /*
···
 		case 0x12bd: /* HP D530 */
 			asus_hides_smbus = 1;
 		}
+	else if (dev->device == PCI_DEVICE_ID_INTEL_82875_HB)
+		switch (dev->subsystem_device) {
+		case 0x12bf: /* HP xw4100 */
+			asus_hides_smbus = 1;
+		}
 	else if (dev->device == PCI_DEVICE_ID_INTEL_82915GM_HB)
 		switch (dev->subsystem_device) {
 		case 0x099c: /* HP Compaq nx6110 */
···
 		}
 	}
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82845_HB, asus_hides_smbus_hostbridge );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82845G_HB, asus_hides_smbus_hostbridge );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82850_HB, asus_hides_smbus_hostbridge );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82865_HB, asus_hides_smbus_hostbridge );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_7205_0, asus_hides_smbus_hostbridge );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7501_MCH, asus_hides_smbus_hostbridge );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82855PM_HB, asus_hides_smbus_hostbridge );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82855GM_HB, asus_hides_smbus_hostbridge );
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82915GM_HB, asus_hides_smbus_hostbridge );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82845_HB, asus_hides_smbus_hostbridge);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82845G_HB, asus_hides_smbus_hostbridge);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82850_HB, asus_hides_smbus_hostbridge);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82865_HB, asus_hides_smbus_hostbridge);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82875_HB, asus_hides_smbus_hostbridge);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_7205_0, asus_hides_smbus_hostbridge);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7501_MCH, asus_hides_smbus_hostbridge);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82855PM_HB,
asus_hides_smbus_hostbridge); 1060 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82855GM_HB, asus_hides_smbus_hostbridge); 1061 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82915GM_HB, asus_hides_smbus_hostbridge); 1064 1062 1065 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82810_IG3, asus_hides_smbus_hostbridge ); 1063 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82810_IG3, asus_hides_smbus_hostbridge); 1066 1064 1067 1065 static void asus_hides_smbus_lpc(struct pci_dev *dev) 1068 1066 { ··· 1077 1073 pci_write_config_word(dev, 0xF2, val & (~0x8)); 1078 1074 pci_read_config_word(dev, 0xF2, &val); 1079 1075 if (val & 0x8) 1080 - printk(KERN_INFO "PCI: i801 SMBus device continues to play 'hide and seek'! 0x%x\n", val); 1076 + dev_info(&dev->dev, "i801 SMBus device continues to play 'hide and seek'! 0x%x\n", val); 1081 1077 else 1082 - printk(KERN_INFO "PCI: Enabled i801 SMBus device\n"); 1078 + dev_info(&dev->dev, "Enabled i801 SMBus device\n"); 1083 1079 } 1084 1080 } 1085 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AA_0, asus_hides_smbus_lpc ); 1086 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_0, asus_hides_smbus_lpc ); 1087 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0, asus_hides_smbus_lpc ); 1088 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_0, asus_hides_smbus_lpc ); 1089 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_12, asus_hides_smbus_lpc ); 1090 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_12, asus_hides_smbus_lpc ); 1091 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0, asus_hides_smbus_lpc ); 1092 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AA_0, asus_hides_smbus_lpc ); 1093 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 
PCI_DEVICE_ID_INTEL_82801DB_0, asus_hides_smbus_lpc ); 1094 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0, asus_hides_smbus_lpc ); 1095 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_0, asus_hides_smbus_lpc ); 1096 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_12, asus_hides_smbus_lpc ); 1097 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_12, asus_hides_smbus_lpc ); 1098 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0, asus_hides_smbus_lpc ); 1081 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AA_0, asus_hides_smbus_lpc); 1082 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_0, asus_hides_smbus_lpc); 1083 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0, asus_hides_smbus_lpc); 1084 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_0, asus_hides_smbus_lpc); 1085 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_12, asus_hides_smbus_lpc); 1086 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_12, asus_hides_smbus_lpc); 1087 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0, asus_hides_smbus_lpc); 1088 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AA_0, asus_hides_smbus_lpc); 1089 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_0, asus_hides_smbus_lpc); 1090 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0, asus_hides_smbus_lpc); 1091 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_0, asus_hides_smbus_lpc); 1092 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_12, asus_hides_smbus_lpc); 1093 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_12, asus_hides_smbus_lpc); 1094 + 
DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0, asus_hides_smbus_lpc); 1099 1095 1100 1096 static void asus_hides_smbus_lpc_ich6(struct pci_dev *dev) 1101 1097 { ··· 1110 1106 val=readl(base + 0x3418); /* read the Function Disable register, dword mode only */ 1111 1107 writel(val & 0xFFFFFFF7, base + 0x3418); /* enable the SMBus device */ 1112 1108 iounmap(base); 1113 - printk(KERN_INFO "PCI: Enabled ICH6/i801 SMBus device\n"); 1109 + dev_info(&dev->dev, "Enabled ICH6/i801 SMBus device\n"); 1114 1110 } 1115 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, asus_hides_smbus_lpc_ich6 ); 1116 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, asus_hides_smbus_lpc_ich6 ); 1111 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, asus_hides_smbus_lpc_ich6); 1112 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, asus_hides_smbus_lpc_ich6); 1117 1113 1118 1114 /* 1119 1115 * SiS 96x south bridge: BIOS typically hides SMBus device... 
··· 1123 1119 u8 val = 0; 1124 1120 pci_read_config_byte(dev, 0x77, &val); 1125 1121 if (val & 0x10) { 1126 - printk(KERN_INFO "Enabling SiS 96x SMBus.\n"); 1122 + dev_info(&dev->dev, "Enabling SiS 96x SMBus\n"); 1127 1123 pci_write_config_byte(dev, 0x77, val & ~0x10); 1128 1124 } 1129 1125 } 1130 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_961, quirk_sis_96x_smbus ); 1131 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_962, quirk_sis_96x_smbus ); 1132 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_963, quirk_sis_96x_smbus ); 1133 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_LPC, quirk_sis_96x_smbus ); 1134 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_961, quirk_sis_96x_smbus ); 1135 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_962, quirk_sis_96x_smbus ); 1136 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_963, quirk_sis_96x_smbus ); 1137 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_LPC, quirk_sis_96x_smbus ); 1126 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_961, quirk_sis_96x_smbus); 1127 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_962, quirk_sis_96x_smbus); 1128 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_963, quirk_sis_96x_smbus); 1129 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_LPC, quirk_sis_96x_smbus); 1130 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_961, quirk_sis_96x_smbus); 1131 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_962, quirk_sis_96x_smbus); 1132 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_963, quirk_sis_96x_smbus); 1133 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_LPC, quirk_sis_96x_smbus); 1138 1134 1139 1135 /* 1140 1136 * ... 
This is further complicated by the fact that some SiS96x south ··· 1167 1163 dev->device = devid; 1168 1164 quirk_sis_96x_smbus(dev); 1169 1165 } 1170 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_503, quirk_sis_503 ); 1171 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_503, quirk_sis_503 ); 1166 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_503, quirk_sis_503); 1167 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_503, quirk_sis_503); 1172 1168 1173 1169 1174 1170 /* ··· 1195 1191 pci_write_config_byte(dev, 0x50, val & (~0xc0)); 1196 1192 pci_read_config_byte(dev, 0x50, &val); 1197 1193 if (val & 0xc0) 1198 - printk(KERN_INFO "PCI: onboard AC97/MC97 devices continue to play 'hide and seek'! 0x%x\n", val); 1194 + dev_info(&dev->dev, "Onboard AC97/MC97 devices continue to play 'hide and seek'! 0x%x\n", val); 1199 1195 else 1200 - printk(KERN_INFO "PCI: enabled onboard AC97/MC97 devices\n"); 1196 + dev_info(&dev->dev, "Enabled onboard AC97/MC97 devices\n"); 1201 1197 } 1202 1198 } 1203 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237, asus_hides_ac97_lpc ); 1204 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237, asus_hides_ac97_lpc ); 1199 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237, asus_hides_ac97_lpc); 1200 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237, asus_hides_ac97_lpc); 1205 1201 1206 1202 #if defined(CONFIG_ATA) || defined(CONFIG_ATA_MODULE) 1207 1203 ··· 1296 1292 } 1297 1293 1298 1294 } 1299 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_EESSC, quirk_alder_ioapic ); 1295 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_EESSC, quirk_alder_ioapic); 1300 1296 #endif 1301 1297 1302 1298 int pcie_mch_quirk; ··· 1306 1302 { 1307 1303 pcie_mch_quirk = 1; 1308 1304 } 1309 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7520_MCH, quirk_pcie_mch ); 1310 - 
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7320_MCH, quirk_pcie_mch ); 1311 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7525_MCH, quirk_pcie_mch ); 1305 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7520_MCH, quirk_pcie_mch); 1306 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7320_MCH, quirk_pcie_mch); 1307 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7525_MCH, quirk_pcie_mch); 1312 1308 1313 1309 1314 1310 /* ··· 1318 1314 static void __devinit quirk_pcie_pxh(struct pci_dev *dev) 1319 1315 { 1320 1316 pci_msi_off(dev); 1321 - 1322 1317 dev->no_msi = 1; 1323 - 1324 - printk(KERN_WARNING "PCI: PXH quirk detected, " 1325 - "disabling MSI for SHPC device\n"); 1318 + dev_warn(&dev->dev, "PXH quirk detected; SHPC device MSI disabled\n"); 1326 1319 } 1327 1320 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PXHD_0, quirk_pcie_pxh); 1328 1321 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PXHD_1, quirk_pcie_pxh); ··· 1400 1399 case PCI_DEVICE_ID_NETMOS_9855: 1401 1400 if ((dev->class >> 8) == PCI_CLASS_COMMUNICATION_SERIAL && 1402 1401 num_parallel) { 1403 - printk(KERN_INFO "PCI: Netmos %04x (%u parallel, " 1402 + dev_info(&dev->dev, "Netmos %04x (%u parallel, " 1404 1403 "%u serial); changing class SERIAL to OTHER " 1405 1404 "(use parport_serial)\n", 1406 1405 dev->device, num_parallel, num_serial); ··· 1413 1412 1414 1413 static void __devinit quirk_e100_interrupt(struct pci_dev *dev) 1415 1414 { 1416 - u16 command; 1415 + u16 command, pmcsr; 1417 1416 u8 __iomem *csr; 1418 1417 u8 cmd_hi; 1418 + int pm; 1419 1419 1420 1420 switch (dev->device) { 1421 1421 /* PCI IDs taken from drivers/net/e100.c */ ··· 1450 1448 if (!(command & PCI_COMMAND_MEMORY) || !pci_resource_start(dev, 0)) 1451 1449 return; 1452 1450 1451 + /* 1452 + * Check that the device is in the D0 power state. 
If it's not, 1453 + * there is no point to look any further. 1454 + */ 1455 + pm = pci_find_capability(dev, PCI_CAP_ID_PM); 1456 + if (pm) { 1457 + pci_read_config_word(dev, pm + PCI_PM_CTRL, &pmcsr); 1458 + if ((pmcsr & PCI_PM_CTRL_STATE_MASK) != PCI_D0) 1459 + return; 1460 + } 1461 + 1453 1462 /* Convert from PCI bus to resource space. */ 1454 1463 csr = ioremap(pci_resource_start(dev, 0), 8); 1455 1464 if (!csr) { 1456 - printk(KERN_WARNING "PCI: Can't map %s e100 registers\n", 1457 - pci_name(dev)); 1465 + dev_warn(&dev->dev, "Can't map e100 registers\n"); 1458 1466 return; 1459 1467 } 1460 1468 1461 1469 cmd_hi = readb(csr + 3); 1462 1470 if (cmd_hi == 0) { 1463 - printk(KERN_WARNING "PCI: Firmware left %s e100 interrupts " 1464 - "enabled, disabling\n", pci_name(dev)); 1471 + dev_warn(&dev->dev, "Firmware left e100 interrupts enabled; " 1472 + "disabling\n"); 1465 1473 writeb(1, csr + 3); 1466 1474 } 1467 1475 ··· 1486 1474 */ 1487 1475 1488 1476 if (dev->class == PCI_CLASS_NOT_DEFINED) { 1489 - printk(KERN_INFO "NCR 53c810 rev 1 detected, setting PCI class.\n"); 1477 + dev_info(&dev->dev, "NCR 53c810 rev 1 detected; setting PCI class\n"); 1490 1478 dev->class = PCI_CLASS_STORAGE_SCSI; 1491 1479 } 1492 1480 } ··· 1497 1485 while (f < end) { 1498 1486 if ((f->vendor == dev->vendor || f->vendor == (u16) PCI_ANY_ID) && 1499 1487 (f->device == dev->device || f->device == (u16) PCI_ANY_ID)) { 1500 - pr_debug("PCI: Calling quirk %p for %s\n", f->hook, pci_name(dev)); 1488 + #ifdef DEBUG 1489 + dev_dbg(&dev->dev, "calling quirk 0x%p", f->hook); 1490 + print_fn_descriptor_symbol(": %s()\n", 1491 + (unsigned long) f->hook); 1492 + #endif 1501 1493 f->hook(dev); 1502 1494 } 1503 1495 f++; ··· 1569 1553 pci_read_config_word(dev, 0x40, &en1k); 1570 1554 1571 1555 if (en1k & 0x200) { 1572 - printk(KERN_INFO "PCI: Enable I/O Space to 1 KB Granularity\n"); 1556 + dev_info(&dev->dev, "Enable I/O Space to 1KB granularity\n"); 1573 1557 1574 1558 pci_read_config_byte(dev, 
PCI_IO_BASE, &io_base_lo); 1575 1559 pci_read_config_byte(dev, PCI_IO_LIMIT, &io_limit_lo); ··· 1601 1585 iobl_adr_1k = iobl_adr | (res->start >> 8) | (res->end & 0xfc00); 1602 1586 1603 1587 if (iobl_adr != iobl_adr_1k) { 1604 - printk(KERN_INFO "PCI: Fixing P64H2 IOBL_ADR from 0x%x to 0x%x for 1 KB Granularity\n", 1588 + dev_info(&dev->dev, "Fixing P64H2 IOBL_ADR from 0x%x to 0x%x for 1KB granularity\n", 1605 1589 iobl_adr,iobl_adr_1k); 1606 1590 pci_write_config_word(dev, PCI_IO_BASE, iobl_adr_1k); 1607 1591 } ··· 1619 1603 if (pci_read_config_byte(dev, 0xf41, &b) == 0) { 1620 1604 if (!(b & 0x20)) { 1621 1605 pci_write_config_byte(dev, 0xf41, b | 0x20); 1622 - printk(KERN_INFO 1623 - "PCI: Linking AER extended capability on %s\n", 1624 - pci_name(dev)); 1606 + dev_info(&dev->dev, 1607 + "Linking AER extended capability\n"); 1625 1608 } 1626 1609 } 1627 1610 } ··· 1628 1613 quirk_nvidia_ck804_pcie_aer_ext_cap); 1629 1614 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_CK804_PCIE, 1630 1615 quirk_nvidia_ck804_pcie_aer_ext_cap); 1616 + 1617 + static void __devinit quirk_via_cx700_pci_parking_caching(struct pci_dev *dev) 1618 + { 1619 + /* 1620 + * Disable PCI Bus Parking and PCI Master read caching on CX700 1621 + * which causes unspecified timing errors with a VT6212L on the PCI 1622 + * bus leading to USB2.0 packet loss. The defaults are that these 1623 + * features are turned off but some BIOSes turn them on. 
1624 + */ 1625 + 1626 + uint8_t b; 1627 + if (pci_read_config_byte(dev, 0x76, &b) == 0) { 1628 + if (b & 0x40) { 1629 + /* Turn off PCI Bus Parking */ 1630 + pci_write_config_byte(dev, 0x76, b ^ 0x40); 1631 + 1632 + /* Turn off PCI Master read caching */ 1633 + pci_write_config_byte(dev, 0x72, 0x0); 1634 + pci_write_config_byte(dev, 0x75, 0x1); 1635 + pci_write_config_byte(dev, 0x77, 0x0); 1636 + 1637 + printk(KERN_INFO 1638 + "PCI: VIA CX700 PCI parking/caching fixup on %s\n", 1639 + pci_name(dev)); 1640 + } 1641 + } 1642 + } 1643 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_VIA, 0x324e, quirk_via_cx700_pci_parking_caching); 1631 1644 1632 1645 #ifdef CONFIG_PCI_MSI 1633 1646 /* Some chipsets do not support MSI. We cannot easily rely on setting ··· 1667 1624 static void __init quirk_disable_all_msi(struct pci_dev *dev) 1668 1625 { 1669 1626 pci_no_msi(); 1670 - printk(KERN_WARNING "PCI: MSI quirk detected. MSI deactivated.\n"); 1627 + dev_warn(&dev->dev, "MSI quirk detected; MSI disabled\n"); 1671 1628 } 1672 1629 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_GCNB_LE, quirk_disable_all_msi); 1673 1630 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_RS400_200, quirk_disable_all_msi); ··· 1678 1635 static void __devinit quirk_disable_msi(struct pci_dev *dev) 1679 1636 { 1680 1637 if (dev->subordinate) { 1681 - printk(KERN_WARNING "PCI: MSI quirk detected. " 1682 - "PCI_BUS_FLAGS_NO_MSI set for %s subordinate bus.\n", 1683 - pci_name(dev)); 1638 + dev_warn(&dev->dev, "MSI quirk detected; " 1639 + "subordinate MSI disabled\n"); 1684 1640 dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI; 1685 1641 } 1686 1642 } ··· 1698 1656 if (pci_read_config_byte(dev, pos + HT_MSI_FLAGS, 1699 1657 &flags) == 0) 1700 1658 { 1701 - printk(KERN_INFO "PCI: Found %s HT MSI Mapping on %s\n", 1659 + dev_info(&dev->dev, "Found %s HT MSI Mapping\n", 1702 1660 flags & HT_MSI_FLAGS_ENABLE ? 
1703 - "enabled" : "disabled", pci_name(dev)); 1661 + "enabled" : "disabled"); 1704 1662 return (flags & HT_MSI_FLAGS_ENABLE) != 0; 1705 1663 } 1706 1664 ··· 1714 1672 static void __devinit quirk_msi_ht_cap(struct pci_dev *dev) 1715 1673 { 1716 1674 if (dev->subordinate && !msi_ht_cap_enabled(dev)) { 1717 - printk(KERN_WARNING "PCI: MSI quirk detected. " 1718 - "MSI disabled on chipset %s.\n", 1719 - pci_name(dev)); 1675 + dev_warn(&dev->dev, "MSI quirk detected; " 1676 + "subordinate MSI disabled\n"); 1720 1677 dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI; 1721 1678 } 1722 1679 } 1723 1680 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT2000_PCIE, 1724 1681 quirk_msi_ht_cap); 1725 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, 1726 - PCI_DEVICE_ID_SERVERWORKS_HT1000_PXB, 1727 - quirk_msi_ht_cap); 1682 + 1683 + 1684 + /* 1685 + * Force enable MSI mapping capability on HT bridges 1686 + */ 1687 + static void __devinit quirk_msi_ht_cap_enable(struct pci_dev *dev) 1688 + { 1689 + int pos, ttl = 48; 1690 + 1691 + pos = pci_find_ht_capability(dev, HT_CAPTYPE_MSI_MAPPING); 1692 + while (pos && ttl--) { 1693 + u8 flags; 1694 + 1695 + if (pci_read_config_byte(dev, pos + HT_MSI_FLAGS, &flags) == 0) { 1696 + printk(KERN_INFO "PCI: Enabling HT MSI Mapping on %s\n", 1697 + pci_name(dev)); 1698 + 1699 + pci_write_config_byte(dev, pos + HT_MSI_FLAGS, 1700 + flags | HT_MSI_FLAGS_ENABLE); 1701 + } 1702 + pos = pci_find_next_ht_capability(dev, pos, 1703 + HT_CAPTYPE_MSI_MAPPING); 1704 + } 1705 + } 1706 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SERVERWORKS, 1707 + PCI_DEVICE_ID_SERVERWORKS_HT1000_PXB, 1708 + quirk_msi_ht_cap_enable); 1728 1709 1729 1710 /* The nVidia CK804 chipset may have 2 HT MSI mappings. 1730 1711 * MSI are supported if the MSI capability set in any of these mappings. 
··· 1766 1701 if (!pdev) 1767 1702 return; 1768 1703 if (!msi_ht_cap_enabled(dev) && !msi_ht_cap_enabled(pdev)) { 1769 - printk(KERN_WARNING "PCI: MSI quirk detected. " 1770 - "MSI disabled on chipset %s.\n", 1771 - pci_name(dev)); 1704 + dev_warn(&dev->dev, "MSI quirk detected; " 1705 + "subordinate MSI disabled\n"); 1772 1706 dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI; 1773 1707 } 1774 1708 pci_dev_put(pdev); ··· 1778 1714 static void __devinit quirk_msi_intx_disable_bug(struct pci_dev *dev) 1779 1715 { 1780 1716 dev->dev_flags |= PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG; 1717 + } 1718 + static void __devinit quirk_msi_intx_disable_ati_bug(struct pci_dev *dev) 1719 + { 1720 + struct pci_dev *p; 1721 + 1722 + /* SB700 MSI issue will be fixed at HW level from revision A21, 1723 + * we need check PCI REVISION ID of SMBus controller to get SB700 1724 + * revision. 1725 + */ 1726 + p = pci_get_device(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_SBX00_SMBUS, 1727 + NULL); 1728 + if (!p) 1729 + return; 1730 + 1731 + if ((p->revision < 0x3B) && (p->revision >= 0x30)) 1732 + dev->dev_flags |= PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG; 1733 + pci_dev_put(p); 1781 1734 } 1782 1735 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, 1783 1736 PCI_DEVICE_ID_TIGON3_5780, ··· 1816 1735 quirk_msi_intx_disable_bug); 1817 1736 1818 1737 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4390, 1819 - quirk_msi_intx_disable_bug); 1738 + quirk_msi_intx_disable_ati_bug); 1820 1739 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4391, 1821 - quirk_msi_intx_disable_bug); 1740 + quirk_msi_intx_disable_ati_bug); 1822 1741 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4392, 1823 - quirk_msi_intx_disable_bug); 1742 + quirk_msi_intx_disable_ati_bug); 1824 1743 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4393, 1825 - quirk_msi_intx_disable_bug); 1744 + quirk_msi_intx_disable_ati_bug); 1826 1745 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4394, 1827 - quirk_msi_intx_disable_bug); 1828 - 
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4395, 1829 - quirk_msi_intx_disable_bug); 1746 + quirk_msi_intx_disable_ati_bug); 1830 1747 1831 1748 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4373, 1832 1749 quirk_msi_intx_disable_bug);
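Most of the quirks above, including the new VIA CX700 parking/caching fixup, follow the same read-modify-write pattern on a config register. A minimal userspace model of the CX700 case is sketched below; `cfg76` and `cx700_parking_fixup()` are our own names standing in for `pci_read_config_byte()`/`pci_write_config_byte()` on register 0x76, and are not kernel symbols:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the byte-wide PCI config register at offset 0x76,
 * standing in for pci_read_config_byte()/pci_write_config_byte(). */
static uint8_t cfg76;

/* Mirrors the CX700 fixup logic: if PCI Bus Parking (bit 6) was enabled
 * by the BIOS, clear it. "b ^ 0x40" only runs when the bit is known to be
 * set, so it is equivalent to "b & ~0x40" under the guard. */
static void cx700_parking_fixup(void)
{
	uint8_t b = cfg76;

	if (b & 0x40)
		cfg76 = b ^ 0x40;	/* clear bit 6, preserve the rest */
}
```

Because the write only happens when the bit is set, a device whose BIOS already left parking disabled is never touched, matching the quirk's behavior.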
+6 -4
drivers/pci/remove.c
··· 1 1 #include <linux/pci.h> 2 2 #include <linux/module.h> 3 + #include <linux/pci-aspm.h> 3 4 #include "pci.h" 4 5 5 6 static void pci_free_resources(struct pci_dev *dev) ··· 31 30 dev->global_list.next = dev->global_list.prev = NULL; 32 31 up_write(&pci_bus_sem); 33 32 } 33 + 34 + if (dev->bus->self) 35 + pcie_aspm_exit_link_state(dev); 34 36 } 35 37 36 38 static void pci_destroy_dev(struct pci_dev *dev) ··· 78 74 list_del(&pci_bus->node); 79 75 up_write(&pci_bus_sem); 80 76 pci_remove_legacy_files(pci_bus); 81 - class_device_remove_file(&pci_bus->class_dev, 82 - &class_device_attr_cpuaffinity); 83 - sysfs_remove_link(&pci_bus->class_dev.kobj, "bridge"); 84 - class_device_unregister(&pci_bus->class_dev); 77 + device_remove_file(&pci_bus->dev, &dev_attr_cpuaffinity); 78 + device_unregister(&pci_bus->dev); 85 79 } 86 80 EXPORT_SYMBOL(pci_remove_bus); 87 81
+4 -2
drivers/pci/rom.c
··· 162 162 return rom; 163 163 } 164 164 165 + #if 0 165 166 /** 166 167 * pci_map_rom_copy - map a PCI ROM to kernel space, create a copy 167 168 * @pdev: pointer to pci device struct ··· 197 196 198 197 return (void __iomem *)(unsigned long)res->start; 199 198 } 199 + #endif /* 0 */ 200 200 201 201 /** 202 202 * pci_unmap_rom - unmap the ROM from kernel space ··· 220 218 pci_disable_rom(pdev); 221 219 } 222 220 221 + #if 0 223 222 /** 224 223 * pci_remove_rom - disable the ROM and remove its sysfs attribute 225 224 * @pdev: pointer to pci device struct ··· 239 236 IORESOURCE_ROM_COPY))) 240 237 pci_disable_rom(pdev); 241 238 } 239 + #endif /* 0 */ 242 240 243 241 /** 244 242 * pci_cleanup_rom - internal routine for freeing the ROM copy created ··· 260 256 } 261 257 262 258 EXPORT_SYMBOL(pci_map_rom); 263 - EXPORT_SYMBOL(pci_map_rom_copy); 264 259 EXPORT_SYMBOL(pci_unmap_rom); 265 - EXPORT_SYMBOL(pci_remove_rom);
+40 -24
drivers/pci/setup-bus.c
··· 89 89 * The IO resource is allocated a range twice as large as it 90 90 * would normally need. This allows us to set both IO regs. 91 91 */ 92 - printk(" IO window: %08lx-%08lx\n", 93 - region.start, region.end); 92 + printk(KERN_INFO " IO window: 0x%08lx-0x%08lx\n", 93 + (unsigned long)region.start, 94 + (unsigned long)region.end); 94 95 pci_write_config_dword(bridge, PCI_CB_IO_BASE_0, 95 96 region.start); 96 97 pci_write_config_dword(bridge, PCI_CB_IO_LIMIT_0, ··· 100 99 101 100 pcibios_resource_to_bus(bridge, &region, bus->resource[1]); 102 101 if (bus->resource[1]->flags & IORESOURCE_IO) { 103 - printk(" IO window: %08lx-%08lx\n", 104 - region.start, region.end); 102 + printk(KERN_INFO " IO window: 0x%08lx-0x%08lx\n", 103 + (unsigned long)region.start, 104 + (unsigned long)region.end); 105 105 pci_write_config_dword(bridge, PCI_CB_IO_BASE_1, 106 106 region.start); 107 107 pci_write_config_dword(bridge, PCI_CB_IO_LIMIT_1, ··· 111 109 112 110 pcibios_resource_to_bus(bridge, &region, bus->resource[2]); 113 111 if (bus->resource[2]->flags & IORESOURCE_MEM) { 114 - printk(" PREFETCH window: %08lx-%08lx\n", 115 - region.start, region.end); 112 + printk(KERN_INFO " PREFETCH window: 0x%08lx-0x%08lx\n", 113 + (unsigned long)region.start, 114 + (unsigned long)region.end); 116 115 pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_0, 117 116 region.start); 118 117 pci_write_config_dword(bridge, PCI_CB_MEMORY_LIMIT_0, ··· 122 119 123 120 pcibios_resource_to_bus(bridge, &region, bus->resource[3]); 124 121 if (bus->resource[3]->flags & IORESOURCE_MEM) { 125 - printk(" MEM window: %08lx-%08lx\n", 126 - region.start, region.end); 122 + printk(KERN_INFO " MEM window: 0x%08lx-0x%08lx\n", 123 + (unsigned long)region.start, 124 + (unsigned long)region.end); 127 125 pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_1, 128 126 region.start); 129 127 pci_write_config_dword(bridge, PCI_CB_MEMORY_LIMIT_1, ··· 149 145 { 150 146 struct pci_dev *bridge = bus->self; 151 147 struct 
pci_bus_region region; 152 - u32 l, io_upper16; 148 + u32 l, bu, lu, io_upper16; 153 149 154 150 DBG(KERN_INFO "PCI: Bridge: %s\n", pci_name(bridge)); 155 151 ··· 163 159 /* Set up upper 16 bits of I/O base/limit. */ 164 160 io_upper16 = (region.end & 0xffff0000) | (region.start >> 16); 165 161 DBG(KERN_INFO " IO window: %04lx-%04lx\n", 166 - region.start, region.end); 162 + (unsigned long)region.start, 163 + (unsigned long)region.end); 167 164 } 168 165 else { 169 166 /* Clear upper 16 bits of I/O base/limit. */ ··· 185 180 if (bus->resource[1]->flags & IORESOURCE_MEM) { 186 181 l = (region.start >> 16) & 0xfff0; 187 182 l |= region.end & 0xfff00000; 188 - DBG(KERN_INFO " MEM window: %08lx-%08lx\n", 189 - region.start, region.end); 183 + DBG(KERN_INFO " MEM window: 0x%08lx-0x%08lx\n", 184 + (unsigned long)region.start, 185 + (unsigned long)region.end); 190 186 } 191 187 else { 192 188 l = 0x0000fff0; ··· 201 195 pci_write_config_dword(bridge, PCI_PREF_LIMIT_UPPER32, 0); 202 196 203 197 /* Set up PREF base/limit. */ 198 + bu = lu = 0; 204 199 pcibios_resource_to_bus(bridge, &region, bus->resource[2]); 205 200 if (bus->resource[2]->flags & IORESOURCE_PREFETCH) { 206 201 l = (region.start >> 16) & 0xfff0; 207 202 l |= region.end & 0xfff00000; 208 - DBG(KERN_INFO " PREFETCH window: %08lx-%08lx\n", 209 - region.start, region.end); 203 + #ifdef CONFIG_RESOURCES_64BIT 204 + bu = region.start >> 32; 205 + lu = region.end >> 32; 206 + #endif 207 + DBG(KERN_INFO " PREFETCH window: 0x%016llx-0x%016llx\n", 208 + (unsigned long long)region.start, 209 + (unsigned long long)region.end); 210 210 } 211 211 else { 212 212 l = 0x0000fff0; ··· 220 208 } 221 209 pci_write_config_dword(bridge, PCI_PREF_MEMORY_BASE, l); 222 210 223 - /* Clear out the upper 32 bits of PREF base. */ 224 - pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32, 0); 211 + /* Set the upper 32 bits of PREF base & limit. 
*/ 212 + pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32, bu); 213 + pci_write_config_dword(bridge, PCI_PREF_LIMIT_UPPER32, lu); 225 214 226 215 pci_write_config_word(bridge, PCI_BRIDGE_CONTROL, bus->bridge_ctl); 227 216 } ··· 336 323 static int pbus_size_mem(struct pci_bus *bus, unsigned long mask, unsigned long type) 337 324 { 338 325 struct pci_dev *dev; 339 - unsigned long min_align, align, size; 340 - unsigned long aligns[12]; /* Alignments from 1Mb to 2Gb */ 326 + resource_size_t min_align, align, size; 327 + resource_size_t aligns[12]; /* Alignments from 1Mb to 2Gb */ 341 328 int order, max_order; 342 329 struct resource *b_res = find_free_bus_resource(bus, type); 343 330 ··· 353 340 354 341 for (i = 0; i < PCI_NUM_RESOURCES; i++) { 355 342 struct resource *r = &dev->resource[i]; 356 - unsigned long r_size; 343 + resource_size_t r_size; 357 344 358 345 if (r->parent || (r->flags & mask) != type) 359 346 continue; ··· 363 350 order = __ffs(align) - 20; 364 351 if (order > 11) { 365 352 printk(KERN_WARNING "PCI: region %s/%d " 366 - "too large: %llx-%llx\n", 353 + "too large: 0x%016llx-0x%016llx\n", 367 354 pci_name(dev), i, 368 - (unsigned long long)r->start, 369 - (unsigned long long)r->end); 355 + (unsigned long long)r->start, 356 + (unsigned long long)r->end); 370 357 r->flags = 0; 371 358 continue; 372 359 } ··· 385 372 align = 0; 386 373 min_align = 0; 387 374 for (order = 0; order <= max_order; order++) { 388 - unsigned long align1 = 1UL << (order + 20); 389 - 375 + #ifdef CONFIG_RESOURCES_64BIT 376 + resource_size_t align1 = 1ULL << (order + 20); 377 + #else 378 + resource_size_t align1 = 1U << (order + 20); 379 + #endif 390 380 if (!align) 391 381 min_align = align1; 392 382 else if (ALIGN(align + min_align, min_align) < align1)
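The setup-bus.c change extends the prefetchable window programming to 64-bit addresses: the low dword still packs the 20-bit-granular base and limit, while the new `bu`/`lu` values go into `PCI_PREF_BASE_UPPER32`/`PCI_PREF_LIMIT_UPPER32`. A sketch of that register encoding and the range a bridge would decode back; `struct pref_window` and the helpers are ours, not kernel API:

```c
#include <assert.h>
#include <stdint.h>

/* Model of the bridge prefetch window registers written above. */
struct pref_window {
	uint32_t l;	/* PCI_PREF_MEMORY_BASE dword: base low, limit high */
	uint32_t bu;	/* PCI_PREF_BASE_UPPER32 */
	uint32_t lu;	/* PCI_PREF_LIMIT_UPPER32 */
};

static struct pref_window encode_pref(uint64_t start, uint64_t end)
{
	struct pref_window w;

	w.l  = (uint32_t)((start >> 16) & 0xfff0); /* base[31:20] -> bits 15:4 */
	w.l |= (uint32_t)(end & 0xfff00000);       /* limit[31:20] -> bits 31:20 */
	w.bu = (uint32_t)(start >> 32);
	w.lu = (uint32_t)(end >> 32);
	return w;
}

/* Recover the address range the bridge decodes: the low 20 limit bits are
 * implicitly all-ones (1MB granularity). */
static void decode_pref(struct pref_window w, uint64_t *start, uint64_t *end)
{
	*start = ((uint64_t)w.bu << 32) | ((uint64_t)(w.l & 0xfff0) << 16);
	*end   = ((uint64_t)w.lu << 32) | (w.l & 0xfff00000) | 0xfffff;
}
```

Round-tripping a window above 4GB shows why clearing the upper registers unconditionally (as the old code did) would silently truncate such ranges.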
+4 -3
drivers/pci/setup-res.c
··· 51 51 52 52 pcibios_resource_to_bus(dev, &region, res); 53 53 54 - pr_debug(" got res [%llx:%llx] bus [%lx:%lx] flags %lx for " 54 + pr_debug(" got res [%llx:%llx] bus [%llx:%llx] flags %lx for " 55 55 "BAR %d of %s\n", (unsigned long long)res->start, 56 56 (unsigned long long)res->end, 57 - region.start, region.end, res->flags, resno, pci_name(dev)); 57 + (unsigned long long)region.start, 58 + (unsigned long long)region.end, 59 + (unsigned long)res->flags, resno, pci_name(dev)); 58 60 59 61 new = region.start | (res->flags & PCI_REGION_FLAG_MASK); 60 62 if (res->flags & IORESOURCE_IO) ··· 127 125 128 126 return err; 129 127 } 130 - EXPORT_SYMBOL_GPL(pci_claim_resource); 131 128 132 129 int pci_assign_resource(struct pci_dev *dev, int resno) 133 130 {
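The setup-res.c hunk is purely a printf-portability fix: `resource_size_t` can be 32 or 64 bits depending on the kernel configuration, and passing a 32-bit value to a `%llx` conversion is undefined behavior, hence the explicit casts. A small userspace illustration of the pattern, assuming the 32-bit configuration (the `resource_size_t` typedef and `format_range()` helper are ours):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Model the CONFIG where resource_size_t is only 32 bits wide. */
typedef uint32_t resource_size_t;

static void format_range(char *buf, size_t n,
			 resource_size_t start, resource_size_t end)
{
	/* Cast each argument to the width the format string promises,
	 * exactly as the patch does for pr_debug(). */
	snprintf(buf, n, "[%llx:%llx]",
		 (unsigned long long)start, (unsigned long long)end);
}
```

Without the casts, `%llx` would read eight bytes per argument while only four were pushed, producing garbage (or worse) on 32-bit builds.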
-5
drivers/pci/syscall.c
···
 34  34 	if (!dev)
 35  35 		goto error;
 36  36 
 37 -	lock_kernel();
 38  37 	switch (len) {
 39  38 	case 1:
 40  39 		cfg_ret = pci_user_read_config_byte(dev, off, &byte);
···
 46  47 		break;
 47  48 	default:
 48  49 		err = -EINVAL;
 49 -		unlock_kernel();
 50  50 		goto error;
 51  51 	};
 52 -	unlock_kernel();
 53  52 
 54  53 	err = -EIO;
 55  54 	if (cfg_ret != PCIBIOS_SUCCESSFUL)
···
104 107 	if (!dev)
105 108 		return -ENODEV;
106 109 
107 -	lock_kernel();
108 110 	switch(len) {
109 111 	case 1:
110 112 		err = get_user(byte, (u8 __user *)buf);
···
136 140 		err = -EINVAL;
137 141 		break;
138 142 	}
139 -	unlock_kernel();
140 143 	pci_dev_put(dev);
141 144 	return err;
142 145 }
+1 -2
drivers/scsi/lpfc/lpfc_init.c
···
2296 2296 	struct Scsi_Host *shost = pci_get_drvdata(pdev);
2297 2297 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
2298 2298 	struct lpfc_sli *psli = &phba->sli;
2299 -	int bars = pci_select_bars(pdev, IORESOURCE_MEM);
2300 2299 
2301 2300 	dev_printk(KERN_INFO, &pdev->dev, "recovering from a slot reset.\n");
2302 -	if (pci_enable_device_bars(pdev, bars)) {
2301 +	if (pci_enable_device_mem(pdev)) {
2303 2302 		printk(KERN_ERR "lpfc: Cannot re-enable "
2304 2303 			"PCI device after reset.\n");
2305 2304 		return PCI_ERS_RESULT_DISCONNECT;
+1
drivers/scsi/qla2xxx/qla_def.h
···
2268 2268 	spinlock_t	hardware_lock ____cacheline_aligned;
2269 2269 
2270 2270 	int		bars;
2271 +	int		mem_only;
2271 2272 	device_reg_t __iomem *iobase;		/* Base I/O address */
2272 2273 	resource_size_t pio_address;
2273 2274 #define MIN_IOBASE_LEN	0x100
+17 -4
drivers/scsi/qla2xxx/qla_os.c
···
1564 1564 	char pci_info[30];
1565 1565 	char fw_str[30];
1566 1566 	struct scsi_host_template *sht;
1567 -	int bars;
1567 +	int bars, mem_only = 0;
1568 1568 
1569 1569 	bars = pci_select_bars(pdev, IORESOURCE_MEM | IORESOURCE_IO);
1570 1570 	sht = &qla2x00_driver_template;
···
1575 1575 	    pdev->device == PCI_DEVICE_ID_QLOGIC_ISP2532) {
1576 1576 		bars = pci_select_bars(pdev, IORESOURCE_MEM);
1577 1577 		sht = &qla24xx_driver_template;
1578 +		mem_only = 1;
1578 1579 	}
1579 1580 
1580 -	if (pci_enable_device_bars(pdev, bars))
1581 -		goto probe_out;
1581 +	if (mem_only) {
1582 +		if (pci_enable_device_mem(pdev))
1583 +			goto probe_out;
1584 +	} else {
1585 +		if (pci_enable_device(pdev))
1586 +			goto probe_out;
1587 +	}
1582 1588 
1583 1589 	if (pci_find_aer_capability(pdev))
1584 1590 		if (pci_enable_pcie_error_reporting(pdev))
···
1607 1601 	sprintf(ha->host_str, "%s_%ld", QLA2XXX_DRIVER_NAME, ha->host_no);
1608 1602 	ha->parent = NULL;
1609 1603 	ha->bars = bars;
1604 +	ha->mem_only = mem_only;
1610 1605 
1611 1606 	/* Set ISP-type information. */
1612 1607 	qla2x00_set_isp_flags(ha);
···
2882 2875 {
2883 2876 	pci_ers_result_t ret = PCI_ERS_RESULT_DISCONNECT;
2884 2877 	scsi_qla_host_t *ha = pci_get_drvdata(pdev);
2878 +	int rc;
2885 2879 
2886 -	if (pci_enable_device_bars(pdev, ha->bars)) {
2880 +	if (ha->mem_only)
2881 +		rc = pci_enable_device_mem(pdev);
2882 +	else
2883 +		rc = pci_enable_device(pdev);
2884 +
2885 +	if (rc) {
2887 2886 		qla_printk(KERN_WARNING, ha,
2888 2887 		    "Can't re-enable PCI device after reset.\n");
2889 2888 
+8 -14
drivers/usb/host/pci-quirks.c
···
190 190 		msleep(10);
191 191 	}
192 192 	if (wait_time <= 0)
193 -		printk(KERN_WARNING "%s %s: BIOS handoff "
194 -				"failed (BIOS bug ?) %08x\n",
195 -				pdev->dev.bus_id, "OHCI",
193 +		dev_warn(&pdev->dev, "OHCI: BIOS handoff failed"
194 +				" (BIOS bug?) %08x\n",
196 195 				readl(base + OHCI_CONTROL));
197 196 
198 197 	/* reset controller, preserving RWC */
···
242 243 		switch (cap & 0xff) {
243 244 		case 1:			/* BIOS/SMM/... handoff support */
244 245 			if ((cap & EHCI_USBLEGSUP_BIOS)) {
245 -				pr_debug("%s %s: BIOS handoff\n",
246 -						pdev->dev.bus_id, "EHCI");
246 +				dev_dbg(&pdev->dev, "EHCI: BIOS handoff\n");
247 247 
248 248 #if 0
249 249 /* aleksey_gorelov@phoenix.com reports that some systems need SMI forced on,
···
283 285 				/* well, possibly buggy BIOS... try to shut
284 286 				 * it down, and hope nothing goes too wrong
285 287 				 */
286 -				printk(KERN_WARNING "%s %s: BIOS handoff "
287 -						"failed (BIOS bug ?) %08x\n",
288 -						pdev->dev.bus_id, "EHCI", cap);
288 +				dev_warn(&pdev->dev, "EHCI: BIOS handoff failed"
289 +						" (BIOS bug?) %08x\n", cap);
289 290 				pci_write_config_byte(pdev, offset + 2, 0);
290 291 			}
291 292 
···
303 306 			cap = 0;
304 307 			/* FALLTHROUGH */
305 308 		default:
306 -			printk(KERN_WARNING "%s %s: unrecognized "
307 -					"capability %02x\n",
308 -					pdev->dev.bus_id, "EHCI",
309 -					cap & 0xff);
309 +			dev_warn(&pdev->dev, "EHCI: unrecognized capability "
310 +					"%02x\n", cap & 0xff);
310 311 			break;
311 312 		}
312 313 		offset = (cap >> 8) & 0xff;
313 314 	}
314 315 	if (!count)
315 -		printk(KERN_DEBUG "%s %s: capability loop?\n",
316 -				pdev->dev.bus_id, "EHCI");
316 +		dev_printk(KERN_DEBUG, &pdev->dev, "EHCI: capability loop?\n");
317 317 
318 318 	/*
319 319 	 * halt EHCI & disable its interrupts in any case
+44
include/linux/aspm.h
···
 1 +/*
 2 + * aspm.h
 3 + *
 4 + * PCI Express ASPM defines and function prototypes
 5 + *
 6 + * Copyright (C) 2007 Intel Corp.
 7 + *	Zhang Yanmin (yanmin.zhang@intel.com)
 8 + *	Shaohua Li (shaohua.li@intel.com)
 9 + *
10 + * For more information, please consult the following manuals (look at
11 + * http://www.pcisig.com/ for how to get them):
12 + *
13 + * PCI Express Specification
14 + */
15 +
16 +#ifndef LINUX_ASPM_H
17 +#define LINUX_ASPM_H
18 +
19 +#include <linux/pci.h>
20 +
21 +#define PCIE_LINK_STATE_L0S	1
22 +#define PCIE_LINK_STATE_L1	2
23 +#define PCIE_LINK_STATE_CLKPM	4
24 +
25 +#ifdef CONFIG_PCIEASPM
26 +extern void pcie_aspm_init_link_state(struct pci_dev *pdev);
27 +extern void pcie_aspm_exit_link_state(struct pci_dev *pdev);
28 +extern void pcie_aspm_pm_state_change(struct pci_dev *pdev);
29 +extern void pci_disable_link_state(struct pci_dev *pdev, int state);
30 +#else
31 +#define pcie_aspm_init_link_state(pdev)		do {} while (0)
32 +#define pcie_aspm_exit_link_state(pdev)		do {} while (0)
33 +#define pcie_aspm_pm_state_change(pdev)		do {} while (0)
34 +#define pci_disable_link_state(pdev, state)	do {} while (0)
35 +#endif
36 +
37 +#ifdef CONFIG_PCIEASPM_DEBUG /* this depends on CONFIG_PCIEASPM */
38 +extern void pcie_aspm_create_sysfs_dev_files(struct pci_dev *pdev);
39 +extern void pcie_aspm_remove_sysfs_dev_files(struct pci_dev *pdev);
40 +#else
41 +#define pcie_aspm_create_sysfs_dev_files(pdev)	do {} while (0)
42 +#define pcie_aspm_remove_sysfs_dev_files(pdev)	do {} while (0)
43 +#endif
44 +#endif /* LINUX_ASPM_H */
+10 -1
include/linux/pci-acpi.h
···
48 48 
49 49 #ifdef CONFIG_ACPI
50 50 extern acpi_status pci_osc_control_set(acpi_handle handle, u32 flags);
51 -extern acpi_status pci_osc_support_set(u32 flags);
51 +extern acpi_status __pci_osc_support_set(u32 flags, const char *hid);
52 +static inline acpi_status pci_osc_support_set(u32 flags)
53 +{
54 +	return __pci_osc_support_set(flags, PCI_ROOT_HID_STRING);
55 +}
56 +static inline acpi_status pcie_osc_support_set(u32 flags)
57 +{
58 +	return __pci_osc_support_set(flags, PCI_EXPRESS_ROOT_HID_STRING);
59 +}
52 60 #else
53 61 #if !defined(AE_ERROR)
54 62 typedef u32 acpi_status;
···
65 57 static inline acpi_status pci_osc_control_set(acpi_handle handle, u32 flags)
66 58 {return AE_ERROR;}
67 59 static inline acpi_status pci_osc_support_set(u32 flags) {return AE_ERROR;}
60 +static inline acpi_status pcie_osc_support_set(u32 flags) {return AE_ERROR;}
68 61 
69 62 #endif	/* _PCI_ACPI_H_ */
+249 -118
include/linux/pci.h
··· 28 28 * 7:3 = slot 29 29 * 2:0 = function 30 30 */ 31 - #define PCI_DEVFN(slot,func) ((((slot) & 0x1f) << 3) | ((func) & 0x07)) 31 + #define PCI_DEVFN(slot, func) ((((slot) & 0x1f) << 3) | ((func) & 0x07)) 32 32 #define PCI_SLOT(devfn) (((devfn) >> 3) & 0x1f) 33 33 #define PCI_FUNC(devfn) ((devfn) & 0x07) 34 34 ··· 66 66 #define PCI_DMA_FROMDEVICE 2 67 67 #define PCI_DMA_NONE 3 68 68 69 - #define DEVICE_COUNT_COMPATIBLE 4 70 69 #define DEVICE_COUNT_RESOURCE 12 71 70 72 71 typedef int __bitwise pci_power_t; ··· 128 129 u32 data[0]; 129 130 }; 130 131 132 + struct pcie_link_state; 131 133 /* 132 134 * The pci_dev structure is used to describe PCI devices. 133 135 */ ··· 164 164 this is D0-D3, D0 being fully functional, 165 165 and D3 being off. */ 166 166 167 + #ifdef CONFIG_PCIEASPM 168 + struct pcie_link_state *link_state; /* ASPM link state. */ 169 + #endif 170 + 167 171 pci_channel_state_t error_state; /* current connectivity state */ 168 172 struct device dev; /* Generic device interface */ 169 - 170 - /* device is compatible with these IDs */ 171 - unsigned short vendor_compatible[DEVICE_COUNT_COMPATIBLE]; 172 - unsigned short device_compatible[DEVICE_COUNT_COMPATIBLE]; 173 173 174 174 int cfg_size; /* Size of configuration space */ 175 175 ··· 219 219 } 220 220 221 221 static inline struct pci_cap_saved_state *pci_find_saved_cap( 222 - struct pci_dev *pci_dev,char cap) 222 + struct pci_dev *pci_dev, char cap) 223 223 { 224 224 struct pci_cap_saved_state *tmp; 225 225 struct hlist_node *pos; ··· 278 278 unsigned short bridge_ctl; /* manage NO_ISA/FBB/et al behaviors */ 279 279 pci_bus_flags_t bus_flags; /* Inherited by child busses */ 280 280 struct device *bridge; 281 - struct class_device class_dev; 281 + struct device dev; 282 282 struct bin_attribute *legacy_io; /* legacy I/O for this bus */ 283 283 struct bin_attribute *legacy_mem; /* legacy mem */ 284 284 }; 285 285 286 286 #define pci_bus_b(n) list_entry(n, struct pci_bus, node) 287 - #define 
to_pci_bus(n) container_of(n, struct pci_bus, class_dev) 287 + #define to_pci_bus(n) container_of(n, struct pci_bus, dev) 288 288 289 289 /* 290 290 * Error values that may be returned by PCI functions. ··· 314 314 extern struct pci_raw_ops *raw_pci_ops; 315 315 316 316 struct pci_bus_region { 317 - unsigned long start; 318 - unsigned long end; 317 + resource_size_t start; 318 + resource_size_t end; 319 319 }; 320 320 321 321 struct pci_dynids { ··· 351 351 }; 352 352 353 353 /* PCI bus error event callbacks */ 354 - struct pci_error_handlers 355 - { 354 + struct pci_error_handlers { 356 355 /* PCI bus error detected on this device */ 357 356 pci_ers_result_t (*error_detected)(struct pci_dev *dev, 358 - enum pci_channel_state error); 357 + enum pci_channel_state error); 359 358 360 359 /* MMIO has been re-enabled, but not DMA */ 361 360 pci_ers_result_t (*mmio_enabled)(struct pci_dev *dev); ··· 389 390 struct pci_dynids dynids; 390 391 }; 391 392 392 - #define to_pci_driver(drv) container_of(drv,struct pci_driver, driver) 393 + #define to_pci_driver(drv) container_of(drv, struct pci_driver, driver) 393 394 394 395 /** 395 396 * PCI_DEVICE - macro used to describe a specific pci device ··· 447 448 448 449 void pcibios_fixup_bus(struct pci_bus *); 449 450 int __must_check pcibios_enable_device(struct pci_dev *, int mask); 450 - char *pcibios_setup (char *str); 451 + char *pcibios_setup(char *str); 451 452 452 453 /* Used only when drivers/pci/setup.c is used */ 453 454 void pcibios_align_resource(void *, struct resource *, resource_size_t, ··· 458 459 459 460 extern struct pci_bus *pci_find_bus(int domain, int busnr); 460 461 void pci_bus_add_devices(struct pci_bus *bus); 461 - struct pci_bus *pci_scan_bus_parented(struct device *parent, int bus, struct pci_ops *ops, void *sysdata); 462 - static inline struct pci_bus *pci_scan_bus(int bus, struct pci_ops *ops, void *sysdata) 462 + struct pci_bus *pci_scan_bus_parented(struct device *parent, int bus, 463 + struct 
pci_ops *ops, void *sysdata); 464 + static inline struct pci_bus *pci_scan_bus(int bus, struct pci_ops *ops, 465 + void *sysdata) 463 466 { 464 467 struct pci_bus *root_bus; 465 468 root_bus = pci_scan_bus_parented(NULL, bus, ops, sysdata); ··· 469 468 pci_bus_add_devices(root_bus); 470 469 return root_bus; 471 470 } 472 - struct pci_bus *pci_create_bus(struct device *parent, int bus, struct pci_ops *ops, void *sysdata); 473 - struct pci_bus * pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev, int busnr); 471 + struct pci_bus *pci_create_bus(struct device *parent, int bus, 472 + struct pci_ops *ops, void *sysdata); 473 + struct pci_bus *pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev, 474 + int busnr); 474 475 int pci_scan_slot(struct pci_bus *bus, int devfn); 475 - struct pci_dev * pci_scan_single_device(struct pci_bus *bus, int devfn); 476 + struct pci_dev *pci_scan_single_device(struct pci_bus *bus, int devfn); 476 477 void pci_device_add(struct pci_dev *dev, struct pci_bus *bus); 477 478 unsigned int pci_scan_child_bus(struct pci_bus *bus); 478 479 int __must_check pci_bus_add_device(struct pci_dev *dev); 479 480 void pci_read_bridge_bases(struct pci_bus *child); 480 - struct resource *pci_find_parent_resource(const struct pci_dev *dev, struct resource *res); 481 + struct resource *pci_find_parent_resource(const struct pci_dev *dev, 482 + struct resource *res); 481 483 int pci_get_interrupt_pin(struct pci_dev *dev, struct pci_dev **bridge); 482 484 extern struct pci_dev *pci_dev_get(struct pci_dev *dev); 483 485 extern void pci_dev_put(struct pci_dev *dev); ··· 493 489 /* Generic PCI functions exported to card drivers */ 494 490 495 491 #ifdef CONFIG_PCI_LEGACY 496 - struct pci_dev __deprecated *pci_find_device (unsigned int vendor, unsigned int device, const struct pci_dev *from); 497 - struct pci_dev __deprecated *pci_find_slot (unsigned int bus, unsigned int devfn); 492 + struct pci_dev __deprecated *pci_find_device(unsigned int vendor, 
493 + unsigned int device, 494 + const struct pci_dev *from); 495 + struct pci_dev __deprecated *pci_find_slot(unsigned int bus, 496 + unsigned int devfn); 498 497 #endif /* CONFIG_PCI_LEGACY */ 499 498 500 - int pci_find_capability (struct pci_dev *dev, int cap); 501 - int pci_find_next_capability (struct pci_dev *dev, u8 pos, int cap); 502 - int pci_find_ext_capability (struct pci_dev *dev, int cap); 503 - int pci_find_ht_capability (struct pci_dev *dev, int ht_cap); 504 - int pci_find_next_ht_capability (struct pci_dev *dev, int pos, int ht_cap); 499 + int pci_find_capability(struct pci_dev *dev, int cap); 500 + int pci_find_next_capability(struct pci_dev *dev, u8 pos, int cap); 501 + int pci_find_ext_capability(struct pci_dev *dev, int cap); 502 + int pci_find_ht_capability(struct pci_dev *dev, int ht_cap); 503 + int pci_find_next_ht_capability(struct pci_dev *dev, int pos, int ht_cap); 504 + void pcie_wait_pending_transaction(struct pci_dev *dev); 505 505 struct pci_bus *pci_find_next_bus(const struct pci_bus *from); 506 506 507 507 struct pci_dev *pci_get_device(unsigned int vendor, unsigned int device, ··· 513 505 struct pci_dev *pci_get_device_reverse(unsigned int vendor, unsigned int device, 514 506 struct pci_dev *from); 515 507 516 - struct pci_dev *pci_get_subsys (unsigned int vendor, unsigned int device, 508 + struct pci_dev *pci_get_subsys(unsigned int vendor, unsigned int device, 517 509 unsigned int ss_vendor, unsigned int ss_device, 518 510 struct pci_dev *from); 519 - struct pci_dev *pci_get_slot (struct pci_bus *bus, unsigned int devfn); 520 - struct pci_dev *pci_get_bus_and_slot (unsigned int bus, unsigned int devfn); 521 - struct pci_dev *pci_get_class (unsigned int class, struct pci_dev *from); 511 + struct pci_dev *pci_get_slot(struct pci_bus *bus, unsigned int devfn); 512 + struct pci_dev *pci_get_bus_and_slot(unsigned int bus, unsigned int devfn); 513 + struct pci_dev *pci_get_class(unsigned int class, struct pci_dev *from); 522 514 int 
pci_dev_present(const struct pci_device_id *ids); 523 515 const struct pci_device_id *pci_find_present(const struct pci_device_id *ids); 524 516 525 - int pci_bus_read_config_byte (struct pci_bus *bus, unsigned int devfn, int where, u8 *val); 526 - int pci_bus_read_config_word (struct pci_bus *bus, unsigned int devfn, int where, u16 *val); 527 - int pci_bus_read_config_dword (struct pci_bus *bus, unsigned int devfn, int where, u32 *val); 528 - int pci_bus_write_config_byte (struct pci_bus *bus, unsigned int devfn, int where, u8 val); 529 - int pci_bus_write_config_word (struct pci_bus *bus, unsigned int devfn, int where, u16 val); 530 - int pci_bus_write_config_dword (struct pci_bus *bus, unsigned int devfn, int where, u32 val); 517 + int pci_bus_read_config_byte(struct pci_bus *bus, unsigned int devfn, 518 + int where, u8 *val); 519 + int pci_bus_read_config_word(struct pci_bus *bus, unsigned int devfn, 520 + int where, u16 *val); 521 + int pci_bus_read_config_dword(struct pci_bus *bus, unsigned int devfn, 522 + int where, u32 *val); 523 + int pci_bus_write_config_byte(struct pci_bus *bus, unsigned int devfn, 524 + int where, u8 val); 525 + int pci_bus_write_config_word(struct pci_bus *bus, unsigned int devfn, 526 + int where, u16 val); 527 + int pci_bus_write_config_dword(struct pci_bus *bus, unsigned int devfn, 528 + int where, u32 val); 531 529 532 530 static inline int pci_read_config_byte(struct pci_dev *dev, int where, u8 *val) 533 531 { 534 - return pci_bus_read_config_byte (dev->bus, dev->devfn, where, val); 532 + return pci_bus_read_config_byte(dev->bus, dev->devfn, where, val); 535 533 } 536 534 static inline int pci_read_config_word(struct pci_dev *dev, int where, u16 *val) 537 535 { 538 - return pci_bus_read_config_word (dev->bus, dev->devfn, where, val); 536 + return pci_bus_read_config_word(dev->bus, dev->devfn, where, val); 539 537 } 540 - static inline int pci_read_config_dword(struct pci_dev *dev, int where, u32 *val) 538 + static inline int 
pci_read_config_dword(struct pci_dev *dev, int where, 539 + u32 *val) 541 540 { 542 - return pci_bus_read_config_dword (dev->bus, dev->devfn, where, val); 541 + return pci_bus_read_config_dword(dev->bus, dev->devfn, where, val); 543 542 } 544 543 static inline int pci_write_config_byte(struct pci_dev *dev, int where, u8 val) 545 544 { 546 - return pci_bus_write_config_byte (dev->bus, dev->devfn, where, val); 545 + return pci_bus_write_config_byte(dev->bus, dev->devfn, where, val); 547 546 } 548 547 static inline int pci_write_config_word(struct pci_dev *dev, int where, u16 val) 549 548 { 550 - return pci_bus_write_config_word (dev->bus, dev->devfn, where, val); 549 + return pci_bus_write_config_word(dev->bus, dev->devfn, where, val); 551 550 } 552 - static inline int pci_write_config_dword(struct pci_dev *dev, int where, u32 val) 551 + static inline int pci_write_config_dword(struct pci_dev *dev, int where, 552 + u32 val) 553 553 { 554 - return pci_bus_write_config_dword (dev->bus, dev->devfn, where, val); 554 + return pci_bus_write_config_dword(dev->bus, dev->devfn, where, val); 555 555 } 556 556 557 557 int __must_check pci_enable_device(struct pci_dev *dev); 558 - int __must_check pci_enable_device_bars(struct pci_dev *dev, int mask); 558 + int __must_check pci_enable_device_io(struct pci_dev *dev); 559 + int __must_check pci_enable_device_mem(struct pci_dev *dev); 559 560 int __must_check pci_reenable_device(struct pci_dev *); 560 561 int __must_check pcim_enable_device(struct pci_dev *pdev); 561 562 void pcim_pin_device(struct pci_dev *pdev); ··· 593 576 void pci_update_resource(struct pci_dev *dev, struct resource *res, int resno); 594 577 int __must_check pci_assign_resource(struct pci_dev *dev, int i); 595 578 int __must_check pci_assign_resource_fixed(struct pci_dev *dev, int i); 596 - void pci_restore_bars(struct pci_dev *dev); 597 579 int pci_select_bars(struct pci_dev *dev, unsigned long flags); 598 580 599 581 /* ROM control related routines */ 600 582 
void __iomem __must_check *pci_map_rom(struct pci_dev *pdev, size_t *size); 601 - void __iomem __must_check *pci_map_rom_copy(struct pci_dev *pdev, size_t *size); 602 583 void pci_unmap_rom(struct pci_dev *pdev, void __iomem *rom); 603 - void pci_remove_rom(struct pci_dev *pdev); 604 584 size_t pci_get_rom_size(void __iomem *rom, size_t size); 605 585 606 586 /* Power management related routines */ ··· 608 594 int pci_enable_wake(struct pci_dev *dev, pci_power_t state, int enable); 609 595 610 596 /* Functions for PCI Hotplug drivers to use */ 611 - int pci_bus_find_capability (struct pci_bus *bus, unsigned int devfn, int cap); 597 + int pci_bus_find_capability(struct pci_bus *bus, unsigned int devfn, int cap); 612 598 613 599 /* Helper functions for low-level code (drivers/pci/setup-[bus,res].c) */ 614 600 void pci_bus_assign_resources(struct pci_bus *bus); ··· 645 631 return __pci_register_driver(driver, THIS_MODULE, KBUILD_MODNAME); 646 632 } 647 633 648 - void pci_unregister_driver(struct pci_driver *); 649 - void pci_remove_behind_bridge(struct pci_dev *); 650 - struct pci_driver *pci_dev_driver(const struct pci_dev *); 651 - const struct pci_device_id *pci_match_id(const struct pci_device_id *ids, struct pci_dev *dev); 652 - int pci_scan_bridge(struct pci_bus *bus, struct pci_dev * dev, int max, int pass); 634 + void pci_unregister_driver(struct pci_driver *dev); 635 + void pci_remove_behind_bridge(struct pci_dev *dev); 636 + struct pci_driver *pci_dev_driver(const struct pci_dev *dev); 637 + const struct pci_device_id *pci_match_id(const struct pci_device_id *ids, 638 + struct pci_dev *dev); 639 + int pci_scan_bridge(struct pci_bus *bus, struct pci_dev *dev, int max, 640 + int pass); 653 641 654 642 void pci_walk_bus(struct pci_bus *top, void (*cb)(struct pci_dev *, void *), 655 643 void *userdata); 656 644 int pci_cfg_space_size(struct pci_dev *dev); 657 - unsigned char pci_bus_max_busnr(struct pci_bus* bus); 645 + unsigned char pci_bus_max_busnr(struct 
pci_bus *bus); 658 646 659 647 /* kmem_cache style wrapper around pci_alloc_consistent() */ 660 648 ··· 685 669 686 670 687 671 #ifndef CONFIG_PCI_MSI 688 - static inline int pci_enable_msi(struct pci_dev *dev) {return -1;} 689 - static inline void pci_disable_msi(struct pci_dev *dev) {} 690 - static inline int pci_enable_msix(struct pci_dev* dev, 691 - struct msix_entry *entries, int nvec) {return -1;} 692 - static inline void pci_disable_msix(struct pci_dev *dev) {} 693 - static inline void msi_remove_pci_irq_vectors(struct pci_dev *dev) {} 672 + static inline int pci_enable_msi(struct pci_dev *dev) 673 + { 674 + return -1; 675 + } 676 + 677 + static inline void pci_disable_msi(struct pci_dev *dev) 678 + { } 679 + 680 + static inline int pci_enable_msix(struct pci_dev *dev, 681 + struct msix_entry *entries, int nvec) 682 + { 683 + return -1; 684 + } 685 + 686 + static inline void pci_disable_msix(struct pci_dev *dev) 687 + { } 688 + 689 + static inline void msi_remove_pci_irq_vectors(struct pci_dev *dev) 690 + { } 691 + 692 + static inline void pci_restore_msi_state(struct pci_dev *dev) 693 + { } 694 694 #else 695 695 extern int pci_enable_msi(struct pci_dev *dev); 696 696 extern void pci_disable_msi(struct pci_dev *dev); 697 - extern int pci_enable_msix(struct pci_dev* dev, 697 + extern int pci_enable_msix(struct pci_dev *dev, 698 698 struct msix_entry *entries, int nvec); 699 699 extern void pci_disable_msix(struct pci_dev *dev); 700 700 extern void msi_remove_pci_irq_vectors(struct pci_dev *dev); 701 + extern void pci_restore_msi_state(struct pci_dev *dev); 701 702 #endif 702 703 703 704 #ifdef CONFIG_HT_IRQ ··· 735 702 extern int pci_domains_supported; 736 703 #else 737 704 enum { pci_domains_supported = 0 }; 738 - static inline int pci_domain_nr(struct pci_bus *bus) { return 0; } 705 + static inline int pci_domain_nr(struct pci_bus *bus) 706 + { 707 + return 0; 708 + } 709 + 739 710 static inline int pci_proc_domain(struct pci_bus *bus) 740 711 { 741 712 
return 0; ··· 753 716 * these as simple inline functions to avoid hair in drivers. 754 717 */ 755 718 756 - #define _PCI_NOP(o,s,t) \ 757 - static inline int pci_##o##_config_##s (struct pci_dev *dev, int where, t val) \ 719 + #define _PCI_NOP(o, s, t) \ 720 + static inline int pci_##o##_config_##s(struct pci_dev *dev, \ 721 + int where, t val) \ 758 722 { return PCIBIOS_FUNC_NOT_SUPPORTED; } 759 - #define _PCI_NOP_ALL(o,x) _PCI_NOP(o,byte,u8 x) \ 760 - _PCI_NOP(o,word,u16 x) \ 761 - _PCI_NOP(o,dword,u32 x) 723 + 724 + #define _PCI_NOP_ALL(o, x) _PCI_NOP(o, byte, u8 x) \ 725 + _PCI_NOP(o, word, u16 x) \ 726 + _PCI_NOP(o, dword, u32 x) 762 727 _PCI_NOP_ALL(read, *) 763 728 _PCI_NOP_ALL(write,) 764 729 765 - static inline struct pci_dev *pci_find_device(unsigned int vendor, unsigned int device, const struct pci_dev *from) 766 - { return NULL; } 730 + static inline struct pci_dev *pci_find_device(unsigned int vendor, 731 + unsigned int device, 732 + const struct pci_dev *from) 733 + { 734 + return NULL; 735 + } 767 736 768 - static inline struct pci_dev *pci_find_slot(unsigned int bus, unsigned int devfn) 769 - { return NULL; } 737 + static inline struct pci_dev *pci_find_slot(unsigned int bus, 738 + unsigned int devfn) 739 + { 740 + return NULL; 741 + } 770 742 771 743 static inline struct pci_dev *pci_get_device(unsigned int vendor, 772 - unsigned int device, struct pci_dev *from) 773 - { return NULL; } 744 + unsigned int device, 745 + struct pci_dev *from) 746 + { 747 + return NULL; 748 + } 774 749 775 750 static inline struct pci_dev *pci_get_device_reverse(unsigned int vendor, 776 - unsigned int device, struct pci_dev *from) 777 - { return NULL; } 751 + unsigned int device, 752 + struct pci_dev *from) 753 + { 754 + return NULL; 755 + } 778 756 779 - static inline struct pci_dev *pci_get_subsys (unsigned int vendor, unsigned int device, 780 - unsigned int ss_vendor, unsigned int ss_device, struct pci_dev *from) 781 - { return NULL; } 757 + static inline struct 
pci_dev *pci_get_subsys(unsigned int vendor, 758 + unsigned int device, 759 + unsigned int ss_vendor, 760 + unsigned int ss_device, 761 + struct pci_dev *from) 762 + { 763 + return NULL; 764 + } 782 765 783 - static inline struct pci_dev *pci_get_class(unsigned int class, struct pci_dev *from) 784 - { return NULL; } 766 + static inline struct pci_dev *pci_get_class(unsigned int class, 767 + struct pci_dev *from) 768 + { 769 + return NULL; 770 + } 785 771 786 772 #define pci_dev_present(ids) (0) 787 773 #define no_pci_devices() (1) 788 774 #define pci_find_present(ids) (NULL) 789 775 #define pci_dev_put(dev) do { } while (0) 790 776 791 - static inline void pci_set_master(struct pci_dev *dev) { } 792 - static inline int pci_enable_device(struct pci_dev *dev) { return -EIO; } 793 - static inline void pci_disable_device(struct pci_dev *dev) { } 794 - static inline int pci_set_dma_mask(struct pci_dev *dev, u64 mask) { return -EIO; } 795 - static inline int pci_assign_resource(struct pci_dev *dev, int i) { return -EBUSY;} 796 - static inline int __pci_register_driver(struct pci_driver *drv, struct module *owner) { return 0;} 797 - static inline int pci_register_driver(struct pci_driver *drv) { return 0;} 798 - static inline void pci_unregister_driver(struct pci_driver *drv) { } 799 - static inline int pci_find_capability (struct pci_dev *dev, int cap) {return 0; } 800 - static inline int pci_find_next_capability (struct pci_dev *dev, u8 post, int cap) { return 0; } 801 - static inline int pci_find_ext_capability (struct pci_dev *dev, int cap) {return 0; } 777 + static inline void pci_set_master(struct pci_dev *dev) 778 + { } 779 + 780 + static inline int pci_enable_device(struct pci_dev *dev) 781 + { 782 + return -EIO; 783 + } 784 + 785 + static inline void pci_disable_device(struct pci_dev *dev) 786 + { } 787 + 788 + static inline int pci_set_dma_mask(struct pci_dev *dev, u64 mask) 789 + { 790 + return -EIO; 791 + } 792 + 793 + static inline int 
pci_assign_resource(struct pci_dev *dev, int i) 794 + { 795 + return -EBUSY; 796 + } 797 + 798 + static inline int __pci_register_driver(struct pci_driver *drv, 799 + struct module *owner) 800 + { 801 + return 0; 802 + } 803 + 804 + static inline int pci_register_driver(struct pci_driver *drv) 805 + { 806 + return 0; 807 + } 808 + 809 + static inline void pci_unregister_driver(struct pci_driver *drv) 810 + { } 811 + 812 + static inline int pci_find_capability(struct pci_dev *dev, int cap) 813 + { 814 + return 0; 815 + } 816 + 817 + static inline int pci_find_next_capability(struct pci_dev *dev, u8 post, 818 + int cap) 819 + { 820 + return 0; 821 + } 822 + 823 + static inline int pci_find_ext_capability(struct pci_dev *dev, int cap) 824 + { 825 + return 0; 826 + } 827 + 828 + static inline void pcie_wait_pending_transaction(struct pci_dev *dev) 829 + { } 802 830 803 831 /* Power management related routines */ 804 - static inline int pci_save_state(struct pci_dev *dev) { return 0; } 805 - static inline int pci_restore_state(struct pci_dev *dev) { return 0; } 806 - static inline int pci_set_power_state(struct pci_dev *dev, pci_power_t state) { return 0; } 807 - static inline pci_power_t pci_choose_state(struct pci_dev *dev, pm_message_t state) { return PCI_D0; } 808 - static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state, int enable) { return 0; } 832 + static inline int pci_save_state(struct pci_dev *dev) 833 + { 834 + return 0; 835 + } 809 836 810 - static inline int pci_request_regions(struct pci_dev *dev, const char *res_name) { return -EIO; } 811 - static inline void pci_release_regions(struct pci_dev *dev) { } 837 + static inline int pci_restore_state(struct pci_dev *dev) 838 + { 839 + return 0; 840 + } 841 + 842 + static inline int pci_set_power_state(struct pci_dev *dev, pci_power_t state) 843 + { 844 + return 0; 845 + } 846 + 847 + static inline pci_power_t pci_choose_state(struct pci_dev *dev, 848 + pm_message_t state) 849 + { 850 + return 
PCI_D0; 851 + } 852 + 853 + static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state, 854 + int enable) 855 + { 856 + return 0; 857 + } 858 + 859 + static inline int pci_request_regions(struct pci_dev *dev, const char *res_name) 860 + { 861 + return -EIO; 862 + } 863 + 864 + static inline void pci_release_regions(struct pci_dev *dev) 865 + { } 812 866 813 867 #define pci_dma_burst_advice(pdev, strat, strategy_parameter) do { } while (0) 814 868 815 - static inline void pci_block_user_cfg_access(struct pci_dev *dev) { } 816 - static inline void pci_unblock_user_cfg_access(struct pci_dev *dev) { } 869 + static inline void pci_block_user_cfg_access(struct pci_dev *dev) 870 + { } 871 + 872 + static inline void pci_unblock_user_cfg_access(struct pci_dev *dev) 873 + { } 817 874 818 875 static inline struct pci_bus *pci_find_next_bus(const struct pci_bus *from) 819 876 { return NULL; } ··· 928 797 929 798 /* these helpers provide future and backwards compatibility 930 799 * for accessing popular PCI BAR info */ 931 - #define pci_resource_start(dev,bar) ((dev)->resource[(bar)].start) 932 - #define pci_resource_end(dev,bar) ((dev)->resource[(bar)].end) 933 - #define pci_resource_flags(dev,bar) ((dev)->resource[(bar)].flags) 800 + #define pci_resource_start(dev, bar) ((dev)->resource[(bar)].start) 801 + #define pci_resource_end(dev, bar) ((dev)->resource[(bar)].end) 802 + #define pci_resource_flags(dev, bar) ((dev)->resource[(bar)].flags) 934 803 #define pci_resource_len(dev,bar) \ 935 - ((pci_resource_start((dev),(bar)) == 0 && \ 936 - pci_resource_end((dev),(bar)) == \ 937 - pci_resource_start((dev),(bar))) ? 0 : \ 938 - \ 939 - (pci_resource_end((dev),(bar)) - \ 940 - pci_resource_start((dev),(bar)) + 1)) 804 + ((pci_resource_start((dev), (bar)) == 0 && \ 805 + pci_resource_end((dev), (bar)) == \ 806 + pci_resource_start((dev), (bar))) ? 
0 : \ 807 + \ 808 + (pci_resource_end((dev), (bar)) - \ 809 + pci_resource_start((dev), (bar)) + 1)) 941 810 942 811 /* Similar to the helpers above, these manipulate per-pci_dev 943 812 * driver-specific data. They are really just a wrapper around 944 813 * the generic device structure functions of these calls. 945 814 */ 946 - static inline void *pci_get_drvdata (struct pci_dev *pdev) 815 + static inline void *pci_get_drvdata(struct pci_dev *pdev) 947 816 { 948 817 return dev_get_drvdata(&pdev->dev); 949 818 } 950 819 951 - static inline void pci_set_drvdata (struct pci_dev *pdev, void *data) 820 + static inline void pci_set_drvdata(struct pci_dev *pdev, void *data) 952 821 { 953 822 dev_set_drvdata(&pdev->dev, data); 954 823 } ··· 967 836 */ 968 837 #ifndef HAVE_ARCH_PCI_RESOURCE_TO_USER 969 838 static inline void pci_resource_to_user(const struct pci_dev *dev, int bar, 970 - const struct resource *rsrc, resource_size_t *start, 839 + const struct resource *rsrc, resource_size_t *start, 971 840 resource_size_t *end) 972 841 { 973 842 *start = rsrc->start; ··· 1019 888 1020 889 void pci_fixup_device(enum pci_fixup_pass pass, struct pci_dev *dev); 1021 890 1022 - void __iomem * pcim_iomap(struct pci_dev *pdev, int bar, unsigned long maxlen); 891 + void __iomem *pcim_iomap(struct pci_dev *pdev, int bar, unsigned long maxlen); 1023 892 void pcim_iounmap(struct pci_dev *pdev, void __iomem *addr); 1024 - void __iomem * const * pcim_iomap_table(struct pci_dev *pdev); 893 + void __iomem * const *pcim_iomap_table(struct pci_dev *pdev); 1025 894 int pcim_iomap_regions(struct pci_dev *pdev, u16 mask, const char *name); 1026 895 void pcim_iounmap_regions(struct pci_dev *pdev, u16 mask); 1027 896
+8
include/linux/pci_regs.h
···
395 395 #define  PCI_EXP_DEVSTA_AUXPD	0x10	/* AUX Power Detected */
396 396 #define  PCI_EXP_DEVSTA_TRPND	0x20	/* Transactions Pending */
397 397 #define PCI_EXP_LNKCAP		12	/* Link Capabilities */
398 +#define  PCI_EXP_LNKCAP_ASPMS	0xc00	/* ASPM Support */
399 +#define  PCI_EXP_LNKCAP_L0SEL	0x7000	/* L0s Exit Latency */
400 +#define  PCI_EXP_LNKCAP_L1EL	0x38000	/* L1 Exit Latency */
401 +#define  PCI_EXP_LNKCAP_CLKPM	0x40000	/* L1 Clock Power Management */
398 402 #define PCI_EXP_LNKCTL		16	/* Link Control */
403 +#define  PCI_EXP_LNKCTL_RL	0x20	/* Retrain Link */
404 +#define  PCI_EXP_LNKCTL_CCC	0x40	/* Common Clock Configuration */
399 405 #define PCI_EXP_LNKCTL_CLKREQ_EN 0x100	/* Enable clkreq */
400 406 #define PCI_EXP_LNKSTA		18	/* Link Status */
407 +#define  PCI_EXP_LNKSTA_LT	0x800	/* Link Training */
408 +#define  PCI_EXP_LNKSTA_SLC	0x1000	/* Slot Clock Configuration */
401 409 #define PCI_EXP_SLTCAP		20	/* Slot Capabilities */
402 410 #define PCI_EXP_SLTCTL		24	/* Slot Control */
403 411 #define PCI_EXP_SLTSTA		26	/* Slot Status */