Merge git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/pci-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/pci-2.6: (64 commits)
PCI: make pci_bus a struct device
PCI: fix codingstyle issues in include/linux/pci.h
PCI: fix codingstyle issues in drivers/pci/pci.h
PCI: PCIE ASPM support
PCI: Fix fakephp deadlock
PCI: modify SB700 SATA MSI quirk
PCI: Run ACPI _OSC method on root bridges only
PCI ACPI: AER driver should only register PCIe devices with _OSC
PCI ACPI: Added a function to register _OSC with only PCIe devices.
PCI: constify function pointer tables
PCI: Convert drivers/pci/proc.c to use unlocked_ioctl
pciehp: block new requests from the device before power off
pciehp: workaround against Bad DLLP during power off
pciehp: wait for 1000ms before LED operation after power off
PCI: Remove pci_enable_device_bars() from documentation
PCI: Remove pci_enable_device_bars()
PCI: Remove users of pci_enable_device_bars()
PCI: Add pci_enable_device_{io,mem} interfaces
PCI: avoid saving the same type of cap multiple times
PCI: correctly initialize a structure for pcie_save_pcix_state()
...

+2101 -897
+1 -36
Documentation/pci.txt
··· 274 o allocate an IRQ (if BIOS did not). 275 276 NOTE: pci_enable_device() can fail! Check the return value. 277 - NOTE2: Also see pci_enable_device_bars() below. Drivers can 278 - attempt to enable only a subset of BARs they need. 279 280 [ OS BUG: we don't check resource allocations before enabling those 281 resources. The sequence would make more sense if we called ··· 603 604 605 606 - 10. pci_enable_device_bars() and Legacy I/O Port space 607 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 608 - 609 - Large servers may not be able to provide I/O port resources to all PCI 610 - devices. I/O Port space is only 64KB on Intel Architecture[1] and is 611 - likely also fragmented since the I/O base register of PCI-to-PCI 612 - bridge will usually be aligned to a 4KB boundary[2]. On such systems, 613 - pci_enable_device() and pci_request_region() will fail when 614 - attempting to enable I/O Port regions that don't have I/O Port 615 - resources assigned. 616 - 617 - Fortunately, many PCI devices which request I/O Port resources also 618 - provide access to the same registers via MMIO BARs. These devices can 619 - be handled without using I/O port space and the drivers typically 620 - offer a CONFIG_ option to only use MMIO regions 621 - (e.g. CONFIG_TULIP_MMIO). PCI devices typically provide I/O port 622 - interface for legacy OSes and will work when I/O port resources are not 623 - assigned. The "PCI Local Bus Specification Revision 3.0" discusses 624 - this on p.44, "IMPLEMENTATION NOTE". 625 - 626 - If your PCI device driver doesn't need I/O port resources assigned to 627 - I/O Port BARs, you should use pci_enable_device_bars() instead of 628 - pci_enable_device() in order not to enable I/O port regions for the 629 - corresponding devices. In addition, you should use 630 - pci_request_selected_regions() and pci_release_selected_regions() 631 - instead of pci_request_regions()/pci_release_regions() in order not to 632 - request/release I/O port regions for the corresponding devices. 633 - 634 - [1] Some systems support 64KB I/O port space per PCI segment. 635 - [2] Some PCI-to-PCI bridges support optional 1KB aligned I/O base. 636 - 637 - 638 - 639 - 11. MMIO Space and "Write Posting" 640 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 641 642 Converting a driver from using I/O Port space to using MMIO space
··· 274 o allocate an IRQ (if BIOS did not). 275 276 NOTE: pci_enable_device() can fail! Check the return value. 277 278 [ OS BUG: we don't check resource allocations before enabling those 279 resources. The sequence would make more sense if we called ··· 605 606 607 608 + 10. MMIO Space and "Write Posting" 609 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 610 611 Converting a driver from using I/O Port space to using MMIO space
-5
arch/alpha/Kconfig
··· 318 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 319 VESA. If you have PCI, say Y, otherwise N. 320 321 - The PCI-HOWTO, available from 322 - <http://www.tldp.org/docs.html#howto>, contains valuable 323 - information about which PCI hardware does work under Linux and which 324 - doesn't. 325 - 326 config PCI_DOMAINS 327 bool 328 default y
··· 318 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 319 VESA. If you have PCI, say Y, otherwise N. 320 321 config PCI_DOMAINS 322 bool 323 default y
-5
arch/arm/Kconfig
··· 577 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 578 VESA. If you have PCI, say Y, otherwise N. 579 580 - The PCI-HOWTO, available from 581 - <http://www.tldp.org/docs.html#howto>, contains valuable 582 - information about which PCI hardware does work under Linux and which 583 - doesn't. 584 - 585 config PCI_SYSCALL 586 def_bool PCI 587
··· 577 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 578 VESA. If you have PCI, say Y, otherwise N. 579 580 config PCI_SYSCALL 581 def_bool PCI 582
-5
arch/frv/Kconfig
··· 322 onboard. If you have one of these boards and you wish to use the PCI 323 facilities, say Y here. 324 325 - The PCI-HOWTO, available from 326 - <http://www.tldp.org/docs.html#howto>, contains valuable 327 - information about which PCI hardware does work under Linux and which 328 - doesn't. 329 - 330 config RESERVE_DMA_COHERENT 331 bool "Reserve DMA coherent memory" 332 depends on PCI && !MMU
··· 322 onboard. If you have one of these boards and you wish to use the PCI 323 facilities, say Y here. 324 325 config RESERVE_DMA_COHERENT 326 bool "Reserve DMA coherent memory" 327 depends on PCI && !MMU
-5
arch/m32r/Kconfig
··· 359 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 360 VESA. If you have PCI, say Y, otherwise N. 361 362 - The PCI-HOWTO, available from 363 - <http://www.linuxdoc.org/docs.html#howto>, contains valuable 364 - information about which PCI hardware does work under Linux and which 365 - doesn't. 366 - 367 choice 368 prompt "PCI access mode" 369 depends on PCI
··· 359 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 360 VESA. If you have PCI, say Y, otherwise N. 361 362 choice 363 prompt "PCI access mode" 364 depends on PCI
-5
arch/m68k/Kconfig
··· 145 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 146 VESA. If you have PCI, say Y, otherwise N. 147 148 - The PCI-HOWTO, available from 149 - <http://www.tldp.org/docs.html#howto>, contains valuable 150 - information about which PCI hardware does work under Linux and which 151 - doesn't. 152 - 153 config MAC 154 bool "Macintosh support" 155 depends on !MMU_SUN3
··· 145 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 146 VESA. If you have PCI, say Y, otherwise N. 147 148 config MAC 149 bool "Macintosh support" 150 depends on !MMU_SUN3
-5
arch/mips/Kconfig
··· 1961 your box. Other bus systems are ISA, EISA, or VESA. If you have PCI, 1962 say Y, otherwise N. 1963 1964 - The PCI-HOWTO, available from 1965 - <http://www.tldp.org/docs.html#howto>, contains valuable 1966 - information about which PCI hardware does work under Linux and which 1967 - doesn't. 1968 - 1969 config PCI_DOMAINS 1970 bool 1971
··· 1961 your box. Other bus systems are ISA, EISA, or VESA. If you have PCI, 1962 say Y, otherwise N. 1963 1964 config PCI_DOMAINS 1965 bool 1966
-5
arch/sh/drivers/pci/Kconfig
··· 6 bus system, i.e. the way the CPU talks to the other stuff inside 7 your box. If you have PCI, say Y, otherwise N. 8 9 - The PCI-HOWTO, available from 10 - <http://www.tldp.org/docs.html#howto>, contains valuable 11 - information about which PCI hardware does work under Linux and which 12 - doesn't. 13 - 14 config SH_PCIDMA_NONCOHERENT 15 bool "Cache and PCI noncoherent" 16 depends on PCI
··· 6 bus system, i.e. the way the CPU talks to the other stuff inside 7 your box. If you have PCI, say Y, otherwise N. 8 9 config SH_PCIDMA_NONCOHERENT 10 bool "Cache and PCI noncoherent" 11 depends on PCI
-5
arch/sparc64/Kconfig
··· 351 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 352 VESA. If you have PCI, say Y, otherwise N. 353 354 - The PCI-HOWTO, available from 355 - <http://www.tldp.org/docs.html#howto>, contains valuable 356 - information about which PCI hardware does work under Linux and which 357 - doesn't. 358 - 359 config PCI_DOMAINS 360 def_bool PCI 361
··· 351 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 352 VESA. If you have PCI, say Y, otherwise N. 353 354 config PCI_DOMAINS 355 def_bool PCI 356
-5
arch/x86/Kconfig
··· 1369 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 1370 VESA. If you have PCI, say Y, otherwise N. 1371 1372 - The PCI-HOWTO, available from 1373 - <http://www.tldp.org/docs.html#howto>, contains valuable 1374 - information about which PCI hardware does work under Linux and which 1375 - doesn't. 1376 - 1377 choice 1378 prompt "PCI access mode" 1379 depends on X86_32 && PCI && !X86_VISWS
··· 1369 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 1370 VESA. If you have PCI, say Y, otherwise N. 1371 1372 choice 1373 prompt "PCI access mode" 1374 depends on X86_32 && PCI && !X86_VISWS
+23 -20
arch/x86/kernel/quirks.c
··· 30 raw_pci_ops->read(0, 0, 0x40, 0x4c, 2, &word); 31 32 if (!(word & (1 << 13))) { 33 - printk(KERN_INFO "Intel E7520/7320/7525 detected. " 34 - "Disabling irq balancing and affinity\n"); 35 #ifdef CONFIG_IRQBALANCE 36 irqbalance_disable(""); 37 #endif ··· 104 pci_read_config_dword(dev, 0xF0, &rcba); 105 rcba &= 0xFFFFC000; 106 if (rcba == 0) { 107 - printk(KERN_DEBUG "RCBA disabled. Cannot force enable HPET\n"); 108 return; 109 } 110 111 /* use bits 31:14, 16 kB aligned */ 112 rcba_base = ioremap_nocache(rcba, 0x4000); 113 if (rcba_base == NULL) { 114 - printk(KERN_DEBUG "ioremap failed. Cannot force enable HPET\n"); 115 return; 116 } 117 ··· 124 /* HPET is enabled in HPTC. Just not reported by BIOS */ 125 val = val & 0x3; 126 force_hpet_address = 0xFED00000 | (val << 12); 127 - printk(KERN_DEBUG "Force enabled HPET at base address 0x%lx\n", 128 - force_hpet_address); 129 iounmap(rcba_base); 130 return; 131 } ··· 144 if (err) { 145 force_hpet_address = 0; 146 iounmap(rcba_base); 147 - printk(KERN_DEBUG "Failed to force enable HPET\n"); 148 } else { 149 force_hpet_resume_type = ICH_FORCE_HPET_RESUME; 150 - printk(KERN_DEBUG "Force enabled HPET at base address 0x%lx\n", 151 - force_hpet_address); 152 } 153 } 154 ··· 211 if (val & 0x4) { 212 val &= 0x3; 213 force_hpet_address = 0xFED00000 | (val << 12); 214 - printk(KERN_DEBUG "HPET at base address 0x%lx\n", 215 - force_hpet_address); 216 return; 217 } 218 ··· 232 /* HPET is enabled in HPTC. Just not reported by BIOS */ 233 val &= 0x3; 234 force_hpet_address = 0xFED00000 | (val << 12); 235 - printk(KERN_DEBUG "Force enabled HPET at base address 0x%lx\n", 236 - force_hpet_address); 237 cached_dev = dev; 238 force_hpet_resume_type = OLD_ICH_FORCE_HPET_RESUME; 239 return; 240 } 241 242 - printk(KERN_DEBUG "Failed to force enable HPET\n"); 243 } 244 245 /* ··· 297 */ 298 if (val & 0x80) { 299 force_hpet_address = (val & ~0x3ff); 300 - printk(KERN_DEBUG "HPET at base address 0x%lx\n", 301 - force_hpet_address); 302 return; 303 } 304 ··· 312 pci_read_config_dword(dev, 0x68, &val); 313 if (val & 0x80) { 314 force_hpet_address = (val & ~0x3ff); 315 - printk(KERN_DEBUG "Force enabled HPET at base address 0x%lx\n", 316 - force_hpet_address); 317 cached_dev = dev; 318 force_hpet_resume_type = VT8237_FORCE_HPET_RESUME; 319 return; 320 } 321 322 - printk(KERN_DEBUG "Failed to force enable HPET\n"); 323 } 324 325 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8235, ··· 347 pci_read_config_dword(dev, 0x44, &val); 348 force_hpet_address = val & 0xfffffffe; 349 force_hpet_resume_type = NVIDIA_FORCE_HPET_RESUME; 350 - printk(KERN_DEBUG "Force enabled HPET at base address 0x%lx\n", 351 force_hpet_address); 352 cached_dev = dev; 353 return;
··· 30 raw_pci_ops->read(0, 0, 0x40, 0x4c, 2, &word); 31 32 if (!(word & (1 << 13))) { 33 + dev_info(&dev->dev, "Intel E7520/7320/7525 detected; " 34 + "disabling irq balancing and affinity\n"); 35 #ifdef CONFIG_IRQBALANCE 36 irqbalance_disable(""); 37 #endif ··· 104 pci_read_config_dword(dev, 0xF0, &rcba); 105 rcba &= 0xFFFFC000; 106 if (rcba == 0) { 107 + dev_printk(KERN_DEBUG, &dev->dev, "RCBA disabled; " 108 + "cannot force enable HPET\n"); 109 return; 110 } 111 112 /* use bits 31:14, 16 kB aligned */ 113 rcba_base = ioremap_nocache(rcba, 0x4000); 114 if (rcba_base == NULL) { 115 + dev_printk(KERN_DEBUG, &dev->dev, "ioremap failed; " 116 + "cannot force enable HPET\n"); 117 return; 118 } 119 ··· 122 /* HPET is enabled in HPTC. Just not reported by BIOS */ 123 val = val & 0x3; 124 force_hpet_address = 0xFED00000 | (val << 12); 125 + dev_printk(KERN_DEBUG, &dev->dev, "Force enabled HPET at " 126 + "0x%lx\n", force_hpet_address); 127 iounmap(rcba_base); 128 return; 129 } ··· 142 if (err) { 143 force_hpet_address = 0; 144 iounmap(rcba_base); 145 + dev_printk(KERN_DEBUG, &dev->dev, 146 + "Failed to force enable HPET\n"); 147 } else { 148 force_hpet_resume_type = ICH_FORCE_HPET_RESUME; 149 + dev_printk(KERN_DEBUG, &dev->dev, "Force enabled HPET at " 150 + "0x%lx\n", force_hpet_address); 151 } 152 } 153 ··· 208 if (val & 0x4) { 209 val &= 0x3; 210 force_hpet_address = 0xFED00000 | (val << 12); 211 + dev_printk(KERN_DEBUG, &dev->dev, "HPET at 0x%lx\n", 212 + force_hpet_address); 213 return; 214 } 215 ··· 229 /* HPET is enabled in HPTC. Just not reported by BIOS */ 230 val &= 0x3; 231 force_hpet_address = 0xFED00000 | (val << 12); 232 + dev_printk(KERN_DEBUG, &dev->dev, "Force enabled HPET at " 233 + "0x%lx\n", force_hpet_address); 234 cached_dev = dev; 235 force_hpet_resume_type = OLD_ICH_FORCE_HPET_RESUME; 236 return; 237 } 238 239 + dev_printk(KERN_DEBUG, &dev->dev, "Failed to force enable HPET\n"); 240 } 241 242 /* ··· 294 */ 295 if (val & 0x80) { 296 force_hpet_address = (val & ~0x3ff); 297 + dev_printk(KERN_DEBUG, &dev->dev, "HPET at 0x%lx\n", 298 + force_hpet_address); 299 return; 300 } 301 ··· 309 pci_read_config_dword(dev, 0x68, &val); 310 if (val & 0x80) { 311 force_hpet_address = (val & ~0x3ff); 312 + dev_printk(KERN_DEBUG, &dev->dev, "Force enabled HPET at " 313 + "0x%lx\n", force_hpet_address); 314 cached_dev = dev; 315 force_hpet_resume_type = VT8237_FORCE_HPET_RESUME; 316 return; 317 } 318 319 + dev_printk(KERN_DEBUG, &dev->dev, "Failed to force enable HPET\n"); 320 } 321 322 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8235, ··· 344 pci_read_config_dword(dev, 0x44, &val); 345 force_hpet_address = val & 0xfffffffe; 346 force_hpet_resume_type = NVIDIA_FORCE_HPET_RESUME; 347 + dev_printk(KERN_DEBUG, &dev->dev, "Force enabled HPET at 0x%lx\n", 348 force_hpet_address); 349 cached_dev = dev; 350 return;
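The quirks.c hunk above replaces bare printk() calls with dev_info()/dev_printk(), which prefix each message with the driver name and device bus id, so the hand-rolled prefixes and the "%s"/pci_name() arguments can be dropped. A minimal userspace sketch of that prefixing (the helper and the example bus id are made up for illustration, not kernel API):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Userspace stand-in for dev_printk(): the kernel prefixes each line
 * with "<driver> <bus id>: ", e.g. "pci 0000:00:1f.0: ", so callers no
 * longer embed the device name in the format string themselves. */
static int format_dev_msg(char *buf, size_t len, const char *bus_id,
                          unsigned long hpet_base)
{
    int n = snprintf(buf, len, "pci %s: ", bus_id);

    n += snprintf(buf + n, len - n, "Force enabled HPET at 0x%lx\n",
                  hpet_base);
    return n;
}
```

The net effect on the log is the same text as before, but every line is now attributable to a specific PCI device.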
+11 -11
arch/x86/pci/fixup.c
··· 17 int pxb, reg; 18 u8 busno, suba, subb; 19 20 - printk(KERN_WARNING "PCI: Searching for i450NX host bridges on %s\n", pci_name(d)); 21 reg = 0xd0; 22 for(pxb = 0; pxb < 2; pxb++) { 23 pci_read_config_byte(d, reg++, &busno); ··· 41 */ 42 u8 busno; 43 pci_read_config_byte(d, 0x4a, &busno); 44 - printk(KERN_INFO "PCI: i440KX/GX host bridge %s: secondary bus %02x\n", pci_name(d), busno); 45 pci_scan_bus_with_sysdata(busno); 46 pcibios_last_bus = -1; 47 } ··· 55 */ 56 int i; 57 58 - printk(KERN_WARNING "PCI: Fixing base address flags for device %s\n", pci_name(d)); 59 for(i = 0; i < 4; i++) 60 d->resource[i].flags |= PCI_BASE_ADDRESS_SPACE_IO; 61 } ··· 68 * Fix class to be PCI_CLASS_STORAGE_SCSI 69 */ 70 if (!d->class) { 71 - printk(KERN_WARNING "PCI: fixing NCR 53C810 class code for %s\n", pci_name(d)); 72 d->class = PCI_CLASS_STORAGE_SCSI << 8; 73 } 74 } ··· 80 * SiS 5597 and 5598 chipsets require latency timer set to 81 * at most 32 to avoid lockups. 82 */ 83 - DBG("PCI: Setting max latency to 32\n"); 84 pcibios_max_latency = 32; 85 } 86 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_5597, pci_fixup_latency); ··· 138 139 pci_read_config_byte(d, where, &v); 140 if (v & ~mask) { 141 - printk(KERN_WARNING "Disabling VIA memory write queue (PCI ID %04x, rev %02x): [%02x] %02x & %02x -> %02x\n", \ 142 d->device, d->revision, where, v, mask, v & mask); 143 v &= mask; 144 pci_write_config_byte(d, where, v); ··· 200 * Apply fixup if needed, but don't touch disconnect state 201 */ 202 if ((val & 0x00FF0000) != 0x00010000) { 203 - printk(KERN_WARNING "PCI: nForce2 C1 Halt Disconnect fixup\n"); 204 pci_write_config_dword(dev, 0x6c, (val & 0xFF00FFFF) | 0x00010000); 205 } 206 } ··· 348 pci_read_config_word(pdev, PCI_COMMAND, &config); 349 if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) { 350 pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW; 351 - printk(KERN_DEBUG "Boot video device is %s\n", pci_name(pdev)); 352 } 353 } 354 DECLARE_PCI_FIXUP_FINAL(PCI_ANY_ID, PCI_ANY_ID, pci_fixup_video); ··· 388 /* verify the change for status output */ 389 pci_read_config_byte(dev, 0x50, &val); 390 if (val & 0x40) 391 - printk(KERN_INFO "PCI: Detected MSI K8T Neo2-FIR, " 392 "can't enable onboard soundcard!\n"); 393 else 394 - printk(KERN_INFO "PCI: Detected MSI K8T Neo2-FIR, " 395 - "enabled onboard soundcard.\n"); 396 } 397 } 398 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237,
··· 17 int pxb, reg; 18 u8 busno, suba, subb; 19 20 + dev_warn(&d->dev, "Searching for i450NX host bridges\n"); 21 reg = 0xd0; 22 for(pxb = 0; pxb < 2; pxb++) { 23 pci_read_config_byte(d, reg++, &busno); ··· 41 */ 42 u8 busno; 43 pci_read_config_byte(d, 0x4a, &busno); 44 + dev_info(&d->dev, "i440KX/GX host bridge; secondary bus %02x\n", busno); 45 pci_scan_bus_with_sysdata(busno); 46 pcibios_last_bus = -1; 47 } ··· 55 */ 56 int i; 57 58 + dev_warn(&d->dev, "Fixing base address flags\n"); 59 for(i = 0; i < 4; i++) 60 d->resource[i].flags |= PCI_BASE_ADDRESS_SPACE_IO; 61 } ··· 68 * Fix class to be PCI_CLASS_STORAGE_SCSI 69 */ 70 if (!d->class) { 71 + dev_warn(&d->dev, "Fixing NCR 53C810 class code\n"); 72 d->class = PCI_CLASS_STORAGE_SCSI << 8; 73 } 74 } ··· 80 * SiS 5597 and 5598 chipsets require latency timer set to 81 * at most 32 to avoid lockups. 82 */ 83 + dev_dbg(&d->dev, "Setting max latency to 32\n"); 84 pcibios_max_latency = 32; 85 } 86 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_5597, pci_fixup_latency); ··· 138 139 pci_read_config_byte(d, where, &v); 140 if (v & ~mask) { 141 + dev_warn(&d->dev, "Disabling VIA memory write queue (PCI ID %04x, rev %02x): [%02x] %02x & %02x -> %02x\n", \ 142 d->device, d->revision, where, v, mask, v & mask); 143 v &= mask; 144 pci_write_config_byte(d, where, v); ··· 200 * Apply fixup if needed, but don't touch disconnect state 201 */ 202 if ((val & 0x00FF0000) != 0x00010000) { 203 + dev_warn(&dev->dev, "nForce2 C1 Halt Disconnect fixup\n"); 204 pci_write_config_dword(dev, 0x6c, (val & 0xFF00FFFF) | 0x00010000); 205 } 206 } ··· 348 pci_read_config_word(pdev, PCI_COMMAND, &config); 349 if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) { 350 pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW; 351 + dev_printk(KERN_DEBUG, &pdev->dev, "Boot video device\n"); 352 } 353 } 354 DECLARE_PCI_FIXUP_FINAL(PCI_ANY_ID, PCI_ANY_ID, pci_fixup_video); ··· 388 /* verify the change for status output */ 389 pci_read_config_byte(dev, 0x50, &val); 390 if (val & 0x40) 391 + dev_info(&dev->dev, "Detected MSI K8T Neo2-FIR; " 392 "can't enable onboard soundcard!\n"); 393 else 394 + dev_info(&dev->dev, "Detected MSI K8T Neo2-FIR; " 395 + "enabled onboard soundcard\n"); 396 } 397 } 398 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237,
-5
arch/xtensa/Kconfig
··· 174 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 175 VESA. If you have PCI, say Y, otherwise N. 176 177 - The PCI-HOWTO, available from 178 - <http://www.linuxdoc.org/docs.html#howto>, contains valuable 179 - information about which PCI hardware does work under Linux and which 180 - doesn't 181 - 182 source "drivers/pci/Kconfig" 183 184 config HOTPLUG
··· 174 your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or 175 VESA. If you have PCI, say Y, otherwise N. 176 177 source "drivers/pci/Kconfig" 178 179 config HOTPLUG
+1 -1
drivers/ata/pata_cs5520.c
··· 229 return -ENOMEM; 230 231 /* Perform set up for DMA */ 232 - if (pci_enable_device_bars(pdev, 1<<2)) { 233 printk(KERN_ERR DRV_NAME ": unable to configure BAR2.\n"); 234 return -ENODEV; 235 }
··· 229 return -ENOMEM; 230 231 /* Perform set up for DMA */ 232 + if (pci_enable_device_io(pdev)) { 233 printk(KERN_ERR DRV_NAME ": unable to configure BAR2.\n"); 234 return -ENODEV; 235 }
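The pata_cs5520 hunk above swaps the removed pci_enable_device_bars(pdev, 1<<2) for pci_enable_device_io(pdev): instead of the caller naming BAR 2 by hand, the new interface enables every BAR whose resource is of I/O type. A rough userspace model of that selection, under the assumption that the cs5520's DMA BAR (BAR 2) is an I/O resource; the IORESOURCE_* values mirror the kernel's, but bars_of_type() is a hypothetical helper, not kernel code:

```c
#include <assert.h>

#define IORESOURCE_IO  0x00000100
#define IORESOURCE_MEM 0x00000200
#define NUM_STD_BARS   6

/* Hypothetical model of the interface change: the old
 * pci_enable_device_bars() took an explicit BAR bitmask (1 << 2 in the
 * hunk above); pci_enable_device_io() instead derives the set of BARs
 * to enable from each BAR's resource type. */
static unsigned int bars_of_type(const unsigned long flags[NUM_STD_BARS],
                                 unsigned long type)
{
    unsigned int mask = 0;
    int i;

    for (i = 0; i < NUM_STD_BARS; i++)
        if (flags[i] & type)
            mask |= 1u << i;
    return mask;
}
```

For a device whose only I/O BAR is BAR 2, the derived mask matches the old hand-written 1<<2, which is why the conversion is behavior-preserving there.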
+1 -1
drivers/i2c/busses/scx200_acb.c
··· 492 iface->pdev = pdev; 493 iface->bar = bar; 494 495 - rc = pci_enable_device_bars(iface->pdev, 1 << iface->bar); 496 if (rc) 497 goto errout_free; 498
··· 492 iface->pdev = pdev; 493 iface->bar = bar; 494 495 + rc = pci_enable_device_io(iface->pdev); 496 if (rc) 497 goto errout_free; 498
+8 -2
drivers/ide/pci/cs5520.c
··· 156 ide_setup_pci_noise(dev, d); 157 158 /* We must not grab the entire device, it has 'ISA' space in its 159 - BARS too and we will freak out other bits of the kernel */ 160 - if (pci_enable_device_bars(dev, 1<<2)) { 161 printk(KERN_WARNING "%s: Unable to enable 55x0.\n", d->name); 162 return -ENODEV; 163 }
··· 156 ide_setup_pci_noise(dev, d); 157 158 /* We must not grab the entire device, it has 'ISA' space in its 159 + * BARS too and we will freak out other bits of the kernel 160 + * 161 + * pci_enable_device_bars() is going away. I replaced it with 162 + * IO only enable for now but I'll need confirmation this is 163 + * allright for that device. If not, it will need some kind of 164 + * quirk. --BenH. 165 + */ 166 + if (pci_enable_device_io(dev)) { 167 printk(KERN_WARNING "%s: Unable to enable 55x0.\n", d->name); 168 return -ENODEV; 169 }
+4 -2
drivers/ide/setup-pci.c
··· 228 * @d: IDE port info 229 * 230 * Enable the IDE PCI device. We attempt to enable the device in full 231 - * but if that fails then we only need BAR4 so we will enable that. 232 * 233 * Returns zero on success or an error code 234 */ ··· 240 int ret; 241 242 if (pci_enable_device(dev)) { 243 - ret = pci_enable_device_bars(dev, 1 << 4); 244 if (ret < 0) { 245 printk(KERN_WARNING "%s: (ide_setup_pci_device:) " 246 "Could not enable device.\n", d->name);
··· 228 * @d: IDE port info 229 * 230 * Enable the IDE PCI device. We attempt to enable the device in full 231 + * but if that fails then we only need IO space. The PCI code should 232 + * have setup the proper resources for us already for controllers in 233 + * legacy mode. 234 * 235 * Returns zero on success or an error code 236 */ ··· 238 int ret; 239 240 if (pci_enable_device(dev)) { 241 + ret = pci_enable_device_io(dev); 242 if (ret < 0) { 243 printk(KERN_WARNING "%s: (ide_setup_pci_device:) " 244 "Could not enable device.\n", d->name);
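The setup-pci.c hunk keeps the existing strategy: attempt a full pci_enable_device() first, and only fall back to I/O-only enabling when that fails, as it does for legacy-mode controllers whose MMIO resources were never assigned. A toy sketch of that control flow; enable_full() and enable_io() are userspace stand-ins for the kernel calls, not the real API:

```c
#include <assert.h>

static int last_path; /* 1 = full enable, 2 = I/O-only fallback */

/* stand-ins for pci_enable_device() and pci_enable_device_io() */
static int enable_full(int full_ok) { return full_ok ? 0 : -1; }
static int enable_io(void) { return 0; }

/* mirrors the ide_pci_enable() flow: prefer the full enable, settle
 * for I/O space only when it fails */
static int ide_style_enable(int full_ok)
{
    if (enable_full(full_ok)) {
        last_path = 2;
        return enable_io();
    }
    last_path = 1;
    return 0;
}
```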
+13 -5
drivers/pci/bus.c
··· 108 void pci_bus_add_devices(struct pci_bus *bus) 109 { 110 struct pci_dev *dev; 111 int retval; 112 113 list_for_each_entry(dev, &bus->devices, bus_list) { ··· 139 up_write(&pci_bus_sem); 140 } 141 pci_bus_add_devices(dev->subordinate); 142 - retval = sysfs_create_link(&dev->subordinate->class_dev.kobj, 143 - &dev->dev.kobj, "bridge"); 144 if (retval) 145 - dev_err(&dev->dev, "Error creating sysfs " 146 - "bridge symlink, continuing...\n"); 147 } 148 } 149 } ··· 213 } 214 up_read(&pci_bus_sem); 215 } 216 - EXPORT_SYMBOL_GPL(pci_walk_bus); 217 218 EXPORT_SYMBOL(pci_bus_alloc_resource); 219 EXPORT_SYMBOL_GPL(pci_bus_add_device);
··· 108 void pci_bus_add_devices(struct pci_bus *bus) 109 { 110 struct pci_dev *dev; 111 + struct pci_bus *child_bus; 112 int retval; 113 114 list_for_each_entry(dev, &bus->devices, bus_list) { ··· 138 up_write(&pci_bus_sem); 139 } 140 pci_bus_add_devices(dev->subordinate); 141 + 142 + /* register the bus with sysfs as the parent is now 143 + * properly registered. */ 144 + child_bus = dev->subordinate; 145 + child_bus->dev.parent = child_bus->bridge; 146 + retval = device_register(&child_bus->dev); 147 + if (!retval) 148 + retval = device_create_file(&child_bus->dev, 149 + &dev_attr_cpuaffinity); 150 if (retval) 151 + dev_err(&dev->dev, "Error registering pci_bus" 152 + " device bridge symlink," 153 + " continuing...\n"); 154 } 155 } 156 } ··· 204 } 205 up_read(&pci_bus_sem); 206 } 207 208 EXPORT_SYMBOL(pci_bus_alloc_resource); 209 EXPORT_SYMBOL_GPL(pci_bus_add_device);
+17 -3
drivers/pci/dmar.c
··· 25 26 #include <linux/pci.h> 27 #include <linux/dmar.h> 28 29 #undef PREFIX 30 #define PREFIX "DMAR:" ··· 264 if (!dmar) 265 return -ENODEV; 266 267 - if (!dmar->width) { 268 - printk (KERN_WARNING PREFIX "Zero: Invalid DMAR haw\n"); 269 return -EINVAL; 270 } 271 ··· 302 int __init dmar_table_init(void) 303 { 304 305 - parse_dmar_table(); 306 if (list_empty(&dmar_drhd_units)) { 307 printk(KERN_INFO PREFIX "No DMAR devices found\n"); 308 return -ENODEV; 309 } 310 return 0; 311 } 312
··· 25 26 #include <linux/pci.h> 27 #include <linux/dmar.h> 28 + #include "iova.h" 29 30 #undef PREFIX 31 #define PREFIX "DMAR:" ··· 263 if (!dmar) 264 return -ENODEV; 265 266 + if (dmar->width < PAGE_SHIFT_4K - 1) { 267 + printk(KERN_WARNING PREFIX "Invalid DMAR haw\n"); 268 return -EINVAL; 269 } 270 ··· 301 int __init dmar_table_init(void) 302 { 303 304 + int ret; 305 + 306 + ret = parse_dmar_table(); 307 + if (ret) { 308 + printk(KERN_INFO PREFIX "parse DMAR table failure.\n"); 309 + return ret; 310 + } 311 + 312 if (list_empty(&dmar_drhd_units)) { 313 printk(KERN_INFO PREFIX "No DMAR devices found\n"); 314 return -ENODEV; 315 } 316 + 317 + if (list_empty(&dmar_rmrr_units)) { 318 + printk(KERN_INFO PREFIX "No RMRR found\n"); 319 + return -ENODEV; 320 + } 321 + 322 return 0; 323 } 324
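The dmar.c hunk tightens the sanity check on the DMAR table's host address width: the old code only rejected a zero value, while the new code requires enough width to address at least one 4K page (the ACPI DMAR haw field encodes the width minus one). A self-contained sketch of the new predicate, with PAGE_SHIFT_4K defined locally:

```c
#include <assert.h>

#define PAGE_SHIFT_4K 12

/* Models the check `dmar->width < PAGE_SHIFT_4K - 1` from the hunk
 * above: since haw encodes (address width - 1), anything below
 * PAGE_SHIFT_4K - 1 cannot address even a single 4K page. */
static int dmar_haw_valid(unsigned char width)
{
    return width >= PAGE_SHIFT_4K - 1;
}
```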
+2 -2
drivers/pci/hotplug/Kconfig
··· 3 # 4 5 menuconfig HOTPLUG_PCI 6 - tristate "Support for PCI Hotplug (EXPERIMENTAL)" 7 - depends on PCI && EXPERIMENTAL && HOTPLUG 8 ---help--- 9 Say Y here if you have a motherboard with a PCI Hotplug controller. 10 This allows you to add and remove PCI cards while the machine is
··· 3 # 4 5 menuconfig HOTPLUG_PCI 6 + tristate "Support for PCI Hotplug" 7 + depends on PCI && HOTPLUG 8 ---help--- 9 Say Y here if you have a motherboard with a PCI Hotplug controller. 10 This allows you to add and remove PCI cards while the machine is
+3 -1
drivers/pci/hotplug/Makefile
··· 3 # 4 5 obj-$(CONFIG_HOTPLUG_PCI) += pci_hotplug.o 6 - obj-$(CONFIG_HOTPLUG_PCI_FAKE) += fakephp.o 7 obj-$(CONFIG_HOTPLUG_PCI_COMPAQ) += cpqphp.o 8 obj-$(CONFIG_HOTPLUG_PCI_IBM) += ibmphp.o 9 obj-$(CONFIG_HOTPLUG_PCI_ACPI) += acpiphp.o ··· 14 obj-$(CONFIG_HOTPLUG_PCI_RPA) += rpaphp.o 15 obj-$(CONFIG_HOTPLUG_PCI_RPA_DLPAR) += rpadlpar_io.o 16 obj-$(CONFIG_HOTPLUG_PCI_SGI) += sgi_hotplug.o 17 18 pci_hotplug-objs := pci_hotplug_core.o 19
··· 3 # 4 5 obj-$(CONFIG_HOTPLUG_PCI) += pci_hotplug.o 6 obj-$(CONFIG_HOTPLUG_PCI_COMPAQ) += cpqphp.o 7 obj-$(CONFIG_HOTPLUG_PCI_IBM) += ibmphp.o 8 obj-$(CONFIG_HOTPLUG_PCI_ACPI) += acpiphp.o ··· 15 obj-$(CONFIG_HOTPLUG_PCI_RPA) += rpaphp.o 16 obj-$(CONFIG_HOTPLUG_PCI_RPA_DLPAR) += rpadlpar_io.o 17 obj-$(CONFIG_HOTPLUG_PCI_SGI) += sgi_hotplug.o 18 + 19 + # Link this last so it doesn't claim devices that have a real hotplug driver 20 + obj-$(CONFIG_HOTPLUG_PCI_FAKE) += fakephp.o 21 22 pci_hotplug-objs := pci_hotplug_core.o 23
-1
drivers/pci/hotplug/acpiphp.h
··· 113 u8 device; /* pci device# */ 114 115 u32 sun; /* ACPI _SUN (slot unique number) */ 116 - u32 slotno; /* slot number relative to bridge */ 117 u32 flags; /* see below */ 118 }; 119
··· 113 u8 device; /* pci device# */ 114 115 u32 sun; /* ACPI _SUN (slot unique number) */ 116 u32 flags; /* see below */ 117 }; 118
+2 -3
drivers/pci/hotplug/acpiphp_glue.c
··· 102 } 103 104 105 - /* callback routine to check the existence of ejectable slots */ 106 static acpi_status 107 is_ejectable_slot(acpi_handle handle, u32 lvl, void *context, void **rv) 108 { ··· 117 } 118 } 119 120 - /* callback routine to check for the existance of a pci dock device */ 121 static acpi_status 122 is_pci_dock_device(acpi_handle handle, u32 lvl, void *context, void **rv) 123 { ··· 1528 acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer); 1529 dbg("%s: re-enumerating slots under %s\n", 1530 __FUNCTION__, objname); 1531 - acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer); 1532 acpiphp_check_bridge(bridge); 1533 } 1534 return AE_OK ;
··· 102 } 103 104 105 + /* callback routine to check for the existence of ejectable slots */ 106 static acpi_status 107 is_ejectable_slot(acpi_handle handle, u32 lvl, void *context, void **rv) 108 { ··· 117 } 118 } 119 120 + /* callback routine to check for the existence of a pci dock device */ 121 static acpi_status 122 is_pci_dock_device(acpi_handle handle, u32 lvl, void *context, void **rv) 123 { ··· 1528 acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer); 1529 dbg("%s: re-enumerating slots under %s\n", 1530 __FUNCTION__, objname); 1531 acpiphp_check_bridge(bridge); 1532 } 1533 return AE_OK ;
+35 -4
drivers/pci/hotplug/fakephp.c
··· 39 #include <linux/init.h> 40 #include <linux/string.h> 41 #include <linux/slab.h> 42 #include "../pci.h" 43 44 #if !defined(MODULE) ··· 64 struct list_head node; 65 struct hotplug_slot *slot; 66 struct pci_dev *dev; 67 }; 68 69 static int debug; 70 static LIST_HEAD(slot_list); 71 72 static int enable_slot (struct hotplug_slot *slot); 73 static int disable_slot (struct hotplug_slot *slot); ··· 116 slot->name = &dev->dev.bus_id[0]; 117 dbg("slot->name = %s\n", slot->name); 118 119 - dslot = kmalloc(sizeof(struct dummy_slot), GFP_KERNEL); 120 if (!dslot) 121 goto error_info; 122 ··· 169 retval = pci_hp_deregister(dslot->slot); 170 if (retval) 171 err("Problem unregistering a slot %s\n", dslot->slot->name); 172 } 173 174 /** ··· 282 pci_rescan_buses(&pci_root_buses); 283 } 284 285 286 static int enable_slot(struct hotplug_slot *hotplug_slot) 287 { 288 /* mis-use enable_slot for rescanning of the pci bus */ 289 - pci_rescan(); 290 return -ENODEV; 291 } 292 ··· 327 err("Can't remove PCI devices with other PCI devices behind it yet.\n"); 328 return -ENODEV; 329 } 330 /* search for subfunctions and disable them first */ 331 if (!(dslot->dev->devfn & 7)) { 332 for (func = 1; func < 8; func++) { ··· 353 /* remove the device from the pci core */ 354 pci_remove_bus_device(dslot->dev); 355 356 - /* blow away this sysfs entry and other parts. */ 357 - remove_slot(dslot); 358 359 return 0; 360 } ··· 366 struct list_head *next; 367 struct dummy_slot *dslot; 368 369 list_for_each_safe (tmp, next, &slot_list) { 370 dslot = list_entry (tmp, struct dummy_slot, node); 371 remove_slot(dslot); ··· 377 static int __init dummyphp_init(void) 378 { 379 info(DRIVER_DESC "\n"); 380 381 return pci_scan_buses(); 382 }
··· 39 #include <linux/init.h> 40 #include <linux/string.h> 41 #include <linux/slab.h> 42 + #include <linux/workqueue.h> 43 #include "../pci.h" 44 45 #if !defined(MODULE) ··· 63 struct list_head node; 64 struct hotplug_slot *slot; 65 struct pci_dev *dev; 66 + struct work_struct remove_work; 67 + unsigned long removed; 68 }; 69 70 static int debug; 71 static LIST_HEAD(slot_list); 72 + static struct workqueue_struct *dummyphp_wq; 73 + 74 + static void pci_rescan_worker(struct work_struct *work); 75 + static DECLARE_WORK(pci_rescan_work, pci_rescan_worker); 76 77 static int enable_slot (struct hotplug_slot *slot); 78 static int disable_slot (struct hotplug_slot *slot); ··· 109 slot->name = &dev->dev.bus_id[0]; 110 dbg("slot->name = %s\n", slot->name); 111 112 + dslot = kzalloc(sizeof(struct dummy_slot), GFP_KERNEL); 113 if (!dslot) 114 goto error_info; 115 ··· 162 retval = pci_hp_deregister(dslot->slot); 163 if (retval) 164 err("Problem unregistering a slot %s\n", dslot->slot->name); 165 + } 166 + 167 + /* called from the single-threaded workqueue handler to remove a slot */ 168 + static void remove_slot_worker(struct work_struct *work) 169 + { 170 + struct dummy_slot *dslot = 171 + container_of(work, struct dummy_slot, remove_work); 172 + remove_slot(dslot); 173 } 174 175 /** ··· 267 pci_rescan_buses(&pci_root_buses); 268 } 269 270 + /* called from the single-threaded workqueue handler to rescan all pci buses */ 271 + static void pci_rescan_worker(struct work_struct *work) 272 + { 273 + pci_rescan(); 274 + } 275 276 static int enable_slot(struct hotplug_slot *hotplug_slot) 277 { 278 /* mis-use enable_slot for rescanning of the pci bus */ 279 + cancel_work_sync(&pci_rescan_work); 280 + queue_work(dummyphp_wq, &pci_rescan_work); 281 return -ENODEV; 282 } 283 ··· 306 err("Can't remove PCI devices with other PCI devices behind it yet.\n"); 307 return -ENODEV; 308 } 309 + if (test_and_set_bit(0, &dslot->removed)) { 310 + dbg("Slot already scheduled for removal\n"); 311 + return -ENODEV; 312 + } 313 /* search for subfunctions and disable them first */ 314 if (!(dslot->dev->devfn & 7)) { 315 for (func = 1; func < 8; func++) { ··· 328 /* remove the device from the pci core */ 329 pci_remove_bus_device(dslot->dev); 330 331 + /* queue work item to blow away this sysfs entry and other parts. */ 332 + INIT_WORK(&dslot->remove_work, remove_slot_worker); 333 + queue_work(dummyphp_wq, &dslot->remove_work); 334 335 return 0; 336 } ··· 340 struct list_head *next; 341 struct dummy_slot *dslot; 342 343 + destroy_workqueue(dummyphp_wq); 344 list_for_each_safe (tmp, next, &slot_list) { 345 dslot = list_entry (tmp, struct dummy_slot, node); 346 remove_slot(dslot); ··· 350 static int __init dummyphp_init(void) 351 { 352 info(DRIVER_DESC "\n"); 353 + 354 + dummyphp_wq = create_singlethread_workqueue(MY_NAME); 355 + if (!dummyphp_wq) 356 + return -ENOMEM; 357 358 return pci_scan_buses(); 359 }
+7 -4
drivers/pci/hotplug/ibmphp_core.c
··· 761 debug("func->device << 3 | 0x0 = %x\n", func->device << 3 | 0x0); 762 763 for (j = 0; j < 0x08; j++) { 764 - temp = pci_find_slot(func->busno, (func->device << 3) | j); 765 - if (temp) 766 pci_remove_bus_device(temp); 767 } 768 } 769 770 /* ··· 826 if (!(bus_structure_fixup(func->busno))) 827 flag = 1; 828 if (func->dev == NULL) 829 - func->dev = pci_find_slot(func->busno, 830 PCI_DEVFN(func->device, func->function)); 831 832 if (func->dev == NULL) { ··· 839 if (num) 840 pci_bus_add_devices(bus); 841 842 - func->dev = pci_find_slot(func->busno, 843 PCI_DEVFN(func->device, func->function)); 844 if (func->dev == NULL) { 845 err("ERROR... : pci_dev still NULL\n");
··· 761 debug("func->device << 3 | 0x0 = %x\n", func->device << 3 | 0x0); 762 763 for (j = 0; j < 0x08; j++) { 764 + temp = pci_get_bus_and_slot(func->busno, (func->device << 3) | j); 765 + if (temp) { 766 pci_remove_bus_device(temp); 767 + pci_dev_put(temp); 768 + } 769 } 770 + pci_dev_put(func->dev); 771 } 772 773 /* ··· 823 if (!(bus_structure_fixup(func->busno))) 824 flag = 1; 825 if (func->dev == NULL) 826 + func->dev = pci_get_bus_and_slot(func->busno, 827 PCI_DEVFN(func->device, func->function)); 828 829 if (func->dev == NULL) { ··· 836 if (num) 837 pci_bus_add_devices(bus); 838 839 + func->dev = pci_get_bus_and_slot(func->busno, 840 PCI_DEVFN(func->device, func->function)); 841 if (func->dev == NULL) { 842 err("ERROR... : pci_dev still NULL\n");
+2 -2
drivers/pci/hotplug/pci_hotplug_core.c
··· 137 int retval = 0; \ 138 if (try_module_get(ops->owner)) { \ 139 if (ops->get_##name) \ 140 - retval = ops->get_##name (slot, value); \ 141 else \ 142 *value = slot->info->name; \ 143 module_put(ops->owner); \ ··· 625 if ((slot->info == NULL) || (slot->ops == NULL)) 626 return -EINVAL; 627 if (slot->release == NULL) { 628 - dbg("Why are you trying to register a hotplug slot" 629 "without a proper release function?\n"); 630 return -EINVAL; 631 }
··· 137 int retval = 0; \ 138 if (try_module_get(ops->owner)) { \ 139 if (ops->get_##name) \ 140 + retval = ops->get_##name(slot, value); \ 141 else \ 142 *value = slot->info->name; \ 143 module_put(ops->owner); \ ··· 625 if ((slot->info == NULL) || (slot->ops == NULL)) 626 return -EINVAL; 627 if (slot->release == NULL) { 628 + dbg("Why are you trying to register a hotplug slot " 629 "without a proper release function?\n"); 630 return -EINVAL; 631 }
+3 -6
drivers/pci/hotplug/pciehp.h
··· 82 }; 83 84 struct controller { 85 - struct controller *next; 86 struct mutex crit_sect; /* critical section mutex */ 87 struct mutex ctrl_lock; /* controller lock */ 88 int num_slots; /* Number of slots on ctlr */ 89 int slot_num_inc; /* 1 or -1 */ 90 struct pci_dev *pci_dev; 91 struct list_head slot_list; 92 - struct slot *slot; 93 struct hpc_ops *hpc_ops; 94 wait_queue_head_t queue; /* sleep & wake process */ 95 - u8 bus; 96 - u8 device; 97 - u8 function; 98 u8 slot_device_offset; 99 u32 first_slot; /* First physical slot number */ /* PCIE only has 1 slot */ 100 u8 slot_bus; /* Bus where the slots handled by this controller sit */ 101 u8 ctrlcap; 102 - u16 vendor_id; 103 u8 cap_base; 104 struct timer_list poll_timer; 105 volatile int cmd_busy; ··· 155 extern int pciehp_unconfigure_device(struct slot *p_slot); 156 extern void pciehp_queue_pushbutton_work(struct work_struct *work); 157 int pcie_init(struct controller *ctrl, struct pcie_device *dev); 158 159 static inline struct slot *pciehp_find_slot(struct controller *ctrl, u8 device) 160 {
··· 82 }; 83 84 struct controller { 85 struct mutex crit_sect; /* critical section mutex */ 86 struct mutex ctrl_lock; /* controller lock */ 87 int num_slots; /* Number of slots on ctlr */ 88 int slot_num_inc; /* 1 or -1 */ 89 struct pci_dev *pci_dev; 90 struct list_head slot_list; 91 struct hpc_ops *hpc_ops; 92 wait_queue_head_t queue; /* sleep & wake process */ 93 u8 slot_device_offset; 94 u32 first_slot; /* First physical slot number */ /* PCIE only has 1 slot */ 95 u8 slot_bus; /* Bus where the slots handled by this controller sit */ 96 u8 ctrlcap; 97 u8 cap_base; 98 struct timer_list poll_timer; 99 volatile int cmd_busy; ··· 161 extern int pciehp_unconfigure_device(struct slot *p_slot); 162 extern void pciehp_queue_pushbutton_work(struct work_struct *work); 163 int pcie_init(struct controller *ctrl, struct pcie_device *dev); 164 + int pciehp_enable_slot(struct slot *p_slot); 165 + int pciehp_disable_slot(struct slot *p_slot); 166 + int pcie_init_hardware_part2(struct controller *ctrl, struct pcie_device *dev); 167 168 static inline struct slot *pciehp_find_slot(struct controller *ctrl, u8 device) 169 {
+26 -7
drivers/pci/hotplug/pciehp_core.c
··· 453 454 pci_set_drvdata(pdev, ctrl); 455 456 - ctrl->bus = pdev->bus->number; /* ctrl bus */ 457 - ctrl->slot_bus = pdev->subordinate->number; /* bus controlled by this HPC */ 458 - 459 - ctrl->device = PCI_SLOT(pdev->devfn); 460 - ctrl->function = PCI_FUNC(pdev->devfn); 461 - dbg("%s: ctrl bus=0x%x, device=%x, function=%x, irq=%x\n", __FUNCTION__, 462 - ctrl->bus, ctrl->device, ctrl->function, pdev->irq); 463 464 /* Setup the slot information structures */ 465 rc = init_slots(ctrl); ··· 467 t_slot = pciehp_find_slot(ctrl, ctrl->slot_device_offset); 468 469 t_slot->hpc_ops->get_adapter_status(t_slot, &value); /* Check if slot is occupied */ 470 if ((POWER_CTRL(ctrl->ctrlcap)) && !value) { 471 rc = t_slot->hpc_ops->power_off_slot(t_slot); /* Power off slot if not occupied*/ 472 if (rc) ··· 510 static int pciehp_resume (struct pcie_device *dev) 511 { 512 printk("%s ENTRY\n", __FUNCTION__); 513 return 0; 514 } 515 #endif
··· 453 454 pci_set_drvdata(pdev, ctrl); 455 456 + dbg("%s: ctrl bus=0x%x, device=%x, function=%x, irq=%x\n", 457 + __FUNCTION__, pdev->bus->number, PCI_SLOT(pdev->devfn), 458 + PCI_FUNC(pdev->devfn), pdev->irq); 459 460 /* Setup the slot information structures */ 461 rc = init_slots(ctrl); ··· 471 t_slot = pciehp_find_slot(ctrl, ctrl->slot_device_offset); 472 473 t_slot->hpc_ops->get_adapter_status(t_slot, &value); /* Check if slot is occupied */ 474 + if (value) { 475 + rc = pciehp_enable_slot(t_slot); 476 + if (rc) /* -ENODEV: shouldn't happen, but deal with it */ 477 + value = 0; 478 + } 479 if ((POWER_CTRL(ctrl->ctrlcap)) && !value) { 480 rc = t_slot->hpc_ops->power_off_slot(t_slot); /* Power off slot if not occupied*/ 481 if (rc) ··· 509 static int pciehp_resume (struct pcie_device *dev) 510 { 511 printk("%s ENTRY\n", __FUNCTION__); 512 + if (pciehp_force) { 513 + struct pci_dev *pdev = dev->port; 514 + struct controller *ctrl = pci_get_drvdata(pdev); 515 + struct slot *t_slot; 516 + u8 status; 517 + 518 + /* reinitialize the chipset's event detection logic */ 519 + pcie_init_hardware_part2(ctrl, dev); 520 + 521 + t_slot = pciehp_find_slot(ctrl, ctrl->slot_device_offset); 522 + 523 + /* Check if slot is occupied */ 524 + t_slot->hpc_ops->get_adapter_status(t_slot, &status); 525 + if (status) 526 + pciehp_enable_slot(t_slot); 527 + else 528 + pciehp_disable_slot(t_slot); 529 + } 530 return 0; 531 } 532 #endif
+2 -25
drivers/pci/hotplug/pciehp_ctrl.c
··· 37 #include "pciehp.h" 38 39 static void interrupt_event_handler(struct work_struct *work); 40 - static int pciehp_enable_slot(struct slot *p_slot); 41 - static int pciehp_disable_slot(struct slot *p_slot); 42 43 static int queue_interrupt_event(struct slot *p_slot, u32 event_type) 44 { ··· 195 __FUNCTION__); 196 return; 197 } 198 - /* 199 - * After turning power off, we must wait for at least 200 - * 1 second before taking any action that relies on 201 - * power having been removed from the slot/adapter. 202 - */ 203 - msleep(1000); 204 } 205 } 206 ··· 207 */ 208 static int board_added(struct slot *p_slot) 209 { 210 - u8 hp_slot; 211 int retval = 0; 212 struct controller *ctrl = p_slot->ctrl; 213 214 - hp_slot = p_slot->device - ctrl->slot_device_offset; 215 - 216 dbg("%s: slot device, slot offset, hp slot = %d, %d ,%d\n", 217 __FUNCTION__, p_slot->device, 218 - ctrl->slot_device_offset, hp_slot); 219 220 if (POWER_CTRL(ctrl->ctrlcap)) { 221 /* Power on slot */ ··· 270 */ 271 static int remove_board(struct slot *p_slot) 272 { 273 - u8 device; 274 - u8 hp_slot; 275 int retval = 0; 276 struct controller *ctrl = p_slot->ctrl; 277 ··· 277 if (retval) 278 return retval; 279 280 - device = p_slot->device; 281 - hp_slot = p_slot->device - ctrl->slot_device_offset; 282 - p_slot = pciehp_find_slot(ctrl, hp_slot + ctrl->slot_device_offset); 283 - 284 - dbg("In %s, hp_slot = %d\n", __FUNCTION__, hp_slot); 285 286 if (POWER_CTRL(ctrl->ctrlcap)) { 287 /* power off slot */ ··· 604 mutex_unlock(&p_slot->ctrl->crit_sect); 605 return -EINVAL; 606 } 607 - /* 608 - * After turning power off, we must wait for at least 609 - * 1 second before taking any action that relies on 610 - * power having been removed from the slot/adapter. 611 - */ 612 - msleep(1000); 613 } 614 615 ret = remove_board(p_slot);
··· 37 #include "pciehp.h" 38 39 static void interrupt_event_handler(struct work_struct *work); 40 41 static int queue_interrupt_event(struct slot *p_slot, u32 event_type) 42 { ··· 197 __FUNCTION__); 198 return; 199 } 200 } 201 } 202 ··· 215 */ 216 static int board_added(struct slot *p_slot) 217 { 218 int retval = 0; 219 struct controller *ctrl = p_slot->ctrl; 220 221 dbg("%s: slot device, slot offset, hp slot = %d, %d ,%d\n", 222 __FUNCTION__, p_slot->device, 223 + ctrl->slot_device_offset, p_slot->hp_slot); 224 225 if (POWER_CTRL(ctrl->ctrlcap)) { 226 /* Power on slot */ ··· 281 */ 282 static int remove_board(struct slot *p_slot) 283 { 284 int retval = 0; 285 struct controller *ctrl = p_slot->ctrl; 286 ··· 290 if (retval) 291 return retval; 292 293 + dbg("In %s, hp_slot = %d\n", __FUNCTION__, p_slot->hp_slot); 294 295 if (POWER_CTRL(ctrl->ctrlcap)) { 296 /* power off slot */ ··· 621 mutex_unlock(&p_slot->ctrl->crit_sect); 622 return -EINVAL; 623 } 624 } 625 626 ret = remove_board(p_slot);
+202 -112
drivers/pci/hotplug/pciehp_hpc.c
··· 636 return retval; 637 } 638 639 static int hpc_power_off_slot(struct slot * slot) 640 { 641 struct controller *ctrl = slot->ctrl; 642 u16 slot_cmd; 643 u16 cmd_mask; 644 int retval = 0; 645 646 dbg("%s: slot->hp_slot %x\n", __FUNCTION__, slot->hp_slot); 647 648 slot_cmd = POWER_OFF; 649 cmd_mask = PWR_CTRL; ··· 715 } 716 dbg("%s: SLOTCTRL %x write cmd %x\n", 717 __FUNCTION__, ctrl->cap_base + SLOTCTRL, slot_cmd); 718 719 return retval; 720 } ··· 1119 } 1120 #endif 1121 1122 - int pcie_init(struct controller * ctrl, struct pcie_device *dev) 1123 { 1124 int rc; 1125 u16 temp_word; 1126 - u16 cap_reg; 1127 u16 intr_enable = 0; 1128 u32 slot_cap; 1129 int cap_base; 1130 u16 slot_status, slot_ctrl; ··· 1266 dbg("%s: hotplug controller vendor id 0x%x device id 0x%x\n", 1267 __FUNCTION__, pdev->vendor, pdev->device); 1268 1269 - if ((cap_base = pci_find_capability(pdev, PCI_CAP_ID_EXP)) == 0) { 1270 dbg("%s: Can't find PCI_CAP_ID_EXP (0x10)\n", __FUNCTION__); 1271 - goto abort_free_ctlr; 1272 } 1273 1274 ctrl->cap_base = cap_base; ··· 1279 rc = pciehp_readw(ctrl, CAPREG, &cap_reg); 1280 if (rc) { 1281 err("%s: Cannot read CAPREG register\n", __FUNCTION__); 1282 - goto abort_free_ctlr; 1283 } 1284 dbg("%s: CAPREG offset %x cap_reg %x\n", 1285 __FUNCTION__, ctrl->cap_base + CAPREG, cap_reg); ··· 1289 && ((cap_reg & DEV_PORT_TYPE) != 0x0060))) { 1290 dbg("%s : This is not a root port or the port is not " 1291 "connected to a slot\n", __FUNCTION__); 1292 - goto abort_free_ctlr; 1293 } 1294 1295 rc = pciehp_readl(ctrl, SLOTCAP, &slot_cap); 1296 if (rc) { 1297 err("%s: Cannot read SLOTCAP register\n", __FUNCTION__); 1298 - goto abort_free_ctlr; 1299 } 1300 dbg("%s: SLOTCAP offset %x slot_cap %x\n", 1301 __FUNCTION__, ctrl->cap_base + SLOTCAP, slot_cap); 1302 1303 if (!(slot_cap & HP_CAP)) { 1304 dbg("%s : This slot is not hot-plug capable\n", __FUNCTION__); 1305 - goto abort_free_ctlr; 1306 } 1307 /* For debugging purpose */ 1308 rc = pciehp_readw(ctrl, SLOTSTATUS, 
&slot_status); 1309 if (rc) { 1310 err("%s: Cannot read SLOTSTATUS register\n", __FUNCTION__); 1311 - goto abort_free_ctlr; 1312 } 1313 dbg("%s: SLOTSTATUS offset %x slot_status %x\n", 1314 __FUNCTION__, ctrl->cap_base + SLOTSTATUS, slot_status); ··· 1316 rc = pciehp_readw(ctrl, SLOTCTRL, &slot_ctrl); 1317 if (rc) { 1318 err("%s: Cannot read SLOTCTRL register\n", __FUNCTION__); 1319 - goto abort_free_ctlr; 1320 } 1321 dbg("%s: SLOTCTRL offset %x slot_ctrl %x\n", 1322 __FUNCTION__, ctrl->cap_base + SLOTCTRL, slot_ctrl); ··· 1344 ctrl->first_slot = slot_cap >> 19; 1345 ctrl->ctrlcap = slot_cap & 0x0000007f; 1346 1347 - /* Mask Hot-plug Interrupt Enable */ 1348 - rc = pciehp_readw(ctrl, SLOTCTRL, &temp_word); 1349 - if (rc) { 1350 - err("%s: Cannot read SLOTCTRL register\n", __FUNCTION__); 1351 - goto abort_free_ctlr; 1352 - } 1353 - 1354 - dbg("%s: SLOTCTRL %x value read %x\n", 1355 - __FUNCTION__, ctrl->cap_base + SLOTCTRL, temp_word); 1356 - temp_word = (temp_word & ~HP_INTR_ENABLE & ~CMD_CMPL_INTR_ENABLE) | 1357 - 0x00; 1358 - 1359 - rc = pciehp_writew(ctrl, SLOTCTRL, temp_word); 1360 - if (rc) { 1361 - err("%s: Cannot write to SLOTCTRL register\n", __FUNCTION__); 1362 - goto abort_free_ctlr; 1363 - } 1364 - 1365 - rc = pciehp_readw(ctrl, SLOTSTATUS, &slot_status); 1366 - if (rc) { 1367 - err("%s: Cannot read SLOTSTATUS register\n", __FUNCTION__); 1368 - goto abort_free_ctlr; 1369 - } 1370 - 1371 - temp_word = 0x1F; /* Clear all events */ 1372 - rc = pciehp_writew(ctrl, SLOTSTATUS, temp_word); 1373 - if (rc) { 1374 - err("%s: Cannot write to SLOTSTATUS register\n", __FUNCTION__); 1375 - goto abort_free_ctlr; 1376 - } 1377 1378 if (pciehp_poll_mode) { 1379 /* Install interrupt polling timer. 
Start with 10 sec delay */ ··· 1362 if (rc) { 1363 err("Can't get irq %d for the hotplug controller\n", 1364 ctrl->pci_dev->irq); 1365 - goto abort_free_ctlr; 1366 } 1367 } 1368 dbg("pciehp ctrl b:d:f:irq=0x%x:%x:%x:%x\n", pdev->bus->number, ··· 1380 } 1381 } 1382 1383 - rc = pciehp_readw(ctrl, SLOTCTRL, &temp_word); 1384 - if (rc) { 1385 - err("%s: Cannot read SLOTCTRL register\n", __FUNCTION__); 1386 - goto abort_free_irq; 1387 } 1388 - 1389 - intr_enable = intr_enable | PRSN_DETECT_ENABLE; 1390 - 1391 - if (ATTN_BUTTN(slot_cap)) 1392 - intr_enable = intr_enable | ATTN_BUTTN_ENABLE; 1393 - 1394 - if (POWER_CTRL(slot_cap)) 1395 - intr_enable = intr_enable | PWR_FAULT_DETECT_ENABLE; 1396 - 1397 - if (MRL_SENS(slot_cap)) 1398 - intr_enable = intr_enable | MRL_DETECT_ENABLE; 1399 - 1400 - temp_word = (temp_word & ~intr_enable) | intr_enable; 1401 - 1402 - if (pciehp_poll_mode) { 1403 - temp_word = (temp_word & ~HP_INTR_ENABLE) | 0x0; 1404 - } else { 1405 - temp_word = (temp_word & ~HP_INTR_ENABLE) | HP_INTR_ENABLE; 1406 - } 1407 - 1408 - /* 1409 - * Unmask Hot-plug Interrupt Enable for the interrupt 1410 - * notification mechanism case. 
1411 - */ 1412 - rc = pciehp_writew(ctrl, SLOTCTRL, temp_word); 1413 - if (rc) { 1414 - err("%s: Cannot write to SLOTCTRL register\n", __FUNCTION__); 1415 - goto abort_free_irq; 1416 - } 1417 - rc = pciehp_readw(ctrl, SLOTSTATUS, &slot_status); 1418 - if (rc) { 1419 - err("%s: Cannot read SLOTSTATUS register\n", __FUNCTION__); 1420 - goto abort_disable_intr; 1421 - } 1422 - 1423 - temp_word = 0x1F; /* Clear all events */ 1424 - rc = pciehp_writew(ctrl, SLOTSTATUS, temp_word); 1425 - if (rc) { 1426 - err("%s: Cannot write to SLOTSTATUS register\n", __FUNCTION__); 1427 - goto abort_disable_intr; 1428 - } 1429 - 1430 - if (pciehp_force) { 1431 - dbg("Bypassing BIOS check for pciehp use on %s\n", 1432 - pci_name(ctrl->pci_dev)); 1433 - } else { 1434 - rc = pciehp_get_hp_hw_control_from_firmware(ctrl->pci_dev); 1435 - if (rc) 1436 - goto abort_disable_intr; 1437 - } 1438 - 1439 - ctrl->hpc_ops = &pciehp_hpc_ops; 1440 - 1441 - return 0; 1442 - 1443 - /* We end up here for the many possible ways to fail this API. */ 1444 - abort_disable_intr: 1445 - rc = pciehp_readw(ctrl, SLOTCTRL, &temp_word); 1446 - if (!rc) { 1447 - temp_word &= ~(intr_enable | HP_INTR_ENABLE); 1448 - rc = pciehp_writew(ctrl, SLOTCTRL, temp_word); 1449 - } 1450 - if (rc) 1451 - err("%s : disabling interrupts failed\n", __FUNCTION__); 1452 - 1453 abort_free_irq: 1454 if (pciehp_poll_mode) 1455 del_timer_sync(&ctrl->poll_timer); 1456 else 1457 free_irq(ctrl->pci_dev->irq, ctrl); 1458 - 1459 - abort_free_ctlr: 1460 return -1; 1461 }
··· 636 return retval; 637 } 638 639 + static inline int pcie_mask_bad_dllp(struct controller *ctrl) 640 + { 641 + struct pci_dev *dev = ctrl->pci_dev; 642 + int pos; 643 + u32 reg; 644 + 645 + pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); 646 + if (!pos) 647 + return 0; 648 + pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, &reg); 649 + if (reg & PCI_ERR_COR_BAD_DLLP) 650 + return 0; 651 + reg |= PCI_ERR_COR_BAD_DLLP; 652 + pci_write_config_dword(dev, pos + PCI_ERR_COR_MASK, reg); 653 + return 1; 654 + } 655 + 656 + static inline void pcie_unmask_bad_dllp(struct controller *ctrl) 657 + { 658 + struct pci_dev *dev = ctrl->pci_dev; 659 + u32 reg; 660 + int pos; 661 + 662 + pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); 663 + if (!pos) 664 + return; 665 + pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, &reg); 666 + if (!(reg & PCI_ERR_COR_BAD_DLLP)) 667 + return; 668 + reg &= ~PCI_ERR_COR_BAD_DLLP; 669 + pci_write_config_dword(dev, pos + PCI_ERR_COR_MASK, reg); 670 + } 671 + 672 static int hpc_power_off_slot(struct slot * slot) 673 { 674 struct controller *ctrl = slot->ctrl; 675 u16 slot_cmd; 676 u16 cmd_mask; 677 int retval = 0; 678 + int changed; 679 680 dbg("%s: slot->hp_slot %x\n", __FUNCTION__, slot->hp_slot); 681 + 682 + /* 683 + * Set Bad DLLP Mask bit in Correctable Error Mask 684 + * Register. This is the workaround against Bad DLLP error 685 + * that sometimes happens during turning power off the slot 686 + * which conforms to PCI Express 1.0a spec. 687 + */ 688 + changed = pcie_mask_bad_dllp(ctrl); 689 690 slot_cmd = POWER_OFF; 691 cmd_mask = PWR_CTRL; ··· 673 } 674 dbg("%s: SLOTCTRL %x write cmd %x\n", 675 __FUNCTION__, ctrl->cap_base + SLOTCTRL, slot_cmd); 676 + 677 + /* 678 + * After turning power off, we must wait for at least 1 second 679 + * before taking any action that relies on power having been 680 + * removed from the slot/adapter. 
681 + */ 682 + msleep(1000); 683 + 684 + if (changed) 685 + pcie_unmask_bad_dllp(ctrl); 686 687 return retval; 688 } ··· 1067 } 1068 #endif 1069 1070 + static int pcie_init_hardware_part1(struct controller *ctrl, 1071 + struct pcie_device *dev) 1072 { 1073 int rc; 1074 u16 temp_word; 1075 + u32 slot_cap; 1076 + u16 slot_status; 1077 + 1078 + rc = pciehp_readl(ctrl, SLOTCAP, &slot_cap); 1079 + if (rc) { 1080 + err("%s: Cannot read SLOTCAP register\n", __FUNCTION__); 1081 + return -1; 1082 + } 1083 + 1084 + /* Mask Hot-plug Interrupt Enable */ 1085 + rc = pciehp_readw(ctrl, SLOTCTRL, &temp_word); 1086 + if (rc) { 1087 + err("%s: Cannot read SLOTCTRL register\n", __FUNCTION__); 1088 + return -1; 1089 + } 1090 + 1091 + dbg("%s: SLOTCTRL %x value read %x\n", 1092 + __FUNCTION__, ctrl->cap_base + SLOTCTRL, temp_word); 1093 + temp_word = (temp_word & ~HP_INTR_ENABLE & ~CMD_CMPL_INTR_ENABLE) | 1094 + 0x00; 1095 + 1096 + rc = pciehp_writew(ctrl, SLOTCTRL, temp_word); 1097 + if (rc) { 1098 + err("%s: Cannot write to SLOTCTRL register\n", __FUNCTION__); 1099 + return -1; 1100 + } 1101 + 1102 + rc = pciehp_readw(ctrl, SLOTSTATUS, &slot_status); 1103 + if (rc) { 1104 + err("%s: Cannot read SLOTSTATUS register\n", __FUNCTION__); 1105 + return -1; 1106 + } 1107 + 1108 + temp_word = 0x1F; /* Clear all events */ 1109 + rc = pciehp_writew(ctrl, SLOTSTATUS, temp_word); 1110 + if (rc) { 1111 + err("%s: Cannot write to SLOTSTATUS register\n", __FUNCTION__); 1112 + return -1; 1113 + } 1114 + return 0; 1115 + } 1116 + 1117 + int pcie_init_hardware_part2(struct controller *ctrl, struct pcie_device *dev) 1118 + { 1119 + int rc; 1120 + u16 temp_word; 1121 u16 intr_enable = 0; 1122 + u32 slot_cap; 1123 + u16 slot_status; 1124 + 1125 + rc = pciehp_readw(ctrl, SLOTCTRL, &temp_word); 1126 + if (rc) { 1127 + err("%s: Cannot read SLOTCTRL register\n", __FUNCTION__); 1128 + goto abort; 1129 + } 1130 + 1131 + intr_enable = intr_enable | PRSN_DETECT_ENABLE; 1132 + 1133 + rc = pciehp_readl(ctrl, 
SLOTCAP, &slot_cap); 1134 + if (rc) { 1135 + err("%s: Cannot read SLOTCAP register\n", __FUNCTION__); 1136 + goto abort; 1137 + } 1138 + 1139 + if (ATTN_BUTTN(slot_cap)) 1140 + intr_enable = intr_enable | ATTN_BUTTN_ENABLE; 1141 + 1142 + if (POWER_CTRL(slot_cap)) 1143 + intr_enable = intr_enable | PWR_FAULT_DETECT_ENABLE; 1144 + 1145 + if (MRL_SENS(slot_cap)) 1146 + intr_enable = intr_enable | MRL_DETECT_ENABLE; 1147 + 1148 + temp_word = (temp_word & ~intr_enable) | intr_enable; 1149 + 1150 + if (pciehp_poll_mode) { 1151 + temp_word = (temp_word & ~HP_INTR_ENABLE) | 0x0; 1152 + } else { 1153 + temp_word = (temp_word & ~HP_INTR_ENABLE) | HP_INTR_ENABLE; 1154 + } 1155 + 1156 + /* 1157 + * Unmask Hot-plug Interrupt Enable for the interrupt 1158 + * notification mechanism case. 1159 + */ 1160 + rc = pciehp_writew(ctrl, SLOTCTRL, temp_word); 1161 + if (rc) { 1162 + err("%s: Cannot write to SLOTCTRL register\n", __FUNCTION__); 1163 + goto abort; 1164 + } 1165 + rc = pciehp_readw(ctrl, SLOTSTATUS, &slot_status); 1166 + if (rc) { 1167 + err("%s: Cannot read SLOTSTATUS register\n", __FUNCTION__); 1168 + goto abort_disable_intr; 1169 + } 1170 + 1171 + temp_word = 0x1F; /* Clear all events */ 1172 + rc = pciehp_writew(ctrl, SLOTSTATUS, temp_word); 1173 + if (rc) { 1174 + err("%s: Cannot write to SLOTSTATUS register\n", __FUNCTION__); 1175 + goto abort_disable_intr; 1176 + } 1177 + 1178 + if (pciehp_force) { 1179 + dbg("Bypassing BIOS check for pciehp use on %s\n", 1180 + pci_name(ctrl->pci_dev)); 1181 + } else { 1182 + rc = pciehp_get_hp_hw_control_from_firmware(ctrl->pci_dev); 1183 + if (rc) 1184 + goto abort_disable_intr; 1185 + } 1186 + 1187 + return 0; 1188 + 1189 + /* We end up here for the many possible ways to fail this API. 
*/ 1190 + abort_disable_intr: 1191 + rc = pciehp_readw(ctrl, SLOTCTRL, &temp_word); 1192 + if (!rc) { 1193 + temp_word &= ~(intr_enable | HP_INTR_ENABLE); 1194 + rc = pciehp_writew(ctrl, SLOTCTRL, temp_word); 1195 + } 1196 + if (rc) 1197 + err("%s : disabling interrupts failed\n", __FUNCTION__); 1198 + abort: 1199 + return -1; 1200 + } 1201 + 1202 + int pcie_init(struct controller *ctrl, struct pcie_device *dev) 1203 + { 1204 + int rc; 1205 + u16 cap_reg; 1206 u32 slot_cap; 1207 int cap_base; 1208 u16 slot_status, slot_ctrl; ··· 1084 dbg("%s: hotplug controller vendor id 0x%x device id 0x%x\n", 1085 __FUNCTION__, pdev->vendor, pdev->device); 1086 1087 + cap_base = pci_find_capability(pdev, PCI_CAP_ID_EXP); 1088 + if (cap_base == 0) { 1089 dbg("%s: Can't find PCI_CAP_ID_EXP (0x10)\n", __FUNCTION__); 1090 + goto abort; 1091 } 1092 1093 ctrl->cap_base = cap_base; ··· 1096 rc = pciehp_readw(ctrl, CAPREG, &cap_reg); 1097 if (rc) { 1098 err("%s: Cannot read CAPREG register\n", __FUNCTION__); 1099 + goto abort; 1100 } 1101 dbg("%s: CAPREG offset %x cap_reg %x\n", 1102 __FUNCTION__, ctrl->cap_base + CAPREG, cap_reg); ··· 1106 && ((cap_reg & DEV_PORT_TYPE) != 0x0060))) { 1107 dbg("%s : This is not a root port or the port is not " 1108 "connected to a slot\n", __FUNCTION__); 1109 + goto abort; 1110 } 1111 1112 rc = pciehp_readl(ctrl, SLOTCAP, &slot_cap); 1113 if (rc) { 1114 err("%s: Cannot read SLOTCAP register\n", __FUNCTION__); 1115 + goto abort; 1116 } 1117 dbg("%s: SLOTCAP offset %x slot_cap %x\n", 1118 __FUNCTION__, ctrl->cap_base + SLOTCAP, slot_cap); 1119 1120 if (!(slot_cap & HP_CAP)) { 1121 dbg("%s : This slot is not hot-plug capable\n", __FUNCTION__); 1122 + goto abort; 1123 } 1124 /* For debugging purpose */ 1125 rc = pciehp_readw(ctrl, SLOTSTATUS, &slot_status); 1126 if (rc) { 1127 err("%s: Cannot read SLOTSTATUS register\n", __FUNCTION__); 1128 + goto abort; 1129 } 1130 dbg("%s: SLOTSTATUS offset %x slot_status %x\n", 1131 __FUNCTION__, ctrl->cap_base + 
SLOTSTATUS, slot_status); ··· 1133 rc = pciehp_readw(ctrl, SLOTCTRL, &slot_ctrl); 1134 if (rc) { 1135 err("%s: Cannot read SLOTCTRL register\n", __FUNCTION__); 1136 + goto abort; 1137 } 1138 dbg("%s: SLOTCTRL offset %x slot_ctrl %x\n", 1139 __FUNCTION__, ctrl->cap_base + SLOTCTRL, slot_ctrl); ··· 1161 ctrl->first_slot = slot_cap >> 19; 1162 ctrl->ctrlcap = slot_cap & 0x0000007f; 1163 1164 + rc = pcie_init_hardware_part1(ctrl, dev); 1165 + if (rc) 1166 + goto abort; 1167 1168 if (pciehp_poll_mode) { 1169 /* Install interrupt polling timer. Start with 10 sec delay */ ··· 1206 if (rc) { 1207 err("Can't get irq %d for the hotplug controller\n", 1208 ctrl->pci_dev->irq); 1209 + goto abort; 1210 } 1211 } 1212 dbg("pciehp ctrl b:d:f:irq=0x%x:%x:%x:%x\n", pdev->bus->number, ··· 1224 } 1225 } 1226 1227 + rc = pcie_init_hardware_part2(ctrl, dev); 1228 + if (rc == 0) { 1229 + ctrl->hpc_ops = &pciehp_hpc_ops; 1230 + return 0; 1231 } 1232 abort_free_irq: 1233 if (pciehp_poll_mode) 1234 del_timer_sync(&ctrl->poll_timer); 1235 else 1236 free_irq(ctrl->pci_dev->irq, ctrl); 1237 + abort: 1238 return -1; 1239 }
+23 -20
drivers/pci/hotplug/pciehp_pci.c
··· 105 } 106 107 /* Find Advanced Error Reporting Enhanced Capability */ 108 - pos = 256; 109 - do { 110 - pci_read_config_dword(dev, pos, &reg32); 111 - if (PCI_EXT_CAP_ID(reg32) == PCI_EXT_CAP_ID_ERR) 112 - break; 113 - } while ((pos = PCI_EXT_CAP_NEXT(reg32))); 114 if (!pos) 115 return; 116 ··· 243 u8 bctl = 0; 244 u8 presence = 0; 245 struct pci_bus *parent = p_slot->ctrl->pci_dev->subordinate; 246 247 dbg("%s: bus/dev = %x/%x\n", __FUNCTION__, p_slot->bus, 248 p_slot->device); 249 250 - for (j=0; j<8 ; j++) { 251 struct pci_dev* temp = pci_get_slot(parent, 252 (p_slot->device << 3) | j); 253 if (!temp) ··· 262 pci_dev_put(temp); 263 continue; 264 } 265 - if (temp->hdr_type == PCI_HEADER_TYPE_BRIDGE) { 266 - ret = p_slot->hpc_ops->get_adapter_status(p_slot, 267 - &presence); 268 - if (!ret && presence) { 269 - pci_read_config_byte(temp, PCI_BRIDGE_CONTROL, 270 - &bctl); 271 - if (bctl & PCI_BRIDGE_CTL_VGA) { 272 - err("Cannot remove display device %s\n", 273 - pci_name(temp)); 274 - pci_dev_put(temp); 275 - continue; 276 - } 277 } 278 } 279 pci_remove_bus_device(temp); 280 pci_dev_put(temp); 281 } 282 /* ··· 292 293 return rc; 294 } 295 -
··· 105 } 106 107 /* Find Advanced Error Reporting Enhanced Capability */ 108 + pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); 109 if (!pos) 110 return; 111 ··· 248 u8 bctl = 0; 249 u8 presence = 0; 250 struct pci_bus *parent = p_slot->ctrl->pci_dev->subordinate; 251 + u16 command; 252 253 dbg("%s: bus/dev = %x/%x\n", __FUNCTION__, p_slot->bus, 254 p_slot->device); 255 + ret = p_slot->hpc_ops->get_adapter_status(p_slot, &presence); 256 + if (ret) 257 + presence = 0; 258 259 + for (j = 0; j < 8; j++) { 260 struct pci_dev* temp = pci_get_slot(parent, 261 (p_slot->device << 3) | j); 262 if (!temp) ··· 263 pci_dev_put(temp); 264 continue; 265 } 266 + if (temp->hdr_type == PCI_HEADER_TYPE_BRIDGE && presence) { 267 + pci_read_config_byte(temp, PCI_BRIDGE_CONTROL, &bctl); 268 + if (bctl & PCI_BRIDGE_CTL_VGA) { 269 + err("Cannot remove display device %s\n", 270 + pci_name(temp)); 271 + pci_dev_put(temp); 272 + continue; 273 } 274 } 275 pci_remove_bus_device(temp); 276 + /* 277 + * Ensure that no new Requests will be generated from 278 + * the device. 279 + */ 280 + if (presence) { 281 + pci_read_config_word(temp, PCI_COMMAND, &command); 282 + command &= ~(PCI_COMMAND_MASTER | PCI_COMMAND_SERR); 283 + command |= PCI_COMMAND_INTX_DISABLE; 284 + pci_write_config_word(temp, PCI_COMMAND, command); 285 + } 286 pci_dev_put(temp); 287 } 288 /* ··· 288 289 return rc; 290 }
-1
drivers/pci/hotplug/rpaphp.h
··· 74 u32 type; 75 u32 power_domain; 76 char *name; 77 - char *location; 78 struct device_node *dn; 79 struct pci_bus *bus; 80 struct list_head *pci_devs;
··· 74 u32 type; 75 u32 power_domain; 76 char *name; 77 struct device_node *dn; 78 struct pci_bus *bus; 79 struct list_head *pci_devs;
-14
drivers/pci/hotplug/rpaphp_pci.c
··· 64 return rc; 65 } 66 67 - static void set_slot_name(struct slot *slot) 68 - { 69 - struct pci_bus *bus = slot->bus; 70 - struct pci_dev *bridge; 71 - 72 - bridge = bus->self; 73 - if (bridge) 74 - strcpy(slot->name, pci_name(bridge)); 75 - else 76 - sprintf(slot->name, "%04x:%02x:00.0", pci_domain_nr(bus), 77 - bus->number); 78 - } 79 - 80 /** 81 * rpaphp_enable_slot - record slot state, config pci device 82 * @slot: target &slot ··· 102 info->adapter_status = EMPTY; 103 slot->bus = bus; 104 slot->pci_devs = &bus->devices; 105 - set_slot_name(slot); 106 107 /* if there's an adapter in the slot, go add the pci devices */ 108 if (state == PRESENT) {
··· 64 return rc; 65 } 66 67 /** 68 * rpaphp_enable_slot - record slot state, config pci device 69 * @slot: target &slot ··· 115 info->adapter_status = EMPTY; 116 slot->bus = bus; 117 slot->pci_devs = &bus->devices; 118 119 /* if there's an adapter in the slot, go add the pci devices */ 120 if (state == PRESENT) {
+24 -23
drivers/pci/hotplug/rpaphp_slot.c
··· 33 #include <asm/rtas.h> 34 #include "rpaphp.h" 35 36 - static ssize_t location_read_file (struct hotplug_slot *php_slot, char *buf) 37 { 38 - char *value; 39 - int retval = -ENOENT; 40 struct slot *slot = (struct slot *)php_slot->private; 41 42 if (!slot) 43 - return retval; 44 45 - value = slot->location; 46 - retval = sprintf (buf, "%s\n", value); 47 return retval; 48 } 49 50 - static struct hotplug_slot_attribute php_attr_location = { 51 - .attr = {.name = "phy_location", .mode = S_IFREG | S_IRUGO}, 52 - .show = location_read_file, 53 }; 54 55 /* free up the memory used by a slot */ ··· 72 kfree(slot->hotplug_slot->info); 73 kfree(slot->hotplug_slot->name); 74 kfree(slot->hotplug_slot); 75 - kfree(slot->location); 76 kfree(slot); 77 } 78 ··· 90 GFP_KERNEL); 91 if (!slot->hotplug_slot->info) 92 goto error_hpslot; 93 - slot->hotplug_slot->name = kmalloc(BUS_ID_SIZE + 1, GFP_KERNEL); 94 if (!slot->hotplug_slot->name) 95 goto error_info; 96 - slot->location = kmalloc(strlen(drc_name) + 1, GFP_KERNEL); 97 - if (!slot->location) 98 - goto error_name; 99 slot->name = slot->hotplug_slot->name; 100 slot->dn = dn; 101 slot->index = drc_index; 102 - strcpy(slot->location, drc_name); 103 slot->power_domain = power_domain; 104 slot->hotplug_slot->private = slot; 105 slot->hotplug_slot->ops = &rpaphp_hotplug_slot_ops; ··· 104 105 return (slot); 106 107 - error_name: 108 - kfree(slot->hotplug_slot->name); 109 error_info: 110 kfree(slot->hotplug_slot->info); 111 error_hpslot: ··· 135 136 list_del(&slot->rpaphp_slot_list); 137 138 - /* remove "phy_location" file */ 139 - sysfs_remove_file(&php_slot->kobj, &php_attr_location.attr); 140 141 retval = pci_hp_deregister(php_slot); 142 if (retval) ··· 168 return retval; 169 } 170 171 - /* create "phy_location" file */ 172 - retval = sysfs_create_file(&php_slot->kobj, &php_attr_location.attr); 173 if (retval) { 174 err("sysfs_create_file failed with error %d\n", retval); 175 goto sysfs_fail; ··· 177 178 /* add slot to our internal 
list */ 179 list_add(&slot->rpaphp_slot_list, &rpaphp_slot_head); 180 - info("Slot [%s](PCI location=%s) registered\n", slot->name, 181 - slot->location); 182 return 0; 183 184 sysfs_fail:
··· 33 #include <asm/rtas.h> 34 #include "rpaphp.h" 35 36 + static ssize_t address_read_file (struct hotplug_slot *php_slot, char *buf) 37 { 38 + int retval; 39 struct slot *slot = (struct slot *)php_slot->private; 40 + struct pci_bus *bus; 41 42 if (!slot) 43 + return -ENOENT; 44 45 + bus = slot->bus; 46 + if (!bus) 47 + return -ENOENT; 48 + 49 + if (bus->self) 50 + retval = sprintf(buf, pci_name(bus->self)); 51 + else 52 + retval = sprintf(buf, "%04x:%02x:00.0", 53 + pci_domain_nr(bus), bus->number); 54 + 55 return retval; 56 } 57 58 + static struct hotplug_slot_attribute php_attr_address = { 59 + .attr = {.name = "address", .mode = S_IFREG | S_IRUGO}, 60 + .show = address_read_file, 61 }; 62 63 /* free up the memory used by a slot */ ··· 64 kfree(slot->hotplug_slot->info); 65 kfree(slot->hotplug_slot->name); 66 kfree(slot->hotplug_slot); 67 kfree(slot); 68 } 69 ··· 83 GFP_KERNEL); 84 if (!slot->hotplug_slot->info) 85 goto error_hpslot; 86 + slot->hotplug_slot->name = kmalloc(strlen(drc_name) + 1, GFP_KERNEL); 87 if (!slot->hotplug_slot->name) 88 goto error_info; 89 slot->name = slot->hotplug_slot->name; 90 + strcpy(slot->name, drc_name); 91 slot->dn = dn; 92 slot->index = drc_index; 93 slot->power_domain = power_domain; 94 slot->hotplug_slot->private = slot; 95 slot->hotplug_slot->ops = &rpaphp_hotplug_slot_ops; ··· 100 101 return (slot); 102 103 error_info: 104 kfree(slot->hotplug_slot->info); 105 error_hpslot: ··· 133 134 list_del(&slot->rpaphp_slot_list); 135 136 + /* remove "address" file */ 137 + sysfs_remove_file(&php_slot->kobj, &php_attr_address.attr); 138 139 retval = pci_hp_deregister(php_slot); 140 if (retval) ··· 166 return retval; 167 } 168 169 + /* create "address" file */ 170 + retval = sysfs_create_file(&php_slot->kobj, &php_attr_address.attr); 171 if (retval) { 172 err("sysfs_create_file failed with error %d\n", retval); 173 goto sysfs_fail; ··· 175 176 /* add slot to our internal list */ 177 list_add(&slot->rpaphp_slot_list, &rpaphp_slot_head); 
178 + info("Slot [%s] registered\n", slot->name); 179 return 0; 180 181 sysfs_fail:
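The new `address_read_file()` above synthesizes a function-0 address of the form `domain:bus:00.0` when the slot's bus has no bridge device. A user-space sketch of that formatting, with plain `snprintf` standing in for the kernel's `sprintf` and no kernel types (the function name is illustrative):

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Format a default function-0 PCI address ("dddd:bb:00.0") from a
 * domain number and a bus number, as the sysfs "address" file does
 * when bus->self is absent. */
int format_slot_address(char *buf, size_t len, int domain, int busnr)
{
    return snprintf(buf, len, "%04x:%02x:00.0", domain, busnr);
}
```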
+1 -1
drivers/pci/hotplug/shpchp_hpc.c
··· 597 cleanup_slots(ctrl); 598 599 /* 600 - * Mask SERR and System Interrut generation 601 */ 602 serr_int = shpc_readl(ctrl, SERR_INTR_ENABLE); 603 serr_int |= (GLOBAL_INTR_MASK | GLOBAL_SERR_MASK |
··· 597 cleanup_slots(ctrl); 598 599 /* 600 + * Mask SERR and System Interrupt generation 601 */ 602 serr_int = shpc_readl(ctrl, SERR_INTR_ENABLE); 603 serr_int |= (GLOBAL_INTR_MASK | GLOBAL_SERR_MASK |
+1 -1
drivers/pci/intel-iommu.c
··· 1781 /* 1782 * First try to allocate an io virtual address in 1783 * DMA_32BIT_MASK and if that fails then try allocating 1784 - * from higer range 1785 */ 1786 iova = iommu_alloc_iova(domain, size, DMA_32BIT_MASK); 1787 if (!iova)
··· 1781 /* 1782 * First try to allocate an io virtual address in 1783 * DMA_32BIT_MASK and if that fails then try allocating 1784 + * from higher range 1785 */ 1786 iova = iommu_alloc_iova(domain, size, DMA_32BIT_MASK); 1787 if (!iova)
+46 -48
drivers/pci/msi.c
··· 25 26 static int pci_msi_enable = 1; 27 28 static void msi_set_enable(struct pci_dev *dev, int enable) 29 { 30 int pos; ··· 275 pci_intx(dev, enable); 276 } 277 278 - #ifdef CONFIG_PM 279 static void __pci_restore_msi_state(struct pci_dev *dev) 280 { 281 int pos; ··· 332 __pci_restore_msi_state(dev); 333 __pci_restore_msix_state(dev); 334 } 335 - #endif /* CONFIG_PM */ 336 337 /** 338 * msi_capability_init - configure device's MSI capability structure ··· 726 void pci_msi_init_pci_dev(struct pci_dev *dev) 727 { 728 INIT_LIST_HEAD(&dev->msi_list); 729 - } 730 - 731 - 732 - /* Arch hooks */ 733 - 734 - int __attribute__ ((weak)) 735 - arch_msi_check_device(struct pci_dev* dev, int nvec, int type) 736 - { 737 - return 0; 738 - } 739 - 740 - int __attribute__ ((weak)) 741 - arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *entry) 742 - { 743 - return 0; 744 - } 745 - 746 - int __attribute__ ((weak)) 747 - arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) 748 - { 749 - struct msi_desc *entry; 750 - int ret; 751 - 752 - list_for_each_entry(entry, &dev->msi_list, list) { 753 - ret = arch_setup_msi_irq(dev, entry); 754 - if (ret) 755 - return ret; 756 - } 757 - 758 - return 0; 759 - } 760 - 761 - void __attribute__ ((weak)) arch_teardown_msi_irq(unsigned int irq) 762 - { 763 - return; 764 - } 765 - 766 - void __attribute__ ((weak)) 767 - arch_teardown_msi_irqs(struct pci_dev *dev) 768 - { 769 - struct msi_desc *entry; 770 - 771 - list_for_each_entry(entry, &dev->msi_list, list) { 772 - if (entry->irq != 0) 773 - arch_teardown_msi_irq(entry->irq); 774 - } 775 }
··· 25 26 static int pci_msi_enable = 1; 27 28 + /* Arch hooks */ 29 + 30 + int __attribute__ ((weak)) 31 + arch_msi_check_device(struct pci_dev *dev, int nvec, int type) 32 + { 33 + return 0; 34 + } 35 + 36 + int __attribute__ ((weak)) 37 + arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *entry) 38 + { 39 + return 0; 40 + } 41 + 42 + int __attribute__ ((weak)) 43 + arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) 44 + { 45 + struct msi_desc *entry; 46 + int ret; 47 + 48 + list_for_each_entry(entry, &dev->msi_list, list) { 49 + ret = arch_setup_msi_irq(dev, entry); 50 + if (ret) 51 + return ret; 52 + } 53 + 54 + return 0; 55 + } 56 + 57 + void __attribute__ ((weak)) arch_teardown_msi_irq(unsigned int irq) 58 + { 59 + return; 60 + } 61 + 62 + void __attribute__ ((weak)) 63 + arch_teardown_msi_irqs(struct pci_dev *dev) 64 + { 65 + struct msi_desc *entry; 66 + 67 + list_for_each_entry(entry, &dev->msi_list, list) { 68 + if (entry->irq != 0) 69 + arch_teardown_msi_irq(entry->irq); 70 + } 71 + } 72 + 73 static void msi_set_enable(struct pci_dev *dev, int enable) 74 { 75 int pos; ··· 230 pci_intx(dev, enable); 231 } 232 233 static void __pci_restore_msi_state(struct pci_dev *dev) 234 { 235 int pos; ··· 288 __pci_restore_msi_state(dev); 289 __pci_restore_msix_state(dev); 290 } 291 + EXPORT_SYMBOL_GPL(pci_restore_msi_state); 292 293 /** 294 * msi_capability_init - configure device's MSI capability structure ··· 682 void pci_msi_init_pci_dev(struct pci_dev *dev) 683 { 684 INIT_LIST_HEAD(&dev->msi_list); 685 }
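The hunk above moves the `__attribute__ ((weak))` arch hooks to the top of msi.c. The pattern is worth spelling out: a weak definition acts as the generic fallback, and an architecture can supply a strong definition of the same symbol in another object file, which the linker then prefers. A minimal sketch (the hook name is illustrative, not the kernel's):

```c
#include <assert.h>

/* Generic fallback in the style of arch_msi_check_device(): a weak
 * definition that the linker silently replaces if any other object
 * file provides a strong (non-weak) definition of the same symbol. */
int __attribute__ ((weak)) arch_hook_check(int nvec)
{
    (void)nvec;
    return 0;   /* generic code has no objection */
}
```

With no strong override linked in, callers get the weak default; an arch that needs vetoing power just defines `arch_hook_check()` normally.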
+3 -4
drivers/pci/pci-acpi.c
··· 156 } 157 158 /** 159 - * pci_osc_support_set - register OS support to Firmware 160 * @flags: OS support bits 161 * 162 * Update OS support fields and doing a _OSC Query to obtain an update 163 * from Firmware on supported control bits. 164 **/ 165 - acpi_status pci_osc_support_set(u32 flags) 166 { 167 u32 temp; 168 acpi_status retval; ··· 176 temp = ctrlset_buf[OSC_CONTROL_TYPE]; 177 ctrlset_buf[OSC_QUERY_TYPE] = OSC_QUERY_ENABLE; 178 ctrlset_buf[OSC_CONTROL_TYPE] = OSC_CONTROL_MASKS; 179 - acpi_get_devices ( PCI_ROOT_HID_STRING, 180 acpi_query_osc, 181 ctrlset_buf, 182 (void **) &retval ); ··· 188 } 189 return AE_OK; 190 } 191 - EXPORT_SYMBOL(pci_osc_support_set); 192 193 /** 194 * pci_osc_control_set - commit requested control to Firmware
··· 156 } 157 158 /** 159 + * __pci_osc_support_set - register OS support to Firmware 160 * @flags: OS support bits 161 * 162 * Update OS support fields and doing a _OSC Query to obtain an update 163 * from Firmware on supported control bits. 164 **/ 165 + acpi_status __pci_osc_support_set(u32 flags, const char *hid) 166 { 167 u32 temp; 168 acpi_status retval; ··· 176 temp = ctrlset_buf[OSC_CONTROL_TYPE]; 177 ctrlset_buf[OSC_QUERY_TYPE] = OSC_QUERY_ENABLE; 178 ctrlset_buf[OSC_CONTROL_TYPE] = OSC_CONTROL_MASKS; 179 + acpi_get_devices(hid, 180 acpi_query_osc, 181 ctrlset_buf, 182 (void **) &retval ); ··· 188 } 189 return AE_OK; 190 } 191 192 /** 193 * pci_osc_control_set - commit requested control to Firmware
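Generalizing `pci_osc_support_set()` to `__pci_osc_support_set(flags, hid)` lets callers choose which root-bridge class to scan. A sketch of the resulting wrapper style, under the assumption that thin wrappers pass a fixed HID to the generalized function (the recording stub and HID strings here are illustrative; `PNP0A03`/`PNP0A08` are the usual ACPI IDs for PCI and PCI Express root bridges):

```c
#include <string.h>
#include <assert.h>

const char *last_hid;   /* records which device class was scanned */

/* Stand-in for __pci_osc_support_set(flags, hid). */
int osc_support_set_hid(unsigned int flags, const char *hid)
{
    (void)flags;
    last_hid = hid;
    return 0;
}

/* Wrapper covering all PCI root bridges. */
int pci_osc_support(unsigned int flags)
{
    return osc_support_set_hid(flags, "PNP0A03");
}

/* Wrapper covering only PCI Express root bridges. */
int pcie_osc_support(unsigned int flags)
{
    return osc_support_set_hid(flags, "PNP0A08");
}
```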
+1 -3
drivers/pci/pci-driver.c
··· 186 set_cpus_allowed(current, node_to_cpumask(node)); 187 /* And set default memory allocation policy */ 188 oldpol = current->mempolicy; 189 - current->mempolicy = &default_policy; 190 - mpol_get(current->mempolicy); 191 #endif 192 error = drv->probe(dev, id); 193 #ifdef CONFIG_NUMA 194 set_cpus_allowed(current, oldmask); 195 - mpol_free(current->mempolicy); 196 current->mempolicy = oldpol; 197 #endif 198 return error;
··· 186 set_cpus_allowed(current, node_to_cpumask(node)); 187 /* And set default memory allocation policy */ 188 oldpol = current->mempolicy; 189 + current->mempolicy = NULL; /* fall back to system default policy */ 190 #endif 191 error = drv->probe(dev, id); 192 #ifdef CONFIG_NUMA 193 set_cpus_allowed(current, oldmask); 194 current->mempolicy = oldpol; 195 #endif 196 return error;
+8 -3
drivers/pci/pci-sysfs.c
··· 21 #include <linux/topology.h> 22 #include <linux/mm.h> 23 #include <linux/capability.h> 24 #include "pci.h" 25 26 static int sysfs_initialized; /* = 0 */ ··· 359 char *buf, loff_t off, size_t count) 360 { 361 struct pci_bus *bus = to_pci_bus(container_of(kobj, 362 - struct class_device, 363 kobj)); 364 365 /* Only support 1, 2 or 4 byte accesses */ ··· 384 char *buf, loff_t off, size_t count) 385 { 386 struct pci_bus *bus = to_pci_bus(container_of(kobj, 387 - struct class_device, 388 kobj)); 389 /* Only support 1, 2 or 4 byte accesses */ 390 if (count != 1 && count != 2 && count != 4) ··· 408 struct vm_area_struct *vma) 409 { 410 struct pci_bus *bus = to_pci_bus(container_of(kobj, 411 - struct class_device, 412 kobj)); 413 414 return pci_mmap_legacy_page_range(bus, vma); ··· 651 if (pcibios_add_platform_entries(pdev)) 652 goto err_rom_file; 653 654 return 0; 655 656 err_rom_file: ··· 681 { 682 if (!sysfs_initialized) 683 return; 684 685 if (pdev->cfg_size < 4096) 686 sysfs_remove_bin_file(&pdev->dev.kobj, &pci_config_attr);
··· 21 #include <linux/topology.h> 22 #include <linux/mm.h> 23 #include <linux/capability.h> 24 + #include <linux/aspm.h> 25 #include "pci.h" 26 27 static int sysfs_initialized; /* = 0 */ ··· 358 char *buf, loff_t off, size_t count) 359 { 360 struct pci_bus *bus = to_pci_bus(container_of(kobj, 361 + struct device, 362 kobj)); 363 364 /* Only support 1, 2 or 4 byte accesses */ ··· 383 char *buf, loff_t off, size_t count) 384 { 385 struct pci_bus *bus = to_pci_bus(container_of(kobj, 386 + struct device, 387 kobj)); 388 /* Only support 1, 2 or 4 byte accesses */ 389 if (count != 1 && count != 2 && count != 4) ··· 407 struct vm_area_struct *vma) 408 { 409 struct pci_bus *bus = to_pci_bus(container_of(kobj, 410 + struct device, 411 kobj)); 412 413 return pci_mmap_legacy_page_range(bus, vma); ··· 650 if (pcibios_add_platform_entries(pdev)) 651 goto err_rom_file; 652 653 + pcie_aspm_create_sysfs_dev_files(pdev); 654 + 655 return 0; 656 657 err_rom_file: ··· 678 { 679 if (!sysfs_initialized) 680 return; 681 + 682 + pcie_aspm_remove_sysfs_dev_files(pdev); 683 684 if (pdev->cfg_size < 4096) 685 sysfs_remove_bin_file(&pdev->dev.kobj, &pci_config_attr);
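The `to_pci_bus(container_of(kobj, struct device, kobj))` change above rides on the `container_of()` idiom: given a pointer to an embedded member, recover the enclosing structure by subtracting the member's offset. A self-contained sketch with toy structs standing in for `kobject`/`device`:

```c
#include <stddef.h>
#include <assert.h>

/* Minimal re-implementation of the kernel's container_of(): step back
 * from a member pointer to the start of the structure embedding it. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct kobj { int refcount; };
struct device { int id; struct kobj kobj; };

/* Analogous to the device lookup in the sysfs hunk above. */
struct device *dev_from_kobj(struct kobj *k)
{
    return container_of(k, struct device, kobj);
}
```

The change from `struct class_device` to `struct device` only changes which enclosing type the offset is computed against; the idiom is identical.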
+75 -18
drivers/pci/pci.c
··· 18 #include <linux/spinlock.h> 19 #include <linux/string.h> 20 #include <linux/log2.h> 21 #include <asm/dma.h> /* isa_dma_bridge_buggy */ 22 #include "pci.h" 23 ··· 315 } 316 EXPORT_SYMBOL_GPL(pci_find_ht_capability); 317 318 /** 319 * pci_find_parent_resource - return resource region of parent bus of given region 320 * @dev: PCI device structure contains resources to be searched ··· 372 * Restore the BAR values for a given device, so as to make it 373 * accessible by its driver. 374 */ 375 - void 376 pci_restore_bars(struct pci_dev *dev) 377 { 378 int i, numres; ··· 520 if (need_restore) 521 pci_restore_bars(dev); 522 523 return 0; 524 } 525 ··· 573 int pos, i = 0; 574 struct pci_cap_saved_state *save_state; 575 u16 *cap; 576 577 pos = pci_find_capability(dev, PCI_CAP_ID_EXP); 578 if (pos <= 0) ··· 582 save_state = pci_find_saved_cap(dev, PCI_CAP_ID_EXP); 583 if (!save_state) 584 save_state = kzalloc(sizeof(*save_state) + sizeof(u16) * 4, GFP_KERNEL); 585 if (!save_state) { 586 dev_err(&dev->dev, "Out of memory in pci_save_pcie_state\n"); 587 return -ENOMEM; ··· 594 pci_read_config_word(dev, pos + PCI_EXP_LNKCTL, &cap[i++]); 595 pci_read_config_word(dev, pos + PCI_EXP_SLTCTL, &cap[i++]); 596 pci_read_config_word(dev, pos + PCI_EXP_RTCTL, &cap[i++]); 597 - pci_add_saved_cap(dev, save_state); 598 return 0; 599 } 600 ··· 624 int pos, i = 0; 625 struct pci_cap_saved_state *save_state; 626 u16 *cap; 627 628 pos = pci_find_capability(dev, PCI_CAP_ID_PCIX); 629 if (pos <= 0) 630 return 0; 631 632 - save_state = pci_find_saved_cap(dev, PCI_CAP_ID_EXP); 633 if (!save_state) 634 save_state = kzalloc(sizeof(*save_state) + sizeof(u16), GFP_KERNEL); 635 if (!save_state) { 636 dev_err(&dev->dev, "Out of memory in pci_save_pcie_state\n"); 637 return -ENOMEM; ··· 642 cap = (u16 *)&save_state->data[0]; 643 644 pci_read_config_word(dev, pos + PCI_X_CMD, &cap[i++]); 645 - pci_add_saved_cap(dev, save_state); 646 return 0; 647 } 648 ··· 745 return 0; 746 } 747 748 - /** 749 - * 
pci_enable_device_bars - Initialize some of a device for use 750 - * @dev: PCI device to be initialized 751 - * @bars: bitmask of BAR's that must be configured 752 - * 753 - * Initialize device before it's used by a driver. Ask low-level code 754 - * to enable selected I/O and memory resources. Wake up the device if it 755 - * was suspended. Beware, this function can fail. 756 - */ 757 - int 758 - pci_enable_device_bars(struct pci_dev *dev, int bars) 759 { 760 int err; 761 762 if (atomic_add_return(1, &dev->enable_cnt) > 1) 763 return 0; /* already enabled */ 764 765 err = do_pci_enable_device(dev, bars); 766 if (err < 0) 767 atomic_dec(&dev->enable_cnt); 768 return err; 769 } 770 771 /** ··· 803 */ 804 int pci_enable_device(struct pci_dev *dev) 805 { 806 - return pci_enable_device_bars(dev, (1 << PCI_NUM_RESOURCES) - 1); 807 } 808 809 /* ··· 938 939 if (atomic_sub_return(1, &dev->enable_cnt) != 0) 940 return; 941 942 pci_read_config_word(dev, PCI_COMMAND, &pci_command); 943 if (pci_command & PCI_COMMAND_MASTER) { ··· 1676 1677 device_initcall(pci_init); 1678 1679 - EXPORT_SYMBOL_GPL(pci_restore_bars); 1680 EXPORT_SYMBOL(pci_reenable_device); 1681 - EXPORT_SYMBOL(pci_enable_device_bars); 1682 EXPORT_SYMBOL(pci_enable_device); 1683 EXPORT_SYMBOL(pcim_enable_device); 1684 EXPORT_SYMBOL(pcim_pin_device);
··· 18 #include <linux/spinlock.h> 19 #include <linux/string.h> 20 #include <linux/log2.h> 21 + #include <linux/aspm.h> 22 #include <asm/dma.h> /* isa_dma_bridge_buggy */ 23 #include "pci.h" 24 ··· 314 } 315 EXPORT_SYMBOL_GPL(pci_find_ht_capability); 316 317 + void pcie_wait_pending_transaction(struct pci_dev *dev) 318 + { 319 + int pos; 320 + u16 reg16; 321 + 322 + pos = pci_find_capability(dev, PCI_CAP_ID_EXP); 323 + if (!pos) 324 + return; 325 + while (1) { 326 + pci_read_config_word(dev, pos + PCI_EXP_DEVSTA, &reg16); 327 + if (!(reg16 & PCI_EXP_DEVSTA_TRPND)) 328 + break; 329 + cpu_relax(); 330 + } 331 + 332 + } 333 + EXPORT_SYMBOL_GPL(pcie_wait_pending_transaction); 334 + 335 /** 336 * pci_find_parent_resource - return resource region of parent bus of given region 337 * @dev: PCI device structure contains resources to be searched ··· 353 * Restore the BAR values for a given device, so as to make it 354 * accessible by its driver. 355 */ 356 + static void 357 pci_restore_bars(struct pci_dev *dev) 358 { 359 int i, numres; ··· 501 if (need_restore) 502 pci_restore_bars(dev); 503 504 + if (dev->bus->self) 505 + pcie_aspm_pm_state_change(dev->bus->self); 506 + 507 return 0; 508 } 509 ··· 551 int pos, i = 0; 552 struct pci_cap_saved_state *save_state; 553 u16 *cap; 554 + int found = 0; 555 556 pos = pci_find_capability(dev, PCI_CAP_ID_EXP); 557 if (pos <= 0) ··· 559 save_state = pci_find_saved_cap(dev, PCI_CAP_ID_EXP); 560 if (!save_state) 561 save_state = kzalloc(sizeof(*save_state) + sizeof(u16) * 4, GFP_KERNEL); 562 + else 563 + found = 1; 564 if (!save_state) { 565 dev_err(&dev->dev, "Out of memory in pci_save_pcie_state\n"); 566 return -ENOMEM; ··· 569 pci_read_config_word(dev, pos + PCI_EXP_LNKCTL, &cap[i++]); 570 pci_read_config_word(dev, pos + PCI_EXP_SLTCTL, &cap[i++]); 571 pci_read_config_word(dev, pos + PCI_EXP_RTCTL, &cap[i++]); 572 + save_state->cap_nr = PCI_CAP_ID_EXP; 573 + if (!found) 574 + pci_add_saved_cap(dev, save_state); 575 return 0; 576 } 577 
··· 597 int pos, i = 0; 598 struct pci_cap_saved_state *save_state; 599 u16 *cap; 600 + int found = 0; 601 602 pos = pci_find_capability(dev, PCI_CAP_ID_PCIX); 603 if (pos <= 0) 604 return 0; 605 606 + save_state = pci_find_saved_cap(dev, PCI_CAP_ID_PCIX); 607 if (!save_state) 608 save_state = kzalloc(sizeof(*save_state) + sizeof(u16), GFP_KERNEL); 609 + else 610 + found = 1; 611 if (!save_state) { 612 dev_err(&dev->dev, "Out of memory in pci_save_pcie_state\n"); 613 return -ENOMEM; ··· 612 cap = (u16 *)&save_state->data[0]; 613 614 pci_read_config_word(dev, pos + PCI_X_CMD, &cap[i++]); 615 + save_state->cap_nr = PCI_CAP_ID_PCIX; 616 + if (!found) 617 + pci_add_saved_cap(dev, save_state); 618 return 0; 619 } 620 ··· 713 return 0; 714 } 715 716 + static int __pci_enable_device_flags(struct pci_dev *dev, 717 + resource_size_t flags) 718 { 719 int err; 720 + int i, bars = 0; 721 722 if (atomic_add_return(1, &dev->enable_cnt) > 1) 723 return 0; /* already enabled */ 724 + 725 + for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) 726 + if (dev->resource[i].flags & flags) 727 + bars |= (1 << i); 728 729 err = do_pci_enable_device(dev, bars); 730 if (err < 0) 731 atomic_dec(&dev->enable_cnt); 732 return err; 733 + } 734 + 735 + /** 736 + * pci_enable_device_io - Initialize a device for use with IO space 737 + * @dev: PCI device to be initialized 738 + * 739 + * Initialize device before it's used by a driver. Ask low-level code 740 + * to enable I/O resources. Wake up the device if it was suspended. 741 + * Beware, this function can fail. 742 + */ 743 + int pci_enable_device_io(struct pci_dev *dev) 744 + { 745 + return __pci_enable_device_flags(dev, IORESOURCE_IO); 746 + } 747 + 748 + /** 749 + * pci_enable_device_mem - Initialize a device for use with Memory space 750 + * @dev: PCI device to be initialized 751 + * 752 + * Initialize device before it's used by a driver. Ask low-level code 753 + * to enable Memory resources. Wake up the device if it was suspended. 
754 + * Beware, this function can fail. 755 + */ 756 + int pci_enable_device_mem(struct pci_dev *dev) 757 + { 758 + return __pci_enable_device_flags(dev, IORESOURCE_MEM); 759 } 760 761 /** ··· 749 */ 750 int pci_enable_device(struct pci_dev *dev) 751 { 752 + return __pci_enable_device_flags(dev, IORESOURCE_MEM | IORESOURCE_IO); 753 } 754 755 /* ··· 884 885 if (atomic_sub_return(1, &dev->enable_cnt) != 0) 886 return; 887 + 888 + /* Wait for all transactions are finished before disabling the device */ 889 + pcie_wait_pending_transaction(dev); 890 891 pci_read_config_word(dev, PCI_COMMAND, &pci_command); 892 if (pci_command & PCI_COMMAND_MASTER) { ··· 1619 1620 device_initcall(pci_init); 1621 1622 EXPORT_SYMBOL(pci_reenable_device); 1623 + EXPORT_SYMBOL(pci_enable_device_io); 1624 + EXPORT_SYMBOL(pci_enable_device_mem); 1625 EXPORT_SYMBOL(pci_enable_device); 1626 EXPORT_SYMBOL(pcim_enable_device); 1627 EXPORT_SYMBOL(pcim_pin_device);
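The replacement of `pci_enable_device_bars()` with `__pci_enable_device_flags()` above works by deriving the BAR bitmask from resource flags instead of taking it from the caller. A user-space sketch of that selection loop (constants and the flat resource array are simplified stand-ins for the kernel's `IORESOURCE_*` flags and `dev->resource[]`):

```c
#include <assert.h>

#define MY_IORESOURCE_IO  0x100u
#define MY_IORESOURCE_MEM 0x200u
#define NRES 6

/* Build a bitmask of resource indices whose flags intersect the
 * requested type(s), as __pci_enable_device_flags() does before
 * calling do_pci_enable_device(dev, bars). */
unsigned int bars_for_flags(const unsigned int *res_flags, unsigned int flags)
{
    unsigned int bars = 0;
    int i;

    for (i = 0; i < NRES; i++)
        if (res_flags[i] & flags)
            bars |= 1u << i;
    return bars;
}
```

`pci_enable_device_io()`, `pci_enable_device_mem()`, and `pci_enable_device()` then reduce to calls with `IORESOURCE_IO`, `IORESOURCE_MEM`, or both.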
+6 -10
drivers/pci/pci.h
··· 6 extern void pci_cleanup_rom(struct pci_dev *dev); 7 8 /* Firmware callbacks */ 9 - extern pci_power_t (*platform_pci_choose_state)(struct pci_dev *dev, pm_message_t state); 10 - extern int (*platform_pci_set_power_state)(struct pci_dev *dev, pci_power_t state); 11 12 extern int pci_user_read_config_byte(struct pci_dev *dev, int where, u8 *val); 13 extern int pci_user_read_config_word(struct pci_dev *dev, int where, u16 *val); ··· 47 static inline void pci_msi_init_pci_dev(struct pci_dev *dev) { } 48 #endif 49 50 - #if defined(CONFIG_PCI_MSI) && defined(CONFIG_PM) 51 - void pci_restore_msi_state(struct pci_dev *dev); 52 - #else 53 - static inline void pci_restore_msi_state(struct pci_dev *dev) {} 54 - #endif 55 - 56 #ifdef CONFIG_PCIEAER 57 void pci_no_aer(void); 58 #else ··· 64 } 65 extern int pcie_mch_quirk; 66 extern struct device_attribute pci_dev_attrs[]; 67 - extern struct class_device_attribute class_device_attr_cpuaffinity; 68 69 /** 70 * pci_match_one_device - Tell if a PCI device structure has a matching 71 * PCI device id structure 72 * @id: single PCI device id structure to match 73 * @dev: the PCI device structure to match against 74 - * 75 * Returns the matching pci_device_id structure or %NULL if there is no match. 76 */ 77 static inline const struct pci_device_id *
··· 6 extern void pci_cleanup_rom(struct pci_dev *dev); 7 8 /* Firmware callbacks */ 9 + extern pci_power_t (*platform_pci_choose_state)(struct pci_dev *dev, 10 + pm_message_t state); 11 + extern int (*platform_pci_set_power_state)(struct pci_dev *dev, 12 + pci_power_t state); 13 14 extern int pci_user_read_config_byte(struct pci_dev *dev, int where, u8 *val); 15 extern int pci_user_read_config_word(struct pci_dev *dev, int where, u16 *val); ··· 45 static inline void pci_msi_init_pci_dev(struct pci_dev *dev) { } 46 #endif 47 48 #ifdef CONFIG_PCIEAER 49 void pci_no_aer(void); 50 #else ··· 68 } 69 extern int pcie_mch_quirk; 70 extern struct device_attribute pci_dev_attrs[]; 71 + extern struct device_attribute dev_attr_cpuaffinity; 72 73 /** 74 * pci_match_one_device - Tell if a PCI device structure has a matching 75 * PCI device id structure 76 * @id: single PCI device id structure to match 77 * @dev: the PCI device structure to match against 78 + * 79 * Returns the matching pci_device_id structure or %NULL if there is no match. 80 */ 81 static inline const struct pci_device_id *
+20
drivers/pci/pcie/Kconfig
··· 26 When in doubt, say N. 27 28 source "drivers/pci/pcie/aer/Kconfig"
··· 26 When in doubt, say N. 27 28 source "drivers/pci/pcie/aer/Kconfig" 29 + 30 + # 31 + # PCI Express ASPM 32 + # 33 + config PCIEASPM 34 + bool "PCI Express ASPM support(Experimental)" 35 + depends on PCI && EXPERIMENTAL 36 + default y 37 + help 38 + This enables PCI Express ASPM (Active State Power Management) and 39 + Clock Power Management. ASPM supports state L0/L0s/L1. 40 + 41 + When in doubt, say N. 42 + config PCIEASPM_DEBUG 43 + bool "Debug PCI Express ASPM" 44 + depends on PCIEASPM 45 + default n 46 + help 47 + This enables PCI Express ASPM debug support. It will add per-device 48 + interface to control ASPM.
+3
drivers/pci/pcie/Makefile
··· 2 # Makefile for PCI-Express PORT Driver 3 # 4 5 pcieportdrv-y := portdrv_core.o portdrv_pci.o portdrv_bus.o 6 7 obj-$(CONFIG_PCIEPORTBUS) += pcieportdrv.o
··· 2 # Makefile for PCI-Express PORT Driver 3 # 4 5 + # Build PCI Express ASPM if needed 6 + obj-$(CONFIG_PCIEASPM) += aspm.o 7 + 8 pcieportdrv-y := portdrv_core.o portdrv_pci.o portdrv_bus.o 9 10 obj-$(CONFIG_PCIEPORTBUS) += pcieportdrv.o
+7 -17
drivers/pci/pcie/aer/aerdrv_acpi.c
··· 31 { 32 acpi_status status = AE_NOT_FOUND; 33 struct pci_dev *pdev = pciedev->port; 34 - acpi_handle handle = DEVICE_ACPI_HANDLE(&pdev->dev); 35 - struct pci_bus *parent; 36 37 - while (!handle) { 38 - if (!pdev || !pdev->bus->parent) 39 - break; 40 - parent = pdev->bus->parent; 41 - if (!parent->self) 42 - /* Parent must be a host bridge */ 43 - handle = acpi_get_pci_rootbridge_handle( 44 - pci_domain_nr(parent), 45 - parent->number); 46 - else 47 - handle = DEVICE_ACPI_HANDLE( 48 - &(parent->self->dev)); 49 - pdev = parent->self; 50 - } 51 52 if (handle) { 53 - pci_osc_support_set(OSC_EXT_PCI_CONFIG_SUPPORT); 54 status = pci_osc_control_set(handle, 55 OSC_PCI_EXPRESS_AER_CONTROL | 56 OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL);
··· 31 { 32 acpi_status status = AE_NOT_FOUND; 33 struct pci_dev *pdev = pciedev->port; 34 + acpi_handle handle = 0; 35 36 + /* Find root host bridge */ 37 + while (pdev->bus && pdev->bus->self) 38 + pdev = pdev->bus->self; 39 + handle = acpi_get_pci_rootbridge_handle( 40 + pci_domain_nr(pdev->bus), pdev->bus->number); 41 42 if (handle) { 43 + pcie_osc_support_set(OSC_EXT_PCI_CONFIG_SUPPORT); 44 status = pci_osc_control_set(handle, 45 OSC_PCI_EXPRESS_AER_CONTROL | 46 OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL);
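The simplified root-bridge lookup above replaces the old ACPI-handle walk with a plain structural climb: follow `bus->self` upward until a bus with no bridge device (the root bus) is reached. A sketch with toy structs standing in for `pci_dev`/`pci_bus`:

```c
#include <stddef.h>
#include <assert.h>

struct toy_bus;
struct toy_dev { struct toy_bus *bus; };
struct toy_bus { struct toy_dev *self; int number; };

/* Walk up the bridge chain: a bus whose ->self is NULL has no bridge
 * above it, i.e. it is the root bus, as in the AER hunk above. */
struct toy_bus *find_root_bus(struct toy_dev *dev)
{
    while (dev->bus && dev->bus->self)
        dev = dev->bus->self;   /* step to the bridge one level up */
    return dev->bus;
}
```

The domain and number of the returned root bus are then what feed `acpi_get_pci_rootbridge_handle()`.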
+802
drivers/pci/pcie/aspm.c
···
··· 1 + /* 2 + * File: drivers/pci/pcie/aspm.c 3 + * Enabling PCIE link L0s/L1 state and Clock Power Management 4 + * 5 + * Copyright (C) 2007 Intel 6 + * Copyright (C) Zhang Yanmin (yanmin.zhang@intel.com) 7 + * Copyright (C) Shaohua Li (shaohua.li@intel.com) 8 + */ 9 + 10 + #include <linux/kernel.h> 11 + #include <linux/module.h> 12 + #include <linux/moduleparam.h> 13 + #include <linux/pci.h> 14 + #include <linux/pci_regs.h> 15 + #include <linux/errno.h> 16 + #include <linux/pm.h> 17 + #include <linux/init.h> 18 + #include <linux/slab.h> 19 + #include <linux/aspm.h> 20 + #include <acpi/acpi_bus.h> 21 + #include <linux/pci-acpi.h> 22 + #include "../pci.h" 23 + 24 + #ifdef MODULE_PARAM_PREFIX 25 + #undef MODULE_PARAM_PREFIX 26 + #endif 27 + #define MODULE_PARAM_PREFIX "pcie_aspm." 28 + 29 + struct endpoint_state { 30 + unsigned int l0s_acceptable_latency; 31 + unsigned int l1_acceptable_latency; 32 + }; 33 + 34 + struct pcie_link_state { 35 + struct list_head sibiling; 36 + struct pci_dev *pdev; 37 + 38 + /* ASPM state */ 39 + unsigned int support_state; 40 + unsigned int enabled_state; 41 + unsigned int bios_aspm_state; 42 + /* upstream component */ 43 + unsigned int l0s_upper_latency; 44 + unsigned int l1_upper_latency; 45 + /* downstream component */ 46 + unsigned int l0s_down_latency; 47 + unsigned int l1_down_latency; 48 + /* Clock PM state*/ 49 + unsigned int clk_pm_capable; 50 + unsigned int clk_pm_enabled; 51 + unsigned int bios_clk_state; 52 + 53 + /* 54 + * A pcie downstream port only has one slot under it, so at most there 55 + * are 8 functions 56 + */ 57 + struct endpoint_state endpoints[8]; 58 + }; 59 + 60 + static int aspm_disabled; 61 + static DEFINE_MUTEX(aspm_lock); 62 + static LIST_HEAD(link_list); 63 + 64 + #define POLICY_DEFAULT 0 /* BIOS default setting */ 65 + #define POLICY_PERFORMANCE 1 /* high performance */ 66 + #define POLICY_POWERSAVE 2 /* high power saving */ 67 + static int aspm_policy; 68 + static const char *policy_str[] = { 69 + 
[POLICY_DEFAULT] = "default", 70 + [POLICY_PERFORMANCE] = "performance", 71 + [POLICY_POWERSAVE] = "powersave" 72 + }; 73 + 74 + static int policy_to_aspm_state(struct pci_dev *pdev) 75 + { 76 + struct pcie_link_state *link_state = pdev->link_state; 77 + 78 + switch (aspm_policy) { 79 + case POLICY_PERFORMANCE: 80 + /* Disable ASPM and Clock PM */ 81 + return 0; 82 + case POLICY_POWERSAVE: 83 + /* Enable ASPM L0s/L1 */ 84 + return PCIE_LINK_STATE_L0S|PCIE_LINK_STATE_L1; 85 + case POLICY_DEFAULT: 86 + return link_state->bios_aspm_state; 87 + } 88 + return 0; 89 + } 90 + 91 + static int policy_to_clkpm_state(struct pci_dev *pdev) 92 + { 93 + struct pcie_link_state *link_state = pdev->link_state; 94 + 95 + switch (aspm_policy) { 96 + case POLICY_PERFORMANCE: 97 + /* Disable ASPM and Clock PM */ 98 + return 0; 99 + case POLICY_POWERSAVE: 100 + /* Disable Clock PM */ 101 + return 1; 102 + case POLICY_DEFAULT: 103 + return link_state->bios_clk_state; 104 + } 105 + return 0; 106 + } 107 + 108 + static void pcie_set_clock_pm(struct pci_dev *pdev, int enable) 109 + { 110 + struct pci_dev *child_dev; 111 + int pos; 112 + u16 reg16; 113 + struct pcie_link_state *link_state = pdev->link_state; 114 + 115 + list_for_each_entry(child_dev, &pdev->subordinate->devices, bus_list) { 116 + pos = pci_find_capability(child_dev, PCI_CAP_ID_EXP); 117 + if (!pos) 118 + return; 119 + pci_read_config_word(child_dev, pos + PCI_EXP_LNKCTL, &reg16); 120 + if (enable) 121 + reg16 |= PCI_EXP_LNKCTL_CLKREQ_EN; 122 + else 123 + reg16 &= ~PCI_EXP_LNKCTL_CLKREQ_EN; 124 + pci_write_config_word(child_dev, pos + PCI_EXP_LNKCTL, reg16); 125 + } 126 + link_state->clk_pm_enabled = !!enable; 127 + } 128 + 129 + static void pcie_check_clock_pm(struct pci_dev *pdev) 130 + { 131 + int pos; 132 + u32 reg32; 133 + u16 reg16; 134 + int capable = 1, enabled = 1; 135 + struct pci_dev *child_dev; 136 + struct pcie_link_state *link_state = pdev->link_state; 137 + 138 + /* All functions should have the same cap and 
state, take the worst */ 139 + list_for_each_entry(child_dev, &pdev->subordinate->devices, bus_list) { 140 + pos = pci_find_capability(child_dev, PCI_CAP_ID_EXP); 141 + if (!pos) 142 + return; 143 + pci_read_config_dword(child_dev, pos + PCI_EXP_LNKCAP, &reg32); 144 + if (!(reg32 & PCI_EXP_LNKCAP_CLKPM)) { 145 + capable = 0; 146 + enabled = 0; 147 + break; 148 + } 149 + pci_read_config_word(child_dev, pos + PCI_EXP_LNKCTL, &reg16); 150 + if (!(reg16 & PCI_EXP_LNKCTL_CLKREQ_EN)) 151 + enabled = 0; 152 + } 153 + link_state->clk_pm_capable = capable; 154 + link_state->clk_pm_enabled = enabled; 155 + link_state->bios_clk_state = enabled; 156 + pcie_set_clock_pm(pdev, policy_to_clkpm_state(pdev)); 157 + } 158 + 159 + /* 160 + * pcie_aspm_configure_common_clock: check if the 2 ends of a link 161 + * could use common clock. If they are, configure them to use the 162 + * common clock. That will reduce the ASPM state exit latency. 163 + */ 164 + static void pcie_aspm_configure_common_clock(struct pci_dev *pdev) 165 + { 166 + int pos, child_pos; 167 + u16 reg16 = 0; 168 + struct pci_dev *child_dev; 169 + int same_clock = 1; 170 + 171 + /* 172 + * all functions of a slot should have the same Slot Clock 173 + * Configuration, so just check one function 174 + * */ 175 + child_dev = list_entry(pdev->subordinate->devices.next, struct pci_dev, 176 + bus_list); 177 + BUG_ON(!child_dev->is_pcie); 178 + 179 + /* Check downstream component if bit Slot Clock Configuration is 1 */ 180 + child_pos = pci_find_capability(child_dev, PCI_CAP_ID_EXP); 181 + pci_read_config_word(child_dev, child_pos + PCI_EXP_LNKSTA, &reg16); 182 + if (!(reg16 & PCI_EXP_LNKSTA_SLC)) 183 + same_clock = 0; 184 + 185 + /* Check upstream component if bit Slot Clock Configuration is 1 */ 186 + pos = pci_find_capability(pdev, PCI_CAP_ID_EXP); 187 + pci_read_config_word(pdev, pos + PCI_EXP_LNKSTA, &reg16); 188 + if (!(reg16 & PCI_EXP_LNKSTA_SLC)) 189 + same_clock = 0; 190 + 191 + /* Configure downstream component, 
all functions */ 192 + list_for_each_entry(child_dev, &pdev->subordinate->devices, bus_list) { 193 + child_pos = pci_find_capability(child_dev, PCI_CAP_ID_EXP); 194 + pci_read_config_word(child_dev, child_pos + PCI_EXP_LNKCTL, 195 + &reg16); 196 + if (same_clock) 197 + reg16 |= PCI_EXP_LNKCTL_CCC; 198 + else 199 + reg16 &= ~PCI_EXP_LNKCTL_CCC; 200 + pci_write_config_word(child_dev, child_pos + PCI_EXP_LNKCTL, 201 + reg16); 202 + } 203 + 204 + /* Configure upstream component */ 205 + pci_read_config_word(pdev, pos + PCI_EXP_LNKCTL, &reg16); 206 + if (same_clock) 207 + reg16 |= PCI_EXP_LNKCTL_CCC; 208 + else 209 + reg16 &= ~PCI_EXP_LNKCTL_CCC; 210 + pci_write_config_word(pdev, pos + PCI_EXP_LNKCTL, reg16); 211 + 212 + /* retrain link */ 213 + reg16 |= PCI_EXP_LNKCTL_RL; 214 + pci_write_config_word(pdev, pos + PCI_EXP_LNKCTL, reg16); 215 + 216 + /* Wait for link training end */ 217 + while (1) { 218 + pci_read_config_word(pdev, pos + PCI_EXP_LNKSTA, &reg16); 219 + if (!(reg16 & PCI_EXP_LNKSTA_LT)) 220 + break; 221 + cpu_relax(); 222 + } 223 + } 224 + 225 + /* 226 + * calc_L0S_latency: Convert L0s latency encoding to ns 227 + */ 228 + static unsigned int calc_L0S_latency(unsigned int latency_encoding, int ac) 229 + { 230 + unsigned int ns = 64; 231 + 232 + if (latency_encoding == 0x7) { 233 + if (ac) 234 + ns = -1U; 235 + else 236 + ns = 5*1000; /* > 4us */ 237 + } else 238 + ns *= (1 << latency_encoding); 239 + return ns; 240 + } 241 + 242 + /* 243 + * calc_L1_latency: Convert L1 latency encoding to ns 244 + */ 245 + static unsigned int calc_L1_latency(unsigned int latency_encoding, int ac) 246 + { 247 + unsigned int ns = 1000; 248 + 249 + if (latency_encoding == 0x7) { 250 + if (ac) 251 + ns = -1U; 252 + else 253 + ns = 65*1000; /* > 64us */ 254 + } else 255 + ns *= (1 << latency_encoding); 256 + return ns; 257 + } 258 + 259 + static void pcie_aspm_get_cap_device(struct pci_dev *pdev, u32 *state, 260 + unsigned int *l0s, unsigned int *l1, unsigned int *enabled) 261 + 
{ 262 + int pos; 263 + u16 reg16; 264 + u32 reg32; 265 + unsigned int latency; 266 + 267 + pos = pci_find_capability(pdev, PCI_CAP_ID_EXP); 268 + pci_read_config_dword(pdev, pos + PCI_EXP_LNKCAP, &reg32); 269 + *state = (reg32 & PCI_EXP_LNKCAP_ASPMS) >> 10; 270 + if (*state != PCIE_LINK_STATE_L0S && 271 + *state != (PCIE_LINK_STATE_L1|PCIE_LINK_STATE_L0S)) 272 + *state = 0; 273 + if (*state == 0) 274 + return; 275 + 276 + latency = (reg32 & PCI_EXP_LNKCAP_L0SEL) >> 12; 277 + *l0s = calc_L0S_latency(latency, 0); 278 + if (*state & PCIE_LINK_STATE_L1) { 279 + latency = (reg32 & PCI_EXP_LNKCAP_L1EL) >> 15; 280 + *l1 = calc_L1_latency(latency, 0); 281 + } 282 + pci_read_config_word(pdev, pos + PCI_EXP_LNKCTL, &reg16); 283 + *enabled = reg16 & (PCIE_LINK_STATE_L0S|PCIE_LINK_STATE_L1); 284 + } 285 + 286 + static void pcie_aspm_cap_init(struct pci_dev *pdev) 287 + { 288 + struct pci_dev *child_dev; 289 + u32 state, tmp; 290 + struct pcie_link_state *link_state = pdev->link_state; 291 + 292 + /* upstream component states */ 293 + pcie_aspm_get_cap_device(pdev, &link_state->support_state, 294 + &link_state->l0s_upper_latency, 295 + &link_state->l1_upper_latency, 296 + &link_state->enabled_state); 297 + /* downstream component states, all functions have the same setting */ 298 + child_dev = list_entry(pdev->subordinate->devices.next, struct pci_dev, 299 + bus_list); 300 + pcie_aspm_get_cap_device(child_dev, &state, 301 + &link_state->l0s_down_latency, 302 + &link_state->l1_down_latency, 303 + &tmp); 304 + link_state->support_state &= state; 305 + if (!link_state->support_state) 306 + return; 307 + link_state->enabled_state &= link_state->support_state; 308 + link_state->bios_aspm_state = link_state->enabled_state; 309 + 310 + /* ENDPOINT states */ 311 + list_for_each_entry(child_dev, &pdev->subordinate->devices, bus_list) { 312 + int pos; 313 + u32 reg32; 314 + unsigned int latency; 315 + struct endpoint_state *ep_state = 316 +
&link_state->endpoints[PCI_FUNC(child_dev->devfn)]; 317 + 318 + if (child_dev->pcie_type != PCI_EXP_TYPE_ENDPOINT && 319 + child_dev->pcie_type != PCI_EXP_TYPE_LEG_END) 320 + continue; 321 + 322 + pos = pci_find_capability(child_dev, PCI_CAP_ID_EXP); 323 + pci_read_config_dword(child_dev, pos + PCI_EXP_DEVCAP, &reg32); 324 + latency = (reg32 & PCI_EXP_DEVCAP_L0S) >> 6; 325 + latency = calc_L0S_latency(latency, 1); 326 + ep_state->l0s_acceptable_latency = latency; 327 + if (link_state->support_state & PCIE_LINK_STATE_L1) { 328 + latency = (reg32 & PCI_EXP_DEVCAP_L1) >> 9; 329 + latency = calc_L1_latency(latency, 1); 330 + ep_state->l1_acceptable_latency = latency; 331 + } 332 + } 333 + } 334 + 335 + static unsigned int __pcie_aspm_check_state_one(struct pci_dev *pdev, 336 + unsigned int state) 337 + { 338 + struct pci_dev *parent_dev, *tmp_dev; 339 + unsigned int latency, l1_latency = 0; 340 + struct pcie_link_state *link_state; 341 + struct endpoint_state *ep_state; 342 + 343 + parent_dev = pdev->bus->self; 344 + link_state = parent_dev->link_state; 345 + state &= link_state->support_state; 346 + if (state == 0) 347 + return 0; 348 + ep_state = &link_state->endpoints[PCI_FUNC(pdev->devfn)]; 349 + 350 + /* 351 + * Check latency for the endpoint device. 352 + * TBD: The latency from the endpoint to the root complex varies with 353 + * the upstream link states of the switches above the device. Here we 354 + * just do a simple check which assumes all links above the device can 355 + * be in L1 state; that is, we just consider the worst case. If a 356 + * switch's upstream link can't be put into L0s/L1, our check is too strict.
357 + */ 358 + tmp_dev = pdev; 359 + while (state & (PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1)) { 360 + parent_dev = tmp_dev->bus->self; 361 + link_state = parent_dev->link_state; 362 + if (state & PCIE_LINK_STATE_L0S) { 363 + latency = max_t(unsigned int, 364 + link_state->l0s_upper_latency, 365 + link_state->l0s_down_latency); 366 + if (latency > ep_state->l0s_acceptable_latency) 367 + state &= ~PCIE_LINK_STATE_L0S; 368 + } 369 + if (state & PCIE_LINK_STATE_L1) { 370 + latency = max_t(unsigned int, 371 + link_state->l1_upper_latency, 372 + link_state->l1_down_latency); 373 + if (latency + l1_latency > 374 + ep_state->l1_acceptable_latency) 375 + state &= ~PCIE_LINK_STATE_L1; 376 + } 377 + if (!parent_dev->bus->self) /* parent_dev is a root port */ 378 + break; 379 + else { 380 + /* 381 + * parent_dev is the downstream port of a switch; make 382 + * tmp_dev the upstream port of the switch 383 + */ 384 + tmp_dev = parent_dev->bus->self; 385 + /* 386 + * every switch on the path to the root complex needs 1 387 + * more microsecond for L1. The spec doesn't mention L0s.
388 + */ 389 + if (state & PCIE_LINK_STATE_L1) 390 + l1_latency += 1000; 391 + } 392 + } 393 + return state; 394 + } 395 + 396 + static unsigned int pcie_aspm_check_state(struct pci_dev *pdev, 397 + unsigned int state) 398 + { 399 + struct pci_dev *child_dev; 400 + 401 + /* If no child, disable the link */ 402 + if (list_empty(&pdev->subordinate->devices)) 403 + return 0; 404 + list_for_each_entry(child_dev, &pdev->subordinate->devices, bus_list) { 405 + if (child_dev->pcie_type == PCI_EXP_TYPE_PCI_BRIDGE) { 406 + /* 407 + * If the downstream component of a link is a PCI 408 + * bridge, disable ASPM on the link for now 409 + */ 410 + state = 0; 411 + break; 412 + } 413 + if ((child_dev->pcie_type != PCI_EXP_TYPE_ENDPOINT && 414 + child_dev->pcie_type != PCI_EXP_TYPE_LEG_END)) 415 + continue; 416 + /* Devices not in D0 don't need a latency check */ 417 + if (child_dev->current_state == PCI_D1 || 418 + child_dev->current_state == PCI_D2 || 419 + child_dev->current_state == PCI_D3hot || 420 + child_dev->current_state == PCI_D3cold) 421 + continue; 422 + state = __pcie_aspm_check_state_one(child_dev, state); 423 + } 424 + return state; 425 + } 426 + 427 + static void __pcie_aspm_config_one_dev(struct pci_dev *pdev, unsigned int state) 428 + { 429 + u16 reg16; 430 + int pos = pci_find_capability(pdev, PCI_CAP_ID_EXP); 431 + 432 + pci_read_config_word(pdev, pos + PCI_EXP_LNKCTL, &reg16); 433 + reg16 &= ~0x3; 434 + reg16 |= state; 435 + pci_write_config_word(pdev, pos + PCI_EXP_LNKCTL, reg16); 436 + } 437 + 438 + static void __pcie_aspm_config_link(struct pci_dev *pdev, unsigned int state) 439 + { 440 + struct pci_dev *child_dev; 441 + int valid = 1; 442 + struct pcie_link_state *link_state = pdev->link_state; 443 + 444 + /* 445 + * if the downstream component has a PCI bridge function, don't do 446 + * ASPM now 447 + */ 448 + list_for_each_entry(child_dev, &pdev->subordinate->devices, bus_list) { 449 + if (child_dev->pcie_type == PCI_EXP_TYPE_PCI_BRIDGE) { 450 + valid = 0;
451 + break; 452 + } 453 + } 454 + if (!valid) 455 + return; 456 + 457 + /* 458 + * spec 2.0 suggests all functions should be configured with the same 459 + * ASPM setting. Enabling ASPM L1 should be done in the upstream 460 + * component first and then downstream, and vice versa for disabling 461 + * ASPM L1. The spec doesn't mention L0s. 462 + */ 463 + if (state & PCIE_LINK_STATE_L1) 464 + __pcie_aspm_config_one_dev(pdev, state); 465 + 466 + list_for_each_entry(child_dev, &pdev->subordinate->devices, bus_list) 467 + __pcie_aspm_config_one_dev(child_dev, state); 468 + 469 + if (!(state & PCIE_LINK_STATE_L1)) 470 + __pcie_aspm_config_one_dev(pdev, state); 471 + 472 + link_state->enabled_state = state; 473 + } 474 + 475 + static void __pcie_aspm_configure_link_state(struct pci_dev *pdev, 476 + unsigned int state) 477 + { 478 + struct pcie_link_state *link_state = pdev->link_state; 479 + 480 + if (link_state->support_state == 0) 481 + return; 482 + state &= PCIE_LINK_STATE_L0S|PCIE_LINK_STATE_L1; 483 + 484 + /* state 0 means disabling aspm */ 485 + state = pcie_aspm_check_state(pdev, state); 486 + if (link_state->enabled_state == state) 487 + return; 488 + __pcie_aspm_config_link(pdev, state); 489 + } 490 + 491 + /* 492 + * pcie_aspm_configure_link_state: enable/disable PCI express link state 493 + * @pdev: the root port or switch downstream port 494 + */ 495 + static void pcie_aspm_configure_link_state(struct pci_dev *pdev, 496 + unsigned int state) 497 + { 498 + down_read(&pci_bus_sem); 499 + mutex_lock(&aspm_lock); 500 + __pcie_aspm_configure_link_state(pdev, state); 501 + mutex_unlock(&aspm_lock); 502 + up_read(&pci_bus_sem); 503 + } 504 + 505 + static void free_link_state(struct pci_dev *pdev) 506 + { 507 + kfree(pdev->link_state); 508 + pdev->link_state = NULL; 509 + } 510 + 511 + /* 512 + * pcie_aspm_init_link_state: Initialize PCI express link state. 513 + * It is called after the PCIe port and its child devices are scanned.
514 + * @pdev: the root port or switch downstream port 515 + */ 516 + void pcie_aspm_init_link_state(struct pci_dev *pdev) 517 + { 518 + unsigned int state; 519 + struct pcie_link_state *link_state; 520 + int error = 0; 521 + 522 + if (aspm_disabled || !pdev->is_pcie || pdev->link_state) 523 + return; 524 + if (pdev->pcie_type != PCI_EXP_TYPE_ROOT_PORT && 525 + pdev->pcie_type != PCI_EXP_TYPE_DOWNSTREAM) 526 + return; 527 + down_read(&pci_bus_sem); 528 + if (list_empty(&pdev->subordinate->devices)) 529 + goto out; 530 + 531 + mutex_lock(&aspm_lock); 532 + 533 + link_state = kzalloc(sizeof(*link_state), GFP_KERNEL); 534 + if (!link_state) 535 + goto unlock_out; 536 + pdev->link_state = link_state; 537 + 538 + pcie_aspm_configure_common_clock(pdev); 539 + 540 + pcie_aspm_cap_init(pdev); 541 + 542 + /* config link state to avoid BIOS error */ 543 + state = pcie_aspm_check_state(pdev, policy_to_aspm_state(pdev)); 544 + __pcie_aspm_config_link(pdev, state); 545 + 546 + pcie_check_clock_pm(pdev); 547 + 548 + link_state->pdev = pdev; 549 + list_add(&link_state->sibiling, &link_list); 550 + 551 + unlock_out: 552 + if (error) 553 + free_link_state(pdev); 554 + mutex_unlock(&aspm_lock); 555 + out: 556 + up_read(&pci_bus_sem); 557 + } 558 + 559 + /* @pdev: the endpoint device */ 560 + void pcie_aspm_exit_link_state(struct pci_dev *pdev) 561 + { 562 + struct pci_dev *parent = pdev->bus->self; 563 + struct pcie_link_state *link_state = parent->link_state; 564 + 565 + if (aspm_disabled || !pdev->is_pcie || !parent || !link_state) 566 + return; 567 + if (parent->pcie_type != PCI_EXP_TYPE_ROOT_PORT && 568 + parent->pcie_type != PCI_EXP_TYPE_DOWNSTREAM) 569 + return; 570 + down_read(&pci_bus_sem); 571 + mutex_lock(&aspm_lock); 572 + 573 + /* 574 + * All PCIe functions are in one slot; removing one function will 575 + * remove the whole slot, so just wait 576 + */ 577 + if (!list_empty(&parent->subordinate->devices)) 578 + goto out; 579 + 580 + /* All functions are removed, so
just disable ASPM for the link */ 581 + __pcie_aspm_config_one_dev(parent, 0); 582 + list_del(&link_state->sibiling); 583 + /* Clock PM is for the endpoint device */ 584 + 585 + free_link_state(parent); 586 + out: 587 + mutex_unlock(&aspm_lock); 588 + up_read(&pci_bus_sem); 589 + } 590 + 591 + /* @pdev: the root port or switch downstream port */ 592 + void pcie_aspm_pm_state_change(struct pci_dev *pdev) 593 + { 594 + struct pcie_link_state *link_state = pdev->link_state; 595 + 596 + if (aspm_disabled || !pdev->is_pcie || !pdev->link_state) 597 + return; 598 + if (pdev->pcie_type != PCI_EXP_TYPE_ROOT_PORT && 599 + pdev->pcie_type != PCI_EXP_TYPE_DOWNSTREAM) 600 + return; 601 + /* 602 + * A device changed PM state; we should recheck whether the latency 603 + * still meets all functions' requirements 604 + */ 605 + pcie_aspm_configure_link_state(pdev, link_state->enabled_state); 606 + } 607 + 608 + /* 609 + * pci_disable_link_state - disable a PCI device's link state, so the link 610 + * will never enter the given states 611 + */ 612 + void pci_disable_link_state(struct pci_dev *pdev, int state) 613 + { 614 + struct pci_dev *parent = pdev->bus->self; 615 + struct pcie_link_state *link_state; 616 + 617 + if (aspm_disabled || !pdev->is_pcie) 618 + return; 619 + if (pdev->pcie_type == PCI_EXP_TYPE_ROOT_PORT || 620 + pdev->pcie_type == PCI_EXP_TYPE_DOWNSTREAM) 621 + parent = pdev; 622 + if (!parent) 623 + return; 624 + 625 + down_read(&pci_bus_sem); 626 + mutex_lock(&aspm_lock); 627 + link_state = parent->link_state; 628 + link_state->support_state &= 629 + ~(state & (PCIE_LINK_STATE_L0S|PCIE_LINK_STATE_L1)); 630 + if (state & PCIE_LINK_STATE_CLKPM) 631 + link_state->clk_pm_capable = 0; 632 + 633 + __pcie_aspm_configure_link_state(parent, link_state->enabled_state); 634 + if (!link_state->clk_pm_capable && link_state->clk_pm_enabled) 635 + pcie_set_clock_pm(parent, 0); 636 + mutex_unlock(&aspm_lock); 637 + up_read(&pci_bus_sem); 638 + } 639 + EXPORT_SYMBOL(pci_disable_link_state); 640 + 641 +
static int pcie_aspm_set_policy(const char *val, struct kernel_param *kp) 642 + { 643 + int i; 644 + struct pci_dev *pdev; 645 + struct pcie_link_state *link_state; 646 + 647 + for (i = 0; i < ARRAY_SIZE(policy_str); i++) 648 + if (!strncmp(val, policy_str[i], strlen(policy_str[i]))) 649 + break; 650 + if (i >= ARRAY_SIZE(policy_str)) 651 + return -EINVAL; 652 + if (i == aspm_policy) 653 + return 0; 654 + 655 + down_read(&pci_bus_sem); 656 + mutex_lock(&aspm_lock); 657 + aspm_policy = i; 658 + list_for_each_entry(link_state, &link_list, sibiling) { 659 + pdev = link_state->pdev; 660 + __pcie_aspm_configure_link_state(pdev, 661 + policy_to_aspm_state(pdev)); 662 + if (link_state->clk_pm_capable && 663 + link_state->clk_pm_enabled != policy_to_clkpm_state(pdev)) 664 + pcie_set_clock_pm(pdev, policy_to_clkpm_state(pdev)); 665 + 666 + } 667 + mutex_unlock(&aspm_lock); 668 + up_read(&pci_bus_sem); 669 + return 0; 670 + } 671 + 672 + static int pcie_aspm_get_policy(char *buffer, struct kernel_param *kp) 673 + { 674 + int i, cnt = 0; 675 + for (i = 0; i < ARRAY_SIZE(policy_str); i++) 676 + if (i == aspm_policy) 677 + cnt += sprintf(buffer + cnt, "[%s] ", policy_str[i]); 678 + else 679 + cnt += sprintf(buffer + cnt, "%s ", policy_str[i]); 680 + return cnt; 681 + } 682 + 683 + module_param_call(policy, pcie_aspm_set_policy, pcie_aspm_get_policy, 684 + NULL, 0644); 685 + 686 + #ifdef CONFIG_PCIEASPM_DEBUG 687 + static ssize_t link_state_show(struct device *dev, 688 + struct device_attribute *attr, 689 + char *buf) 690 + { 691 + struct pci_dev *pci_device = to_pci_dev(dev); 692 + struct pcie_link_state *link_state = pci_device->link_state; 693 + 694 + return sprintf(buf, "%d\n", link_state->enabled_state); 695 + } 696 + 697 + static ssize_t link_state_store(struct device *dev, 698 + struct device_attribute *attr, 699 + const char *buf, 700 + size_t n) 701 + { 702 + struct pci_dev *pci_device = to_pci_dev(dev); 703 + int state; 704 + 705 + if (n < 1) 706 + return -EINVAL; 707 
+ state = buf[0]-'0'; 708 + if (state >= 0 && state <= 3) { 709 + /* setup link aspm state */ 710 + pcie_aspm_configure_link_state(pci_device, state); 711 + return n; 712 + } 713 + 714 + return -EINVAL; 715 + } 716 + 717 + static ssize_t clk_ctl_show(struct device *dev, 718 + struct device_attribute *attr, 719 + char *buf) 720 + { 721 + struct pci_dev *pci_device = to_pci_dev(dev); 722 + struct pcie_link_state *link_state = pci_device->link_state; 723 + 724 + return sprintf(buf, "%d\n", link_state->clk_pm_enabled); 725 + } 726 + 727 + static ssize_t clk_ctl_store(struct device *dev, 728 + struct device_attribute *attr, 729 + const char *buf, 730 + size_t n) 731 + { 732 + struct pci_dev *pci_device = to_pci_dev(dev); 733 + int state; 734 + 735 + if (n < 1) 736 + return -EINVAL; 737 + state = buf[0]-'0'; 738 + 739 + down_read(&pci_bus_sem); 740 + mutex_lock(&aspm_lock); 741 + pcie_set_clock_pm(pci_device, !!state); 742 + mutex_unlock(&aspm_lock); 743 + up_read(&pci_bus_sem); 744 + 745 + return n; 746 + } 747 + 748 + static DEVICE_ATTR(link_state, 0644, link_state_show, link_state_store); 749 + static DEVICE_ATTR(clk_ctl, 0644, clk_ctl_show, clk_ctl_store); 750 + 751 + static char power_group[] = "power"; 752 + void pcie_aspm_create_sysfs_dev_files(struct pci_dev *pdev) 753 + { 754 + struct pcie_link_state *link_state = pdev->link_state; 755 + 756 + if (!pdev->is_pcie || (pdev->pcie_type != PCI_EXP_TYPE_ROOT_PORT && 757 + pdev->pcie_type != PCI_EXP_TYPE_DOWNSTREAM)) 758 + return; 759 + 760 + if (link_state->support_state) 761 + sysfs_add_file_to_group(&pdev->dev.kobj, 762 + &dev_attr_link_state.attr, power_group); 763 + if (link_state->clk_pm_capable) 764 + sysfs_add_file_to_group(&pdev->dev.kobj, 765 + &dev_attr_clk_ctl.attr, power_group); 766 + } 767 + 768 + void pcie_aspm_remove_sysfs_dev_files(struct pci_dev *pdev) 769 + { 770 + struct pcie_link_state *link_state = pdev->link_state; 771 + 772 + if (!pdev->is_pcie || (pdev->pcie_type != PCI_EXP_TYPE_ROOT_PORT && 
773 + pdev->pcie_type != PCI_EXP_TYPE_DOWNSTREAM)) 774 + return; 775 + 776 + if (link_state->support_state) 777 + sysfs_remove_file_from_group(&pdev->dev.kobj, 778 + &dev_attr_link_state.attr, power_group); 779 + if (link_state->clk_pm_capable) 780 + sysfs_remove_file_from_group(&pdev->dev.kobj, 781 + &dev_attr_clk_ctl.attr, power_group); 782 + } 783 + #endif 784 + 785 + static int __init pcie_aspm_disable(char *str) 786 + { 787 + aspm_disabled = 1; 788 + return 1; 789 + } 790 + 791 + __setup("pcie_noaspm", pcie_aspm_disable); 792 + 793 + static int __init pcie_aspm_init(void) 794 + { 795 + if (aspm_disabled) 796 + return 0; 797 + pci_osc_support_set(OSC_ACTIVE_STATE_PWR_SUPPORT| 798 + OSC_CLOCK_PWR_CAPABILITY_SUPPORT); 799 + return 0; 800 + } 801 + 802 + fs_initcall(pcie_aspm_init);
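The two latency helpers in the new aspm.c above are plain bit arithmetic, so they can be exercised outside the kernel. A minimal user-space sketch (the function names `l0s_latency_ns`/`l1_latency_ns` are invented here; the 3-bit encodings are the fields the driver extracts from the Link Capabilities and Device Capabilities registers):

```c
/* Stand-alone sketch of the calc_L0S_latency()/calc_L1_latency() logic.
 * Encoding 0x7 is special: as an exit latency it means "> 4us" (L0s)
 * or "> 64us" (L1); as an acceptable latency it means "no limit",
 * which the driver represents as -1U. */
unsigned int l0s_latency_ns(unsigned int enc, int acceptable)
{
	if (enc == 0x7)
		return acceptable ? -1U : 5 * 1000;	/* "> 4us" */
	return 64u << enc;	/* 64ns, 128ns, ..., up to 4096ns */
}

unsigned int l1_latency_ns(unsigned int enc, int acceptable)
{
	if (enc == 0x7)
		return acceptable ? -1U : 65 * 1000;	/* "> 64us" */
	return 1000u << enc;	/* 1us, 2us, ..., up to 64us */
}
```

For example, encoding 3 yields 512ns for L0s and 8us for L1, mirroring the helpers above.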
+2 -3
drivers/pci/pcie/portdrv_core.c
··· 192 if (reg32 & SLOT_HP_CAPABLE_MASK) 193 services |= PCIE_PORT_SERVICE_HP; 194 } 195 - /* PME Capable */ 196 - pos = pci_find_capability(dev, PCI_CAP_ID_PME); 197 - if (pos) 198 services |= PCIE_PORT_SERVICE_PME; 199 200 pos = PCI_CFG_SPACE_SIZE;
··· 192 if (reg32 & SLOT_HP_CAPABLE_MASK) 193 services |= PCIE_PORT_SERVICE_HP; 194 } 195 + /* PME Capable - root port capability */ 196 + if (((reg16 >> 4) & PORT_TYPE_MASK) == PCIE_RC_PORT) 197 services |= PCIE_PORT_SERVICE_PME; 198 199 pos = PCI_CFG_SPACE_SIZE;
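The portdrv change above stops probing for a PME capability and instead checks the Device/Port Type field (bits 7:4 of the PCI Express Capabilities register). A stand-alone sketch of that bit test; the mask and the root-port value mirror the kernel's PORT_TYPE_MASK/PCIE_RC_PORT definitions, and `is_root_port` is a name invented here:

```c
#define PORT_TYPE_MASK	0xf	/* Device/Port Type field, bits 7:4 */
#define PCIE_RC_PORT	4	/* root port of a PCI Express root complex */

/* reg16 is the PCI Express Capabilities register; only root ports
 * get the PME service registered in the hunk above. */
int is_root_port(unsigned short reg16)
{
	return ((reg16 >> 4) & PORT_TYPE_MASK) == PCIE_RC_PORT;
}
```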
+34 -46
drivers/pci/probe.c
··· 9 #include <linux/slab.h> 10 #include <linux/module.h> 11 #include <linux/cpumask.h> 12 #include "pci.h" 13 14 #define CARDBUS_LATENCY_TIMER 176 /* secondary latency timer */ ··· 54 b->legacy_io->attr.mode = S_IRUSR | S_IWUSR; 55 b->legacy_io->read = pci_read_legacy_io; 56 b->legacy_io->write = pci_write_legacy_io; 57 - class_device_create_bin_file(&b->class_dev, b->legacy_io); 58 59 /* Allocated above after the legacy_io struct */ 60 b->legacy_mem = b->legacy_io + 1; ··· 62 b->legacy_mem->size = 1024*1024; 63 b->legacy_mem->attr.mode = S_IRUSR | S_IWUSR; 64 b->legacy_mem->mmap = pci_mmap_legacy_mem; 65 - class_device_create_bin_file(&b->class_dev, b->legacy_mem); 66 } 67 } 68 69 void pci_remove_legacy_files(struct pci_bus *b) 70 { 71 if (b->legacy_io) { 72 - class_device_remove_bin_file(&b->class_dev, b->legacy_io); 73 - class_device_remove_bin_file(&b->class_dev, b->legacy_mem); 74 kfree(b->legacy_io); /* both are allocated here */ 75 } 76 } ··· 82 /* 83 * PCI Bus Class Devices 84 */ 85 - static ssize_t pci_bus_show_cpuaffinity(struct class_device *class_dev, 86 char *buf) 87 { 88 int ret; 89 cpumask_t cpumask; 90 91 - cpumask = pcibus_to_cpumask(to_pci_bus(class_dev)); 92 ret = cpumask_scnprintf(buf, PAGE_SIZE, cpumask); 93 if (ret < PAGE_SIZE) 94 buf[ret++] = '\n'; 95 return ret; 96 } 97 - CLASS_DEVICE_ATTR(cpuaffinity, S_IRUGO, pci_bus_show_cpuaffinity, NULL); 98 99 /* 100 * PCI Bus Class 101 */ 102 - static void release_pcibus_dev(struct class_device *class_dev) 103 { 104 - struct pci_bus *pci_bus = to_pci_bus(class_dev); 105 106 if (pci_bus->bridge) 107 put_device(pci_bus->bridge); ··· 111 112 static struct class pcibus_class = { 113 .name = "pci_bus", 114 - .release = &release_pcibus_dev, 115 }; 116 117 static int __init pcibus_class_init(void) ··· 394 { 395 struct pci_bus *child; 396 int i; 397 - int retval; 398 399 /* 400 * Allocate a new bus, and inherit stuff from the parent.. 
··· 409 child->bus_flags = parent->bus_flags; 410 child->bridge = get_device(&bridge->dev); 411 412 - child->class_dev.class = &pcibus_class; 413 - sprintf(child->class_dev.class_id, "%04x:%02x", pci_domain_nr(child), busnr); 414 - retval = class_device_register(&child->class_dev); 415 - if (retval) 416 - goto error_register; 417 - retval = class_device_create_file(&child->class_dev, 418 - &class_device_attr_cpuaffinity); 419 - if (retval) 420 - goto error_file_create; 421 422 /* 423 * Set up the primary, secondary and subordinate ··· 432 bridge->subordinate = child; 433 434 return child; 435 - 436 - error_file_create: 437 - class_device_unregister(&child->class_dev); 438 - error_register: 439 - kfree(child); 440 - return NULL; 441 } 442 443 struct pci_bus *pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev, int busnr) ··· 462 parent = parent->parent; 463 } 464 } 465 - 466 - unsigned int pci_scan_child_bus(struct pci_bus *bus); 467 468 /* 469 * If it's a bridge, configure it and scan the bus behind it. ··· 631 (child->number > bus->subordinate) || 632 (child->number < bus->number) || 633 (child->subordinate < bus->number)) { 634 - pr_debug("PCI: Bus #%02x (-#%02x) is %s" 635 "hidden behind%s bridge #%02x (-#%02x)\n", 636 child->number, child->subordinate, 637 (bus->number > child->subordinate && 638 bus->subordinate < child->number) ? 639 - "wholly " : " partially", 640 - bus->self->transparent ? " transparent" : " ", 641 bus->number, bus->subordinate); 642 } 643 bus = bus->parent; ··· 961 962 return dev; 963 } 964 965 /** 966 * pci_scan_slot - scan a PCI slot on a bus for devices. 
··· 1002 break; 1003 } 1004 } 1005 return nr; 1006 } 1007 ··· 1098 goto dev_reg_err; 1099 b->bridge = get_device(dev); 1100 1101 - b->class_dev.class = &pcibus_class; 1102 - sprintf(b->class_dev.class_id, "%04x:%02x", pci_domain_nr(b), bus); 1103 - error = class_device_register(&b->class_dev); 1104 if (error) 1105 goto class_dev_reg_err; 1106 - error = class_device_create_file(&b->class_dev, &class_device_attr_cpuaffinity); 1107 if (error) 1108 - goto class_dev_create_file_err; 1109 1110 /* Create legacy_io and legacy_mem files for this bus */ 1111 pci_create_legacy_files(b); 1112 - 1113 - error = sysfs_create_link(&b->class_dev.kobj, &b->bridge->kobj, "bridge"); 1114 - if (error) 1115 - goto sys_create_link_err; 1116 1117 b->number = b->secondary = bus; 1118 b->resource[0] = &ioport_resource; ··· 1117 1118 return b; 1119 1120 - sys_create_link_err: 1121 - class_device_remove_file(&b->class_dev, &class_device_attr_cpuaffinity); 1122 - class_dev_create_file_err: 1123 - class_device_unregister(&b->class_dev); 1124 class_dev_reg_err: 1125 device_unregister(dev); 1126 dev_reg_err: ··· 1130 kfree(b); 1131 return NULL; 1132 } 1133 - EXPORT_SYMBOL_GPL(pci_create_bus); 1134 1135 struct pci_bus *pci_scan_bus_parented(struct device *parent, 1136 int bus, struct pci_ops *ops, void *sysdata) ··· 1148 EXPORT_SYMBOL(pci_do_scan_bus); 1149 EXPORT_SYMBOL(pci_scan_slot); 1150 EXPORT_SYMBOL(pci_scan_bridge); 1151 - EXPORT_SYMBOL(pci_scan_single_device); 1152 EXPORT_SYMBOL_GPL(pci_scan_child_bus); 1153 #endif 1154
··· 9 #include <linux/slab.h> 10 #include <linux/module.h> 11 #include <linux/cpumask.h> 12 + #include <linux/aspm.h> 13 #include "pci.h" 14 15 #define CARDBUS_LATENCY_TIMER 176 /* secondary latency timer */ ··· 53 b->legacy_io->attr.mode = S_IRUSR | S_IWUSR; 54 b->legacy_io->read = pci_read_legacy_io; 55 b->legacy_io->write = pci_write_legacy_io; 56 + device_create_bin_file(&b->dev, b->legacy_io); 57 58 /* Allocated above after the legacy_io struct */ 59 b->legacy_mem = b->legacy_io + 1; ··· 61 b->legacy_mem->size = 1024*1024; 62 b->legacy_mem->attr.mode = S_IRUSR | S_IWUSR; 63 b->legacy_mem->mmap = pci_mmap_legacy_mem; 64 + device_create_bin_file(&b->dev, b->legacy_mem); 65 } 66 } 67 68 void pci_remove_legacy_files(struct pci_bus *b) 69 { 70 if (b->legacy_io) { 71 + device_remove_bin_file(&b->dev, b->legacy_io); 72 + device_remove_bin_file(&b->dev, b->legacy_mem); 73 kfree(b->legacy_io); /* both are allocated here */ 74 } 75 } ··· 81 /* 82 * PCI Bus Class Devices 83 */ 84 + static ssize_t pci_bus_show_cpuaffinity(struct device *dev, 85 + struct device_attribute *attr, 86 char *buf) 87 { 88 int ret; 89 cpumask_t cpumask; 90 91 + cpumask = pcibus_to_cpumask(to_pci_bus(dev)); 92 ret = cpumask_scnprintf(buf, PAGE_SIZE, cpumask); 93 if (ret < PAGE_SIZE) 94 buf[ret++] = '\n'; 95 return ret; 96 } 97 + DEVICE_ATTR(cpuaffinity, S_IRUGO, pci_bus_show_cpuaffinity, NULL); 98 99 /* 100 * PCI Bus Class 101 */ 102 + static void release_pcibus_dev(struct device *dev) 103 { 104 + struct pci_bus *pci_bus = to_pci_bus(dev); 105 106 if (pci_bus->bridge) 107 put_device(pci_bus->bridge); ··· 109 110 static struct class pcibus_class = { 111 .name = "pci_bus", 112 + .dev_release = &release_pcibus_dev, 113 }; 114 115 static int __init pcibus_class_init(void) ··· 392 { 393 struct pci_bus *child; 394 int i; 395 396 /* 397 * Allocate a new bus, and inherit stuff from the parent.. 
··· 408 child->bus_flags = parent->bus_flags; 409 child->bridge = get_device(&bridge->dev); 410 411 + /* initialize some portions of the bus device, but don't register it 412 + * now as the parent is not properly set up yet. This device will get 413 + * registered later in pci_bus_add_devices() 414 + */ 415 + child->dev.class = &pcibus_class; 416 + sprintf(child->dev.bus_id, "%04x:%02x", pci_domain_nr(child), busnr); 417 418 /* 419 * Set up the primary, secondary and subordinate ··· 434 bridge->subordinate = child; 435 436 return child; 437 } 438 439 struct pci_bus *pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev, int busnr) ··· 470 parent = parent->parent; 471 } 472 } 473 474 /* 475 * If it's a bridge, configure it and scan the bus behind it. ··· 641 (child->number > bus->subordinate) || 642 (child->number < bus->number) || 643 (child->subordinate < bus->number)) { 644 + pr_debug("PCI: Bus #%02x (-#%02x) is %s " 645 "hidden behind%s bridge #%02x (-#%02x)\n", 646 child->number, child->subordinate, 647 (bus->number > child->subordinate && 648 bus->subordinate < child->number) ? 649 + "wholly" : "partially", 650 + bus->self->transparent ? " transparent" : "", 651 bus->number, bus->subordinate); 652 } 653 bus = bus->parent; ··· 971 972 return dev; 973 } 974 + EXPORT_SYMBOL(pci_scan_single_device); 975 976 /** 977 * pci_scan_slot - scan a PCI slot on a bus for devices. 
··· 1011 break; 1012 } 1013 } 1014 + 1015 + if (bus->self) 1016 + pcie_aspm_init_link_state(bus->self); 1017 + 1018 return nr; 1019 } 1020 ··· 1103 goto dev_reg_err; 1104 b->bridge = get_device(dev); 1105 1106 + b->dev.class = &pcibus_class; 1107 + b->dev.parent = b->bridge; 1108 + sprintf(b->dev.bus_id, "%04x:%02x", pci_domain_nr(b), bus); 1109 + error = device_register(&b->dev); 1110 if (error) 1111 goto class_dev_reg_err; 1112 + error = device_create_file(&b->dev, &dev_attr_cpuaffinity); 1113 if (error) 1114 + goto dev_create_file_err; 1115 1116 /* Create legacy_io and legacy_mem files for this bus */ 1117 pci_create_legacy_files(b); 1118 1119 b->number = b->secondary = bus; 1120 b->resource[0] = &ioport_resource; ··· 1125 1126 return b; 1127 1128 + dev_create_file_err: 1129 + device_unregister(&b->dev); 1130 class_dev_reg_err: 1131 device_unregister(dev); 1132 dev_reg_err: ··· 1140 kfree(b); 1141 return NULL; 1142 } 1143 1144 struct pci_bus *pci_scan_bus_parented(struct device *parent, 1145 int bus, struct pci_ops *ops, void *sysdata) ··· 1159 EXPORT_SYMBOL(pci_do_scan_bus); 1160 EXPORT_SYMBOL(pci_scan_slot); 1161 EXPORT_SYMBOL(pci_scan_bridge); 1162 EXPORT_SYMBOL_GPL(pci_scan_child_bus); 1163 #endif 1164
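The probe.c hunks above name the new pci_bus devices with `sprintf(..., "%04x:%02x", pci_domain_nr(...), bus)`. A small sketch of the same naming scheme (the helper name is chosen here for illustration):

```c
#include <stdio.h>

/* Format a pci_bus device name the way pci_add_new_bus() and
 * pci_create_bus() do above: four hex digits of domain, a colon,
 * two hex digits of bus number, e.g. "0000:01". */
void pci_bus_name(char *buf, int domain, int busnr)
{
	sprintf(buf, "%04x:%02x", domain, busnr);
}
```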
+9 -8
drivers/pci/proc.c
··· 11 #include <linux/module.h> 12 #include <linux/proc_fs.h> 13 #include <linux/seq_file.h> 14 #include <linux/capability.h> 15 #include <asm/uaccess.h> 16 #include <asm/byteorder.h> ··· 203 int write_combine; 204 }; 205 206 - static int proc_bus_pci_ioctl(struct inode *inode, struct file *file, unsigned int cmd, unsigned long arg) 207 { 208 - const struct proc_dir_entry *dp = PDE(inode); 209 struct pci_dev *dev = dp->data; 210 #ifdef HAVE_PCI_MMAP 211 struct pci_filp_private *fpriv = file->private_data; 212 #endif /* HAVE_PCI_MMAP */ 213 int ret = 0; 214 215 switch (cmd) { 216 case PCIIOC_CONTROLLER: ··· 243 break; 244 }; 245 246 return ret; 247 } 248 ··· 296 .llseek = proc_bus_pci_lseek, 297 .read = proc_bus_pci_read, 298 .write = proc_bus_pci_write, 299 - .ioctl = proc_bus_pci_ioctl, 300 #ifdef HAVE_PCI_MMAP 301 .open = proc_bus_pci_open, 302 .release = proc_bus_pci_release, ··· 375 return 0; 376 } 377 378 - static struct seq_operations proc_bus_pci_devices_op = { 379 .start = pci_seq_start, 380 .next = pci_seq_next, 381 .stop = pci_seq_stop, ··· 484 } 485 486 __initcall(pci_proc_init); 487 - 488 - #ifdef CONFIG_HOTPLUG 489 - EXPORT_SYMBOL(pci_proc_detach_bus); 490 - #endif 491
··· 11 #include <linux/module.h> 12 #include <linux/proc_fs.h> 13 #include <linux/seq_file.h> 14 + #include <linux/smp_lock.h> 15 #include <linux/capability.h> 16 #include <asm/uaccess.h> 17 #include <asm/byteorder.h> ··· 202 int write_combine; 203 }; 204 205 + static long proc_bus_pci_ioctl(struct file *file, unsigned int cmd, 206 + unsigned long arg) 207 { 208 + const struct proc_dir_entry *dp = PDE(file->f_dentry->d_inode); 209 struct pci_dev *dev = dp->data; 210 #ifdef HAVE_PCI_MMAP 211 struct pci_filp_private *fpriv = file->private_data; 212 #endif /* HAVE_PCI_MMAP */ 213 int ret = 0; 214 + 215 + lock_kernel(); 216 217 switch (cmd) { 218 case PCIIOC_CONTROLLER: ··· 239 break; 240 }; 241 242 + unlock_kernel(); 243 return ret; 244 } 245 ··· 291 .llseek = proc_bus_pci_lseek, 292 .read = proc_bus_pci_read, 293 .write = proc_bus_pci_write, 294 + .unlocked_ioctl = proc_bus_pci_ioctl, 295 #ifdef HAVE_PCI_MMAP 296 .open = proc_bus_pci_open, 297 .release = proc_bus_pci_release, ··· 370 return 0; 371 } 372 373 + static const struct seq_operations proc_bus_pci_devices_op = { 374 .start = pci_seq_start, 375 .next = pci_seq_next, 376 .stop = pci_seq_stop, ··· 479 } 480 481 __initcall(pci_proc_init); 482
+281 -202
drivers/pci/quirks.c
··· 21 #include <linux/init.h> 22 #include <linux/delay.h> 23 #include <linux/acpi.h> 24 #include "pci.h" 25 26 /* The Mellanox Tavor device gives false positive parity errors ··· 47 while ((d = pci_get_device(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371SB_0, d))) { 48 pci_read_config_byte(d, 0x82, &dlc); 49 if (!(dlc & 1<<1)) { 50 - printk(KERN_ERR "PCI: PIIX3: Enabling Passive Release on %s\n", pci_name(d)); 51 dlc |= 1<<1; 52 pci_write_config_byte(d, 0x82, dlc); 53 } 54 } 55 } 56 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_passive_release ); 57 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_passive_release ); 58 59 /* The VIA VP2/VP3/MVP3 seem to have some 'features'. There may be a workaround 60 but VIA don't answer queries. If you happen to have good contacts at VIA ··· 69 { 70 if (!isa_dma_bridge_buggy) { 71 isa_dma_bridge_buggy=1; 72 - printk(KERN_INFO "Activating ISA DMA hang workarounds.\n"); 73 } 74 } 75 /* 76 * Its not totally clear which chipsets are the problematic ones 77 * We know 82C586 and 82C596 variants are affected. 
78 */ 79 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_0, quirk_isa_dma_hangs ); 80 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C596, quirk_isa_dma_hangs ); 81 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371SB_0, quirk_isa_dma_hangs ); 82 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1533, quirk_isa_dma_hangs ); 83 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_1, quirk_isa_dma_hangs ); 84 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_2, quirk_isa_dma_hangs ); 85 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_3, quirk_isa_dma_hangs ); 86 87 int pci_pci_problems; 88 EXPORT_SYMBOL(pci_pci_problems); ··· 93 static void __devinit quirk_nopcipci(struct pci_dev *dev) 94 { 95 if ((pci_pci_problems & PCIPCI_FAIL)==0) { 96 - printk(KERN_INFO "Disabling direct PCI/PCI transfers.\n"); 97 pci_pci_problems |= PCIPCI_FAIL; 98 } 99 } 100 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_5597, quirk_nopcipci ); 101 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_496, quirk_nopcipci ); 102 103 static void __devinit quirk_nopciamd(struct pci_dev *dev) 104 { ··· 106 pci_read_config_byte(dev, 0x08, &rev); 107 if (rev == 0x13) { 108 /* Erratum 24 */ 109 - printk(KERN_INFO "Chipset erratum: Disabling direct PCI/AGP transfers.\n"); 110 pci_pci_problems |= PCIAGP_FAIL; 111 } 112 } 113 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8151_0, quirk_nopciamd ); 114 115 /* 116 * Triton requires workarounds to be used by the drivers ··· 118 static void __devinit quirk_triton(struct pci_dev *dev) 119 { 120 if ((pci_pci_problems&PCIPCI_TRITON)==0) { 121 - printk(KERN_INFO "Limiting direct PCI/PCI transfers.\n"); 122 pci_pci_problems |= PCIPCI_TRITON; 123 } 124 } 125 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82437, quirk_triton ); 126 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 
PCI_DEVICE_ID_INTEL_82437VX, quirk_triton ); 127 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82439, quirk_triton ); 128 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82439TX, quirk_triton ); 129 130 /* 131 * VIA Apollo KT133 needs PCI latency patch ··· 140 static void quirk_vialatency(struct pci_dev *dev) 141 { 142 struct pci_dev *p; 143 - u8 rev; 144 u8 busarb; 145 /* Ok we have a potential problem chipset here. Now see if we have 146 a buggy southbridge */ 147 148 p = pci_get_device(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, NULL); 149 if (p!=NULL) { 150 - pci_read_config_byte(p, PCI_CLASS_REVISION, &rev); 151 /* 0x40 - 0x4f == 686B, 0x10 - 0x2f == 686A; thanks Dan Hollis */ 152 /* Check for buggy part revisions */ 153 - if (rev < 0x40 || rev > 0x42) 154 goto exit; 155 } else { 156 p = pci_get_device(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8231, NULL); 157 if (p==NULL) /* No problem parts */ 158 goto exit; 159 - pci_read_config_byte(p, PCI_CLASS_REVISION, &rev); 160 /* Check for buggy part revisions */ 161 - if (rev < 0x10 || rev > 0x12) 162 goto exit; 163 } 164 ··· 178 busarb &= ~(1<<5); 179 busarb |= (1<<4); 180 pci_write_config_byte(dev, 0x76, busarb); 181 - printk(KERN_INFO "Applying VIA southbridge workaround.\n"); 182 exit: 183 pci_dev_put(p); 184 } 185 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8363_0, quirk_vialatency ); 186 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8371_1, quirk_vialatency ); 187 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8361, quirk_vialatency ); 188 /* Must restore this on a resume from RAM */ 189 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8363_0, quirk_vialatency ); 190 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8371_1, quirk_vialatency ); 191 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8361, quirk_vialatency ); 192 193 /* 194 * VIA Apollo VP3 needs ETBF on BT848/878 ··· 
196 static void __devinit quirk_viaetbf(struct pci_dev *dev) 197 { 198 if ((pci_pci_problems&PCIPCI_VIAETBF)==0) { 199 - printk(KERN_INFO "Limiting direct PCI/PCI transfers.\n"); 200 pci_pci_problems |= PCIPCI_VIAETBF; 201 } 202 } 203 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C597_0, quirk_viaetbf ); 204 205 static void __devinit quirk_vsfx(struct pci_dev *dev) 206 { 207 if ((pci_pci_problems&PCIPCI_VSFX)==0) { 208 - printk(KERN_INFO "Limiting direct PCI/PCI transfers.\n"); 209 pci_pci_problems |= PCIPCI_VSFX; 210 } 211 } 212 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C576, quirk_vsfx ); 213 214 /* 215 * Ali Magik requires workarounds to be used by the drivers ··· 220 static void __init quirk_alimagik(struct pci_dev *dev) 221 { 222 if ((pci_pci_problems&PCIPCI_ALIMAGIK)==0) { 223 - printk(KERN_INFO "Limiting direct PCI/PCI transfers.\n"); 224 pci_pci_problems |= PCIPCI_ALIMAGIK|PCIPCI_TRITON; 225 } 226 } 227 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1647, quirk_alimagik ); 228 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1651, quirk_alimagik ); 229 230 /* 231 * Natoma has some interesting boundary conditions with Zoran stuff ··· 234 static void __devinit quirk_natoma(struct pci_dev *dev) 235 { 236 if ((pci_pci_problems&PCIPCI_NATOMA)==0) { 237 - printk(KERN_INFO "Limiting direct PCI/PCI transfers.\n"); 238 pci_pci_problems |= PCIPCI_NATOMA; 239 } 240 } 241 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_natoma ); 242 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443LX_0, quirk_natoma ); 243 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443LX_1, quirk_natoma ); 244 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_0, quirk_natoma ); 245 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_1, quirk_natoma ); 246 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 
PCI_DEVICE_ID_INTEL_82443BX_2, quirk_natoma ); 247 248 /* 249 * This chip can cause PCI parity errors if config register 0xA0 is read ··· 253 { 254 dev->cfg_size = 0xA0; 255 } 256 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CITRINE, quirk_citrine ); 257 258 /* 259 * S3 868 and 968 chips report region size equal to 32M, but they decode 64M. ··· 268 r->end = 0x3ffffff; 269 } 270 } 271 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_S3, PCI_DEVICE_ID_S3_868, quirk_s3_64M ); 272 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_S3, PCI_DEVICE_ID_S3_968, quirk_s3_64M ); 273 274 static void __devinit quirk_io_region(struct pci_dev *dev, unsigned region, 275 unsigned size, int nr, const char *name) ··· 290 pcibios_bus_to_resource(dev, res, &bus_region); 291 292 pci_claim_resource(dev, nr); 293 - printk("PCI quirk: region %04x-%04x claimed by %s\n", region, region + size - 1, name); 294 } 295 } 296 ··· 300 */ 301 static void __devinit quirk_ati_exploding_mce(struct pci_dev *dev) 302 { 303 - printk(KERN_INFO "ATI Northbridge, reserving I/O ports 0x3b0 to 0x3bb.\n"); 304 /* Mae rhaid i ni beidio ag edrych ar y lleoliadiau I/O hyn */ 305 request_region(0x3b0, 0x0C, "RadeonIGP"); 306 request_region(0x3d3, 0x01, "RadeonIGP"); 307 } 308 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_RS100, quirk_ati_exploding_mce ); 309 310 /* 311 * Let's make the southbridge information explicit instead ··· 327 pci_read_config_word(dev, 0xE2, &region); 328 quirk_io_region(dev, region, 32, PCI_BRIDGE_RESOURCES+1, "ali7101 SMB"); 329 } 330 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M7101, quirk_ali7101_acpi ); 331 332 static void piix4_io_quirk(struct pci_dev *dev, const char *name, unsigned int port, unsigned int enable) 333 { ··· 352 * let's get enough confirmation reports first. 
353 */ 354 base &= -size; 355 - printk("%s PIO at %04x-%04x\n", name, base, base + size - 1); 356 } 357 358 static void piix4_mem_quirk(struct pci_dev *dev, const char *name, unsigned int port, unsigned int enable) ··· 377 * reserve it, but let's get enough confirmation reports first. 378 */ 379 base &= -size; 380 - printk("%s MMIO at %04x-%04x\n", name, base, base + size - 1); 381 } 382 383 /* ··· 416 piix4_io_quirk(dev, "PIIX4 devres I", 0x78, 1 << 20); 417 piix4_io_quirk(dev, "PIIX4 devres J", 0x7c, 1 << 20); 418 } 419 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371AB_3, quirk_piix4_acpi ); 420 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443MX_3, quirk_piix4_acpi ); 421 422 /* 423 * ICH4, ICH4-M, ICH5, ICH5-M ACPI: Three IO regions pointed to by longwords at ··· 434 pci_read_config_dword(dev, 0x58, &region); 435 quirk_io_region(dev, region, 64, PCI_BRIDGE_RESOURCES+1, "ICH4 GPIO"); 436 } 437 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AA_0, quirk_ich4_lpc_acpi ); 438 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AB_0, quirk_ich4_lpc_acpi ); 439 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0, quirk_ich4_lpc_acpi ); 440 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_10, quirk_ich4_lpc_acpi ); 441 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_0, quirk_ich4_lpc_acpi ); 442 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_12, quirk_ich4_lpc_acpi ); 443 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_0, quirk_ich4_lpc_acpi ); 444 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_12, quirk_ich4_lpc_acpi ); 445 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0, quirk_ich4_lpc_acpi ); 446 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_1, 
quirk_ich4_lpc_acpi ); 447 448 static void __devinit quirk_ich6_lpc_acpi(struct pci_dev *dev) 449 { ··· 455 pci_read_config_dword(dev, 0x48, &region); 456 quirk_io_region(dev, region, 64, PCI_BRIDGE_RESOURCES+1, "ICH6 GPIO"); 457 } 458 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_0, quirk_ich6_lpc_acpi ); 459 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, quirk_ich6_lpc_acpi ); 460 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH7_0, quirk_ich6_lpc_acpi ); 461 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH7_1, quirk_ich6_lpc_acpi ); 462 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH7_31, quirk_ich6_lpc_acpi ); 463 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_0, quirk_ich6_lpc_acpi ); 464 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_2, quirk_ich6_lpc_acpi ); 465 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_3, quirk_ich6_lpc_acpi ); 466 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_1, quirk_ich6_lpc_acpi ); 467 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_4, quirk_ich6_lpc_acpi ); 468 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_2, quirk_ich6_lpc_acpi ); 469 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_4, quirk_ich6_lpc_acpi ); 470 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_7, quirk_ich6_lpc_acpi ); 471 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_8, quirk_ich6_lpc_acpi ); 472 473 /* 474 * VIA ACPI: One IO region pointed to by longword at ··· 484 quirk_io_region(dev, region, 256, PCI_BRIDGE_RESOURCES, "vt82c586 ACPI"); 485 } 486 } 487 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_3, quirk_vt82c586_acpi ); 488 489 /* 490 * VIA VT82C686 ACPI: Three IO region pointed to 
by (long)words at ··· 507 smb &= PCI_BASE_ADDRESS_IO_MASK; 508 quirk_io_region(dev, smb, 16, PCI_BRIDGE_RESOURCES + 2, "vt82c686 SMB"); 509 } 510 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686_4, quirk_vt82c686_acpi ); 511 512 /* 513 * VIA VT8235 ISA Bridge: Two IO regions pointed to by words at ··· 549 else 550 tmp = 0x1f; /* all known bits (4-0) routed to external APIC */ 551 552 - printk(KERN_INFO "PCI: %sbling Via external APIC routing\n", 553 tmp == 0 ? "Disa" : "Ena"); 554 555 /* Offset 0x58: External APIC IRQ output control */ 556 pci_write_config_byte (dev, 0x58, tmp); 557 } 558 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, quirk_via_ioapic ); 559 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, quirk_via_ioapic ); 560 561 /* 562 * VIA 8237: Some BIOSs don't set the 'Bypass APIC De-Assert Message' Bit. ··· 571 572 pci_read_config_byte(dev, 0x5B, &misc_control2); 573 if (!(misc_control2 & BYPASS_APIC_DEASSERT)) { 574 - printk(KERN_INFO "PCI: Bypassing VIA 8237 APIC De-Assert Message\n"); 575 pci_write_config_byte(dev, 0x5B, misc_control2|BYPASS_APIC_DEASSERT); 576 } 577 } ··· 590 static void __devinit quirk_amd_ioapic(struct pci_dev *dev) 591 { 592 if (dev->revision >= 0x02) { 593 - printk(KERN_WARNING "I/O APIC: AMD Erratum #22 may be present. 
In the event of instability try\n"); 594 - printk(KERN_WARNING " : booting with the \"noapic\" option.\n"); 595 } 596 } 597 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_VIPER_7410, quirk_amd_ioapic ); 598 599 static void __init quirk_ioapic_rmw(struct pci_dev *dev) 600 { 601 if (dev->devfn == 0 && dev->bus->number == 0) 602 sis_apic_bug = 1; 603 } 604 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SI, PCI_ANY_ID, quirk_ioapic_rmw ); 605 606 #define AMD8131_revA0 0x01 607 #define AMD8131_revB0 0x11 ··· 615 return; 616 617 if (dev->revision == AMD8131_revA0 || dev->revision == AMD8131_revB0) { 618 - printk(KERN_INFO "Fixing up AMD8131 IOAPIC mode\n"); 619 pci_read_config_byte( dev, AMD8131_MISC, &tmp); 620 tmp &= ~(1 << AMD8131_NIOAMODE_BIT); 621 pci_write_config_byte( dev, AMD8131_MISC, tmp); ··· 632 static void __init quirk_amd_8131_mmrbc(struct pci_dev *dev) 633 { 634 if (dev->subordinate && dev->revision <= 0x12) { 635 - printk(KERN_INFO "AMD8131 rev %x detected, disabling PCI-X " 636 - "MMRBC\n", dev->revision); 637 dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MMRBC; 638 } 639 } ··· 658 if (irq && (irq != 2)) 659 d->irq = irq; 660 } 661 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_3, quirk_via_acpi ); 662 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686_4, quirk_via_acpi ); 663 664 665 /* ··· 740 741 pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &irq); 742 if (new_irq != irq) { 743 - printk(KERN_INFO "PCI: VIA VLink IRQ fixup for %s, from %d to %d\n", 744 - pci_name(dev), irq, new_irq); 745 udelay(15); /* unknown if delay really needed */ 746 pci_write_config_byte(dev, PCI_INTERRUPT_LINE, new_irq); 747 } ··· 759 pci_write_config_byte(dev, 0xfc, 0); 760 pci_read_config_word(dev, PCI_DEVICE_ID, &dev->device); 761 } 762 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C597_0, quirk_vt82c598_id ); 763 764 /* 765 * CardBus controllers have a legacy base address that enables them ··· 789 
pci_read_config_dword(dev, 0x4C, &pcic); 790 if ((pcic&6)!=6) { 791 pcic |= 6; 792 - printk(KERN_WARNING "BIOS failed to enable PCI standards compliance, fixing this error.\n"); 793 pci_write_config_dword(dev, 0x4C, pcic); 794 pci_read_config_dword(dev, 0x84, &pcic); 795 pcic |= (1<<23); /* Required in this mode */ 796 pci_write_config_dword(dev, 0x84, pcic); 797 } 798 } 799 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_FE_GATE_700C, quirk_amd_ordering ); 800 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_FE_GATE_700C, quirk_amd_ordering ); 801 802 /* 803 * DreamWorks provided workaround for Dunord I-3000 problem ··· 812 r->start = 0; 813 r->end = 0xffffff; 814 } 815 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_DUNORD, PCI_DEVICE_ID_DUNORD_I3000, quirk_dunord ); 816 817 /* 818 * i82380FB mobile docking controller: its PCI-to-PCI bridge ··· 824 { 825 dev->transparent = 1; 826 } 827 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82380FB, quirk_transparent_bridge ); 828 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TOSHIBA, 0x605, quirk_transparent_bridge ); 829 830 /* 831 * Common misconfiguration of the MediaGX/Geode PCI master that will ··· 839 pci_read_config_byte(dev, 0x41, &reg); 840 if (reg & 2) { 841 reg &= ~2; 842 - printk(KERN_INFO "PCI: Fixup for MediaGX/Geode Slave Disconnect Boundary (0x41=0x%02x)\n", reg); 843 pci_write_config_byte(dev, 0x41, reg); 844 } 845 } 846 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_PCI_MASTER, quirk_mediagx_master ); 847 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_PCI_MASTER, quirk_mediagx_master ); 848 849 /* 850 * Ensure C0 rev restreaming is off. This is normally done by ··· 861 if (config & (1<<6)) { 862 config &= ~(1<<6); 863 pci_write_config_word(pdev, 0x40, config); 864 - printk(KERN_INFO "PCI: C0 revision 450NX. 
Disabling PCI restreaming.\n"); 865 } 866 } 867 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82454NX, quirk_disable_pxb ); 868 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82454NX, quirk_disable_pxb ); 869 870 871 static void __devinit quirk_sb600_sata(struct pci_dev *pdev) ··· 900 /* PCI layer will sort out resources */ 901 } 902 } 903 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_CSB5IDE, quirk_svwks_csb5ide ); 904 905 /* 906 * Intel 82801CAM ICH3-M datasheet says IDE modes must be the same ··· 912 pci_read_config_byte(pdev, PCI_CLASS_PROG, &prog); 913 914 if (((prog & 1) && !(prog & 4)) || ((prog & 4) && !(prog & 1))) { 915 - printk(KERN_INFO "PCI: IDE mode mismatch; forcing legacy mode\n"); 916 prog &= ~5; 917 pdev->class &= ~5; 918 pci_write_config_byte(pdev, PCI_CLASS_PROG, prog); ··· 927 { 928 dev->class = PCI_CLASS_BRIDGE_EISA << 8; 929 } 930 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82375, quirk_eisa_bridge ); 931 932 933 /* ··· 1020 case 0x12bd: /* HP D530 */ 1021 asus_hides_smbus = 1; 1022 } 1023 else if (dev->device == PCI_DEVICE_ID_INTEL_82915GM_HB) 1024 switch (dev->subsystem_device) { 1025 case 0x099c: /* HP Compaq nx6110 */ ··· 1052 } 1053 } 1054 } 1055 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82845_HB, asus_hides_smbus_hostbridge ); 1056 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82845G_HB, asus_hides_smbus_hostbridge ); 1057 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82850_HB, asus_hides_smbus_hostbridge ); 1058 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82865_HB, asus_hides_smbus_hostbridge ); 1059 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_7205_0, asus_hides_smbus_hostbridge ); 1060 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7501_MCH, asus_hides_smbus_hostbridge ); 1061 - 
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82855PM_HB, asus_hides_smbus_hostbridge ); 1062 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82855GM_HB, asus_hides_smbus_hostbridge ); 1063 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82915GM_HB, asus_hides_smbus_hostbridge ); 1064 1065 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82810_IG3, asus_hides_smbus_hostbridge ); 1066 1067 static void asus_hides_smbus_lpc(struct pci_dev *dev) 1068 { ··· 1077 pci_write_config_word(dev, 0xF2, val & (~0x8)); 1078 pci_read_config_word(dev, 0xF2, &val); 1079 if (val & 0x8) 1080 - printk(KERN_INFO "PCI: i801 SMBus device continues to play 'hide and seek'! 0x%x\n", val); 1081 else 1082 - printk(KERN_INFO "PCI: Enabled i801 SMBus device\n"); 1083 } 1084 } 1085 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AA_0, asus_hides_smbus_lpc ); 1086 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_0, asus_hides_smbus_lpc ); 1087 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0, asus_hides_smbus_lpc ); 1088 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_0, asus_hides_smbus_lpc ); 1089 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_12, asus_hides_smbus_lpc ); 1090 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_12, asus_hides_smbus_lpc ); 1091 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0, asus_hides_smbus_lpc ); 1092 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AA_0, asus_hides_smbus_lpc ); 1093 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_0, asus_hides_smbus_lpc ); 1094 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0, asus_hides_smbus_lpc ); 1095 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 
PCI_DEVICE_ID_INTEL_82801CA_0, asus_hides_smbus_lpc ); 1096 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_12, asus_hides_smbus_lpc ); 1097 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_12, asus_hides_smbus_lpc ); 1098 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0, asus_hides_smbus_lpc ); 1099 1100 static void asus_hides_smbus_lpc_ich6(struct pci_dev *dev) 1101 { ··· 1110 val=readl(base + 0x3418); /* read the Function Disable register, dword mode only */ 1111 writel(val & 0xFFFFFFF7, base + 0x3418); /* enable the SMBus device */ 1112 iounmap(base); 1113 - printk(KERN_INFO "PCI: Enabled ICH6/i801 SMBus device\n"); 1114 } 1115 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, asus_hides_smbus_lpc_ich6 ); 1116 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, asus_hides_smbus_lpc_ich6 ); 1117 1118 /* 1119 * SiS 96x south bridge: BIOS typically hides SMBus device... 
··· 1123 u8 val = 0; 1124 pci_read_config_byte(dev, 0x77, &val); 1125 if (val & 0x10) { 1126 - printk(KERN_INFO "Enabling SiS 96x SMBus.\n"); 1127 pci_write_config_byte(dev, 0x77, val & ~0x10); 1128 } 1129 } 1130 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_961, quirk_sis_96x_smbus ); 1131 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_962, quirk_sis_96x_smbus ); 1132 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_963, quirk_sis_96x_smbus ); 1133 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_LPC, quirk_sis_96x_smbus ); 1134 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_961, quirk_sis_96x_smbus ); 1135 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_962, quirk_sis_96x_smbus ); 1136 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_963, quirk_sis_96x_smbus ); 1137 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_LPC, quirk_sis_96x_smbus ); 1138 1139 /* 1140 * ... This is further complicated by the fact that some SiS96x south ··· 1167 dev->device = devid; 1168 quirk_sis_96x_smbus(dev); 1169 } 1170 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_503, quirk_sis_503 ); 1171 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_503, quirk_sis_503 ); 1172 1173 1174 /* ··· 1195 pci_write_config_byte(dev, 0x50, val & (~0xc0)); 1196 pci_read_config_byte(dev, 0x50, &val); 1197 if (val & 0xc0) 1198 - printk(KERN_INFO "PCI: onboard AC97/MC97 devices continue to play 'hide and seek'! 
0x%x\n", val); 1199 else 1200 - printk(KERN_INFO "PCI: enabled onboard AC97/MC97 devices\n"); 1201 } 1202 } 1203 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237, asus_hides_ac97_lpc ); 1204 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237, asus_hides_ac97_lpc ); 1205 1206 #if defined(CONFIG_ATA) || defined(CONFIG_ATA_MODULE) 1207 ··· 1296 } 1297 1298 } 1299 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_EESSC, quirk_alder_ioapic ); 1300 #endif 1301 1302 int pcie_mch_quirk; ··· 1306 { 1307 pcie_mch_quirk = 1; 1308 } 1309 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7520_MCH, quirk_pcie_mch ); 1310 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7320_MCH, quirk_pcie_mch ); 1311 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7525_MCH, quirk_pcie_mch ); 1312 1313 1314 /* ··· 1318 static void __devinit quirk_pcie_pxh(struct pci_dev *dev) 1319 { 1320 pci_msi_off(dev); 1321 - 1322 dev->no_msi = 1; 1323 - 1324 - printk(KERN_WARNING "PCI: PXH quirk detected, " 1325 - "disabling MSI for SHPC device\n"); 1326 } 1327 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PXHD_0, quirk_pcie_pxh); 1328 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PXHD_1, quirk_pcie_pxh); ··· 1400 case PCI_DEVICE_ID_NETMOS_9855: 1401 if ((dev->class >> 8) == PCI_CLASS_COMMUNICATION_SERIAL && 1402 num_parallel) { 1403 - printk(KERN_INFO "PCI: Netmos %04x (%u parallel, " 1404 "%u serial); changing class SERIAL to OTHER " 1405 "(use parport_serial)\n", 1406 dev->device, num_parallel, num_serial); ··· 1413 1414 static void __devinit quirk_e100_interrupt(struct pci_dev *dev) 1415 { 1416 - u16 command; 1417 u8 __iomem *csr; 1418 u8 cmd_hi; 1419 1420 switch (dev->device) { 1421 /* PCI IDs taken from drivers/net/e100.c */ ··· 1450 if (!(command & PCI_COMMAND_MEMORY) || !pci_resource_start(dev, 0)) 1451 return; 1452 1453 /* Convert from PCI bus to 
resource space. */ 1454 csr = ioremap(pci_resource_start(dev, 0), 8); 1455 if (!csr) { 1456 - printk(KERN_WARNING "PCI: Can't map %s e100 registers\n", 1457 - pci_name(dev)); 1458 return; 1459 } 1460 1461 cmd_hi = readb(csr + 3); 1462 if (cmd_hi == 0) { 1463 - printk(KERN_WARNING "PCI: Firmware left %s e100 interrupts " 1464 - "enabled, disabling\n", pci_name(dev)); 1465 writeb(1, csr + 3); 1466 } 1467 ··· 1486 */ 1487 1488 if (dev->class == PCI_CLASS_NOT_DEFINED) { 1489 - printk(KERN_INFO "NCR 53c810 rev 1 detected, setting PCI class.\n"); 1490 dev->class = PCI_CLASS_STORAGE_SCSI; 1491 } 1492 } ··· 1497 while (f < end) { 1498 if ((f->vendor == dev->vendor || f->vendor == (u16) PCI_ANY_ID) && 1499 (f->device == dev->device || f->device == (u16) PCI_ANY_ID)) { 1500 - pr_debug("PCI: Calling quirk %p for %s\n", f->hook, pci_name(dev)); 1501 f->hook(dev); 1502 } 1503 f++; ··· 1569 pci_read_config_word(dev, 0x40, &en1k); 1570 1571 if (en1k & 0x200) { 1572 - printk(KERN_INFO "PCI: Enable I/O Space to 1 KB Granularity\n"); 1573 1574 pci_read_config_byte(dev, PCI_IO_BASE, &io_base_lo); 1575 pci_read_config_byte(dev, PCI_IO_LIMIT, &io_limit_lo); ··· 1601 iobl_adr_1k = iobl_adr | (res->start >> 8) | (res->end & 0xfc00); 1602 1603 if (iobl_adr != iobl_adr_1k) { 1604 - printk(KERN_INFO "PCI: Fixing P64H2 IOBL_ADR from 0x%x to 0x%x for 1 KB Granularity\n", 1605 iobl_adr,iobl_adr_1k); 1606 pci_write_config_word(dev, PCI_IO_BASE, iobl_adr_1k); 1607 } ··· 1619 if (pci_read_config_byte(dev, 0xf41, &b) == 0) { 1620 if (!(b & 0x20)) { 1621 pci_write_config_byte(dev, 0xf41, b | 0x20); 1622 - printk(KERN_INFO 1623 - "PCI: Linking AER extended capability on %s\n", 1624 - pci_name(dev)); 1625 } 1626 } 1627 } ··· 1628 quirk_nvidia_ck804_pcie_aer_ext_cap); 1629 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_CK804_PCIE, 1630 quirk_nvidia_ck804_pcie_aer_ext_cap); 1631 1632 #ifdef CONFIG_PCI_MSI 1633 /* Some chipsets do not support MSI. 
We cannot easily rely on setting ··· 1667 static void __init quirk_disable_all_msi(struct pci_dev *dev) 1668 { 1669 pci_no_msi(); 1670 - printk(KERN_WARNING "PCI: MSI quirk detected. MSI deactivated.\n"); 1671 } 1672 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_GCNB_LE, quirk_disable_all_msi); 1673 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_RS400_200, quirk_disable_all_msi); ··· 1678 static void __devinit quirk_disable_msi(struct pci_dev *dev) 1679 { 1680 if (dev->subordinate) { 1681 - printk(KERN_WARNING "PCI: MSI quirk detected. " 1682 - "PCI_BUS_FLAGS_NO_MSI set for %s subordinate bus.\n", 1683 - pci_name(dev)); 1684 dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI; 1685 } 1686 } ··· 1698 if (pci_read_config_byte(dev, pos + HT_MSI_FLAGS, 1699 &flags) == 0) 1700 { 1701 - printk(KERN_INFO "PCI: Found %s HT MSI Mapping on %s\n", 1702 flags & HT_MSI_FLAGS_ENABLE ? 1703 - "enabled" : "disabled", pci_name(dev)); 1704 return (flags & HT_MSI_FLAGS_ENABLE) != 0; 1705 } 1706 ··· 1714 static void __devinit quirk_msi_ht_cap(struct pci_dev *dev) 1715 { 1716 if (dev->subordinate && !msi_ht_cap_enabled(dev)) { 1717 - printk(KERN_WARNING "PCI: MSI quirk detected. " 1718 - "MSI disabled on chipset %s.\n", 1719 - pci_name(dev)); 1720 dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI; 1721 } 1722 } 1723 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT2000_PCIE, 1724 quirk_msi_ht_cap); 1725 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, 1726 - PCI_DEVICE_ID_SERVERWORKS_HT1000_PXB, 1727 - quirk_msi_ht_cap); 1728 1729 /* The nVidia CK804 chipset may have 2 HT MSI mappings. 1730 * MSI are supported if the MSI capability set in any of these mappings. ··· 1766 if (!pdev) 1767 return; 1768 if (!msi_ht_cap_enabled(dev) && !msi_ht_cap_enabled(pdev)) { 1769 - printk(KERN_WARNING "PCI: MSI quirk detected. 
" 1770 - "MSI disabled on chipset %s.\n", 1771 - pci_name(dev)); 1772 dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI; 1773 } 1774 pci_dev_put(pdev); ··· 1778 static void __devinit quirk_msi_intx_disable_bug(struct pci_dev *dev) 1779 { 1780 dev->dev_flags |= PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG; 1781 } 1782 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, 1783 PCI_DEVICE_ID_TIGON3_5780, ··· 1816 quirk_msi_intx_disable_bug); 1817 1818 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4390, 1819 - quirk_msi_intx_disable_bug); 1820 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4391, 1821 - quirk_msi_intx_disable_bug); 1822 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4392, 1823 - quirk_msi_intx_disable_bug); 1824 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4393, 1825 - quirk_msi_intx_disable_bug); 1826 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4394, 1827 - quirk_msi_intx_disable_bug); 1828 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4395, 1829 - quirk_msi_intx_disable_bug); 1830 1831 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4373, 1832 quirk_msi_intx_disable_bug);
··· 21 #include <linux/init.h> 22 #include <linux/delay.h> 23 #include <linux/acpi.h> 24 + #include <linux/kallsyms.h> 25 #include "pci.h" 26 27 /* The Mellanox Tavor device gives false positive parity errors ··· 46 while ((d = pci_get_device(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371SB_0, d))) { 47 pci_read_config_byte(d, 0x82, &dlc); 48 if (!(dlc & 1<<1)) { 49 + dev_err(&d->dev, "PIIX3: Enabling Passive Release\n"); 50 dlc |= 1<<1; 51 pci_write_config_byte(d, 0x82, dlc); 52 } 53 } 54 } 55 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_passive_release); 56 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_passive_release); 57 58 /* The VIA VP2/VP3/MVP3 seem to have some 'features'. There may be a workaround 59 but VIA don't answer queries. If you happen to have good contacts at VIA ··· 68 { 69 if (!isa_dma_bridge_buggy) { 70 isa_dma_bridge_buggy=1; 71 + dev_info(&dev->dev, "Activating ISA DMA hang workarounds\n"); 72 } 73 } 74 /* 75 * Its not totally clear which chipsets are the problematic ones 76 * We know 82C586 and 82C596 variants are affected. 
77 */ 78 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_0, quirk_isa_dma_hangs); 79 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C596, quirk_isa_dma_hangs); 80 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371SB_0, quirk_isa_dma_hangs); 81 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1533, quirk_isa_dma_hangs); 82 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_1, quirk_isa_dma_hangs); 83 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_2, quirk_isa_dma_hangs); 84 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_3, quirk_isa_dma_hangs); 85 86 int pci_pci_problems; 87 EXPORT_SYMBOL(pci_pci_problems); ··· 92 static void __devinit quirk_nopcipci(struct pci_dev *dev) 93 { 94 if ((pci_pci_problems & PCIPCI_FAIL)==0) { 95 + dev_info(&dev->dev, "Disabling direct PCI/PCI transfers\n"); 96 pci_pci_problems |= PCIPCI_FAIL; 97 } 98 } 99 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_5597, quirk_nopcipci); 100 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_496, quirk_nopcipci); 101 102 static void __devinit quirk_nopciamd(struct pci_dev *dev) 103 { ··· 105 pci_read_config_byte(dev, 0x08, &rev); 106 if (rev == 0x13) { 107 /* Erratum 24 */ 108 + dev_info(&dev->dev, "Chipset erratum: Disabling direct PCI/AGP transfers\n"); 109 pci_pci_problems |= PCIAGP_FAIL; 110 } 111 } 112 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8151_0, quirk_nopciamd); 113 114 /* 115 * Triton requires workarounds to be used by the drivers ··· 117 static void __devinit quirk_triton(struct pci_dev *dev) 118 { 119 if ((pci_pci_problems&PCIPCI_TRITON)==0) { 120 + dev_info(&dev->dev, "Limiting direct PCI/PCI transfers\n"); 121 pci_pci_problems |= PCIPCI_TRITON; 122 } 123 } 124 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82437, quirk_triton); 125 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 
PCI_DEVICE_ID_INTEL_82437VX, quirk_triton); 126 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82439, quirk_triton); 127 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82439TX, quirk_triton); 128 129 /* 130 * VIA Apollo KT133 needs PCI latency patch ··· 139 static void quirk_vialatency(struct pci_dev *dev) 140 { 141 struct pci_dev *p; 142 u8 busarb; 143 /* Ok we have a potential problem chipset here. Now see if we have 144 a buggy southbridge */ 145 146 p = pci_get_device(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, NULL); 147 if (p!=NULL) { 148 /* 0x40 - 0x4f == 686B, 0x10 - 0x2f == 686A; thanks Dan Hollis */ 149 /* Check for buggy part revisions */ 150 + if (p->revision < 0x40 || p->revision > 0x42) 151 goto exit; 152 } else { 153 p = pci_get_device(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8231, NULL); 154 if (p==NULL) /* No problem parts */ 155 goto exit; 156 /* Check for buggy part revisions */ 157 + if (p->revision < 0x10 || p->revision > 0x12) 158 goto exit; 159 } 160 ··· 180 busarb &= ~(1<<5); 181 busarb |= (1<<4); 182 pci_write_config_byte(dev, 0x76, busarb); 183 + dev_info(&dev->dev, "Applying VIA southbridge workaround\n"); 184 exit: 185 pci_dev_put(p); 186 } 187 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8363_0, quirk_vialatency); 188 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8371_1, quirk_vialatency); 189 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8361, quirk_vialatency); 190 /* Must restore this on a resume from RAM */ 191 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8363_0, quirk_vialatency); 192 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8371_1, quirk_vialatency); 193 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8361, quirk_vialatency); 194 195 /* 196 * VIA Apollo VP3 needs ETBF on BT848/878 ··· 198 static void __devinit quirk_viaetbf(struct pci_dev *dev) 199 { 200 if 
((pci_pci_problems&PCIPCI_VIAETBF)==0) { 201 + dev_info(&dev->dev, "Limiting direct PCI/PCI transfers\n"); 202 pci_pci_problems |= PCIPCI_VIAETBF; 203 } 204 } 205 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C597_0, quirk_viaetbf); 206 207 static void __devinit quirk_vsfx(struct pci_dev *dev) 208 { 209 if ((pci_pci_problems&PCIPCI_VSFX)==0) { 210 + dev_info(&dev->dev, "Limiting direct PCI/PCI transfers\n"); 211 pci_pci_problems |= PCIPCI_VSFX; 212 } 213 } 214 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C576, quirk_vsfx); 215 216 /* 217 * Ali Magik requires workarounds to be used by the drivers ··· 222 static void __init quirk_alimagik(struct pci_dev *dev) 223 { 224 if ((pci_pci_problems&PCIPCI_ALIMAGIK)==0) { 225 + dev_info(&dev->dev, "Limiting direct PCI/PCI transfers\n"); 226 pci_pci_problems |= PCIPCI_ALIMAGIK|PCIPCI_TRITON; 227 } 228 } 229 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1647, quirk_alimagik); 230 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1651, quirk_alimagik); 231 232 /* 233 * Natoma has some interesting boundary conditions with Zoran stuff ··· 236 static void __devinit quirk_natoma(struct pci_dev *dev) 237 { 238 if ((pci_pci_problems&PCIPCI_NATOMA)==0) { 239 + dev_info(&dev->dev, "Limiting direct PCI/PCI transfers\n"); 240 pci_pci_problems |= PCIPCI_NATOMA; 241 } 242 } 243 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_natoma); 244 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443LX_0, quirk_natoma); 245 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443LX_1, quirk_natoma); 246 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_0, quirk_natoma); 247 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_1, quirk_natoma); 248 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_2, quirk_natoma); 249 250 /* 251 * This chip can 
cause PCI parity errors if config register 0xA0 is read ··· 255 { 256 dev->cfg_size = 0xA0; 257 } 258 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CITRINE, quirk_citrine); 259 260 /* 261 * S3 868 and 968 chips report region size equal to 32M, but they decode 64M. ··· 270 r->end = 0x3ffffff; 271 } 272 } 273 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_S3, PCI_DEVICE_ID_S3_868, quirk_s3_64M); 274 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_S3, PCI_DEVICE_ID_S3_968, quirk_s3_64M); 275 276 static void __devinit quirk_io_region(struct pci_dev *dev, unsigned region, 277 unsigned size, int nr, const char *name) ··· 292 pcibios_bus_to_resource(dev, res, &bus_region); 293 294 pci_claim_resource(dev, nr); 295 + dev_info(&dev->dev, "quirk: region %04x-%04x claimed by %s\n", region, region + size - 1, name); 296 } 297 } 298 ··· 302 */ 303 static void __devinit quirk_ati_exploding_mce(struct pci_dev *dev) 304 { 305 + dev_info(&dev->dev, "ATI Northbridge, reserving I/O ports 0x3b0 to 0x3bb\n"); 306 /* Mae rhaid i ni beidio ag edrych ar y lleoliadiau I/O hyn */ 307 request_region(0x3b0, 0x0C, "RadeonIGP"); 308 request_region(0x3d3, 0x01, "RadeonIGP"); 309 } 310 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_RS100, quirk_ati_exploding_mce); 311 312 /* 313 * Let's make the southbridge information explicit instead ··· 329 pci_read_config_word(dev, 0xE2, &region); 330 quirk_io_region(dev, region, 32, PCI_BRIDGE_RESOURCES+1, "ali7101 SMB"); 331 } 332 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M7101, quirk_ali7101_acpi); 333 334 static void piix4_io_quirk(struct pci_dev *dev, const char *name, unsigned int port, unsigned int enable) 335 { ··· 354 * let's get enough confirmation reports first. 
355 */ 356 base &= -size; 357 + dev_info(&dev->dev, "%s PIO at %04x-%04x\n", name, base, base + size - 1); 358 } 359 360 static void piix4_mem_quirk(struct pci_dev *dev, const char *name, unsigned int port, unsigned int enable) ··· 379 * reserve it, but let's get enough confirmation reports first. 380 */ 381 base &= -size; 382 + dev_info(&dev->dev, "%s MMIO at %04x-%04x\n", name, base, base + size - 1); 383 } 384 385 /* ··· 418 piix4_io_quirk(dev, "PIIX4 devres I", 0x78, 1 << 20); 419 piix4_io_quirk(dev, "PIIX4 devres J", 0x7c, 1 << 20); 420 } 421 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371AB_3, quirk_piix4_acpi); 422 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443MX_3, quirk_piix4_acpi); 423 424 /* 425 * ICH4, ICH4-M, ICH5, ICH5-M ACPI: Three IO regions pointed to by longwords at ··· 436 pci_read_config_dword(dev, 0x58, &region); 437 quirk_io_region(dev, region, 64, PCI_BRIDGE_RESOURCES+1, "ICH4 GPIO"); 438 } 439 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AA_0, quirk_ich4_lpc_acpi); 440 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AB_0, quirk_ich4_lpc_acpi); 441 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0, quirk_ich4_lpc_acpi); 442 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_10, quirk_ich4_lpc_acpi); 443 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_0, quirk_ich4_lpc_acpi); 444 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_12, quirk_ich4_lpc_acpi); 445 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_0, quirk_ich4_lpc_acpi); 446 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_12, quirk_ich4_lpc_acpi); 447 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0, quirk_ich4_lpc_acpi); 448 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 
PCI_DEVICE_ID_INTEL_ESB_1, quirk_ich4_lpc_acpi); 449 450 static void __devinit quirk_ich6_lpc_acpi(struct pci_dev *dev) 451 { ··· 457 pci_read_config_dword(dev, 0x48, &region); 458 quirk_io_region(dev, region, 64, PCI_BRIDGE_RESOURCES+1, "ICH6 GPIO"); 459 } 460 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_0, quirk_ich6_lpc_acpi); 461 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, quirk_ich6_lpc_acpi); 462 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH7_0, quirk_ich6_lpc_acpi); 463 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH7_1, quirk_ich6_lpc_acpi); 464 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH7_31, quirk_ich6_lpc_acpi); 465 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_0, quirk_ich6_lpc_acpi); 466 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_2, quirk_ich6_lpc_acpi); 467 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_3, quirk_ich6_lpc_acpi); 468 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_1, quirk_ich6_lpc_acpi); 469 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_4, quirk_ich6_lpc_acpi); 470 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_2, quirk_ich6_lpc_acpi); 471 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_4, quirk_ich6_lpc_acpi); 472 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_7, quirk_ich6_lpc_acpi); 473 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_8, quirk_ich6_lpc_acpi); 474 475 /* 476 * VIA ACPI: One IO region pointed to by longword at ··· 486 quirk_io_region(dev, region, 256, PCI_BRIDGE_RESOURCES, "vt82c586 ACPI"); 487 } 488 } 489 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_3, quirk_vt82c586_acpi); 490 491 /* 492 * VIA VT82C686 ACPI: Three IO region 
pointed to by (long)words at ··· 509 smb &= PCI_BASE_ADDRESS_IO_MASK; 510 quirk_io_region(dev, smb, 16, PCI_BRIDGE_RESOURCES + 2, "vt82c686 SMB"); 511 } 512 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686_4, quirk_vt82c686_acpi); 513 514 /* 515 * VIA VT8235 ISA Bridge: Two IO regions pointed to by words at ··· 551 else 552 tmp = 0x1f; /* all known bits (4-0) routed to external APIC */ 553 554 + dev_info(&dev->dev, "%sbling VIA external APIC routing\n", 555 tmp == 0 ? "Disa" : "Ena"); 556 557 /* Offset 0x58: External APIC IRQ output control */ 558 pci_write_config_byte (dev, 0x58, tmp); 559 } 560 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, quirk_via_ioapic); 561 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, quirk_via_ioapic); 562 563 /* 564 * VIA 8237: Some BIOSs don't set the 'Bypass APIC De-Assert Message' Bit. ··· 573 574 pci_read_config_byte(dev, 0x5B, &misc_control2); 575 if (!(misc_control2 & BYPASS_APIC_DEASSERT)) { 576 + dev_info(&dev->dev, "Bypassing VIA 8237 APIC De-Assert Message\n"); 577 pci_write_config_byte(dev, 0x5B, misc_control2|BYPASS_APIC_DEASSERT); 578 } 579 } ··· 592 static void __devinit quirk_amd_ioapic(struct pci_dev *dev) 593 { 594 if (dev->revision >= 0x02) { 595 + dev_warn(&dev->dev, "I/O APIC: AMD Erratum #22 may be present. 
In the event of instability try\n"); 596 + dev_warn(&dev->dev, " : booting with the \"noapic\" option\n"); 597 } 598 } 599 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_VIPER_7410, quirk_amd_ioapic); 600 601 static void __init quirk_ioapic_rmw(struct pci_dev *dev) 602 { 603 if (dev->devfn == 0 && dev->bus->number == 0) 604 sis_apic_bug = 1; 605 } 606 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SI, PCI_ANY_ID, quirk_ioapic_rmw); 607 608 #define AMD8131_revA0 0x01 609 #define AMD8131_revB0 0x11 ··· 617 return; 618 619 if (dev->revision == AMD8131_revA0 || dev->revision == AMD8131_revB0) { 620 + dev_info(&dev->dev, "Fixing up AMD8131 IOAPIC mode\n"); 621 pci_read_config_byte( dev, AMD8131_MISC, &tmp); 622 tmp &= ~(1 << AMD8131_NIOAMODE_BIT); 623 pci_write_config_byte( dev, AMD8131_MISC, tmp); ··· 634 static void __init quirk_amd_8131_mmrbc(struct pci_dev *dev) 635 { 636 if (dev->subordinate && dev->revision <= 0x12) { 637 + dev_info(&dev->dev, "AMD8131 rev %x detected; " 638 + "disabling PCI-X MMRBC\n", dev->revision); 639 dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MMRBC; 640 } 641 } ··· 660 if (irq && (irq != 2)) 661 d->irq = irq; 662 } 663 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_3, quirk_via_acpi); 664 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686_4, quirk_via_acpi); 665 666 667 /* ··· 742 743 pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &irq); 744 if (new_irq != irq) { 745 + dev_info(&dev->dev, "VIA VLink IRQ fixup, from %d to %d\n", 746 + irq, new_irq); 747 udelay(15); /* unknown if delay really needed */ 748 pci_write_config_byte(dev, PCI_INTERRUPT_LINE, new_irq); 749 } ··· 761 pci_write_config_byte(dev, 0xfc, 0); 762 pci_read_config_word(dev, PCI_DEVICE_ID, &dev->device); 763 } 764 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C597_0, quirk_vt82c598_id); 765 766 /* 767 * CardBus controllers have a legacy base address that enables them ··· 791 pci_read_config_dword(dev, 
0x4C, &pcic); 792 if ((pcic&6)!=6) { 793 pcic |= 6; 794 + dev_warn(&dev->dev, "BIOS failed to enable PCI standards compliance; fixing this error\n"); 795 pci_write_config_dword(dev, 0x4C, pcic); 796 pci_read_config_dword(dev, 0x84, &pcic); 797 pcic |= (1<<23); /* Required in this mode */ 798 pci_write_config_dword(dev, 0x84, pcic); 799 } 800 } 801 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_FE_GATE_700C, quirk_amd_ordering); 802 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_FE_GATE_700C, quirk_amd_ordering); 803 804 /* 805 * DreamWorks provided workaround for Dunord I-3000 problem ··· 814 r->start = 0; 815 r->end = 0xffffff; 816 } 817 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_DUNORD, PCI_DEVICE_ID_DUNORD_I3000, quirk_dunord); 818 819 /* 820 * i82380FB mobile docking controller: its PCI-to-PCI bridge ··· 826 { 827 dev->transparent = 1; 828 } 829 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82380FB, quirk_transparent_bridge); 830 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TOSHIBA, 0x605, quirk_transparent_bridge); 831 832 /* 833 * Common misconfiguration of the MediaGX/Geode PCI master that will ··· 841 pci_read_config_byte(dev, 0x41, &reg); 842 if (reg & 2) { 843 reg &= ~2; 844 + dev_info(&dev->dev, "Fixup for MediaGX/Geode Slave Disconnect Boundary (0x41=0x%02x)\n", reg); 845 pci_write_config_byte(dev, 0x41, reg); 846 } 847 } 848 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_PCI_MASTER, quirk_mediagx_master); 849 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_PCI_MASTER, quirk_mediagx_master); 850 851 /* 852 * Ensure C0 rev restreaming is off. This is normally done by ··· 863 if (config & (1<<6)) { 864 config &= ~(1<<6); 865 pci_write_config_word(pdev, 0x40, config); 866 + dev_info(&pdev->dev, "C0 revision 450NX. 
Disabling PCI restreaming\n"); 867 } 868 } 869 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82454NX, quirk_disable_pxb); 870 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82454NX, quirk_disable_pxb); 871 872 873 static void __devinit quirk_sb600_sata(struct pci_dev *pdev) ··· 902 /* PCI layer will sort out resources */ 903 } 904 } 905 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_CSB5IDE, quirk_svwks_csb5ide); 906 907 /* 908 * Intel 82801CAM ICH3-M datasheet says IDE modes must be the same ··· 914 pci_read_config_byte(pdev, PCI_CLASS_PROG, &prog); 915 916 if (((prog & 1) && !(prog & 4)) || ((prog & 4) && !(prog & 1))) { 917 + dev_info(&pdev->dev, "IDE mode mismatch; forcing legacy mode\n"); 918 prog &= ~5; 919 pdev->class &= ~5; 920 pci_write_config_byte(pdev, PCI_CLASS_PROG, prog); ··· 929 { 930 dev->class = PCI_CLASS_BRIDGE_EISA << 8; 931 } 932 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82375, quirk_eisa_bridge); 933 934 935 /* ··· 1022 case 0x12bd: /* HP D530 */ 1023 asus_hides_smbus = 1; 1024 } 1025 + else if (dev->device == PCI_DEVICE_ID_INTEL_82875_HB) 1026 + switch (dev->subsystem_device) { 1027 + case 0x12bf: /* HP xw4100 */ 1028 + asus_hides_smbus = 1; 1029 + } 1030 else if (dev->device == PCI_DEVICE_ID_INTEL_82915GM_HB) 1031 switch (dev->subsystem_device) { 1032 case 0x099c: /* HP Compaq nx6110 */ ··· 1049 } 1050 } 1051 } 1052 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82845_HB, asus_hides_smbus_hostbridge); 1053 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82845G_HB, asus_hides_smbus_hostbridge); 1054 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82850_HB, asus_hides_smbus_hostbridge); 1055 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82865_HB, asus_hides_smbus_hostbridge); 1056 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82875_HB, 
asus_hides_smbus_hostbridge); 1057 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_7205_0, asus_hides_smbus_hostbridge); 1058 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7501_MCH, asus_hides_smbus_hostbridge); 1059 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82855PM_HB, asus_hides_smbus_hostbridge); 1060 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82855GM_HB, asus_hides_smbus_hostbridge); 1061 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82915GM_HB, asus_hides_smbus_hostbridge); 1062 1063 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82810_IG3, asus_hides_smbus_hostbridge); 1064 1065 static void asus_hides_smbus_lpc(struct pci_dev *dev) 1066 { ··· 1073 pci_write_config_word(dev, 0xF2, val & (~0x8)); 1074 pci_read_config_word(dev, 0xF2, &val); 1075 if (val & 0x8) 1076 + dev_info(&dev->dev, "i801 SMBus device continues to play 'hide and seek'! 0x%x\n", val); 1077 else 1078 + dev_info(&dev->dev, "Enabled i801 SMBus device\n"); 1079 } 1080 } 1081 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AA_0, asus_hides_smbus_lpc); 1082 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_0, asus_hides_smbus_lpc); 1083 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0, asus_hides_smbus_lpc); 1084 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_0, asus_hides_smbus_lpc); 1085 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_12, asus_hides_smbus_lpc); 1086 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_12, asus_hides_smbus_lpc); 1087 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0, asus_hides_smbus_lpc); 1088 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AA_0, asus_hides_smbus_lpc); 1089 + 
DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_0, asus_hides_smbus_lpc); 1090 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0, asus_hides_smbus_lpc); 1091 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_0, asus_hides_smbus_lpc); 1092 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_12, asus_hides_smbus_lpc); 1093 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_12, asus_hides_smbus_lpc); 1094 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0, asus_hides_smbus_lpc); 1095 1096 static void asus_hides_smbus_lpc_ich6(struct pci_dev *dev) 1097 { ··· 1106 val=readl(base + 0x3418); /* read the Function Disable register, dword mode only */ 1107 writel(val & 0xFFFFFFF7, base + 0x3418); /* enable the SMBus device */ 1108 iounmap(base); 1109 + dev_info(&dev->dev, "Enabled ICH6/i801 SMBus device\n"); 1110 } 1111 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, asus_hides_smbus_lpc_ich6); 1112 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH6_1, asus_hides_smbus_lpc_ich6); 1113 1114 /* 1115 * SiS 96x south bridge: BIOS typically hides SMBus device... 
··· 1119 u8 val = 0; 1120 pci_read_config_byte(dev, 0x77, &val); 1121 if (val & 0x10) { 1122 + dev_info(&dev->dev, "Enabling SiS 96x SMBus\n"); 1123 pci_write_config_byte(dev, 0x77, val & ~0x10); 1124 } 1125 } 1126 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_961, quirk_sis_96x_smbus); 1127 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_962, quirk_sis_96x_smbus); 1128 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_963, quirk_sis_96x_smbus); 1129 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_LPC, quirk_sis_96x_smbus); 1130 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_961, quirk_sis_96x_smbus); 1131 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_962, quirk_sis_96x_smbus); 1132 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_963, quirk_sis_96x_smbus); 1133 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_LPC, quirk_sis_96x_smbus); 1134 1135 /* 1136 * ... This is further complicated by the fact that some SiS96x south ··· 1163 dev->device = devid; 1164 quirk_sis_96x_smbus(dev); 1165 } 1166 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_503, quirk_sis_503); 1167 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_503, quirk_sis_503); 1168 1169 1170 /* ··· 1191 pci_write_config_byte(dev, 0x50, val & (~0xc0)); 1192 pci_read_config_byte(dev, 0x50, &val); 1193 if (val & 0xc0) 1194 + dev_info(&dev->dev, "Onboard AC97/MC97 devices continue to play 'hide and seek'! 
0x%x\n", val); 1195 else 1196 + dev_info(&dev->dev, "Enabled onboard AC97/MC97 devices\n"); 1197 } 1198 } 1199 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237, asus_hides_ac97_lpc); 1200 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237, asus_hides_ac97_lpc); 1201 1202 #if defined(CONFIG_ATA) || defined(CONFIG_ATA_MODULE) 1203 ··· 1292 } 1293 1294 } 1295 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_EESSC, quirk_alder_ioapic); 1296 #endif 1297 1298 int pcie_mch_quirk; ··· 1302 { 1303 pcie_mch_quirk = 1; 1304 } 1305 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7520_MCH, quirk_pcie_mch); 1306 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7320_MCH, quirk_pcie_mch); 1307 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7525_MCH, quirk_pcie_mch); 1308 1309 1310 /* ··· 1314 static void __devinit quirk_pcie_pxh(struct pci_dev *dev) 1315 { 1316 pci_msi_off(dev); 1317 dev->no_msi = 1; 1318 + dev_warn(&dev->dev, "PXH quirk detected; SHPC device MSI disabled\n"); 1319 } 1320 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PXHD_0, quirk_pcie_pxh); 1321 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PXHD_1, quirk_pcie_pxh); ··· 1399 case PCI_DEVICE_ID_NETMOS_9855: 1400 if ((dev->class >> 8) == PCI_CLASS_COMMUNICATION_SERIAL && 1401 num_parallel) { 1402 + dev_info(&dev->dev, "Netmos %04x (%u parallel, " 1403 "%u serial); changing class SERIAL to OTHER " 1404 "(use parport_serial)\n", 1405 dev->device, num_parallel, num_serial); ··· 1412 1413 static void __devinit quirk_e100_interrupt(struct pci_dev *dev) 1414 { 1415 + u16 command, pmcsr; 1416 u8 __iomem *csr; 1417 u8 cmd_hi; 1418 + int pm; 1419 1420 switch (dev->device) { 1421 /* PCI IDs taken from drivers/net/e100.c */ ··· 1448 if (!(command & PCI_COMMAND_MEMORY) || !pci_resource_start(dev, 0)) 1449 return; 1450 1451 + /* 1452 + * Check that the device is in the D0 
power state. If it's not, 1453 + * there is no point to look any further. 1454 + */ 1455 + pm = pci_find_capability(dev, PCI_CAP_ID_PM); 1456 + if (pm) { 1457 + pci_read_config_word(dev, pm + PCI_PM_CTRL, &pmcsr); 1458 + if ((pmcsr & PCI_PM_CTRL_STATE_MASK) != PCI_D0) 1459 + return; 1460 + } 1461 + 1462 /* Convert from PCI bus to resource space. */ 1463 csr = ioremap(pci_resource_start(dev, 0), 8); 1464 if (!csr) { 1465 + dev_warn(&dev->dev, "Can't map e100 registers\n"); 1466 return; 1467 } 1468 1469 cmd_hi = readb(csr + 3); 1470 if (cmd_hi == 0) { 1471 + dev_warn(&dev->dev, "Firmware left e100 interrupts enabled; " 1472 + "disabling\n"); 1473 writeb(1, csr + 3); 1474 } 1475 ··· 1474 */ 1475 1476 if (dev->class == PCI_CLASS_NOT_DEFINED) { 1477 + dev_info(&dev->dev, "NCR 53c810 rev 1 detected; setting PCI class\n"); 1478 dev->class = PCI_CLASS_STORAGE_SCSI; 1479 } 1480 } ··· 1485 while (f < end) { 1486 if ((f->vendor == dev->vendor || f->vendor == (u16) PCI_ANY_ID) && 1487 (f->device == dev->device || f->device == (u16) PCI_ANY_ID)) { 1488 + #ifdef DEBUG 1489 + dev_dbg(&dev->dev, "calling quirk 0x%p", f->hook); 1490 + print_fn_descriptor_symbol(": %s()\n", 1491 + (unsigned long) f->hook); 1492 + #endif 1493 f->hook(dev); 1494 } 1495 f++; ··· 1553 pci_read_config_word(dev, 0x40, &en1k); 1554 1555 if (en1k & 0x200) { 1556 + dev_info(&dev->dev, "Enable I/O Space to 1KB granularity\n"); 1557 1558 pci_read_config_byte(dev, PCI_IO_BASE, &io_base_lo); 1559 pci_read_config_byte(dev, PCI_IO_LIMIT, &io_limit_lo); ··· 1585 iobl_adr_1k = iobl_adr | (res->start >> 8) | (res->end & 0xfc00); 1586 1587 if (iobl_adr != iobl_adr_1k) { 1588 + dev_info(&dev->dev, "Fixing P64H2 IOBL_ADR from 0x%x to 0x%x for 1KB granularity\n", 1589 iobl_adr,iobl_adr_1k); 1590 pci_write_config_word(dev, PCI_IO_BASE, iobl_adr_1k); 1591 } ··· 1603 if (pci_read_config_byte(dev, 0xf41, &b) == 0) { 1604 if (!(b & 0x20)) { 1605 pci_write_config_byte(dev, 0xf41, b | 0x20); 1606 + dev_info(&dev->dev, 1607 + 
"Linking AER extended capability\n"); 1608 } 1609 } 1610 } ··· 1613 quirk_nvidia_ck804_pcie_aer_ext_cap); 1614 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_CK804_PCIE, 1615 quirk_nvidia_ck804_pcie_aer_ext_cap); 1616 + 1617 + static void __devinit quirk_via_cx700_pci_parking_caching(struct pci_dev *dev) 1618 + { 1619 + /* 1620 + * Disable PCI Bus Parking and PCI Master read caching on CX700 1621 + * which causes unspecified timing errors with a VT6212L on the PCI 1622 + * bus leading to USB2.0 packet loss. The defaults are that these 1623 + * features are turned off but some BIOSes turn them on. 1624 + */ 1625 + 1626 + uint8_t b; 1627 + if (pci_read_config_byte(dev, 0x76, &b) == 0) { 1628 + if (b & 0x40) { 1629 + /* Turn off PCI Bus Parking */ 1630 + pci_write_config_byte(dev, 0x76, b ^ 0x40); 1631 + 1632 + /* Turn off PCI Master read caching */ 1633 + pci_write_config_byte(dev, 0x72, 0x0); 1634 + pci_write_config_byte(dev, 0x75, 0x1); 1635 + pci_write_config_byte(dev, 0x77, 0x0); 1636 + 1637 + printk(KERN_INFO 1638 + "PCI: VIA CX700 PCI parking/caching fixup on %s\n", 1639 + pci_name(dev)); 1640 + } 1641 + } 1642 + } 1643 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_VIA, 0x324e, quirk_via_cx700_pci_parking_caching); 1644 1645 #ifdef CONFIG_PCI_MSI 1646 /* Some chipsets do not support MSI. 
We cannot easily rely on setting ··· 1624 static void __init quirk_disable_all_msi(struct pci_dev *dev) 1625 { 1626 pci_no_msi(); 1627 + dev_warn(&dev->dev, "MSI quirk detected; MSI disabled\n"); 1628 } 1629 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_GCNB_LE, quirk_disable_all_msi); 1630 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_RS400_200, quirk_disable_all_msi); ··· 1635 static void __devinit quirk_disable_msi(struct pci_dev *dev) 1636 { 1637 if (dev->subordinate) { 1638 + dev_warn(&dev->dev, "MSI quirk detected; " 1639 + "subordinate MSI disabled\n"); 1640 dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI; 1641 } 1642 } ··· 1656 if (pci_read_config_byte(dev, pos + HT_MSI_FLAGS, 1657 &flags) == 0) 1658 { 1659 + dev_info(&dev->dev, "Found %s HT MSI Mapping\n", 1660 flags & HT_MSI_FLAGS_ENABLE ? 1661 + "enabled" : "disabled"); 1662 return (flags & HT_MSI_FLAGS_ENABLE) != 0; 1663 } 1664 ··· 1672 static void __devinit quirk_msi_ht_cap(struct pci_dev *dev) 1673 { 1674 if (dev->subordinate && !msi_ht_cap_enabled(dev)) { 1675 + dev_warn(&dev->dev, "MSI quirk detected; " 1676 + "subordinate MSI disabled\n"); 1677 dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI; 1678 } 1679 } 1680 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT2000_PCIE, 1681 quirk_msi_ht_cap); 1682 + 1683 + 1684 + /* 1685 + * Force enable MSI mapping capability on HT bridges 1686 + */ 1687 + static void __devinit quirk_msi_ht_cap_enable(struct pci_dev *dev) 1688 + { 1689 + int pos, ttl = 48; 1690 + 1691 + pos = pci_find_ht_capability(dev, HT_CAPTYPE_MSI_MAPPING); 1692 + while (pos && ttl--) { 1693 + u8 flags; 1694 + 1695 + if (pci_read_config_byte(dev, pos + HT_MSI_FLAGS, &flags) == 0) { 1696 + printk(KERN_INFO "PCI: Enabling HT MSI Mapping on %s\n", 1697 + pci_name(dev)); 1698 + 1699 + pci_write_config_byte(dev, pos + HT_MSI_FLAGS, 1700 + flags | HT_MSI_FLAGS_ENABLE); 1701 + } 1702 + pos = 
pci_find_next_ht_capability(dev, pos, 1703 + HT_CAPTYPE_MSI_MAPPING); 1704 + } 1705 + } 1706 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SERVERWORKS, 1707 + PCI_DEVICE_ID_SERVERWORKS_HT1000_PXB, 1708 + quirk_msi_ht_cap_enable); 1709 1710 /* The nVidia CK804 chipset may have 2 HT MSI mappings. 1711 * MSI is supported if the MSI capability is set in any of these mappings. ··· 1701 if (!pdev) 1702 return; 1703 if (!msi_ht_cap_enabled(dev) && !msi_ht_cap_enabled(pdev)) { 1704 + dev_warn(&dev->dev, "MSI quirk detected; " 1705 + "subordinate MSI disabled\n"); 1706 dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI; 1707 } 1708 pci_dev_put(pdev); ··· 1714 static void __devinit quirk_msi_intx_disable_bug(struct pci_dev *dev) 1715 { 1716 dev->dev_flags |= PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG; 1717 + } 1718 + static void __devinit quirk_msi_intx_disable_ati_bug(struct pci_dev *dev) 1719 + { 1720 + struct pci_dev *p; 1721 + 1722 + /* SB700 MSI issue will be fixed at HW level from revision A21, 1723 + * we need to check PCI REVISION ID of SMBus controller to get SB700 1724 + * revision.
1725 + */ 1726 + p = pci_get_device(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_SBX00_SMBUS, 1727 + NULL); 1728 + if (!p) 1729 + return; 1730 + 1731 + if ((p->revision < 0x3B) && (p->revision >= 0x30)) 1732 + dev->dev_flags |= PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG; 1733 + pci_dev_put(p); 1734 } 1735 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, 1736 PCI_DEVICE_ID_TIGON3_5780, ··· 1735 quirk_msi_intx_disable_bug); 1736 1737 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4390, 1738 + quirk_msi_intx_disable_ati_bug); 1739 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4391, 1740 + quirk_msi_intx_disable_ati_bug); 1741 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4392, 1742 + quirk_msi_intx_disable_ati_bug); 1743 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4393, 1744 + quirk_msi_intx_disable_ati_bug); 1745 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4394, 1746 + quirk_msi_intx_disable_ati_bug); 1747 1748 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4373, 1749 quirk_msi_intx_disable_bug);
+6 -4
drivers/pci/remove.c
··· 1 #include <linux/pci.h> 2 #include <linux/module.h> 3 #include "pci.h" 4 5 static void pci_free_resources(struct pci_dev *dev) ··· 31 dev->global_list.next = dev->global_list.prev = NULL; 32 up_write(&pci_bus_sem); 33 } 34 } 35 36 static void pci_destroy_dev(struct pci_dev *dev) ··· 78 list_del(&pci_bus->node); 79 up_write(&pci_bus_sem); 80 pci_remove_legacy_files(pci_bus); 81 - class_device_remove_file(&pci_bus->class_dev, 82 - &class_device_attr_cpuaffinity); 83 - sysfs_remove_link(&pci_bus->class_dev.kobj, "bridge"); 84 - class_device_unregister(&pci_bus->class_dev); 85 } 86 EXPORT_SYMBOL(pci_remove_bus); 87
··· 1 #include <linux/pci.h> 2 #include <linux/module.h> 3 + #include <linux/aspm.h> 4 #include "pci.h" 5 6 static void pci_free_resources(struct pci_dev *dev) ··· 30 dev->global_list.next = dev->global_list.prev = NULL; 31 up_write(&pci_bus_sem); 32 } 33 + 34 + if (dev->bus->self) 35 + pcie_aspm_exit_link_state(dev); 36 } 37 38 static void pci_destroy_dev(struct pci_dev *dev) ··· 74 list_del(&pci_bus->node); 75 up_write(&pci_bus_sem); 76 pci_remove_legacy_files(pci_bus); 77 + device_remove_file(&pci_bus->dev, &dev_attr_cpuaffinity); 78 + device_unregister(&pci_bus->dev); 79 } 80 EXPORT_SYMBOL(pci_remove_bus); 81
+4 -2
drivers/pci/rom.c
··· 162 return rom; 163 } 164 165 /** 166 * pci_map_rom_copy - map a PCI ROM to kernel space, create a copy 167 * @pdev: pointer to pci device struct ··· 197 198 return (void __iomem *)(unsigned long)res->start; 199 } 200 201 /** 202 * pci_unmap_rom - unmap the ROM from kernel space ··· 220 pci_disable_rom(pdev); 221 } 222 223 /** 224 * pci_remove_rom - disable the ROM and remove its sysfs attribute 225 * @pdev: pointer to pci device struct ··· 239 IORESOURCE_ROM_COPY))) 240 pci_disable_rom(pdev); 241 } 242 243 /** 244 * pci_cleanup_rom - internal routine for freeing the ROM copy created ··· 260 } 261 262 EXPORT_SYMBOL(pci_map_rom); 263 - EXPORT_SYMBOL(pci_map_rom_copy); 264 EXPORT_SYMBOL(pci_unmap_rom); 265 - EXPORT_SYMBOL(pci_remove_rom);
··· 162 return rom; 163 } 164 165 + #if 0 166 /** 167 * pci_map_rom_copy - map a PCI ROM to kernel space, create a copy 168 * @pdev: pointer to pci device struct ··· 196 197 return (void __iomem *)(unsigned long)res->start; 198 } 199 + #endif /* 0 */ 200 201 /** 202 * pci_unmap_rom - unmap the ROM from kernel space ··· 218 pci_disable_rom(pdev); 219 } 220 221 + #if 0 222 /** 223 * pci_remove_rom - disable the ROM and remove its sysfs attribute 224 * @pdev: pointer to pci device struct ··· 236 IORESOURCE_ROM_COPY))) 237 pci_disable_rom(pdev); 238 } 239 + #endif /* 0 */ 240 241 /** 242 * pci_cleanup_rom - internal routine for freeing the ROM copy created ··· 256 } 257 258 EXPORT_SYMBOL(pci_map_rom); 259 EXPORT_SYMBOL(pci_unmap_rom);
+40 -24
drivers/pci/setup-bus.c
··· 89 * The IO resource is allocated a range twice as large as it 90 * would normally need. This allows us to set both IO regs. 91 */ 92 - printk(" IO window: %08lx-%08lx\n", 93 - region.start, region.end); 94 pci_write_config_dword(bridge, PCI_CB_IO_BASE_0, 95 region.start); 96 pci_write_config_dword(bridge, PCI_CB_IO_LIMIT_0, ··· 100 101 pcibios_resource_to_bus(bridge, &region, bus->resource[1]); 102 if (bus->resource[1]->flags & IORESOURCE_IO) { 103 - printk(" IO window: %08lx-%08lx\n", 104 - region.start, region.end); 105 pci_write_config_dword(bridge, PCI_CB_IO_BASE_1, 106 region.start); 107 pci_write_config_dword(bridge, PCI_CB_IO_LIMIT_1, ··· 111 112 pcibios_resource_to_bus(bridge, &region, bus->resource[2]); 113 if (bus->resource[2]->flags & IORESOURCE_MEM) { 114 - printk(" PREFETCH window: %08lx-%08lx\n", 115 - region.start, region.end); 116 pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_0, 117 region.start); 118 pci_write_config_dword(bridge, PCI_CB_MEMORY_LIMIT_0, ··· 122 123 pcibios_resource_to_bus(bridge, &region, bus->resource[3]); 124 if (bus->resource[3]->flags & IORESOURCE_MEM) { 125 - printk(" MEM window: %08lx-%08lx\n", 126 - region.start, region.end); 127 pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_1, 128 region.start); 129 pci_write_config_dword(bridge, PCI_CB_MEMORY_LIMIT_1, ··· 149 { 150 struct pci_dev *bridge = bus->self; 151 struct pci_bus_region region; 152 - u32 l, io_upper16; 153 154 DBG(KERN_INFO "PCI: Bridge: %s\n", pci_name(bridge)); 155 ··· 163 /* Set up upper 16 bits of I/O base/limit. */ 164 io_upper16 = (region.end & 0xffff0000) | (region.start >> 16); 165 DBG(KERN_INFO " IO window: %04lx-%04lx\n", 166 - region.start, region.end); 167 } 168 else { 169 /* Clear upper 16 bits of I/O base/limit. 
*/ ··· 185 if (bus->resource[1]->flags & IORESOURCE_MEM) { 186 l = (region.start >> 16) & 0xfff0; 187 l |= region.end & 0xfff00000; 188 - DBG(KERN_INFO " MEM window: %08lx-%08lx\n", 189 - region.start, region.end); 190 } 191 else { 192 l = 0x0000fff0; ··· 201 pci_write_config_dword(bridge, PCI_PREF_LIMIT_UPPER32, 0); 202 203 /* Set up PREF base/limit. */ 204 pcibios_resource_to_bus(bridge, &region, bus->resource[2]); 205 if (bus->resource[2]->flags & IORESOURCE_PREFETCH) { 206 l = (region.start >> 16) & 0xfff0; 207 l |= region.end & 0xfff00000; 208 - DBG(KERN_INFO " PREFETCH window: %08lx-%08lx\n", 209 - region.start, region.end); 210 } 211 else { 212 l = 0x0000fff0; ··· 220 } 221 pci_write_config_dword(bridge, PCI_PREF_MEMORY_BASE, l); 222 223 - /* Clear out the upper 32 bits of PREF base. */ 224 - pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32, 0); 225 226 pci_write_config_word(bridge, PCI_BRIDGE_CONTROL, bus->bridge_ctl); 227 } ··· 336 static int pbus_size_mem(struct pci_bus *bus, unsigned long mask, unsigned long type) 337 { 338 struct pci_dev *dev; 339 - unsigned long min_align, align, size; 340 - unsigned long aligns[12]; /* Alignments from 1Mb to 2Gb */ 341 int order, max_order; 342 struct resource *b_res = find_free_bus_resource(bus, type); 343 ··· 353 354 for (i = 0; i < PCI_NUM_RESOURCES; i++) { 355 struct resource *r = &dev->resource[i]; 356 - unsigned long r_size; 357 358 if (r->parent || (r->flags & mask) != type) 359 continue; ··· 363 order = __ffs(align) - 20; 364 if (order > 11) { 365 printk(KERN_WARNING "PCI: region %s/%d " 366 - "too large: %llx-%llx\n", 367 pci_name(dev), i, 368 - (unsigned long long)r->start, 369 - (unsigned long long)r->end); 370 r->flags = 0; 371 continue; 372 } ··· 385 align = 0; 386 min_align = 0; 387 for (order = 0; order <= max_order; order++) { 388 - unsigned long align1 = 1UL << (order + 20); 389 - 390 if (!align) 391 min_align = align1; 392 else if (ALIGN(align + min_align, min_align) < align1)
··· 89 * The IO resource is allocated a range twice as large as it 90 * would normally need. This allows us to set both IO regs. 91 */ 92 + printk(KERN_INFO " IO window: 0x%08lx-0x%08lx\n", 93 + (unsigned long)region.start, 94 + (unsigned long)region.end); 95 pci_write_config_dword(bridge, PCI_CB_IO_BASE_0, 96 region.start); 97 pci_write_config_dword(bridge, PCI_CB_IO_LIMIT_0, ··· 99 100 pcibios_resource_to_bus(bridge, &region, bus->resource[1]); 101 if (bus->resource[1]->flags & IORESOURCE_IO) { 102 + printk(KERN_INFO " IO window: 0x%08lx-0x%08lx\n", 103 + (unsigned long)region.start, 104 + (unsigned long)region.end); 105 pci_write_config_dword(bridge, PCI_CB_IO_BASE_1, 106 region.start); 107 pci_write_config_dword(bridge, PCI_CB_IO_LIMIT_1, ··· 109 110 pcibios_resource_to_bus(bridge, &region, bus->resource[2]); 111 if (bus->resource[2]->flags & IORESOURCE_MEM) { 112 + printk(KERN_INFO " PREFETCH window: 0x%08lx-0x%08lx\n", 113 + (unsigned long)region.start, 114 + (unsigned long)region.end); 115 pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_0, 116 region.start); 117 pci_write_config_dword(bridge, PCI_CB_MEMORY_LIMIT_0, ··· 119 120 pcibios_resource_to_bus(bridge, &region, bus->resource[3]); 121 if (bus->resource[3]->flags & IORESOURCE_MEM) { 122 + printk(KERN_INFO " MEM window: 0x%08lx-0x%08lx\n", 123 + (unsigned long)region.start, 124 + (unsigned long)region.end); 125 pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_1, 126 region.start); 127 pci_write_config_dword(bridge, PCI_CB_MEMORY_LIMIT_1, ··· 145 { 146 struct pci_dev *bridge = bus->self; 147 struct pci_bus_region region; 148 + u32 l, bu, lu, io_upper16; 149 150 DBG(KERN_INFO "PCI: Bridge: %s\n", pci_name(bridge)); 151 ··· 159 /* Set up upper 16 bits of I/O base/limit. 
*/ 160 io_upper16 = (region.end & 0xffff0000) | (region.start >> 16); 161 DBG(KERN_INFO " IO window: %04lx-%04lx\n", 162 + (unsigned long)region.start, 163 + (unsigned long)region.end); 164 } 165 else { 166 /* Clear upper 16 bits of I/O base/limit. */ ··· 180 if (bus->resource[1]->flags & IORESOURCE_MEM) { 181 l = (region.start >> 16) & 0xfff0; 182 l |= region.end & 0xfff00000; 183 + DBG(KERN_INFO " MEM window: 0x%08lx-0x%08lx\n", 184 + (unsigned long)region.start, 185 + (unsigned long)region.end); 186 } 187 else { 188 l = 0x0000fff0; ··· 195 pci_write_config_dword(bridge, PCI_PREF_LIMIT_UPPER32, 0); 196 197 /* Set up PREF base/limit. */ 198 + bu = lu = 0; 199 pcibios_resource_to_bus(bridge, &region, bus->resource[2]); 200 if (bus->resource[2]->flags & IORESOURCE_PREFETCH) { 201 l = (region.start >> 16) & 0xfff0; 202 l |= region.end & 0xfff00000; 203 + #ifdef CONFIG_RESOURCES_64BIT 204 + bu = region.start >> 32; 205 + lu = region.end >> 32; 206 + #endif 207 + DBG(KERN_INFO " PREFETCH window: 0x%016llx-0x%016llx\n", 208 + (unsigned long long)region.start, 209 + (unsigned long long)region.end); 210 } 211 else { 212 l = 0x0000fff0; ··· 208 } 209 pci_write_config_dword(bridge, PCI_PREF_MEMORY_BASE, l); 210 211 + /* Set the upper 32 bits of PREF base & limit. 
*/ 212 + pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32, bu); 213 + pci_write_config_dword(bridge, PCI_PREF_LIMIT_UPPER32, lu); 214 215 pci_write_config_word(bridge, PCI_BRIDGE_CONTROL, bus->bridge_ctl); 216 } ··· 323 static int pbus_size_mem(struct pci_bus *bus, unsigned long mask, unsigned long type) 324 { 325 struct pci_dev *dev; 326 + resource_size_t min_align, align, size; 327 + resource_size_t aligns[12]; /* Alignments from 1Mb to 2Gb */ 328 int order, max_order; 329 struct resource *b_res = find_free_bus_resource(bus, type); 330 ··· 340 341 for (i = 0; i < PCI_NUM_RESOURCES; i++) { 342 struct resource *r = &dev->resource[i]; 343 + resource_size_t r_size; 344 345 if (r->parent || (r->flags & mask) != type) 346 continue; ··· 350 order = __ffs(align) - 20; 351 if (order > 11) { 352 printk(KERN_WARNING "PCI: region %s/%d " 353 + "too large: 0x%016llx-0x%016llx\n", 354 pci_name(dev), i, 355 + (unsigned long long)r->start, 356 + (unsigned long long)r->end); 357 r->flags = 0; 358 continue; 359 } ··· 372 align = 0; 373 min_align = 0; 374 for (order = 0; order <= max_order; order++) { 375 + #ifdef CONFIG_RESOURCES_64BIT 376 + resource_size_t align1 = 1ULL << (order + 20); 377 + #else 378 + resource_size_t align1 = 1U << (order + 20); 379 + #endif 380 if (!align) 381 min_align = align1; 382 else if (ALIGN(align + min_align, min_align) < align1)
+4 -3
drivers/pci/setup-res.c
··· 51 52 pcibios_resource_to_bus(dev, &region, res); 53 54 - pr_debug(" got res [%llx:%llx] bus [%lx:%lx] flags %lx for " 55 "BAR %d of %s\n", (unsigned long long)res->start, 56 (unsigned long long)res->end, 57 - region.start, region.end, res->flags, resno, pci_name(dev)); 58 59 new = region.start | (res->flags & PCI_REGION_FLAG_MASK); 60 if (res->flags & IORESOURCE_IO) ··· 127 128 return err; 129 } 130 - EXPORT_SYMBOL_GPL(pci_claim_resource); 131 132 int pci_assign_resource(struct pci_dev *dev, int resno) 133 {
··· 51 52 pcibios_resource_to_bus(dev, &region, res); 53 54 + pr_debug(" got res [%llx:%llx] bus [%llx:%llx] flags %lx for " 55 "BAR %d of %s\n", (unsigned long long)res->start, 56 (unsigned long long)res->end, 57 + (unsigned long long)region.start, 58 + (unsigned long long)region.end, 59 + (unsigned long)res->flags, resno, pci_name(dev)); 60 61 new = region.start | (res->flags & PCI_REGION_FLAG_MASK); 62 if (res->flags & IORESOURCE_IO) ··· 125 126 return err; 127 } 128 129 int pci_assign_resource(struct pci_dev *dev, int resno) 130 {
-5
drivers/pci/syscall.c
··· 34 if (!dev) 35 goto error; 36 37 - lock_kernel(); 38 switch (len) { 39 case 1: 40 cfg_ret = pci_user_read_config_byte(dev, off, &byte); ··· 46 break; 47 default: 48 err = -EINVAL; 49 - unlock_kernel(); 50 goto error; 51 }; 52 - unlock_kernel(); 53 54 err = -EIO; 55 if (cfg_ret != PCIBIOS_SUCCESSFUL) ··· 104 if (!dev) 105 return -ENODEV; 106 107 - lock_kernel(); 108 switch(len) { 109 case 1: 110 err = get_user(byte, (u8 __user *)buf); ··· 136 err = -EINVAL; 137 break; 138 } 139 - unlock_kernel(); 140 pci_dev_put(dev); 141 return err; 142 }
··· 34 if (!dev) 35 goto error; 36 37 switch (len) { 38 case 1: 39 cfg_ret = pci_user_read_config_byte(dev, off, &byte); ··· 47 break; 48 default: 49 err = -EINVAL; 50 goto error; 51 }; 52 53 err = -EIO; 54 if (cfg_ret != PCIBIOS_SUCCESSFUL) ··· 107 if (!dev) 108 return -ENODEV; 109 110 switch(len) { 111 case 1: 112 err = get_user(byte, (u8 __user *)buf); ··· 140 err = -EINVAL; 141 break; 142 } 143 pci_dev_put(dev); 144 return err; 145 }
+1 -2
drivers/scsi/lpfc/lpfc_init.c
··· 2296 struct Scsi_Host *shost = pci_get_drvdata(pdev); 2297 struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba; 2298 struct lpfc_sli *psli = &phba->sli; 2299 - int bars = pci_select_bars(pdev, IORESOURCE_MEM); 2300 2301 dev_printk(KERN_INFO, &pdev->dev, "recovering from a slot reset.\n"); 2302 - if (pci_enable_device_bars(pdev, bars)) { 2303 printk(KERN_ERR "lpfc: Cannot re-enable " 2304 "PCI device after reset.\n"); 2305 return PCI_ERS_RESULT_DISCONNECT;
··· 2296 struct Scsi_Host *shost = pci_get_drvdata(pdev); 2297 struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba; 2298 struct lpfc_sli *psli = &phba->sli; 2299 2300 dev_printk(KERN_INFO, &pdev->dev, "recovering from a slot reset.\n"); 2301 + if (pci_enable_device_mem(pdev)) { 2302 printk(KERN_ERR "lpfc: Cannot re-enable " 2303 "PCI device after reset.\n"); 2304 return PCI_ERS_RESULT_DISCONNECT;
+1
drivers/scsi/qla2xxx/qla_def.h
··· 2268 spinlock_t hardware_lock ____cacheline_aligned; 2269 2270 int bars; 2271 device_reg_t __iomem *iobase; /* Base I/O address */ 2272 resource_size_t pio_address; 2273 #define MIN_IOBASE_LEN 0x100
··· 2268 spinlock_t hardware_lock ____cacheline_aligned; 2269 2270 int bars; 2271 + int mem_only; 2272 device_reg_t __iomem *iobase; /* Base I/O address */ 2273 resource_size_t pio_address; 2274 #define MIN_IOBASE_LEN 0x100
+17 -4
drivers/scsi/qla2xxx/qla_os.c
··· 1564 char pci_info[30]; 1565 char fw_str[30]; 1566 struct scsi_host_template *sht; 1567 - int bars; 1568 1569 bars = pci_select_bars(pdev, IORESOURCE_MEM | IORESOURCE_IO); 1570 sht = &qla2x00_driver_template; ··· 1575 pdev->device == PCI_DEVICE_ID_QLOGIC_ISP2532) { 1576 bars = pci_select_bars(pdev, IORESOURCE_MEM); 1577 sht = &qla24xx_driver_template; 1578 } 1579 1580 - if (pci_enable_device_bars(pdev, bars)) 1581 - goto probe_out; 1582 1583 if (pci_find_aer_capability(pdev)) 1584 if (pci_enable_pcie_error_reporting(pdev)) ··· 1607 sprintf(ha->host_str, "%s_%ld", QLA2XXX_DRIVER_NAME, ha->host_no); 1608 ha->parent = NULL; 1609 ha->bars = bars; 1610 1611 /* Set ISP-type information. */ 1612 qla2x00_set_isp_flags(ha); ··· 2882 { 2883 pci_ers_result_t ret = PCI_ERS_RESULT_DISCONNECT; 2884 scsi_qla_host_t *ha = pci_get_drvdata(pdev); 2885 2886 - if (pci_enable_device_bars(pdev, ha->bars)) { 2887 qla_printk(KERN_WARNING, ha, 2888 "Can't re-enable PCI device after reset.\n"); 2889
··· 1564 char pci_info[30]; 1565 char fw_str[30]; 1566 struct scsi_host_template *sht; 1567 + int bars, mem_only = 0; 1568 1569 bars = pci_select_bars(pdev, IORESOURCE_MEM | IORESOURCE_IO); 1570 sht = &qla2x00_driver_template; ··· 1575 pdev->device == PCI_DEVICE_ID_QLOGIC_ISP2532) { 1576 bars = pci_select_bars(pdev, IORESOURCE_MEM); 1577 sht = &qla24xx_driver_template; 1578 + mem_only = 1; 1579 } 1580 1581 + if (mem_only) { 1582 + if (pci_enable_device_mem(pdev)) 1583 + goto probe_out; 1584 + } else { 1585 + if (pci_enable_device(pdev)) 1586 + goto probe_out; 1587 + } 1588 1589 if (pci_find_aer_capability(pdev)) 1590 if (pci_enable_pcie_error_reporting(pdev)) ··· 1601 sprintf(ha->host_str, "%s_%ld", QLA2XXX_DRIVER_NAME, ha->host_no); 1602 ha->parent = NULL; 1603 ha->bars = bars; 1604 + ha->mem_only = mem_only; 1605 1606 /* Set ISP-type information. */ 1607 qla2x00_set_isp_flags(ha); ··· 2875 { 2876 pci_ers_result_t ret = PCI_ERS_RESULT_DISCONNECT; 2877 scsi_qla_host_t *ha = pci_get_drvdata(pdev); 2878 + int rc; 2879 2880 + if (ha->mem_only) 2881 + rc = pci_enable_device_mem(pdev); 2882 + else 2883 + rc = pci_enable_device(pdev); 2884 + 2885 + if (rc) { 2886 qla_printk(KERN_WARNING, ha, 2887 "Can't re-enable PCI device after reset.\n"); 2888
+8 -14
drivers/usb/host/pci-quirks.c
··· 190 msleep(10); 191 } 192 if (wait_time <= 0) 193 - printk(KERN_WARNING "%s %s: BIOS handoff " 194 - "failed (BIOS bug ?) %08x\n", 195 - pdev->dev.bus_id, "OHCI", 196 readl(base + OHCI_CONTROL)); 197 198 /* reset controller, preserving RWC */ ··· 242 switch (cap & 0xff) { 243 case 1: /* BIOS/SMM/... handoff support */ 244 if ((cap & EHCI_USBLEGSUP_BIOS)) { 245 - pr_debug("%s %s: BIOS handoff\n", 246 - pdev->dev.bus_id, "EHCI"); 247 248 #if 0 249 /* aleksey_gorelov@phoenix.com reports that some systems need SMI forced on, ··· 283 /* well, possibly buggy BIOS... try to shut 284 * it down, and hope nothing goes too wrong 285 */ 286 - printk(KERN_WARNING "%s %s: BIOS handoff " 287 - "failed (BIOS bug ?) %08x\n", 288 - pdev->dev.bus_id, "EHCI", cap); 289 pci_write_config_byte(pdev, offset + 2, 0); 290 } 291 ··· 303 cap = 0; 304 /* FALLTHROUGH */ 305 default: 306 - printk(KERN_WARNING "%s %s: unrecognized " 307 - "capability %02x\n", 308 - pdev->dev.bus_id, "EHCI", 309 - cap & 0xff); 310 break; 311 } 312 offset = (cap >> 8) & 0xff; 313 } 314 if (!count) 315 - printk(KERN_DEBUG "%s %s: capability loop?\n", 316 - pdev->dev.bus_id, "EHCI"); 317 318 /* 319 * halt EHCI & disable its interrupts in any case
··· 190 msleep(10); 191 } 192 if (wait_time <= 0) 193 + dev_warn(&pdev->dev, "OHCI: BIOS handoff failed" 194 + " (BIOS bug?) %08x\n", 195 readl(base + OHCI_CONTROL)); 196 197 /* reset controller, preserving RWC */ ··· 243 switch (cap & 0xff) { 244 case 1: /* BIOS/SMM/... handoff support */ 245 if ((cap & EHCI_USBLEGSUP_BIOS)) { 246 + dev_dbg(&pdev->dev, "EHCI: BIOS handoff\n"); 247 248 #if 0 249 /* aleksey_gorelov@phoenix.com reports that some systems need SMI forced on, ··· 285 /* well, possibly buggy BIOS... try to shut 286 * it down, and hope nothing goes too wrong 287 */ 288 + dev_warn(&pdev->dev, "EHCI: BIOS handoff failed" 289 + " (BIOS bug?) %08x\n", cap); 290 pci_write_config_byte(pdev, offset + 2, 0); 291 } 292 ··· 306 cap = 0; 307 /* FALLTHROUGH */ 308 default: 309 + dev_warn(&pdev->dev, "EHCI: unrecognized capability " 310 + "%02x\n", cap & 0xff); 311 break; 312 } 313 offset = (cap >> 8) & 0xff; 314 } 315 if (!count) 316 + dev_printk(KERN_DEBUG, &pdev->dev, "EHCI: capability loop?\n"); 317 318 /* 319 * halt EHCI & disable its interrupts in any case
+44
include/linux/aspm.h
···
··· 1 + /* 2 + * aspm.h 3 + * 4 + * PCI Express ASPM defines and function prototypes 5 + * 6 + * Copyright (C) 2007 Intel Corp. 7 + * Zhang Yanmin (yanmin.zhang@intel.com) 8 + * Shaohua Li (shaohua.li@intel.com) 9 + * 10 + * For more information, please consult the following manuals (look at 11 + * http://www.pcisig.com/ for how to get them): 12 + * 13 + * PCI Express Specification 14 + */ 15 + 16 + #ifndef LINUX_ASPM_H 17 + #define LINUX_ASPM_H 18 + 19 + #include <linux/pci.h> 20 + 21 + #define PCIE_LINK_STATE_L0S 1 22 + #define PCIE_LINK_STATE_L1 2 23 + #define PCIE_LINK_STATE_CLKPM 4 24 + 25 + #ifdef CONFIG_PCIEASPM 26 + extern void pcie_aspm_init_link_state(struct pci_dev *pdev); 27 + extern void pcie_aspm_exit_link_state(struct pci_dev *pdev); 28 + extern void pcie_aspm_pm_state_change(struct pci_dev *pdev); 29 + extern void pci_disable_link_state(struct pci_dev *pdev, int state); 30 + #else 31 + #define pcie_aspm_init_link_state(pdev) do {} while (0) 32 + #define pcie_aspm_exit_link_state(pdev) do {} while (0) 33 + #define pcie_aspm_pm_state_change(pdev) do {} while (0) 34 + #define pci_disable_link_state(pdev, state) do {} while (0) 35 + #endif 36 + 37 + #ifdef CONFIG_PCIEASPM_DEBUG /* this depends on CONFIG_PCIEASPM */ 38 + extern void pcie_aspm_create_sysfs_dev_files(struct pci_dev *pdev); 39 + extern void pcie_aspm_remove_sysfs_dev_files(struct pci_dev *pdev); 40 + #else 41 + #define pcie_aspm_create_sysfs_dev_files(pdev) do {} while (0) 42 + #define pcie_aspm_remove_sysfs_dev_files(pdev) do {} while (0) 43 + #endif 44 + #endif /* LINUX_ASPM_H */
+10 -1
include/linux/pci-acpi.h
··· 48 49 #ifdef CONFIG_ACPI 50 extern acpi_status pci_osc_control_set(acpi_handle handle, u32 flags); 51 - extern acpi_status pci_osc_support_set(u32 flags); 52 #else 53 #if !defined(AE_ERROR) 54 typedef u32 acpi_status; ··· 65 static inline acpi_status pci_osc_control_set(acpi_handle handle, u32 flags) 66 {return AE_ERROR;} 67 static inline acpi_status pci_osc_support_set(u32 flags) {return AE_ERROR;} 68 #endif 69 70 #endif /* _PCI_ACPI_H_ */
··· 48 49 #ifdef CONFIG_ACPI 50 extern acpi_status pci_osc_control_set(acpi_handle handle, u32 flags); 51 + extern acpi_status __pci_osc_support_set(u32 flags, const char *hid); 52 + static inline acpi_status pci_osc_support_set(u32 flags) 53 + { 54 + return __pci_osc_support_set(flags, PCI_ROOT_HID_STRING); 55 + } 56 + static inline acpi_status pcie_osc_support_set(u32 flags) 57 + { 58 + return __pci_osc_support_set(flags, PCI_EXPRESS_ROOT_HID_STRING); 59 + } 60 #else 61 #if !defined(AE_ERROR) 62 typedef u32 acpi_status; ··· 57 static inline acpi_status pci_osc_control_set(acpi_handle handle, u32 flags) 58 {return AE_ERROR;} 59 static inline acpi_status pci_osc_support_set(u32 flags) {return AE_ERROR;} 60 + static inline acpi_status pcie_osc_support_set(u32 flags) {return AE_ERROR;} 61 #endif 62 63 #endif /* _PCI_ACPI_H_ */
+249 -118
include/linux/pci.h
··· 28 * 7:3 = slot 29 * 2:0 = function 30 */ 31 - #define PCI_DEVFN(slot,func) ((((slot) & 0x1f) << 3) | ((func) & 0x07)) 32 #define PCI_SLOT(devfn) (((devfn) >> 3) & 0x1f) 33 #define PCI_FUNC(devfn) ((devfn) & 0x07) 34 ··· 66 #define PCI_DMA_FROMDEVICE 2 67 #define PCI_DMA_NONE 3 68 69 - #define DEVICE_COUNT_COMPATIBLE 4 70 #define DEVICE_COUNT_RESOURCE 12 71 72 typedef int __bitwise pci_power_t; ··· 128 u32 data[0]; 129 }; 130 131 /* 132 * The pci_dev structure is used to describe PCI devices. 133 */ ··· 164 this is D0-D3, D0 being fully functional, 165 and D3 being off. */ 166 167 pci_channel_state_t error_state; /* current connectivity state */ 168 struct device dev; /* Generic device interface */ 169 - 170 - /* device is compatible with these IDs */ 171 - unsigned short vendor_compatible[DEVICE_COUNT_COMPATIBLE]; 172 - unsigned short device_compatible[DEVICE_COUNT_COMPATIBLE]; 173 174 int cfg_size; /* Size of configuration space */ 175 ··· 219 } 220 221 static inline struct pci_cap_saved_state *pci_find_saved_cap( 222 - struct pci_dev *pci_dev,char cap) 223 { 224 struct pci_cap_saved_state *tmp; 225 struct hlist_node *pos; ··· 278 unsigned short bridge_ctl; /* manage NO_ISA/FBB/et al behaviors */ 279 pci_bus_flags_t bus_flags; /* Inherited by child busses */ 280 struct device *bridge; 281 - struct class_device class_dev; 282 struct bin_attribute *legacy_io; /* legacy I/O for this bus */ 283 struct bin_attribute *legacy_mem; /* legacy mem */ 284 }; 285 286 #define pci_bus_b(n) list_entry(n, struct pci_bus, node) 287 - #define to_pci_bus(n) container_of(n, struct pci_bus, class_dev) 288 289 /* 290 * Error values that may be returned by PCI functions. 
··· 314 extern struct pci_raw_ops *raw_pci_ops; 315 316 struct pci_bus_region { 317 - unsigned long start; 318 - unsigned long end; 319 }; 320 321 struct pci_dynids { ··· 351 }; 352 353 /* PCI bus error event callbacks */ 354 - struct pci_error_handlers 355 - { 356 /* PCI bus error detected on this device */ 357 pci_ers_result_t (*error_detected)(struct pci_dev *dev, 358 - enum pci_channel_state error); 359 360 /* MMIO has been re-enabled, but not DMA */ 361 pci_ers_result_t (*mmio_enabled)(struct pci_dev *dev); ··· 389 struct pci_dynids dynids; 390 }; 391 392 - #define to_pci_driver(drv) container_of(drv,struct pci_driver, driver) 393 394 /** 395 * PCI_DEVICE - macro used to describe a specific pci device ··· 447 448 void pcibios_fixup_bus(struct pci_bus *); 449 int __must_check pcibios_enable_device(struct pci_dev *, int mask); 450 - char *pcibios_setup (char *str); 451 452 /* Used only when drivers/pci/setup.c is used */ 453 void pcibios_align_resource(void *, struct resource *, resource_size_t, ··· 458 459 extern struct pci_bus *pci_find_bus(int domain, int busnr); 460 void pci_bus_add_devices(struct pci_bus *bus); 461 - struct pci_bus *pci_scan_bus_parented(struct device *parent, int bus, struct pci_ops *ops, void *sysdata); 462 - static inline struct pci_bus *pci_scan_bus(int bus, struct pci_ops *ops, void *sysdata) 463 { 464 struct pci_bus *root_bus; 465 root_bus = pci_scan_bus_parented(NULL, bus, ops, sysdata); ··· 469 pci_bus_add_devices(root_bus); 470 return root_bus; 471 } 472 - struct pci_bus *pci_create_bus(struct device *parent, int bus, struct pci_ops *ops, void *sysdata); 473 - struct pci_bus * pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev, int busnr); 474 int pci_scan_slot(struct pci_bus *bus, int devfn); 475 - struct pci_dev * pci_scan_single_device(struct pci_bus *bus, int devfn); 476 void pci_device_add(struct pci_dev *dev, struct pci_bus *bus); 477 unsigned int pci_scan_child_bus(struct pci_bus *bus); 478 int __must_check 
pci_bus_add_device(struct pci_dev *dev); 479 void pci_read_bridge_bases(struct pci_bus *child); 480 - struct resource *pci_find_parent_resource(const struct pci_dev *dev, struct resource *res); 481 int pci_get_interrupt_pin(struct pci_dev *dev, struct pci_dev **bridge); 482 extern struct pci_dev *pci_dev_get(struct pci_dev *dev); 483 extern void pci_dev_put(struct pci_dev *dev); ··· 493 /* Generic PCI functions exported to card drivers */ 494 495 #ifdef CONFIG_PCI_LEGACY 496 - struct pci_dev __deprecated *pci_find_device (unsigned int vendor, unsigned int device, const struct pci_dev *from); 497 - struct pci_dev __deprecated *pci_find_slot (unsigned int bus, unsigned int devfn); 498 #endif /* CONFIG_PCI_LEGACY */ 499 500 - int pci_find_capability (struct pci_dev *dev, int cap); 501 - int pci_find_next_capability (struct pci_dev *dev, u8 pos, int cap); 502 - int pci_find_ext_capability (struct pci_dev *dev, int cap); 503 - int pci_find_ht_capability (struct pci_dev *dev, int ht_cap); 504 - int pci_find_next_ht_capability (struct pci_dev *dev, int pos, int ht_cap); 505 struct pci_bus *pci_find_next_bus(const struct pci_bus *from); 506 507 struct pci_dev *pci_get_device(unsigned int vendor, unsigned int device, ··· 513 struct pci_dev *pci_get_device_reverse(unsigned int vendor, unsigned int device, 514 struct pci_dev *from); 515 516 - struct pci_dev *pci_get_subsys (unsigned int vendor, unsigned int device, 517 unsigned int ss_vendor, unsigned int ss_device, 518 struct pci_dev *from); 519 - struct pci_dev *pci_get_slot (struct pci_bus *bus, unsigned int devfn); 520 - struct pci_dev *pci_get_bus_and_slot (unsigned int bus, unsigned int devfn); 521 - struct pci_dev *pci_get_class (unsigned int class, struct pci_dev *from); 522 int pci_dev_present(const struct pci_device_id *ids); 523 const struct pci_device_id *pci_find_present(const struct pci_device_id *ids); 524 525 - int pci_bus_read_config_byte (struct pci_bus *bus, unsigned int devfn, int where, u8 *val); 526 - 
int pci_bus_read_config_word (struct pci_bus *bus, unsigned int devfn, int where, u16 *val); 527 - int pci_bus_read_config_dword (struct pci_bus *bus, unsigned int devfn, int where, u32 *val); 528 - int pci_bus_write_config_byte (struct pci_bus *bus, unsigned int devfn, int where, u8 val); 529 - int pci_bus_write_config_word (struct pci_bus *bus, unsigned int devfn, int where, u16 val); 530 - int pci_bus_write_config_dword (struct pci_bus *bus, unsigned int devfn, int where, u32 val); 531 532 static inline int pci_read_config_byte(struct pci_dev *dev, int where, u8 *val) 533 { 534 - return pci_bus_read_config_byte (dev->bus, dev->devfn, where, val); 535 } 536 static inline int pci_read_config_word(struct pci_dev *dev, int where, u16 *val) 537 { 538 - return pci_bus_read_config_word (dev->bus, dev->devfn, where, val); 539 } 540 - static inline int pci_read_config_dword(struct pci_dev *dev, int where, u32 *val) 541 { 542 - return pci_bus_read_config_dword (dev->bus, dev->devfn, where, val); 543 } 544 static inline int pci_write_config_byte(struct pci_dev *dev, int where, u8 val) 545 { 546 - return pci_bus_write_config_byte (dev->bus, dev->devfn, where, val); 547 } 548 static inline int pci_write_config_word(struct pci_dev *dev, int where, u16 val) 549 { 550 - return pci_bus_write_config_word (dev->bus, dev->devfn, where, val); 551 } 552 - static inline int pci_write_config_dword(struct pci_dev *dev, int where, u32 val) 553 { 554 - return pci_bus_write_config_dword (dev->bus, dev->devfn, where, val); 555 } 556 557 int __must_check pci_enable_device(struct pci_dev *dev); 558 - int __must_check pci_enable_device_bars(struct pci_dev *dev, int mask); 559 int __must_check pci_reenable_device(struct pci_dev *); 560 int __must_check pcim_enable_device(struct pci_dev *pdev); 561 void pcim_pin_device(struct pci_dev *pdev); ··· 593 void pci_update_resource(struct pci_dev *dev, struct resource *res, int resno); 594 int __must_check pci_assign_resource(struct pci_dev *dev, int 
i); 595 int __must_check pci_assign_resource_fixed(struct pci_dev *dev, int i); 596 - void pci_restore_bars(struct pci_dev *dev); 597 int pci_select_bars(struct pci_dev *dev, unsigned long flags); 598 599 /* ROM control related routines */ 600 void __iomem __must_check *pci_map_rom(struct pci_dev *pdev, size_t *size); 601 - void __iomem __must_check *pci_map_rom_copy(struct pci_dev *pdev, size_t *size); 602 void pci_unmap_rom(struct pci_dev *pdev, void __iomem *rom); 603 - void pci_remove_rom(struct pci_dev *pdev); 604 size_t pci_get_rom_size(void __iomem *rom, size_t size); 605 606 /* Power management related routines */ ··· 608 int pci_enable_wake(struct pci_dev *dev, pci_power_t state, int enable); 609 610 /* Functions for PCI Hotplug drivers to use */ 611 - int pci_bus_find_capability (struct pci_bus *bus, unsigned int devfn, int cap); 612 613 /* Helper functions for low-level code (drivers/pci/setup-[bus,res].c) */ 614 void pci_bus_assign_resources(struct pci_bus *bus); ··· 645 return __pci_register_driver(driver, THIS_MODULE, KBUILD_MODNAME); 646 } 647 648 - void pci_unregister_driver(struct pci_driver *); 649 - void pci_remove_behind_bridge(struct pci_dev *); 650 - struct pci_driver *pci_dev_driver(const struct pci_dev *); 651 - const struct pci_device_id *pci_match_id(const struct pci_device_id *ids, struct pci_dev *dev); 652 - int pci_scan_bridge(struct pci_bus *bus, struct pci_dev * dev, int max, int pass); 653 654 void pci_walk_bus(struct pci_bus *top, void (*cb)(struct pci_dev *, void *), 655 void *userdata); 656 int pci_cfg_space_size(struct pci_dev *dev); 657 - unsigned char pci_bus_max_busnr(struct pci_bus* bus); 658 659 /* kmem_cache style wrapper around pci_alloc_consistent() */ 660 ··· 685 686 687 #ifndef CONFIG_PCI_MSI 688 - static inline int pci_enable_msi(struct pci_dev *dev) {return -1;} 689 - static inline void pci_disable_msi(struct pci_dev *dev) {} 690 - static inline int pci_enable_msix(struct pci_dev* dev, 691 - struct msix_entry 
*entries, int nvec) {return -1;} 692 - static inline void pci_disable_msix(struct pci_dev *dev) {} 693 - static inline void msi_remove_pci_irq_vectors(struct pci_dev *dev) {} 694 #else 695 extern int pci_enable_msi(struct pci_dev *dev); 696 extern void pci_disable_msi(struct pci_dev *dev); 697 - extern int pci_enable_msix(struct pci_dev* dev, 698 struct msix_entry *entries, int nvec); 699 extern void pci_disable_msix(struct pci_dev *dev); 700 extern void msi_remove_pci_irq_vectors(struct pci_dev *dev); 701 #endif 702 703 #ifdef CONFIG_HT_IRQ ··· 735 extern int pci_domains_supported; 736 #else 737 enum { pci_domains_supported = 0 }; 738 - static inline int pci_domain_nr(struct pci_bus *bus) { return 0; } 739 static inline int pci_proc_domain(struct pci_bus *bus) 740 { 741 return 0; ··· 753 * these as simple inline functions to avoid hair in drivers. 754 */ 755 756 - #define _PCI_NOP(o,s,t) \ 757 - static inline int pci_##o##_config_##s (struct pci_dev *dev, int where, t val) \ 758 { return PCIBIOS_FUNC_NOT_SUPPORTED; } 759 - #define _PCI_NOP_ALL(o,x) _PCI_NOP(o,byte,u8 x) \ 760 - _PCI_NOP(o,word,u16 x) \ 761 - _PCI_NOP(o,dword,u32 x) 762 _PCI_NOP_ALL(read, *) 763 _PCI_NOP_ALL(write,) 764 765 - static inline struct pci_dev *pci_find_device(unsigned int vendor, unsigned int device, const struct pci_dev *from) 766 - { return NULL; } 767 768 - static inline struct pci_dev *pci_find_slot(unsigned int bus, unsigned int devfn) 769 - { return NULL; } 770 771 static inline struct pci_dev *pci_get_device(unsigned int vendor, 772 - unsigned int device, struct pci_dev *from) 773 - { return NULL; } 774 775 static inline struct pci_dev *pci_get_device_reverse(unsigned int vendor, 776 - unsigned int device, struct pci_dev *from) 777 - { return NULL; } 778 779 - static inline struct pci_dev *pci_get_subsys (unsigned int vendor, unsigned int device, 780 - unsigned int ss_vendor, unsigned int ss_device, struct pci_dev *from) 781 - { return NULL; } 782 783 - static inline struct 
pci_dev *pci_get_class(unsigned int class, struct pci_dev *from) 784 - { return NULL; } 785 786 #define pci_dev_present(ids) (0) 787 #define no_pci_devices() (1) 788 #define pci_find_present(ids) (NULL) 789 #define pci_dev_put(dev) do { } while (0) 790 791 - static inline void pci_set_master(struct pci_dev *dev) { } 792 - static inline int pci_enable_device(struct pci_dev *dev) { return -EIO; } 793 - static inline void pci_disable_device(struct pci_dev *dev) { } 794 - static inline int pci_set_dma_mask(struct pci_dev *dev, u64 mask) { return -EIO; } 795 - static inline int pci_assign_resource(struct pci_dev *dev, int i) { return -EBUSY;} 796 - static inline int __pci_register_driver(struct pci_driver *drv, struct module *owner) { return 0;} 797 - static inline int pci_register_driver(struct pci_driver *drv) { return 0;} 798 - static inline void pci_unregister_driver(struct pci_driver *drv) { } 799 - static inline int pci_find_capability (struct pci_dev *dev, int cap) {return 0; } 800 - static inline int pci_find_next_capability (struct pci_dev *dev, u8 post, int cap) { return 0; } 801 - static inline int pci_find_ext_capability (struct pci_dev *dev, int cap) {return 0; } 802 803 /* Power management related routines */ 804 - static inline int pci_save_state(struct pci_dev *dev) { return 0; } 805 - static inline int pci_restore_state(struct pci_dev *dev) { return 0; } 806 - static inline int pci_set_power_state(struct pci_dev *dev, pci_power_t state) { return 0; } 807 - static inline pci_power_t pci_choose_state(struct pci_dev *dev, pm_message_t state) { return PCI_D0; } 808 - static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state, int enable) { return 0; } 809 810 - static inline int pci_request_regions(struct pci_dev *dev, const char *res_name) { return -EIO; } 811 - static inline void pci_release_regions(struct pci_dev *dev) { } 812 813 #define pci_dma_burst_advice(pdev, strat, strategy_parameter) do { } while (0) 814 815 - static inline void 
pci_block_user_cfg_access(struct pci_dev *dev) { } 816 - static inline void pci_unblock_user_cfg_access(struct pci_dev *dev) { } 817 818 static inline struct pci_bus *pci_find_next_bus(const struct pci_bus *from) 819 { return NULL; } ··· 928 929 /* these helpers provide future and backwards compatibility 930 * for accessing popular PCI BAR info */ 931 - #define pci_resource_start(dev,bar) ((dev)->resource[(bar)].start) 932 - #define pci_resource_end(dev,bar) ((dev)->resource[(bar)].end) 933 - #define pci_resource_flags(dev,bar) ((dev)->resource[(bar)].flags) 934 #define pci_resource_len(dev,bar) \ 935 - ((pci_resource_start((dev),(bar)) == 0 && \ 936 - pci_resource_end((dev),(bar)) == \ 937 - pci_resource_start((dev),(bar))) ? 0 : \ 938 - \ 939 - (pci_resource_end((dev),(bar)) - \ 940 - pci_resource_start((dev),(bar)) + 1)) 941 942 /* Similar to the helpers above, these manipulate per-pci_dev 943 * driver-specific data. They are really just a wrapper around 944 * the generic device structure functions of these calls. 945 */ 946 - static inline void *pci_get_drvdata (struct pci_dev *pdev) 947 { 948 return dev_get_drvdata(&pdev->dev); 949 } 950 951 - static inline void pci_set_drvdata (struct pci_dev *pdev, void *data) 952 { 953 dev_set_drvdata(&pdev->dev, data); 954 } ··· 967 */ 968 #ifndef HAVE_ARCH_PCI_RESOURCE_TO_USER 969 static inline void pci_resource_to_user(const struct pci_dev *dev, int bar, 970 - const struct resource *rsrc, resource_size_t *start, 971 resource_size_t *end) 972 { 973 *start = rsrc->start; ··· 1019 1020 void pci_fixup_device(enum pci_fixup_pass pass, struct pci_dev *dev); 1021 1022 - void __iomem * pcim_iomap(struct pci_dev *pdev, int bar, unsigned long maxlen); 1023 void pcim_iounmap(struct pci_dev *pdev, void __iomem *addr); 1024 - void __iomem * const * pcim_iomap_table(struct pci_dev *pdev); 1025 int pcim_iomap_regions(struct pci_dev *pdev, u16 mask, const char *name); 1026 void pcim_iounmap_regions(struct pci_dev *pdev, u16 mask); 1027
···
28  * 7:3 = slot
29  * 2:0 = function
30  */
31 + #define PCI_DEVFN(slot, func) ((((slot) & 0x1f) << 3) | ((func) & 0x07))
32 #define PCI_SLOT(devfn) (((devfn) >> 3) & 0x1f)
33 #define PCI_FUNC(devfn) ((devfn) & 0x07)
···
66 #define PCI_DMA_FROMDEVICE 2
67 #define PCI_DMA_NONE 3
68
69 #define DEVICE_COUNT_RESOURCE 12
70
71 typedef int __bitwise pci_power_t;
···
129 u32 data[0];
130 };
131
132 + struct pcie_link_state;
133 /*
134  * The pci_dev structure is used to describe PCI devices.
135  */
···
164 this is D0-D3, D0 being fully functional,
165 and D3 being off. */
166
167 + #ifdef CONFIG_PCIEASPM
168 + struct pcie_link_state *link_state; /* ASPM link state. */
169 + #endif
170 +
171 pci_channel_state_t error_state; /* current connectivity state */
172 struct device dev; /* Generic device interface */
173
174 int cfg_size; /* Size of configuration space */
175
···
219 }
220
221 static inline struct pci_cap_saved_state *pci_find_saved_cap(
222 + struct pci_dev *pci_dev, char cap)
223 {
224 struct pci_cap_saved_state *tmp;
225 struct hlist_node *pos;
···
278 unsigned short bridge_ctl; /* manage NO_ISA/FBB/et al behaviors */
279 pci_bus_flags_t bus_flags; /* Inherited by child busses */
280 struct device *bridge;
281 + struct device dev;
282 struct bin_attribute *legacy_io; /* legacy I/O for this bus */
283 struct bin_attribute *legacy_mem; /* legacy mem */
284 };
285
286 #define pci_bus_b(n) list_entry(n, struct pci_bus, node)
287 + #define to_pci_bus(n) container_of(n, struct pci_bus, dev)
288
289 /*
290  * Error values that may be returned by PCI functions.
···
314 extern struct pci_raw_ops *raw_pci_ops;
315
316 struct pci_bus_region {
317 + resource_size_t start;
318 + resource_size_t end;
319 };
320
321 struct pci_dynids {
···
351 };
352
353 /* PCI bus error event callbacks */
354 + struct pci_error_handlers {
355 /* PCI bus error detected on this device */
356 pci_ers_result_t (*error_detected)(struct pci_dev *dev,
357 + enum pci_channel_state error);
358
359 /* MMIO has been re-enabled, but not DMA */
360 pci_ers_result_t (*mmio_enabled)(struct pci_dev *dev);
···
390 struct pci_dynids dynids;
391 };
392
393 + #define to_pci_driver(drv) container_of(drv, struct pci_driver, driver)
394
395 /**
396  * PCI_DEVICE - macro used to describe a specific pci device
···
448
449 void pcibios_fixup_bus(struct pci_bus *);
450 int __must_check pcibios_enable_device(struct pci_dev *, int mask);
451 + char *pcibios_setup(char *str);
452
453 /* Used only when drivers/pci/setup.c is used */
454 void pcibios_align_resource(void *, struct resource *, resource_size_t,
···
459
460 extern struct pci_bus *pci_find_bus(int domain, int busnr);
461 void pci_bus_add_devices(struct pci_bus *bus);
462 + struct pci_bus *pci_scan_bus_parented(struct device *parent, int bus,
463 + struct pci_ops *ops, void *sysdata);
464 + static inline struct pci_bus *pci_scan_bus(int bus, struct pci_ops *ops,
465 + void *sysdata)
466 {
467 struct pci_bus *root_bus;
468 root_bus = pci_scan_bus_parented(NULL, bus, ops, sysdata);
···
468 pci_bus_add_devices(root_bus);
469 return root_bus;
470 }
471 + struct pci_bus *pci_create_bus(struct device *parent, int bus,
472 + struct pci_ops *ops, void *sysdata);
473 + struct pci_bus *pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev,
474 + int busnr);
475 int pci_scan_slot(struct pci_bus *bus, int devfn);
476 + struct pci_dev *pci_scan_single_device(struct pci_bus *bus, int devfn);
477 void pci_device_add(struct pci_dev *dev, struct pci_bus *bus);
478 unsigned int pci_scan_child_bus(struct pci_bus *bus);
479 int __must_check pci_bus_add_device(struct pci_dev *dev);
480 void pci_read_bridge_bases(struct pci_bus *child);
481 + struct resource *pci_find_parent_resource(const struct pci_dev *dev,
482 + struct resource *res);
483 int pci_get_interrupt_pin(struct pci_dev *dev, struct pci_dev **bridge);
484 extern struct pci_dev *pci_dev_get(struct pci_dev *dev);
485 extern void pci_dev_put(struct pci_dev *dev);
···
489 /* Generic PCI functions exported to card drivers */
490
491 #ifdef CONFIG_PCI_LEGACY
492 + struct pci_dev __deprecated *pci_find_device(unsigned int vendor,
493 + unsigned int device,
494 + const struct pci_dev *from);
495 + struct pci_dev __deprecated *pci_find_slot(unsigned int bus,
496 + unsigned int devfn);
497 #endif /* CONFIG_PCI_LEGACY */
498
499 + int pci_find_capability(struct pci_dev *dev, int cap);
500 + int pci_find_next_capability(struct pci_dev *dev, u8 pos, int cap);
501 + int pci_find_ext_capability(struct pci_dev *dev, int cap);
502 + int pci_find_ht_capability(struct pci_dev *dev, int ht_cap);
503 + int pci_find_next_ht_capability(struct pci_dev *dev, int pos, int ht_cap);
504 + void pcie_wait_pending_transaction(struct pci_dev *dev);
505 struct pci_bus *pci_find_next_bus(const struct pci_bus *from);
506
507 struct pci_dev *pci_get_device(unsigned int vendor, unsigned int device,
···
505 struct pci_dev *pci_get_device_reverse(unsigned int vendor, unsigned int device,
506 struct pci_dev *from);
507
508 + struct pci_dev *pci_get_subsys(unsigned int vendor, unsigned int device,
509 unsigned int ss_vendor, unsigned int ss_device,
510 struct pci_dev *from);
511 + struct pci_dev *pci_get_slot(struct pci_bus *bus, unsigned int devfn);
512 + struct pci_dev *pci_get_bus_and_slot(unsigned int bus, unsigned int devfn);
513 + struct pci_dev *pci_get_class(unsigned int class, struct pci_dev *from);
514 int pci_dev_present(const struct pci_device_id *ids);
515 const struct pci_device_id *pci_find_present(const struct pci_device_id *ids);
516
517 + int pci_bus_read_config_byte(struct pci_bus *bus, unsigned int devfn,
518 + int where, u8 *val);
519 + int pci_bus_read_config_word(struct pci_bus *bus, unsigned int devfn,
520 + int where, u16 *val);
521 + int pci_bus_read_config_dword(struct pci_bus *bus, unsigned int devfn,
522 + int where, u32 *val);
523 + int pci_bus_write_config_byte(struct pci_bus *bus, unsigned int devfn,
524 + int where, u8 val);
525 + int pci_bus_write_config_word(struct pci_bus *bus, unsigned int devfn,
526 + int where, u16 val);
527 + int pci_bus_write_config_dword(struct pci_bus *bus, unsigned int devfn,
528 + int where, u32 val);
529
530 static inline int pci_read_config_byte(struct pci_dev *dev, int where, u8 *val)
531 {
532 + return pci_bus_read_config_byte(dev->bus, dev->devfn, where, val);
533 }
534 static inline int pci_read_config_word(struct pci_dev *dev, int where, u16 *val)
535 {
536 + return pci_bus_read_config_word(dev->bus, dev->devfn, where, val);
537 }
538 + static inline int pci_read_config_dword(struct pci_dev *dev, int where,
539 + u32 *val)
540 {
541 + return pci_bus_read_config_dword(dev->bus, dev->devfn, where, val);
542 }
543 static inline int pci_write_config_byte(struct pci_dev *dev, int where, u8 val)
544 {
545 + return pci_bus_write_config_byte(dev->bus, dev->devfn, where, val);
546 }
547 static inline int pci_write_config_word(struct pci_dev *dev, int where, u16 val)
548 {
549 + return pci_bus_write_config_word(dev->bus, dev->devfn, where, val);
550 }
551 + static inline int pci_write_config_dword(struct pci_dev *dev, int where,
552 + u32 val)
553 {
554 + return pci_bus_write_config_dword(dev->bus, dev->devfn, where, val);
555 }
556
557 int __must_check pci_enable_device(struct pci_dev *dev);
558 + int __must_check pci_enable_device_io(struct pci_dev *dev);
559 + int __must_check pci_enable_device_mem(struct pci_dev *dev);
560 int __must_check pci_reenable_device(struct pci_dev *);
561 int __must_check pcim_enable_device(struct pci_dev *pdev);
562 void pcim_pin_device(struct pci_dev *pdev);
···
576 void pci_update_resource(struct pci_dev *dev, struct resource *res, int resno);
577 int __must_check pci_assign_resource(struct pci_dev *dev, int i);
578 int __must_check pci_assign_resource_fixed(struct pci_dev *dev, int i);
579 int pci_select_bars(struct pci_dev *dev, unsigned long flags);
580
581 /* ROM control related routines */
582 void __iomem __must_check *pci_map_rom(struct pci_dev *pdev, size_t *size);
583 void pci_unmap_rom(struct pci_dev *pdev, void __iomem *rom);
584 size_t pci_get_rom_size(void __iomem *rom, size_t size);
585
586 /* Power management related routines */
···
594 int pci_enable_wake(struct pci_dev *dev, pci_power_t state, int enable);
595
596 /* Functions for PCI Hotplug drivers to use */
597 + int pci_bus_find_capability(struct pci_bus *bus, unsigned int devfn, int cap);
598
599 /* Helper functions for low-level code (drivers/pci/setup-[bus,res].c) */
600 void pci_bus_assign_resources(struct pci_bus *bus);
···
631 return __pci_register_driver(driver, THIS_MODULE, KBUILD_MODNAME);
632 }
633
634 + void pci_unregister_driver(struct pci_driver *dev);
635 + void pci_remove_behind_bridge(struct pci_dev *dev);
636 + struct pci_driver *pci_dev_driver(const struct pci_dev *dev);
637 + const struct pci_device_id *pci_match_id(const struct pci_device_id *ids,
638 + struct pci_dev *dev);
639 + int pci_scan_bridge(struct pci_bus *bus, struct pci_dev *dev, int max,
640 + int pass);
641
642 void pci_walk_bus(struct pci_bus *top, void (*cb)(struct pci_dev *, void *),
643 void *userdata);
644 int pci_cfg_space_size(struct pci_dev *dev);
645 + unsigned char pci_bus_max_busnr(struct pci_bus *bus);
646
647 /* kmem_cache style wrapper around pci_alloc_consistent() */
648
···
669
670
671 #ifndef CONFIG_PCI_MSI
672 + static inline int pci_enable_msi(struct pci_dev *dev)
673 + {
674 + return -1;
675 + }
676 +
677 + static inline void pci_disable_msi(struct pci_dev *dev)
678 + { }
679 +
680 + static inline int pci_enable_msix(struct pci_dev *dev,
681 + struct msix_entry *entries, int nvec)
682 + {
683 + return -1;
684 + }
685 +
686 + static inline void pci_disable_msix(struct pci_dev *dev)
687 + { }
688 +
689 + static inline void msi_remove_pci_irq_vectors(struct pci_dev *dev)
690 + { }
691 +
692 + static inline void pci_restore_msi_state(struct pci_dev *dev)
693 + { }
694 #else
695 extern int pci_enable_msi(struct pci_dev *dev);
696 extern void pci_disable_msi(struct pci_dev *dev);
697 + extern int pci_enable_msix(struct pci_dev *dev,
698 struct msix_entry *entries, int nvec);
699 extern void pci_disable_msix(struct pci_dev *dev);
700 extern void msi_remove_pci_irq_vectors(struct pci_dev *dev);
701 + extern void pci_restore_msi_state(struct pci_dev *dev);
702 #endif
703
704 #ifdef CONFIG_HT_IRQ
···
702 extern int pci_domains_supported;
703 #else
704 enum { pci_domains_supported = 0 };
705 + static inline int pci_domain_nr(struct pci_bus *bus)
706 + {
707 + return 0;
708 + }
709 +
710 static inline int pci_proc_domain(struct pci_bus *bus)
711 {
712 return 0;
···
716  * these as simple inline functions to avoid hair in drivers.
717  */
718
719 + #define _PCI_NOP(o, s, t) \
720 + static inline int pci_##o##_config_##s(struct pci_dev *dev, \
721 + int where, t val) \
722 { return PCIBIOS_FUNC_NOT_SUPPORTED; }
723 +
724 + #define _PCI_NOP_ALL(o, x) _PCI_NOP(o, byte, u8 x) \
725 + _PCI_NOP(o, word, u16 x) \
726 + _PCI_NOP(o, dword, u32 x)
727 _PCI_NOP_ALL(read, *)
728 _PCI_NOP_ALL(write,)
729
730 + static inline struct pci_dev *pci_find_device(unsigned int vendor,
731 + unsigned int device,
732 + const struct pci_dev *from)
733 + {
734 + return NULL;
735 + }
736
737 + static inline struct pci_dev *pci_find_slot(unsigned int bus,
738 + unsigned int devfn)
739 + {
740 + return NULL;
741 + }
742
743 static inline struct pci_dev *pci_get_device(unsigned int vendor,
744 + unsigned int device,
745 + struct pci_dev *from)
746 + {
747 + return NULL;
748 + }
749
750 static inline struct pci_dev *pci_get_device_reverse(unsigned int vendor,
751 + unsigned int device,
752 + struct pci_dev *from)
753 + {
754 + return NULL;
755 + }
756
757 + static inline struct pci_dev *pci_get_subsys(unsigned int vendor,
758 + unsigned int device,
759 + unsigned int ss_vendor,
760 + unsigned int ss_device,
761 + struct pci_dev *from)
762 + {
763 + return NULL;
764 + }
765
766 + static inline struct pci_dev *pci_get_class(unsigned int class,
767 + struct pci_dev *from)
768 + {
769 + return NULL;
770 + }
771
772 #define pci_dev_present(ids) (0)
773 #define no_pci_devices() (1)
774 #define pci_find_present(ids) (NULL)
775 #define pci_dev_put(dev) do { } while (0)
776
777 + static inline void pci_set_master(struct pci_dev *dev)
778 + { }
779 +
780 + static inline int pci_enable_device(struct pci_dev *dev)
781 + {
782 + return -EIO;
783 + }
784 +
785 + static inline void pci_disable_device(struct pci_dev *dev)
786 + { }
787 +
788 + static inline int pci_set_dma_mask(struct pci_dev *dev, u64 mask)
789 + {
790 + return -EIO;
791 + }
792 +
793 + static inline int pci_assign_resource(struct pci_dev *dev, int i)
794 + {
795 + return -EBUSY;
796 + }
797 +
798 + static inline int __pci_register_driver(struct pci_driver *drv,
799 + struct module *owner)
800 + {
801 + return 0;
802 + }
803 +
804 + static inline int pci_register_driver(struct pci_driver *drv)
805 + {
806 + return 0;
807 + }
808 +
809 + static inline void pci_unregister_driver(struct pci_driver *drv)
810 + { }
811 +
812 + static inline int pci_find_capability(struct pci_dev *dev, int cap)
813 + {
814 + return 0;
815 + }
816 +
817 + static inline int pci_find_next_capability(struct pci_dev *dev, u8 post,
818 + int cap)
819 + {
820 + return 0;
821 + }
822 +
823 + static inline int pci_find_ext_capability(struct pci_dev *dev, int cap)
824 + {
825 + return 0;
826 + }
827 +
828 + static inline void pcie_wait_pending_transaction(struct pci_dev *dev)
829 + { }
830
831 /* Power management related routines */
832 + static inline int pci_save_state(struct pci_dev *dev)
833 + {
834 + return 0;
835 + }
836
837 + static inline int pci_restore_state(struct pci_dev *dev)
838 + {
839 + return 0;
840 + }
841 +
842 + static inline int pci_set_power_state(struct pci_dev *dev, pci_power_t state)
843 + {
844 + return 0;
845 + }
846 +
847 + static inline pci_power_t pci_choose_state(struct pci_dev *dev,
848 + pm_message_t state)
849 + {
850 + return PCI_D0;
851 + }
852 +
853 + static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state,
854 + int enable)
855 + {
856 + return 0;
857 + }
858 +
859 + static inline int pci_request_regions(struct pci_dev *dev, const char *res_name)
860 + {
861 + return -EIO;
862 + }
863 +
864 + static inline void pci_release_regions(struct pci_dev *dev)
865 + { }
866
867 #define pci_dma_burst_advice(pdev, strat, strategy_parameter) do { } while (0)
868
869 + static inline void pci_block_user_cfg_access(struct pci_dev *dev)
870 + { }
871 +
872 + static inline void pci_unblock_user_cfg_access(struct pci_dev *dev)
873 + { }
874
875 static inline struct pci_bus *pci_find_next_bus(const struct pci_bus *from)
876 { return NULL; }
···
797
798 /* these helpers provide future and backwards compatibility
799  * for accessing popular PCI BAR info */
800 + #define pci_resource_start(dev, bar) ((dev)->resource[(bar)].start)
801 + #define pci_resource_end(dev, bar) ((dev)->resource[(bar)].end)
802 + #define pci_resource_flags(dev, bar) ((dev)->resource[(bar)].flags)
803 #define pci_resource_len(dev,bar) \
804 + ((pci_resource_start((dev), (bar)) == 0 && \
805 + pci_resource_end((dev), (bar)) == \
806 + pci_resource_start((dev), (bar))) ? 0 : \
807 + \
808 + (pci_resource_end((dev), (bar)) - \
809 + pci_resource_start((dev), (bar)) + 1))
810
811 /* Similar to the helpers above, these manipulate per-pci_dev
812  * driver-specific data.  They are really just a wrapper around
813  * the generic device structure functions of these calls.
814  */
815 + static inline void *pci_get_drvdata(struct pci_dev *pdev)
816 {
817 return dev_get_drvdata(&pdev->dev);
818 }
819
820 + static inline void pci_set_drvdata(struct pci_dev *pdev, void *data)
821 {
822 dev_set_drvdata(&pdev->dev, data);
823 }
···
836  */
837 #ifndef HAVE_ARCH_PCI_RESOURCE_TO_USER
838 static inline void pci_resource_to_user(const struct pci_dev *dev, int bar,
839 + const struct resource *rsrc, resource_size_t *start,
840 resource_size_t *end)
841 {
842 *start = rsrc->start;
···
888
889 void pci_fixup_device(enum pci_fixup_pass pass, struct pci_dev *dev);
890
891 + void __iomem *pcim_iomap(struct pci_dev *pdev, int bar, unsigned long maxlen);
892 void pcim_iounmap(struct pci_dev *pdev, void __iomem *addr);
893 + void __iomem * const *pcim_iomap_table(struct pci_dev *pdev);
894 int pcim_iomap_regions(struct pci_dev *pdev, u16 mask, const char *name);
895 void pcim_iounmap_regions(struct pci_dev *pdev, u16 mask);
896
+8
include/linux/pci_regs.h
···
395 #define PCI_EXP_DEVSTA_AUXPD 0x10 /* AUX Power Detected */
396 #define PCI_EXP_DEVSTA_TRPND 0x20 /* Transactions Pending */
397 #define PCI_EXP_LNKCAP 12 /* Link Capabilities */
398 + #define PCI_EXP_LNKCAP_ASPMS 0xc00 /* ASPM Support */
399 + #define PCI_EXP_LNKCAP_L0SEL 0x7000 /* L0s Exit Latency */
400 + #define PCI_EXP_LNKCAP_L1EL 0x38000 /* L1 Exit Latency */
401 + #define PCI_EXP_LNKCAP_CLKPM 0x40000 /* L1 Clock Power Management */
402 #define PCI_EXP_LNKCTL 16 /* Link Control */
403 + #define PCI_EXP_LNKCTL_RL 0x20 /* Retrain Link */
404 + #define PCI_EXP_LNKCTL_CCC 0x40 /* Common Clock Configuration */
405 #define PCI_EXP_LNKCTL_CLKREQ_EN 0x100 /* Enable clkreq */
406 #define PCI_EXP_LNKSTA 18 /* Link Status */
407 + #define PCI_EXP_LNKSTA_LT 0x800 /* Link Training */
408 + #define PCI_EXP_LNKSTA_SLC 0x1000 /* Slot Clock Configuration */
409 #define PCI_EXP_SLTCAP 20 /* Slot Capabilities */
410 #define PCI_EXP_SLTCTL 24 /* Slot Control */
411 #define PCI_EXP_SLTSTA 26 /* Slot Status */