
Merge tag 'pci-v3.16-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci into next

Pull PCI changes from Bjorn Helgaas:
"Enumeration
- Notify driver before and after device reset (Keith Busch)
- Use reset notification in NVMe (Keith Busch)

NUMA
- Warn if we have to guess host bridge node information (Myron Stowe)
- Work around AMD Fam15h BIOSes that fail to provide _PXM (Suravee
Suthikulpanit)
- Clean up and mark early_root_info_init() as deprecated (Suravee
Suthikulpanit)

Driver binding
- Add "driver_override" to force specific driver binding (Alex Williamson)
- Fail "new_id" addition for devices we already know about (Bandan
Das)

Resource management
- Support BAR sizes up to 8GB (Nikhil Rao, Alan Cox)
- Don't move IORESOURCE_PCI_FIXED resources (Bjorn Helgaas)
- Mark SBx00 HPET BAR as IORESOURCE_PCI_FIXED (Bjorn Helgaas)
- Fail safely if we can't handle BARs larger than 4GB (Bjorn Helgaas)
- Reject BAR above 4GB if dma_addr_t is too small (Bjorn Helgaas)
- Don't convert BAR address to resource if dma_addr_t is too small
(Bjorn Helgaas)
- Don't set BAR to zero if dma_addr_t is too small (Bjorn Helgaas)
- Don't print anything while decoding is disabled (Bjorn Helgaas)
- Don't add disabled subtractive decode bus resources (Bjorn Helgaas)
- Add resource allocation comments (Bjorn Helgaas)
- Restrict 64-bit prefetchable bridge windows to 64-bit resources
(Yinghai Lu)
- Assign i82875p_edac PCI resources before adding device (Yinghai Lu)

PCI device hotplug
- Remove unnecessary "dev->bus" test (Bjorn Helgaas)
- Use PCI_EXP_SLTCAP_PSN define (Bjorn Helgaas)
- Fix rpaphp endianness issues (Laurent Dufour)
- Acknowledge spurious "cmd completed" event (Rajat Jain)
- Allow hotplug service drivers to operate in polling mode (Rajat Jain)
- Fix cpqphp possible NULL dereference (Rickard Strandqvist)

MSI
- Replace pci_enable_msi_block() by pci_enable_msi_exact()
(Alexander Gordeev)
- Replace pci_enable_msix() by pci_enable_msix_exact() (Alexander Gordeev)
- Simplify populate_msi_sysfs() (Jan Beulich)

Virtualization
- Add Intel Patsburg (X79) root port ACS quirk (Alex Williamson)
- Mark RTL8110SC INTx masking as broken (Alex Williamson)

Generic host bridge driver
- Add generic PCI host controller driver (Will Deacon)

Freescale i.MX6
- Use new clock names (Lucas Stach)
- Drop old IRQ mapping (Lucas Stach)
- Remove optional (and unused) IRQs (Lucas Stach)
- Add support for MSI (Lucas Stach)
- Fix imx6_add_pcie_port() section mismatch warning (Sachin Kamat)

Renesas R-Car
- Add gen2 device tree support (Ben Dooks)
- Use new OF interrupt mapping when possible (Lucas Stach)
- Add PCIe driver (Phil Edworthy)
- Add PCIe MSI support (Phil Edworthy)
- Add PCIe device tree bindings (Phil Edworthy)

Samsung Exynos
- Remove unnecessary OOM messages (Jingoo Han)
- Fix add_pcie_port() section mismatch warning (Sachin Kamat)

Synopsys DesignWare
- Make MSI ISR shared IRQ aware (Lucas Stach)

Miscellaneous
- Check for broken config space aliasing (Alex Williamson)
- Update email address (Ben Hutchings)
- Fix Broadcom CNB20LE unintended sign extension (Bjorn Helgaas)
- Fix incorrect vgaarb conditional in WARN_ON() (Bjorn Helgaas)
- Remove unnecessary __ref annotations (Bjorn Helgaas)
- Add arch/x86/kernel/quirks.c to MAINTAINERS PCI file patterns
(Bjorn Helgaas)
- Fix use of uninitialized MPS value (Bjorn Helgaas)
- Tidy x86/gart messages (Bjorn Helgaas)
- Fix return value from pci_user_{read,write}_config_*() (Gavin Shan)
- Turn pcibios_penalize_isa_irq() into a weak function (Hanjun Guo)
- Remove unused serial device IDs (Jean Delvare)
- Use designated initialization in PCI_VDEVICE (Mark Rustad)
- Fix powerpc NULL dereference in pci_root_buses traversal (Mike Qiu)
- Configure MPS on ARM (Murali Karicheri)
- Remove unnecessary includes of <linux/init.h> (Paul Gortmaker)
- Move Open Firmware devspec attribute to PCI common code (Sebastian Ott)
- Use pdev->dev.groups for attribute creation on s390 (Sebastian Ott)
- Remove pcibios_add_platform_entries() (Sebastian Ott)
- Add new ID for Intel GPU "spurious interrupt" quirk (Thomas Jarosch)
- Rename pci_is_bridge() to pci_has_subordinate() (Yijing Wang)
- Add and use new pci_is_bridge() interface (Yijing Wang)
- Make pci_bus_add_device() void (Yijing Wang)

DMA API
- Clarify physical/bus address distinction in docs (Bjorn Helgaas)
- Fix typos in docs (Emilio López)
- Update dma_pool_create() and dma_pool_alloc() descriptions (Gioh Kim)
- Change dma_declare_coherent_memory() CPU address to phys_addr_t
(Bjorn Helgaas)
- Pass GAPSPCI_DMA_BASE CPU & bus address to dma_declare_coherent_memory()
(Bjorn Helgaas)"

* tag 'pci-v3.16-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (92 commits)
MAINTAINERS: Add generic PCI host controller driver
PCI: generic: Add generic PCI host controller driver
PCI: imx6: Add support for MSI
PCI: designware: Make MSI ISR shared IRQ aware
PCI: imx6: Remove optional (and unused) IRQs
PCI: imx6: Drop old IRQ mapping
PCI: imx6: Use new clock names
i82875p_edac: Assign PCI resources before adding device
ARM/PCI: Call pcie_bus_configure_settings() to set MPS
PCI: imx6: Fix imx6_add_pcie_port() section mismatch warning
PCI: Make pci_bus_add_device() void
PCI: exynos: Fix add_pcie_port() section mismatch warning
PCI: Introduce new device binding path using pci_dev.driver_override
PCI: rcar: Add gen2 device tree support
PCI: cpqphp: Fix possible null pointer dereference
PCI: rcar: Add R-Car PCIe device tree bindings
PCI: rcar: Add MSI support for PCIe
PCI: rcar: Add Renesas R-Car PCIe driver
PCI: Fix return value from pci_user_{read,write}_config_*()
PCI: exynos: Remove unnecessary OOM messages
...

+2709 -823
+21
Documentation/ABI/testing/sysfs-bus-pci
···
 		valid. For example, writing a 2 to this file when sriov_numvfs
 		is not 0 and not 2 already will return an error. Writing a 10
 		when the value of sriov_totalvfs is 8 will return an error.
+
+What:		/sys/bus/pci/devices/.../driver_override
+Date:		April 2014
+Contact:	Alex Williamson <alex.williamson@redhat.com>
+Description:
+		This file allows the driver for a device to be specified which
+		will override standard static and dynamic ID matching. When
+		specified, only a driver with a name matching the value written
+		to driver_override will have an opportunity to bind to the
+		device. The override is specified by writing a string to the
+		driver_override file (echo pci-stub > driver_override) and
+		may be cleared with an empty string (echo > driver_override).
+		This returns the device to standard matching rules binding.
+		Writing to driver_override does not automatically unbind the
+		device from its current driver or make any attempt to
+		automatically load the specified driver. If no driver with a
+		matching name is currently loaded in the kernel, the device
+		will not bind to any driver. This also allows devices to
+		opt-out of driver binding using a driver_override name such as
+		"none". Only a single driver may be specified in the override,
+		there is no support for parsing delimiters.
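The bind flow this new attribute enables can be sketched as a short shell sequence. This is an illustrative sketch only: the BDF 0000:03:00.0 is a placeholder, pci-stub is just one possible target driver, and the script exits early when the device (or write permission) is absent.

```shell
#!/bin/sh
# Illustrative driver_override flow; 0000:03:00.0 is a placeholder BDF.
dev=/sys/bus/pci/devices/0000:03:00.0

# Bail out gracefully on systems without this device or without root.
[ -w "$dev/driver_override" ] || { echo "device not present; skipping"; exit 0; }

echo pci-stub > "$dev/driver_override"          # only pci-stub may now bind
[ -e "$dev/driver" ] && echo 0000:03:00.0 > "$dev/driver/unbind"
echo 0000:03:00.0 > /sys/bus/pci/drivers_probe  # re-probe under the override
echo > "$dev/driver_override"                   # clear: back to normal matching
```

Note that, as the ABI text above says, writing driver_override does not itself unbind or load anything, which is why the unbind and drivers_probe writes are explicit.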
+132 -78
Documentation/DMA-API-HOWTO.txt
···
 with example pseudo-code. For a concise description of the API, see
 DMA-API.txt.

-Most of the 64bit platforms have special hardware that translates bus
-addresses (DMA addresses) into physical addresses. This is similar to
-how page tables and/or a TLB translates virtual addresses to physical
-addresses on a CPU. This is needed so that e.g. PCI devices can
-access with a Single Address Cycle (32bit DMA address) any page in the
-64bit physical address space. Previously in Linux those 64bit
-platforms had to set artificial limits on the maximum RAM size in the
-system, so that the virt_to_bus() static scheme works (the DMA address
-translation tables were simply filled on bootup to map each bus
-address to the physical page __pa(bus_to_virt())).
+                       CPU and DMA addresses
+
+There are several kinds of addresses involved in the DMA API, and it's
+important to understand the differences.
+
+The kernel normally uses virtual addresses. Any address returned by
+kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
+be stored in a "void *".
+
+The virtual memory system (TLB, page tables, etc.) translates virtual
+addresses to CPU physical addresses, which are stored as "phys_addr_t" or
+"resource_size_t". The kernel manages device resources like registers as
+physical addresses. These are the addresses in /proc/iomem. The physical
+address is not directly useful to a driver; it must use ioremap() to map
+the space and produce a virtual address.
+
+I/O devices use a third kind of address: a "bus address" or "DMA address".
+If a device has registers at an MMIO address, or if it performs DMA to read
+or write system memory, the addresses used by the device are bus addresses.
+In some systems, bus addresses are identical to CPU physical addresses, but
+in general they are not. IOMMUs and host bridges can produce arbitrary
+mappings between physical and bus addresses.
+
+Here's a picture and some examples:
+
+               CPU                  CPU                  Bus
+             Virtual              Physical             Address
+             Address              Address               Space
+              Space                Space
+
+            +-------+             +------+             +------+
+            |       |             |MMIO  |   Offset    |      |
+            |       |  Virtual    |Space |   applied   |      |
+          C +-------+ --------> B +------+ ----------> +------+ A
+            |       |  mapping    |      |   by host   |      |
+  +-----+   |       |             |      |   bridge    |      |   +--------+
+  |     |   |       |             +------+             |      |   |        |
+  | CPU |   |       |             | RAM  |             |      |   | Device |
+  |     |   |       |             |      |             |      |   |        |
+  +-----+   +-------+             +------+             +------+   +--------+
+            |       |  Virtual    |Buffer|   Mapping   |      |
+          X +-------+ --------> Y +------+ <---------- +------+ Z
+            |       |  mapping    | RAM  |   by IOMMU
+            |       |             |      |
+            |       |             |      |
+            +-------+             +------+
+
+During the enumeration process, the kernel learns about I/O devices and
+their MMIO space and the host bridges that connect them to the system. For
+example, if a PCI device has a BAR, the kernel reads the bus address (A)
+from the BAR and converts it to a CPU physical address (B). The address B
+is stored in a struct resource and usually exposed via /proc/iomem. When a
+driver claims a device, it typically uses ioremap() to map physical address
+B at a virtual address (C). It can then use, e.g., ioread32(C), to access
+the device registers at bus address A.
+
+If the device supports DMA, the driver sets up a buffer using kmalloc() or
+a similar interface, which returns a virtual address (X). The virtual
+memory system maps X to a physical address (Y) in system RAM. The driver
+can use virtual address X to access the buffer, but the device itself
+cannot because DMA doesn't go through the CPU virtual memory system.
+
+In some simple systems, the device can do DMA directly to physical address
+Y. But in many others, there is IOMMU hardware that translates bus
+addresses to physical addresses, e.g., it translates Z to Y. This is part
+of the reason for the DMA API: the driver can give a virtual address X to
+an interface like dma_map_single(), which sets up any required IOMMU
+mapping and returns the bus address Z. The driver then tells the device to
+do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
+RAM.

 So that Linux can use the dynamic DMA mapping, it needs some help from the
 drivers, namely it has to take into account that DMA addresses should be
···
 hardware exists.

 Note that the DMA API works with any bus independent of the underlying
-microprocessor architecture. You should use the DMA API rather than
-the bus specific DMA API (e.g. pci_dma_*).
+microprocessor architecture. You should use the DMA API rather than the
+bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
+pci_map_*() interfaces.

 First of all, you should make sure

         #include <linux/dma-mapping.h>

-is in your driver. This file will obtain for you the definition of the
-dma_addr_t (which can hold any valid DMA address for the platform)
-type which should be used everywhere you hold a DMA (bus) address
-returned from the DMA mapping functions.
+is in your driver, which provides the definition of dma_addr_t. This type
+can hold any valid DMA or bus address for the platform and should be used
+everywhere you hold a DMA address returned from the DMA mapping functions.

 What memory is DMA'able?
···
 is a bit mask describing which bits of an address your device
 supports. It returns zero if your card can perform DMA properly on
 the machine given the address mask you provided. In general, the
-device struct of your device is embedded in the bus specific device
-struct of your device. For example, a pointer to the device struct of
-your PCI device is pdev->dev (pdev is a pointer to the PCI device
+device struct of your device is embedded in the bus-specific device
+struct of your device. For example, &pdev->dev is a pointer to the
+device struct of a PCI device (pdev is a pointer to the PCI device
 struct of your device).

 If it returns non-zero, your device cannot perform DMA properly on
···
 The standard 32-bit addressing device would do something like this:

         if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
-                printk(KERN_WARNING
-                       "mydev: No suitable DMA available.\n");
+                dev_warn(dev, "mydev: No suitable DMA available\n");
                 goto ignore_this_device;
         }
···
         } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
                 using_dac = 0;
         } else {
-                printk(KERN_WARNING
-                       "mydev: No suitable DMA available.\n");
+                dev_warn(dev, "mydev: No suitable DMA available\n");
                 goto ignore_this_device;
         }
···
                 using_dac = 0;
                 consistent_using_dac = 0;
         } else {
-                printk(KERN_WARNING
-                       "mydev: No suitable DMA available.\n");
+                dev_warn(dev, "mydev: No suitable DMA available\n");
                 goto ignore_this_device;
         }

-The coherent coherent mask will always be able to set the same or a
-smaller mask as the streaming mask. However for the rare case that a
-device driver only uses consistent allocations, one would have to
-check the return value from dma_set_coherent_mask().
+The coherent mask will always be able to set the same or a smaller mask as
+the streaming mask. However for the rare case that a device driver only
+uses consistent allocations, one would have to check the return value from
+dma_set_coherent_mask().

 Finally, if your device can only drive the low 24-bits of
 address you might do something like:

         if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
-                printk(KERN_WARNING
-                       "mydev: 24-bit DMA addressing not available.\n");
+                dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
                 goto ignore_this_device;
         }
···
                 card->playback_enabled = 1;
         } else {
                 card->playback_enabled = 0;
-                printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
+                dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
                        card->name);
         }
         if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
                 card->record_enabled = 1;
         } else {
                 card->record_enabled = 0;
-                printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
+                dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
                        card->name);
         }
···
 Size is the length of the region you want to allocate, in bytes.

 This routine will allocate RAM for that region, so it acts similarly to
-__get_free_pages (but takes size instead of a page order). If your
+__get_free_pages() (but takes size instead of a page order). If your
 driver needs regions sized smaller than a page, you may prefer using
 the dma_pool interface, described below.
···
 dma_set_coherent_mask(). This is true of the dma_pool interface as
 well.

-dma_alloc_coherent returns two values: the virtual address which you
+dma_alloc_coherent() returns two values: the virtual address which you
 can use to access it from the CPU and dma_handle which you pass to the
 card.

-The cpu return address and the DMA bus master address are both
+The CPU virtual address and the DMA bus address are both
 guaranteed to be aligned to the smallest PAGE_SIZE order which
 is greater than or equal to the requested size. This invariant
 exists (for example) to guarantee that if you allocate a chunk
···
         dma_free_coherent(dev, size, cpu_addr, dma_handle);

 where dev, size are the same as in the above call and cpu_addr and
-dma_handle are the values dma_alloc_coherent returned to you.
+dma_handle are the values dma_alloc_coherent() returned to you.
 This function may not be called in interrupt context.

 If your driver needs lots of smaller memory regions, you can write
-custom code to subdivide pages returned by dma_alloc_coherent,
+custom code to subdivide pages returned by dma_alloc_coherent(),
 or you can use the dma_pool API to do that. A dma_pool is like
-a kmem_cache, but it uses dma_alloc_coherent not __get_free_pages.
+a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
 Also, it understands common hardware constraints for alignment,
 like queue heads needing to be aligned on N byte boundaries.
···

         struct dma_pool *pool;

-        pool = dma_pool_create(name, dev, size, align, alloc);
+        pool = dma_pool_create(name, dev, size, align, boundary);

 The "name" is for diagnostics (like a kmem_cache name); dev and size
 are as above. The device's hardware alignment requirement for this
 type of data is "align" (which is expressed in bytes, and must be a
 power of two). If your device has no boundary crossing restrictions,
-pass 0 for alloc; passing 4096 says memory allocated from this pool
+pass 0 for boundary; passing 4096 says memory allocated from this pool
 must not cross 4KByte boundaries (but at that time it may be better to
-go for dma_alloc_coherent directly instead).
+use dma_alloc_coherent() directly instead).

-Allocate memory from a dma pool like this:
+Allocate memory from a DMA pool like this:

         cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

-flags are SLAB_KERNEL if blocking is permitted (not in_interrupt nor
-holding SMP locks), SLAB_ATOMIC otherwise. Like dma_alloc_coherent,
+flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
+holding SMP locks), GFP_ATOMIC otherwise. Like dma_alloc_coherent(),
 this returns two values, cpu_addr and dma_handle.

 Free memory that was allocated from a dma_pool like this:

         dma_pool_free(pool, cpu_addr, dma_handle);

-where pool is what you passed to dma_pool_alloc, and cpu_addr and
-dma_handle are the values dma_pool_alloc returned. This function
+where pool is what you passed to dma_pool_alloc(), and cpu_addr and
+dma_handle are the values dma_pool_alloc() returned. This function
 may be called in interrupt context.

 Destroy a dma_pool by calling:

         dma_pool_destroy(pool);

-Make sure you've called dma_pool_free for all memory allocated
+Make sure you've called dma_pool_free() for all memory allocated
 from a pool before you destroy the pool. This function may not
 be called in interrupt context.
···
                  DMA_FROM_DEVICE
                  DMA_NONE

-One should provide the exact DMA direction if you know it.
+You should provide the exact DMA direction if you know it.

 DMA_TO_DEVICE means "from main memory to the device"
 DMA_FROM_DEVICE means "from the device to main memory"
···
         dma_unmap_single(dev, dma_handle, size, direction);

 You should call dma_mapping_error() as dma_map_single() could fail and return
-error. Not all dma implementations support dma_mapping_error() interface.
+error. Not all DMA implementations support the dma_mapping_error() interface.
 However, it is a good practice to call dma_mapping_error() interface, which
 will invoke the generic mapping error check interface. Doing so will ensure
-that the mapping code will work correctly on all dma implementations without
+that the mapping code will work correctly on all DMA implementations without
 any dependency on the specifics of the underlying implementation. Using the
 returned address without checking for errors could result in failures ranging
 from panics to silent data corruption. A couple of examples of incorrect ways
-to check for errors that make assumptions about the underlying dma
+to check for errors that make assumptions about the underlying DMA
 implementation are as follows and these are applicable to dma_map_page() as
 well.
···
                 goto map_error;
         }

-You should call dma_unmap_single when the DMA activity is finished, e.g.
+You should call dma_unmap_single() when the DMA activity is finished, e.g.,
 from the interrupt which told you that the DMA transfer is done.

-Using cpu pointers like this for single mappings has a disadvantage,
+Using CPU pointers like this for single mappings has a disadvantage:
 you cannot reference HIGHMEM memory in this way. Thus, there is a
-map/unmap interface pair akin to dma_{map,unmap}_single. These
-interfaces deal with page/offset pairs instead of cpu pointers.
+map/unmap interface pair akin to dma_{map,unmap}_single(). These
+interfaces deal with page/offset pairs instead of CPU pointers.
 Specifically:

         struct device *dev = &my_dev->dev;
···
 You should call dma_mapping_error() as dma_map_page() could fail and return
 error as outlined under the dma_map_single() discussion.

-You should call dma_unmap_page when the DMA activity is finished, e.g.
+You should call dma_unmap_page() when the DMA activity is finished, e.g.,
 from the interrupt which told you that the DMA transfer is done.

 With scatterlists, you map a region gathered from several regions by:
···
 it should _NOT_ be the 'count' value _returned_ from the
 dma_map_sg call.

-Every dma_map_{single,sg} call should have its dma_unmap_{single,sg}
-counterpart, because the bus address space is a shared resource (although
-in some ports the mapping is per each BUS so less devices contend for the
-same bus address space) and you could render the machine unusable by eating
-all bus addresses.
+Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
+counterpart, because the bus address space is a shared resource and
+you could render the machine unusable by consuming all bus addresses.

 If you need to use the same streaming DMA region multiple times and touch
 the data in between the DMA transfers, the buffer needs to be synced
-properly in order for the cpu and device to see the most uptodate and
+properly in order for the CPU and device to see the most up-to-date and
 correct copy of the DMA buffer.

-So, firstly, just map it with dma_map_{single,sg}, and after each DMA
+So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
 transfer call either:

         dma_sync_single_for_cpu(dev, dma_handle, size, direction);
···
 as appropriate.

 Then, if you wish to let the device get at the DMA area again,
-finish accessing the data with the cpu, and then before actually
+finish accessing the data with the CPU, and then before actually
 giving the buffer to the hardware call either:

         dma_sync_single_for_device(dev, dma_handle, size, direction);
···
 as appropriate.

 After the last DMA transfer call one of the DMA unmap routines
-dma_unmap_{single,sg}. If you don't touch the data from the first dma_map_*
-call till dma_unmap_*, then you don't have to call the dma_sync_*
-routines at all.
+dma_unmap_{single,sg}(). If you don't touch the data from the first
+dma_map_*() call till dma_unmap_*(), then you don't have to call the
+dma_sync_*() routines at all.

 Here is pseudo code which shows a situation in which you would need
 to use the dma_sync_*() interfaces.
···
                 }
         }

-Drivers converted fully to this interface should not use virt_to_bus any
-longer, nor should they use bus_to_virt. Some drivers have to be changed a
-little bit, because there is no longer an equivalent to bus_to_virt in the
+Drivers converted fully to this interface should not use virt_to_bus() any
+longer, nor should they use bus_to_virt(). Some drivers have to be changed a
+little bit, because there is no longer an equivalent to bus_to_virt() in the
 dynamic DMA mapping scheme - you have to always store the DMA addresses
-returned by the dma_alloc_coherent, dma_pool_alloc, and dma_map_single
-calls (dma_map_sg stores them in the scatterlist itself if the platform
+returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
+calls (dma_map_sg() stores them in the scatterlist itself if the platform
 supports dynamic DMA mapping in hardware) in your driver structures and/or
 in the card registers.
···
 DMA address space is limited on some architectures and an allocation
 failure can be determined by:

-- checking if dma_alloc_coherent returns NULL or dma_map_sg returns 0
+- checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0

-- checking the returned dma_addr_t of dma_map_single and dma_map_page
+- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
   by using dma_mapping_error():

         dma_addr_t dma_handle;
···
                 dma_unmap_single(array[i].dma_addr);
         }

-Networking drivers must call dev_kfree_skb to free the socket buffer
+Networking drivers must call dev_kfree_skb() to free the socket buffer
 and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
 (ndo_start_xmit). This means that the socket buffer is just dropped in
 the failure case.
···
                 DEFINE_DMA_UNMAP_LEN(len);
         };

-2) Use dma_unmap_{addr,len}_set to set these values.
+2) Use dma_unmap_{addr,len}_set() to set these values.
    Example, before:

         ringp->mapping = FOO;
···
         dma_unmap_addr_set(ringp, mapping, FOO);
         dma_unmap_len_set(ringp, len, BAR);

-3) Use dma_unmap_{addr,len} to access these values.
+3) Use dma_unmap_{addr,len}() to access these values.
    Example, before:

         dma_unmap_single(dev, ringp->mapping, ringp->len,
+76 -72
Documentation/DMA-API.txt
··· 4 4 James E.J. Bottomley <James.Bottomley@HansenPartnership.com> 5 5 6 6 This document describes the DMA API. For a more gentle introduction 7 - of the API (and actual examples) see 8 - Documentation/DMA-API-HOWTO.txt. 7 + of the API (and actual examples), see Documentation/DMA-API-HOWTO.txt. 9 8 10 - This API is split into two pieces. Part I describes the API. Part II 11 - describes the extensions to the API for supporting non-consistent 12 - memory machines. Unless you know that your driver absolutely has to 13 - support non-consistent platforms (this is usually only legacy 14 - platforms) you should only use the API described in part I. 9 + This API is split into two pieces. Part I describes the basic API. 10 + Part II describes extensions for supporting non-consistent memory 11 + machines. Unless you know that your driver absolutely has to support 12 + non-consistent platforms (this is usually only legacy platforms) you 13 + should only use the API described in part I. 15 14 16 15 Part I - dma_ API 17 16 ------------------------------------- 18 17 19 - To get the dma_ API, you must #include <linux/dma-mapping.h> 18 + To get the dma_ API, you must #include <linux/dma-mapping.h>. This 19 + provides dma_addr_t and the interfaces described below. 20 20 21 + A dma_addr_t can hold any valid DMA or bus address for the platform. It 22 + can be given to a device to use as a DMA source or target. A CPU cannot 23 + reference a dma_addr_t directly because there may be translation between 24 + its physical address space and the bus address space. 21 25 22 - Part Ia - Using large dma-coherent buffers 26 + Part Ia - Using large DMA-coherent buffers 23 27 ------------------------------------------ 24 28 25 29 void * ··· 37 33 devices to read that memory.) 38 34 39 35 This routine allocates a region of <size> bytes of consistent memory. 
40 - It also returns a <dma_handle> which may be cast to an unsigned 41 - integer the same width as the bus and used as the physical address 42 - base of the region. 43 36 44 - Returns: a pointer to the allocated region (in the processor's virtual 37 + It returns a pointer to the allocated region (in the processor's virtual 45 38 address space) or NULL if the allocation failed. 39 + 40 + It also returns a <dma_handle> which may be cast to an unsigned integer the 41 + same width as the bus and given to the device as the bus address base of 42 + the region. 46 43 47 44 Note: consistent memory can be expensive on some platforms, and the 48 45 minimum allocation length may be as big as a page, so you should 49 46 consolidate your requests for consistent memory as much as possible. 50 47 The simplest way to do that is to use the dma_pool calls (see below). 51 48 52 - The flag parameter (dma_alloc_coherent only) allows the caller to 53 - specify the GFP_ flags (see kmalloc) for the allocation (the 49 + The flag parameter (dma_alloc_coherent() only) allows the caller to 50 + specify the GFP_ flags (see kmalloc()) for the allocation (the 54 51 implementation may choose to ignore flags that affect the location of 55 52 the returned memory, like GFP_DMA). 56 53 ··· 66 61 dma_free_coherent(struct device *dev, size_t size, void *cpu_addr, 67 62 dma_addr_t dma_handle) 68 63 69 - Free the region of consistent memory you previously allocated. dev, 70 - size and dma_handle must all be the same as those passed into the 71 - consistent allocate. cpu_addr must be the virtual address returned by 72 - the consistent allocate. 64 + Free a region of consistent memory you previously allocated. dev, 65 + size and dma_handle must all be the same as those passed into 66 + dma_alloc_coherent(). cpu_addr must be the virtual address returned by 67 + the dma_alloc_coherent(). 73 68 74 69 Note that unlike their sibling allocation calls, these routines 75 70 may only be called with IRQs enabled. 
76 71 77 72 78 - Part Ib - Using small dma-coherent buffers 73 + Part Ib - Using small DMA-coherent buffers 79 74 ------------------------------------------ 80 75 81 76 To get this part of the dma_ API, you must #include <linux/dmapool.h> 82 77 83 - Many drivers need lots of small dma-coherent memory regions for DMA 78 + Many drivers need lots of small DMA-coherent memory regions for DMA 84 79 descriptors or I/O buffers. Rather than allocating in units of a page 85 80 or more using dma_alloc_coherent(), you can use DMA pools. These work 86 - much like a struct kmem_cache, except that they use the dma-coherent allocator, 81 + much like a struct kmem_cache, except that they use the DMA-coherent allocator, 87 82 not __get_free_pages(). Also, they understand common hardware constraints 88 83 for alignment, like queue heads needing to be aligned on N-byte boundaries. 89 84 ··· 92 87 dma_pool_create(const char *name, struct device *dev, 93 88 size_t size, size_t align, size_t alloc); 94 89 95 - The pool create() routines initialize a pool of dma-coherent buffers 90 + dma_pool_create() initializes a pool of DMA-coherent buffers 96 91 for use with a given device. It must be called in a context which 97 92 can sleep. 98 93 ··· 107 102 void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags, 108 103 dma_addr_t *dma_handle); 109 104 110 - This allocates memory from the pool; the returned memory will meet the size 111 - and alignment requirements specified at creation time. Pass GFP_ATOMIC to 112 - prevent blocking, or if it's permitted (not in_interrupt, not holding SMP locks), 113 - pass GFP_KERNEL to allow blocking. Like dma_alloc_coherent(), this returns 114 - two values: an address usable by the cpu, and the dma address usable by the 115 - pool's device. 105 + This allocates memory from the pool; the returned memory will meet the 106 + size and alignment requirements specified at creation time. 
+Pass GFP_ATOMIC to prevent blocking, or if it's permitted (not
+in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
+blocking.  Like dma_alloc_coherent(), this returns two values: an
+address usable by the CPU, and the DMA address usable by the pool's
+device.
 
 
	void dma_pool_free(struct dma_pool *pool, void *vaddr,
			dma_addr_t addr);
 
 This puts memory back into the pool.  The pool is what was passed to
-the pool allocation routine; the cpu (vaddr) and dma addresses are what
+dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
 were returned when that routine allocated the memory being freed.
 
 
	void dma_pool_destroy(struct dma_pool *pool);
 
-The pool destroy() routines free the resources of the pool.  They must be
+dma_pool_destroy() frees the resources of the pool.  It must be
 called in a context which can sleep.  Make sure you've freed all allocated
 memory back to the pool before you destroy it.
···
		enum dma_data_direction direction)
 
 Maps a piece of processor virtual memory so it can be accessed by the
-device and returns the physical handle of the memory.
+device and returns the bus address of the memory.
 
-The direction for both api's may be converted freely by casting.
+The direction for both APIs may be converted freely by casting.
 However the dma_ API uses a strongly typed enumerator for its
 direction:
···
 DMA_FROM_DEVICE		data is coming from the device to the memory
 DMA_BIDIRECTIONAL	direction isn't known
 
-Notes:  Not all memory regions in a machine can be mapped by this
-API.  Further, regions that appear to be physically contiguous in
-kernel virtual space may not be contiguous as physical memory.
-Since this API does not provide any scatter/gather capability, it will fail
-if the user tries to map a non-physically contiguous piece of memory.
-For this reason, it is recommended that memory mapped by this API be
-obtained only from sources which guarantee it to be physically contiguous
-(like kmalloc).
+Notes:  Not all memory regions in a machine can be mapped by this API.
+Further, contiguous kernel virtual space may not be contiguous as
+physical memory.  Since this API does not provide any scatter/gather
+capability, it will fail if the user tries to map a non-physically
+contiguous piece of memory.  For this reason, memory to be mapped by
+this API should be obtained from sources which guarantee it to be
+physically contiguous (like kmalloc).
 
-Further, the physical address of the memory must be within the
-dma_mask of the device (the dma_mask represents a bit mask of the
-addressable region for the device.  I.e., if the physical address of
-the memory anded with the dma_mask is still equal to the physical
-address, then the device can perform DMA to the memory).  In order to
+Further, the bus address of the memory must be within the
+dma_mask of the device (the dma_mask is a bit mask of the
+addressable region for the device, i.e., if the bus address of
+the memory ANDed with the dma_mask is still equal to the bus
+address, then the device can perform DMA to the memory).  To
 ensure that the memory allocated by kmalloc is within the dma_mask,
 the driver may specify various platform-dependent flags to restrict
-the physical memory range of the allocation (e.g. on x86, GFP_DMA
-guarantees to be within the first 16Mb of available physical memory,
+the bus address range of the allocation (e.g., on x86, GFP_DMA
+guarantees to be within the first 16MB of available bus addresses,
 as required by ISA devices).
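The dma_mask containment test described in the hunk above is plain bit arithmetic and can be sketched outside the kernel. This is an illustrative standalone sketch: `addr_within_mask` and the 24-bit ISA-style mask are ours, not kernel API.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * A bus address is reachable by a device iff masking it with the
 * device's dma_mask changes nothing, i.e. no set bit falls outside
 * the mask. uint64_t stands in for the kernel's dma_addr_t here.
 */
static bool addr_within_mask(uint64_t bus_addr, uint64_t dma_mask)
{
	return (bus_addr & dma_mask) == bus_addr;
}
```

With a 24-bit ISA-style mask (0x00ffffff), any address at or above 16MB fails the test, which is exactly why GFP_DMA-constrained allocations exist.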
 
 Note also that the above constraints on physical contiguity and
 dma_mask may not apply if the platform has an IOMMU (a device which
-supplies a physical to virtual mapping between the I/O memory bus and
-the device).  However, to be portable, device driver writers may *not*
-assume that such an IOMMU exists.
+maps an I/O bus address to a physical memory address).  However, to be
+portable, device driver writers may *not* assume that such an IOMMU
+exists.
 
 Warnings:  Memory coherency operates at a granularity called the cache
 line width.  In order for memory mapped by this API to operate
···
	int
	dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 
-In some circumstances dma_map_single and dma_map_page will fail to create
+In some circumstances dma_map_single() and dma_map_page() will fail to create
 a mapping. A driver can check for these errors by testing the returned
-dma address with dma_mapping_error(). A non-zero return value means the mapping
+DMA address with dma_mapping_error(). A non-zero return value means the mapping
 could not be created and the driver should take appropriate action (e.g.
 reduce current DMA mapping usage or delay and try again later).
···
	dma_map_sg(struct device *dev, struct scatterlist *sg,
		int nents, enum dma_data_direction direction)
 
-Returns: the number of physical segments mapped (this may be shorter
+Returns: the number of bus address segments mapped (this may be shorter
 than <nents> passed in if some elements of the scatter/gather list are
 physically or virtually adjacent and an IOMMU maps them with a single
 entry).
 
 Please note that the sg cannot be mapped again if it has been mapped once.
 The mapping process is allowed to destroy information in the sg.
 
-As with the other mapping interfaces, dma_map_sg can fail. When it
+As with the other mapping interfaces, dma_map_sg() can fail. When it
 does, 0 is returned and a driver must take appropriate action. It is
 critical that the driver do something, in the case of a block driver
 aborting the request or even oopsing is better than doing nothing and
···
 API.
 
 Note: <nents> must be the number you passed in, *not* the number of
-physical entries returned.
+bus address entries returned.
 
	void
	dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
···
	dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
			enum dma_data_direction direction)
 
-Synchronise a single contiguous or scatter/gather mapping for the cpu
+Synchronise a single contiguous or scatter/gather mapping for the CPU
 and device. With the sync_sg API, all the parameters must be the same
 as those passed into the single mapping API. With the sync_single API,
 you can use dma_handle and size parameters that aren't identical to
···
 without the _attrs suffixes, except that they pass an optional
 struct dma_attrs*.
 
-struct dma_attrs encapsulates a set of "dma attributes". For the
+struct dma_attrs encapsulates a set of "DMA attributes". For the
 definition of struct dma_attrs see linux/dma-attrs.h.
 
-The interpretation of dma attributes is architecture-specific, and
+The interpretation of DMA attributes is architecture-specific, and
 each attribute should be documented in Documentation/DMA-attributes.txt.
 
 If struct dma_attrs* is NULL, the semantics of each of these
···
 guarantee that the sync points become nops.
 
 Warning:  Handling non-consistent memory is a real pain.
 You should
-only ever use this API if you positively know your driver will be
+only use this API if you positively know your driver will be
 required to work on one of the rare (usually non-PCI) architectures
 that simply cannot make consistent memory.
···
 boundaries when doing this.
 
	int
-	dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+	dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
				    dma_addr_t device_addr, size_t size, int
				    flags)
 
-Declare region of memory to be handed out by dma_alloc_coherent when
+Declare region of memory to be handed out by dma_alloc_coherent() when
 it's asked for coherent memory for this device.
 
-bus_addr is the physical address to which the memory is currently
-assigned in the bus responding region (this will be used by the
-platform to perform the mapping).
+phys_addr is the CPU physical address to which the memory is currently
+assigned (this will be ioremapped so the CPU can access the region).
 
-device_addr is the physical address the device needs to be programmed
-with actually to address this memory (this will be handed out as the
+device_addr is the bus address the device needs to be programmed
+with to actually address this memory (this will be handed out as the
 dma_addr_t in dma_alloc_coherent()).
 
 size is the size of the area (must be multiples of PAGE_SIZE).
 
-flags can be or'd together and are:
+flags can be ORed together and are:
 
 DMA_MEMORY_MAP - request that the memory returned from
 dma_alloc_coherent() be directly writable.
 
 DMA_MEMORY_IO - request that the memory returned from
-dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.
+dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc.
 
 One or both of these flags must be present.
···
 Part III - Debug drivers use of the DMA-API
 -------------------------------------------
 
-The DMA-API as described above as some constraints. DMA addresses must be
+The DMA-API as described above has some constraints. DMA addresses must be
 released with the corresponding function with the same size for example. With
 the advent of hardware IOMMUs it becomes more and more important that drivers
 do not violate those constraints. In the worst case such a violation can
···
	void debug_dmap_mapping_error(struct device *dev, dma_addr_t dma_addr);
 
 dma-debug interface debug_dma_mapping_error() to debug drivers that fail
-to check dma mapping errors on addresses returned by dma_map_single() and
+to check DMA mapping errors on addresses returned by dma_map_single() and
 dma_map_page() interfaces. This interface clears a flag set by
 debug_dma_map_page() to indicate that dma_mapping_error() has been called by
 the driver. When driver does unmap, debug_dma_unmap() checks the flag and if
 this flag is still set, prints warning message that includes call trace that
 leads up to the unmap. This interface can be called from dma_mapping_error()
-routines to enable dma mapping error check debugging.
+routines to enable DMA mapping error check debugging.
+2 -2
Documentation/DMA-ISA-LPC.txt
···
	#include <asm/dma.h>
 
 The first is the generic DMA API used to convert virtual addresses to
-physical addresses (see Documentation/DMA-API.txt for details).
+bus addresses (see Documentation/DMA-API.txt for details).
 
 The second contains the routines specific to ISA DMA transfers. Since
 this is not present on all platforms make sure you construct your
···
 Part III - Address translation
 ------------------------------
 
-To translate the virtual address to a physical use the normal DMA
+To translate the virtual address to a bus address, use the normal DMA
 API. Do _not_ use isa_virt_to_phys() even though it does the same
 thing. The reason for this is that the function isa_virt_to_phys()
 will require a Kconfig dependency to ISA, not just ISA_DMA_API which
+100
Documentation/devicetree/bindings/pci/host-generic-pci.txt
···
+* Generic PCI host controller
+
+Firmware-initialised PCI host controllers and PCI emulations, such as the
+virtio-pci implementations found in kvmtool and other para-virtualised
+systems, do not require driver support for complexities such as regulator
+and clock management. In fact, the controller may not even require the
+configuration of a control interface by the operating system, instead
+presenting a set of fixed windows describing a subset of IO, Memory and
+Configuration Spaces.
+
+Such a controller can be described purely in terms of the standardized device
+tree bindings communicated in pci.txt:
+
+
+Properties of the host controller node:
+
+- compatible     : Must be "pci-host-cam-generic" or "pci-host-ecam-generic"
+                   depending on the layout of configuration space (CAM vs
+                   ECAM respectively).
+
+- device_type    : Must be "pci".
+
+- ranges         : As described in IEEE Std 1275-1994, but must provide
+                   at least a definition of non-prefetchable memory. One
+                   or both of prefetchable Memory and IO Space may also
+                   be provided.
+
+- bus-range      : Optional property (also described in IEEE Std 1275-1994)
+                   to indicate the range of bus numbers for this controller.
+                   If absent, defaults to <0 255> (i.e. all buses).
+
+- #address-cells : Must be 3.
+
+- #size-cells    : Must be 2.
+
+- reg            : The Configuration Space base address and size, as accessed
+                   from the parent bus.
+
+
+Properties of the /chosen node:
+
+- linux,pci-probe-only
+                 : Optional property which takes a single-cell argument.
+                   If '0', then Linux will assign devices in its usual manner,
+                   otherwise it will not try to assign devices and instead use
+                   them as they are configured already.
+
+Configuration Space is assumed to be memory-mapped (as opposed to being
+accessed via an ioport) and laid out with a direct correspondence to the
+geography of a PCI bus address by concatenating the various components to
+form an offset.
+
+For CAM, this 24-bit offset is:
+
+        cfg_offset(bus, device, function, register) =
+                   bus << 16 | device << 11 | function << 8 | register
+
+Whilst ECAM extends this by 4 bits to accommodate 4k of function space:
+
+        cfg_offset(bus, device, function, register) =
+                   bus << 20 | device << 15 | function << 12 | register
+
+Interrupt mapping is exactly as described in `Open Firmware Recommended
+Practice: Interrupt Mapping' and requires the following properties:
+
+- #interrupt-cells   : Must be 1
+
+- interrupt-map      : <see aforementioned specification>
+
+- interrupt-map-mask : <see aforementioned specification>
+
+
+Example:
+
+pci {
+    compatible = "pci-host-cam-generic";
+    device_type = "pci";
+    #address-cells = <3>;
+    #size-cells = <2>;
+    bus-range = <0x0 0x1>;
+
+    // CPU_PHYSICAL(2)  SIZE(2)
+    reg = <0x0 0x40000000  0x0 0x1000000>;
+
+    // BUS_ADDRESS(3)  CPU_PHYSICAL(2)  SIZE(2)
+    ranges = <0x01000000 0x0 0x01000000  0x0 0x01000000  0x0 0x00010000>,
+             <0x02000000 0x0 0x41000000  0x0 0x41000000  0x0 0x3f000000>;
+
+
+    #interrupt-cells = <0x1>;
+
+    // PCI_DEVICE(3)  INT#(1)  CONTROLLER(PHANDLE)  CONTROLLER_DATA(3)
+    interrupt-map = <   0x0 0x0 0x0  0x1  &gic  0x0 0x4 0x1
+                      0x800 0x0 0x0  0x1  &gic  0x0 0x5 0x1
+                     0x1000 0x0 0x0  0x1  &gic  0x0 0x6 0x1
+                     0x1800 0x0 0x0  0x1  &gic  0x0 0x7 0x1>;
+
+    // PCI_DEVICE(3)  INT#(1)
+    interrupt-map-mask = <0xf800 0x0 0x0  0x7>;
+}
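The CAM and ECAM offset formulas quoted in this binding are simple shift-and-OR compositions, so they are easy to sanity-check in isolation. A standalone sketch (the function names are ours, not part of the binding):

```c
#include <stdint.h>

/*
 * CAM: 24-bit offset — bus:8, device:5, function:3, register:8.
 * ECAM: 28-bit offset — same geography, but 12 register bits, giving
 * each function a 4k configuration space.
 */
static uint32_t cam_offset(uint32_t bus, uint32_t dev,
			   uint32_t fn, uint32_t reg)
{
	return bus << 16 | dev << 11 | fn << 8 | reg;
}

static uint32_t ecam_offset(uint32_t bus, uint32_t dev,
			    uint32_t fn, uint32_t reg)
{
	return bus << 20 | dev << 15 | fn << 12 | reg;
}
```

For example, bus 1 / device 2 / function 3 / register 4 lands at 0x11304 under CAM and 0x113004 under ECAM.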
+66
Documentation/devicetree/bindings/pci/pci-rcar-gen2.txt
···
+Renesas AHB to PCI bridge
+-------------------------
+
+This is the bridge used internally to connect the USB controllers to the
+AHB. There is one bridge instance per USB port connected to the internal
+OHCI and EHCI controllers.
+
+Required properties:
+- compatible: "renesas,pci-r8a7790" for the R8A7790 SoC;
+              "renesas,pci-r8a7791" for the R8A7791 SoC.
+- reg: A list of physical regions to access the device: the first is
+       the operational registers for the OHCI/EHCI controllers and the
+       second is for the bridge configuration and control registers.
+- interrupts: interrupt for the device.
+- clocks: The reference to the device clock.
+- bus-range: The PCI bus number range; as this is a single bus, the range
+             should be specified as the same value twice.
+- #address-cells: must be 3.
+- #size-cells: must be 2.
+- #interrupt-cells: must be 1.
+- interrupt-map: standard property used to define the mapping of the PCI
+                 interrupts to the GIC interrupts.
+- interrupt-map-mask: standard property that helps to define the interrupt
+                      mapping.
+
+Example SoC configuration:
+
+	pci0: pci@ee090000 {
+		compatible = "renesas,pci-r8a7790";
+		clocks = <&mstp7_clks R8A7790_CLK_EHCI>;
+		reg = <0x0 0xee090000 0x0 0xc00>,
+		      <0x0 0xee080000 0x0 0x1100>;
+		interrupts = <0 108 IRQ_TYPE_LEVEL_HIGH>;
+		status = "disabled";
+
+		bus-range = <0 0>;
+		#address-cells = <3>;
+		#size-cells = <2>;
+		#interrupt-cells = <1>;
+		interrupt-map-mask = <0xff00 0 0 0x7>;
+		interrupt-map = <0x0000 0 0 1 &gic 0 108 IRQ_TYPE_LEVEL_HIGH
+				 0x0800 0 0 1 &gic 0 108 IRQ_TYPE_LEVEL_HIGH
+				 0x1000 0 0 2 &gic 0 108 IRQ_TYPE_LEVEL_HIGH>;
+
+		pci@0,1 {
+			reg = <0x800 0 0 0 0>;
+			device_type = "pci";
+			phys = <&usbphy 0 0>;
+			phy-names = "usb";
+		};
+
+		pci@0,2 {
+			reg = <0x1000 0 0 0 0>;
+			device_type = "pci";
+			phys = <&usbphy 0 0>;
+			phy-names = "usb";
+		};
+	};
+
+Example board setup:
+
+&pci0 {
+	status = "okay";
+	pinctrl-0 = <&usb0_pins>;
+	pinctrl-names = "default";
+};
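The 0x0800 and 0x1000 unit addresses in the example's interrupt-map come from the Open Firmware PCI address encoding, where the first (phys.hi) cell carries bus << 16 | device << 11 | function << 8, and interrupt-map-mask picks out the bits that must match a map entry. A standalone sketch (helper names are ours):

```c
#include <stdint.h>

/* phys.hi cell of an OF PCI child address: bbbbbbbb dddddfff 00000000 */
static uint32_t phys_hi(uint32_t bus, uint32_t dev, uint32_t fn)
{
	return bus << 16 | dev << 11 | fn << 8;
}

/* interrupt-map lookup ANDs the unit address with interrupt-map-mask
 * before comparing it against each map entry's child address. */
static uint32_t masked(uint32_t phys_hi_cell, uint32_t mask)
{
	return phys_hi_cell & mask;
}
```

Device 1 yields 0x0800 and device 2 yields 0x1000, matching the `pci@0,1` and `pci@0,2` nodes above; the 0xff00 mask keeps exactly those device/function bits.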
+47
Documentation/devicetree/bindings/pci/rcar-pci.txt
···
+* Renesas RCar PCIe interface
+
+Required properties:
+- compatible: should contain one of the following
+	"renesas,pcie-r8a7779", "renesas,pcie-r8a7790", "renesas,pcie-r8a7791"
+- reg: base address and length of the pcie controller registers.
+- #address-cells: set to <3>
+- #size-cells: set to <2>
+- bus-range: PCI bus numbers covered
+- device_type: set to "pci"
+- ranges: ranges for the PCI memory and I/O regions.
+- dma-ranges: ranges for the inbound memory regions.
+- interrupts: two interrupt sources for MSI interrupts, followed by interrupt
+	source for hardware related interrupts (e.g. link speed change).
+- #interrupt-cells: set to <1>
+- interrupt-map-mask and interrupt-map: standard PCI properties
+	to define the mapping of the PCIe interface to interrupt
+	numbers.
+- clocks: from common clock binding: clock specifiers for the PCIe controller
+	and PCIe bus clocks.
+- clock-names: from common clock binding: should be "pcie" and "pcie_bus".
+
+Example:
+
+SoC specific DT Entry:
+
+	pcie: pcie@fe000000 {
+		compatible = "renesas,pcie-r8a7791";
+		reg = <0 0xfe000000 0 0x80000>;
+		#address-cells = <3>;
+		#size-cells = <2>;
+		bus-range = <0x00 0xff>;
+		device_type = "pci";
+		ranges = <0x01000000 0 0x00000000 0 0xfe100000 0 0x00100000
+			  0x02000000 0 0xfe200000 0 0xfe200000 0 0x00200000
+			  0x02000000 0 0x30000000 0 0x30000000 0 0x08000000
+			  0x42000000 0 0x38000000 0 0x38000000 0 0x08000000>;
+		dma-ranges = <0x42000000 0 0x40000000 0 0x40000000 0 0x40000000
+			      0x42000000 2 0x00000000 2 0x00000000 0 0x40000000>;
+		interrupts = <0 116 4>, <0 117 4>, <0 118 4>;
+		#interrupt-cells = <1>;
+		interrupt-map-mask = <0 0 0 0>;
+		interrupt-map = <0 0 0 0 &gic 0 116 4>;
+		clocks = <&mstp3_clks R8A7791_CLK_PCIE>, <&pcie_bus_clk>;
+		clock-names = "pcie", "pcie_bus";
+		status = "disabled";
+	};
+9
MAINTAINERS
···
 F:	drivers/pci/
 F:	include/linux/pci*
 F:	arch/x86/pci/
+F:	arch/x86/kernel/quirks.c
 
 PCI DRIVER FOR IMX6
 M:	Richard Zhu <r65037@freescale.com>
···
 L:	linux-pci@vger.kernel.org
 S:	Maintained
 F:	drivers/pci/host/*designware*
+
+PCI DRIVER FOR GENERIC OF HOSTS
+M:	Will Deacon <will.deacon@arm.com>
+L:	linux-pci@vger.kernel.org
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S:	Maintained
+F:	Documentation/devicetree/bindings/pci/host-generic-pci.txt
+F:	drivers/pci/host/pci-host-generic.c
 
 PCMCIA SUBSYSTEM
 P:	Linux PCMCIA Team
-5
arch/alpha/include/asm/pci.h
···
 
 extern void pcibios_set_master(struct pci_dev *dev);
 
-extern inline void pcibios_penalize_isa_irq(int irq, int active)
-{
-	/* We don't do dynamic PCI IRQ allocation */
-}
-
 /* IOMMU controls. */
 
 /* The PCI address space does not equal the physical memory address space.
-5
arch/arm/include/asm/pci.h
···
 }
 #endif /* CONFIG_PCI_DOMAINS */
 
-static inline void pcibios_penalize_isa_irq(int irq, int active)
-{
-	/* We don't do dynamic PCI IRQ allocation */
-}
-
 /*
  * The PCI address space does equal the physical memory address space.
  * The networking and block device layers use this boolean for bounce
+12
arch/arm/kernel/bios32.c
···
	 */
		pci_bus_add_devices(bus);
	}
+
+	list_for_each_entry(sys, &head, node) {
+		struct pci_bus *bus = sys->bus;
+
+		/* Configure PCI Express settings */
+		if (bus && !pci_has_flag(PCI_PROBE_ONLY)) {
+			struct pci_bus *child;
+
+			list_for_each_entry(child, &bus->children, node)
+				pcie_bus_configure_settings(child);
+		}
+	}
 }
 
 #ifndef CONFIG_PCI_HOST_ITE8152
-5
arch/blackfin/include/asm/pci.h
···
 #define PCIBIOS_MIN_IO		0x00001000
 #define PCIBIOS_MIN_MEM		0x10000000
 
-static inline void pcibios_penalize_isa_irq(int irq)
-{
-	/* We don't do dynamic PCI IRQ allocation */
-}
-
 #endif				/* _ASM_BFIN_PCI_H */
-1
arch/cris/include/asm/pci.h
···
 struct pci_bus *pcibios_scan_root(int bus);
 
 void pcibios_set_master(struct pci_dev *dev);
-void pcibios_penalize_isa_irq(int irq);
 struct irq_routing_table *pcibios_get_irq_routing_table(void);
 int pcibios_set_irq_routing(struct pci_dev *dev, int pin, int irq);
-2
arch/frv/include/asm/pci.h
···
 
 extern void pcibios_set_master(struct pci_dev *dev);
 
-extern void pcibios_penalize_isa_irq(int irq);
-
 #ifdef CONFIG_MMU
 extern void *consistent_alloc(gfp_t gfp, size_t size, dma_addr_t *dma_handle);
 extern void consistent_free(void *vaddr);
-4
arch/frv/mb93090-mb00/pci-irq.c
···
	}
 }
 
-void __init pcibios_penalize_isa_irq(int irq)
-{
-}
-
 void pcibios_enable_irq(struct pci_dev *dev)
 {
	pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
-6
arch/ia64/include/asm/pci.h
···
 extern unsigned long ia64_max_iommu_merge_mask;
 #define PCI_DMA_BUS_IS_PHYS	(ia64_max_iommu_merge_mask == ~0UL)
 
-static inline void
-pcibios_penalize_isa_irq (int irq, int active)
-{
-	/* We don't do dynamic PCI IRQ allocation */
-}
-
 #include <asm-generic/pci-dma-compat.h>
 
 #ifdef CONFIG_PCI
+1 -3
arch/ia64/pci/fixup.c
···
	 * type BRIDGE, or CARDBUS. Host to PCI controllers use
	 * PCI header type NORMAL.
	 */
-	if (bridge
-	    &&((bridge->hdr_type == PCI_HEADER_TYPE_BRIDGE)
-	       ||(bridge->hdr_type == PCI_HEADER_TYPE_CARDBUS))) {
+	if (bridge && (pci_is_bridge(bridge))) {
		pci_read_config_word(bridge, PCI_BRIDGE_CONTROL,
				     &config);
		if (!(config & PCI_BRIDGE_CTL_VGA))
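Several hunks in this series replace the open-coded header-type comparison with pci_is_bridge(). The condition being factored out is just a test against the two bridge header types; a standalone sketch of that check (the constants mirror include/uapi/linux/pci_regs.h, and hdr_type_is_bridge() is our stand-in, not the kernel helper):

```c
#include <stdbool.h>

/* PCI configuration header types, as in pci_regs.h */
#define PCI_HEADER_TYPE_NORMAL	0
#define PCI_HEADER_TYPE_BRIDGE	1
#define PCI_HEADER_TYPE_CARDBUS	2

/* A device is a bridge iff its header type is BRIDGE or CARDBUS. */
static bool hdr_type_is_bridge(unsigned char hdr_type)
{
	return hdr_type == PCI_HEADER_TYPE_BRIDGE ||
	       hdr_type == PCI_HEADER_TYPE_CARDBUS;
}
```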
-5
arch/microblaze/include/asm/pci.h
···
  */
 #define pcibios_assign_all_busses()	0
 
-static inline void pcibios_penalize_isa_irq(int irq, int active)
-{
-	/* We don't do dynamic PCI IRQ allocation */
-}
-
 #ifdef CONFIG_PCI
 extern void set_pci_dma_ops(struct dma_map_ops *dma_ops);
 extern struct dma_map_ops *get_pci_dma_ops(void);
-20
arch/microblaze/pci/pci-common.c
···
	return NULL;
 }
 
-static ssize_t pci_show_devspec(struct device *dev,
-		struct device_attribute *attr, char *buf)
-{
-	struct pci_dev *pdev;
-	struct device_node *np;
-
-	pdev = to_pci_dev(dev);
-	np = pci_device_to_OF_node(pdev);
-	if (np == NULL || np->full_name == NULL)
-		return 0;
-	return sprintf(buf, "%s", np->full_name);
-}
-static DEVICE_ATTR(devspec, S_IRUGO, pci_show_devspec, NULL);
-
-/* Add sysfs properties */
-int pcibios_add_platform_entries(struct pci_dev *pdev)
-{
-	return device_create_file(&pdev->dev, &dev_attr_devspec);
-}
-
 void pcibios_set_master(struct pci_dev *dev)
 {
	/* No special bus mastering setup handling */
-5
arch/mips/include/asm/pci.h
···
 
 extern void pcibios_set_master(struct pci_dev *dev);
 
-static inline void pcibios_penalize_isa_irq(int irq, int active)
-{
-	/* We don't do dynamic PCI IRQ allocation */
-}
-
 #define HAVE_PCI_MMAP
 
 extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
-1
arch/mn10300/include/asm/pci.h
···
 #define PCIBIOS_MIN_MEM		0xB8000000
 
 void pcibios_set_master(struct pci_dev *dev);
-void pcibios_penalize_isa_irq(int irq);
 
 /* Dynamic DMA mapping stuff.
  * i386 has everything mapped statically.
-4
arch/mn10300/unit-asb2305/pci-irq.c
···
	}
 }
 
-void __init pcibios_penalize_isa_irq(int irq)
-{
-}
-
 void pcibios_enable_irq(struct pci_dev *dev)
 {
	pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
-5
arch/parisc/include/asm/pci.h
···
 }
 #endif
 
-static inline void pcibios_penalize_isa_irq(int irq, int active)
-{
-	/* We don't need to penalize isa irq's */
-}
-
 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
 {
	return channel ? 15 : 14;
-5
arch/powerpc/include/asm/pci.h
···
 #define pcibios_assign_all_busses() \
	(pci_has_flag(PCI_REASSIGN_ALL_BUS))
 
-static inline void pcibios_penalize_isa_irq(int irq, int active)
-{
-	/* We don't do dynamic PCI IRQ allocation */
-}
-
 #define HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ
 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
 {
-20
arch/powerpc/kernel/pci-common.c
···
	return NULL;
 }
 
-static ssize_t pci_show_devspec(struct device *dev,
-		struct device_attribute *attr, char *buf)
-{
-	struct pci_dev *pdev;
-	struct device_node *np;
-
-	pdev = to_pci_dev (dev);
-	np = pci_device_to_OF_node(pdev);
-	if (np == NULL || np->full_name == NULL)
-		return 0;
-	return sprintf(buf, "%s", np->full_name);
-}
-static DEVICE_ATTR(devspec, S_IRUGO, pci_show_devspec, NULL);
-
-/* Add sysfs properties */
-int pcibios_add_platform_entries(struct pci_dev *pdev)
-{
-	return device_create_file(&pdev->dev, &dev_attr_devspec);
-}
-
 /*
  * Reads the interrupt pin to determine if interrupt is use by card.
  * If the interrupt is used, then gets the interrupt line from the
+1 -2
arch/powerpc/kernel/pci-hotplug.c
···
	max = bus->busn_res.start;
	for (pass = 0; pass < 2; pass++) {
		list_for_each_entry(dev, &bus->devices, bus_list) {
-			if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
-			    dev->hdr_type == PCI_HEADER_TYPE_CARDBUS)
+			if (pci_is_bridge(dev))
				max = pci_scan_bridge(bus, dev,
						      max, pass);
		}
+1 -2
arch/powerpc/kernel/pci_of_scan.c
···
 
	/* Now scan child busses */
	list_for_each_entry(dev, &bus->devices, bus_list) {
-		if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
-		    dev->hdr_type == PCI_HEADER_TYPE_CARDBUS) {
+		if (pci_is_bridge(dev)) {
			of_scan_pci_bridge(dev);
		}
	}
+2 -4
arch/s390/include/asm/pci.h
···
	return (zdev->fh & (1UL << 31)) ? true : false;
 }
 
+extern const struct attribute_group *zpci_attr_groups[];
+
 /* -----------------------------------------------------------------------------
   Prototypes
 ----------------------------------------------------------------------------- */
···
 /* Helpers */
 struct zpci_dev *get_zdev(struct pci_dev *);
 struct zpci_dev *get_zdev_by_fid(u32);
-
-/* sysfs */
-int zpci_sysfs_add_device(struct device *);
-void zpci_sysfs_remove_device(struct device *);
 
 /* DMA */
 int zpci_dma_init(void);
+1 -5
arch/s390/pci/pci.c
···
	}
 }
 
-int pcibios_add_platform_entries(struct pci_dev *pdev)
-{
-	return zpci_sysfs_add_device(&pdev->dev);
-}
-
 static int __init zpci_irq_init(void)
 {
	int rc;
···
	int i;
 
	zdev->pdev = pdev;
+	pdev->dev.groups = zpci_attr_groups;
	zpci_map_resources(zdev);
 
	for (i = 0; i < PCI_BAR_COUNT; i++) {
+13 -31
arch/s390/pci/pci_sysfs.c
···
 }
 static DEVICE_ATTR(recover, S_IWUSR, NULL, store_recover);
 
-static struct device_attribute *zpci_dev_attrs[] = {
-	&dev_attr_function_id,
-	&dev_attr_function_handle,
-	&dev_attr_pchid,
-	&dev_attr_pfgid,
-	&dev_attr_recover,
+static struct attribute *zpci_dev_attrs[] = {
+	&dev_attr_function_id.attr,
+	&dev_attr_function_handle.attr,
+	&dev_attr_pchid.attr,
+	&dev_attr_pfgid.attr,
+	&dev_attr_recover.attr,
	NULL,
 };
-
-int zpci_sysfs_add_device(struct device *dev)
-{
-	int i, rc = 0;
-
-	for (i = 0; zpci_dev_attrs[i]; i++) {
-		rc = device_create_file(dev, zpci_dev_attrs[i]);
-		if (rc)
-			goto error;
-	}
-	return 0;
-
-error:
-	while (--i >= 0)
-		device_remove_file(dev, zpci_dev_attrs[i]);
-	return rc;
-}
-
-void zpci_sysfs_remove_device(struct device *dev)
-{
-	int i;
-
-	for (i = 0; zpci_dev_attrs[i]; i++)
-		device_remove_file(dev, zpci_dev_attrs[i]);
-}
+static struct attribute_group zpci_attr_group = {
+	.attrs = zpci_dev_attrs,
+};
+const struct attribute_group *zpci_attr_groups[] = {
+	&zpci_attr_group,
+	NULL,
+};
+15 -3
arch/sh/drivers/pci/fixups-dreamcast.c
···
 static void gapspci_fixup_resources(struct pci_dev *dev)
 {
	struct pci_channel *p = dev->sysdata;
+	struct resource res;
+	struct pci_bus_region region;
 
	printk(KERN_NOTICE "PCI: Fixing up device %s\n", pci_name(dev));
···
	/*
	 * Redirect dma memory allocations to special memory window.
+	 *
+	 * If this GAPSPCI region were mapped by a BAR, the CPU
+	 * phys_addr_t would be pci_resource_start(), and the bus
+	 * address would be pci_bus_address(pci_resource_start()).
+	 * But apparently there's no BAR mapping it, so we just
+	 * "know" its CPU address is GAPSPCI_DMA_BASE.
	 */
+	res.start = GAPSPCI_DMA_BASE;
+	res.end = GAPSPCI_DMA_BASE + GAPSPCI_DMA_SIZE - 1;
+	res.flags = IORESOURCE_MEM;
+	pcibios_resource_to_bus(dev->bus, &region, &res);
	BUG_ON(!dma_declare_coherent_memory(&dev->dev,
-					    GAPSPCI_DMA_BASE,
-					    GAPSPCI_DMA_BASE,
-					    GAPSPCI_DMA_SIZE,
+					    res.start,
+					    region.start,
+					    resource_size(&res),
					    DMA_MEMORY_MAP |
					    DMA_MEMORY_EXCLUSIVE));
		break;
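The fixup above converts a CPU physical resource into a bus address with pcibios_resource_to_bus(). Conceptually, that translation subtracts the host bridge window's CPU-to-bus offset; here is a standalone sketch under that assumption (struct window and the addresses are made up for illustration):

```c
#include <stdint.h>

/*
 * A host bridge window maps a CPU physical range onto a bus address
 * range at a fixed offset. The bus address of a CPU address inside
 * the window is the CPU address minus (cpu_start - bus_start).
 */
struct window {
	uint64_t cpu_start;	/* CPU physical base of the window */
	uint64_t bus_start;	/* bus address base of the window */
};

static uint64_t cpu_to_bus(const struct window *w, uint64_t cpu_addr)
{
	return cpu_addr - (w->cpu_start - w->bus_start);
}
```

When the window is 1:1 (cpu_start == bus_start), as on many platforms, the translation is the identity, which is why code like the original dreamcast fixup could pass the same value for both addresses and still work.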
-5
arch/sh/include/asm/pci.h
··· 70 70 enum pci_mmap_state mmap_state, int write_combine); 71 71 extern void pcibios_set_master(struct pci_dev *dev); 72 72 73 - static inline void pcibios_penalize_isa_irq(int irq, int active) 74 - { 75 - /* We don't do dynamic PCI IRQ allocation */ 76 - } 77 - 78 73 /* Dynamic DMA mapping stuff. 79 74 * SuperH has everything mapped statically like x86. 80 75 */
-5
arch/sparc/include/asm/pci_32.h
··· 16 16 17 17 #define PCI_IRQ_NONE 0xffffffff 18 18 19 - static inline void pcibios_penalize_isa_irq(int irq, int active) 20 - { 21 - /* We don't do dynamic PCI IRQ allocation */ 22 - } 23 - 24 19 /* Dynamic DMA mapping stuff. 25 20 */ 26 21 #define PCI_DMA_BUS_IS_PHYS (0)
-5
arch/sparc/include/asm/pci_64.h
··· 16 16 17 17 #define PCI_IRQ_NONE 0xffffffff 18 18 19 - static inline void pcibios_penalize_isa_irq(int irq, int active) 20 - { 21 - /* We don't do dynamic PCI IRQ allocation */ 22 - } 23 - 24 19 /* The PCI address space does not equal the physical memory 25 20 * address space. The networking and block device layers use 26 21 * this boolean for bounce buffer decisions.
+1 -2
arch/sparc/kernel/pci.c
··· 543 543 printk("PCI: dev header type: %x\n", 544 544 dev->hdr_type); 545 545 546 - if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE || 547 - dev->hdr_type == PCI_HEADER_TYPE_CARDBUS) 546 + if (pci_is_bridge(dev)) 548 547 of_scan_pci_bridge(pbm, child, dev); 549 548 } 550 549 }
-5
arch/unicore32/include/asm/pci.h
··· 18 18 #include <asm-generic/pci.h> 19 19 #include <mach/hardware.h> /* for PCIBIOS_MIN_* */ 20 20 21 - static inline void pcibios_penalize_isa_irq(int irq, int active) 22 - { 23 - /* We don't do dynamic PCI IRQ allocation */ 24 - } 25 - 26 21 #ifdef CONFIG_PCI 27 22 static inline void pci_dma_burst_advice(struct pci_dev *pdev, 28 23 enum pci_dma_burst_strategy *strat,
-1
arch/x86/include/asm/pci.h
··· 68 68 void pcibios_scan_root(int bus); 69 69 70 70 void pcibios_set_master(struct pci_dev *dev); 71 - void pcibios_penalize_isa_irq(int irq, int active); 72 71 struct irq_routing_table *pcibios_get_irq_routing_table(void); 73 72 int pcibios_set_irq_routing(struct pci_dev *dev, int pin, int irq); 74 73
+31 -28
arch/x86/kernel/aperture_64.c
··· 10 10 * 11 11 * Copyright 2002 Andi Kleen, SuSE Labs. 12 12 */ 13 + #define pr_fmt(fmt) "AGP: " fmt 14 + 13 15 #include <linux/kernel.h> 14 16 #include <linux/types.h> 15 17 #include <linux/init.h> ··· 77 75 addr = memblock_find_in_range(GART_MIN_ADDR, GART_MAX_ADDR, 78 76 aper_size, aper_size); 79 77 if (!addr) { 80 - printk(KERN_ERR 81 - "Cannot allocate aperture memory hole (%lx,%uK)\n", 82 - addr, aper_size>>10); 78 + pr_err("Cannot allocate aperture memory hole [mem %#010lx-%#010lx] (%uKB)\n", 79 + addr, addr + aper_size - 1, aper_size >> 10); 83 80 return 0; 84 81 } 85 82 memblock_reserve(addr, aper_size); 86 - printk(KERN_INFO "Mapping aperture over %d KB of RAM @ %lx\n", 87 - aper_size >> 10, addr); 83 + pr_info("Mapping aperture over RAM [mem %#010lx-%#010lx] (%uKB)\n", 84 + addr, addr + aper_size - 1, aper_size >> 10); 88 85 register_nosave_region(addr >> PAGE_SHIFT, 89 86 (addr+aper_size) >> PAGE_SHIFT); 90 87 ··· 127 126 u64 aper; 128 127 u32 old_order; 129 128 130 - printk(KERN_INFO "AGP bridge at %02x:%02x:%02x\n", bus, slot, func); 129 + pr_info("pci 0000:%02x:%02x:%02x: AGP bridge\n", bus, slot, func); 131 130 apsizereg = read_pci_config_16(bus, slot, func, cap + 0x14); 132 131 if (apsizereg == 0xffffffff) { 133 - printk(KERN_ERR "APSIZE in AGP bridge unreadable\n"); 132 + pr_err("pci 0000:%02x:%02x.%d: APSIZE unreadable\n", 133 + bus, slot, func); 134 134 return 0; 135 135 } 136 136 ··· 155 153 * On some sick chips, APSIZE is 0. It means it wants 4G 156 154 * so let double check that order, and lets trust AMD NB settings: 157 155 */ 158 - printk(KERN_INFO "Aperture from AGP @ %Lx old size %u MB\n", 159 - aper, 32 << old_order); 156 + pr_info("pci 0000:%02x:%02x.%d: AGP aperture [bus addr %#010Lx-%#010Lx] (old size %uMB)\n", 157 + bus, slot, func, aper, aper + (32ULL << (old_order + 20)) - 1, 158 + 32 << old_order); 160 159 if (aper + (32ULL<<(20 + *order)) > 0x100000000ULL) { 161 - printk(KERN_INFO "Aperture size %u MB (APSIZE %x) is not right, using settings from NB\n", 162 - 32 << *order, apsizereg); 160 + pr_info("pci 0000:%02x:%02x.%d: AGP aperture size %uMB (APSIZE %#x) is not right, using settings from NB\n", 161 + bus, slot, func, 32 << *order, apsizereg); 163 162 *order = old_order; 164 163 } 165 164 166 - printk(KERN_INFO "Aperture from AGP @ %Lx size %u MB (APSIZE %x)\n", 167 - aper, 32 << *order, apsizereg); 165 + pr_info("pci 0000:%02x:%02x.%d: AGP aperture [bus addr %#010Lx-%#010Lx] (%uMB, APSIZE %#x)\n", 166 + bus, slot, func, aper, aper + (32ULL << (*order + 20)) - 1, 167 + 32 << *order, apsizereg); 168 168 169 169 if (!aperture_valid(aper, (32*1024*1024) << *order, 32<<20)) 170 170 return 0; ··· 222 218 } 223 219 } 224 220 } 225 - printk(KERN_INFO "No AGP bridge found\n"); 221 + pr_info("No AGP bridge found\n"); 226 222 227 223 return 0; 228 224 } ··· 314 310 if (e820_any_mapped(aper_base, aper_base + aper_size, 315 311 E820_RAM)) { 316 312 /* reserve it, so we can reuse it in second kernel */ 317 - printk(KERN_INFO "update e820 for GART\n"); 313 + pr_info("e820: reserve [mem %#010Lx-%#010Lx] for GART\n", 314 + aper_base, aper_base + aper_size - 1); 318 315 e820_add_region(aper_base, aper_size, E820_RESERVED); 319 316 update_e820(); 320 317 } ··· 359 354 !early_pci_allowed()) 360 355 return -ENODEV; 361 356 362 - printk(KERN_INFO "Checking aperture...\n"); 357 + pr_info("Checking aperture...\n"); 363 358 364 359 if (!fallback_aper_force) 365 360 agp_aper_base = search_agp_bridge(&agp_aper_order, &valid_agp); ··· 400 395 aper_base = read_pci_config(bus, slot, 3, AMD64_GARTAPERTUREBASE) & 0x7fff; 401 396 aper_base <<= 25; 402 397 403 - printk(KERN_INFO "Node %d: aperture @ %Lx size %u MB\n", 404 - node, aper_base, aper_size >> 20); 398 + pr_info("Node %d: aperture [bus addr %#010Lx-%#010Lx] (%uMB)\n", 399 + node, aper_base, aper_base + aper_size - 1, 400 + aper_size >> 20); 405 401 node++; 406 402 407 403 if (!aperture_valid(aper_base, aper_size, 64<<20)) { ··· 413 407 if (!no_iommu && 414 408 max_pfn > MAX_DMA32_PFN && 415 409 !printed_gart_size_msg) { 416 - printk(KERN_ERR "you are using iommu with agp, but GART size is less than 64M\n"); 417 - printk(KERN_ERR "please increase GART size in your BIOS setup\n"); 418 - printk(KERN_ERR "if BIOS doesn't have that option, contact your HW vendor!\n"); 410 + pr_err("you are using iommu with agp, but GART size is less than 64MB\n"); 411 + pr_err("please increase GART size in your BIOS setup\n"); 412 + pr_err("if BIOS doesn't have that option, contact your HW vendor!\n"); 419 413 printed_gart_size_msg = 1; 420 414 } 421 415 } else { ··· 452 446 force_iommu || 453 447 valid_agp || 454 448 fallback_aper_force) { 455 - printk(KERN_INFO 456 - "Your BIOS doesn't leave a aperture memory hole\n" 457 - printk(KERN_INFO 458 - "Please enable the IOMMU option in the BIOS setup\n"); 459 - printk(KERN_INFO 460 - "This costs you %d MB of RAM\n", 461 - 32 << fallback_aper_order); 449 + pr_info("Your BIOS doesn't leave a aperture memory hole\n"); 450 + pr_info("Please enable the IOMMU option in the BIOS setup\n"); 451 + pr_info("This costs you %dMB of RAM\n", 452 + 32 << fallback_aper_order); 462 453 463 454 aper_order = fallback_aper_order; 464 455 aper_alloc = allocate_aperture();
+5 -1
arch/x86/pci/acpi.c
··· 489 489 } 490 490 491 491 node = acpi_get_node(device->handle); 492 - if (node == NUMA_NO_NODE) 492 + if (node == NUMA_NO_NODE) { 493 493 node = x86_pci_root_bus_node(busnum); 494 + if (node != 0 && node != NUMA_NO_NODE) 495 + dev_info(&device->dev, FW_BUG "no _PXM; falling back to node %d from hardware (may be inconsistent with ACPI node numbers)\n", 496 + node); 497 + } 494 498 495 499 if (node != NUMA_NO_NODE && !node_online(node)) 496 500 node = NUMA_NO_NODE;
+54 -29
arch/x86/pci/amd_bus.c
··· 11 11 12 12 #include "bus_numa.h" 13 13 14 - /* 15 - * This discovers the pcibus <-> node mapping on AMD K8. 16 - * also get peer root bus resource for io,mmio 17 - */ 14 + #define AMD_NB_F0_NODE_ID 0x60 15 + #define AMD_NB_F0_UNIT_ID 0x64 16 + #define AMD_NB_F1_CONFIG_MAP_REG 0xe0 18 17 19 - struct pci_hostbridge_probe { 18 + #define RANGE_NUM 16 19 + #define AMD_NB_F1_CONFIG_MAP_RANGES 4 20 + 21 + struct amd_hostbridge { 20 22 u32 bus; 21 23 u32 slot; 22 - u32 vendor; 23 24 u32 device; 24 25 }; 25 26 26 - static struct pci_hostbridge_probe pci_probes[] __initdata = { 27 - { 0, 0x18, PCI_VENDOR_ID_AMD, 0x1100 }, 28 - { 0, 0x18, PCI_VENDOR_ID_AMD, 0x1200 }, 29 - { 0xff, 0, PCI_VENDOR_ID_AMD, 0x1200 }, 30 - { 0, 0x18, PCI_VENDOR_ID_AMD, 0x1300 }, 27 + /* 28 + * IMPORTANT NOTE: 29 + * hb_probes[] and early_root_info_init() is in maintenance mode. 30 + * It only supports K8, Fam10h, Fam11h, and Fam15h_00h-0fh . 31 + * Future processor will rely on information in ACPI. 32 + */ 33 + static struct amd_hostbridge hb_probes[] __initdata = { 34 + { 0, 0x18, 0x1100 }, /* K8 */ 35 + { 0, 0x18, 0x1200 }, /* Family10h */ 36 + { 0xff, 0, 0x1200 }, /* Family10h */ 37 + { 0, 0x18, 0x1300 }, /* Family11h */ 38 + { 0, 0x18, 0x1600 }, /* Family15h */ 31 39 }; 32 - 33 - #define RANGE_NUM 16 34 40 35 41 static struct pci_root_info __init *find_pci_root_info(int node, int link) 36 42 { ··· 51 45 } 52 46 53 47 /** 54 - * early_fill_mp_bus_to_node() 48 + * early_root_info_init() 55 49 * called before pcibios_scan_root and pci_scan_bus 56 - * fills the mp_bus_to_cpumask array based according to the LDT Bus Number 57 - * Registers found in the K8 northbridge 50 + * fills the mp_bus_to_cpumask array based according 51 + * to the LDT Bus Number Registers found in the northbridge. 
58 52 */ 59 - static int __init early_fill_mp_bus_info(void) 53 + static int __init early_root_info_init(void) 60 54 { 61 55 int i; 62 56 unsigned bus; ··· 81 75 return -1; 82 76 83 77 found = false; 84 - for (i = 0; i < ARRAY_SIZE(pci_probes); i++) { 78 + for (i = 0; i < ARRAY_SIZE(hb_probes); i++) { 85 79 u32 id; 86 80 u16 device; 87 81 u16 vendor; 88 82 89 - bus = pci_probes[i].bus; 90 - slot = pci_probes[i].slot; 83 + bus = hb_probes[i].bus; 84 + slot = hb_probes[i].slot; 91 85 id = read_pci_config(bus, slot, 0, PCI_VENDOR_ID); 92 - 93 86 vendor = id & 0xffff; 94 87 device = (id>>16) & 0xffff; 95 - if (pci_probes[i].vendor == vendor && 96 - pci_probes[i].device == device) { 88 + 89 + if (vendor != PCI_VENDOR_ID_AMD) 90 + continue; 91 + 92 + if (hb_probes[i].device == device) { 97 93 found = true; 98 94 break; 99 95 } ··· 104 96 if (!found) 105 97 return 0; 106 98 107 - for (i = 0; i < 4; i++) { 99 + /* 100 + * We should learn topology and routing information from _PXM and 101 + * _CRS methods in the ACPI namespace. We extract node numbers 102 + * here to work around BIOSes that don't supply _PXM. 103 + */ 104 + for (i = 0; i < AMD_NB_F1_CONFIG_MAP_RANGES; i++) { 108 105 int min_bus; 109 106 int max_bus; 110 - reg = read_pci_config(bus, slot, 1, 0xe0 + (i << 2)); 107 + reg = read_pci_config(bus, slot, 1, 108 + AMD_NB_F1_CONFIG_MAP_REG + (i << 2)); 111 109 112 110 /* Check if that register is enabled for bus range */ 113 111 if ((reg & 7) != 3) ··· 127 113 info = alloc_pci_root_info(min_bus, max_bus, node, link); 128 114 } 129 115 116 + /* 117 + * The following code extracts routing information for use on old 118 + * systems where Linux doesn't automatically use host bridge _CRS 119 + * methods (or when the user specifies "pci=nocrs"). 120 + * 121 + * We only do this through Fam11h, because _CRS should be enough on 122 + * newer systems. 
123 + */ 124 + if (boot_cpu_data.x86 > 0x11) 125 + return 0; 126 + 130 127 /* get the default node and link for left over res */ 131 - reg = read_pci_config(bus, slot, 0, 0x60); 128 + reg = read_pci_config(bus, slot, 0, AMD_NB_F0_NODE_ID); 132 129 def_node = (reg >> 8) & 0x07; 133 - reg = read_pci_config(bus, slot, 0, 0x64); 130 + reg = read_pci_config(bus, slot, 0, AMD_NB_F0_UNIT_ID); 134 131 def_link = (reg >> 8) & 0x03; 135 132 136 133 memset(range, 0, sizeof(range)); ··· 388 363 int cpu; 389 364 390 365 /* assume all cpus from fam10h have IO ECS */ 391 - if (boot_cpu_data.x86 < 0x10) 366 + if (boot_cpu_data.x86 < 0x10) 392 367 return 0; 393 368 394 369 /* Try the PCI method first. */ ··· 412 387 if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) 413 388 return 0; 414 389 415 - early_fill_mp_bus_info(); 390 + early_root_info_init(); 416 391 pci_io_ecs_init(); 417 392 418 393 return 0;
+2 -2
arch/x86/pci/broadcom_bus.c
··· 60 60 word1 = read_pci_config_16(bus, slot, func, 0xc4); 61 61 word2 = read_pci_config_16(bus, slot, func, 0xc6); 62 62 if (word1 != word2) { 63 - res.start = (word1 << 16) | 0x0000; 64 - res.end = (word2 << 16) | 0xffff; 63 + res.start = ((resource_size_t) word1 << 16) | 0x0000; 64 + res.end = ((resource_size_t) word2 << 16) | 0xffff; 65 65 res.flags = IORESOURCE_MEM | IORESOURCE_PREFETCH; 66 66 update_res(info, res.start, res.end, res.flags, 0); 67 67 }
+15 -3
arch/x86/pci/fixup.c
··· 6 6 #include <linux/dmi.h> 7 7 #include <linux/pci.h> 8 8 #include <linux/vgaarb.h> 9 + #include <asm/hpet.h> 9 10 #include <asm/pci_x86.h> 10 11 11 12 static void pci_fixup_i450nx(struct pci_dev *d) ··· 338 337 * type BRIDGE, or CARDBUS. Host to PCI controllers use 339 338 * PCI header type NORMAL. 340 339 */ 341 - if (bridge 342 - && ((bridge->hdr_type == PCI_HEADER_TYPE_BRIDGE) 343 - || (bridge->hdr_type == PCI_HEADER_TYPE_CARDBUS))) { 340 + if (bridge && (pci_is_bridge(bridge))) { 344 341 pci_read_config_word(bridge, PCI_BRIDGE_CONTROL, 345 342 &config); 346 343 if (!(config & PCI_BRIDGE_CTL_VGA)) ··· 524 525 } 525 526 } 526 527 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_ATI, 0x4385, sb600_disable_hpet_bar); 528 + 529 + #ifdef CONFIG_HPET_TIMER 530 + static void sb600_hpet_quirk(struct pci_dev *dev) 531 + { 532 + struct resource *r = &dev->resource[1]; 533 + 534 + if (r->flags & IORESOURCE_MEM && r->start == hpet_address) { 535 + r->flags |= IORESOURCE_PCI_FIXED; 536 + dev_info(&dev->dev, "reg 0x14 contains HPET; making it immovable\n"); 537 + } 538 + } 539 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATI, 0x4385, sb600_hpet_quirk); 540 + #endif 527 541 528 542 /* 529 543 * Twinhead H12Y needs us to block out a region otherwise we map devices
+16 -11
arch/x86/pci/i386.c
··· 271 271 "BAR %d: reserving %pr (d=%d, p=%d)\n", 272 272 idx, r, disabled, pass); 273 273 if (pci_claim_resource(dev, idx) < 0) { 274 - /* We'll assign a new address later */ 275 - pcibios_save_fw_addr(dev, 276 - idx, r->start); 277 - r->end -= r->start; 278 - r->start = 0; 274 + if (r->flags & IORESOURCE_PCI_FIXED) { 275 + dev_info(&dev->dev, "BAR %d %pR is immovable\n", 276 + idx, r); 277 + } else { 278 + /* We'll assign a new address later */ 279 + pcibios_save_fw_addr(dev, 280 + idx, r->start); 281 + r->end -= r->start; 282 + r->start = 0; 283 + } 279 284 } 280 285 } 281 286 } ··· 361 356 return 0; 362 357 } 363 358 359 + /** 360 + * called in fs_initcall (one below subsys_initcall), 361 + * give a chance for motherboard reserve resources 362 + */ 363 + fs_initcall(pcibios_assign_resources); 364 + 364 365 void pcibios_resource_survey_bus(struct pci_bus *bus) 365 366 { 366 367 dev_printk(KERN_DEBUG, &bus->dev, "Allocating resources\n"); ··· 402 391 */ 403 392 ioapic_insert_resources(); 404 393 } 405 - 406 - /** 407 - * called in fs_initcall (one below subsys_initcall), 408 - * give a chance for motherboard reserve resources 409 - */ 410 - fs_initcall(pcibios_assign_resources); 411 394 412 395 static const struct vm_operations_struct pci_mmap_ops = { 413 396 .access = generic_access_phys,
-5
arch/xtensa/include/asm/pci.h
··· 22 22 23 23 extern struct pci_controller* pcibios_alloc_controller(void); 24 24 25 - static inline void pcibios_penalize_isa_irq(int irq) 26 - { 27 - /* We don't do dynamic PCI IRQ allocation */ 28 - } 29 - 30 25 /* Assume some values. (We should revise them, if necessary) */ 31 26 32 27 #define PCIBIOS_MIN_IO 0x2000
+5 -5
drivers/base/dma-coherent.c
··· 10 10 struct dma_coherent_mem { 11 11 void *virt_base; 12 12 dma_addr_t device_base; 13 - phys_addr_t pfn_base; 13 + unsigned long pfn_base; 14 14 int size; 15 15 int flags; 16 16 unsigned long *bitmap; 17 17 }; 18 18 19 - int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr, 19 + int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr, 20 20 dma_addr_t device_addr, size_t size, int flags) 21 21 { 22 22 void __iomem *mem_base = NULL; ··· 32 32 33 33 /* FIXME: this routine just ignores DMA_MEMORY_INCLUDES_CHILDREN */ 34 34 35 - mem_base = ioremap(bus_addr, size); 35 + mem_base = ioremap(phys_addr, size); 36 36 if (!mem_base) 37 37 goto out; 38 38 ··· 45 45 46 46 dev->dma_mem->virt_base = mem_base; 47 47 dev->dma_mem->device_base = device_addr; 48 - dev->dma_mem->pfn_base = PFN_DOWN(bus_addr); 48 + dev->dma_mem->pfn_base = PFN_DOWN(phys_addr); 49 49 dev->dma_mem->size = pages; 50 50 dev->dma_mem->flags = flags; 51 51 ··· 208 208 209 209 *ret = -ENXIO; 210 210 if (off < count && user_count <= count - off) { 211 - unsigned pfn = mem->pfn_base + start + off; 211 + unsigned long pfn = mem->pfn_base + start + off; 212 212 *ret = remap_pfn_range(vma, vma->vm_start, pfn, 213 213 user_count << PAGE_SHIFT, 214 214 vma->vm_page_prot);
+3 -3
drivers/base/dma-mapping.c
··· 175 175 /** 176 176 * dmam_declare_coherent_memory - Managed dma_declare_coherent_memory() 177 177 * @dev: Device to declare coherent memory for 178 - * @bus_addr: Bus address of coherent memory to be declared 178 + * @phys_addr: Physical address of coherent memory to be declared 179 179 * @device_addr: Device address of coherent memory to be declared 180 180 * @size: Size of coherent memory to be declared 181 181 * @flags: Flags ··· 185 185 * RETURNS: 186 186 * 0 on success, -errno on failure. 187 187 */ 188 - int dmam_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr, 188 + int dmam_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr, 189 189 dma_addr_t device_addr, size_t size, int flags) 190 190 { 191 191 void *res; ··· 195 195 if (!res) 196 196 return -ENOMEM; 197 197 198 - rc = dma_declare_coherent_memory(dev, bus_addr, device_addr, size, 198 + rc = dma_declare_coherent_memory(dev, phys_addr, device_addr, size, 199 199 flags); 200 200 if (rc == 0) 201 201 devres_add(dev, res);
+11
drivers/block/nvme-core.c
··· 2775 2775 return result; 2776 2776 } 2777 2777 2778 + static void nvme_reset_notify(struct pci_dev *pdev, bool prepare) 2779 + { 2780 + struct nvme_dev *dev = pci_get_drvdata(pdev); 2781 + 2782 + if (prepare) 2783 + nvme_dev_shutdown(dev); 2784 + else 2785 + nvme_dev_resume(dev); 2786 + } 2787 + 2778 2788 static void nvme_shutdown(struct pci_dev *pdev) 2779 2789 { 2780 2790 struct nvme_dev *dev = pci_get_drvdata(pdev); ··· 2849 2839 .link_reset = nvme_link_reset, 2850 2840 .slot_reset = nvme_slot_reset, 2851 2841 .resume = nvme_error_resume, 2842 + .reset_notify = nvme_reset_notify, 2852 2843 }; 2853 2844 2854 2845 /* Move to pci_ids.h later */
+1 -7
drivers/edac/i82875p_edac.c
··· 275 275 { 276 276 struct pci_dev *dev; 277 277 void __iomem *window; 278 - int err; 279 278 280 279 *ovrfl_pdev = NULL; 281 280 *ovrfl_window = NULL; ··· 292 293 if (dev == NULL) 293 294 return 1; 294 295 295 - err = pci_bus_add_device(dev); 296 - if (err) { 297 - i82875p_printk(KERN_ERR, 298 - "%s(): pci_bus_add_device() Failed\n", 299 - __func__); 300 - } 301 296 pci_bus_assign_resources(dev->bus); 297 + pci_bus_add_device(dev); 302 298 } 303 299 304 300 *ovrfl_pdev = dev;
+7 -7
drivers/iommu/exynos-iommu.c
··· 1011 1011 } 1012 1012 1013 1013 static struct iommu_ops exynos_iommu_ops = { 1014 - .domain_init = &exynos_iommu_domain_init, 1015 - .domain_destroy = &exynos_iommu_domain_destroy, 1016 - .attach_dev = &exynos_iommu_attach_device, 1017 - .detach_dev = &exynos_iommu_detach_device, 1018 - .map = &exynos_iommu_map, 1019 - .unmap = &exynos_iommu_unmap, 1020 - .iova_to_phys = &exynos_iommu_iova_to_phys, 1014 + .domain_init = exynos_iommu_domain_init, 1015 + .domain_destroy = exynos_iommu_domain_destroy, 1016 + .attach_dev = exynos_iommu_attach_device, 1017 + .detach_dev = exynos_iommu_detach_device, 1018 + .map = exynos_iommu_map, 1019 + .unmap = exynos_iommu_unmap, 1020 + .iova_to_phys = exynos_iommu_iova_to_phys, 1021 1021 .pgsize_bitmap = SECT_SIZE | LPAGE_SIZE | SPAGE_SIZE, 1022 1022 }; 1023 1023
+1 -1
drivers/misc/genwqe/card_utils.c
··· 718 718 int rc; 719 719 struct pci_dev *pci_dev = cd->pci_dev; 720 720 721 - rc = pci_enable_msi_block(pci_dev, count); 721 + rc = pci_enable_msi_exact(pci_dev, count); 722 722 if (rc == 0) 723 723 cd->flags |= GENWQE_FLAG_MSI_ENABLED; 724 724 return rc;
+4 -8
drivers/pci/access.c
··· 148 148 int pci_user_read_config_##size \ 149 149 (struct pci_dev *dev, int pos, type *val) \ 150 150 { \ 151 - int ret = 0; \ 151 + int ret = PCIBIOS_SUCCESSFUL; \ 152 152 u32 data = -1; \ 153 153 if (PCI_##size##_BAD) \ 154 154 return -EINVAL; \ ··· 159 159 pos, sizeof(type), &data); \ 160 160 raw_spin_unlock_irq(&pci_lock); \ 161 161 *val = (type)data; \ 162 - if (ret > 0) \ 163 - ret = -EINVAL; \ 164 - return ret; \ 162 + return pcibios_err_to_errno(ret); \ 165 163 } \ 166 164 EXPORT_SYMBOL_GPL(pci_user_read_config_##size); 167 165 ··· 168 170 int pci_user_write_config_##size \ 169 171 (struct pci_dev *dev, int pos, type val) \ 170 172 { \ 171 - int ret = -EIO; \ 173 + int ret = PCIBIOS_SUCCESSFUL; \ 172 174 if (PCI_##size##_BAD) \ 173 175 return -EINVAL; \ 174 176 raw_spin_lock_irq(&pci_lock); \ ··· 177 179 ret = dev->bus->ops->write(dev->bus, dev->devfn, \ 178 180 pos, sizeof(type), val); \ 179 181 raw_spin_unlock_irq(&pci_lock); \ 180 - if (ret > 0) \ 181 - ret = -EINVAL; \ 182 - return ret; \ 182 + return pcibios_err_to_errno(ret); \ 183 183 } \ 184 184 EXPORT_SYMBOL_GPL(pci_user_write_config_##size); 185 185
+2 -9
drivers/pci/bus.c
··· 13 13 #include <linux/errno.h> 14 14 #include <linux/ioport.h> 15 15 #include <linux/proc_fs.h> 16 - #include <linux/init.h> 17 16 #include <linux/slab.h> 18 17 19 18 #include "pci.h" ··· 235 236 * 236 237 * This adds add sysfs entries and start device drivers 237 238 */ 238 - int pci_bus_add_device(struct pci_dev *dev) 239 + void pci_bus_add_device(struct pci_dev *dev) 239 240 { 240 241 int retval; 241 242 ··· 252 253 WARN_ON(retval < 0); 253 254 254 255 dev->is_added = 1; 255 - 256 - return 0; 257 256 } 258 257 259 258 /** ··· 264 267 { 265 268 struct pci_dev *dev; 266 269 struct pci_bus *child; 267 - int retval; 268 270 269 271 list_for_each_entry(dev, &bus->devices, bus_list) { 270 272 /* Skip already-added devices */ 271 273 if (dev->is_added) 272 274 continue; 273 - retval = pci_bus_add_device(dev); 274 - if (retval) 275 - dev_err(&dev->dev, "Error adding device (%d)\n", 276 - retval); 275 + pci_bus_add_device(dev); 277 276 } 278 277 279 278 list_for_each_entry(dev, &bus->devices, bus_list) {
-1
drivers/pci/host-bridge.c
··· 3 3 */ 4 4 5 5 #include <linux/kernel.h> 6 - #include <linux/init.h> 7 6 #include <linux/pci.h> 8 7 #include <linux/module.h> 9 8
+13
drivers/pci/host/Kconfig
··· 33 33 There are 3 internal PCI controllers available with a single 34 34 built-in EHCI/OHCI host controller present on each one. 35 35 36 + config PCI_RCAR_GEN2_PCIE 37 + bool "Renesas R-Car PCIe controller" 38 + depends on ARCH_SHMOBILE || (ARM && COMPILE_TEST) 39 + help 40 + Say Y here if you want PCIe controller support on R-Car Gen2 SoCs. 41 + 42 + config PCI_HOST_GENERIC 43 + bool "Generic PCI host controller" 44 + depends on ARM && OF 45 + help 46 + Say Y here if you want to support a simple generic PCI host 47 + controller, such as the one emulated by kvmtool. 48 + 36 49 endmenu
+2
drivers/pci/host/Makefile
··· 4 4 obj-$(CONFIG_PCI_MVEBU) += pci-mvebu.o 5 5 obj-$(CONFIG_PCI_TEGRA) += pci-tegra.o 6 6 obj-$(CONFIG_PCI_RCAR_GEN2) += pci-rcar-gen2.o 7 + obj-$(CONFIG_PCI_RCAR_GEN2_PCIE) += pcie-rcar.o 8 + obj-$(CONFIG_PCI_HOST_GENERIC) += pci-host-generic.o
+4 -7
drivers/pci/host/pci-exynos.c
··· 415 415 { 416 416 struct pcie_port *pp = arg; 417 417 418 - dw_handle_msi_irq(pp); 419 - 420 - return IRQ_HANDLED; 418 + return dw_handle_msi_irq(pp); 421 419 } 422 420 423 421 static void exynos_pcie_msi_init(struct pcie_port *pp) ··· 509 511 .host_init = exynos_pcie_host_init, 510 512 }; 511 513 512 - static int add_pcie_port(struct pcie_port *pp, struct platform_device *pdev) 514 + static int __init add_pcie_port(struct pcie_port *pp, 515 + struct platform_device *pdev) 513 516 { 514 517 int ret; 515 518 ··· 567 568 568 569 exynos_pcie = devm_kzalloc(&pdev->dev, sizeof(*exynos_pcie), 569 570 GFP_KERNEL); 570 - if (!exynos_pcie) { 571 - dev_err(&pdev->dev, "no memory for exynos pcie\n"); 571 + if (!exynos_pcie) 572 572 return -ENOMEM; 573 - } 574 573 575 574 pp = &exynos_pcie->pp; 576 575
+388
drivers/pci/host/pci-host-generic.c
··· 1 + /* 2 + * Simple, generic PCI host controller driver targetting firmware-initialised 3 + * systems and virtual machines (e.g. the PCI emulation provided by kvmtool). 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 16 + * 17 + * Copyright (C) 2014 ARM Limited 18 + * 19 + * Author: Will Deacon <will.deacon@arm.com> 20 + */ 21 + 22 + #include <linux/kernel.h> 23 + #include <linux/module.h> 24 + #include <linux/of_address.h> 25 + #include <linux/of_pci.h> 26 + #include <linux/platform_device.h> 27 + 28 + struct gen_pci_cfg_bus_ops { 29 + u32 bus_shift; 30 + void __iomem *(*map_bus)(struct pci_bus *, unsigned int, int); 31 + }; 32 + 33 + struct gen_pci_cfg_windows { 34 + struct resource res; 35 + struct resource bus_range; 36 + void __iomem **win; 37 + 38 + const struct gen_pci_cfg_bus_ops *ops; 39 + }; 40 + 41 + struct gen_pci { 42 + struct pci_host_bridge host; 43 + struct gen_pci_cfg_windows cfg; 44 + struct list_head resources; 45 + }; 46 + 47 + static void __iomem *gen_pci_map_cfg_bus_cam(struct pci_bus *bus, 48 + unsigned int devfn, 49 + int where) 50 + { 51 + struct pci_sys_data *sys = bus->sysdata; 52 + struct gen_pci *pci = sys->private_data; 53 + resource_size_t idx = bus->number - pci->cfg.bus_range.start; 54 + 55 + return pci->cfg.win[idx] + ((devfn << 8) | where); 56 + } 57 + 58 + static struct gen_pci_cfg_bus_ops gen_pci_cfg_cam_bus_ops = { 59 + .bus_shift = 16, 60 + .map_bus = gen_pci_map_cfg_bus_cam, 61 + }; 62 + 63 + static void __iomem *gen_pci_map_cfg_bus_ecam(struct pci_bus *bus, 64 + unsigned int devfn, 65 + int where) 66 + { 67 + struct pci_sys_data *sys = bus->sysdata; 68 + struct gen_pci *pci = sys->private_data; 69 + resource_size_t idx = bus->number - pci->cfg.bus_range.start; 70 + 71 + return pci->cfg.win[idx] + ((devfn << 12) | where); 72 + } 73 + 74 + static struct gen_pci_cfg_bus_ops gen_pci_cfg_ecam_bus_ops = { 75 + .bus_shift = 20, 76 + .map_bus = gen_pci_map_cfg_bus_ecam, 77 + }; 78 + 79 + static int gen_pci_config_read(struct pci_bus *bus, unsigned int devfn, 80 + int where, int size, u32 *val) 81 + { 82 + void __iomem *addr; 83 + struct pci_sys_data *sys = bus->sysdata; 84 + struct gen_pci *pci = sys->private_data; 85 + 86 + addr = pci->cfg.ops->map_bus(bus, devfn, where); 87 + 88 + switch (size) { 89 + case 1: 90 + *val = readb(addr); 91 + break; 92 + case 2: 93 + *val = readw(addr); 94 + break; 95 + default: 96 + *val = readl(addr); 97 + } 98 + 99 + return PCIBIOS_SUCCESSFUL; 100 + } 101 + 102 + static int gen_pci_config_write(struct pci_bus *bus, unsigned int devfn, 103 + int where, int size, u32 val) 104 + { 105 + void __iomem *addr; 106 + struct pci_sys_data *sys = bus->sysdata; 107 + struct gen_pci *pci = sys->private_data; 108 + 109 + addr = pci->cfg.ops->map_bus(bus, devfn, where); 110 + 111 + switch (size) { 112 + case 1: 113 + writeb(val, addr); 114 + break; 115 + case 2: 116 + writew(val, addr); 117 + break; 118 + default: 119 + writel(val, addr); 120 + } 121 + 122 + return PCIBIOS_SUCCESSFUL; 123 + } 124 + 125 + static struct pci_ops gen_pci_ops = { 126 + .read = gen_pci_config_read, 127 + .write = gen_pci_config_write, 128 + }; 129 + 130 + static const struct of_device_id gen_pci_of_match[] = { 131 + { .compatible = "pci-host-cam-generic", 132 + .data = &gen_pci_cfg_cam_bus_ops }, 133 + 134 + { .compatible = "pci-host-ecam-generic", 135 + .data = &gen_pci_cfg_ecam_bus_ops }, 136 + 137 + { }, 138 + }; 139 + MODULE_DEVICE_TABLE(of, gen_pci_of_match); 140 + 141 + static int gen_pci_calc_io_offset(struct device *dev, 142 + struct of_pci_range *range, 143 + struct resource *res, 144 + resource_size_t *offset) 145 + { 146 + static atomic_t wins = ATOMIC_INIT(0); 147 + int err, idx, max_win; 148 + unsigned int window; 149 + 150 + if (!PAGE_ALIGNED(range->cpu_addr)) 151 + return -EINVAL; 152 + 153 + max_win = (IO_SPACE_LIMIT + 1) / SZ_64K; 154 + idx = atomic_inc_return(&wins); 155 + if (idx > max_win) 156 + return -ENOSPC; 157 + 158 + window = (idx - 1) * SZ_64K; 159 + err = pci_ioremap_io(window, range->cpu_addr); 160 + if (err) 161 + return err; 162 + 163 + of_pci_range_to_resource(range, dev->of_node, res); 164 + res->start = window; 165 + res->end = res->start + range->size - 1; 166 + *offset = window - range->pci_addr; 167 + return 0; 168 + } 169 + 170 + static int gen_pci_calc_mem_offset(struct device *dev, 171 + struct of_pci_range *range, 172 + struct resource *res, 173 + resource_size_t *offset) 174 + { 175 + of_pci_range_to_resource(range, dev->of_node, res); 176 + *offset = range->cpu_addr - range->pci_addr; 177 + return 0; 178 + } 179 + 180 + static void gen_pci_release_of_pci_ranges(struct gen_pci *pci) 181 + { 182 + struct pci_host_bridge_window *win; 183 + 184 + list_for_each_entry(win, &pci->resources, list) 185 + release_resource(win->res); 186 + 187 + pci_free_resource_list(&pci->resources); 188 + } 189 + 190 + static int gen_pci_parse_request_of_pci_ranges(struct gen_pci *pci) 191 + { 192 + struct of_pci_range range; 193 + struct of_pci_range_parser parser; 194 + int err, res_valid = 0; 195 + struct device *dev = pci->host.dev.parent; 196 + struct device_node *np = dev->of_node; 197 + 198 + if (of_pci_range_parser_init(&parser, np)) { 199 + dev_err(dev, "missing \"ranges\" property\n"); 200 + return -EINVAL; 201 + } 202 + 203 + for_each_of_pci_range(&parser, &range) { 204 + struct resource *parent, *res; 205 + resource_size_t offset; 206 + u32 restype = range.flags & IORESOURCE_TYPE_BITS; 207 + 208 + res = devm_kmalloc(dev, sizeof(*res), GFP_KERNEL); 209 + if (!res) { 210 + err = -ENOMEM; 211 + goto out_release_res; 212 + } 213 + 214 + switch (restype) { 215 + case IORESOURCE_IO: 216 + parent = &ioport_resource; 217 + err = gen_pci_calc_io_offset(dev, &range, res, &offset); 218 + break; 219 + case IORESOURCE_MEM: 220 + parent = &iomem_resource; 221 + err = gen_pci_calc_mem_offset(dev, &range, res, &offset); 222 + res_valid |= !(res->flags & IORESOURCE_PREFETCH || err); 223 + break; 224 + default: 225 + err = -EINVAL; 226 + continue; 227 + } 228 + 229 + if (err) { 230 + dev_warn(dev, 231 + "error %d: failed to add resource [type 0x%x, %lld bytes]\n", 232 + err, restype, range.size); 233 + continue; 234 + } 235 + 236 + err = request_resource(parent, res); 237 + if (err) 238 + goto out_release_res; 239 + 240 + pci_add_resource_offset(&pci->resources, res, offset); 241 + } 242 + 243 + if (!res_valid) { 244 + dev_err(dev, "non-prefetchable memory resource required\n"); 245 + err = -EINVAL; 246 + goto out_release_res; 247 + } 248 + 249 + return 0; 250 + 251 + out_release_res: 252 + gen_pci_release_of_pci_ranges(pci); 253 + return err; 254 + } 255 + 256 + static int gen_pci_parse_map_cfg_windows(struct gen_pci *pci) 257 + { 258 + int err; 259 + u8 bus_max; 260 + resource_size_t busn; 261 + struct resource *bus_range; 262 + struct device *dev = pci->host.dev.parent; 263 + struct device_node *np = dev->of_node; 264 + 265 + if (of_pci_parse_bus_range(np, &pci->cfg.bus_range)) 266 + pci->cfg.bus_range = (struct resource) { 267 + .name = np->name, 268 + .start = 0, 269 + .end = 0xff, 270 + .flags = IORESOURCE_BUS, 271 + }; 272 + 273 + err = of_address_to_resource(np, 0, &pci->cfg.res); 274 + if (err) { 275 + dev_err(dev, "missing \"reg\" property\n"); 276 + return err; 277 + } 278 + 279 + pci->cfg.win = devm_kcalloc(dev, resource_size(&pci->cfg.bus_range), 280 + sizeof(*pci->cfg.win), GFP_KERNEL); 281 + if (!pci->cfg.win) 282 + return -ENOMEM; 283 + 284 + /* Limit the bus-range to fit within reg */ 285 + bus_max = pci->cfg.bus_range.start + 286 + (resource_size(&pci->cfg.res) >> pci->cfg.ops->bus_shift) - 1; 287 + pci->cfg.bus_range.end = min_t(resource_size_t, pci->cfg.bus_range.end, 288 + bus_max); 289 + 290 + /* Map our Configuration Space windows */ 291 + if (!devm_request_mem_region(dev, pci->cfg.res.start, 292 + resource_size(&pci->cfg.res), 293 + "Configuration Space")) 294 + return -ENOMEM; 295 + 296 + bus_range = &pci->cfg.bus_range; 297 + for (busn = bus_range->start; busn <= bus_range->end; ++busn) { 298 + u32 idx = busn - bus_range->start; 299 + u32 sz = 1 << pci->cfg.ops->bus_shift; 300 + 301 + pci->cfg.win[idx] = devm_ioremap(dev, 302 + pci->cfg.res.start + busn * sz, 303 + sz); 304 + if (!pci->cfg.win[idx]) 305 + return -ENOMEM; 306 + } 307 + 308 + /* Register bus resource */ 309 + pci_add_resource(&pci->resources, bus_range); 310 + return 0; 311 + } 312 + 313 + static int gen_pci_setup(int nr, struct pci_sys_data *sys) 314 + { 315 + struct gen_pci *pci = sys->private_data; 316 + list_splice_init(&pci->resources, &sys->resources); 317 + return 1; 318 + } 319 + 320 + static int gen_pci_probe(struct platform_device *pdev) 321 + { 322 + int err; 323 + const char *type; 324 + const struct of_device_id *of_id; 325 + const int *prop; 326 + struct device *dev = &pdev->dev; 327 + struct device_node *np = dev->of_node; 328 + struct gen_pci *pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL); 329 + struct hw_pci hw = { 330 + .nr_controllers = 1, 331 + .private_data = (void **)&pci, 332 + .setup = gen_pci_setup, 333 + .map_irq = of_irq_parse_and_map_pci, 334 + .ops = &gen_pci_ops, 335 + }; 336 + 337 + if (!pci) 338 + return -ENOMEM; 339 + 340 + type = of_get_property(np, "device_type", NULL); 341 + if (!type || strcmp(type, "pci")) { 342 + dev_err(dev, "invalid \"device_type\" %s\n", type); 343 + return -EINVAL; 344 + } 345 + 346 + prop = of_get_property(of_chosen, "linux,pci-probe-only", NULL); 347 + if (prop) { 348 + if (*prop) 349 + pci_add_flags(PCI_PROBE_ONLY); 350 + else 351 + pci_clear_flags(PCI_PROBE_ONLY); 352 + } 353 + 354 + of_id = of_match_node(gen_pci_of_match, np); 355 + pci->cfg.ops = of_id->data; 356 + pci->host.dev.parent = dev; 357 + INIT_LIST_HEAD(&pci->host.windows); 358 + INIT_LIST_HEAD(&pci->resources); 359 + 360 + /* Parse our PCI ranges and request their resources */ 361 + err = gen_pci_parse_request_of_pci_ranges(pci); 362 + if (err) 363 + return err; 364 + 365 + /* Parse and map our Configuration Space windows */ 366 + err = gen_pci_parse_map_cfg_windows(pci); 367 + if (err) { 368 + gen_pci_release_of_pci_ranges(pci); 369 + return err; 370 + } 371 + 372 + pci_common_init_dev(dev, &hw); 373 + return 0; 374 + } 375 + 376 + static struct platform_driver gen_pci_driver = { 377 + .driver = { 378 + .name = "pci-host-generic", 379 + .owner = THIS_MODULE, 380 + .of_match_table = gen_pci_of_match, 381 + }, 382 + .probe = gen_pci_probe, 383 + }; 384 + module_platform_driver(gen_pci_driver); 385 + 386 + MODULE_DESCRIPTION("Generic PCI host driver"); 387 + MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>"); 388 + MODULE_LICENSE("GPLv2");
+55 -92
drivers/pci/host/pci-imx6.c
···
 #include <linux/resource.h>
 #include <linux/signal.h>
 #include <linux/types.h>
+#include <linux/interrupt.h>
 
 #include "pcie-designware.h"
···
 
 struct imx6_pcie {
 	int			reset_gpio;
-	int			power_on_gpio;
-	int			wake_up_gpio;
-	int			disable_gpio;
-	struct clk		*lvds_gate;
-	struct clk		*sata_ref_100m;
-	struct clk		*pcie_ref_125m;
-	struct clk		*pcie_axi;
+	struct clk		*pcie_bus;
+	struct clk		*pcie_phy;
+	struct clk		*pcie;
 	struct pcie_port	pp;
 	struct regmap		*iomuxc_gpr;
 	void __iomem		*mem_base;
···
 	struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp);
 	int ret;
 
-	if (gpio_is_valid(imx6_pcie->power_on_gpio))
-		gpio_set_value(imx6_pcie->power_on_gpio, 1);
-
 	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
 			IMX6Q_GPR1_PCIE_TEST_PD, 0 << 18);
 	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
 			IMX6Q_GPR1_PCIE_REF_CLK_EN, 1 << 16);
 
-	ret = clk_prepare_enable(imx6_pcie->sata_ref_100m);
+	ret = clk_prepare_enable(imx6_pcie->pcie_phy);
 	if (ret) {
-		dev_err(pp->dev, "unable to enable sata_ref_100m\n");
-		goto err_sata_ref;
+		dev_err(pp->dev, "unable to enable pcie_phy clock\n");
+		goto err_pcie_phy;
 	}
 
-	ret = clk_prepare_enable(imx6_pcie->pcie_ref_125m);
+	ret = clk_prepare_enable(imx6_pcie->pcie_bus);
 	if (ret) {
-		dev_err(pp->dev, "unable to enable pcie_ref_125m\n");
-		goto err_pcie_ref;
+		dev_err(pp->dev, "unable to enable pcie_bus clock\n");
+		goto err_pcie_bus;
 	}
 
-	ret = clk_prepare_enable(imx6_pcie->lvds_gate);
+	ret = clk_prepare_enable(imx6_pcie->pcie);
 	if (ret) {
-		dev_err(pp->dev, "unable to enable lvds_gate\n");
-		goto err_lvds_gate;
-	}
-
-	ret = clk_prepare_enable(imx6_pcie->pcie_axi);
-	if (ret) {
-		dev_err(pp->dev, "unable to enable pcie_axi\n");
-		goto err_pcie_axi;
+		dev_err(pp->dev, "unable to enable pcie clock\n");
+		goto err_pcie;
 	}
 
 	/* allow the clocks to stabilize */
···
 	}
 	return 0;
 
-err_pcie_axi:
-	clk_disable_unprepare(imx6_pcie->lvds_gate);
-err_lvds_gate:
-	clk_disable_unprepare(imx6_pcie->pcie_ref_125m);
-err_pcie_ref:
-	clk_disable_unprepare(imx6_pcie->sata_ref_100m);
-err_sata_ref:
+err_pcie:
+	clk_disable_unprepare(imx6_pcie->pcie_bus);
+err_pcie_bus:
+	clk_disable_unprepare(imx6_pcie->pcie_phy);
+err_pcie_phy:
 	return ret;
 
 }
···
 	}
 
 	return 0;
+}
+
+static irqreturn_t imx6_pcie_msi_handler(int irq, void *arg)
+{
+	struct pcie_port *pp = arg;
+
+	return dw_handle_msi_irq(pp);
 }
 
 static int imx6_pcie_start_link(struct pcie_port *pp)
···
 	dw_pcie_setup_rc(pp);
 
 	imx6_pcie_start_link(pp);
+
+	if (IS_ENABLED(CONFIG_PCI_MSI))
+		dw_pcie_msi_init(pp);
 }
 
 static void imx6_pcie_reset_phy(struct pcie_port *pp)
···
 	.host_init = imx6_pcie_host_init,
 };
 
-static int imx6_add_pcie_port(struct pcie_port *pp,
+static int __init imx6_add_pcie_port(struct pcie_port *pp,
 			      struct platform_device *pdev)
 {
 	int ret;
 
-	pp->irq = platform_get_irq(pdev, 0);
-	if (!pp->irq) {
-		dev_err(&pdev->dev, "failed to get irq\n");
-		return -ENODEV;
+	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+		pp->msi_irq = platform_get_irq_byname(pdev, "msi");
+		if (pp->msi_irq <= 0) {
+			dev_err(&pdev->dev, "failed to get MSI irq\n");
+			return -ENODEV;
+		}
+
+		ret = devm_request_irq(&pdev->dev, pp->msi_irq,
+				       imx6_pcie_msi_handler,
+				       IRQF_SHARED, "mx6-pcie-msi", pp);
+		if (ret) {
+			dev_err(&pdev->dev, "failed to request MSI irq\n");
+			return -ENODEV;
+		}
 	}
 
 	pp->root_bus_nr = -1;
···
 		}
 	}
 
-	imx6_pcie->power_on_gpio = of_get_named_gpio(np, "power-on-gpio", 0);
-	if (gpio_is_valid(imx6_pcie->power_on_gpio)) {
-		ret = devm_gpio_request_one(&pdev->dev,
-					    imx6_pcie->power_on_gpio,
-					    GPIOF_OUT_INIT_LOW,
-					    "PCIe power enable");
-		if (ret) {
-			dev_err(&pdev->dev, "unable to get power-on gpio\n");
-			return ret;
-		}
-	}
-
-	imx6_pcie->wake_up_gpio = of_get_named_gpio(np, "wake-up-gpio", 0);
-	if (gpio_is_valid(imx6_pcie->wake_up_gpio)) {
-		ret = devm_gpio_request_one(&pdev->dev,
-					    imx6_pcie->wake_up_gpio,
-					    GPIOF_IN,
-					    "PCIe wake up");
-		if (ret) {
-			dev_err(&pdev->dev, "unable to get wake-up gpio\n");
-			return ret;
-		}
-	}
-
-	imx6_pcie->disable_gpio = of_get_named_gpio(np, "disable-gpio", 0);
-	if (gpio_is_valid(imx6_pcie->disable_gpio)) {
-		ret = devm_gpio_request_one(&pdev->dev,
-					    imx6_pcie->disable_gpio,
-					    GPIOF_OUT_INIT_HIGH,
-					    "PCIe disable endpoint");
-		if (ret) {
-			dev_err(&pdev->dev, "unable to get disable-ep gpio\n");
-			return ret;
-		}
-	}
-
 	/* Fetch clocks */
-	imx6_pcie->lvds_gate = devm_clk_get(&pdev->dev, "lvds_gate");
-	if (IS_ERR(imx6_pcie->lvds_gate)) {
+	imx6_pcie->pcie_phy = devm_clk_get(&pdev->dev, "pcie_phy");
+	if (IS_ERR(imx6_pcie->pcie_phy)) {
 		dev_err(&pdev->dev,
-			"lvds_gate clock select missing or invalid\n");
-		return PTR_ERR(imx6_pcie->lvds_gate);
+			"pcie_phy clock source missing or invalid\n");
+		return PTR_ERR(imx6_pcie->pcie_phy);
 	}
 
-	imx6_pcie->sata_ref_100m = devm_clk_get(&pdev->dev, "sata_ref_100m");
-	if (IS_ERR(imx6_pcie->sata_ref_100m)) {
+	imx6_pcie->pcie_bus = devm_clk_get(&pdev->dev, "pcie_bus");
+	if (IS_ERR(imx6_pcie->pcie_bus)) {
 		dev_err(&pdev->dev,
-			"sata_ref_100m clock source missing or invalid\n");
-		return PTR_ERR(imx6_pcie->sata_ref_100m);
+			"pcie_bus clock source missing or invalid\n");
+		return PTR_ERR(imx6_pcie->pcie_bus);
 	}
 
-	imx6_pcie->pcie_ref_125m = devm_clk_get(&pdev->dev, "pcie_ref_125m");
-	if (IS_ERR(imx6_pcie->pcie_ref_125m)) {
+	imx6_pcie->pcie = devm_clk_get(&pdev->dev, "pcie");
+	if (IS_ERR(imx6_pcie->pcie)) {
 		dev_err(&pdev->dev,
-			"pcie_ref_125m clock source missing or invalid\n");
-		return PTR_ERR(imx6_pcie->pcie_ref_125m);
-	}
-
-	imx6_pcie->pcie_axi = devm_clk_get(&pdev->dev, "pcie_axi");
-	if (IS_ERR(imx6_pcie->pcie_axi)) {
-		dev_err(&pdev->dev,
-			"pcie_axi clock source missing or invalid\n");
-		return PTR_ERR(imx6_pcie->pcie_axi);
+			"pcie clock source missing or invalid\n");
+		return PTR_ERR(imx6_pcie->pcie);
 	}
 
 	/* Grab GPR config register range */
+29 -2
drivers/pci/host/pci-rcar-gen2.c
···
 	struct resource io_res;
 	struct resource mem_res;
 	struct resource *cfg_res;
+	unsigned busnr;
 	int irq;
 	unsigned long window_size;
 };
···
 	pci_add_resource(&sys->resources, &priv->io_res);
 	pci_add_resource(&sys->resources, &priv->mem_res);
 
-	/* Setup bus number based on platform device id */
-	sys->busnr = to_platform_device(priv->dev)->id;
+	/* Setup bus number based on platform device id / of bus-range */
+	sys->busnr = priv->busnr;
 	return 1;
 }
···
 
 	priv->window_size = SZ_1G;
 
+	if (pdev->dev.of_node) {
+		struct resource busnr;
+		int ret;
+
+		ret = of_pci_parse_bus_range(pdev->dev.of_node, &busnr);
+		if (ret < 0) {
+			dev_err(&pdev->dev, "failed to parse bus-range\n");
+			return ret;
+		}
+
+		priv->busnr = busnr.start;
+		if (busnr.end != busnr.start)
+			dev_warn(&pdev->dev, "only one bus number supported\n");
+	} else {
+		priv->busnr = pdev->id;
+	}
+
 	hw_private[0] = priv;
 	memset(&hw, 0, sizeof(hw));
 	hw.nr_controllers = ARRAY_SIZE(hw_private);
···
 	return 0;
 }
 
+static struct of_device_id rcar_pci_of_match[] = {
+	{ .compatible = "renesas,pci-r8a7790", },
+	{ .compatible = "renesas,pci-r8a7791", },
+	{ },
+};
+
+MODULE_DEVICE_TABLE(of, rcar_pci_of_match);
+
 static struct platform_driver rcar_pci_driver = {
 	.driver = {
 		.name = "pci-rcar-gen2",
 		.owner = THIS_MODULE,
 		.suppress_bind_attrs = true,
+		.of_match_table = rcar_pci_of_match,
 	},
 	.probe = rcar_pci_probe,
 };
+5 -1
drivers/pci/host/pcie-designware.c
···
 };
 
 /* MSI int handler */
-void dw_handle_msi_irq(struct pcie_port *pp)
+irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
 {
 	unsigned long val;
 	int i, pos, irq;
+	irqreturn_t ret = IRQ_NONE;
 
 	for (i = 0; i < MAX_MSI_CTRLS; i++) {
 		dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12, 4,
 				(u32 *)&val);
 		if (val) {
+			ret = IRQ_HANDLED;
 			pos = 0;
 			while ((pos = find_next_bit(&val, 32, pos)) != 32) {
 				irq = irq_find_mapping(pp->irq_domain,
···
 			}
 		}
 	}
+
+	return ret;
 }
 
 void dw_pcie_msi_init(struct pcie_port *pp)
+1 -1
drivers/pci/host/pcie-designware.h
···
 
 int dw_pcie_cfg_read(void __iomem *addr, int where, int size, u32 *val);
 int dw_pcie_cfg_write(void __iomem *addr, int where, int size, u32 val);
-void dw_handle_msi_irq(struct pcie_port *pp);
+irqreturn_t dw_handle_msi_irq(struct pcie_port *pp);
 void dw_pcie_msi_init(struct pcie_port *pp);
 int dw_pcie_link_up(struct pcie_port *pp);
 void dw_pcie_setup_rc(struct pcie_port *pp);
+1008
drivers/pci/host/pcie-rcar.c
···
+/*
+ * PCIe driver for Renesas R-Car SoCs
+ *  Copyright (C) 2014 Renesas Electronics Europe Ltd
+ *
+ * Based on:
+ *  arch/sh/drivers/pci/pcie-sh7786.c
+ *  arch/sh/drivers/pci/ops-sh7786.c
+ *  Copyright (C) 2009 - 2011 Paul Mundt
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/msi.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/of_pci.h>
+#include <linux/of_platform.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#define DRV_NAME "rcar-pcie"
+
+#define PCIECAR			0x000010
+#define PCIECCTLR		0x000018
+#define  CONFIG_SEND_ENABLE	(1 << 31)
+#define  TYPE0			(0 << 8)
+#define  TYPE1			(1 << 8)
+#define PCIECDR			0x000020
+#define PCIEMSR			0x000028
+#define PCIEINTXR		0x000400
+#define PCIEMSITXR		0x000840
+
+/* Transfer control */
+#define PCIETCTLR		0x02000
+#define  CFINIT			1
+#define PCIETSTR		0x02004
+#define  DATA_LINK_ACTIVE	1
+#define PCIEERRFR		0x02020
+#define  UNSUPPORTED_REQUEST	(1 << 4)
+#define PCIEMSIFR		0x02044
+#define PCIEMSIALR		0x02048
+#define  MSIFE			1
+#define PCIEMSIAUR		0x0204c
+#define PCIEMSIIER		0x02050
+
+/* root port address */
+#define PCIEPRAR(x)		(0x02080 + ((x) * 0x4))
+
+/* local address reg & mask */
+#define PCIELAR(x)		(0x02200 + ((x) * 0x20))
+#define PCIELAMR(x)		(0x02208 + ((x) * 0x20))
+#define  LAM_PREFETCH		(1 << 3)
+#define  LAM_64BIT		(1 << 2)
+#define  LAR_ENABLE		(1 << 1)
+
+/* PCIe address reg & mask */
+#define PCIEPARL(x)		(0x03400 + ((x) * 0x20))
+#define PCIEPARH(x)		(0x03404 + ((x) * 0x20))
+#define PCIEPAMR(x)		(0x03408 + ((x) * 0x20))
+#define PCIEPTCTLR(x)		(0x0340c + ((x) * 0x20))
+#define  PAR_ENABLE		(1 << 31)
+#define  IO_SPACE		(1 << 8)
+
+/* Configuration */
+#define PCICONF(x)		(0x010000 + ((x) * 0x4))
+#define PMCAP(x)		(0x010040 + ((x) * 0x4))
+#define EXPCAP(x)		(0x010070 + ((x) * 0x4))
+#define VCCAP(x)		(0x010100 + ((x) * 0x4))
+
+/* link layer */
+#define IDSETR1			0x011004
+#define TLCTLR			0x011048
+#define MACSR			0x011054
+#define MACCTLR			0x011058
+#define  SCRAMBLE_DISABLE	(1 << 27)
+
+/* R-Car H1 PHY */
+#define H1_PCIEPHYADRR		0x04000c
+#define  WRITE_CMD		(1 << 16)
+#define  PHY_ACK		(1 << 24)
+#define  RATE_POS		12
+#define  LANE_POS		8
+#define  ADR_POS		0
+#define H1_PCIEPHYDOUTR		0x040014
+#define H1_PCIEPHYSR		0x040018
+
+#define INT_PCI_MSI_NR	32
+
+#define RCONF(x)	(PCICONF(0)+(x))
+#define RPMCAP(x)	(PMCAP(0)+(x))
+#define REXPCAP(x)	(EXPCAP(0)+(x))
+#define RVCCAP(x)	(VCCAP(0)+(x))
+
+#define PCIE_CONF_BUS(b)	(((b) & 0xff) << 24)
+#define PCIE_CONF_DEV(d)	(((d) & 0x1f) << 19)
+#define PCIE_CONF_FUNC(f)	(((f) & 0x7) << 16)
+
+#define PCI_MAX_RESOURCES 4
+#define MAX_NR_INBOUND_MAPS 6
+
+struct rcar_msi {
+	DECLARE_BITMAP(used, INT_PCI_MSI_NR);
+	struct irq_domain *domain;
+	struct msi_chip chip;
+	unsigned long pages;
+	struct mutex lock;
+	int irq1;
+	int irq2;
+};
+
+static inline struct rcar_msi *to_rcar_msi(struct msi_chip *chip)
+{
+	return container_of(chip, struct rcar_msi, chip);
+}
+
+/* Structure representing the PCIe interface */
+struct rcar_pcie {
+	struct device		*dev;
+	void __iomem		*base;
+	struct resource		res[PCI_MAX_RESOURCES];
+	struct resource		busn;
+	int			root_bus_nr;
+	struct clk		*clk;
+	struct clk		*bus_clk;
+	struct rcar_msi		msi;
+};
+
+static inline struct rcar_pcie *sys_to_pcie(struct pci_sys_data *sys)
+{
+	return sys->private_data;
+}
+
+static void pci_write_reg(struct rcar_pcie *pcie, unsigned long val,
+			  unsigned long reg)
+{
+	writel(val, pcie->base + reg);
+}
+
+static unsigned long pci_read_reg(struct rcar_pcie *pcie, unsigned long reg)
+{
+	return readl(pcie->base + reg);
+}
+
+enum {
+	PCI_ACCESS_READ,
+	PCI_ACCESS_WRITE,
+};
+
+static void rcar_rmw32(struct rcar_pcie *pcie, int where, u32 mask, u32 data)
+{
+	int shift = 8 * (where & 3);
+	u32 val = pci_read_reg(pcie, where & ~3);
+
+	val &= ~(mask << shift);
+	val |= data << shift;
+	pci_write_reg(pcie, val, where & ~3);
+}
+
+static u32 rcar_read_conf(struct rcar_pcie *pcie, int where)
+{
+	int shift = 8 * (where & 3);
+	u32 val = pci_read_reg(pcie, where & ~3);
+
+	return val >> shift;
+}
+
+/* Serialization is provided by 'pci_lock' in drivers/pci/access.c */
+static int rcar_pcie_config_access(struct rcar_pcie *pcie,
+		unsigned char access_type, struct pci_bus *bus,
+		unsigned int devfn, int where, u32 *data)
+{
+	int dev, func, reg, index;
+
+	dev = PCI_SLOT(devfn);
+	func = PCI_FUNC(devfn);
+	reg = where & ~3;
+	index = reg / 4;
+
+	/*
+	 * While each channel has its own memory-mapped extended config
+	 * space, it's generally only accessible when in endpoint mode.
+	 * When in root complex mode, the controller is unable to target
+	 * itself with either type 0 or type 1 accesses, and indeed, any
+	 * controller initiated target transfer to its own config space
+	 * result in a completer abort.
+	 *
+	 * Each channel effectively only supports a single device, but as
+	 * the same channel <-> device access works for any PCI_SLOT()
+	 * value, we cheat a bit here and bind the controller's config
+	 * space to devfn 0 in order to enable self-enumeration. In this
+	 * case the regular ECAR/ECDR path is sidelined and the mangled
+	 * config access itself is initiated as an internal bus transaction.
+	 */
+	if (pci_is_root_bus(bus)) {
+		if (dev != 0)
+			return PCIBIOS_DEVICE_NOT_FOUND;
+
+		if (access_type == PCI_ACCESS_READ) {
+			*data = pci_read_reg(pcie, PCICONF(index));
+		} else {
+			/* Keep an eye out for changes to the root bus number */
+			if (pci_is_root_bus(bus) && (reg == PCI_PRIMARY_BUS))
+				pcie->root_bus_nr = *data & 0xff;
+
+			pci_write_reg(pcie, *data, PCICONF(index));
+		}
+
+		return PCIBIOS_SUCCESSFUL;
+	}
+
+	if (pcie->root_bus_nr < 0)
+		return PCIBIOS_DEVICE_NOT_FOUND;
+
+	/* Clear errors */
+	pci_write_reg(pcie, pci_read_reg(pcie, PCIEERRFR), PCIEERRFR);
+
+	/* Set the PIO address */
+	pci_write_reg(pcie, PCIE_CONF_BUS(bus->number) | PCIE_CONF_DEV(dev) |
+				PCIE_CONF_FUNC(func) | reg, PCIECAR);
+
+	/* Enable the configuration access */
+	if (bus->parent->number == pcie->root_bus_nr)
+		pci_write_reg(pcie, CONFIG_SEND_ENABLE | TYPE0, PCIECCTLR);
+	else
+		pci_write_reg(pcie, CONFIG_SEND_ENABLE | TYPE1, PCIECCTLR);
+
+	/* Check for errors */
+	if (pci_read_reg(pcie, PCIEERRFR) & UNSUPPORTED_REQUEST)
+		return PCIBIOS_DEVICE_NOT_FOUND;
+
+	/* Check for master and target aborts */
+	if (rcar_read_conf(pcie, RCONF(PCI_STATUS)) &
+		(PCI_STATUS_REC_MASTER_ABORT | PCI_STATUS_REC_TARGET_ABORT))
+		return PCIBIOS_DEVICE_NOT_FOUND;
+
+	if (access_type == PCI_ACCESS_READ)
+		*data = pci_read_reg(pcie, PCIECDR);
+	else
+		pci_write_reg(pcie, *data, PCIECDR);
+
+	/* Disable the configuration access */
+	pci_write_reg(pcie, 0, PCIECCTLR);
+
+	return PCIBIOS_SUCCESSFUL;
+}
+
+static int rcar_pcie_read_conf(struct pci_bus *bus, unsigned int devfn,
+			       int where, int size, u32 *val)
+{
+	struct rcar_pcie *pcie = sys_to_pcie(bus->sysdata);
+	int ret;
+
+	if ((size == 2) && (where & 1))
+		return PCIBIOS_BAD_REGISTER_NUMBER;
+	else if ((size == 4) && (where & 3))
+		return PCIBIOS_BAD_REGISTER_NUMBER;
+
+	ret = rcar_pcie_config_access(pcie, PCI_ACCESS_READ,
+				      bus, devfn, where, val);
+	if (ret != PCIBIOS_SUCCESSFUL) {
+		*val = 0xffffffff;
+		return ret;
+	}
+
+	if (size == 1)
+		*val = (*val >> (8 * (where & 3))) & 0xff;
+	else if (size == 2)
+		*val = (*val >> (8 * (where & 2))) & 0xffff;
+
+	dev_dbg(&bus->dev, "pcie-config-read: bus=%3d devfn=0x%04x "
+		"where=0x%04x size=%d val=0x%08lx\n", bus->number,
+		devfn, where, size, (unsigned long)*val);
+
+	return ret;
+}
+
+/* Serialization is provided by 'pci_lock' in drivers/pci/access.c */
+static int rcar_pcie_write_conf(struct pci_bus *bus, unsigned int devfn,
+				int where, int size, u32 val)
+{
+	struct rcar_pcie *pcie = sys_to_pcie(bus->sysdata);
+	int shift, ret;
+	u32 data;
+
+	if ((size == 2) && (where & 1))
+		return PCIBIOS_BAD_REGISTER_NUMBER;
+	else if ((size == 4) && (where & 3))
+		return PCIBIOS_BAD_REGISTER_NUMBER;
+
+	ret = rcar_pcie_config_access(pcie, PCI_ACCESS_READ,
+				      bus, devfn, where, &data);
+	if (ret != PCIBIOS_SUCCESSFUL)
+		return ret;
+
+	dev_dbg(&bus->dev, "pcie-config-write: bus=%3d devfn=0x%04x "
+		"where=0x%04x size=%d val=0x%08lx\n", bus->number,
+		devfn, where, size, (unsigned long)val);
+
+	if (size == 1) {
+		shift = 8 * (where & 3);
+		data &= ~(0xff << shift);
+		data |= ((val & 0xff) << shift);
+	} else if (size == 2) {
+		shift = 8 * (where & 2);
+		data &= ~(0xffff << shift);
+		data |= ((val & 0xffff) << shift);
+	} else
+		data = val;
+
+	ret = rcar_pcie_config_access(pcie, PCI_ACCESS_WRITE,
+				      bus, devfn, where, &data);
+
+	return ret;
+}
+
+static struct pci_ops rcar_pcie_ops = {
+	.read	= rcar_pcie_read_conf,
+	.write	= rcar_pcie_write_conf,
+};
+
+static void rcar_pcie_setup_window(int win, struct resource *res,
+				   struct rcar_pcie *pcie)
+{
+	/* Setup PCIe address space mappings for each resource */
+	resource_size_t size;
+	u32 mask;
+
+	pci_write_reg(pcie, 0x00000000, PCIEPTCTLR(win));
+
+	/*
+	 * The PAMR mask is calculated in units of 128Bytes, which
+	 * keeps things pretty simple.
+	 */
+	size = resource_size(res);
+	mask = (roundup_pow_of_two(size) / SZ_128) - 1;
+	pci_write_reg(pcie, mask << 7, PCIEPAMR(win));
+
+	pci_write_reg(pcie, upper_32_bits(res->start), PCIEPARH(win));
+	pci_write_reg(pcie, lower_32_bits(res->start), PCIEPARL(win));
+
+	/* First resource is for IO */
+	mask = PAR_ENABLE;
+	if (res->flags & IORESOURCE_IO)
+		mask |= IO_SPACE;
+
+	pci_write_reg(pcie, mask, PCIEPTCTLR(win));
+}
+
+static int rcar_pcie_setup(int nr, struct pci_sys_data *sys)
+{
+	struct rcar_pcie *pcie = sys_to_pcie(sys);
+	struct resource *res;
+	int i;
+
+	pcie->root_bus_nr = -1;
+
+	/* Setup PCI resources */
+	for (i = 0; i < PCI_MAX_RESOURCES; i++) {
+
+		res = &pcie->res[i];
+		if (!res->flags)
+			continue;
+
+		rcar_pcie_setup_window(i, res, pcie);
+
+		if (res->flags & IORESOURCE_IO)
+			pci_ioremap_io(nr * SZ_64K, res->start);
+		else
+			pci_add_resource(&sys->resources, res);
+	}
+	pci_add_resource(&sys->resources, &pcie->busn);
+
+	return 1;
+}
+
+static void rcar_pcie_add_bus(struct pci_bus *bus)
+{
+	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+		struct rcar_pcie *pcie = sys_to_pcie(bus->sysdata);
+
+		bus->msi = &pcie->msi.chip;
+	}
+}
+
+struct hw_pci rcar_pci = {
+	.setup		= rcar_pcie_setup,
+	.map_irq	= of_irq_parse_and_map_pci,
+	.ops		= &rcar_pcie_ops,
+	.add_bus	= rcar_pcie_add_bus,
+};
+
+static void rcar_pcie_enable(struct rcar_pcie *pcie)
+{
+	struct platform_device *pdev = to_platform_device(pcie->dev);
+
+	rcar_pci.nr_controllers = 1;
+	rcar_pci.private_data = (void **)&pcie;
+
+	pci_common_init_dev(&pdev->dev, &rcar_pci);
+#ifdef CONFIG_PCI_DOMAINS
+	rcar_pci.domain++;
+#endif
+}
+
+static int phy_wait_for_ack(struct rcar_pcie *pcie)
+{
+	unsigned int timeout = 100;
+
+	while (timeout--) {
+		if (pci_read_reg(pcie, H1_PCIEPHYADRR) & PHY_ACK)
+			return 0;
+
+		udelay(100);
+	}
+
+	dev_err(pcie->dev, "Access to PCIe phy timed out\n");
+
+	return -ETIMEDOUT;
+}
+
+static void phy_write_reg(struct rcar_pcie *pcie,
+			  unsigned int rate, unsigned int addr,
+			  unsigned int lane, unsigned int data)
+{
+	unsigned long phyaddr;
+
+	phyaddr = WRITE_CMD |
+		((rate & 1) << RATE_POS) |
+		((lane & 0xf) << LANE_POS) |
+		((addr & 0xff) << ADR_POS);
+
+	/* Set write data */
+	pci_write_reg(pcie, data, H1_PCIEPHYDOUTR);
+	pci_write_reg(pcie, phyaddr, H1_PCIEPHYADRR);
+
+	/* Ignore errors as they will be dealt with if the data link is down */
+	phy_wait_for_ack(pcie);
+
+	/* Clear command */
+	pci_write_reg(pcie, 0, H1_PCIEPHYDOUTR);
+	pci_write_reg(pcie, 0, H1_PCIEPHYADRR);
+
+	/* Ignore errors as they will be dealt with if the data link is down */
+	phy_wait_for_ack(pcie);
+}
+
+static int rcar_pcie_wait_for_dl(struct rcar_pcie *pcie)
+{
+	unsigned int timeout = 10;
+
+	while (timeout--) {
+		if ((pci_read_reg(pcie, PCIETSTR) & DATA_LINK_ACTIVE))
+			return 0;
+
+		msleep(5);
+	}
+
+	return -ETIMEDOUT;
+}
+
+static int rcar_pcie_hw_init(struct rcar_pcie *pcie)
+{
+	int err;
+
+	/* Begin initialization */
+	pci_write_reg(pcie, 0, PCIETCTLR);
+
+	/* Set mode */
+	pci_write_reg(pcie, 1, PCIEMSR);
+
+	/*
+	 * Initial header for port config space is type 1, set the device
+	 * class to match. Hardware takes care of propagating the IDSETR
+	 * settings, so there is no need to bother with a quirk.
+	 */
+	pci_write_reg(pcie, PCI_CLASS_BRIDGE_PCI << 16, IDSETR1);
+
+	/*
+	 * Setup Secondary Bus Number & Subordinate Bus Number, even though
+	 * they aren't used, to avoid bridge being detected as broken.
+	 */
+	rcar_rmw32(pcie, RCONF(PCI_SECONDARY_BUS), 0xff, 1);
+	rcar_rmw32(pcie, RCONF(PCI_SUBORDINATE_BUS), 0xff, 1);
+
+	/* Initialize default capabilities. */
+	rcar_rmw32(pcie, REXPCAP(0), 0, PCI_CAP_ID_EXP);
+	rcar_rmw32(pcie, REXPCAP(PCI_EXP_FLAGS),
+		PCI_EXP_FLAGS_TYPE, PCI_EXP_TYPE_ROOT_PORT << 4);
+	rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), 0x7f,
+		PCI_HEADER_TYPE_BRIDGE);
+
+	/* Enable data link layer active state reporting */
+	rcar_rmw32(pcie, REXPCAP(PCI_EXP_LNKCAP), 0, PCI_EXP_LNKCAP_DLLLARC);
+
+	/* Write out the physical slot number = 0 */
+	rcar_rmw32(pcie, REXPCAP(PCI_EXP_SLTCAP), PCI_EXP_SLTCAP_PSN, 0);
+
+	/* Set the completion timer timeout to the maximum 50ms. */
+	rcar_rmw32(pcie, TLCTLR+1, 0x3f, 50);
+
+	/* Terminate list of capabilities (Next Capability Offset=0) */
+	rcar_rmw32(pcie, RVCCAP(0), 0xfff0, 0);
+
+	/* Enable MAC data scrambling. */
+	rcar_rmw32(pcie, MACCTLR, SCRAMBLE_DISABLE, 0);
+
+	/* Enable MSI */
+	if (IS_ENABLED(CONFIG_PCI_MSI))
+		pci_write_reg(pcie, 0x101f0000, PCIEMSITXR);
+
+	/* Finish initialization - establish a PCI Express link */
+	pci_write_reg(pcie, CFINIT, PCIETCTLR);
+
+	/* This will timeout if we don't have a link. */
+	err = rcar_pcie_wait_for_dl(pcie);
+	if (err)
+		return err;
+
+	/* Enable INTx interrupts */
+	rcar_rmw32(pcie, PCIEINTXR, 0, 0xF << 8);
+
+	/* Enable slave Bus Mastering */
+	rcar_rmw32(pcie, RCONF(PCI_STATUS), PCI_STATUS_DEVSEL_MASK,
+		PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER |
+		PCI_STATUS_CAP_LIST | PCI_STATUS_DEVSEL_FAST);
+
+	wmb();
+
+	return 0;
+}
+
+static int rcar_pcie_hw_init_h1(struct rcar_pcie *pcie)
+{
+	unsigned int timeout = 10;
+
+	/* Initialize the phy */
+	phy_write_reg(pcie, 0, 0x42, 0x1, 0x0EC34191);
+	phy_write_reg(pcie, 1, 0x42, 0x1, 0x0EC34180);
+	phy_write_reg(pcie, 0, 0x43, 0x1, 0x00210188);
+	phy_write_reg(pcie, 1, 0x43, 0x1, 0x00210188);
+	phy_write_reg(pcie, 0, 0x44, 0x1, 0x015C0014);
+	phy_write_reg(pcie, 1, 0x44, 0x1, 0x015C0014);
+	phy_write_reg(pcie, 1, 0x4C, 0x1, 0x786174A0);
+	phy_write_reg(pcie, 1, 0x4D, 0x1, 0x048000BB);
+	phy_write_reg(pcie, 0, 0x51, 0x1, 0x079EC062);
+	phy_write_reg(pcie, 0, 0x52, 0x1, 0x20000000);
+	phy_write_reg(pcie, 1, 0x52, 0x1, 0x20000000);
+	phy_write_reg(pcie, 1, 0x56, 0x1, 0x00003806);
+
+	phy_write_reg(pcie, 0, 0x60, 0x1, 0x004B03A5);
+	phy_write_reg(pcie, 0, 0x64, 0x1, 0x3F0F1F0F);
+	phy_write_reg(pcie, 0, 0x66, 0x1, 0x00008000);
+
+	while (timeout--) {
+		if (pci_read_reg(pcie, H1_PCIEPHYSR))
+			return rcar_pcie_hw_init(pcie);
+
+		msleep(5);
+	}
+
+	return -ETIMEDOUT;
+}
+
+static int rcar_msi_alloc(struct rcar_msi *chip)
+{
+	int msi;
+
+	mutex_lock(&chip->lock);
+
+	msi = find_first_zero_bit(chip->used, INT_PCI_MSI_NR);
+	if (msi < INT_PCI_MSI_NR)
+		set_bit(msi, chip->used);
+	else
+		msi = -ENOSPC;
+
+	mutex_unlock(&chip->lock);
+
+	return msi;
+}
+
+static void rcar_msi_free(struct rcar_msi *chip, unsigned long irq)
+{
+	mutex_lock(&chip->lock);
+	clear_bit(irq, chip->used);
+	mutex_unlock(&chip->lock);
+}
+
+static irqreturn_t rcar_pcie_msi_irq(int irq, void *data)
+{
+	struct rcar_pcie *pcie = data;
+	struct rcar_msi *msi = &pcie->msi;
+	unsigned long reg;
+
+	reg = pci_read_reg(pcie, PCIEMSIFR);
+
+	/* MSI & INTx share an interrupt - we only handle MSI here */
+	if (!reg)
+		return IRQ_NONE;
+
+	while (reg) {
+		unsigned int index = find_first_bit(&reg, 32);
+		unsigned int irq;
+
+		/* clear the interrupt */
+		pci_write_reg(pcie, 1 << index, PCIEMSIFR);
+
+		irq = irq_find_mapping(msi->domain, index);
+		if (irq) {
+			if (test_bit(index, msi->used))
+				generic_handle_irq(irq);
+			else
+				dev_info(pcie->dev, "unhandled MSI\n");
+		} else {
+			/* Unknown MSI, just clear it */
+			dev_dbg(pcie->dev, "unexpected MSI\n");
+		}
+
+		/* see if there's any more pending in this vector */
+		reg = pci_read_reg(pcie, PCIEMSIFR);
+	}
+
+	return IRQ_HANDLED;
+}
+
+static int rcar_msi_setup_irq(struct msi_chip *chip, struct pci_dev *pdev,
+			      struct msi_desc *desc)
+{
+	struct rcar_msi *msi = to_rcar_msi(chip);
+	struct rcar_pcie *pcie = container_of(chip, struct rcar_pcie, msi.chip);
+	struct msi_msg msg;
+	unsigned int irq;
+	int hwirq;
+
+	hwirq = rcar_msi_alloc(msi);
+	if (hwirq < 0)
+		return hwirq;
+
+	irq = irq_create_mapping(msi->domain, hwirq);
+	if (!irq) {
+		rcar_msi_free(msi, hwirq);
+		return -EINVAL;
+	}
+
+	irq_set_msi_desc(irq, desc);
+
+	msg.address_lo = pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE;
+	msg.address_hi = pci_read_reg(pcie, PCIEMSIAUR);
+	msg.data = hwirq;
+
+	write_msi_msg(irq, &msg);
+
+	return 0;
+}
+
+static void rcar_msi_teardown_irq(struct msi_chip *chip, unsigned int irq)
+{
+	struct rcar_msi *msi = to_rcar_msi(chip);
+	struct irq_data *d = irq_get_irq_data(irq);
+
+	rcar_msi_free(msi, d->hwirq);
+}
+
+static struct irq_chip rcar_msi_irq_chip = {
+	.name = "R-Car PCIe MSI",
+	.irq_enable = unmask_msi_irq,
+	.irq_disable = mask_msi_irq,
+	.irq_mask = mask_msi_irq,
+	.irq_unmask = unmask_msi_irq,
+};
+
+static int rcar_msi_map(struct irq_domain *domain, unsigned int irq,
+			irq_hw_number_t hwirq)
+{
+	irq_set_chip_and_handler(irq, &rcar_msi_irq_chip, handle_simple_irq);
+	irq_set_chip_data(irq, domain->host_data);
+	set_irq_flags(irq, IRQF_VALID);
+
+	return 0;
+}
+
+static const struct irq_domain_ops msi_domain_ops = {
+	.map = rcar_msi_map,
+};
+
+static int rcar_pcie_enable_msi(struct rcar_pcie *pcie)
+{
+	struct platform_device *pdev = to_platform_device(pcie->dev);
+	struct rcar_msi *msi = &pcie->msi;
+	unsigned long base;
+	int err;
+
+	mutex_init(&msi->lock);
+
+	msi->chip.dev = pcie->dev;
+	msi->chip.setup_irq = rcar_msi_setup_irq;
+	msi->chip.teardown_irq = rcar_msi_teardown_irq;
+
+	msi->domain = irq_domain_add_linear(pcie->dev->of_node, INT_PCI_MSI_NR,
+					    &msi_domain_ops, &msi->chip);
+	if (!msi->domain) {
+		dev_err(&pdev->dev, "failed to create IRQ domain\n");
+		return -ENOMEM;
+	}
+
+	/* Two irqs are for MSI, but they are also used for non-MSI irqs */
+	err = devm_request_irq(&pdev->dev, msi->irq1, rcar_pcie_msi_irq,
+			       IRQF_SHARED, rcar_msi_irq_chip.name, pcie);
+	if (err < 0) {
+		dev_err(&pdev->dev, "failed to request IRQ: %d\n", err);
+		goto err;
+	}
+
+	err = devm_request_irq(&pdev->dev, msi->irq2, rcar_pcie_msi_irq,
+			       IRQF_SHARED, rcar_msi_irq_chip.name, pcie);
+	if (err < 0) {
+		dev_err(&pdev->dev, "failed to request IRQ: %d\n", err);
+		goto err;
+	}
+
+	/* setup MSI data target */
+	msi->pages = __get_free_pages(GFP_KERNEL, 0);
+	base = virt_to_phys((void *)msi->pages);
+
+	pci_write_reg(pcie, base | MSIFE, PCIEMSIALR);
+	pci_write_reg(pcie, 0, PCIEMSIAUR);
+
+	/* enable all MSI interrupts */
+	pci_write_reg(pcie, 0xffffffff, PCIEMSIIER);
+
+	return 0;
+
+err:
+	irq_domain_remove(msi->domain);
+	return err;
+}
+
+static int rcar_pcie_get_resources(struct platform_device *pdev,
+				   struct rcar_pcie *pcie)
+{
+	struct resource res;
+	int err, i;
+
+	err = of_address_to_resource(pdev->dev.of_node, 0, &res);
+	if (err)
+		return err;
+
+	pcie->clk = devm_clk_get(&pdev->dev, "pcie");
+	if (IS_ERR(pcie->clk)) {
+		dev_err(pcie->dev, "cannot get platform clock\n");
+		return PTR_ERR(pcie->clk);
+	}
+	err = clk_prepare_enable(pcie->clk);
+	if (err)
+		goto fail_clk;
+
+	pcie->bus_clk = devm_clk_get(&pdev->dev, "pcie_bus");
+	if (IS_ERR(pcie->bus_clk)) {
+		dev_err(pcie->dev, "cannot get pcie bus clock\n");
+		err = PTR_ERR(pcie->bus_clk);
+		goto fail_clk;
+	}
+	err = clk_prepare_enable(pcie->bus_clk);
+	if (err)
+		goto err_map_reg;
+
+	i = irq_of_parse_and_map(pdev->dev.of_node, 0);
+	if (i < 0) {
+		dev_err(pcie->dev, "cannot get platform resources for msi interrupt\n");
+		err = -ENOENT;
+		goto err_map_reg;
+	}
+	pcie->msi.irq1 = i;
+
+	i = irq_of_parse_and_map(pdev->dev.of_node, 1);
+	if (i < 0) {
+		dev_err(pcie->dev, "cannot
get platform resources for msi interrupt\n"); 787 + err = -ENOENT; 788 + goto err_map_reg; 789 + } 790 + pcie->msi.irq2 = i; 791 + 792 + pcie->base = devm_ioremap_resource(&pdev->dev, &res); 793 + if (IS_ERR(pcie->base)) { 794 + err = PTR_ERR(pcie->base); 795 + goto err_map_reg; 796 + } 797 + 798 + return 0; 799 + 800 + err_map_reg: 801 + clk_disable_unprepare(pcie->bus_clk); 802 + fail_clk: 803 + clk_disable_unprepare(pcie->clk); 804 + 805 + return err; 806 + } 807 + 808 + static int rcar_pcie_inbound_ranges(struct rcar_pcie *pcie, 809 + struct of_pci_range *range, 810 + int *index) 811 + { 812 + u64 restype = range->flags; 813 + u64 cpu_addr = range->cpu_addr; 814 + u64 cpu_end = range->cpu_addr + range->size; 815 + u64 pci_addr = range->pci_addr; 816 + u32 flags = LAM_64BIT | LAR_ENABLE; 817 + u64 mask; 818 + u64 size; 819 + int idx = *index; 820 + 821 + if (restype & IORESOURCE_PREFETCH) 822 + flags |= LAM_PREFETCH; 823 + 824 + /* 825 + * If the size of the range is larger than the alignment of the start 826 + * address, we have to use multiple entries to perform the mapping. 827 + */ 828 + if (cpu_addr > 0) { 829 + unsigned long nr_zeros = __ffs64(cpu_addr); 830 + u64 alignment = 1ULL << nr_zeros; 831 + size = min(range->size, alignment); 832 + } else { 833 + size = range->size; 834 + } 835 + /* Hardware supports max 4GiB inbound region */ 836 + size = min(size, 1ULL << 32); 837 + 838 + mask = roundup_pow_of_two(size) - 1; 839 + mask &= ~0xf; 840 + 841 + while (cpu_addr < cpu_end) { 842 + /* 843 + * Set up 64-bit inbound regions as the range parser doesn't 844 + * distinguish between 32 and 64-bit types. 
845 + */ 846 + pci_write_reg(pcie, lower_32_bits(pci_addr), PCIEPRAR(idx)); 847 + pci_write_reg(pcie, lower_32_bits(cpu_addr), PCIELAR(idx)); 848 + pci_write_reg(pcie, lower_32_bits(mask) | flags, PCIELAMR(idx)); 849 + 850 + pci_write_reg(pcie, upper_32_bits(pci_addr), PCIEPRAR(idx+1)); 851 + pci_write_reg(pcie, upper_32_bits(cpu_addr), PCIELAR(idx+1)); 852 + pci_write_reg(pcie, 0, PCIELAMR(idx+1)); 853 + 854 + pci_addr += size; 855 + cpu_addr += size; 856 + idx += 2; 857 + 858 + if (idx > MAX_NR_INBOUND_MAPS) { 859 + dev_err(pcie->dev, "Failed to map inbound regions!\n"); 860 + return -EINVAL; 861 + } 862 + } 863 + *index = idx; 864 + 865 + return 0; 866 + } 867 + 868 + static int pci_dma_range_parser_init(struct of_pci_range_parser *parser, 869 + struct device_node *node) 870 + { 871 + const int na = 3, ns = 2; 872 + int rlen; 873 + 874 + parser->node = node; 875 + parser->pna = of_n_addr_cells(node); 876 + parser->np = parser->pna + na + ns; 877 + 878 + parser->range = of_get_property(node, "dma-ranges", &rlen); 879 + if (!parser->range) 880 + return -ENOENT; 881 + 882 + parser->end = parser->range + rlen / sizeof(__be32); 883 + return 0; 884 + } 885 + 886 + static int rcar_pcie_parse_map_dma_ranges(struct rcar_pcie *pcie, 887 + struct device_node *np) 888 + { 889 + struct of_pci_range range; 890 + struct of_pci_range_parser parser; 891 + int index = 0; 892 + int err; 893 + 894 + if (pci_dma_range_parser_init(&parser, np)) 895 + return -EINVAL; 896 + 897 + /* Get the dma-ranges from DT */ 898 + for_each_of_pci_range(&parser, &range) { 899 + u64 end = range.cpu_addr + range.size - 1; 900 + dev_dbg(pcie->dev, "0x%08x 0x%016llx..0x%016llx -> 0x%016llx\n", 901 + range.flags, range.cpu_addr, end, range.pci_addr); 902 + 903 + err = rcar_pcie_inbound_ranges(pcie, &range, &index); 904 + if (err) 905 + return err; 906 + } 907 + 908 + return 0; 909 + } 910 + 911 + static const struct of_device_id rcar_pcie_of_match[] = { 912 + { .compatible = "renesas,pcie-r8a7779", .data 
= rcar_pcie_hw_init_h1 }, 913 + { .compatible = "renesas,pcie-r8a7790", .data = rcar_pcie_hw_init }, 914 + { .compatible = "renesas,pcie-r8a7791", .data = rcar_pcie_hw_init }, 915 + {}, 916 + }; 917 + MODULE_DEVICE_TABLE(of, rcar_pcie_of_match); 918 + 919 + static int rcar_pcie_probe(struct platform_device *pdev) 920 + { 921 + struct rcar_pcie *pcie; 922 + unsigned int data; 923 + struct of_pci_range range; 924 + struct of_pci_range_parser parser; 925 + const struct of_device_id *of_id; 926 + int err, win = 0; 927 + int (*hw_init_fn)(struct rcar_pcie *); 928 + 929 + pcie = devm_kzalloc(&pdev->dev, sizeof(*pcie), GFP_KERNEL); 930 + if (!pcie) 931 + return -ENOMEM; 932 + 933 + pcie->dev = &pdev->dev; 934 + platform_set_drvdata(pdev, pcie); 935 + 936 + /* Get the bus range */ 937 + if (of_pci_parse_bus_range(pdev->dev.of_node, &pcie->busn)) { 938 + dev_err(&pdev->dev, "failed to parse bus-range property\n"); 939 + return -EINVAL; 940 + } 941 + 942 + if (of_pci_range_parser_init(&parser, pdev->dev.of_node)) { 943 + dev_err(&pdev->dev, "missing ranges property\n"); 944 + return -EINVAL; 945 + } 946 + 947 + err = rcar_pcie_get_resources(pdev, pcie); 948 + if (err < 0) { 949 + dev_err(&pdev->dev, "failed to request resources: %d\n", err); 950 + return err; 951 + } 952 + 953 + for_each_of_pci_range(&parser, &range) { 954 + of_pci_range_to_resource(&range, pdev->dev.of_node, 955 + &pcie->res[win++]); 956 + 957 + if (win > PCI_MAX_RESOURCES) 958 + break; 959 + } 960 + 961 + err = rcar_pcie_parse_map_dma_ranges(pcie, pdev->dev.of_node); 962 + if (err) 963 + return err; 964 + 965 + if (IS_ENABLED(CONFIG_PCI_MSI)) { 966 + err = rcar_pcie_enable_msi(pcie); 967 + if (err < 0) { 968 + dev_err(&pdev->dev, 969 + "failed to enable MSI support: %d\n", 970 + err); 971 + return err; 972 + } 973 + } 974 + 975 + of_id = of_match_device(rcar_pcie_of_match, pcie->dev); 976 + if (!of_id || !of_id->data) 977 + return -EINVAL; 978 + hw_init_fn = of_id->data; 979 + 980 + /* Failure to get a 
link might just be that no cards are inserted */ 981 + err = hw_init_fn(pcie); 982 + if (err) { 983 + dev_info(&pdev->dev, "PCIe link down\n"); 984 + return 0; 985 + } 986 + 987 + data = pci_read_reg(pcie, MACSR); 988 + dev_info(&pdev->dev, "PCIe x%d: link up\n", (data >> 20) & 0x3f); 989 + 990 + rcar_pcie_enable(pcie); 991 + 992 + return 0; 993 + } 994 + 995 + static struct platform_driver rcar_pcie_driver = { 996 + .driver = { 997 + .name = DRV_NAME, 998 + .owner = THIS_MODULE, 999 + .of_match_table = rcar_pcie_of_match, 1000 + .suppress_bind_attrs = true, 1001 + }, 1002 + .probe = rcar_pcie_probe, 1003 + }; 1004 + module_platform_driver(rcar_pcie_driver); 1005 + 1006 + MODULE_AUTHOR("Phil Edworthy <phil.edworthy@renesas.com>"); 1007 + MODULE_DESCRIPTION("Renesas R-Car PCIe driver"); 1008 + MODULE_LICENSE("GPLv2");
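The R-Car driver above hands out its 32 MSI vectors from a bitmap: `rcar_msi_alloc()` does `find_first_zero_bit()` + `set_bit()` under `chip->lock`, and `rcar_msi_free()` clears the bit. A minimal userspace sketch of that first-fit scheme (toy types, no locking; `msi_alloc`/`msi_free` are hypothetical names, not the driver's API):

```c
#include <stdint.h>

#define INT_PCI_MSI_NR 32  /* 32 MSI vectors, as in the driver */

struct msi_bitmap {
    uint32_t used;  /* bit n set => vector n is allocated */
};

/* First-fit allocation: models find_first_zero_bit() + set_bit() */
static int msi_alloc(struct msi_bitmap *chip)
{
    for (int n = 0; n < INT_PCI_MSI_NR; n++) {
        if (!(chip->used & (1u << n))) {
            chip->used |= 1u << n;
            return n;
        }
    }
    return -1;  /* the kernel code returns -ENOSPC */
}

/* Models clear_bit() in rcar_msi_free() */
static void msi_free(struct msi_bitmap *chip, int n)
{
    chip->used &= ~(1u << n);
}
```

First-fit means a freed vector is reused before any higher-numbered one is touched, which keeps hwirq numbers dense for `irq_find_mapping()` in the interrupt handler.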
+1 -1
drivers/pci/hotplug-pci.c
···
4 4 #include <linux/export.h>
5 5 #include "pci.h"
6 6
7 - int __ref pci_hp_add_bridge(struct pci_dev *dev)
7 + int pci_hp_add_bridge(struct pci_dev *dev)
8 8 {
9 9 struct pci_bus *parent = dev->bus;
10 10 int pass, busnr, start = parent->busn_res.start;
+2 -4
drivers/pci/hotplug/acpiphp_glue.c
···
41 41
42 42 #define pr_fmt(fmt) "acpiphp_glue: " fmt
43 43
44 - #include <linux/init.h>
45 44 #include <linux/module.h>
46 45
47 46 #include <linux/kernel.h>
···
500 501 * This function should be called per *physical slot*,
501 502 * not per each slot object in ACPI namespace.
502 503 */
503 - static void __ref enable_slot(struct acpiphp_slot *slot)
504 + static void enable_slot(struct acpiphp_slot *slot)
504 505 {
505 506 struct pci_dev *dev;
506 507 struct pci_bus *bus = slot->bus;
···
515 516 if (PCI_SLOT(dev->devfn) != slot->device)
516 517 continue;
517 518
518 - if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
519 - dev->hdr_type == PCI_HEADER_TYPE_CARDBUS) {
519 + if (pci_is_bridge(dev)) {
520 520 max = pci_scan_bridge(bus, dev, max, pass);
521 521 if (pass && dev->subordinate) {
522 522 check_hotplug_bridge(slot, dev);
+2 -3
drivers/pci/hotplug/cpci_hotplug_pci.c
···
250 250 * Device configuration functions
251 251 */
252 252
253 - int __ref cpci_configure_slot(struct slot *slot)
253 + int cpci_configure_slot(struct slot *slot)
254 254 {
255 255 struct pci_dev *dev;
256 256 struct pci_bus *parent;
···
289 289 list_for_each_entry(dev, &parent->devices, bus_list)
290 290 if (PCI_SLOT(dev->devfn) != PCI_SLOT(slot->devfn))
291 291 continue;
292 - if ((dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) ||
293 - (dev->hdr_type == PCI_HEADER_TYPE_CARDBUS))
292 + if (pci_is_bridge(dev))
294 293 pci_hp_add_bridge(dev);
295 294
296 295
+2 -1
drivers/pci/hotplug/cpqphp_ctrl.c
···
709 709 temp = temp->next;
710 710 }
711 711
712 - temp->next = max->next;
712 + if (temp)
713 + temp->next = max->next;
713 714 }
714 715
715 716 max->next = NULL;
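The cpqphp fix above guards against `temp` ending up NULL after the list walk (i.e. `max` has no predecessor). A hedged sketch of the pattern with a toy node type (`unlink_after_walk` is an illustrative name, not the driver's function):

```c
#include <stddef.h>

struct node { struct node *next; };

/*
 * Unlink 'max' by pointing its predecessor past it, as in
 * cpqphp_ctrl.c.  If the walk finds no predecessor (temp ends up
 * NULL), skip the store instead of dereferencing a NULL pointer --
 * exactly the one-line guard added in the hunk above.
 */
static void unlink_after_walk(struct node *head, struct node *max)
{
    struct node *temp = head;

    /* advance until temp is the node just before max, or NULL */
    while (temp && temp->next != max)
        temp = temp->next;

    if (temp)                     /* the added NULL check */
        temp->next = max->next;

    max->next = NULL;
}
```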
-1
drivers/pci/hotplug/cpqphp_nvram.c
···
34 34 #include <linux/workqueue.h>
35 35 #include <linux/pci.h>
36 36 #include <linux/pci_hotplug.h>
37 - #include <linux/init.h>
38 37 #include <asm/uaccess.h>
39 38 #include "cpqphp.h"
40 39 #include "cpqphp_nvram.h"
+1 -1
drivers/pci/hotplug/pciehp.h
···
127 127 #define HP_SUPR_RM(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_HPS)
128 128 #define EMI(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_EIP)
129 129 #define NO_CMD_CMPL(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_NCCS)
130 - #define PSN(ctrl) ((ctrl)->slot_cap >> 19)
130 + #define PSN(ctrl) (((ctrl)->slot_cap & PCI_EXP_SLTCAP_PSN) >> 19)
131 131
132 132 int pciehp_sysfs_enable_slot(struct slot *slot);
133 133 int pciehp_sysfs_disable_slot(struct slot *slot);
+2
drivers/pci/hotplug/pciehp_hpc.c
···
159 159
160 160 pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
161 161 if (slot_status & PCI_EXP_SLTSTA_CC) {
162 + pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
163 + PCI_EXP_SLTSTA_CC);
162 164 if (!ctrl->no_cmd_complete) {
163 165 /*
164 166 * After 1 sec and CMD_COMPLETED still not set, just
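The Slot Status bits acknowledged above, like Command Completed, are "write 1 to clear" (RW1C): writing the bit back clears the event, which is what the added `pcie_capability_write_word()` does before issuing a new command. A userspace model of RW1C semantics (the register is simulated as a plain variable):

```c
#include <stdint.h>

#define PCI_EXP_SLTSTA_CC 0x0010  /* Command Completed, per the PCIe spec */

/* RW1C: a write clears exactly the status bits that are 1 in 'val' */
static void sltsta_write(uint16_t *reg, uint16_t val)
{
    *reg &= ~val;
}

/* Acknowledge a (possibly spurious) "command completed" event */
static void ack_cmd_completed(uint16_t *sltsta)
{
    if (*sltsta & PCI_EXP_SLTSTA_CC)
        sltsta_write(sltsta, PCI_EXP_SLTSTA_CC);
}
```

Writing only the bit being acknowledged matters: writing back the whole status word would also clear other pending RW1C events that have not been handled yet.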
+1 -2
drivers/pci/hotplug/pciehp_pci.c
···
62 62 }
63 63
64 64 list_for_each_entry(dev, &parent->devices, bus_list)
65 - if ((dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) ||
66 - (dev->hdr_type == PCI_HEADER_TYPE_CARDBUS))
65 + if (pci_is_bridge(dev))
67 66 pci_hp_add_bridge(dev);
68 67
69 68 pci_assign_unassigned_bridge_resources(bridge);
+1 -2
drivers/pci/hotplug/pcihp_slot.c
···
160 160 (dev->class >> 8) == PCI_CLASS_BRIDGE_PCI)))
161 161 return;
162 162
163 - if (dev->bus)
164 - pcie_bus_configure_settings(dev->bus);
163 + pcie_bus_configure_settings(dev->bus);
165 164
166 165 memset(&hpp, 0, sizeof(hpp));
167 166 ret = pci_get_hp_params(dev, &hpp);
+1 -2
drivers/pci/hotplug/rpadlpar_core.c
···
157 157 }
158 158
159 159 /* Scan below the new bridge */
160 - if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
161 - dev->hdr_type == PCI_HEADER_TYPE_CARDBUS)
160 + if (pci_is_bridge(dev))
162 161 of_scan_pci_bridge(dev);
163 162
164 163 /* Map IO space for child bus, which may or may not succeed */
+9 -6
drivers/pci/hotplug/rpaphp_core.c
···
223 223 type_tmp = (char *) &types[1];
224 224
225 225 /* Iterate through parent properties, looking for my-drc-index */
226 - for (i = 0; i < indexes[0]; i++) {
226 + for (i = 0; i < be32_to_cpu(indexes[0]); i++) {
227 227 if ((unsigned int) indexes[i + 1] == *my_index) {
228 228 if (drc_name)
229 229 *drc_name = name_tmp;
230 230 if (drc_type)
231 231 *drc_type = type_tmp;
232 232 if (drc_index)
233 - *drc_index = *my_index;
233 + *drc_index = be32_to_cpu(*my_index);
234 234 if (drc_power_domain)
235 - *drc_power_domain = domains[i+1];
235 + *drc_power_domain = be32_to_cpu(domains[i+1]);
236 236 return 0;
237 237 }
238 238 name_tmp += (strlen(name_tmp) + 1);
···
321 321 /* register PCI devices */
322 322 name = (char *) &names[1];
323 323 type = (char *) &types[1];
324 - for (i = 0; i < indexes[0]; i++) {
324 + for (i = 0; i < be32_to_cpu(indexes[0]); i++) {
325 + int index;
325 326
326 - slot = alloc_slot_struct(dn, indexes[i + 1], name, power_domains[i + 1]);
327 + index = be32_to_cpu(indexes[i + 1]);
328 + slot = alloc_slot_struct(dn, index, name,
329 + be32_to_cpu(power_domains[i + 1]));
327 330 if (!slot)
328 331 return -ENOMEM;
329 332
330 333 slot->type = simple_strtoul(type, NULL, 10);
331 334
332 335 dbg("Found drc-index:0x%x drc-name:%s drc-type:%s\n",
333 - indexes[i + 1], name, type);
336 + index, name, type);
334 337
335 338 retval = rpaphp_enable_slot(slot);
336 339 if (!retval)
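The rphahp endianness fix above exists because Open Firmware properties such as drc-indexes are stored as big-endian 32-bit cells, so on a little-endian host each cell must pass through `be32_to_cpu()` before use. A standalone sketch of what that conversion does, reading the cell byte-by-byte so it works regardless of host endianness (real kernel code should use the kernel's helper, not this reimplementation):

```c
#include <stdint.h>

/* Interpret a 4-byte big-endian cell in host byte order. */
static uint32_t load_be32(const uint8_t *b)
{
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}
```

On a big-endian host `be32_to_cpu()` compiles to nothing, which is why this class of bug went unnoticed on traditional big-endian PowerPC and only surfaced on little-endian configurations.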
-1
drivers/pci/hotplug/s390_pci_hpc.c
···
15 15 #include <linux/slab.h>
16 16 #include <linux/pci.h>
17 17 #include <linux/pci_hotplug.h>
18 - #include <linux/init.h>
19 18 #include <asm/pci_debug.h>
20 19 #include <asm/sclp.h>
21 20
+2 -3
drivers/pci/hotplug/shpchp_pci.c
···
34 34 #include "../pci.h"
35 35 #include "shpchp.h"
36 36
37 - int __ref shpchp_configure_device(struct slot *p_slot)
37 + int shpchp_configure_device(struct slot *p_slot)
38 38 {
39 39 struct pci_dev *dev;
40 40 struct controller *ctrl = p_slot->ctrl;
···
64 64 list_for_each_entry(dev, &parent->devices, bus_list) {
65 65 if (PCI_SLOT(dev->devfn) != p_slot->device)
66 66 continue;
67 - if ((dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) ||
68 - (dev->hdr_type == PCI_HEADER_TYPE_CARDBUS))
67 + if (pci_is_bridge(dev))
69 68 pci_hp_add_bridge(dev);
70 69 }
71 70
+1 -1
drivers/pci/iov.c
···
106 106 pci_device_add(virtfn, virtfn->bus);
107 107 mutex_unlock(&iov->dev->sriov->lock);
108 108
109 - rc = pci_bus_add_device(virtfn);
109 + pci_bus_add_device(virtfn);
110 110 sprintf(buf, "virtfn%u", id);
111 111 rc = sysfs_create_link(&dev->dev.kobj, &virtfn->dev.kobj, buf);
112 112 if (rc)
+39 -57
drivers/pci/msi.c
··· 10 10 #include <linux/mm.h> 11 11 #include <linux/irq.h> 12 12 #include <linux/interrupt.h> 13 - #include <linux/init.h> 14 13 #include <linux/export.h> 15 14 #include <linux/ioport.h> 16 15 #include <linux/pci.h> ··· 543 544 if (!msi_attrs) 544 545 return -ENOMEM; 545 546 list_for_each_entry(entry, &pdev->msi_list, list) { 546 - char *name = kmalloc(20, GFP_KERNEL); 547 - if (!name) 548 - goto error_attrs; 549 - 550 547 msi_dev_attr = kzalloc(sizeof(*msi_dev_attr), GFP_KERNEL); 551 - if (!msi_dev_attr) { 552 - kfree(name); 548 + if (!msi_dev_attr) 553 549 goto error_attrs; 554 - } 550 + msi_attrs[count] = &msi_dev_attr->attr; 555 551 556 - sprintf(name, "%d", entry->irq); 557 552 sysfs_attr_init(&msi_dev_attr->attr); 558 - msi_dev_attr->attr.name = name; 553 + msi_dev_attr->attr.name = kasprintf(GFP_KERNEL, "%d", 554 + entry->irq); 555 + if (!msi_dev_attr->attr.name) 556 + goto error_attrs; 559 557 msi_dev_attr->attr.mode = S_IRUGO; 560 558 msi_dev_attr->show = msi_mode_show; 561 - msi_attrs[count] = &msi_dev_attr->attr; 562 559 ++count; 563 560 } 564 561 ··· 878 883 } 879 884 EXPORT_SYMBOL(pci_msi_vec_count); 880 885 881 - /** 882 - * pci_enable_msi_block - configure device's MSI capability structure 883 - * @dev: device to configure 884 - * @nvec: number of interrupts to configure 885 - * 886 - * Allocate IRQs for a device with the MSI capability. 887 - * This function returns a negative errno if an error occurs. If it 888 - * is unable to allocate the number of interrupts requested, it returns 889 - * the number of interrupts it might be able to allocate. If it successfully 890 - * allocates at least the number of interrupts requested, it returns 0 and 891 - * updates the @dev's irq member to the lowest new interrupt number; the 892 - * other interrupt numbers allocated to this device are consecutive. 
893 - */ 894 - int pci_enable_msi_block(struct pci_dev *dev, int nvec) 895 - { 896 - int status, maxvec; 897 - 898 - if (dev->current_state != PCI_D0) 899 - return -EINVAL; 900 - 901 - maxvec = pci_msi_vec_count(dev); 902 - if (maxvec < 0) 903 - return maxvec; 904 - if (nvec > maxvec) 905 - return maxvec; 906 - 907 - status = pci_msi_check_device(dev, nvec, PCI_CAP_ID_MSI); 908 - if (status) 909 - return status; 910 - 911 - WARN_ON(!!dev->msi_enabled); 912 - 913 - /* Check whether driver already requested MSI-X irqs */ 914 - if (dev->msix_enabled) { 915 - dev_info(&dev->dev, "can't enable MSI " 916 - "(MSI-X already enabled)\n"); 917 - return -EINVAL; 918 - } 919 - 920 - status = msi_capability_init(dev, nvec); 921 - return status; 922 - } 923 - EXPORT_SYMBOL(pci_enable_msi_block); 924 - 925 886 void pci_msi_shutdown(struct pci_dev *dev) 926 887 { 927 888 struct msi_desc *desc; ··· 1083 1132 **/ 1084 1133 int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec) 1085 1134 { 1086 - int nvec = maxvec; 1135 + int nvec; 1087 1136 int rc; 1137 + 1138 + if (dev->current_state != PCI_D0) 1139 + return -EINVAL; 1140 + 1141 + WARN_ON(!!dev->msi_enabled); 1142 + 1143 + /* Check whether driver already requested MSI-X irqs */ 1144 + if (dev->msix_enabled) { 1145 + dev_info(&dev->dev, 1146 + "can't enable MSI (MSI-X already enabled)\n"); 1147 + return -EINVAL; 1148 + } 1088 1149 1089 1150 if (maxvec < minvec) 1090 1151 return -ERANGE; 1091 1152 1153 + nvec = pci_msi_vec_count(dev); 1154 + if (nvec < 0) 1155 + return nvec; 1156 + else if (nvec < minvec) 1157 + return -EINVAL; 1158 + else if (nvec > maxvec) 1159 + nvec = maxvec; 1160 + 1092 1161 do { 1093 - rc = pci_enable_msi_block(dev, nvec); 1162 + rc = pci_msi_check_device(dev, nvec, PCI_CAP_ID_MSI); 1163 + if (rc < 0) { 1164 + return rc; 1165 + } else if (rc > 0) { 1166 + if (rc < minvec) 1167 + return -ENOSPC; 1168 + nvec = rc; 1169 + } 1170 + } while (rc); 1171 + 1172 + do { 1173 + rc = msi_capability_init(dev, 
nvec); 1094 1174 if (rc < 0) { 1095 1175 return rc; 1096 1176 } else if (rc > 0) {
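The reworked `pci_enable_msi_range()` above asks for `maxvec` vectors and, whenever the MSI machinery reports that only a smaller count is possible (a positive return), retries with that count, failing with `-ENOSPC` once it would drop below `minvec`. A simplified userspace model of that negotiation, with the two kernel loops (capability check and `msi_capability_init()`) merged into one, and `avail` standing in for what the device's MSI capability supports:

```c
/*
 * 0 on success; a positive value is a "retry with this many" hint,
 * matching the convention pci_enable_msi_range() relies on.
 */
static int try_alloc(int avail, int nvec)
{
    return nvec <= avail ? 0 : avail;
}

/* Returns the vector count granted, or a negative errno */
static int enable_msi_range(int avail, int minvec, int maxvec)
{
    int nvec = maxvec, rc;

    if (maxvec < minvec)
        return -34;               /* -ERANGE, as in the kernel code */

    do {
        rc = try_alloc(avail, nvec);
        if (rc > 0) {
            if (rc < minvec)
                return -28;       /* -ENOSPC */
            nvec = rc;            /* shrink the request and retry */
        }
    } while (rc);

    return nvec;
}
```

The point of the range API is that the caller states its real constraints once, instead of open-coding a retry loop around the old `pci_enable_msi_block()`.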
+1 -7
drivers/pci/pci-acpi.c
···
309 309 bool check_children;
310 310 u64 addr;
311 311
312 - /*
313 - * pci_is_bridge() is not suitable here, because pci_dev->subordinate
314 - * is set only after acpi_pci_find_device() has been called for the
315 - * given device.
316 - */
317 - check_children = pci_dev->hdr_type == PCI_HEADER_TYPE_BRIDGE
318 - || pci_dev->hdr_type == PCI_HEADER_TYPE_CARDBUS;
312 + check_children = pci_is_bridge(pci_dev);
319 313 /* Please ref to ACPI spec for the syntax of _ADR */
320 314 addr = (PCI_SLOT(pci_dev->devfn) << 16) | PCI_FUNC(pci_dev->devfn);
321 315 return acpi_find_child_device(ACPI_COMPANION(dev->parent), addr,
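The reason the old comment in this hunk could be deleted is that the series redefines `pci_is_bridge()` to test the config-space header type rather than `dev->subordinate`, so it now works before the bus behind the bridge has been scanned (the old subordinate-based helper survives as `pci_has_subordinate()`). A sketch of the header-type test, with the standard header-type values from the PCI spec:

```c
#include <stdint.h>

#define PCI_HEADER_TYPE_NORMAL  0
#define PCI_HEADER_TYPE_BRIDGE  1
#define PCI_HEADER_TYPE_CARDBUS 2

/* Header-type based test, usable before dev->subordinate exists */
static int is_bridge(uint8_t hdr_type)
{
    return hdr_type == PCI_HEADER_TYPE_BRIDGE ||
           hdr_type == PCI_HEADER_TYPE_CARDBUS;
}
```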
+48 -10
drivers/pci/pci-driver.c
··· 107 107 subdevice=PCI_ANY_ID, class=0, class_mask=0; 108 108 unsigned long driver_data=0; 109 109 int fields=0; 110 - int retval; 110 + int retval = 0; 111 111 112 112 fields = sscanf(buf, "%x %x %x %x %x %x %lx", 113 113 &vendor, &device, &subvendor, &subdevice, 114 114 &class, &class_mask, &driver_data); 115 115 if (fields < 2) 116 116 return -EINVAL; 117 + 118 + if (fields != 7) { 119 + struct pci_dev *pdev = kzalloc(sizeof(*pdev), GFP_KERNEL); 120 + if (!pdev) 121 + return -ENOMEM; 122 + 123 + pdev->vendor = vendor; 124 + pdev->device = device; 125 + pdev->subsystem_vendor = subvendor; 126 + pdev->subsystem_device = subdevice; 127 + pdev->class = class; 128 + 129 + if (pci_match_id(pdrv->id_table, pdev)) 130 + retval = -EEXIST; 131 + 132 + kfree(pdev); 133 + 134 + if (retval) 135 + return retval; 136 + } 117 137 118 138 /* Only accept driver_data values that match an existing id_table 119 139 entry */ ··· 236 216 return NULL; 237 217 } 238 218 219 + static const struct pci_device_id pci_device_id_any = { 220 + .vendor = PCI_ANY_ID, 221 + .device = PCI_ANY_ID, 222 + .subvendor = PCI_ANY_ID, 223 + .subdevice = PCI_ANY_ID, 224 + }; 225 + 239 226 /** 240 227 * pci_match_device - Tell if a PCI device structure has a matching PCI device id structure 241 228 * @drv: the PCI driver to match against ··· 256 229 struct pci_dev *dev) 257 230 { 258 231 struct pci_dynid *dynid; 232 + const struct pci_device_id *found_id = NULL; 233 + 234 + /* When driver_override is set, only bind to the matching driver */ 235 + if (dev->driver_override && strcmp(dev->driver_override, drv->name)) 236 + return NULL; 259 237 260 238 /* Look at the dynamic ids first, before the static ones */ 261 239 spin_lock(&drv->dynids.lock); 262 240 list_for_each_entry(dynid, &drv->dynids.list, node) { 263 241 if (pci_match_one_device(&dynid->id, dev)) { 264 - spin_unlock(&drv->dynids.lock); 265 - return &dynid->id; 242 + found_id = &dynid->id; 243 + break; 266 244 } 267 245 } 268 246 
spin_unlock(&drv->dynids.lock); 269 247 270 - return pci_match_id(drv->id_table, dev); 248 + if (!found_id) 249 + found_id = pci_match_id(drv->id_table, dev); 250 + 251 + /* driver_override will always match, send a dummy id */ 252 + if (!found_id && dev->driver_override) 253 + found_id = &pci_device_id_any; 254 + 255 + return found_id; 271 256 } 272 257 273 258 struct drv_dev_and_id { ··· 619 580 { 620 581 pci_fixup_device(pci_fixup_resume, pci_dev); 621 582 622 - if (!pci_is_bridge(pci_dev)) 583 + if (!pci_has_subordinate(pci_dev)) 623 584 pci_enable_wake(pci_dev, PCI_D0, false); 624 585 } 625 586 626 587 static void pci_pm_default_suspend(struct pci_dev *pci_dev) 627 588 { 628 589 /* Disable non-bridge devices without PM support */ 629 - if (!pci_is_bridge(pci_dev)) 590 + if (!pci_has_subordinate(pci_dev)) 630 591 pci_disable_enabled_device(pci_dev); 631 592 } 632 593 ··· 756 717 757 718 if (!pci_dev->state_saved) { 758 719 pci_save_state(pci_dev); 759 - if (!pci_is_bridge(pci_dev)) 720 + if (!pci_has_subordinate(pci_dev)) 760 721 pci_prepare_to_sleep(pci_dev); 761 722 } 762 723 ··· 1010 971 return error; 1011 972 } 1012 973 1013 - if (!pci_dev->state_saved && !pci_is_bridge(pci_dev)) 974 + if (!pci_dev->state_saved && !pci_has_subordinate(pci_dev)) 1014 975 pci_prepare_to_sleep(pci_dev); 1015 976 1016 977 /* ··· 1364 1325 return -ENODEV; 1365 1326 1366 1327 pdev = to_pci_dev(dev); 1367 - if (!pdev) 1368 - return -ENODEV; 1369 1328 1370 1329 if (add_uevent_var(env, "PCI_CLASS=%04X", pdev->class)) 1371 1330 return -ENOMEM; ··· 1384 1347 (u8)(pdev->class >> 16), (u8)(pdev->class >> 8), 1385 1348 (u8)(pdev->class))) 1386 1349 return -ENOMEM; 1350 + 1387 1351 return 0; 1388 1352 } 1389 1353
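The `pci_match_device()` rework above gives `driver_override` two effects: a set override pins the device to the driver with that name (any other driver gets no match), and it forces a match even when the driver's ID table would not claim the device (via the dummy `pci_device_id_any`). A userspace model of just that decision (the `id_table_hit` flag stands in for the dynamic-ID and static-table lookups):

```c
#include <string.h>

/*
 * 1 = the driver may bind, 0 = it may not.
 * 'override' is the device's driver_override string or NULL.
 */
static int matches(const char *override, const char *drv_name,
                   int id_table_hit)
{
    if (override)
        return strcmp(override, drv_name) == 0;  /* pin to this driver */
    return id_table_hit;                          /* normal ID matching */
}
```

This is what lets a device be handed to, e.g., vfio-pci without vfio-pci carrying an ID table entry for it, while simultaneously keeping the regular driver from reclaiming it.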
+58 -10
drivers/pci/pci-sysfs.c
··· 29 29 #include <linux/slab.h> 30 30 #include <linux/vgaarb.h> 31 31 #include <linux/pm_runtime.h> 32 + #include <linux/of.h> 32 33 #include "pci.h" 33 34 34 35 static int sysfs_initialized; /* = 0 */ ··· 417 416 static DEVICE_ATTR_RW(d3cold_allowed); 418 417 #endif 419 418 419 + #ifdef CONFIG_OF 420 + static ssize_t devspec_show(struct device *dev, 421 + struct device_attribute *attr, char *buf) 422 + { 423 + struct pci_dev *pdev = to_pci_dev(dev); 424 + struct device_node *np = pci_device_to_OF_node(pdev); 425 + 426 + if (np == NULL || np->full_name == NULL) 427 + return 0; 428 + return sprintf(buf, "%s", np->full_name); 429 + } 430 + static DEVICE_ATTR_RO(devspec); 431 + #endif 432 + 420 433 #ifdef CONFIG_PCI_IOV 421 434 static ssize_t sriov_totalvfs_show(struct device *dev, 422 435 struct device_attribute *attr, ··· 514 499 sriov_numvfs_show, sriov_numvfs_store); 515 500 #endif /* CONFIG_PCI_IOV */ 516 501 502 + static ssize_t driver_override_store(struct device *dev, 503 + struct device_attribute *attr, 504 + const char *buf, size_t count) 505 + { 506 + struct pci_dev *pdev = to_pci_dev(dev); 507 + char *driver_override, *old = pdev->driver_override, *cp; 508 + 509 + if (count > PATH_MAX) 510 + return -EINVAL; 511 + 512 + driver_override = kstrndup(buf, count, GFP_KERNEL); 513 + if (!driver_override) 514 + return -ENOMEM; 515 + 516 + cp = strchr(driver_override, '\n'); 517 + if (cp) 518 + *cp = '\0'; 519 + 520 + if (strlen(driver_override)) { 521 + pdev->driver_override = driver_override; 522 + } else { 523 + kfree(driver_override); 524 + pdev->driver_override = NULL; 525 + } 526 + 527 + kfree(old); 528 + 529 + return count; 530 + } 531 + 532 + static ssize_t driver_override_show(struct device *dev, 533 + struct device_attribute *attr, char *buf) 534 + { 535 + struct pci_dev *pdev = to_pci_dev(dev); 536 + 537 + return sprintf(buf, "%s\n", pdev->driver_override); 538 + } 539 + static DEVICE_ATTR_RW(driver_override); 540 + 517 541 static struct attribute 
*pci_dev_attrs[] = { 518 542 &dev_attr_resource.attr, 519 543 &dev_attr_vendor.attr, ··· 575 521 #if defined(CONFIG_PM_RUNTIME) && defined(CONFIG_ACPI) 576 522 &dev_attr_d3cold_allowed.attr, 577 523 #endif 524 + #ifdef CONFIG_OF 525 + &dev_attr_devspec.attr, 526 + #endif 527 + &dev_attr_driver_override.attr, 578 528 NULL, 579 529 }; 580 530 ··· 1313 1255 .write = pci_write_config, 1314 1256 }; 1315 1257 1316 - int __weak pcibios_add_platform_entries(struct pci_dev *dev) 1317 - { 1318 - return 0; 1319 - } 1320 - 1321 1258 static ssize_t reset_store(struct device *dev, 1322 1259 struct device_attribute *attr, const char *buf, 1323 1260 size_t count) ··· 1427 1374 } 1428 1375 pdev->rom_attr = attr; 1429 1376 } 1430 - 1431 - /* add platform-specific attributes */ 1432 - retval = pcibios_add_platform_entries(pdev); 1433 - if (retval) 1434 - goto err_rom_file; 1435 1377 1436 1378 /* add sysfs entries for various capabilities */ 1437 1379 retval = pci_create_capabilities_sysfs(pdev);
+33 -1
drivers/pci/pci.c
··· 1468 1468 */ 1469 1469 void __weak pcibios_disable_device (struct pci_dev *dev) {} 1470 1470 1471 + /** 1472 + * pcibios_penalize_isa_irq - penalize an ISA IRQ 1473 + * @irq: ISA IRQ to penalize 1474 + * @active: IRQ active or not 1475 + * 1476 + * Permits the platform to provide architecture-specific functionality when 1477 + * penalizing ISA IRQs. This is the default implementation. Architecture 1478 + * implementations can override this. 1479 + */ 1480 + void __weak pcibios_penalize_isa_irq(int irq, int active) {} 1481 + 1471 1482 static void do_pci_disable_device(struct pci_dev *dev) 1472 1483 { 1473 1484 u16 pci_command; ··· 3317 3306 pci_cfg_access_unlock(dev); 3318 3307 } 3319 3308 3309 + /** 3310 + * pci_reset_notify - notify device driver of reset 3311 + * @dev: device to be notified of reset 3312 + * @prepare: 'true' if device is about to be reset; 'false' if reset attempt 3313 + * completed 3314 + * 3315 + * Must be called prior to device access being disabled and after device 3316 + * access is restored. 3317 + */ 3318 + static void pci_reset_notify(struct pci_dev *dev, bool prepare) 3319 + { 3320 + const struct pci_error_handlers *err_handler = 3321 + dev->driver ? dev->driver->err_handler : NULL; 3322 + if (err_handler && err_handler->reset_notify) 3323 + err_handler->reset_notify(dev, prepare); 3324 + } 3325 + 3320 3326 static void pci_dev_save_and_disable(struct pci_dev *dev) 3321 3327 { 3328 + pci_reset_notify(dev, true); 3329 + 3322 3330 /* 3323 3331 * Wake-up device prior to save. 
PM registers default to D0 after 3324 3332 * reset and a simple register restore doesn't reliably return ··· 3359 3329 static void pci_dev_restore(struct pci_dev *dev) 3360 3330 { 3361 3331 pci_restore_state(dev); 3332 + pci_reset_notify(dev, false); 3362 3333 } 3363 3334 3364 3335 static int pci_dev_reset(struct pci_dev *dev, int probe) ··· 3376 3345 3377 3346 return rc; 3378 3347 } 3348 + 3379 3349 /** 3380 3350 * __pci_reset_function - reset a PCI device function 3381 3351 * @dev: PCI device to reset ··· 4158 4126 u16 cmd; 4159 4127 int rc; 4160 4128 4161 - WARN_ON((flags & PCI_VGA_STATE_CHANGE_DECODES) & (command_bits & ~(PCI_COMMAND_IO|PCI_COMMAND_MEMORY))); 4129 + WARN_ON((flags & PCI_VGA_STATE_CHANGE_DECODES) && (command_bits & ~(PCI_COMMAND_IO|PCI_COMMAND_MEMORY))); 4162 4130 4163 4131 /* ARCH specific VGA enables */ 4164 4132 rc = pci_set_vga_state_arch(dev, decode, command_bits, flags);
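The new `pci_reset_notify()` above calls an optional `err_handler->reset_notify(dev, prepare)` before and after the reset, with a NULL check at every level of the pointer chain so unbound devices and drivers without error handlers are silently skipped. A minimal userspace model of that dispatch (toy struct names mirroring, not reproducing, the kernel's):

```c
#include <stdbool.h>
#include <stddef.h>

struct err_handlers {
    void (*reset_notify)(int *events, bool prepare);
};

struct driver {
    const struct err_handlers *err_handler;
};

struct device {
    struct driver *driver;   /* NULL if no driver is bound */
};

/* Mirrors pci_reset_notify(): every pointer on the path may be NULL */
static void reset_notify(struct device *dev, int *events, bool prepare)
{
    const struct err_handlers *eh =
        dev->driver ? dev->driver->err_handler : NULL;

    if (eh && eh->reset_notify)
        eh->reset_notify(events, prepare);
}

/* Example callback: count prepare/done notifications */
static void count_notify(int *events, bool prepare)
{
    events[prepare ? 0 : 1]++;   /* [0] = prepare, [1] = done */
}
```

Calling the hook on both sides of the reset is what lets a driver like NVMe quiesce I/O before the device disappears and re-arm afterwards.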
+5 -5
drivers/pci/pci.h
···
77 77 pm_wakeup_event(&dev->dev, 100);
78 78 }
79 79
80 - static inline bool pci_is_bridge(struct pci_dev *pci_dev)
80 + static inline bool pci_has_subordinate(struct pci_dev *pci_dev)
81 81 {
82 82 return !!(pci_dev->subordinate);
83 83 }
···
201 201 struct resource *res, unsigned int reg);
202 202 int pci_resource_bar(struct pci_dev *dev, int resno, enum pci_bar_type *type);
203 203 void pci_configure_ari(struct pci_dev *dev);
204 - void __ref __pci_bus_size_bridges(struct pci_bus *bus,
204 + void __pci_bus_size_bridges(struct pci_bus *bus,
205 205 struct list_head *realloc_head);
206 - void __ref __pci_bus_assign_resources(const struct pci_bus *bus,
207 - struct list_head *realloc_head,
208 - struct list_head *fail_head);
206 + void __pci_bus_assign_resources(const struct pci_bus *bus,
207 + struct list_head *realloc_head,
208 + struct list_head *fail_head);
209 209
210 210 /**
211 211 * pci_ari_enabled - query ARI forwarding status
+6 -3
drivers/pci/pcie/portdrv_core.c
···
 	for (i = 0; i < nr_entries; i++)
 		msix_entries[i].entry = i;
 
-	status = pci_enable_msix(dev, msix_entries, nr_entries);
+	status = pci_enable_msix_exact(dev, msix_entries, nr_entries);
 	if (status)
 		goto Exit;
···
 		pci_disable_msix(dev);
 
 		/* Now allocate the MSI-X vectors for real */
-		status = pci_enable_msix(dev, msix_entries, nvec);
+		status = pci_enable_msix_exact(dev, msix_entries, nvec);
 		if (status)
 			goto Exit;
 	}
···
 	/*
 	 * Initialize service irqs. Don't use service devices that
 	 * require interrupts if there is no way to generate them.
+	 * However, some drivers may have a polling mode (e.g. pciehp_poll_mode)
+	 * that can be used in the absence of irqs.  Allow them to determine
+	 * if that is to be used.
 	 */
 	status = init_service_irqs(dev, irqs, capabilities);
 	if (status) {
-		capabilities &= PCIE_PORT_SERVICE_VC;
+		capabilities &= PCIE_PORT_SERVICE_VC | PCIE_PORT_SERVICE_HP;
 		if (!capabilities)
 			goto error_disable;
 	}
+74 -27
drivers/pci/probe.c
···
 			  struct resource *res, unsigned int pos)
 {
 	u32 l, sz, mask;
+	u64 l64, sz64, mask64;
 	u16 orig_cmd;
 	struct pci_bus_region region, inverted_region;
-	bool bar_too_big = false, bar_disabled = false;
+	bool bar_too_big = false, bar_too_high = false, bar_invalid = false;
 
 	mask = type ? PCI_ROM_ADDRESS_MASK : ~0;
···
 	}
 
 	if (res->flags & IORESOURCE_MEM_64) {
-		u64 l64 = l;
-		u64 sz64 = sz;
-		u64 mask64 = mask | (u64)~0 << 32;
+		l64 = l;
+		sz64 = sz;
+		mask64 = mask | (u64)~0 << 32;
 
 		pci_read_config_dword(dev, pos + 4, &l);
 		pci_write_config_dword(dev, pos + 4, ~0);
···
 		if (!sz64)
 			goto fail;
 
-		if ((sizeof(resource_size_t) < 8) && (sz64 > 0x100000000ULL)) {
+		if ((sizeof(dma_addr_t) < 8 || sizeof(resource_size_t) < 8) &&
+		    sz64 > 0x100000000ULL) {
+			res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED;
+			res->start = 0;
+			res->end = 0;
 			bar_too_big = true;
-			goto fail;
+			goto out;
 		}
 
-		if ((sizeof(resource_size_t) < 8) && l) {
-			/* Address above 32-bit boundary; disable the BAR */
-			pci_write_config_dword(dev, pos, 0);
-			pci_write_config_dword(dev, pos + 4, 0);
+		if ((sizeof(dma_addr_t) < 8) && l) {
+			/* Above 32-bit boundary; try to reallocate */
 			res->flags |= IORESOURCE_UNSET;
-			region.start = 0;
-			region.end = sz64;
-			bar_disabled = true;
+			res->start = 0;
+			res->end = sz64;
+			bar_too_high = true;
+			goto out;
 		} else {
 			region.start = l64;
 			region.end = l64 + sz64;
···
 	 * be claimed by the device.
 	 */
 	if (inverted_region.start != region.start) {
-		dev_info(&dev->dev, "reg 0x%x: initial BAR value %pa invalid; forcing reassignment\n",
-			 pos, &region.start);
 		res->flags |= IORESOURCE_UNSET;
-		res->end -= res->start;
 		res->start = 0;
+		res->end = region.end - region.start;
+		bar_invalid = true;
 	}
 
 	goto out;
···
 	pci_write_config_word(dev, PCI_COMMAND, orig_cmd);
 
 	if (bar_too_big)
-		dev_err(&dev->dev, "reg 0x%x: can't handle 64-bit BAR\n", pos);
-	if (res->flags && !bar_disabled)
+		dev_err(&dev->dev, "reg 0x%x: can't handle BAR larger than 4GB (size %#010llx)\n",
+			pos, (unsigned long long) sz64);
+	if (bar_too_high)
+		dev_info(&dev->dev, "reg 0x%x: can't handle BAR above 4G (bus address %#010llx)\n",
+			 pos, (unsigned long long) l64);
+	if (bar_invalid)
+		dev_info(&dev->dev, "reg 0x%x: initial BAR value %#010llx invalid\n",
+			 pos, (unsigned long long) region.start);
+	if (res->flags)
 		dev_printk(KERN_DEBUG, &dev->dev, "reg 0x%x: %pR\n", pos, res);
 
 	return (res->flags & IORESOURCE_MEM_64) ? 1 : 0;
···
 
 	if (dev->transparent) {
 		pci_bus_for_each_resource(child->parent, res, i) {
-			if (res) {
+			if (res && res->flags) {
 				pci_bus_add_resource(child, res,
 						     PCI_SUBTRACTIVE_DECODE);
 				dev_printk(KERN_DEBUG, &dev->dev,
···
 	return child;
 }
 
-struct pci_bus *__ref pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev, int busnr)
+struct pci_bus *pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev, int busnr)
 {
 	struct pci_bus *child;
···
 
 
 /**
+ * pci_ext_cfg_is_aliased - is ext config space just an alias of std config?
+ * @dev: PCI device
+ *
+ * PCI Express to PCI/PCI-X Bridge Specification, rev 1.0, 4.1.4 says that
+ * when forwarding a type1 configuration request the bridge must check that
+ * the extended register address field is zero.  The bridge is not permitted
+ * to forward the transactions and must handle it as an Unsupported Request.
+ * Some bridges do not follow this rule and simply drop the extended register
+ * bits, resulting in the standard config space being aliased, every 256
+ * bytes across the entire configuration space.  Test for this condition by
+ * comparing the first dword of each potential alias to the vendor/device ID.
+ * Known offenders:
+ *   ASM1083/1085 PCIe-to-PCI Reversible Bridge (1b21:1080, rev 01 & 03)
+ *   AMD/ATI SBx00 PCI to PCI Bridge (1002:4384, rev 40)
+ */
+static bool pci_ext_cfg_is_aliased(struct pci_dev *dev)
+{
+#ifdef CONFIG_PCI_QUIRKS
+	int pos;
+	u32 header, tmp;
+
+	pci_read_config_dword(dev, PCI_VENDOR_ID, &header);
+
+	for (pos = PCI_CFG_SPACE_SIZE;
+	     pos < PCI_CFG_SPACE_EXP_SIZE; pos += PCI_CFG_SPACE_SIZE) {
+		if (pci_read_config_dword(dev, pos, &tmp) != PCIBIOS_SUCCESSFUL
+		    || header != tmp)
+			return false;
+	}
+
+	return true;
+#else
+	return false;
+#endif
+}
+
+/**
  * pci_cfg_space_size - get the configuration space size of the PCI device.
  * @dev: PCI device
  *
···
 
 	if (pci_read_config_dword(dev, pos, &status) != PCIBIOS_SUCCESSFUL)
 		goto fail;
-	if (status == 0xffffffff)
+	if (status == 0xffffffff || pci_ext_cfg_is_aliased(dev))
 		goto fail;
 
 	return PCI_CFG_SPACE_EXP_SIZE;
···
 	pci_release_of_node(pci_dev);
 	pcibios_release_device(pci_dev);
 	pci_bus_put(pci_dev->bus);
+	kfree(pci_dev->driver_override);
 	kfree(pci_dev);
 }
···
 	WARN_ON(ret < 0);
 }
 
-struct pci_dev *__ref pci_scan_single_device(struct pci_bus *bus, int devfn)
+struct pci_dev *pci_scan_single_device(struct pci_bus *bus, int devfn)
 {
 	struct pci_dev *dev;
···
  */
 void pcie_bus_configure_settings(struct pci_bus *bus)
 {
-	u8 smpss;
+	u8 smpss = 0;
 
 	if (!bus->self)
 		return;
···
 
 	for (pass=0; pass < 2; pass++)
 		list_for_each_entry(dev, &bus->devices, bus_list) {
-			if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
-			    dev->hdr_type == PCI_HEADER_TYPE_CARDBUS)
+			if (pci_is_bridge(dev))
 				max = pci_scan_bridge(bus, dev, max, pass);
 		}
···
  *
  * Returns the max number of subordinate bus discovered.
  */
-unsigned int __ref pci_rescan_bus_bridge_resize(struct pci_dev *bridge)
+unsigned int pci_rescan_bus_bridge_resize(struct pci_dev *bridge)
 {
 	unsigned int max;
 	struct pci_bus *bus = bridge->subordinate;
···
  *
  * Returns the max number of subordinate bus discovered.
  */
-unsigned int __ref pci_rescan_bus(struct pci_bus *bus)
+unsigned int pci_rescan_bus(struct pci_bus *bus)
 {
 	unsigned int max;
 
+11
drivers/pci/quirks.c
···
 }
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0152, disable_igfx_irq);
 
 /*
  * PCI devices which are on Intel chips can skip the 10ms delay
···
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_CHELSIO, 0x0030,
 			 quirk_broken_intx_masking);
 DECLARE_PCI_FIXUP_HEADER(0x1814, 0x0601, /* Ralink RT2800 802.11n PCI */
+			 quirk_broken_intx_masking);
+/*
+ * Realtek RTL8169 PCI Gigabit Ethernet Controller (rev 10)
+ * Subsystem: Realtek RTL8169/8110 Family PCI Gigabit Ethernet NIC
+ *
+ * RTL8110SC - Fails under PCI device assignment using DisINTx masking.
+ */
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_REALTEK, 0x8169,
 			 quirk_broken_intx_masking);
 
 static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f,
···
 	/* Wildcat PCH */
 	0x9c90, 0x9c91, 0x9c92, 0x9c93, 0x9c94, 0x9c95, 0x9c96, 0x9c97,
 	0x9c98, 0x9c99, 0x9c9a, 0x9c9b,
+	/* Patsburg (X79) PCH */
+	0x1d10, 0x1d12, 0x1d14, 0x1d16, 0x1d18, 0x1d1a, 0x1d1c, 0x1d1e,
 };
 
 static bool pci_quirk_intel_pch_acs_match(struct pci_dev *dev)
-1
drivers/pci/search.c
···
  * Copyright (C) 2003 -- 2004 Greg Kroah-Hartman <greg@kroah.com>
  */
 
-#include <linux/init.h>
 #include <linux/pci.h>
 #include <linux/slab.h>
 #include <linux/module.h>
+168 -83
drivers/pci/setup-bus.c
···
    bus resource of a given type. Note: we intentionally skip
    the bus resources which have already been assigned (that is,
    have non-NULL parent resource). */
-static struct resource *find_free_bus_resource(struct pci_bus *bus, unsigned long type)
+static struct resource *find_free_bus_resource(struct pci_bus *bus,
+			 unsigned long type_mask, unsigned long type)
 {
 	int i;
 	struct resource *r;
-	unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM |
-				  IORESOURCE_PREFETCH;
 
 	pci_bus_for_each_resource(bus, r, i) {
 		if (r == &ioport_resource || r == &iomem_resource)
···
 		resource_size_t add_size, struct list_head *realloc_head)
 {
 	struct pci_dev *dev;
-	struct resource *b_res = find_free_bus_resource(bus, IORESOURCE_IO);
+	struct resource *b_res = find_free_bus_resource(bus, IORESOURCE_IO,
+							IORESOURCE_IO);
 	resource_size_t size = 0, size0 = 0, size1 = 0;
 	resource_size_t children_add_size = 0;
 	resource_size_t min_align, align;
···
  * @bus : the bus
  * @mask: mask the resource flag, then compare it with type
  * @type: the type of free resource from bridge
+ * @type2: second match type
+ * @type3: third match type
  * @min_size : the minimum memory window that must to be allocated
  * @add_size : additional optional memory window
  * @realloc_head : track the additional memory window on this list
  *
  * Calculate the size of the bus and minimal alignment which
  * guarantees that all child resources fit in this size.
+ *
+ * Returns -ENOSPC if there's no available bus resource of the desired type.
+ * Otherwise, sets the bus resource start/end to indicate the required
+ * size, adds things to realloc_head (if supplied), and returns 0.
  */
 static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
-			 unsigned long type, resource_size_t min_size,
-			 resource_size_t add_size,
-			 struct list_head *realloc_head)
+			 unsigned long type, unsigned long type2,
+			 unsigned long type3,
+			 resource_size_t min_size, resource_size_t add_size,
+			 struct list_head *realloc_head)
 {
 	struct pci_dev *dev;
 	resource_size_t min_align, align, size, size0, size1;
-	resource_size_t aligns[12];	/* Alignments from 1Mb to 2Gb */
+	resource_size_t aligns[14];	/* Alignments from 1Mb to 8Gb */
 	int order, max_order;
-	struct resource *b_res = find_free_bus_resource(bus, type);
-	unsigned int mem64_mask = 0;
+	struct resource *b_res = find_free_bus_resource(bus,
+					mask | IORESOURCE_PREFETCH, type);
 	resource_size_t children_add_size = 0;
 
 	if (!b_res)
-		return 0;
+		return -ENOSPC;
 
 	memset(aligns, 0, sizeof(aligns));
 	max_order = 0;
 	size = 0;
-
-	mem64_mask = b_res->flags & IORESOURCE_MEM_64;
-	b_res->flags &= ~IORESOURCE_MEM_64;
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
 		int i;
···
 			struct resource *r = &dev->resource[i];
 			resource_size_t r_size;
 
-			if (r->parent || (r->flags & mask) != type)
+			if (r->parent || ((r->flags & mask) != type &&
+					  (r->flags & mask) != type2 &&
+					  (r->flags & mask) != type3))
 				continue;
 			r_size = resource_size(r);
 #ifdef CONFIG_PCI_IOV
···
 				continue;
 			}
 #endif
-			/* For bridges size != alignment */
+			/*
+			 * aligns[0] is for 1MB (since bridge memory
+			 * windows are always at least 1MB aligned), so
+			 * keep "order" from being negative for smaller
+			 * resources.
+			 */
 			align = pci_resource_alignment(dev, r);
 			order = __ffs(align) - 20;
-			if (order > 11) {
+			if (order < 0)
+				order = 0;
+			if (order >= ARRAY_SIZE(aligns)) {
 				dev_warn(&dev->dev, "disabling BAR %d: %pR "
 					 "(bad alignment %#llx)\n", i, r,
 					 (unsigned long long) align);
···
 				continue;
 			}
 			size += r_size;
-			if (order < 0)
-				order = 0;
 			/* Exclude ranges with size > align from
 			   calculation of the alignment. */
 			if (r_size == align)
 				aligns[order] += align;
 			if (order > max_order)
 				max_order = order;
-			mem64_mask &= r->flags & IORESOURCE_MEM_64;
 
 			if (realloc_head)
 				children_add_size += get_res_add_size(realloc_head, r);
···
 			"%pR to %pR (unused)\n", b_res,
 			&bus->busn_res);
 		b_res->flags = 0;
-		return 1;
+		return 0;
 	}
 	b_res->start = min_align;
 	b_res->end = size0 + min_align - 1;
-	b_res->flags |= IORESOURCE_STARTALIGN | mem64_mask;
+	b_res->flags |= IORESOURCE_STARTALIGN;
 	if (size1 > size0 && realloc_head) {
 		add_to_list(realloc_head, bus->self, b_res, size1-size0, min_align);
 		dev_printk(KERN_DEBUG, &bus->self->dev, "bridge window "
 			   "%pR to %pR add_size %llx\n", b_res,
 			   &bus->busn_res, (unsigned long long)size1-size0);
 	}
-	return 1;
+	return 0;
 }
 
 unsigned long pci_cardbus_resource_alignment(struct resource *res)
···
 	;
 }
 
-void __ref __pci_bus_size_bridges(struct pci_bus *bus,
-			struct list_head *realloc_head)
+void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head)
 {
 	struct pci_dev *dev;
-	unsigned long mask, prefmask;
+	unsigned long mask, prefmask, type2 = 0, type3 = 0;
 	resource_size_t additional_mem_size = 0, additional_io_size = 0;
+	struct resource *b_res;
+	int ret;
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
 		struct pci_bus *b = dev->subordinate;
···
 			additional_io_size  = pci_hotplug_io_size;
 			additional_mem_size = pci_hotplug_mem_size;
 		}
-		/*
-		 * Follow thru
-		 */
+		/* Fall through */
 	default:
 		pbus_size_io(bus, realloc_head ? 0 : additional_io_size,
 			     additional_io_size, realloc_head);
-		/* If the bridge supports prefetchable range, size it
-		   separately. If it doesn't, or its prefetchable window
-		   has already been allocated by arch code, try
-		   non-prefetchable range for both types of PCI memory
-		   resources. */
+
+		/*
+		 * If there's a 64-bit prefetchable MMIO window, compute
+		 * the size required to put all 64-bit prefetchable
+		 * resources in it.
+		 */
+		b_res = &bus->self->resource[PCI_BRIDGE_RESOURCES];
 		mask = IORESOURCE_MEM;
 		prefmask = IORESOURCE_MEM | IORESOURCE_PREFETCH;
-		if (pbus_size_mem(bus, prefmask, prefmask,
+		if (b_res[2].flags & IORESOURCE_MEM_64) {
+			prefmask |= IORESOURCE_MEM_64;
+			ret = pbus_size_mem(bus, prefmask, prefmask,
+				  prefmask, prefmask,
 				  realloc_head ? 0 : additional_mem_size,
-				  additional_mem_size, realloc_head))
-			mask = prefmask; /* Success, size non-prefetch only. */
-		else
-			additional_mem_size += additional_mem_size;
-		pbus_size_mem(bus, mask, IORESOURCE_MEM,
+				  additional_mem_size, realloc_head);
+
+			/*
+			 * If successful, all non-prefetchable resources
+			 * and any 32-bit prefetchable resources will go in
+			 * the non-prefetchable window.
+			 */
+			if (ret == 0) {
+				mask = prefmask;
+				type2 = prefmask & ~IORESOURCE_MEM_64;
+				type3 = prefmask & ~IORESOURCE_PREFETCH;
+			}
+		}
+
+		/*
+		 * If there is no 64-bit prefetchable window, compute the
+		 * size required to put all prefetchable resources in the
+		 * 32-bit prefetchable window (if there is one).
+		 */
+		if (!type2) {
+			prefmask &= ~IORESOURCE_MEM_64;
+			ret = pbus_size_mem(bus, prefmask, prefmask,
+					 prefmask, prefmask,
+					 realloc_head ? 0 : additional_mem_size,
+					 additional_mem_size, realloc_head);
+
+			/*
+			 * If successful, only non-prefetchable resources
+			 * will go in the non-prefetchable window.
+			 */
+			if (ret == 0)
+				mask = prefmask;
+			else
+				additional_mem_size += additional_mem_size;
+
+			type2 = type3 = IORESOURCE_MEM;
+		}
+
+		/*
+		 * Compute the size required to put everything else in the
+		 * non-prefetchable window. This includes:
+		 *
+		 *   - all non-prefetchable resources
+		 *   - 32-bit prefetchable resources if there's a 64-bit
+		 *     prefetchable window or no prefetchable window at all
+		 *   - 64-bit prefetchable resources if there's no
+		 *     prefetchable window at all
+		 *
+		 * Note that the strategy in __pci_assign_resource() must
+		 * match that used here.  Specifically, we cannot put a
+		 * 32-bit prefetchable resource in a 64-bit prefetchable
+		 * window.
+		 */
+		pbus_size_mem(bus, mask, IORESOURCE_MEM, type2, type3,
 				realloc_head ? 0 : additional_mem_size,
 				additional_mem_size, realloc_head);
 		break;
 	}
 }
 
-void __ref pci_bus_size_bridges(struct pci_bus *bus)
+void pci_bus_size_bridges(struct pci_bus *bus)
 {
 	__pci_bus_size_bridges(bus, NULL);
 }
 EXPORT_SYMBOL(pci_bus_size_bridges);
 
-void __ref __pci_bus_assign_resources(const struct pci_bus *bus,
-				      struct list_head *realloc_head,
-				      struct list_head *fail_head)
+void __pci_bus_assign_resources(const struct pci_bus *bus,
+				struct list_head *realloc_head,
+				struct list_head *fail_head)
 {
 	struct pci_bus *b;
 	struct pci_dev *dev;
···
 	}
 }
 
-void __ref pci_bus_assign_resources(const struct pci_bus *bus)
+void pci_bus_assign_resources(const struct pci_bus *bus)
 {
 	__pci_bus_assign_resources(bus, NULL, NULL);
 }
 EXPORT_SYMBOL(pci_bus_assign_resources);
 
-static void __ref __pci_bridge_assign_resources(const struct pci_dev *bridge,
-					 struct list_head *add_head,
-					 struct list_head *fail_head)
+static void __pci_bridge_assign_resources(const struct pci_dev *bridge,
+					  struct list_head *add_head,
+					  struct list_head *fail_head)
 {
 	struct pci_bus *b;
···
 static void pci_bridge_release_resources(struct pci_bus *bus,
 				  unsigned long type)
 {
-	int idx;
-	bool changed = false;
-	struct pci_dev *dev;
+	struct pci_dev *dev = bus->self;
 	struct resource *r;
 	unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM |
-				  IORESOURCE_PREFETCH;
+				  IORESOURCE_PREFETCH | IORESOURCE_MEM_64;
+	unsigned old_flags = 0;
+	struct resource *b_res;
+	int idx = 1;
 
-	dev = bus->self;
-	for (idx = PCI_BRIDGE_RESOURCES; idx <= PCI_BRIDGE_RESOURCE_END;
-	     idx++) {
-		r = &dev->resource[idx];
-		if ((r->flags & type_mask) != type)
-			continue;
-		if (!r->parent)
-			continue;
-		/*
-		 * if there are children under that, we should release them
-		 * all
-		 */
-		release_child_resources(r);
-		if (!release_resource(r)) {
-			dev_printk(KERN_DEBUG, &dev->dev,
-				 "resource %d %pR released\n", idx, r);
-			/* keep the old size */
-			r->end = resource_size(r) - 1;
-			r->start = 0;
-			r->flags = 0;
-			changed = true;
-		}
-	}
+	b_res = &dev->resource[PCI_BRIDGE_RESOURCES];
 
-	if (changed) {
+	/*
+	 * 1. if there is io port assign fail, will release bridge
+	 *    io port.
+	 * 2. if there is non pref mmio assign fail, release bridge
+	 *    nonpref mmio.
+	 * 3. if there is 64bit pref mmio assign fail, and bridge pref
+	 *    is 64bit, release bridge pref mmio.
+	 * 4. if there is pref mmio assign fail, and bridge pref is
+	 *    32bit mmio, release bridge pref mmio
+	 * 5. if there is pref mmio assign fail, and bridge pref is not
+	 *    assigned, release bridge nonpref mmio.
+	 */
+	if (type & IORESOURCE_IO)
+		idx = 0;
+	else if (!(type & IORESOURCE_PREFETCH))
+		idx = 1;
+	else if ((type & IORESOURCE_MEM_64) &&
+		 (b_res[2].flags & IORESOURCE_MEM_64))
+		idx = 2;
+	else if (!(b_res[2].flags & IORESOURCE_MEM_64) &&
+		 (b_res[2].flags & IORESOURCE_PREFETCH))
+		idx = 2;
+	else
+		idx = 1;
+
+	r = &b_res[idx];
+
+	if (!r->parent)
+		return;
+
+	/*
+	 * if there are children under that, we should release them
+	 * all
+	 */
+	release_child_resources(r);
+	if (!release_resource(r)) {
+		type = old_flags = r->flags & type_mask;
+		dev_printk(KERN_DEBUG, &dev->dev, "resource %d %pR released\n",
+			   PCI_BRIDGE_RESOURCES + idx, r);
+		/* keep the old size */
+		r->end = resource_size(r) - 1;
+		r->start = 0;
+		r->flags = 0;
+
 		/* avoiding touch the one without PREF */
 		if (type & IORESOURCE_PREFETCH)
 			type = IORESOURCE_PREFETCH;
 		__pci_setup_bridge(bus, type);
+		/* for next child res under same bridge */
+		r->flags = old_flags;
 	}
 }
···
  * try to release pci bridge resources that is from leaf bridge,
  * so we can allocate big new one later
  */
-static void __ref pci_bus_release_bridge_resources(struct pci_bus *bus,
-						   unsigned long type,
-						   enum release_type rel_type)
+static void pci_bus_release_bridge_resources(struct pci_bus *bus,
+					     unsigned long type,
+					     enum release_type rel_type)
 {
 	struct pci_dev *dev;
 	bool is_leaf_bridge = true;
···
 	LIST_HEAD(fail_head);
 	struct pci_dev_resource *fail_res;
 	unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM |
-				  IORESOURCE_PREFETCH;
+				  IORESOURCE_PREFETCH | IORESOURCE_MEM_64;
 	int pci_try_num = 1;
 	enum enable_type enable_local;
···
 
 	down_read(&pci_bus_sem);
 	list_for_each_entry(dev, &bus->devices, bus_list)
-		if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
-		    dev->hdr_type == PCI_HEADER_TYPE_CARDBUS)
-			if (dev->subordinate)
+		if (pci_is_bridge(dev) && pci_has_subordinate(dev))
 			__pci_bus_size_bridges(dev->subordinate,
 					       &add_list);
 	up_read(&pci_bus_sem);
-1
drivers/pci/setup-irq.c
···
  */
 
 
-#include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/pci.h>
 #include <linux/errno.h>
+31 -11
drivers/pci/setup-res.c
···
  *	Resource sorting
  */
 
-#include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/export.h>
 #include <linux/pci.h>
···
 
 	min = (res->flags & IORESOURCE_IO) ? PCIBIOS_MIN_IO : PCIBIOS_MIN_MEM;
 
-	/* First, try exact prefetching match.. */
+	/*
+	 * First, try exact prefetching match.  Even if a 64-bit
+	 * prefetchable bridge window is below 4GB, we can't put a 32-bit
+	 * prefetchable resource in it because pbus_size_mem() assumes a
+	 * 64-bit window will contain no 32-bit resources.  If we assign
+	 * things differently than they were sized, not everything will fit.
+	 */
 	ret = pci_bus_alloc_resource(bus, res, size, align, min,
-				     IORESOURCE_PREFETCH,
+				     IORESOURCE_PREFETCH | IORESOURCE_MEM_64,
 				     pcibios_align_resource, dev);
+	if (ret == 0)
+		return 0;
 
-	if (ret < 0 && (res->flags & IORESOURCE_PREFETCH)) {
-		/*
-		 * That failed.
-		 *
-		 * But a prefetching area can handle a non-prefetching
-		 * window (it will just not perform as well).
-		 */
+	/*
+	 * If the prefetchable window is only 32 bits wide, we can put
+	 * 64-bit prefetchable resources in it.
+	 */
+	if ((res->flags & (IORESOURCE_PREFETCH | IORESOURCE_MEM_64)) ==
+	     (IORESOURCE_PREFETCH | IORESOURCE_MEM_64)) {
+		ret = pci_bus_alloc_resource(bus, res, size, align, min,
+					     IORESOURCE_PREFETCH,
+					     pcibios_align_resource, dev);
+		if (ret == 0)
+			return 0;
+	}
+
+	/*
+	 * If we didn't find a better match, we can put any memory resource
+	 * in a non-prefetchable window.  If this resource is 32 bits and
+	 * non-prefetchable, the first call already tried the only possibility
+	 * so we don't need to try again.
+	 */
+	if (res->flags & (IORESOURCE_PREFETCH | IORESOURCE_MEM_64))
 		ret = pci_bus_alloc_resource(bus, res, size, align, min, 0,
 					     pcibios_align_resource, dev);
-	}
+
 	return ret;
 }
+1 -2
drivers/pcmcia/cardbus.c
···
 	max = bus->busn_res.start;
 	for (pass = 0; pass < 2; pass++)
 		list_for_each_entry(dev, &bus->devices, bus_list)
-			if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
-			    dev->hdr_type == PCI_HEADER_TYPE_CARDBUS)
+			if (pci_is_bridge(dev))
 				max = pci_scan_bridge(bus, dev, max, pass);
 
 	/*
+1 -2
drivers/platform/x86/asus-wmi.c
···
 		dev = pci_scan_single_device(bus, 0);
 		if (dev) {
 			pci_bus_assign_resources(bus);
-			if (pci_bus_add_device(dev))
-				pr_err("Unable to hotplug wifi\n");
+			pci_bus_add_device(dev);
 		}
 	} else {
 		dev = pci_get_slot(bus, 0);
+1 -2
drivers/platform/x86/eeepc-laptop.c
···
 		dev = pci_scan_single_device(bus, 0);
 		if (dev) {
 			pci_bus_assign_resources(bus);
-			if (pci_bus_add_device(dev))
-				pr_err("Unable to hotplug wifi\n");
+			pci_bus_add_device(dev);
 		}
 	} else {
 		dev = pci_get_slot(bus, 0);
+5 -8
include/asm-generic/dma-coherent.h
···
  * Standard interface
  */
 #define ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
-extern int
-dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
-			    dma_addr_t device_addr, size_t size, int flags);
+int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
+				dma_addr_t device_addr, size_t size, int flags);
 
-extern void
-dma_release_declared_memory(struct device *dev);
+void dma_release_declared_memory(struct device *dev);
 
-extern void *
-dma_mark_declared_memory_occupied(struct device *dev,
-				  dma_addr_t device_addr, size_t size);
+void *dma_mark_declared_memory_occupied(struct device *dev,
+					dma_addr_t device_addr, size_t size);
 #else
 #define dma_alloc_from_coherent(dev, size, handle, ret) (0)
 #define dma_release_from_coherent(dev, order, vaddr) (0)
+10 -3
include/linux/dma-mapping.h
···
 #include <linux/dma-direction.h>
 #include <linux/scatterlist.h>
 
+/*
+ * A dma_addr_t can hold any valid DMA or bus address for the platform.
+ * It can be given to a device to use as a DMA source or target.  A CPU cannot
+ * reference a dma_addr_t directly because there may be translation between
+ * its physical address space and the bus address space.
+ */
 struct dma_map_ops {
 	void* (*alloc)(struct device *dev, size_t size,
 				dma_addr_t *dma_handle, gfp_t gfp,
···
 
 #ifndef ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
 static inline int
-dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
 			    dma_addr_t device_addr, size_t size, int flags)
 {
 	return 0;
···
 extern void dmam_free_noncoherent(struct device *dev, size_t size, void *vaddr,
 				  dma_addr_t dma_handle);
 #ifdef ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
-extern int dmam_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+extern int dmam_declare_coherent_memory(struct device *dev,
+					phys_addr_t phys_addr,
 					dma_addr_t device_addr, size_t size,
 					int flags);
 extern void dmam_release_declared_memory(struct device *dev);
 #else /* ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY */
 static inline int dmam_declare_coherent_memory(struct device *dev,
-			dma_addr_t bus_addr, dma_addr_t device_addr,
+			phys_addr_t phys_addr, dma_addr_t device_addr,
 			size_t size, gfp_t gfp)
 {
 	return 0;
+27 -13
include/linux/pci.h
···
 #endif
 	phys_addr_t rom;	/* Physical address of ROM if it's not from the BAR */
 	size_t romlen;		/* Length of ROM if it's not from the BAR */
+	char *driver_override;	/* Driver name to force a match */
 };
 
 static inline struct pci_dev *pci_physfn(struct pci_dev *dev)
···
 	return !(pbus->parent);
 }
 
+/**
+ * pci_is_bridge - check if the PCI device is a bridge
+ * @dev: PCI device
+ *
+ * Return true if the PCI device is bridge whether it has subordinate
+ * or not.
+ */
+static inline bool pci_is_bridge(struct pci_dev *dev)
+{
+	return dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
+		dev->hdr_type == PCI_HEADER_TYPE_CARDBUS;
+}
+
 static inline struct pci_dev *pci_upstream_bridge(struct pci_dev *dev)
 {
 	dev = pci_physfn(dev);
···
 	case PCIBIOS_FUNC_NOT_SUPPORTED:
 		return -ENOENT;
 	case PCIBIOS_BAD_VENDOR_ID:
-		return -EINVAL;
+		return -ENOTTY;
 	case PCIBIOS_DEVICE_NOT_FOUND:
 		return -ENODEV;
 	case PCIBIOS_BAD_REGISTER_NUMBER:
···
 		return -ENOSPC;
 	}
 
-	return -ENOTTY;
+	return -ERANGE;
 }
 
 /* Low-level architecture-dependent routines */
···
 	/* PCI slot has been reset */
 	pci_ers_result_t (*slot_reset)(struct pci_dev *dev);
+
+	/* PCI function reset prepare or completed */
+	void (*reset_notify)(struct pci_dev *dev, bool prepare);
 
 	/* Device driver may resume normal operations */
 	void (*resume)(struct pci_dev *dev);
···
 /**
  * PCI_VDEVICE - macro used to describe a specific pci device in short form
- * @vendor: the vendor name
- * @device: the 16 bit PCI Device ID
+ * @vend: the vendor name
+ * @dev: the 16 bit PCI Device ID
  *
  * This macro is used to create a struct pci_device_id that matches a
  * specific PCI device.  The subvendor, and subdevice fields will be set
···
  * private data.
  */
-#define PCI_VDEVICE(vendor, device) \
-	PCI_VENDOR_ID_##vendor, (device), \
-	PCI_ANY_ID, PCI_ANY_ID, 0, 0
+#define PCI_VDEVICE(vend, dev) \
+	.vendor = PCI_VENDOR_ID_##vend, .device = (dev), \
+	.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID, 0, 0
 
 /* these external functions are only available when PCI support is enabled */
 #ifdef CONFIG_PCI
···
 struct pci_dev *pci_scan_single_device(struct pci_bus *bus, int devfn);
 void pci_device_add(struct pci_dev *dev, struct pci_bus *bus);
 unsigned int pci_scan_child_bus(struct pci_bus *bus);
-int __must_check pci_bus_add_device(struct pci_dev *dev);
+void pci_bus_add_device(struct pci_dev *dev);
 void pci_read_bridge_bases(struct pci_bus *child);
 struct resource *pci_find_parent_resource(const struct pci_dev *dev,
 					  struct resource *res);
···
 #ifdef CONFIG_PCI_MSI
 int pci_msi_vec_count(struct pci_dev *dev);
-int pci_enable_msi_block(struct pci_dev *dev, int nvec);
 void pci_msi_shutdown(struct pci_dev *dev);
 void pci_disable_msi(struct pci_dev *dev);
 int pci_msix_vec_count(struct pci_dev *dev);
···
 }
 #else
 static inline int pci_msi_vec_count(struct pci_dev *dev) { return -ENOSYS; }
-static inline int pci_enable_msi_block(struct pci_dev *dev, int nvec)
-{ return -ENOSYS; }
 static inline void pci_msi_shutdown(struct pci_dev *dev) { }
 static inline void pci_disable_msi(struct pci_dev *dev) { }
 static inline int pci_msix_vec_count(struct pci_dev *dev) { return -ENOSYS; }
···
 static inline void pcie_ecrc_get_policy(char *str) { }
 #endif
 
-#define pci_enable_msi(pdev)	pci_enable_msi_block(pdev, 1)
+#define pci_enable_msi(pdev)	pci_enable_msi_exact(pdev, 1)
 
 #ifdef CONFIG_HT_IRQ
 /* The functions a driver should call */
···
 extern unsigned long pci_hotplug_mem_size;
 
 /* Architecture-specific versions may override these (weak) */
-int pcibios_add_platform_entries(struct pci_dev *dev);
 void pcibios_disable_device(struct pci_dev *dev);
 void pcibios_set_master(struct pci_dev *dev);
 int pcibios_set_pcie_reset_state(struct pci_dev *dev,
 				 enum pcie_reset_state state);
 int pcibios_add_device(struct pci_dev *dev);
 void pcibios_release_device(struct pci_dev *dev);
+void pcibios_penalize_isa_irq(int irq, int active);
 
 #ifdef CONFIG_HIBERNATE_CALLBACKS
 extern struct dev_pm_ops pcibios_pm_ops;
include/linux/pci_ids.h  (-3)
···
 #define PCI_DEVICE_ID_ATT_VENUS_MODEM	0x480
 
 #define PCI_VENDOR_ID_SPECIALIX		0x11cb
-#define PCI_DEVICE_ID_SPECIALIX_IO8	0x2000
-#define PCI_DEVICE_ID_SPECIALIX_RIO	0x8000
 #define PCI_SUBDEVICE_ID_SPECIALIX_SPEED4 0xa004
 
 #define PCI_VENDOR_ID_ANALOG_DEVICES	0x11d4
···
 #define PCI_DEVICE_ID_SCALEMP_VSMP_CTL	0x1010
 
 #define PCI_VENDOR_ID_COMPUTONE	0x8e0e
-#define PCI_DEVICE_ID_COMPUTONE_IP2EX	0x0291
 #define PCI_DEVICE_ID_COMPUTONE_PG	0x0302
 #define PCI_SUBVENDOR_ID_COMPUTONE	0x8e0e
 #define PCI_SUBDEVICE_ID_COMPUTONE_PG4	0x0001
include/linux/types.h  (+1)
···
 #define pgoff_t unsigned long
 #endif
 
+/* A dma_addr_t can hold any valid DMA or bus address for the platform */
 #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
 typedef u64 dma_addr_t;
 #else
kernel/resource.c  (+2 -5)
···
 		if (p->flags & IORESOURCE_BUSY)
 			continue;
 
-		printk(KERN_WARNING "resource map sanity check conflict: "
-		       "0x%llx 0x%llx 0x%llx 0x%llx %s\n",
+		printk(KERN_WARNING "resource sanity check: requesting [mem %#010llx-%#010llx], which spans more than %s %pR\n",
 		       (unsigned long long)addr,
 		       (unsigned long long)(addr + size - 1),
-		       (unsigned long long)p->start,
-		       (unsigned long long)p->end,
-		       p->name);
+		       p->name, p);
 		err = -1;
 		break;
 	}