Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pci-v3.14-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"PCI changes for the v3.14 merge window:

Resource management
- Change pci_bus_region addresses to dma_addr_t (Bjorn Helgaas)
- Support 64-bit AGP BARs (Bjorn Helgaas, Yinghai Lu)
- Add pci_bus_address() to get bus address of a BAR (Bjorn Helgaas)
- Use pci_resource_start() for CPU address of AGP BARs (Bjorn Helgaas)
- Enforce bus address limits in resource allocation (Yinghai Lu)
- Allocate 64-bit BARs above 4G when possible (Yinghai Lu)
- Convert pcibios_resource_to_bus() to take pci_bus, not pci_dev (Yinghai Lu)
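The bus-address work above hinges on one point: on many hosts a BAR's CPU (resource) address and its PCI bus address differ by a host-bridge offset, which is why pci_bus_address() was added and pcibios_resource_to_bus() now takes the pci_bus. A minimal userspace sketch of that translation, with an invented host_bridge_window struct standing in for the kernel's offset bookkeeping (names here are illustrative, not kernel API):

```c
#include <stdint.h>
#include <assert.h>

typedef uint64_t resource_size_t;
typedef uint64_t dma_addr_t;       /* bus addresses are dma_addr_t sized */

/* Hypothetical model of one host-bridge aperture: the same window seen
 * from the CPU side and from the PCI bus side, possibly at different
 * base addresses. */
struct host_bridge_window {
	resource_size_t cpu_start;  /* start of window in CPU address space */
	dma_addr_t bus_start;       /* same window as seen from the bus */
};

/* Translate a CPU address inside the window to the bus address a device
 * would need to be programmed with (e.g. into a BAR). */
static dma_addr_t cpu_to_bus(const struct host_bridge_window *w,
			     resource_size_t cpu_addr)
{
	return w->bus_start + (dma_addr_t)(cpu_addr - w->cpu_start);
}
```

pci_bus_address() would correspond to running this translation over a BAR's resource start; the struct and function are assumptions made for illustration only.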

PCI device hotplug
- Major rescan/remove locking update (Rafael J. Wysocki)
- Make ioapic builtin only (not modular) (Yinghai Lu)
- Fix release/free issues (Yinghai Lu)
- Clean up pciehp (Bjorn Helgaas)
- Announce pciehp slot info during enumeration (Bjorn Helgaas)

MSI
- Add pci_msi_vec_count(), pci_msix_vec_count() (Alexander Gordeev)
- Add pci_enable_msi_range(), pci_enable_msix_range() (Alexander Gordeev)
- Deprecate "tri-state" interfaces: fail/success/fail+info (Alexander Gordeev)
- Export MSI mode using attributes, not kobjects (Greg Kroah-Hartman)
- Drop "irq" param from *_restore_msi_irqs() (DuanZhenzhong)
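The range-based calls replace the deprecated tri-state returns with a simpler contract: the driver names an acceptable [minvec, maxvec] window and gets back either an allocated count inside that window or a plain negative error. A hedged userspace model of that contract (the 'available' budget and the numeric error constants are stand-ins, not kernel internals):

```c
#include <assert.h>

#define ERR_EINVAL (-22)   /* stand-in for -EINVAL */
#define ERR_ENOSPC (-28)   /* stand-in for -ENOSPC */

/* Sketch of the pci_enable_msi_range()-style contract: allocate some
 * count within [minvec, maxvec] or fail outright.  'available' stands
 * in for whatever platform limit applies. */
static int enable_msi_range(int available, int minvec, int maxvec)
{
	if (minvec < 1 || minvec > maxvec)
		return ERR_EINVAL;         /* nonsensical range */
	if (available < minvec)
		return ERR_ENOSPC;         /* can't meet caller's minimum */
	return available < maxvec ? available : maxvec;
}
```

The key property is that a success return is always within the caller's stated range, so the driver never has to re-negotiate in a loop for the plain min/max case.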

SR-IOV
- Clear NumVFs when disabling SR-IOV in sriov_init() (ethan.zhao)

Virtualization
- Add support for save/restore of extended capabilities (Alex Williamson)
- Add Virtual Channel to save/restore support (Alex Williamson)
- Never treat a VF as a multifunction device (Alex Williamson)
- Add pci_try_reset_function(), et al (Alex Williamson)

AER
- Ignore non-PCIe error sources (Betty Dall)
- Support ACPI HEST error sources for domains other than 0 (Betty Dall)
- Consolidate HEST error source parsers (Bjorn Helgaas)
- Add a TLP header print helper (Borislav Petkov)

Freescale i.MX6
- Remove unnecessary code (Fabio Estevam)
- Make reset-gpio optional (Marek Vasut)
- Report "link up" only after link training completes (Marek Vasut)
- Start link in Gen1 before negotiating for Gen2 mode (Marek Vasut)
- Fix PCIe startup code (Richard Zhu)

Marvell MVEBU
- Remove duplicate of_clk_get_by_name() call (Andrew Lunn)
- Drop writes to bridge Secondary Status register (Jason Gunthorpe)
- Obey bridge PCI_COMMAND_MEM and PCI_COMMAND_IO bits (Jason Gunthorpe)
- Support a bridge with no IO port window (Jason Gunthorpe)
- Use max_t() instead of max(resource_size_t,) (Jingoo Han)
- Remove redundant of_match_ptr (Sachin Kamat)
- Call pci_ioremap_io() at startup instead of dynamically (Thomas Petazzoni)

NVIDIA Tegra
- Disable Gen2 for Tegra20 and Tegra30 (Eric Brower)

Renesas R-Car
- Add runtime PM support (Valentine Barshak)
- Fix rcar_pci_probe() return value check (Wei Yongjun)

Synopsys DesignWare
- Fix crash in dw_msi_teardown_irq() (Bjørn Erik Nilsen)
- Remove redundant call to pci_write_config_word() (Bjørn Erik Nilsen)
- Fix missing MSI IRQs (Harro Haan)
- Add dw_pcie prefix before cfg_read/write (Pratyush Anand)
- Fix I/O transfers by using CPU (not realio) address (Pratyush Anand)
- Whitespace cleanup (Jingoo Han)

EISA
- Call put_device() if device_register() fails (Levente Kurusa)
- Revert EISA initialization breakage (Bjorn Helgaas)

Miscellaneous
- Remove unused code, including PCIe 3.0 interfaces (Stephen Hemminger)
- Prevent bus conflicts while checking for bridge apertures (Bjorn Helgaas)
- Stop clearing bridge Secondary Status when setting up I/O aperture (Bjorn Helgaas)
- Use dev_is_pci() to identify PCI devices (Yijing Wang)
- Deprecate DEFINE_PCI_DEVICE_TABLE (Joe Perches)
- Update documentation 00-INDEX (Erik Ekman)"

* tag 'pci-v3.14-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (119 commits)
Revert "EISA: Initialize device before its resources"
Revert "EISA: Log device resources in dmesg"
vfio-pci: Use pci "try" reset interface
PCI: Check parent kobject in pci_destroy_dev()
xen/pcifront: Use global PCI rescan-remove locking
powerpc/eeh: Use global PCI rescan-remove locking
PCI: Fix pci_check_and_unmask_intx() comment typos
PCI: Add pci_try_reset_function(), pci_try_reset_slot(), pci_try_reset_bus()
MPT / PCI: Use pci_stop_and_remove_bus_device_locked()
platform / x86: Use global PCI rescan-remove locking
PCI: hotplug: Use global PCI rescan-remove locking
pcmcia: Use global PCI rescan-remove locking
ACPI / hotplug / PCI: Use global PCI rescan-remove locking
ACPI / PCI: Use global PCI rescan-remove locking in PCI root hotplug
PCI: Add global pci_lock_rescan_remove()
PCI: Cleanup pci.h whitespace
PCI: Reorder so actual code comes before stubs
PCI/AER: Support ACPI HEST AER error sources for PCI domains other than 0
ACPICA: Add helper macros to extract bus/segment numbers from HEST table.
PCI: Make local functions static
...

+2523 -1998
+4 -7
Documentation/ABI/testing/sysfs-bus-pci
···
 Contact:	Neil Horman <nhorman@tuxdriver.com>
 Description:
 	The /sys/devices/.../msi_irqs directory contains a variable set
-	of sub-directories, with each sub-directory being named after a
-	corresponding msi irq vector allocated to that device.  Each
-	numbered sub-directory N contains attributes of that irq.
-	Note that this directory is not created for device drivers which
-	do not support msi irqs
+	of files, with each file being named after a corresponding msi
+	irq vector allocated to that device.

-What:		/sys/bus/pci/devices/.../msi_irqs/<N>/mode
+What:		/sys/bus/pci/devices/.../msi_irqs/<N>
 Date:		September 2011
 Contact:	Neil Horman <nhorman@tuxdriver.com>
 Description:
 	This attribute indicates the mode that the irq vector named by
-	the parent directory is in (msi vs. msix)
+	the file is in (msi vs. msix)

 What:		/sys/bus/pci/devices/.../remove
 Date:		January 2009
+2 -2
Documentation/PCI/00-INDEX
···
 	- this file
 MSI-HOWTO.txt
 	- the Message Signaled Interrupts (MSI) Driver Guide HOWTO and FAQ.
-PCI-DMA-mapping.txt
-	- info for PCI drivers using DMA portably across all platforms
 PCIEBUS-HOWTO.txt
 	- a guide describing the PCI Express Port Bus driver
 pci-error-recovery.txt
 	- info on PCI error recovery
+pci-iov-howto.txt
+	- the PCI Express I/O Virtualization HOWTO
 pci.txt
 	- info on the PCI subsystem for device driver authors
 pcieaer-howto.txt
+211 -99
Documentation/PCI/MSI-HOWTO.txt
···
 has to request that the PCI layer set up the MSI capability for this
 device.

-4.2.1 pci_enable_msi
+4.2.1 pci_enable_msi_range

-int pci_enable_msi(struct pci_dev *dev)
+int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)

-A successful call allocates ONE interrupt to the device, regardless
-of how many MSIs the device supports.  The device is switched from
-pin-based interrupt mode to MSI mode.  The dev->irq number is changed
-to a new number which represents the message signaled interrupt;
-consequently, this function should be called before the driver calls
-request_irq(), because an MSI is delivered via a vector that is
-different from the vector of a pin-based interrupt.
+This function allows a device driver to request any number of MSI
+interrupts within specified range from 'minvec' to 'maxvec'.

-4.2.2 pci_enable_msi_block
-
-int pci_enable_msi_block(struct pci_dev *dev, int count)
-
-This variation on the above call allows a device driver to request multiple
-MSIs.  The MSI specification only allows interrupts to be allocated in
-powers of two, up to a maximum of 2^5 (32).
-
-If this function returns 0, it has succeeded in allocating at least as many
-interrupts as the driver requested (it may have allocated more in order
-to satisfy the power-of-two requirement).  In this case, the function
-enables MSI on this device and updates dev->irq to be the lowest of
-the new interrupts assigned to it.  The other interrupts assigned to
-the device are in the range dev->irq to dev->irq + count - 1.
-
-If this function returns a negative number, it indicates an error and
-the driver should not attempt to request any more MSI interrupts for
-this device.  If this function returns a positive number, it is
-less than 'count' and indicates the number of interrupts that could have
-been allocated.  In neither case is the irq value updated or the device
-switched into MSI mode.
-
-The device driver must decide what action to take if
-pci_enable_msi_block() returns a value less than the number requested.
-For instance, the driver could still make use of fewer interrupts;
-in this case the driver should call pci_enable_msi_block()
-again.  Note that it is not guaranteed to succeed, even when the
-'count' has been reduced to the value returned from a previous call to
-pci_enable_msi_block().  This is because there are multiple constraints
-on the number of vectors that can be allocated; pci_enable_msi_block()
-returns as soon as it finds any constraint that doesn't allow the
-call to succeed.
-
-4.2.3 pci_enable_msi_block_auto
-
-int pci_enable_msi_block_auto(struct pci_dev *dev, unsigned int *count)
-
-This variation on pci_enable_msi() call allows a device driver to request
-the maximum possible number of MSIs.  The MSI specification only allows
-interrupts to be allocated in powers of two, up to a maximum of 2^5 (32).
-
-If this function returns a positive number, it indicates that it has
-succeeded and the returned value is the number of allocated interrupts. In
-this case, the function enables MSI on this device and updates dev->irq to
-be the lowest of the new interrupts assigned to it. The other interrupts
-assigned to the device are in the range dev->irq to dev->irq + returned
-value - 1.
+If this function returns a positive number it indicates the number of
+MSI interrupts that have been successfully allocated.  In this case
+the device is switched from pin-based interrupt mode to MSI mode and
+updates dev->irq to be the lowest of the new interrupts assigned to it.
+The other interrupts assigned to the device are in the range dev->irq
+to dev->irq + returned value - 1.  Device driver can use the returned
+number of successfully allocated MSI interrupts to further allocate
+and initialize device resources.

 If this function returns a negative number, it indicates an error and
 the driver should not attempt to request any more MSI interrupts for
 this device.

-If the device driver needs to know the number of interrupts the device
-supports it can pass the pointer count where that number is stored.  The
-device driver must decide what action to take if pci_enable_msi_block_auto()
-succeeds, but returns a value less than the number of interrupts supported.
-If the device driver does not need to know the number of interrupts
-supported, it can set the pointer count to NULL.
+This function should be called before the driver calls request_irq(),
+because MSI interrupts are delivered via vectors that are different
+from the vector of a pin-based interrupt.

-4.2.4 pci_disable_msi
+It is ideal if drivers can cope with a variable number of MSI interrupts;
+there are many reasons why the platform may not be able to provide the
+exact number that a driver asks for.
+
+There could be devices that can not operate with just any number of MSI
+interrupts within a range.  See chapter 4.3.1.3 to get the idea how to
+handle such devices for MSI-X - the same logic applies to MSI.
+
+4.2.1.1 Maximum possible number of MSI interrupts
+
+The typical usage of MSI interrupts is to allocate as many vectors as
+possible, likely up to the limit returned by pci_msi_vec_count() function:
+
+static int foo_driver_enable_msi(struct pci_dev *pdev, int nvec)
+{
+	return pci_enable_msi_range(pdev, 1, nvec);
+}
+
+Note the value of 'minvec' parameter is 1.  As 'minvec' is inclusive,
+the value of 0 would be meaningless and could result in error.
+
+Some devices have a minimal limit on number of MSI interrupts.
+In this case the function could look like this:
+
+static int foo_driver_enable_msi(struct pci_dev *pdev, int nvec)
+{
+	return pci_enable_msi_range(pdev, FOO_DRIVER_MINIMUM_NVEC, nvec);
+}
+
+4.2.1.2 Exact number of MSI interrupts
+
+If a driver is unable or unwilling to deal with a variable number of MSI
+interrupts it could request a particular number of interrupts by passing
+that number to pci_enable_msi_range() function as both 'minvec' and 'maxvec'
+parameters:
+
+static int foo_driver_enable_msi(struct pci_dev *pdev, int nvec)
+{
+	return pci_enable_msi_range(pdev, nvec, nvec);
+}
+
+4.2.1.3 Single MSI mode
+
+The most notorious example of the request type described above is
+enabling the single MSI mode for a device.  It could be done by passing
+two 1s as 'minvec' and 'maxvec':
+
+static int foo_driver_enable_single_msi(struct pci_dev *pdev)
+{
+	return pci_enable_msi_range(pdev, 1, 1);
+}
+
+4.2.2 pci_disable_msi

 void pci_disable_msi(struct pci_dev *dev)

-This function should be used to undo the effect of pci_enable_msi() or
-pci_enable_msi_block() or pci_enable_msi_block_auto().  Calling it restores
-dev->irq to the pin-based interrupt number and frees the previously
-allocated message signaled interrupt(s).  The interrupt may subsequently be
-assigned to another device, so drivers should not cache the value of
-dev->irq.
+This function should be used to undo the effect of pci_enable_msi_range().
+Calling it restores dev->irq to the pin-based interrupt number and frees
+the previously allocated MSIs.  The interrupts may subsequently be assigned
+to another device, so drivers should not cache the value of dev->irq.

 Before calling this function, a device driver must always call free_irq()
 on any interrupt for which it previously called request_irq().
 Failure to do so results in a BUG_ON(), leaving the device with
 MSI enabled and thus leaking its vector.
+
+4.2.3 pci_msi_vec_count
+
+int pci_msi_vec_count(struct pci_dev *dev)
+
+This function could be used to retrieve the number of MSI vectors the
+device requested (via the Multiple Message Capable register). The MSI
+specification only allows the returned value to be a power of two,
+up to a maximum of 2^5 (32).
+
+If this function returns a negative number, it indicates the device is
+not capable of sending MSIs.
+
+If this function returns a positive number, it indicates the maximum
+number of MSI interrupt vectors that could be allocated.

 4.3 Using MSI-X
···
 should assign interrupts; it is invalid to fill in two entries with the
 same number.

-4.3.1 pci_enable_msix
+4.3.1 pci_enable_msix_range

-int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nvec)
+int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
+			  int minvec, int maxvec)

-Calling this function asks the PCI subsystem to allocate 'nvec' MSIs.
+Calling this function asks the PCI subsystem to allocate any number of
+MSI-X interrupts within specified range from 'minvec' to 'maxvec'.
 The 'entries' argument is a pointer to an array of msix_entry structs
-which should be at least 'nvec' entries in size.  On success, the
-device is switched into MSI-X mode and the function returns 0.
-The 'vector' member in each entry is populated with the interrupt number;
+which should be at least 'maxvec' entries in size.
+
+On success, the device is switched into MSI-X mode and the function
+returns the number of MSI-X interrupts that have been successfully
+allocated.  In this case the 'vector' member in entries numbered from
+0 to the returned value - 1 is populated with the interrupt number;
 the driver should then call request_irq() for each 'vector' that it
 decides to use.  The device driver is responsible for keeping track of the
 interrupts assigned to the MSI-X vectors so it can free them again later.
+Device driver can use the returned number of successfully allocated MSI-X
+interrupts to further allocate and initialize device resources.

 If this function returns a negative number, it indicates an error and
 the driver should not attempt to allocate any more MSI-X interrupts for
-this device.  If it returns a positive number, it indicates the maximum
-number of interrupt vectors that could have been allocated. See example
-below.
+this device.

-This function, in contrast with pci_enable_msi(), does not adjust
+This function, in contrast with pci_enable_msi_range(), does not adjust
 dev->irq.  The device will not generate interrupts for this interrupt
 number once MSI-X is enabled.
···
 there are many reasons why the platform may not be able to provide the
 exact number that a driver asks for.

-A request loop to achieve that might look like:
+There could be devices that can not operate with just any number of MSI-X
+interrupts within a range.  E.g., an network adapter might need let's say
+four vectors per each queue it provides.  Therefore, a number of MSI-X
+interrupts allocated should be a multiple of four.  In this case interface
+pci_enable_msix_range() can not be used alone to request MSI-X interrupts
+(since it can allocate any number within the range, without any notion of
+the multiple of four) and the device driver should master a custom logic
+to request the required number of MSI-X interrupts.
+
+4.3.1.1 Maximum possible number of MSI-X interrupts
+
+The typical usage of MSI-X interrupts is to allocate as many vectors as
+possible, likely up to the limit returned by pci_msix_vec_count() function:

 static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
 {
-	while (nvec >= FOO_DRIVER_MINIMUM_NVEC) {
-		rc = pci_enable_msix(adapter->pdev,
-				     adapter->msix_entries, nvec);
-		if (rc > 0)
-			nvec = rc;
-		else
-			return rc;
+	return pci_enable_msi_range(adapter->pdev, adapter->msix_entries,
+				    1, nvec);
+}
+
+Note the value of 'minvec' parameter is 1.  As 'minvec' is inclusive,
+the value of 0 would be meaningless and could result in error.
+
+Some devices have a minimal limit on number of MSI-X interrupts.
+In this case the function could look like this:
+
+static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
+{
+	return pci_enable_msi_range(adapter->pdev, adapter->msix_entries,
+				    FOO_DRIVER_MINIMUM_NVEC, nvec);
+}
+
+4.3.1.2 Exact number of MSI-X interrupts
+
+If a driver is unable or unwilling to deal with a variable number of MSI-X
+interrupts it could request a particular number of interrupts by passing
+that number to pci_enable_msix_range() function as both 'minvec' and 'maxvec'
+parameters:
+
+static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
+{
+	return pci_enable_msi_range(adapter->pdev, adapter->msix_entries,
+				    nvec, nvec);
+}
+
+4.3.1.3 Specific requirements to the number of MSI-X interrupts
+
+As noted above, there could be devices that can not operate with just any
+number of MSI-X interrupts within a range.  E.g., let's assume a device that
+is only capable sending the number of MSI-X interrupts which is a power of
+two.  A routine that enables MSI-X mode for such device might look like this:
+
+/*
+ * Assume 'minvec' and 'maxvec' are non-zero
+ */
+static int foo_driver_enable_msix(struct foo_adapter *adapter,
+				  int minvec, int maxvec)
+{
+	int rc;
+
+	minvec = roundup_pow_of_two(minvec);
+	maxvec = rounddown_pow_of_two(maxvec);
+
+	if (minvec > maxvec)
+		return -ERANGE;
+
+retry:
+	rc = pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
+				   maxvec, maxvec);
+	/*
+	 * -ENOSPC is the only error code allowed to be analized
+	 */
+	if (rc == -ENOSPC) {
+		if (maxvec == 1)
+			return -ENOSPC;
+
+		maxvec /= 2;
+
+		if (minvec > maxvec)
+			return -ENOSPC;
+
+		goto retry;
 	}

-	return -ENOSPC;
+	return rc;
 }
+
+Note how pci_enable_msix_range() return value is analized for a fallback -
+any error code other than -ENOSPC indicates a fatal error and should not
+be retried.

 4.3.2 pci_disable_msix

 void pci_disable_msix(struct pci_dev *dev)

-This function should be used to undo the effect of pci_enable_msix().  It frees
-the previously allocated message signaled interrupts.  The interrupts may
+This function should be used to undo the effect of pci_enable_msix_range().
+It frees the previously allocated MSI-X interrupts.  The interrupts may
 subsequently be assigned to another device, so drivers should not cache
 the value of the 'vector' elements over a call to pci_disable_msix().
···
 be accessed directly by the device driver.  If the driver wishes to
 mask or unmask an interrupt, it should call disable_irq() / enable_irq().

+4.3.4 pci_msix_vec_count
+
+int pci_msix_vec_count(struct pci_dev *dev)
+
+This function could be used to retrieve number of entries in the device
+MSI-X table.
+
+If this function returns a negative number, it indicates the device is
+not capable of sending MSI-Xs.
+
+If this function returns a positive number, it indicates the maximum
+number of MSI-X interrupt vectors that could be allocated.
+
 4.4 Handling devices implementing both MSI and MSI-X capabilities

 If a device implements both MSI and MSI-X capabilities, it can
 run in either MSI mode or MSI-X mode, but not both simultaneously.
 This is a requirement of the PCI spec, and it is enforced by the
-PCI layer.  Calling pci_enable_msi() when MSI-X is already enabled or
-pci_enable_msix() when MSI is already enabled results in an error.
-If a device driver wishes to switch between MSI and MSI-X at runtime,
-it must first quiesce the device, then switch it back to pin-interrupt
-mode, before calling pci_enable_msi() or pci_enable_msix() and resuming
-operation.  This is not expected to be a common operation but may be
-useful for debugging or testing during development.
+PCI layer.  Calling pci_enable_msi_range() when MSI-X is already
+enabled or pci_enable_msix_range() when MSI is already enabled
+results in an error.  If a device driver wishes to switch between MSI
+and MSI-X at runtime, it must first quiesce the device, then switch
+it back to pin-interrupt mode, before calling pci_enable_msi_range()
+or pci_enable_msix_range() and resuming operation.  This is not expected
+to be a common operation but may be useful for debugging or testing
+during development.

 4.5 Considerations when using MSIs
···
 to bridges between the PCI root and the device, MSIs are disabled.

 It is also worth checking the device driver to see whether it supports MSIs.
-For example, it may contain calls to pci_enable_msi(), pci_enable_msix() or
-pci_enable_msi_block().
+For example, it may contain calls to pci_enable_msi_range() or
+pci_enable_msix_range().
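The power-of-two fallback routine shown in the HOWTO diff above can be exercised outside the kernel. This sketch swaps pci_enable_msix_range() for a mock with a fixed vector budget and re-implements the rounding helpers; everything here is a userspace model of the retry logic, not kernel code:

```c
#define MOCK_ENOSPC (-28)   /* stand-in for -ENOSPC */
#define MOCK_ERANGE (-34)   /* stand-in for -ERANGE */

/* Mock of the exact-count pci_enable_msix_range() call the HOWTO makes:
 * succeeds only when the requested count fits the (hypothetical)
 * platform budget, otherwise reports no space. */
static int mock_enable_msix_exact(int budget, int nvec)
{
	return nvec <= budget ? nvec : MOCK_ENOSPC;
}

/* Userspace stand-ins for the kernel's rounding helpers (v >= 1). */
static int rounddown_pow2(int v)
{
	int p = 1;
	while (p * 2 <= v)
		p *= 2;
	return p;
}

static int roundup_pow2(int v)
{
	int p = 1;
	while (p < v)
		p *= 2;
	return p;
}

/* Same shape as the HOWTO's foo_driver_enable_msix(): halve the request
 * on -ENOSPC only, and give up once it drops below the rounded minimum. */
static int enable_msix_pow2(int budget, int minvec, int maxvec)
{
	int rc;

	minvec = roundup_pow2(minvec);
	maxvec = rounddown_pow2(maxvec);

	if (minvec > maxvec)
		return MOCK_ERANGE;

	for (;;) {
		rc = mock_enable_msix_exact(budget, maxvec);
		if (rc != MOCK_ENOSPC)
			return rc;       /* success, or a fatal error code */
		if (maxvec == 1 || minvec > maxvec / 2)
			return MOCK_ENOSPC;
		maxvec /= 2;
	}
}
```

The point being exercised is the HOWTO's rule that only -ENOSPC may trigger a retry; any other return value is passed through untouched.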
+4 -2
Documentation/PCI/pci.txt
···
 
 The ID table is an array of struct pci_device_id entries ending with an
-all-zero entry; use of the macro DEFINE_PCI_DEVICE_TABLE is the preferred
-method of declaring the table.  Each entry consists of:
+all-zero entry.  Definitions with static const are generally preferred.
+Use of the deprecated macro DEFINE_PCI_DEVICE_TABLE should be avoided.
+
+Each entry consists of:

 vendor,device		Vendor and device ID to match (or PCI_ANY_ID)
+2
Documentation/devicetree/bindings/pci/designware-pcie.txt
···
 	to define the mapping of the PCIe interface to interrupt
 	numbers.
 - num-lanes: number of lanes to use
+
+Optional properties:
 - reset-gpio: gpio pin number of power good signal

 Optional properties for fsl,imx6q-pcie
+2 -2
arch/alpha/kernel/pci-sysfs.c
···
 	if (iomem_is_exclusive(res->start))
 		return -EINVAL;

-	pcibios_resource_to_bus(pdev, &bar, res);
+	pcibios_resource_to_bus(pdev->bus, &bar, res);
 	vma->vm_pgoff += bar.start >> (PAGE_SHIFT - (sparse ? 5 : 0));
 	mmap_type = res->flags & IORESOURCE_MEM ? pci_mmap_mem : pci_mmap_io;
···
 	long dense_offset;
 	unsigned long sparse_size;

-	pcibios_resource_to_bus(pdev, &bar, &pdev->resource[num]);
+	pcibios_resource_to_bus(pdev->bus, &bar, &pdev->resource[num]);

 	/* All core logic chips have 4G sparse address space, except
 	   CIA which has 16G (see xxx_SPARSE_MEM and xxx_DENSE_MEM
+1 -1
arch/alpha/kernel/pci_iommu.c
···
 /* Helper for generic DMA-mapping functions. */
 static struct pci_dev *alpha_gendev_to_pci(struct device *dev)
 {
-	if (dev && dev->bus == &pci_bus_type)
+	if (dev && dev_is_pci(dev))
 		return to_pci_dev(dev);

 	/* Assume that non-PCI devices asking for DMA are either ISA or EISA,
+2 -2
arch/arm/common/it8152.c
···
  */
 static int it8152_pci_platform_notify(struct device *dev)
 {
-	if (dev->bus == &pci_bus_type) {
+	if (dev_is_pci(dev)) {
 		if (dev->dma_mask)
 			*dev->dma_mask = (SZ_64M - 1) | PHYS_OFFSET;
 		dev->coherent_dma_mask = (SZ_64M - 1) | PHYS_OFFSET;
···
 
 static int it8152_pci_platform_notify_remove(struct device *dev)
 {
-	if (dev->bus == &pci_bus_type)
+	if (dev_is_pci(dev))
 		dmabounce_unregister_dev(dev);

 	return 0;
+3 -3
arch/arm/mach-ixp4xx/common-pci.c
···
  */
 static int ixp4xx_pci_platform_notify(struct device *dev)
 {
-	if(dev->bus == &pci_bus_type) {
+	if (dev_is_pci(dev)) {
 		*dev->dma_mask = SZ_64M - 1;
 		dev->coherent_dma_mask = SZ_64M - 1;
 		dmabounce_register_dev(dev, 2048, 4096, ixp4xx_needs_bounce);
···
 
 static int ixp4xx_pci_platform_notify_remove(struct device *dev)
 {
-	if(dev->bus == &pci_bus_type) {
+	if (dev_is_pci(dev))
 		dmabounce_unregister_dev(dev);
-	}
+
 	return 0;
 }
+1 -1
arch/ia64/hp/common/sba_iommu.c
···
 #endif

 #ifdef CONFIG_PCI
-# define GET_IOC(dev)	(((dev)->bus == &pci_bus_type)						\
+# define GET_IOC(dev)	((dev_is_pci(dev))						\
 			? ((struct ioc *) PCI_CONTROLLER(to_pci_dev(dev))->iommu) : NULL)
 #else
 # define GET_IOC(dev)	NULL
+12 -12
arch/ia64/sn/pci/pci_dma.c
···
  */
 static int sn_dma_supported(struct device *dev, u64 mask)
 {
-	BUG_ON(dev->bus != &pci_bus_type);
+	BUG_ON(!dev_is_pci(dev));

 	if (mask < 0x7fffffff)
 		return 0;
···
  */
 int sn_dma_set_mask(struct device *dev, u64 dma_mask)
 {
-	BUG_ON(dev->bus != &pci_bus_type);
+	BUG_ON(!dev_is_pci(dev));

 	if (!sn_dma_supported(dev, dma_mask))
 		return 0;
···
 	struct pci_dev *pdev = to_pci_dev(dev);
 	struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);

-	BUG_ON(dev->bus != &pci_bus_type);
+	BUG_ON(!dev_is_pci(dev));

 	/*
 	 * Allocate the memory.
···
 	struct pci_dev *pdev = to_pci_dev(dev);
 	struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);

-	BUG_ON(dev->bus != &pci_bus_type);
+	BUG_ON(!dev_is_pci(dev));

 	provider->dma_unmap(pdev, dma_handle, 0);
 	free_pages((unsigned long)cpu_addr, get_order(size));
···
 
 	dmabarr = dma_get_attr(DMA_ATTR_WRITE_BARRIER, attrs);

-	BUG_ON(dev->bus != &pci_bus_type);
+	BUG_ON(!dev_is_pci(dev));

 	phys_addr = __pa(cpu_addr);
 	if (dmabarr)
···
 	struct pci_dev *pdev = to_pci_dev(dev);
 	struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);

-	BUG_ON(dev->bus != &pci_bus_type);
+	BUG_ON(!dev_is_pci(dev));

 	provider->dma_unmap(pdev, dma_addr, dir);
 }
···
 	struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);
 	struct scatterlist *sg;

-	BUG_ON(dev->bus != &pci_bus_type);
+	BUG_ON(!dev_is_pci(dev));

 	for_each_sg(sgl, sg, nhwentries, i) {
 		provider->dma_unmap(pdev, sg->dma_address, dir);
···
 
 	dmabarr = dma_get_attr(DMA_ATTR_WRITE_BARRIER, attrs);

-	BUG_ON(dev->bus != &pci_bus_type);
+	BUG_ON(!dev_is_pci(dev));

 	/*
 	 * Setup a DMA address for each entry in the scatterlist.
···
 static void sn_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
 				       size_t size, enum dma_data_direction dir)
 {
-	BUG_ON(dev->bus != &pci_bus_type);
+	BUG_ON(!dev_is_pci(dev));
 }

 static void sn_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
 					  size_t size,
 					  enum dma_data_direction dir)
 {
-	BUG_ON(dev->bus != &pci_bus_type);
+	BUG_ON(!dev_is_pci(dev));
 }

 static void sn_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
 				   int nelems, enum dma_data_direction dir)
 {
-	BUG_ON(dev->bus != &pci_bus_type);
+	BUG_ON(!dev_is_pci(dev));
 }

 static void sn_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
 				      int nelems, enum dma_data_direction dir)
 {
-	BUG_ON(dev->bus != &pci_bus_type);
+	BUG_ON(!dev_is_pci(dev));
 }

 static int sn_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
+5 -17
arch/parisc/kernel/drivers.c
···
 	return NULL;
 }

-#ifdef CONFIG_PCI
-static inline int is_pci_dev(struct device *dev)
-{
-	return dev->bus == &pci_bus_type;
-}
-#else
-static inline int is_pci_dev(struct device *dev)
-{
-	return 0;
-}
-#endif
-
 /*
  * get_node_path fills in @path with the firmware path to the device.
  * Note that if @node is a parisc device, we don't fill in the 'mod' field.
···
 	int i = 5;
 	memset(&path->bc, -1, 6);

-	if (is_pci_dev(dev)) {
+	if (dev_is_pci(dev)) {
 		unsigned int devfn = to_pci_dev(dev)->devfn;
 		path->mod = PCI_FUNC(devfn);
 		path->bc[i--] = PCI_SLOT(devfn);
···
 	}

 	while (dev != &root) {
-		if (is_pci_dev(dev)) {
+		if (dev_is_pci(dev)) {
 			unsigned int devfn = to_pci_dev(dev)->devfn;
 			path->bc[i--] = PCI_SLOT(devfn) | (PCI_FUNC(devfn)<< 5);
 		} else if (dev->bus == &parisc_bus_type) {
···
 	if (dev->bus == &parisc_bus_type) {
 		if (match_parisc_device(dev, d->index, d->modpath))
 			d->dev = dev;
-	} else if (is_pci_dev(dev)) {
+	} else if (dev_is_pci(dev)) {
 		if (match_pci_device(dev, d->index, d->modpath))
 			d->dev = dev;
 	} else if (dev->bus == NULL) {
···
 		if (!parent)
 			return NULL;
 	}
-	if (is_pci_dev(parent)) /* pci devices already parse MOD */
+	if (dev_is_pci(parent)) /* pci devices already parse MOD */
 		return parent;
 	else
 		return parse_tree_node(parent, 6, modpath);
···
 		padev = to_parisc_device(dev);
 		get_node_path(dev->parent, path);
 		path->mod = padev->hw_path;
-	} else if (is_pci_dev(dev)) {
+	} else if (dev_is_pci(dev)) {
 		get_node_path(dev, path);
 	}
 }
+16 -3
arch/powerpc/kernel/eeh_driver.c
···
 	edev->mode |= EEH_DEV_DISCONNECTED;
 	(*removed)++;
 
+	pci_lock_rescan_remove();
 	pci_stop_and_remove_bus_device(dev);
+	pci_unlock_rescan_remove();
 
 	return NULL;
 }
···
 	 * into pcibios_add_pci_devices().
 	 */
 	eeh_pe_state_mark(pe, EEH_PE_KEEP);
-	if (bus)
+	if (bus) {
+		pci_lock_rescan_remove();
 		pcibios_remove_pci_devices(bus);
-	else if (frozen_bus)
+		pci_unlock_rescan_remove();
+	} else if (frozen_bus) {
 		eeh_pe_dev_traverse(pe, eeh_rmv_device, &removed);
+	}
 
 	/* Reset the pci controller. (Asserts RST#; resets config space).
 	 * Reconfigure bridges and devices. Don't try to bring the system
···
 	rc = eeh_reset_pe(pe);
 	if (rc)
 		return rc;
+
+	pci_lock_rescan_remove();
 
 	/* Restore PE */
 	eeh_ops->configure_bridge(pe);
···
 	pe->tstamp = tstamp;
 	pe->freeze_count = cnt;
 
+	pci_unlock_rescan_remove();
 	return 0;
 }
···
 	eeh_pe_dev_traverse(pe, eeh_report_failure, NULL);
 
 	/* Shut down the device drivers for good. */
-	if (frozen_bus)
+	if (frozen_bus) {
+		pci_lock_rescan_remove();
 		pcibios_remove_pci_devices(frozen_bus);
+		pci_unlock_rescan_remove();
+	}
 }
 
 static void eeh_handle_special_event(void)
···
 	if (rc == 2 || rc == 1)
 		eeh_handle_normal_event(pe);
 	else {
+		pci_lock_rescan_remove();
 		list_for_each_entry_safe(hose, tmp,
 			&hose_list, list_node) {
 			phb_pe = eeh_phb_pe_get(hose);
···
 			eeh_pe_dev_traverse(pe, eeh_report_failure, NULL);
 			pcibios_remove_pci_devices(bus);
 		}
+		pci_unlock_rescan_remove();
 	}
 }
+2 -2
arch/powerpc/kernel/pci-common.c
···
 	 * at 0 as unset as well, except if PCI_PROBE_ONLY is also set
 	 * since in that case, we don't want to re-assign anything
 	 */
-	pcibios_resource_to_bus(dev, &reg, res);
+	pcibios_resource_to_bus(dev->bus, &reg, res);
 	if (pci_has_flag(PCI_REASSIGN_ALL_RSRC) ||
 	    (reg.start == 0 && !pci_has_flag(PCI_PROBE_ONLY))) {
 		/* Only print message if not re-assigning */
···
 
 	/* Job is a bit different between memory and IO */
 	if (res->flags & IORESOURCE_MEM) {
-		pcibios_resource_to_bus(dev, &region, res);
+		pcibios_resource_to_bus(dev->bus, &region, res);
 
 		/* If the BAR is non-0 then it's probably been initialized */
 		if (region.start != 0)
+2 -2
arch/powerpc/kernel/pci_of_scan.c
···
 	res->name = pci_name(dev);
 	region.start = base;
 	region.end = base + size - 1;
-	pcibios_bus_to_resource(dev, res, &region);
+	pcibios_bus_to_resource(dev->bus, res, &region);
 	}
 }
···
 	res->flags = flags;
 	region.start = of_read_number(&ranges[1], 2);
 	region.end = region.start + size - 1;
-	pcibios_bus_to_resource(dev, res, &region);
+	pcibios_bus_to_resource(dev->bus, res, &region);
 	}
 	sprintf(bus->name, "PCI Bus %04x:%02x", pci_domain_nr(bus),
 		bus->number);
+2 -2
arch/s390/pci/pci.c
···
 	struct msi_msg msg;
 	int rc;
 
-	if (type != PCI_CAP_ID_MSIX && type != PCI_CAP_ID_MSI)
-		return -EINVAL;
+	if (type == PCI_CAP_ID_MSI && nvec > 1)
+		return 1;
 	msi_vecs = min(nvec, ZPCI_MSI_VEC_MAX);
 	msi_vecs = min_t(unsigned int, msi_vecs, CONFIG_PCI_NR_MSI);
+3 -3
arch/sparc/kernel/pci.c
···
 	res->flags = IORESOURCE_IO;
 	region.start = (first << 21);
 	region.end = (last << 21) + ((1 << 21) - 1);
-	pcibios_bus_to_resource(dev, res, &region);
+	pcibios_bus_to_resource(dev->bus, res, &region);
 
 	pci_read_config_byte(dev, APB_MEM_ADDRESS_MAP, &map);
 	apb_calc_first_last(map, &first, &last);
···
 	res->flags = IORESOURCE_MEM;
 	region.start = (first << 29);
 	region.end = (last << 29) + ((1 << 29) - 1);
-	pcibios_bus_to_resource(dev, res, &region);
+	pcibios_bus_to_resource(dev->bus, res, &region);
 }
 
 static void pci_of_scan_bus(struct pci_pbm_info *pbm,
···
 	res->flags = flags;
 	region.start = GET_64BIT(ranges, 1);
 	region.end = region.start + size - 1;
-	pcibios_bus_to_resource(dev, res, &region);
+	pcibios_bus_to_resource(dev->bus, res, &region);
 	}
 after_ranges:
 	sprintf(bus->name, "PCI Bus %04x:%02x", pci_domain_nr(bus),
+1 -2
arch/x86/include/asm/pci.h
···
 struct msi_desc;
 int native_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
 void native_teardown_msi_irq(unsigned int irq);
-void native_restore_msi_irqs(struct pci_dev *dev, int irq);
+void native_restore_msi_irqs(struct pci_dev *dev);
 int setup_msi_irq(struct pci_dev *dev, struct msi_desc *msidesc,
 		  unsigned int irq_base, unsigned int irq_offset);
 #else
···
 
 /* generic pci stuff */
 #include <asm-generic/pci.h>
-#define PCIBIOS_MAX_MEM_32 0xffffffff
 
 #ifdef CONFIG_NUMA
 /* Returns the node based on pci bus */
+1 -1
arch/x86/include/asm/x86_init.h
···
 			      u8 hpet_id);
 	void (*teardown_msi_irq)(unsigned int irq);
 	void (*teardown_msi_irqs)(struct pci_dev *dev);
-	void (*restore_msi_irqs)(struct pci_dev *dev, int irq);
+	void (*restore_msi_irqs)(struct pci_dev *dev);
 	int  (*setup_hpet_msi)(unsigned int irq, unsigned int id);
 	u32 (*msi_mask_irq)(struct msi_desc *desc, u32 mask, u32 flag);
 	u32 (*msix_mask_irq)(struct msi_desc *desc, u32 flag);
+1 -3
arch/x86/kernel/acpi/boot.c
···
 
 	if (!acpi_ioapic)
 		return 0;
-	if (!dev)
-		return 0;
-	if (dev->bus != &pci_bus_type)
+	if (!dev || !dev_is_pci(dev))
 		return 0;
 
 	pdev = to_pci_dev(dev);
+2 -2
arch/x86/kernel/x86_init.c
···
 	x86_msi.teardown_msi_irq(irq);
 }
 
-void arch_restore_msi_irqs(struct pci_dev *dev, int irq)
+void arch_restore_msi_irqs(struct pci_dev *dev)
 {
-	x86_msi.restore_msi_irqs(dev, irq);
+	x86_msi.restore_msi_irqs(dev);
 }
 u32 arch_msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
 {
+1 -1
arch/x86/pci/xen.c
···
 	return ret;
 }
 
-static void xen_initdom_restore_msi_irqs(struct pci_dev *dev, int irq)
+static void xen_initdom_restore_msi_irqs(struct pci_dev *dev)
 {
 	int ret = 0;
+6
drivers/acpi/pci_root.c
···
 		pci_assign_unassigned_root_bus_resources(root->bus);
 	}
 
+	pci_lock_rescan_remove();
 	pci_bus_add_devices(root->bus);
+	pci_unlock_rescan_remove();
 	return 1;
 
 end:
···
 {
 	struct acpi_pci_root *root = acpi_driver_data(device);
 
+	pci_lock_rescan_remove();
+
 	pci_stop_root_bus(root->bus);
 
 	device_set_run_wake(root->bus->bridge, false);
 	pci_acpi_remove_bus_pm_notifier(device);
 
 	pci_remove_root_bus(root->bus);
+
+	pci_unlock_rescan_remove();
 
 	kfree(root);
 }
+35 -21
drivers/ata/ahci.c
···
 {}
 #endif
 
-static int ahci_init_interrupts(struct pci_dev *pdev, struct ahci_host_priv *hpriv)
+static int ahci_init_interrupts(struct pci_dev *pdev, unsigned int n_ports,
+				struct ahci_host_priv *hpriv)
 {
-	int rc;
-	unsigned int maxvec;
+	int rc, nvec;
 
-	if (!(hpriv->flags & AHCI_HFLAG_NO_MSI)) {
-		rc = pci_enable_msi_block_auto(pdev, &maxvec);
-		if (rc > 0) {
-			if ((rc == maxvec) || (rc == 1))
-				return rc;
-			/*
-			 * Assume that advantage of multipe MSIs is negated,
-			 * so fallback to single MSI mode to save resources
-			 */
-			pci_disable_msi(pdev);
-			if (!pci_enable_msi(pdev))
-				return 1;
-		}
-	}
+	if (hpriv->flags & AHCI_HFLAG_NO_MSI)
+		goto intx;
 
+	rc = pci_msi_vec_count(pdev);
+	if (rc < 0)
+		goto intx;
+
+	/*
+	 * If number of MSIs is less than number of ports then Sharing Last
+	 * Message mode could be enforced. In this case assume that advantage
+	 * of multipe MSIs is negated and use single MSI mode instead.
+	 */
+	if (rc < n_ports)
+		goto single_msi;
+
+	nvec = rc;
+	rc = pci_enable_msi_block(pdev, nvec);
+	if (rc)
+		goto intx;
+
+	return nvec;
+
+single_msi:
+	rc = pci_enable_msi(pdev);
+	if (rc)
+		goto intx;
+	return 1;
+
+intx:
 	pci_intx(pdev, 1);
 	return 0;
 }
···
 
 	hpriv->mmio = pcim_iomap_table(pdev)[ahci_pci_bar];
 
-	n_msis = ahci_init_interrupts(pdev, hpriv);
-	if (n_msis > 1)
-		hpriv->flags |= AHCI_HFLAG_MULTI_MSI;
-
 	/* save initial config */
 	ahci_pci_save_initial_config(pdev, hpriv);
···
 	 * both CAP.NP and port_map.
 	 */
 	n_ports = max(ahci_nr_ports(hpriv->cap), fls(hpriv->port_map));
+
+	n_msis = ahci_init_interrupts(pdev, n_ports, hpriv);
+	if (n_msis > 1)
+		hpriv->flags |= AHCI_HFLAG_MULTI_MSI;
 
 	host = ata_host_alloc_pinfo(&pdev->dev, ppi, n_ports);
 	if (!host)
+1
drivers/char/agp/agp.h
···
 
 /* Chipset independent registers (from AGP Spec) */
 #define AGP_APBASE	0x10
+#define AGP_APERTURE_BAR	0
 
 #define AGPSTAT		0x4
 #define AGPCMD		0x8
+2 -2
drivers/char/agp/ali-agp.c
···
 	pci_write_config_dword(agp_bridge->dev, ALI_TLBCTRL, ((temp & 0xffffff00) | 0x00000010));
 
 	/* address to map to */
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+						    AGP_APERTURE_BAR);
 
 #if 0
 	if (agp_bridge->type == ALI_M1541) {
+5 -7
drivers/char/agp/amd-k7-agp.c
···
 #include <linux/slab.h>
 #include "agp.h"
 
-#define AMD_MMBASE	0x14
+#define AMD_MMBASE_BAR	1
 #define AMD_APSIZE	0xac
 #define AMD_MODECNTL	0xb0
 #define AMD_MODECNTL2	0xb2
···
 	unsigned long __iomem *cur_gatt;
 	unsigned long addr;
 	int retval;
-	u32 temp;
 	int i;
 
 	value = A_SIZE_LVL2(agp_bridge->current_size);
···
 	 * used to program the agp master not the cpu
 	 */
 
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	addr = pci_bus_address(agp_bridge->dev, AGP_APERTURE_BAR);
 	agp_bridge->gart_bus_addr = addr;
 
 	/* Calculate the agp offset */
···
 static int amd_irongate_configure(void)
 {
 	struct aper_size_info_lvl2 *current_size;
+	phys_addr_t reg;
 	u32 temp;
 	u16 enable_reg;
 
···
 
 	if (!amd_irongate_private.registers) {
 		/* Get the memory mapped registers */
-		pci_read_config_dword(agp_bridge->dev, AMD_MMBASE, &temp);
-		temp = (temp & PCI_BASE_ADDRESS_MEM_MASK);
-		amd_irongate_private.registers = (volatile u8 __iomem *) ioremap(temp, 4096);
+		reg = pci_resource_start(agp_bridge->dev, AMD_MMBASE_BAR);
+		amd_irongate_private.registers = (volatile u8 __iomem *) ioremap(reg, 4096);
 		if (!amd_irongate_private.registers)
 			return -ENOMEM;
 	}
+1 -4
drivers/char/agp/amd64-agp.c
···
  */
 static int fix_northbridge(struct pci_dev *nb, struct pci_dev *agp, u16 cap)
 {
-	u32 aper_low, aper_hi;
 	u64 aper, nb_aper;
 	int order = 0;
 	u32 nb_order, nb_base;
···
 		apsize |= 0xf00;
 	order = 7 - hweight16(apsize);
 
-	pci_read_config_dword(agp, 0x10, &aper_low);
-	pci_read_config_dword(agp, 0x14, &aper_hi);
-	aper = (aper_low & ~((1<<22)-1)) | ((u64)aper_hi << 32);
+	aper = pci_bus_address(agp, AGP_APERTURE_BAR);
 
 	/*
 	 * On some sick chips APSIZE is 0. This means it wants 4G
+10 -11
drivers/char/agp/ati-agp.c
···
 #include <asm/agp.h>
 #include "agp.h"
 
-#define ATI_GART_MMBASE_ADDR	0x14
+#define ATI_GART_MMBASE_BAR	1
 #define ATI_RS100_APSIZE	0xac
 #define ATI_RS100_IG_AGPMODE	0xb0
 #define ATI_RS300_APSIZE	0xf8
···
 
 static int ati_configure(void)
 {
+	phys_addr_t reg;
 	u32 temp;
 
 	/* Get the memory mapped registers */
-	pci_read_config_dword(agp_bridge->dev, ATI_GART_MMBASE_ADDR, &temp);
-	temp = (temp & 0xfffff000);
-	ati_generic_private.registers = (volatile u8 __iomem *) ioremap(temp, 4096);
+	reg = pci_resource_start(agp_bridge->dev, ATI_GART_MMBASE_BAR);
+	ati_generic_private.registers = (volatile u8 __iomem *) ioremap(reg, 4096);
 
 	if (!ati_generic_private.registers)
 		return -ENOMEM;
···
 	else
 		pci_write_config_dword(agp_bridge->dev, ATI_RS300_IG_AGPMODE, 0x20000);
 
-	/* address to map too */
+	/* address to map to */
 	/*
-	pci_read_config_dword(agp_bridge.dev, AGP_APBASE, &temp);
-	agp_bridge.gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	agp_bridge.gart_bus_addr = pci_bus_address(agp_bridge.dev,
+						   AGP_APERTURE_BAR);
 	printk(KERN_INFO PFX "IGP320 gart_bus_addr: %x\n", agp_bridge.gart_bus_addr);
 	*/
 	writel(0x60000, ati_generic_private.registers+ATI_GART_FEATURE_ID);
 	readl(ati_generic_private.registers+ATI_GART_FEATURE_ID);	/* PCI Posting.*/
 
 	/* SIGNALED_SYSTEM_ERROR @ NB_STATUS */
-	pci_read_config_dword(agp_bridge->dev, 4, &temp);
-	pci_write_config_dword(agp_bridge->dev, 4, temp | (1<<14));
+	pci_read_config_dword(agp_bridge->dev, PCI_COMMAND, &temp);
+	pci_write_config_dword(agp_bridge->dev, PCI_COMMAND, temp | (1<<14));
 
 	/* Write out the address of the gatt table */
 	writel(agp_bridge->gatt_bus_addr, ati_generic_private.registers+ATI_GART_BASE);
···
 	 * This is a bus address even on the alpha, b/c its
 	 * used to program the agp master not the cpu
 	 */
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	addr = pci_bus_address(agp_bridge->dev, AGP_APERTURE_BAR);
 	agp_bridge->gart_bus_addr = addr;
 
 	/* Calculate the agp offset */
+2 -3
drivers/char/agp/efficeon-agp.c
···
 
 static int efficeon_configure(void)
 {
-	u32 temp;
 	u16 temp2;
 	struct aper_size_info_lvl2 *current_size;
 
···
 			      current_size->size_value);
 
 	/* address to map to */
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+						    AGP_APERTURE_BAR);
 
 	/* agpctrl */
 	pci_write_config_dword(agp_bridge->dev, INTEL_AGPCTRL, 0x2280);
+2 -2
drivers/char/agp/generic.c
···
 
 	current_size = A_SIZE_16(agp_bridge->current_size);
 
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+						    AGP_APERTURE_BAR);
 
 	/* set aperture size */
 	pci_write_config_word(agp_bridge->dev, agp_bridge->capndx+AGPAPSIZE, current_size->size_value);
+20 -28
drivers/char/agp/intel-agp.c
···
 
 static int intel_configure(void)
 {
-	u32 temp;
 	u16 temp2;
 	struct aper_size_info_16 *current_size;
 
···
 	pci_write_config_word(agp_bridge->dev, INTEL_APSIZE, current_size->size_value);
 
 	/* address to map to */
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+						    AGP_APERTURE_BAR);
 
 	/* attbase - aperture base */
 	pci_write_config_dword(agp_bridge->dev, INTEL_ATTBASE, agp_bridge->gatt_bus_addr);
···
 
 static int intel_815_configure(void)
 {
-	u32 temp, addr;
+	u32 addr;
 	u8 temp2;
 	struct aper_size_info_8 *current_size;
 
···
 			current_size->size_value);
 
 	/* address to map to */
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+						    AGP_APERTURE_BAR);
 
 	pci_read_config_dword(agp_bridge->dev, INTEL_ATTBASE, &addr);
 	addr &= INTEL_815_ATTBASE_MASK;
···
 
 static int intel_820_configure(void)
 {
-	u32 temp;
 	u8 temp2;
 	struct aper_size_info_8 *current_size;
 
···
 	pci_write_config_byte(agp_bridge->dev, INTEL_APSIZE, current_size->size_value);
 
 	/* address to map to */
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+						    AGP_APERTURE_BAR);
 
 	/* attbase - aperture base */
 	pci_write_config_dword(agp_bridge->dev, INTEL_ATTBASE, agp_bridge->gatt_bus_addr);
···
 
 static int intel_840_configure(void)
 {
-	u32 temp;
 	u16 temp2;
 	struct aper_size_info_8 *current_size;
 
···
 	pci_write_config_byte(agp_bridge->dev, INTEL_APSIZE, current_size->size_value);
 
 	/* address to map to */
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+						    AGP_APERTURE_BAR);
 
 	/* attbase - aperture base */
 	pci_write_config_dword(agp_bridge->dev, INTEL_ATTBASE, agp_bridge->gatt_bus_addr);
···
 
 static int intel_845_configure(void)
 {
-	u32 temp;
 	u8 temp2;
 	struct aper_size_info_8 *current_size;
 
···
 			agp_bridge->apbase_config);
 	} else {
 		/* address to map to */
-		pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-		agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
-		agp_bridge->apbase_config = temp;
+		agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+							    AGP_APERTURE_BAR);
+		agp_bridge->apbase_config = agp_bridge->gart_bus_addr;
 	}
 
 	/* attbase - aperture base */
···
 
 static int intel_850_configure(void)
 {
-	u32 temp;
 	u16 temp2;
 	struct aper_size_info_8 *current_size;
 
···
 	pci_write_config_byte(agp_bridge->dev, INTEL_APSIZE, current_size->size_value);
 
 	/* address to map to */
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+						    AGP_APERTURE_BAR);
 
 	/* attbase - aperture base */
 	pci_write_config_dword(agp_bridge->dev, INTEL_ATTBASE, agp_bridge->gatt_bus_addr);
···
 
 static int intel_860_configure(void)
 {
-	u32 temp;
 	u16 temp2;
 	struct aper_size_info_8 *current_size;
 
···
 	pci_write_config_byte(agp_bridge->dev, INTEL_APSIZE, current_size->size_value);
 
 	/* address to map to */
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+						    AGP_APERTURE_BAR);
 
 	/* attbase - aperture base */
 	pci_write_config_dword(agp_bridge->dev, INTEL_ATTBASE, agp_bridge->gatt_bus_addr);
···
 
 static int intel_830mp_configure(void)
 {
-	u32 temp;
 	u16 temp2;
 	struct aper_size_info_8 *current_size;
 
···
 	pci_write_config_byte(agp_bridge->dev, INTEL_APSIZE, current_size->size_value);
 
 	/* address to map to */
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+						    AGP_APERTURE_BAR);
 
 	/* attbase - aperture base */
 	pci_write_config_dword(agp_bridge->dev, INTEL_ATTBASE, agp_bridge->gatt_bus_addr);
···
 
 static int intel_7505_configure(void)
 {
-	u32 temp;
 	u16 temp2;
 	struct aper_size_info_8 *current_size;
 
···
 	pci_write_config_byte(agp_bridge->dev, INTEL_APSIZE, current_size->size_value);
 
 	/* address to map to */
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+						    AGP_APERTURE_BAR);
 
 	/* attbase - aperture base */
 	pci_write_config_dword(agp_bridge->dev, INTEL_ATTBASE, agp_bridge->gatt_bus_addr);
+5 -5
drivers/char/agp/intel-agp.h
···
 #define INTEL_I860_ERRSTS	0xc8
 
 /* Intel i810 registers */
-#define I810_GMADDR		0x10
-#define I810_MMADDR		0x14
+#define I810_GMADR_BAR		0
+#define I810_MMADR_BAR		1
 #define I810_PTE_BASE		0x10000
 #define I810_PTE_MAIN_UNCACHED	0x00000000
 #define I810_PTE_LOCAL		0x00000002
···
 #define INTEL_I850_ERRSTS	0xc8
 
 /* intel 915G registers */
-#define I915_GMADDR	0x18
-#define I915_MMADDR	0x10
-#define I915_PTEADDR	0x1C
+#define I915_GMADR_BAR	2
+#define I915_MMADR_BAR	0
+#define I915_PTE_BAR	3
 #define I915_GMCH_GMS_STOLEN_48M	(0x6 << 4)
 #define I915_GMCH_GMS_STOLEN_64M	(0x7 << 4)
 #define G33_GMCH_GMS_STOLEN_128M	(0x8 << 4)
+19 -28
drivers/char/agp/intel-gtt.c
···
 	struct pci_dev *pcidev;	/* device one */
 	struct pci_dev *bridge_dev;
 	u8 __iomem *registers;
-	phys_addr_t gtt_bus_addr;
+	phys_addr_t gtt_phys_addr;
 	u32 PGETBL_save;
 	u32 __iomem *gtt;	/* I915G */
 	bool clear_fake_agp; /* on first access via agp, fill with scratch */
···
 #define I810_GTT_ORDER 4
 static int i810_setup(void)
 {
-	u32 reg_addr;
+	phys_addr_t reg_addr;
 	char *gtt_table;
 
 	/* i81x does not preallocate the gtt. It's always 64kb in size. */
···
 		return -ENOMEM;
 	intel_private.i81x_gtt_table = gtt_table;
 
-	pci_read_config_dword(intel_private.pcidev, I810_MMADDR, &reg_addr);
-	reg_addr &= 0xfff80000;
+	reg_addr = pci_resource_start(intel_private.pcidev, I810_MMADR_BAR);
 
 	intel_private.registers = ioremap(reg_addr, KB(64));
 	if (!intel_private.registers)
···
 	writel(virt_to_phys(gtt_table) | I810_PGETBL_ENABLED,
 	       intel_private.registers+I810_PGETBL_CTL);
 
-	intel_private.gtt_bus_addr = reg_addr + I810_PTE_BASE;
+	intel_private.gtt_phys_addr = reg_addr + I810_PTE_BASE;
 
 	if ((readl(intel_private.registers+I810_DRAM_CTL)
 		& I810_DRAM_ROW_0) == I810_DRAM_ROW_0_SDRAM) {
···
 
 static int intel_gtt_init(void)
 {
-	u32 gma_addr;
 	u32 gtt_map_size;
-	int ret;
+	int ret, bar;
 
 	ret = intel_private.driver->setup();
 	if (ret != 0)
···
 
 	intel_private.gtt = NULL;
 	if (intel_gtt_can_wc())
-		intel_private.gtt = ioremap_wc(intel_private.gtt_bus_addr,
+		intel_private.gtt = ioremap_wc(intel_private.gtt_phys_addr,
 					       gtt_map_size);
 	if (intel_private.gtt == NULL)
-		intel_private.gtt = ioremap(intel_private.gtt_bus_addr,
+		intel_private.gtt = ioremap(intel_private.gtt_phys_addr,
 					    gtt_map_size);
 	if (intel_private.gtt == NULL) {
 		intel_private.driver->cleanup();
···
 	}
 
 	if (INTEL_GTT_GEN <= 2)
-		pci_read_config_dword(intel_private.pcidev, I810_GMADDR,
-				      &gma_addr);
+		bar = I810_GMADR_BAR;
 	else
-		pci_read_config_dword(intel_private.pcidev, I915_GMADDR,
-				      &gma_addr);
+		bar = I915_GMADR_BAR;
 
-	intel_private.gma_bus_addr = (gma_addr & PCI_BASE_ADDRESS_MEM_MASK);
-
+	intel_private.gma_bus_addr = pci_bus_address(intel_private.pcidev, bar);
 	return 0;
 }
···
 
 static int i830_setup(void)
 {
-	u32 reg_addr;
+	phys_addr_t reg_addr;
 
-	pci_read_config_dword(intel_private.pcidev, I810_MMADDR, &reg_addr);
-	reg_addr &= 0xfff80000;
+	reg_addr = pci_resource_start(intel_private.pcidev, I810_MMADR_BAR);
 
 	intel_private.registers = ioremap(reg_addr, KB(64));
 	if (!intel_private.registers)
 		return -ENOMEM;
 
-	intel_private.gtt_bus_addr = reg_addr + I810_PTE_BASE;
+	intel_private.gtt_phys_addr = reg_addr + I810_PTE_BASE;
 
 	return 0;
 }
···
 
 static int i9xx_setup(void)
 {
-	u32 reg_addr, gtt_addr;
+	phys_addr_t reg_addr;
 	int size = KB(512);
 
-	pci_read_config_dword(intel_private.pcidev, I915_MMADDR, &reg_addr);
-
-	reg_addr &= 0xfff80000;
+	reg_addr = pci_resource_start(intel_private.pcidev, I915_MMADR_BAR);
 
 	intel_private.registers = ioremap(reg_addr, size);
 	if (!intel_private.registers)
···
 
 	switch (INTEL_GTT_GEN) {
 	case 3:
-		pci_read_config_dword(intel_private.pcidev,
-				      I915_PTEADDR, &gtt_addr);
-		intel_private.gtt_bus_addr = gtt_addr;
+		intel_private.gtt_phys_addr =
+			pci_resource_start(intel_private.pcidev, I915_PTE_BAR);
 		break;
 	case 5:
-		intel_private.gtt_bus_addr = reg_addr + MB(2);
+		intel_private.gtt_phys_addr = reg_addr + MB(2);
 		break;
 	default:
-		intel_private.gtt_bus_addr = reg_addr + KB(512);
+		intel_private.gtt_phys_addr = reg_addr + KB(512);
 		break;
 	}
+5 -4
drivers/char/agp/nvidia-agp.c
···
 {
 	int i, rc, num_dirs;
 	u32 apbase, aplimit;
+	phys_addr_t apbase_phys;
 	struct aper_size_info_8 *current_size;
 	u32 temp;
 
···
 	pci_write_config_byte(agp_bridge->dev, NVIDIA_0_APSIZE,
 		current_size->size_value);
 
-	/* address to map to */
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &apbase);
-	apbase &= PCI_BASE_ADDRESS_MEM_MASK;
+	/* address to map to */
+	apbase = pci_bus_address(agp_bridge->dev, AGP_APERTURE_BAR);
 	agp_bridge->gart_bus_addr = apbase;
 	aplimit = apbase + (current_size->size * 1024 * 1024) - 1;
 	pci_write_config_dword(nvidia_private.dev_2, NVIDIA_2_APBASE, apbase);
···
 	pci_write_config_dword(agp_bridge->dev, NVIDIA_0_APSIZE, temp | 0x100);
 
 	/* map aperture */
+	apbase_phys = pci_resource_start(agp_bridge->dev, AGP_APERTURE_BAR);
 	nvidia_private.aperture =
-		(volatile u32 __iomem *) ioremap(apbase, 33 * PAGE_SIZE);
+		(volatile u32 __iomem *) ioremap(apbase_phys, 33 * PAGE_SIZE);
 
 	if (!nvidia_private.aperture)
 		return -ENOMEM;
+2 -3
drivers/char/agp/sis-agp.c
···
 
 static int sis_configure(void)
 {
-	u32 temp;
 	struct aper_size_info_8 *current_size;
 
 	current_size = A_SIZE_8(agp_bridge->current_size);
 	pci_write_config_byte(agp_bridge->dev, SIS_TLBCNTRL, 0x05);
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+						    AGP_APERTURE_BAR);
 	pci_write_config_dword(agp_bridge->dev, SIS_ATTBASE,
 			       agp_bridge->gatt_bus_addr);
 	pci_write_config_byte(agp_bridge->dev, SIS_APSIZE,
+6 -7
drivers/char/agp/via-agp.c
···
 
 static int via_configure(void)
 {
-	u32 temp;
 	struct aper_size_info_8 *current_size;
 
 	current_size = A_SIZE_8(agp_bridge->current_size);
 	/* aperture size */
 	pci_write_config_byte(agp_bridge->dev, VIA_APSIZE,
 			      current_size->size_value);
-	/* address to map too */
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	/* address to map to */
+	agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+						    AGP_APERTURE_BAR);
 
 	/* GART control register */
 	pci_write_config_dword(agp_bridge->dev, VIA_GARTCTRL, 0x0000000f);
···
 
 	current_size = A_SIZE_16(agp_bridge->current_size);
 
-	/* address to map too */
-	pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	/* address to map to */
+	agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+						    AGP_APERTURE_BAR);
 
 	/* attbase - aperture GATT base */
 	pci_write_config_dword(agp_bridge->dev, VIA_AGP3_ATTBASE,
+19 -14
drivers/eisa/eisa-bus.c
···
 static int __init eisa_register_device(struct eisa_device *edev)
 {
 	int rc = device_register(&edev->dev);
-	if (rc)
+	if (rc) {
+		put_device(&edev->dev);
 		return rc;
+	}
 
 	rc = device_create_file(&edev->dev, &dev_attr_signature);
 	if (rc)
···
 	}
 
 	if (slot) {
+		edev->res[i].name  = NULL;
 		edev->res[i].start = SLOT_ADDRESS(root, slot)
 				     + (i * 0x400);
 		edev->res[i].end   = edev->res[i].start + 0xff;
 		edev->res[i].flags = IORESOURCE_IO;
 	} else {
+		edev->res[i].name  = NULL;
 		edev->res[i].start = SLOT_ADDRESS(root, slot)
 				     + EISA_VENDOR_ID_OFFSET;
 		edev->res[i].end   = edev->res[i].start + 3;
 		edev->res[i].flags = IORESOURCE_IO | IORESOURCE_BUSY;
 	}
 
-	dev_printk(KERN_DEBUG, &edev->dev, "%pR\n", &edev->res[i]);
 	if (request_resource(root->res, &edev->res[i]))
 		goto failed;
 }
···
 		return -ENOMEM;
 	}
 
-	if (eisa_init_device(root, edev, 0)) {
-		kfree(edev);
-		if (!root->force_probe)
-			return -ENODEV;
-		goto force_probe;
-	}
-
 	if (eisa_request_resources(root, edev, 0)) {
 		dev_warn(root->dev,
 		         "EISA: Cannot allocate resource for mainboard\n");
 		kfree(edev);
 		if (!root->force_probe)
 			return -EBUSY;
+		goto force_probe;
+	}
+
+	if (eisa_init_device(root, edev, 0)) {
+		eisa_release_resources(edev);
+		kfree(edev);
+		if (!root->force_probe)
+			return -ENODEV;
 		goto force_probe;
 	}
···
 			continue;
 		}
 
-		if (eisa_init_device(root, edev, i)) {
-			kfree(edev);
-			continue;
-		}
-
 		if (eisa_request_resources(root, edev, i)) {
 			dev_warn(root->dev,
 				 "Cannot allocate resource for EISA slot %d\n",
 				 i);
+			kfree(edev);
+			continue;
+		}
+
+		if (eisa_init_device(root, edev, i)) {
+			eisa_release_resources(edev);
 			kfree(edev);
 			continue;
 		}
+3 -3
drivers/gpu/drm/i915/i915_gem_gtt.c
···
 			  size_t gtt_size)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	phys_addr_t gtt_bus_addr;
+	phys_addr_t gtt_phys_addr;
 	int ret;
 
 	/* For Modern GENs the PTEs and register space are split in the BAR */
-	gtt_bus_addr = pci_resource_start(dev->pdev, 0) +
+	gtt_phys_addr = pci_resource_start(dev->pdev, 0) +
 		(pci_resource_len(dev->pdev, 0) / 2);
 
-	dev_priv->gtt.gsm = ioremap_wc(gtt_bus_addr, gtt_size);
+	dev_priv->gtt.gsm = ioremap_wc(gtt_phys_addr, gtt_size);
 	if (!dev_priv->gtt.gsm) {
 		DRM_ERROR("Failed to map the gtt page table\n");
 		return -ENOMEM;
+1 -1
drivers/message/fusion/mptbase.c
··· 346 346 if ((pdev == NULL)) 347 347 return -1; 348 348 349 - pci_stop_and_remove_bus_device(pdev); 349 + pci_stop_and_remove_bus_device_locked(pdev); 350 350 return 0; 351 351 } 352 352
+2 -1
drivers/pci/Kconfig
··· 105 105 If unsure, say N. 106 106 107 107 config PCI_IOAPIC 108 - tristate "PCI IO-APIC hotplug support" if X86 108 + bool "PCI IO-APIC hotplug support" if X86 109 109 depends on PCI 110 110 depends on ACPI 111 + depends on X86_IO_APIC 111 112 default !X86 112 113 113 114 config PCI_LABEL
+1 -1
drivers/pci/Makefile
··· 4 4 5 5 obj-y += access.o bus.o probe.o host-bridge.o remove.o pci.o \ 6 6 pci-driver.o search.o pci-sysfs.o rom.o setup-res.o \ 7 - irq.o vpd.o setup-bus.o 7 + irq.o vpd.o setup-bus.o vc.o 8 8 obj-$(CONFIG_PROC_FS) += proc.o 9 9 obj-$(CONFIG_SYSFS) += slot.o 10 10
-24
drivers/pci/access.c
··· 381 381 } 382 382 383 383 /** 384 - * pci_vpd_truncate - Set available Vital Product Data size 385 - * @dev: pci device struct 386 - * @size: available memory in bytes 387 - * 388 - * Adjust size of available VPD area. 389 - */ 390 - int pci_vpd_truncate(struct pci_dev *dev, size_t size) 391 - { 392 - if (!dev->vpd) 393 - return -EINVAL; 394 - 395 - /* limited by the access method */ 396 - if (size > dev->vpd->len) 397 - return -EINVAL; 398 - 399 - dev->vpd->len = size; 400 - if (dev->vpd->attr) 401 - dev->vpd->attr->size = size; 402 - 403 - return 0; 404 - } 405 - EXPORT_SYMBOL(pci_vpd_truncate); 406 - 407 - /** 408 384 * pci_cfg_access_lock - Lock PCI config reads/writes 409 385 * @dev: pci device struct 410 386 *
-82
drivers/pci/ats.c
··· 235 235 EXPORT_SYMBOL_GPL(pci_disable_pri); 236 236 237 237 /** 238 - * pci_pri_enabled - Checks if PRI capability is enabled 239 - * @pdev: PCI device structure 240 - * 241 - * Returns true if PRI is enabled on the device, false otherwise 242 - */ 243 - bool pci_pri_enabled(struct pci_dev *pdev) 244 - { 245 - u16 control; 246 - int pos; 247 - 248 - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); 249 - if (!pos) 250 - return false; 251 - 252 - pci_read_config_word(pdev, pos + PCI_PRI_CTRL, &control); 253 - 254 - return (control & PCI_PRI_CTRL_ENABLE) ? true : false; 255 - } 256 - EXPORT_SYMBOL_GPL(pci_pri_enabled); 257 - 258 - /** 259 238 * pci_reset_pri - Resets device's PRI state 260 239 * @pdev: PCI device structure 261 240 * ··· 261 282 return 0; 262 283 } 263 284 EXPORT_SYMBOL_GPL(pci_reset_pri); 264 - 265 - /** 266 - * pci_pri_stopped - Checks whether the PRI capability is stopped 267 - * @pdev: PCI device structure 268 - * 269 - * Returns true if the PRI capability on the device is disabled and the 270 - * device has no outstanding PRI requests, false otherwise. The device 271 - * indicates this via the STOPPED bit in the status register of the 272 - * capability. 273 - * The device internal state can be cleared by resetting the PRI state 274 - * with pci_reset_pri(). This can force the capability into the STOPPED 275 - * state. 276 - */ 277 - bool pci_pri_stopped(struct pci_dev *pdev) 278 - { 279 - u16 control, status; 280 - int pos; 281 - 282 - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); 283 - if (!pos) 284 - return true; 285 - 286 - pci_read_config_word(pdev, pos + PCI_PRI_CTRL, &control); 287 - pci_read_config_word(pdev, pos + PCI_PRI_STATUS, &status); 288 - 289 - if (control & PCI_PRI_CTRL_ENABLE) 290 - return false; 291 - 292 - return (status & PCI_PRI_STATUS_STOPPED) ? 
true : false; 293 - } 294 - EXPORT_SYMBOL_GPL(pci_pri_stopped); 295 - 296 - /** 297 - * pci_pri_status - Request PRI status of a device 298 - * @pdev: PCI device structure 299 - * 300 - * Returns negative value on failure, status on success. The status can 301 - * be checked against status-bits. Supported bits are currently: 302 - * PCI_PRI_STATUS_RF: Response failure 303 - * PCI_PRI_STATUS_UPRGI: Unexpected Page Request Group Index 304 - * PCI_PRI_STATUS_STOPPED: PRI has stopped 305 - */ 306 - int pci_pri_status(struct pci_dev *pdev) 307 - { 308 - u16 status, control; 309 - int pos; 310 - 311 - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); 312 - if (!pos) 313 - return -EINVAL; 314 - 315 - pci_read_config_word(pdev, pos + PCI_PRI_CTRL, &control); 316 - pci_read_config_word(pdev, pos + PCI_PRI_STATUS, &status); 317 - 318 - /* Stopped bit is undefined when enable == 1, so clear it */ 319 - if (control & PCI_PRI_CTRL_ENABLE) 320 - status &= ~PCI_PRI_STATUS_STOPPED; 321 - 322 - return status; 323 - } 324 - EXPORT_SYMBOL_GPL(pci_pri_status); 325 285 #endif /* CONFIG_PCI_PRI */ 326 286 327 287 #ifdef CONFIG_PCI_PASID
+103 -30
drivers/pci/bus.c
··· 98 98 } 99 99 } 100 100 101 - /** 102 - * pci_bus_alloc_resource - allocate a resource from a parent bus 103 - * @bus: PCI bus 104 - * @res: resource to allocate 105 - * @size: size of resource to allocate 106 - * @align: alignment of resource to allocate 107 - * @min: minimum /proc/iomem address to allocate 108 - * @type_mask: IORESOURCE_* type flags 109 - * @alignf: resource alignment function 110 - * @alignf_data: data argument for resource alignment function 111 - * 112 - * Given the PCI bus a device resides on, the size, minimum address, 113 - * alignment and type, try to find an acceptable resource allocation 114 - * for a specific device resource. 101 + static struct pci_bus_region pci_32_bit = {0, 0xffffffffULL}; 102 + #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT 103 + static struct pci_bus_region pci_64_bit = {0, 104 + (dma_addr_t) 0xffffffffffffffffULL}; 105 + static struct pci_bus_region pci_high = {(dma_addr_t) 0x100000000ULL, 106 + (dma_addr_t) 0xffffffffffffffffULL}; 107 + #endif 108 + 109 + /* 110 + * @res contains CPU addresses. Clip it so the corresponding bus addresses 111 + * on @bus are entirely within @region. This is used to control the bus 112 + * addresses of resources we allocate, e.g., we may need a resource that 113 + * can be mapped by a 32-bit BAR. 
115 114 */ 116 - int 117 - pci_bus_alloc_resource(struct pci_bus *bus, struct resource *res, 115 + static void pci_clip_resource_to_region(struct pci_bus *bus, 116 + struct resource *res, 117 + struct pci_bus_region *region) 118 + { 119 + struct pci_bus_region r; 120 + 121 + pcibios_resource_to_bus(bus, &r, res); 122 + if (r.start < region->start) 123 + r.start = region->start; 124 + if (r.end > region->end) 125 + r.end = region->end; 126 + 127 + if (r.end < r.start) 128 + res->end = res->start - 1; 129 + else 130 + pcibios_bus_to_resource(bus, res, &r); 131 + } 132 + 133 + static int pci_bus_alloc_from_region(struct pci_bus *bus, struct resource *res, 118 134 resource_size_t size, resource_size_t align, 119 135 resource_size_t min, unsigned int type_mask, 120 136 resource_size_t (*alignf)(void *, 121 137 const struct resource *, 122 138 resource_size_t, 123 139 resource_size_t), 124 - void *alignf_data) 140 + void *alignf_data, 141 + struct pci_bus_region *region) 125 142 { 126 - int i, ret = -ENOMEM; 127 - struct resource *r; 128 - resource_size_t max = -1; 143 + int i, ret; 144 + struct resource *r, avail; 145 + resource_size_t max; 129 146 130 147 type_mask |= IORESOURCE_IO | IORESOURCE_MEM; 131 - 132 - /* don't allocate too high if the pref mem doesn't support 64bit*/ 133 - if (!(res->flags & IORESOURCE_MEM_64)) 134 - max = PCIBIOS_MAX_MEM_32; 135 148 136 149 pci_bus_for_each_resource(bus, r, i) { 137 150 if (!r) ··· 160 147 !(res->flags & IORESOURCE_PREFETCH)) 161 148 continue; 162 149 150 + avail = *r; 151 + pci_clip_resource_to_region(bus, &avail, region); 152 + if (!resource_size(&avail)) 153 + continue; 154 + 155 + /* 156 + * "min" is typically PCIBIOS_MIN_IO or PCIBIOS_MIN_MEM to 157 + * protect badly documented motherboard resources, but if 158 + * this is an already-configured bridge window, its start 159 + * overrides "min". 160 + */ 161 + if (avail.start) 162 + min = avail.start; 163 + 164 + max = avail.end; 165 + 163 166 /* Ok, try it out.. 
*/ 164 - ret = allocate_resource(r, res, size, 165 - r->start ? : min, 166 - max, align, 167 - alignf, alignf_data); 167 + ret = allocate_resource(r, res, size, min, max, 168 + align, alignf, alignf_data); 168 169 if (ret == 0) 169 - break; 170 + return 0; 170 171 } 171 - return ret; 172 + return -ENOMEM; 173 + } 174 + 175 + /** 176 + * pci_bus_alloc_resource - allocate a resource from a parent bus 177 + * @bus: PCI bus 178 + * @res: resource to allocate 179 + * @size: size of resource to allocate 180 + * @align: alignment of resource to allocate 181 + * @min: minimum /proc/iomem address to allocate 182 + * @type_mask: IORESOURCE_* type flags 183 + * @alignf: resource alignment function 184 + * @alignf_data: data argument for resource alignment function 185 + * 186 + * Given the PCI bus a device resides on, the size, minimum address, 187 + * alignment and type, try to find an acceptable resource allocation 188 + * for a specific device resource. 189 + */ 190 + int pci_bus_alloc_resource(struct pci_bus *bus, struct resource *res, 191 + resource_size_t size, resource_size_t align, 192 + resource_size_t min, unsigned int type_mask, 193 + resource_size_t (*alignf)(void *, 194 + const struct resource *, 195 + resource_size_t, 196 + resource_size_t), 197 + void *alignf_data) 198 + { 199 + #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT 200 + int rc; 201 + 202 + if (res->flags & IORESOURCE_MEM_64) { 203 + rc = pci_bus_alloc_from_region(bus, res, size, align, min, 204 + type_mask, alignf, alignf_data, 205 + &pci_high); 206 + if (rc == 0) 207 + return 0; 208 + 209 + return pci_bus_alloc_from_region(bus, res, size, align, min, 210 + type_mask, alignf, alignf_data, 211 + &pci_64_bit); 212 + } 213 + #endif 214 + 215 + return pci_bus_alloc_from_region(bus, res, size, align, min, 216 + type_mask, alignf, alignf_data, 217 + &pci_32_bit); 172 218 } 173 219 174 220 void __weak pcibios_resource_survey_bus(struct pci_bus *bus) { } ··· 248 176 */ 249 177 pci_fixup_device(pci_fixup_final, dev); 
250 178 pci_create_sysfs_dev_files(dev); 179 + pci_proc_attach_device(dev); 251 180 252 181 dev->match_driver = true; 253 182 retval = device_attach(&dev->dev);
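The core of the new allocation strategy is pci_clip_resource_to_region(): intersect a candidate bridge window with a bus-address region (32-bit, full 64-bit, or above-4G) before calling allocate_resource(). Stripped of the CPU-to-bus translation (assumed to be the identity below, i.e. the pcibios_resource_to_bus()/pcibios_bus_to_resource() steps drop out), it is plain interval arithmetic. A userspace model, not the kernel function:

```c
#include <stdint.h>
#include <stdbool.h>

struct region { uint64_t start, end; };	/* inclusive bounds */

/* Shrink *res so it lies entirely within *limit; an impossible fit
 * becomes the empty resource, marked by end < start exactly as the
 * kernel code does. Assumes no CPU<->bus offset. */
static void clip_to_region(struct region *res, const struct region *limit)
{
	struct region r = *res;

	if (r.start < limit->start)
		r.start = limit->start;
	if (r.end > limit->end)
		r.end = limit->end;

	if (r.end < r.start)
		res->end = res->start - 1;	/* empty */
	else
		*res = r;
}

static bool region_empty(const struct region *r)
{
	return r->end < r->start;
}
```

With a 32-bit limit region, a window straddling 4G is trimmed to end at 0xffffffff, and a window entirely above 4G clips to empty, which is what lets pci_bus_alloc_resource() try &pci_high first for 64-bit BARs and fall back cleanly.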
+8 -11
drivers/pci/host-bridge.c
··· 9 9 10 10 #include "pci.h" 11 11 12 - static struct pci_bus *find_pci_root_bus(struct pci_dev *dev) 12 + static struct pci_bus *find_pci_root_bus(struct pci_bus *bus) 13 13 { 14 - struct pci_bus *bus; 15 - 16 - bus = dev->bus; 17 14 while (bus->parent) 18 15 bus = bus->parent; 19 16 20 17 return bus; 21 18 } 22 19 23 - static struct pci_host_bridge *find_pci_host_bridge(struct pci_dev *dev) 20 + static struct pci_host_bridge *find_pci_host_bridge(struct pci_bus *bus) 24 21 { 25 - struct pci_bus *bus = find_pci_root_bus(dev); 22 + struct pci_bus *root_bus = find_pci_root_bus(bus); 26 23 27 - return to_pci_host_bridge(bus->bridge); 24 + return to_pci_host_bridge(root_bus->bridge); 28 25 } 29 26 30 27 void pci_set_host_bridge_release(struct pci_host_bridge *bridge, ··· 37 40 return res1->start <= res2->start && res1->end >= res2->end; 38 41 } 39 42 40 - void pcibios_resource_to_bus(struct pci_dev *dev, struct pci_bus_region *region, 43 + void pcibios_resource_to_bus(struct pci_bus *bus, struct pci_bus_region *region, 41 44 struct resource *res) 42 45 { 43 - struct pci_host_bridge *bridge = find_pci_host_bridge(dev); 46 + struct pci_host_bridge *bridge = find_pci_host_bridge(bus); 44 47 struct pci_host_bridge_window *window; 45 48 resource_size_t offset = 0; 46 49 ··· 65 68 return region1->start <= region2->start && region1->end >= region2->end; 66 69 } 67 70 68 - void pcibios_bus_to_resource(struct pci_dev *dev, struct resource *res, 71 + void pcibios_bus_to_resource(struct pci_bus *bus, struct resource *res, 69 72 struct pci_bus_region *region) 70 73 { 71 - struct pci_host_bridge *bridge = find_pci_host_bridge(dev); 74 + struct pci_host_bridge *bridge = find_pci_host_bridge(bus); 72 75 struct pci_host_bridge_window *window; 73 76 resource_size_t offset = 0; 74 77
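The pcibios_resource_to_bus()/pcibios_bus_to_resource() pair now takes a pci_bus rather than a pci_dev because the translation depends only on the host bridge above that bus: each bridge window shifts addresses by a constant offset. A minimal single-window model with made-up numbers; struct bridge_window and both helpers are illustrative, not kernel types:

```c
#include <stdint.h>

/* One host bridge window: CPU addresses [cpu_start, cpu_end] map to
 * bus addresses shifted down by a fixed offset (offset = cpu - bus). */
struct bridge_window {
	uint64_t cpu_start, cpu_end;
	uint64_t offset;
};

static uint64_t cpu_to_bus(const struct bridge_window *w, uint64_t cpu_addr)
{
	return cpu_addr - w->offset;	/* pcibios_resource_to_bus() direction */
}

static uint64_t bus_to_cpu(const struct bridge_window *w, uint64_t bus_addr)
{
	return bus_addr + w->offset;	/* pcibios_bus_to_resource() direction */
}
```

The real functions walk the bridge's window list to find the one containing the address; the two directions must round-trip, which is the invariant the conversion relies on.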
+3 -2
drivers/pci/host/pci-exynos.c
··· 468 468 int ret; 469 469 470 470 exynos_pcie_sideband_dbi_r_mode(pp, true); 471 - ret = cfg_read(pp->dbi_base + (where & ~0x3), where, size, val); 471 + ret = dw_pcie_cfg_read(pp->dbi_base + (where & ~0x3), where, size, val); 472 472 exynos_pcie_sideband_dbi_r_mode(pp, false); 473 473 return ret; 474 474 } ··· 479 479 int ret; 480 480 481 481 exynos_pcie_sideband_dbi_w_mode(pp, true); 482 - ret = cfg_write(pp->dbi_base + (where & ~0x3), where, size, val); 482 + ret = dw_pcie_cfg_write(pp->dbi_base + (where & ~0x3), 483 + where, size, val); 483 484 exynos_pcie_sideband_dbi_w_mode(pp, false); 484 485 return ret; 485 486 }
+146 -81
drivers/pci/host/pci-imx6.c
··· 44 44 void __iomem *mem_base; 45 45 }; 46 46 47 + /* PCIe Root Complex registers (memory-mapped) */ 48 + #define PCIE_RC_LCR 0x7c 49 + #define PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN1 0x1 50 + #define PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN2 0x2 51 + #define PCIE_RC_LCR_MAX_LINK_SPEEDS_MASK 0xf 52 + 47 53 /* PCIe Port Logic registers (memory-mapped) */ 48 54 #define PL_OFFSET 0x700 49 55 #define PCIE_PHY_DEBUG_R0 (PL_OFFSET + 0x28) 50 56 #define PCIE_PHY_DEBUG_R1 (PL_OFFSET + 0x2c) 57 + #define PCIE_PHY_DEBUG_R1_XMLH_LINK_IN_TRAINING (1 << 29) 58 + #define PCIE_PHY_DEBUG_R1_XMLH_LINK_UP (1 << 4) 51 59 52 60 #define PCIE_PHY_CTRL (PL_OFFSET + 0x114) 53 61 #define PCIE_PHY_CTRL_DATA_LOC 0 ··· 66 58 67 59 #define PCIE_PHY_STAT (PL_OFFSET + 0x110) 68 60 #define PCIE_PHY_STAT_ACK_LOC 16 61 + 62 + #define PCIE_LINK_WIDTH_SPEED_CONTROL 0x80C 63 + #define PORT_LOGIC_SPEED_CHANGE (0x1 << 17) 69 64 70 65 /* PHY registers (not memory-mapped) */ 71 66 #define PCIE_PHY_RX_ASIC_OUT 0x100D ··· 220 209 221 210 regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1, 222 211 IMX6Q_GPR1_PCIE_TEST_PD, 1 << 18); 223 - regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, 224 - IMX6Q_GPR12_PCIE_CTL_2, 1 << 10); 225 212 regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1, 226 213 IMX6Q_GPR1_PCIE_REF_CLK_EN, 0 << 16); 227 - 228 - gpio_set_value(imx6_pcie->reset_gpio, 0); 229 - msleep(100); 230 - gpio_set_value(imx6_pcie->reset_gpio, 1); 231 214 232 215 return 0; 233 216 } ··· 266 261 /* allow the clocks to stabilize */ 267 262 usleep_range(200, 500); 268 263 264 + /* Some boards don't have PCIe reset GPIO. 
*/ 265 + if (gpio_is_valid(imx6_pcie->reset_gpio)) { 266 + gpio_set_value(imx6_pcie->reset_gpio, 0); 267 + msleep(100); 268 + gpio_set_value(imx6_pcie->reset_gpio, 1); 269 + } 269 270 return 0; 270 271 271 272 err_pcie_axi: ··· 310 299 IMX6Q_GPR8_TX_SWING_LOW, 127 << 25); 311 300 } 312 301 302 + static int imx6_pcie_wait_for_link(struct pcie_port *pp) 303 + { 304 + int count = 200; 305 + 306 + while (!dw_pcie_link_up(pp)) { 307 + usleep_range(100, 1000); 308 + if (--count) 309 + continue; 310 + 311 + dev_err(pp->dev, "phy link never came up\n"); 312 + dev_dbg(pp->dev, "DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n", 313 + readl(pp->dbi_base + PCIE_PHY_DEBUG_R0), 314 + readl(pp->dbi_base + PCIE_PHY_DEBUG_R1)); 315 + return -EINVAL; 316 + } 317 + 318 + return 0; 319 + } 320 + 321 + static int imx6_pcie_start_link(struct pcie_port *pp) 322 + { 323 + struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp); 324 + uint32_t tmp; 325 + int ret, count; 326 + 327 + /* 328 + * Force Gen1 operation when starting the link. In case the link is 329 + * started in Gen2 mode, there is a possibility the devices on the 330 + * bus will not be detected at all. This happens with PCIe switches. 331 + */ 332 + tmp = readl(pp->dbi_base + PCIE_RC_LCR); 333 + tmp &= ~PCIE_RC_LCR_MAX_LINK_SPEEDS_MASK; 334 + tmp |= PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN1; 335 + writel(tmp, pp->dbi_base + PCIE_RC_LCR); 336 + 337 + /* Start LTSSM. */ 338 + regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, 339 + IMX6Q_GPR12_PCIE_CTL_2, 1 << 10); 340 + 341 + ret = imx6_pcie_wait_for_link(pp); 342 + if (ret) 343 + return ret; 344 + 345 + /* Allow Gen2 mode after the link is up. */ 346 + tmp = readl(pp->dbi_base + PCIE_RC_LCR); 347 + tmp &= ~PCIE_RC_LCR_MAX_LINK_SPEEDS_MASK; 348 + tmp |= PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN2; 349 + writel(tmp, pp->dbi_base + PCIE_RC_LCR); 350 + 351 + /* 352 + * Start Directed Speed Change so the best possible speed both link 353 + * partners support can be negotiated. 
354 + */ 355 + tmp = readl(pp->dbi_base + PCIE_LINK_WIDTH_SPEED_CONTROL); 356 + tmp |= PORT_LOGIC_SPEED_CHANGE; 357 + writel(tmp, pp->dbi_base + PCIE_LINK_WIDTH_SPEED_CONTROL); 358 + 359 + count = 200; 360 + while (count--) { 361 + tmp = readl(pp->dbi_base + PCIE_LINK_WIDTH_SPEED_CONTROL); 362 + /* Test if the speed change finished. */ 363 + if (!(tmp & PORT_LOGIC_SPEED_CHANGE)) 364 + break; 365 + usleep_range(100, 1000); 366 + } 367 + 368 + /* Make sure link training is finished as well! */ 369 + if (count) 370 + ret = imx6_pcie_wait_for_link(pp); 371 + else 372 + ret = -EINVAL; 373 + 374 + if (ret) { 375 + dev_err(pp->dev, "Failed to bring link up!\n"); 376 + } else { 377 + tmp = readl(pp->dbi_base + 0x80); 378 + dev_dbg(pp->dev, "Link up, Gen=%i\n", (tmp >> 16) & 0xf); 379 + } 380 + 381 + return ret; 382 + } 383 + 313 384 static void imx6_pcie_host_init(struct pcie_port *pp) 314 385 { 315 - int count = 0; 316 - struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp); 317 - 318 386 imx6_pcie_assert_core_reset(pp); 319 387 320 388 imx6_pcie_init_phy(pp); ··· 402 312 403 313 dw_pcie_setup_rc(pp); 404 314 405 - regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, 406 - IMX6Q_GPR12_PCIE_CTL_2, 1 << 10); 315 + imx6_pcie_start_link(pp); 316 + } 407 317 408 - while (!dw_pcie_link_up(pp)) { 409 - usleep_range(100, 1000); 410 - count++; 411 - if (count >= 200) { 412 - dev_err(pp->dev, "phy link never came up\n"); 413 - dev_dbg(pp->dev, 414 - "DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n", 415 - readl(pp->dbi_base + PCIE_PHY_DEBUG_R0), 416 - readl(pp->dbi_base + PCIE_PHY_DEBUG_R1)); 417 - break; 418 - } 419 - } 318 + static void imx6_pcie_reset_phy(struct pcie_port *pp) 319 + { 320 + uint32_t temp; 420 321 421 - return; 322 + pcie_phy_read(pp->dbi_base, PHY_RX_OVRD_IN_LO, &temp); 323 + temp |= (PHY_RX_OVRD_IN_LO_RX_DATA_EN | 324 + PHY_RX_OVRD_IN_LO_RX_PLL_EN); 325 + pcie_phy_write(pp->dbi_base, PHY_RX_OVRD_IN_LO, temp); 326 + 327 + usleep_range(2000, 3000); 328 + 329 + 
pcie_phy_read(pp->dbi_base, PHY_RX_OVRD_IN_LO, &temp); 330 + temp &= ~(PHY_RX_OVRD_IN_LO_RX_DATA_EN | 331 + PHY_RX_OVRD_IN_LO_RX_PLL_EN); 332 + pcie_phy_write(pp->dbi_base, PHY_RX_OVRD_IN_LO, temp); 422 333 } 423 334 424 335 static int imx6_pcie_link_up(struct pcie_port *pp) 425 336 { 426 - u32 rc, ltssm, rx_valid, temp; 337 + u32 rc, ltssm, rx_valid; 427 338 428 - /* link is debug bit 36, debug register 1 starts at bit 32 */ 429 - rc = readl(pp->dbi_base + PCIE_PHY_DEBUG_R1) & (0x1 << (36 - 32)); 430 - if (rc) 431 - return -EAGAIN; 339 + /* 340 + * Test if the PHY reports that the link is up and also that 341 + * the link training finished. It might happen that the PHY 342 + * reports the link is already up, but the link training bit 343 + * is still set, so make sure to check the training is done 344 + * as well here. 345 + */ 346 + rc = readl(pp->dbi_base + PCIE_PHY_DEBUG_R1); 347 + if ((rc & PCIE_PHY_DEBUG_R1_XMLH_LINK_UP) && 348 + !(rc & PCIE_PHY_DEBUG_R1_XMLH_LINK_IN_TRAINING)) 349 + return 1; 432 350 433 351 /* 434 352 * From L0, initiate MAC entry to gen2 if EP/RC supports gen2. 
··· 456 358 457 359 dev_err(pp->dev, "transition to gen2 is stuck, reset PHY!\n"); 458 360 459 - pcie_phy_read(pp->dbi_base, 460 - PHY_RX_OVRD_IN_LO, &temp); 461 - temp |= (PHY_RX_OVRD_IN_LO_RX_DATA_EN 462 - | PHY_RX_OVRD_IN_LO_RX_PLL_EN); 463 - pcie_phy_write(pp->dbi_base, 464 - PHY_RX_OVRD_IN_LO, temp); 465 - 466 - usleep_range(2000, 3000); 467 - 468 - pcie_phy_read(pp->dbi_base, 469 - PHY_RX_OVRD_IN_LO, &temp); 470 - temp &= ~(PHY_RX_OVRD_IN_LO_RX_DATA_EN 471 - | PHY_RX_OVRD_IN_LO_RX_PLL_EN); 472 - pcie_phy_write(pp->dbi_base, 473 - PHY_RX_OVRD_IN_LO, temp); 361 + imx6_pcie_reset_phy(pp); 474 362 475 363 return 0; 476 364 } ··· 510 426 "imprecise external abort"); 511 427 512 428 dbi_base = platform_get_resource(pdev, IORESOURCE_MEM, 0); 513 - if (!dbi_base) { 514 - dev_err(&pdev->dev, "dbi_base memory resource not found\n"); 515 - return -ENODEV; 516 - } 517 - 518 429 pp->dbi_base = devm_ioremap_resource(&pdev->dev, dbi_base); 519 - if (IS_ERR(pp->dbi_base)) { 520 - ret = PTR_ERR(pp->dbi_base); 521 - goto err; 522 - } 430 + if (IS_ERR(pp->dbi_base)) 431 + return PTR_ERR(pp->dbi_base); 523 432 524 433 /* Fetch GPIOs */ 525 434 imx6_pcie->reset_gpio = of_get_named_gpio(np, "reset-gpio", 0); 526 - if (!gpio_is_valid(imx6_pcie->reset_gpio)) { 527 - dev_err(&pdev->dev, "no reset-gpio defined\n"); 528 - ret = -ENODEV; 529 - } 530 - ret = devm_gpio_request_one(&pdev->dev, 531 - imx6_pcie->reset_gpio, 532 - GPIOF_OUT_INIT_LOW, 533 - "PCIe reset"); 534 - if (ret) { 535 - dev_err(&pdev->dev, "unable to get reset gpio\n"); 536 - goto err; 435 + if (gpio_is_valid(imx6_pcie->reset_gpio)) { 436 + ret = devm_gpio_request_one(&pdev->dev, imx6_pcie->reset_gpio, 437 + GPIOF_OUT_INIT_LOW, "PCIe reset"); 438 + if (ret) { 439 + dev_err(&pdev->dev, "unable to get reset gpio\n"); 440 + return ret; 441 + } 537 442 } 538 443 539 444 imx6_pcie->power_on_gpio = of_get_named_gpio(np, "power-on-gpio", 0); ··· 533 460 "PCIe power enable"); 534 461 if (ret) { 535 462 dev_err(&pdev->dev, 
"unable to get power-on gpio\n"); 536 - goto err; 463 + return ret; 537 464 } 538 465 } 539 466 ··· 545 472 "PCIe wake up"); 546 473 if (ret) { 547 474 dev_err(&pdev->dev, "unable to get wake-up gpio\n"); 548 - goto err; 475 + return ret; 549 476 } 550 477 } 551 478 ··· 557 484 "PCIe disable endpoint"); 558 485 if (ret) { 559 486 dev_err(&pdev->dev, "unable to get disable-ep gpio\n"); 560 - goto err; 487 + return ret; 561 488 } 562 489 } 563 490 ··· 566 493 if (IS_ERR(imx6_pcie->lvds_gate)) { 567 494 dev_err(&pdev->dev, 568 495 "lvds_gate clock select missing or invalid\n"); 569 - ret = PTR_ERR(imx6_pcie->lvds_gate); 570 - goto err; 496 + return PTR_ERR(imx6_pcie->lvds_gate); 571 497 } 572 498 573 499 imx6_pcie->sata_ref_100m = devm_clk_get(&pdev->dev, "sata_ref_100m"); 574 500 if (IS_ERR(imx6_pcie->sata_ref_100m)) { 575 501 dev_err(&pdev->dev, 576 502 "sata_ref_100m clock source missing or invalid\n"); 577 - ret = PTR_ERR(imx6_pcie->sata_ref_100m); 578 - goto err; 503 + return PTR_ERR(imx6_pcie->sata_ref_100m); 579 504 } 580 505 581 506 imx6_pcie->pcie_ref_125m = devm_clk_get(&pdev->dev, "pcie_ref_125m"); 582 507 if (IS_ERR(imx6_pcie->pcie_ref_125m)) { 583 508 dev_err(&pdev->dev, 584 509 "pcie_ref_125m clock source missing or invalid\n"); 585 - ret = PTR_ERR(imx6_pcie->pcie_ref_125m); 586 - goto err; 510 + return PTR_ERR(imx6_pcie->pcie_ref_125m); 587 511 } 588 512 589 513 imx6_pcie->pcie_axi = devm_clk_get(&pdev->dev, "pcie_axi"); 590 514 if (IS_ERR(imx6_pcie->pcie_axi)) { 591 515 dev_err(&pdev->dev, 592 516 "pcie_axi clock source missing or invalid\n"); 593 - ret = PTR_ERR(imx6_pcie->pcie_axi); 594 - goto err; 517 + return PTR_ERR(imx6_pcie->pcie_axi); 595 518 } 596 519 597 520 /* Grab GPR config register range */ ··· 595 526 syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr"); 596 527 if (IS_ERR(imx6_pcie->iomuxc_gpr)) { 597 528 dev_err(&pdev->dev, "unable to find iomuxc registers\n"); 598 - ret = PTR_ERR(imx6_pcie->iomuxc_gpr); 599 - goto err; 529 + 
return PTR_ERR(imx6_pcie->iomuxc_gpr); 600 530 } 601 531 602 532 ret = imx6_add_pcie_port(pp, pdev); 603 533 if (ret < 0) 604 - goto err; 534 + return ret; 605 535 606 536 platform_set_drvdata(pdev, imx6_pcie); 607 537 return 0; 608 - 609 - err: 610 - return ret; 611 538 } 612 539 613 540 static const struct of_device_id imx6_pcie_of_match[] = {
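The new imx6_pcie_wait_for_link() factors the link polling loop out of host init so the Gen1-then-Gen2 bring-up sequence can reuse it. Its control flow, stripped of the hardware access, is the usual bounded-retry pattern; cond(), cookie, and counter_ready() below are hypothetical stand-ins for dw_pcie_link_up() and its port:

```c
#include <stdbool.h>

/* Retry a condition up to max_tries times; 0 on success, -1 (standing
 * in for -EINVAL) on timeout. Mirrors the kernel loop's shape; the
 * driver additionally sleeps 100-1000us per iteration via
 * usleep_range() and logs the PHY debug registers on failure. */
static int wait_for_cond(bool (*cond)(void *), void *cookie, int max_tries)
{
	int count = max_tries;

	while (!cond(cookie)) {
		if (--count)
			continue;
		return -1;
	}
	return 0;
}

/* Demo condition: becomes true after *cookie calls. */
static bool counter_ready(void *cookie)
{
	int *remaining = cookie;

	return --(*remaining) <= 0;
}
```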
+62 -37
drivers/pci/host/pci-mvebu.c
··· 150 150 return readl(port->base + reg); 151 151 } 152 152 153 + static inline bool mvebu_has_ioport(struct mvebu_pcie_port *port) 154 + { 155 + return port->io_target != -1 && port->io_attr != -1; 156 + } 157 + 153 158 static bool mvebu_pcie_link_up(struct mvebu_pcie_port *port) 154 159 { 155 160 return !(mvebu_readl(port, PCIE_STAT_OFF) & PCIE_STAT_LINK_DOWN); ··· 305 300 306 301 /* Are the new iobase/iolimit values invalid? */ 307 302 if (port->bridge.iolimit < port->bridge.iobase || 308 - port->bridge.iolimitupper < port->bridge.iobaseupper) { 303 + port->bridge.iolimitupper < port->bridge.iobaseupper || 304 + !(port->bridge.command & PCI_COMMAND_IO)) { 309 305 310 306 /* If a window was configured, remove it */ 311 307 if (port->iowin_base) { ··· 316 310 port->iowin_size = 0; 317 311 } 318 312 313 + return; 314 + } 315 + 316 + if (!mvebu_has_ioport(port)) { 317 + dev_WARN(&port->pcie->pdev->dev, 318 + "Attempt to set IO when IO is disabled\n"); 319 319 return; 320 320 } 321 321 ··· 342 330 mvebu_mbus_add_window_remap_by_id(port->io_target, port->io_attr, 343 331 port->iowin_base, port->iowin_size, 344 332 iobase); 345 - 346 - pci_ioremap_io(iobase, port->iowin_base); 347 333 } 348 334 349 335 static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port) 350 336 { 351 337 /* Are the new membase/memlimit values invalid? 
*/ 352 - if (port->bridge.memlimit < port->bridge.membase) { 338 + if (port->bridge.memlimit < port->bridge.membase || 339 + !(port->bridge.command & PCI_COMMAND_MEMORY)) { 353 340 354 341 /* If a window was configured, remove it */ 355 342 if (port->memwin_base) { ··· 437 426 break; 438 427 439 428 case PCI_IO_BASE: 440 - *value = (bridge->secondary_status << 16 | 441 - bridge->iolimit << 8 | 442 - bridge->iobase); 429 + if (!mvebu_has_ioport(port)) 430 + *value = bridge->secondary_status << 16; 431 + else 432 + *value = (bridge->secondary_status << 16 | 433 + bridge->iolimit << 8 | 434 + bridge->iobase); 443 435 break; 444 436 445 437 case PCI_MEMORY_BASE: ··· 504 490 505 491 switch (where & ~3) { 506 492 case PCI_COMMAND: 493 + { 494 + u32 old = bridge->command; 495 + 496 + if (!mvebu_has_ioport(port)) 497 + value &= ~PCI_COMMAND_IO; 498 + 507 499 bridge->command = value & 0xffff; 500 + if ((old ^ bridge->command) & PCI_COMMAND_IO) 501 + mvebu_pcie_handle_iobase_change(port); 502 + if ((old ^ bridge->command) & PCI_COMMAND_MEMORY) 503 + mvebu_pcie_handle_membase_change(port); 508 504 break; 505 + } 509 506 510 507 case PCI_BASE_ADDRESS_0 ... 
PCI_BASE_ADDRESS_1: 511 508 bridge->bar[((where & ~3) - PCI_BASE_ADDRESS_0) / 4] = value; ··· 530 505 */ 531 506 bridge->iobase = (value & 0xff) | PCI_IO_RANGE_TYPE_32; 532 507 bridge->iolimit = ((value >> 8) & 0xff) | PCI_IO_RANGE_TYPE_32; 533 - bridge->secondary_status = value >> 16; 534 508 mvebu_pcie_handle_iobase_change(port); 535 509 break; 536 510 ··· 680 656 struct mvebu_pcie *pcie = sys_to_pcie(sys); 681 657 int i; 682 658 683 - pci_add_resource_offset(&sys->resources, &pcie->realio, sys->io_offset); 659 + if (resource_size(&pcie->realio) != 0) 660 + pci_add_resource_offset(&sys->resources, &pcie->realio, 661 + sys->io_offset); 684 662 pci_add_resource_offset(&sys->resources, &pcie->mem, sys->mem_offset); 685 663 pci_add_resource(&sys->resources, &pcie->busn); 686 664 ··· 733 707 * aligned on their size 734 708 */ 735 709 if (res->flags & IORESOURCE_IO) 736 - return round_up(start, max((resource_size_t)SZ_64K, size)); 710 + return round_up(start, max_t(resource_size_t, SZ_64K, size)); 737 711 else if (res->flags & IORESOURCE_MEM) 738 - return round_up(start, max((resource_size_t)SZ_1M, size)); 712 + return round_up(start, max_t(resource_size_t, SZ_1M, size)); 739 713 else 740 714 return start; 741 715 } ··· 783 757 #define DT_CPUADDR_TO_ATTR(cpuaddr) (((cpuaddr) >> 48) & 0xFF) 784 758 785 759 static int mvebu_get_tgt_attr(struct device_node *np, int devfn, 786 - unsigned long type, int *tgt, int *attr) 760 + unsigned long type, 761 + unsigned int *tgt, 762 + unsigned int *attr) 787 763 { 788 764 const int na = 3, ns = 2; 789 765 const __be32 *range; 790 766 int rlen, nranges, rangesz, pna, i; 767 + 768 + *tgt = -1; 769 + *attr = -1; 791 770 792 771 range = of_get_property(np, "ranges", &rlen); 793 772 if (!range) ··· 863 832 } 864 833 865 834 mvebu_mbus_get_pcie_io_aperture(&pcie->io); 866 - if (resource_size(&pcie->io) == 0) { 867 - dev_err(&pdev->dev, "invalid I/O aperture size\n"); 868 - return -EINVAL; 869 - } 870 835 871 - pcie->realio.flags = 
pcie->io.flags; 872 - pcie->realio.start = PCIBIOS_MIN_IO; 873 - pcie->realio.end = min_t(resource_size_t, 874 - IO_SPACE_LIMIT, 875 - resource_size(&pcie->io)); 836 + if (resource_size(&pcie->io) != 0) { 837 + pcie->realio.flags = pcie->io.flags; 838 + pcie->realio.start = PCIBIOS_MIN_IO; 839 + pcie->realio.end = min_t(resource_size_t, 840 + IO_SPACE_LIMIT, 841 + resource_size(&pcie->io)); 842 + } else 843 + pcie->realio = pcie->io; 876 844 877 845 /* Get the bus range */ 878 846 ret = of_pci_parse_bus_range(np, &pcie->busn); ··· 930 900 continue; 931 901 } 932 902 933 - ret = mvebu_get_tgt_attr(np, port->devfn, IORESOURCE_IO, 934 - &port->io_target, &port->io_attr); 935 - if (ret < 0) { 936 - dev_err(&pdev->dev, "PCIe%d.%d: cannot get tgt/attr for io window\n", 937 - port->port, port->lane); 938 - continue; 903 + if (resource_size(&pcie->io) != 0) 904 + mvebu_get_tgt_attr(np, port->devfn, IORESOURCE_IO, 905 + &port->io_target, &port->io_attr); 906 + else { 907 + port->io_target = -1; 908 + port->io_attr = -1; 939 909 } 940 910 941 911 port->reset_gpio = of_get_named_gpio_flags(child, ··· 984 954 985 955 mvebu_pcie_set_local_dev_nr(port, 1); 986 956 987 - port->clk = of_clk_get_by_name(child, NULL); 988 - if (IS_ERR(port->clk)) { 989 - dev_err(&pdev->dev, "PCIe%d.%d: cannot get clock\n", 990 - port->port, port->lane); 991 - iounmap(port->base); 992 - continue; 993 - } 994 - 995 957 port->dn = child; 996 958 spin_lock_init(&port->conf_lock); 997 959 mvebu_sw_pci_bridge_init(port); ··· 991 969 } 992 970 993 971 pcie->nports = i; 972 + 973 + for (i = 0; i < (IO_SPACE_LIMIT - SZ_64K); i += SZ_64K) 974 + pci_ioremap_io(i, pcie->io.start + i); 975 + 994 976 mvebu_pcie_msi_enable(pcie); 995 977 mvebu_pcie_enable(pcie); 996 978 ··· 1014 988 .driver = { 1015 989 .owner = THIS_MODULE, 1016 990 .name = "mvebu-pcie", 1017 - .of_match_table = 1018 - of_match_ptr(mvebu_pcie_of_match_table), 991 + .of_match_table = mvebu_pcie_of_match_table, 1019 992 /* driver 
unloading/unbinding currently not supported */ 1020 993 .suppress_bind_attrs = true, 1021 994 },
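A small fix buried in the mvebu changes: max() became max_t(resource_size_t, SZ_64K, size) because the kernel's max() type-checks its arguments strictly. The alignment rule itself — MBus windows must be aligned on their own size, with a 64 KiB floor for I/O and 1 MiB for memory — is easy to model. The sketch below assumes power-of-two sizes, as the kernel's round_up() macro does; mvebu_align_io() is an illustrative name:

```c
#include <stdint.h>

#define SZ_64K	0x10000ULL
#define SZ_1M	0x100000ULL

/* round_up() for power-of-two alignment, matching the kernel macro. */
static uint64_t round_up_pow2(uint64_t x, uint64_t a)
{
	return (x + a - 1) & ~(a - 1);
}

/* I/O window placement: align on max(64K, window size). The memory
 * case is identical with an SZ_1M floor. */
static uint64_t mvebu_align_io(uint64_t start, uint64_t size)
{
	uint64_t align = size > SZ_64K ? size : SZ_64K;

	return round_up_pow2(start, align);
}
```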
+9 -3
drivers/pci/host/pci-rcar-gen2.c
··· 17 17 #include <linux/module.h> 18 18 #include <linux/pci.h> 19 19 #include <linux/platform_device.h> 20 + #include <linux/pm_runtime.h> 20 21 #include <linux/slab.h> 21 22 22 23 /* AHB-PCI Bridge PCI communication registers */ ··· 78 77 #define RCAR_PCI_NR_CONTROLLERS 3 79 78 80 79 struct rcar_pci_priv { 80 + struct device *dev; 81 81 void __iomem *reg; 82 82 struct resource io_res; 83 83 struct resource mem_res; ··· 171 169 void __iomem *reg = priv->reg; 172 170 u32 val; 173 171 172 + pm_runtime_enable(priv->dev); 173 + pm_runtime_get_sync(priv->dev); 174 + 174 175 val = ioread32(reg + RCAR_PCI_UNIT_REV_REG); 175 - pr_info("PCI: bus%u revision %x\n", sys->busnr, val); 176 + dev_info(priv->dev, "PCI: bus%u revision %x\n", sys->busnr, val); 176 177 177 178 /* Disable Direct Power Down State and assert reset */ 178 179 val = ioread32(reg + RCAR_USBCTR_REG) & ~RCAR_USBCTR_DIRPD; ··· 281 276 282 277 cfg_res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 283 278 reg = devm_ioremap_resource(&pdev->dev, cfg_res); 284 - if (!reg) 285 - return -ENODEV; 279 + if (IS_ERR(reg)) 280 + return PTR_ERR(reg); 286 281 287 282 mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 288 283 if (!mem_res || !mem_res->start) ··· 306 301 307 302 priv->irq = platform_get_irq(pdev, 0); 308 303 priv->reg = reg; 304 + priv->dev = &pdev->dev; 309 305 310 306 return rcar_pci_add_controller(priv); 311 307 }
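The rcar-gen2 fix above corrects a common devm_ioremap_resource() mistake: that function never returns NULL — on failure it encodes a negative errno into the pointer — so the old `if (!reg)` test could never catch a failure. The fix checks IS_ERR() and propagates PTR_ERR(). The convention can be modelled in userspace (MAX_ERRNO and the helper names follow the kernel's err.h):

```c
#include <stdint.h>
#include <stdbool.h>

/* Error pointers live in the top page of the address space, so any
 * pointer value >= (unsigned long)-MAX_ERRNO is an encoded errno. */
#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error)
{
	return (void *)(intptr_t)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)(intptr_t)ptr;
}

static inline bool IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

Note that NULL is not in the error range, which is exactly why the original `if (!reg)` check silently passed on failure.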
+1 -1
drivers/pci/host/pci-tegra.c
··· 805 805 afi_writel(pcie, value, AFI_PCIE_CONFIG); 806 806 807 807 value = afi_readl(pcie, AFI_FUSE); 808 - value &= ~AFI_FUSE_PCIE_T0_GEN2_DIS; 808 + value |= AFI_FUSE_PCIE_T0_GEN2_DIS; 809 809 afi_writel(pcie, value, AFI_FUSE); 810 810 811 811 /* initialize internal PHY, enable up to 16 PCIE lanes */
+59 -32
drivers/pci/host/pcie-designware.c
···
 	return sys->private_data;
 }
 
-int cfg_read(void __iomem *addr, int where, int size, u32 *val)
+int dw_pcie_cfg_read(void __iomem *addr, int where, int size, u32 *val)
 {
 	*val = readl(addr);
 
···
 	return PCIBIOS_SUCCESSFUL;
 }
 
-int cfg_write(void __iomem *addr, int where, int size, u32 val)
+int dw_pcie_cfg_write(void __iomem *addr, int where, int size, u32 val)
 {
 	if (size == 4)
 		writel(val, addr);
···
 	if (pp->ops->rd_own_conf)
 		ret = pp->ops->rd_own_conf(pp, where, size, val);
 	else
-		ret = cfg_read(pp->dbi_base + (where & ~0x3), where, size, val);
+		ret = dw_pcie_cfg_read(pp->dbi_base + (where & ~0x3), where,
+				       size, val);
 
 	return ret;
 }
···
 	if (pp->ops->wr_own_conf)
 		ret = pp->ops->wr_own_conf(pp, where, size, val);
 	else
-		ret = cfg_write(pp->dbi_base + (where & ~0x3), where, size,
-				val);
+		ret = dw_pcie_cfg_write(pp->dbi_base + (where & ~0x3), where,
+					size, val);
 
 	return ret;
 }
···
 		while ((pos = find_next_bit(&val, 32, pos)) != 32) {
 			irq = irq_find_mapping(pp->irq_domain,
 					i * 32 + pos);
+			dw_pcie_wr_own_conf(pp,
+					PCIE_MSI_INTR0_STATUS + i * 12,
+					4, 1 << pos);
 			generic_handle_irq(irq);
 			pos++;
 		}
 	}
-	dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12, 4, val);
 	}
 }
 
···
 	return 0;
 }
 
+static void clear_irq_range(struct pcie_port *pp, unsigned int irq_base,
+			    unsigned int nvec, unsigned int pos)
+{
+	unsigned int i, res, bit, val;
+
+	for (i = 0; i < nvec; i++) {
+		irq_set_msi_desc_off(irq_base, i, NULL);
+		clear_bit(pos + i, pp->msi_irq_in_use);
+		/* Disable corresponding interrupt on MSI controller */
+		res = ((pos + i) / 32) * 12;
+		bit = (pos + i) % 32;
+		dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
+		val &= ~(1 << bit);
+		dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
+	}
+}
+
 static int assign_irq(int no_irqs, struct msi_desc *desc, int *pos)
 {
 	int res, bit, irq, pos0, pos1, i;
···
 	if (!irq)
 		goto no_valid_irq;
 
-	i = 0;
-	while (i < no_irqs) {
+	/*
+	 * irq_create_mapping (called from dw_pcie_host_init) pre-allocates
+	 * descs so there is no need to allocate descs here. We can therefore
+	 * assume that if irq_find_mapping above returns non-zero, then the
+	 * descs are also successfully allocated.
+	 */
+
+	for (i = 0; i < no_irqs; i++) {
+		if (irq_set_msi_desc_off(irq, i, desc) != 0) {
+			clear_irq_range(pp, irq, i, pos0);
+			goto no_valid_irq;
+		}
 		set_bit(pos0 + i, pp->msi_irq_in_use);
-		irq_alloc_descs((irq + i), (irq + i), 1, 0);
-		irq_set_msi_desc(irq + i, desc);
 		/*Enable corresponding interrupt in MSI interrupt controller */
 		res = ((pos0 + i) / 32) * 12;
 		bit = (pos0 + i) % 32;
 		dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
 		val |= 1 << bit;
 		dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
-		i++;
 	}
 
 	*pos = pos0;
···
 
 static void clear_irq(unsigned int irq)
 {
-	int res, bit, val, pos;
+	unsigned int pos, nvec;
 	struct irq_desc *desc;
 	struct msi_desc *msi;
 	struct pcie_port *pp;
···
 		return;
 	}
 
+	/* undo what was done in assign_irq */
 	pos = data->hwirq;
+	nvec = 1 << msi->msi_attrib.multiple;
 
-	irq_free_desc(irq);
+	clear_irq_range(pp, irq, nvec, pos);
 
-	clear_bit(pos, pp->msi_irq_in_use);
-
-	/* Disable corresponding interrupt on MSI interrupt controller */
-	res = (pos / 32) * 12;
-	bit = pos % 32;
-	dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
-	val &= ~(1 << bit);
-	dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
+	/* all irqs cleared; reset attributes */
+	msi->irq = 0;
+	msi->msi_attrib.multiple = 0;
 }
 
 static int dw_msi_setup_irq(struct msi_chip *chip, struct pci_dev *pdev,
···
 	if (irq < 0)
 		return irq;
 
-	msg_ctr &= ~PCI_MSI_FLAGS_QSIZE;
-	msg_ctr |= msgvec << 4;
-	pci_write_config_word(pdev, desc->msi_attrib.pos + PCI_MSI_FLAGS,
-				msg_ctr);
+	/*
+	 * write_msi_msg() will update PCI_MSI_FLAGS so there is
+	 * no need to explicitly call pci_write_config_word().
+	 */
 	desc->msi_attrib.multiple = msgvec;
 
 	msg.address_lo = virt_to_phys((void *)pp->msi_data);
···
 						 + global_io_offset);
 			pp->config.io_size = resource_size(&pp->io);
 			pp->config.io_bus_addr = range.pci_addr;
+			pp->io_base = range.cpu_addr;
 		}
 		if (restype == IORESOURCE_MEM) {
 			of_pci_range_to_resource(&range, np, &pp->mem);
···
 
 	pp->cfg0_base = pp->cfg.start;
 	pp->cfg1_base = pp->cfg.start + pp->config.cfg0_size;
-	pp->io_base = pp->io.start;
 	pp->mem_base = pp->mem.start;
 
 	pp->va_cfg0_base = devm_ioremap(pp->dev, pp->cfg0_base,
···
 
 	if (bus->parent->number == pp->root_bus_nr) {
 		dw_pcie_prog_viewport_cfg0(pp, busdev);
-		ret = cfg_read(pp->va_cfg0_base + address, where, size, val);
+		ret = dw_pcie_cfg_read(pp->va_cfg0_base + address, where, size,
+				       val);
 		dw_pcie_prog_viewport_mem_outbound(pp);
 	} else {
 		dw_pcie_prog_viewport_cfg1(pp, busdev);
-		ret = cfg_read(pp->va_cfg1_base + address, where, size, val);
+		ret = dw_pcie_cfg_read(pp->va_cfg1_base + address, where, size,
+				       val);
 		dw_pcie_prog_viewport_io_outbound(pp);
 	}
 
···
 
 	if (bus->parent->number == pp->root_bus_nr) {
 		dw_pcie_prog_viewport_cfg0(pp, busdev);
-		ret = cfg_write(pp->va_cfg0_base + address, where, size, val);
+		ret = dw_pcie_cfg_write(pp->va_cfg0_base + address, where, size,
+					val);
 		dw_pcie_prog_viewport_mem_outbound(pp);
 	} else {
 		dw_pcie_prog_viewport_cfg1(pp, busdev);
-		ret = cfg_write(pp->va_cfg1_base + address, where, size, val);
+		ret = dw_pcie_cfg_write(pp->va_cfg1_base + address, where, size,
+					val);
 		dw_pcie_prog_viewport_io_outbound(pp);
 	}
 
 	return ret;
 }
-
 
 static int dw_pcie_valid_config(struct pcie_port *pp,
 				struct pci_bus *bus, int dev)
···
 
 	if (global_io_offset < SZ_1M && pp->config.io_size > 0) {
 		sys->io_offset = global_io_offset - pp->config.io_bus_addr;
-		pci_ioremap_io(sys->io_offset, pp->io.start);
+		pci_ioremap_io(global_io_offset, pp->io_base);
 		global_io_offset += SZ_64K;
 		pci_add_resource_offset(&sys->resources, &pp->io,
 					sys->io_offset);
+2 -2
drivers/pci/host/pcie-designware.h
···
 	void (*host_init)(struct pcie_port *pp);
 };
 
-int cfg_read(void __iomem *addr, int where, int size, u32 *val);
-int cfg_write(void __iomem *addr, int where, int size, u32 val);
+int dw_pcie_cfg_read(void __iomem *addr, int where, int size, u32 *val);
+int dw_pcie_cfg_write(void __iomem *addr, int where, int size, u32 val);
 void dw_handle_msi_irq(struct pcie_port *pp);
 void dw_pcie_msi_init(struct pcie_port *pp);
 int dw_pcie_link_up(struct pcie_port *pp);
+4 -1
drivers/pci/hotplug/acpiphp.h
···
 
 	/* PCI-to-PCI bridge device */
 	struct pci_dev *pci_dev;
+
+	bool is_going_away;
 };
 
···
 /* slot flags */
 
 #define SLOT_ENABLED		(0x00000001)
+#define SLOT_IS_GOING_AWAY	(0x00000002)
 
 /* function flags */
···
 typedef int (*acpiphp_callback)(struct acpiphp_slot *slot, void *data);
 
 int acpiphp_enable_slot(struct acpiphp_slot *slot);
-int acpiphp_disable_and_eject_slot(struct acpiphp_slot *slot);
+int acpiphp_disable_slot(struct acpiphp_slot *slot);
 u8 acpiphp_get_power_status(struct acpiphp_slot *slot);
 u8 acpiphp_get_attention_status(struct acpiphp_slot *slot);
 u8 acpiphp_get_latch_status(struct acpiphp_slot *slot);
+1 -1
drivers/pci/hotplug/acpiphp_core.c
···
 	pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
 
 	/* disable the specified slot */
-	return acpiphp_disable_and_eject_slot(slot->acpi_slot);
+	return acpiphp_disable_slot(slot->acpi_slot);
 }
 
+38 -5
drivers/pci/hotplug/acpiphp_glue.c
···
 			pr_err("failed to remove notify handler\n");
 		}
 	}
+	slot->flags |= SLOT_IS_GOING_AWAY;
 	if (slot->slot)
 		acpiphp_unregister_hotplug_slot(slot);
 }
···
 	mutex_lock(&bridge_mutex);
 	list_del(&bridge->list);
 	mutex_unlock(&bridge_mutex);
+
+	bridge->is_going_away = true;
 }
 
 /**
···
 {
 	struct acpiphp_slot *slot;
 
+	/* Bail out if the bridge is going away. */
+	if (bridge->is_going_away)
+		return;
+
 	list_for_each_entry(slot, &bridge->slots, node) {
 		struct pci_bus *bus = slot->bus;
 		struct pci_dev *dev, *tmp;
···
 	}
 }
 
+static int acpiphp_disable_and_eject_slot(struct acpiphp_slot *slot);
+
 static void hotplug_event(acpi_handle handle, u32 type, void *data)
 {
 	struct acpiphp_context *context = data;
···
 		} else {
 			struct acpiphp_slot *slot = func->slot;
 
+			if (slot->flags & SLOT_IS_GOING_AWAY)
+				break;
+
 			mutex_lock(&slot->crit_sect);
 			enable_slot(slot);
 			mutex_unlock(&slot->crit_sect);
···
 		} else {
 			struct acpiphp_slot *slot = func->slot;
 			int ret;
+
+			if (slot->flags & SLOT_IS_GOING_AWAY)
+				break;
 
 			/*
 			 * Check if anything has changed in the slot and rescan
···
 	acpi_handle handle = context->handle;
 
 	acpi_scan_lock_acquire();
+	pci_lock_rescan_remove();
 
 	hotplug_event(handle, type, context);
 
+	pci_unlock_rescan_remove();
 	acpi_scan_lock_release();
 	acpi_evaluate_hotplug_ost(handle, type, ACPI_OST_SC_SUCCESS, NULL);
 	put_bridge(context->func.parent);
···
  */
 int acpiphp_enable_slot(struct acpiphp_slot *slot)
 {
+	pci_lock_rescan_remove();
+
+	if (slot->flags & SLOT_IS_GOING_AWAY)
+		return -ENODEV;
+
 	mutex_lock(&slot->crit_sect);
 	/* configure all functions */
 	if (!(slot->flags & SLOT_ENABLED))
 		enable_slot(slot);
 
 	mutex_unlock(&slot->crit_sect);
+
+	pci_unlock_rescan_remove();
 	return 0;
 }
···
  * acpiphp_disable_and_eject_slot - power off and eject slot
  * @slot: ACPI PHP slot
  */
-int acpiphp_disable_and_eject_slot(struct acpiphp_slot *slot)
+static int acpiphp_disable_and_eject_slot(struct acpiphp_slot *slot)
 {
 	struct acpiphp_func *func;
-	int retval = 0;
+
+	if (slot->flags & SLOT_IS_GOING_AWAY)
+		return -ENODEV;
 
 	mutex_lock(&slot->crit_sect);
···
 	}
 
 	mutex_unlock(&slot->crit_sect);
-	return retval;
+	return 0;
 }
 
+int acpiphp_disable_slot(struct acpiphp_slot *slot)
+{
+	int ret;
+
+	pci_lock_rescan_remove();
+	ret = acpiphp_disable_and_eject_slot(slot);
+	pci_unlock_rescan_remove();
+	return ret;
+}
 
 /*
  * slot enabled:  1
···
 	return (slot->flags & SLOT_ENABLED);
 }
 
-
 /*
  * latch open:  1
  * latch closed:  0
···
 {
 	return !(get_slot_status(slot) & ACPI_STA_DEVICE_UI);
 }
-
 
 /*
  * adapter presence : 1
+12 -2
drivers/pci/hotplug/cpci_hotplug_pci.c
···
 {
 	struct pci_dev *dev;
 	struct pci_bus *parent;
+	int ret = 0;
 
 	dbg("%s - enter", __func__);
+
+	pci_lock_rescan_remove();
 
 	if (slot->dev == NULL) {
 		dbg("pci_dev null, finding %02x:%02x:%x",
···
 		slot->dev = pci_get_slot(slot->bus, slot->devfn);
 		if (slot->dev == NULL) {
 			err("Could not find PCI device for slot %02x", slot->number);
-			return -ENODEV;
+			ret = -ENODEV;
+			goto out;
 		}
 	}
 	parent = slot->dev->bus;
···
 
 	pci_bus_add_devices(parent);
 
+ out:
+	pci_unlock_rescan_remove();
 	dbg("%s - exit", __func__);
-	return 0;
+	return ret;
 }
 
 int cpci_unconfigure_slot(struct slot* slot)
···
 		return -ENODEV;
 	}
 
+	pci_lock_rescan_remove();
+
 	list_for_each_entry_safe(dev, temp, &slot->bus->devices, bus_list) {
 		if (PCI_SLOT(dev->devfn) != PCI_SLOT(slot->devfn))
 			continue;
···
 	}
 	pci_dev_put(slot->dev);
 	slot->dev = NULL;
+
+	pci_unlock_rescan_remove();
 
 	dbg("%s - exit", __func__);
 	return 0;
+7 -1
drivers/pci/hotplug/cpqphp_pci.c
···
 	struct pci_bus *child;
 	int num;
 
+	pci_lock_rescan_remove();
+
 	if (func->pci_dev == NULL)
 		func->pci_dev = pci_get_bus_and_slot(func->bus,PCI_DEVFN(func->device, func->function));
 
···
 		func->pci_dev = pci_get_bus_and_slot(func->bus, PCI_DEVFN(func->device, func->function));
 		if (func->pci_dev == NULL) {
 			dbg("ERROR: pci_dev still null\n");
-			return 0;
+			goto out;
 		}
 	}
 
···
 
 	pci_dev_put(func->pci_dev);
 
+ out:
+	pci_unlock_rescan_remove();
 	return 0;
 }
 
···
 
 	dbg("%s: bus/dev/func = %x/%x/%x\n", __func__, func->bus, func->device, func->function);
 
+	pci_lock_rescan_remove();
 	for (j=0; j<8 ; j++) {
 		struct pci_dev* temp = pci_get_bus_and_slot(func->bus, PCI_DEVFN(func->device, j));
 		if (temp) {
···
 			pci_stop_and_remove_bus_device(temp);
 		}
 	}
+	pci_unlock_rescan_remove();
 	return 0;
 }
 
+11 -2
drivers/pci/hotplug/ibmphp_core.c
···
 			func->device, func->function);
 	debug("func->device << 3 | 0x0 = %x\n", func->device << 3 | 0x0);
 
+	pci_lock_rescan_remove();
+
 	for (j = 0; j < 0x08; j++) {
 		temp = pci_get_bus_and_slot(func->busno, (func->device << 3) | j);
 		if (temp) {
···
 			pci_dev_put(temp);
 		}
 	}
+
 	pci_dev_put(func->dev);
+
+	pci_unlock_rescan_remove();
 }
 
 /*
···
 	int flag = 0;	/* this is to make sure we don't double scan the bus,
 				for bridged devices primarily */
 
+	pci_lock_rescan_remove();
+
 	if (!(bus_structure_fixup(func->busno)))
 		flag = 1;
 	if (func->dev == NULL)
···
 	if (func->dev == NULL) {
 		struct pci_bus *bus = pci_find_bus(0, func->busno);
 		if (!bus)
-			return 0;
+			goto out;
 
 		num = pci_scan_slot(bus,
 				PCI_DEVFN(func->device, func->function));
···
 				PCI_DEVFN(func->device, func->function));
 		if (func->dev == NULL) {
 			err("ERROR... : pci_dev still NULL\n");
-			return 0;
+			goto out;
 		}
 	}
 	if (!(flag) && (func->dev->hdr_type == PCI_HEADER_TYPE_BRIDGE)) {
···
 		pci_bus_add_devices(child);
 	}
 
+ out:
+	pci_unlock_rescan_remove();
 	return 0;
 }
 
+7 -8
drivers/pci/hotplug/pciehp.h
···
 extern bool pciehp_poll_mode;
 extern int pciehp_poll_time;
 extern bool pciehp_debug;
-extern bool pciehp_force;
 
 #define dbg(format, arg...)		\
 do {					\
···
 int pcie_init_notification(struct controller *ctrl);
 int pciehp_enable_slot(struct slot *p_slot);
 int pciehp_disable_slot(struct slot *p_slot);
-int pcie_enable_notification(struct controller *ctrl);
+void pcie_enable_notification(struct controller *ctrl);
 int pciehp_power_on_slot(struct slot *slot);
-int pciehp_power_off_slot(struct slot *slot);
-int pciehp_get_power_status(struct slot *slot, u8 *status);
-int pciehp_get_attention_status(struct slot *slot, u8 *status);
+void pciehp_power_off_slot(struct slot *slot);
+void pciehp_get_power_status(struct slot *slot, u8 *status);
+void pciehp_get_attention_status(struct slot *slot, u8 *status);
 
-int pciehp_set_attention_status(struct slot *slot, u8 status);
-int pciehp_get_latch_status(struct slot *slot, u8 *status);
-int pciehp_get_adapter_status(struct slot *slot, u8 *status);
+void pciehp_set_attention_status(struct slot *slot, u8 status);
+void pciehp_get_latch_status(struct slot *slot, u8 *status);
+void pciehp_get_adapter_status(struct slot *slot, u8 *status);
 int pciehp_query_power_fault(struct slot *slot);
 void pciehp_green_led_on(struct slot *slot);
 void pciehp_green_led_off(struct slot *slot);
+11 -6
drivers/pci/hotplug/pciehp_core.c
···
 bool pciehp_debug;
 bool pciehp_poll_mode;
 int pciehp_poll_time;
-bool pciehp_force;
+static bool pciehp_force;
 
 #define DRIVER_VERSION	"0.4"
 #define DRIVER_AUTHOR	"Dan Zink <dan.zink@compaq.com>, Greg Kroah-Hartman <greg@kroah.com>, Dely Sy <dely.l.sy@intel.com>"
···
 	ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n",
 		 __func__, slot_name(slot));
 
-	return pciehp_set_attention_status(slot, status);
+	pciehp_set_attention_status(slot, status);
+	return 0;
 }
 
···
 	ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n",
 		 __func__, slot_name(slot));
 
-	return pciehp_get_power_status(slot, value);
+	pciehp_get_power_status(slot, value);
+	return 0;
 }
 
 static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
···
 	ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n",
 		 __func__, slot_name(slot));
 
-	return pciehp_get_attention_status(slot, value);
+	pciehp_get_attention_status(slot, value);
+	return 0;
 }
 
 static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
···
 	ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n",
 		 __func__, slot_name(slot));
 
-	return pciehp_get_latch_status(slot, value);
+	pciehp_get_latch_status(slot, value);
+	return 0;
 }
 
 static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
···
 	ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n",
 		 __func__, slot_name(slot));
 
-	return pciehp_get_adapter_status(slot, value);
+	pciehp_get_adapter_status(slot, value);
+	return 0;
 }
 
 static int reset_slot(struct hotplug_slot *hotplug_slot, int probe)
+30 -60
drivers/pci/hotplug/pciehp_ctrl.c
···
 {
 	/* turn off slot, turn on Amber LED, turn off Green LED if supported*/
 	if (POWER_CTRL(ctrl)) {
-		if (pciehp_power_off_slot(pslot)) {
-			ctrl_err(ctrl,
-				 "Issue of Slot Power Off command failed\n");
-			return;
-		}
+		pciehp_power_off_slot(pslot);
+
 		/*
 		 * After turning power off, we must wait for at least 1 second
 		 * before taking any action that relies on power having been
···
 		msleep(1000);
 	}
 
-	if (PWR_LED(ctrl))
-		pciehp_green_led_off(pslot);
-
-	if (ATTN_LED(ctrl)) {
-		if (pciehp_set_attention_status(pslot, 1)) {
-			ctrl_err(ctrl,
-				 "Issue of Set Attention Led command failed\n");
-			return;
-		}
-	}
+	pciehp_green_led_off(pslot);
+	pciehp_set_attention_status(pslot, 1);
 }
 
 /**
···
 		return retval;
 	}
 
-	if (PWR_LED(ctrl))
-		pciehp_green_led_blink(p_slot);
+	pciehp_green_led_blink(p_slot);
 
 	/* Check link training status */
 	retval = pciehp_check_link_status(ctrl);
···
 		goto err_exit;
 	}
 
-	if (PWR_LED(ctrl))
-		pciehp_green_led_on(p_slot);
-
+	pciehp_green_led_on(p_slot);
 	return 0;
 
 err_exit:
···
  */
 static int remove_board(struct slot *p_slot)
 {
-	int retval = 0;
+	int retval;
 	struct controller *ctrl = p_slot->ctrl;
 
 	retval = pciehp_unconfigure_device(p_slot);
···
 		return retval;
 
 	if (POWER_CTRL(ctrl)) {
-		/* power off slot */
-		retval = pciehp_power_off_slot(p_slot);
-		if (retval) {
-			ctrl_err(ctrl,
-				 "Issue of Slot Disable command failed\n");
-			return retval;
-		}
+		pciehp_power_off_slot(p_slot);
+
 		/*
 		 * After turning power off, we must wait for at least 1 second
 		 * before taking any action that relies on power having been
···
 	}
 
 	/* turn off Green LED */
-	if (PWR_LED(ctrl))
-		pciehp_green_led_off(p_slot);
-
+	pciehp_green_led_off(p_slot);
 	return 0;
 }
 
···
 		break;
 	case POWERON_STATE:
 		mutex_unlock(&p_slot->lock);
-		if (pciehp_enable_slot(p_slot) && PWR_LED(p_slot->ctrl))
+		if (pciehp_enable_slot(p_slot))
 			pciehp_green_led_off(p_slot);
 		mutex_lock(&p_slot->lock);
 		p_slot->state = STATIC_STATE;
···
 				  "press.\n", slot_name(p_slot));
 		}
 		/* blink green LED and turn off amber */
-		if (PWR_LED(ctrl))
-			pciehp_green_led_blink(p_slot);
-		if (ATTN_LED(ctrl))
-			pciehp_set_attention_status(p_slot, 0);
-
+		pciehp_green_led_blink(p_slot);
+		pciehp_set_attention_status(p_slot, 0);
 		queue_delayed_work(p_slot->wq, &p_slot->work, 5*HZ);
 		break;
 	case BLINKINGOFF_STATE:
···
 		ctrl_info(ctrl, "Button cancel on Slot(%s)\n", slot_name(p_slot));
 		cancel_delayed_work(&p_slot->work);
 		if (p_slot->state == BLINKINGOFF_STATE) {
-			if (PWR_LED(ctrl))
-				pciehp_green_led_on(p_slot);
+			pciehp_green_led_on(p_slot);
 		} else {
-			if (PWR_LED(ctrl))
-				pciehp_green_led_off(p_slot);
+			pciehp_green_led_off(p_slot);
 		}
-		if (ATTN_LED(ctrl))
-			pciehp_set_attention_status(p_slot, 0);
+		pciehp_set_attention_status(p_slot, 0);
 		ctrl_info(ctrl, "PCI slot #%s - action canceled "
 			  "due to button press\n", slot_name(p_slot));
 		p_slot->state = STATIC_STATE;
···
 	case INT_POWER_FAULT:
 		if (!POWER_CTRL(ctrl))
 			break;
-		if (ATTN_LED(ctrl))
-			pciehp_set_attention_status(p_slot, 1);
-		if (PWR_LED(ctrl))
-			pciehp_green_led_off(p_slot);
+		pciehp_set_attention_status(p_slot, 1);
+		pciehp_green_led_off(p_slot);
 		break;
 	case INT_PRESENCE_ON:
 	case INT_PRESENCE_OFF:
···
 	int rc;
 	struct controller *ctrl = p_slot->ctrl;
 
-	rc = pciehp_get_adapter_status(p_slot, &getstatus);
-	if (rc || !getstatus) {
+	pciehp_get_adapter_status(p_slot, &getstatus);
+	if (!getstatus) {
 		ctrl_info(ctrl, "No adapter on slot(%s)\n", slot_name(p_slot));
 		return -ENODEV;
 	}
 	if (MRL_SENS(p_slot->ctrl)) {
-		rc = pciehp_get_latch_status(p_slot, &getstatus);
-		if (rc || getstatus) {
+		pciehp_get_latch_status(p_slot, &getstatus);
+		if (getstatus) {
 			ctrl_info(ctrl, "Latch open on slot(%s)\n",
 				  slot_name(p_slot));
 			return -ENODEV;
···
 	}
 
 	if (POWER_CTRL(p_slot->ctrl)) {
-		rc = pciehp_get_power_status(p_slot, &getstatus);
-		if (rc || getstatus) {
+		pciehp_get_power_status(p_slot, &getstatus);
+		if (getstatus) {
 			ctrl_info(ctrl, "Already enabled on slot(%s)\n",
 				  slot_name(p_slot));
 			return -EINVAL;
···
 int pciehp_disable_slot(struct slot *p_slot)
 {
 	u8 getstatus = 0;
-	int ret = 0;
 	struct controller *ctrl = p_slot->ctrl;
 
 	if (!p_slot->ctrl)
 		return 1;
 
 	if (!HP_SUPR_RM(p_slot->ctrl)) {
-		ret = pciehp_get_adapter_status(p_slot, &getstatus);
-		if (ret || !getstatus) {
+		pciehp_get_adapter_status(p_slot, &getstatus);
+		if (!getstatus) {
 			ctrl_info(ctrl, "No adapter on slot(%s)\n",
 				  slot_name(p_slot));
 			return -ENODEV;
···
 	}
 
 	if (MRL_SENS(p_slot->ctrl)) {
-		ret = pciehp_get_latch_status(p_slot, &getstatus);
-		if (ret || getstatus) {
+		pciehp_get_latch_status(p_slot, &getstatus);
+		if (getstatus) {
 			ctrl_info(ctrl, "Latch open on slot(%s)\n",
 				  slot_name(p_slot));
 			return -ENODEV;
···
 	}
 
 	if (POWER_CTRL(p_slot->ctrl)) {
-		ret = pciehp_get_power_status(p_slot, &getstatus);
-		if (ret || !getstatus) {
+		pciehp_get_power_status(p_slot, &getstatus);
+		if (!getstatus) {
 			ctrl_info(ctrl, "Already disabled on slot(%s)\n",
 				  slot_name(p_slot));
 			return -EINVAL;
+128 -252
drivers/pci/hotplug/pciehp_hpc.c
···
 #include "../pci.h"
 #include "pciehp.h"
 
-static inline int pciehp_readw(struct controller *ctrl, int reg, u16 *value)
+static inline struct pci_dev *ctrl_dev(struct controller *ctrl)
 {
-	struct pci_dev *dev = ctrl->pcie->port;
-	return pcie_capability_read_word(dev, reg, value);
+	return ctrl->pcie->port;
 }
-
-static inline int pciehp_readl(struct controller *ctrl, int reg, u32 *value)
-{
-	struct pci_dev *dev = ctrl->pcie->port;
-	return pcie_capability_read_dword(dev, reg, value);
-}
-
-static inline int pciehp_writew(struct controller *ctrl, int reg, u16 value)
-{
-	struct pci_dev *dev = ctrl->pcie->port;
-	return pcie_capability_write_word(dev, reg, value);
-}
-
-static inline int pciehp_writel(struct controller *ctrl, int reg, u32 value)
-{
-	struct pci_dev *dev = ctrl->pcie->port;
-	return pcie_capability_write_dword(dev, reg, value);
-}
-
-/* Power Control Command */
-#define POWER_ON	0
-#define POWER_OFF	PCI_EXP_SLTCTL_PCC
 
 static irqreturn_t pcie_isr(int irq, void *dev_id);
 static void start_int_poll_timer(struct controller *ctrl, int sec);
···
 
 static int pcie_poll_cmd(struct controller *ctrl)
 {
+	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_status;
-	int err, timeout = 1000;
+	int timeout = 1000;
 
-	err = pciehp_readw(ctrl, PCI_EXP_SLTSTA, &slot_status);
-	if (!err && (slot_status & PCI_EXP_SLTSTA_CC)) {
-		pciehp_writew(ctrl, PCI_EXP_SLTSTA, PCI_EXP_SLTSTA_CC);
+	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
+	if (slot_status & PCI_EXP_SLTSTA_CC) {
+		pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
+					   PCI_EXP_SLTSTA_CC);
 		return 1;
 	}
 	while (timeout > 0) {
 		msleep(10);
 		timeout -= 10;
-		err = pciehp_readw(ctrl, PCI_EXP_SLTSTA, &slot_status);
-		if (!err && (slot_status & PCI_EXP_SLTSTA_CC)) {
-			pciehp_writew(ctrl, PCI_EXP_SLTSTA, PCI_EXP_SLTSTA_CC);
+		pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
+		if (slot_status & PCI_EXP_SLTSTA_CC) {
+			pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
+						   PCI_EXP_SLTSTA_CC);
 			return 1;
 		}
 	}
···
  * @cmd: command value written to slot control register
  * @mask: bitmask of slot control register to be modified
  */
-static int pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask)
+static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask)
 {
-	int retval = 0;
+	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_status;
 	u16 slot_ctrl;
 
 	mutex_lock(&ctrl->ctrl_lock);
 
-	retval = pciehp_readw(ctrl, PCI_EXP_SLTSTA, &slot_status);
-	if (retval) {
-		ctrl_err(ctrl, "%s: Cannot read SLOTSTATUS register\n",
-			 __func__);
-		goto out;
-	}
-
+	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
 	if (slot_status & PCI_EXP_SLTSTA_CC) {
 		if (!ctrl->no_cmd_complete) {
 			/*
···
 		}
 	}
 
-	retval = pciehp_readw(ctrl, PCI_EXP_SLTCTL, &slot_ctrl);
-	if (retval) {
-		ctrl_err(ctrl, "%s: Cannot read SLOTCTRL register\n", __func__);
-		goto out;
-	}
-
+	pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl);
 	slot_ctrl &= ~mask;
 	slot_ctrl |= (cmd & mask);
 	ctrl->cmd_busy = 1;
 	smp_mb();
-	retval = pciehp_writew(ctrl, PCI_EXP_SLTCTL, slot_ctrl);
-	if (retval)
-		ctrl_err(ctrl, "Cannot write to SLOTCTRL register\n");
+	pcie_capability_write_word(pdev, PCI_EXP_SLTCTL, slot_ctrl);
 
 	/*
 	 * Wait for command completion.
 	 */
-	if (!retval && !ctrl->no_cmd_complete) {
+	if (!ctrl->no_cmd_complete) {
 		int poll = 0;
 		/*
 		 * if hotplug interrupt is not enabled or command
···
 			poll = 1;
 		pcie_wait_cmd(ctrl, poll);
 	}
- out:
 	mutex_unlock(&ctrl->ctrl_lock);
-	return retval;
 }
 
 static bool check_link_active(struct controller *ctrl)
 {
-	bool ret = false;
+	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 lnk_status;
+	bool ret;
 
-	if (pciehp_readw(ctrl, PCI_EXP_LNKSTA, &lnk_status))
-		return ret;
-
+	pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
 	ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
 
 	if (ret)
···
 
 int pciehp_check_link_status(struct controller *ctrl)
 {
+	struct pci_dev *pdev = ctrl_dev(ctrl);
+	bool found;
 	u16 lnk_status;
-	int retval = 0;
-	bool found = false;
 
 	/*
 	 * Data Link Layer Link Active Reporting must be capable for
···
 	found = pci_bus_check_dev(ctrl->pcie->port->subordinate,
 					PCI_DEVFN(0, 0));
 
-	retval = pciehp_readw(ctrl, PCI_EXP_LNKSTA, &lnk_status);
-	if (retval) {
-		ctrl_err(ctrl, "Cannot read LNKSTATUS register\n");
-		return retval;
-	}
-
+	pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
 	ctrl_dbg(ctrl, "%s: lnk_status = %x\n", __func__, lnk_status);
 	if ((lnk_status & PCI_EXP_LNKSTA_LT) ||
 	    !(lnk_status & PCI_EXP_LNKSTA_NLW)) {
 		ctrl_err(ctrl, "Link Training Error occurs \n");
-		retval = -1;
-		return retval;
+		return -1;
 	}
 
 	pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status);
 
-	if (!found && !retval)
-		retval = -1;
+	if (!found)
+		return -1;
 
-	return retval;
+	return 0;
 }
 
 static int __pciehp_link_set(struct controller *ctrl, bool enable)
 {
+	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 lnk_ctrl;
-	int retval = 0;
 
-	retval = pciehp_readw(ctrl, PCI_EXP_LNKCTL, &lnk_ctrl);
-	if (retval) {
-		ctrl_err(ctrl, "Cannot read LNKCTRL register\n");
-		return retval;
-	}
+	pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnk_ctrl);
 
 	if (enable)
 		lnk_ctrl &= ~PCI_EXP_LNKCTL_LD;
 	else
 		lnk_ctrl |= PCI_EXP_LNKCTL_LD;
 
-	retval = pciehp_writew(ctrl, PCI_EXP_LNKCTL, lnk_ctrl);
-	if (retval) {
-		ctrl_err(ctrl, "Cannot write LNKCTRL register\n");
-		return retval;
-	}
+	pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnk_ctrl);
 	ctrl_dbg(ctrl, "%s: lnk_ctrl = %x\n", __func__, lnk_ctrl);
-
-	return retval;
+	return 0;
 }
 
 static int pciehp_link_enable(struct controller *ctrl)
···
 	return __pciehp_link_set(ctrl, false);
 }
 
-int pciehp_get_attention_status(struct slot *slot, u8 *status)
+void pciehp_get_attention_status(struct slot *slot, u8 *status)
 {
 	struct controller *ctrl = slot->ctrl;
+	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_ctrl;
-	u8 atten_led_state;
-	int retval = 0;
 
-	retval = pciehp_readw(ctrl, PCI_EXP_SLTCTL, &slot_ctrl);
-	if (retval) {
-		ctrl_err(ctrl, "%s: Cannot read SLOTCTRL register\n", __func__);
-		return retval;
-	}
-
+	pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl);
 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x, value read %x\n", __func__,
 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_ctrl);
 
-	atten_led_state = (slot_ctrl & PCI_EXP_SLTCTL_AIC) >> 6;
-
-	switch (atten_led_state) {
-	case 0:
-		*status = 0xFF;	/* Reserved */
-		break;
-	case 1:
+	switch (slot_ctrl & PCI_EXP_SLTCTL_AIC) {
+	case PCI_EXP_SLTCTL_ATTN_IND_ON:
 		*status = 1;	/* On */
 		break;
-	case 2:
+	case PCI_EXP_SLTCTL_ATTN_IND_BLINK:
 		*status = 2;	/* Blink */
 		break;
-	case 3:
+	case PCI_EXP_SLTCTL_ATTN_IND_OFF:
 		*status = 0;	/* Off */
 		break;
 	default:
 		*status = 0xFF;
 		break;
 	}
-
-	return 0;
 }
 
-int pciehp_get_power_status(struct slot *slot, u8 *status)
+void pciehp_get_power_status(struct slot *slot, u8 *status)
 {
 	struct controller *ctrl = slot->ctrl;
+	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_ctrl;
-	u8 pwr_state;
-	int retval = 0;
 
-	retval = pciehp_readw(ctrl, PCI_EXP_SLTCTL, &slot_ctrl);
-	if (retval) {
-		ctrl_err(ctrl, "%s: Cannot read SLOTCTRL register\n", __func__);
-		return retval;
-	}
+	pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl);
 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x value read %x\n", __func__,
 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_ctrl);
 
-	pwr_state = (slot_ctrl & PCI_EXP_SLTCTL_PCC) >> 10;
-
-	switch (pwr_state) {
-	case 0:
-		*status = 1;
+	switch (slot_ctrl & PCI_EXP_SLTCTL_PCC) {
+	case PCI_EXP_SLTCTL_PWR_ON:
+		*status = 1;	/* On */
 		break;
-	case 1:
-		*status = 0;
+	case PCI_EXP_SLTCTL_PWR_OFF:
+		*status = 0;	/* Off */
 		break;
 	default:
 		*status = 0xFF;
 		break;
 	}
-
-	return retval;
 }
 
-int pciehp_get_latch_status(struct slot *slot, u8 *status)
+void pciehp_get_latch_status(struct slot *slot, u8 *status)
 {
-	struct controller *ctrl = slot->ctrl;
+	struct pci_dev *pdev = ctrl_dev(slot->ctrl);
 	u16 slot_status;
-	int retval;
 
-	retval = pciehp_readw(ctrl, PCI_EXP_SLTSTA, &slot_status);
-	if (retval) {
-		ctrl_err(ctrl, "%s: Cannot read SLOTSTATUS register\n",
-			 __func__);
-		return retval;
-	}
+	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
 	*status = !!(slot_status & PCI_EXP_SLTSTA_MRLSS);
-	return 0;
 }
 
-int pciehp_get_adapter_status(struct slot *slot, u8 *status)
+void pciehp_get_adapter_status(struct slot *slot, u8 *status)
 {
-	struct controller *ctrl = slot->ctrl;
+	struct pci_dev *pdev = ctrl_dev(slot->ctrl);
 	u16 slot_status;
-	int retval;
 
-	retval = pciehp_readw(ctrl, PCI_EXP_SLTSTA, &slot_status);
-	if (retval) {
-		ctrl_err(ctrl, "%s: Cannot read SLOTSTATUS register\n",
-			 __func__);
-		return retval;
-	}
+	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
 	*status = !!(slot_status & PCI_EXP_SLTSTA_PDS);
-	return 0;
 }
 
 int pciehp_query_power_fault(struct slot *slot)
 {
-	struct controller *ctrl = slot->ctrl;
+	struct pci_dev *pdev = ctrl_dev(slot->ctrl);
 	u16 slot_status;
-	int retval;
 
-	retval = pciehp_readw(ctrl, PCI_EXP_SLTSTA, &slot_status);
-	if (retval) {
-		ctrl_err(ctrl, "Cannot check for power fault\n");
-		return retval;
-	}
+	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
 	return !!(slot_status & PCI_EXP_SLTSTA_PFD);
 }
 
-int pciehp_set_attention_status(struct slot *slot, u8 value)
+void pciehp_set_attention_status(struct slot *slot, u8 value)
 {
 	struct controller *ctrl = slot->ctrl;
 	u16 slot_cmd;
-	u16 cmd_mask;
 
-	cmd_mask = PCI_EXP_SLTCTL_AIC;
+	if (!ATTN_LED(ctrl))
+		return;
+
 	switch (value) {
 	case 0 :	/* turn off */
-		slot_cmd = 0x00C0;
+		slot_cmd = PCI_EXP_SLTCTL_ATTN_IND_OFF;
 		break;
 	case 1:		/* turn on */
-		slot_cmd = 0x0040;
+		slot_cmd = PCI_EXP_SLTCTL_ATTN_IND_ON;
 		break;
 	case 2:		/* turn blink */
-		slot_cmd =
PCI_EXP_SLTCTL_ATTN_IND_BLINK; 473 484 break; 474 485 default: 475 - return -EINVAL; 486 + return; 476 487 } 477 488 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 478 489 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd); 479 - return pcie_write_cmd(ctrl, slot_cmd, cmd_mask); 490 + pcie_write_cmd(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC); 480 491 } 481 492 482 493 void pciehp_green_led_on(struct slot *slot) 483 494 { 484 495 struct controller *ctrl = slot->ctrl; 485 - u16 slot_cmd; 486 - u16 cmd_mask; 487 496 488 - slot_cmd = 0x0100; 489 - cmd_mask = PCI_EXP_SLTCTL_PIC; 490 - pcie_write_cmd(ctrl, slot_cmd, cmd_mask); 497 + if (!PWR_LED(ctrl)) 498 + return; 499 + 500 + pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON, PCI_EXP_SLTCTL_PIC); 491 501 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 492 - pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd); 502 + pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 503 + PCI_EXP_SLTCTL_PWR_IND_ON); 493 504 } 494 505 495 506 void pciehp_green_led_off(struct slot *slot) 496 507 { 497 508 struct controller *ctrl = slot->ctrl; 498 - u16 slot_cmd; 499 - u16 cmd_mask; 500 509 501 - slot_cmd = 0x0300; 502 - cmd_mask = PCI_EXP_SLTCTL_PIC; 503 - pcie_write_cmd(ctrl, slot_cmd, cmd_mask); 510 + if (!PWR_LED(ctrl)) 511 + return; 512 + 513 + pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, PCI_EXP_SLTCTL_PIC); 504 514 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 505 - pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd); 515 + pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 516 + PCI_EXP_SLTCTL_PWR_IND_OFF); 506 517 } 507 518 508 519 void pciehp_green_led_blink(struct slot *slot) 509 520 { 510 521 struct controller *ctrl = slot->ctrl; 511 - u16 slot_cmd; 512 - u16 cmd_mask; 513 522 514 - slot_cmd = 0x0200; 515 - cmd_mask = PCI_EXP_SLTCTL_PIC; 516 - pcie_write_cmd(ctrl, slot_cmd, cmd_mask); 523 + if (!PWR_LED(ctrl)) 524 + return; 525 + 526 + pcie_write_cmd(ctrl, 
PCI_EXP_SLTCTL_PWR_IND_BLINK, PCI_EXP_SLTCTL_PIC); 517 527 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 518 - pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd); 528 + pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 529 + PCI_EXP_SLTCTL_PWR_IND_BLINK); 519 530 } 520 531 521 532 int pciehp_power_on_slot(struct slot * slot) 522 533 { 523 534 struct controller *ctrl = slot->ctrl; 524 - u16 slot_cmd; 525 - u16 cmd_mask; 535 + struct pci_dev *pdev = ctrl_dev(ctrl); 526 536 u16 slot_status; 527 - int retval = 0; 537 + int retval; 528 538 529 539 /* Clear sticky power-fault bit from previous power failures */ 530 - retval = pciehp_readw(ctrl, PCI_EXP_SLTSTA, &slot_status); 531 - if (retval) { 532 - ctrl_err(ctrl, "%s: Cannot read SLOTSTATUS register\n", 533 - __func__); 534 - return retval; 535 - } 536 - slot_status &= PCI_EXP_SLTSTA_PFD; 537 - if (slot_status) { 538 - retval = pciehp_writew(ctrl, PCI_EXP_SLTSTA, slot_status); 539 - if (retval) { 540 - ctrl_err(ctrl, 541 - "%s: Cannot write to SLOTSTATUS register\n", 542 - __func__); 543 - return retval; 544 - } 545 - } 540 + pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status); 541 + if (slot_status & PCI_EXP_SLTSTA_PFD) 542 + pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, 543 + PCI_EXP_SLTSTA_PFD); 546 544 ctrl->power_fault_detected = 0; 547 545 548 - slot_cmd = POWER_ON; 549 - cmd_mask = PCI_EXP_SLTCTL_PCC; 550 - retval = pcie_write_cmd(ctrl, slot_cmd, cmd_mask); 551 - if (retval) { 552 - ctrl_err(ctrl, "Write %x command failed!\n", slot_cmd); 553 - return retval; 554 - } 546 + pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_ON, PCI_EXP_SLTCTL_PCC); 555 547 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 556 - pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd); 548 + pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 549 + PCI_EXP_SLTCTL_PWR_ON); 557 550 558 551 retval = pciehp_link_enable(ctrl); 559 552 if (retval) ··· 504 613 return retval; 505 614 } 506 615 507 - int 
pciehp_power_off_slot(struct slot * slot) 616 + void pciehp_power_off_slot(struct slot * slot) 508 617 { 509 618 struct controller *ctrl = slot->ctrl; 510 - u16 slot_cmd; 511 - u16 cmd_mask; 512 - int retval; 513 619 514 620 /* Disable the link at first */ 515 621 pciehp_link_disable(ctrl); ··· 516 628 else 517 629 msleep(1000); 518 630 519 - slot_cmd = POWER_OFF; 520 - cmd_mask = PCI_EXP_SLTCTL_PCC; 521 - retval = pcie_write_cmd(ctrl, slot_cmd, cmd_mask); 522 - if (retval) { 523 - ctrl_err(ctrl, "Write command failed!\n"); 524 - return retval; 525 - } 631 + pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_OFF, PCI_EXP_SLTCTL_PCC); 526 632 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 527 - pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd); 528 - return 0; 633 + pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 634 + PCI_EXP_SLTCTL_PWR_OFF); 529 635 } 530 636 531 637 static irqreturn_t pcie_isr(int irq, void *dev_id) 532 638 { 533 639 struct controller *ctrl = (struct controller *)dev_id; 640 + struct pci_dev *pdev = ctrl_dev(ctrl); 534 641 struct slot *slot = ctrl->slot; 535 642 u16 detected, intr_loc; 536 643 ··· 536 653 */ 537 654 intr_loc = 0; 538 655 do { 539 - if (pciehp_readw(ctrl, PCI_EXP_SLTSTA, &detected)) { 540 - ctrl_err(ctrl, "%s: Cannot read SLOTSTATUS\n", 541 - __func__); 542 - return IRQ_NONE; 543 - } 656 + pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &detected); 544 657 545 658 detected &= (PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD | 546 659 PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_PDC | ··· 545 666 intr_loc |= detected; 546 667 if (!intr_loc) 547 668 return IRQ_NONE; 548 - if (detected && pciehp_writew(ctrl, PCI_EXP_SLTSTA, intr_loc)) { 549 - ctrl_err(ctrl, "%s: Cannot write to SLOTSTATUS\n", 550 - __func__); 551 - return IRQ_NONE; 552 - } 669 + if (detected) 670 + pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, 671 + intr_loc); 553 672 } while (detected); 554 673 555 674 ctrl_dbg(ctrl, "%s: intr_loc %x\n", __func__, intr_loc); ··· 582 
705 return IRQ_HANDLED; 583 706 } 584 707 585 - int pcie_enable_notification(struct controller *ctrl) 708 + void pcie_enable_notification(struct controller *ctrl) 586 709 { 587 710 u16 cmd, mask; 588 711 ··· 608 731 PCI_EXP_SLTCTL_MRLSCE | PCI_EXP_SLTCTL_PFDE | 609 732 PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_CCIE); 610 733 611 - if (pcie_write_cmd(ctrl, cmd, mask)) { 612 - ctrl_err(ctrl, "Cannot enable software notification\n"); 613 - return -1; 614 - } 615 - return 0; 734 + pcie_write_cmd(ctrl, cmd, mask); 616 735 } 617 736 618 737 static void pcie_disable_notification(struct controller *ctrl) 619 738 { 620 739 u16 mask; 740 + 621 741 mask = (PCI_EXP_SLTCTL_PDCE | PCI_EXP_SLTCTL_ABPE | 622 742 PCI_EXP_SLTCTL_MRLSCE | PCI_EXP_SLTCTL_PFDE | 623 743 PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_CCIE | 624 744 PCI_EXP_SLTCTL_DLLSCE); 625 - if (pcie_write_cmd(ctrl, 0, mask)) 626 - ctrl_warn(ctrl, "Cannot disable software notification\n"); 745 + pcie_write_cmd(ctrl, 0, mask); 627 746 } 628 747 629 748 /* ··· 631 758 int pciehp_reset_slot(struct slot *slot, int probe) 632 759 { 633 760 struct controller *ctrl = slot->ctrl; 761 + struct pci_dev *pdev = ctrl_dev(ctrl); 634 762 635 763 if (probe) 636 764 return 0; ··· 645 771 pci_reset_bridge_secondary_bus(ctrl->pcie->port); 646 772 647 773 if (HP_SUPR_RM(ctrl)) { 648 - pciehp_writew(ctrl, PCI_EXP_SLTSTA, PCI_EXP_SLTSTA_PDC); 774 + pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, 775 + PCI_EXP_SLTSTA_PDC); 649 776 pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PDCE, PCI_EXP_SLTCTL_PDCE); 650 777 if (pciehp_poll_mode) 651 778 int_poll_timeout(ctrl->poll_timer.data); ··· 659 784 { 660 785 if (pciehp_request_irq(ctrl)) 661 786 return -1; 662 - if (pcie_enable_notification(ctrl)) { 663 - pciehp_free_irq(ctrl); 664 - return -1; 665 - } 787 + pcie_enable_notification(ctrl); 666 788 ctrl->notification_enabled = 1; 667 789 return 0; 668 790 } ··· 747 875 EMI(ctrl) ? 
"yes" : "no"); 748 876 ctrl_info(ctrl, " Command Completed : %3s\n", 749 877 NO_CMD_CMPL(ctrl) ? "no" : "yes"); 750 - pciehp_readw(ctrl, PCI_EXP_SLTSTA, &reg16); 878 + pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &reg16); 751 879 ctrl_info(ctrl, "Slot Status : 0x%04x\n", reg16); 752 - pciehp_readw(ctrl, PCI_EXP_SLTCTL, &reg16); 880 + pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &reg16); 753 881 ctrl_info(ctrl, "Slot Control : 0x%04x\n", reg16); 754 882 } 883 + 884 + #define FLAG(x,y) (((x) & (y)) ? '+' : '-') 755 885 756 886 struct controller *pcie_init(struct pcie_device *dev) 757 887 { ··· 767 893 goto abort; 768 894 } 769 895 ctrl->pcie = dev; 770 - if (pciehp_readl(ctrl, PCI_EXP_SLTCAP, &slot_cap)) { 771 - ctrl_err(ctrl, "Cannot read SLOTCAP register\n"); 772 - goto abort_ctrl; 773 - } 774 - 896 + pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap); 775 897 ctrl->slot_cap = slot_cap; 776 898 mutex_init(&ctrl->ctrl_lock); 777 899 init_waitqueue_head(&ctrl->queue); ··· 783 913 ctrl->no_cmd_complete = 1; 784 914 785 915 /* Check if Data Link Layer Link Active Reporting is implemented */ 786 - if (pciehp_readl(ctrl, PCI_EXP_LNKCAP, &link_cap)) { 787 - ctrl_err(ctrl, "%s: Cannot read LNKCAP register\n", __func__); 788 - goto abort_ctrl; 789 - } 916 + pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &link_cap); 790 917 if (link_cap & PCI_EXP_LNKCAP_DLLLARC) { 791 918 ctrl_dbg(ctrl, "Link Active Reporting supported\n"); 792 919 ctrl->link_active_reporting = 1; 793 920 } 794 921 795 922 /* Clear all remaining event bits in Slot Status register */ 796 - if (pciehp_writew(ctrl, PCI_EXP_SLTSTA, 0x1f)) 797 - goto abort_ctrl; 923 + pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, 924 + PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD | 925 + PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_PDC | 926 + PCI_EXP_SLTSTA_CC); 798 927 799 928 /* Disable software notification */ 800 929 pcie_disable_notification(ctrl); 801 930 802 - ctrl_info(ctrl, "HPC vendor_id %x device_id %x ss_vid 
%x ss_did %x\n", 803 - pdev->vendor, pdev->device, pdev->subsystem_vendor, 804 - pdev->subsystem_device); 931 + ctrl_info(ctrl, "Slot #%d AttnBtn%c AttnInd%c PwrInd%c PwrCtrl%c MRL%c Interlock%c NoCompl%c LLActRep%c\n", 932 + (slot_cap & PCI_EXP_SLTCAP_PSN) >> 19, 933 + FLAG(slot_cap, PCI_EXP_SLTCAP_ABP), 934 + FLAG(slot_cap, PCI_EXP_SLTCAP_AIP), 935 + FLAG(slot_cap, PCI_EXP_SLTCAP_PIP), 936 + FLAG(slot_cap, PCI_EXP_SLTCAP_PCP), 937 + FLAG(slot_cap, PCI_EXP_SLTCAP_MRLSP), 938 + FLAG(slot_cap, PCI_EXP_SLTCAP_EIP), 939 + FLAG(slot_cap, PCI_EXP_SLTCAP_NCCS), 940 + FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC)); 805 941 806 942 if (pcie_init_slot(ctrl)) 807 943 goto abort_ctrl;
+15 -8
drivers/pci/hotplug/pciehp_pci.c
···
 	struct pci_dev *dev;
 	struct pci_dev *bridge = p_slot->ctrl->pcie->port;
 	struct pci_bus *parent = bridge->subordinate;
-	int num;
+	int num, ret = 0;
 	struct controller *ctrl = p_slot->ctrl;
+
+	pci_lock_rescan_remove();
 
 	dev = pci_get_slot(parent, PCI_DEVFN(0, 0));
 	if (dev) {
···
 			 "at %04x:%02x:00, cannot hot-add\n", pci_name(dev),
 			 pci_domain_nr(parent), parent->number);
 		pci_dev_put(dev);
-		return -EINVAL;
+		ret = -EINVAL;
+		goto out;
 	}
 
 	num = pci_scan_slot(parent, PCI_DEVFN(0, 0));
 	if (num == 0) {
 		ctrl_err(ctrl, "No new device found\n");
-		return -ENODEV;
+		ret = -ENODEV;
+		goto out;
 	}
 
 	list_for_each_entry(dev, &parent->devices, bus_list)
···
 
 	pci_bus_add_devices(parent);
 
-	return 0;
+ out:
+	pci_unlock_rescan_remove();
+	return ret;
 }
 
 int pciehp_unconfigure_device(struct slot *p_slot)
 {
-	int ret, rc = 0;
+	int rc = 0;
 	u8 bctl = 0;
 	u8 presence = 0;
 	struct pci_dev *dev, *temp;
···
 
 	ctrl_dbg(ctrl, "%s: domain:bus:dev = %04x:%02x:00\n",
 		 __func__, pci_domain_nr(parent), parent->number);
-	ret = pciehp_get_adapter_status(p_slot, &presence);
-	if (ret)
-		presence = 0;
+	pciehp_get_adapter_status(p_slot, &presence);
+
+	pci_lock_rescan_remove();
 
 	/*
 	 * Stopping an SR-IOV PF device removes all the associated VFs,
···
 		pci_dev_put(dev);
 	}
 
+	pci_unlock_rescan_remove();
 	return rc;
 }
+14 -5
drivers/pci/hotplug/rpadlpar_core.c
···
 {
 	struct pci_bus *bus;
 	struct slot *slot;
+	int ret = 0;
+
+	pci_lock_rescan_remove();
 
 	bus = pcibios_find_pci_bus(dn);
-	if (!bus)
-		return -EINVAL;
+	if (!bus) {
+		ret = -EINVAL;
+		goto out;
+	}
 
 	pr_debug("PCI: Removing PCI slot below EADS bridge %s\n",
 		 bus->self ? pci_name(bus->self) : "<!PHB!>");
···
 			printk(KERN_ERR
 			       "%s: unable to remove hotplug slot %s\n",
 			       __func__, drc_name);
-			return -EIO;
+			ret = -EIO;
+			goto out;
 		}
 	}
···
 	if (pcibios_unmap_io_space(bus)) {
 		printk(KERN_ERR "%s: failed to unmap bus range\n",
 		       __func__);
-		return -ERANGE;
+		ret = -ERANGE;
+		goto out;
 	}
 
 	/* Remove the EADS bridge device itself */
···
 	pr_debug("PCI: Now removing bridge device %s\n", pci_name(bus->self));
 	pci_stop_and_remove_bus_device(bus->self);
 
-	return 0;
+ out:
+	pci_unlock_rescan_remove();
+	return ret;
 }
 
 /**
+4
drivers/pci/hotplug/rpaphp_core.c
···
 		return retval;
 
 	if (state == PRESENT) {
+		pci_lock_rescan_remove();
 		pcibios_add_pci_devices(slot->bus);
+		pci_unlock_rescan_remove();
 		slot->state = CONFIGURED;
 	} else if (state == EMPTY) {
 		slot->state = EMPTY;
···
 	if (slot->state == NOT_CONFIGURED)
 		return -EINVAL;
 
+	pci_lock_rescan_remove();
 	pcibios_remove_pci_devices(slot->bus);
+	pci_unlock_rescan_remove();
 	vm_unmap_aliases();
 
 	slot->state = NOT_CONFIGURED;
+3 -1
drivers/pci/hotplug/s390_pci_hpc.c
···
 		goto out_deconfigure;
 
 	pci_scan_slot(slot->zdev->bus, ZPCI_DEVFN);
+	pci_lock_rescan_remove();
 	pci_bus_add_devices(slot->zdev->bus);
+	pci_unlock_rescan_remove();
 
 	return rc;
 
···
 		return -EIO;
 
 	if (slot->zdev->pdev)
-		pci_stop_and_remove_bus_device(slot->zdev->pdev);
+		pci_stop_and_remove_bus_device_locked(slot->zdev->pdev);
 
 	rc = zpci_disable_device(slot->zdev);
 	if (rc)
+5
drivers/pci/hotplug/sgi_hotplug.c
···
 		acpi_scan_lock_release();
 	}
 
+	pci_lock_rescan_remove();
+
 	/* Call the driver for the new device */
 	pci_bus_add_devices(slot->pci_bus);
 	/* Call the drivers for the new devices subordinate to PPB */
 	if (new_ppb)
 		pci_bus_add_devices(new_bus);
 
+	pci_unlock_rescan_remove();
 	mutex_unlock(&sn_hotplug_mutex);
 
 	if (rc == 0)
···
 		acpi_scan_lock_release();
 	}
 
+	pci_lock_rescan_remove();
 	/* Free the SN resources assigned to the Linux device.*/
 	list_for_each_entry_safe(dev, temp, &slot->pci_bus->devices, bus_list) {
 		if (PCI_SLOT(dev->devfn) != slot->device_num + 1)
···
 		pci_stop_and_remove_bus_device(dev);
 		pci_dev_put(dev);
 	}
+	pci_unlock_rescan_remove();
 
 	/* Remove the SSDT for the slot from the ACPI namespace */
 	if (SN_ACPI_BASE_SUPPORT() && ssdt_id) {
+14 -4
drivers/pci/hotplug/shpchp_pci.c
···
 	struct controller *ctrl = p_slot->ctrl;
 	struct pci_dev *bridge = ctrl->pci_dev;
 	struct pci_bus *parent = bridge->subordinate;
-	int num;
+	int num, ret = 0;
+
+	pci_lock_rescan_remove();
 
 	dev = pci_get_slot(parent, PCI_DEVFN(p_slot->device, 0));
 	if (dev) {
···
 			 "at %04x:%02x:%02x, cannot hot-add\n", pci_name(dev),
 			 pci_domain_nr(parent), p_slot->bus, p_slot->device);
 		pci_dev_put(dev);
-		return -EINVAL;
+		ret = -EINVAL;
+		goto out;
 	}
 
 	num = pci_scan_slot(parent, PCI_DEVFN(p_slot->device, 0));
 	if (num == 0) {
 		ctrl_err(ctrl, "No new device found\n");
-		return -ENODEV;
+		ret = -ENODEV;
+		goto out;
 	}
 
 	list_for_each_entry(dev, &parent->devices, bus_list) {
···
 
 	pci_bus_add_devices(parent);
 
-	return 0;
+ out:
+	pci_unlock_rescan_remove();
+	return ret;
 }
 
 int shpchp_unconfigure_device(struct slot *p_slot)
···
 
 	ctrl_dbg(ctrl, "%s: domain:bus:dev = %04x:%02x:%02x\n",
 		 __func__, pci_domain_nr(parent), p_slot->bus, p_slot->device);
+
+	pci_lock_rescan_remove();
 
 	list_for_each_entry_safe(dev, temp, &parent->devices, bus_list) {
 		if (PCI_SLOT(dev->devfn) != p_slot->device)
···
 		pci_stop_and_remove_bus_device(dev);
 		pci_dev_put(dev);
 	}
+
+	pci_unlock_rescan_remove();
 	return rc;
 }
+5 -1
drivers/pci/ioapic.c
···
 	.remove = ioapic_remove,
 };
 
-module_pci_driver(ioapic_driver);
+static int __init ioapic_init(void)
+{
+	return pci_register_driver(&ioapic_driver);
+}
+module_init(ioapic_init);
 
 MODULE_LICENSE("GPL");
+2
drivers/pci/iov.c
···
 	virtfn->dev.parent = dev->dev.parent;
 	virtfn->physfn = pci_dev_get(dev);
 	virtfn->is_virtfn = 1;
+	virtfn->multifunction = 0;
 
 	for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
 		res = dev->resource + PCI_IOV_RESOURCES + i;
···
 
 found:
 	pci_write_config_word(dev, pos + PCI_SRIOV_CTRL, ctrl);
+	pci_write_config_word(dev, pos + PCI_SRIOV_NUM_VF, 0);
 	pci_read_config_word(dev, pos + PCI_SRIOV_VF_OFFSET, &offset);
 	pci_read_config_word(dev, pos + PCI_SRIOV_VF_STRIDE, &stride);
 	if (!offset || (total > 1 && !stride))
+230 -124
drivers/pci/msi.c
···
 		return default_teardown_msi_irqs(dev);
 }
 
-void default_restore_msi_irqs(struct pci_dev *dev, int irq)
+static void default_restore_msi_irq(struct pci_dev *dev, int irq)
 {
 	struct msi_desc *entry;
 
···
 		write_msi_msg(irq, &entry->msg);
 }
 
-void __weak arch_restore_msi_irqs(struct pci_dev *dev, int irq)
+void __weak arch_restore_msi_irqs(struct pci_dev *dev)
 {
-	return default_restore_msi_irqs(dev, irq);
+	return default_restore_msi_irqs(dev);
 }
 
 static void msi_set_enable(struct pci_dev *dev, int enable)
···
 	msi_set_mask_bit(data, 0);
 }
 
+void default_restore_msi_irqs(struct pci_dev *dev)
+{
+	struct msi_desc *entry;
+
+	list_for_each_entry(entry, &dev->msi_list, list) {
+		default_restore_msi_irq(dev, entry->irq);
+	}
+}
+
 void __read_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
 {
 	BUG_ON(entry->dev->current_state != PCI_D0);
···
 static void free_msi_irqs(struct pci_dev *dev)
 {
 	struct msi_desc *entry, *tmp;
+	struct attribute **msi_attrs;
+	struct device_attribute *dev_attr;
+	int count = 0;
 
 	list_for_each_entry(entry, &dev->msi_list, list) {
 		int i, nvec;
···
 		list_del(&entry->list);
 		kfree(entry);
 	}
+
+	if (dev->msi_irq_groups) {
+		sysfs_remove_groups(&dev->dev.kobj, dev->msi_irq_groups);
+		msi_attrs = dev->msi_irq_groups[0]->attrs;
+		list_for_each_entry(entry, &dev->msi_list, list) {
+			dev_attr = container_of(msi_attrs[count],
+						struct device_attribute, attr);
+			kfree(dev_attr->attr.name);
+			kfree(dev_attr);
+			++count;
+		}
+		kfree(msi_attrs);
+		kfree(dev->msi_irq_groups[0]);
+		kfree(dev->msi_irq_groups);
+		dev->msi_irq_groups = NULL;
+	}
 }
 
 static struct msi_desc *alloc_msi_entry(struct pci_dev *dev)
···
 
 	pci_intx_for_msi(dev, 0);
 	msi_set_enable(dev, 0);
-	arch_restore_msi_irqs(dev, dev->irq);
+	arch_restore_msi_irqs(dev);
 
 	pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control);
 	msi_mask_irq(entry, msi_capable_mask(control), entry->masked);
···
 	control |= PCI_MSIX_FLAGS_ENABLE | PCI_MSIX_FLAGS_MASKALL;
 	pci_write_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, control);
 
+	arch_restore_msi_irqs(dev);
 	list_for_each_entry(entry, &dev->msi_list, list) {
-		arch_restore_msi_irqs(dev, entry->irq);
 		msix_mask_irq(entry, entry->masked);
 	}
 
···
 }
 EXPORT_SYMBOL_GPL(pci_restore_msi_state);
 
-
-#define to_msi_attr(obj) container_of(obj, struct msi_attribute, attr)
-#define to_msi_desc(obj) container_of(obj, struct msi_desc, kobj)
-
-struct msi_attribute {
-	struct attribute	attr;
-	ssize_t (*show)(struct msi_desc *entry, struct msi_attribute *attr,
-			char *buf);
-	ssize_t (*store)(struct msi_desc *entry, struct msi_attribute *attr,
-			 const char *buf, size_t count);
-};
-
-static ssize_t show_msi_mode(struct msi_desc *entry, struct msi_attribute *atr,
+static ssize_t msi_mode_show(struct device *dev, struct device_attribute *attr,
 			     char *buf)
 {
-	return sprintf(buf, "%s\n", entry->msi_attrib.is_msix ? "msix" : "msi");
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct msi_desc *entry;
+	unsigned long irq;
+	int retval;
+
+	retval = kstrtoul(attr->attr.name, 10, &irq);
+	if (retval)
+		return retval;
+
+	list_for_each_entry(entry, &pdev->msi_list, list) {
+		if (entry->irq == irq) {
+			return sprintf(buf, "%s\n",
+				       entry->msi_attrib.is_msix ? "msix" : "msi");
+		}
+	}
+	return -ENODEV;
 }
-
-static ssize_t msi_irq_attr_show(struct kobject *kobj,
-				 struct attribute *attr, char *buf)
-{
-	struct msi_attribute *attribute = to_msi_attr(attr);
-	struct msi_desc *entry = to_msi_desc(kobj);
-
-	if (!attribute->show)
-		return -EIO;
-
-	return attribute->show(entry, attribute, buf);
-}
-
-static const struct sysfs_ops msi_irq_sysfs_ops = {
-	.show = msi_irq_attr_show,
-};
-
-static struct msi_attribute mode_attribute =
-	__ATTR(mode, S_IRUGO, show_msi_mode, NULL);
-
-
-static struct attribute *msi_irq_default_attrs[] = {
-	&mode_attribute.attr,
-	NULL
-};
-
-static void msi_kobj_release(struct kobject *kobj)
-{
-	struct msi_desc *entry = to_msi_desc(kobj);
-
-	pci_dev_put(entry->dev);
-}
-
-static struct kobj_type msi_irq_ktype = {
-	.release = msi_kobj_release,
-	.sysfs_ops = &msi_irq_sysfs_ops,
-	.default_attrs = msi_irq_default_attrs,
-};
 
 static int populate_msi_sysfs(struct pci_dev *pdev)
 {
+	struct attribute **msi_attrs;
+	struct attribute *msi_attr;
+	struct device_attribute *msi_dev_attr;
+	struct attribute_group *msi_irq_group;
+	const struct attribute_group **msi_irq_groups;
 	struct msi_desc *entry;
-	struct kobject *kobj;
-	int ret;
+	int ret = -ENOMEM;
+	int num_msi = 0;
 	int count = 0;
 
-	pdev->msi_kset = kset_create_and_add("msi_irqs", NULL, &pdev->dev.kobj);
-	if (!pdev->msi_kset)
-		return -ENOMEM;
-
+	/* Determine how many msi entries we have */
 	list_for_each_entry(entry, &pdev->msi_list, list) {
-		kobj = &entry->kobj;
-		kobj->kset = pdev->msi_kset;
-		pci_dev_get(pdev);
-		ret = kobject_init_and_add(kobj, &msi_irq_ktype, NULL,
-					   "%u", entry->irq);
-		if (ret)
-			goto out_unroll;
-
-		count++;
+		++num_msi;
 	}
+	if (!num_msi)
+		return 0;
+
+	/* Dynamically create the MSI attributes for the PCI device */
+	msi_attrs = kzalloc(sizeof(void *) * (num_msi + 1), GFP_KERNEL);
+	if (!msi_attrs)
+		return -ENOMEM;
+	list_for_each_entry(entry, &pdev->msi_list, list) {
+		char *name = kmalloc(20, GFP_KERNEL);
+		msi_dev_attr = kzalloc(sizeof(*msi_dev_attr), GFP_KERNEL);
+		if (!msi_dev_attr)
+			goto error_attrs;
+		sprintf(name, "%d", entry->irq);
+		sysfs_attr_init(&msi_dev_attr->attr);
+		msi_dev_attr->attr.name = name;
+		msi_dev_attr->attr.mode = S_IRUGO;
+		msi_dev_attr->show = msi_mode_show;
+		msi_attrs[count] = &msi_dev_attr->attr;
+		++count;
+	}
+
+	msi_irq_group = kzalloc(sizeof(*msi_irq_group), GFP_KERNEL);
+	if (!msi_irq_group)
+		goto error_attrs;
+	msi_irq_group->name = "msi_irqs";
+	msi_irq_group->attrs = msi_attrs;
+
+	msi_irq_groups = kzalloc(sizeof(void *) * 2, GFP_KERNEL);
+	if (!msi_irq_groups)
+		goto error_irq_group;
+	msi_irq_groups[0] = msi_irq_group;
+
+	ret = sysfs_create_groups(&pdev->dev.kobj, msi_irq_groups);
+	if (ret)
+		goto error_irq_groups;
+	pdev->msi_irq_groups = msi_irq_groups;
 
 	return 0;
 
-out_unroll:
-	list_for_each_entry(entry, &pdev->msi_list, list) {
-		if (!count)
-			break;
-		kobject_del(&entry->kobj);
-		kobject_put(&entry->kobj);
-		count--;
+error_irq_groups:
+	kfree(msi_irq_groups);
+error_irq_group:
+	kfree(msi_irq_group);
+error_attrs:
+	count = 0;
+	msi_attr = msi_attrs[count];
+	while (msi_attr) {
+		msi_dev_attr = container_of(msi_attr, struct device_attribute, attr);
+		kfree(msi_attr->name);
+		kfree(msi_dev_attr);
+		++count;
+		msi_attr = msi_attrs[count];
 	}
 	return ret;
 }
···
 
 	ret = arch_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX);
 	if (ret)
-		goto error;
+		goto out_avail;
 
 	/*
 	 * Some devices require MSI-X to be enabled before we can touch the
···
 	msix_program_entries(dev, entries);
 
 	ret = populate_msi_sysfs(dev);
-	if (ret) {
-		ret = 0;
-		goto error;
-	}
+	if (ret)
+		goto out_free;
 
 	/* Set MSI-X enabled bits and unmask the function */
 	pci_intx_for_msi(dev, 0);
···
 
 	return 0;
 
-error:
+out_avail:
 	if (ret < 0) {
 		/*
 		 * If we had some success, report the number of irqs
···
 		ret = avail;
 	}
 
+out_free:
 	free_msi_irqs(dev);
 
 	return ret;
···
 }
 
 /**
+ * pci_msi_vec_count - Return the number of MSI vectors a device can send
+ * @dev: device to report about
+ *
+ * This function returns the number of MSI vectors a device requested via
+ * Multiple Message Capable register. It returns a negative errno if the
+ * device is not capable sending MSI interrupts. Otherwise, the call succeeds
+ * and returns a power of two, up to a maximum of 2^5 (32), according to the
+ * MSI specification.
+ **/
+int pci_msi_vec_count(struct pci_dev *dev)
+{
+	int ret;
+	u16 msgctl;
+
+	if (!dev->msi_cap)
+		return -EINVAL;
+
+	pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl);
+	ret = 1 << ((msgctl & PCI_MSI_FLAGS_QMASK) >> 1);
+
+	return ret;
+}
+EXPORT_SYMBOL(pci_msi_vec_count);
+
+/**
 * pci_enable_msi_block - configure device's MSI capability structure
 * @dev: device to configure
 * @nvec: number of interrupts to configure
···
 * updates the @dev's irq member to the lowest new interrupt number; the
 * other interrupt numbers allocated to this device are consecutive.
 */
-int pci_enable_msi_block(struct pci_dev *dev, unsigned int nvec)
+int pci_enable_msi_block(struct pci_dev *dev, int nvec)
 {
 	int status, maxvec;
-	u16 msgctl;
 
-	if (!dev->msi_cap || dev->current_state != PCI_D0)
+	if (dev->current_state != PCI_D0)
 		return -EINVAL;
 
-	pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl);
-	maxvec = 1 << ((msgctl & PCI_MSI_FLAGS_QMASK) >> 1);
+	maxvec = pci_msi_vec_count(dev);
+	if (maxvec < 0)
+		return maxvec;
 	if (nvec > maxvec)
 		return maxvec;
 
···
 	return status;
 }
 EXPORT_SYMBOL(pci_enable_msi_block);
-
-int pci_enable_msi_block_auto(struct pci_dev *dev, unsigned int *maxvec)
-{
-	int ret, nvec;
-	u16 msgctl;
-
-	if (!dev->msi_cap || dev->current_state != PCI_D0)
-		return -EINVAL;
-
-	pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl);
-	ret = 1 << ((msgctl & PCI_MSI_FLAGS_QMASK) >> 1);
-
-	if (maxvec)
-		*maxvec = ret;
-
-	do {
-		nvec = ret;
-		ret = pci_enable_msi_block(dev, nvec);
-	} while (ret > 0);
-
-	if (ret < 0)
-		return ret;
-	return nvec;
-}
-EXPORT_SYMBOL(pci_enable_msi_block_auto);
 
 void pci_msi_shutdown(struct pci_dev *dev)
 {
···
 
 	pci_msi_shutdown(dev);
 	free_msi_irqs(dev);
-	kset_unregister(dev->msi_kset);
-	dev->msi_kset = NULL;
 }
 EXPORT_SYMBOL(pci_disable_msi);
 
 /**
- * pci_msix_table_size - return the number of device's MSI-X table entries
+ * pci_msix_vec_count - return the number of device's MSI-X table entries
 * @dev: pointer to the pci_dev data structure of MSI-X device function
- */
-int pci_msix_table_size(struct pci_dev *dev)
+
+ * This function returns the number of device's MSI-X table entries and
+ * therefore the number of MSI-X vectors device is capable of sending.
+ * It returns a negative errno if the device is not capable of sending MSI-X
+ * interrupts.
+ **/
+int pci_msix_vec_count(struct pci_dev *dev)
 {
 	u16 control;
 
 	if (!dev->msix_cap)
-		return 0;
+		return -EINVAL;
 
 	pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &control);
 	return msix_table_size(control);
 }
+EXPORT_SYMBOL(pci_msix_vec_count);
 
 /**
 * pci_enable_msix - configure device's MSI-X capability structure
···
 	if (status)
 		return status;
 
-	nr_entries = pci_msix_table_size(dev);
+	nr_entries = pci_msix_vec_count(dev);
+	if (nr_entries < 0)
+		return nr_entries;
 	if (nvec > nr_entries)
 		return nr_entries;
 
···
 
 	pci_msix_shutdown(dev);
 	free_msi_irqs(dev);
-	kset_unregister(dev->msi_kset);
-	dev->msi_kset = NULL;
 }
 EXPORT_SYMBOL(pci_disable_msix);
 
···
 	if (dev->msix_cap)
 		msix_set_enable(dev, 0);
 }
+
+/**
+ * pci_enable_msi_range - configure device's MSI capability structure
+ * @dev: device to configure
+ * @minvec: minimal number of interrupts to configure
+ * @maxvec: maximum number of interrupts to configure
+ *
+ * This function tries to allocate a maximum possible number of interrupts in a
+ * range between @minvec and @maxvec. It returns a negative errno if an error
+ * occurs. If it succeeds, it returns the actual number of interrupts allocated
+ * and updates the @dev's irq member to the lowest new interrupt number;
+ * the other interrupt numbers allocated to this device are consecutive.
+ **/
+int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
+{
+	int nvec = maxvec;
+	int rc;
+
+	if (maxvec < minvec)
+		return -ERANGE;
+
+	do {
+		rc = pci_enable_msi_block(dev, nvec);
+		if (rc < 0) {
+			return rc;
+		} else if (rc > 0) {
+			if (rc < minvec)
+				return -ENOSPC;
+			nvec = rc;
+		}
+	} while (rc);
+
+	return nvec;
+}
+EXPORT_SYMBOL(pci_enable_msi_range);
+
+/**
+ * pci_enable_msix_range - configure device's MSI-X capability structure
+ * @dev: pointer to the pci_dev data structure of MSI-X device function
+ * @entries: pointer to an array of MSI-X entries
+ * @minvec: minimum number of MSI-X irqs requested
+ * @maxvec: maximum number of MSI-X irqs requested
+ *
+ * Setup the MSI-X capability structure of device function with a maximum
+ * possible number of interrupts in the range between @minvec and @maxvec
+ * upon its software driver call to request for MSI-X mode enabled on its
+ * hardware device function. It returns a negative errno if an error occurs.
+ * If it succeeds, it returns the actual number of interrupts allocated and
+ * indicates the successful configuration of MSI-X capability structure
+ * with new allocated MSI-X interrupts.
+ **/
+int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
+			  int minvec, int maxvec)
+{
+	int nvec = maxvec;
+	int rc;
+
+	if (maxvec < minvec)
+		return -ERANGE;
+
+	do {
+		rc = pci_enable_msix(dev, entries, nvec);
+		if (rc < 0) {
+			return rc;
+		} else if (rc > 0) {
+			if (rc < minvec)
+				return -ENOSPC;
+			nvec = rc;
+		}
+	} while (rc);
+
+	return nvec;
+}
+EXPORT_SYMBOL(pci_enable_msix_range);
+1 -1
drivers/pci/pci-acpi.c
··· 361 361 362 362 static bool pci_acpi_bus_match(struct device *dev) 363 363 { 364 - return dev->bus == &pci_bus_type; 364 + return dev_is_pci(dev); 365 365 } 366 366 367 367 static struct acpi_bus_type acpi_pci_bus = {
+30 -37
drivers/pci/pci-label.c
··· 34 34 35 35 #define DEVICE_LABEL_DSM 0x07 36 36 37 - #ifndef CONFIG_DMI 38 - 39 - static inline int 40 - pci_create_smbiosname_file(struct pci_dev *pdev) 41 - { 42 - return -1; 43 - } 44 - 45 - static inline void 46 - pci_remove_smbiosname_file(struct pci_dev *pdev) 47 - { 48 - } 49 - 50 - #else 51 - 37 + #ifdef CONFIG_DMI 52 38 enum smbios_attr_enum { 53 39 SMBIOS_ATTR_NONE = 0, 54 40 SMBIOS_ATTR_LABEL_SHOW, ··· 142 156 { 143 157 sysfs_remove_group(&pdev->dev.kobj, &smbios_attr_group); 144 158 } 159 + #else 160 + static inline int 161 + pci_create_smbiosname_file(struct pci_dev *pdev) 162 + { 163 + return -1; 164 + } 145 165 166 + static inline void 167 + pci_remove_smbiosname_file(struct pci_dev *pdev) 168 + { 169 + } 146 170 #endif 147 171 148 - #ifndef CONFIG_ACPI 149 - 150 - static inline int 151 - pci_create_acpi_index_label_files(struct pci_dev *pdev) 152 - { 153 - return -1; 154 - } 155 - 156 - static inline int 157 - pci_remove_acpi_index_label_files(struct pci_dev *pdev) 158 - { 159 - return -1; 160 - } 161 - 162 - static inline bool 163 - device_has_dsm(struct device *dev) 164 - { 165 - return false; 166 - } 167 - 168 - #else 169 - 172 + #ifdef CONFIG_ACPI 170 173 static const char device_label_dsm_uuid[] = { 171 174 0xD0, 0x37, 0xC9, 0xE5, 0x53, 0x35, 0x7A, 0x4D, 172 175 0x91, 0x17, 0xEA, 0x4D, 0x19, 0xC3, 0x43, 0x4D ··· 338 363 { 339 364 sysfs_remove_group(&pdev->dev.kobj, &acpi_attr_group); 340 365 return 0; 366 + } 367 + #else 368 + static inline int 369 + pci_create_acpi_index_label_files(struct pci_dev *pdev) 370 + { 371 + return -1; 372 + } 373 + 374 + static inline int 375 + pci_remove_acpi_index_label_files(struct pci_dev *pdev) 376 + { 377 + return -1; 378 + } 379 + 380 + static inline bool 381 + device_has_dsm(struct device *dev) 382 + { 383 + return false; 341 384 } 342 385 #endif 343 386
+7 -12
drivers/pci/pci-sysfs.c
··· 297 297 } 298 298 static DEVICE_ATTR_RW(msi_bus); 299 299 300 - static DEFINE_MUTEX(pci_remove_rescan_mutex); 301 300 static ssize_t bus_rescan_store(struct bus_type *bus, const char *buf, 302 301 size_t count) 303 302 { ··· 307 308 return -EINVAL; 308 309 309 310 if (val) { 310 - mutex_lock(&pci_remove_rescan_mutex); 311 + pci_lock_rescan_remove(); 311 312 while ((b = pci_find_next_bus(b)) != NULL) 312 313 pci_rescan_bus(b); 313 - mutex_unlock(&pci_remove_rescan_mutex); 314 + pci_unlock_rescan_remove(); 314 315 } 315 316 return count; 316 317 } ··· 341 342 return -EINVAL; 342 343 343 344 if (val) { 344 - mutex_lock(&pci_remove_rescan_mutex); 345 + pci_lock_rescan_remove(); 345 346 pci_rescan_bus(pdev->bus); 346 - mutex_unlock(&pci_remove_rescan_mutex); 347 + pci_unlock_rescan_remove(); 347 348 } 348 349 return count; 349 350 } ··· 353 354 354 355 static void remove_callback(struct device *dev) 355 356 { 356 - struct pci_dev *pdev = to_pci_dev(dev); 357 - 358 - mutex_lock(&pci_remove_rescan_mutex); 359 - pci_stop_and_remove_bus_device(pdev); 360 - mutex_unlock(&pci_remove_rescan_mutex); 357 + pci_stop_and_remove_bus_device_locked(to_pci_dev(dev)); 361 358 } 362 359 363 360 static ssize_t ··· 390 395 return -EINVAL; 391 396 392 397 if (val) { 393 - mutex_lock(&pci_remove_rescan_mutex); 398 + pci_lock_rescan_remove(); 394 399 if (!pci_is_root_bus(bus) && list_empty(&bus->devices)) 395 400 pci_rescan_bus_bridge_resize(bus->self); 396 401 else 397 402 pci_rescan_bus(bus); 398 - mutex_unlock(&pci_remove_rescan_mutex); 403 + pci_unlock_rescan_remove(); 399 404 } 400 405 return count; 401 406 }
+252 -293
drivers/pci/pci.c
··· 431 431 } 432 432 433 433 /** 434 + * pci_wait_for_pending - wait for @mask bit(s) to clear in status word @pos 435 + * @dev: the PCI device to operate on 436 + * @pos: config space offset of status word 437 + * @mask: mask of bit(s) to care about in status word 438 + * 439 + * Return 1 when mask bit(s) in status word clear, 0 otherwise. 440 + */ 441 + int pci_wait_for_pending(struct pci_dev *dev, int pos, u16 mask) 442 + { 443 + int i; 444 + 445 + /* Wait for Transaction Pending bit clean */ 446 + for (i = 0; i < 4; i++) { 447 + u16 status; 448 + if (i) 449 + msleep((1 << (i - 1)) * 100); 450 + 451 + pci_read_config_word(dev, pos, &status); 452 + if (!(status & mask)) 453 + return 1; 454 + } 455 + 456 + return 0; 457 + } 458 + 459 + /** 434 460 * pci_restore_bars - restore a devices BAR values (e.g. after wake-up) 435 461 * @dev: PCI device to have its BARs restored 436 462 * ··· 683 657 } 684 658 685 659 /** 660 + * pci_wakeup - Wake up a PCI device 661 + * @pci_dev: Device to handle. 662 + * @ign: ignored parameter 663 + */ 664 + static int pci_wakeup(struct pci_dev *pci_dev, void *ign) 665 + { 666 + pci_wakeup_event(pci_dev); 667 + pm_request_resume(&pci_dev->dev); 668 + return 0; 669 + } 670 + 671 + /** 672 + * pci_wakeup_bus - Walk given bus and wake up devices on it 673 + * @bus: Top bus of the subtree to walk. 674 + */ 675 + static void pci_wakeup_bus(struct pci_bus *bus) 676 + { 677 + if (bus) 678 + pci_walk_bus(bus, pci_wakeup, NULL); 679 + } 680 + 681 + /** 686 682 * __pci_start_power_transition - Start power transition of a PCI device 687 683 * @dev: PCI device to handle. 688 684 * @state: State to put the device into. 
··· 883 835 #define PCI_EXP_SAVE_REGS 7 884 836 885 837 886 - static struct pci_cap_saved_state *pci_find_saved_cap( 887 - struct pci_dev *pci_dev, char cap) 838 + static struct pci_cap_saved_state *_pci_find_saved_cap(struct pci_dev *pci_dev, 839 + u16 cap, bool extended) 888 840 { 889 841 struct pci_cap_saved_state *tmp; 890 842 891 843 hlist_for_each_entry(tmp, &pci_dev->saved_cap_space, next) { 892 - if (tmp->cap.cap_nr == cap) 844 + if (tmp->cap.cap_extended == extended && tmp->cap.cap_nr == cap) 893 845 return tmp; 894 846 } 895 847 return NULL; 848 + } 849 + 850 + struct pci_cap_saved_state *pci_find_saved_cap(struct pci_dev *dev, char cap) 851 + { 852 + return _pci_find_saved_cap(dev, cap, false); 853 + } 854 + 855 + struct pci_cap_saved_state *pci_find_saved_ext_cap(struct pci_dev *dev, u16 cap) 856 + { 857 + return _pci_find_saved_cap(dev, cap, true); 896 858 } 897 859 898 860 static int pci_save_pcie_state(struct pci_dev *dev) ··· 1006 948 return i; 1007 949 if ((i = pci_save_pcix_state(dev)) != 0) 1008 950 return i; 951 + if ((i = pci_save_vc_state(dev)) != 0) 952 + return i; 1009 953 return 0; 1010 954 } 1011 955 ··· 1070 1010 /* PCI Express register must be restored first */ 1071 1011 pci_restore_pcie_state(dev); 1072 1012 pci_restore_ats_state(dev); 1013 + pci_restore_vc_state(dev); 1073 1014 1074 1015 pci_restore_config_space(dev); 1075 1016 ··· 1132 1071 * @dev: PCI device that we're dealing with 1133 1072 * @state: Saved state returned from pci_store_saved_state() 1134 1073 */ 1135 - int pci_load_saved_state(struct pci_dev *dev, struct pci_saved_state *state) 1074 + static int pci_load_saved_state(struct pci_dev *dev, 1075 + struct pci_saved_state *state) 1136 1076 { 1137 1077 struct pci_cap_saved_data *cap; 1138 1078 ··· 1149 1087 while (cap->size) { 1150 1088 struct pci_cap_saved_state *tmp; 1151 1089 1152 - tmp = pci_find_saved_cap(dev, cap->cap_nr); 1090 + tmp = _pci_find_saved_cap(dev, cap->cap_nr, cap->cap_extended); 1153 1091 if (!tmp || 
tmp->cap.size != cap->size) 1154 1092 return -EINVAL; 1155 1093 ··· 1161 1099 dev->state_saved = true; 1162 1100 return 0; 1163 1101 } 1164 - EXPORT_SYMBOL_GPL(pci_load_saved_state); 1165 1102 1166 1103 /** 1167 1104 * pci_load_and_free_saved_state - Reload the save state pointed to by state, ··· 1592 1531 pci_walk_bus(bus, pci_pme_wakeup, (void *)true); 1593 1532 } 1594 1533 1595 - /** 1596 - * pci_wakeup - Wake up a PCI device 1597 - * @pci_dev: Device to handle. 1598 - * @ign: ignored parameter 1599 - */ 1600 - static int pci_wakeup(struct pci_dev *pci_dev, void *ign) 1601 - { 1602 - pci_wakeup_event(pci_dev); 1603 - pm_request_resume(&pci_dev->dev); 1604 - return 0; 1605 - } 1606 - 1607 - /** 1608 - * pci_wakeup_bus - Walk given bus and wake up devices on it 1609 - * @bus: Top bus of the subtree to walk. 1610 - */ 1611 - void pci_wakeup_bus(struct pci_bus *bus) 1612 - { 1613 - if (bus) 1614 - pci_walk_bus(bus, pci_wakeup, NULL); 1615 - } 1616 1534 1617 1535 /** 1618 1536 * pci_pme_capable - check the capability of PCI device to generate PME# ··· 1805 1765 * If the platform can't manage @dev, return the deepest state from which it 1806 1766 * can generate wake events, based on any available PME info. 
1807 1767 */ 1808 - pci_power_t pci_target_state(struct pci_dev *dev) 1768 + static pci_power_t pci_target_state(struct pci_dev *dev) 1809 1769 { 1810 1770 pci_power_t target_state = PCI_D3hot; 1811 1771 ··· 2061 2021 } 2062 2022 2063 2023 /** 2064 - * pci_add_cap_save_buffer - allocate buffer for saving given capability registers 2024 + * _pci_add_cap_save_buffer - allocate buffer for saving given 2025 + * capability registers 2065 2026 * @dev: the PCI device 2066 2027 * @cap: the capability to allocate the buffer for 2028 + * @extended: Standard or Extended capability ID 2067 2029 * @size: requested size of the buffer 2068 2030 */ 2069 - static int pci_add_cap_save_buffer( 2070 - struct pci_dev *dev, char cap, unsigned int size) 2031 + static int _pci_add_cap_save_buffer(struct pci_dev *dev, u16 cap, 2032 + bool extended, unsigned int size) 2071 2033 { 2072 2034 int pos; 2073 2035 struct pci_cap_saved_state *save_state; 2074 2036 2075 - pos = pci_find_capability(dev, cap); 2037 + if (extended) 2038 + pos = pci_find_ext_capability(dev, cap); 2039 + else 2040 + pos = pci_find_capability(dev, cap); 2041 + 2076 2042 if (pos <= 0) 2077 2043 return 0; 2078 2044 ··· 2087 2041 return -ENOMEM; 2088 2042 2089 2043 save_state->cap.cap_nr = cap; 2044 + save_state->cap.cap_extended = extended; 2090 2045 save_state->cap.size = size; 2091 2046 pci_add_saved_cap(dev, save_state); 2092 2047 2093 2048 return 0; 2049 + } 2050 + 2051 + int pci_add_cap_save_buffer(struct pci_dev *dev, char cap, unsigned int size) 2052 + { 2053 + return _pci_add_cap_save_buffer(dev, cap, false, size); 2054 + } 2055 + 2056 + int pci_add_ext_cap_save_buffer(struct pci_dev *dev, u16 cap, unsigned int size) 2057 + { 2058 + return _pci_add_cap_save_buffer(dev, cap, true, size); 2094 2059 } 2095 2060 2096 2061 /** ··· 2122 2065 if (error) 2123 2066 dev_err(&dev->dev, 2124 2067 "unable to preallocate PCI-X save buffer\n"); 2068 + 2069 + pci_allocate_vc_save_buffers(dev); 2125 2070 } 2126 2071 2127 2072 void 
pci_free_cap_save_buffers(struct pci_dev *dev) ··· 2168 2109 bridge->ari_enabled = 0; 2169 2110 } 2170 2111 } 2171 - 2172 - /** 2173 - * pci_enable_ido - enable ID-based Ordering on a device 2174 - * @dev: the PCI device 2175 - * @type: which types of IDO to enable 2176 - * 2177 - * Enable ID-based ordering on @dev. @type can contain the bits 2178 - * %PCI_EXP_IDO_REQUEST and/or %PCI_EXP_IDO_COMPLETION to indicate 2179 - * which types of transactions are allowed to be re-ordered. 2180 - */ 2181 - void pci_enable_ido(struct pci_dev *dev, unsigned long type) 2182 - { 2183 - u16 ctrl = 0; 2184 - 2185 - if (type & PCI_EXP_IDO_REQUEST) 2186 - ctrl |= PCI_EXP_DEVCTL2_IDO_REQ_EN; 2187 - if (type & PCI_EXP_IDO_COMPLETION) 2188 - ctrl |= PCI_EXP_DEVCTL2_IDO_CMP_EN; 2189 - if (ctrl) 2190 - pcie_capability_set_word(dev, PCI_EXP_DEVCTL2, ctrl); 2191 - } 2192 - EXPORT_SYMBOL(pci_enable_ido); 2193 - 2194 - /** 2195 - * pci_disable_ido - disable ID-based ordering on a device 2196 - * @dev: the PCI device 2197 - * @type: which types of IDO to disable 2198 - */ 2199 - void pci_disable_ido(struct pci_dev *dev, unsigned long type) 2200 - { 2201 - u16 ctrl = 0; 2202 - 2203 - if (type & PCI_EXP_IDO_REQUEST) 2204 - ctrl |= PCI_EXP_DEVCTL2_IDO_REQ_EN; 2205 - if (type & PCI_EXP_IDO_COMPLETION) 2206 - ctrl |= PCI_EXP_DEVCTL2_IDO_CMP_EN; 2207 - if (ctrl) 2208 - pcie_capability_clear_word(dev, PCI_EXP_DEVCTL2, ctrl); 2209 - } 2210 - EXPORT_SYMBOL(pci_disable_ido); 2211 - 2212 - /** 2213 - * pci_enable_obff - enable optimized buffer flush/fill 2214 - * @dev: PCI device 2215 - * @type: type of signaling to use 2216 - * 2217 - * Try to enable @type OBFF signaling on @dev. It will try using WAKE# 2218 - * signaling if possible, falling back to message signaling only if 2219 - * WAKE# isn't supported. @type should indicate whether the PCIe link 2220 - * be brought out of L0s or L1 to send the message. It should be either 2221 - * %PCI_EXP_OBFF_SIGNAL_ALWAYS or %PCI_OBFF_SIGNAL_L0. 
2222 - * 2223 - * If your device can benefit from receiving all messages, even at the 2224 - * power cost of bringing the link back up from a low power state, use 2225 - * %PCI_EXP_OBFF_SIGNAL_ALWAYS. Otherwise, use %PCI_OBFF_SIGNAL_L0 (the 2226 - * preferred type). 2227 - * 2228 - * RETURNS: 2229 - * Zero on success, appropriate error number on failure. 2230 - */ 2231 - int pci_enable_obff(struct pci_dev *dev, enum pci_obff_signal_type type) 2232 - { 2233 - u32 cap; 2234 - u16 ctrl; 2235 - int ret; 2236 - 2237 - pcie_capability_read_dword(dev, PCI_EXP_DEVCAP2, &cap); 2238 - if (!(cap & PCI_EXP_DEVCAP2_OBFF_MASK)) 2239 - return -ENOTSUPP; /* no OBFF support at all */ 2240 - 2241 - /* Make sure the topology supports OBFF as well */ 2242 - if (dev->bus->self) { 2243 - ret = pci_enable_obff(dev->bus->self, type); 2244 - if (ret) 2245 - return ret; 2246 - } 2247 - 2248 - pcie_capability_read_word(dev, PCI_EXP_DEVCTL2, &ctrl); 2249 - if (cap & PCI_EXP_DEVCAP2_OBFF_WAKE) 2250 - ctrl |= PCI_EXP_DEVCTL2_OBFF_WAKE_EN; 2251 - else { 2252 - switch (type) { 2253 - case PCI_EXP_OBFF_SIGNAL_L0: 2254 - if (!(ctrl & PCI_EXP_DEVCTL2_OBFF_WAKE_EN)) 2255 - ctrl |= PCI_EXP_DEVCTL2_OBFF_MSGA_EN; 2256 - break; 2257 - case PCI_EXP_OBFF_SIGNAL_ALWAYS: 2258 - ctrl &= ~PCI_EXP_DEVCTL2_OBFF_WAKE_EN; 2259 - ctrl |= PCI_EXP_DEVCTL2_OBFF_MSGB_EN; 2260 - break; 2261 - default: 2262 - WARN(1, "bad OBFF signal type\n"); 2263 - return -ENOTSUPP; 2264 - } 2265 - } 2266 - pcie_capability_write_word(dev, PCI_EXP_DEVCTL2, ctrl); 2267 - 2268 - return 0; 2269 - } 2270 - EXPORT_SYMBOL(pci_enable_obff); 2271 - 2272 - /** 2273 - * pci_disable_obff - disable optimized buffer flush/fill 2274 - * @dev: PCI device 2275 - * 2276 - * Disable OBFF on @dev. 
2277 - */ 2278 - void pci_disable_obff(struct pci_dev *dev) 2279 - { 2280 - pcie_capability_clear_word(dev, PCI_EXP_DEVCTL2, 2281 - PCI_EXP_DEVCTL2_OBFF_WAKE_EN); 2282 - } 2283 - EXPORT_SYMBOL(pci_disable_obff); 2284 - 2285 - /** 2286 - * pci_ltr_supported - check whether a device supports LTR 2287 - * @dev: PCI device 2288 - * 2289 - * RETURNS: 2290 - * True if @dev supports latency tolerance reporting, false otherwise. 2291 - */ 2292 - static bool pci_ltr_supported(struct pci_dev *dev) 2293 - { 2294 - u32 cap; 2295 - 2296 - pcie_capability_read_dword(dev, PCI_EXP_DEVCAP2, &cap); 2297 - 2298 - return cap & PCI_EXP_DEVCAP2_LTR; 2299 - } 2300 - 2301 - /** 2302 - * pci_enable_ltr - enable latency tolerance reporting 2303 - * @dev: PCI device 2304 - * 2305 - * Enable LTR on @dev if possible, which means enabling it first on 2306 - * upstream ports. 2307 - * 2308 - * RETURNS: 2309 - * Zero on success, errno on failure. 2310 - */ 2311 - int pci_enable_ltr(struct pci_dev *dev) 2312 - { 2313 - int ret; 2314 - 2315 - /* Only primary function can enable/disable LTR */ 2316 - if (PCI_FUNC(dev->devfn) != 0) 2317 - return -EINVAL; 2318 - 2319 - if (!pci_ltr_supported(dev)) 2320 - return -ENOTSUPP; 2321 - 2322 - /* Enable upstream ports first */ 2323 - if (dev->bus->self) { 2324 - ret = pci_enable_ltr(dev->bus->self); 2325 - if (ret) 2326 - return ret; 2327 - } 2328 - 2329 - return pcie_capability_set_word(dev, PCI_EXP_DEVCTL2, 2330 - PCI_EXP_DEVCTL2_LTR_EN); 2331 - } 2332 - EXPORT_SYMBOL(pci_enable_ltr); 2333 - 2334 - /** 2335 - * pci_disable_ltr - disable latency tolerance reporting 2336 - * @dev: PCI device 2337 - */ 2338 - void pci_disable_ltr(struct pci_dev *dev) 2339 - { 2340 - /* Only primary function can enable/disable LTR */ 2341 - if (PCI_FUNC(dev->devfn) != 0) 2342 - return; 2343 - 2344 - if (!pci_ltr_supported(dev)) 2345 - return; 2346 - 2347 - pcie_capability_clear_word(dev, PCI_EXP_DEVCTL2, 2348 - PCI_EXP_DEVCTL2_LTR_EN); 2349 - } 2350 - 
EXPORT_SYMBOL(pci_disable_ltr); 2351 - 2352 - static int __pci_ltr_scale(int *val) 2353 - { 2354 - int scale = 0; 2355 - 2356 - while (*val > 1023) { 2357 - *val = (*val + 31) / 32; 2358 - scale++; 2359 - } 2360 - return scale; 2361 - } 2362 - 2363 - /** 2364 - * pci_set_ltr - set LTR latency values 2365 - * @dev: PCI device 2366 - * @snoop_lat_ns: snoop latency in nanoseconds 2367 - * @nosnoop_lat_ns: nosnoop latency in nanoseconds 2368 - * 2369 - * Figure out the scale and set the LTR values accordingly. 2370 - */ 2371 - int pci_set_ltr(struct pci_dev *dev, int snoop_lat_ns, int nosnoop_lat_ns) 2372 - { 2373 - int pos, ret, snoop_scale, nosnoop_scale; 2374 - u16 val; 2375 - 2376 - if (!pci_ltr_supported(dev)) 2377 - return -ENOTSUPP; 2378 - 2379 - snoop_scale = __pci_ltr_scale(&snoop_lat_ns); 2380 - nosnoop_scale = __pci_ltr_scale(&nosnoop_lat_ns); 2381 - 2382 - if (snoop_lat_ns > PCI_LTR_VALUE_MASK || 2383 - nosnoop_lat_ns > PCI_LTR_VALUE_MASK) 2384 - return -EINVAL; 2385 - 2386 - if ((snoop_scale > (PCI_LTR_SCALE_MASK >> PCI_LTR_SCALE_SHIFT)) || 2387 - (nosnoop_scale > (PCI_LTR_SCALE_MASK >> PCI_LTR_SCALE_SHIFT))) 2388 - return -EINVAL; 2389 - 2390 - pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_LTR); 2391 - if (!pos) 2392 - return -ENOTSUPP; 2393 - 2394 - val = (snoop_scale << PCI_LTR_SCALE_SHIFT) | snoop_lat_ns; 2395 - ret = pci_write_config_word(dev, pos + PCI_LTR_MAX_SNOOP_LAT, val); 2396 - if (ret != 4) 2397 - return -EIO; 2398 - 2399 - val = (nosnoop_scale << PCI_LTR_SCALE_SHIFT) | nosnoop_lat_ns; 2400 - ret = pci_write_config_word(dev, pos + PCI_LTR_MAX_NOSNOOP_LAT, val); 2401 - if (ret != 4) 2402 - return -EIO; 2403 - 2404 - return 0; 2405 - } 2406 - EXPORT_SYMBOL(pci_set_ltr); 2407 2112 2408 2113 static int pci_acs_enable; 2409 2114 ··· 2961 3138 EXPORT_SYMBOL_GPL(pci_check_and_mask_intx); 2962 3139 2963 3140 /** 2964 - * pci_check_and_mask_intx - unmask INTx of no interrupt is pending 3141 + * pci_check_and_unmask_intx - unmask INTx if no 
interrupt is pending 2965 3142 * @dev: the PCI device to operate on 2966 3143 * 2967 3144 * Check if the device dev has its INTx line asserted, unmask it if not ··· 3027 3204 */ 3028 3205 int pci_wait_for_pending_transaction(struct pci_dev *dev) 3029 3206 { 3030 - int i; 3031 - u16 status; 3207 + if (!pci_is_pcie(dev)) 3208 + return 1; 3032 3209 3033 - /* Wait for Transaction Pending bit clean */ 3034 - for (i = 0; i < 4; i++) { 3035 - if (i) 3036 - msleep((1 << (i - 1)) * 100); 3037 - 3038 - pcie_capability_read_word(dev, PCI_EXP_DEVSTA, &status); 3039 - if (!(status & PCI_EXP_DEVSTA_TRPND)) 3040 - return 1; 3041 - } 3042 - 3043 - return 0; 3210 + return pci_wait_for_pending(dev, PCI_EXP_DEVSTA, PCI_EXP_DEVSTA_TRPND); 3044 3211 } 3045 3212 EXPORT_SYMBOL(pci_wait_for_pending_transaction); 3046 3213 ··· 3057 3244 3058 3245 static int pci_af_flr(struct pci_dev *dev, int probe) 3059 3246 { 3060 - int i; 3061 3247 int pos; 3062 3248 u8 cap; 3063 - u8 status; 3064 3249 3065 3250 pos = pci_find_capability(dev, PCI_CAP_ID_AF); 3066 3251 if (!pos) ··· 3072 3261 return 0; 3073 3262 3074 3263 /* Wait for Transaction Pending bit clean */ 3075 - for (i = 0; i < 4; i++) { 3076 - if (i) 3077 - msleep((1 << (i - 1)) * 100); 3078 - 3079 - pci_read_config_byte(dev, pos + PCI_AF_STATUS, &status); 3080 - if (!(status & PCI_AF_STATUS_TP)) 3081 - goto clear; 3082 - } 3264 + if (pci_wait_for_pending(dev, PCI_AF_STATUS, PCI_AF_STATUS_TP)) 3265 + goto clear; 3083 3266 3084 3267 dev_err(&dev->dev, "transaction is not cleared; " 3085 3268 "proceeding with reset anyway\n"); ··· 3250 3445 device_lock(&dev->dev); 3251 3446 } 3252 3447 3448 + /* Return 1 on successful lock, 0 on contention */ 3449 + static int pci_dev_trylock(struct pci_dev *dev) 3450 + { 3451 + if (pci_cfg_access_trylock(dev)) { 3452 + if (device_trylock(&dev->dev)) 3453 + return 1; 3454 + pci_cfg_access_unlock(dev); 3455 + } 3456 + 3457 + return 0; 3458 + } 3459 + 3253 3460 static void pci_dev_unlock(struct pci_dev *dev) 3254 
3461 { 3255 3462 device_unlock(&dev->dev); ··· 3405 3588 } 3406 3589 EXPORT_SYMBOL_GPL(pci_reset_function); 3407 3590 3591 + /** 3592 + * pci_try_reset_function - quiesce and reset a PCI device function 3593 + * @dev: PCI device to reset 3594 + * 3595 + * Same as above, except return -EAGAIN if unable to lock device. 3596 + */ 3597 + int pci_try_reset_function(struct pci_dev *dev) 3598 + { 3599 + int rc; 3600 + 3601 + rc = pci_dev_reset(dev, 1); 3602 + if (rc) 3603 + return rc; 3604 + 3605 + pci_dev_save_and_disable(dev); 3606 + 3607 + if (pci_dev_trylock(dev)) { 3608 + rc = __pci_dev_reset(dev, 0); 3609 + pci_dev_unlock(dev); 3610 + } else 3611 + rc = -EAGAIN; 3612 + 3613 + pci_dev_restore(dev); 3614 + 3615 + return rc; 3616 + } 3617 + EXPORT_SYMBOL_GPL(pci_try_reset_function); 3618 + 3408 3619 /* Lock devices from the top of the tree down */ 3409 3620 static void pci_bus_lock(struct pci_bus *bus) 3410 3621 { ··· 3455 3610 pci_bus_unlock(dev->subordinate); 3456 3611 pci_dev_unlock(dev); 3457 3612 } 3613 + } 3614 + 3615 + /* Return 1 on successful lock, 0 on contention */ 3616 + static int pci_bus_trylock(struct pci_bus *bus) 3617 + { 3618 + struct pci_dev *dev; 3619 + 3620 + list_for_each_entry(dev, &bus->devices, bus_list) { 3621 + if (!pci_dev_trylock(dev)) 3622 + goto unlock; 3623 + if (dev->subordinate) { 3624 + if (!pci_bus_trylock(dev->subordinate)) { 3625 + pci_dev_unlock(dev); 3626 + goto unlock; 3627 + } 3628 + } 3629 + } 3630 + return 1; 3631 + 3632 + unlock: 3633 + list_for_each_entry_continue_reverse(dev, &bus->devices, bus_list) { 3634 + if (dev->subordinate) 3635 + pci_bus_unlock(dev->subordinate); 3636 + pci_dev_unlock(dev); 3637 + } 3638 + return 0; 3458 3639 } 3459 3640 3460 3641 /* Lock devices from the top of the tree down */ ··· 3509 3638 pci_bus_unlock(dev->subordinate); 3510 3639 pci_dev_unlock(dev); 3511 3640 } 3641 + } 3642 + 3643 + /* Return 1 on successful lock, 0 on contention */ 3644 + static int pci_slot_trylock(struct pci_slot *slot) 
3645 + { 3646 + struct pci_dev *dev; 3647 + 3648 + list_for_each_entry(dev, &slot->bus->devices, bus_list) { 3649 + if (!dev->slot || dev->slot != slot) 3650 + continue; 3651 + if (!pci_dev_trylock(dev)) 3652 + goto unlock; 3653 + if (dev->subordinate) { 3654 + if (!pci_bus_trylock(dev->subordinate)) { 3655 + pci_dev_unlock(dev); 3656 + goto unlock; 3657 + } 3658 + } 3659 + } 3660 + return 1; 3661 + 3662 + unlock: 3663 + list_for_each_entry_continue_reverse(dev, 3664 + &slot->bus->devices, bus_list) { 3665 + if (!dev->slot || dev->slot != slot) 3666 + continue; 3667 + if (dev->subordinate) 3668 + pci_bus_unlock(dev->subordinate); 3669 + pci_dev_unlock(dev); 3670 + } 3671 + return 0; 3512 3672 } 3513 3673 3514 3674 /* Save and disable devices from the top of the tree down */ ··· 3665 3763 } 3666 3764 EXPORT_SYMBOL_GPL(pci_reset_slot); 3667 3765 3766 + /** 3767 + * pci_try_reset_slot - Try to reset a PCI slot 3768 + * @slot: PCI slot to reset 3769 + * 3770 + * Same as above except return -EAGAIN if the slot cannot be locked 3771 + */ 3772 + int pci_try_reset_slot(struct pci_slot *slot) 3773 + { 3774 + int rc; 3775 + 3776 + rc = pci_slot_reset(slot, 1); 3777 + if (rc) 3778 + return rc; 3779 + 3780 + pci_slot_save_and_disable(slot); 3781 + 3782 + if (pci_slot_trylock(slot)) { 3783 + might_sleep(); 3784 + rc = pci_reset_hotplug_slot(slot->hotplug, 0); 3785 + pci_slot_unlock(slot); 3786 + } else 3787 + rc = -EAGAIN; 3788 + 3789 + pci_slot_restore(slot); 3790 + 3791 + return rc; 3792 + } 3793 + EXPORT_SYMBOL_GPL(pci_try_reset_slot); 3794 + 3668 3795 static int pci_bus_reset(struct pci_bus *bus, int probe) 3669 3796 { 3670 3797 if (!bus->self) ··· 3751 3820 return rc; 3752 3821 } 3753 3822 EXPORT_SYMBOL_GPL(pci_reset_bus); 3823 + 3824 + /** 3825 + * pci_try_reset_bus - Try to reset a PCI bus 3826 + * @bus: top level PCI bus to reset 3827 + * 3828 + * Same as above except return -EAGAIN if the bus cannot be locked 3829 + */ 3830 + int pci_try_reset_bus(struct pci_bus *bus) 
3831 + { 3832 + int rc; 3833 + 3834 + rc = pci_bus_reset(bus, 1); 3835 + if (rc) 3836 + return rc; 3837 + 3838 + pci_bus_save_and_disable(bus); 3839 + 3840 + if (pci_bus_trylock(bus)) { 3841 + might_sleep(); 3842 + pci_reset_bridge_secondary_bus(bus->self); 3843 + pci_bus_unlock(bus); 3844 + } else 3845 + rc = -EAGAIN; 3846 + 3847 + pci_bus_restore(bus); 3848 + 3849 + return rc; 3850 + } 3851 + EXPORT_SYMBOL_GPL(pci_try_reset_bus); 3754 3852 3755 3853 /** 3756 3854 * pcix_get_max_mmrbc - get PCI-X maximum designed memory read byte count ··· 4410 4450 EXPORT_SYMBOL(pci_pme_capable); 4411 4451 EXPORT_SYMBOL(pci_pme_active); 4412 4452 EXPORT_SYMBOL(pci_wake_from_d3); 4413 - EXPORT_SYMBOL(pci_target_state); 4414 4453 EXPORT_SYMBOL(pci_prepare_to_sleep); 4415 4454 EXPORT_SYMBOL(pci_back_from_sleep); 4416 4455 EXPORT_SYMBOL_GPL(pci_set_pcie_reset_state);
-2
drivers/pci/pci.h
··· 6 6 #define PCI_CFG_SPACE_SIZE 256 7 7 #define PCI_CFG_SPACE_EXP_SIZE 4096 8 8 9 - extern const unsigned char pcix_bus_speed[]; 10 9 extern const unsigned char pcie_link_speed[]; 11 10 12 11 /* Functions internal to the PCI core code */ ··· 67 68 void pci_disable_enabled_device(struct pci_dev *dev); 68 69 int pci_finish_runtime_suspend(struct pci_dev *dev); 69 70 int __pci_pme_wakeup(struct pci_dev *dev, void *ign); 70 - void pci_wakeup_bus(struct pci_bus *bus); 71 71 void pci_config_pm_runtime_get(struct pci_dev *dev); 72 72 void pci_config_pm_runtime_put(struct pci_dev *dev); 73 73 void pci_pm_init(struct pci_dev *dev);
+33 -23
drivers/pci/pcie/aer/aerdrv_acpi.c
··· 23 23 static inline int hest_match_pci(struct acpi_hest_aer_common *p, 24 24 struct pci_dev *pci) 25 25 { 26 - return (0 == pci_domain_nr(pci->bus) && 27 - p->bus == pci->bus->number && 28 - p->device == PCI_SLOT(pci->devfn) && 29 - p->function == PCI_FUNC(pci->devfn)); 26 + return ACPI_HEST_SEGMENT(p->bus) == pci_domain_nr(pci->bus) && 27 + ACPI_HEST_BUS(p->bus) == pci->bus->number && 28 + p->device == PCI_SLOT(pci->devfn) && 29 + p->function == PCI_FUNC(pci->devfn); 30 30 } 31 31 32 32 static inline bool hest_match_type(struct acpi_hest_header *hest_hdr, ··· 50 50 int firmware_first; 51 51 }; 52 52 53 + static int hest_source_is_pcie_aer(struct acpi_hest_header *hest_hdr) 54 + { 55 + if (hest_hdr->type == ACPI_HEST_TYPE_AER_ROOT_PORT || 56 + hest_hdr->type == ACPI_HEST_TYPE_AER_ENDPOINT || 57 + hest_hdr->type == ACPI_HEST_TYPE_AER_BRIDGE) 58 + return 1; 59 + return 0; 60 + } 61 + 53 62 static int aer_hest_parse(struct acpi_hest_header *hest_hdr, void *data) 54 63 { 55 64 struct aer_hest_parse_info *info = data; 56 65 struct acpi_hest_aer_common *p; 57 66 int ff; 58 67 68 + if (!hest_source_is_pcie_aer(hest_hdr)) 69 + return 0; 70 + 59 71 p = (struct acpi_hest_aer_common *)(hest_hdr + 1); 60 72 ff = !!(p->flags & ACPI_HEST_FIRMWARE_FIRST); 73 + 74 + /* 75 + * If no specific device is supplied, determine whether 76 + * FIRMWARE_FIRST is set for *any* PCIe device. 
77 + */ 78 + if (!info->pci_dev) { 79 + info->firmware_first |= ff; 80 + return 0; 81 + } 82 + 83 + /* Otherwise, check the specific device */ 61 84 if (p->flags & ACPI_HEST_GLOBAL) { 62 85 if (hest_match_type(hest_hdr, info->pci_dev)) 63 86 info->firmware_first = ff; ··· 120 97 121 98 static bool aer_firmware_first; 122 99 123 - static int aer_hest_parse_aff(struct acpi_hest_header *hest_hdr, void *data) 124 - { 125 - struct acpi_hest_aer_common *p; 126 - 127 - if (aer_firmware_first) 128 - return 0; 129 - 130 - switch (hest_hdr->type) { 131 - case ACPI_HEST_TYPE_AER_ROOT_PORT: 132 - case ACPI_HEST_TYPE_AER_ENDPOINT: 133 - case ACPI_HEST_TYPE_AER_BRIDGE: 134 - p = (struct acpi_hest_aer_common *)(hest_hdr + 1); 135 - aer_firmware_first = !!(p->flags & ACPI_HEST_FIRMWARE_FIRST); 136 - default: 137 - return 0; 138 - } 139 - } 140 - 141 100 /** 142 101 * aer_acpi_firmware_first - Check if APEI should control AER. 143 102 */ 144 103 bool aer_acpi_firmware_first(void) 145 104 { 146 105 static bool parsed = false; 106 + struct aer_hest_parse_info info = { 107 + .pci_dev = NULL, /* Check all PCIe devices */ 108 + .firmware_first = 0, 109 + }; 147 110 148 111 if (!parsed) { 149 - apei_hest_parse(aer_hest_parse_aff, NULL); 112 + apei_hest_parse(aer_hest_parse, &info); 113 + aer_firmware_first = info.firmware_first; 150 114 parsed = true; 151 115 } 152 116 return aer_firmware_first;
+50 -49
drivers/pci/pcie/aer/aerdrv_errprint.c
··· 124 124 "Transmitter ID" 125 125 }; 126 126 127 + static void __print_tlp_header(struct pci_dev *dev, 128 + struct aer_header_log_regs *t) 129 + { 130 + unsigned char *tlp = (unsigned char *)t; 131 + 132 + dev_err(&dev->dev, " TLP Header:" 133 + " %02x%02x%02x%02x %02x%02x%02x%02x" 134 + " %02x%02x%02x%02x %02x%02x%02x%02x\n", 135 + *(tlp + 3), *(tlp + 2), *(tlp + 1), *tlp, 136 + *(tlp + 7), *(tlp + 6), *(tlp + 5), *(tlp + 4), 137 + *(tlp + 11), *(tlp + 10), *(tlp + 9), 138 + *(tlp + 8), *(tlp + 15), *(tlp + 14), 139 + *(tlp + 13), *(tlp + 12)); 140 + } 141 + 127 142 static void __aer_print_error(struct pci_dev *dev, 128 143 struct aer_err_info *info) 129 144 { ··· 168 153 169 154 void aer_print_error(struct pci_dev *dev, struct aer_err_info *info) 170 155 { 156 + int layer, agent; 171 157 int id = ((dev->bus->number << 8) | dev->devfn); 172 158 173 - if (info->status == 0) { 159 + if (!info->status) { 174 160 dev_err(&dev->dev, 175 161 "PCIe Bus Error: severity=%s, type=Unaccessible, " 176 162 "id=%04x(Unregistered Agent ID)\n", 177 163 aer_error_severity_string[info->severity], id); 178 - } else { 179 - int layer, agent; 180 - 181 - layer = AER_GET_LAYER_ERROR(info->severity, info->status); 182 - agent = AER_GET_AGENT(info->severity, info->status); 183 - 184 - dev_err(&dev->dev, 185 - "PCIe Bus Error: severity=%s, type=%s, id=%04x(%s)\n", 186 - aer_error_severity_string[info->severity], 187 - aer_error_layer[layer], id, aer_agent_string[agent]); 188 - 189 - dev_err(&dev->dev, 190 - " device [%04x:%04x] error status/mask=%08x/%08x\n", 191 - dev->vendor, dev->device, 192 - info->status, info->mask); 193 - 194 - __aer_print_error(dev, info); 195 - 196 - if (info->tlp_header_valid) { 197 - unsigned char *tlp = (unsigned char *) &info->tlp; 198 - dev_err(&dev->dev, " TLP Header:" 199 - " %02x%02x%02x%02x %02x%02x%02x%02x" 200 - " %02x%02x%02x%02x %02x%02x%02x%02x\n", 201 - *(tlp + 3), *(tlp + 2), *(tlp + 1), *tlp, 202 - *(tlp + 7), *(tlp + 6), *(tlp + 5), *(tlp + 
4), 203 - *(tlp + 11), *(tlp + 10), *(tlp + 9), 204 - *(tlp + 8), *(tlp + 15), *(tlp + 14), 205 - *(tlp + 13), *(tlp + 12)); 206 - } 164 + goto out; 207 165 } 208 166 167 + layer = AER_GET_LAYER_ERROR(info->severity, info->status); 168 + agent = AER_GET_AGENT(info->severity, info->status); 169 + 170 + dev_err(&dev->dev, 171 + "PCIe Bus Error: severity=%s, type=%s, id=%04x(%s)\n", 172 + aer_error_severity_string[info->severity], 173 + aer_error_layer[layer], id, aer_agent_string[agent]); 174 + 175 + dev_err(&dev->dev, 176 + " device [%04x:%04x] error status/mask=%08x/%08x\n", 177 + dev->vendor, dev->device, 178 + info->status, info->mask); 179 + 180 + __aer_print_error(dev, info); 181 + 182 + if (info->tlp_header_valid) 183 + __print_tlp_header(dev, &info->tlp); 184 + 185 + out: 209 186 if (info->id && info->error_dev_num > 1 && info->id == id) 210 - dev_err(&dev->dev, 211 - " Error of this Agent(%04x) is reported first\n", 212 - id); 187 + dev_err(&dev->dev, " Error of this Agent(%04x) is reported first\n", id); 188 + 213 189 trace_aer_event(dev_name(&dev->dev), (info->status & ~info->mask), 214 190 info->severity); 215 191 } ··· 234 228 const char **status_strs; 235 229 236 230 aer_severity = cper_severity_to_aer(cper_severity); 231 + 237 232 if (aer_severity == AER_CORRECTABLE) { 238 233 status = aer->cor_status; 239 234 mask = aer->cor_mask; ··· 247 240 status_strs_size = ARRAY_SIZE(aer_uncorrectable_error_string); 248 241 tlp_header_valid = status & AER_LOG_TLP_MASKS; 249 242 } 243 + 250 244 layer = AER_GET_LAYER_ERROR(aer_severity, status); 251 245 agent = AER_GET_AGENT(aer_severity, status); 252 - dev_err(&dev->dev, "aer_status: 0x%08x, aer_mask: 0x%08x\n", 253 - status, mask); 246 + 247 + dev_err(&dev->dev, "aer_status: 0x%08x, aer_mask: 0x%08x\n", status, mask); 254 248 cper_print_bits("", status, status_strs, status_strs_size); 255 249 dev_err(&dev->dev, "aer_layer=%s, aer_agent=%s\n", 256 - aer_error_layer[layer], aer_agent_string[agent]); 250 + 
aer_error_layer[layer], aer_agent_string[agent]); 251 + 257 252 if (aer_severity != AER_CORRECTABLE) 258 253 dev_err(&dev->dev, "aer_uncor_severity: 0x%08x\n", 259 - aer->uncor_severity); 260 - if (tlp_header_valid) { 261 - const unsigned char *tlp; 262 - tlp = (const unsigned char *)&aer->header_log; 263 - dev_err(&dev->dev, "aer_tlp_header:" 264 - " %02x%02x%02x%02x %02x%02x%02x%02x" 265 - " %02x%02x%02x%02x %02x%02x%02x%02x\n", 266 - *(tlp + 3), *(tlp + 2), *(tlp + 1), *tlp, 267 - *(tlp + 7), *(tlp + 6), *(tlp + 5), *(tlp + 4), 268 - *(tlp + 11), *(tlp + 10), *(tlp + 9), 269 - *(tlp + 8), *(tlp + 15), *(tlp + 14), 270 - *(tlp + 13), *(tlp + 12)); 271 - } 254 + aer->uncor_severity); 255 + 256 + if (tlp_header_valid) 257 + __print_tlp_header(dev, &aer->header_log); 258 + 272 259 trace_aer_event(dev_name(&dev->dev), (status & ~mask), 273 260 aer_severity); 274 261 }
-12
drivers/pci/pcie/aspm.c
··· 984 984 } 985 985 } 986 986 987 - /** 988 - * pcie_aspm_enabled - is PCIe ASPM enabled? 989 - * 990 - * Returns true if ASPM has not been disabled by the command-line option 991 - * pcie_aspm=off. 992 - **/ 993 - int pcie_aspm_enabled(void) 994 - { 995 - return !aspm_disabled; 996 - } 997 - EXPORT_SYMBOL(pcie_aspm_enabled); 998 - 999 987 bool pcie_aspm_support_enabled(void) 1000 988 { 1001 989 return aspm_support_enabled;
+18 -18
drivers/pci/pcie/portdrv_core.c
··· 79 79 u16 reg16; 80 80 u32 reg32; 81 81 82 - nr_entries = pci_msix_table_size(dev); 83 - if (!nr_entries) 84 - return -EINVAL; 82 + nr_entries = pci_msix_vec_count(dev); 83 + if (nr_entries < 0) 84 + return nr_entries; 85 + BUG_ON(!nr_entries); 85 86 if (nr_entries > PCIE_PORT_MAX_MSIX_ENTRIES) 86 87 nr_entries = PCIE_PORT_MAX_MSIX_ENTRIES; 87 88 ··· 345 344 device_enable_async_suspend(device); 346 345 347 346 retval = device_register(device); 348 - if (retval) 349 - kfree(pcie); 350 - else 351 - get_device(device); 352 - return retval; 347 + if (retval) { 348 + put_device(device); 349 + return retval; 350 + } 351 + 352 + return 0; 353 353 } 354 354 355 355 /** ··· 456 454 457 455 static int remove_iter(struct device *dev, void *data) 458 456 { 459 - if (dev->bus == &pcie_port_bus_type) { 460 - put_device(dev); 457 + if (dev->bus == &pcie_port_bus_type) 461 458 device_unregister(dev); 462 - } 463 459 return 0; 464 460 } 465 461 ··· 498 498 499 499 pciedev = to_pcie_device(dev); 500 500 status = driver->probe(pciedev); 501 - if (!status) { 502 - dev_printk(KERN_DEBUG, dev, "service driver %s loaded\n", 503 - driver->name); 504 - get_device(dev); 505 - } 506 - return status; 501 + if (status) 502 + return status; 503 + 504 + dev_printk(KERN_DEBUG, dev, "service driver %s loaded\n", driver->name); 505 + get_device(dev); 506 + return 0; 507 507 } 508 508 509 509 /** ··· 554 554 if (pcie_ports_disabled) 555 555 return -ENODEV; 556 556 557 - new->driver.name = (char *)new->name; 557 + new->driver.name = new->name; 558 558 new->driver.bus = &pcie_port_bus_type; 559 559 new->driver.probe = pcie_port_probe_service; 560 560 new->driver.remove = pcie_port_remove_service;
+102 -74
drivers/pci/probe.c
··· 16 16 #define CARDBUS_LATENCY_TIMER 176 /* secondary latency timer */ 17 17 #define CARDBUS_RESERVE_BUSNR 3 18 18 19 - struct resource busn_resource = { 19 + static struct resource busn_resource = { 20 20 .name = "PCI busn", 21 21 .start = 0, 22 22 .end = 255, ··· 269 269 region.end = l + sz; 270 270 } 271 271 272 - pcibios_bus_to_resource(dev, res, &region); 273 - pcibios_resource_to_bus(dev, &inverted_region, res); 272 + pcibios_bus_to_resource(dev->bus, res, &region); 273 + pcibios_resource_to_bus(dev->bus, &inverted_region, res); 274 274 275 275 /* 276 276 * If "A" is a BAR value (a bus address), "bus_to_resource(A)" is ··· 364 364 res->flags = (io_base_lo & PCI_IO_RANGE_TYPE_MASK) | IORESOURCE_IO; 365 365 region.start = base; 366 366 region.end = limit + io_granularity - 1; 367 - pcibios_bus_to_resource(dev, res, &region); 367 + pcibios_bus_to_resource(dev->bus, res, &region); 368 368 dev_printk(KERN_DEBUG, &dev->dev, " bridge window %pR\n", res); 369 369 } 370 370 } ··· 386 386 res->flags = (mem_base_lo & PCI_MEMORY_RANGE_TYPE_MASK) | IORESOURCE_MEM; 387 387 region.start = base; 388 388 region.end = limit + 0xfffff; 389 - pcibios_bus_to_resource(dev, res, &region); 389 + pcibios_bus_to_resource(dev->bus, res, &region); 390 390 dev_printk(KERN_DEBUG, &dev->dev, " bridge window %pR\n", res); 391 391 } 392 392 } ··· 436 436 res->flags |= IORESOURCE_MEM_64; 437 437 region.start = base; 438 438 region.end = limit + 0xfffff; 439 - pcibios_bus_to_resource(dev, res, &region); 439 + pcibios_bus_to_resource(dev->bus, res, &region); 440 440 dev_printk(KERN_DEBUG, &dev->dev, " bridge window %pR\n", res); 441 441 } 442 442 } ··· 518 518 return bridge; 519 519 } 520 520 521 - const unsigned char pcix_bus_speed[] = { 521 + static const unsigned char pcix_bus_speed[] = { 522 522 PCI_SPEED_UNKNOWN, /* 0 */ 523 523 PCI_SPEED_66MHz_PCIX, /* 1 */ 524 524 PCI_SPEED_100MHz_PCIX, /* 2 */ ··· 999 999 pdev->is_hotplug_bridge = 1; 1000 1000 } 1001 1001 1002 + 1003 + /** 1004 + * 
pci_cfg_space_size - get the configuration space size of the PCI device. 1005 + * @dev: PCI device 1006 + * 1007 + * Regular PCI devices have 256 bytes, but PCI-X 2 and PCI Express devices 1008 + * have 4096 bytes. Even if the device is capable, that doesn't mean we can 1009 + * access it. Maybe we don't have a way to generate extended config space 1010 + * accesses, or the device is behind a reverse Express bridge. So we try 1011 + * reading the dword at 0x100 which must either be 0 or a valid extended 1012 + * capability header. 1013 + */ 1014 + static int pci_cfg_space_size_ext(struct pci_dev *dev) 1015 + { 1016 + u32 status; 1017 + int pos = PCI_CFG_SPACE_SIZE; 1018 + 1019 + if (pci_read_config_dword(dev, pos, &status) != PCIBIOS_SUCCESSFUL) 1020 + goto fail; 1021 + if (status == 0xffffffff) 1022 + goto fail; 1023 + 1024 + return PCI_CFG_SPACE_EXP_SIZE; 1025 + 1026 + fail: 1027 + return PCI_CFG_SPACE_SIZE; 1028 + } 1029 + 1030 + int pci_cfg_space_size(struct pci_dev *dev) 1031 + { 1032 + int pos; 1033 + u32 status; 1034 + u16 class; 1035 + 1036 + class = dev->class >> 8; 1037 + if (class == PCI_CLASS_BRIDGE_HOST) 1038 + return pci_cfg_space_size_ext(dev); 1039 + 1040 + if (!pci_is_pcie(dev)) { 1041 + pos = pci_find_capability(dev, PCI_CAP_ID_PCIX); 1042 + if (!pos) 1043 + goto fail; 1044 + 1045 + pci_read_config_dword(dev, pos + PCI_X_STATUS, &status); 1046 + if (!(status & (PCI_X_STATUS_266MHZ | PCI_X_STATUS_533MHZ))) 1047 + goto fail; 1048 + } 1049 + 1050 + return pci_cfg_space_size_ext(dev); 1051 + 1052 + fail: 1053 + return PCI_CFG_SPACE_SIZE; 1054 + } 1055 + 1002 1056 #define LEGACY_IO_RESOURCE (IORESOURCE_IO | IORESOURCE_PCI_FIXED) 1003 1057 1004 1058 /** ··· 1138 1084 region.end = 0x1F7; 1139 1085 res = &dev->resource[0]; 1140 1086 res->flags = LEGACY_IO_RESOURCE; 1141 - pcibios_bus_to_resource(dev, res, &region); 1087 + pcibios_bus_to_resource(dev->bus, res, &region); 1142 1088 region.start = 0x3F6; 1143 1089 region.end = 0x3F6; 1144 1090 res = 
&dev->resource[1]; 1145 1091 res->flags = LEGACY_IO_RESOURCE; 1146 - pcibios_bus_to_resource(dev, res, &region); 1092 + pcibios_bus_to_resource(dev->bus, res, &region); 1147 1093 } 1148 1094 if ((progif & 4) == 0) { 1149 1095 region.start = 0x170; 1150 1096 region.end = 0x177; 1151 1097 res = &dev->resource[2]; 1152 1098 res->flags = LEGACY_IO_RESOURCE; 1153 - pcibios_bus_to_resource(dev, res, &region); 1099 + pcibios_bus_to_resource(dev->bus, res, &region); 1154 1100 region.start = 0x376; 1155 1101 region.end = 0x376; 1156 1102 res = &dev->resource[3]; 1157 1103 res->flags = LEGACY_IO_RESOURCE; 1158 - pcibios_bus_to_resource(dev, res, &region); 1104 + pcibios_bus_to_resource(dev->bus, res, &region); 1159 1105 } 1160 1106 } 1161 1107 break; ··· 1208 1154 pci_free_cap_save_buffers(dev); 1209 1155 } 1210 1156 1157 + static void pci_free_resources(struct pci_dev *dev) 1158 + { 1159 + int i; 1160 + 1161 + pci_cleanup_rom(dev); 1162 + for (i = 0; i < PCI_NUM_RESOURCES; i++) { 1163 + struct resource *res = dev->resource + i; 1164 + if (res->parent) 1165 + release_resource(res); 1166 + } 1167 + } 1168 + 1211 1169 /** 1212 1170 * pci_release_dev - free a pci device structure when all users of it are finished. 1213 1171 * @dev: device that's been disconnected ··· 1229 1163 */ 1230 1164 static void pci_release_dev(struct device *dev) 1231 1165 { 1232 - struct pci_dev *pci_dev; 1166 + struct pci_dev *pci_dev = to_pci_dev(dev); 1233 1167 1234 - pci_dev = to_pci_dev(dev); 1168 + down_write(&pci_bus_sem); 1169 + list_del(&pci_dev->bus_list); 1170 + up_write(&pci_bus_sem); 1171 + 1172 + pci_free_resources(pci_dev); 1173 + 1235 1174 pci_release_capabilities(pci_dev); 1236 1175 pci_release_of_node(pci_dev); 1237 1176 pcibios_release_device(pci_dev); 1238 1177 pci_bus_put(pci_dev->bus); 1239 1178 kfree(pci_dev); 1240 - } 1241 - 1242 - /** 1243 - * pci_cfg_space_size - get the configuration space size of the PCI device. 
1244 - * @dev: PCI device 1245 - * 1246 - * Regular PCI devices have 256 bytes, but PCI-X 2 and PCI Express devices 1247 - * have 4096 bytes. Even if the device is capable, that doesn't mean we can 1248 - * access it. Maybe we don't have a way to generate extended config space 1249 - * accesses, or the device is behind a reverse Express bridge. So we try 1250 - * reading the dword at 0x100 which must either be 0 or a valid extended 1251 - * capability header. 1252 - */ 1253 - int pci_cfg_space_size_ext(struct pci_dev *dev) 1254 - { 1255 - u32 status; 1256 - int pos = PCI_CFG_SPACE_SIZE; 1257 - 1258 - if (pci_read_config_dword(dev, pos, &status) != PCIBIOS_SUCCESSFUL) 1259 - goto fail; 1260 - if (status == 0xffffffff) 1261 - goto fail; 1262 - 1263 - return PCI_CFG_SPACE_EXP_SIZE; 1264 - 1265 - fail: 1266 - return PCI_CFG_SPACE_SIZE; 1267 - } 1268 - 1269 - int pci_cfg_space_size(struct pci_dev *dev) 1270 - { 1271 - int pos; 1272 - u32 status; 1273 - u16 class; 1274 - 1275 - class = dev->class >> 8; 1276 - if (class == PCI_CLASS_BRIDGE_HOST) 1277 - return pci_cfg_space_size_ext(dev); 1278 - 1279 - if (!pci_is_pcie(dev)) { 1280 - pos = pci_find_capability(dev, PCI_CAP_ID_PCIX); 1281 - if (!pos) 1282 - goto fail; 1283 - 1284 - pci_read_config_dword(dev, pos + PCI_X_STATUS, &status); 1285 - if (!(status & (PCI_X_STATUS_266MHZ | PCI_X_STATUS_533MHZ))) 1286 - goto fail; 1287 - } 1288 - 1289 - return pci_cfg_space_size_ext(dev); 1290 - 1291 - fail: 1292 - return PCI_CFG_SPACE_SIZE; 1293 1179 } 1294 1180 1295 1181 struct pci_dev *pci_alloc_dev(struct pci_bus *bus) ··· 1259 1241 return dev; 1260 1242 } 1261 1243 EXPORT_SYMBOL(pci_alloc_dev); 1262 - 1263 - struct pci_dev *alloc_pci_dev(void) 1264 - { 1265 - return pci_alloc_dev(NULL); 1266 - } 1267 - EXPORT_SYMBOL(alloc_pci_dev); 1268 1244 1269 1245 bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l, 1270 1246 int crs_timeout) ··· 1393 1381 dev->match_driver = false; 1394 1382 ret = device_add(&dev->dev); 
1395 1383 WARN_ON(ret < 0); 1396 - 1397 - pci_proc_attach_device(dev); 1398 1384 } 1399 1385 1400 1386 struct pci_dev *__ref pci_scan_single_device(struct pci_bus *bus, int devfn) ··· 2023 2013 EXPORT_SYMBOL(pci_scan_slot); 2024 2014 EXPORT_SYMBOL(pci_scan_bridge); 2025 2015 EXPORT_SYMBOL_GPL(pci_scan_child_bus); 2016 + 2017 + /* 2018 + * pci_rescan_bus(), pci_rescan_bus_bridge_resize() and PCI device removal 2019 + * routines should always be executed under this mutex. 2020 + */ 2021 + static DEFINE_MUTEX(pci_rescan_remove_lock); 2022 + 2023 + void pci_lock_rescan_remove(void) 2024 + { 2025 + mutex_lock(&pci_rescan_remove_lock); 2026 + } 2027 + EXPORT_SYMBOL_GPL(pci_lock_rescan_remove); 2028 + 2029 + void pci_unlock_rescan_remove(void) 2030 + { 2031 + mutex_unlock(&pci_rescan_remove_lock); 2032 + } 2033 + EXPORT_SYMBOL_GPL(pci_unlock_rescan_remove); 2026 2034 2027 2035 static int __init pci_sort_bf_cmp(const struct device *d_a, const struct device *d_b) 2028 2036 {
+1 -1
drivers/pci/quirks.c
··· 339 339 /* Convert from PCI bus to resource space */ 340 340 bus_region.start = region; 341 341 bus_region.end = region + size - 1; 342 - pcibios_bus_to_resource(dev, res, &bus_region); 342 + pcibios_bus_to_resource(dev->bus, res, &bus_region); 343 343 344 344 if (!pci_claim_resource(dev, nr)) 345 345 dev_info(&dev->dev, "quirk: %pR claimed by %s\n", res, name);
+13 -21
drivers/pci/remove.c
··· 3 3 #include <linux/pci-aspm.h> 4 4 #include "pci.h" 5 5 6 - static void pci_free_resources(struct pci_dev *dev) 7 - { 8 - int i; 9 - 10 - msi_remove_pci_irq_vectors(dev); 11 - 12 - pci_cleanup_rom(dev); 13 - for (i = 0; i < PCI_NUM_RESOURCES; i++) { 14 - struct resource *res = dev->resource + i; 15 - if (res->parent) 16 - release_resource(res); 17 - } 18 - } 19 - 20 6 static void pci_stop_dev(struct pci_dev *dev) 21 7 { 22 8 pci_pme_active(dev, false); ··· 20 34 21 35 static void pci_destroy_dev(struct pci_dev *dev) 22 36 { 37 + if (!dev->dev.kobj.parent) 38 + return; 39 + 23 40 device_del(&dev->dev); 24 41 25 - down_write(&pci_bus_sem); 26 - list_del(&dev->bus_list); 27 - up_write(&pci_bus_sem); 28 - 29 - pci_free_resources(dev); 30 42 put_device(&dev->dev); 31 43 } 32 44 ··· 98 114 } 99 115 EXPORT_SYMBOL(pci_stop_and_remove_bus_device); 100 116 117 + void pci_stop_and_remove_bus_device_locked(struct pci_dev *dev) 118 + { 119 + pci_lock_rescan_remove(); 120 + pci_stop_and_remove_bus_device(dev); 121 + pci_unlock_rescan_remove(); 122 + } 123 + EXPORT_SYMBOL_GPL(pci_stop_and_remove_bus_device_locked); 124 + 101 125 void pci_stop_root_bus(struct pci_bus *bus) 102 126 { 103 127 struct pci_dev *child, *tmp; ··· 120 128 pci_stop_bus_device(child); 121 129 122 130 /* stop the host bridge */ 123 - device_del(&host_bridge->dev); 131 + device_release_driver(&host_bridge->dev); 124 132 } 125 133 126 134 void pci_remove_root_bus(struct pci_bus *bus) ··· 139 147 host_bridge->bus = NULL; 140 148 141 149 /* remove the host bridge */ 142 - put_device(&host_bridge->dev); 150 + device_unregister(&host_bridge->dev); 143 151 }
+1 -1
drivers/pci/rom.c
··· 31 31 if (!res->flags) 32 32 return -1; 33 33 34 - pcibios_resource_to_bus(pdev, &region, res); 34 + pcibios_resource_to_bus(pdev->bus, &region, res); 35 35 pci_read_config_dword(pdev, pdev->rom_base_reg, &rom_addr); 36 36 rom_addr &= ~PCI_ROM_ADDRESS_MASK; 37 37 rom_addr |= region.start | PCI_ROM_ADDRESS_ENABLE;
+17 -15
drivers/pci/setup-bus.c
··· 475 475 &bus->busn_res); 476 476 477 477 res = bus->resource[0]; 478 - pcibios_resource_to_bus(bridge, &region, res); 478 + pcibios_resource_to_bus(bridge->bus, &region, res); 479 479 if (res->flags & IORESOURCE_IO) { 480 480 /* 481 481 * The IO resource is allocated a range twice as large as it ··· 489 489 } 490 490 491 491 res = bus->resource[1]; 492 - pcibios_resource_to_bus(bridge, &region, res); 492 + pcibios_resource_to_bus(bridge->bus, &region, res); 493 493 if (res->flags & IORESOURCE_IO) { 494 494 dev_info(&bridge->dev, " bridge window %pR\n", res); 495 495 pci_write_config_dword(bridge, PCI_CB_IO_BASE_1, ··· 499 499 } 500 500 501 501 res = bus->resource[2]; 502 - pcibios_resource_to_bus(bridge, &region, res); 502 + pcibios_resource_to_bus(bridge->bus, &region, res); 503 503 if (res->flags & IORESOURCE_MEM) { 504 504 dev_info(&bridge->dev, " bridge window %pR\n", res); 505 505 pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_0, ··· 509 509 } 510 510 511 511 res = bus->resource[3]; 512 - pcibios_resource_to_bus(bridge, &region, res); 512 + pcibios_resource_to_bus(bridge->bus, &region, res); 513 513 if (res->flags & IORESOURCE_MEM) { 514 514 dev_info(&bridge->dev, " bridge window %pR\n", res); 515 515 pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_1, ··· 538 538 struct pci_bus_region region; 539 539 unsigned long io_mask; 540 540 u8 io_base_lo, io_limit_lo; 541 - u32 l, io_upper16; 541 + u16 l; 542 + u32 io_upper16; 542 543 543 544 io_mask = PCI_IO_RANGE_MASK; 544 545 if (bridge->io_window_1k) ··· 547 546 548 547 /* Set up the top and bottom of the PCI I/O segment for this bus. 
*/ 549 548 res = bus->resource[0]; 550 - pcibios_resource_to_bus(bridge, &region, res); 549 + pcibios_resource_to_bus(bridge->bus, &region, res); 551 550 if (res->flags & IORESOURCE_IO) { 552 - pci_read_config_dword(bridge, PCI_IO_BASE, &l); 553 - l &= 0xffff0000; 551 + pci_read_config_word(bridge, PCI_IO_BASE, &l); 554 552 io_base_lo = (region.start >> 8) & io_mask; 555 553 io_limit_lo = (region.end >> 8) & io_mask; 556 - l |= ((u32) io_limit_lo << 8) | io_base_lo; 554 + l = ((u16) io_limit_lo << 8) | io_base_lo; 557 555 /* Set up upper 16 bits of I/O base/limit. */ 558 556 io_upper16 = (region.end & 0xffff0000) | (region.start >> 16); 559 557 dev_info(&bridge->dev, " bridge window %pR\n", res); ··· 564 564 /* Temporarily disable the I/O range before updating PCI_IO_BASE. */ 565 565 pci_write_config_dword(bridge, PCI_IO_BASE_UPPER16, 0x0000ffff); 566 566 /* Update lower 16 bits of I/O base/limit. */ 567 - pci_write_config_dword(bridge, PCI_IO_BASE, l); 567 + pci_write_config_word(bridge, PCI_IO_BASE, l); 568 568 /* Update upper 16 bits of I/O base/limit. */ 569 569 pci_write_config_dword(bridge, PCI_IO_BASE_UPPER16, io_upper16); 570 570 } ··· 578 578 579 579 /* Set up the top and bottom of the PCI Memory segment for this bus. */ 580 580 res = bus->resource[1]; 581 - pcibios_resource_to_bus(bridge, &region, res); 581 + pcibios_resource_to_bus(bridge->bus, &region, res); 582 582 if (res->flags & IORESOURCE_MEM) { 583 583 l = (region.start >> 16) & 0xfff0; 584 584 l |= region.end & 0xfff00000; ··· 604 604 /* Set up PREF base/limit. 
*/ 605 605 bu = lu = 0; 606 606 res = bus->resource[2]; 607 - pcibios_resource_to_bus(bridge, &region, res); 607 + pcibios_resource_to_bus(bridge->bus, &region, res); 608 608 if (res->flags & IORESOURCE_PREFETCH) { 609 609 l = (region.start >> 16) & 0xfff0; 610 610 l |= region.end & 0xfff00000; ··· 665 665 666 666 pci_read_config_word(bridge, PCI_IO_BASE, &io); 667 667 if (!io) { 668 - pci_write_config_word(bridge, PCI_IO_BASE, 0xf0f0); 668 + pci_write_config_word(bridge, PCI_IO_BASE, 0xe0f0); 669 669 pci_read_config_word(bridge, PCI_IO_BASE, &io); 670 670 pci_write_config_word(bridge, PCI_IO_BASE, 0x0); 671 671 } 672 672 if (io) 673 673 b_res[0].flags |= IORESOURCE_IO; 674 + 674 675 /* DECchip 21050 pass 2 errata: the bridge may miss an address 675 676 disconnect boundary by one PCI data phase. 676 677 Workaround: do not use prefetching on this device. */ 677 678 if (bridge->vendor == PCI_VENDOR_ID_DEC && bridge->device == 0x0001) 678 679 return; 680 + 679 681 pci_read_config_dword(bridge, PCI_PREF_MEMORY_BASE, &pmem); 680 682 if (!pmem) { 681 683 pci_write_config_dword(bridge, PCI_PREF_MEMORY_BASE, 682 - 0xfff0fff0); 684 + 0xffe0fff0); 683 685 pci_read_config_dword(bridge, PCI_PREF_MEMORY_BASE, &pmem); 684 686 pci_write_config_dword(bridge, PCI_PREF_MEMORY_BASE, 0x0); 685 687 } ··· 1424 1422 if (!r->flags) 1425 1423 continue; 1426 1424 1427 - pcibios_resource_to_bus(dev, &region, r); 1425 + pcibios_resource_to_bus(dev->bus, &region, r); 1428 1426 if (!region.start) { 1429 1427 *unassigned = true; 1430 1428 return 1; /* return early from pci_walk_bus() */
+1 -1
drivers/pci/setup-res.c
··· 52 52 if (res->flags & IORESOURCE_PCI_FIXED) 53 53 return; 54 54 55 - pcibios_resource_to_bus(dev, &region, res); 55 + pcibios_resource_to_bus(dev->bus, &region, res); 56 56 57 57 new = region.start | (res->flags & PCI_REGION_FLAG_MASK); 58 58 if (res->flags & IORESOURCE_IO)
-26
drivers/pci/slot.c
··· 320 320 EXPORT_SYMBOL_GPL(pci_create_slot); 321 321 322 322 /** 323 - * pci_renumber_slot - update %struct pci_slot -> number 324 - * @slot: &struct pci_slot to update 325 - * @slot_nr: new number for slot 326 - * 327 - * The primary purpose of this interface is to allow callers who earlier 328 - * created a placeholder slot in pci_create_slot() by passing a -1 as 329 - * slot_nr, to update their %struct pci_slot with the correct @slot_nr. 330 - */ 331 - void pci_renumber_slot(struct pci_slot *slot, int slot_nr) 332 - { 333 - struct pci_slot *tmp; 334 - 335 - down_write(&pci_bus_sem); 336 - 337 - list_for_each_entry(tmp, &slot->bus->slots, list) { 338 - WARN_ON(tmp->number == slot_nr); 339 - goto out; 340 - } 341 - 342 - slot->number = slot_nr; 343 - out: 344 - up_write(&pci_bus_sem); 345 - } 346 - EXPORT_SYMBOL_GPL(pci_renumber_slot); 347 - 348 - /** 349 323 * pci_destroy_slot - decrement refcount for physical PCI slot 350 324 * @slot: struct pci_slot to decrement 351 325 *
+434
drivers/pci/vc.c
··· 1 + /* 2 + * PCI Virtual Channel support 3 + * 4 + * Copyright (C) 2013 Red Hat, Inc. All rights reserved. 5 + * Author: Alex Williamson <alex.williamson@redhat.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #include <linux/device.h> 13 + #include <linux/kernel.h> 14 + #include <linux/module.h> 15 + #include <linux/pci.h> 16 + #include <linux/pci_regs.h> 17 + #include <linux/types.h> 18 + 19 + /** 20 + * pci_vc_save_restore_dwords - Save or restore a series of dwords 21 + * @dev: device 22 + * @pos: starting config space position 23 + * @buf: buffer to save to or restore from 24 + * @dwords: number of dwords to save/restore 25 + * @save: whether to save or restore 26 + */ 27 + static void pci_vc_save_restore_dwords(struct pci_dev *dev, int pos, 28 + u32 *buf, int dwords, bool save) 29 + { 30 + int i; 31 + 32 + for (i = 0; i < dwords; i++, buf++) { 33 + if (save) 34 + pci_read_config_dword(dev, pos + (i * 4), buf); 35 + else 36 + pci_write_config_dword(dev, pos + (i * 4), *buf); 37 + } 38 + } 39 + 40 + /** 41 + * pci_vc_load_arb_table - load and wait for VC arbitration table 42 + * @dev: device 43 + * @pos: starting position of VC capability (VC/VC9/MFVC) 44 + * 45 + * Set Load VC Arbitration Table bit requesting hardware to apply the VC 46 + * Arbitration Table (previously loaded). When the VC Arbitration Table 47 + * Status clears, hardware has latched the table into VC arbitration logic. 
48 + */ 49 + static void pci_vc_load_arb_table(struct pci_dev *dev, int pos) 50 + { 51 + u16 ctrl; 52 + 53 + pci_read_config_word(dev, pos + PCI_VC_PORT_CTRL, &ctrl); 54 + pci_write_config_word(dev, pos + PCI_VC_PORT_CTRL, 55 + ctrl | PCI_VC_PORT_CTRL_LOAD_TABLE); 56 + if (pci_wait_for_pending(dev, pos + PCI_VC_PORT_STATUS, 57 + PCI_VC_PORT_STATUS_TABLE)) 58 + return; 59 + 60 + dev_err(&dev->dev, "VC arbitration table failed to load\n"); 61 + } 62 + 63 + /** 64 + * pci_vc_load_port_arb_table - Load and wait for VC port arbitration table 65 + * @dev: device 66 + * @pos: starting position of VC capability (VC/VC9/MFVC) 67 + * @res: VC resource number, ie. VCn (0-7) 68 + * 69 + * Set Load Port Arbitration Table bit requesting hardware to apply the Port 70 + * Arbitration Table (previously loaded). When the Port Arbitration Table 71 + * Status clears, hardware has latched the table into port arbitration logic. 72 + */ 73 + static void pci_vc_load_port_arb_table(struct pci_dev *dev, int pos, int res) 74 + { 75 + int ctrl_pos, status_pos; 76 + u32 ctrl; 77 + 78 + ctrl_pos = pos + PCI_VC_RES_CTRL + (res * PCI_CAP_VC_PER_VC_SIZEOF); 79 + status_pos = pos + PCI_VC_RES_STATUS + (res * PCI_CAP_VC_PER_VC_SIZEOF); 80 + 81 + pci_read_config_dword(dev, ctrl_pos, &ctrl); 82 + pci_write_config_dword(dev, ctrl_pos, 83 + ctrl | PCI_VC_RES_CTRL_LOAD_TABLE); 84 + 85 + if (pci_wait_for_pending(dev, status_pos, PCI_VC_RES_STATUS_TABLE)) 86 + return; 87 + 88 + dev_err(&dev->dev, "VC%d port arbitration table failed to load\n", res); 89 + } 90 + 91 + /** 92 + * pci_vc_enable - Enable virtual channel 93 + * @dev: device 94 + * @pos: starting position of VC capability (VC/VC9/MFVC) 95 + * @res: VC res number, ie. VCn (0-7) 96 + * 97 + * A VC is enabled by setting the enable bit in matching resource control 98 + * registers on both sides of a link. We therefore need to find the opposite 99 + * end of the link. To keep this simple we enable from the downstream device. 
100 + * RC devices do not have an upstream device, nor does it seem that VC9 do 101 + * (spec is unclear). Once we find the upstream device, match the VC ID to 102 + * get the correct resource, disable and enable on both ends. 103 + */ 104 + static void pci_vc_enable(struct pci_dev *dev, int pos, int res) 105 + { 106 + int ctrl_pos, status_pos, id, pos2, evcc, i, ctrl_pos2, status_pos2; 107 + u32 ctrl, header, cap1, ctrl2; 108 + struct pci_dev *link = NULL; 109 + 110 + /* Enable VCs from the downstream device */ 111 + if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT || 112 + pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM) 113 + return; 114 + 115 + ctrl_pos = pos + PCI_VC_RES_CTRL + (res * PCI_CAP_VC_PER_VC_SIZEOF); 116 + status_pos = pos + PCI_VC_RES_STATUS + (res * PCI_CAP_VC_PER_VC_SIZEOF); 117 + 118 + pci_read_config_dword(dev, ctrl_pos, &ctrl); 119 + id = ctrl & PCI_VC_RES_CTRL_ID; 120 + 121 + pci_read_config_dword(dev, pos, &header); 122 + 123 + /* If there is no opposite end of the link, skip to enable */ 124 + if (PCI_EXT_CAP_ID(header) == PCI_EXT_CAP_ID_VC9 || 125 + pci_is_root_bus(dev->bus)) 126 + goto enable; 127 + 128 + pos2 = pci_find_ext_capability(dev->bus->self, PCI_EXT_CAP_ID_VC); 129 + if (!pos2) 130 + goto enable; 131 + 132 + pci_read_config_dword(dev->bus->self, pos2 + PCI_VC_PORT_CAP1, &cap1); 133 + evcc = cap1 & PCI_VC_CAP1_EVCC; 134 + 135 + /* VC0 is hardwired enabled, so we can start with 1 */ 136 + for (i = 1; i < evcc + 1; i++) { 137 + ctrl_pos2 = pos2 + PCI_VC_RES_CTRL + 138 + (i * PCI_CAP_VC_PER_VC_SIZEOF); 139 + status_pos2 = pos2 + PCI_VC_RES_STATUS + 140 + (i * PCI_CAP_VC_PER_VC_SIZEOF); 141 + pci_read_config_dword(dev->bus->self, ctrl_pos2, &ctrl2); 142 + if ((ctrl2 & PCI_VC_RES_CTRL_ID) == id) { 143 + link = dev->bus->self; 144 + break; 145 + } 146 + } 147 + 148 + if (!link) 149 + goto enable; 150 + 151 + /* Disable if enabled */ 152 + if (ctrl2 & PCI_VC_RES_CTRL_ENABLE) { 153 + ctrl2 &= ~PCI_VC_RES_CTRL_ENABLE; 154 + 
pci_write_config_dword(link, ctrl_pos2, ctrl2); 155 + } 156 + 157 + /* Enable on both ends */ 158 + ctrl2 |= PCI_VC_RES_CTRL_ENABLE; 159 + pci_write_config_dword(link, ctrl_pos2, ctrl2); 160 + enable: 161 + ctrl |= PCI_VC_RES_CTRL_ENABLE; 162 + pci_write_config_dword(dev, ctrl_pos, ctrl); 163 + 164 + if (!pci_wait_for_pending(dev, status_pos, PCI_VC_RES_STATUS_NEGO)) 165 + dev_err(&dev->dev, "VC%d negotiation stuck pending\n", id); 166 + 167 + if (link && !pci_wait_for_pending(link, status_pos2, 168 + PCI_VC_RES_STATUS_NEGO)) 169 + dev_err(&link->dev, "VC%d negotiation stuck pending\n", id); 170 + } 171 + 172 + /** 173 + * pci_vc_do_save_buffer - Size, save, or restore VC state 174 + * @dev: device 175 + * @pos: starting position of VC capability (VC/VC9/MFVC) 176 + * @save_state: buffer for save/restore 177 + * @name: for error message 178 + * @save: if provided a buffer, this indicates what to do with it 179 + * 180 + * Walking Virtual Channel config space to size, save, or restore it 181 + * is complicated, so we do it all from one function to reduce code and 182 + * guarantee ordering matches in the buffer. When called with NULL 183 + * @save_state, return the size of the necessary save buffer. When called 184 + * with a non-NULL @save_state, @save determines whether we save to the 185 + * buffer or restore from it. 186 + */ 187 + static int pci_vc_do_save_buffer(struct pci_dev *dev, int pos, 188 + struct pci_cap_saved_state *save_state, 189 + bool save) 190 + { 191 + u32 cap1; 192 + char evcc, lpevcc, parb_size; 193 + int i, len = 0; 194 + u8 *buf = save_state ? 
(u8 *)save_state->cap.data : NULL; 195 + 196 + /* Sanity check buffer size for save/restore */ 197 + if (buf && save_state->cap.size != 198 + pci_vc_do_save_buffer(dev, pos, NULL, save)) { 199 + dev_err(&dev->dev, 200 + "VC save buffer size does not match @0x%x\n", pos); 201 + return -ENOMEM; 202 + } 203 + 204 + pci_read_config_dword(dev, pos + PCI_VC_PORT_CAP1, &cap1); 205 + /* Extended VC Count (not counting VC0) */ 206 + evcc = cap1 & PCI_VC_CAP1_EVCC; 207 + /* Low Priority Extended VC Count (not counting VC0) */ 208 + lpevcc = (cap1 & PCI_VC_CAP1_LPEVCC) >> 4; 209 + /* Port Arbitration Table Entry Size (bits) */ 210 + parb_size = 1 << ((cap1 & PCI_VC_CAP1_ARB_SIZE) >> 10); 211 + 212 + /* 213 + * Port VC Control Register contains VC Arbitration Select, which 214 + * cannot be modified when more than one LPVC is in operation. We 215 + * therefore save/restore it first, as only VC0 should be enabled 216 + * after device reset. 217 + */ 218 + if (buf) { 219 + if (save) 220 + pci_read_config_word(dev, pos + PCI_VC_PORT_CTRL, 221 + (u16 *)buf); 222 + else 223 + pci_write_config_word(dev, pos + PCI_VC_PORT_CTRL, 224 + *(u16 *)buf); 225 + buf += 2; 226 + } 227 + len += 2; 228 + 229 + /* 230 + * If we have any Low Priority VCs and a VC Arbitration Table Offset 231 + * in Port VC Capability Register 2 then save/restore it next. 
232 + */ 233 + if (lpevcc) { 234 + u32 cap2; 235 + int vcarb_offset; 236 + 237 + pci_read_config_dword(dev, pos + PCI_VC_PORT_CAP2, &cap2); 238 + vcarb_offset = ((cap2 & PCI_VC_CAP2_ARB_OFF) >> 24) * 16; 239 + 240 + if (vcarb_offset) { 241 + int size, vcarb_phases = 0; 242 + 243 + if (cap2 & PCI_VC_CAP2_128_PHASE) 244 + vcarb_phases = 128; 245 + else if (cap2 & PCI_VC_CAP2_64_PHASE) 246 + vcarb_phases = 64; 247 + else if (cap2 & PCI_VC_CAP2_32_PHASE) 248 + vcarb_phases = 32; 249 + 250 + /* Fixed 4 bits per phase per lpevcc (plus VC0) */ 251 + size = ((lpevcc + 1) * vcarb_phases * 4) / 8; 252 + 253 + if (size && buf) { 254 + pci_vc_save_restore_dwords(dev, 255 + pos + vcarb_offset, 256 + (u32 *)buf, 257 + size / 4, save); 258 + /* 259 + * On restore, we need to signal hardware to 260 + * re-load the VC Arbitration Table. 261 + */ 262 + if (!save) 263 + pci_vc_load_arb_table(dev, pos); 264 + 265 + buf += size; 266 + } 267 + len += size; 268 + } 269 + } 270 + 271 + /* 272 + * In addition to each VC Resource Control Register, we may have a 273 + * Port Arbitration Table attached to each VC. The Port Arbitration 274 + * Table Offset in each VC Resource Capability Register tells us if 275 + * it exists. The entry size is global from the Port VC Capability 276 + * Register1 above. The number of phases is determined per VC. 
277 + */ 278 + for (i = 0; i < evcc + 1; i++) { 279 + u32 cap; 280 + int parb_offset; 281 + 282 + pci_read_config_dword(dev, pos + PCI_VC_RES_CAP + 283 + (i * PCI_CAP_VC_PER_VC_SIZEOF), &cap); 284 + parb_offset = ((cap & PCI_VC_RES_CAP_ARB_OFF) >> 24) * 16; 285 + if (parb_offset) { 286 + int size, parb_phases = 0; 287 + 288 + if (cap & PCI_VC_RES_CAP_256_PHASE) 289 + parb_phases = 256; 290 + else if (cap & (PCI_VC_RES_CAP_128_PHASE | 291 + PCI_VC_RES_CAP_128_PHASE_TB)) 292 + parb_phases = 128; 293 + else if (cap & PCI_VC_RES_CAP_64_PHASE) 294 + parb_phases = 64; 295 + else if (cap & PCI_VC_RES_CAP_32_PHASE) 296 + parb_phases = 32; 297 + 298 + size = (parb_size * parb_phases) / 8; 299 + 300 + if (size && buf) { 301 + pci_vc_save_restore_dwords(dev, 302 + pos + parb_offset, 303 + (u32 *)buf, 304 + size / 4, save); 305 + buf += size; 306 + } 307 + len += size; 308 + } 309 + 310 + /* VC Resource Control Register */ 311 + if (buf) { 312 + int ctrl_pos = pos + PCI_VC_RES_CTRL + 313 + (i * PCI_CAP_VC_PER_VC_SIZEOF); 314 + if (save) 315 + pci_read_config_dword(dev, ctrl_pos, 316 + (u32 *)buf); 317 + else { 318 + u32 tmp, ctrl = *(u32 *)buf; 319 + /* 320 + * For an FLR case, the VC config may remain. 321 + * Preserve enable bit, restore the rest. 322 + */ 323 + pci_read_config_dword(dev, ctrl_pos, &tmp); 324 + tmp &= PCI_VC_RES_CTRL_ENABLE; 325 + tmp |= ctrl & ~PCI_VC_RES_CTRL_ENABLE; 326 + pci_write_config_dword(dev, ctrl_pos, tmp); 327 + /* Load port arbitration table if used */ 328 + if (ctrl & PCI_VC_RES_CTRL_ARB_SELECT) 329 + pci_vc_load_port_arb_table(dev, pos, i); 330 + /* Re-enable if needed */ 331 + if ((ctrl ^ tmp) & PCI_VC_RES_CTRL_ENABLE) 332 + pci_vc_enable(dev, pos, i); 333 + } 334 + buf += 4; 335 + } 336 + len += 4; 337 + } 338 + 339 + return buf ? 
0 : len; 340 + } 341 + 342 + static struct { 343 + u16 id; 344 + const char *name; 345 + } vc_caps[] = { { PCI_EXT_CAP_ID_MFVC, "MFVC" }, 346 + { PCI_EXT_CAP_ID_VC, "VC" }, 347 + { PCI_EXT_CAP_ID_VC9, "VC9" } }; 348 + 349 + /** 350 + * pci_save_vc_state - Save VC state to pre-allocated save buffer 351 + * @dev: device 352 + * 353 + * For each type of VC capability, VC/VC9/MFVC, find the capability and 354 + * save it to the pre-allocated save buffer. 355 + */ 356 + int pci_save_vc_state(struct pci_dev *dev) 357 + { 358 + int i; 359 + 360 + for (i = 0; i < ARRAY_SIZE(vc_caps); i++) { 361 + int pos, ret; 362 + struct pci_cap_saved_state *save_state; 363 + 364 + pos = pci_find_ext_capability(dev, vc_caps[i].id); 365 + if (!pos) 366 + continue; 367 + 368 + save_state = pci_find_saved_ext_cap(dev, vc_caps[i].id); 369 + if (!save_state) { 370 + dev_err(&dev->dev, "%s buffer not found in %s\n", 371 + vc_caps[i].name, __func__); 372 + return -ENOMEM; 373 + } 374 + 375 + ret = pci_vc_do_save_buffer(dev, pos, save_state, true); 376 + if (ret) { 377 + dev_err(&dev->dev, "%s save unsuccessful %s\n", 378 + vc_caps[i].name, __func__); 379 + return ret; 380 + } 381 + } 382 + 383 + return 0; 384 + } 385 + 386 + /** 387 + * pci_restore_vc_state - Restore VC state from save buffer 388 + * @dev: device 389 + * 390 + * For each type of VC capability, VC/VC9/MFVC, find the capability and 391 + * restore it from the previously saved buffer. 
392 + */ 393 + void pci_restore_vc_state(struct pci_dev *dev) 394 + { 395 + int i; 396 + 397 + for (i = 0; i < ARRAY_SIZE(vc_caps); i++) { 398 + int pos; 399 + struct pci_cap_saved_state *save_state; 400 + 401 + pos = pci_find_ext_capability(dev, vc_caps[i].id); 402 + save_state = pci_find_saved_ext_cap(dev, vc_caps[i].id); 403 + if (!save_state || !pos) 404 + continue; 405 + 406 + pci_vc_do_save_buffer(dev, pos, save_state, false); 407 + } 408 + } 409 + 410 + /** 411 + * pci_allocate_vc_save_buffers - Allocate save buffers for VC caps 412 + * @dev: device 413 + * 414 + * For each type of VC capability, VC/VC9/MFVC, find the capability, size 415 + * it, and allocate a buffer for save/restore. 416 + */ 417 + 418 + void pci_allocate_vc_save_buffers(struct pci_dev *dev) 419 + { 420 + int i; 421 + 422 + for (i = 0; i < ARRAY_SIZE(vc_caps); i++) { 423 + int len, pos = pci_find_ext_capability(dev, vc_caps[i].id); 424 + 425 + if (!pos) 426 + continue; 427 + 428 + len = pci_vc_do_save_buffer(dev, pos, NULL, false); 429 + if (pci_add_ext_cap_save_buffer(dev, vc_caps[i].id, len)) 430 + dev_err(&dev->dev, 431 + "unable to preallocate %s save buffer\n", 432 + vc_caps[i].name); 433 + } 434 + }
+8
drivers/pci/xen-pcifront.c
··· 471 471 } 472 472 pcifront_init_sd(sd, domain, bus, pdev); 473 473 474 + pci_lock_rescan_remove(); 475 + 474 476 b = pci_scan_bus_parented(&pdev->xdev->dev, bus, 475 477 &pcifront_bus_ops, sd); 476 478 if (!b) { 477 479 dev_err(&pdev->xdev->dev, 478 480 "Error creating PCI Frontend Bus!\n"); 479 481 err = -ENOMEM; 482 + pci_unlock_rescan_remove(); 480 483 goto err_out; 481 484 } 482 485 ··· 497 494 /* Create SysFS and notify udev of the devices. Aka: "going live" */ 498 495 pci_bus_add_devices(b); 499 496 497 + pci_unlock_rescan_remove(); 500 498 return err; 501 499 502 500 err_out: ··· 560 556 561 557 dev_dbg(&pdev->xdev->dev, "cleaning up root buses\n"); 562 558 559 + pci_lock_rescan_remove(); 563 560 list_for_each_entry_safe(bus_entry, t, &pdev->root_buses, list) { 564 561 list_del(&bus_entry->list); 565 562 ··· 573 568 574 569 kfree(bus_entry); 575 570 } 571 + pci_unlock_rescan_remove(); 576 572 } 577 573 578 574 static pci_ers_result_t pcifront_common_process(int cmd, ··· 1049 1043 domain, bus, slot, func); 1050 1044 continue; 1051 1045 } 1046 + pci_lock_rescan_remove(); 1052 1047 pci_stop_and_remove_bus_device(pci_dev); 1053 1048 pci_dev_put(pci_dev); 1049 + pci_unlock_rescan_remove(); 1054 1050 1055 1051 dev_dbg(&pdev->xdev->dev, 1056 1052 "PCI device %04x:%02x:%02x.%d removed.\n",
+7
drivers/pcmcia/cardbus.c
··· 70 70 struct pci_dev *dev; 71 71 unsigned int max, pass; 72 72 73 + pci_lock_rescan_remove(); 74 + 73 75 s->functions = pci_scan_slot(bus, PCI_DEVFN(0, 0)); 74 76 pci_fixup_cardbus(bus); 75 77 ··· 95 93 96 94 pci_bus_add_devices(bus); 97 95 96 + pci_unlock_rescan_remove(); 98 97 return 0; 99 98 } 100 99 ··· 118 115 if (!bus) 119 116 return; 120 117 118 + pci_lock_rescan_remove(); 119 + 121 120 list_for_each_entry_safe(dev, tmp, &bus->devices, bus_list) 122 121 pci_stop_and_remove_bus_device(dev); 122 + 123 + pci_unlock_rescan_remove(); 123 124 }
+1 -1
drivers/pcmcia/i82092.c
··· 608 608 609 609 enter("i82092aa_set_mem_map"); 610 610 611 - pcibios_resource_to_bus(sock_info->dev, &region, mem->res); 611 + pcibios_resource_to_bus(sock_info->dev->bus, &region, mem->res); 612 612 613 613 map = mem->map; 614 614 if (map > 4) {
+3 -3
drivers/pcmcia/yenta_socket.c
··· 445 445 unsigned int start, stop, card_start; 446 446 unsigned short word; 447 447 448 - pcibios_resource_to_bus(socket->dev, &region, mem->res); 448 + pcibios_resource_to_bus(socket->dev->bus, &region, mem->res); 449 449 450 450 map = mem->map; 451 451 start = region.start; ··· 709 709 region.start = config_readl(socket, addr_start) & mask; 710 710 region.end = config_readl(socket, addr_end) | ~mask; 711 711 if (region.start && region.end > region.start && !override_bios) { 712 - pcibios_bus_to_resource(dev, res, &region); 712 + pcibios_bus_to_resource(dev->bus, res, &region); 713 713 if (pci_claim_resource(dev, PCI_BRIDGE_RESOURCES + nr) == 0) 714 714 return 0; 715 715 dev_printk(KERN_INFO, &dev->dev, ··· 1033 1033 struct pci_dev *dev = socket->dev; 1034 1034 struct pci_bus_region region; 1035 1035 1036 - pcibios_resource_to_bus(socket->dev, &region, &dev->resource[0]); 1036 + pcibios_resource_to_bus(socket->dev->bus, &region, &dev->resource[0]); 1037 1037 1038 1038 config_writel(socket, CB_LEGACY_MODE_BASE, 0); 1039 1039 config_writel(socket, PCI_BASE_ADDRESS_0, region.start);
+2
drivers/platform/x86/asus-wmi.c
··· 606 606 mutex_unlock(&asus->wmi_lock); 607 607 608 608 mutex_lock(&asus->hotplug_lock); 609 + pci_lock_rescan_remove(); 609 610 610 611 if (asus->wlan.rfkill) 611 612 rfkill_set_sw_state(asus->wlan.rfkill, blocked); ··· 657 656 } 658 657 659 658 out_unlock: 659 + pci_unlock_rescan_remove(); 660 660 mutex_unlock(&asus->hotplug_lock); 661 661 } 662 662
+2
drivers/platform/x86/eeepc-laptop.c
··· 592 592 rfkill_set_sw_state(eeepc->wlan_rfkill, blocked); 593 593 594 594 mutex_lock(&eeepc->hotplug_lock); 595 + pci_lock_rescan_remove(); 595 596 596 597 if (eeepc->hotplug_slot) { 597 598 port = acpi_get_pci_dev(handle); ··· 650 649 } 651 650 652 651 out_unlock: 652 + pci_unlock_rescan_remove(); 653 653 mutex_unlock(&eeepc->hotplug_lock); 654 654 } 655 655
+1 -1
drivers/scsi/mpt2sas/mpt2sas_base.c
··· 128 128 pdev = ioc->pdev; 129 129 if ((pdev == NULL)) 130 130 return -1; 131 - pci_stop_and_remove_bus_device(pdev); 131 + pci_stop_and_remove_bus_device_locked(pdev); 132 132 return 0; 133 133 } 134 134
+1 -1
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 131 131 pdev = ioc->pdev; 132 132 if ((pdev == NULL)) 133 133 return -1; 134 - pci_stop_and_remove_bus_device(pdev); 134 + pci_stop_and_remove_bus_device_locked(pdev); 135 135 return 0; 136 136 } 137 137
+3 -2
drivers/scsi/sym53c8xx_2/sym_glue.c
··· 1531 1531 struct pci_bus_region bus_addr; 1532 1532 int i = 2; 1533 1533 1534 - pcibios_resource_to_bus(pdev, &bus_addr, &pdev->resource[1]); 1534 + pcibios_resource_to_bus(pdev->bus, &bus_addr, &pdev->resource[1]); 1535 1535 device->mmio_base = bus_addr.start; 1536 1536 1537 1537 if (device->chip.features & FE_RAM) { ··· 1541 1541 */ 1542 1542 if (!pdev->resource[i].flags) 1543 1543 i++; 1544 - pcibios_resource_to_bus(pdev, &bus_addr, &pdev->resource[i]); 1544 + pcibios_resource_to_bus(pdev->bus, &bus_addr, 1545 + &pdev->resource[i]); 1545 1546 device->ram_base = bus_addr.start; 1546 1547 } 1547 1548
+9 -20
drivers/vfio/pci/vfio_pci.c
··· 139 139 pci_write_config_word(pdev, PCI_COMMAND, PCI_COMMAND_INTX_DISABLE); 140 140 141 141 /* 142 - * Careful, device_lock may already be held. This is the case if 143 - * a driver unbind is blocked. Try to get the locks ourselves to 144 - * prevent a deadlock. 142 + * Try to reset the device. The success of this is dependent on 143 + * being able to lock the device, which is not always possible. 145 144 */ 146 145 if (vdev->reset_works) { 147 - bool reset_done = false; 148 - 149 - if (pci_cfg_access_trylock(pdev)) { 150 - if (device_trylock(&pdev->dev)) { 151 - __pci_reset_function_locked(pdev); 152 - reset_done = true; 153 - device_unlock(&pdev->dev); 154 - } 155 - pci_cfg_access_unlock(pdev); 156 - } 157 - 158 - if (!reset_done) 159 - pr_warn("%s: Unable to acquire locks for reset of %s\n", 160 - __func__, dev_name(&pdev->dev)); 146 + int ret = pci_try_reset_function(pdev); 147 + if (ret) 148 + pr_warn("%s: Failed to reset device %s (%d)\n", 149 + __func__, dev_name(&pdev->dev), ret); 161 150 } 162 151 163 152 pci_restore_state(pdev); ··· 503 514 504 515 } else if (cmd == VFIO_DEVICE_RESET) { 505 516 return vdev->reset_works ? 506 - pci_reset_function(vdev->pdev) : -EINVAL; 517 + pci_try_reset_function(vdev->pdev) : -EINVAL; 507 518 508 519 } else if (cmd == VFIO_DEVICE_GET_PCI_HOT_RESET_INFO) { 509 520 struct vfio_pci_hot_reset_info hdr; ··· 673 684 &info, slot); 674 685 if (!ret) 675 686 /* User has access, do the reset */ 676 - ret = slot ? pci_reset_slot(vdev->pdev->slot) : 677 - pci_reset_bus(vdev->pdev->bus); 687 + ret = slot ? pci_try_reset_slot(vdev->pdev->slot) : 688 + pci_try_reset_bus(vdev->pdev->bus); 678 689 679 690 hot_reset_release: 680 691 for (i--; i >= 0; i--)
+6 -6
drivers/vfio/pci/vfio_pci_config.c
··· 975 975 int ret, evcc, phases, vc_arb; 976 976 int len = PCI_CAP_VC_BASE_SIZEOF; 977 977 978 - ret = pci_read_config_dword(pdev, pos + PCI_VC_PORT_REG1, &tmp); 978 + ret = pci_read_config_dword(pdev, pos + PCI_VC_PORT_CAP1, &tmp); 979 979 if (ret) 980 980 return pcibios_err_to_errno(ret); 981 981 982 - evcc = tmp & PCI_VC_REG1_EVCC; /* extended vc count */ 983 - ret = pci_read_config_dword(pdev, pos + PCI_VC_PORT_REG2, &tmp); 982 + evcc = tmp & PCI_VC_CAP1_EVCC; /* extended vc count */ 983 + ret = pci_read_config_dword(pdev, pos + PCI_VC_PORT_CAP2, &tmp); 984 984 if (ret) 985 985 return pcibios_err_to_errno(ret); 986 986 987 - if (tmp & PCI_VC_REG2_128_PHASE) 987 + if (tmp & PCI_VC_CAP2_128_PHASE) 988 988 phases = 128; 989 - else if (tmp & PCI_VC_REG2_64_PHASE) 989 + else if (tmp & PCI_VC_CAP2_64_PHASE) 990 990 phases = 64; 991 - else if (tmp & PCI_VC_REG2_32_PHASE) 991 + else if (tmp & PCI_VC_CAP2_32_PHASE) 992 992 phases = 32; 993 993 else 994 994 phases = 0;
+1 -1
drivers/video/arkfb.c
··· 1014 1014 1015 1015 vga_res.flags = IORESOURCE_IO; 1016 1016 1017 - pcibios_bus_to_resource(dev, &vga_res, &bus_reg); 1017 + pcibios_bus_to_resource(dev->bus, &vga_res, &bus_reg); 1018 1018 1019 1019 par->state.vgabase = (void __iomem *) vga_res.start; 1020 1020
+1 -1
drivers/video/s3fb.c
··· 1180 1180 1181 1181 vga_res.flags = IORESOURCE_IO; 1182 1182 1183 - pcibios_bus_to_resource(dev, &vga_res, &bus_reg); 1183 + pcibios_bus_to_resource(dev->bus, &vga_res, &bus_reg); 1184 1184 1185 1185 par->state.vgabase = (void __iomem *) vga_res.start; 1186 1186
+1 -1
drivers/video/vt8623fb.c
··· 729 729 730 730 vga_res.flags = IORESOURCE_IO; 731 731 732 - pcibios_bus_to_resource(dev, &vga_res, &bus_reg); 732 + pcibios_bus_to_resource(dev->bus, &vga_res, &bus_reg); 733 733 734 734 par->state.vgabase = (void __iomem *) vga_res.start; 735 735
+9 -1
include/acpi/actbl1.h
··· 457 457 u8 enabled; 458 458 u32 records_to_preallocate; 459 459 u32 max_sections_per_record; 460 - u32 bus; 460 + u32 bus; /* Bus and Segment numbers */ 461 461 u16 device; 462 462 u16 function; 463 463 u16 device_control; ··· 472 472 473 473 #define ACPI_HEST_FIRMWARE_FIRST (1) 474 474 #define ACPI_HEST_GLOBAL (1<<1) 475 + 476 + /* 477 + * Macros to access the bus/segment numbers in Bus field above: 478 + * Bus number is encoded in bits 7:0 479 + * Segment number is encoded in bits 23:8 480 + */ 481 + #define ACPI_HEST_BUS(bus) ((bus) & 0xFF) 482 + #define ACPI_HEST_SEGMENT(bus) (((bus) >> 8) & 0xFFFF) 475 483 476 484 /* Hardware Error Notification */ 477 485
+2 -2
include/linux/msi.h
··· 60 60 int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type); 61 61 void arch_teardown_msi_irqs(struct pci_dev *dev); 62 62 int arch_msi_check_device(struct pci_dev* dev, int nvec, int type); 63 - void arch_restore_msi_irqs(struct pci_dev *dev, int irq); 63 + void arch_restore_msi_irqs(struct pci_dev *dev); 64 64 65 65 void default_teardown_msi_irqs(struct pci_dev *dev); 66 - void default_restore_msi_irqs(struct pci_dev *dev, int irq); 66 + void default_restore_msi_irqs(struct pci_dev *dev); 67 67 u32 default_msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag); 68 68 u32 default_msix_mask_irq(struct msi_desc *desc, u32 flag); 69 69
-17
include/linux/pci-ats.h
··· 56 56 57 57 int pci_enable_pri(struct pci_dev *pdev, u32 reqs); 58 58 void pci_disable_pri(struct pci_dev *pdev); 59 - bool pci_pri_enabled(struct pci_dev *pdev); 60 59 int pci_reset_pri(struct pci_dev *pdev); 61 - bool pci_pri_stopped(struct pci_dev *pdev); 62 - int pci_pri_status(struct pci_dev *pdev); 63 60 64 61 #else /* CONFIG_PCI_PRI */ 65 62 ··· 69 72 { 70 73 } 71 74 72 - static inline bool pci_pri_enabled(struct pci_dev *pdev) 73 - { 74 - return false; 75 - } 76 - 77 75 static inline int pci_reset_pri(struct pci_dev *pdev) 78 76 { 79 77 return -ENODEV; 80 78 } 81 79 82 - static inline bool pci_pri_stopped(struct pci_dev *pdev) 83 - { 84 - return true; 85 - } 86 - 87 - static inline int pci_pri_status(struct pci_dev *pdev) 88 - { 89 - return -ENODEV; 90 - } 91 80 #endif /* CONFIG_PCI_PRI */ 92 81 93 82 #ifdef CONFIG_PCI_PASID
+107 -245
include/linux/pci.h
··· 224 224 }; 225 225 226 226 struct pci_cap_saved_data { 227 - char cap_nr; 227 + u16 cap_nr; 228 + bool cap_extended; 228 229 unsigned int size; 229 230 u32 data[0]; 230 231 }; ··· 352 351 struct bin_attribute *res_attr_wc[DEVICE_COUNT_RESOURCE]; /* sysfs file for WC mapping of resources */ 353 352 #ifdef CONFIG_PCI_MSI 354 353 struct list_head msi_list; 355 - struct kset *msi_kset; 354 + const struct attribute_group **msi_irq_groups; 356 355 #endif 357 356 struct pci_vpd *vpd; 358 357 #ifdef CONFIG_PCI_ATS ··· 376 375 } 377 376 378 377 struct pci_dev *pci_alloc_dev(struct pci_bus *bus); 379 - struct pci_dev * __deprecated alloc_pci_dev(void); 380 378 381 379 #define to_pci_dev(n) container_of(n, struct pci_dev, dev) 382 380 #define for_each_pci_dev(d) while ((d = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, d)) != NULL) ··· 384 384 { 385 385 return (pdev->error_state != pci_channel_io_normal); 386 386 } 387 - 388 - extern struct resource busn_resource; 389 387 390 388 struct pci_host_bridge_window { 391 389 struct list_head list; ··· 549 551 int reg, int len, u32 val); 550 552 551 553 struct pci_bus_region { 552 - resource_size_t start; 553 - resource_size_t end; 554 + dma_addr_t start; 555 + dma_addr_t end; 554 556 }; 555 557 556 558 struct pci_dynids { ··· 632 634 * DEFINE_PCI_DEVICE_TABLE - macro used to describe a pci device table 633 635 * @_table: device table name 634 636 * 635 - * This macro is used to create a struct pci_device_id array (a device table) 636 - * in a generic manner. 637 + * This macro is deprecated and should not be used in new code. 
637 638 */ 638 639 #define DEFINE_PCI_DEVICE_TABLE(_table) \ 639 640 const struct pci_device_id _table[] ··· 734 737 735 738 /* Generic PCI functions used internally */ 736 739 737 - void pcibios_resource_to_bus(struct pci_dev *dev, struct pci_bus_region *region, 740 + void pcibios_resource_to_bus(struct pci_bus *bus, struct pci_bus_region *region, 738 741 struct resource *res); 739 - void pcibios_bus_to_resource(struct pci_dev *dev, struct resource *res, 742 + void pcibios_bus_to_resource(struct pci_bus *bus, struct resource *res, 740 743 struct pci_bus_region *region); 741 744 void pcibios_scan_specific_bus(int busn); 742 745 struct pci_bus *pci_find_bus(int domain, int busnr); ··· 760 763 const char *name, 761 764 struct hotplug_slot *hotplug); 762 765 void pci_destroy_slot(struct pci_slot *slot); 763 - void pci_renumber_slot(struct pci_slot *slot, int slot_nr); 764 766 int pci_scan_slot(struct pci_bus *bus, int devfn); 765 767 struct pci_dev *pci_scan_single_device(struct pci_bus *bus, int devfn); 766 768 void pci_device_add(struct pci_dev *dev, struct pci_bus *bus); ··· 775 779 void pci_dev_put(struct pci_dev *dev); 776 780 void pci_remove_bus(struct pci_bus *b); 777 781 void pci_stop_and_remove_bus_device(struct pci_dev *dev); 782 + void pci_stop_and_remove_bus_device_locked(struct pci_dev *dev); 778 783 void pci_stop_root_bus(struct pci_bus *bus); 779 784 void pci_remove_root_bus(struct pci_bus *bus); 780 785 void pci_setup_cardbus(struct pci_bus *bus); ··· 935 938 void pci_msi_off(struct pci_dev *dev); 936 939 int pci_set_dma_max_seg_size(struct pci_dev *dev, unsigned int size); 937 940 int pci_set_dma_seg_boundary(struct pci_dev *dev, unsigned long mask); 941 + int pci_wait_for_pending(struct pci_dev *dev, int pos, u16 mask); 938 942 int pci_wait_for_pending_transaction(struct pci_dev *dev); 939 943 int pcix_get_max_mmrbc(struct pci_dev *dev); 940 944 int pcix_get_mmrbc(struct pci_dev *dev); ··· 949 951 int __pci_reset_function(struct pci_dev *dev); 950 
952 int __pci_reset_function_locked(struct pci_dev *dev); 951 953 int pci_reset_function(struct pci_dev *dev); 954 + int pci_try_reset_function(struct pci_dev *dev); 952 955 int pci_probe_reset_slot(struct pci_slot *slot); 953 956 int pci_reset_slot(struct pci_slot *slot); 957 + int pci_try_reset_slot(struct pci_slot *slot); 954 958 int pci_probe_reset_bus(struct pci_bus *bus); 955 959 int pci_reset_bus(struct pci_bus *bus); 960 + int pci_try_reset_bus(struct pci_bus *bus); 956 961 void pci_reset_bridge_secondary_bus(struct pci_dev *dev); 957 962 void pci_update_resource(struct pci_dev *dev, int resno); 958 963 int __must_check pci_assign_resource(struct pci_dev *dev, int i); ··· 975 974 int pci_save_state(struct pci_dev *dev); 976 975 void pci_restore_state(struct pci_dev *dev); 977 976 struct pci_saved_state *pci_store_saved_state(struct pci_dev *dev); 978 - int pci_load_saved_state(struct pci_dev *dev, struct pci_saved_state *state); 979 977 int pci_load_and_free_saved_state(struct pci_dev *dev, 980 978 struct pci_saved_state **state); 979 + struct pci_cap_saved_state *pci_find_saved_cap(struct pci_dev *dev, char cap); 980 + struct pci_cap_saved_state *pci_find_saved_ext_cap(struct pci_dev *dev, 981 + u16 cap); 982 + int pci_add_cap_save_buffer(struct pci_dev *dev, char cap, unsigned int size); 983 + int pci_add_ext_cap_save_buffer(struct pci_dev *dev, 984 + u16 cap, unsigned int size); 981 985 int __pci_complete_power_transition(struct pci_dev *dev, pci_power_t state); 982 986 int pci_set_power_state(struct pci_dev *dev, pci_power_t state); 983 987 pci_power_t pci_choose_state(struct pci_dev *dev, pm_message_t state); ··· 991 985 int __pci_enable_wake(struct pci_dev *dev, pci_power_t state, 992 986 bool runtime, bool enable); 993 987 int pci_wake_from_d3(struct pci_dev *dev, bool enable); 994 - pci_power_t pci_target_state(struct pci_dev *dev); 995 988 int pci_prepare_to_sleep(struct pci_dev *dev); 996 989 int pci_back_from_sleep(struct pci_dev *dev); 997 990 
bool pci_dev_run_wake(struct pci_dev *dev); ··· 1003 998 return __pci_enable_wake(dev, state, false, enable); 1004 999 } 1005 1000 1006 - #define PCI_EXP_IDO_REQUEST (1<<0) 1007 - #define PCI_EXP_IDO_COMPLETION (1<<1) 1008 - void pci_enable_ido(struct pci_dev *dev, unsigned long type); 1009 - void pci_disable_ido(struct pci_dev *dev, unsigned long type); 1010 - 1011 - enum pci_obff_signal_type { 1012 - PCI_EXP_OBFF_SIGNAL_L0 = 0, 1013 - PCI_EXP_OBFF_SIGNAL_ALWAYS = 1, 1014 - }; 1015 - int pci_enable_obff(struct pci_dev *dev, enum pci_obff_signal_type); 1016 - void pci_disable_obff(struct pci_dev *dev); 1017 - 1018 - int pci_enable_ltr(struct pci_dev *dev); 1019 - void pci_disable_ltr(struct pci_dev *dev); 1020 - int pci_set_ltr(struct pci_dev *dev, int snoop_lat_ns, int nosnoop_lat_ns); 1001 + /* PCI Virtual Channel */ 1002 + int pci_save_vc_state(struct pci_dev *dev); 1003 + void pci_restore_vc_state(struct pci_dev *dev); 1004 + void pci_allocate_vc_save_buffers(struct pci_dev *dev); 1021 1005 1022 1006 /* For use by arch with custom probe code */ 1023 1007 void set_pcie_port_type(struct pci_dev *pdev); ··· 1016 1022 int pci_bus_find_capability(struct pci_bus *bus, unsigned int devfn, int cap); 1017 1023 unsigned int pci_rescan_bus_bridge_resize(struct pci_dev *bridge); 1018 1024 unsigned int pci_rescan_bus(struct pci_bus *bus); 1025 + void pci_lock_rescan_remove(void); 1026 + void pci_unlock_rescan_remove(void); 1019 1027 1020 1028 /* Vital product data routines */ 1021 1029 ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf); 1022 1030 ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void *buf); 1023 - int pci_vpd_truncate(struct pci_dev *dev, size_t size); 1024 1031 1025 1032 /* Helper functions for low-level code (drivers/pci/setup-[bus,res].c) */ 1026 1033 resource_size_t pcibios_retrieve_fw_addr(struct pci_dev *dev, int idx); ··· 1073 1078 resource_size_t), 1074 1079 void *alignf_data); 1075 1080 1081 + 
static inline dma_addr_t pci_bus_address(struct pci_dev *pdev, int bar) 1082 + { 1083 + struct pci_bus_region region; 1084 + 1085 + pcibios_resource_to_bus(pdev->bus, &region, &pdev->resource[bar]); 1086 + return region.start; 1087 + } 1088 + 1076 1089 /* Proper probing supporting hot-pluggable devices */ 1077 1090 int __must_check __pci_register_driver(struct pci_driver *, struct module *, 1078 1091 const char *mod_name); ··· 1118 1115 1119 1116 void pci_walk_bus(struct pci_bus *top, int (*cb)(struct pci_dev *, void *), 1120 1117 void *userdata); 1121 - int pci_cfg_space_size_ext(struct pci_dev *dev); 1122 1118 int pci_cfg_space_size(struct pci_dev *dev); 1123 1119 unsigned char pci_bus_max_busnr(struct pci_bus *bus); 1124 1120 void pci_setup_bridge(struct pci_bus *bus); ··· 1156 1154 }; 1157 1155 1158 1156 1159 - #ifndef CONFIG_PCI_MSI 1160 - static inline int pci_enable_msi_block(struct pci_dev *dev, unsigned int nvec) 1161 - { 1162 - return -1; 1163 - } 1164 - 1165 - static inline int 1166 - pci_enable_msi_block_auto(struct pci_dev *dev, unsigned int *maxvec) 1167 - { 1168 - return -1; 1169 - } 1170 - 1171 - static inline void pci_msi_shutdown(struct pci_dev *dev) 1172 - { } 1173 - static inline void pci_disable_msi(struct pci_dev *dev) 1174 - { } 1175 - 1176 - static inline int pci_msix_table_size(struct pci_dev *dev) 1177 - { 1178 - return 0; 1179 - } 1180 - static inline int pci_enable_msix(struct pci_dev *dev, 1181 - struct msix_entry *entries, int nvec) 1182 - { 1183 - return -1; 1184 - } 1185 - 1186 - static inline void pci_msix_shutdown(struct pci_dev *dev) 1187 - { } 1188 - static inline void pci_disable_msix(struct pci_dev *dev) 1189 - { } 1190 - 1191 - static inline void msi_remove_pci_irq_vectors(struct pci_dev *dev) 1192 - { } 1193 - 1194 - static inline void pci_restore_msi_state(struct pci_dev *dev) 1195 - { } 1196 - static inline int pci_msi_enabled(void) 1197 - { 1198 - return 0; 1199 - } 1200 - #else 1201 - int pci_enable_msi_block(struct 
pci_dev *dev, unsigned int nvec); 1202 - int pci_enable_msi_block_auto(struct pci_dev *dev, unsigned int *maxvec); 1157 + #ifdef CONFIG_PCI_MSI 1158 + int pci_msi_vec_count(struct pci_dev *dev); 1159 + int pci_enable_msi_block(struct pci_dev *dev, int nvec); 1203 1160 void pci_msi_shutdown(struct pci_dev *dev); 1204 1161 void pci_disable_msi(struct pci_dev *dev); 1205 - int pci_msix_table_size(struct pci_dev *dev); 1162 + int pci_msix_vec_count(struct pci_dev *dev); 1206 1163 int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nvec); 1207 1164 void pci_msix_shutdown(struct pci_dev *dev); 1208 1165 void pci_disable_msix(struct pci_dev *dev); 1209 1166 void msi_remove_pci_irq_vectors(struct pci_dev *dev); 1210 1167 void pci_restore_msi_state(struct pci_dev *dev); 1211 1168 int pci_msi_enabled(void); 1169 + int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec); 1170 + int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries, 1171 + int minvec, int maxvec); 1172 + #else 1173 + static inline int pci_msi_vec_count(struct pci_dev *dev) { return -ENOSYS; } 1174 + static inline int pci_enable_msi_block(struct pci_dev *dev, int nvec) 1175 + { return -ENOSYS; } 1176 + static inline void pci_msi_shutdown(struct pci_dev *dev) { } 1177 + static inline void pci_disable_msi(struct pci_dev *dev) { } 1178 + static inline int pci_msix_vec_count(struct pci_dev *dev) { return -ENOSYS; } 1179 + static inline int pci_enable_msix(struct pci_dev *dev, 1180 + struct msix_entry *entries, int nvec) 1181 + { return -ENOSYS; } 1182 + static inline void pci_msix_shutdown(struct pci_dev *dev) { } 1183 + static inline void pci_disable_msix(struct pci_dev *dev) { } 1184 + static inline void msi_remove_pci_irq_vectors(struct pci_dev *dev) { } 1185 + static inline void pci_restore_msi_state(struct pci_dev *dev) { } 1186 + static inline int pci_msi_enabled(void) { return 0; } 1187 + static inline int pci_enable_msi_range(struct pci_dev *dev, int 
minvec,
+					    int maxvec)
+{ return -ENOSYS; }
+static inline int pci_enable_msix_range(struct pci_dev *dev,
+		      struct msix_entry *entries, int minvec, int maxvec)
+{ return -ENOSYS; }
 #endif

 #ifdef CONFIG_PCIEPORTBUS
···
 #define pcie_ports_auto	false
 #endif

-#ifndef CONFIG_PCIEASPM
-static inline int pcie_aspm_enabled(void) { return 0; }
-static inline bool pcie_aspm_support_enabled(void) { return false; }
-#else
-int pcie_aspm_enabled(void);
+#ifdef CONFIG_PCIEASPM
 bool pcie_aspm_support_enabled(void);
+#else
+static inline bool pcie_aspm_support_enabled(void) { return false; }
 #endif

 #ifdef CONFIG_PCIEAER
···
 static inline bool pci_aer_available(void) { return false; }
 #endif

-#ifndef CONFIG_PCIE_ECRC
-static inline void pcie_set_ecrc_checking(struct pci_dev *dev)
-{
-	return;
-}
-static inline void pcie_ecrc_get_policy(char *str) {};
-#else
+#ifdef CONFIG_PCIE_ECRC
 void pcie_set_ecrc_checking(struct pci_dev *dev);
 void pcie_ecrc_get_policy(char *str);
+#else
+static inline void pcie_set_ecrc_checking(struct pci_dev *dev) { }
+static inline void pcie_ecrc_get_policy(char *str) { }
 #endif

 #define pci_enable_msi(pdev)	pci_enable_msi_block(pdev, 1)
···
 extern int pci_domains_supported;
 #else
 enum { pci_domains_supported = 0 };
-static inline int pci_domain_nr(struct pci_bus *bus)
-{
-	return 0;
-}
-
-static inline int pci_proc_domain(struct pci_bus *bus)
-{
-	return 0;
-}
+static inline int pci_domain_nr(struct pci_bus *bus) { return 0; }
+static inline int pci_proc_domain(struct pci_bus *bus) { return 0; }
 #endif /* CONFIG_PCI_DOMAINS */

 /* some architectures require additional setup to direct VGA traffic */
···
 static inline struct pci_dev *pci_get_device(unsigned int vendor,
					     unsigned int device,
					     struct pci_dev *from)
-{
-	return NULL;
-}
+{ return NULL; }

 static inline struct pci_dev *pci_get_subsys(unsigned int vendor,
					     unsigned int device,
					     unsigned int ss_vendor,
					     unsigned int ss_device,
					     struct pci_dev *from)
-{
-	return NULL;
-}
+{ return NULL; }

 static inline struct pci_dev *pci_get_class(unsigned int class,
					    struct pci_dev *from)
-{
-	return NULL;
-}
+{ return NULL; }

 #define pci_dev_present(ids)	(0)
 #define no_pci_devices()	(1)
 #define pci_dev_put(dev)	do { } while (0)

-static inline void pci_set_master(struct pci_dev *dev)
-{ }
-
-static inline int pci_enable_device(struct pci_dev *dev)
-{
-	return -EIO;
-}
-
-static inline void pci_disable_device(struct pci_dev *dev)
-{ }
-
+static inline void pci_set_master(struct pci_dev *dev) { }
+static inline int pci_enable_device(struct pci_dev *dev) { return -EIO; }
+static inline void pci_disable_device(struct pci_dev *dev) { }
 static inline int pci_set_dma_mask(struct pci_dev *dev, u64 mask)
-{
-	return -EIO;
-}
-
+{ return -EIO; }
 static inline int pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask)
-{
-	return -EIO;
-}
-
+{ return -EIO; }
 static inline int pci_set_dma_max_seg_size(struct pci_dev *dev,
					   unsigned int size)
-{
-	return -EIO;
-}
-
+{ return -EIO; }
 static inline int pci_set_dma_seg_boundary(struct pci_dev *dev,
					   unsigned long mask)
-{
-	return -EIO;
-}
-
+{ return -EIO; }
 static inline int pci_assign_resource(struct pci_dev *dev, int i)
-{
-	return -EBUSY;
-}
-
+{ return -EBUSY; }
 static inline int __pci_register_driver(struct pci_driver *drv,
					struct module *owner)
-{
-	return 0;
-}
-
+{ return 0; }
 static inline int pci_register_driver(struct pci_driver *drv)
-{
-	return 0;
-}
-
-static inline void pci_unregister_driver(struct pci_driver *drv)
-{ }
-
+{ return 0; }
+static inline void pci_unregister_driver(struct pci_driver *drv) { }
 static inline int pci_find_capability(struct pci_dev *dev, int cap)
-{
-	return 0;
-}
-
+{ return 0; }
 static inline int pci_find_next_capability(struct pci_dev *dev, u8 post,
					   int cap)
-{
-	return 0;
-}
-
+{ return 0; }
 static inline int pci_find_ext_capability(struct pci_dev *dev, int cap)
-{
-	return 0;
-}
+{ return 0; }

 /* Power management related routines */
-static inline int pci_save_state(struct pci_dev *dev)
-{
-	return 0;
-}
-
-static inline void pci_restore_state(struct pci_dev *dev)
-{ }
-
+static inline int pci_save_state(struct pci_dev *dev) { return 0; }
+static inline void pci_restore_state(struct pci_dev *dev) { }
 static inline int pci_set_power_state(struct pci_dev *dev, pci_power_t state)
-{
-	return 0;
-}
-
+{ return 0; }
 static inline int pci_wake_from_d3(struct pci_dev *dev, bool enable)
-{
-	return 0;
-}
-
+{ return 0; }
 static inline pci_power_t pci_choose_state(struct pci_dev *dev,
					   pm_message_t state)
-{
-	return PCI_D0;
-}
-
+{ return PCI_D0; }
 static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state,
				  int enable)
-{
-	return 0;
-}
-
-static inline void pci_enable_ido(struct pci_dev *dev, unsigned long type)
-{
-}
-
-static inline void pci_disable_ido(struct pci_dev *dev, unsigned long type)
-{
-}
-
-static inline int pci_enable_obff(struct pci_dev *dev, unsigned long type)
-{
-	return 0;
-}
-
-static inline void pci_disable_obff(struct pci_dev *dev)
-{
-}
+{ return 0; }

 static inline int pci_request_regions(struct pci_dev *dev, const char *res_name)
-{
-	return -EIO;
-}
-
-static inline void pci_release_regions(struct pci_dev *dev)
-{ }
+{ return -EIO; }
+static inline void pci_release_regions(struct pci_dev *dev) { }

 #define pci_dma_burst_advice(pdev, strat, strategy_parameter) do { } while (0)

-static inline void pci_block_cfg_access(struct pci_dev *dev)
-{ }
-
+static inline void pci_block_cfg_access(struct pci_dev *dev) { }
 static inline int pci_block_cfg_access_in_atomic(struct pci_dev *dev)
 { return 0; }
-
-static inline void pci_unblock_cfg_access(struct pci_dev *dev)
-{ }
+static inline void pci_unblock_cfg_access(struct pci_dev *dev) { }

 static inline struct pci_bus *pci_find_next_bus(const struct pci_bus *from)
 { return NULL; }
-
 static inline struct pci_dev *pci_get_slot(struct pci_bus *bus,
					   unsigned int devfn)
 { return NULL; }
-
 static inline struct pci_dev *pci_get_bus_and_slot(unsigned int bus,
						   unsigned int devfn)
 { return NULL; }

-static inline int pci_domain_nr(struct pci_bus *bus)
-{ return 0; }
-
-static inline struct pci_dev *pci_dev_get(struct pci_dev *dev)
-{
-	return NULL;
-}
+static inline int pci_domain_nr(struct pci_bus *bus) { return 0; }
+static inline struct pci_dev *pci_dev_get(struct pci_dev *dev) { return NULL; }

 #define dev_is_pci(d) (false)
 #define dev_is_pf(d) (false)
···
 /* Include architecture-dependent settings and functions */

 #include <asm/pci.h>
-
-#ifndef PCIBIOS_MAX_MEM_32
-#define PCIBIOS_MAX_MEM_32	(-1)
-#endif

 /* these helpers provide future and backwards compatibility
  * for accessing popular PCI BAR info */
···
 int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags);
 #else
 static inline void pci_fixup_device(enum pci_fixup_pass pass,
-				    struct pci_dev *dev) {}
+				    struct pci_dev *dev) { }
 static inline struct pci_dev *pci_get_dma_source(struct pci_dev *dev)
 {
	return pci_dev_get(dev);
···
 int pci_sriov_get_totalvfs(struct pci_dev *dev);
 #else
 static inline int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn)
-{
-	return -ENODEV;
-}
-static inline void pci_disable_sriov(struct pci_dev *dev)
-{
-}
+{ return -ENODEV; }
+static inline void pci_disable_sriov(struct pci_dev *dev) { }
 static inline irqreturn_t pci_sriov_migration(struct pci_dev *dev)
-{
-	return IRQ_NONE;
-}
-static inline int pci_num_vf(struct pci_dev *dev)
-{
-	return 0;
-}
+{ return IRQ_NONE; }
+static inline int pci_num_vf(struct pci_dev *dev) { return 0; }
 static inline int pci_vfs_assigned(struct pci_dev *dev)
-{
-	return 0;
-}
+{ return 0; }
 static inline int pci_sriov_set_totalvfs(struct pci_dev *dev, u16 numvfs)
-{
-	return 0;
-}
+{ return 0; }
 static inline int pci_sriov_get_totalvfs(struct pci_dev *dev)
-{
-	return 0;
-}
+{ return 0; }
 #endif

 #if defined(CONFIG_HOTPLUG_PCI) || defined(CONFIG_HOTPLUG_PCI_MODULE)
include/uapi/linux/pci_regs.h (+31 -6)
···
 #define PCI_EXP_SLTCTL_CCIE	0x0010	/* Command Completed Interrupt Enable */
 #define PCI_EXP_SLTCTL_HPIE	0x0020	/* Hot-Plug Interrupt Enable */
 #define PCI_EXP_SLTCTL_AIC	0x00c0	/* Attention Indicator Control */
+#define  PCI_EXP_SLTCTL_ATTN_IND_ON	0x0040	/* Attention Indicator on */
+#define  PCI_EXP_SLTCTL_ATTN_IND_BLINK	0x0080	/* Attention Indicator blinking */
+#define  PCI_EXP_SLTCTL_ATTN_IND_OFF	0x00c0	/* Attention Indicator off */
 #define PCI_EXP_SLTCTL_PIC	0x0300	/* Power Indicator Control */
+#define  PCI_EXP_SLTCTL_PWR_IND_ON	0x0100	/* Power Indicator on */
+#define  PCI_EXP_SLTCTL_PWR_IND_BLINK	0x0200	/* Power Indicator blinking */
+#define  PCI_EXP_SLTCTL_PWR_IND_OFF	0x0300	/* Power Indicator off */
 #define PCI_EXP_SLTCTL_PCC	0x0400	/* Power Controller Control */
+#define  PCI_EXP_SLTCTL_PWR_ON		0x0000	/* Power On */
+#define  PCI_EXP_SLTCTL_PWR_OFF		0x0400	/* Power Off */
 #define PCI_EXP_SLTCTL_EIC	0x0800	/* Electromechanical Interlock Control */
 #define PCI_EXP_SLTCTL_DLLSCE	0x1000	/* Data Link Layer State Changed Enable */
 #define PCI_EXP_SLTSTA	26	/* Slot Status */
···
 #define PCI_ERR_ROOT_ERR_SRC	52	/* Error Source Identification */

 /* Virtual Channel */
-#define PCI_VC_PORT_REG1	4
-#define  PCI_VC_REG1_EVCC	0x7	/* extended VC count */
-#define PCI_VC_PORT_REG2	8
-#define  PCI_VC_REG2_32_PHASE	0x2
-#define  PCI_VC_REG2_64_PHASE	0x4
-#define  PCI_VC_REG2_128_PHASE	0x8
+#define PCI_VC_PORT_CAP1	4
+#define  PCI_VC_CAP1_EVCC	0x00000007	/* extended VC count */
+#define  PCI_VC_CAP1_LPEVCC	0x00000070	/* low prio extended VC count */
+#define  PCI_VC_CAP1_ARB_SIZE	0x00000c00
+#define PCI_VC_PORT_CAP2	8
+#define  PCI_VC_CAP2_32_PHASE	0x00000002
+#define  PCI_VC_CAP2_64_PHASE	0x00000004
+#define  PCI_VC_CAP2_128_PHASE	0x00000008
+#define  PCI_VC_CAP2_ARB_OFF	0xff000000
 #define PCI_VC_PORT_CTRL	12
+#define  PCI_VC_PORT_CTRL_LOAD_TABLE	0x00000001
 #define PCI_VC_PORT_STATUS	14
+#define  PCI_VC_PORT_STATUS_TABLE	0x00000001
 #define PCI_VC_RES_CAP		16
+#define  PCI_VC_RES_CAP_32_PHASE	0x00000002
+#define  PCI_VC_RES_CAP_64_PHASE	0x00000004
+#define  PCI_VC_RES_CAP_128_PHASE	0x00000008
+#define  PCI_VC_RES_CAP_128_PHASE_TB	0x00000010
+#define  PCI_VC_RES_CAP_256_PHASE	0x00000020
+#define  PCI_VC_RES_CAP_ARB_OFF		0xff000000
 #define PCI_VC_RES_CTRL		20
+#define  PCI_VC_RES_CTRL_LOAD_TABLE	0x00010000
+#define  PCI_VC_RES_CTRL_ARB_SELECT	0x000e0000
+#define  PCI_VC_RES_CTRL_ID		0x07000000
+#define  PCI_VC_RES_CTRL_ENABLE		0x80000000
 #define PCI_VC_RES_STATUS	26
+#define  PCI_VC_RES_STATUS_TABLE	0x00000001
+#define  PCI_VC_RES_STATUS_NEGO		0x00000002
 #define PCI_CAP_VC_BASE_SIZEOF		0x10
 #define PCI_CAP_VC_PER_VC_SIZEOF	0x0C
scripts/checkpatch.pl (+7 -4)
···
				$herecurr);
		}

-# check for declarations of struct pci_device_id
-		if ($line =~ /\bstruct\s+pci_device_id\s+\w+\s*\[\s*\]\s*\=\s*\{/) {
-			WARN("DEFINE_PCI_DEVICE_TABLE",
-			     "Use DEFINE_PCI_DEVICE_TABLE for struct pci_device_id\n" . $herecurr);
+# check for uses of DEFINE_PCI_DEVICE_TABLE
+		if ($line =~ /\bDEFINE_PCI_DEVICE_TABLE\s*\(\s*(\w+)\s*\)\s*=/) {
+			if (WARN("DEFINE_PCI_DEVICE_TABLE",
+				 "Prefer struct pci_device_id over deprecated DEFINE_PCI_DEVICE_TABLE\n" . $herecurr) &&
+			    $fix) {
+				$fixed[$linenr - 1] =~ s/\b(?:static\s+|)DEFINE_PCI_DEVICE_TABLE\s*\(\s*(\w+)\s*\)\s*=\s*/static const struct pci_device_id $1\[\] = /;
+			}
		}

# check for new typedefs, only function parameters and sparse annotations