Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pci-v4.8-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"Highlights:

- ARM64 support for ACPI host bridges

- new drivers for Axis ARTPEC-6 and Marvell Aardvark

- new pci_alloc_irq_vectors() interface for MSI-X, MSI, legacy INTx

- pci_resource_to_user() cleanup (more to come)

Detailed summary:

Enumeration:
- Move ecam.h to linux/include/pci-ecam.h (Jayachandran C)
- Add parent device field to ECAM struct pci_config_window (Jayachandran C)
- Add generic MCFG table handling (Tomasz Nowicki)
- Refactor pci_bus_assign_domain_nr() for CONFIG_PCI_DOMAINS_GENERIC (Tomasz Nowicki)
- Factor DT-specific pci_bus_find_domain_nr() code out (Tomasz Nowicki)

Resource management:
- Add devm_request_pci_bus_resources() (Bjorn Helgaas)
- Unify pci_resource_to_user() declarations (Bjorn Helgaas)
- Implement pci_resource_to_user() with pcibios_resource_to_bus() (microblaze, powerpc, sparc) (Bjorn Helgaas)
- Request host bridge window resources (designware, iproc, rcar, xgene, xilinx, xilinx-nwl) (Bjorn Helgaas)
- Make PCI I/O space optional on ARM32 (Bjorn Helgaas)
- Ignore write combining when mapping I/O port space (Bjorn Helgaas)
- Claim bus resources on MIPS PCI_PROBE_ONLY set-ups (Bjorn Helgaas)
- Remove unicore32 pci=firmware command line parameter handling (Bjorn Helgaas)
- Support I/O resources when parsing host bridge resources (Jayachandran C)
- Add helpers to request/release memory and I/O regions (Johannes Thumshirn)
- Use pci_(request|release)_mem_regions (NVMe, lpfc, GenWQE, ethernet/intel, alx) (Johannes Thumshirn)
- Extend pci=resource_alignment to specify device/vendor IDs (Koehrer Mathias (ETAS/ESW5))
- Add generic pci_bus_claim_resources() (Lorenzo Pieralisi)
- Claim bus resources on ARM32 PCI_PROBE_ONLY set-ups (Lorenzo Pieralisi)
- Remove ARM32 and ARM64 arch-specific pcibios_enable_device() (Lorenzo Pieralisi)
- Add pci_unmap_iospace() to unmap I/O resources (Sinan Kaya)
- Remove powerpc __pci_mmap_set_pgprot() (Yinghai Lu)

PCI device hotplug:
- Allow additional bus numbers for hotplug bridges (Keith Busch)
- Ignore interrupts during D3cold (Lukas Wunner)

Power management:
- Enforce type casting for pci_power_t (Andy Shevchenko)
- Don't clear d3cold_allowed for PCIe ports (Mika Westerberg)
- Put PCIe ports into D3 during suspend (Mika Westerberg)
- Power on bridges before scanning new devices (Mika Westerberg)
- Runtime resume bridge before rescan (Mika Westerberg)
- Add runtime PM support for PCIe ports (Mika Westerberg)
- Remove redundant check of pcie_set_clkpm (Shawn Lin)

Virtualization:
- Add function 1 DMA alias quirk for Marvell 88SE9182 (Aaron Sierra)
- Add DMA alias quirk for Adaptec 3805 (Alex Williamson)
- Mark Atheros AR9485 and QCA9882 to avoid bus reset (Chris Blake)
- Add ACS quirk for Solarflare SFC9220 (Edward Cree)

MSI:
- Fix PCI_MSI dependencies (Arnd Bergmann)
- Add pci_msix_desc_addr() helper (Christoph Hellwig)
- Switch msix_program_entries() to use pci_msix_desc_addr() (Christoph Hellwig)
- Make the "entries" argument to pci_enable_msix() optional (Christoph Hellwig)
- Provide sensible IRQ vector alloc/free routines (Christoph Hellwig)
- Spread interrupt vectors in pci_alloc_irq_vectors() (Christoph Hellwig)

Error Handling:
- Bind DPC to Root Ports as well as Downstream Ports (Keith Busch)
- Remove DPC tristate module option (Keith Busch)
- Convert Downstream Port Containment driver to use devm_* functions (Mika Westerberg)

Generic host bridge driver:
- Select IRQ_DOMAIN (Arnd Bergmann)
- Claim bus resources on PCI_PROBE_ONLY set-ups (Lorenzo Pieralisi)

ACPI host bridge driver:
- Add ARM64 acpi_pci_bus_find_domain_nr() (Tomasz Nowicki)
- Add ARM64 ACPI support for legacy IRQs parsing and consolidation with DT code (Tomasz Nowicki)
- Implement ARM64 AML accessors for PCI_Config region (Tomasz Nowicki)
- Support ARM64 ACPI-based PCI host controller (Tomasz Nowicki)

Altera host bridge driver:
- Check link status before retraining the link (Ley Foon Tan)
- Poll for link up status after retraining the link (Ley Foon Tan)

Axis ARTPEC-6 host bridge driver:
- Add PCI_MSI_IRQ_DOMAIN dependency (Arnd Bergmann)
- Add DT binding for Axis ARTPEC-6 PCIe controller (Niklas Cassel)
- Add Axis ARTPEC-6 PCIe controller driver (Niklas Cassel)

Intel VMD host bridge driver:
- Use lock save/restore in interrupt enable path (Jon Derrick)
- Select device dma ops to override (Keith Busch)
- Initialize list item in IRQ disable (Keith Busch)
- Use x86_vector_domain as parent domain (Keith Busch)
- Separate MSI and MSI-X vector sharing (Keith Busch)

Marvell Aardvark host bridge driver:
- Add DT binding for the Aardvark PCIe controller (Thomas Petazzoni)
- Add Aardvark PCI host controller driver (Thomas Petazzoni)
- Add Aardvark PCIe support for Armada 3700 (Thomas Petazzoni)

Microsoft Hyper-V host bridge driver:
- Fix interrupt cleanup path (Cathy Avery)
- Don't leak buffer in hv_pci_onchannelcallback() (Vitaly Kuznetsov)
- Handle all pending messages in hv_pci_onchannelcallback() (Vitaly Kuznetsov)

NVIDIA Tegra host bridge driver:
- Program PADS_REFCLK_CFG* always, not just on legacy SoCs (Stephen Warren)
- Program PADS_REFCLK_CFG* registers with per-SoC values (Stephen Warren)
- Use lower-case hex consistently for register definitions (Thierry Reding)
- Use generic pci_remap_iospace() rather than ARM32-specific one (Thierry Reding)
- Stop setting pcibios_min_mem (Thierry Reding)

Renesas R-Car host bridge driver:
- Drop gen2 dummy I/O port region (Bjorn Helgaas)

TI DRA7xx host bridge driver:
- Fix return value in case of error (Christophe JAILLET)

Xilinx AXI host bridge driver:
- Fix return value in case of error (Christophe JAILLET)

Miscellaneous:
- Make bus_attr_resource_alignment static (Ben Dooks)
- Include <asm/dma.h> for isa_dma_bridge_buggy (Ben Dooks)
- MAINTAINERS: Add file patterns for PCI device tree bindings (Geert Uytterhoeven)
- Make host bridge drivers explicitly non-modular (Paul Gortmaker)"

* tag 'pci-v4.8-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (125 commits)
PCI: xgene: Make explicitly non-modular
PCI: thunder-pem: Make explicitly non-modular
PCI: thunder-ecam: Make explicitly non-modular
PCI: tegra: Make explicitly non-modular
PCI: rcar-gen2: Make explicitly non-modular
PCI: rcar: Make explicitly non-modular
PCI: mvebu: Make explicitly non-modular
PCI: layerscape: Make explicitly non-modular
PCI: keystone: Make explicitly non-modular
PCI: hisi: Make explicitly non-modular
PCI: generic: Make explicitly non-modular
PCI: designware-plat: Make it explicitly non-modular
PCI: artpec6: Make explicitly non-modular
PCI: armada8k: Make explicitly non-modular
PCI: artpec: Add PCI_MSI_IRQ_DOMAIN dependency
PCI: Add ACS quirk for Solarflare SFC9220
arm64: dts: marvell: Add Aardvark PCIe support for Armada 3700
PCI: aardvark: Add Aardvark PCI host controller driver
dt-bindings: add DT binding for the Aardvark PCIe controller
PCI: tegra: Program PADS_REFCLK_CFG* registers with per-SoC values
...

+3033 -1195
+90 -401
Documentation/PCI/MSI-HOWTO.txt
···
4.2 Using MSI

Most of the hard work is done for the driver in the PCI layer.  The driver
simply has to request that the PCI layer set up the MSI capability for this
device.

To automatically use MSI or MSI-X interrupt vectors, use the following
function:

  int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
		unsigned int max_vecs, unsigned int flags);

which allocates up to max_vecs interrupt vectors for a PCI device.  It
returns the number of vectors allocated or a negative error.  If the device
has requirements for a minimum number of vectors the driver can pass a
min_vecs argument set to this limit, and the PCI core will return -ENOSPC
if it can't meet the minimum number of vectors.

The flags argument should normally be set to 0, but can be used to pass the
PCI_IRQ_NOMSI and PCI_IRQ_NOMSIX flags in case a device claims to support
MSI or MSI-X, but the support is broken, or to pass PCI_IRQ_NOLEGACY in
case the device does not support legacy interrupt lines.

By default this function will spread the interrupts around the available
CPUs, but this feature can be disabled by passing the PCI_IRQ_NOAFFINITY
flag.

To get the Linux IRQ numbers passed to request_irq() and free_irq() and the
vectors, use the following function:

  int pci_irq_vector(struct pci_dev *dev, unsigned int nr);

Any allocated resources should be freed before removing the device using
the following function:

  void pci_free_irq_vectors(struct pci_dev *dev);

If a device supports both MSI-X and MSI capabilities, this API will use the
MSI-X facilities in preference to the MSI facilities.  MSI-X supports any
number of interrupts between 1 and 2048.  In contrast, MSI is restricted to
a maximum of 32 interrupts (and must be a power of two).  In addition, the
MSI interrupt vectors must be allocated consecutively, so the system might
not be able to allocate as many vectors for MSI as it could for MSI-X.  On
some platforms, MSI interrupts must all be targeted at the same set of CPUs
whereas MSI-X interrupts can all be targeted at different CPUs.

If a device supports neither MSI-X nor MSI it will fall back to a single
legacy IRQ vector.

The typical usage of MSI or MSI-X interrupts is to allocate as many vectors
as possible, likely up to the limit supported by the device.  If nvec is
larger than the number supported by the device it will automatically be
capped to the supported limit, so there is no need to query the number of
vectors supported beforehand:

	nvec = pci_alloc_irq_vectors(pdev, 1, nvec, 0);
	if (nvec < 0)
		goto out_err;

If a driver is unable or unwilling to deal with a variable number of MSI
interrupts it can request a particular number of interrupts by passing that
number to the pci_alloc_irq_vectors() function as both 'min_vecs' and
'max_vecs' parameters:

	ret = pci_alloc_irq_vectors(pdev, nvec, nvec, 0);
	if (ret < 0)
		goto out_err;

The most notorious example of the request type described above is enabling
the single MSI mode for a device.  It could be done by passing two 1s as
'min_vecs' and 'max_vecs':

	ret = pci_alloc_irq_vectors(pdev, 1, 1, 0);
	if (ret < 0)
		goto out_err;

Some devices might not support using legacy line interrupts, in which case
the PCI_IRQ_NOLEGACY flag can be used to fail the request if the platform
can't provide MSI or MSI-X interrupts:

	nvec = pci_alloc_irq_vectors(pdev, 1, nvec, PCI_IRQ_NOLEGACY);
	if (nvec < 0)
		goto out_err;

4.3 Legacy APIs

The following old APIs to enable and disable MSI or MSI-X interrupts should
not be used in new code:

  pci_enable_msi()		/* deprecated */
  pci_enable_msi_range()	/* deprecated */
  pci_enable_msi_exact()	/* deprecated */
  pci_disable_msi()		/* deprecated */
  pci_enable_msix_range()	/* deprecated */
  pci_enable_msix_exact()	/* deprecated */
  pci_disable_msix()		/* deprecated */

Additionally there are APIs to provide the number of supported MSI or MSI-X
vectors: pci_msi_vec_count() and pci_msix_vec_count().  In general these
should be avoided in favor of letting pci_alloc_irq_vectors() cap the
number of vectors.  If you have a legitimate special use case for the count
of vectors we might have to revisit that decision and add a
pci_nr_irq_vectors() helper that handles MSI and MSI-X transparently.

4.4 Considerations when using MSIs

4.4.1 Spinlocks

Most device drivers have a per-device spinlock which is taken in the
interrupt handler.  With pin-based interrupts or a single MSI, it is not
···
spin_lock_irqsave() or spin_lock_irq() which disable local interrupts
and acquire the lock (see Documentation/DocBook/kernel-locking).

4.5 How to tell whether MSI/MSI-X is enabled on a device

Using 'lspci -v' (as root) may show some devices with "MSI", "Message
Signalled Interrupts" or "MSI-X" capabilities.  Each of these capabilities
···
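
The pci_alloc_irq_vectors() / pci_irq_vector() / pci_free_irq_vectors()
flow documented in the updated HOWTO can be sketched as a driver-init
fragment.  This is an illustrative sketch, not code from the series: the
'foo' names, the FOO_MAX_VECS limit, and the no-op interrupt handler are
hypothetical.

	#include <linux/pci.h>
	#include <linux/interrupt.h>

	#define FOO_MAX_VECS	8	/* hypothetical device limit */

	/* Hypothetical handler; 'dev_id' is the driver's private data. */
	static irqreturn_t foo_irq_handler(int irq, void *dev_id)
	{
		return IRQ_HANDLED;
	}

	static int foo_setup_irqs(struct pci_dev *pdev, void *priv)
	{
		int nvec, i, err;

		/*
		 * Ask for 1..FOO_MAX_VECS vectors; with flags == 0 the core
		 * prefers MSI-X, then MSI, then a single legacy IRQ.
		 */
		nvec = pci_alloc_irq_vectors(pdev, 1, FOO_MAX_VECS, 0);
		if (nvec < 0)
			return nvec;

		for (i = 0; i < nvec; i++) {
			/* Map vector index to its Linux IRQ number. */
			err = request_irq(pci_irq_vector(pdev, i),
					  foo_irq_handler, 0, "foo", priv);
			if (err)
				goto out_free;
		}
		return 0;

	out_free:
		while (--i >= 0)
			free_irq(pci_irq_vector(pdev, i), priv);
		pci_free_irq_vectors(pdev);
		return err;
	}

Note that teardown mirrors setup: every request_irq() must be undone with
free_irq() before pci_free_irq_vectors() releases the vectors.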
+56
Documentation/devicetree/bindings/pci/aardvark-pci.txt
Aardvark PCIe controller

This PCIe controller is used on the Marvell Armada 3700 ARM64 SoC.

The Device Tree node describing an Aardvark PCIe controller must
contain the following properties:

 - compatible: Should be "marvell,armada-3700-pcie"
 - reg: range of registers for the PCIe controller
 - interrupts: the interrupt line of the PCIe controller
 - #address-cells: set to <3>
 - #size-cells: set to <2>
 - device_type: set to "pci"
 - ranges: ranges for the PCI memory and I/O regions
 - #interrupt-cells: set to <1>
 - msi-controller: indicates that the PCIe controller can itself
   handle MSI interrupts
 - msi-parent: pointer to the MSI controller to be used
 - interrupt-map-mask and interrupt-map: standard PCI properties to
   define the mapping of the PCIe interface to interrupt numbers.
 - bus-range: PCI bus numbers covered

In addition, the Device Tree describing an Aardvark PCIe controller
must include a sub-node that describes the legacy interrupt controller
built into the PCIe controller.  This sub-node must have the following
properties:

 - interrupt-controller
 - #interrupt-cells: set to <1>

Example:

	pcie0: pcie@d0070000 {
		compatible = "marvell,armada-3700-pcie";
		device_type = "pci";
		status = "disabled";
		reg = <0 0xd0070000 0 0x20000>;
		#address-cells = <3>;
		#size-cells = <2>;
		bus-range = <0x00 0xff>;
		interrupts = <GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>;
		#interrupt-cells = <1>;
		msi-controller;
		msi-parent = <&pcie0>;
		ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x1000000 /* Port 0 MEM */
			  0x81000000 0 0xe9000000 0 0xe9000000 0 0x10000>; /* Port 0 IO */
		interrupt-map-mask = <0 0 0 7>;
		interrupt-map = <0 0 0 1 &pcie_intc 0>,
				<0 0 0 2 &pcie_intc 1>,
				<0 0 0 3 &pcie_intc 2>,
				<0 0 0 4 &pcie_intc 3>;
		pcie_intc: interrupt-controller {
			interrupt-controller;
			#interrupt-cells = <1>;
		};
	};
Documentation/devicetree/bindings/pci/axis,artpec6-pcie.txt  +46

+ * Axis ARTPEC-6 PCIe interface
+
+ This PCIe host controller is based on the Synopsys DesignWare PCIe IP
+ and thus inherits all the common properties defined in designware-pcie.txt.
+
+ Required properties:
+ - compatible: "axis,artpec6-pcie", "snps,dw-pcie"
+ - reg: base addresses and lengths of the PCIe controller (DBI),
+   the phy controller, and configuration address space.
+ - reg-names: Must include the following entries:
+   - "dbi"
+   - "phy"
+   - "config"
+ - interrupts: A list of interrupt outputs of the controller. Must contain an
+   entry for each entry in the interrupt-names property.
+ - interrupt-names: Must include the following entries:
+   - "msi": The interrupt that is asserted when an MSI is received
+ - axis,syscon-pcie: A phandle pointing to the ARTPEC-6 system controller,
+   used to enable and control the Synopsys IP.
+
+ Example:
+
+ 	pcie@f8050000 {
+ 		compatible = "axis,artpec6-pcie", "snps,dw-pcie";
+ 		reg = <0xf8050000 0x2000
+ 		       0xf8040000 0x1000
+ 		       0xc0000000 0x1000>;
+ 		reg-names = "dbi", "phy", "config";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+ 		device_type = "pci";
+ 		/* downstream I/O */
+ 		ranges = <0x81000000 0 0x00010000 0xc0010000 0 0x00010000
+ 		/* non-prefetchable memory */
+ 			  0x82000000 0 0xc0020000 0xc0020000 0 0x1ffe0000>;
+ 		num-lanes = <2>;
+ 		interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
+ 		interrupt-names = "msi";
+ 		#interrupt-cells = <1>;
+ 		interrupt-map-mask = <0 0 0 0x7>;
+ 		interrupt-map = <0 0 0 1 &intc GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>,
+ 				<0 0 0 2 &intc GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
+ 				<0 0 0 3 &intc GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
+ 				<0 0 0 4 &intc GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>;
+ 		axis,syscon-pcie = <&syscon>;
+ 	};
Documentation/kernel-parameters.txt  +9

 	resource_alignment=
 			Format:
 			[<order of align>@][<domain>:]<bus>:<slot>.<func>[; ...]
+			[<order of align>@]pci:<vendor>:<device>\
+				[:<subvendor>:<subdevice>][; ...]
 			Specifies alignment and device to reassign
 			aligned memory resources.
 			If <order of align> is not specified,
···
 	hpmemsize=nn[KMG]	The fixed amount of bus space which is
 			reserved for hotplug bridge's memory window.
 			Default size is 2 megabytes.
+	hpbussize=nn	The minimum amount of additional bus numbers
+			reserved for buses below a hotplug bridge.
+			Default is 1.
 	realloc=	Enable/disable reallocating PCI bridge resources
 			if allocations done by BIOS are too small to
 			accommodate resources required by all child
···
 			unconditionally.
 		compat	Treat PCIe ports as PCI-to-PCI bridges, disable the PCIe
 			ports driver.
+
+	pcie_port_pm=	[PCIE] PCIe port power management handling:
+		off	Disable power management of all PCIe ports
+		force	Forcibly enable power management of all PCIe ports
 
 	pcie_pme=	[PCIE,PM] Native PCIe PME signaling options:
 		nomsi	Do not use MSI for native PCIe PME signaling (this makes
MAINTAINERS  +17

 Q:	http://patchwork.ozlabs.org/project/linux-pci/list/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git
 S:	Supported
+F:	Documentation/devicetree/bindings/pci/
 F:	Documentation/PCI/
 F:	drivers/pci/
 F:	include/linux/pci*
···
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	drivers/pci/host/*mvebu*
+
+PCI DRIVER FOR AARDVARK (Marvell Armada 3700)
+M:	Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
+L:	linux-pci@vger.kernel.org
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S:	Maintained
+F:	drivers/pci/host/pci-aardvark.c
 
 PCI DRIVER FOR NVIDIA TEGRA
 M:	Thierry Reding <thierry.reding@gmail.com>
···
 S:	Maintained
 F:	Documentation/devicetree/bindings/pci/xgene-pci-msi.txt
 F:	drivers/pci/host/pci-xgene-msi.c
+
+PCIE DRIVER FOR AXIS ARTPEC
+M:	Niklas Cassel <niklas.cassel@axis.com>
+M:	Jesper Nilsson <jesper.nilsson@axis.com>
+L:	linux-arm-kernel@axis.com
+L:	linux-pci@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/pci/axis,artpec*
+F:	drivers/pci/host/*artpec*
 
 PCIE DRIVER FOR HISILICON
 M:	Zhou Wang <wangzhou1@hisilicon.com>
arch/arm/Kconfig  +1 -1

 	depends on ARCH_MULTI_V7
 	select ARM_AMBA
 	select ARM_GIC
-	select ARM_GIC_V2M if PCI_MSI
+	select ARM_GIC_V2M if PCI
 	select ARM_GIC_V3
 	select ARM_PSCI
 	select HAVE_ARM_ARCH_TIMER
arch/arm/include/asm/mach/pci.h  +1

 	struct msi_controller *msi_ctrl;
 	struct pci_ops *ops;
 	int nr_controllers;
+	unsigned int io_optional:1;
 	void **private_data;
 	int (*setup)(int nr, struct pci_sys_data *);
 	struct pci_bus *(*scan)(int nr, struct pci_sys_data *);
arch/arm/kernel/bios32.c  +20 -25

 	return irq;
 }
 
-static int pcibios_init_resources(int busnr, struct pci_sys_data *sys)
+static int pcibios_init_resource(int busnr, struct pci_sys_data *sys,
+				 int io_optional)
 {
 	int ret;
 	struct resource_entry *window;
···
 		pci_add_resource_offset(&sys->resources,
 					&iomem_resource, sys->mem_offset);
 	}
+
+	/*
+	 * If a platform says I/O port support is optional, we don't add
+	 * the default I/O space.  The platform is responsible for adding
+	 * any I/O space it needs.
+	 */
+	if (io_optional)
+		return 0;
 
 	resource_list_for_each_entry(window, &sys->resources)
 		if (resource_type(window->res) == IORESOURCE_IO)
···
 	if (ret > 0) {
 		struct pci_host_bridge *host_bridge;
 
-		ret = pcibios_init_resources(nr, sys);
+		ret = pcibios_init_resource(nr, sys, hw->io_optional);
 		if (ret)  {
 			kfree(sys);
 			break;
···
 	list_for_each_entry(sys, &head, node) {
 		struct pci_bus *bus = sys->bus;
 
-		if (!pci_has_flag(PCI_PROBE_ONLY)) {
+		/*
+		 * We insert PCI resources into the iomem_resource and
+		 * ioport_resource trees in either pci_bus_claim_resources()
+		 * or pci_bus_assign_resources().
+		 */
+		if (pci_has_flag(PCI_PROBE_ONLY)) {
+			pci_bus_claim_resources(bus);
+		} else {
 			struct pci_bus *child;
 
-			/*
-			 * Size the bridge windows.
-			 */
 			pci_bus_size_bridges(bus);
-
-			/*
-			 * Assign resources.
-			 */
 			pci_bus_assign_resources(bus);
 
 			list_for_each_entry(child, &bus->children, node)
 				pcie_bus_configure_settings(child);
 		}
-		/*
-		 * Tell drivers about devices found.
-		 */
+
 		pci_bus_add_devices(bus);
 	}
 }
···
 					    start, size, align);
 
 	return start;
-}
-
-/**
- * pcibios_enable_device - Enable I/O and memory.
- * @dev: PCI device to be enabled
- */
-int pcibios_enable_device(struct pci_dev *dev, int mask)
-{
-	if (pci_has_flag(PCI_PROBE_ONLY))
-		return 0;
-
-	return pci_enable_resources(dev, mask);
 }
 
 int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
arch/arm64/Kconfig  +4 -2

 	select ACPI_CCA_REQUIRED if ACPI
 	select ACPI_GENERIC_GSI if ACPI
 	select ACPI_REDUCED_HARDWARE_ONLY if ACPI
+	select ACPI_MCFG if ACPI
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
···
 	select ARM_ARCH_TIMER
 	select ARM_GIC
 	select AUDIT_ARCH_COMPAT_GENERIC
-	select ARM_GIC_V2M if PCI_MSI
+	select ARM_GIC_V2M if PCI
 	select ARM_GIC_V3
-	select ARM_GIC_V3_ITS if PCI_MSI
+	select ARM_GIC_V3_ITS if PCI
 	select ARM_PSCI_FW
 	select BUILDTIME_EXTABLE_SORT
 	select CLONE_BACKWARDS
···
 	select OF_EARLY_FLATTREE
 	select OF_NUMA if NUMA && OF
 	select OF_RESERVED_MEM
+	select PCI_ECAM if ACPI
 	select PERF_USE_VMALLOC
 	select POWER_RESET
 	select POWER_SUPPLY
arch/arm64/boot/dts/marvell/armada-3720-db.dts  +5

 &usb3 {
 	status = "okay";
 };
+
+/* CON17 (PCIe) / CON12 (mini-PCIe) */
+&pcie0 {
+	status = "okay";
+};
arch/arm64/boot/dts/marvell/armada-37xx.dtsi  +25

 				      <0x1d40000 0x40000>; /* GICR */
 			};
 		};
+
+		pcie0: pcie@d0070000 {
+			compatible = "marvell,armada-3700-pcie";
+			device_type = "pci";
+			status = "disabled";
+			reg = <0 0xd0070000 0 0x20000>;
+			#address-cells = <3>;
+			#size-cells = <2>;
+			bus-range = <0x00 0xff>;
+			interrupts = <GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>;
+			#interrupt-cells = <1>;
+			msi-parent = <&pcie0>;
+			msi-controller;
+			ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x1000000 /* Port 0 MEM */
+				  0x81000000 0 0xe9000000 0 0xe9000000 0 0x10000>; /* Port 0 IO */
+			interrupt-map-mask = <0 0 0 7>;
+			interrupt-map = <0 0 0 1 &pcie_intc 0>,
+					<0 0 0 2 &pcie_intc 1>,
+					<0 0 0 3 &pcie_intc 2>,
+					<0 0 0 4 &pcie_intc 3>;
+			pcie_intc: interrupt-controller {
+				interrupt-controller;
+				#interrupt-cells = <1>;
+			};
+		};
 	};
 };
arch/arm64/kernel/pci.c  +138 -21

 #include <linux/mm.h>
 #include <linux/of_pci.h>
 #include <linux/of_platform.h>
+#include <linux/pci.h>
+#include <linux/pci-acpi.h>
+#include <linux/pci-ecam.h>
 #include <linux/slab.h>
 
 /*
···
 	return res->start;
 }
 
-/**
- * pcibios_enable_device - Enable I/O and memory.
- * @dev: PCI device to be enabled
- * @mask: bitmask of BARs to enable
- */
-int pcibios_enable_device(struct pci_dev *dev, int mask)
-{
-	if (pci_has_flag(PCI_PROBE_ONLY))
-		return 0;
-
-	return pci_enable_resources(dev, mask);
-}
-
 /*
- * Try to assign the IRQ number from DT when adding a new device
+ * Try to assign the IRQ number when probing a new device
  */
-int pcibios_add_device(struct pci_dev *dev)
+int pcibios_alloc_irq(struct pci_dev *dev)
 {
-	dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
+	if (acpi_disabled)
+		dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
+#ifdef CONFIG_ACPI
+	else
+		return acpi_pci_irq_enable(dev);
+#endif
 
 	return 0;
 }
···
 int raw_pci_read(unsigned int domain, unsigned int bus,
 		  unsigned int devfn, int reg, int len, u32 *val)
 {
-	return -ENXIO;
+	struct pci_bus *b = pci_find_bus(domain, bus);
+
+	if (!b)
+		return PCIBIOS_DEVICE_NOT_FOUND;
+	return b->ops->read(b, devfn, reg, len, val);
 }
 
 int raw_pci_write(unsigned int domain, unsigned int bus,
 		unsigned int devfn, int reg, int len, u32 val)
 {
-	return -ENXIO;
+	struct pci_bus *b = pci_find_bus(domain, bus);
+
+	if (!b)
+		return PCIBIOS_DEVICE_NOT_FOUND;
+	return b->ops->write(b, devfn, reg, len, val);
 }
 
 #ifdef CONFIG_NUMA
···
 #endif
 
 #ifdef CONFIG_ACPI
-/* Root bridge scanning */
+
+struct acpi_pci_generic_root_info {
+	struct acpi_pci_root_info	common;
+	struct pci_config_window	*cfg;	/* config space mapping */
+};
+
+int acpi_pci_bus_find_domain_nr(struct pci_bus *bus)
+{
+	struct pci_config_window *cfg = bus->sysdata;
+	struct acpi_device *adev = to_acpi_device(cfg->parent);
+	struct acpi_pci_root *root = acpi_driver_data(adev);
+
+	return root->segment;
+}
+
+int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
+{
+	if (!acpi_disabled) {
+		struct pci_config_window *cfg = bridge->bus->sysdata;
+		struct acpi_device *adev = to_acpi_device(cfg->parent);
+		ACPI_COMPANION_SET(&bridge->dev, adev);
+	}
+
+	return 0;
+}
+
+/*
+ * Lookup the bus range for the domain in MCFG, and set up config space
+ * mapping.
+ */
+static struct pci_config_window *
+pci_acpi_setup_ecam_mapping(struct acpi_pci_root *root)
+{
+	struct resource *bus_res = &root->secondary;
+	u16 seg = root->segment;
+	struct pci_config_window *cfg;
+	struct resource cfgres;
+	unsigned int bsz;
+
+	/* Use address from _CBA if present, otherwise lookup MCFG */
+	if (!root->mcfg_addr)
+		root->mcfg_addr = pci_mcfg_lookup(seg, bus_res);
+
+	if (!root->mcfg_addr) {
+		dev_err(&root->device->dev, "%04x:%pR ECAM region not found\n",
+			seg, bus_res);
+		return NULL;
+	}
+
+	bsz = 1 << pci_generic_ecam_ops.bus_shift;
+	cfgres.start = root->mcfg_addr + bus_res->start * bsz;
+	cfgres.end = cfgres.start + resource_size(bus_res) * bsz - 1;
+	cfgres.flags = IORESOURCE_MEM;
+	cfg = pci_ecam_create(&root->device->dev, &cfgres, bus_res,
+			      &pci_generic_ecam_ops);
+	if (IS_ERR(cfg)) {
+		dev_err(&root->device->dev, "%04x:%pR error %ld mapping ECAM\n",
+			seg, bus_res, PTR_ERR(cfg));
+		return NULL;
+	}
+
+	return cfg;
+}
+
+/* release_info: free resources allocated by init_info */
+static void pci_acpi_generic_release_info(struct acpi_pci_root_info *ci)
+{
+	struct acpi_pci_generic_root_info *ri;
+
+	ri = container_of(ci, struct acpi_pci_generic_root_info, common);
+	pci_ecam_free(ri->cfg);
+	kfree(ri);
+}
+
+static struct acpi_pci_root_ops acpi_pci_root_ops = {
+	.release_info = pci_acpi_generic_release_info,
+};
+
+/* Interface called from ACPI code to setup PCI host controller */
 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
 {
-	/* TODO: Should be revisited when implementing PCI on ACPI */
-	return NULL;
+	int node = acpi_get_node(root->device->handle);
+	struct acpi_pci_generic_root_info *ri;
+	struct pci_bus *bus, *child;
+
+	ri = kzalloc_node(sizeof(*ri), GFP_KERNEL, node);
+	if (!ri)
+		return NULL;
+
+	ri->cfg = pci_acpi_setup_ecam_mapping(root);
+	if (!ri->cfg) {
+		kfree(ri);
+		return NULL;
+	}
+
+	acpi_pci_root_ops.pci_ops = &ri->cfg->ops->pci_ops;
+	bus = acpi_pci_root_create(root, &acpi_pci_root_ops, &ri->common,
+				   ri->cfg);
+	if (!bus)
+		return NULL;
+
+	pci_bus_size_bridges(bus);
+	pci_bus_assign_resources(bus);
+
+	list_for_each_entry(child, &bus->children, node)
+		pcie_bus_configure_settings(child);
+
+	return bus;
 }
+
+void pcibios_add_bus(struct pci_bus *bus)
+{
+	acpi_pci_add_bus(bus);
+}
+
+void pcibios_remove_bus(struct pci_bus *bus)
+{
+	acpi_pci_remove_bus(bus);
+}
+
 #endif
arch/microblaze/include/asm/pci.h  -3

 			       pgprot_t prot);
 
 #define HAVE_ARCH_PCI_RESOURCE_TO_USER
-extern void pci_resource_to_user(const struct pci_dev *dev, int bar,
-				 const struct resource *rsrc,
-				 resource_size_t *start, resource_size_t *end);
 
 extern void pcibios_setup_bus_devices(struct pci_bus *bus);
 extern void pcibios_setup_bus_self(struct pci_bus *bus);
arch/microblaze/pci/pci-common.c  +15 -58

 }
 
 /*
- * Set vm_page_prot of VMA, as appropriate for this architecture, for a pci
- * device mapping.
- */
-static pgprot_t __pci_mmap_set_pgprot(struct pci_dev *dev, struct resource *rp,
-				      pgprot_t protection,
-				      enum pci_mmap_state mmap_state,
-				      int write_combine)
-{
-	pgprot_t prot = protection;
-
-	/* Write combine is always 0 on non-memory space mappings. On
-	 * memory space, if the user didn't pass 1, we check for a
-	 * "prefetchable" resource. This is a bit hackish, but we use
-	 * this to workaround the inability of /sysfs to provide a write
-	 * combine bit
-	 */
-	if (mmap_state != pci_mmap_mem)
-		write_combine = 0;
-	else if (write_combine == 0) {
-		if (rp->flags & IORESOURCE_PREFETCH)
-			write_combine = 1;
-	}
-
-	return pgprot_noncached(prot);
-}
-
-/*
  * This one is used by /dev/mem and fbdev who have no clue about the
  * PCI device, it tries to find the PCI device first and calls the
  * above routine
···
 		return -EINVAL;
 
 	vma->vm_pgoff = offset >> PAGE_SHIFT;
-	vma->vm_page_prot = __pci_mmap_set_pgprot(dev, rp,
-						  vma->vm_page_prot,
-						  mmap_state, write_combine);
+	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
 	ret = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
 			      vma->vm_end - vma->vm_start, vma->vm_page_prot);
···
 			 const struct resource *rsrc,
 			 resource_size_t *start, resource_size_t *end)
 {
-	struct pci_controller *hose = pci_bus_to_host(dev->bus);
-	resource_size_t offset = 0;
+	struct pci_bus_region region;
 
-	if (hose == NULL)
+	if (rsrc->flags & IORESOURCE_IO) {
+		pcibios_resource_to_bus(dev->bus, &region,
+					(struct resource *) rsrc);
+		*start = region.start;
+		*end = region.end;
 		return;
+	}
 
-	if (rsrc->flags & IORESOURCE_IO)
-		offset = (unsigned long)hose->io_base_virt - _IO_BASE;
-
-	/* We pass a fully fixed up address to userland for MMIO instead of
-	 * a BAR value because X is lame and expects to be able to use that
-	 * to pass to /dev/mem !
+	/* We pass a CPU physical address to userland for MMIO instead of a
+	 * BAR value because X is lame and expects to be able to use that
+	 * to pass to /dev/mem!
 	 *
-	 * That means that we'll have potentially 64 bits values where some
-	 * userland apps only expect 32 (like X itself since it thinks only
-	 * Sparc has 64 bits MMIO) but if we don't do that, we break it on
-	 * 32 bits CHRPs :-(
-	 *
-	 * Hopefully, the sysfs insterface is immune to that gunk. Once X
-	 * has been fixed (and the fix spread enough), we can re-enable the
-	 * 2 lines below and pass down a BAR value to userland. In that case
-	 * we'll also have to re-enable the matching code in
-	 * __pci_mmap_make_offset().
-	 *
-	 * BenH.
+	 * That means we may have 64-bit values where some apps only expect
+	 * 32 (like X itself since it thinks only Sparc has 64-bit MMIO).
 	 */
-#if 0
-	else if (rsrc->flags & IORESOURCE_MEM)
-		offset = hose->pci_mem_offset;
-#endif
-
-	*start = rsrc->start - offset;
-	*end = rsrc->end - offset;
+	*start = rsrc->start;
+	*end = rsrc->end;
 }
 
 /**
arch/mips/include/asm/pci.h  -10

 
 #define HAVE_ARCH_PCI_RESOURCE_TO_USER
 
-static inline void pci_resource_to_user(const struct pci_dev *dev, int bar,
-		const struct resource *rsrc, resource_size_t *start,
-		resource_size_t *end)
-{
-	phys_addr_t size = resource_size(rsrc);
-
-	*start = fixup_bigphys_addr(rsrc->start, size);
-	*end = rsrc->start + size;
-}
-
 /*
  * Dynamic DMA mapping stuff.
  * MIPS has everything mapped statically.
arch/mips/pci/pci.c  +18 -1

 		need_domain_info = 1;
 	}
 
-	if (!pci_has_flag(PCI_PROBE_ONLY)) {
+	/*
+	 * We insert PCI resources into the iomem_resource and
+	 * ioport_resource trees in either pci_bus_claim_resources()
+	 * or pci_bus_assign_resources().
+	 */
+	if (pci_has_flag(PCI_PROBE_ONLY)) {
+		pci_bus_claim_resources(bus);
+	} else {
 		pci_bus_size_bridges(bus);
 		pci_bus_assign_resources(bus);
 	}
···
 EXPORT_SYMBOL(PCIBIOS_MIN_IO);
 EXPORT_SYMBOL(PCIBIOS_MIN_MEM);
+
+void pci_resource_to_user(const struct pci_dev *dev, int bar,
+			  const struct resource *rsrc, resource_size_t *start,
+			  resource_size_t *end)
+{
+	phys_addr_t size = resource_size(rsrc);
+
+	*start = fixup_bigphys_addr(rsrc->start, size);
+	*end = rsrc->start + size;
+}
 
 int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
 			enum pci_mmap_state mmap_state, int write_combine)
arch/powerpc/include/asm/pci.h  -3

 			       pgprot_t prot);
 
 #define HAVE_ARCH_PCI_RESOURCE_TO_USER
-extern void pci_resource_to_user(const struct pci_dev *dev, int bar,
-				 const struct resource *rsrc,
-				 resource_size_t *start, resource_size_t *end);
 
 extern resource_size_t pcibios_io_space_offset(struct pci_controller *hose);
 extern void pcibios_setup_bus_devices(struct pci_bus *bus);
arch/powerpc/kernel/pci-common.c  +18 -61

 }
 
 /*
- * Set vm_page_prot of VMA, as appropriate for this architecture, for a pci
- * device mapping.
- */
-static pgprot_t __pci_mmap_set_pgprot(struct pci_dev *dev, struct resource *rp,
-				      pgprot_t protection,
-				      enum pci_mmap_state mmap_state,
-				      int write_combine)
-{
-
-	/* Write combine is always 0 on non-memory space mappings. On
-	 * memory space, if the user didn't pass 1, we check for a
-	 * "prefetchable" resource. This is a bit hackish, but we use
-	 * this to workaround the inability of /sysfs to provide a write
-	 * combine bit
-	 */
-	if (mmap_state != pci_mmap_mem)
-		write_combine = 0;
-	else if (write_combine == 0) {
-		if (rp->flags & IORESOURCE_PREFETCH)
-			write_combine = 1;
-	}
-
-	/* XXX would be nice to have a way to ask for write-through */
-	if (write_combine)
-		return pgprot_noncached_wc(protection);
-	else
-		return pgprot_noncached(protection);
-}
-
-/*
  * This one is used by /dev/mem and fbdev who have no clue about the
  * PCI device, it tries to find the PCI device first and calls the
  * above routine
···
 		return -EINVAL;
 
 	vma->vm_pgoff = offset >> PAGE_SHIFT;
-	vma->vm_page_prot = __pci_mmap_set_pgprot(dev, rp,
-						  vma->vm_page_prot,
-						  mmap_state, write_combine);
+	if (write_combine)
+		vma->vm_page_prot = pgprot_noncached_wc(vma->vm_page_prot);
+	else
+		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
 	ret = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
 			      vma->vm_end - vma->vm_start, vma->vm_page_prot);
···
 			 const struct resource *rsrc,
 			 resource_size_t *start, resource_size_t *end)
 {
-	struct pci_controller *hose = pci_bus_to_host(dev->bus);
-	resource_size_t offset = 0;
+	struct pci_bus_region region;
 
-	if (hose == NULL)
+	if (rsrc->flags & IORESOURCE_IO) {
+		pcibios_resource_to_bus(dev->bus, &region,
+					(struct resource *) rsrc);
+		*start = region.start;
+		*end = region.end;
 		return;
+	}
 
-	if (rsrc->flags & IORESOURCE_IO)
-		offset = (unsigned long)hose->io_base_virt - _IO_BASE;
-
-	/* We pass a fully fixed up address to userland for MMIO instead of
-	 * a BAR value because X is lame and expects to be able to use that
-	 * to pass to /dev/mem !
+	/* We pass a CPU physical address to userland for MMIO instead of a
+	 * BAR value because X is lame and expects to be able to use that
+	 * to pass to /dev/mem!
 	 *
-	 * That means that we'll have potentially 64 bits values where some
-	 * userland apps only expect 32 (like X itself since it thinks only
-	 * Sparc has 64 bits MMIO) but if we don't do that, we break it on
-	 * 32 bits CHRPs :-(
-	 *
-	 * Hopefully, the sysfs insterface is immune to that gunk. Once X
-	 * has been fixed (and the fix spread enough), we can re-enable the
-	 * 2 lines below and pass down a BAR value to userland. In that case
-	 * we'll also have to re-enable the matching code in
-	 * __pci_mmap_make_offset().
-	 *
-	 * BenH.
+	 * That means we may have 64-bit values where some apps only expect
+	 * 32 (like X itself since it thinks only Sparc has 64-bit MMIO).
 	 */
-#if 0
-	else if (rsrc->flags & IORESOURCE_MEM)
-		offset = hose->pci_mem_offset;
-#endif
-
-	*start = rsrc->start - offset;
-	*end = rsrc->end - offset;
+	*start = rsrc->start;
+	*end = rsrc->end;
 }
 
 /**
arch/sparc/include/asm/pci_64.h  -3

 }
 
 #define HAVE_ARCH_PCI_RESOURCE_TO_USER
-void pci_resource_to_user(const struct pci_dev *dev, int bar,
-			  const struct resource *rsrc,
-			  resource_size_t *start, resource_size_t *end);
 #endif /* __KERNEL__ */
 
 #endif /* __SPARC64_PCI_H */
arch/sparc/kernel/pci.c  +11 -9

 			  const struct resource *rp, resource_size_t *start,
 			  resource_size_t *end)
 {
-	struct pci_pbm_info *pbm = pdev->dev.archdata.host_controller;
-	unsigned long offset;
+	struct pci_bus_region region;
 
-	if (rp->flags & IORESOURCE_IO)
-		offset = pbm->io_space.start;
-	else
-		offset = pbm->mem_space.start;
-
-	*start = rp->start - offset;
-	*end = rp->end - offset;
+	/*
+	 * "User" addresses are shown in /sys/devices/pci.../.../resource
+	 * and /proc/bus/pci/devices and used as mmap offsets for
+	 * /proc/bus/pci/BB/DD.F files (see proc_bus_pci_mmap()).
+	 *
+	 * On sparc, these are PCI bus addresses, i.e., raw BAR values.
+	 */
+	pcibios_resource_to_bus(pdev->bus, &region, (struct resource *) rp);
+	*start = region.start;
+	*end = region.end;
 }
 
 void pcibios_set_master(struct pci_dev *dev)
arch/unicore32/kernel/pci.c  +2 -7

 
 	pci_fixup_irqs(pci_common_swizzle, pci_puv3_map_irq);
 
-	if (!pci_has_flag(PCI_PROBE_ONLY)) {
-		pci_bus_size_bridges(puv3_bus);
-		pci_bus_assign_resources(puv3_bus);
-	}
+	pci_bus_size_bridges(puv3_bus);
+	pci_bus_assign_resources(puv3_bus);
 	pci_bus_add_devices(puv3_bus);
 	return 0;
 }
···
 {
 	if (!strcmp(str, "debug")) {
 		debug_pci = 1;
-		return NULL;
-	} else if (!strcmp(str, "firmware")) {
-		pci_add_flags(PCI_PROBE_ONLY);
 		return NULL;
 	}
 	return str;
arch/x86/pci/common.c  +1 -1

 	if (pci_probe & PCI_NOASSIGN_BARS) {
 		/*
 		 * If the BIOS did not assign the BAR, zero out the
-		 * resource so the kernel doesn't attmept to assign
+		 * resource so the kernel doesn't attempt to assign
 		 * it later on in pci_assign_unassigned_resources
 		 */
 		for (bar = 0; bar <= PCI_STD_RESOURCE_END; bar++) {
arch/x86/pci/vmd.c  +25 -16

 static void vmd_irq_enable(struct irq_data *data)
 {
 	struct vmd_irq *vmdirq = data->chip_data;
+	unsigned long flags;
 
-	raw_spin_lock(&list_lock);
+	raw_spin_lock_irqsave(&list_lock, flags);
 	list_add_tail_rcu(&vmdirq->node, &vmdirq->irq->irq_list);
-	raw_spin_unlock(&list_lock);
+	raw_spin_unlock_irqrestore(&list_lock, flags);
 
 	data->chip->irq_unmask(data);
 }
···
 static void vmd_irq_disable(struct irq_data *data)
 {
 	struct vmd_irq *vmdirq = data->chip_data;
+	unsigned long flags;
 
 	data->chip->irq_mask(data);
 
-	raw_spin_lock(&list_lock);
+	raw_spin_lock_irqsave(&list_lock, flags);
 	list_del_rcu(&vmdirq->node);
-	raw_spin_unlock(&list_lock);
+	INIT_LIST_HEAD_RCU(&vmdirq->node);
+	raw_spin_unlock_irqrestore(&list_lock, flags);
 }
 
 /*
···
  * XXX: We can be even smarter selecting the best IRQ once we solve the
  * affinity problem.
  */
-static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd)
+static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *desc)
 {
-	int i, best = 0;
+	int i, best = 1;
+	unsigned long flags;
 
-	raw_spin_lock(&list_lock);
+	if (!desc->msi_attrib.is_msix || vmd->msix_count == 1)
+		return &vmd->irqs[0];
+
+	raw_spin_lock_irqsave(&list_lock, flags);
 	for (i = 1; i < vmd->msix_count; i++)
 		if (vmd->irqs[i].count < vmd->irqs[best].count)
 			best = i;
 	vmd->irqs[best].count++;
-	raw_spin_unlock(&list_lock);
+	raw_spin_unlock_irqrestore(&list_lock, flags);
 
 	return &vmd->irqs[best];
 }
···
 			unsigned int virq, irq_hw_number_t hwirq,
 			msi_alloc_info_t *arg)
 {
-	struct vmd_dev *vmd = vmd_from_bus(msi_desc_to_pci_dev(arg->desc)->bus);
+	struct msi_desc *desc = arg->desc;
+	struct vmd_dev *vmd = vmd_from_bus(msi_desc_to_pci_dev(desc)->bus);
 	struct vmd_irq *vmdirq = kzalloc(sizeof(*vmdirq), GFP_KERNEL);
 
 	if (!vmdirq)
 		return -ENOMEM;
 
 	INIT_LIST_HEAD(&vmdirq->node);
-	vmdirq->irq = vmd_next_irq(vmd);
+	vmdirq->irq = vmd_next_irq(vmd, desc);
 	vmdirq->virq = virq;
 
 	irq_domain_set_info(domain, virq, vmdirq->irq->vmd_vector, info->chip,
···
 			struct msi_domain_info *info, unsigned int virq)
 {
 	struct vmd_irq *vmdirq = irq_get_chip_data(virq);
+	unsigned long flags;
 
 	/* XXX: Potential optimization to rebalance */
-	raw_spin_lock(&list_lock);
+	raw_spin_lock_irqsave(&list_lock, flags);
 	vmdirq->irq->count--;
-	raw_spin_unlock(&list_lock);
+	raw_spin_unlock_irqrestore(&list_lock, flags);
 
 	kfree_rcu(vmdirq, rcu);
 }
···
 static struct dma_map_ops *vmd_dma_ops(struct device *dev)
 {
-	return to_vmd_dev(dev)->archdata.dma_ops;
+	return get_dma_ops(to_vmd_dev(dev));
 }
 
 static void *vmd_alloc(struct device *dev, size_t size, dma_addr_t *addr,
···
 {
 	struct dma_domain *domain = &vmd->dma_domain;
 
-	if (vmd->dev->dev.archdata.dma_ops)
+	if (get_dma_ops(&vmd->dev->dev))
 		del_dma_domain(domain);
 }
···
 static void vmd_setup_dma_ops(struct vmd_dev *vmd)
 {
-	const struct dma_map_ops *source = vmd->dev->dev.archdata.dma_ops;
+	const struct dma_map_ops *source = get_dma_ops(&vmd->dev->dev);
 	struct dma_map_ops *dest = &vmd->dma_ops;
 	struct dma_domain *domain = &vmd->dma_domain;
···
 	sd->node = pcibus_to_node(vmd->dev->bus);
 
 	vmd->irq_domain = pci_msi_create_irq_domain(NULL, &vmd_msi_domain_info,
-						    NULL);
+						    x86_vector_domain);
 	if (!vmd->irq_domain)
 		return -ENODEV;
drivers/acpi/Kconfig  +3

 	bool
 	select CPU_IDLE
 
+config ACPI_MCFG
+	bool
+
 config ACPI_CPPC_LIB
 	bool
 	depends on ACPI_PROCESSOR
drivers/acpi/Makefile  +1

 acpi-y				+= ec.o
 acpi-$(CONFIG_ACPI_DOCK)	+= dock.o
 acpi-y				+= pci_root.o pci_link.o pci_irq.o
+obj-$(CONFIG_ACPI_MCFG)		+= pci_mcfg.o
 acpi-y				+= acpi_lpss.o acpi_apd.o
 acpi-y				+= acpi_platform.o
 acpi-y				+= acpi_pnp.o
+92
drivers/acpi/pci_mcfg.c
··· 1 + /* 2 + * Copyright (C) 2016 Broadcom 3 + * Author: Jayachandran C <jchandra@broadcom.com> 4 + * Copyright (C) 2016 Semihalf 5 + * Author: Tomasz Nowicki <tn@semihalf.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License, version 2, as 9 + * published by the Free Software Foundation (the "GPL"). 10 + * 11 + * This program is distributed in the hope that it will be useful, but 12 + * WITHOUT ANY WARRANTY; without even the implied warranty of 13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 14 + * General Public License version 2 (GPLv2) for more details. 15 + * 16 + * You should have received a copy of the GNU General Public License 17 + * version 2 (GPLv2) along with this source code. 18 + */ 19 + 20 + #define pr_fmt(fmt) "ACPI: " fmt 21 + 22 + #include <linux/kernel.h> 23 + #include <linux/pci.h> 24 + #include <linux/pci-acpi.h> 25 + 26 + /* Structure to hold entries from the MCFG table */ 27 + struct mcfg_entry { 28 + struct list_head list; 29 + phys_addr_t addr; 30 + u16 segment; 31 + u8 bus_start; 32 + u8 bus_end; 33 + }; 34 + 35 + /* List to save MCFG entries */ 36 + static LIST_HEAD(pci_mcfg_list); 37 + 38 + phys_addr_t pci_mcfg_lookup(u16 seg, struct resource *bus_res) 39 + { 40 + struct mcfg_entry *e; 41 + 42 + /* 43 + * We expect exact match, unless MCFG entry end bus covers more than 44 + * specified by caller. 
45 + */ 46 + list_for_each_entry(e, &pci_mcfg_list, list) { 47 + if (e->segment == seg && e->bus_start == bus_res->start && 48 + e->bus_end >= bus_res->end) 49 + return e->addr; 50 + } 51 + 52 + return 0; 53 + } 54 + 55 + static __init int pci_mcfg_parse(struct acpi_table_header *header) 56 + { 57 + struct acpi_table_mcfg *mcfg; 58 + struct acpi_mcfg_allocation *mptr; 59 + struct mcfg_entry *e, *arr; 60 + int i, n; 61 + 62 + if (header->length < sizeof(struct acpi_table_mcfg)) 63 + return -EINVAL; 64 + 65 + n = (header->length - sizeof(struct acpi_table_mcfg)) / 66 + sizeof(struct acpi_mcfg_allocation); 67 + mcfg = (struct acpi_table_mcfg *)header; 68 + mptr = (struct acpi_mcfg_allocation *) &mcfg[1]; 69 + 70 + arr = kcalloc(n, sizeof(*arr), GFP_KERNEL); 71 + if (!arr) 72 + return -ENOMEM; 73 + 74 + for (i = 0, e = arr; i < n; i++, mptr++, e++) { 75 + e->segment = mptr->pci_segment; 76 + e->addr = mptr->address; 77 + e->bus_start = mptr->start_bus_number; 78 + e->bus_end = mptr->end_bus_number; 79 + list_add(&e->list, &pci_mcfg_list); 80 + } 81 + 82 + pr_info("MCFG table detected, %d entries\n", n); 83 + return 0; 84 + } 85 + 86 + /* Interface called by ACPI - parse and save MCFG table */ 87 + void __init pci_mmcfg_late_init(void) 88 + { 89 + int err = acpi_table_parse(ACPI_SIG_MCFG, pci_mcfg_parse); 90 + if (err) 91 + pr_err("Failed to parse MCFG (%d)\n", err); 92 + }
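The matching rule in pci_mcfg_lookup() above, an exact segment and start-bus match with the entry allowed to cover more end buses than the caller asked for, can be exercised standalone with a plain-C sketch (the table contents are made up for illustration):

```c
#include <stddef.h>
#include <stdint.h>

struct mcfg_entry {
	uint64_t addr;		/* ECAM base address */
	uint16_t segment;	/* PCI segment (domain) */
	uint8_t  bus_start;
	uint8_t  bus_end;
};

/*
 * Mirror of the kernel's matching rule: segment and starting bus must
 * match exactly; the entry may cover more end buses than requested.
 * Returns 0 when no entry matches, as pci_mcfg_lookup() does.
 */
static uint64_t mcfg_lookup(const struct mcfg_entry *tbl, size_t n,
			    uint16_t seg, uint8_t bus_start, uint8_t bus_end)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (tbl[i].segment == seg &&
		    tbl[i].bus_start == bus_start &&
		    tbl[i].bus_end >= bus_end)
			return tbl[i].addr;
	}
	return 0;
}
```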
+35
drivers/acpi/pci_root.c
··· 720 720 } 721 721 } 722 722 723 + static void acpi_pci_root_remap_iospace(struct resource_entry *entry) 724 + { 725 + #ifdef PCI_IOBASE 726 + struct resource *res = entry->res; 727 + resource_size_t cpu_addr = res->start; 728 + resource_size_t pci_addr = cpu_addr - entry->offset; 729 + resource_size_t length = resource_size(res); 730 + unsigned long port; 731 + 732 + if (pci_register_io_range(cpu_addr, length)) 733 + goto err; 734 + 735 + port = pci_address_to_pio(cpu_addr); 736 + if (port == (unsigned long)-1) 737 + goto err; 738 + 739 + res->start = port; 740 + res->end = port + length - 1; 741 + entry->offset = port - pci_addr; 742 + 743 + if (pci_remap_iospace(res, cpu_addr) < 0) 744 + goto err; 745 + 746 + pr_info("Remapped I/O %pa to %pR\n", &cpu_addr, res); 747 + return; 748 + err: 749 + res->flags |= IORESOURCE_DISABLED; 750 + #endif 751 + } 752 + 723 753 int acpi_pci_probe_root_resources(struct acpi_pci_root_info *info) 724 754 { 725 755 int ret; ··· 770 740 "no IO and memory resources present in _CRS\n"); 771 741 else { 772 742 resource_list_for_each_entry_safe(entry, tmp, list) { 743 + if (entry->res->flags & IORESOURCE_IO) 744 + acpi_pci_root_remap_iospace(entry); 745 + 773 746 if (entry->res->flags & IORESOURCE_DISABLED) 774 747 resource_list_destroy_entry(entry); 775 748 else ··· 844 811 845 812 resource_list_for_each_entry(entry, &bridge->windows) { 846 813 res = entry->res; 814 + if (res->flags & IORESOURCE_IO) 815 + pci_unmap_iospace(res); 847 816 if (res->parent && 848 817 (res->flags & (IORESOURCE_MEM | IORESOURCE_IO))) 849 818 release_resource(res);
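The arithmetic in acpi_pci_root_remap_iospace() rewrites a CPU-addressed I/O window into logical port numbers while keeping the CPU-to-PCI offset consistent. A userspace model of just that arithmetic (the struct and field names are illustrative; the port allocation and actual remapping are elided):

```c
#include <stdint.h>

struct io_window {
	uint64_t start;		/* resource start (logical ports after remap) */
	uint64_t end;
	int64_t  offset;	/* CPU-to-PCI address offset for this window */
};

/*
 * Model of the remap step: recover the PCI bus address from the current
 * offset, then re-express the window in logical ports so that
 * (port - offset) still yields the same PCI bus address.
 */
static void remap_io_window(struct io_window *w, uint64_t cpu_addr,
			    uint64_t length, uint64_t port)
{
	int64_t pci_addr = (int64_t)cpu_addr - w->offset;

	w->start  = port;
	w->end    = port + length - 1;
	w->offset = (int64_t)port - pci_addr;
}
```

In the kernel function, any failure along the way marks the window IORESOURCE_DISABLED so the caller's loop destroys it instead of exporting a half-mapped window.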
+8 -10
drivers/irqchip/Kconfig
··· 21 21 22 22 config ARM_GIC_V2M 23 23 bool 24 - depends on ARM_GIC 25 - depends on PCI && PCI_MSI 26 - select PCI_MSI_IRQ_DOMAIN 24 + depends on PCI 25 + select ARM_GIC 26 + select PCI_MSI 27 27 28 28 config GIC_NON_BANKED 29 29 bool ··· 37 37 38 38 config ARM_GIC_V3_ITS 39 39 bool 40 - select PCI_MSI_IRQ_DOMAIN 40 + depends on PCI 41 + depends on PCI_MSI 41 42 42 43 config ARM_NVIC 43 44 bool ··· 63 62 config ARMADA_370_XP_IRQ 64 63 bool 65 64 select GENERIC_IRQ_CHIP 66 - select PCI_MSI_IRQ_DOMAIN if PCI_MSI 65 + select PCI_MSI if PCI 67 66 68 67 config ALPINE_MSI 69 68 bool 70 - depends on PCI && PCI_MSI 69 + depends on PCI 70 + select PCI_MSI 71 71 select GENERIC_IRQ_CHIP 72 - select PCI_MSI_IRQ_DOMAIN 73 72 74 73 config ATMEL_AIC_IRQ 75 74 bool ··· 118 117 bool 119 118 select ARM_GIC_V3 120 119 select ARM_GIC_V3_ITS 121 - select GENERIC_MSI_IRQ_DOMAIN 122 120 123 121 config IMGPDC_IRQ 124 122 bool ··· 250 250 251 251 config MVEBU_ODMI 252 252 bool 253 - select GENERIC_MSI_IRQ_DOMAIN 254 253 255 254 config LS_SCFG_MSI 256 255 def_bool y if SOC_LS1021A || ARCH_LAYERSCAPE 257 256 depends on PCI && PCI_MSI 258 - select PCI_MSI_IRQ_DOMAIN 259 257 260 258 config PARTITION_PERCPU 261 259 bool
+7 -11
drivers/misc/genwqe/card_base.c
··· 182 182 */ 183 183 static int genwqe_bus_reset(struct genwqe_dev *cd) 184 184 { 185 - int bars, rc = 0; 185 + int rc = 0; 186 186 struct pci_dev *pci_dev = cd->pci_dev; 187 187 void __iomem *mmio; 188 188 ··· 193 193 cd->mmio = NULL; 194 194 pci_iounmap(pci_dev, mmio); 195 195 196 - bars = pci_select_bars(pci_dev, IORESOURCE_MEM); 197 - pci_release_selected_regions(pci_dev, bars); 196 + pci_release_mem_regions(pci_dev); 198 197 199 198 /* 200 199 * Firmware/BIOS might change memory mapping during bus reset. ··· 217 218 GENWQE_INJECT_GFIR_FATAL | 218 219 GENWQE_INJECT_GFIR_INFO); 219 220 220 - rc = pci_request_selected_regions(pci_dev, bars, genwqe_driver_name); 221 + rc = pci_request_mem_regions(pci_dev, genwqe_driver_name); 221 222 if (rc) { 222 223 dev_err(&pci_dev->dev, 223 224 "[%s] err: request bars failed (%d)\n", __func__, rc); ··· 1067 1068 */ 1068 1069 static int genwqe_pci_setup(struct genwqe_dev *cd) 1069 1070 { 1070 - int err, bars; 1071 + int err; 1071 1072 struct pci_dev *pci_dev = cd->pci_dev; 1072 1073 1073 - bars = pci_select_bars(pci_dev, IORESOURCE_MEM); 1074 1074 err = pci_enable_device_mem(pci_dev); 1075 1075 if (err) { 1076 1076 dev_err(&pci_dev->dev, ··· 1078 1080 } 1079 1081 1080 1082 /* Reserve PCI I/O and memory resources */ 1081 - err = pci_request_selected_regions(pci_dev, bars, genwqe_driver_name); 1083 + err = pci_request_mem_regions(pci_dev, genwqe_driver_name); 1082 1084 if (err) { 1083 1085 dev_err(&pci_dev->dev, 1084 1086 "[%s] err: request bars failed (%d)\n", __func__, err); ··· 1140 1142 out_iounmap: 1141 1143 pci_iounmap(pci_dev, cd->mmio); 1142 1144 out_release_resources: 1143 - pci_release_selected_regions(pci_dev, bars); 1145 + pci_release_mem_regions(pci_dev); 1144 1146 err_disable_device: 1145 1147 pci_disable_device(pci_dev); 1146 1148 err_out: ··· 1152 1154 */ 1153 1155 static void genwqe_pci_remove(struct genwqe_dev *cd) 1154 1156 { 1155 - int bars; 1156 1157 struct pci_dev *pci_dev = cd->pci_dev; 1157 1158 1158 
1159 if (cd->mmio) 1159 1160 pci_iounmap(pci_dev, cd->mmio); 1160 1161 1161 - bars = pci_select_bars(pci_dev, IORESOURCE_MEM); 1162 - pci_release_selected_regions(pci_dev, bars); 1162 + pci_release_mem_regions(pci_dev); 1163 1163 pci_disable_device(pci_dev); 1164 1164 } 1165 1165
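The pci_request_mem_regions()/pci_release_mem_regions() helpers fold away the pci_select_bars() step used throughout this section. The bitmask that step computed can be sketched in userspace with a made-up per-BAR flag table (the flag values are illustrative, not the kernel's IORESOURCE_* bits):

```c
#define RES_IO   0x1
#define RES_MEM  0x2
#define NUM_BARS 6

/*
 * Userspace analogue of pci_select_bars(): walk a device's BAR flag
 * table and build a bitmask of the BARs whose flags intersect the
 * requested type. pci_request_mem_regions(pdev, name) then behaves
 * like requesting exactly the BARs in the RES_MEM mask.
 */
static int select_bars(const unsigned int flags[NUM_BARS], unsigned int type)
{
	int i, mask = 0;

	for (i = 0; i < NUM_BARS; i++)
		if (flags[i] & type)
			mask |= 1 << i;
	return mask;
}
```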
+5 -7
drivers/net/ethernet/atheros/alx/main.c
··· 1251 1251 struct alx_priv *alx; 1252 1252 struct alx_hw *hw; 1253 1253 bool phy_configured; 1254 - int bars, err; 1254 + int err; 1255 1255 1256 1256 err = pci_enable_device_mem(pdev); 1257 1257 if (err) ··· 1271 1271 } 1272 1272 } 1273 1273 1274 - bars = pci_select_bars(pdev, IORESOURCE_MEM); 1275 - err = pci_request_selected_regions(pdev, bars, alx_drv_name); 1274 + err = pci_request_mem_regions(pdev, alx_drv_name); 1276 1275 if (err) { 1277 1276 dev_err(&pdev->dev, 1278 - "pci_request_selected_regions failed(bars:%d)\n", bars); 1277 + "pci_request_mem_regions failed\n"); 1279 1278 goto out_pci_disable; 1280 1279 } 1281 1280 ··· 1400 1401 out_free_netdev: 1401 1402 free_netdev(netdev); 1402 1403 out_pci_release: 1403 - pci_release_selected_regions(pdev, bars); 1404 + pci_release_mem_regions(pdev); 1404 1405 out_pci_disable: 1405 1406 pci_disable_device(pdev); 1406 1407 return err; ··· 1419 1420 1420 1421 unregister_netdev(alx->dev); 1421 1422 iounmap(hw->hw_addr); 1422 - pci_release_selected_regions(pdev, 1423 - pci_select_bars(pdev, IORESOURCE_MEM)); 1423 + pci_release_mem_regions(pdev); 1424 1424 1425 1425 pci_disable_pcie_error_reporting(pdev); 1426 1426 pci_disable_device(pdev);
+2 -4
drivers/net/ethernet/intel/e1000e/netdev.c
··· 7330 7330 err_ioremap: 7331 7331 free_netdev(netdev); 7332 7332 err_alloc_etherdev: 7333 - pci_release_selected_regions(pdev, 7334 - pci_select_bars(pdev, IORESOURCE_MEM)); 7333 + pci_release_mem_regions(pdev); 7335 7334 err_pci_reg: 7336 7335 err_dma: 7337 7336 pci_disable_device(pdev); ··· 7397 7398 if ((adapter->hw.flash_address) && 7398 7399 (adapter->hw.mac.type < e1000_pch_spt)) 7399 7400 iounmap(adapter->hw.flash_address); 7400 - pci_release_selected_regions(pdev, 7401 - pci_select_bars(pdev, IORESOURCE_MEM)); 7401 + pci_release_mem_regions(pdev); 7402 7402 7403 7403 free_netdev(netdev); 7404 7404
+3 -8
drivers/net/ethernet/intel/fm10k/fm10k_pci.c
··· 1963 1963 goto err_dma; 1964 1964 } 1965 1965 1966 - err = pci_request_selected_regions(pdev, 1967 - pci_select_bars(pdev, 1968 - IORESOURCE_MEM), 1969 - fm10k_driver_name); 1966 + err = pci_request_mem_regions(pdev, fm10k_driver_name); 1970 1967 if (err) { 1971 1968 dev_err(&pdev->dev, 1972 1969 "pci_request_selected_regions failed: %d\n", err); ··· 2067 2070 err_ioremap: 2068 2071 free_netdev(netdev); 2069 2072 err_alloc_netdev: 2070 - pci_release_selected_regions(pdev, 2071 - pci_select_bars(pdev, IORESOURCE_MEM)); 2073 + pci_release_mem_regions(pdev); 2072 2074 err_pci_reg: 2073 2075 err_dma: 2074 2076 pci_disable_device(pdev); ··· 2115 2119 2116 2120 free_netdev(netdev); 2117 2121 2118 - pci_release_selected_regions(pdev, 2119 - pci_select_bars(pdev, IORESOURCE_MEM)); 2122 + pci_release_mem_regions(pdev); 2120 2123 2121 2124 pci_disable_pcie_error_reporting(pdev); 2122 2125
+3 -6
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 10710 10710 } 10711 10711 10712 10712 /* set up pci connections */ 10713 - err = pci_request_selected_regions(pdev, pci_select_bars(pdev, 10714 - IORESOURCE_MEM), i40e_driver_name); 10713 + err = pci_request_mem_regions(pdev, i40e_driver_name); 10715 10714 if (err) { 10716 10715 dev_info(&pdev->dev, 10717 10716 "pci_request_selected_regions failed %d\n", err); ··· 11207 11208 kfree(pf); 11208 11209 err_pf_alloc: 11209 11210 pci_disable_pcie_error_reporting(pdev); 11210 - pci_release_selected_regions(pdev, 11211 - pci_select_bars(pdev, IORESOURCE_MEM)); 11211 + pci_release_mem_regions(pdev); 11212 11212 err_pci_reg: 11213 11213 err_dma: 11214 11214 pci_disable_device(pdev); ··· 11318 11320 11319 11321 iounmap(hw->hw_addr); 11320 11322 kfree(pf); 11321 - pci_release_selected_regions(pdev, 11322 - pci_select_bars(pdev, IORESOURCE_MEM)); 11323 + pci_release_mem_regions(pdev); 11323 11324 11324 11325 pci_disable_pcie_error_reporting(pdev); 11325 11326 pci_disable_device(pdev);
+3 -7
drivers/net/ethernet/intel/igb/igb_main.c
··· 2324 2324 } 2325 2325 } 2326 2326 2327 - err = pci_request_selected_regions(pdev, pci_select_bars(pdev, 2328 - IORESOURCE_MEM), 2329 - igb_driver_name); 2327 + err = pci_request_mem_regions(pdev, igb_driver_name); 2330 2328 if (err) 2331 2329 goto err_pci_reg; 2332 2330 ··· 2748 2750 err_ioremap: 2749 2751 free_netdev(netdev); 2750 2752 err_alloc_etherdev: 2751 - pci_release_selected_regions(pdev, 2752 - pci_select_bars(pdev, IORESOURCE_MEM)); 2753 + pci_release_mem_regions(pdev); 2753 2754 err_pci_reg: 2754 2755 err_dma: 2755 2756 pci_disable_device(pdev); ··· 2913 2916 pci_iounmap(pdev, adapter->io_addr); 2914 2917 if (hw->flash_address) 2915 2918 iounmap(hw->flash_address); 2916 - pci_release_selected_regions(pdev, 2917 - pci_select_bars(pdev, IORESOURCE_MEM)); 2919 + pci_release_mem_regions(pdev); 2918 2920 2919 2921 kfree(adapter->shadow_vfta); 2920 2922 free_netdev(netdev);
+3 -6
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 9353 9353 pci_using_dac = 0; 9354 9354 } 9355 9355 9356 - err = pci_request_selected_regions(pdev, pci_select_bars(pdev, 9357 - IORESOURCE_MEM), ixgbe_driver_name); 9356 + err = pci_request_mem_regions(pdev, ixgbe_driver_name); 9358 9357 if (err) { 9359 9358 dev_err(&pdev->dev, 9360 9359 "pci_request_selected_regions failed 0x%x\n", err); ··· 9739 9740 disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state); 9740 9741 free_netdev(netdev); 9741 9742 err_alloc_etherdev: 9742 - pci_release_selected_regions(pdev, 9743 - pci_select_bars(pdev, IORESOURCE_MEM)); 9743 + pci_release_mem_regions(pdev); 9744 9744 err_pci_reg: 9745 9745 err_dma: 9746 9746 if (!adapter || disable_dev) ··· 9806 9808 9807 9809 #endif 9808 9810 iounmap(adapter->io_addr); 9809 - pci_release_selected_regions(pdev, pci_select_bars(pdev, 9810 - IORESOURCE_MEM)); 9811 + pci_release_mem_regions(pdev); 9811 9812 9812 9813 e_dev_info("complete\n"); 9813 9814
+3 -12
drivers/nvme/host/pci.c
··· 1661 1661 1662 1662 static void nvme_dev_unmap(struct nvme_dev *dev) 1663 1663 { 1664 - struct pci_dev *pdev = to_pci_dev(dev->dev); 1665 - int bars; 1666 - 1667 1664 if (dev->bar) 1668 1665 iounmap(dev->bar); 1669 - 1670 - bars = pci_select_bars(pdev, IORESOURCE_MEM); 1671 - pci_release_selected_regions(pdev, bars); 1666 + pci_release_mem_regions(to_pci_dev(dev->dev)); 1672 1667 } 1673 1668 1674 1669 static void nvme_pci_disable(struct nvme_dev *dev) ··· 1892 1897 1893 1898 static int nvme_dev_map(struct nvme_dev *dev) 1894 1899 { 1895 - int bars; 1896 1900 struct pci_dev *pdev = to_pci_dev(dev->dev); 1897 1901 1898 - bars = pci_select_bars(pdev, IORESOURCE_MEM); 1899 - if (!bars) 1900 - return -ENODEV; 1901 - if (pci_request_selected_regions(pdev, bars, "nvme")) 1902 + if (pci_request_mem_regions(pdev, "nvme")) 1902 1903 return -ENODEV; 1903 1904 1904 1905 dev->bar = ioremap(pci_resource_start(pdev, 0), 8192); ··· 1903 1912 1904 1913 return 0; 1905 1914 release: 1906 - pci_release_selected_regions(pdev, bars); 1915 + pci_release_mem_regions(pdev); 1907 1916 return -ENODEV; 1908 1917 } 1909 1918
+1 -1
drivers/pci/Kconfig
··· 25 25 If you don't know what to do here, say Y. 26 26 27 27 config PCI_MSI_IRQ_DOMAIN 28 - bool 28 + def_bool ARM || ARM64 || X86 29 29 depends on PCI_MSI 30 30 select GENERIC_MSI_IRQ_DOMAIN 31 31
+30 -1
drivers/pci/bus.c
··· 91 91 } 92 92 } 93 93 94 + int devm_request_pci_bus_resources(struct device *dev, 95 + struct list_head *resources) 96 + { 97 + struct resource_entry *win; 98 + struct resource *parent, *res; 99 + int err; 100 + 101 + resource_list_for_each_entry(win, resources) { 102 + res = win->res; 103 + switch (resource_type(res)) { 104 + case IORESOURCE_IO: 105 + parent = &ioport_resource; 106 + break; 107 + case IORESOURCE_MEM: 108 + parent = &iomem_resource; 109 + break; 110 + default: 111 + continue; 112 + } 113 + 114 + err = devm_request_resource(dev, parent, res); 115 + if (err) 116 + return err; 117 + } 118 + 119 + return 0; 120 + } 121 + EXPORT_SYMBOL_GPL(devm_request_pci_bus_resources); 122 + 94 123 static struct pci_bus_region pci_32_bit = {0, 0xffffffffULL}; 95 124 #ifdef CONFIG_PCI_BUS_ADDR_T_64BIT 96 125 static struct pci_bus_region pci_64_bit = {0, ··· 320 291 pci_fixup_device(pci_fixup_final, dev); 321 292 pci_create_sysfs_dev_files(dev); 322 293 pci_proc_attach_device(dev); 294 + pci_bridge_d3_device_changed(dev); 323 295 324 296 dev->match_driver = true; 325 297 retval = device_attach(&dev->dev); ··· 427 397 put_device(&bus->dev); 428 398 } 429 399 EXPORT_SYMBOL(pci_bus_put); 430 -
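devm_request_pci_bus_resources() above walks the window list, picks ioport_resource or iomem_resource as the parent based on resource type, skips anything else, and stops at the first failed request. A toy model of that control flow (the types and the fail_at knob are invented for the sketch; the real helper calls devm_request_resource()):

```c
#include <stddef.h>

enum res_type { RES_TYPE_IO, RES_TYPE_MEM, RES_TYPE_OTHER };

struct res { enum res_type type; int claimed; };

/*
 * Model of devm_request_pci_bus_resources(): windows of unknown type
 * are skipped, I/O and memory windows are claimed, and a failed claim
 * (simulated via the fail_at index) aborts the walk with an error.
 */
static int request_bus_resources(struct res *win, size_t n, size_t fail_at)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (win[i].type == RES_TYPE_OTHER)
			continue;	/* neither ioport nor iomem: skip */
		if (i == fail_at)
			return -1;	/* request failed: stop immediately */
		win[i].claimed = 1;
	}
	return 0;
}
```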
+3 -3
drivers/pci/ecam.c
··· 19 19 #include <linux/kernel.h> 20 20 #include <linux/module.h> 21 21 #include <linux/pci.h> 22 + #include <linux/pci-ecam.h> 22 23 #include <linux/slab.h> 23 - 24 - #include "ecam.h" 25 24 26 25 /* 27 26 * On 64-bit systems, we do a single ioremap for the whole config space ··· 51 52 if (!cfg) 52 53 return ERR_PTR(-ENOMEM); 53 54 55 + cfg->parent = dev; 54 56 cfg->ops = ops; 55 57 cfg->busr.start = busr->start; 56 58 cfg->busr.end = busr->end; ··· 95 95 } 96 96 97 97 if (ops->init) { 98 - err = ops->init(dev, cfg); 98 + err = ops->init(cfg); 99 99 if (err) 100 100 goto err_exit; 101 101 }
+2 -2
drivers/pci/ecam.h include/linux/pci-ecam.h
··· 27 27 struct pci_ecam_ops { 28 28 unsigned int bus_shift; 29 29 struct pci_ops pci_ops; 30 - int (*init)(struct device *, 31 - struct pci_config_window *); 30 + int (*init)(struct pci_config_window *); 32 31 }; 33 32 34 33 /* ··· 44 45 void __iomem *win; /* 64-bit single mapping */ 45 46 void __iomem **winp; /* 32-bit per-bus mapping */ 46 47 }; 48 + struct device *parent;/* ECAM res was from this dev */ 47 49 }; 48 50 49 51 /* create and free pci_config_window */
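For context, a pci_config_window describes an ECAM region where, with the standard bus_shift of 20, config space for a given bus/devfn/register lives at a fixed offset from the window base. A standalone sketch of that offset computation (in the kernel, pci_ecam_map_bus() also subtracts cfg->busr.start from the bus number first):

```c
#include <stdint.h>

/*
 * ECAM address arithmetic: with bus_shift = 20, each bus gets a 1 MB
 * slice of the window and each devfn a 4 KB slice within it; the low
 * 12 bits select the config register.
 */
static uint64_t ecam_offset(unsigned int bus_shift, unsigned int bus,
			    unsigned int devfn, unsigned int where)
{
	unsigned int devfn_shift = bus_shift - 8;	/* devfn is 8 bits wide */

	return ((uint64_t)bus << bus_shift) |
	       ((uint64_t)devfn << devfn_shift) | where;
}
```

The 64-bit single-ioremap path in ecam.c maps the whole window and indexes it with exactly this kind of offset; the 32-bit per-bus path (winp) drops the bus term and keeps only the devfn/register part.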
+39 -10
drivers/pci/host/Kconfig
··· 3 3 4 4 config PCI_DRA7XX 5 5 bool "TI DRA7xx PCIe controller" 6 - select PCIE_DW 7 6 depends on OF && HAS_IOMEM && TI_PIPE3 7 + depends on PCI_MSI_IRQ_DOMAIN 8 + select PCIE_DW 8 9 help 9 10 Enables support for the PCIe controller in the DRA7xx SoC. There 10 11 are two instances of PCIe controller in DRA7xx. This controller can ··· 17 16 depends on ARM 18 17 depends on OF 19 18 19 + config PCI_AARDVARK 20 + bool "Aardvark PCIe controller" 21 + depends on ARCH_MVEBU && ARM64 22 + depends on OF 23 + depends on PCI_MSI_IRQ_DOMAIN 24 + help 25 + Add support for Aardvark 64bit PCIe Host Controller. This 26 + controller is part of the South Bridge of the Marvel Armada 27 + 3700 SoC. 20 28 21 29 config PCIE_XILINX_NWL 22 30 bool "NWL PCIe Core" 23 31 depends on ARCH_ZYNQMP 24 - select PCI_MSI_IRQ_DOMAIN if PCI_MSI 32 + depends on PCI_MSI_IRQ_DOMAIN 25 33 help 26 34 Say 'Y' here if you want kernel support for Xilinx 27 35 NWL PCIe controller. The controller can act as Root Port ··· 39 29 40 30 config PCIE_DW_PLAT 41 31 bool "Platform bus based DesignWare PCIe Controller" 32 + depends on PCI_MSI_IRQ_DOMAIN 42 33 select PCIE_DW 43 34 ---help--- 44 35 This selects the DesignWare PCIe controller support. Select this if ··· 51 40 52 41 config PCIE_DW 53 42 bool 43 + depends on PCI_MSI_IRQ_DOMAIN 54 44 55 45 config PCI_EXYNOS 56 46 bool "Samsung Exynos PCIe controller" 57 47 depends on SOC_EXYNOS5440 48 + depends on PCI_MSI_IRQ_DOMAIN 58 49 select PCIEPORTBUS 59 50 select PCIE_DW 60 51 61 52 config PCI_IMX6 62 53 bool "Freescale i.MX6 PCIe controller" 63 54 depends on SOC_IMX6Q 55 + depends on PCI_MSI_IRQ_DOMAIN 64 56 select PCIEPORTBUS 65 57 select PCIE_DW 66 58 ··· 86 72 config PCIE_RCAR 87 73 bool "Renesas R-Car PCIe controller" 88 74 depends on ARCH_RENESAS || (ARM && COMPILE_TEST) 89 - select PCI_MSI 90 - select PCI_MSI_IRQ_DOMAIN 75 + depends on PCI_MSI_IRQ_DOMAIN 91 76 help 92 77 Say Y here if you want PCIe controller support on R-Car SoCs. 
93 78 ··· 98 85 bool "Generic PCI host controller" 99 86 depends on (ARM || ARM64) && OF 100 87 select PCI_HOST_COMMON 88 + select IRQ_DOMAIN 101 89 help 102 90 Say Y here if you want to support a simple generic PCI host 103 91 controller, such as the one emulated by kvmtool. ··· 106 92 config PCIE_SPEAR13XX 107 93 bool "STMicroelectronics SPEAr PCIe controller" 108 94 depends on ARCH_SPEAR13XX 95 + depends on PCI_MSI_IRQ_DOMAIN 109 96 select PCIEPORTBUS 110 97 select PCIE_DW 111 98 help ··· 115 100 config PCI_KEYSTONE 116 101 bool "TI Keystone PCIe controller" 117 102 depends on ARCH_KEYSTONE 103 + depends on PCI_MSI_IRQ_DOMAIN 118 104 select PCIE_DW 119 105 select PCIEPORTBUS 120 106 help ··· 136 120 depends on ARCH_XGENE 137 121 depends on OF 138 122 select PCIEPORTBUS 139 - select PCI_MSI_IRQ_DOMAIN if PCI_MSI 140 123 help 141 124 Say Y here if you want internal PCI support on APM X-Gene SoC. 142 125 There are 5 internal PCIe ports available. Each port is GEN3 capable ··· 143 128 144 129 config PCI_XGENE_MSI 145 130 bool "X-Gene v1 PCIe MSI feature" 146 - depends on PCI_XGENE && PCI_MSI 131 + depends on PCI_XGENE 132 + depends on PCI_MSI_IRQ_DOMAIN 147 133 default y 148 134 help 149 135 Say Y here if you want PCIe MSI support for the APM X-Gene v1 SoC. 
··· 153 137 config PCI_LAYERSCAPE 154 138 bool "Freescale Layerscape PCIe controller" 155 139 depends on OF && (ARM || ARCH_LAYERSCAPE) 140 + depends on PCI_MSI_IRQ_DOMAIN 156 141 select PCIE_DW 157 142 select MFD_SYSCON 158 143 help ··· 194 177 config PCIE_IPROC_MSI 195 178 bool "Broadcom iProc PCIe MSI support" 196 179 depends on PCIE_IPROC_PLATFORM || PCIE_IPROC_BCMA 197 - depends on PCI_MSI 198 - select PCI_MSI_IRQ_DOMAIN 180 + depends on PCI_MSI_IRQ_DOMAIN 199 181 default ARCH_BCM_IPROC 200 182 help 201 183 Say Y here if you want to enable MSI support for Broadcom's iProc ··· 211 195 212 196 config PCIE_ALTERA_MSI 213 197 bool "Altera PCIe MSI feature" 214 - depends on PCIE_ALTERA && PCI_MSI 215 - select PCI_MSI_IRQ_DOMAIN 198 + depends on PCIE_ALTERA 199 + depends on PCI_MSI_IRQ_DOMAIN 216 200 help 217 201 Say Y here if you want PCIe MSI support for the Altera FPGA. 218 202 This MSI driver supports Altera MSI to GIC controller IP. ··· 220 204 config PCI_HISI 221 205 depends on OF && ARM64 222 206 bool "HiSilicon Hip05 and Hip06 SoCs PCIe controllers" 207 + depends on PCI_MSI_IRQ_DOMAIN 223 208 select PCIEPORTBUS 224 209 select PCIE_DW 225 210 help ··· 230 213 config PCIE_QCOM 231 214 bool "Qualcomm PCIe controller" 232 215 depends on ARCH_QCOM && OF 216 + depends on PCI_MSI_IRQ_DOMAIN 233 217 select PCIE_DW 234 218 select PCIEPORTBUS 235 219 help ··· 255 237 config PCIE_ARMADA_8K 256 238 bool "Marvell Armada-8K PCIe controller" 257 239 depends on ARCH_MVEBU 240 + depends on PCI_MSI_IRQ_DOMAIN 258 241 select PCIE_DW 259 242 select PCIEPORTBUS 260 243 help ··· 263 244 Armada-8K SoCs. The PCIe controller on Armada-8K is based on 264 245 Designware hardware and therefore the driver re-uses the 265 246 Designware core functions to implement the driver. 
247 + 248 + config PCIE_ARTPEC6 249 + bool "Axis ARTPEC-6 PCIe controller" 250 + depends on MACH_ARTPEC6 251 + depends on PCI_MSI_IRQ_DOMAIN 252 + select PCIE_DW 253 + select PCIEPORTBUS 254 + help 255 + Say Y here to enable PCIe controller support on Axis ARTPEC-6 256 + SoCs. This PCIe controller uses the DesignWare core. 266 257 267 258 endmenu
+2
drivers/pci/host/Makefile
··· 5 5 obj-$(CONFIG_PCI_IMX6) += pci-imx6.o 6 6 obj-$(CONFIG_PCI_HYPERV) += pci-hyperv.o 7 7 obj-$(CONFIG_PCI_MVEBU) += pci-mvebu.o 8 + obj-$(CONFIG_PCI_AARDVARK) += pci-aardvark.o 8 9 obj-$(CONFIG_PCI_TEGRA) += pci-tegra.o 9 10 obj-$(CONFIG_PCI_RCAR_GEN2) += pci-rcar-gen2.o 10 11 obj-$(CONFIG_PCIE_RCAR) += pcie-rcar.o ··· 30 29 obj-$(CONFIG_PCI_HOST_THUNDER_ECAM) += pci-thunder-ecam.o 31 30 obj-$(CONFIG_PCI_HOST_THUNDER_PEM) += pci-thunder-pem.o 32 31 obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o 32 + obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
+1001
drivers/pci/host/pci-aardvark.c
··· 1 + /* 2 + * Driver for the Aardvark PCIe controller, used on Marvell Armada 3 + * 3700. 4 + * 5 + * Copyright (C) 2016 Marvell 6 + * 7 + * Author: Hezi Shahmoon <hezi.shahmoon@marvell.com> 8 + * 9 + * This file is licensed under the terms of the GNU General Public 10 + * License version 2. This program is licensed "as is" without any 11 + * warranty of any kind, whether express or implied. 12 + */ 13 + 14 + #include <linux/delay.h> 15 + #include <linux/interrupt.h> 16 + #include <linux/irq.h> 17 + #include <linux/irqdomain.h> 18 + #include <linux/kernel.h> 19 + #include <linux/pci.h> 20 + #include <linux/init.h> 21 + #include <linux/platform_device.h> 22 + #include <linux/of_address.h> 23 + #include <linux/of_pci.h> 24 + 25 + /* PCIe core registers */ 26 + #define PCIE_CORE_CMD_STATUS_REG 0x4 27 + #define PCIE_CORE_CMD_IO_ACCESS_EN BIT(0) 28 + #define PCIE_CORE_CMD_MEM_ACCESS_EN BIT(1) 29 + #define PCIE_CORE_CMD_MEM_IO_REQ_EN BIT(2) 30 + #define PCIE_CORE_DEV_CTRL_STATS_REG 0xc8 31 + #define PCIE_CORE_DEV_CTRL_STATS_RELAX_ORDER_DISABLE (0 << 4) 32 + #define PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT 5 33 + #define PCIE_CORE_DEV_CTRL_STATS_SNOOP_DISABLE (0 << 11) 34 + #define PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT 12 35 + #define PCIE_CORE_LINK_CTRL_STAT_REG 0xd0 36 + #define PCIE_CORE_LINK_L0S_ENTRY BIT(0) 37 + #define PCIE_CORE_LINK_TRAINING BIT(5) 38 + #define PCIE_CORE_LINK_WIDTH_SHIFT 20 39 + #define PCIE_CORE_ERR_CAPCTL_REG 0x118 40 + #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX BIT(5) 41 + #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN BIT(6) 42 + #define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK BIT(7) 43 + #define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK_RCV BIT(8) 44 + 45 + /* PIO registers base address and register offsets */ 46 + #define PIO_BASE_ADDR 0x4000 47 + #define PIO_CTRL (PIO_BASE_ADDR + 0x0) 48 + #define PIO_CTRL_TYPE_MASK GENMASK(3, 0) 49 + #define PIO_CTRL_ADDR_WIN_DISABLE BIT(24) 50 + #define PIO_STAT (PIO_BASE_ADDR + 0x4) 51 + #define 
PIO_COMPLETION_STATUS_SHIFT 7 52 + #define PIO_COMPLETION_STATUS_MASK GENMASK(9, 7) 53 + #define PIO_COMPLETION_STATUS_OK 0 54 + #define PIO_COMPLETION_STATUS_UR 1 55 + #define PIO_COMPLETION_STATUS_CRS 2 56 + #define PIO_COMPLETION_STATUS_CA 4 57 + #define PIO_NON_POSTED_REQ BIT(0) 58 + #define PIO_ADDR_LS (PIO_BASE_ADDR + 0x8) 59 + #define PIO_ADDR_MS (PIO_BASE_ADDR + 0xc) 60 + #define PIO_WR_DATA (PIO_BASE_ADDR + 0x10) 61 + #define PIO_WR_DATA_STRB (PIO_BASE_ADDR + 0x14) 62 + #define PIO_RD_DATA (PIO_BASE_ADDR + 0x18) 63 + #define PIO_START (PIO_BASE_ADDR + 0x1c) 64 + #define PIO_ISR (PIO_BASE_ADDR + 0x20) 65 + #define PIO_ISRM (PIO_BASE_ADDR + 0x24) 66 + 67 + /* Aardvark Control registers */ 68 + #define CONTROL_BASE_ADDR 0x4800 69 + #define PCIE_CORE_CTRL0_REG (CONTROL_BASE_ADDR + 0x0) 70 + #define PCIE_GEN_SEL_MSK 0x3 71 + #define PCIE_GEN_SEL_SHIFT 0x0 72 + #define SPEED_GEN_1 0 73 + #define SPEED_GEN_2 1 74 + #define SPEED_GEN_3 2 75 + #define IS_RC_MSK 1 76 + #define IS_RC_SHIFT 2 77 + #define LANE_CNT_MSK 0x18 78 + #define LANE_CNT_SHIFT 0x3 79 + #define LANE_COUNT_1 (0 << LANE_CNT_SHIFT) 80 + #define LANE_COUNT_2 (1 << LANE_CNT_SHIFT) 81 + #define LANE_COUNT_4 (2 << LANE_CNT_SHIFT) 82 + #define LANE_COUNT_8 (3 << LANE_CNT_SHIFT) 83 + #define LINK_TRAINING_EN BIT(6) 84 + #define LEGACY_INTA BIT(28) 85 + #define LEGACY_INTB BIT(29) 86 + #define LEGACY_INTC BIT(30) 87 + #define LEGACY_INTD BIT(31) 88 + #define PCIE_CORE_CTRL1_REG (CONTROL_BASE_ADDR + 0x4) 89 + #define HOT_RESET_GEN BIT(0) 90 + #define PCIE_CORE_CTRL2_REG (CONTROL_BASE_ADDR + 0x8) 91 + #define PCIE_CORE_CTRL2_RESERVED 0x7 92 + #define PCIE_CORE_CTRL2_TD_ENABLE BIT(4) 93 + #define PCIE_CORE_CTRL2_STRICT_ORDER_ENABLE BIT(5) 94 + #define PCIE_CORE_CTRL2_OB_WIN_ENABLE BIT(6) 95 + #define PCIE_CORE_CTRL2_MSI_ENABLE BIT(10) 96 + #define PCIE_ISR0_REG (CONTROL_BASE_ADDR + 0x40) 97 + #define PCIE_ISR0_MASK_REG (CONTROL_BASE_ADDR + 0x44) 98 + #define PCIE_ISR0_MSI_INT_PENDING BIT(24) 99 + #define 
PCIE_ISR0_INTX_ASSERT(val) BIT(16 + (val)) 100 + #define PCIE_ISR0_INTX_DEASSERT(val) BIT(20 + (val)) 101 + #define PCIE_ISR0_ALL_MASK GENMASK(26, 0) 102 + #define PCIE_ISR1_REG (CONTROL_BASE_ADDR + 0x48) 103 + #define PCIE_ISR1_MASK_REG (CONTROL_BASE_ADDR + 0x4C) 104 + #define PCIE_ISR1_POWER_STATE_CHANGE BIT(4) 105 + #define PCIE_ISR1_FLUSH BIT(5) 106 + #define PCIE_ISR1_ALL_MASK GENMASK(5, 4) 107 + #define PCIE_MSI_ADDR_LOW_REG (CONTROL_BASE_ADDR + 0x50) 108 + #define PCIE_MSI_ADDR_HIGH_REG (CONTROL_BASE_ADDR + 0x54) 109 + #define PCIE_MSI_STATUS_REG (CONTROL_BASE_ADDR + 0x58) 110 + #define PCIE_MSI_MASK_REG (CONTROL_BASE_ADDR + 0x5C) 111 + #define PCIE_MSI_PAYLOAD_REG (CONTROL_BASE_ADDR + 0x9C) 112 + 113 + /* PCIe window configuration */ 114 + #define OB_WIN_BASE_ADDR 0x4c00 115 + #define OB_WIN_BLOCK_SIZE 0x20 116 + #define OB_WIN_REG_ADDR(win, offset) (OB_WIN_BASE_ADDR + \ 117 + OB_WIN_BLOCK_SIZE * (win) + \ 118 + (offset)) 119 + #define OB_WIN_MATCH_LS(win) OB_WIN_REG_ADDR(win, 0x00) 120 + #define OB_WIN_MATCH_MS(win) OB_WIN_REG_ADDR(win, 0x04) 121 + #define OB_WIN_REMAP_LS(win) OB_WIN_REG_ADDR(win, 0x08) 122 + #define OB_WIN_REMAP_MS(win) OB_WIN_REG_ADDR(win, 0x0c) 123 + #define OB_WIN_MASK_LS(win) OB_WIN_REG_ADDR(win, 0x10) 124 + #define OB_WIN_MASK_MS(win) OB_WIN_REG_ADDR(win, 0x14) 125 + #define OB_WIN_ACTIONS(win) OB_WIN_REG_ADDR(win, 0x18) 126 + 127 + /* PCIe window types */ 128 + #define OB_PCIE_MEM 0x0 129 + #define OB_PCIE_IO 0x4 130 + 131 + /* LMI registers base address and register offsets */ 132 + #define LMI_BASE_ADDR 0x6000 133 + #define CFG_REG (LMI_BASE_ADDR + 0x0) 134 + #define LTSSM_SHIFT 24 135 + #define LTSSM_MASK 0x3f 136 + #define LTSSM_L0 0x10 137 + #define RC_BAR_CONFIG 0x300 138 + 139 + /* PCIe core controller registers */ 140 + #define CTRL_CORE_BASE_ADDR 0x18000 141 + #define CTRL_CONFIG_REG (CTRL_CORE_BASE_ADDR + 0x0) 142 + #define CTRL_MODE_SHIFT 0x0 143 + #define CTRL_MODE_MASK 0x1 144 + #define PCIE_CORE_MODE_DIRECT 0x0 145 + 
#define PCIE_CORE_MODE_COMMAND 0x1 146 + 147 + /* PCIe Central Interrupts Registers */ 148 + #define CENTRAL_INT_BASE_ADDR 0x1b000 149 + #define HOST_CTRL_INT_STATUS_REG (CENTRAL_INT_BASE_ADDR + 0x0) 150 + #define HOST_CTRL_INT_MASK_REG (CENTRAL_INT_BASE_ADDR + 0x4) 151 + #define PCIE_IRQ_CMDQ_INT BIT(0) 152 + #define PCIE_IRQ_MSI_STATUS_INT BIT(1) 153 + #define PCIE_IRQ_CMD_SENT_DONE BIT(3) 154 + #define PCIE_IRQ_DMA_INT BIT(4) 155 + #define PCIE_IRQ_IB_DXFERDONE BIT(5) 156 + #define PCIE_IRQ_OB_DXFERDONE BIT(6) 157 + #define PCIE_IRQ_OB_RXFERDONE BIT(7) 158 + #define PCIE_IRQ_COMPQ_INT BIT(12) 159 + #define PCIE_IRQ_DIR_RD_DDR_DET BIT(13) 160 + #define PCIE_IRQ_DIR_WR_DDR_DET BIT(14) 161 + #define PCIE_IRQ_CORE_INT BIT(16) 162 + #define PCIE_IRQ_CORE_INT_PIO BIT(17) 163 + #define PCIE_IRQ_DPMU_INT BIT(18) 164 + #define PCIE_IRQ_PCIE_MIS_INT BIT(19) 165 + #define PCIE_IRQ_MSI_INT1_DET BIT(20) 166 + #define PCIE_IRQ_MSI_INT2_DET BIT(21) 167 + #define PCIE_IRQ_RC_DBELL_DET BIT(22) 168 + #define PCIE_IRQ_EP_STATUS BIT(23) 169 + #define PCIE_IRQ_ALL_MASK 0xfff0fb 170 + #define PCIE_IRQ_ENABLE_INTS_MASK PCIE_IRQ_CORE_INT 171 + 172 + /* Transaction types */ 173 + #define PCIE_CONFIG_RD_TYPE0 0x8 174 + #define PCIE_CONFIG_RD_TYPE1 0x9 175 + #define PCIE_CONFIG_WR_TYPE0 0xa 176 + #define PCIE_CONFIG_WR_TYPE1 0xb 177 + 178 + /* PCI_BDF shifts 8bit, so we need extra 4bit shift */ 179 + #define PCIE_BDF(dev) (dev << 4) 180 + #define PCIE_CONF_BUS(bus) (((bus) & 0xff) << 20) 181 + #define PCIE_CONF_DEV(dev) (((dev) & 0x1f) << 15) 182 + #define PCIE_CONF_FUNC(fun) (((fun) & 0x7) << 12) 183 + #define PCIE_CONF_REG(reg) ((reg) & 0xffc) 184 + #define PCIE_CONF_ADDR(bus, devfn, where) \ 185 + (PCIE_CONF_BUS(bus) | PCIE_CONF_DEV(PCI_SLOT(devfn)) | \ 186 + PCIE_CONF_FUNC(PCI_FUNC(devfn)) | PCIE_CONF_REG(where)) 187 + 188 + #define PIO_TIMEOUT_MS 1 189 + 190 + #define LINK_WAIT_MAX_RETRIES 10 191 + #define LINK_WAIT_USLEEP_MIN 90000 192 + #define LINK_WAIT_USLEEP_MAX 100000 193 + 194 
+ #define LEGACY_IRQ_NUM 4 195 + #define MSI_IRQ_NUM 32 196 + 197 + struct advk_pcie { 198 + struct platform_device *pdev; 199 + void __iomem *base; 200 + struct list_head resources; 201 + struct irq_domain *irq_domain; 202 + struct irq_chip irq_chip; 203 + struct msi_controller msi; 204 + struct irq_domain *msi_domain; 205 + struct irq_chip msi_irq_chip; 206 + DECLARE_BITMAP(msi_irq_in_use, MSI_IRQ_NUM); 207 + struct mutex msi_used_lock; 208 + u16 msi_msg; 209 + int root_bus_nr; 210 + }; 211 + 212 + static inline void advk_writel(struct advk_pcie *pcie, u32 val, u64 reg) 213 + { 214 + writel(val, pcie->base + reg); 215 + } 216 + 217 + static inline u32 advk_readl(struct advk_pcie *pcie, u64 reg) 218 + { 219 + return readl(pcie->base + reg); 220 + } 221 + 222 + static int advk_pcie_link_up(struct advk_pcie *pcie) 223 + { 224 + u32 val, ltssm_state; 225 + 226 + val = advk_readl(pcie, CFG_REG); 227 + ltssm_state = (val >> LTSSM_SHIFT) & LTSSM_MASK; 228 + return ltssm_state >= LTSSM_L0; 229 + } 230 + 231 + static int advk_pcie_wait_for_link(struct advk_pcie *pcie) 232 + { 233 + int retries; 234 + 235 + /* check if the link is up or not */ 236 + for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) { 237 + if (advk_pcie_link_up(pcie)) { 238 + dev_info(&pcie->pdev->dev, "link up\n"); 239 + return 0; 240 + } 241 + 242 + usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX); 243 + } 244 + 245 + dev_err(&pcie->pdev->dev, "link never came up\n"); 246 + 247 + return -ETIMEDOUT; 248 + } 249 + 250 + /* 251 + * Set PCIe address window register which could be used for memory 252 + * mapping. 
253 + */ 254 + static void advk_pcie_set_ob_win(struct advk_pcie *pcie, 255 + u32 win_num, u32 match_ms, 256 + u32 match_ls, u32 mask_ms, 257 + u32 mask_ls, u32 remap_ms, 258 + u32 remap_ls, u32 action) 259 + { 260 + advk_writel(pcie, match_ls, OB_WIN_MATCH_LS(win_num)); 261 + advk_writel(pcie, match_ms, OB_WIN_MATCH_MS(win_num)); 262 + advk_writel(pcie, mask_ms, OB_WIN_MASK_MS(win_num)); 263 + advk_writel(pcie, mask_ls, OB_WIN_MASK_LS(win_num)); 264 + advk_writel(pcie, remap_ms, OB_WIN_REMAP_MS(win_num)); 265 + advk_writel(pcie, remap_ls, OB_WIN_REMAP_LS(win_num)); 266 + advk_writel(pcie, action, OB_WIN_ACTIONS(win_num)); 267 + advk_writel(pcie, match_ls | BIT(0), OB_WIN_MATCH_LS(win_num)); 268 + } 269 + 270 + static void advk_pcie_setup_hw(struct advk_pcie *pcie) 271 + { 272 + u32 reg; 273 + int i; 274 + 275 + /* Point PCIe unit MBUS decode windows to DRAM space */ 276 + for (i = 0; i < 8; i++) 277 + advk_pcie_set_ob_win(pcie, i, 0, 0, 0, 0, 0, 0, 0); 278 + 279 + /* Set to Direct mode */ 280 + reg = advk_readl(pcie, CTRL_CONFIG_REG); 281 + reg &= ~(CTRL_MODE_MASK << CTRL_MODE_SHIFT); 282 + reg |= ((PCIE_CORE_MODE_DIRECT & CTRL_MODE_MASK) << CTRL_MODE_SHIFT); 283 + advk_writel(pcie, reg, CTRL_CONFIG_REG); 284 + 285 + /* Set PCI global control register to RC mode */ 286 + reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); 287 + reg |= (IS_RC_MSK << IS_RC_SHIFT); 288 + advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); 289 + 290 + /* Set Advanced Error Capabilities and Control PF0 register */ 291 + reg = PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX | 292 + PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN | 293 + PCIE_CORE_ERR_CAPCTL_ECRC_CHCK | 294 + PCIE_CORE_ERR_CAPCTL_ECRC_CHCK_RCV; 295 + advk_writel(pcie, reg, PCIE_CORE_ERR_CAPCTL_REG); 296 + 297 + /* Set PCIe Device Control and Status 1 PF0 register */ 298 + reg = PCIE_CORE_DEV_CTRL_STATS_RELAX_ORDER_DISABLE | 299 + (7 << PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT) | 300 + PCIE_CORE_DEV_CTRL_STATS_SNOOP_DISABLE | 301 + 
PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT; 302 + advk_writel(pcie, reg, PCIE_CORE_DEV_CTRL_STATS_REG); 303 + 304 + /* Program PCIe Control 2 to disable strict ordering */ 305 + reg = PCIE_CORE_CTRL2_RESERVED | 306 + PCIE_CORE_CTRL2_TD_ENABLE; 307 + advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG); 308 + 309 + /* Set GEN2 */ 310 + reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); 311 + reg &= ~PCIE_GEN_SEL_MSK; 312 + reg |= SPEED_GEN_2; 313 + advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); 314 + 315 + /* Set lane X1 */ 316 + reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); 317 + reg &= ~LANE_CNT_MSK; 318 + reg |= LANE_COUNT_1; 319 + advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); 320 + 321 + /* Enable link training */ 322 + reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); 323 + reg |= LINK_TRAINING_EN; 324 + advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); 325 + 326 + /* Enable MSI */ 327 + reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG); 328 + reg |= PCIE_CORE_CTRL2_MSI_ENABLE; 329 + advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG); 330 + 331 + /* Clear all interrupts */ 332 + advk_writel(pcie, PCIE_ISR0_ALL_MASK, PCIE_ISR0_REG); 333 + advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_REG); 334 + advk_writel(pcie, PCIE_IRQ_ALL_MASK, HOST_CTRL_INT_STATUS_REG); 335 + 336 + /* Disable All ISR0/1 Sources */ 337 + reg = PCIE_ISR0_ALL_MASK; 338 + reg &= ~PCIE_ISR0_MSI_INT_PENDING; 339 + advk_writel(pcie, reg, PCIE_ISR0_MASK_REG); 340 + 341 + advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_MASK_REG); 342 + 343 + /* Unmask all MSI's */ 344 + advk_writel(pcie, 0, PCIE_MSI_MASK_REG); 345 + 346 + /* Enable summary interrupt for GIC SPI source */ 347 + reg = PCIE_IRQ_ALL_MASK & (~PCIE_IRQ_ENABLE_INTS_MASK); 348 + advk_writel(pcie, reg, HOST_CTRL_INT_MASK_REG); 349 + 350 + reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG); 351 + reg |= PCIE_CORE_CTRL2_OB_WIN_ENABLE; 352 + advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG); 353 + 354 + /* Bypass the address window mapping for PIO */ 355 + reg = advk_readl(pcie, PIO_CTRL); 
356 + reg |= PIO_CTRL_ADDR_WIN_DISABLE; 357 + advk_writel(pcie, reg, PIO_CTRL); 358 + 359 + /* Start link training */ 360 + reg = advk_readl(pcie, PCIE_CORE_LINK_CTRL_STAT_REG); 361 + reg |= PCIE_CORE_LINK_TRAINING; 362 + advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG); 363 + 364 + advk_pcie_wait_for_link(pcie); 365 + 366 + reg = PCIE_CORE_LINK_L0S_ENTRY | 367 + (1 << PCIE_CORE_LINK_WIDTH_SHIFT); 368 + advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG); 369 + 370 + reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG); 371 + reg |= PCIE_CORE_CMD_MEM_ACCESS_EN | 372 + PCIE_CORE_CMD_IO_ACCESS_EN | 373 + PCIE_CORE_CMD_MEM_IO_REQ_EN; 374 + advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG); 375 + } 376 + 377 + static void advk_pcie_check_pio_status(struct advk_pcie *pcie) 378 + { 379 + u32 reg; 380 + unsigned int status; 381 + char *strcomp_status, *str_posted; 382 + 383 + reg = advk_readl(pcie, PIO_STAT); 384 + status = (reg & PIO_COMPLETION_STATUS_MASK) >> 385 + PIO_COMPLETION_STATUS_SHIFT; 386 + 387 + if (!status) 388 + return; 389 + 390 + switch (status) { 391 + case PIO_COMPLETION_STATUS_UR: 392 + strcomp_status = "UR"; 393 + break; 394 + case PIO_COMPLETION_STATUS_CRS: 395 + strcomp_status = "CRS"; 396 + break; 397 + case PIO_COMPLETION_STATUS_CA: 398 + strcomp_status = "CA"; 399 + break; 400 + default: 401 + strcomp_status = "Unknown"; 402 + break; 403 + } 404 + 405 + if (reg & PIO_NON_POSTED_REQ) 406 + str_posted = "Non-posted"; 407 + else 408 + str_posted = "Posted"; 409 + 410 + dev_err(&pcie->pdev->dev, "%s PIO Response Status: %s, %#x @ %#x\n", 411 + str_posted, strcomp_status, reg, advk_readl(pcie, PIO_ADDR_LS)); 412 + } 413 + 414 + static int advk_pcie_wait_pio(struct advk_pcie *pcie) 415 + { 416 + unsigned long timeout; 417 + 418 + timeout = jiffies + msecs_to_jiffies(PIO_TIMEOUT_MS); 419 + 420 + while (time_before(jiffies, timeout)) { 421 + u32 start, isr; 422 + 423 + start = advk_readl(pcie, PIO_START); 424 + isr = advk_readl(pcie, PIO_ISR); 425 + if 
(!start && isr) 426 + return 0; 427 + } 428 + 429 + dev_err(&pcie->pdev->dev, "config read/write timed out\n"); 430 + return -ETIMEDOUT; 431 + } 432 + 433 + static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn, 434 + int where, int size, u32 *val) 435 + { 436 + struct advk_pcie *pcie = bus->sysdata; 437 + u32 reg; 438 + int ret; 439 + 440 + if (PCI_SLOT(devfn) != 0) { 441 + *val = 0xffffffff; 442 + return PCIBIOS_DEVICE_NOT_FOUND; 443 + } 444 + 445 + /* Start PIO */ 446 + advk_writel(pcie, 0, PIO_START); 447 + advk_writel(pcie, 1, PIO_ISR); 448 + 449 + /* Program the control register */ 450 + reg = advk_readl(pcie, PIO_CTRL); 451 + reg &= ~PIO_CTRL_TYPE_MASK; 452 + if (bus->number == pcie->root_bus_nr) 453 + reg |= PCIE_CONFIG_RD_TYPE0; 454 + else 455 + reg |= PCIE_CONFIG_RD_TYPE1; 456 + advk_writel(pcie, reg, PIO_CTRL); 457 + 458 + /* Program the address registers */ 459 + reg = PCIE_BDF(devfn) | PCIE_CONF_REG(where); 460 + advk_writel(pcie, reg, PIO_ADDR_LS); 461 + advk_writel(pcie, 0, PIO_ADDR_MS); 462 + 463 + /* Program the data strobe */ 464 + advk_writel(pcie, 0xf, PIO_WR_DATA_STRB); 465 + 466 + /* Start the transfer */ 467 + advk_writel(pcie, 1, PIO_START); 468 + 469 + ret = advk_pcie_wait_pio(pcie); 470 + if (ret < 0) 471 + return PCIBIOS_SET_FAILED; 472 + 473 + advk_pcie_check_pio_status(pcie); 474 + 475 + /* Get the read result */ 476 + *val = advk_readl(pcie, PIO_RD_DATA); 477 + if (size == 1) 478 + *val = (*val >> (8 * (where & 3))) & 0xff; 479 + else if (size == 2) 480 + *val = (*val >> (8 * (where & 3))) & 0xffff; 481 + 482 + return PCIBIOS_SUCCESSFUL; 483 + } 484 + 485 + static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn, 486 + int where, int size, u32 val) 487 + { 488 + struct advk_pcie *pcie = bus->sysdata; 489 + u32 reg; 490 + u32 data_strobe = 0x0; 491 + int offset; 492 + int ret; 493 + 494 + if (PCI_SLOT(devfn) != 0) 495 + return PCIBIOS_DEVICE_NOT_FOUND; 496 + 497 + if (where % size) 498 + return PCIBIOS_SET_FAILED; 499 + 500 + 
/* Start PIO */ 501 + advk_writel(pcie, 0, PIO_START); 502 + advk_writel(pcie, 1, PIO_ISR); 503 + 504 + /* Program the control register */ 505 + reg = advk_readl(pcie, PIO_CTRL); 506 + reg &= ~PIO_CTRL_TYPE_MASK; 507 + if (bus->number == pcie->root_bus_nr) 508 + reg |= PCIE_CONFIG_WR_TYPE0; 509 + else 510 + reg |= PCIE_CONFIG_WR_TYPE1; 511 + advk_writel(pcie, reg, PIO_CTRL); 512 + 513 + /* Program the address registers */ 514 + reg = PCIE_CONF_ADDR(bus->number, devfn, where); 515 + advk_writel(pcie, reg, PIO_ADDR_LS); 516 + advk_writel(pcie, 0, PIO_ADDR_MS); 517 + 518 + /* Calculate the write strobe */ 519 + offset = where & 0x3; 520 + reg = val << (8 * offset); 521 + data_strobe = GENMASK(size - 1, 0) << offset; 522 + 523 + /* Program the data register */ 524 + advk_writel(pcie, reg, PIO_WR_DATA); 525 + 526 + /* Program the data strobe */ 527 + advk_writel(pcie, data_strobe, PIO_WR_DATA_STRB); 528 + 529 + /* Start the transfer */ 530 + advk_writel(pcie, 1, PIO_START); 531 + 532 + ret = advk_pcie_wait_pio(pcie); 533 + if (ret < 0) 534 + return PCIBIOS_SET_FAILED; 535 + 536 + advk_pcie_check_pio_status(pcie); 537 + 538 + return PCIBIOS_SUCCESSFUL; 539 + } 540 + 541 + static struct pci_ops advk_pcie_ops = { 542 + .read = advk_pcie_rd_conf, 543 + .write = advk_pcie_wr_conf, 544 + }; 545 + 546 + static int advk_pcie_alloc_msi(struct advk_pcie *pcie) 547 + { 548 + int hwirq; 549 + 550 + mutex_lock(&pcie->msi_used_lock); 551 + hwirq = find_first_zero_bit(pcie->msi_irq_in_use, MSI_IRQ_NUM); 552 + if (hwirq >= MSI_IRQ_NUM) 553 + hwirq = -ENOSPC; 554 + else 555 + set_bit(hwirq, pcie->msi_irq_in_use); 556 + mutex_unlock(&pcie->msi_used_lock); 557 + 558 + return hwirq; 559 + } 560 + 561 + static void advk_pcie_free_msi(struct advk_pcie *pcie, int hwirq) 562 + { 563 + mutex_lock(&pcie->msi_used_lock); 564 + if (!test_bit(hwirq, pcie->msi_irq_in_use)) 565 + dev_err(&pcie->pdev->dev, "trying to free unused MSI#%d\n", 566 + hwirq); 567 + else 568 + clear_bit(hwirq, 
pcie->msi_irq_in_use); 569 + mutex_unlock(&pcie->msi_used_lock); 570 + } 571 + 572 + static int advk_pcie_setup_msi_irq(struct msi_controller *chip, 573 + struct pci_dev *pdev, 574 + struct msi_desc *desc) 575 + { 576 + struct advk_pcie *pcie = pdev->bus->sysdata; 577 + struct msi_msg msg; 578 + int virq, hwirq; 579 + phys_addr_t msi_msg_phys; 580 + 581 + /* We support MSI, but not MSI-X */ 582 + if (desc->msi_attrib.is_msix) 583 + return -EINVAL; 584 + 585 + hwirq = advk_pcie_alloc_msi(pcie); 586 + if (hwirq < 0) 587 + return hwirq; 588 + 589 + virq = irq_create_mapping(pcie->msi_domain, hwirq); 590 + if (!virq) { 591 + advk_pcie_free_msi(pcie, hwirq); 592 + return -EINVAL; 593 + } 594 + 595 + irq_set_msi_desc(virq, desc); 596 + 597 + msi_msg_phys = virt_to_phys(&pcie->msi_msg); 598 + 599 + msg.address_lo = lower_32_bits(msi_msg_phys); 600 + msg.address_hi = upper_32_bits(msi_msg_phys); 601 + msg.data = virq; 602 + 603 + pci_write_msi_msg(virq, &msg); 604 + 605 + return 0; 606 + } 607 + 608 + static void advk_pcie_teardown_msi_irq(struct msi_controller *chip, 609 + unsigned int irq) 610 + { 611 + struct irq_data *d = irq_get_irq_data(irq); 612 + struct msi_desc *msi = irq_data_get_msi_desc(d); 613 + struct advk_pcie *pcie = msi_desc_to_pci_sysdata(msi); 614 + unsigned long hwirq = d->hwirq; 615 + 616 + irq_dispose_mapping(irq); 617 + advk_pcie_free_msi(pcie, hwirq); 618 + } 619 + 620 + static int advk_pcie_msi_map(struct irq_domain *domain, 621 + unsigned int virq, irq_hw_number_t hw) 622 + { 623 + struct advk_pcie *pcie = domain->host_data; 624 + 625 + irq_set_chip_and_handler(virq, &pcie->msi_irq_chip, 626 + handle_simple_irq); 627 + 628 + return 0; 629 + } 630 + 631 + static const struct irq_domain_ops advk_pcie_msi_irq_ops = { 632 + .map = advk_pcie_msi_map, 633 + }; 634 + 635 + static void advk_pcie_irq_mask(struct irq_data *d) 636 + { 637 + struct advk_pcie *pcie = d->domain->host_data; 638 + irq_hw_number_t hwirq = irqd_to_hwirq(d); 639 + u32 mask; 640 + 
641 + mask = advk_readl(pcie, PCIE_ISR0_MASK_REG); 642 + mask |= PCIE_ISR0_INTX_ASSERT(hwirq); 643 + advk_writel(pcie, mask, PCIE_ISR0_MASK_REG); 644 + } 645 + 646 + static void advk_pcie_irq_unmask(struct irq_data *d) 647 + { 648 + struct advk_pcie *pcie = d->domain->host_data; 649 + irq_hw_number_t hwirq = irqd_to_hwirq(d); 650 + u32 mask; 651 + 652 + mask = advk_readl(pcie, PCIE_ISR0_MASK_REG); 653 + mask &= ~PCIE_ISR0_INTX_ASSERT(hwirq); 654 + advk_writel(pcie, mask, PCIE_ISR0_MASK_REG); 655 + } 656 + 657 + static int advk_pcie_irq_map(struct irq_domain *h, 658 + unsigned int virq, irq_hw_number_t hwirq) 659 + { 660 + struct advk_pcie *pcie = h->host_data; 661 + 662 + advk_pcie_irq_mask(irq_get_irq_data(virq)); 663 + irq_set_status_flags(virq, IRQ_LEVEL); 664 + irq_set_chip_and_handler(virq, &pcie->irq_chip, 665 + handle_level_irq); 666 + irq_set_chip_data(virq, pcie); 667 + 668 + return 0; 669 + } 670 + 671 + static const struct irq_domain_ops advk_pcie_irq_domain_ops = { 672 + .map = advk_pcie_irq_map, 673 + .xlate = irq_domain_xlate_onecell, 674 + }; 675 + 676 + static int advk_pcie_init_msi_irq_domain(struct advk_pcie *pcie) 677 + { 678 + struct device *dev = &pcie->pdev->dev; 679 + struct device_node *node = dev->of_node; 680 + struct irq_chip *msi_irq_chip; 681 + struct msi_controller *msi; 682 + phys_addr_t msi_msg_phys; 683 + int ret; 684 + 685 + msi_irq_chip = &pcie->msi_irq_chip; 686 + 687 + msi_irq_chip->name = devm_kasprintf(dev, GFP_KERNEL, "%s-msi", 688 + dev_name(dev)); 689 + if (!msi_irq_chip->name) 690 + return -ENOMEM; 691 + 692 + msi_irq_chip->irq_enable = pci_msi_unmask_irq; 693 + msi_irq_chip->irq_disable = pci_msi_mask_irq; 694 + msi_irq_chip->irq_mask = pci_msi_mask_irq; 695 + msi_irq_chip->irq_unmask = pci_msi_unmask_irq; 696 + 697 + msi = &pcie->msi; 698 + 699 + msi->setup_irq = advk_pcie_setup_msi_irq; 700 + msi->teardown_irq = advk_pcie_teardown_msi_irq; 701 + msi->of_node = node; 702 + 703 + mutex_init(&pcie->msi_used_lock); 704 + 
705 + msi_msg_phys = virt_to_phys(&pcie->msi_msg); 706 + 707 + advk_writel(pcie, lower_32_bits(msi_msg_phys), 708 + PCIE_MSI_ADDR_LOW_REG); 709 + advk_writel(pcie, upper_32_bits(msi_msg_phys), 710 + PCIE_MSI_ADDR_HIGH_REG); 711 + 712 + pcie->msi_domain = 713 + irq_domain_add_linear(NULL, MSI_IRQ_NUM, 714 + &advk_pcie_msi_irq_ops, pcie); 715 + if (!pcie->msi_domain) 716 + return -ENOMEM; 717 + 718 + ret = of_pci_msi_chip_add(msi); 719 + if (ret < 0) { 720 + irq_domain_remove(pcie->msi_domain); 721 + return ret; 722 + } 723 + 724 + return 0; 725 + } 726 + 727 + static void advk_pcie_remove_msi_irq_domain(struct advk_pcie *pcie) 728 + { 729 + of_pci_msi_chip_remove(&pcie->msi); 730 + irq_domain_remove(pcie->msi_domain); 731 + } 732 + 733 + static int advk_pcie_init_irq_domain(struct advk_pcie *pcie) 734 + { 735 + struct device *dev = &pcie->pdev->dev; 736 + struct device_node *node = dev->of_node; 737 + struct device_node *pcie_intc_node; 738 + struct irq_chip *irq_chip; 739 + 740 + pcie_intc_node = of_get_next_child(node, NULL); 741 + if (!pcie_intc_node) { 742 + dev_err(dev, "No PCIe Intc node found\n"); 743 + return -ENODEV; 744 + } 745 + 746 + irq_chip = &pcie->irq_chip; 747 + 748 + irq_chip->name = devm_kasprintf(dev, GFP_KERNEL, "%s-irq", 749 + dev_name(dev)); 750 + if (!irq_chip->name) { 751 + of_node_put(pcie_intc_node); 752 + return -ENOMEM; 753 + } 754 + 755 + irq_chip->irq_mask = advk_pcie_irq_mask; 756 + irq_chip->irq_mask_ack = advk_pcie_irq_mask; 757 + irq_chip->irq_unmask = advk_pcie_irq_unmask; 758 + 759 + pcie->irq_domain = 760 + irq_domain_add_linear(pcie_intc_node, LEGACY_IRQ_NUM, 761 + &advk_pcie_irq_domain_ops, pcie); 762 + if (!pcie->irq_domain) { 763 + dev_err(dev, "Failed to get a INTx IRQ domain\n"); 764 + of_node_put(pcie_intc_node); 765 + return -ENOMEM; 766 + } 767 + 768 + return 0; 769 + } 770 + 771 + static void advk_pcie_remove_irq_domain(struct advk_pcie *pcie) 772 + { 773 + irq_domain_remove(pcie->irq_domain); 774 + } 775 + 776 + 
static void advk_pcie_handle_msi(struct advk_pcie *pcie) 777 + { 778 + u32 msi_val, msi_mask, msi_status, msi_idx; 779 + u16 msi_data; 780 + 781 + msi_mask = advk_readl(pcie, PCIE_MSI_MASK_REG); 782 + msi_val = advk_readl(pcie, PCIE_MSI_STATUS_REG); 783 + msi_status = msi_val & ~msi_mask; 784 + 785 + for (msi_idx = 0; msi_idx < MSI_IRQ_NUM; msi_idx++) { 786 + if (!(BIT(msi_idx) & msi_status)) 787 + continue; 788 + 789 + advk_writel(pcie, BIT(msi_idx), PCIE_MSI_STATUS_REG); 790 + msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & 0xFF; 791 + generic_handle_irq(msi_data); 792 + } 793 + 794 + advk_writel(pcie, PCIE_ISR0_MSI_INT_PENDING, 795 + PCIE_ISR0_REG); 796 + } 797 + 798 + static void advk_pcie_handle_int(struct advk_pcie *pcie) 799 + { 800 + u32 val, mask, status; 801 + int i, virq; 802 + 803 + val = advk_readl(pcie, PCIE_ISR0_REG); 804 + mask = advk_readl(pcie, PCIE_ISR0_MASK_REG); 805 + status = val & ((~mask) & PCIE_ISR0_ALL_MASK); 806 + 807 + if (!status) { 808 + advk_writel(pcie, val, PCIE_ISR0_REG); 809 + return; 810 + } 811 + 812 + /* Process MSI interrupts */ 813 + if (status & PCIE_ISR0_MSI_INT_PENDING) 814 + advk_pcie_handle_msi(pcie); 815 + 816 + /* Process legacy interrupts */ 817 + for (i = 0; i < LEGACY_IRQ_NUM; i++) { 818 + if (!(status & PCIE_ISR0_INTX_ASSERT(i))) 819 + continue; 820 + 821 + advk_writel(pcie, PCIE_ISR0_INTX_ASSERT(i), 822 + PCIE_ISR0_REG); 823 + 824 + virq = irq_find_mapping(pcie->irq_domain, i); 825 + generic_handle_irq(virq); 826 + } 827 + } 828 + 829 + static irqreturn_t advk_pcie_irq_handler(int irq, void *arg) 830 + { 831 + struct advk_pcie *pcie = arg; 832 + u32 status; 833 + 834 + status = advk_readl(pcie, HOST_CTRL_INT_STATUS_REG); 835 + if (!(status & PCIE_IRQ_CORE_INT)) 836 + return IRQ_NONE; 837 + 838 + advk_pcie_handle_int(pcie); 839 + 840 + /* Clear interrupt */ 841 + advk_writel(pcie, PCIE_IRQ_CORE_INT, HOST_CTRL_INT_STATUS_REG); 842 + 843 + return IRQ_HANDLED; 844 + } 845 + 846 + static int 
advk_pcie_parse_request_of_pci_ranges(struct advk_pcie *pcie) 847 + { 848 + int err, res_valid = 0; 849 + struct device *dev = &pcie->pdev->dev; 850 + struct device_node *np = dev->of_node; 851 + struct resource_entry *win; 852 + resource_size_t iobase; 853 + 854 + INIT_LIST_HEAD(&pcie->resources); 855 + 856 + err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pcie->resources, 857 + &iobase); 858 + if (err) 859 + return err; 860 + 861 + err = devm_request_pci_bus_resources(dev, &pcie->resources); 862 + if (err) 863 + goto out_release_res; 864 + 865 + resource_list_for_each_entry(win, &pcie->resources) { 866 + struct resource *res = win->res; 867 + 868 + switch (resource_type(res)) { 869 + case IORESOURCE_IO: 870 + advk_pcie_set_ob_win(pcie, 1, 871 + upper_32_bits(res->start), 872 + lower_32_bits(res->start), 873 + 0, 0xF8000000, 0, 874 + lower_32_bits(res->start), 875 + OB_PCIE_IO); 876 + err = pci_remap_iospace(res, iobase); 877 + if (err) 878 + dev_warn(dev, "error %d: failed to map resource %pR\n", 879 + err, res); 880 + break; 881 + case IORESOURCE_MEM: 882 + advk_pcie_set_ob_win(pcie, 0, 883 + upper_32_bits(res->start), 884 + lower_32_bits(res->start), 885 + 0x0, 0xF8000000, 0, 886 + lower_32_bits(res->start), 887 + (2 << 20) | OB_PCIE_MEM); 888 + res_valid |= !(res->flags & IORESOURCE_PREFETCH); 889 + break; 890 + case IORESOURCE_BUS: 891 + pcie->root_bus_nr = res->start; 892 + break; 893 + } 894 + } 895 + 896 + if (!res_valid) { 897 + dev_err(dev, "non-prefetchable memory resource required\n"); 898 + err = -EINVAL; 899 + goto out_release_res; 900 + } 901 + 902 + return 0; 903 + 904 + out_release_res: 905 + pci_free_resource_list(&pcie->resources); 906 + return err; 907 + } 908 + 909 + static int advk_pcie_probe(struct platform_device *pdev) 910 + { 911 + struct advk_pcie *pcie; 912 + struct resource *res; 913 + struct pci_bus *bus, *child; 914 + struct msi_controller *msi; 915 + struct device_node *msi_node; 916 + int ret, irq; 917 + 918 + pcie = 
devm_kzalloc(&pdev->dev, sizeof(struct advk_pcie), 919 + GFP_KERNEL); 920 + if (!pcie) 921 + return -ENOMEM; 922 + 923 + pcie->pdev = pdev; 924 + platform_set_drvdata(pdev, pcie); 925 + 926 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 927 + pcie->base = devm_ioremap_resource(&pdev->dev, res); 928 + if (IS_ERR(pcie->base)) { 929 + dev_err(&pdev->dev, "Failed to map registers\n"); 930 + return PTR_ERR(pcie->base); 931 + } 932 + 933 + irq = platform_get_irq(pdev, 0); 934 + ret = devm_request_irq(&pdev->dev, irq, advk_pcie_irq_handler, 935 + IRQF_SHARED | IRQF_NO_THREAD, "advk-pcie", 936 + pcie); 937 + if (ret) { 938 + dev_err(&pdev->dev, "Failed to register interrupt\n"); 939 + return ret; 940 + } 941 + 942 + ret = advk_pcie_parse_request_of_pci_ranges(pcie); 943 + if (ret) { 944 + dev_err(&pdev->dev, "Failed to parse resources\n"); 945 + return ret; 946 + } 947 + 948 + advk_pcie_setup_hw(pcie); 949 + 950 + ret = advk_pcie_init_irq_domain(pcie); 951 + if (ret) { 952 + dev_err(&pdev->dev, "Failed to initialize irq\n"); 953 + return ret; 954 + } 955 + 956 + ret = advk_pcie_init_msi_irq_domain(pcie); 957 + if (ret) { 958 + dev_err(&pdev->dev, "Failed to initialize irq\n"); 959 + advk_pcie_remove_irq_domain(pcie); 960 + return ret; 961 + } 962 + 963 + msi_node = of_parse_phandle(pdev->dev.of_node, "msi-parent", 0); 964 + if (msi_node) 965 + msi = of_pci_find_msi_chip_by_node(msi_node); 966 + else 967 + msi = NULL; 968 + 969 + bus = pci_scan_root_bus_msi(&pdev->dev, 0, &advk_pcie_ops, 970 + pcie, &pcie->resources, &pcie->msi); 971 + if (!bus) { 972 + advk_pcie_remove_msi_irq_domain(pcie); 973 + advk_pcie_remove_irq_domain(pcie); 974 + return -ENOMEM; 975 + } 976 + 977 + pci_bus_assign_resources(bus); 978 + 979 + list_for_each_entry(child, &bus->children, node) 980 + pcie_bus_configure_settings(child); 981 + 982 + pci_bus_add_devices(bus); 983 + 984 + return 0; 985 + } 986 + 987 + static const struct of_device_id advk_pcie_of_match_table[] = { 988 + { .compatible 
= "marvell,armada-3700-pcie", }, 989 + {}, 990 + }; 991 + 992 + static struct platform_driver advk_pcie_driver = { 993 + .driver = { 994 + .name = "advk-pcie", 995 + .of_match_table = advk_pcie_of_match_table, 996 + /* Driver unloading/unbinding currently not supported */ 997 + .suppress_bind_attrs = true, 998 + }, 999 + .probe = advk_pcie_probe, 1000 + }; 1001 + builtin_platform_driver(advk_pcie_driver);
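The MSI management in the new pci-aardvark driver above rests on a small mutex-protected bitmap allocator (advk_pcie_alloc_msi/advk_pcie_free_msi): find the first clear bit under the lock, set it, and hand it out as the hwirq. A minimal userspace sketch of the same pattern, with the kernel bitmap and mutex primitives replaced by a plain `uint32_t` and a pthread mutex (the function names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stdint.h>

#define MSI_IRQ_NUM 32

static uint32_t msi_irq_in_use;		/* bit n set => hwirq n allocated */
static pthread_mutex_t msi_used_lock = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors advk_pcie_alloc_msi(): first free bit, or -ENOSPC when full. */
static int msi_alloc(void)
{
	int hwirq;

	pthread_mutex_lock(&msi_used_lock);
	for (hwirq = 0; hwirq < MSI_IRQ_NUM; hwirq++)
		if (!(msi_irq_in_use & (1u << hwirq)))
			break;
	if (hwirq >= MSI_IRQ_NUM)
		hwirq = -ENOSPC;
	else
		msi_irq_in_use |= 1u << hwirq;
	pthread_mutex_unlock(&msi_used_lock);

	return hwirq;
}

/* Mirrors advk_pcie_free_msi(): clear the bit under the same lock. */
static void msi_free(int hwirq)
{
	pthread_mutex_lock(&msi_used_lock);
	msi_irq_in_use &= ~(1u << hwirq);
	pthread_mutex_unlock(&msi_used_lock);
}
```

The lock matters because several devices can request MSIs concurrently; the driver's version additionally warns when a never-allocated hwirq is freed.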
+2 -2
drivers/pci/host/pci-dra7xx.c
···
181 181
182 182 	if (!pcie_intc_node) {
183 183 		dev_err(dev, "No PCIe Intc node found\n");
184     -		return PTR_ERR(pcie_intc_node);
    184 +		return -ENODEV;
185 185 	}
186 186
187 187 	pp->irq_domain = irq_domain_add_linear(pcie_intc_node, 4,
188 188 					       &intx_domain_ops, pp);
189 189 	if (!pp->irq_domain) {
190 190 		dev_err(dev, "Failed to get a INTx IRQ domain\n");
191     -		return PTR_ERR(pp->irq_domain);
    191 +		return -ENODEV;
192 192 	}
193 193
194 194 	return 0;
+20 -24
drivers/pci/host/pci-host-common.c
··· 20 20 #include <linux/module.h> 21 21 #include <linux/of_address.h> 22 22 #include <linux/of_pci.h> 23 + #include <linux/pci-ecam.h> 23 24 #include <linux/platform_device.h> 24 - 25 - #include "../ecam.h" 26 25 27 26 static int gen_pci_parse_request_of_pci_ranges(struct device *dev, 28 27 struct list_head *resources, struct resource **bus_range) ··· 35 36 if (err) 36 37 return err; 37 38 39 + err = devm_request_pci_bus_resources(dev, resources); 40 + if (err) 41 + return err; 42 + 38 43 resource_list_for_each_entry(win, resources) { 39 - struct resource *parent, *res = win->res; 44 + struct resource *res = win->res; 40 45 41 46 switch (resource_type(res)) { 42 47 case IORESOURCE_IO: 43 - parent = &ioport_resource; 44 48 err = pci_remap_iospace(res, iobase); 45 - if (err) { 49 + if (err) 46 50 dev_warn(dev, "error %d: failed to map resource %pR\n", 47 51 err, res); 48 - continue; 49 - } 50 52 break; 51 53 case IORESOURCE_MEM: 52 - parent = &iomem_resource; 53 54 res_valid |= !(res->flags & IORESOURCE_PREFETCH); 54 55 break; 55 56 case IORESOURCE_BUS: 56 57 *bus_range = res; 57 - default: 58 - continue; 58 + break; 59 59 } 60 - 61 - err = devm_request_resource(dev, parent, res); 62 - if (err) 63 - goto out_release_res; 64 60 } 65 61 66 - if (!res_valid) { 67 - dev_err(dev, "non-prefetchable memory resource required\n"); 68 - err = -EINVAL; 69 - goto out_release_res; 70 - } 62 + if (res_valid) 63 + return 0; 71 64 72 - return 0; 73 - 74 - out_release_res: 75 - return err; 65 + dev_err(dev, "non-prefetchable memory resource required\n"); 66 + return -EINVAL; 76 67 } 77 68 78 69 static void gen_pci_unmap_cfg(void *ptr) ··· 144 155 145 156 pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci); 146 157 147 - if (!pci_has_flag(PCI_PROBE_ONLY)) { 158 + /* 159 + * We insert PCI resources into the iomem_resource and 160 + * ioport_resource trees in either pci_bus_claim_resources() 161 + * or pci_bus_assign_resources(). 
162 + */ 163 + if (pci_has_flag(PCI_PROBE_ONLY)) { 164 + pci_bus_claim_resources(bus); 165 + } else { 148 166 pci_bus_size_bridges(bus); 149 167 pci_bus_assign_resources(bus); 150 168
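The new comment in pci-host-common draws the line between claiming and assigning: with PCI_PROBE_ONLY the kernel must leave firmware-programmed BARs where they are, so pci_bus_claim_resources() only validates that each range fits inside a parent window before inserting it into the resource tree, while pci_bus_assign_resources() actually picks addresses. A toy model of that containment check (the struct and function names are invented for illustration):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

struct res {
	uint64_t start, end;
};

/* Claiming never moves anything: it only verifies that the
 * firmware-assigned child range lies wholly inside the parent
 * window, as pci_bus_claim_resources() does via request_resource(). */
static int claim(const struct res *parent, const struct res *child)
{
	if (child->start >= parent->start && child->end <= parent->end)
		return 0;
	return -EINVAL;
}
```

When a claim fails, the resource stays unowned rather than being relocated, which is the safe behavior on firmware-managed (PROBE_ONLY) systems.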
+3 -10
drivers/pci/host/pci-host-generic.c
···
 20  20  */
 21  21
 22  22 #include <linux/kernel.h>
 23     -#include <linux/module.h>
     23 +#include <linux/init.h>
 24  24 #include <linux/of_address.h>
 25  25 #include <linux/of_pci.h>
     26 +#include <linux/pci-ecam.h>
 26  27 #include <linux/platform_device.h>
 27     -
 28     -#include "../ecam.h"
 29  28
 30  29 static struct pci_ecam_ops gen_pci_cfg_cam_bus_ops = {
 31  30 	.bus_shift	= 16,
···
 45  46 	{ },
 46  47 };
 47  48
 48     -MODULE_DEVICE_TABLE(of, gen_pci_of_match);
 49     -
 50  49 static int gen_pci_probe(struct platform_device *pdev)
 51  50 {
 52  51 	const struct of_device_id *of_id;
···
 63  66 	},
 64  67 	.probe = gen_pci_probe,
 65  68 };
 66     -module_platform_driver(gen_pci_driver);
 67     -
 68     -MODULE_DESCRIPTION("Generic PCI host driver");
 69     -MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>");
 70     -MODULE_LICENSE("GPL v2");
     69 +builtin_platform_driver(gen_pci_driver);
+17 -13
drivers/pci/host/pci-hyperv.c
··· 732 732 733 733 pdev = msi_desc_to_pci_dev(msi); 734 734 hbus = info->data; 735 - hpdev = get_pcichild_wslot(hbus, devfn_to_wslot(pdev->devfn)); 736 - if (!hpdev) 735 + int_desc = irq_data_get_irq_chip_data(irq_data); 736 + if (!int_desc) 737 737 return; 738 738 739 - int_desc = irq_data_get_irq_chip_data(irq_data); 740 - if (int_desc) { 741 - irq_data->chip_data = NULL; 742 - hv_int_desc_free(hpdev, int_desc); 739 + irq_data->chip_data = NULL; 740 + hpdev = get_pcichild_wslot(hbus, devfn_to_wslot(pdev->devfn)); 741 + if (!hpdev) { 742 + kfree(int_desc); 743 + return; 743 744 } 744 745 746 + hv_int_desc_free(hpdev, int_desc); 745 747 put_pcichild(hpdev, hv_pcidev_ref_by_slot); 746 748 } 747 749 ··· 1659 1657 continue; 1660 1658 } 1661 1659 1660 + /* Zero length indicates there are no more packets. */ 1661 + if (ret || !bytes_recvd) 1662 + break; 1663 + 1662 1664 /* 1663 1665 * All incoming packets must be at least as large as a 1664 1666 * response. 1665 1667 */ 1666 - if (bytes_recvd <= sizeof(struct pci_response)) { 1667 - kfree(buffer); 1668 - return; 1669 - } 1668 + if (bytes_recvd <= sizeof(struct pci_response)) 1669 + continue; 1670 1670 desc = (struct vmpacket_descriptor *)buffer; 1671 1671 1672 1672 switch (desc->type) { ··· 1683 1679 comp_packet->completion_func(comp_packet->compl_ctxt, 1684 1680 response, 1685 1681 bytes_recvd); 1686 - kfree(buffer); 1687 - return; 1682 + break; 1688 1683 1689 1684 case VM_PKT_DATA_INBAND: 1690 1685 ··· 1730 1727 desc->type, req_id, bytes_recvd); 1731 1728 break; 1732 1729 } 1733 - break; 1734 1730 } 1731 + 1732 + kfree(buffer); 1735 1733 } 1736 1734 1737 1735 /**
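The hv_pci_onchannelcallback() rework above replaces the per-branch `kfree(buffer); return;` pairs with `break`/`continue` plus a single kfree() after the loop, which is what closes both the leak (early returns that skipped the free) and the premature exit on completion packets. The resulting control-flow shape, sketched in userspace (the packet sizes and the 4-byte "response header" threshold are invented for the demo):

```c
#include <assert.h>
#include <stdlib.h>

static int frees;	/* counts releases so the test can check "exactly once" */

static void buf_free(void *p)
{
	free(p);
	frees++;
}

/* Processes a fake packet stream; a zero size means "no more packets".
 * Runt packets are skipped with continue, end-of-stream breaks out,
 * and every path funnels through the single buf_free() at the bottom. */
static int drain(const int *sizes, int n)
{
	char *buffer = malloc(64);
	int i, handled = 0;

	for (i = 0; i < n; i++) {
		int bytes = sizes[i];

		if (!bytes)		/* zero length: stream finished */
			break;
		if (bytes <= 4)		/* too small to be a response: skip */
			continue;
		handled++;		/* a real packet got handled */
	}

	buf_free(buffer);		/* single exit point frees exactly once */
	return handled;
}
```

Funneling every exit through one cleanup point is the same discipline the kernel usually expresses with `goto out;` labels; here the loop structure makes plain break/continue sufficient.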
+2 -8
drivers/pci/host/pci-keystone.c
···
 17  17 #include <linux/delay.h>
 18  18 #include <linux/interrupt.h>
 19  19 #include <linux/irqdomain.h>
 20     -#include <linux/module.h>
     20 +#include <linux/init.h>
 21  21 #include <linux/msi.h>
 22  22 #include <linux/of_irq.h>
 23  23 #include <linux/of.h>
···
360 360 	},
361 361 	{ },
362 362 };
363     -MODULE_DEVICE_TABLE(of, ks_pcie_of_match);
364 363
365 364 static int __exit ks_pcie_remove(struct platform_device *pdev)
366 365 {
···
438 439 		.of_match_table = of_match_ptr(ks_pcie_of_match),
439 440 	},
440 441 };
441     -
442     -module_platform_driver(ks_pcie_driver);
443     -
444     -MODULE_AUTHOR("Murali Karicheri <m-karicheri2@ti.com>");
445     -MODULE_DESCRIPTION("Keystone PCIe host controller driver");
446     -MODULE_LICENSE("GPL v2");
    442 +builtin_platform_driver(ks_pcie_driver);
+2 -8
drivers/pci/host/pci-layerscape.c
···
 12  12
 13  13 #include <linux/kernel.h>
 14  14 #include <linux/interrupt.h>
 15     -#include <linux/module.h>
     15 +#include <linux/init.h>
 16  16 #include <linux/of_pci.h>
 17  17 #include <linux/of_platform.h>
 18  18 #include <linux/of_irq.h>
···
211 211 	{ .compatible = "fsl,ls2085a-pcie", .data = &ls2080_drvdata },
212 212 	{ },
213 213 };
214     -MODULE_DEVICE_TABLE(of, ls_pcie_of_match);
215 214
216 215 static int __init ls_add_pcie_port(struct pcie_port *pp,
217 216 				   struct platform_device *pdev)
···
274 275 		.of_match_table = ls_pcie_of_match,
275 276 	},
276 277 };
277     -
278     -module_platform_driver_probe(ls_pcie_driver, ls_pcie_probe);
279     -
280     -MODULE_AUTHOR("Minghuan Lian <Minghuan.Lian@freescale.com>");
281     -MODULE_DESCRIPTION("Freescale Layerscape PCIe host controller driver");
282     -MODULE_LICENSE("GPL v2");
    278 +builtin_platform_driver_probe(ls_pcie_driver, ls_pcie_probe);
+11 -17
drivers/pci/host/pci-mvebu.c
··· 1 1 /* 2 2 * PCIe driver for Marvell Armada 370 and Armada XP SoCs 3 3 * 4 + * Author: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> 5 + * 4 6 * This file is licensed under the terms of the GNU General Public 5 7 * License version 2. This program is licensed "as is" without any 6 8 * warranty of any kind, whether express or implied. ··· 13 11 #include <linux/clk.h> 14 12 #include <linux/delay.h> 15 13 #include <linux/gpio.h> 16 - #include <linux/module.h> 14 + #include <linux/init.h> 17 15 #include <linux/mbus.h> 18 16 #include <linux/msi.h> 19 17 #include <linux/slab.h> ··· 841 839 static int mvebu_pcie_setup(int nr, struct pci_sys_data *sys) 842 840 { 843 841 struct mvebu_pcie *pcie = sys_to_pcie(sys); 844 - int i; 842 + int err, i; 845 843 846 844 pcie->mem.name = "PCI MEM"; 847 845 pcie->realio.name = "PCI I/O"; 848 846 849 - if (request_resource(&iomem_resource, &pcie->mem)) 850 - return 0; 851 - 852 - if (resource_size(&pcie->realio) != 0) { 853 - if (request_resource(&ioport_resource, &pcie->realio)) { 854 - release_resource(&pcie->mem); 855 - return 0; 856 - } 847 + if (resource_size(&pcie->realio) != 0) 857 848 pci_add_resource_offset(&sys->resources, &pcie->realio, 858 849 sys->io_offset); 859 - } 850 + 860 851 pci_add_resource_offset(&sys->resources, &pcie->mem, sys->mem_offset); 861 852 pci_add_resource(&sys->resources, &pcie->busn); 853 + 854 + err = devm_request_pci_bus_resources(&pcie->pdev->dev, &sys->resources); 855 + if (err) 856 + return 0; 862 857 863 858 for (i = 0; i < pcie->nports; i++) { 864 859 struct mvebu_pcie_port *port = &pcie->ports[i]; ··· 1297 1298 { .compatible = "marvell,kirkwood-pcie", }, 1298 1299 {}, 1299 1300 }; 1300 - MODULE_DEVICE_TABLE(of, mvebu_pcie_of_match_table); 1301 1301 1302 1302 static const struct dev_pm_ops mvebu_pcie_pm_ops = { 1303 1303 SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mvebu_pcie_suspend, mvebu_pcie_resume) ··· 1312 1314 }, 1313 1315 .probe = mvebu_pcie_probe, 1314 1316 }; 1315 - 
module_platform_driver(mvebu_pcie_driver); 1316 - 1317 - MODULE_AUTHOR("Thomas Petazzoni <thomas.petazzoni@free-electrons.com>"); 1318 - MODULE_DESCRIPTION("Marvell EBU PCIe driver"); 1319 - MODULE_LICENSE("GPL v2"); 1317 + builtin_platform_driver(mvebu_pcie_driver);
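mvebu's conversion to devm_request_pci_bus_resources() removes the hand-rolled request_resource()/release_resource() unwinding because devm ties each acquisition to the device's lifetime: every managed acquisition records a release action, and all of them are replayed in reverse order when the device goes away. A hypothetical userspace model of that bookkeeping (a fixed-size table stands in for the kernel's per-device devres list; names mimic the kernel API but this is not it):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ACTIONS 8

struct devres {
	void (*release)(void *data);
	void *data;
};

static struct devres actions[MAX_ACTIONS];
static int nr_actions;

static int released[MAX_ACTIONS];
static int nr_released;

/* Record a cleanup action, as devm_request_*() helpers do internally. */
static int devm_add_action(void (*release)(void *), void *data)
{
	if (nr_actions >= MAX_ACTIONS)
		return -1;
	actions[nr_actions++] = (struct devres){ release, data };
	return 0;
}

/* Device teardown: run the releases in reverse acquisition order,
 * so later acquisitions are undone before the resources they built on. */
static void devres_release_all(void)
{
	while (nr_actions > 0) {
		struct devres *dr = &actions[--nr_actions];
		dr->release(dr->data);
	}
}

/* Test hook: log which "resource" (an integer tag) was released. */
static void mark_released(void *data)
{
	released[nr_released++] = (int)(size_t)data;
}
```

This is why the patched mvebu_pcie_setup() can simply return on failure: the resources it did manage to request are released automatically, with no manual rollback ladder.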
+8 -19
drivers/pci/host/pci-rcar-gen2.c
··· 4 4 * Copyright (C) 2013 Renesas Solutions Corp. 5 5 * Copyright (C) 2013 Cogent Embedded, Inc. 6 6 * 7 + * Author: Valentine Barshak <valentine.barshak@cogentembedded.com> 8 + * 7 9 * This program is free software; you can redistribute it and/or modify 8 10 * it under the terms of the GNU General Public License version 2 as 9 11 * published by the Free Software Foundation. ··· 16 14 #include <linux/interrupt.h> 17 15 #include <linux/io.h> 18 16 #include <linux/kernel.h> 19 - #include <linux/module.h> 20 17 #include <linux/of_address.h> 21 18 #include <linux/of_pci.h> 22 19 #include <linux/pci.h> ··· 98 97 struct rcar_pci_priv { 99 98 struct device *dev; 100 99 void __iomem *reg; 101 - struct resource io_res; 102 100 struct resource mem_res; 103 101 struct resource *cfg_res; 104 102 unsigned busnr; ··· 194 194 struct rcar_pci_priv *priv = sys->private_data; 195 195 void __iomem *reg = priv->reg; 196 196 u32 val; 197 + int ret; 197 198 198 199 pm_runtime_enable(priv->dev); 199 200 pm_runtime_get_sync(priv->dev); ··· 274 273 rcar_pci_setup_errirq(priv); 275 274 276 275 /* Add PCI resources */ 277 - pci_add_resource(&sys->resources, &priv->io_res); 278 276 pci_add_resource(&sys->resources, &priv->mem_res); 277 + ret = devm_request_pci_bus_resources(priv->dev, &sys->resources); 278 + if (ret < 0) 279 + return ret; 279 280 280 281 /* Setup bus number based on platform device id / of bus-range */ 281 282 sys->busnr = priv->busnr; ··· 374 371 return -ENOMEM; 375 372 376 373 priv->mem_res = *mem_res; 377 - /* 378 - * The controller does not support/use port I/O, 379 - * so setup a dummy port I/O region here. 
380 - */ 381 - priv->io_res.start = priv->mem_res.start; 382 - priv->io_res.end = priv->mem_res.end; 383 - priv->io_res.flags = IORESOURCE_IO; 384 - 385 374 priv->cfg_res = cfg_res; 386 375 387 376 priv->irq = platform_get_irq(pdev, 0); ··· 416 421 hw_private[0] = priv; 417 422 memset(&hw, 0, sizeof(hw)); 418 423 hw.nr_controllers = ARRAY_SIZE(hw_private); 424 + hw.io_optional = 1; 419 425 hw.private_data = hw_private; 420 426 hw.map_irq = rcar_pci_map_irq; 421 427 hw.ops = &rcar_pci_ops; ··· 433 437 { }, 434 438 }; 435 439 436 - MODULE_DEVICE_TABLE(of, rcar_pci_of_match); 437 - 438 440 static struct platform_driver rcar_pci_driver = { 439 441 .driver = { 440 442 .name = "pci-rcar-gen2", ··· 441 447 }, 442 448 .probe = rcar_pci_probe, 443 449 }; 444 - 445 - module_platform_driver(rcar_pci_driver); 446 - 447 - MODULE_LICENSE("GPL v2"); 448 - MODULE_DESCRIPTION("Renesas R-Car Gen2 internal PCI"); 449 - MODULE_AUTHOR("Valentine Barshak <valentine.barshak@cogentembedded.com>"); 450 + builtin_platform_driver(rcar_pci_driver);
+33 -66
drivers/pci/host/pci-tegra.c
··· 9 9 * 10 10 * Bits taken from arch/arm/mach-dove/pcie.c 11 11 * 12 + * Author: Thierry Reding <treding@nvidia.com> 13 + * 12 14 * This program is free software; you can redistribute it and/or modify 13 15 * it under the terms of the GNU General Public License as published by 14 16 * the Free Software Foundation; either version 2 of the License, or ··· 34 32 #include <linux/irq.h> 35 33 #include <linux/irqdomain.h> 36 34 #include <linux/kernel.h> 37 - #include <linux/module.h> 35 + #include <linux/init.h> 38 36 #include <linux/msi.h> 39 37 #include <linux/of_address.h> 40 38 #include <linux/of_pci.h> ··· 185 183 186 184 #define AFI_PEXBIAS_CTRL_0 0x168 187 185 188 - #define RP_VEND_XP 0x00000F00 186 + #define RP_VEND_XP 0x00000f00 189 187 #define RP_VEND_XP_DL_UP (1 << 30) 190 188 191 - #define RP_PRIV_MISC 0x00000FE0 192 - #define RP_PRIV_MISC_PRSNT_MAP_EP_PRSNT (0xE << 0) 193 - #define RP_PRIV_MISC_PRSNT_MAP_EP_ABSNT (0xF << 0) 189 + #define RP_PRIV_MISC 0x00000fe0 190 + #define RP_PRIV_MISC_PRSNT_MAP_EP_PRSNT (0xe << 0) 191 + #define RP_PRIV_MISC_PRSNT_MAP_EP_ABSNT (0xf << 0) 194 192 195 193 #define RP_LINK_CONTROL_STATUS 0x00000090 196 194 #define RP_LINK_CONTROL_STATUS_DL_LINK_ACTIVE 0x20000000 197 195 #define RP_LINK_CONTROL_STATUS_LINKSTAT_MASK 0x3fff0000 198 196 199 - #define PADS_CTL_SEL 0x0000009C 197 + #define PADS_CTL_SEL 0x0000009c 200 198 201 - #define PADS_CTL 0x000000A0 199 + #define PADS_CTL 0x000000a0 202 200 #define PADS_CTL_IDDQ_1L (1 << 0) 203 201 #define PADS_CTL_TX_DATA_EN_1L (1 << 6) 204 202 #define PADS_CTL_RX_DATA_EN_1L (1 << 10) 205 203 206 - #define PADS_PLL_CTL_TEGRA20 0x000000B8 207 - #define PADS_PLL_CTL_TEGRA30 0x000000B4 204 + #define PADS_PLL_CTL_TEGRA20 0x000000b8 205 + #define PADS_PLL_CTL_TEGRA30 0x000000b4 208 206 #define PADS_PLL_CTL_RST_B4SM (1 << 1) 209 207 #define PADS_PLL_CTL_LOCKDET (1 << 8) 210 208 #define PADS_PLL_CTL_REFCLK_MASK (0x3 << 16) ··· 216 214 #define PADS_PLL_CTL_TXCLKREF_DIV5 (1 << 20) 217 215 #define 
PADS_PLL_CTL_TXCLKREF_BUF_EN (1 << 22) 218 216 219 - #define PADS_REFCLK_CFG0 0x000000C8 220 - #define PADS_REFCLK_CFG1 0x000000CC 221 - #define PADS_REFCLK_BIAS 0x000000D0 217 + #define PADS_REFCLK_CFG0 0x000000c8 218 + #define PADS_REFCLK_CFG1 0x000000cc 219 + #define PADS_REFCLK_BIAS 0x000000d0 222 220 223 221 /* 224 222 * Fields in PADS_REFCLK_CFG*. Those registers form an array of 16-bit ··· 229 227 #define PADS_REFCLK_CFG_E_TERM_SHIFT 7 230 228 #define PADS_REFCLK_CFG_PREDI_SHIFT 8 /* 11:8 */ 231 229 #define PADS_REFCLK_CFG_DRVI_SHIFT 12 /* 15:12 */ 232 - 233 - /* Default value provided by HW engineering is 0xfa5c */ 234 - #define PADS_REFCLK_CFG_VALUE \ 235 - ( \ 236 - (0x17 << PADS_REFCLK_CFG_TERM_SHIFT) | \ 237 - (0 << PADS_REFCLK_CFG_E_TERM_SHIFT) | \ 238 - (0xa << PADS_REFCLK_CFG_PREDI_SHIFT) | \ 239 - (0xf << PADS_REFCLK_CFG_DRVI_SHIFT) \ 240 - ) 241 230 242 231 struct tegra_msi { 243 232 struct msi_controller chip; ··· 245 252 unsigned int msi_base_shift; 246 253 u32 pads_pll_ctl; 247 254 u32 tx_ref_sel; 255 + u32 pads_refclk_cfg0; 256 + u32 pads_refclk_cfg1; 248 257 bool has_pex_clkreq_en; 249 258 bool has_pex_bias_ctrl; 250 259 bool has_intr_prsnt_sense; ··· 269 274 struct list_head buses; 270 275 struct resource *cs; 271 276 272 - struct resource all; 273 277 struct resource io; 274 278 struct resource pio; 275 279 struct resource mem; ··· 617 623 sys->mem_offset = pcie->offset.mem; 618 624 sys->io_offset = pcie->offset.io; 619 625 620 - err = devm_request_resource(pcie->dev, &pcie->all, &pcie->io); 626 + err = devm_request_resource(pcie->dev, &iomem_resource, &pcie->io); 621 627 if (err < 0) 622 - return err; 623 - 624 - err = devm_request_resource(pcie->dev, &ioport_resource, &pcie->pio); 625 - if (err < 0) 626 - return err; 627 - 628 - err = devm_request_resource(pcie->dev, &pcie->all, &pcie->mem); 629 - if (err < 0) 630 - return err; 631 - 632 - err = devm_request_resource(pcie->dev, &pcie->all, &pcie->prefetch); 633 - if (err) 634 628 return 
err; 635 629 636 630 pci_add_resource_offset(&sys->resources, &pcie->pio, sys->io_offset); ··· 627 645 sys->mem_offset); 628 646 pci_add_resource(&sys->resources, &pcie->busn); 629 647 630 - pci_ioremap_io(pcie->pio.start, pcie->io.start); 648 + err = devm_request_pci_bus_resources(pcie->dev, &sys->resources); 649 + if (err < 0) 650 + return err; 631 651 652 + pci_remap_iospace(&pcie->pio, pcie->io.start); 632 653 return 1; 633 654 } 634 655 ··· 823 838 value |= PADS_PLL_CTL_RST_B4SM; 824 839 pads_writel(pcie, value, soc->pads_pll_ctl); 825 840 826 - /* Configure the reference clock driver */ 827 - value = PADS_REFCLK_CFG_VALUE | (PADS_REFCLK_CFG_VALUE << 16); 828 - pads_writel(pcie, value, PADS_REFCLK_CFG0); 829 - if (soc->num_ports > 2) 830 - pads_writel(pcie, PADS_REFCLK_CFG_VALUE, PADS_REFCLK_CFG1); 831 - 832 841 /* wait for the PLL to lock */ 833 842 err = tegra_pcie_pll_wait(pcie, 500); 834 843 if (err < 0) { ··· 906 927 907 928 static int tegra_pcie_phy_power_on(struct tegra_pcie *pcie) 908 929 { 930 + const struct tegra_pcie_soc_data *soc = pcie->soc_data; 909 931 struct tegra_pcie_port *port; 910 932 int err; 911 933 ··· 931 951 return err; 932 952 } 933 953 } 954 + 955 + /* Configure the reference clock driver */ 956 + pads_writel(pcie, soc->pads_refclk_cfg0, PADS_REFCLK_CFG0); 957 + 958 + if (soc->num_ports > 2) 959 + pads_writel(pcie, soc->pads_refclk_cfg1, PADS_REFCLK_CFG1); 934 960 935 961 return 0; 936 962 } ··· 1808 1822 struct resource res; 1809 1823 int err; 1810 1824 1811 - memset(&pcie->all, 0, sizeof(pcie->all)); 1812 - pcie->all.flags = IORESOURCE_MEM; 1813 - pcie->all.name = np->full_name; 1814 - pcie->all.start = ~0; 1815 - pcie->all.end = 0; 1816 - 1817 1825 if (of_pci_range_parser_init(&parser, np)) { 1818 1826 dev_err(pcie->dev, "missing \"ranges\" property\n"); 1819 1827 return -EINVAL; ··· 1860 1880 } 1861 1881 break; 1862 1882 } 1863 - 1864 - if (res.start <= pcie->all.start) 1865 - pcie->all.start = res.start; 1866 - 1867 - if 
(res.end >= pcie->all.end) 1868 - pcie->all.end = res.end; 1869 1883 } 1870 - 1871 - err = devm_request_resource(pcie->dev, &iomem_resource, &pcie->all); 1872 - if (err < 0) 1873 - return err; 1874 1884 1875 1885 err = of_pci_parse_bus_range(np, &pcie->busn); 1876 1886 if (err < 0) { ··· 2048 2078 .msi_base_shift = 0, 2049 2079 .pads_pll_ctl = PADS_PLL_CTL_TEGRA20, 2050 2080 .tx_ref_sel = PADS_PLL_CTL_TXCLKREF_DIV10, 2081 + .pads_refclk_cfg0 = 0xfa5cfa5c, 2051 2082 .has_pex_clkreq_en = false, 2052 2083 .has_pex_bias_ctrl = false, 2053 2084 .has_intr_prsnt_sense = false, ··· 2061 2090 .msi_base_shift = 8, 2062 2091 .pads_pll_ctl = PADS_PLL_CTL_TEGRA30, 2063 2092 .tx_ref_sel = PADS_PLL_CTL_TXCLKREF_BUF_EN, 2093 + .pads_refclk_cfg0 = 0xfa5cfa5c, 2094 + .pads_refclk_cfg1 = 0xfa5cfa5c, 2064 2095 .has_pex_clkreq_en = true, 2065 2096 .has_pex_bias_ctrl = true, 2066 2097 .has_intr_prsnt_sense = true, ··· 2075 2102 .msi_base_shift = 8, 2076 2103 .pads_pll_ctl = PADS_PLL_CTL_TEGRA30, 2077 2104 .tx_ref_sel = PADS_PLL_CTL_TXCLKREF_BUF_EN, 2105 + .pads_refclk_cfg0 = 0x44ac44ac, 2078 2106 .has_pex_clkreq_en = true, 2079 2107 .has_pex_bias_ctrl = true, 2080 2108 .has_intr_prsnt_sense = true, ··· 2089 2115 { .compatible = "nvidia,tegra20-pcie", .data = &tegra20_pcie_data }, 2090 2116 { }, 2091 2117 }; 2092 - MODULE_DEVICE_TABLE(of, tegra_pcie_of_match); 2093 2118 2094 2119 static void *tegra_pcie_ports_seq_start(struct seq_file *s, loff_t *pos) 2095 2120 { ··· 2222 2249 if (err < 0) 2223 2250 return err; 2224 2251 2225 - pcibios_min_mem = 0; 2226 - 2227 2252 err = tegra_pcie_get_resources(pcie); 2228 2253 if (err < 0) { 2229 2254 dev_err(&pdev->dev, "failed to request resources: %d\n", err); ··· 2277 2306 }, 2278 2307 .probe = tegra_pcie_probe, 2279 2308 }; 2280 - module_platform_driver(tegra_pcie_driver); 2281 - 2282 - MODULE_AUTHOR("Thierry Reding <treding@nvidia.com>"); 2283 - MODULE_DESCRIPTION("NVIDIA Tegra PCIe driver"); 2284 - MODULE_LICENSE("GPL v2"); 2309 + 
builtin_platform_driver(tegra_pcie_driver);
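The pci-tegra change replaces the single hardcoded PADS_REFCLK_CFG_VALUE with per-SoC `pads_refclk_cfg0`/`pads_refclk_cfg1` fields in the soc_data, selected at probe time by compatible string. A sketch of that table-lookup pattern — the cfg0/cfg1 values are the ones in the hunks, but the tegra124 compatible string and the `num_ports` values here are illustrative assumptions, not taken from this diff:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical mirror of the per-SoC data table: the refclk pad
 * configuration now lives here instead of a single #define. */
struct soc_data {
	const char *compatible;
	uint32_t pads_refclk_cfg0;
	uint32_t pads_refclk_cfg1;	/* written only when num_ports > 2 */
	int num_ports;			/* assumed values, for illustration */
};

static const struct soc_data soc_table[] = {
	{ "nvidia,tegra20-pcie",  0xfa5cfa5c, 0,          2 },
	{ "nvidia,tegra30-pcie",  0xfa5cfa5c, 0xfa5cfa5c, 3 },
	{ "nvidia,tegra124-pcie", 0x44ac44ac, 0,          2 },
};

/* Analogue of the of_device_id match picking soc_data at probe. */
static const struct soc_data *soc_lookup(const char *compatible)
{
	for (size_t i = 0; i < sizeof(soc_table) / sizeof(soc_table[0]); i++)
		if (!strcmp(soc_table[i].compatible, compatible))
			return &soc_table[i];
	return NULL;
}
```

This is why the `pads_writel(pcie, soc->pads_refclk_cfg0, PADS_REFCLK_CFG0)` write could move from the PLL power-up path into phy_power_on: the value is just data carried by the matched soc_data.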
+3 -8
drivers/pci/host/pci-thunder-ecam.c
··· 7 7 */ 8 8 9 9 #include <linux/kernel.h> 10 - #include <linux/module.h> 10 + #include <linux/init.h> 11 11 #include <linux/ioport.h> 12 12 #include <linux/of_pci.h> 13 13 #include <linux/of.h> 14 + #include <linux/pci-ecam.h> 14 15 #include <linux/platform_device.h> 15 - 16 - #include "../ecam.h" 17 16 18 17 static void set_val(u32 v, int where, int size, u32 *val) 19 18 { ··· 359 360 { .compatible = "cavium,pci-host-thunder-ecam" }, 360 361 { }, 361 362 }; 362 - MODULE_DEVICE_TABLE(of, thunder_ecam_of_match); 363 363 364 364 static int thunder_ecam_probe(struct platform_device *pdev) 365 365 { ··· 372 374 }, 373 375 .probe = thunder_ecam_probe, 374 376 }; 375 - module_platform_driver(thunder_ecam_driver); 376 - 377 - MODULE_DESCRIPTION("Thunder ECAM PCI host driver"); 378 - MODULE_LICENSE("GPL v2"); 377 + builtin_platform_driver(thunder_ecam_driver);
+5 -9
drivers/pci/host/pci-thunder-pem.c
··· 15 15 */ 16 16 17 17 #include <linux/kernel.h> 18 - #include <linux/module.h> 18 + #include <linux/init.h> 19 19 #include <linux/of_address.h> 20 20 #include <linux/of_pci.h> 21 + #include <linux/pci-ecam.h> 21 22 #include <linux/platform_device.h> 22 - 23 - #include "../ecam.h" 24 23 25 24 #define PEM_CFG_WR 0x28 26 25 #define PEM_CFG_RD 0x30 ··· 284 285 return pci_generic_config_write(bus, devfn, where, size, val); 285 286 } 286 287 287 - static int thunder_pem_init(struct device *dev, struct pci_config_window *cfg) 288 + static int thunder_pem_init(struct pci_config_window *cfg) 288 289 { 290 + struct device *dev = cfg->parent; 289 291 resource_size_t bar4_start; 290 292 struct resource *res_pem; 291 293 struct thunder_pem_pci *pem_pci; ··· 346 346 { .compatible = "cavium,pci-host-thunder-pem" }, 347 347 { }, 348 348 }; 349 - MODULE_DEVICE_TABLE(of, thunder_pem_of_match); 350 349 351 350 static int thunder_pem_probe(struct platform_device *pdev) 352 351 { ··· 359 360 }, 360 361 .probe = thunder_pem_probe, 361 362 }; 362 - module_platform_driver(thunder_pem_driver); 363 - 364 - MODULE_DESCRIPTION("Thunder PEM PCIe host driver"); 365 - MODULE_LICENSE("GPL v2"); 363 + builtin_platform_driver(thunder_pem_driver);
+10 -19
drivers/pci/host/pci-versatile.c
··· 80 80 if (err) 81 81 return err; 82 82 83 + err = devm_request_pci_bus_resources(dev, res); 84 + if (err) 85 + goto out_release_res; 86 + 83 87 resource_list_for_each_entry(win, res) { 84 - struct resource *parent, *res = win->res; 88 + struct resource *res = win->res; 85 89 86 90 switch (resource_type(res)) { 87 91 case IORESOURCE_IO: 88 - parent = &ioport_resource; 89 92 err = pci_remap_iospace(res, iobase); 90 - if (err) { 93 + if (err) 91 94 dev_warn(dev, "error %d: failed to map resource %pR\n", 92 95 err, res); 93 - continue; 94 - } 95 96 break; 96 97 case IORESOURCE_MEM: 97 - parent = &iomem_resource; 98 98 res_valid |= !(res->flags & IORESOURCE_PREFETCH); 99 99 100 100 writel(res->start >> 28, PCI_IMAP(mem)); ··· 102 102 mem++; 103 103 104 104 break; 105 - case IORESOURCE_BUS: 106 - default: 107 - continue; 108 105 } 109 - 110 - err = devm_request_resource(dev, parent, res); 111 - if (err) 112 - goto out_release_res; 113 106 } 114 107 115 - if (!res_valid) { 116 - dev_err(dev, "non-prefetchable memory resource required\n"); 117 - err = -EINVAL; 118 - goto out_release_res; 119 - } 108 + if (res_valid) 109 + return 0; 120 110 121 - return 0; 111 + dev_err(dev, "non-prefetchable memory resource required\n"); 112 + err = -EINVAL; 122 113 123 114 out_release_res: 124 115 pci_free_resource_list(res);
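The pci-versatile hunk keeps the `res_valid |= !(res->flags & IORESOURCE_PREFETCH)` accumulation: the probe must see at least one non-prefetchable memory window or it bails with -EINVAL. A standalone sketch of that check, using the real IORESOURCE_* flag values from include/linux/ioport.h:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define IORESOURCE_IO		0x00000100
#define IORESOURCE_MEM		0x00000200
#define IORESOURCE_PREFETCH	0x00002000

struct window { unsigned long flags; };

/* The bridge needs at least one non-prefetchable memory window,
 * because non-prefetchable BARs of child devices cannot be placed
 * in a prefetchable-only aperture. */
static bool have_nonprefetch_mem(const struct window *w, size_t n)
{
	bool res_valid = false;

	for (size_t i = 0; i < n; i++)
		if (w[i].flags & IORESOURCE_MEM)
			res_valid |= !(w[i].flags & IORESOURCE_PREFETCH);
	return res_valid;
}
```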
+15 -9
drivers/pci/host/pci-xgene.c
··· 21 21 #include <linux/io.h> 22 22 #include <linux/jiffies.h> 23 23 #include <linux/memblock.h> 24 - #include <linux/module.h> 24 + #include <linux/init.h> 25 25 #include <linux/of.h> 26 26 #include <linux/of_address.h> 27 27 #include <linux/of_irq.h> ··· 540 540 if (ret) 541 541 return ret; 542 542 543 + ret = devm_request_pci_bus_resources(&pdev->dev, &res); 544 + if (ret) 545 + goto error; 546 + 543 547 ret = xgene_pcie_setup(port, &res, iobase); 544 548 if (ret) 545 - return ret; 549 + goto error; 546 550 547 551 bus = pci_create_root_bus(&pdev->dev, 0, 548 552 &xgene_pcie_ops, port, &res); 549 - if (!bus) 550 - return -ENOMEM; 553 + if (!bus) { 554 + ret = -ENOMEM; 555 + goto error; 556 + } 551 557 552 558 pci_scan_child_bus(bus); 553 559 pci_assign_unassigned_bus_resources(bus); ··· 561 555 562 556 platform_set_drvdata(pdev, port); 563 557 return 0; 558 + 559 + error: 560 + pci_free_resource_list(&res); 561 + return ret; 564 562 } 565 563 566 564 static const struct of_device_id xgene_pcie_match_table[] = { ··· 579 569 }, 580 570 .probe = xgene_pcie_probe_bridge, 581 571 }; 582 - module_platform_driver(xgene_pcie_driver); 583 - 584 - MODULE_AUTHOR("Tanmay Inamdar <tinamdar@apm.com>"); 585 - MODULE_DESCRIPTION("APM X-Gene PCIe driver"); 586 - MODULE_LICENSE("GPL v2"); 572 + builtin_platform_driver(xgene_pcie_driver);
+45 -44
drivers/pci/host/pcie-altera.c
··· 61 61 #define TLP_LOOP 500 62 62 #define RP_DEVFN 0 63 63 64 + #define LINK_UP_TIMEOUT 5000 65 + 64 66 #define INTX_NUM 4 65 67 66 68 #define DWORD_MASK 3 ··· 83 81 u32 reg1; 84 82 }; 85 83 84 + static inline void cra_writel(struct altera_pcie *pcie, const u32 value, 85 + const u32 reg) 86 + { 87 + writel_relaxed(value, pcie->cra_base + reg); 88 + } 89 + 90 + static inline u32 cra_readl(struct altera_pcie *pcie, const u32 reg) 91 + { 92 + return readl_relaxed(pcie->cra_base + reg); 93 + } 94 + 95 + static bool altera_pcie_link_is_up(struct altera_pcie *pcie) 96 + { 97 + return !!((cra_readl(pcie, RP_LTSSM) & RP_LTSSM_MASK) == LTSSM_L0); 98 + } 99 + 86 100 static void altera_pcie_retrain(struct pci_dev *dev) 87 101 { 88 102 u16 linkcap, linkstat; 103 + struct altera_pcie *pcie = dev->bus->sysdata; 104 + int timeout = 0; 105 + 106 + if (!altera_pcie_link_is_up(pcie)) 107 + return; 89 108 90 109 /* 91 110 * Set the retrain bit if the PCIe rootport support > 2.5GB/s, but ··· 118 95 return; 119 96 120 97 pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &linkstat); 121 - if ((linkstat & PCI_EXP_LNKSTA_CLS) == PCI_EXP_LNKSTA_CLS_2_5GB) 98 + if ((linkstat & PCI_EXP_LNKSTA_CLS) == PCI_EXP_LNKSTA_CLS_2_5GB) { 122 99 pcie_capability_set_word(dev, PCI_EXP_LNKCTL, 123 100 PCI_EXP_LNKCTL_RL); 101 + while (!altera_pcie_link_is_up(pcie)) { 102 + timeout++; 103 + if (timeout > LINK_UP_TIMEOUT) 104 + break; 105 + udelay(5); 106 + } 107 + } 124 108 } 125 109 DECLARE_PCI_FIXUP_EARLY(0x1172, PCI_ANY_ID, altera_pcie_retrain); 126 110 ··· 150 120 return false; 151 121 } 152 122 153 - static inline void cra_writel(struct altera_pcie *pcie, const u32 value, 154 - const u32 reg) 155 - { 156 - writel_relaxed(value, pcie->cra_base + reg); 157 - } 158 - 159 - static inline u32 cra_readl(struct altera_pcie *pcie, const u32 reg) 160 - { 161 - return readl_relaxed(pcie->cra_base + reg); 162 - } 163 - 164 123 static void tlp_write_tx(struct altera_pcie *pcie, 165 124 struct tlp_rp_regpair_t 
*tlp_rp_regdata) 166 125 { 167 126 cra_writel(pcie, tlp_rp_regdata->reg0, RP_TX_REG0); 168 127 cra_writel(pcie, tlp_rp_regdata->reg1, RP_TX_REG1); 169 128 cra_writel(pcie, tlp_rp_regdata->ctrl, RP_TX_CNTRL); 170 - } 171 - 172 - static bool altera_pcie_link_is_up(struct altera_pcie *pcie) 173 - { 174 - return !!((cra_readl(pcie, RP_LTSSM) & RP_LTSSM_MASK) == LTSSM_L0); 175 129 } 176 130 177 131 static bool altera_pcie_valid_config(struct altera_pcie *pcie, ··· 429 415 chained_irq_exit(chip, desc); 430 416 } 431 417 432 - static void altera_pcie_release_of_pci_ranges(struct altera_pcie *pcie) 433 - { 434 - pci_free_resource_list(&pcie->resources); 435 - } 436 - 437 418 static int altera_pcie_parse_request_of_pci_ranges(struct altera_pcie *pcie) 438 419 { 439 420 int err, res_valid = 0; ··· 441 432 if (err) 442 433 return err; 443 434 444 - resource_list_for_each_entry(win, &pcie->resources) { 445 - struct resource *parent, *res = win->res; 446 - 447 - switch (resource_type(res)) { 448 - case IORESOURCE_MEM: 449 - parent = &iomem_resource; 450 - res_valid |= !(res->flags & IORESOURCE_PREFETCH); 451 - break; 452 - default: 453 - continue; 454 - } 455 - 456 - err = devm_request_resource(dev, parent, res); 457 - if (err) 458 - goto out_release_res; 459 - } 460 - 461 - if (!res_valid) { 462 - dev_err(dev, "non-prefetchable memory resource required\n"); 463 - err = -EINVAL; 435 + err = devm_request_pci_bus_resources(dev, &pcie->resources); 436 + if (err) 464 437 goto out_release_res; 438 + 439 + resource_list_for_each_entry(win, &pcie->resources) { 440 + struct resource *res = win->res; 441 + 442 + if (resource_type(res) == IORESOURCE_MEM) 443 + res_valid |= !(res->flags & IORESOURCE_PREFETCH); 465 444 } 466 445 467 - return 0; 446 + if (res_valid) 447 + return 0; 448 + 449 + dev_err(dev, "non-prefetchable memory resource required\n"); 450 + err = -EINVAL; 468 451 469 452 out_release_res: 470 - altera_pcie_release_of_pci_ranges(pcie); 453 + 
pci_free_resource_list(&pcie->resources); 471 454 return err; 472 455 } 473 456
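The pcie-altera retrain fix adds a bounded wait: after setting the retrain-link bit, the fixup polls `altera_pcie_link_is_up()` up to LINK_UP_TIMEOUT times with a 5 µs delay, rather than returning while the link is still retraining. A sketch of that bounded-poll shape with a fake link predicate standing in for the LTSSM register read:

```c
#include <assert.h>
#include <stdbool.h>

#define LINK_UP_TIMEOUT 5000	/* iterations of a 5 us delay (~25 ms) */

static int poll_count;		/* counts samples, standing in for udelay(5) */

static bool fake_link_is_up(void)
{
	return ++poll_count >= 3;	/* link comes back on the 3rd sample */
}

/* Same loop structure as the hunk: give up after the timeout instead
 * of spinning forever; the caller proceeds either way. */
static int wait_for_link(bool (*link_is_up)(void))
{
	int timeout = 0;

	while (!link_is_up()) {
		timeout++;
		if (timeout > LINK_UP_TIMEOUT)
			return -1;
		/* udelay(5) would go here in the driver */
	}
	return 0;
}
```

Note the hunk also guards the whole fixup with an early `if (!altera_pcie_link_is_up(pcie)) return;` so a retrain is never requested on a dead link.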
+5 -9
drivers/pci/host/pcie-armada8k.c
··· 5 5 * 6 6 * Copyright (C) 2016 Marvell Technology Group Ltd. 7 7 * 8 + * Author: Yehuda Yitshak <yehuday@marvell.com> 9 + * Author: Shadi Ammouri <shadi@marvell.com> 10 + * 8 11 * This file is licensed under the terms of the GNU General Public 9 12 * License version 2. This program is licensed "as is" without any 10 13 * warranty of any kind, whether express or implied. ··· 17 14 #include <linux/delay.h> 18 15 #include <linux/interrupt.h> 19 16 #include <linux/kernel.h> 20 - #include <linux/module.h> 17 + #include <linux/init.h> 21 18 #include <linux/of.h> 22 19 #include <linux/pci.h> 23 20 #include <linux/phy/phy.h> ··· 247 244 { .compatible = "marvell,armada8k-pcie", }, 248 245 {}, 249 246 }; 250 - MODULE_DEVICE_TABLE(of, armada8k_pcie_of_match); 251 247 252 248 static struct platform_driver armada8k_pcie_driver = { 253 249 .probe = armada8k_pcie_probe, ··· 255 253 .of_match_table = of_match_ptr(armada8k_pcie_of_match), 256 254 }, 257 255 }; 258 - 259 - module_platform_driver(armada8k_pcie_driver); 260 - 261 - MODULE_DESCRIPTION("Armada 8k PCIe host controller driver"); 262 - MODULE_AUTHOR("Yehuda Yitshak <yehuday@marvell.com>"); 263 - MODULE_AUTHOR("Shadi Ammouri <shadi@marvell.com>"); 264 - MODULE_LICENSE("GPL v2"); 256 + builtin_platform_driver(armada8k_pcie_driver);
+280
drivers/pci/host/pcie-artpec6.c
··· 1 + /* 2 + * PCIe host controller driver for Axis ARTPEC-6 SoC 3 + * 4 + * Author: Niklas Cassel <niklas.cassel@axis.com> 5 + * 6 + * Based on work done by Phil Edworthy <phil@edworthys.org> 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License version 2 as 10 + * published by the Free Software Foundation. 11 + */ 12 + 13 + #include <linux/delay.h> 14 + #include <linux/kernel.h> 15 + #include <linux/init.h> 16 + #include <linux/pci.h> 17 + #include <linux/platform_device.h> 18 + #include <linux/resource.h> 19 + #include <linux/signal.h> 20 + #include <linux/types.h> 21 + #include <linux/interrupt.h> 22 + #include <linux/mfd/syscon.h> 23 + #include <linux/regmap.h> 24 + 25 + #include "pcie-designware.h" 26 + 27 + #define to_artpec6_pcie(x) container_of(x, struct artpec6_pcie, pp) 28 + 29 + struct artpec6_pcie { 30 + struct pcie_port pp; 31 + struct regmap *regmap; 32 + void __iomem *phy_base; 33 + }; 34 + 35 + /* PCIe Port Logic registers (memory-mapped) */ 36 + #define PL_OFFSET 0x700 37 + #define PCIE_PHY_DEBUG_R0 (PL_OFFSET + 0x28) 38 + #define PCIE_PHY_DEBUG_R1 (PL_OFFSET + 0x2c) 39 + 40 + #define MISC_CONTROL_1_OFF (PL_OFFSET + 0x1bc) 41 + #define DBI_RO_WR_EN 1 42 + 43 + /* ARTPEC-6 specific registers */ 44 + #define PCIECFG 0x18 45 + #define PCIECFG_DBG_OEN (1 << 24) 46 + #define PCIECFG_CORE_RESET_REQ (1 << 21) 47 + #define PCIECFG_LTSSM_ENABLE (1 << 20) 48 + #define PCIECFG_CLKREQ_B (1 << 11) 49 + #define PCIECFG_REFCLK_ENABLE (1 << 10) 50 + #define PCIECFG_PLL_ENABLE (1 << 9) 51 + #define PCIECFG_PCLK_ENABLE (1 << 8) 52 + #define PCIECFG_RISRCREN (1 << 4) 53 + #define PCIECFG_MODE_TX_DRV_EN (1 << 3) 54 + #define PCIECFG_CISRREN (1 << 2) 55 + #define PCIECFG_MACRO_ENABLE (1 << 0) 56 + 57 + #define NOCCFG 0x40 58 + #define NOCCFG_ENABLE_CLK_PCIE (1 << 4) 59 + #define NOCCFG_POWER_PCIE_IDLEACK (1 << 3) 60 + #define NOCCFG_POWER_PCIE_IDLE (1 << 2) 61 + #define 
NOCCFG_POWER_PCIE_IDLEREQ (1 << 1) 62 + 63 + #define PHY_STATUS 0x118 64 + #define PHY_COSPLLLOCK (1 << 0) 65 + 66 + #define ARTPEC6_CPU_TO_BUS_ADDR 0x0fffffff 67 + 68 + static int artpec6_pcie_establish_link(struct pcie_port *pp) 69 + { 70 + struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pp); 71 + u32 val; 72 + unsigned int retries; 73 + 74 + /* Hold DW core in reset */ 75 + regmap_read(artpec6_pcie->regmap, PCIECFG, &val); 76 + val |= PCIECFG_CORE_RESET_REQ; 77 + regmap_write(artpec6_pcie->regmap, PCIECFG, val); 78 + 79 + regmap_read(artpec6_pcie->regmap, PCIECFG, &val); 80 + val |= PCIECFG_RISRCREN | /* Receiver term. 50 Ohm */ 81 + PCIECFG_MODE_TX_DRV_EN | 82 + PCIECFG_CISRREN | /* Reference clock term. 100 Ohm */ 83 + PCIECFG_MACRO_ENABLE; 84 + val |= PCIECFG_REFCLK_ENABLE; 85 + val &= ~PCIECFG_DBG_OEN; 86 + val &= ~PCIECFG_CLKREQ_B; 87 + regmap_write(artpec6_pcie->regmap, PCIECFG, val); 88 + usleep_range(5000, 6000); 89 + 90 + regmap_read(artpec6_pcie->regmap, NOCCFG, &val); 91 + val |= NOCCFG_ENABLE_CLK_PCIE; 92 + regmap_write(artpec6_pcie->regmap, NOCCFG, val); 93 + usleep_range(20, 30); 94 + 95 + regmap_read(artpec6_pcie->regmap, PCIECFG, &val); 96 + val |= PCIECFG_PCLK_ENABLE | PCIECFG_PLL_ENABLE; 97 + regmap_write(artpec6_pcie->regmap, PCIECFG, val); 98 + usleep_range(6000, 7000); 99 + 100 + regmap_read(artpec6_pcie->regmap, NOCCFG, &val); 101 + val &= ~NOCCFG_POWER_PCIE_IDLEREQ; 102 + regmap_write(artpec6_pcie->regmap, NOCCFG, val); 103 + 104 + retries = 50; 105 + do { 106 + usleep_range(1000, 2000); 107 + regmap_read(artpec6_pcie->regmap, NOCCFG, &val); 108 + retries--; 109 + } while (retries && 110 + (val & (NOCCFG_POWER_PCIE_IDLEACK | NOCCFG_POWER_PCIE_IDLE))); 111 + 112 + retries = 50; 113 + do { 114 + usleep_range(1000, 2000); 115 + val = readl(artpec6_pcie->phy_base + PHY_STATUS); 116 + retries--; 117 + } while (retries && !(val & PHY_COSPLLLOCK)); 118 + 119 + /* Take DW core out of reset */ 120 + regmap_read(artpec6_pcie->regmap, PCIECFG, 
&val); 121 + val &= ~PCIECFG_CORE_RESET_REQ; 122 + regmap_write(artpec6_pcie->regmap, PCIECFG, val); 123 + usleep_range(100, 200); 124 + 125 + /* 126 + * Enable writing to config regs. This is required as the Synopsys 127 + * driver changes the class code. That register needs DBI write enable. 128 + */ 129 + writel(DBI_RO_WR_EN, pp->dbi_base + MISC_CONTROL_1_OFF); 130 + 131 + pp->io_base &= ARTPEC6_CPU_TO_BUS_ADDR; 132 + pp->mem_base &= ARTPEC6_CPU_TO_BUS_ADDR; 133 + pp->cfg0_base &= ARTPEC6_CPU_TO_BUS_ADDR; 134 + pp->cfg1_base &= ARTPEC6_CPU_TO_BUS_ADDR; 135 + 136 + /* setup root complex */ 137 + dw_pcie_setup_rc(pp); 138 + 139 + /* assert LTSSM enable */ 140 + regmap_read(artpec6_pcie->regmap, PCIECFG, &val); 141 + val |= PCIECFG_LTSSM_ENABLE; 142 + regmap_write(artpec6_pcie->regmap, PCIECFG, val); 143 + 144 + /* check if the link is up or not */ 145 + if (!dw_pcie_wait_for_link(pp)) 146 + return 0; 147 + 148 + dev_dbg(pp->dev, "DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n", 149 + readl(pp->dbi_base + PCIE_PHY_DEBUG_R0), 150 + readl(pp->dbi_base + PCIE_PHY_DEBUG_R1)); 151 + 152 + return -ETIMEDOUT; 153 + } 154 + 155 + static void artpec6_pcie_enable_interrupts(struct pcie_port *pp) 156 + { 157 + if (IS_ENABLED(CONFIG_PCI_MSI)) 158 + dw_pcie_msi_init(pp); 159 + } 160 + 161 + static void artpec6_pcie_host_init(struct pcie_port *pp) 162 + { 163 + artpec6_pcie_establish_link(pp); 164 + artpec6_pcie_enable_interrupts(pp); 165 + } 166 + 167 + static int artpec6_pcie_link_up(struct pcie_port *pp) 168 + { 169 + u32 rc; 170 + 171 + /* 172 + * Get status from Synopsys IP 173 + * link is debug bit 36, debug register 1 starts at bit 32 174 + */ 175 + rc = readl(pp->dbi_base + PCIE_PHY_DEBUG_R1) & (0x1 << (36 - 32)); 176 + if (rc) 177 + return 1; 178 + 179 + return 0; 180 + } 181 + 182 + static struct pcie_host_ops artpec6_pcie_host_ops = { 183 + .link_up = artpec6_pcie_link_up, 184 + .host_init = artpec6_pcie_host_init, 185 + }; 186 + 187 + static irqreturn_t 
artpec6_pcie_msi_handler(int irq, void *arg) 188 + { 189 + struct pcie_port *pp = arg; 190 + 191 + return dw_handle_msi_irq(pp); 192 + } 193 + 194 + static int __init artpec6_add_pcie_port(struct pcie_port *pp, 195 + struct platform_device *pdev) 196 + { 197 + int ret; 198 + 199 + if (IS_ENABLED(CONFIG_PCI_MSI)) { 200 + pp->msi_irq = platform_get_irq_byname(pdev, "msi"); 201 + if (pp->msi_irq <= 0) { 202 + dev_err(&pdev->dev, "failed to get MSI irq\n"); 203 + return -ENODEV; 204 + } 205 + 206 + ret = devm_request_irq(&pdev->dev, pp->msi_irq, 207 + artpec6_pcie_msi_handler, 208 + IRQF_SHARED | IRQF_NO_THREAD, 209 + "artpec6-pcie-msi", pp); 210 + if (ret) { 211 + dev_err(&pdev->dev, "failed to request MSI irq\n"); 212 + return ret; 213 + } 214 + } 215 + 216 + pp->root_bus_nr = -1; 217 + pp->ops = &artpec6_pcie_host_ops; 218 + 219 + ret = dw_pcie_host_init(pp); 220 + if (ret) { 221 + dev_err(&pdev->dev, "failed to initialize host\n"); 222 + return ret; 223 + } 224 + 225 + return 0; 226 + } 227 + 228 + static int artpec6_pcie_probe(struct platform_device *pdev) 229 + { 230 + struct artpec6_pcie *artpec6_pcie; 231 + struct pcie_port *pp; 232 + struct resource *dbi_base; 233 + struct resource *phy_base; 234 + int ret; 235 + 236 + artpec6_pcie = devm_kzalloc(&pdev->dev, sizeof(*artpec6_pcie), 237 + GFP_KERNEL); 238 + if (!artpec6_pcie) 239 + return -ENOMEM; 240 + 241 + pp = &artpec6_pcie->pp; 242 + pp->dev = &pdev->dev; 243 + 244 + dbi_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi"); 245 + pp->dbi_base = devm_ioremap_resource(&pdev->dev, dbi_base); 246 + if (IS_ERR(pp->dbi_base)) 247 + return PTR_ERR(pp->dbi_base); 248 + 249 + phy_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "phy"); 250 + artpec6_pcie->phy_base = devm_ioremap_resource(&pdev->dev, phy_base); 251 + if (IS_ERR(artpec6_pcie->phy_base)) 252 + return PTR_ERR(artpec6_pcie->phy_base); 253 + 254 + artpec6_pcie->regmap = 255 + syscon_regmap_lookup_by_phandle(pdev->dev.of_node, 256 + 
"axis,syscon-pcie"); 257 + if (IS_ERR(artpec6_pcie->regmap)) 258 + return PTR_ERR(artpec6_pcie->regmap); 259 + 260 + ret = artpec6_add_pcie_port(pp, pdev); 261 + if (ret < 0) 262 + return ret; 263 + 264 + platform_set_drvdata(pdev, artpec6_pcie); 265 + return 0; 266 + } 267 + 268 + static const struct of_device_id artpec6_pcie_of_match[] = { 269 + { .compatible = "axis,artpec6-pcie", }, 270 + {}, 271 + }; 272 + 273 + static struct platform_driver artpec6_pcie_driver = { 274 + .probe = artpec6_pcie_probe, 275 + .driver = { 276 + .name = "artpec6-pcie", 277 + .of_match_table = artpec6_pcie_of_match, 278 + }, 279 + }; 280 + builtin_platform_driver(artpec6_pcie_driver);
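The new pcie-artpec6 driver drives all SoC glue registers through a syscon regmap, so every bit flip is a regmap_read()/modify/regmap_write() round trip rather than a direct MMIO dereference. A sketch of the bring-up tail (deassert core reset, then set LTSSM enable) against a one-register fake regmap — the simplified accessors here are mocks, not the kernel regmap API:

```c
#include <assert.h>
#include <stdint.h>

#define PCIECFG_CORE_RESET_REQ	(1u << 21)
#define PCIECFG_LTSSM_ENABLE	(1u << 20)

/* One fake PCIECFG register standing in for the syscon block. */
static uint32_t fake_pciecfg;

static void fake_regmap_read(uint32_t *val)	{ *val = fake_pciecfg; }
static void fake_regmap_write(uint32_t val)	{ fake_pciecfg = val; }

/* Mirrors the driver's sequence: each step is its own
 * read-modify-write, preserving all other bits in the register. */
static void release_reset_and_start_ltssm(void)
{
	uint32_t val;

	fake_regmap_read(&val);
	val &= ~PCIECFG_CORE_RESET_REQ;		/* take DW core out of reset */
	fake_regmap_write(val);

	fake_regmap_read(&val);
	val |= PCIECFG_LTSSM_ENABLE;		/* assert LTSSM enable */
	fake_regmap_write(val);
}
```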
+2 -8
drivers/pci/host/pcie-designware-plat.c
··· 14 14 #include <linux/gpio.h> 15 15 #include <linux/interrupt.h> 16 16 #include <linux/kernel.h> 17 - #include <linux/module.h> 17 + #include <linux/init.h> 18 18 #include <linux/of_gpio.h> 19 19 #include <linux/pci.h> 20 20 #include <linux/platform_device.h> ··· 121 121 { .compatible = "snps,dw-pcie", }, 122 122 {}, 123 123 }; 124 - MODULE_DEVICE_TABLE(of, dw_plat_pcie_of_match); 125 124 126 125 static struct platform_driver dw_plat_pcie_driver = { 127 126 .driver = { ··· 129 130 }, 130 131 .probe = dw_plat_pcie_probe, 131 132 }; 132 - 133 - module_platform_driver(dw_plat_pcie_driver); 134 - 135 - MODULE_AUTHOR("Joao Pinto <Joao.Pinto@synopsys.com>"); 136 - MODULE_DESCRIPTION("Synopsys PCIe host controller glue platform driver"); 137 - MODULE_LICENSE("GPL v2"); 133 + builtin_platform_driver(dw_plat_pcie_driver);
+22 -12
drivers/pci/host/pcie-designware.c
··· 452 452 if (ret) 453 453 return ret; 454 454 455 + ret = devm_request_pci_bus_resources(&pdev->dev, &res); 456 + if (ret) 457 + goto error; 458 + 455 459 /* Get the I/O and memory ranges from DT */ 456 460 resource_list_for_each_entry(win, &res) { 457 461 switch (resource_type(win->res)) { ··· 465 461 pp->io_size = resource_size(pp->io); 466 462 pp->io_bus_addr = pp->io->start - win->offset; 467 463 ret = pci_remap_iospace(pp->io, pp->io_base); 468 - if (ret) { 464 + if (ret) 469 465 dev_warn(pp->dev, "error %d: failed to map resource %pR\n", 470 466 ret, pp->io); 471 - continue; 472 - } 473 467 break; 474 468 case IORESOURCE_MEM: 475 469 pp->mem = win->res; ··· 485 483 case IORESOURCE_BUS: 486 484 pp->busn = win->res; 487 485 break; 488 - default: 489 - continue; 490 486 } 491 487 } 492 488 ··· 493 493 resource_size(pp->cfg)); 494 494 if (!pp->dbi_base) { 495 495 dev_err(pp->dev, "error with ioremap\n"); 496 - return -ENOMEM; 496 + ret = -ENOMEM; 497 + goto error; 497 498 } 498 499 } 499 500 ··· 505 504 pp->cfg0_size); 506 505 if (!pp->va_cfg0_base) { 507 506 dev_err(pp->dev, "error with ioremap in function\n"); 508 - return -ENOMEM; 507 + ret = -ENOMEM; 508 + goto error; 509 509 } 510 510 } 511 511 ··· 515 513 pp->cfg1_size); 516 514 if (!pp->va_cfg1_base) { 517 515 dev_err(pp->dev, "error with ioremap\n"); 518 - return -ENOMEM; 516 + ret = -ENOMEM; 517 + goto error; 519 518 } 520 519 } 521 520 ··· 531 528 &dw_pcie_msi_chip); 532 529 if (!pp->irq_domain) { 533 530 dev_err(pp->dev, "irq domain init failed\n"); 534 - return -ENXIO; 531 + ret = -ENXIO; 532 + goto error; 535 533 } 536 534 537 535 for (i = 0; i < MAX_MSI_IRQS; i++) ··· 540 536 } else { 541 537 ret = pp->ops->msi_host_init(pp, &dw_pcie_msi_chip); 542 538 if (ret < 0) 543 - return ret; 539 + goto error; 544 540 } 545 541 } 546 542 ··· 556 552 } else 557 553 bus = pci_scan_root_bus(pp->dev, pp->root_bus_nr, &dw_pcie_ops, 558 554 pp, &res); 559 - if (!bus) 560 - return -ENOMEM; 555 + if (!bus) { 556 + 
ret = -ENOMEM; 557 + goto error; 558 + } 561 559 562 560 if (pp->ops->scan_bus) 563 561 pp->ops->scan_bus(pp); ··· 577 571 578 572 pci_bus_add_devices(bus); 579 573 return 0; 574 + 575 + error: 576 + pci_free_resource_list(&res); 577 + return ret; 580 578 } 581 579 582 580 static int dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
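The pcie-designware host-init hunks convert every early `return` after the resource list is populated into `goto error`, with a single label calling pci_free_resource_list(). A userspace sketch of that centralized-unwind shape, using malloc'd list entries and a leak counter in place of kernel resource entries (in the real driver the list is kept on success because the bus owns it; this sketch frees it on both paths just to keep the counter honest):

```c
#include <assert.h>
#include <stdlib.h>

struct entry { struct entry *next; };

static int live_entries;	/* counts allocations, to detect leaks */

static struct entry *list_add(struct entry *head)
{
	struct entry *e = malloc(sizeof(*e));

	e->next = head;
	live_entries++;
	return e;
}

static void free_resource_list(struct entry *head)
{
	while (head) {
		struct entry *next = head->next;

		free(head);
		live_entries--;
		head = next;
	}
}

/* fail_at selects which setup step fails (0 = none), to exercise
 * the error path the same way a failed request or bus scan would. */
static int host_init(int fail_at)
{
	struct entry *res = NULL;
	int ret = 0;

	res = list_add(res);		/* e.g. the I/O window  */
	res = list_add(res);		/* e.g. the MEM window  */

	if (fail_at == 1) { ret = -1; goto error; }	/* request failed */
	if (fail_at == 2) { ret = -1; goto error; }	/* scan failed    */

	free_resource_list(res);	/* success path, for this sketch */
	return 0;

error:
	free_resource_list(res);
	return ret;
}
```

The same goto-error conversion appears in the pci-xgene, pcie-xilinx and pcie-xilinx-nwl hunks in this merge.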
+2 -11
drivers/pci/host/pcie-hisi.c
··· 12 12 * published by the Free Software Foundation. 13 13 */ 14 14 #include <linux/interrupt.h> 15 - #include <linux/module.h> 15 + #include <linux/init.h> 16 16 #include <linux/mfd/syscon.h> 17 17 #include <linux/of_address.h> 18 18 #include <linux/of_pci.h> ··· 235 235 {}, 236 236 }; 237 237 238 - 239 - MODULE_DEVICE_TABLE(of, hisi_pcie_of_match); 240 - 241 238 static struct platform_driver hisi_pcie_driver = { 242 239 .probe = hisi_pcie_probe, 243 240 .driver = { ··· 242 245 .of_match_table = hisi_pcie_of_match, 243 246 }, 244 247 }; 245 - 246 - module_platform_driver(hisi_pcie_driver); 247 - 248 - MODULE_AUTHOR("Zhou Wang <wangzhou1@hisilicon.com>"); 249 - MODULE_AUTHOR("Dacai Zhu <zhudacai@hisilicon.com>"); 250 - MODULE_AUTHOR("Gabriele Paoloni <gabriele.paoloni@huawei.com>"); 251 - MODULE_LICENSE("GPL v2"); 248 + builtin_platform_driver(hisi_pcie_driver);
+4
drivers/pci/host/pcie-iproc.c
··· 462 462 if (!pcie || !pcie->dev || !pcie->base) 463 463 return -EINVAL; 464 464 465 + ret = devm_request_pci_bus_resources(pcie->dev, res); 466 + if (ret) 467 + return ret; 468 + 465 469 ret = phy_init(pcie->phy); 466 470 if (ret) { 467 471 dev_err(pcie->dev, "unable to initialize PCIe PHY\n");
+13 -33
drivers/pci/host/pcie-rcar.c
···
  * arch/sh/drivers/pci/ops-sh7786.c
  * Copyright (C) 2009 - 2011  Paul Mundt
  *
+ * Author: Phil Edworthy <phil.edworthy@renesas.com>
+ *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
  * warranty of any kind, whether express or implied.
···
 #include <linux/irq.h>
 #include <linux/irqdomain.h>
 #include <linux/kernel.h>
-#include <linux/module.h>
+#include <linux/init.h>
 #include <linux/msi.h>
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
···
 	{ .compatible = "renesas,pcie-r8a7795", .data = rcar_pcie_hw_init },
 	{},
 };
-MODULE_DEVICE_TABLE(of, rcar_pcie_of_match);
-
-static void rcar_pcie_release_of_pci_ranges(struct rcar_pcie *pci)
-{
-	pci_free_resource_list(&pci->resources);
-}
 
 static int rcar_pcie_parse_request_of_pci_ranges(struct rcar_pcie *pci)
 {
···
 	if (err)
 		return err;
 
-	resource_list_for_each_entry(win, &pci->resources) {
-		struct resource *parent, *res = win->res;
+	err = devm_request_pci_bus_resources(dev, &pci->resources);
+	if (err)
+		goto out_release_res;
 
-		switch (resource_type(res)) {
-		case IORESOURCE_IO:
-			parent = &ioport_resource;
+	resource_list_for_each_entry(win, &pci->resources) {
+		struct resource *res = win->res;
+
+		if (resource_type(res) == IORESOURCE_IO) {
 			err = pci_remap_iospace(res, iobase);
-			if (err) {
+			if (err)
 				dev_warn(dev, "error %d: failed to map resource %pR\n",
 					 err, res);
-				continue;
-			}
-			break;
-		case IORESOURCE_MEM:
-			parent = &iomem_resource;
-			break;
-
-		case IORESOURCE_BUS:
-		default:
-			continue;
 		}
-
-		err = devm_request_resource(dev, parent, res);
-		if (err)
-			goto out_release_res;
 	}
 
 	return 0;
 
 out_release_res:
-	rcar_pcie_release_of_pci_ranges(pci);
+	pci_free_resource_list(&pci->resources);
 	return err;
 }
···
 	},
 	.probe = rcar_pcie_probe,
 };
-module_platform_driver(rcar_pcie_driver);
-
-MODULE_AUTHOR("Phil Edworthy <phil.edworthy@renesas.com>");
-MODULE_DESCRIPTION("Renesas R-Car PCIe driver");
-MODULE_LICENSE("GPL v2");
+builtin_platform_driver(rcar_pcie_driver);
+15 -5
drivers/pci/host/pcie-xilinx-nwl.c
···
 	err = of_pci_get_host_bridge_resources(node, 0, 0xff, &res, &iobase);
 	if (err) {
-		pr_err("Getting bridge resources failed\n");
+		dev_err(pcie->dev, "Getting bridge resources failed\n");
 		return err;
 	}
+
+	err = devm_request_pci_bus_resources(pcie->dev, &res);
+	if (err)
+		goto error;
 
 	err = nwl_pcie_init_irq_domain(pcie);
 	if (err) {
 		dev_err(pcie->dev, "Failed creating IRQ Domain\n");
-		return err;
+		goto error;
 	}
 
 	bus = pci_create_root_bus(&pdev->dev, pcie->root_busno,
 				  &nwl_pcie_ops, pcie, &res);
-	if (!bus)
-		return -ENOMEM;
+	if (!bus) {
+		err = -ENOMEM;
+		goto error;
+	}
 
 	if (IS_ENABLED(CONFIG_PCI_MSI)) {
 		err = nwl_pcie_enable_msi(pcie, bus);
 		if (err < 0) {
 			dev_err(&pdev->dev,
 				"failed to enable MSI support: %d\n", err);
-			return err;
+			goto error;
 		}
 	}
 	pci_scan_child_bus(bus);
···
 	pci_bus_add_devices(bus);
 	platform_set_drvdata(pdev, pcie);
 	return 0;
+
+error:
+	pci_free_resource_list(&res);
+	return err;
 }
 
 static int nwl_pcie_remove(struct platform_device *pdev)
+16 -6
drivers/pci/host/pcie-xilinx.c
···
 	pcie_intc_node = of_get_next_child(node, NULL);
 	if (!pcie_intc_node) {
 		dev_err(dev, "No PCIe Intc node found\n");
-		return PTR_ERR(pcie_intc_node);
+		return -ENODEV;
 	}
 
 	port->irq_domain = irq_domain_add_linear(pcie_intc_node, 4,
···
 						 port);
 	if (!port->irq_domain) {
 		dev_err(dev, "Failed to get a INTx IRQ domain\n");
-		return PTR_ERR(port->irq_domain);
+		return -ENODEV;
 	}
 
 	/* Setup MSI */
···
 						    &xilinx_pcie_msi_chip);
 		if (!port->irq_domain) {
 			dev_err(dev, "Failed to get a MSI IRQ domain\n");
-			return PTR_ERR(port->irq_domain);
+			return -ENODEV;
 		}
 
 		xilinx_pcie_enable_msi(port);
···
 	struct xilinx_pcie_port *port;
 	struct device *dev = &pdev->dev;
 	struct pci_bus *bus;
-
 	int err;
 	resource_size_t iobase = 0;
 	LIST_HEAD(res);
···
 		dev_err(dev, "Getting bridge resources failed\n");
 		return err;
 	}
+
+	err = devm_request_pci_bus_resources(dev, &res);
+	if (err)
+		goto error;
+
 	bus = pci_create_root_bus(&pdev->dev, 0,
 				  &xilinx_pcie_ops, port, &res);
-	if (!bus)
-		return -ENOMEM;
+	if (!bus) {
+		err = -ENOMEM;
+		goto error;
+	}
 
 #ifdef CONFIG_PCI_MSI
 	xilinx_pcie_msi_chip.dev = port->dev;
···
 	platform_set_drvdata(pdev, port);
 
 	return 0;
+
+error:
+	pci_free_resource_list(&res);
+	return err;
 }
 
 /**
+4
drivers/pci/hotplug/acpiphp_glue.c
···
 	if (bridge->is_going_away)
 		return;
 
+	pm_runtime_get_sync(&bridge->pci_dev->dev);
+
 	list_for_each_entry(slot, &bridge->slots, node) {
 		struct pci_bus *bus = slot->bus;
 		struct pci_dev *dev, *tmp;
···
 			disable_slot(slot);
 		}
 	}
+
+	pm_runtime_put(&bridge->pci_dev->dev);
 }
 
 /*
+4
drivers/pci/hotplug/pciehp_hpc.c
···
 	u8 present;
 	bool link;
 
+	/* Interrupts cannot originate from a controller that's asleep */
+	if (pdev->current_state == PCI_D3cold)
+		return IRQ_NONE;
+
 	/*
 	 * In order to guarantee that all interrupt events are
 	 * serviced, we need to re-inspect Slot Status register after
+204 -70
drivers/pci/msi.c
···
  *
  * Copyright (C) 2003-2004 Intel
  * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
+ * Copyright (C) 2016 Christoph Hellwig.
  */
 
 #include <linux/err.h>
···
 	desc->masked = __pci_msi_desc_mask_irq(desc, mask, flag);
 }
 
+static void __iomem *pci_msix_desc_addr(struct msi_desc *desc)
+{
+	return desc->mask_base +
+		desc->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE;
+}
+
 /*
  * This internal function does not flush PCI writes to the device.
  * All users must ensure that they read from the device before either
···
 u32 __pci_msix_desc_mask_irq(struct msi_desc *desc, u32 flag)
 {
 	u32 mask_bits = desc->masked;
-	unsigned offset = desc->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE +
-						PCI_MSIX_ENTRY_VECTOR_CTRL;
 
 	if (pci_msi_ignore_mask)
 		return 0;
···
 	mask_bits &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT;
 	if (flag)
 		mask_bits |= PCI_MSIX_ENTRY_CTRL_MASKBIT;
-	writel(mask_bits, desc->mask_base + offset);
+	writel(mask_bits, pci_msix_desc_addr(desc) + PCI_MSIX_ENTRY_VECTOR_CTRL);
 
 	return mask_bits;
 }
···
 	BUG_ON(dev->current_state != PCI_D0);
 
 	if (entry->msi_attrib.is_msix) {
-		void __iomem *base = entry->mask_base +
-			entry->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE;
+		void __iomem *base = pci_msix_desc_addr(entry);
 
 		msg->address_lo = readl(base + PCI_MSIX_ENTRY_LOWER_ADDR);
 		msg->address_hi = readl(base + PCI_MSIX_ENTRY_UPPER_ADDR);
···
 	if (dev->current_state != PCI_D0) {
 		/* Don't touch the hardware now */
 	} else if (entry->msi_attrib.is_msix) {
-		void __iomem *base;
-		base = entry->mask_base +
-			entry->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE;
+		void __iomem *base = pci_msix_desc_addr(entry);
 
 		writel(msg->address_lo, base + PCI_MSIX_ENTRY_LOWER_ADDR);
 		writel(msg->address_hi, base + PCI_MSIX_ENTRY_UPPER_ADDR);
···
 	entry->msi_attrib.multi_cap	= (control & PCI_MSI_FLAGS_QMASK) >> 1;
 	entry->msi_attrib.multiple	= ilog2(__roundup_pow_of_two(nvec));
 	entry->nvec_used		= nvec;
+	entry->affinity			= dev->irq_affinity;
 
 	if (control & PCI_MSI_FLAGS_64BIT)
 		entry->mask_pos = dev->msi_cap + PCI_MSI_MASK_64;
···
 static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
 			      struct msix_entry *entries, int nvec)
 {
+	const struct cpumask *mask = NULL;
 	struct msi_desc *entry;
-	int i;
+	int cpu = -1, i;
 
 	for (i = 0; i < nvec; i++) {
+		if (dev->irq_affinity) {
+			cpu = cpumask_next(cpu, dev->irq_affinity);
+			if (cpu >= nr_cpu_ids)
+				cpu = cpumask_first(dev->irq_affinity);
+			mask = cpumask_of(cpu);
+		}
+
 		entry = alloc_msi_entry(&dev->dev);
 		if (!entry) {
 			if (!i)
···
 
 		entry->msi_attrib.is_msix	= 1;
 		entry->msi_attrib.is_64		= 1;
-		entry->msi_attrib.entry_nr	= entries[i].entry;
+		if (entries)
+			entry->msi_attrib.entry_nr = entries[i].entry;
+		else
+			entry->msi_attrib.entry_nr = i;
 		entry->msi_attrib.default_irq	= dev->irq;
 		entry->mask_base		= base;
 		entry->nvec_used		= 1;
+		entry->affinity			= mask;
 
 		list_add_tail(&entry->list, dev_to_msi_list(&dev->dev));
 	}
···
 	int i = 0;
 
 	for_each_pci_msi_entry(entry, dev) {
-		int offset = entries[i].entry * PCI_MSIX_ENTRY_SIZE +
-						PCI_MSIX_ENTRY_VECTOR_CTRL;
-
-		entries[i].vector = entry->irq;
-		entry->masked = readl(entry->mask_base + offset);
+		if (entries)
+			entries[i++].vector = entry->irq;
+		entry->masked = readl(pci_msix_desc_addr(entry) +
+				PCI_MSIX_ENTRY_VECTOR_CTRL);
 		msix_mask_irq(entry, 1);
-		i++;
 	}
 }
···
 /**
  * pci_enable_msix - configure device's MSI-X capability structure
  * @dev: pointer to the pci_dev data structure of MSI-X device function
- * @entries: pointer to an array of MSI-X entries
+ * @entries: pointer to an array of MSI-X entries (optional)
  * @nvec: number of MSI-X irqs requested for allocation by device driver
  *
  * Setup the MSI-X capability structure of device function with the number
···
 	if (!pci_msi_supported(dev, nvec))
 		return -EINVAL;
 
-	if (!entries)
-		return -EINVAL;
-
 	nr_entries = pci_msix_vec_count(dev);
 	if (nr_entries < 0)
 		return nr_entries;
 	if (nvec > nr_entries)
 		return nr_entries;
 
-	/* Check for any invalid entries */
-	for (i = 0; i < nvec; i++) {
-		if (entries[i].entry >= nr_entries)
-			return -EINVAL;	/* invalid entry */
-		for (j = i + 1; j < nvec; j++) {
-			if (entries[i].entry == entries[j].entry)
-				return -EINVAL;	/* duplicate entry */
+	if (entries) {
+		/* Check for any invalid entries */
+		for (i = 0; i < nvec; i++) {
+			if (entries[i].entry >= nr_entries)
+				return -EINVAL;	/* invalid entry */
+			for (j = i + 1; j < nvec; j++) {
+				if (entries[i].entry == entries[j].entry)
+					return -EINVAL;	/* duplicate entry */
+			}
 		}
 	}
 	WARN_ON(!!dev->msix_enabled);
···
 }
 EXPORT_SYMBOL(pci_msi_enabled);
 
-/**
- * pci_enable_msi_range - configure device's MSI capability structure
- * @dev: device to configure
- * @minvec: minimal number of interrupts to configure
- * @maxvec: maximum number of interrupts to configure
- *
- * This function tries to allocate a maximum possible number of interrupts in a
- * range between @minvec and @maxvec. It returns a negative errno if an error
- * occurs. If it succeeds, it returns the actual number of interrupts allocated
- * and updates the @dev's irq member to the lowest new interrupt number;
- * the other interrupt numbers allocated to this device are consecutive.
- **/
-int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
+static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
+		unsigned int flags)
 {
 	int nvec;
 	int rc;
···
 	nvec = pci_msi_vec_count(dev);
 	if (nvec < 0)
 		return nvec;
-	else if (nvec < minvec)
+	if (nvec < minvec)
 		return -EINVAL;
-	else if (nvec > maxvec)
+
+	if (nvec > maxvec)
 		nvec = maxvec;
 
-	do {
-		rc = msi_capability_init(dev, nvec);
-		if (rc < 0) {
-			return rc;
-		} else if (rc > 0) {
-			if (rc < minvec)
+	for (;;) {
+		if (!(flags & PCI_IRQ_NOAFFINITY)) {
+			dev->irq_affinity = irq_create_affinity_mask(&nvec);
+			if (nvec < minvec)
 				return -ENOSPC;
-			nvec = rc;
 		}
-	} while (rc);
 
-	return nvec;
+		rc = msi_capability_init(dev, nvec);
+		if (rc == 0)
+			return nvec;
+
+		kfree(dev->irq_affinity);
+		dev->irq_affinity = NULL;
+
+		if (rc < 0)
+			return rc;
+		if (rc < minvec)
+			return -ENOSPC;
+
+		nvec = rc;
+	}
+}
+
+/**
+ * pci_enable_msi_range - configure device's MSI capability structure
+ * @dev: device to configure
+ * @minvec: minimal number of interrupts to configure
+ * @maxvec: maximum number of interrupts to configure
+ *
+ * This function tries to allocate a maximum possible number of interrupts in a
+ * range between @minvec and @maxvec. It returns a negative errno if an error
+ * occurs. If it succeeds, it returns the actual number of interrupts allocated
+ * and updates the @dev's irq member to the lowest new interrupt number;
+ * the other interrupt numbers allocated to this device are consecutive.
+ **/
+int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
+{
+	return __pci_enable_msi_range(dev, minvec, maxvec, PCI_IRQ_NOAFFINITY);
 }
 EXPORT_SYMBOL(pci_enable_msi_range);
+
+static int __pci_enable_msix_range(struct pci_dev *dev,
+		struct msix_entry *entries, int minvec, int maxvec,
+		unsigned int flags)
+{
+	int nvec = maxvec;
+	int rc;
+
+	if (maxvec < minvec)
+		return -ERANGE;
+
+	for (;;) {
+		if (!(flags & PCI_IRQ_NOAFFINITY)) {
+			dev->irq_affinity = irq_create_affinity_mask(&nvec);
+			if (nvec < minvec)
+				return -ENOSPC;
+		}
+
+		rc = pci_enable_msix(dev, entries, nvec);
+		if (rc == 0)
+			return nvec;
+
+		kfree(dev->irq_affinity);
+		dev->irq_affinity = NULL;
+
+		if (rc < 0)
+			return rc;
+		if (rc < minvec)
+			return -ENOSPC;
+
+		nvec = rc;
+	}
+}
 
 /**
  * pci_enable_msix_range - configure device's MSI-X capability structure
···
  * with new allocated MSI-X interrupts.
  **/
 int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
-			       int minvec, int maxvec)
+		int minvec, int maxvec)
 {
-	int nvec = maxvec;
-	int rc;
-
-	if (maxvec < minvec)
-		return -ERANGE;
-
-	do {
-		rc = pci_enable_msix(dev, entries, nvec);
-		if (rc < 0) {
-			return rc;
-		} else if (rc > 0) {
-			if (rc < minvec)
-				return -ENOSPC;
-			nvec = rc;
-		}
-	} while (rc);
-
-	return nvec;
+	return __pci_enable_msix_range(dev, entries, minvec, maxvec,
+			PCI_IRQ_NOAFFINITY);
 }
 EXPORT_SYMBOL(pci_enable_msix_range);
+
+/**
+ * pci_alloc_irq_vectors - allocate multiple IRQs for a device
+ * @dev:		PCI device to operate on
+ * @min_vecs:		minimum number of vectors required (must be >= 1)
+ * @max_vecs:		maximum (desired) number of vectors
+ * @flags:		flags or quirks for the allocation
+ *
+ * Allocate up to @max_vecs interrupt vectors for @dev, using MSI-X or MSI
+ * vectors if available, and fall back to a single legacy vector
+ * if neither is available. Return the number of vectors allocated,
+ * (which might be smaller than @max_vecs) if successful, or a negative
+ * error code on error. If less than @min_vecs interrupt vectors are
+ * available for @dev the function will fail with -ENOSPC.
+ *
+ * To get the Linux IRQ number used for a vector that can be passed to
+ * request_irq() use the pci_irq_vector() helper.
+ */
+int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
+		unsigned int max_vecs, unsigned int flags)
+{
+	int vecs = -ENOSPC;
+
+	if (!(flags & PCI_IRQ_NOMSIX)) {
+		vecs = __pci_enable_msix_range(dev, NULL, min_vecs, max_vecs,
+				flags);
+		if (vecs > 0)
+			return vecs;
+	}
+
+	if (!(flags & PCI_IRQ_NOMSI)) {
+		vecs = __pci_enable_msi_range(dev, min_vecs, max_vecs, flags);
+		if (vecs > 0)
+			return vecs;
+	}
+
+	/* use legacy irq if allowed */
+	if (!(flags & PCI_IRQ_NOLEGACY) && min_vecs == 1)
+		return 1;
+	return vecs;
+}
+EXPORT_SYMBOL(pci_alloc_irq_vectors);
+
+/**
+ * pci_free_irq_vectors - free previously allocated IRQs for a device
+ * @dev:		PCI device to operate on
+ *
+ * Undoes the allocations and enabling in pci_alloc_irq_vectors().
+ */
+void pci_free_irq_vectors(struct pci_dev *dev)
+{
+	pci_disable_msix(dev);
+	pci_disable_msi(dev);
+}
+EXPORT_SYMBOL(pci_free_irq_vectors);
+
+/**
+ * pci_irq_vector - return Linux IRQ number of a device vector
+ * @dev: PCI device to operate on
+ * @nr: device-relative interrupt vector index (0-based).
+ */
+int pci_irq_vector(struct pci_dev *dev, unsigned int nr)
+{
+	if (dev->msix_enabled) {
+		struct msi_desc *entry;
+		int i = 0;
+
+		for_each_pci_msi_entry(entry, dev) {
+			if (i == nr)
+				return entry->irq;
+			i++;
+		}
+		WARN_ON_ONCE(1);
+		return -EINVAL;
+	}
+
+	if (dev->msi_enabled) {
+		struct msi_desc *entry = first_pci_msi_entry(dev);
+
+		if (WARN_ON_ONCE(nr >= entry->nvec_used))
+			return -EINVAL;
+	} else {
+		if (WARN_ON_ONCE(nr > 0))
+			return -EINVAL;
+	}
+
+	return dev->irq + nr;
+}
+EXPORT_SYMBOL(pci_irq_vector);
 
 struct pci_dev *msi_desc_to_pci_dev(struct msi_desc *desc)
 {
+1 -4
drivers/pci/pci-driver.c
···
 
 	if (!pci_dev->state_saved) {
 		pci_save_state(pci_dev);
-		if (!pci_has_subordinate(pci_dev))
+		if (pci_power_manageable(pci_dev))
 			pci_prepare_to_sleep(pci_dev);
 	}
 
···
 		return -ENOSYS;
 
 	pci_dev->state_saved = false;
-	pci_dev->no_d3cold = false;
 	error = pm->runtime_suspend(dev);
 	if (error) {
 		/*
···
 
 		return error;
 	}
-	if (!pci_dev->d3cold_allowed)
-		pci_dev->no_d3cold = true;
 
 	pci_fixup_device(pci_fixup_suspend, pci_dev);
 
+5
drivers/pci/pci-sysfs.c
···
 		return -EINVAL;
 
 	pdev->d3cold_allowed = !!val;
+	if (pdev->d3cold_allowed)
+		pci_d3cold_enable(pdev);
+	else
+		pci_d3cold_disable(pdev);
+
 	pm_runtime_resume(dev);
 
 	return count;
+259 -22
drivers/pci/pci.c
···
  *	Copyright 1997 -- 2000 Martin Mares <mj@ucw.cz>
  */
 
+#include <linux/acpi.h>
 #include <linux/kernel.h>
 #include <linux/delay.h>
+#include <linux/dmi.h>
 #include <linux/init.h>
 #include <linux/of.h>
 #include <linux/of_pci.h>
···
 #include <linux/device.h>
 #include <linux/pm_runtime.h>
 #include <linux/pci_hotplug.h>
+#include <linux/vmalloc.h>
 #include <asm/setup.h>
+#include <asm/dma.h>
 #include <linux/aer.h>
 #include "pci.h"
 
···
 unsigned long pci_hotplug_io_size  = DEFAULT_HOTPLUG_IO_SIZE;
 unsigned long pci_hotplug_mem_size = DEFAULT_HOTPLUG_MEM_SIZE;
 
+#define DEFAULT_HOTPLUG_BUS_SIZE	1
+unsigned long pci_hotplug_bus_size = DEFAULT_HOTPLUG_BUS_SIZE;
+
 enum pcie_bus_config_types pcie_bus_config = PCIE_BUS_DEFAULT;
 
 /*
···
 
 /* If set, the PCIe ARI capability will not be used. */
 static bool pcie_ari_disabled;
+
+/* Disable bridge_d3 for all PCIe ports */
+static bool pci_bridge_d3_disable;
+/* Force bridge_d3 for all PCIe ports */
+static bool pci_bridge_d3_force;
+
+static int __init pcie_port_pm_setup(char *str)
+{
+	if (!strcmp(str, "off"))
+		pci_bridge_d3_disable = true;
+	else if (!strcmp(str, "force"))
+		pci_bridge_d3_force = true;
+	return 1;
+}
+__setup("pcie_port_pm=", pcie_port_pm_setup);
 
 /**
  * pci_bus_max_busnr - returns maximum PCI bus number of given bus' children
···
 }
 
 /**
+ * pci_bridge_d3_possible - Is it possible to put the bridge into D3
+ * @bridge: Bridge to check
+ *
+ * This function checks if it is possible to move the bridge to D3.
+ * Currently we only allow D3 for recent enough PCIe ports.
+ */
+static bool pci_bridge_d3_possible(struct pci_dev *bridge)
+{
+	unsigned int year;
+
+	if (!pci_is_pcie(bridge))
+		return false;
+
+	switch (pci_pcie_type(bridge)) {
+	case PCI_EXP_TYPE_ROOT_PORT:
+	case PCI_EXP_TYPE_UPSTREAM:
+	case PCI_EXP_TYPE_DOWNSTREAM:
+		if (pci_bridge_d3_disable)
+			return false;
+		if (pci_bridge_d3_force)
+			return true;
+
+		/*
+		 * It should be safe to put PCIe ports from 2015 or newer
+		 * to D3.
+		 */
+		if (dmi_get_date(DMI_BIOS_DATE, &year, NULL, NULL) &&
+		    year >= 2015) {
+			return true;
+		}
+		break;
+	}
+
+	return false;
+}
+
+static int pci_dev_check_d3cold(struct pci_dev *dev, void *data)
+{
+	bool *d3cold_ok = data;
+	bool no_d3cold;
+
+	/*
+	 * The device needs to be allowed to go D3cold and if it is wake
+	 * capable to do so from D3cold.
+	 */
+	no_d3cold = dev->no_d3cold || !dev->d3cold_allowed ||
+		(device_may_wakeup(&dev->dev) && !pci_pme_capable(dev, PCI_D3cold)) ||
+		!pci_power_manageable(dev);
+
+	*d3cold_ok = !no_d3cold;
+
+	return no_d3cold;
+}
+
+/*
+ * pci_bridge_d3_update - Update bridge D3 capabilities
+ * @dev: PCI device which is changed
+ * @remove: Is the device being removed
+ *
+ * Update upstream bridge PM capabilities accordingly depending on if the
+ * device PM configuration was changed or the device is being removed. The
+ * change is also propagated upstream.
+ */
+static void pci_bridge_d3_update(struct pci_dev *dev, bool remove)
+{
+	struct pci_dev *bridge;
+	bool d3cold_ok = true;
+
+	bridge = pci_upstream_bridge(dev);
+	if (!bridge || !pci_bridge_d3_possible(bridge))
+		return;
+
+	pci_dev_get(bridge);
+	/*
+	 * If the device is removed we do not care about its D3cold
+	 * capabilities.
+	 */
+	if (!remove)
+		pci_dev_check_d3cold(dev, &d3cold_ok);
+
+	if (d3cold_ok) {
+		/*
+		 * We need to go through all children to find out if all of
+		 * them can still go to D3cold.
+		 */
+		pci_walk_bus(bridge->subordinate, pci_dev_check_d3cold,
+			     &d3cold_ok);
+	}
+
+	if (bridge->bridge_d3 != d3cold_ok) {
+		bridge->bridge_d3 = d3cold_ok;
+		/* Propagate change to upstream bridges */
+		pci_bridge_d3_update(bridge, false);
+	}
+
+	pci_dev_put(bridge);
+}
+
+/**
+ * pci_bridge_d3_device_changed - Update bridge D3 capabilities on change
+ * @dev: PCI device that was changed
+ *
+ * If a device is added or its PM configuration, such as is it allowed to
+ * enter D3cold, is changed this function updates upstream bridge PM
+ * capabilities accordingly.
+ */
+void pci_bridge_d3_device_changed(struct pci_dev *dev)
+{
+	pci_bridge_d3_update(dev, false);
+}
+
+/**
+ * pci_bridge_d3_device_removed - Update bridge D3 capabilities on remove
+ * @dev: PCI device being removed
+ *
+ * Function updates upstream bridge PM capabilities based on other devices
+ * still left on the bus.
+ */
+void pci_bridge_d3_device_removed(struct pci_dev *dev)
+{
+	pci_bridge_d3_update(dev, true);
+}
+
+/**
+ * pci_d3cold_enable - Enable D3cold for device
+ * @dev: PCI device to handle
+ *
+ * This function can be used in drivers to enable D3cold from the device
+ * they handle.  It also updates upstream PCI bridge PM capabilities
+ * accordingly.
+ */
+void pci_d3cold_enable(struct pci_dev *dev)
+{
+	if (dev->no_d3cold) {
+		dev->no_d3cold = false;
+		pci_bridge_d3_device_changed(dev);
+	}
+}
+EXPORT_SYMBOL_GPL(pci_d3cold_enable);
+
+/**
+ * pci_d3cold_disable - Disable D3cold for device
+ * @dev: PCI device to handle
+ *
+ * This function can be used in drivers to disable D3cold from the device
+ * they handle.  It also updates upstream PCI bridge PM capabilities
+ * accordingly.
+ */
+void pci_d3cold_disable(struct pci_dev *dev)
+{
+	if (!dev->no_d3cold) {
+		dev->no_d3cold = true;
+		pci_bridge_d3_device_changed(dev);
+	}
+}
+EXPORT_SYMBOL_GPL(pci_d3cold_disable);
+
+/**
  * pci_pm_init - Initialize PM functions of given PCI device
  * @dev: PCI device to handle.
  */
···
 	dev->pm_cap = pm;
 	dev->d3_delay = PCI_PM_D3_WAIT;
 	dev->d3cold_delay = PCI_PM_D3COLD_WAIT;
+	dev->bridge_d3 = pci_bridge_d3_possible(dev);
 	dev->d3cold_allowed = true;
 
 	dev->d1_support = false;
···
 	   so this function should never be called */
 	WARN_ONCE(1, "This architecture does not support memory mapped I/O\n");
 	return -ENODEV;
+#endif
+}
+
+/**
+ * pci_unmap_iospace - Unmap the memory mapped I/O space
+ * @res: resource to be unmapped
+ *
+ * Unmap the CPU virtual address @res from virtual address space.
+ * Only architectures that have memory mapped IO functions defined
+ * (and the PCI_IOBASE value defined) should call this function.
+ */
+void pci_unmap_iospace(struct resource *res)
+{
+#if defined(PCI_IOBASE) && defined(CONFIG_MMU)
+	unsigned long vaddr = (unsigned long)PCI_IOBASE + res->start;
+
+	unmap_kernel_range(vaddr, resource_size(res));
 #endif
 }
 
···
 static resource_size_t pci_specified_resource_alignment(struct pci_dev *dev)
 {
 	int seg, bus, slot, func, align_order, count;
+	unsigned short vendor, device, subsystem_vendor, subsystem_device;
 	resource_size_t align = 0;
 	char *p;
 
···
 		} else {
 			align_order = -1;
 		}
-		if (sscanf(p, "%x:%x:%x.%x%n",
-			&seg, &bus, &slot, &func, &count) != 4) {
-			seg = 0;
-			if (sscanf(p, "%x:%x.%x%n",
-					&bus, &slot, &func, &count) != 3) {
-				/* Invalid format */
-				printk(KERN_ERR "PCI: Can't parse resource_alignment parameter: %s\n",
-					p);
+		if (strncmp(p, "pci:", 4) == 0) {
+			/* PCI vendor/device (subvendor/subdevice) ids are specified */
+			p += 4;
+			if (sscanf(p, "%hx:%hx:%hx:%hx%n",
+				&vendor, &device, &subsystem_vendor, &subsystem_device, &count) != 4) {
+				if (sscanf(p, "%hx:%hx%n", &vendor, &device, &count) != 2) {
+					printk(KERN_ERR "PCI: Can't parse resource_alignment parameter: pci:%s\n",
+						p);
+					break;
+				}
+				subsystem_vendor = subsystem_device = 0;
+			}
+			p += count;
+			if ((!vendor || (vendor == dev->vendor)) &&
+				(!device || (device == dev->device)) &&
+				(!subsystem_vendor || (subsystem_vendor == dev->subsystem_vendor)) &&
+				(!subsystem_device || (subsystem_device == dev->subsystem_device))) {
+				if (align_order == -1)
+					align = PAGE_SIZE;
+				else
+					align = 1 << align_order;
+				/* Found */
 				break;
 			}
 		}
-		p += count;
-		if (seg == pci_domain_nr(dev->bus) &&
-			bus == dev->bus->number &&
-			slot == PCI_SLOT(dev->devfn) &&
-			func == PCI_FUNC(dev->devfn)) {
-			if (align_order == -1)
-				align = PAGE_SIZE;
-			else
-				align = 1 << align_order;
-			/* Found */
-			break;
+		else {
+			if (sscanf(p, "%x:%x:%x.%x%n",
+				&seg, &bus, &slot, &func, &count) != 4) {
+				seg = 0;
+				if (sscanf(p, "%x:%x.%x%n",
+						&bus, &slot, &func, &count) != 3) {
+					/* Invalid format */
+					printk(KERN_ERR "PCI: Can't parse resource_alignment parameter: %s\n",
+						p);
+					break;
+				}
+			}
+			p += count;
+			if (seg == pci_domain_nr(dev->bus) &&
+				bus == dev->bus->number &&
+				slot == PCI_SLOT(dev->devfn) &&
+				func == PCI_FUNC(dev->devfn)) {
+				if (align_order == -1)
+					align = PAGE_SIZE;
+				else
+					align = 1 << align_order;
+				/* Found */
+				break;
+			}
 		}
 		if (*p != ';' && *p != ',') {
 			/* End of param or invalid format */
···
 	return pci_set_resource_alignment_param(buf, count);
 }
 
-BUS_ATTR(resource_alignment, 0644, pci_resource_alignment_show,
+static BUS_ATTR(resource_alignment, 0644, pci_resource_alignment_show,
 	pci_resource_alignment_store);
 
 static int __init pci_resource_alignment_sysfs_init(void)
···
 }
 
 #ifdef CONFIG_PCI_DOMAINS_GENERIC
-void pci_bus_assign_domain_nr(struct pci_bus *bus, struct device *parent)
+static int of_pci_bus_find_domain_nr(struct device *parent)
 {
 	static int use_dt_domains = -1;
 	int domain = -1;
···
 		domain = -1;
 	}
 
-	bus->domain_nr = domain;
+	return domain;
+}
+
+int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent)
+{
+	return acpi_disabled ? of_pci_bus_find_domain_nr(parent) :
+			       acpi_pci_bus_find_domain_nr(bus);
 }
 #endif
 #endif
···
 		pci_hotplug_io_size = memparse(str + 9, &str);
 	} else if (!strncmp(str, "hpmemsize=", 10)) {
 		pci_hotplug_mem_size = memparse(str + 10, &str);
+	} else if (!strncmp(str, "hpbussize=", 10)) {
+		pci_hotplug_bus_size =
+			simple_strtoul(str + 10, &str, 0);
+		if (pci_hotplug_bus_size > 0xff)
+			pci_hotplug_bus_size = DEFAULT_HOTPLUG_BUS_SIZE;
 	} else if (!strncmp(str, "pcie_bus_tune_off", 17)) {
 		pcie_bus_config = PCIE_BUS_TUNE_OFF;
 	} else if (!strncmp(str, "pcie_bus_safe", 13)) {
+11
drivers/pci/pci.h
···
 void pci_ea_init(struct pci_dev *dev);
 void pci_allocate_cap_save_buffers(struct pci_dev *dev);
 void pci_free_cap_save_buffers(struct pci_dev *dev);
+void pci_bridge_d3_device_changed(struct pci_dev *dev);
+void pci_bridge_d3_device_removed(struct pci_dev *dev);
 
 static inline void pci_wakeup_event(struct pci_dev *dev)
 {
···
 static inline bool pci_has_subordinate(struct pci_dev *pci_dev)
 {
 	return !!(pci_dev->subordinate);
+}
+
+static inline bool pci_power_manageable(struct pci_dev *pci_dev)
+{
+	/*
+	 * Currently we allow normal PCI devices and PCI bridges transition
+	 * into D3 if their bridge_d3 is set.
+	 */
+	return !pci_has_subordinate(pci_dev) || pci_dev->bridge_d3;
 }
 
 struct pci_vpd_ops {
+1 -4
drivers/pci/pcie/Kconfig
···
 	depends on PCIEPORTBUS && PM
 
 config PCIE_DPC
-	tristate "PCIe Downstream Port Containment support"
+	bool "PCIe Downstream Port Containment support"
 	depends on PCIEPORTBUS
 	default n
 	help
···
 	  will be handled by the DPC driver.  If your system doesn't
 	  have this capability or you do not want to use this feature,
 	  it is safe to answer N.
-
-	  To compile this driver as a module, choose M here: the module
-	  will be called pcie-dpc.
+1 -1
drivers/pci/pcie/aspm.c
···
 static void pcie_set_clkpm(struct pcie_link_state *link, int enable)
 {
 	/* Don't enable Clock PM if the link is not Clock PM capable */
-	if (!link->clkpm_capable && enable)
+	if (!link->clkpm_capable)
 		enable = 0;
 	/* Need nothing if the specified equals to current state */
 	if (link->clkpm_enabled == enable)
+7 -12
drivers/pci/pcie/pcie-dpc.c
···
 
 struct dpc_dev {
 	struct pcie_device	*dev;
-	struct work_struct work;
-	int cap_pos;
+	struct work_struct	work;
+	int			cap_pos;
 };
 
 static void dpc_wait_link_inactive(struct pci_dev *pdev)
···
 	int status;
 	u16 ctl, cap;
 
-	dpc = kzalloc(sizeof(*dpc), GFP_KERNEL);
+	dpc = devm_kzalloc(&dev->device, sizeof(*dpc), GFP_KERNEL);
 	if (!dpc)
 		return -ENOMEM;
 
···
 	INIT_WORK(&dpc->work, interrupt_event_handler);
 	set_service_data(dev, dpc);
 
-	status = request_irq(dev->irq, dpc_irq, IRQF_SHARED, "pcie-dpc", dpc);
+	status = devm_request_irq(&dev->device, dev->irq, dpc_irq, IRQF_SHARED,
+				  "pcie-dpc", dpc);
 	if (status) {
 		dev_warn(&dev->device, "request IRQ%d failed: %d\n", dev->irq,
 			 status);
-		goto out;
+		return status;
 	}
 
 	pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CAP, &cap);
···
 		FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), (cap >> 8) & 0xf,
 		FLAG(cap, PCI_EXP_DPC_CAP_DL_ACTIVE));
 	return status;
- out:
-	kfree(dpc);
-	return status;
 }
 
 static void dpc_remove(struct pcie_device *dev)
···
 	pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, &ctl);
 	ctl &= ~(PCI_EXP_DPC_CTL_EN_NONFATAL | PCI_EXP_DPC_CTL_INT_EN);
 	pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl);
-
-	free_irq(dev->irq, dpc);
-	kfree(dpc);
 }
 
 static struct pcie_port_service_driver dpcdriver = {
 	.name		= "dpc",
-	.port_type	= PCI_EXP_TYPE_ROOT_PORT | PCI_EXP_TYPE_DOWNSTREAM,
+	.port_type	= PCIE_ANY_PORT,
 	.service	= PCIE_PORT_SERVICE_DPC,
 	.probe		= dpc_probe,
 	.remove		= dpc_remove,
+3
drivers/pci/pcie/portdrv_core.c
··· 11 11 #include <linux/kernel.h> 12 12 #include <linux/errno.h> 13 13 #include <linux/pm.h> 14 + #include <linux/pm_runtime.h> 14 15 #include <linux/string.h> 15 16 #include <linux/slab.h> 16 17 #include <linux/pcieport_if.h> ··· 342 341 put_device(device); 343 342 return retval; 344 343 } 344 + 345 + pm_runtime_no_callbacks(device); 345 346 346 347 return 0; 347 348 }
+49 -3
drivers/pci/pcie/portdrv_pci.c
··· 93 93 return 0; 94 94 } 95 95 96 + static int pcie_port_runtime_suspend(struct device *dev) 97 + { 98 + return to_pci_dev(dev)->bridge_d3 ? 0 : -EBUSY; 99 + } 100 + 101 + static int pcie_port_runtime_resume(struct device *dev) 102 + { 103 + return 0; 104 + } 105 + 106 + static int pcie_port_runtime_idle(struct device *dev) 107 + { 108 + /* 109 + * Assume the PCI core has set bridge_d3 whenever it thinks the port 110 + * should be good to go to D3. Everything else, including moving 111 + * the port to D3, is handled by the PCI core. 112 + */ 113 + return to_pci_dev(dev)->bridge_d3 ? 0 : -EBUSY; 114 + } 115 + 96 116 static const struct dev_pm_ops pcie_portdrv_pm_ops = { 97 117 .suspend = pcie_port_device_suspend, 98 118 .resume = pcie_port_device_resume, ··· 121 101 .poweroff = pcie_port_device_suspend, 122 102 .restore = pcie_port_device_resume, 123 103 .resume_noirq = pcie_port_resume_noirq, 104 + .runtime_suspend = pcie_port_runtime_suspend, 105 + .runtime_resume = pcie_port_runtime_resume, 106 + .runtime_idle = pcie_port_runtime_idle, 124 107 }; 125 108 126 109 #define PCIE_PORTDRV_PM_OPS (&pcie_portdrv_pm_ops) ··· 157 134 return status; 158 135 159 136 pci_save_state(dev); 137 + 160 138 /* 161 - * D3cold may not work properly on some PCIe port, so disable 162 - * it by default. 139 + * Prevent runtime PM if the port is advertising support for PCIe 140 + * hotplug. Otherwise the BIOS hotplug SMI code might not be able 141 + * to enumerate devices behind this port properly (the port is 142 + * powered down preventing all config space accesses to the 143 + * subordinate devices). We can't be sure for native PCIe hotplug 144 + * either so prevent that as well. 163 145 */ 164 - dev->d3cold_allowed = false; 146 + if (!dev->is_hotplug_bridge) { 147 + /* 148 + * Keep the port resumed 100ms to make sure things like 149 + * config space accesses from userspace (lspci) will not 150 + * cause the port to repeatedly suspend and resume. 
151 + */ 152 + pm_runtime_set_autosuspend_delay(&dev->dev, 100); 153 + pm_runtime_use_autosuspend(&dev->dev); 154 + pm_runtime_mark_last_busy(&dev->dev); 155 + pm_runtime_put_autosuspend(&dev->dev); 156 + pm_runtime_allow(&dev->dev); 157 + } 158 + 165 159 return 0; 166 160 } 167 161 168 162 static void pcie_portdrv_remove(struct pci_dev *dev) 169 163 { 164 + if (!dev->is_hotplug_bridge) { 165 + pm_runtime_forbid(&dev->dev); 166 + pm_runtime_get_noresume(&dev->dev); 167 + pm_runtime_dont_use_autosuspend(&dev->dev); 168 + } 169 + 170 170 pcie_port_device_remove(dev); 171 171 } 172 172
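The runtime-PM policy the portdrv_pci.c hunks add boils down to two decisions: a port may runtime suspend only once the PCI core has set bridge_d3, and hotplug bridges never opt in at all, so config space behind them stays reachable for BIOS or native hotplug code. Restated in userspace (toy struct; field names follow the patch, nothing else is kernel code):

```c
#include <assert.h>
#include <stdbool.h>

#define TOY_EBUSY 16    /* stand-in for the kernel's EBUSY */

struct toy_port {
    bool bridge_d3;
    bool is_hotplug_bridge;
};

/* Mirrors pcie_port_runtime_idle()/_suspend(): defer to the core's verdict. */
int toy_runtime_idle(const struct toy_port *p)
{
    return p->bridge_d3 ? 0 : -TOY_EBUSY;
}

/* Mirrors the probe-time check guarding pm_runtime_allow(). */
bool toy_probe_allows_runtime_pm(const struct toy_port *p)
{
    return !p->is_hotplug_bridge;
}
```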
+21 -1
drivers/pci/probe.c
··· 16 16 #include <linux/aer.h> 17 17 #include <linux/acpi.h> 18 18 #include <linux/irqdomain.h> 19 + #include <linux/pm_runtime.h> 19 20 #include "pci.h" 20 21 21 22 #define CARDBUS_LATENCY_TIMER 176 /* secondary latency timer */ ··· 833 832 u8 primary, secondary, subordinate; 834 833 int broken = 0; 835 834 835 + /* 836 + * Make sure the bridge is powered on to be able to access config 837 + * space of devices below it. 838 + */ 839 + pm_runtime_get_sync(&dev->dev); 840 + 836 841 pci_read_config_dword(dev, PCI_PRIMARY_BUS, &buses); 837 842 primary = buses & 0xFF; 838 843 secondary = (buses >> 8) & 0xFF; ··· 1018 1011 1019 1012 out: 1020 1013 pci_write_config_word(dev, PCI_BRIDGE_CONTROL, bctl); 1014 + 1015 + pm_runtime_put(&dev->dev); 1021 1016 1022 1017 return max; 1023 1018 } ··· 2086 2077 } 2087 2078 2088 2079 /* 2080 + * Make sure a hotplug bridge has at least the minimum requested 2081 + * number of buses. 2082 + */ 2083 + if (bus->self && bus->self->is_hotplug_bridge && pci_hotplug_bus_size) { 2084 + if (max - bus->busn_res.start < pci_hotplug_bus_size - 1) 2085 + max = bus->busn_res.start + pci_hotplug_bus_size - 1; 2086 + } 2087 + 2088 + /* 2089 2089 * We've scanned the bus and so we know all about what's on 2090 2090 * the other side of any bridges that may be on this bus plus 2091 2091 * any devices. ··· 2145 2127 b->sysdata = sysdata; 2146 2128 b->ops = ops; 2147 2129 b->number = b->busn_res.start = bus; 2148 - pci_bus_assign_domain_nr(b, parent); 2130 + #ifdef CONFIG_PCI_DOMAINS_GENERIC 2131 + b->domain_nr = pci_bus_find_domain_nr(b, parent); 2132 + #endif 2149 2133 b2 = pci_find_bus(pci_domain_nr(b), bus); 2150 2134 if (b2) { 2151 2135 /* If we already got to this bus through a different bridge, ignore it */
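The hotplug reservation added to the probe.c bus scan reduces to a clamp on the highest bus number found under a hotplug bridge. Restated as plain arithmetic (the function name here is invented for illustration):

```c
#include <assert.h>

/*
 * Reserve at least hotplug_bus_size bus numbers under a hotplug
 * bridge, the starting bus number included, mirroring the
 * "max - busn_res.start < pci_hotplug_bus_size - 1" test above.
 */
unsigned int clamp_hotplug_bus_max(unsigned int busn_start,
                                   unsigned int max,
                                   unsigned int hotplug_bus_size)
{
    if (max - busn_start < hotplug_bus_size - 1)
        max = busn_start + hotplug_bus_size - 1;
    return max;
}
```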
+6 -3
drivers/pci/proc.c
··· 231 231 { 232 232 struct pci_dev *dev = PDE_DATA(file_inode(file)); 233 233 struct pci_filp_private *fpriv = file->private_data; 234 - int i, ret; 234 + int i, ret, write_combine; 235 235 236 236 if (!capable(CAP_SYS_RAWIO)) 237 237 return -EPERM; ··· 245 245 if (i >= PCI_ROM_RESOURCE) 246 246 return -ENODEV; 247 247 248 + if (fpriv->mmap_state == pci_mmap_mem) 249 + write_combine = fpriv->write_combine; 250 + else 251 + write_combine = 0; 248 252 ret = pci_mmap_page_range(dev, vma, 249 - fpriv->mmap_state, 250 - fpriv->write_combine); 253 + fpriv->mmap_state, write_combine); 251 254 if (ret < 0) 252 255 return ret; 253 256
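The proc.c change above stops forwarding the user's write-combine request for I/O port mappings; write combining is honoured for memory space only. The whole decision is a single predicate (toy names):

```c
#include <assert.h>
#include <stdbool.h>

enum toy_mmap_state { toy_mmap_io, toy_mmap_mem };

/* Write combining applies only to memory-space mappings. */
bool toy_effective_write_combine(enum toy_mmap_state state, bool requested)
{
    return state == toy_mmap_mem && requested;
}
```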
+13 -4
drivers/pci/quirks.c
··· 3189 3189 } 3190 3190 3191 3191 /* 3192 - * Atheros AR93xx chips do not behave after a bus reset. The device will 3193 - * throw a Link Down error on AER-capable systems and regardless of AER, 3194 - * config space of the device is never accessible again and typically 3195 - * causes the system to hang or reset when access is attempted. 3192 + * Some Atheros AR9xxx and QCA988x chips do not behave after a bus reset. 3193 + * The device will throw a Link Down error on AER-capable systems and 3194 + * regardless of AER, config space of the device is never accessible again 3195 + * and typically causes the system to hang or reset when access is attempted. 3196 3196 * http://www.spinics.net/lists/linux-pci/msg34797.html 3197 3197 */ 3198 3198 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0030, quirk_no_bus_reset); 3199 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0032, quirk_no_bus_reset); 3200 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003c, quirk_no_bus_reset); 3199 3201 3200 3202 static void quirk_no_pm_reset(struct pci_dev *dev) 3201 3203 { ··· 3713 3711 /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c59 */ 3714 3712 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x917a, 3715 3713 quirk_dma_func1_alias); 3714 + /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c78 */ 3715 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9182, 3716 + quirk_dma_func1_alias); 3716 3717 /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c46 */ 3717 3718 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x91a0, 3718 3719 quirk_dma_func1_alias); ··· 3751 3746 static const struct pci_device_id fixed_dma_alias_tbl[] = { 3752 3747 { PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x0285, 3753 3748 PCI_VENDOR_ID_ADAPTEC2, 0x02bb), /* Adaptec 3405 */ 3749 + .driver_data = PCI_DEVFN(1, 0) }, 3750 + { PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x0285, 3751 + PCI_VENDOR_ID_ADAPTEC2, 0x02bc), /* Adaptec 3805 */ 3754 3752 .driver_data = PCI_DEVFN(1, 0) }, 3755 3753 { 0 } 3756 3754 }; ··· 4095 4087 { PCI_VENDOR_ID_AMD, 0x7809, pci_quirk_amd_sb_acs }, 4096 4088 { PCI_VENDOR_ID_SOLARFLARE, 0x0903, pci_quirk_mf_endpoint_acs }, 4097 4089 { PCI_VENDOR_ID_SOLARFLARE, 0x0923, pci_quirk_mf_endpoint_acs }, 4090 + { PCI_VENDOR_ID_SOLARFLARE, 0x0A03, pci_quirk_mf_endpoint_acs }, 4098 4091 { PCI_VENDOR_ID_INTEL, 0x10C6, pci_quirk_mf_endpoint_acs }, 4099 4092 { PCI_VENDOR_ID_INTEL, 0x10DB, pci_quirk_mf_endpoint_acs }, 4100 4093 { PCI_VENDOR_ID_INTEL, 0x10DD, pci_quirk_mf_endpoint_acs },
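The Marvell entries added above route DMA through function 1 of the same slot. A devfn packs slot and function as (slot << 3) | fn; these macros restate the PCI_DEVFN()/PCI_SLOT()/PCI_FUNC() arithmetic from the uapi headers so the alias computation can be checked in isolation:

```c
#include <assert.h>

#define TOY_DEVFN(slot, fn)  ((((slot) & 0x1f) << 3) | ((fn) & 0x07))
#define TOY_SLOT(devfn)      (((devfn) >> 3) & 0x1f)
#define TOY_FUNC(devfn)      ((devfn) & 0x07)

/* The alias quirk_dma_func1_alias() reports: same slot, function 1. */
unsigned int toy_dma_func1_alias(unsigned int devfn)
{
    return TOY_DEVFN(TOY_SLOT(devfn), 1);
}
```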
+2
drivers/pci/remove.c
··· 96 96 dev->subordinate = NULL; 97 97 } 98 98 99 + pci_bridge_d3_device_removed(dev); 100 + 99 101 pci_destroy_dev(dev); 100 102 } 101 103
+68
drivers/pci/setup-bus.c
··· 1428 1428 } 1429 1429 EXPORT_SYMBOL(pci_bus_assign_resources); 1430 1430 1431 + static void pci_claim_device_resources(struct pci_dev *dev) 1432 + { 1433 + int i; 1434 + 1435 + for (i = 0; i < PCI_BRIDGE_RESOURCES; i++) { 1436 + struct resource *r = &dev->resource[i]; 1437 + 1438 + if (!r->flags || r->parent) 1439 + continue; 1440 + 1441 + pci_claim_resource(dev, i); 1442 + } 1443 + } 1444 + 1445 + static void pci_claim_bridge_resources(struct pci_dev *dev) 1446 + { 1447 + int i; 1448 + 1449 + for (i = PCI_BRIDGE_RESOURCES; i < PCI_NUM_RESOURCES; i++) { 1450 + struct resource *r = &dev->resource[i]; 1451 + 1452 + if (!r->flags || r->parent) 1453 + continue; 1454 + 1455 + pci_claim_bridge_resource(dev, i); 1456 + } 1457 + } 1458 + 1459 + static void pci_bus_allocate_dev_resources(struct pci_bus *b) 1460 + { 1461 + struct pci_dev *dev; 1462 + struct pci_bus *child; 1463 + 1464 + list_for_each_entry(dev, &b->devices, bus_list) { 1465 + pci_claim_device_resources(dev); 1466 + 1467 + child = dev->subordinate; 1468 + if (child) 1469 + pci_bus_allocate_dev_resources(child); 1470 + } 1471 + } 1472 + 1473 + static void pci_bus_allocate_resources(struct pci_bus *b) 1474 + { 1475 + struct pci_bus *child; 1476 + 1477 + /* 1478 + * Carry out a depth-first search on the PCI bus 1479 + * tree to allocate bridge apertures. Read the 1480 + * programmed bridge bases and recursively claim 1481 + * the respective bridge resources. 
1482 + */ 1483 + if (b->self) { 1484 + pci_read_bridge_bases(b); 1485 + pci_claim_bridge_resources(b->self); 1486 + } 1487 + 1488 + list_for_each_entry(child, &b->children, node) 1489 + pci_bus_allocate_resources(child); 1490 + } 1491 + 1492 + void pci_bus_claim_resources(struct pci_bus *b) 1493 + { 1494 + pci_bus_allocate_resources(b); 1495 + pci_bus_allocate_dev_resources(b); 1496 + } 1497 + EXPORT_SYMBOL(pci_bus_claim_resources); 1498 + 1431 1499 static void __pci_bridge_assign_resources(const struct pci_dev *bridge, 1432 1500 struct list_head *add_head, 1433 1501 struct list_head *fail_head)
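pci_bus_claim_resources() above is two depth-first walks: bridge apertures are claimed top-down first, then each device's BARs, because a resource can only be claimed inside an already-claimed parent aperture. A minimal walk with the same shape (toy bus tree; the "claim" is just an ordering counter):

```c
#include <assert.h>
#include <stddef.h>

struct toy_bus {
    struct toy_bus *child;      /* first child bus, if any */
    struct toy_bus *sibling;    /* next bus at the same level */
    int claim_order;
};

static int toy_claim_counter;

/* Claim this bridge's apertures before recursing, as the DFS above does. */
void toy_claim_bridges(struct toy_bus *b)
{
    struct toy_bus *c;

    b->claim_order = ++toy_claim_counter;   /* parent aperture first */
    for (c = b->child; c; c = c->sibling)
        toy_claim_bridges(c);               /* then everything below */
}
```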
+4 -11
drivers/scsi/lpfc/lpfc_init.c
··· 4854 4854 lpfc_enable_pci_dev(struct lpfc_hba *phba) 4855 4855 { 4856 4856 struct pci_dev *pdev; 4857 - int bars = 0; 4858 4857 4859 4858 /* Obtain PCI device reference */ 4860 4859 if (!phba->pcidev) 4861 4860 goto out_error; 4862 4861 else 4863 4862 pdev = phba->pcidev; 4864 - /* Select PCI BARs */ 4865 - bars = pci_select_bars(pdev, IORESOURCE_MEM); 4866 4863 /* Enable PCI device */ 4867 4864 if (pci_enable_device_mem(pdev)) 4868 4865 goto out_error; 4869 4866 /* Request PCI resource for the device */ 4870 - if (pci_request_selected_regions(pdev, bars, LPFC_DRIVER_NAME)) 4867 + if (pci_request_mem_regions(pdev, LPFC_DRIVER_NAME)) 4871 4868 goto out_disable_device; 4872 4869 /* Set up device as PCI master and save state for EEH */ 4873 4870 pci_set_master(pdev); ··· 4881 4884 pci_disable_device(pdev); 4882 4885 out_error: 4883 4886 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 4884 - "1401 Failed to enable pci device, bars:x%x\n", bars); 4887 + "1401 Failed to enable pci device\n"); 4885 4888 return -ENODEV; 4886 4889 } 4887 4890 ··· 4896 4899 lpfc_disable_pci_dev(struct lpfc_hba *phba) 4897 4900 { 4898 4901 struct pci_dev *pdev; 4899 - int bars; 4900 4902 4901 4903 /* Obtain PCI device reference */ 4902 4904 if (!phba->pcidev) 4903 4905 return; 4904 4906 else 4905 4907 pdev = phba->pcidev; 4906 - /* Select PCI BARs */ 4907 - bars = pci_select_bars(pdev, IORESOURCE_MEM); 4908 4908 /* Release PCI resource and disable PCI device */ 4909 - pci_release_selected_regions(pdev, bars); 4909 + pci_release_mem_regions(pdev); 4910 4910 pci_disable_device(pdev); 4911 4911 4912 4912 return; ··· 9805 9811 struct lpfc_vport **vports; 9806 9812 struct lpfc_hba *phba = vport->phba; 9807 9813 int i; 9808 - int bars = pci_select_bars(pdev, IORESOURCE_MEM); 9809 9814 9810 9815 spin_lock_irq(&phba->hbalock); 9811 9816 vport->load_flag |= FC_UNLOADING; ··· 9879 9886 9880 9887 lpfc_hba_free(phba); 9881 9888 9882 - pci_release_selected_regions(pdev, bars); 9889 + pci_release_mem_regions(pdev); 9883 9890 pci_disable_device(pdev); 9884 9891 }
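pci_request_mem_regions(), adopted above, is introduced in this series as shorthand for pci_request_selected_regions(pdev, pci_select_bars(pdev, IORESOURCE_MEM), name), which is why lpfc can drop its local "bars" bookkeeping. The BAR-selection step is a flags scan; restated here over a toy resource table:

```c
#include <assert.h>

#define TOY_NUM_BARS        6
#define TOY_IORESOURCE_IO   0x0100UL
#define TOY_IORESOURCE_MEM  0x0200UL

/* Bitmask of BARs whose flags intersect the wanted resource type. */
int toy_select_bars(const unsigned long flags[TOY_NUM_BARS],
                    unsigned long wanted)
{
    int i, bars = 0;

    for (i = 0; i < TOY_NUM_BARS; i++)
        if (flags[i] & wanted)
            bars |= 1 << i;
    return bars;
}
```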
+1 -1
drivers/usb/host/xhci-pci.c
··· 387 387 * need to have the registers polled during D3, so avoid D3cold. 388 388 */ 389 389 if (xhci->quirks & XHCI_COMP_MODE_QUIRK) 390 - pdev->no_d3cold = true; 390 + pci_d3cold_disable(pdev); 391 391 392 392 if (xhci->quirks & XHCI_PME_STUCK_QUIRK) 393 393 xhci_pme_quirk(hcd);
+2
include/linux/pci-acpi.h
··· 24 24 } 25 25 extern phys_addr_t acpi_pci_root_get_mcfg_addr(acpi_handle handle); 26 26 27 + extern phys_addr_t pci_mcfg_lookup(u16 domain, struct resource *bus_res); 28 + 27 29 static inline acpi_handle acpi_find_root_bridge_handle(struct pci_dev *pdev) 28 30 { 29 31 struct pci_bus *pbus = pdev->bus;
+84 -9
include/linux/pci.h
··· 101 101 DEVICE_COUNT_RESOURCE = PCI_NUM_RESOURCES, 102 102 }; 103 103 104 + /* 105 + * pci_power_t values must match the bits in the Capabilities PME_Support 106 + * and Control/Status PowerState fields in the Power Management capability. 107 + */ 104 108 typedef int __bitwise pci_power_t; 105 109 106 110 #define PCI_D0 ((pci_power_t __force) 0) ··· 120 116 121 117 static inline const char *pci_power_name(pci_power_t state) 122 118 { 123 - return pci_power_names[1 + (int) state]; 119 + return pci_power_names[1 + (__force int) state]; 124 120 } 125 121 126 122 #define PCI_PM_D2_DELAY 200 ··· 298 294 unsigned int d2_support:1; /* Low power state D2 is supported */ 299 295 unsigned int no_d1d2:1; /* D1 and D2 are forbidden */ 300 296 unsigned int no_d3cold:1; /* D3cold is forbidden */ 297 + unsigned int bridge_d3:1; /* Allow D3 for bridge */ 301 298 unsigned int d3cold_allowed:1; /* D3cold is allowed by user */ 302 299 unsigned int mmio_always_on:1; /* disallow turning off io/mem 303 300 decoding during bar sizing */ ··· 325 320 * directly, use the values stored here. They might be different! 
326 321 */ 327 322 unsigned int irq; 323 + struct cpumask *irq_affinity; 328 324 struct resource resource[DEVICE_COUNT_RESOURCE]; /* I/O and memory regions + expansion ROMs */ 329 325 330 326 bool match_driver; /* Skip attaching driver */ ··· 1090 1084 bool pci_dev_run_wake(struct pci_dev *dev); 1091 1085 bool pci_check_pme_status(struct pci_dev *dev); 1092 1086 void pci_pme_wakeup_bus(struct pci_bus *bus); 1087 + void pci_d3cold_enable(struct pci_dev *dev); 1088 + void pci_d3cold_disable(struct pci_dev *dev); 1093 1089 1094 1090 static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state, 1095 1091 bool enable) ··· 1123 1115 /* Helper functions for low-level code (drivers/pci/setup-[bus,res].c) */ 1124 1116 resource_size_t pcibios_retrieve_fw_addr(struct pci_dev *dev, int idx); 1125 1117 void pci_bus_assign_resources(const struct pci_bus *bus); 1118 + void pci_bus_claim_resources(struct pci_bus *bus); 1126 1119 void pci_bus_size_bridges(struct pci_bus *bus); 1127 1120 int pci_claim_resource(struct pci_dev *, int); 1128 1121 int pci_claim_bridge_resource(struct pci_dev *bridge, int i); ··· 1153 1144 void pci_add_resource_offset(struct list_head *resources, struct resource *res, 1154 1145 resource_size_t offset); 1155 1146 void pci_free_resource_list(struct list_head *resources); 1156 - void pci_bus_add_resource(struct pci_bus *bus, struct resource *res, unsigned int flags); 1147 + void pci_bus_add_resource(struct pci_bus *bus, struct resource *res, 1148 + unsigned int flags); 1157 1149 struct resource *pci_bus_resource_n(const struct pci_bus *bus, int n); 1158 1150 void pci_bus_remove_resources(struct pci_bus *bus); 1151 + int devm_request_pci_bus_resources(struct device *dev, 1152 + struct list_head *resources); 1159 1153 1160 1154 #define pci_bus_for_each_resource(bus, res, i) \ 1161 1155 for (i = 0; \ ··· 1180 1168 unsigned long pci_address_to_pio(phys_addr_t addr); 1181 1169 phys_addr_t pci_pio_to_address(unsigned long pio); 1182 1170 int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr); 1171 + void pci_unmap_iospace(struct resource *res); 1183 1172 1184 1173 static inline pci_bus_addr_t pci_bus_address(struct pci_dev *pdev, int bar) 1185 1174 { ··· 1251 1238 int pci_set_vga_state(struct pci_dev *pdev, bool decode, 1252 1239 unsigned int command_bits, u32 flags); 1253 1240 1241 + #define PCI_IRQ_NOLEGACY (1 << 0) /* don't use legacy interrupts */ 1242 + #define PCI_IRQ_NOMSI (1 << 1) /* don't use MSI interrupts */ 1243 + #define PCI_IRQ_NOMSIX (1 << 2) /* don't use MSI-X interrupts */ 1244 + #define PCI_IRQ_NOAFFINITY (1 << 3) /* don't auto-assign affinity */ 1245 + 1254 1246 /* kmem_cache style wrapper around pci_alloc_consistent() */ 1255 1247 1256 1248 #include <linux/pci-dma.h> ··· 1303 1285 return rc; 1304 1286 return 0; 1305 1287 } 1288 + int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs, 1289 + unsigned int max_vecs, unsigned int flags); 1290 + void pci_free_irq_vectors(struct pci_dev *dev); 1291 + int pci_irq_vector(struct pci_dev *dev, unsigned int nr); 1292 + 1306 1293 #else 1307 1294 static inline int pci_msi_vec_count(struct pci_dev *dev) { return -ENOSYS; } 1308 1295 static inline void pci_msi_shutdown(struct pci_dev *dev) { } ··· 1331 1308 static inline int pci_enable_msix_exact(struct pci_dev *dev, 1332 1309 struct msix_entry *entries, int nvec) 1333 1310 { return -ENOSYS; } 1311 + static inline int pci_alloc_irq_vectors(struct pci_dev *dev, 1312 + unsigned int min_vecs, unsigned int max_vecs, 1313 + unsigned int flags) 1314 + { 1315 + if (min_vecs > 1) 1316 + return -EINVAL; 1317 + return 1; 1318 + } 1319 + static inline void pci_free_irq_vectors(struct pci_dev *dev) 1320 + { 1321 + } 1322 + 1323 + static inline int pci_irq_vector(struct pci_dev *dev, unsigned int nr) 1324 + { 1325 + if (WARN_ON_ONCE(nr > 0)) 1326 + return -EINVAL; 1327 + return dev->irq; 1328 + } 1334 1329 #endif 1335 1330 1336 1331 #ifdef CONFIG_PCIEPORTBUS ··· 1431 1390 {
1432 1391 return bus->domain_nr; 1433 1392 } 1434 - void pci_bus_assign_domain_nr(struct pci_bus *bus, struct device *parent); 1393 + #ifdef CONFIG_ACPI 1394 + int acpi_pci_bus_find_domain_nr(struct pci_bus *bus); 1435 1395 #else 1436 - static inline void pci_bus_assign_domain_nr(struct pci_bus *bus, 1437 - struct device *parent) 1438 - { 1439 - } 1396 + static inline int acpi_pci_bus_find_domain_nr(struct pci_bus *bus) 1397 + { return 0; } 1398 + #endif 1399 + int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent); 1440 1400 #endif 1441 1401 1442 1402 /* some architectures require additional setup to direct VGA traffic */ 1443 1403 typedef int (*arch_set_vga_state_t)(struct pci_dev *pdev, bool decode, 1444 1404 unsigned int command_bits, u32 flags); 1445 1405 void pci_register_set_vga_state(arch_set_vga_state_t func); 1406 + 1407 + static inline int 1408 + pci_request_io_regions(struct pci_dev *pdev, const char *name) 1409 + { 1410 + return pci_request_selected_regions(pdev, 1411 + pci_select_bars(pdev, IORESOURCE_IO), name); 1412 + } 1413 + 1414 + static inline void 1415 + pci_release_io_regions(struct pci_dev *pdev) 1416 + { 1417 + return pci_release_selected_regions(pdev, 1418 + pci_select_bars(pdev, IORESOURCE_IO)); 1419 + } 1420 + 1421 + static inline int 1422 + pci_request_mem_regions(struct pci_dev *pdev, const char *name) 1423 + { 1424 + return pci_request_selected_regions(pdev, 1425 + pci_select_bars(pdev, IORESOURCE_MEM), name); 1426 + } 1427 + 1428 + static inline void 1429 + pci_release_mem_regions(struct pci_dev *pdev) 1430 + { 1431 + return pci_release_selected_regions(pdev, 1432 + pci_select_bars(pdev, IORESOURCE_MEM)); 1433 + } 1446 1434 1447 1435 #else /* CONFIG_PCI is not enabled */ 1448 1436 ··· 1625 1555 /* Some archs don't want to expose struct resource to userland as-is 1626 1556 * in sysfs and /proc 1627 1557 */ 1628 - #ifndef HAVE_ARCH_PCI_RESOURCE_TO_USER 1558 + #ifdef HAVE_ARCH_PCI_RESOURCE_TO_USER 1559 + void pci_resource_to_user(const struct pci_dev *dev, int bar, 1560 + const struct resource *rsrc, 1561 + resource_size_t *start, resource_size_t *end); 1562 + #else 1629 1563 static inline void pci_resource_to_user(const struct pci_dev *dev, int bar, 1630 1564 const struct resource *rsrc, resource_size_t *start, 1631 1565 resource_size_t *end) ··· 1781 1707 1782 1708 extern unsigned long pci_hotplug_io_size; 1783 1709 extern unsigned long pci_hotplug_mem_size; 1710 + extern unsigned long pci_hotplug_bus_size; 1784 1711 1785 1712 /* Architecture-specific versions may override these (weak) */ 1786 1713 void pcibios_disable_device(struct pci_dev *dev); ··· 1798 1723 extern struct dev_pm_ops pcibios_pm_ops; 1799 1724 #endif 1800 1725 1801 - #ifdef CONFIG_PCI_MMCONFIG 1726 + #if defined(CONFIG_PCI_MMCONFIG) || defined(CONFIG_ACPI_MCFG) 1802 1727 void __init pci_mmcfg_early_init(void); 1803 1728 void __init pci_mmcfg_late_init(void); 1804 1729 #else
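The CONFIG_PCI_MSI=n stubs in the pci.h hunks above pin down the legacy-IRQ contract of the new pci_alloc_irq_vectors() API: at most one vector can be granted, and pci_irq_vector() only answers for vector 0. The same logic in userspace (toy struct; the WARN_ON_ONCE is dropped):

```c
#include <assert.h>

#define TOY_EINVAL 22   /* stand-in for the kernel's EINVAL */

struct toy_pdev {
    unsigned int irq;   /* the device's legacy INTx line */
};

/* Legacy INTx fallback: exactly one vector, or refuse. */
int toy_alloc_irq_vectors(struct toy_pdev *dev, unsigned int min_vecs,
                          unsigned int max_vecs, unsigned int flags)
{
    (void)dev; (void)max_vecs; (void)flags;
    if (min_vecs > 1)
        return -TOY_EINVAL;
    return 1;
}

/* Only vector 0 exists; it maps to the legacy IRQ line. */
int toy_irq_vector(const struct toy_pdev *dev, unsigned int nr)
{
    if (nr > 0)
        return -TOY_EINVAL;
    return dev->irq;
}
```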