Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'pci/msi' into next

* pci/msi:
PCI/MSI: Update MSI/MSI-X bits in PCIEBUS-HOWTO
PCI/MSI: Document pci_alloc_irq_vectors(), deprecate pci_enable_msi()
PCI/MSI: Return -ENOSPC if pci_enable_msi_range() can't get enough vectors
PCI/portdrv: Use pci_alloc_irq_vectors()
PCI/MSI: Check that we have a legacy interrupt line before using it
PCI/MSI: Remove pci_msi_domain_{alloc,free}_irqs()
PCI/MSI: Remove unused pci_msi_create_default_irq_domain()
PCI/MSI: Return failure when msix_setup_entries() fails
PCI/MSI: Remove pci_enable_msi_{exact,range}()
amd-xgbe: Update PCI support to use new IRQ functions
[media] cobalt: use pci_alloc_irq_vectors()
PCI/MSI: Fix msi_capability_init() kernel-doc warnings

+135 -377
+2 -4
Documentation/PCI/MSI-HOWTO.txt
···
 not be used in new code:
 
   pci_enable_msi()		/* deprecated */
-  pci_enable_msi_range()	/* deprecated */
-  pci_enable_msi_exact()	/* deprecated */
   pci_disable_msi()		/* deprecated */
   pci_enable_msix_range()	/* deprecated */
   pci_enable_msix_exact()	/* deprecated */
···
 to bridges between the PCI root and the device, MSIs are disabled.
 
 It is also worth checking the device driver to see whether it supports MSIs.
-For example, it may contain calls to pci_enable_msi_range() or
-pci_enable_msix_range().
+For example, it may contain calls to pci_alloc_irq_vectors() with the
+PCI_IRQ_MSI or PCI_IRQ_MSIX flags.
+6 -25
Documentation/PCI/PCIEBUS-HOWTO.txt
···
 allowed to run simultaneously, below lists a few of possible resource
 conflicts with proposed solutions.
 
-6.1 MSI Vector Resource
+6.1 MSI and MSI-X Vector Resource
 
-The MSI capability structure enables a device software driver to call
-pci_enable_msi to request MSI based interrupts. Once MSI interrupts
-are enabled on a device, it stays in this mode until a device driver
-calls pci_disable_msi to disable MSI interrupts and revert back to
-INTx emulation mode. Since service drivers of the same PCI-PCI Bridge
-port share the same physical device, if an individual service driver
-calls pci_enable_msi/pci_disable_msi it may result unpredictable
-behavior. For example, two service drivers run simultaneously on the
-same physical Root Port. Both service drivers call pci_enable_msi to
-request MSI based interrupts. A service driver may not know whether
-any other service drivers have run on this Root Port. If either one
-of them calls pci_disable_msi, it puts the other service driver
-in a wrong interrupt mode.
+Once MSI or MSI-X interrupts are enabled on a device, it stays in this
+mode until they are disabled again. Since service drivers of the same
+PCI-PCI Bridge port share the same physical device, if an individual
+service driver enables or disables MSI/MSI-X mode it may result in
+unpredictable behavior.
 
 To avoid this situation all service drivers are not permitted to
 switch interrupt mode on its device. The PCI Express Port Bus driver
···
 driver. Service drivers should use (struct pcie_device*)dev->irq to
 call request_irq/free_irq. In addition, the interrupt mode is stored
 in the field interrupt_mode of struct pcie_device.
-
-6.2 MSI-X Vector Resources
-
-Similar to the MSI a device driver for an MSI-X capable device can
-call pci_enable_msix to request MSI-X interrupts. All service drivers
-are not permitted to switch interrupt mode on its device. The PCI
-Express Port Bus driver is responsible for determining the interrupt
-mode and this should be transparent to service drivers. Any attempt
-by service driver to call pci_enable_msix/pci_disable_msix may
-result unpredictable behavior. Service drivers should use
-(struct pcie_device*)dev->irq and call request_irq/free_irq.
 
 6.3 PCI Memory/IO Mapped Regions
 
+11 -11
Documentation/PCI/pci.txt
···
 "vectors" get allocated. MSI requires contiguous blocks of vectors
 while MSI-X can allocate several individual ones.
 
-MSI capability can be enabled by calling pci_enable_msi() or
-pci_enable_msix() before calling request_irq(). This causes
-the PCI support to program CPU vector data into the PCI device
-capability registers.
+MSI capability can be enabled by calling pci_alloc_irq_vectors() with the
+PCI_IRQ_MSI and/or PCI_IRQ_MSIX flags before calling request_irq(). This
+causes the PCI support to program CPU vector data into the PCI device
+capability registers. Many architectures, chip-sets, or BIOSes do NOT
+support MSI or MSI-X and a call to pci_alloc_irq_vectors with just
+the PCI_IRQ_MSI and PCI_IRQ_MSIX flags will fail, so try to always
+specify PCI_IRQ_LEGACY as well.
 
-If your PCI device supports both, try to enable MSI-X first.
-Only one can be enabled at a time. Many architectures, chip-sets,
-or BIOSes do NOT support MSI or MSI-X and the call to pci_enable_msi/msix
-will fail. This is important to note since many drivers have
-two (or more) interrupt handlers: one for MSI/MSI-X and another for IRQs.
-They choose which handler to register with request_irq() based on the
-return value from pci_enable_msi/msix().
+Drivers that have different interrupt handlers for MSI/MSI-X and
+legacy INTx should choose the right one based on the msi_enabled
+and msix_enabled flags in the pci_dev structure after calling
+pci_alloc_irq_vectors.
 
 There are (at least) two really good reasons for using MSI:
 1) MSI is an exclusive interrupt vector by definition.
+1 -1
arch/x86/kernel/apic/msi.c
···
         if (domain == NULL)
                 return -ENOSYS;
 
-        return pci_msi_domain_alloc_irqs(domain, dev, nvec, type);
+        return msi_domain_alloc_irqs(domain, &dev->dev, nvec);
 }
 
 void native_teardown_msi_irq(unsigned int irq)
+2 -6
drivers/media/pci/cobalt/cobalt-driver.c
···
 static void cobalt_free_msi(struct cobalt *cobalt, struct pci_dev *pci_dev)
 {
         free_irq(pci_dev->irq, (void *)cobalt);
-
-        if (cobalt->msi_enabled)
-                pci_disable_msi(pci_dev);
+        pci_free_irq_vectors(pci_dev);
 }
 
 static int cobalt_setup_pci(struct cobalt *cobalt, struct pci_dev *pci_dev,
···
            from being generated. */
         cobalt_set_interrupt(cobalt, false);
 
-        if (pci_enable_msi_range(pci_dev, 1, 1) < 1) {
+        if (pci_alloc_irq_vectors(pci_dev, 1, 1, PCI_IRQ_MSI) < 1) {
                 cobalt_err("Could not enable MSI\n");
-                cobalt->msi_enabled = false;
                 ret = -EIO;
                 goto err_release;
         }
         msi_config_show(cobalt, pci_dev);
-        cobalt->msi_enabled = true;
 
         /* Register IRQ */
         if (request_irq(pci_dev->irq, cobalt_irq_handler, IRQF_SHARED,
-2
drivers/media/pci/cobalt/cobalt-driver.h
···
         u32 irq_none;
         u32 irq_full_fifo;
 
-        bool msi_enabled;
-
         /* omnitek dma */
         int dma_channels;
         int first_fifo_channel;
+38 -90
drivers/net/ethernet/amd/xgbe/xgbe-pci.c
···
 #include "xgbe.h"
 #include "xgbe-common.h"
 
-static int xgbe_config_msi(struct xgbe_prv_data *pdata)
+static int xgbe_config_multi_msi(struct xgbe_prv_data *pdata)
 {
-        unsigned int msi_count;
+        unsigned int vector_count;
         unsigned int i, j;
         int ret;
 
-        msi_count = XGBE_MSIX_BASE_COUNT;
-        msi_count += max(pdata->rx_ring_count,
-                         pdata->tx_ring_count);
-        msi_count = roundup_pow_of_two(msi_count);
+        vector_count = XGBE_MSI_BASE_COUNT;
+        vector_count += max(pdata->rx_ring_count,
+                            pdata->tx_ring_count);
 
-        ret = pci_enable_msi_exact(pdata->pcidev, msi_count);
+        ret = pci_alloc_irq_vectors(pdata->pcidev, XGBE_MSI_MIN_COUNT,
+                                    vector_count, PCI_IRQ_MSI | PCI_IRQ_MSIX);
         if (ret < 0) {
-                dev_info(pdata->dev, "MSI request for %u interrupts failed\n",
-                         msi_count);
-
-                ret = pci_enable_msi(pdata->pcidev);
-                if (ret < 0) {
-                        dev_info(pdata->dev, "MSI enablement failed\n");
-                        return ret;
-                }
-
-                msi_count = 1;
-        }
-
-        pdata->irq_count = msi_count;
-
-        pdata->dev_irq = pdata->pcidev->irq;
-
-        if (msi_count > 1) {
-                pdata->ecc_irq = pdata->pcidev->irq + 1;
-                pdata->i2c_irq = pdata->pcidev->irq + 2;
-                pdata->an_irq = pdata->pcidev->irq + 3;
-
-                for (i = XGBE_MSIX_BASE_COUNT, j = 0;
-                     (i < msi_count) && (j < XGBE_MAX_DMA_CHANNELS);
-                     i++, j++)
-                        pdata->channel_irq[j] = pdata->pcidev->irq + i;
-                pdata->channel_irq_count = j;
-
-                pdata->per_channel_irq = 1;
-                pdata->channel_irq_mode = XGBE_IRQ_MODE_LEVEL;
-        } else {
-                pdata->ecc_irq = pdata->pcidev->irq;
-                pdata->i2c_irq = pdata->pcidev->irq;
-                pdata->an_irq = pdata->pcidev->irq;
-        }
-
-        if (netif_msg_probe(pdata))
-                dev_dbg(pdata->dev, "MSI interrupts enabled\n");
-
-        return 0;
-}
-
-static int xgbe_config_msix(struct xgbe_prv_data *pdata)
-{
-        unsigned int msix_count;
-        unsigned int i, j;
-        int ret;
-
-        msix_count = XGBE_MSIX_BASE_COUNT;
-        msix_count += max(pdata->rx_ring_count,
-                          pdata->tx_ring_count);
-
-        pdata->msix_entries = devm_kcalloc(pdata->dev, msix_count,
-                                           sizeof(struct msix_entry),
-                                           GFP_KERNEL);
-        if (!pdata->msix_entries)
-                return -ENOMEM;
-
-        for (i = 0; i < msix_count; i++)
-                pdata->msix_entries[i].entry = i;
-
-        ret = pci_enable_msix_range(pdata->pcidev, pdata->msix_entries,
-                                    XGBE_MSIX_MIN_COUNT, msix_count);
-        if (ret < 0) {
-                dev_info(pdata->dev, "MSI-X enablement failed\n");
-                devm_kfree(pdata->dev, pdata->msix_entries);
-                pdata->msix_entries = NULL;
+                dev_info(pdata->dev, "multi MSI/MSI-X enablement failed\n");
                 return ret;
         }
 
         pdata->irq_count = ret;
 
-        pdata->dev_irq = pdata->msix_entries[0].vector;
-        pdata->ecc_irq = pdata->msix_entries[1].vector;
-        pdata->i2c_irq = pdata->msix_entries[2].vector;
-        pdata->an_irq = pdata->msix_entries[3].vector;
+        pdata->dev_irq = pci_irq_vector(pdata->pcidev, 0);
+        pdata->ecc_irq = pci_irq_vector(pdata->pcidev, 1);
+        pdata->i2c_irq = pci_irq_vector(pdata->pcidev, 2);
+        pdata->an_irq = pci_irq_vector(pdata->pcidev, 3);
 
-        for (i = XGBE_MSIX_BASE_COUNT, j = 0; i < ret; i++, j++)
-                pdata->channel_irq[j] = pdata->msix_entries[i].vector;
+        for (i = XGBE_MSI_BASE_COUNT, j = 0; i < ret; i++, j++)
+                pdata->channel_irq[j] = pci_irq_vector(pdata->pcidev, i);
         pdata->channel_irq_count = j;
 
         pdata->per_channel_irq = 1;
         pdata->channel_irq_mode = XGBE_IRQ_MODE_LEVEL;
 
         if (netif_msg_probe(pdata))
-                dev_dbg(pdata->dev, "MSI-X interrupts enabled\n");
+                dev_dbg(pdata->dev, "multi %s interrupts enabled\n",
+                        pdata->pcidev->msix_enabled ? "MSI-X" : "MSI");
 
         return 0;
 }
···
 {
         int ret;
 
-        ret = xgbe_config_msix(pdata);
+        ret = xgbe_config_multi_msi(pdata);
         if (!ret)
                 goto out;
 
-        ret = xgbe_config_msi(pdata);
-        if (!ret)
-                goto out;
+        ret = pci_alloc_irq_vectors(pdata->pcidev, 1, 1,
+                                    PCI_IRQ_LEGACY | PCI_IRQ_MSI);
+        if (ret < 0) {
+                dev_info(pdata->dev, "single IRQ enablement failed\n");
+                return ret;
+        }
 
         pdata->irq_count = 1;
-        pdata->irq_shared = 1;
+        pdata->channel_irq_count = 1;
 
-        pdata->dev_irq = pdata->pcidev->irq;
-        pdata->ecc_irq = pdata->pcidev->irq;
-        pdata->i2c_irq = pdata->pcidev->irq;
-        pdata->an_irq = pdata->pcidev->irq;
+        pdata->dev_irq = pci_irq_vector(pdata->pcidev, 0);
+        pdata->ecc_irq = pci_irq_vector(pdata->pcidev, 0);
+        pdata->i2c_irq = pci_irq_vector(pdata->pcidev, 0);
+        pdata->an_irq = pci_irq_vector(pdata->pcidev, 0);
+
+        if (netif_msg_probe(pdata))
+                dev_dbg(pdata->dev, "single %s interrupt enabled\n",
+                        pdata->pcidev->msi_enabled ? "MSI" : "legacy");
 
 out:
         if (netif_msg_probe(pdata)) {
···
         /* Configure the netdev resource */
         ret = xgbe_config_netdev(pdata);
         if (ret)
-                goto err_pci_enable;
+                goto err_irq_vectors;
 
         netdev_notice(pdata->netdev, "net device enabled\n");
 
         return 0;
+
+err_irq_vectors:
+        pci_free_irq_vectors(pdata->pcidev);
 
 err_pci_enable:
         xgbe_free_pdata(pdata);
···
         struct xgbe_prv_data *pdata = pci_get_drvdata(pdev);
 
         xgbe_deconfig_netdev(pdata);
+
+        pci_free_irq_vectors(pdata->pcidev);
 
         xgbe_free_pdata(pdata);
 }
+3 -5
drivers/net/ethernet/amd/xgbe/xgbe.h
···
 #define XGBE_MAC_PROP_OFFSET	0x1d000
 #define XGBE_I2C_CTRL_OFFSET	0x1e000
 
-/* PCI MSIx support */
-#define XGBE_MSIX_BASE_COUNT	4
-#define XGBE_MSIX_MIN_COUNT	(XGBE_MSIX_BASE_COUNT + 1)
+/* PCI MSI/MSIx support */
+#define XGBE_MSI_BASE_COUNT	4
+#define XGBE_MSI_MIN_COUNT	(XGBE_MSI_BASE_COUNT + 1)
 
 /* PCI clock frequencies */
 #define XGBE_V2_DMA_CLOCK_FREQ	500000000	/* 500 MHz */
···
         unsigned int desc_ded_count;
         unsigned int desc_sec_count;
 
-        struct msix_entry *msix_entries;
         int dev_irq;
         int ecc_irq;
         int i2c_irq;
         int channel_irq[XGBE_MAX_DMA_CHANNELS];
 
         unsigned int per_channel_irq;
-        unsigned int irq_shared;
         unsigned int irq_count;
         unsigned int channel_irq_count;
         unsigned int channel_irq_mode;
+21 -99
drivers/pci/msi.c
···
 #define msix_table_size(flags)	((flags & PCI_MSIX_FLAGS_QSIZE) + 1)
 
 #ifdef CONFIG_PCI_MSI_IRQ_DOMAIN
-static struct irq_domain *pci_msi_default_domain;
-static DEFINE_MUTEX(pci_msi_domain_lock);
-
-struct irq_domain * __weak arch_get_pci_msi_domain(struct pci_dev *dev)
-{
-        return pci_msi_default_domain;
-}
-
-static struct irq_domain *pci_msi_get_domain(struct pci_dev *dev)
-{
-        struct irq_domain *domain;
-
-        domain = dev_get_msi_domain(&dev->dev);
-        if (domain)
-                return domain;
-
-        return arch_get_pci_msi_domain(dev);
-}
-
 static int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 {
         struct irq_domain *domain;
 
-        domain = pci_msi_get_domain(dev);
+        domain = dev_get_msi_domain(&dev->dev);
         if (domain && irq_domain_is_hierarchy(domain))
-                return pci_msi_domain_alloc_irqs(domain, dev, nvec, type);
+                return msi_domain_alloc_irqs(domain, &dev->dev, nvec);
 
         return arch_setup_msi_irqs(dev, nvec, type);
 }
···
 {
         struct irq_domain *domain;
 
-        domain = pci_msi_get_domain(dev);
+        domain = dev_get_msi_domain(&dev->dev);
         if (domain && irq_domain_is_hierarchy(domain))
-                pci_msi_domain_free_irqs(domain, dev);
+                msi_domain_free_irqs(domain, &dev->dev);
         else
                 arch_teardown_msi_irqs(dev);
 }
···
  * msi_capability_init - configure device's MSI capability structure
  * @dev: pointer to the pci_dev data structure of MSI device function
  * @nvec: number of interrupts to allocate
- * @affinity: flag to indicate cpu irq affinity mask should be set
+ * @affd: description of automatic irq affinity assignments (may be %NULL)
  *
  * Setup the MSI capability structure of the device with the requested
  * number of interrupts.  A return value of zero indicates the successful
···
         ret = 0;
 out:
         kfree(masks);
-        return 0;
+        return ret;
 }
 
 static void msix_program_entries(struct pci_dev *dev,
···
         if (nvec < 0)
                 return nvec;
         if (nvec < minvec)
-                return -EINVAL;
+                return -ENOSPC;
 
         if (nvec > maxvec)
                 nvec = maxvec;
···
         }
 }
 
-/**
- * pci_enable_msi_range - configure device's MSI capability structure
- * @dev: device to configure
- * @minvec: minimal number of interrupts to configure
- * @maxvec: maximum number of interrupts to configure
- *
- * This function tries to allocate a maximum possible number of interrupts in a
- * range between @minvec and @maxvec. It returns a negative errno if an error
- * occurs. If it succeeds, it returns the actual number of interrupts allocated
- * and updates the @dev's irq member to the lowest new interrupt number;
- * the other interrupt numbers allocated to this device are consecutive.
- **/
-int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
+/* deprecated, don't use */
+int pci_enable_msi(struct pci_dev *dev)
 {
-        return __pci_enable_msi_range(dev, minvec, maxvec, NULL);
+        int rc = __pci_enable_msi_range(dev, 1, 1, NULL);
+        if (rc < 0)
+                return rc;
+        return 0;
 }
-EXPORT_SYMBOL(pci_enable_msi_range);
+EXPORT_SYMBOL(pci_enable_msi);
 
 static int __pci_enable_msix_range(struct pci_dev *dev,
                                    struct msix_entry *entries, int minvec,
···
         }
 
         /* use legacy irq if allowed */
-        if ((flags & PCI_IRQ_LEGACY) && min_vecs == 1) {
-                pci_intx(dev, 1);
-                return 1;
+        if (flags & PCI_IRQ_LEGACY) {
+                if (min_vecs == 1 && dev->irq) {
+                        pci_intx(dev, 1);
+                        return 1;
+                }
         }
 
         return vecs;
···
 {
         struct msi_desc *desc = first_pci_msi_entry(to_pci_dev(dev));
 
-        /* Special handling to support pci_enable_msi_range() */
+        /* Special handling to support __pci_enable_msi_range() */
         if (pci_msi_desc_is_multi_msi(desc) &&
             !(info->flags & MSI_FLAG_MULTI_PCI_MSI))
                 return 1;
···
 static int pci_msi_domain_handle_error(struct irq_domain *domain,
                                        struct msi_desc *desc, int error)
 {
-        /* Special handling to support pci_enable_msi_range() */
+        /* Special handling to support __pci_enable_msi_range() */
         if (pci_msi_desc_is_multi_msi(desc) && error == -ENOSPC)
                 return 1;
···
         return domain;
 }
 EXPORT_SYMBOL_GPL(pci_msi_create_irq_domain);
-
-/**
- * pci_msi_domain_alloc_irqs - Allocate interrupts for @dev in @domain
- * @domain:	The interrupt domain to allocate from
- * @dev:	The device for which to allocate
- * @nvec:	The number of interrupts to allocate
- * @type:	Unused to allow simpler migration from the arch_XXX interfaces
- *
- * Returns:
- * A virtual interrupt number or an error code in case of failure
- */
-int pci_msi_domain_alloc_irqs(struct irq_domain *domain, struct pci_dev *dev,
-                              int nvec, int type)
-{
-        return msi_domain_alloc_irqs(domain, &dev->dev, nvec);
-}
-
-/**
- * pci_msi_domain_free_irqs - Free interrupts for @dev in @domain
- * @domain:	The interrupt domain
- * @dev:	The device for which to free interrupts
- */
-void pci_msi_domain_free_irqs(struct irq_domain *domain, struct pci_dev *dev)
-{
-        msi_domain_free_irqs(domain, &dev->dev);
-}
-
-/**
- * pci_msi_create_default_irq_domain - Create a default MSI interrupt domain
- * @fwnode:	Optional fwnode of the interrupt controller
- * @info:	MSI domain info
- * @parent:	Parent irq domain
- *
- * Returns: A domain pointer or NULL in case of failure. If successful
- * the default PCI/MSI irqdomain pointer is updated.
- */
-struct irq_domain *pci_msi_create_default_irq_domain(struct fwnode_handle *fwnode,
-                struct msi_domain_info *info, struct irq_domain *parent)
-{
-        struct irq_domain *domain;
-
-        mutex_lock(&pci_msi_domain_lock);
-        if (pci_msi_default_domain) {
-                pr_err("PCI: default irq domain for PCI MSI has already been created.\n");
-                domain = NULL;
-        } else {
-                domain = pci_msi_create_irq_domain(fwnode, info, parent);
-                pci_msi_default_domain = domain;
-        }
-        mutex_unlock(&pci_msi_domain_lock);
-
-        return domain;
-}
 
 static int get_msi_id_cb(struct pci_dev *pdev, u16 alias, void *data)
 {
+49 -114
drivers/pci/pcie/portdrv_core.c
···
 }
 
 /**
- * pcie_port_msix_add_entry - add entry to given array of MSI-X entries
- * @entries: Array of MSI-X entries
- * @new_entry: Index of the entry to add to the array
- * @nr_entries: Number of entries already in the array
- *
- * Return value: Position of the added entry in the array
- */
-static int pcie_port_msix_add_entry(
-        struct msix_entry *entries, int new_entry, int nr_entries)
-{
-        int j;
-
-        for (j = 0; j < nr_entries; j++)
-                if (entries[j].entry == new_entry)
-                        return j;
-
-        entries[j].entry = new_entry;
-        return j;
-}
-
-/**
  * pcie_port_enable_msix - try to set up MSI-X as interrupt mode for given port
  * @dev: PCI Express port to handle
- * @vectors: Array of interrupt vectors to populate
+ * @irqs: Array of interrupt vectors to populate
  * @mask: Bitmask of port capabilities returned by get_port_device_capability()
  *
  * Return value: 0 on success, error code on failure
  */
-static int pcie_port_enable_msix(struct pci_dev *dev, int *vectors, int mask)
+static int pcie_port_enable_msix(struct pci_dev *dev, int *irqs, int mask)
 {
-        struct msix_entry *msix_entries;
-        int idx[PCIE_PORT_DEVICE_MAXSERVICES];
-        int nr_entries, status, pos, i, nvec;
-        u16 reg16;
-        u32 reg32;
-
-        nr_entries = pci_msix_vec_count(dev);
-        if (nr_entries < 0)
-                return nr_entries;
-        BUG_ON(!nr_entries);
-        if (nr_entries > PCIE_PORT_MAX_MSIX_ENTRIES)
-                nr_entries = PCIE_PORT_MAX_MSIX_ENTRIES;
-
-        msix_entries = kzalloc(sizeof(*msix_entries) * nr_entries, GFP_KERNEL);
-        if (!msix_entries)
-                return -ENOMEM;
+        int nr_entries, entry, nvec = 0;
 
         /*
          * Allocate as many entries as the port wants, so that we can check
···
          * equal to the number of entries this port actually uses, we'll happily
          * go through without any tricks.
          */
-        for (i = 0; i < nr_entries; i++)
-                msix_entries[i].entry = i;
-
-        status = pci_enable_msix_exact(dev, msix_entries, nr_entries);
-        if (status)
-                goto Exit;
-
-        for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++)
-                idx[i] = -1;
-        status = -EIO;
-        nvec = 0;
+        nr_entries = pci_alloc_irq_vectors(dev, 1, PCIE_PORT_MAX_MSIX_ENTRIES,
+                        PCI_IRQ_MSIX);
+        if (nr_entries < 0)
+                return nr_entries;
 
         if (mask & (PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP)) {
-                int entry;
+                u16 reg16;
 
                 /*
                  * The code below follows the PCI Express Base Specification 2.0
···
                 pcie_capability_read_word(dev, PCI_EXP_FLAGS, &reg16);
                 entry = (reg16 & PCI_EXP_FLAGS_IRQ) >> 9;
                 if (entry >= nr_entries)
-                        goto Error;
+                        goto out_free_irqs;
 
-                i = pcie_port_msix_add_entry(msix_entries, entry, nvec);
-                if (i == nvec)
-                        nvec++;
+                irqs[PCIE_PORT_SERVICE_PME_SHIFT] = pci_irq_vector(dev, entry);
+                irqs[PCIE_PORT_SERVICE_HP_SHIFT] = pci_irq_vector(dev, entry);
 
-                idx[PCIE_PORT_SERVICE_PME_SHIFT] = i;
-                idx[PCIE_PORT_SERVICE_HP_SHIFT] = i;
+                nvec = max(nvec, entry + 1);
         }
 
         if (mask & PCIE_PORT_SERVICE_AER) {
-                int entry;
+                u32 reg32, pos;
 
                 /*
                  * The code below follows Section 7.10.10 of the PCI Express
···
                 pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &reg32);
                 entry = reg32 >> 27;
                 if (entry >= nr_entries)
-                        goto Error;
+                        goto out_free_irqs;
 
-                i = pcie_port_msix_add_entry(msix_entries, entry, nvec);
-                if (i == nvec)
-                        nvec++;
+                irqs[PCIE_PORT_SERVICE_AER_SHIFT] = pci_irq_vector(dev, entry);
 
-                idx[PCIE_PORT_SERVICE_AER_SHIFT] = i;
+                nvec = max(nvec, entry + 1);
         }
 
         /*
···
          * what we have.  Otherwise, the port has some extra entries not for the
          * services we know and we need to work around that.
          */
-        if (nvec == nr_entries) {
-                status = 0;
-        } else {
+        if (nvec != nr_entries) {
                 /* Drop the temporary MSI-X setup */
-                pci_disable_msix(dev);
+                pci_free_irq_vectors(dev);
 
                 /* Now allocate the MSI-X vectors for real */
-                status = pci_enable_msix_exact(dev, msix_entries, nvec);
-                if (status)
-                        goto Exit;
+                nr_entries = pci_alloc_irq_vectors(dev, nvec, nvec,
+                                PCI_IRQ_MSIX);
+                if (nr_entries < 0)
+                        return nr_entries;
         }
 
-        for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++)
-                vectors[i] = idx[i] >= 0 ? msix_entries[idx[i]].vector : -1;
+        return 0;
 
-Exit:
-        kfree(msix_entries);
-        return status;
-
-Error:
-        pci_disable_msix(dev);
-        goto Exit;
+out_free_irqs:
+        pci_free_irq_vectors(dev);
+        return -EIO;
 }
 
 /**
- * init_service_irqs - initialize irqs for PCI Express port services
+ * pcie_init_service_irqs - initialize irqs for PCI Express port services
  * @dev: PCI Express port to handle
  * @irqs: Array of irqs to populate
  * @mask: Bitmask of port capabilities returned by get_port_device_capability()
  *
  * Return value: Interrupt mode associated with the port
  */
-static int init_service_irqs(struct pci_dev *dev, int *irqs, int mask)
+static int pcie_init_service_irqs(struct pci_dev *dev, int *irqs, int mask)
 {
-        int i, irq = -1;
+        unsigned flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI;
+        int ret, i;
+
+        for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++)
+                irqs[i] = -1;
 
         /*
          * If MSI cannot be used for PCIe PME or hotplug, we have to use
···
          */
         if (((mask & PCIE_PORT_SERVICE_PME) && pcie_pme_no_msi()) ||
             ((mask & PCIE_PORT_SERVICE_HP) && pciehp_no_msi())) {
-                if (dev->irq)
-                        irq = dev->irq;
-                goto no_msi;
+                flags &= ~PCI_IRQ_MSI;
+        } else {
+                /* Try to use MSI-X if supported */
+                if (!pcie_port_enable_msix(dev, irqs, mask))
+                        return 0;
         }
 
-        /* Try to use MSI-X if supported */
-        if (!pcie_port_enable_msix(dev, irqs, mask))
-                return 0;
-
-        /*
-         * We're not going to use MSI-X, so try MSI and fall back to INTx.
-         * If neither MSI/MSI-X nor INTx available, try other interrupt.  On
-         * some platforms, root port doesn't support MSI/MSI-X/INTx in RC mode.
-         */
-        if (!pci_enable_msi(dev) || dev->irq)
-                irq = dev->irq;
-
-no_msi:
-        for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++)
-                irqs[i] = irq;
-        irqs[PCIE_PORT_SERVICE_VC_SHIFT] = -1;
-
-        if (irq < 0)
+        ret = pci_alloc_irq_vectors(dev, 1, 1, flags);
+        if (ret < 0)
                 return -ENODEV;
-        return 0;
-}
 
-static void cleanup_service_irqs(struct pci_dev *dev)
-{
-        if (dev->msix_enabled)
-                pci_disable_msix(dev);
-        else if (dev->msi_enabled)
-                pci_disable_msi(dev);
+        for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++) {
+                if (i != PCIE_PORT_SERVICE_VC_SHIFT)
+                        irqs[i] = pci_irq_vector(dev, 0);
+        }
+
+        return 0;
 }
 
 /**
···
          * that can be used in the absence of irqs.  Allow them to determine
          * if that is to be used.
          */
-        status = init_service_irqs(dev, irqs, capabilities);
+        status = pcie_init_service_irqs(dev, irqs, capabilities);
         if (status) {
                 capabilities &= PCIE_PORT_SERVICE_VC | PCIE_PORT_SERVICE_HP;
                 if (!capabilities)
···
         return 0;
 
 error_cleanup_irqs:
-        cleanup_service_irqs(dev);
+        pci_free_irq_vectors(dev);
 error_disable:
         pci_disable_device(dev);
         return status;
···
 void pcie_port_device_remove(struct pci_dev *dev)
 {
         device_for_each_child(&dev->dev, NULL, remove_iter);
-        cleanup_service_irqs(dev);
+        pci_free_irq_vectors(dev);
         pci_disable_device(dev);
 }
-6
include/linux/msi.h
···
 struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
                                              struct msi_domain_info *info,
                                              struct irq_domain *parent);
-int pci_msi_domain_alloc_irqs(struct irq_domain *domain, struct pci_dev *dev,
-                              int nvec, int type);
-void pci_msi_domain_free_irqs(struct irq_domain *domain, struct pci_dev *dev);
-struct irq_domain *pci_msi_create_default_irq_domain(struct fwnode_handle *fwnode,
-                struct msi_domain_info *info, struct irq_domain *parent);
-
 irq_hw_number_t pci_msi_domain_calc_hwirq(struct pci_dev *dev,
                                           struct msi_desc *desc);
 int pci_msi_domain_check_cap(struct irq_domain *domain,
+2 -14
include/linux/pci.h
···
 void pci_disable_msix(struct pci_dev *dev);
 void pci_restore_msi_state(struct pci_dev *dev);
 int pci_msi_enabled(void);
-int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec);
-static inline int pci_enable_msi_exact(struct pci_dev *dev, int nvec)
-{
-        int rc = pci_enable_msi_range(dev, nvec, nvec);
-        if (rc < 0)
-                return rc;
-        return 0;
-}
+int pci_enable_msi(struct pci_dev *dev);
 int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
                           int minvec, int maxvec);
 static inline int pci_enable_msix_exact(struct pci_dev *dev,
···
 static inline void pci_disable_msix(struct pci_dev *dev) { }
 static inline void pci_restore_msi_state(struct pci_dev *dev) { }
 static inline int pci_msi_enabled(void) { return 0; }
-static inline int pci_enable_msi_range(struct pci_dev *dev, int minvec,
-                                       int maxvec)
-{ return -ENOSYS; }
-static inline int pci_enable_msi_exact(struct pci_dev *dev, int nvec)
+static inline int pci_enable_msi(struct pci_dev *dev)
 { return -ENOSYS; }
 static inline int pci_enable_msix_range(struct pci_dev *dev,
                 struct msix_entry *entries, int minvec, int maxvec)
···
 static inline void pcie_set_ecrc_checking(struct pci_dev *dev) { }
 static inline void pcie_ecrc_get_policy(char *str) { }
 #endif
-
-#define pci_enable_msi(pdev)	pci_enable_msi_exact(pdev, 1)
 
 #ifdef CONFIG_HT_IRQ
 /* The functions a driver should call */