Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
"This update includes the usual round of major driver updates
(hisi_sas, ufs, fnic, cxlflash, be2iscsi, ipr, stex). There's also the
usual amount of cosmetic and spelling stuff"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (155 commits)
scsi: qla4xxx: fix spelling mistake: "Tempalate" -> "Template"
scsi: stex: make S6flag static
scsi: mac_esp: fix to pass correct device identity to free_irq()
scsi: aacraid: pci_alloc_consistent() failures on ARM64
scsi: ufs: make ufshcd_get_lists_status() register operation obvious
scsi: ufs: use MASK_EE_STATUS
scsi: mac_esp: Replace bogus memory barrier with spinlock
scsi: fcoe: make fcoe_e_d_tov and fcoe_r_a_tov static
scsi: sd_zbc: Do not write lock zones for reset
scsi: sd_zbc: Remove superfluous assignments
scsi: sd: sd_zbc: Rename sd_zbc_setup_write_cmnd
scsi: Improve scsi_get_sense_info_fld
scsi: sd: Cleanup sd_done sense data handling
scsi: sd: Improve sd_completed_bytes
scsi: sd: Fix function descriptions
scsi: mpt3sas: remove redundant wmb
scsi: mpt: Move scsi_remove_host() out of mptscsih_remove_host()
scsi: sg: reset 'res_in_use' after unlinking reserved array
scsi: mvumi: remove code handling zero scsi_sg_count(scmd) case
scsi: fusion: fix spelling mistake: "Persistancy" -> "Persistency"
...

+4320 -2769
+5
Documentation/powerpc/cxlflash.txt
···
 resource handle that is provided is already referencing provisioned
 storage. This is reflected by the last LBA being a non-zero value.
 
+When a LUN is accessible from more than one port, this ioctl will
+return with the DK_CXLFLASH_ALL_PORTS_ACTIVE return flag set. This
+provides the user with a hint that I/O can be retried in the event
+of an I/O error as the LUN can be reached over multiple paths.
+
 DK_CXLFLASH_VLUN_RESIZE
 -----------------------
 This ioctl is responsible for resizing a previously created virtual
+12 -18
Documentation/scsi/scsi_eh.txt
···
     scmd is requeued to blk queue.
 
  - otherwise
-	scsi_eh_scmd_add(scmd, 0) is invoked for the command. See
+	scsi_eh_scmd_add(scmd) is invoked for the command. See
 	[1-3] for details of this function.
 
···
     eh_timed_out() callback did not handle the command.
     Step #2 is taken.
 
- 2. If the host supports asynchronous completion (as indicated by the
-    no_async_abort setting in the host template) scsi_abort_command()
-    is invoked to schedule an asynchrous abort. If that fails
-    Step #3 is taken.
+ 2. scsi_abort_command() is invoked to schedule an asynchrous abort.
+    Asynchronous abort are not invoked for commands which the
+    SCSI_EH_ABORT_SCHEDULED flag is set (this indicates that the command
+    already had been aborted once, and this is a retry which failed),
+    or when the EH deadline is expired. In these case Step #3 is taken.
 
- 2. scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD) is invoked for the
-    command. See [1-3] for more information.
+ 3. scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD) is invoked for the
+    command. See [1-4] for more information.
 
 [1-3] Asynchronous command aborts
···
 
 scmds enter EH via scsi_eh_scmd_add(), which does the following.
 
- 1. Turns on scmd->eh_eflags as requested. It's 0 for error
-    completions and SCSI_EH_CANCEL_CMD for timeouts.
+ 1. Links scmd->eh_entry to shost->eh_cmd_q
 
- 2. Links scmd->eh_entry to shost->eh_cmd_q
+ 2. Sets SHOST_RECOVERY bit in shost->shost_state
 
- 3. Sets SHOST_RECOVERY bit in shost->shost_state
+ 3. Increments shost->host_failed
 
- 4. Increments shost->host_failed
-
- 5. Wakes up SCSI EH thread if shost->host_busy == shost->host_failed
+ 4. Wakes up SCSI EH thread if shost->host_busy == shost->host_failed
 
 As can be seen above, once any scmd is added to shost->eh_cmd_q,
 SHOST_RECOVERY shost_state bit is turned on. This prevents any new
···
 
 1. Error completion / time out
     ACTION: scsi_eh_scmd_add() is invoked for scmd
-	- set scmd->eh_eflags
 	- add scmd to shost->eh_cmd_q
 	- set SHOST_RECOVERY
 	- shost->host_failed++
···
 
 3. scmd recovered
     ACTION: scsi_eh_finish_cmd() is invoked to EH-finish scmd
-	- clear scmd->eh_eflags
 	- scsi_setup_cmd_retry()
 	- move from local eh_work_q to local eh_done_q
     LOCKING: none
···
 The following conditions must be true on exit from the handler.
 
 - shost->host_failed is zero.
-
- - Each scmd's eh_eflags field is cleared.
 
 - Each scmd is in such a state that scsi_setup_cmd_retry() on the
   scmd doesn't make any difference.
-1
MAINTAINERS
···
 PMC SIERRA PM8001 DRIVER
 M:	Jack Wang <jinpu.wang@profitbricks.com>
 M:	lindar_liu@usish.com
-L:	pmchba@pmcs.com
 L:	linux-scsi@vger.kernel.org
 S:	Supported
 F:	drivers/scsi/pm8001/
+1 -1
drivers/message/fusion/mptbase.c
···
 		break;
 	case MPI_EVENT_SAS_DEV_STAT_RC_NO_PERSIST_ADDED:
 		snprintf(evStr, EVENT_DESCR_STR_SZ,
-		    "SAS Device Status Change: No Persistancy: "
+		    "SAS Device Status Change: No Persistency: "
 		    "id=%d channel=%d", id, channel);
 		break;
 	case MPI_EVENT_SAS_DEV_STAT_RC_UNSUPPORTED:
+6 -1
drivers/message/fusion/mptfc.c
···
 		WQ_MEM_RECLAIM);
 	if (!ioc->fc_rescan_work_q) {
 		error = -ENOMEM;
-		goto out_mptfc_probe;
+		goto out_mptfc_host;
 	}
 
 	/*
···
 	flush_workqueue(ioc->fc_rescan_work_q);
 
 	return 0;
+
+out_mptfc_host:
+	scsi_remove_host(sh);
 
 out_mptfc_probe:
 
···
 			ioc->fc_data.fc_port_page1[ii].data = NULL;
 		}
 	}
+
+	scsi_remove_host(ioc->sh);
 
 	mptscsih_remove(pdev);
 }
-2
drivers/message/fusion/mptscsih.c
···
 	MPT_SCSI_HOST *hd;
 	int sz1;
 
-	scsi_remove_host(host);
-
 	if((hd = shost_priv(host)) == NULL)
 		return;
+9 -1
drivers/message/fusion/mptspi.c
···
 	return error;
 }
 
+static void mptspi_remove(struct pci_dev *pdev)
+{
+	MPT_ADAPTER *ioc = pci_get_drvdata(pdev);
+
+	scsi_remove_host(ioc->sh);
+	mptscsih_remove(pdev);
+}
+
 static struct pci_driver mptspi_driver = {
 	.name		= "mptspi",
 	.id_table	= mptspi_pci_table,
 	.probe		= mptspi_probe,
-	.remove		= mptscsih_remove,
+	.remove		= mptspi_remove,
 	.shutdown	= mptscsih_shutdown,
 #ifdef CONFIG_PM
 	.suspend	= mptscsih_suspend,
+6 -1
drivers/misc/enclosure.c
···
 	for (i = 0; i < components; i++) {
 		edev->component[i].number = -1;
 		edev->component[i].slot = -1;
-		edev->component[i].power_status = 1;
+		edev->component[i].power_status = -1;
 	}
 
 	mutex_lock(&container_list_lock);
···
 
 	if (edev->cb->get_power_status)
 		edev->cb->get_power_status(edev, ecomp);
+
+	/* If still uninitialized, the callback failed or does not exist. */
+	if (ecomp->power_status == -1)
+		return (edev->cb->get_power_status) ? -EIO : -ENOTTY;
+
 	return snprintf(buf, 40, "%s\n", ecomp->power_status ? "on" : "off");
 }
+11 -3
drivers/scsi/BusLogic.c
···
 
 	spin_lock_irq(SCpnt->device->host->host_lock);
 
-	blogic_inc_count(&stats->adatper_reset_req);
+	blogic_inc_count(&stats->adapter_reset_req);
 
 	rc = blogic_resetadapter(adapter, false);
 	spin_unlock_irq(SCpnt->device->host->host_lock);
···
 		struct blogic_tgt_flags *tgt_flags = &adapter->tgt_flags[tgt];
 		if (!tgt_flags->tgt_exists)
 			continue;
-		seq_printf(m, "\
-  %2d %5d %5d %5d %5d %5d %5d %5d %5d %5d\n", tgt, tgt_stats[tgt].aborts_request, tgt_stats[tgt].aborts_tried, tgt_stats[tgt].aborts_done, tgt_stats[tgt].bdr_request, tgt_stats[tgt].bdr_tried, tgt_stats[tgt].bdr_done, tgt_stats[tgt].adatper_reset_req, tgt_stats[tgt].adapter_reset_attempt, tgt_stats[tgt].adapter_reset_done);
+		seq_printf(m, "  %2d %5d %5d %5d %5d %5d %5d %5d %5d %5d\n",
+			   tgt, tgt_stats[tgt].aborts_request,
+			   tgt_stats[tgt].aborts_tried,
+			   tgt_stats[tgt].aborts_done,
+			   tgt_stats[tgt].bdr_request,
+			   tgt_stats[tgt].bdr_tried,
+			   tgt_stats[tgt].bdr_done,
+			   tgt_stats[tgt].adapter_reset_req,
+			   tgt_stats[tgt].adapter_reset_attempt,
+			   tgt_stats[tgt].adapter_reset_done);
 	}
 	seq_printf(m, "\nExternal Host Adapter Resets: %d\n", adapter->ext_resets);
 	seq_printf(m, "Host Adapter Internal Errors: %d\n", adapter->adapter_intern_errors);
+1 -1
drivers/scsi/BusLogic.h
···
 	unsigned short bdr_request;
 	unsigned short bdr_tried;
 	unsigned short bdr_done;
-	unsigned short adatper_reset_req;
+	unsigned short adapter_reset_req;
 	unsigned short adapter_reset_attempt;
 	unsigned short adapter_reset_done;
 };
+6 -7
drivers/scsi/aacraid/aachba.c
···
 		sizeof(struct sgentry) + sizeof(struct sgentry64);
 	datasize = sizeof(struct aac_ciss_identify_pd);
 
-	identify_resp = pci_alloc_consistent(dev->pdev, datasize, &addr);
-
+	identify_resp = dma_alloc_coherent(&dev->pdev->dev, datasize, &addr,
+					   GFP_KERNEL);
 	if (!identify_resp)
 		goto fib_free_ptr;
 
···
 	dev->hba_map[bus][target].qd_limit =
 		identify_resp->current_queue_depth_limit;
 
-	pci_free_consistent(dev->pdev, datasize, (void *)identify_resp, addr);
+	dma_free_coherent(&dev->pdev->dev, datasize, identify_resp, addr);
 
 	aac_fib_complete(fibptr);
 
···
 	datasize = sizeof(struct aac_ciss_phys_luns_resp)
 		+ (AAC_MAX_TARGETS - 1) * sizeof(struct _ciss_lun);
 
-	phys_luns = (struct aac_ciss_phys_luns_resp *) pci_alloc_consistent(
-		dev->pdev, datasize, &addr);
-
+	phys_luns = dma_alloc_coherent(&dev->pdev->dev, datasize, &addr,
+				       GFP_KERNEL);
 	if (phys_luns == NULL) {
 		rcode = -ENOMEM;
 		goto err_out;
···
 		aac_update_hba_map(dev, phys_luns, rescan);
 	}
 
-	pci_free_consistent(dev->pdev, datasize, (void *) phys_luns, addr);
+	dma_free_coherent(&dev->pdev->dev, datasize, phys_luns, addr);
 err_out:
 	return rcode;
 }
+4 -2
drivers/scsi/aacraid/commctrl.c
···
 		goto cleanup;
 	}
 
-	kfib = pci_alloc_consistent(dev->pdev, size, &daddr);
+	kfib = dma_alloc_coherent(&dev->pdev->dev, size, &daddr,
+				  GFP_KERNEL);
 	if (!kfib) {
 		retval = -ENOMEM;
 		goto cleanup;
···
 		retval = -EFAULT;
 cleanup:
 	if (hw_fib) {
-		pci_free_consistent(dev->pdev, size, kfib, fibptr->hw_fib_pa);
+		dma_free_coherent(&dev->pdev->dev, size, kfib,
+				  fibptr->hw_fib_pa);
 		fibptr->hw_fib_pa = hw_fib_pa;
 		fibptr->hw_fib_va = hw_fib;
 	}
+1 -2
drivers/scsi/aacraid/comminit.c
···
 	size = fibsize + aac_init_size + commsize + commalign +
 		printfbufsiz + host_rrq_size;
 
-	base = pci_alloc_consistent(dev->pdev, size, &phys);
-
+	base = dma_alloc_coherent(&dev->pdev->dev, size, &phys, GFP_KERNEL);
 	if (base == NULL) {
 		printk(KERN_ERR "aacraid: unable to create mapping.\n");
 		return 0;
+11 -9
drivers/scsi/aacraid/commsup.c
···
 	}
 
 	dprintk((KERN_INFO
-	"allocate hardware fibs pci_alloc_consistent(%p, %d * (%d + %d), %p)\n",
-	dev->pdev, dev->max_cmd_size, dev->scsi_host_ptr->can_queue,
+	"allocate hardware fibs dma_alloc_coherent(%p, %d * (%d + %d), %p)\n",
+	&dev->pdev->dev, dev->max_cmd_size, dev->scsi_host_ptr->can_queue,
 	AAC_NUM_MGT_FIB, &dev->hw_fib_pa));
-	dev->hw_fib_va = pci_alloc_consistent(dev->pdev,
+	dev->hw_fib_va = dma_alloc_coherent(&dev->pdev->dev,
 		(dev->max_cmd_size + sizeof(struct aac_fib_xporthdr))
 		* (dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB) + (ALIGN32 - 1),
-		&dev->hw_fib_pa);
+		&dev->hw_fib_pa, GFP_KERNEL);
 	if (dev->hw_fib_va == NULL)
 		return -ENOMEM;
 	return 0;
···
 	fib_size = dev->max_fib_size + sizeof(struct aac_fib_xporthdr);
 	alloc_size = fib_size * num_fibs + ALIGN32 - 1;
 
-	pci_free_consistent(dev->pdev, alloc_size, dev->hw_fib_va,
-			dev->hw_fib_pa);
+	dma_free_coherent(&dev->pdev->dev, alloc_size, dev->hw_fib_va,
+			  dev->hw_fib_pa);
 
 	dev->hw_fib_va = NULL;
 	dev->hw_fib_pa = 0;
···
 	 * case.
 	 */
 	aac_fib_map_free(aac);
-	pci_free_consistent(aac->pdev, aac->comm_size, aac->comm_addr, aac->comm_phys);
+	dma_free_coherent(&aac->pdev->dev, aac->comm_size, aac->comm_addr,
+			  aac->comm_phys);
 	aac->comm_addr = NULL;
 	aac->comm_phys = 0;
 	kfree(aac->queues);
···
 	if (!fibptr)
 		goto out;
 
-	dma_buf = pci_alloc_consistent(dev->pdev, datasize, &addr);
+	dma_buf = dma_alloc_coherent(&dev->pdev->dev, datasize, &addr,
+				     GFP_KERNEL);
 	if (!dma_buf)
 		goto fib_free_out;
···
 	ret = aac_fib_send(ScsiPortCommand64, fibptr, sizeof(struct aac_srb),
 			FsaNormal, 1, 1, NULL, NULL);
 
-	pci_free_consistent(dev->pdev, datasize, (void *)dma_buf, addr);
+	dma_free_coherent(&dev->pdev->dev, datasize, dma_buf, addr);
 
 	/*
 	 * Do not set XferState to zero unless
+4 -4
drivers/scsi/aacraid/linit.c
···
 out_unmap:
 	aac_fib_map_free(aac);
 	if (aac->comm_addr)
-		pci_free_consistent(aac->pdev, aac->comm_size, aac->comm_addr,
-				aac->comm_phys);
+		dma_free_coherent(&aac->pdev->dev, aac->comm_size,
+				  aac->comm_addr, aac->comm_phys);
 	kfree(aac->queues);
 	aac_adapter_ioremap(aac, 0);
 	kfree(aac->fibs);
···
 
 	__aac_shutdown(aac);
 	aac_fib_map_free(aac);
-	pci_free_consistent(aac->pdev, aac->comm_size, aac->comm_addr,
-			aac->comm_phys);
+	dma_free_coherent(&aac->pdev->dev, aac->comm_size, aac->comm_addr,
+			  aac->comm_phys);
 	kfree(aac->queues);
 
 	aac_adapter_ioremap(aac, 0);
+9 -7
drivers/scsi/aacraid/rx.c
···
 
 	if (likely((status & 0xFF000000L) == 0xBC000000L))
 		return (status >> 16) & 0xFF;
-	buffer = pci_alloc_consistent(dev->pdev, 512, &baddr);
+	buffer = dma_alloc_coherent(&dev->pdev->dev, 512, &baddr,
+				    GFP_KERNEL);
 	ret = -2;
 	if (unlikely(buffer == NULL))
 		return ret;
-	post = pci_alloc_consistent(dev->pdev,
-			sizeof(struct POSTSTATUS), &paddr);
+	post = dma_alloc_coherent(&dev->pdev->dev,
+				  sizeof(struct POSTSTATUS), &paddr,
+				  GFP_KERNEL);
 	if (unlikely(post == NULL)) {
-		pci_free_consistent(dev->pdev, 512, buffer, baddr);
+		dma_free_coherent(&dev->pdev->dev, 512, buffer, baddr);
 		return ret;
 	}
 	memset(buffer, 0, 512);
···
 	rx_writel(dev, MUnit.IMRx[0], paddr);
 	rx_sync_cmd(dev, COMMAND_POST_RESULTS, baddr, 0, 0, 0, 0, 0,
 		NULL, NULL, NULL, NULL, NULL);
-	pci_free_consistent(dev->pdev, sizeof(struct POSTSTATUS),
-			post, paddr);
+	dma_free_coherent(&dev->pdev->dev, sizeof(struct POSTSTATUS),
+			  post, paddr);
 	if (likely((buffer[0] == '0') && ((buffer[1] == 'x') || (buffer[1] == 'X')))) {
 		ret = (hex_to_bin(buffer[2]) << 4) +
 			hex_to_bin(buffer[3]);
 	}
-	pci_free_consistent(dev->pdev, 512, buffer, baddr);
+	dma_free_coherent(&dev->pdev->dev, 512, buffer, baddr);
 	return ret;
 }
 /*
+10 -11
drivers/scsi/advansys.c
···
 static uchar
 AscMsgOutSDTR(ASC_DVC_VAR *asc_dvc, uchar sdtr_period, uchar sdtr_offset)
 {
-	EXT_MSG sdtr_buf;
-	uchar sdtr_period_index;
-	PortAddr iop_base;
-
-	iop_base = asc_dvc->iop_base;
-	sdtr_buf.msg_type = EXTENDED_MESSAGE;
-	sdtr_buf.msg_len = MS_SDTR_LEN;
-	sdtr_buf.msg_req = EXTENDED_SDTR;
-	sdtr_buf.xfer_period = sdtr_period;
+	PortAddr iop_base = asc_dvc->iop_base;
+	uchar sdtr_period_index = AscGetSynPeriodIndex(asc_dvc, sdtr_period);
+	EXT_MSG sdtr_buf = {
+		.msg_type = EXTENDED_MESSAGE,
+		.msg_len = MS_SDTR_LEN,
+		.msg_req = EXTENDED_SDTR,
+		.xfer_period = sdtr_period,
+		.req_ack_offset = sdtr_offset,
+	};
 	sdtr_offset &= ASC_SYN_MAX_OFFSET;
-	sdtr_buf.req_ack_offset = sdtr_offset;
-	sdtr_period_index = AscGetSynPeriodIndex(asc_dvc, sdtr_period);
+
 	if (sdtr_period_index <= asc_dvc->max_sdtr_index) {
 		AscMemWordCopyPtrToLram(iop_base, ASCV_MSGOUT_BEG,
 					(uchar *)&sdtr_buf,
+2 -2
drivers/scsi/aic7xxx/aic7xxx_pci.c
···
 #define		STA	0x08
 #define		DPR	0x01
 
-static int ahc_9005_subdevinfo_valid(uint16_t vendor, uint16_t device,
-				     uint16_t subvendor, uint16_t subdevice);
+static int ahc_9005_subdevinfo_valid(uint16_t device, uint16_t vendor,
+				     uint16_t subdevice, uint16_t subvendor);
 static int ahc_ext_scbram_present(struct ahc_softc *ahc);
 static void ahc_scbram_config(struct ahc_softc *ahc, int enable,
 			      int pcheck, int fast, int large);
-1
drivers/scsi/aic94xx/aic94xx_init.c
···
 {
 	int err;
 
-	scsi_remove_host(asd_ha->sas_ha.core.shost);
 	err = sas_unregister_ha(&asd_ha->sas_ha);
 
 	sas_remove_host(asd_ha->sas_ha.core.shost);
+4 -8
drivers/scsi/be2iscsi/be.h
··· 1 - /** 2 - * Copyright (C) 2005 - 2016 Broadcom 3 - * All rights reserved. 1 + /* 2 + * Copyright 2017 Broadcom. All Rights Reserved. 3 + * The term "Broadcom" refers to Broadcom Limited and/or its subsidiaries. 4 4 * 5 5 * This program is free software; you can redistribute it and/or 6 6 * modify it under the terms of the GNU General Public License version 2 7 - * as published by the Free Software Foundation. The full GNU General 7 + * as published by the Free Software Foundation. The full GNU General 8 8 * Public License is included in this distribution in the file called COPYING. 9 9 * 10 10 * Contact Information: 11 11 * linux-drivers@broadcom.com 12 12 * 13 - * Emulex 14 - * 3333 Susan Street 15 - * Costa Mesa, CA 92626 16 13 */ 17 14 18 15 #ifndef BEISCSI_H ··· 151 154 #define PAGE_SHIFT_4K 12 152 155 #define PAGE_SIZE_4K (1 << PAGE_SHIFT_4K) 153 156 #define mcc_timeout 120000 /* 12s timeout */ 154 - #define BEISCSI_LOGOUT_SYNC_DELAY 250 155 157 156 158 /* Returns number of pages spanned by the data starting at the given addr */ 157 159 #define PAGES_4K_SPANNED(_address, size) \
+10 -7
drivers/scsi/be2iscsi/be_cmds.c
··· 1 - /** 2 - * Copyright (C) 2005 - 2016 Broadcom 3 - * All rights reserved. 1 + /* 2 + * Copyright 2017 Broadcom. All Rights Reserved. 3 + * The term "Broadcom" refers to Broadcom Limited and/or its subsidiaries. 4 4 * 5 5 * This program is free software; you can redistribute it and/or 6 6 * modify it under the terms of the GNU General Public License version 2 7 - * as published by the Free Software Foundation. The full GNU General 7 + * as published by the Free Software Foundation. The full GNU General 8 8 * Public License is included in this distribution in the file called COPYING. 9 9 * 10 10 * Contact Information: 11 11 * linux-drivers@broadcom.com 12 12 * 13 - * Emulex 14 - * 3333 Susan Street 15 - * Costa Mesa, CA 92626 16 13 */ 17 14 18 15 #include <scsi/iscsi_proto.h> ··· 242 245 struct be_dma_mem *mbx_cmd_mem) 243 246 { 244 247 int rc = 0; 248 + 249 + if (!tag || tag > MAX_MCC_CMD) { 250 + __beiscsi_log(phba, KERN_ERR, 251 + "BC_%d : invalid tag %u\n", tag); 252 + return -EINVAL; 253 + } 245 254 246 255 if (beiscsi_hba_in_error(phba)) { 247 256 clear_bit(MCC_TAG_STATE_RUNNING,
+36 -38
drivers/scsi/be2iscsi/be_cmds.h
··· 1 - /** 2 - * Copyright (C) 2005 - 2016 Broadcom 3 - * All rights reserved. 1 + /* 2 + * Copyright 2017 Broadcom. All Rights Reserved. 3 + * The term "Broadcom" refers to Broadcom Limited and/or its subsidiaries. 4 4 * 5 5 * This program is free software; you can redistribute it and/or 6 6 * modify it under the terms of the GNU General Public License version 2 7 - * as published by the Free Software Foundation. The full GNU General 7 + * as published by the Free Software Foundation. The full GNU General 8 8 * Public License is included in this distribution in the file called COPYING. 9 9 * 10 10 * Contact Information: 11 11 * linux-drivers@broadcom.com 12 12 * 13 - * Emulex 14 - * 3333 Susan Street 15 - * Costa Mesa, CA 92626 16 13 */ 17 14 18 15 #ifndef BEISCSI_CMDS_H ··· 1142 1145 #define DB_DEF_PDU_EVENT_SHIFT 15 1143 1146 #define DB_DEF_PDU_CQPROC_SHIFT 16 1144 1147 1145 - struct dmsg_cqe { 1146 - u32 dw[4]; 1148 + struct be_invalidate_connection_params_in { 1149 + struct be_cmd_req_hdr hdr; 1150 + u32 session_handle; 1151 + u16 cid; 1152 + u16 unused; 1153 + #define BE_CLEANUP_TYPE_INVALIDATE 0x8001 1154 + #define BE_CLEANUP_TYPE_ISSUE_TCP_RST 0x8002 1155 + u16 cleanup_type; 1156 + u16 save_cfg; 1147 1157 } __packed; 1148 1158 1149 - struct tcp_upload_params_in { 1159 + struct be_invalidate_connection_params_out { 1160 + u32 session_handle; 1161 + u16 cid; 1162 + u16 unused; 1163 + } __packed; 1164 + 1165 + union be_invalidate_connection_params { 1166 + struct be_invalidate_connection_params_in req; 1167 + struct be_invalidate_connection_params_out resp; 1168 + } __packed; 1169 + 1170 + struct be_tcp_upload_params_in { 1150 1171 struct be_cmd_req_hdr hdr; 1151 1172 u16 id; 1173 + #define BE_UPLOAD_TYPE_GRACEFUL 1 1174 + /* abortive upload with reset */ 1175 + #define BE_UPLOAD_TYPE_ABORT_RESET 2 1176 + /* abortive upload without reset */ 1177 + #define BE_UPLOAD_TYPE_ABORT 3 1178 + /* abortive upload with reset, sequence number by driver */ 1179 + #define 
BE_UPLOAD_TYPE_ABORT_WITH_SEQ 4 1152 1180 u16 upload_type; 1153 1181 u32 reset_seq; 1154 1182 } __packed; 1155 1183 1156 - struct tcp_upload_params_out { 1184 + struct be_tcp_upload_params_out { 1157 1185 u32 dw[32]; 1158 1186 } __packed; 1159 1187 1160 - union tcp_upload_params { 1161 - struct tcp_upload_params_in request; 1162 - struct tcp_upload_params_out response; 1188 + union be_tcp_upload_params { 1189 + struct be_tcp_upload_params_in request; 1190 + struct be_tcp_upload_params_out response; 1163 1191 } __packed; 1164 1192 1165 1193 struct be_ulp_fw_cfg { ··· 1265 1243 #define OPCODE_COMMON_WRITE_FLASH 96 1266 1244 #define OPCODE_COMMON_READ_FLASH 97 1267 1245 1268 - /* --- CMD_ISCSI_INVALIDATE_CONNECTION_TYPE --- */ 1269 1246 #define CMD_ISCSI_COMMAND_INVALIDATE 1 1270 - #define CMD_ISCSI_CONNECTION_INVALIDATE 0x8001 1271 - #define CMD_ISCSI_CONNECTION_ISSUE_TCP_RST 0x8002 1272 1247 1273 1248 #define INI_WR_CMD 1 /* Initiator write command */ 1274 1249 #define INI_TMF_CMD 2 /* Initiator TMF command */ ··· 1288 1269 * preparedby 1289 1270 * driver should not be touched 1290 1271 */ 1291 - /* --- CMD_CHUTE_TYPE --- */ 1292 - #define CMD_CONNECTION_CHUTE_0 1 1293 - #define CMD_CONNECTION_CHUTE_1 2 1294 - #define CMD_CONNECTION_CHUTE_2 3 1295 - 1296 - #define EQ_MAJOR_CODE_COMPLETION 0 1297 - 1298 - #define CMD_ISCSI_SESSION_DEL_CFG_FROM_FLASH 0 1299 - #define CMD_ISCSI_SESSION_SAVE_CFG_ON_FLASH 1 1300 - 1301 - /* --- CONNECTION_UPLOAD_PARAMS --- */ 1302 - /* These parameters are used to define the type of upload desired. */ 1303 - #define CONNECTION_UPLOAD_GRACEFUL 1 /* Graceful upload */ 1304 - #define CONNECTION_UPLOAD_ABORT_RESET 2 /* Abortive upload with 1305 - * reset 1306 - */ 1307 - #define CONNECTION_UPLOAD_ABORT 3 /* Abortive upload without 1308 - * reset 1309 - */ 1310 - #define CONNECTION_UPLOAD_ABORT_WITH_SEQ 4 /* Abortive upload with reset, 1311 - * sequence number by driver */ 1312 1272 1313 1273 /* Returns the number of items in the field array. 
*/ 1314 1274 #define BE_NUMBER_OF_FIELD(_type_, _field_) \
+59 -52
drivers/scsi/be2iscsi/be_iscsi.c
··· 1 - /** 2 - * Copyright (C) 2005 - 2016 Broadcom 3 - * All rights reserved. 1 + /* 2 + * Copyright 2017 Broadcom. All Rights Reserved. 3 + * The term "Broadcom" refers to Broadcom Limited and/or its subsidiaries. 4 4 * 5 5 * This program is free software; you can redistribute it and/or 6 6 * modify it under the terms of the GNU General Public License version 2 7 - * as published by the Free Software Foundation. The full GNU General 7 + * as published by the Free Software Foundation. The full GNU General 8 8 * Public License is included in this distribution in the file called COPYING. 9 - * 10 - * Written by: Jayamohan Kallickal (jayamohan.kallickal@broadcom.com) 11 9 * 12 10 * Contact Information: 13 11 * linux-drivers@broadcom.com 14 12 * 15 - * Emulex 16 - * 3333 Susan Street 17 - * Costa Mesa, CA 92626 18 13 */ 19 14 20 15 #include <scsi/libiscsi.h> ··· 1258 1263 } 1259 1264 1260 1265 /** 1261 - * beiscsi_close_conn - Upload the connection 1266 + * beiscsi_conn_close - Invalidate and upload connection 1262 1267 * @ep: The iscsi endpoint 1263 - * @flag: The type of connection closure 1268 + * 1269 + * Returns 0 on success, -1 on failure. 1264 1270 */ 1265 - static int beiscsi_close_conn(struct beiscsi_endpoint *beiscsi_ep, int flag) 1271 + static int beiscsi_conn_close(struct beiscsi_endpoint *beiscsi_ep) 1266 1272 { 1267 - int ret = 0; 1268 - unsigned int tag; 1269 1273 struct beiscsi_hba *phba = beiscsi_ep->phba; 1274 + unsigned int tag, attempts; 1275 + int ret; 1270 1276 1271 - tag = mgmt_upload_connection(phba, beiscsi_ep->ep_cid, flag); 1272 - if (!tag) { 1273 - beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG, 1274 - "BS_%d : upload failed for cid 0x%x\n", 1275 - beiscsi_ep->ep_cid); 1276 - 1277 - ret = -EAGAIN; 1277 + /** 1278 + * Without successfully invalidating and uploading connection 1279 + * driver can't reuse the CID so attempt more than once. 
1280 + */ 1281 + attempts = 0; 1282 + while (attempts++ < 3) { 1283 + tag = beiscsi_invalidate_cxn(phba, beiscsi_ep); 1284 + if (tag) { 1285 + ret = beiscsi_mccq_compl_wait(phba, tag, NULL, NULL); 1286 + if (!ret) 1287 + break; 1288 + beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG, 1289 + "BS_%d : invalidate conn failed cid %d\n", 1290 + beiscsi_ep->ep_cid); 1291 + } 1278 1292 } 1279 1293 1280 - ret = beiscsi_mccq_compl_wait(phba, tag, NULL, NULL); 1281 - 1282 - /* Flush the CQ entries */ 1294 + /* wait for all completions to arrive, then process them */ 1295 + msleep(250); 1296 + /* flush CQ entries */ 1283 1297 beiscsi_flush_cq(phba); 1284 1298 1285 - return ret; 1299 + if (attempts > 3) 1300 + return -1; 1301 + 1302 + attempts = 0; 1303 + while (attempts++ < 3) { 1304 + tag = beiscsi_upload_cxn(phba, beiscsi_ep); 1305 + if (tag) { 1306 + ret = beiscsi_mccq_compl_wait(phba, tag, NULL, NULL); 1307 + if (!ret) 1308 + break; 1309 + beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG, 1310 + "BS_%d : upload conn failed cid %d\n", 1311 + beiscsi_ep->ep_cid); 1312 + } 1313 + } 1314 + if (attempts > 3) 1315 + return -1; 1316 + 1317 + return 0; 1286 1318 } 1287 1319 1288 1320 /** ··· 1320 1298 */ 1321 1299 void beiscsi_ep_disconnect(struct iscsi_endpoint *ep) 1322 1300 { 1323 - struct beiscsi_conn *beiscsi_conn; 1324 1301 struct beiscsi_endpoint *beiscsi_ep; 1302 + struct beiscsi_conn *beiscsi_conn; 1325 1303 struct beiscsi_hba *phba; 1326 - unsigned int tag; 1327 - uint8_t mgmt_invalidate_flag, tcp_upload_flag; 1328 - unsigned short savecfg_flag = CMD_ISCSI_SESSION_SAVE_CFG_ON_FLASH; 1329 1304 uint16_t cri_index; 1330 1305 1331 1306 beiscsi_ep = ep->dd_data; ··· 1343 1324 if (beiscsi_ep->conn) { 1344 1325 beiscsi_conn = beiscsi_ep->conn; 1345 1326 iscsi_suspend_queue(beiscsi_conn->conn); 1346 - mgmt_invalidate_flag = ~BEISCSI_NO_RST_ISSUE; 1347 - tcp_upload_flag = CONNECTION_UPLOAD_GRACEFUL; 1348 - } else { 1349 - mgmt_invalidate_flag = BEISCSI_NO_RST_ISSUE; 1350 - 
tcp_upload_flag = CONNECTION_UPLOAD_ABORT; 1351 1327 } 1352 1328 1353 1329 if (!beiscsi_hba_is_online(phba)) { 1354 1330 beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG, 1355 1331 "BS_%d : HBA in error 0x%lx\n", phba->state); 1356 - goto free_ep; 1332 + } else { 1333 + /** 1334 + * Make CID available even if close fails. 1335 + * If not freed, FW might fail open using the CID. 1336 + */ 1337 + if (beiscsi_conn_close(beiscsi_ep) < 0) 1338 + __beiscsi_log(phba, KERN_ERR, 1339 + "BS_%d : close conn failed cid %d\n", 1340 + beiscsi_ep->ep_cid); 1357 1341 } 1358 1342 1359 - tag = mgmt_invalidate_connection(phba, beiscsi_ep, 1360 - beiscsi_ep->ep_cid, 1361 - mgmt_invalidate_flag, 1362 - savecfg_flag); 1363 - if (!tag) { 1364 - beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG, 1365 - "BS_%d : mgmt_invalidate_connection Failed for cid=%d\n", 1366 - beiscsi_ep->ep_cid); 1367 - } 1368 - 1369 - beiscsi_mccq_compl_wait(phba, tag, NULL, NULL); 1370 - beiscsi_close_conn(beiscsi_ep, tcp_upload_flag); 1371 - free_ep: 1372 - msleep(BEISCSI_LOGOUT_SYNC_DELAY); 1373 1343 beiscsi_free_ep(beiscsi_ep); 1374 1344 if (!phba->conn_table[cri_index]) 1375 1345 __beiscsi_log(phba, KERN_ERR, 1376 - "BS_%d : conn_table empty at %u: cid %u\n", 1377 - cri_index, 1378 - beiscsi_ep->ep_cid); 1346 + "BS_%d : conn_table empty at %u: cid %u\n", 1347 + cri_index, beiscsi_ep->ep_cid); 1379 1348 phba->conn_table[cri_index] = NULL; 1380 1349 iscsi_destroy_endpoint(beiscsi_ep->openiscsi_ep); 1381 1350 }
+4 -9
drivers/scsi/be2iscsi/be_iscsi.h
··· 1 - /** 2 - * Copyright (C) 2005 - 2016 Broadcom 3 - * All rights reserved. 1 + /* 2 + * Copyright 2017 Broadcom. All Rights Reserved. 3 + * The term "Broadcom" refers to Broadcom Limited and/or its subsidiaries. 4 4 * 5 5 * This program is free software; you can redistribute it and/or 6 6 * modify it under the terms of the GNU General Public License version 2 7 - * as published by the Free Software Foundation. The full GNU General 7 + * as published by the Free Software Foundation. The full GNU General 8 8 * Public License is included in this distribution in the file called COPYING. 9 - * 10 - * Written by: Jayamohan Kallickal (jayamohan.kallickal@broadcom.com) 11 9 * 12 10 * Contact Information: 13 11 * linux-drivers@broadcom.com 14 12 * 15 - * Avago Technologies 16 - * 3333 Susan Street 17 - * Costa Mesa, CA 92626 18 13 */ 19 14 20 15 #ifndef _BE_ISCSI_
+160 -243
drivers/scsi/be2iscsi/be_main.c
··· 1 - /** 2 - * Copyright (C) 2005 - 2016 Broadcom 3 - * All rights reserved. 1 + /* 2 + * Copyright 2017 Broadcom. All Rights Reserved. 3 + * The term "Broadcom" refers to Broadcom Limited and/or its subsidiaries. 4 4 * 5 5 * This program is free software; you can redistribute it and/or 6 6 * modify it under the terms of the GNU General Public License version 2 7 - * as published by the Free Software Foundation. The full GNU General 7 + * as published by the Free Software Foundation. The full GNU General 8 8 * Public License is included in this distribution in the file called COPYING. 9 - * 10 - * Written by: Jayamohan Kallickal (jayamohan.kallickal@broadcom.com) 11 9 * 12 10 * Contact Information: 13 11 * linux-drivers@broadcom.com 14 12 * 15 - * Emulex 16 - * 3333 Susan Street 17 - * Costa Mesa, CA 92626 18 13 */ 19 14 20 15 #include <linux/reboot.h> ··· 332 337 inv_tbl->task[nents] = task; 333 338 nents++; 334 339 } 335 - spin_unlock_bh(&session->back_lock); 340 + spin_unlock(&session->back_lock); 336 341 spin_unlock_bh(&session->frwd_lock); 337 342 338 343 rc = SUCCESS; ··· 631 636 (total_cid_count + 632 637 BE2_TMFS + BE2_NOPOUT_REQ)); 633 638 phba->params.cxns_per_ctrl = total_cid_count; 634 - phba->params.asyncpdus_per_ctrl = total_cid_count; 635 639 phba->params.icds_per_ctrl = total_icd_count; 636 640 phba->params.num_sge_per_io = BE2_SGE; 637 641 phba->params.defpdu_hdr_sz = BE2_DEFPDU_HDR_SZ; ··· 796 802 struct pci_dev *pcidev = phba->pcidev; 797 803 struct hwi_controller *phwi_ctrlr; 798 804 struct hwi_context_memory *phwi_context; 799 - int ret, msix_vec, i, j; 805 + int ret, i, j; 800 806 801 807 phwi_ctrlr = phba->phwi_ctrlr; 802 808 phwi_context = phwi_ctrlr->phwi_ctxt; 803 809 804 - if (phba->msix_enabled) { 810 + if (pcidev->msix_enabled) { 805 811 for (i = 0; i < phba->num_cpus; i++) { 806 812 phba->msi_name[i] = kzalloc(BEISCSI_MSI_NAME, 807 813 GFP_KERNEL); ··· 812 818 813 819 sprintf(phba->msi_name[i], "beiscsi_%02x_%02x", 814 820 
phba->shost->host_no, i); 815 - msix_vec = phba->msix_entries[i].vector; 816 - ret = request_irq(msix_vec, be_isr_msix, 0, 817 - phba->msi_name[i], 821 + ret = request_irq(pci_irq_vector(pcidev, i), 822 + be_isr_msix, 0, phba->msi_name[i], 818 823 &phwi_context->be_eq[i]); 819 824 if (ret) { 820 825 beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_INIT, ··· 831 838 } 832 839 sprintf(phba->msi_name[i], "beiscsi_mcc_%02x", 833 840 phba->shost->host_no); 834 - msix_vec = phba->msix_entries[i].vector; 835 - ret = request_irq(msix_vec, be_isr_mcc, 0, phba->msi_name[i], 836 - &phwi_context->be_eq[i]); 841 + ret = request_irq(pci_irq_vector(pcidev, i), be_isr_mcc, 0, 842 + phba->msi_name[i], &phwi_context->be_eq[i]); 837 843 if (ret) { 838 844 beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_INIT , 839 845 "BM_%d : beiscsi_init_irqs-" ··· 854 862 return 0; 855 863 free_msix_irqs: 856 864 for (j = i - 1; j >= 0; j--) { 865 + free_irq(pci_irq_vector(pcidev, i), &phwi_context->be_eq[j]); 857 866 kfree(phba->msi_name[j]); 858 - msix_vec = phba->msix_entries[j].vector; 859 - free_irq(msix_vec, &phwi_context->be_eq[j]); 860 867 } 861 868 return ret; 862 869 } ··· 1445 1454 beiscsi_hdl_put_handle(struct hd_async_context *pasync_ctx, 1446 1455 struct hd_async_handle *pasync_handle) 1447 1456 { 1448 - if (pasync_handle->is_header) { 1449 - list_add_tail(&pasync_handle->link, 1450 - &pasync_ctx->async_header.free_list); 1451 - pasync_ctx->async_header.free_entries++; 1452 - } else { 1453 - list_add_tail(&pasync_handle->link, 1454 - &pasync_ctx->async_data.free_list); 1455 - pasync_ctx->async_data.free_entries++; 1456 - } 1457 + pasync_handle->is_final = 0; 1458 + pasync_handle->buffer_len = 0; 1459 + pasync_handle->in_use = 0; 1460 + list_del_init(&pasync_handle->link); 1461 + } 1462 + 1463 + static void 1464 + beiscsi_hdl_purge_handles(struct beiscsi_hba *phba, 1465 + struct hd_async_context *pasync_ctx, 1466 + u16 cri) 1467 + { 1468 + struct hd_async_handle *pasync_handle, *tmp_handle; 1469 + 
struct list_head *plist; 1470 + 1471 + plist = &pasync_ctx->async_entry[cri].wq.list; 1472 + list_for_each_entry_safe(pasync_handle, tmp_handle, plist, link) 1473 + beiscsi_hdl_put_handle(pasync_ctx, pasync_handle); 1474 + 1475 + INIT_LIST_HEAD(&pasync_ctx->async_entry[cri].wq.list); 1476 + pasync_ctx->async_entry[cri].wq.hdr_len = 0; 1477 + pasync_ctx->async_entry[cri].wq.bytes_received = 0; 1478 + pasync_ctx->async_entry[cri].wq.bytes_needed = 0; 1457 1479 } 1458 1480 1459 1481 static struct hd_async_handle * 1460 1482 beiscsi_hdl_get_handle(struct beiscsi_conn *beiscsi_conn, 1461 1483 struct hd_async_context *pasync_ctx, 1462 - struct i_t_dpdu_cqe *pdpdu_cqe) 1484 + struct i_t_dpdu_cqe *pdpdu_cqe, 1485 + u8 *header) 1463 1486 { 1464 1487 struct beiscsi_hba *phba = beiscsi_conn->phba; 1465 1488 struct hd_async_handle *pasync_handle; 1466 1489 struct be_bus_address phys_addr; 1490 + u16 cid, code, ci, cri; 1467 1491 u8 final, error = 0; 1468 - u16 cid, code, ci; 1469 1492 u32 dpl; 1470 1493 1471 1494 cid = beiscsi_conn->beiscsi_conn_cid; 1495 + cri = BE_GET_ASYNC_CRI_FROM_CID(cid); 1472 1496 /** 1473 1497 * This function is invoked to get the right async_handle structure 1474 1498 * from a given DEF PDU CQ entry. 
··· 1522 1516 switch (code) { 1523 1517 case UNSOL_HDR_NOTIFY: 1524 1518 pasync_handle = pasync_ctx->async_entry[ci].header; 1519 + *header = 1; 1525 1520 break; 1526 1521 case UNSOL_DATA_DIGEST_ERROR_NOTIFY: 1527 1522 error = 1; ··· 1531 1524 break; 1532 1525 /* called only for above codes */ 1533 1526 default: 1534 - pasync_handle = NULL; 1535 - break; 1536 - } 1537 - 1538 - if (!pasync_handle) { 1539 - beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_ISCSI, 1540 - "BM_%d : cid %d async PDU handle not found - code %d ci %d addr %llx\n", 1541 - cid, code, ci, phys_addr.u.a64.address); 1542 - return pasync_handle; 1527 + return NULL; 1543 1528 } 1544 1529 1545 1530 if (pasync_handle->pa.u.a64.address != phys_addr.u.a64.address || ··· 1548 1549 } 1549 1550 1550 1551 /** 1551 - * Each CID is associated with unique CRI. 1552 - * ASYNC_CRI_FROM_CID mapping and CRI_FROM_CID are totaly different. 1553 - **/ 1554 - pasync_handle->cri = BE_GET_ASYNC_CRI_FROM_CID(cid); 1555 - pasync_handle->is_final = final; 1556 - pasync_handle->buffer_len = dpl; 1557 - /* empty the slot */ 1558 - if (pasync_handle->is_header) 1559 - pasync_ctx->async_entry[ci].header = NULL; 1560 - else 1561 - pasync_ctx->async_entry[ci].data = NULL; 1562 - 1563 - /** 1564 1552 * DEF PDU header and data buffers with errors should be simply 1565 1553 * dropped as there are no consumers for it. 1566 1554 */ 1567 1555 if (error) { 1568 1556 beiscsi_hdl_put_handle(pasync_ctx, pasync_handle); 1569 - pasync_handle = NULL; 1557 + return NULL; 1570 1558 } 1559 + 1560 + if (pasync_handle->in_use || !list_empty(&pasync_handle->link)) { 1561 + beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_ISCSI, 1562 + "BM_%d : cid %d async PDU handle in use - code %d ci %d addr %llx\n", 1563 + cid, code, ci, phys_addr.u.a64.address); 1564 + beiscsi_hdl_purge_handles(phba, pasync_ctx, cri); 1565 + } 1566 + 1567 + list_del_init(&pasync_handle->link); 1568 + /** 1569 + * Each CID is associated with unique CRI. 
1570 + * ASYNC_CRI_FROM_CID mapping and CRI_FROM_CID are totaly different. 1571 + **/ 1572 + pasync_handle->cri = cri; 1573 + pasync_handle->is_final = final; 1574 + pasync_handle->buffer_len = dpl; 1575 + pasync_handle->in_use = 1; 1576 + 1571 1577 return pasync_handle; 1572 - } 1573 - 1574 - static void 1575 - beiscsi_hdl_purge_handles(struct beiscsi_hba *phba, 1576 - struct hd_async_context *pasync_ctx, 1577 - u16 cri) 1578 - { 1579 - struct hd_async_handle *pasync_handle, *tmp_handle; 1580 - struct list_head *plist; 1581 - 1582 - plist = &pasync_ctx->async_entry[cri].wq.list; 1583 - list_for_each_entry_safe(pasync_handle, tmp_handle, plist, link) { 1584 - list_del(&pasync_handle->link); 1585 - beiscsi_hdl_put_handle(pasync_ctx, pasync_handle); 1586 - } 1587 - 1588 - INIT_LIST_HEAD(&pasync_ctx->async_entry[cri].wq.list); 1589 - pasync_ctx->async_entry[cri].wq.hdr_len = 0; 1590 - pasync_ctx->async_entry[cri].wq.bytes_received = 0; 1591 - pasync_ctx->async_entry[cri].wq.bytes_needed = 0; 1592 1578 } 1593 1579 1594 1580 static unsigned int ··· 1603 1619 dlen = pasync_handle->buffer_len; 1604 1620 continue; 1605 1621 } 1622 + if (!pasync_handle->buffer_len || 1623 + (dlen + pasync_handle->buffer_len) > 1624 + pasync_ctx->async_data.buffer_size) 1625 + break; 1606 1626 memcpy(pdata + dlen, pasync_handle->pbuffer, 1607 1627 pasync_handle->buffer_len); 1608 1628 dlen += pasync_handle->buffer_len; ··· 1615 1627 if (!plast_handle->is_final) { 1616 1628 /* last handle should have final PDU notification from FW */ 1617 1629 beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_ISCSI, 1618 - "BM_%d : cid %u %p fwd async PDU with last handle missing - HL%u:DN%u:DR%u\n", 1630 + "BM_%d : cid %u %p fwd async PDU opcode %x with last handle missing - HL%u:DN%u:DR%u\n", 1619 1631 beiscsi_conn->beiscsi_conn_cid, plast_handle, 1632 + AMAP_GET_BITS(struct amap_pdu_base, opcode, phdr), 1620 1633 pasync_ctx->async_entry[cri].wq.hdr_len, 1621 1634 pasync_ctx->async_entry[cri].wq.bytes_needed, 1622 
1635 pasync_ctx->async_entry[cri].wq.bytes_received); ··· 1698 1709 1699 1710 static void 1700 1711 beiscsi_hdq_post_handles(struct beiscsi_hba *phba, 1701 - u8 header, u8 ulp_num) 1712 + u8 header, u8 ulp_num, u16 nbuf) 1702 1713 { 1703 - struct hd_async_handle *pasync_handle, *tmp, **slot; 1714 + struct hd_async_handle *pasync_handle; 1704 1715 struct hd_async_context *pasync_ctx; 1705 1716 struct hwi_controller *phwi_ctrlr; 1706 - struct list_head *hfree_list; 1707 1717 struct phys_addr *pasync_sge; 1708 1718 u32 ring_id, doorbell = 0; 1709 1719 u32 doorbell_offset; 1710 - u16 prod = 0, cons; 1711 - u16 index; 1720 + u16 prod, pi; 1712 1721 1713 1722 phwi_ctrlr = phba->phwi_ctrlr; 1714 1723 pasync_ctx = HWI_GET_ASYNC_PDU_CTX(phwi_ctrlr, ulp_num); 1715 1724 if (header) { 1716 - cons = pasync_ctx->async_header.free_entries; 1717 - hfree_list = &pasync_ctx->async_header.free_list; 1725 + pasync_sge = pasync_ctx->async_header.ring_base; 1726 + pi = pasync_ctx->async_header.pi; 1718 1727 ring_id = phwi_ctrlr->default_pdu_hdr[ulp_num].id; 1719 1728 doorbell_offset = phwi_ctrlr->default_pdu_hdr[ulp_num]. 1720 1729 doorbell_offset; 1721 1730 } else { 1722 - cons = pasync_ctx->async_data.free_entries; 1723 - hfree_list = &pasync_ctx->async_data.free_list; 1731 + pasync_sge = pasync_ctx->async_data.ring_base; 1732 + pi = pasync_ctx->async_data.pi; 1724 1733 ring_id = phwi_ctrlr->default_pdu_data[ulp_num].id; 1725 1734 doorbell_offset = phwi_ctrlr->default_pdu_data[ulp_num]. 
1726 1735 doorbell_offset; 1727 1736 } 1728 - /* number of entries posted must be in multiples of 8 */ 1729 - if (cons % 8) 1730 - return; 1731 1737 1732 - list_for_each_entry_safe(pasync_handle, tmp, hfree_list, link) { 1733 - list_del_init(&pasync_handle->link); 1734 - pasync_handle->is_final = 0; 1735 - pasync_handle->buffer_len = 0; 1736 - 1737 - /* handles can be consumed out of order, use index in handle */ 1738 - index = pasync_handle->index; 1738 + for (prod = 0; prod < nbuf; prod++) { 1739 + if (header) 1740 + pasync_handle = pasync_ctx->async_entry[pi].header; 1741 + else 1742 + pasync_handle = pasync_ctx->async_entry[pi].data; 1739 1743 WARN_ON(pasync_handle->is_header != header); 1740 - if (header) 1741 - slot = &pasync_ctx->async_entry[index].header; 1742 - else 1743 - slot = &pasync_ctx->async_entry[index].data; 1744 - /** 1745 - * The slot just tracks handle's hold and release, so 1746 - * overwriting at the same index won't do any harm but 1747 - * needs to be caught. 1748 - */ 1749 - if (*slot != NULL) { 1750 - beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_ISCSI, 1751 - "BM_%d : async PDU %s slot at %u not empty\n", 1752 - header ? "header" : "data", index); 1744 + WARN_ON(pasync_handle->index != pi); 1745 + /* setup the ring only once */ 1746 + if (nbuf == pasync_ctx->num_entries) { 1747 + /* note hi is lo */ 1748 + pasync_sge[pi].hi = pasync_handle->pa.u.a32.address_lo; 1749 + pasync_sge[pi].lo = pasync_handle->pa.u.a32.address_hi; 1753 1750 } 1754 - /** 1755 - * We use same freed index as in completion to post so this 1756 - * operation is not required for refills. Its required only 1757 - * for ring creation. 
1758 - */ 1759 - if (header) 1760 - pasync_sge = pasync_ctx->async_header.ring_base; 1761 - else 1762 - pasync_sge = pasync_ctx->async_data.ring_base; 1763 - pasync_sge += index; 1764 - /* if its a refill then address is same; hi is lo */ 1765 - WARN_ON(pasync_sge->hi && 1766 - pasync_sge->hi != pasync_handle->pa.u.a32.address_lo); 1767 - WARN_ON(pasync_sge->lo && 1768 - pasync_sge->lo != pasync_handle->pa.u.a32.address_hi); 1769 - pasync_sge->hi = pasync_handle->pa.u.a32.address_lo; 1770 - pasync_sge->lo = pasync_handle->pa.u.a32.address_hi; 1771 - 1772 - *slot = pasync_handle; 1773 - if (++prod == cons) 1774 - break; 1751 + if (++pi == pasync_ctx->num_entries) 1752 + pi = 0; 1775 1753 } 1754 + 1776 1755 if (header) 1777 - pasync_ctx->async_header.free_entries -= prod; 1756 + pasync_ctx->async_header.pi = pi; 1778 1757 else 1779 - pasync_ctx->async_data.free_entries -= prod; 1758 + pasync_ctx->async_data.pi = pi; 1780 1759 1781 1760 doorbell |= ring_id & DB_DEF_PDU_RING_ID_MASK; 1782 1761 doorbell |= 1 << DB_DEF_PDU_REARM_SHIFT; ··· 1761 1804 struct hd_async_handle *pasync_handle = NULL; 1762 1805 struct hd_async_context *pasync_ctx; 1763 1806 struct hwi_controller *phwi_ctrlr; 1807 + u8 ulp_num, consumed, header = 0; 1764 1808 u16 cid_cri; 1765 - u8 ulp_num; 1766 1809 1767 1810 phwi_ctrlr = phba->phwi_ctrlr; 1768 1811 cid_cri = BE_GET_CRI_FROM_CID(beiscsi_conn->beiscsi_conn_cid); 1769 1812 ulp_num = BEISCSI_GET_ULP_FROM_CRI(phwi_ctrlr, cid_cri); 1770 1813 pasync_ctx = HWI_GET_ASYNC_PDU_CTX(phwi_ctrlr, ulp_num); 1771 1814 pasync_handle = beiscsi_hdl_get_handle(beiscsi_conn, pasync_ctx, 1772 - pdpdu_cqe); 1773 - if (!pasync_handle) 1774 - return; 1775 - 1776 - beiscsi_hdl_gather_pdu(beiscsi_conn, pasync_ctx, pasync_handle); 1777 - beiscsi_hdq_post_handles(phba, pasync_handle->is_header, ulp_num); 1815 + pdpdu_cqe, &header); 1816 + if (is_chip_be2_be3r(phba)) 1817 + consumed = AMAP_GET_BITS(struct amap_i_t_dpdu_cqe, 1818 + num_cons, pdpdu_cqe); 1819 + else 1820 + 
consumed = AMAP_GET_BITS(struct amap_i_t_dpdu_cqe_v2, 1821 + num_cons, pdpdu_cqe); 1822 + if (pasync_handle) 1823 + beiscsi_hdl_gather_pdu(beiscsi_conn, pasync_ctx, pasync_handle); 1824 + /* num_cons indicates number of 8 RQEs consumed */ 1825 + if (consumed) 1826 + beiscsi_hdq_post_handles(phba, header, ulp_num, 8 * consumed); 1778 1827 } 1779 1828 1780 1829 void beiscsi_process_mcc_cq(struct beiscsi_hba *phba) ··· 2370 2407 if (test_bit(ulp_num, &phba->fw_config.ulp_supported)) { 2371 2408 2372 2409 num_async_pdu_buf_sgl_pages = 2373 - PAGES_REQUIRED(BEISCSI_GET_CID_COUNT( 2410 + PAGES_REQUIRED(BEISCSI_ASYNC_HDQ_SIZE( 2374 2411 phba, ulp_num) * 2375 2412 sizeof(struct phys_addr)); 2376 2413 2377 2414 num_async_pdu_buf_pages = 2378 - PAGES_REQUIRED(BEISCSI_GET_CID_COUNT( 2415 + PAGES_REQUIRED(BEISCSI_ASYNC_HDQ_SIZE( 2379 2416 phba, ulp_num) * 2380 2417 phba->params.defpdu_hdr_sz); 2381 2418 2382 2419 num_async_pdu_data_pages = 2383 - PAGES_REQUIRED(BEISCSI_GET_CID_COUNT( 2420 + PAGES_REQUIRED(BEISCSI_ASYNC_HDQ_SIZE( 2384 2421 phba, ulp_num) * 2385 2422 phba->params.defpdu_data_sz); 2386 2423 2387 2424 num_async_pdu_data_sgl_pages = 2388 - PAGES_REQUIRED(BEISCSI_GET_CID_COUNT( 2425 + PAGES_REQUIRED(BEISCSI_ASYNC_HDQ_SIZE( 2389 2426 phba, ulp_num) * 2390 2427 sizeof(struct phys_addr)); 2391 2428 ··· 2422 2459 mem_descr_index = (HWI_MEM_ASYNC_HEADER_HANDLE_ULP0 + 2423 2460 (ulp_num * MEM_DESCR_OFFSET)); 2424 2461 phba->mem_req[mem_descr_index] = 2425 - BEISCSI_GET_CID_COUNT(phba, ulp_num) * 2426 - sizeof(struct hd_async_handle); 2462 + BEISCSI_ASYNC_HDQ_SIZE(phba, ulp_num) * 2463 + sizeof(struct hd_async_handle); 2427 2464 2428 2465 mem_descr_index = (HWI_MEM_ASYNC_DATA_HANDLE_ULP0 + 2429 2466 (ulp_num * MEM_DESCR_OFFSET)); 2430 2467 phba->mem_req[mem_descr_index] = 2431 - BEISCSI_GET_CID_COUNT(phba, ulp_num) * 2432 - sizeof(struct hd_async_handle); 2468 + BEISCSI_ASYNC_HDQ_SIZE(phba, ulp_num) * 2469 + sizeof(struct hd_async_handle); 2433 2470 2434 2471 
mem_descr_index = (HWI_MEM_ASYNC_PDU_CONTEXT_ULP0 + 2435 2472 (ulp_num * MEM_DESCR_OFFSET)); 2436 2473 phba->mem_req[mem_descr_index] = 2437 - sizeof(struct hd_async_context) + 2438 - (BEISCSI_GET_CID_COUNT(phba, ulp_num) * 2439 - sizeof(struct hd_async_entry)); 2474 + sizeof(struct hd_async_context) + 2475 + (BEISCSI_ASYNC_HDQ_SIZE(phba, ulp_num) * 2476 + sizeof(struct hd_async_entry)); 2440 2477 } 2441 2478 } 2442 2479 } ··· 2720 2757 ((long unsigned int)pasync_ctx + 2721 2758 sizeof(struct hd_async_context)); 2722 2759 2723 - pasync_ctx->num_entries = BEISCSI_GET_CID_COUNT(phba, 2760 + pasync_ctx->num_entries = BEISCSI_ASYNC_HDQ_SIZE(phba, 2724 2761 ulp_num); 2725 2762 /* setup header buffers */ 2726 2763 mem_descr = (struct be_mem_descriptor *)phba->init_mem; ··· 2739 2776 "BM_%d : No Virtual address for ULP : %d\n", 2740 2777 ulp_num); 2741 2778 2779 + pasync_ctx->async_header.pi = 0; 2742 2780 pasync_ctx->async_header.buffer_size = p->defpdu_hdr_sz; 2743 2781 pasync_ctx->async_header.va_base = 2744 2782 mem_descr->mem_array[0].virtual_address; ··· 2787 2823 2788 2824 pasync_ctx->async_header.handle_base = 2789 2825 mem_descr->mem_array[0].virtual_address; 2790 - INIT_LIST_HEAD(&pasync_ctx->async_header.free_list); 2791 2826 2792 2827 /* setup data buffer sgls */ 2793 2828 mem_descr = (struct be_mem_descriptor *)phba->init_mem; ··· 2820 2857 2821 2858 pasync_ctx->async_data.handle_base = 2822 2859 mem_descr->mem_array[0].virtual_address; 2823 - INIT_LIST_HEAD(&pasync_ctx->async_data.free_list); 2824 2860 2825 2861 pasync_header_h = 2826 2862 (struct hd_async_handle *) ··· 2846 2884 ulp_num); 2847 2885 2848 2886 idx = 0; 2887 + pasync_ctx->async_data.pi = 0; 2849 2888 pasync_ctx->async_data.buffer_size = p->defpdu_data_sz; 2850 2889 pasync_ctx->async_data.va_base = 2851 2890 mem_descr->mem_array[idx].virtual_address; ··· 2858 2895 phba->params.defpdu_data_sz); 2859 2896 num_per_mem = 0; 2860 2897 2861 - for (index = 0; index < BEISCSI_GET_CID_COUNT 2898 + for 
(index = 0; index < BEISCSI_ASYNC_HDQ_SIZE 2862 2899 (phba, ulp_num); index++) { 2863 2900 pasync_header_h->cri = -1; 2864 2901 pasync_header_h->is_header = 1; ··· 2874 2911 pasync_ctx->async_header.pa_base.u.a64. 2875 2912 address + (p->defpdu_hdr_sz * index); 2876 2913 2877 - list_add_tail(&pasync_header_h->link, 2878 - &pasync_ctx->async_header. 2879 - free_list); 2914 + pasync_ctx->async_entry[index].header = 2915 + pasync_header_h; 2880 2916 pasync_header_h++; 2881 - pasync_ctx->async_header.free_entries++; 2882 2917 INIT_LIST_HEAD(&pasync_ctx->async_entry[index]. 2883 2918 wq.list); 2884 - pasync_ctx->async_entry[index].header = NULL; 2885 2919 2886 2920 pasync_data_h->cri = -1; 2887 2921 pasync_data_h->is_header = 0; ··· 2912 2952 num_per_mem++; 2913 2953 num_async_data--; 2914 2954 2915 - list_add_tail(&pasync_data_h->link, 2916 - &pasync_ctx->async_data. 2917 - free_list); 2955 + pasync_ctx->async_entry[index].data = 2956 + pasync_data_h; 2918 2957 pasync_data_h++; 2919 - pasync_ctx->async_data.free_entries++; 2920 - pasync_ctx->async_entry[index].data = NULL; 2921 2958 } 2922 2959 } 2923 2960 } ··· 2997 3040 num_eq_pages = PAGES_REQUIRED(phba->params.num_eq_entries * \ 2998 3041 sizeof(struct be_eq_entry)); 2999 3042 3000 - if (phba->msix_enabled) 3043 + if (phba->pcidev->msix_enabled) 3001 3044 eq_for_mcc = 1; 3002 3045 else 3003 3046 eq_for_mcc = 0; ··· 3507 3550 sizeof(struct be_mcc_compl))) 3508 3551 goto err; 3509 3552 /* Ask BE to create MCC compl queue; */ 3510 - if (phba->msix_enabled) { 3553 + if (phba->pcidev->msix_enabled) { 3511 3554 if (beiscsi_cmd_cq_create(ctrl, cq, &phwi_context->be_eq 3512 3555 [phba->num_cpus].q, false, true, 0)) 3513 3556 goto mcc_cq_free; ··· 3538 3581 return -ENOMEM; 3539 3582 } 3540 3583 3541 - /** 3542 - * find_num_cpus()- Get the CPU online count 3543 - * @phba: ptr to priv structure 3544 - * 3545 - * CPU count is used for creating EQ. 
3546 - **/ 3547 - static void find_num_cpus(struct beiscsi_hba *phba) 3584 + static void be2iscsi_enable_msix(struct beiscsi_hba *phba) 3548 3585 { 3549 - int num_cpus = 0; 3550 - 3551 - num_cpus = num_online_cpus(); 3586 + int nvec = 1; 3552 3587 3553 3588 switch (phba->generation) { 3554 3589 case BE_GEN2: 3555 3590 case BE_GEN3: 3556 - phba->num_cpus = (num_cpus > BEISCSI_MAX_NUM_CPUS) ? 3557 - BEISCSI_MAX_NUM_CPUS : num_cpus; 3591 + nvec = BEISCSI_MAX_NUM_CPUS + 1; 3558 3592 break; 3559 3593 case BE_GEN4: 3560 - /* 3561 - * If eqid_count == 1 fall back to 3562 - * INTX mechanism 3563 - **/ 3564 - if (phba->fw_config.eqid_count == 1) { 3565 - enable_msix = 0; 3566 - phba->num_cpus = 1; 3567 - return; 3568 - } 3569 - 3570 - phba->num_cpus = 3571 - (num_cpus > (phba->fw_config.eqid_count - 1)) ? 3572 - (phba->fw_config.eqid_count - 1) : num_cpus; 3594 + nvec = phba->fw_config.eqid_count; 3573 3595 break; 3574 3596 default: 3575 - phba->num_cpus = 1; 3597 + nvec = 2; 3598 + break; 3576 3599 } 3600 + 3601 + /* if eqid_count == 1 fall back to INTX */ 3602 + if (enable_msix && nvec > 1) { 3603 + const struct irq_affinity desc = { .post_vectors = 1 }; 3604 + 3605 + if (pci_alloc_irq_vectors_affinity(phba->pcidev, 2, nvec, 3606 + PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &desc) < 0) { 3607 + phba->num_cpus = nvec - 1; 3608 + return; 3609 + } 3610 + } 3611 + 3612 + phba->num_cpus = 1; 3577 3613 } 3578 3614 3579 3615 static void hwi_purge_eq(struct beiscsi_hba *phba) ··· 3583 3633 3584 3634 phwi_ctrlr = phba->phwi_ctrlr; 3585 3635 phwi_context = phwi_ctrlr->phwi_ctxt; 3586 - if (phba->msix_enabled) 3636 + if (phba->pcidev->msix_enabled) 3587 3637 eq_msix = 1; 3588 3638 else 3589 3639 eq_msix = 0; ··· 3661 3711 } 3662 3712 3663 3713 be_mcc_queues_destroy(phba); 3664 - if (phba->msix_enabled) 3714 + if (phba->pcidev->msix_enabled) 3665 3715 eq_for_mcc = 1; 3666 3716 else 3667 3717 eq_for_mcc = 0; ··· 3685 3735 unsigned int def_pdu_ring_sz; 3686 3736 struct be_ctrl_info *ctrl = 
&phba->ctrl; 3687 3737 int status, ulp_num; 3738 + u16 nbufs; 3688 3739 3689 3740 phwi_ctrlr = phba->phwi_ctrlr; 3690 3741 phwi_context = phwi_ctrlr->phwi_ctxt; ··· 3722 3771 3723 3772 for (ulp_num = 0; ulp_num < BEISCSI_ULP_COUNT; ulp_num++) { 3724 3773 if (test_bit(ulp_num, &phba->fw_config.ulp_supported)) { 3725 - def_pdu_ring_sz = 3726 - BEISCSI_GET_CID_COUNT(phba, ulp_num) * 3727 - sizeof(struct phys_addr); 3774 + nbufs = phwi_context->pasync_ctx[ulp_num]->num_entries; 3775 + def_pdu_ring_sz = nbufs * sizeof(struct phys_addr); 3728 3776 3729 3777 status = beiscsi_create_def_hdr(phba, phwi_context, 3730 3778 phwi_ctrlr, ··· 3751 3801 * let EP know about it. 3752 3802 */ 3753 3803 beiscsi_hdq_post_handles(phba, BEISCSI_DEFQ_HDR, 3754 - ulp_num); 3804 + ulp_num, nbufs); 3755 3805 beiscsi_hdq_post_handles(phba, BEISCSI_DEFQ_DATA, 3756 - ulp_num); 3806 + ulp_num, nbufs); 3757 3807 } 3758 3808 } 3759 3809 ··· 4107 4157 iowrite32(reg, addr); 4108 4158 } 4109 4159 4110 - if (!phba->msix_enabled) { 4160 + if (!phba->pcidev->msix_enabled) { 4111 4161 eq = &phwi_context->be_eq[0].q; 4112 4162 beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_INIT, 4113 4163 "BM_%d : eq->id=%d\n", eq->id); ··· 5230 5280 msecs_to_jiffies(BEISCSI_EQD_UPDATE_INTERVAL)); 5231 5281 } 5232 5282 5233 - static void beiscsi_msix_enable(struct beiscsi_hba *phba) 5234 - { 5235 - int i, status; 5236 - 5237 - for (i = 0; i <= phba->num_cpus; i++) 5238 - phba->msix_entries[i].entry = i; 5239 - 5240 - status = pci_enable_msix_range(phba->pcidev, phba->msix_entries, 5241 - phba->num_cpus + 1, phba->num_cpus + 1); 5242 - if (status > 0) 5243 - phba->msix_enabled = true; 5244 - } 5245 - 5246 5283 static void beiscsi_hw_tpe_check(unsigned long ptr) 5247 5284 { 5248 5285 struct beiscsi_hba *phba; ··· 5297 5360 if (ret) 5298 5361 return ret; 5299 5362 5300 - if (enable_msix) 5301 - find_num_cpus(phba); 5302 - else 5303 - phba->num_cpus = 1; 5304 - if (enable_msix) { 5305 - beiscsi_msix_enable(phba); 5306 - if 
(!phba->msix_enabled) 5307 - phba->num_cpus = 1; 5308 - } 5363 + be2iscsi_enable_msix(phba); 5309 5364 5310 5365 beiscsi_get_params(phba); 5311 5366 /* Re-enable UER. If different TPE occurs then it is recoverable. */ ··· 5326 5397 irq_poll_init(&pbe_eq->iopoll, be_iopoll_budget, be_iopoll); 5327 5398 } 5328 5399 5329 - i = (phba->msix_enabled) ? i : 0; 5400 + i = (phba->pcidev->msix_enabled) ? i : 0; 5330 5401 /* Work item for MCC handling */ 5331 5402 pbe_eq = &phwi_context->be_eq[i]; 5332 5403 INIT_WORK(&pbe_eq->mcc_work, beiscsi_mcc_work); ··· 5364 5435 hwi_cleanup_port(phba); 5365 5436 5366 5437 disable_msix: 5367 - if (phba->msix_enabled) 5368 - pci_disable_msix(phba->pcidev); 5369 - 5438 + pci_free_irq_vectors(phba->pcidev); 5370 5439 return ret; 5371 5440 } 5372 5441 ··· 5381 5454 struct hwi_context_memory *phwi_context; 5382 5455 struct hwi_controller *phwi_ctrlr; 5383 5456 struct be_eq_obj *pbe_eq; 5384 - unsigned int i, msix_vec; 5457 + unsigned int i; 5385 5458 5386 5459 if (!test_and_clear_bit(BEISCSI_HBA_ONLINE, &phba->state)) 5387 5460 return; ··· 5389 5462 phwi_ctrlr = phba->phwi_ctrlr; 5390 5463 phwi_context = phwi_ctrlr->phwi_ctxt; 5391 5464 hwi_disable_intr(phba); 5392 - if (phba->msix_enabled) { 5465 + if (phba->pcidev->msix_enabled) { 5393 5466 for (i = 0; i <= phba->num_cpus; i++) { 5394 - msix_vec = phba->msix_entries[i].vector; 5395 - free_irq(msix_vec, &phwi_context->be_eq[i]); 5467 + free_irq(pci_irq_vector(phba->pcidev, i), 5468 + &phwi_context->be_eq[i]); 5396 5469 kfree(phba->msi_name[i]); 5397 5470 } 5398 5471 } else 5399 5472 if (phba->pcidev->irq) 5400 5473 free_irq(phba->pcidev->irq, phba); 5401 - pci_disable_msix(phba->pcidev); 5474 + pci_free_irq_vectors(phba->pcidev); 5402 5475 5403 5476 for (i = 0; i < phba->num_cpus; i++) { 5404 5477 pbe_eq = &phwi_context->be_eq[i]; ··· 5608 5681 beiscsi_get_params(phba); 5609 5682 beiscsi_set_uer_feature(phba); 5610 5683 5611 - if (enable_msix) 5612 - find_num_cpus(phba); 5613 - else 5614 - 
phba->num_cpus = 1; 5684 + be2iscsi_enable_msix(phba); 5615 5685 5616 5686 beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_INIT, 5617 5687 "BM_%d : num_cpus = %d\n", 5618 5688 phba->num_cpus); 5619 - 5620 - if (enable_msix) { 5621 - beiscsi_msix_enable(phba); 5622 - if (!phba->msix_enabled) 5623 - phba->num_cpus = 1; 5624 - } 5625 5689 5626 5690 phba->shost->max_id = phba->params.cxns_per_ctrl; 5627 5691 phba->shost->can_queue = phba->params.ios_per_ctrl; ··· 5663 5745 irq_poll_init(&pbe_eq->iopoll, be_iopoll_budget, be_iopoll); 5664 5746 } 5665 5747 5666 - i = (phba->msix_enabled) ? i : 0; 5748 + i = (phba->pcidev->msix_enabled) ? i : 0; 5667 5749 /* Work item for MCC handling */ 5668 5750 pbe_eq = &phwi_context->be_eq[i]; 5669 5751 INIT_WORK(&pbe_eq->mcc_work, beiscsi_mcc_work); ··· 5734 5816 phba->ctrl.mbox_mem_alloced.dma); 5735 5817 beiscsi_unmap_pci_function(phba); 5736 5818 hba_free: 5737 - if (phba->msix_enabled) 5738 - pci_disable_msix(phba->pcidev); 5819 + pci_disable_msix(phba->pcidev); 5739 5820 pci_dev_put(phba->pcidev); 5740 5821 iscsi_host_free(phba->shost); 5741 5822 pci_set_drvdata(pcidev, NULL);
+10 -20
drivers/scsi/be2iscsi/be_main.h
··· 1 - /** 2 - * Copyright (C) 2005 - 2016 Broadcom 3 - * All rights reserved. 1 + /* 2 + * Copyright 2017 Broadcom. All Rights Reserved. 3 + * The term "Broadcom" refers to Broadcom Limited and/or its subsidiaries. 4 4 * 5 5 * This program is free software; you can redistribute it and/or 6 6 * modify it under the terms of the GNU General Public License version 2 7 - * as published by the Free Software Foundation. The full GNU General 7 + * as published by the Free Software Foundation. The full GNU General 8 8 * Public License is included in this distribution in the file called COPYING. 9 - * 10 - * Written by: Jayamohan Kallickal (jayamohan.kallickal@broadcom.com) 11 9 * 12 10 * Contact Information: 13 11 * linux-drivers@broadcom.com 14 12 * 15 - * Emulex 16 - * 3333 Susan Street 17 - * Costa Mesa, CA 92626 18 13 */ 19 14 20 15 #ifndef _BEISCSI_MAIN_ ··· 31 36 #include <scsi/scsi_transport_iscsi.h> 32 37 33 38 #define DRV_NAME "be2iscsi" 34 - #define BUILD_STR "11.2.1.0" 39 + #define BUILD_STR "11.4.0.0" 35 40 #define BE_NAME "Emulex OneConnect" \ 36 41 "Open-iSCSI Driver version" BUILD_STR 37 42 #define DRV_DESC BE_NAME " " "Driver" ··· 230 235 struct hba_parameters { 231 236 unsigned int ios_per_ctrl; 232 237 unsigned int cxns_per_ctrl; 233 - unsigned int asyncpdus_per_ctrl; 234 238 unsigned int icds_per_ctrl; 235 239 unsigned int num_sge_per_io; 236 240 unsigned int defpdu_hdr_sz; ··· 317 323 struct pci_dev *pcidev; 318 324 unsigned int num_cpus; 319 325 unsigned int nxt_cqid; 320 - struct msix_entry msix_entries[MAX_CPUS]; 321 326 char *msi_name[MAX_CPUS]; 322 - bool msix_enabled; 323 327 struct be_mem_descriptor *init_mem; 324 328 325 329 unsigned short io_sgl_alloc_index; ··· 589 597 u16 cri; 590 598 u8 is_header; 591 599 u8 is_final; 600 + u8 in_use; 592 601 }; 602 + 603 + #define BEISCSI_ASYNC_HDQ_SIZE(phba, ulp) \ 604 + (BEISCSI_GET_CID_COUNT((phba), (ulp)) * 2) 593 605 594 606 /** 595 607 * This has list of async PDUs that are waiting to be processed. 
··· 620 624 void *va_base; 621 625 void *ring_base; 622 626 struct hd_async_handle *handle_base; 623 - u16 free_entries; 624 627 u32 buffer_size; 625 - /** 626 - * Once iSCSI layer finishes processing an async PDU, the 627 - * handles used for the PDU are added to this list. 628 - * They are posted back to FW in groups of 8. 629 - */ 630 - struct list_head free_list; 628 + u16 pi; 631 629 }; 632 630 633 631 /**
+70 -70
drivers/scsi/be2iscsi/be_mgmt.c
··· 1 - /** 2 - * Copyright (C) 2005 - 2016 Broadcom 3 - * All rights reserved. 1 + /* 2 + * Copyright 2017 Broadcom. All Rights Reserved. 3 + * The term "Broadcom" refers to Broadcom Limited and/or its subsidiaries. 4 4 * 5 5 * This program is free software; you can redistribute it and/or 6 6 * modify it under the terms of the GNU General Public License version 2 7 - * as published by the Free Software Foundation. The full GNU General 7 + * as published by the Free Software Foundation. The full GNU General 8 8 * Public License is included in this distribution in the file called COPYING. 9 - * 10 - * Written by: Jayamohan Kallickal (jayamohan.kallickal@broadcom.com) 11 9 * 12 10 * Contact Information: 13 11 * linux-drivers@broadcom.com 14 12 * 15 - * Emulex 16 - * 3333 Susan Street 17 - * Costa Mesa, CA 92626 18 13 */ 19 14 20 15 #include <linux/bsg-lib.h> ··· 117 122 118 123 be_mcc_notify(phba, tag); 119 124 120 - mutex_unlock(&ctrl->mbox_lock); 121 - return tag; 122 - } 123 - 124 - unsigned int mgmt_invalidate_connection(struct beiscsi_hba *phba, 125 - struct beiscsi_endpoint *beiscsi_ep, 126 - unsigned short cid, 127 - unsigned short issue_reset, 128 - unsigned short savecfg_flag) 129 - { 130 - struct be_ctrl_info *ctrl = &phba->ctrl; 131 - struct be_mcc_wrb *wrb; 132 - struct iscsi_invalidate_connection_params_in *req; 133 - unsigned int tag = 0; 134 - 135 - mutex_lock(&ctrl->mbox_lock); 136 - wrb = alloc_mcc_wrb(phba, &tag); 137 - if (!wrb) { 138 - mutex_unlock(&ctrl->mbox_lock); 139 - return 0; 140 - } 141 - 142 - req = embedded_payload(wrb); 143 - be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0); 144 - be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI_INI, 145 - OPCODE_ISCSI_INI_DRIVER_INVALIDATE_CONNECTION, 146 - sizeof(*req)); 147 - req->session_handle = beiscsi_ep->fw_handle; 148 - req->cid = cid; 149 - if (issue_reset) 150 - req->cleanup_type = CMD_ISCSI_CONNECTION_ISSUE_TCP_RST; 151 - else 152 - req->cleanup_type = CMD_ISCSI_CONNECTION_INVALIDATE; 153 - 
req->save_cfg = savecfg_flag; 154 - be_mcc_notify(phba, tag); 155 - mutex_unlock(&ctrl->mbox_lock); 156 - return tag; 157 - } 158 - 159 - unsigned int mgmt_upload_connection(struct beiscsi_hba *phba, 160 - unsigned short cid, unsigned int upload_flag) 161 - { 162 - struct be_ctrl_info *ctrl = &phba->ctrl; 163 - struct be_mcc_wrb *wrb; 164 - struct tcp_upload_params_in *req; 165 - unsigned int tag; 166 - 167 - mutex_lock(&ctrl->mbox_lock); 168 - wrb = alloc_mcc_wrb(phba, &tag); 169 - if (!wrb) { 170 - mutex_unlock(&ctrl->mbox_lock); 171 - return 0; 172 - } 173 - 174 - req = embedded_payload(wrb); 175 - be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0); 176 - be_cmd_hdr_prepare(&req->hdr, CMD_COMMON_TCP_UPLOAD, 177 - OPCODE_COMMON_TCP_UPLOAD, sizeof(*req)); 178 - req->id = (unsigned short)cid; 179 - req->upload_type = (unsigned char)upload_flag; 180 - be_mcc_notify(phba, tag); 181 125 mutex_unlock(&ctrl->mbox_lock); 182 126 return tag; 183 127 } ··· 1381 1447 pwrb, 1382 1448 (params->dw[offsetof(struct amap_beiscsi_offload_params, 1383 1449 exp_statsn) / 32] + 1)); 1450 + } 1451 + 1452 + unsigned int beiscsi_invalidate_cxn(struct beiscsi_hba *phba, 1453 + struct beiscsi_endpoint *beiscsi_ep) 1454 + { 1455 + struct be_invalidate_connection_params_in *req; 1456 + struct be_ctrl_info *ctrl = &phba->ctrl; 1457 + struct be_mcc_wrb *wrb; 1458 + unsigned int tag = 0; 1459 + 1460 + mutex_lock(&ctrl->mbox_lock); 1461 + wrb = alloc_mcc_wrb(phba, &tag); 1462 + if (!wrb) { 1463 + mutex_unlock(&ctrl->mbox_lock); 1464 + return 0; 1465 + } 1466 + 1467 + req = embedded_payload(wrb); 1468 + be_wrb_hdr_prepare(wrb, sizeof(union be_invalidate_connection_params), 1469 + true, 0); 1470 + be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI_INI, 1471 + OPCODE_ISCSI_INI_DRIVER_INVALIDATE_CONNECTION, 1472 + sizeof(*req)); 1473 + req->session_handle = beiscsi_ep->fw_handle; 1474 + req->cid = beiscsi_ep->ep_cid; 1475 + if (beiscsi_ep->conn) 1476 + req->cleanup_type = BE_CLEANUP_TYPE_INVALIDATE; 
1477 + else 1478 + req->cleanup_type = BE_CLEANUP_TYPE_ISSUE_TCP_RST; 1479 + /** 1480 + * 0 - non-persistent targets 1481 + * 1 - save session info on flash 1482 + */ 1483 + req->save_cfg = 0; 1484 + be_mcc_notify(phba, tag); 1485 + mutex_unlock(&ctrl->mbox_lock); 1486 + return tag; 1487 + } 1488 + 1489 + unsigned int beiscsi_upload_cxn(struct beiscsi_hba *phba, 1490 + struct beiscsi_endpoint *beiscsi_ep) 1491 + { 1492 + struct be_ctrl_info *ctrl = &phba->ctrl; 1493 + struct be_mcc_wrb *wrb; 1494 + struct be_tcp_upload_params_in *req; 1495 + unsigned int tag; 1496 + 1497 + mutex_lock(&ctrl->mbox_lock); 1498 + wrb = alloc_mcc_wrb(phba, &tag); 1499 + if (!wrb) { 1500 + mutex_unlock(&ctrl->mbox_lock); 1501 + return 0; 1502 + } 1503 + 1504 + req = embedded_payload(wrb); 1505 + be_wrb_hdr_prepare(wrb, sizeof(union be_tcp_upload_params), true, 0); 1506 + be_cmd_hdr_prepare(&req->hdr, CMD_COMMON_TCP_UPLOAD, 1507 + OPCODE_COMMON_TCP_UPLOAD, sizeof(*req)); 1508 + req->id = beiscsi_ep->ep_cid; 1509 + if (beiscsi_ep->conn) 1510 + req->upload_type = BE_UPLOAD_TYPE_GRACEFUL; 1511 + else 1512 + req->upload_type = BE_UPLOAD_TYPE_ABORT; 1513 + be_mcc_notify(phba, tag); 1514 + mutex_unlock(&ctrl->mbox_lock); 1515 + return tag; 1384 1516 } 1385 1517 1386 1518 int beiscsi_mgmt_invalidate_icds(struct beiscsi_hba *phba,
+10 -33
drivers/scsi/be2iscsi/be_mgmt.h
··· 1 - /** 2 - * Copyright (C) 2005 - 2016 Broadcom 3 - * All rights reserved. 1 + /* 2 + * Copyright 2017 Broadcom. All Rights Reserved. 3 + * The term "Broadcom" refers to Broadcom Limited and/or its subsidiaries. 4 4 * 5 5 * This program is free software; you can redistribute it and/or 6 6 * modify it under the terms of the GNU General Public License version 2 7 - * as published by the Free Software Foundation. The full GNU General 7 + * as published by the Free Software Foundation. The full GNU General 8 8 * Public License is included in this distribution in the file called COPYING. 9 - * 10 - * Written by: Jayamohan Kallickal (jayamohan.kallickal@broadcom.com) 11 9 * 12 10 * Contact Information: 13 11 * linux-drivers@broadcom.com 14 12 * 15 - * Emulex 16 - * 3333 Susan Street 17 - * Costa Mesa, CA 92626 18 13 */ 19 14 20 15 #ifndef _BEISCSI_MGMT_ ··· 36 41 struct beiscsi_endpoint *beiscsi_ep, 37 42 struct be_dma_mem *nonemb_cmd); 38 43 39 - unsigned int mgmt_upload_connection(struct beiscsi_hba *phba, 40 - unsigned short cid, 41 - unsigned int upload_flag); 42 44 unsigned int mgmt_vendor_specific_fw_cmd(struct be_ctrl_info *ctrl, 43 45 struct beiscsi_hba *phba, 44 46 struct bsg_job *job, 45 47 struct be_dma_mem *nonemb_cmd); 46 - 47 - #define BEISCSI_NO_RST_ISSUE 0 48 - struct iscsi_invalidate_connection_params_in { 49 - struct be_cmd_req_hdr hdr; 50 - unsigned int session_handle; 51 - unsigned short cid; 52 - unsigned short unused; 53 - unsigned short cleanup_type; 54 - unsigned short save_cfg; 55 - } __packed; 56 - 57 - struct iscsi_invalidate_connection_params_out { 58 - unsigned int session_handle; 59 - unsigned short cid; 60 - unsigned short unused; 61 - } __packed; 62 - 63 - union iscsi_invalidate_connection_params { 64 - struct iscsi_invalidate_connection_params_in request; 65 - struct iscsi_invalidate_connection_params_out response; 66 - } __packed; 67 48 68 49 #define BE_INVLDT_CMD_TBL_SZ 128 69 50 struct invldt_cmd_tbl { ··· 235 264 void 
beiscsi_offload_cxn_v2(struct beiscsi_offload_params *params, 236 265 struct wrb_handle *pwrb_handle, 237 266 struct hwi_wrb_context *pwrb_context); 267 + 268 + unsigned int beiscsi_invalidate_cxn(struct beiscsi_hba *phba, 269 + struct beiscsi_endpoint *beiscsi_ep); 270 + 271 + unsigned int beiscsi_upload_cxn(struct beiscsi_hba *phba, 272 + struct beiscsi_endpoint *beiscsi_ep); 238 273 239 274 int be_cmd_modify_eq_delay(struct beiscsi_hba *phba, 240 275 struct be_set_eqd *, int num);
+31 -35
drivers/scsi/bfa/bfa_core.c
··· 23 23 BFA_TRC_FILE(HAL, CORE); 24 24 25 25 /* 26 - * BFA module list terminated by NULL 27 - */ 28 - static struct bfa_module_s *hal_mods[] = { 29 - &hal_mod_fcdiag, 30 - &hal_mod_sgpg, 31 - &hal_mod_fcport, 32 - &hal_mod_fcxp, 33 - &hal_mod_lps, 34 - &hal_mod_uf, 35 - &hal_mod_rport, 36 - &hal_mod_fcp, 37 - &hal_mod_dconf, 38 - NULL 39 - }; 40 - 41 - /* 42 26 * Message handlers for various modules. 43 27 */ 44 28 static bfa_isr_func_t bfa_isrs[BFI_MC_MAX] = { ··· 1175 1191 for (i = 0; i < BFI_IOC_MAX_CQS; i++) 1176 1192 bfa_isr_rspq_ack(bfa, i, bfa_rspq_ci(bfa, i)); 1177 1193 1178 - for (i = 0; hal_mods[i]; i++) 1179 - hal_mods[i]->start(bfa); 1194 + bfa_fcport_start(bfa); 1195 + bfa_uf_start(bfa); 1196 + /* 1197 + * bfa_init() with flash read is complete. now invalidate the stale 1198 + * content of lun mask like unit attention, rp tag and lp tag. 1199 + */ 1200 + bfa_ioim_lm_init(BFA_FCP_MOD(bfa)->bfa); 1180 1201 1181 1202 bfa->iocfc.submod_enabled = BFA_TRUE; 1182 1203 } ··· 1192 1203 static void 1193 1204 bfa_iocfc_disable_submod(struct bfa_s *bfa) 1194 1205 { 1195 - int i; 1196 - 1197 1206 if (bfa->iocfc.submod_enabled == BFA_FALSE) 1198 1207 return; 1199 1208 1200 - for (i = 0; hal_mods[i]; i++) 1201 - hal_mods[i]->iocdisable(bfa); 1209 + bfa_fcdiag_iocdisable(bfa); 1210 + bfa_fcport_iocdisable(bfa); 1211 + bfa_fcxp_iocdisable(bfa); 1212 + bfa_lps_iocdisable(bfa); 1213 + bfa_rport_iocdisable(bfa); 1214 + bfa_fcp_iocdisable(bfa); 1215 + bfa_dconf_iocdisable(bfa); 1202 1216 1203 1217 bfa->iocfc.submod_enabled = BFA_FALSE; 1204 1218 } ··· 1765 1773 bfa_cfg_get_meminfo(struct bfa_iocfc_cfg_s *cfg, struct bfa_meminfo_s *meminfo, 1766 1774 struct bfa_s *bfa) 1767 1775 { 1768 - int i; 1769 1776 struct bfa_mem_dma_s *port_dma = BFA_MEM_PORT_DMA(bfa); 1770 1777 struct bfa_mem_dma_s *ablk_dma = BFA_MEM_ABLK_DMA(bfa); 1771 1778 struct bfa_mem_dma_s *cee_dma = BFA_MEM_CEE_DMA(bfa); ··· 1783 1792 INIT_LIST_HEAD(&meminfo->kva_info.qe); 1784 1793 1785 1794 
bfa_iocfc_meminfo(cfg, meminfo, bfa); 1786 - 1787 - for (i = 0; hal_mods[i]; i++) 1788 - hal_mods[i]->meminfo(cfg, meminfo, bfa); 1795 + bfa_sgpg_meminfo(cfg, meminfo, bfa); 1796 + bfa_fcport_meminfo(cfg, meminfo, bfa); 1797 + bfa_fcxp_meminfo(cfg, meminfo, bfa); 1798 + bfa_lps_meminfo(cfg, meminfo, bfa); 1799 + bfa_uf_meminfo(cfg, meminfo, bfa); 1800 + bfa_rport_meminfo(cfg, meminfo, bfa); 1801 + bfa_fcp_meminfo(cfg, meminfo, bfa); 1802 + bfa_dconf_meminfo(cfg, meminfo, bfa); 1789 1803 1790 1804 /* dma info setup */ 1791 1805 bfa_mem_dma_setup(meminfo, port_dma, bfa_port_meminfo()); ··· 1836 1840 bfa_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, 1837 1841 struct bfa_meminfo_s *meminfo, struct bfa_pcidev_s *pcidev) 1838 1842 { 1839 - int i; 1840 1843 struct bfa_mem_dma_s *dma_info, *dma_elem; 1841 1844 struct bfa_mem_kva_s *kva_info, *kva_elem; 1842 1845 struct list_head *dm_qe, *km_qe; ··· 1864 1869 } 1865 1870 1866 1871 bfa_iocfc_attach(bfa, bfad, cfg, pcidev); 1867 - 1868 - for (i = 0; hal_mods[i]; i++) 1869 - hal_mods[i]->attach(bfa, bfad, cfg, pcidev); 1870 - 1872 + bfa_fcdiag_attach(bfa, bfad, cfg, pcidev); 1873 + bfa_sgpg_attach(bfa, bfad, cfg, pcidev); 1874 + bfa_fcport_attach(bfa, bfad, cfg, pcidev); 1875 + bfa_fcxp_attach(bfa, bfad, cfg, pcidev); 1876 + bfa_lps_attach(bfa, bfad, cfg, pcidev); 1877 + bfa_uf_attach(bfa, bfad, cfg, pcidev); 1878 + bfa_rport_attach(bfa, bfad, cfg, pcidev); 1879 + bfa_fcp_attach(bfa, bfad, cfg, pcidev); 1880 + bfa_dconf_attach(bfa, bfad, cfg); 1871 1881 bfa_com_port_attach(bfa); 1872 1882 bfa_com_ablk_attach(bfa); 1873 1883 bfa_com_cee_attach(bfa); ··· 1899 1899 void 1900 1900 bfa_detach(struct bfa_s *bfa) 1901 1901 { 1902 - int i; 1903 - 1904 - for (i = 0; hal_mods[i]; i++) 1905 - hal_mods[i]->detach(bfa); 1906 1902 bfa_ioc_detach(&bfa->ioc); 1907 1903 } 1908 1904
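The bfa_core.c hunks above drop the NULL-terminated `hal_mods[]` dispatch table in favor of direct per-module calls, which in turn lets the empty `detach`/`start`/`stop` stubs elsewhere in the series disappear. The removed idiom was a sentinel-terminated ops walk, roughly like this sketch (toy modules, not the real bfa types):

```c
#include <assert.h>
#include <stddef.h>

struct module_ops {
	const char *name;
	void (*start)(int *ctx);
};

static void mod_a_start(int *ctx) { *ctx += 1; }
static void mod_b_start(int *ctx) { *ctx += 10; }

static const struct module_ops mod_a = { "a", mod_a_start };
static const struct module_ops mod_b = { "b", mod_b_start };

/* NULL-terminated table, as hal_mods[] was before the change. */
static const struct module_ops *mods[] = { &mod_a, &mod_b, NULL };

int start_all(int ctx)
{
	/* for (i = 0; hal_mods[i]; i++) hal_mods[i]->start(bfa); */
	for (int i = 0; mods[i]; i++)
		mods[i]->start(&ctx);
	return ctx;
}
```

The trade-off the commit makes: direct calls lose the pluggability of the table but make the call graph visible to the compiler and remove the obligation for every module to provide all six entry points.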
+5 -32
drivers/scsi/bfa/bfa_fcpim.c
··· 25 25 * BFA ITNIM Related definitions 26 26 */ 27 27 static void bfa_itnim_update_del_itn_stats(struct bfa_itnim_s *itnim); 28 - static void bfa_ioim_lm_init(struct bfa_s *bfa); 29 28 30 29 #define BFA_ITNIM_FROM_TAG(_fcpim, _tag) \ 31 30 (((_fcpim)->itnim_arr + ((_tag) & ((_fcpim)->num_itnims - 1)))) ··· 338 339 bfa_ioim_attach(fcpim); 339 340 } 340 341 341 - static void 342 + void 342 343 bfa_fcpim_iocdisable(struct bfa_fcp_mod_s *fcp) 343 344 { 344 345 struct bfa_fcpim_s *fcpim = &fcp->fcpim; ··· 2104 2105 * is complete by driver. now invalidate the stale content of lun mask 2105 2106 * like unit attention, rp tag and lp tag. 2106 2107 */ 2107 - static void 2108 + void 2108 2109 bfa_ioim_lm_init(struct bfa_s *bfa) 2109 2110 { 2110 2111 struct bfa_lun_mask_s *lunm_list; ··· 3633 3634 } 3634 3635 } 3635 3636 3636 - /* BFA FCP module - parent module for fcpim */ 3637 - 3638 - BFA_MODULE(fcp); 3639 - 3640 - static void 3637 + void 3641 3638 bfa_fcp_meminfo(struct bfa_iocfc_cfg_s *cfg, struct bfa_meminfo_s *minfo, 3642 3639 struct bfa_s *bfa) 3643 3640 { ··· 3691 3696 bfa_mem_kva_setup(minfo, fcp_kva, km_len); 3692 3697 } 3693 3698 3694 - static void 3699 + void 3695 3700 bfa_fcp_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, 3696 3701 struct bfa_pcidev_s *pcidev) 3697 3702 { ··· 3734 3739 (fcp->num_itns * sizeof(struct bfa_itn_s))); 3735 3740 } 3736 3741 3737 - static void 3738 - bfa_fcp_detach(struct bfa_s *bfa) 3739 - { 3740 - } 3741 - 3742 - static void 3743 - bfa_fcp_start(struct bfa_s *bfa) 3744 - { 3745 - struct bfa_fcp_mod_s *fcp = BFA_FCP_MOD(bfa); 3746 - 3747 - /* 3748 - * bfa_init() with flash read is complete. now invalidate the stale 3749 - * content of lun mask like unit attention, rp tag and lp tag. 
3750 - */ 3751 - bfa_ioim_lm_init(fcp->bfa); 3752 - } 3753 - 3754 - static void 3755 - bfa_fcp_stop(struct bfa_s *bfa) 3756 - { 3757 - } 3758 - 3759 - static void 3742 + void 3760 3743 bfa_fcp_iocdisable(struct bfa_s *bfa) 3761 3744 { 3762 3745 struct bfa_fcp_mod_s *fcp = BFA_FCP_MOD(bfa);
+21 -10
drivers/scsi/bfa/bfa_fcs_lport.c
··· 89 89 void (*online) (struct bfa_fcs_lport_s *port); 90 90 void (*offline) (struct bfa_fcs_lport_s *port); 91 91 } __port_action[] = { 92 - { 93 - bfa_fcs_lport_unknown_init, bfa_fcs_lport_unknown_online, 94 - bfa_fcs_lport_unknown_offline}, { 95 - bfa_fcs_lport_fab_init, bfa_fcs_lport_fab_online, 96 - bfa_fcs_lport_fab_offline}, { 97 - bfa_fcs_lport_n2n_init, bfa_fcs_lport_n2n_online, 98 - bfa_fcs_lport_n2n_offline}, { 99 - bfa_fcs_lport_loop_init, bfa_fcs_lport_loop_online, 100 - bfa_fcs_lport_loop_offline}, 101 - }; 92 + [BFA_FCS_FABRIC_UNKNOWN] = { 93 + .init = bfa_fcs_lport_unknown_init, 94 + .online = bfa_fcs_lport_unknown_online, 95 + .offline = bfa_fcs_lport_unknown_offline 96 + }, 97 + [BFA_FCS_FABRIC_SWITCHED] = { 98 + .init = bfa_fcs_lport_fab_init, 99 + .online = bfa_fcs_lport_fab_online, 100 + .offline = bfa_fcs_lport_fab_offline 101 + }, 102 + [BFA_FCS_FABRIC_N2N] = { 103 + .init = bfa_fcs_lport_n2n_init, 104 + .online = bfa_fcs_lport_n2n_online, 105 + .offline = bfa_fcs_lport_n2n_offline 106 + }, 107 + [BFA_FCS_FABRIC_LOOP] = { 108 + .init = bfa_fcs_lport_loop_init, 109 + .online = bfa_fcs_lport_loop_online, 110 + .offline = bfa_fcs_lport_loop_offline 111 + }, 112 + }; 102 113 103 114 /* 104 115 * fcs_port_sm FCS logical port state machine
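The lport hunk above rewrites a positional initializer into C99 designated initializers keyed by the fabric-type enum, so each entry's array index and member names are explicit and reordering the enum can no longer silently misalign handlers. A self-contained sketch of the same idiom (the enum values and handlers here are illustrative, not the bfa ones):

```c
#include <assert.h>

enum fabric_type { FABRIC_UNKNOWN, FABRIC_SWITCHED, FABRIC_N2N, FABRIC_LOOP };

struct port_action {
	int (*init)(void);
};

static int unknown_init(void)  { return 0; }
static int switched_init(void) { return 1; }
static int n2n_init(void)      { return 2; }
static int loop_init(void)     { return 3; }

/* Designated initializers: index tied to the enum, member named explicitly. */
static const struct port_action port_action[] = {
	[FABRIC_UNKNOWN]  = { .init = unknown_init  },
	[FABRIC_SWITCHED] = { .init = switched_init },
	[FABRIC_N2N]      = { .init = n2n_init      },
	[FABRIC_LOOP]     = { .init = loop_init     },
};

int run_init(enum fabric_type t)
{
	return port_action[t].init();
}
```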
+5 -25
drivers/scsi/bfa/bfa_ioc.c
··· 5822 5822 } 5823 5823 5824 5824 /* 5825 - * DCONF module specific 5826 - */ 5827 - 5828 - BFA_MODULE(dconf); 5829 - 5830 - /* 5831 5825 * DCONF state machine events 5832 5826 */ 5833 5827 enum bfa_dconf_event { ··· 6067 6073 /* 6068 6074 * Compute and return memory needed by DRV_CFG module. 6069 6075 */ 6070 - static void 6076 + void 6071 6077 bfa_dconf_meminfo(struct bfa_iocfc_cfg_s *cfg, struct bfa_meminfo_s *meminfo, 6072 6078 struct bfa_s *bfa) 6073 6079 { ··· 6081 6087 sizeof(struct bfa_dconf_s)); 6082 6088 } 6083 6089 6084 - static void 6085 - bfa_dconf_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, 6086 - struct bfa_pcidev_s *pcidev) 6090 + void 6091 + bfa_dconf_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg) 6087 6092 { 6088 6093 struct bfa_dconf_mod_s *dconf = BFA_DCONF_MOD(bfa); 6089 6094 ··· 6127 6134 struct bfa_dconf_mod_s *dconf = BFA_DCONF_MOD(bfa); 6128 6135 bfa_sm_send_event(dconf, BFA_DCONF_SM_INIT); 6129 6136 } 6130 - static void 6131 - bfa_dconf_start(struct bfa_s *bfa) 6132 - { 6133 - } 6134 - 6135 - static void 6136 - bfa_dconf_stop(struct bfa_s *bfa) 6137 - { 6138 - } 6139 6137 6140 6138 static void bfa_dconf_timer(void *cbarg) 6141 6139 { 6142 6140 struct bfa_dconf_mod_s *dconf = cbarg; 6143 6141 bfa_sm_send_event(dconf, BFA_DCONF_SM_TIMEOUT); 6144 6142 } 6145 - static void 6143 + 6144 + void 6146 6145 bfa_dconf_iocdisable(struct bfa_s *bfa) 6147 6146 { 6148 6147 struct bfa_dconf_mod_s *dconf = BFA_DCONF_MOD(bfa); 6149 6148 bfa_sm_send_event(dconf, BFA_DCONF_SM_IOCDISABLE); 6150 - } 6151 - 6152 - static void 6153 - bfa_dconf_detach(struct bfa_s *bfa) 6154 - { 6155 6149 } 6156 6150 6157 6151 static bfa_status_t
+46 -55
drivers/scsi/bfa/bfa_modules.h
··· 61 61 BFA_TRC_HAL_IOCFC_CB = 5, 62 62 }; 63 63 64 - /* 65 - * Macro to define a new BFA module 66 - */ 67 - #define BFA_MODULE(__mod) \ 68 - static void bfa_ ## __mod ## _meminfo( \ 69 - struct bfa_iocfc_cfg_s *cfg, \ 70 - struct bfa_meminfo_s *meminfo, \ 71 - struct bfa_s *bfa); \ 72 - static void bfa_ ## __mod ## _attach(struct bfa_s *bfa, \ 73 - void *bfad, struct bfa_iocfc_cfg_s *cfg, \ 74 - struct bfa_pcidev_s *pcidev); \ 75 - static void bfa_ ## __mod ## _detach(struct bfa_s *bfa); \ 76 - static void bfa_ ## __mod ## _start(struct bfa_s *bfa); \ 77 - static void bfa_ ## __mod ## _stop(struct bfa_s *bfa); \ 78 - static void bfa_ ## __mod ## _iocdisable(struct bfa_s *bfa); \ 79 - \ 80 - extern struct bfa_module_s hal_mod_ ## __mod; \ 81 - struct bfa_module_s hal_mod_ ## __mod = { \ 82 - bfa_ ## __mod ## _meminfo, \ 83 - bfa_ ## __mod ## _attach, \ 84 - bfa_ ## __mod ## _detach, \ 85 - bfa_ ## __mod ## _start, \ 86 - bfa_ ## __mod ## _stop, \ 87 - bfa_ ## __mod ## _iocdisable, \ 88 - } 89 - 90 64 #define BFA_CACHELINE_SZ (256) 91 - 92 - /* 93 - * Structure used to interact between different BFA sub modules 94 - * 95 - * Each sub module needs to implement only the entry points relevant to it (and 96 - * can leave entry points as NULL) 97 - */ 98 - struct bfa_module_s { 99 - void (*meminfo) (struct bfa_iocfc_cfg_s *cfg, 100 - struct bfa_meminfo_s *meminfo, 101 - struct bfa_s *bfa); 102 - void (*attach) (struct bfa_s *bfa, void *bfad, 103 - struct bfa_iocfc_cfg_s *cfg, 104 - struct bfa_pcidev_s *pcidev); 105 - void (*detach) (struct bfa_s *bfa); 106 - void (*start) (struct bfa_s *bfa); 107 - void (*stop) (struct bfa_s *bfa); 108 - void (*iocdisable) (struct bfa_s *bfa); 109 - }; 110 - 111 65 112 66 struct bfa_s { 113 67 void *bfad; /* BFA driver instance */ ··· 81 127 }; 82 128 83 129 extern bfa_boolean_t bfa_auto_recover; 84 - extern struct bfa_module_s hal_mod_fcdiag; 85 - extern struct bfa_module_s hal_mod_sgpg; 86 - extern struct bfa_module_s 
hal_mod_fcport; 87 - extern struct bfa_module_s hal_mod_fcxp; 88 - extern struct bfa_module_s hal_mod_lps; 89 - extern struct bfa_module_s hal_mod_uf; 90 - extern struct bfa_module_s hal_mod_rport; 91 - extern struct bfa_module_s hal_mod_fcp; 92 - extern struct bfa_module_s hal_mod_dconf; 130 + 131 + void bfa_dconf_attach(struct bfa_s *, void *, struct bfa_iocfc_cfg_s *); 132 + void bfa_dconf_meminfo(struct bfa_iocfc_cfg_s *, struct bfa_meminfo_s *, 133 + struct bfa_s *); 134 + void bfa_dconf_iocdisable(struct bfa_s *); 135 + void bfa_fcp_attach(struct bfa_s *, void *, struct bfa_iocfc_cfg_s *, 136 + struct bfa_pcidev_s *); 137 + void bfa_fcp_iocdisable(struct bfa_s *bfa); 138 + void bfa_fcp_meminfo(struct bfa_iocfc_cfg_s *, struct bfa_meminfo_s *, 139 + struct bfa_s *); 140 + void bfa_fcpim_iocdisable(struct bfa_fcp_mod_s *); 141 + void bfa_fcport_start(struct bfa_s *); 142 + void bfa_fcport_iocdisable(struct bfa_s *); 143 + void bfa_fcport_meminfo(struct bfa_iocfc_cfg_s *, struct bfa_meminfo_s *, 144 + struct bfa_s *); 145 + void bfa_fcport_attach(struct bfa_s *, void *, struct bfa_iocfc_cfg_s *, 146 + struct bfa_pcidev_s *); 147 + void bfa_fcxp_iocdisable(struct bfa_s *); 148 + void bfa_fcxp_meminfo(struct bfa_iocfc_cfg_s *, struct bfa_meminfo_s *, 149 + struct bfa_s *); 150 + void bfa_fcxp_attach(struct bfa_s *, void *, struct bfa_iocfc_cfg_s *, 151 + struct bfa_pcidev_s *); 152 + void bfa_fcdiag_iocdisable(struct bfa_s *); 153 + void bfa_fcdiag_attach(struct bfa_s *bfa, void *, struct bfa_iocfc_cfg_s *, 154 + struct bfa_pcidev_s *); 155 + void bfa_ioim_lm_init(struct bfa_s *); 156 + void bfa_lps_iocdisable(struct bfa_s *bfa); 157 + void bfa_lps_meminfo(struct bfa_iocfc_cfg_s *, struct bfa_meminfo_s *, 158 + struct bfa_s *); 159 + void bfa_lps_attach(struct bfa_s *, void *, struct bfa_iocfc_cfg_s *, 160 + struct bfa_pcidev_s *); 161 + void bfa_rport_iocdisable(struct bfa_s *bfa); 162 + void bfa_rport_meminfo(struct bfa_iocfc_cfg_s *, struct bfa_meminfo_s *, 163 
+ struct bfa_s *); 164 + void bfa_rport_attach(struct bfa_s *, void *, struct bfa_iocfc_cfg_s *, 165 + struct bfa_pcidev_s *); 166 + void bfa_sgpg_meminfo(struct bfa_iocfc_cfg_s *, struct bfa_meminfo_s *, 167 + struct bfa_s *); 168 + void bfa_sgpg_attach(struct bfa_s *, void *bfad, struct bfa_iocfc_cfg_s *, 169 + struct bfa_pcidev_s *); 170 + void bfa_uf_iocdisable(struct bfa_s *); 171 + void bfa_uf_meminfo(struct bfa_iocfc_cfg_s *, struct bfa_meminfo_s *, 172 + struct bfa_s *); 173 + void bfa_uf_attach(struct bfa_s *, void *, struct bfa_iocfc_cfg_s *, 174 + struct bfa_pcidev_s *); 175 + void bfa_uf_start(struct bfa_s *); 93 176 94 177 #endif /* __BFA_MODULES_H__ */
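The removed `BFA_MODULE(__mod)` macro used `##` token pasting to declare six static handlers and emit a filled-in `struct bfa_module_s` per module; the header now lists plain prototypes instead. A toy reconstruction of the paste-generated ops-struct idiom, trimmed to two entry points (names are illustrative, not the bfa ones):

```c
#include <assert.h>

struct ops {
	int (*start)(void);
	int (*stop)(void);
};

/*
 * Token pasting generates bfa_<mod>_start/stop declarations plus an
 * ops struct wiring them up, as BFA_MODULE() did for six entry points.
 */
#define DEFINE_MODULE(mod)                    \
	static int bfa_##mod##_start(void);   \
	static int bfa_##mod##_stop(void);    \
	static const struct ops hal_mod_##mod = { \
		bfa_##mod##_start,            \
		bfa_##mod##_stop,             \
	}

DEFINE_MODULE(uf);

static int bfa_uf_start(void) { return 42; }
static int bfa_uf_stop(void)  { return 7; }

int call_uf_start(void) { return hal_mod_uf.start(); }
int call_uf_stop(void)  { return hal_mod_uf.stop(); }
```

The macro's cost, which this series pays down, is that every module must define all the generated names even when most are empty stubs.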
+21 -151
drivers/scsi/bfa/bfa_svc.c
··· 23 23 #include "bfa_modules.h" 24 24 25 25 BFA_TRC_FILE(HAL, FCXP); 26 - BFA_MODULE(fcdiag); 27 - BFA_MODULE(fcxp); 28 - BFA_MODULE(sgpg); 29 - BFA_MODULE(lps); 30 - BFA_MODULE(fcport); 31 - BFA_MODULE(rport); 32 - BFA_MODULE(uf); 33 26 34 27 /* 35 28 * LPS related definitions ··· 114 121 /* 115 122 * forward declarations for LPS functions 116 123 */ 117 - static void bfa_lps_meminfo(struct bfa_iocfc_cfg_s *cfg, 118 - struct bfa_meminfo_s *minfo, struct bfa_s *bfa); 119 - static void bfa_lps_attach(struct bfa_s *bfa, void *bfad, 120 - struct bfa_iocfc_cfg_s *cfg, 121 - struct bfa_pcidev_s *pcidev); 122 - static void bfa_lps_detach(struct bfa_s *bfa); 123 - static void bfa_lps_start(struct bfa_s *bfa); 124 - static void bfa_lps_stop(struct bfa_s *bfa); 125 - static void bfa_lps_iocdisable(struct bfa_s *bfa); 126 124 static void bfa_lps_login_rsp(struct bfa_s *bfa, 127 125 struct bfi_lps_login_rsp_s *rsp); 128 126 static void bfa_lps_no_res(struct bfa_lps_s *first_lps, u8 count); ··· 468 484 bfa_mem_kva_curp(mod) = (void *)fcxp; 469 485 } 470 486 471 - static void 487 + void 472 488 bfa_fcxp_meminfo(struct bfa_iocfc_cfg_s *cfg, struct bfa_meminfo_s *minfo, 473 489 struct bfa_s *bfa) 474 490 { ··· 506 522 cfg->fwcfg.num_fcxp_reqs * sizeof(struct bfa_fcxp_s)); 507 523 } 508 524 509 - static void 525 + void 510 526 bfa_fcxp_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, 511 527 struct bfa_pcidev_s *pcidev) 512 528 { ··· 528 544 claim_fcxps_mem(mod); 529 545 } 530 546 531 - static void 532 - bfa_fcxp_detach(struct bfa_s *bfa) 533 - { 534 - } 535 - 536 - static void 537 - bfa_fcxp_start(struct bfa_s *bfa) 538 - { 539 - } 540 - 541 - static void 542 - bfa_fcxp_stop(struct bfa_s *bfa) 543 - { 544 - } 545 - 546 - static void 547 + void 547 548 bfa_fcxp_iocdisable(struct bfa_s *bfa) 548 549 { 549 550 struct bfa_fcxp_mod_s *mod = BFA_FCXP_MOD(bfa); ··· 1479 1510 /* 1480 1511 * return memory requirement 1481 1512 */ 1482 - static void 1513 + void 1483 
1514 bfa_lps_meminfo(struct bfa_iocfc_cfg_s *cfg, struct bfa_meminfo_s *minfo, 1484 1515 struct bfa_s *bfa) 1485 1516 { ··· 1496 1527 /* 1497 1528 * bfa module attach at initialization time 1498 1529 */ 1499 - static void 1530 + void 1500 1531 bfa_lps_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, 1501 1532 struct bfa_pcidev_s *pcidev) 1502 1533 { ··· 1526 1557 } 1527 1558 } 1528 1559 1529 - static void 1530 - bfa_lps_detach(struct bfa_s *bfa) 1531 - { 1532 - } 1533 - 1534 - static void 1535 - bfa_lps_start(struct bfa_s *bfa) 1536 - { 1537 - } 1538 - 1539 - static void 1540 - bfa_lps_stop(struct bfa_s *bfa) 1541 - { 1542 - } 1543 - 1544 1560 /* 1545 1561 * IOC in disabled state -- consider all lps offline 1546 1562 */ 1547 - static void 1563 + void 1548 1564 bfa_lps_iocdisable(struct bfa_s *bfa) 1549 1565 { 1550 1566 struct bfa_lps_mod_s *mod = BFA_LPS_MOD(bfa); ··· 3009 3055 #define FCPORT_STATS_DMA_SZ (BFA_ROUNDUP(sizeof(union bfa_fcport_stats_u), \ 3010 3056 BFA_CACHELINE_SZ)) 3011 3057 3012 - static void 3058 + void 3013 3059 bfa_fcport_meminfo(struct bfa_iocfc_cfg_s *cfg, struct bfa_meminfo_s *minfo, 3014 3060 struct bfa_s *bfa) 3015 3061 { ··· 3040 3086 /* 3041 3087 * Memory initialization. 3042 3088 */ 3043 - static void 3089 + void 3044 3090 bfa_fcport_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, 3045 3091 struct bfa_pcidev_s *pcidev) 3046 3092 { ··· 3085 3131 bfa_reqq_winit(&fcport->reqq_wait, bfa_fcport_qresume, fcport); 3086 3132 } 3087 3133 3088 - static void 3089 - bfa_fcport_detach(struct bfa_s *bfa) 3090 - { 3091 - } 3092 - 3093 - /* 3094 - * Called when IOC is ready. 3095 - */ 3096 - static void 3134 + void 3097 3135 bfa_fcport_start(struct bfa_s *bfa) 3098 3136 { 3099 3137 bfa_sm_send_event(BFA_FCPORT_MOD(bfa), BFA_FCPORT_SM_START); 3100 3138 } 3101 3139 3102 3140 /* 3103 - * Called before IOC is stopped. 
3104 - */ 3105 - static void 3106 - bfa_fcport_stop(struct bfa_s *bfa) 3107 - { 3108 - bfa_sm_send_event(BFA_FCPORT_MOD(bfa), BFA_FCPORT_SM_STOP); 3109 - bfa_trunk_iocdisable(bfa); 3110 - } 3111 - 3112 - /* 3113 3141 * Called when IOC failure is detected. 3114 3142 */ 3115 - static void 3143 + void 3116 3144 bfa_fcport_iocdisable(struct bfa_s *bfa) 3117 3145 { 3118 3146 struct bfa_fcport_s *fcport = BFA_FCPORT_MOD(bfa); ··· 4822 4886 bfa_sm_send_event(rp, BFA_RPORT_SM_QRESUME); 4823 4887 } 4824 4888 4825 - static void 4889 + void 4826 4890 bfa_rport_meminfo(struct bfa_iocfc_cfg_s *cfg, struct bfa_meminfo_s *minfo, 4827 4891 struct bfa_s *bfa) 4828 4892 { ··· 4836 4900 cfg->fwcfg.num_rports * sizeof(struct bfa_rport_s)); 4837 4901 } 4838 4902 4839 - static void 4903 + void 4840 4904 bfa_rport_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, 4841 4905 struct bfa_pcidev_s *pcidev) 4842 4906 { ··· 4876 4940 bfa_mem_kva_curp(mod) = (u8 *) rp; 4877 4941 } 4878 4942 4879 - static void 4880 - bfa_rport_detach(struct bfa_s *bfa) 4881 - { 4882 - } 4883 - 4884 - static void 4885 - bfa_rport_start(struct bfa_s *bfa) 4886 - { 4887 - } 4888 - 4889 - static void 4890 - bfa_rport_stop(struct bfa_s *bfa) 4891 - { 4892 - } 4893 - 4894 - static void 4943 + void 4895 4944 bfa_rport_iocdisable(struct bfa_s *bfa) 4896 4945 { 4897 4946 struct bfa_rport_mod_s *mod = BFA_RPORT_MOD(bfa); ··· 5167 5246 /* 5168 5247 * Compute and return memory needed by FCP(im) module. 
5169 5248 */ 5170 - static void 5249 + void 5171 5250 bfa_sgpg_meminfo(struct bfa_iocfc_cfg_s *cfg, struct bfa_meminfo_s *minfo, 5172 5251 struct bfa_s *bfa) 5173 5252 { ··· 5202 5281 cfg->drvcfg.num_sgpgs * sizeof(struct bfa_sgpg_s)); 5203 5282 } 5204 5283 5205 - static void 5284 + void 5206 5285 bfa_sgpg_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, 5207 5286 struct bfa_pcidev_s *pcidev) 5208 5287 { ··· 5263 5342 } 5264 5343 5265 5344 bfa_mem_kva_curp(mod) = (u8 *) hsgpg; 5266 - } 5267 - 5268 - static void 5269 - bfa_sgpg_detach(struct bfa_s *bfa) 5270 - { 5271 - } 5272 - 5273 - static void 5274 - bfa_sgpg_start(struct bfa_s *bfa) 5275 - { 5276 - } 5277 - 5278 - static void 5279 - bfa_sgpg_stop(struct bfa_s *bfa) 5280 - { 5281 - } 5282 - 5283 - static void 5284 - bfa_sgpg_iocdisable(struct bfa_s *bfa) 5285 - { 5286 5345 } 5287 5346 5288 5347 bfa_status_t ··· 5448 5547 claim_uf_post_msgs(ufm); 5449 5548 } 5450 5549 5451 - static void 5550 + void 5452 5551 bfa_uf_meminfo(struct bfa_iocfc_cfg_s *cfg, struct bfa_meminfo_s *minfo, 5453 5552 struct bfa_s *bfa) 5454 5553 { ··· 5476 5575 (sizeof(struct bfa_uf_s) + sizeof(struct bfi_uf_buf_post_s))); 5477 5576 } 5478 5577 5479 - static void 5578 + void 5480 5579 bfa_uf_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, 5481 5580 struct bfa_pcidev_s *pcidev) 5482 5581 { ··· 5489 5588 INIT_LIST_HEAD(&ufm->uf_unused_q); 5490 5589 5491 5590 uf_mem_claim(ufm); 5492 - } 5493 - 5494 - static void 5495 - bfa_uf_detach(struct bfa_s *bfa) 5496 - { 5497 5591 } 5498 5592 5499 5593 static struct bfa_uf_s * ··· 5578 5682 bfa_cb_queue(bfa, &uf->hcb_qe, __bfa_cb_uf_recv, uf); 5579 5683 } 5580 5684 5581 - static void 5582 - bfa_uf_stop(struct bfa_s *bfa) 5583 - { 5584 - } 5585 - 5586 - static void 5685 + void 5587 5686 bfa_uf_iocdisable(struct bfa_s *bfa) 5588 5687 { 5589 5688 struct bfa_uf_mod_s *ufm = BFA_UF_MOD(bfa); ··· 5595 5704 } 5596 5705 } 5597 5706 5598 - static void 5707 + void 5599 5708 
bfa_uf_start(struct bfa_s *bfa) 5600 5709 { 5601 5710 bfa_uf_post_all(BFA_UF_MOD(bfa)); ··· 5736 5845 fcport->diag_busy = BFA_FALSE; 5737 5846 } 5738 5847 5739 - static void 5740 - bfa_fcdiag_meminfo(struct bfa_iocfc_cfg_s *cfg, struct bfa_meminfo_s *meminfo, 5741 - struct bfa_s *bfa) 5742 - { 5743 - } 5744 - 5745 - static void 5848 + void 5746 5849 bfa_fcdiag_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, 5747 5850 struct bfa_pcidev_s *pcidev) 5748 5851 { ··· 5755 5870 memset(&dport->result, 0, sizeof(struct bfa_diag_dport_result_s)); 5756 5871 } 5757 5872 5758 - static void 5873 + void 5759 5874 bfa_fcdiag_iocdisable(struct bfa_s *bfa) 5760 5875 { 5761 5876 struct bfa_fcdiag_s *fcdiag = BFA_FCDIAG_MOD(bfa); ··· 5770 5885 } 5771 5886 5772 5887 bfa_sm_send_event(dport, BFA_DPORT_SM_HWFAIL); 5773 - } 5774 - 5775 - static void 5776 - bfa_fcdiag_detach(struct bfa_s *bfa) 5777 - { 5778 - } 5779 - 5780 - static void 5781 - bfa_fcdiag_start(struct bfa_s *bfa) 5782 - { 5783 - } 5784 - 5785 - static void 5786 - bfa_fcdiag_stop(struct bfa_s *bfa) 5787 - { 5788 5888 } 5789 5889 5790 5890 static void
-1
drivers/scsi/csiostor/csio_hw.h
··· 95 95 }; 96 96 97 97 struct csio_msix_entries { 98 - unsigned short vector; /* Assigned MSI-X vector */ 99 98 void *dev_id; /* Priv object associated w/ this msix*/ 100 99 char desc[24]; /* Description of this vector */ 101 100 };
+47 -81
drivers/scsi/csiostor/csio_isr.c
··· 383 383 int rv, i, j, k = 0; 384 384 struct csio_msix_entries *entryp = &hw->msix_entries[0]; 385 385 struct csio_scsi_cpu_info *info; 386 + struct pci_dev *pdev = hw->pdev; 386 387 387 388 if (hw->intr_mode != CSIO_IM_MSIX) { 388 - rv = request_irq(hw->pdev->irq, csio_fcoe_isr, 389 - (hw->intr_mode == CSIO_IM_MSI) ? 390 - 0 : IRQF_SHARED, 391 - KBUILD_MODNAME, hw); 389 + rv = request_irq(pci_irq_vector(pdev, 0), csio_fcoe_isr, 390 + hw->intr_mode == CSIO_IM_MSI ? 0 : IRQF_SHARED, 391 + KBUILD_MODNAME, hw); 392 392 if (rv) { 393 - if (hw->intr_mode == CSIO_IM_MSI) 394 - pci_disable_msi(hw->pdev); 395 393 csio_err(hw, "Failed to allocate interrupt line.\n"); 396 - return -EINVAL; 394 + goto out_free_irqs; 397 395 } 398 396 399 397 goto out; ··· 400 402 /* Add the MSIX vector descriptions */ 401 403 csio_add_msix_desc(hw); 402 404 403 - rv = request_irq(entryp[k].vector, csio_nondata_isr, 0, 405 + rv = request_irq(pci_irq_vector(pdev, k), csio_nondata_isr, 0, 404 406 entryp[k].desc, hw); 405 407 if (rv) { 406 408 csio_err(hw, "IRQ request failed for vec %d err:%d\n", 407 - entryp[k].vector, rv); 408 - goto err; 409 + pci_irq_vector(pdev, k), rv); 410 + goto out_free_irqs; 409 411 } 410 412 411 - entryp[k++].dev_id = (void *)hw; 413 + entryp[k++].dev_id = hw; 412 414 413 - rv = request_irq(entryp[k].vector, csio_fwevt_isr, 0, 415 + rv = request_irq(pci_irq_vector(pdev, k), csio_fwevt_isr, 0, 414 416 entryp[k].desc, hw); 415 417 if (rv) { 416 418 csio_err(hw, "IRQ request failed for vec %d err:%d\n", 417 - entryp[k].vector, rv); 418 - goto err; 419 + pci_irq_vector(pdev, k), rv); 420 + goto out_free_irqs; 419 421 } 420 422 421 423 entryp[k++].dev_id = (void *)hw; ··· 427 429 struct csio_scsi_qset *sqset = &hw->sqset[i][j]; 428 430 struct csio_q *q = hw->wrm.q_arr[sqset->iq_idx]; 429 431 430 - rv = request_irq(entryp[k].vector, csio_scsi_isr, 0, 432 + rv = request_irq(pci_irq_vector(pdev, k), csio_scsi_isr, 0, 431 433 entryp[k].desc, q); 432 434 if (rv) { 433 435 
csio_err(hw, 434 436 "IRQ request failed for vec %d err:%d\n", 435 - entryp[k].vector, rv); 436 - goto err; 437 + pci_irq_vector(pdev, k), rv); 438 + goto out_free_irqs; 437 439 } 438 440 439 - entryp[k].dev_id = (void *)q; 441 + entryp[k].dev_id = q; 440 442 441 443 } /* for all scsi cpus */ 442 444 } /* for all ports */ 443 445 444 446 out: 445 447 hw->flags |= CSIO_HWF_HOST_INTR_ENABLED; 446 - 447 448 return 0; 448 449 449 - err: 450 - for (i = 0; i < k; i++) { 451 - entryp = &hw->msix_entries[i]; 452 - free_irq(entryp->vector, entryp->dev_id); 453 - } 454 - pci_disable_msix(hw->pdev); 455 - 450 + out_free_irqs: 451 + for (i = 0; i < k; i++) 452 + free_irq(pci_irq_vector(pdev, i), hw->msix_entries[i].dev_id); 453 + pci_free_irq_vectors(hw->pdev); 456 454 return -EINVAL; 457 - } 458 - 459 - static void 460 - csio_disable_msix(struct csio_hw *hw, bool free) 461 - { 462 - int i; 463 - struct csio_msix_entries *entryp; 464 - int cnt = hw->num_sqsets + CSIO_EXTRA_VECS; 465 - 466 - if (free) { 467 - for (i = 0; i < cnt; i++) { 468 - entryp = &hw->msix_entries[i]; 469 - free_irq(entryp->vector, entryp->dev_id); 470 - } 471 - } 472 - pci_disable_msix(hw->pdev); 473 455 } 474 456 475 457 /* Reduce per-port max possible CPUs */ ··· 478 500 csio_enable_msix(struct csio_hw *hw) 479 501 { 480 502 int i, j, k, n, min, cnt; 481 - struct csio_msix_entries *entryp; 482 - struct msix_entry *entries; 483 503 int extra = CSIO_EXTRA_VECS; 484 504 struct csio_scsi_cpu_info *info; 505 + struct irq_affinity desc = { .pre_vectors = 2 }; 485 506 486 507 min = hw->num_pports + extra; 487 508 cnt = hw->num_sqsets + extra; ··· 489 512 if (hw->flags & CSIO_HWF_USING_SOFT_PARAMS || !csio_is_hw_master(hw)) 490 513 cnt = min_t(uint8_t, hw->cfg_niq, cnt); 491 514 492 - entries = kzalloc(sizeof(struct msix_entry) * cnt, GFP_KERNEL); 493 - if (!entries) 494 - return -ENOMEM; 495 - 496 - for (i = 0; i < cnt; i++) 497 - entries[i].entry = (uint16_t)i; 498 - 499 515 csio_dbg(hw, "FW supp #niq:%d, 
trying %d msix's\n", hw->cfg_niq, cnt); 500 516 501 - cnt = pci_enable_msix_range(hw->pdev, entries, min, cnt); 502 - if (cnt < 0) { 503 - kfree(entries); 517 + cnt = pci_alloc_irq_vectors_affinity(hw->pdev, min, cnt, 518 + PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &desc); 519 + if (cnt < 0) 504 520 return cnt; 505 - } 506 521 507 522 if (cnt < (hw->num_sqsets + extra)) { 508 523 csio_dbg(hw, "Reducing sqsets to %d\n", cnt - extra); 509 524 csio_reduce_sqsets(hw, cnt - extra); 510 525 } 511 526 512 - /* Save off vectors */ 513 - for (i = 0; i < cnt; i++) { 514 - entryp = &hw->msix_entries[i]; 515 - entryp->vector = entries[i].vector; 516 - } 517 - 518 527 /* Distribute vectors */ 519 528 k = 0; 520 - csio_set_nondata_intr_idx(hw, entries[k].entry); 521 - csio_set_mb_intr_idx(csio_hw_to_mbm(hw), entries[k++].entry); 522 - csio_set_fwevt_intr_idx(hw, entries[k++].entry); 529 + csio_set_nondata_intr_idx(hw, k); 530 + csio_set_mb_intr_idx(csio_hw_to_mbm(hw), k++); 531 + csio_set_fwevt_intr_idx(hw, k++); 523 532 524 533 for (i = 0; i < hw->num_pports; i++) { 525 534 info = &hw->scsi_cpu_info[i]; 526 535 527 536 for (j = 0; j < hw->num_scsi_msix_cpus; j++) { 528 537 n = (j % info->max_cpus) + k; 529 - hw->sqset[i][j].intr_idx = entries[n].entry; 538 + hw->sqset[i][j].intr_idx = n; 530 539 } 531 540 532 541 k += info->max_cpus; 533 542 } 534 543 535 - kfree(entries); 536 544 return 0; 537 545 } 538 546 ··· 559 597 { 560 598 csio_hw_intr_disable(hw); 561 599 562 - switch (hw->intr_mode) { 563 - case CSIO_IM_MSIX: 564 - csio_disable_msix(hw, free); 565 - break; 566 - case CSIO_IM_MSI: 567 - if (free) 568 - free_irq(hw->pdev->irq, hw); 569 - pci_disable_msi(hw->pdev); 570 - break; 571 - case CSIO_IM_INTX: 572 - if (free) 573 - free_irq(hw->pdev->irq, hw); 574 - break; 575 - default: 576 - break; 600 + if (free) { 601 + int i; 602 + 603 + switch (hw->intr_mode) { 604 + case CSIO_IM_MSIX: 605 + for (i = 0; i < hw->num_sqsets + CSIO_EXTRA_VECS; i++) { 606 + 
free_irq(pci_irq_vector(hw->pdev, i), 607 + hw->msix_entries[i].dev_id); 608 + } 609 + break; 610 + case CSIO_IM_MSI: 611 + case CSIO_IM_INTX: 612 + free_irq(pci_irq_vector(hw->pdev, 0), hw); 613 + break; 614 + default: 615 + break; 616 + } 577 617 } 618 + 619 + pci_free_irq_vectors(hw->pdev); 578 620 hw->intr_mode = CSIO_IM_NONE; 579 621 hw->flags &= ~CSIO_HWF_HOST_INTR_ENABLED; 580 622 }
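With `pci_alloc_irq_vectors_affinity()` the csiostor driver no longer stores `msix_entry.vector`; setup records sequential vector indices and `pci_irq_vector()` maps an index to a Linux IRQ at `request_irq()`/`free_irq()` time. The index distribution in `csio_enable_msix()` — two pre-vectors (`.pre_vectors = 2`) for the nondata/mailbox and firmware-event interrupts, then a per-port block of SCSI vectors — can be modeled as a pure function. This sketch assumes every port has the same `max_cpus`, whereas the driver reads it per port from `scsi_cpu_info`:

```c
#include <assert.h>

/*
 * Mirror of the index walk in csio_enable_msix(): vector 0 serves the
 * nondata and mailbox interrupts, vector 1 the firmware-event queue,
 * and each port then owns a contiguous block of max_cpus SCSI vectors.
 */
int scsi_intr_idx(int port, int j, int max_cpus)
{
	int k = 2;                 /* CSIO_EXTRA_VECS pre-vectors      */

	k += port * max_cpus;      /* skip earlier ports' blocks       */
	return (j % max_cpus) + k; /* wrap queue j onto the block      */
}
```

The `% max_cpus` wrap is what lets more SCSI queue sets than CPUs share a port's vector block when the allocation was trimmed.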
+1 -1
drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
··· 36 36 #include "../libcxgbi.h" 37 37 38 38 #define DRV_MODULE_NAME "cxgb4i" 39 - #define DRV_MODULE_DESC "Chelsio T4/T5 iSCSI Driver" 39 + #define DRV_MODULE_DESC "Chelsio T4-T6 iSCSI Driver" 40 40 #define DRV_MODULE_VERSION "0.9.5-ko" 41 41 #define DRV_MODULE_RELDATE "Apr. 2015" 42 42
+103 -34
drivers/scsi/cxlflash/common.h
··· 15 15 #ifndef _CXLFLASH_COMMON_H 16 16 #define _CXLFLASH_COMMON_H 17 17 18 + #include <linux/irq_poll.h> 18 19 #include <linux/list.h> 19 20 #include <linux/rwsem.h> 20 21 #include <linux/types.h> ··· 25 24 26 25 extern const struct file_operations cxlflash_cxl_fops; 27 26 28 - #define MAX_CONTEXT CXLFLASH_MAX_CONTEXT /* num contexts per afu */ 27 + #define MAX_CONTEXT CXLFLASH_MAX_CONTEXT /* num contexts per afu */ 28 + #define MAX_FC_PORTS CXLFLASH_MAX_FC_PORTS /* max ports per AFU */ 29 + #define LEGACY_FC_PORTS 2 /* legacy ports per AFU */ 29 30 30 - #define CXLFLASH_BLOCK_SIZE 4096 /* 4K blocks */ 31 + #define CHAN2PORTBANK(_x) ((_x) >> ilog2(CXLFLASH_NUM_FC_PORTS_PER_BANK)) 32 + #define CHAN2BANKPORT(_x) ((_x) & (CXLFLASH_NUM_FC_PORTS_PER_BANK - 1)) 33 + 34 + #define CHAN2PORTMASK(_x) (1 << (_x)) /* channel to port mask */ 35 + #define PORTMASK2CHAN(_x) (ilog2((_x))) /* port mask to channel */ 36 + #define PORTNUM2CHAN(_x) ((_x) - 1) /* port number to channel */ 37 + 38 + #define CXLFLASH_BLOCK_SIZE 4096 /* 4K blocks */ 31 39 #define CXLFLASH_MAX_XFER_SIZE 16777216 /* 16MB transfer */ 32 40 #define CXLFLASH_MAX_SECTORS (CXLFLASH_MAX_XFER_SIZE/512) /* SCSI wants 33 - max_sectors 34 - in units of 35 - 512 byte 36 - sectors 37 - */ 41 + * max_sectors 42 + * in units of 43 + * 512 byte 44 + * sectors 45 + */ 38 46 39 47 #define MAX_RHT_PER_CONTEXT (PAGE_SIZE / sizeof(struct sisl_rht_entry)) 40 48 41 49 /* AFU command retry limit */ 42 - #define MC_RETRY_CNT 5 /* sufficient for SCSI check and 43 - certain AFU errors */ 50 + #define MC_RETRY_CNT 5 /* Sufficient for SCSI and certain AFU errors */ 44 51 45 52 /* Command management definitions */ 46 - #define CXLFLASH_NUM_CMDS (2 * CXLFLASH_MAX_CMDS) /* Must be a pow2 for 47 - alignment and more 48 - efficient array 49 - index derivation 50 - */ 51 - 52 53 #define CXLFLASH_MAX_CMDS 256 53 54 #define CXLFLASH_MAX_CMDS_PER_LUN CXLFLASH_MAX_CMDS 54 55 ··· 60 57 /* SQ for master issued cmds */ 61 58 #define 
NUM_SQ_ENTRY CXLFLASH_MAX_CMDS 62 59 60 + /* Hardware queue definitions */ 61 + #define CXLFLASH_DEF_HWQS 1 62 + #define CXLFLASH_MAX_HWQS 8 63 + #define PRIMARY_HWQ 0 64 + 63 65 64 66 static inline void check_sizes(void) 65 67 { 66 - BUILD_BUG_ON_NOT_POWER_OF_2(CXLFLASH_NUM_CMDS); 68 + BUILD_BUG_ON_NOT_POWER_OF_2(CXLFLASH_NUM_FC_PORTS_PER_BANK); 69 + BUILD_BUG_ON_NOT_POWER_OF_2(CXLFLASH_MAX_CMDS); 67 70 } 68 71 69 72 /* AFU defines a fixed size of 4K for command buffers (borrow 4K page define) */ ··· 89 80 }; 90 81 91 82 enum cxlflash_state { 83 + STATE_PROBING, /* Initial state during probe */ 84 + STATE_PROBED, /* Temporary state, probe completed but EEH occurred */ 92 85 STATE_NORMAL, /* Normal running state, everything good */ 93 86 STATE_RESET, /* Reset state, trying to reset/recover */ 94 87 STATE_FAILTERM /* Failed/terminating state, error out users/threads */ 88 + }; 89 + 90 + enum cxlflash_hwq_mode { 91 + HWQ_MODE_RR, /* Roundrobin (default) */ 92 + HWQ_MODE_TAG, /* Distribute based on block MQ tag */ 93 + HWQ_MODE_CPU, /* CPU affinity */ 94 + MAX_HWQ_MODE 95 95 }; 96 96 97 97 /* ··· 110 92 111 93 struct cxlflash_cfg { 112 94 struct afu *afu; 113 - struct cxl_context *mcctx; 114 95 115 96 struct pci_dev *dev; 116 97 struct pci_device_id *dev_id; 117 98 struct Scsi_Host *host; 99 + int num_fc_ports; 118 100 119 101 ulong cxlflash_regs_pci; 120 102 ··· 135 117 struct file_operations cxl_fops; 136 118 137 119 /* Parameters that are LUN table related */ 138 - int last_lun_index[CXLFLASH_NUM_FC_PORTS]; 120 + int last_lun_index[MAX_FC_PORTS]; 139 121 int promote_lun_index; 140 122 struct list_head lluns; /* list of llun_info structs */ 141 123 ··· 152 134 struct afu *parent; 153 135 struct scsi_cmnd *scp; 154 136 struct completion cevent; 137 + struct list_head queue; 138 + u32 hwq_index; 155 139 156 140 u8 cmd_tmf:1; 157 141 ··· 176 156 return afuc; 177 157 } 178 158 179 - struct afu { 159 + struct hwq { 180 160 /* Stuff requiring alignment go first. 
*/ 181 161 struct sisl_ioarcb sq[NUM_SQ_ENTRY]; /* 16K SQ */ 182 162 u64 rrq_entry[NUM_RRQ_ENTRY]; /* 2K RRQ */ ··· 184 164 /* Beware of alignment till here. Preferably introduce new 185 165 * fields after this point 186 166 */ 187 - 188 - int (*send_cmd)(struct afu *, struct afu_cmd *); 189 - void (*context_reset)(struct afu_cmd *); 190 - 191 - /* AFU HW */ 167 + struct afu *afu; 168 + struct cxl_context *ctx; 192 169 struct cxl_ioctl_start_work work; 193 - struct cxlflash_afu_map __iomem *afu_map; /* entire MMIO map */ 194 170 struct sisl_host_map __iomem *host_map; /* MC host map */ 195 171 struct sisl_ctrl_map __iomem *ctrl_map; /* MC control map */ 196 - 197 172 ctx_hndl_t ctx_hndl; /* master's context handle */ 173 + u32 index; /* Index of this hwq */ 198 174 199 175 atomic_t hsq_credits; 200 176 spinlock_t hsq_slock; 201 177 struct sisl_ioarcb *hsq_start; 202 178 struct sisl_ioarcb *hsq_end; 203 179 struct sisl_ioarcb *hsq_curr; 180 + spinlock_t hrrq_slock; 204 181 u64 *hrrq_start; 205 182 u64 *hrrq_end; 206 183 u64 *hrrq_curr; 207 184 bool toggle; 208 - atomic_t cmds_active; /* Number of currently active AFU commands */ 185 + 209 186 s64 room; 210 187 spinlock_t rrin_slock; /* Lock to rrin queuing and cmd_room updates */ 188 + 189 + struct irq_poll irqpoll; 190 + } __aligned(cache_line_size()); 191 + 192 + struct afu { 193 + struct hwq hwqs[CXLFLASH_MAX_HWQS]; 194 + int (*send_cmd)(struct afu *, struct afu_cmd *); 195 + void (*context_reset)(struct afu_cmd *); 196 + 197 + /* AFU HW */ 198 + struct cxlflash_afu_map __iomem *afu_map; /* entire MMIO map */ 199 + 200 + atomic_t cmds_active; /* Number of currently active AFU commands */ 211 201 u64 hb; 212 202 u32 internal_lun; /* User-desired LUN mode for this AFU */ 203 + 204 + u32 num_hwqs; /* Number of hardware queues */ 205 + u32 desired_hwqs; /* Desired h/w queues, effective on AFU reset */ 206 + enum cxlflash_hwq_mode hwq_mode; /* Steering mode for h/w queues */ 207 + u32 hwq_rr_count; /* Count to 
distribute traffic for roundrobin */ 213 208 214 209 char version[16]; 215 210 u64 interface_version; 216 211 212 + u32 irqpoll_weight; 217 213 struct cxlflash_cfg *parent; /* Pointer back to parent cxlflash_cfg */ 218 - 219 214 }; 215 + 216 + static inline struct hwq *get_hwq(struct afu *afu, u32 index) 217 + { 218 + WARN_ON(index >= CXLFLASH_MAX_HWQS); 219 + 220 + return &afu->hwqs[index]; 221 + } 222 + 223 + static inline bool afu_is_irqpoll_enabled(struct afu *afu) 224 + { 225 + return !!afu->irqpoll_weight; 226 + } 220 227 221 228 static inline bool afu_is_cmd_mode(struct afu *afu, u64 cmd_mode) 222 229 { ··· 270 223 return be64_to_cpu(lun_id); 271 224 } 272 225 273 - int cxlflash_afu_sync(struct afu *, ctx_hndl_t, res_hndl_t, u8); 226 + static inline struct fc_port_bank __iomem *get_fc_port_bank( 227 + struct cxlflash_cfg *cfg, int i) 228 + { 229 + struct afu *afu = cfg->afu; 230 + 231 + return &afu->afu_map->global.bank[CHAN2PORTBANK(i)]; 232 + } 233 + 234 + static inline __be64 __iomem *get_fc_port_regs(struct cxlflash_cfg *cfg, int i) 235 + { 236 + struct fc_port_bank __iomem *fcpb = get_fc_port_bank(cfg, i); 237 + 238 + return &fcpb->fc_port_regs[CHAN2BANKPORT(i)][0]; 239 + } 240 + 241 + static inline __be64 __iomem *get_fc_port_luns(struct cxlflash_cfg *cfg, int i) 242 + { 243 + struct fc_port_bank __iomem *fcpb = get_fc_port_bank(cfg, i); 244 + 245 + return &fcpb->fc_port_luns[CHAN2BANKPORT(i)][0]; 246 + } 247 + 248 + int cxlflash_afu_sync(struct afu *afu, ctx_hndl_t c, res_hndl_t r, u8 mode); 274 249 void cxlflash_list_init(void); 275 250 void cxlflash_term_global_luns(void); 276 251 void cxlflash_free_errpage(void); 277 - int cxlflash_ioctl(struct scsi_device *, int, void __user *); 278 - void cxlflash_stop_term_user_contexts(struct cxlflash_cfg *); 279 - int cxlflash_mark_contexts_error(struct cxlflash_cfg *); 280 - void cxlflash_term_local_luns(struct cxlflash_cfg *); 281 - void cxlflash_restore_luntable(struct cxlflash_cfg *); 252 + int 
cxlflash_ioctl(struct scsi_device *sdev, int cmd, void __user *arg); 253 + void cxlflash_stop_term_user_contexts(struct cxlflash_cfg *cfg); 254 + int cxlflash_mark_contexts_error(struct cxlflash_cfg *cfg); 255 + void cxlflash_term_local_luns(struct cxlflash_cfg *cfg); 256 + void cxlflash_restore_luntable(struct cxlflash_cfg *cfg); 282 257 283 258 #endif /* ifndef _CXLFLASH_COMMON_H */
+2 -2
drivers/scsi/cxlflash/lunmgt.c
··· 252 252 * in unpacked, AFU-friendly format, and hang LUN reference in 253 253 * the sdev. 254 254 */ 255 - lli->port_sel |= CHAN2PORT(chan); 255 + lli->port_sel |= CHAN2PORTMASK(chan); 256 256 lli->lun_id[chan] = lun_to_lunid(sdev->lun); 257 257 sdev->hostdata = lli; 258 258 } else if (flags & DK_CXLFLASH_MANAGE_LUN_DISABLE_SUPERPIPE) { ··· 264 264 * tracking when no more references exist. 265 265 */ 266 266 sdev->hostdata = NULL; 267 - lli->port_sel &= ~CHAN2PORT(chan); 267 + lli->port_sel &= ~CHAN2PORTMASK(chan); 268 268 if (lli->port_sel == 0U) 269 269 lli->in_table = false; 270 270 }
+888 -274
drivers/scsi/cxlflash/main.c
··· 176 176 dev_dbg_ratelimited(dev, "%s:scp=%p result=%08x ioasc=%08x\n", 177 177 __func__, scp, scp->result, cmd->sa.ioasc); 178 178 179 - scsi_dma_unmap(scp); 180 179 scp->scsi_done(scp); 181 180 182 181 if (cmd_is_tmf) { ··· 223 224 static void context_reset_ioarrin(struct afu_cmd *cmd) 224 225 { 225 226 struct afu *afu = cmd->parent; 227 + struct hwq *hwq = get_hwq(afu, cmd->hwq_index); 226 228 227 - context_reset(cmd, &afu->host_map->ioarrin); 229 + context_reset(cmd, &hwq->host_map->ioarrin); 228 230 } 229 231 230 232 /** ··· 235 235 static void context_reset_sq(struct afu_cmd *cmd) 236 236 { 237 237 struct afu *afu = cmd->parent; 238 + struct hwq *hwq = get_hwq(afu, cmd->hwq_index); 238 239 239 - context_reset(cmd, &afu->host_map->sq_ctx_reset); 240 + context_reset(cmd, &hwq->host_map->sq_ctx_reset); 240 241 } 241 242 242 243 /** ··· 252 251 { 253 252 struct cxlflash_cfg *cfg = afu->parent; 254 253 struct device *dev = &cfg->dev->dev; 254 + struct hwq *hwq = get_hwq(afu, cmd->hwq_index); 255 255 int rc = 0; 256 256 s64 room; 257 257 ulong lock_flags; ··· 261 259 * To avoid the performance penalty of MMIO, spread the update of 262 260 * 'room' over multiple commands. 
263 261 */ 264 - spin_lock_irqsave(&afu->rrin_slock, lock_flags); 265 - if (--afu->room < 0) { 266 - room = readq_be(&afu->host_map->cmd_room); 262 + spin_lock_irqsave(&hwq->rrin_slock, lock_flags); 263 + if (--hwq->room < 0) { 264 + room = readq_be(&hwq->host_map->cmd_room); 267 265 if (room <= 0) { 268 266 dev_dbg_ratelimited(dev, "%s: no cmd_room to send " 269 267 "0x%02X, room=0x%016llX\n", 270 268 __func__, cmd->rcb.cdb[0], room); 271 - afu->room = 0; 269 + hwq->room = 0; 272 270 rc = SCSI_MLQUEUE_HOST_BUSY; 273 271 goto out; 274 272 } 275 - afu->room = room - 1; 273 + hwq->room = room - 1; 276 274 } 277 275 278 - writeq_be((u64)&cmd->rcb, &afu->host_map->ioarrin); 276 + writeq_be((u64)&cmd->rcb, &hwq->host_map->ioarrin); 279 277 out: 280 - spin_unlock_irqrestore(&afu->rrin_slock, lock_flags); 278 + spin_unlock_irqrestore(&hwq->rrin_slock, lock_flags); 281 279 dev_dbg(dev, "%s: cmd=%p len=%u ea=%016llx rc=%d\n", __func__, 282 280 cmd, cmd->rcb.data_len, cmd->rcb.data_ea, rc); 283 281 return rc; ··· 295 293 { 296 294 struct cxlflash_cfg *cfg = afu->parent; 297 295 struct device *dev = &cfg->dev->dev; 296 + struct hwq *hwq = get_hwq(afu, cmd->hwq_index); 298 297 int rc = 0; 299 298 int newval; 300 299 ulong lock_flags; 301 300 302 - newval = atomic_dec_if_positive(&afu->hsq_credits); 301 + newval = atomic_dec_if_positive(&hwq->hsq_credits); 303 302 if (newval <= 0) { 304 303 rc = SCSI_MLQUEUE_HOST_BUSY; 305 304 goto out; ··· 308 305 309 306 cmd->rcb.ioasa = &cmd->sa; 310 307 311 - spin_lock_irqsave(&afu->hsq_slock, lock_flags); 308 + spin_lock_irqsave(&hwq->hsq_slock, lock_flags); 312 309 313 - *afu->hsq_curr = cmd->rcb; 314 - if (afu->hsq_curr < afu->hsq_end) 315 - afu->hsq_curr++; 310 + *hwq->hsq_curr = cmd->rcb; 311 + if (hwq->hsq_curr < hwq->hsq_end) 312 + hwq->hsq_curr++; 316 313 else 317 - afu->hsq_curr = afu->hsq_start; 318 - writeq_be((u64)afu->hsq_curr, &afu->host_map->sq_tail); 314 + hwq->hsq_curr = hwq->hsq_start; 315 + writeq_be((u64)hwq->hsq_curr, 
&hwq->host_map->sq_tail); 319 316 320 - spin_unlock_irqrestore(&afu->hsq_slock, lock_flags); 317 + spin_unlock_irqrestore(&hwq->hsq_slock, lock_flags); 321 318 out: 322 319 dev_dbg(dev, "%s: cmd=%p len=%u ea=%016llx ioasa=%p rc=%d curr=%p " 323 320 "head=%016llx tail=%016llx\n", __func__, cmd, cmd->rcb.data_len, 324 - cmd->rcb.data_ea, cmd->rcb.ioasa, rc, afu->hsq_curr, 325 - readq_be(&afu->host_map->sq_head), 326 - readq_be(&afu->host_map->sq_tail)); 321 + cmd->rcb.data_ea, cmd->rcb.ioasa, rc, hwq->hsq_curr, 322 + readq_be(&hwq->host_map->sq_head), 323 + readq_be(&hwq->host_map->sq_tail)); 327 324 return rc; 328 325 } 329 326 ··· 358 355 } 359 356 360 357 /** 358 + * cmd_to_target_hwq() - selects a target hardware queue for a SCSI command 359 + * @host: SCSI host associated with device. 360 + * @scp: SCSI command to send. 361 + * @afu: AFU associated with the host. 362 + * 363 + * Hashes a command based upon the hardware queue mode. 364 + * 365 + * Return: Trusted index of target hardware queue 366 + */ 367 + static u32 cmd_to_target_hwq(struct Scsi_Host *host, struct scsi_cmnd *scp, 368 + struct afu *afu) 369 + { 370 + u32 tag; 371 + u32 hwq = 0; 372 + 373 + if (afu->num_hwqs == 1) 374 + return 0; 375 + 376 + switch (afu->hwq_mode) { 377 + case HWQ_MODE_RR: 378 + hwq = afu->hwq_rr_count++ % afu->num_hwqs; 379 + break; 380 + case HWQ_MODE_TAG: 381 + tag = blk_mq_unique_tag(scp->request); 382 + hwq = blk_mq_unique_tag_to_hwq(tag); 383 + break; 384 + case HWQ_MODE_CPU: 385 + hwq = smp_processor_id() % afu->num_hwqs; 386 + break; 387 + default: 388 + WARN_ON_ONCE(1); 389 + } 390 + 391 + return hwq; 392 + } 393 + 394 + /** 361 395 * send_tmf() - sends a Task Management Function (TMF) 362 396 * @afu: AFU to checkout from. 363 397 * @scp: SCSI command from stack. 
··· 405 365 */ 406 366 static int send_tmf(struct afu *afu, struct scsi_cmnd *scp, u64 tmfcmd) 407 367 { 408 - u32 port_sel = scp->device->channel + 1; 409 - struct cxlflash_cfg *cfg = shost_priv(scp->device->host); 368 + struct Scsi_Host *host = scp->device->host; 369 + struct cxlflash_cfg *cfg = shost_priv(host); 410 370 struct afu_cmd *cmd = sc_to_afucz(scp); 411 371 struct device *dev = &cfg->dev->dev; 372 + int hwq_index = cmd_to_target_hwq(host, scp, afu); 373 + struct hwq *hwq = get_hwq(afu, hwq_index); 412 374 ulong lock_flags; 413 375 int rc = 0; 414 376 ulong to; ··· 427 385 cmd->scp = scp; 428 386 cmd->parent = afu; 429 387 cmd->cmd_tmf = true; 388 + cmd->hwq_index = hwq_index; 430 389 431 - cmd->rcb.ctx_id = afu->ctx_hndl; 390 + cmd->rcb.ctx_id = hwq->ctx_hndl; 432 391 cmd->rcb.msi = SISL_MSI_RRQ_UPDATED; 433 - cmd->rcb.port_sel = port_sel; 392 + cmd->rcb.port_sel = CHAN2PORTMASK(scp->device->channel); 434 393 cmd->rcb.lun_id = lun_to_lunid(scp->device->lun); 435 394 cmd->rcb.req_flags = (SISL_REQ_FLAGS_PORT_LUN_ID | 436 395 SISL_REQ_FLAGS_SUP_UNDERRUN | ··· 487 444 struct device *dev = &cfg->dev->dev; 488 445 struct afu_cmd *cmd = sc_to_afucz(scp); 489 446 struct scatterlist *sg = scsi_sglist(scp); 490 - u32 port_sel = scp->device->channel + 1; 447 + int hwq_index = cmd_to_target_hwq(host, scp, afu); 448 + struct hwq *hwq = get_hwq(afu, hwq_index); 491 449 u16 req_flags = SISL_REQ_FLAGS_SUP_UNDERRUN; 492 450 ulong lock_flags; 493 - int nseg = 0; 494 451 int rc = 0; 495 452 496 453 dev_dbg_ratelimited(dev, "%s: (scp=%p) %d/%d/%d/%llu " ··· 515 472 spin_unlock_irqrestore(&cfg->tmf_slock, lock_flags); 516 473 517 474 switch (cfg->state) { 475 + case STATE_PROBING: 476 + case STATE_PROBED: 518 477 case STATE_RESET: 519 478 dev_dbg_ratelimited(dev, "%s: device is in reset\n", __func__); 520 479 rc = SCSI_MLQUEUE_HOST_BUSY; ··· 532 487 } 533 488 534 489 if (likely(sg)) { 535 - nseg = scsi_dma_map(scp); 536 - if (unlikely(nseg < 0)) { 537 - dev_err(dev, "%s: 
Fail DMA map\n", __func__); 538 - rc = SCSI_MLQUEUE_HOST_BUSY; 539 - goto out; 540 - } 541 - 542 - cmd->rcb.data_len = sg_dma_len(sg); 543 - cmd->rcb.data_ea = sg_dma_address(sg); 490 + cmd->rcb.data_len = sg->length; 491 + cmd->rcb.data_ea = (uintptr_t)sg_virt(sg); 544 492 } 545 493 546 494 cmd->scp = scp; 547 495 cmd->parent = afu; 496 + cmd->hwq_index = hwq_index; 548 497 549 - cmd->rcb.ctx_id = afu->ctx_hndl; 498 + cmd->rcb.ctx_id = hwq->ctx_hndl; 550 499 cmd->rcb.msi = SISL_MSI_RRQ_UPDATED; 551 - cmd->rcb.port_sel = port_sel; 500 + cmd->rcb.port_sel = CHAN2PORTMASK(scp->device->channel); 552 501 cmd->rcb.lun_id = lun_to_lunid(scp->device->lun); 553 502 554 503 if (scp->sc_data_direction == DMA_TO_DEVICE) ··· 552 513 memcpy(cmd->rcb.cdb, scp->cmnd, sizeof(cmd->rcb.cdb)); 553 514 554 515 rc = afu->send_cmd(afu, cmd); 555 - if (unlikely(rc)) 556 - scsi_dma_unmap(scp); 557 516 out: 558 517 return rc; 559 518 } ··· 591 554 * Safe to call with AFU in a partially allocated/initialized state. 592 555 * 593 556 * Cancels scheduled worker threads, waits for any active internal AFU 594 - * commands to timeout and then unmaps the MMIO space. 557 + * commands to timeout, disables IRQ polling and then unmaps the MMIO space. 595 558 */ 596 559 static void stop_afu(struct cxlflash_cfg *cfg) 597 560 { 598 561 struct afu *afu = cfg->afu; 562 + struct hwq *hwq; 563 + int i; 599 564 600 565 cancel_work_sync(&cfg->work_q); 601 566 602 567 if (likely(afu)) { 603 568 while (atomic_read(&afu->cmds_active)) 604 569 ssleep(1); 570 + 571 + if (afu_is_irqpoll_enabled(afu)) { 572 + for (i = 0; i < afu->num_hwqs; i++) { 573 + hwq = get_hwq(afu, i); 574 + 575 + irq_poll_disable(&hwq->irqpoll); 576 + } 577 + } 578 + 605 579 if (likely(afu->afu_map)) { 606 580 cxl_psa_unmap((void __iomem *)afu->afu_map); 607 581 afu->afu_map = NULL; ··· 624 576 * term_intr() - disables all AFU interrupts 625 577 * @cfg: Internal structure associated with the host. 
626 578 * @level: Depth of allocation, where to begin waterfall tear down. 579 + * @index: Index of the hardware queue. 627 580 * 628 581 * Safe to call with AFU/MC in partially allocated/initialized state. 629 582 */ 630 - static void term_intr(struct cxlflash_cfg *cfg, enum undo_level level) 583 + static void term_intr(struct cxlflash_cfg *cfg, enum undo_level level, 584 + u32 index) 631 585 { 632 586 struct afu *afu = cfg->afu; 633 587 struct device *dev = &cfg->dev->dev; 588 + struct hwq *hwq; 634 589 635 - if (!afu || !cfg->mcctx) { 636 - dev_err(dev, "%s: returning with NULL afu or MC\n", __func__); 590 + if (!afu) { 591 + dev_err(dev, "%s: returning with NULL afu\n", __func__); 592 + return; 593 + } 594 + 595 + hwq = get_hwq(afu, index); 596 + 597 + if (!hwq->ctx) { 598 + dev_err(dev, "%s: returning with NULL MC\n", __func__); 637 599 return; 638 600 } 639 601 640 602 switch (level) { 641 603 case UNMAP_THREE: 642 - cxl_unmap_afu_irq(cfg->mcctx, 3, afu); 604 + /* SISL_MSI_ASYNC_ERROR is setup only for the primary HWQ */ 605 + if (index == PRIMARY_HWQ) 606 + cxl_unmap_afu_irq(hwq->ctx, 3, hwq); 643 607 case UNMAP_TWO: 644 - cxl_unmap_afu_irq(cfg->mcctx, 2, afu); 608 + cxl_unmap_afu_irq(hwq->ctx, 2, hwq); 645 609 case UNMAP_ONE: 646 - cxl_unmap_afu_irq(cfg->mcctx, 1, afu); 610 + cxl_unmap_afu_irq(hwq->ctx, 1, hwq); 647 611 case FREE_IRQ: 648 - cxl_free_afu_irqs(cfg->mcctx); 612 + cxl_free_afu_irqs(hwq->ctx); 649 613 /* fall through */ 650 614 case UNDO_NOOP: 651 615 /* No action required */ ··· 668 608 /** 669 609 * term_mc() - terminates the master context 670 610 * @cfg: Internal structure associated with the host. 671 - * @level: Depth of allocation, where to begin waterfall tear down. 611 + * @index: Index of the hardware queue. 672 612 * 673 613 * Safe to call with AFU/MC in partially allocated/initialized state. 
674 614 */ 675 - static void term_mc(struct cxlflash_cfg *cfg) 615 + static void term_mc(struct cxlflash_cfg *cfg, u32 index) 676 616 { 677 - int rc = 0; 678 617 struct afu *afu = cfg->afu; 679 618 struct device *dev = &cfg->dev->dev; 619 + struct hwq *hwq; 680 620 681 - if (!afu || !cfg->mcctx) { 682 - dev_err(dev, "%s: returning with NULL afu or MC\n", __func__); 621 + if (!afu) { 622 + dev_err(dev, "%s: returning with NULL afu\n", __func__); 683 623 return; 684 624 } 685 625 686 - rc = cxl_stop_context(cfg->mcctx); 687 - WARN_ON(rc); 688 - cfg->mcctx = NULL; 626 + hwq = get_hwq(afu, index); 627 + 628 + if (!hwq->ctx) { 629 + dev_err(dev, "%s: returning with NULL MC\n", __func__); 630 + return; 631 + } 632 + 633 + WARN_ON(cxl_stop_context(hwq->ctx)); 634 + if (index != PRIMARY_HWQ) 635 + WARN_ON(cxl_release_context(hwq->ctx)); 636 + hwq->ctx = NULL; 689 637 } 690 638 691 639 /** ··· 705 637 static void term_afu(struct cxlflash_cfg *cfg) 706 638 { 707 639 struct device *dev = &cfg->dev->dev; 640 + int k; 708 641 709 642 /* 710 643 * Tear down is carefully orchestrated to ensure 711 644 * no interrupts can come in when the problem state 712 645 * area is unmapped. 
713 646 * 714 - * 1) Disable all AFU interrupts 647 + * 1) Disable all AFU interrupts for each master 715 648 * 2) Unmap the problem state area 716 - * 3) Stop the master context 649 + * 3) Stop each master context 717 650 */ 718 - term_intr(cfg, UNMAP_THREE); 651 + for (k = cfg->afu->num_hwqs - 1; k >= 0; k--) 652 + term_intr(cfg, UNMAP_THREE, k); 653 + 719 654 if (cfg->afu) 720 655 stop_afu(cfg); 721 656 722 - term_mc(cfg); 657 + for (k = cfg->afu->num_hwqs - 1; k >= 0; k--) 658 + term_mc(cfg, k); 723 659 724 660 dev_dbg(dev, "%s: returning\n", __func__); 725 661 } ··· 742 670 { 743 671 struct afu *afu = cfg->afu; 744 672 struct device *dev = &cfg->dev->dev; 745 - struct sisl_global_map __iomem *global; 746 673 struct dev_dependent_vals *ddv; 674 + __be64 __iomem *fc_port_regs; 747 675 u64 reg, status; 748 676 int i, retry_cnt = 0; 749 677 ··· 756 684 return; 757 685 } 758 686 759 - global = &afu->afu_map->global; 760 - 761 687 /* Notify AFU */ 762 - for (i = 0; i < NUM_FC_PORTS; i++) { 763 - reg = readq_be(&global->fc_regs[i][FC_CONFIG2 / 8]); 688 + for (i = 0; i < cfg->num_fc_ports; i++) { 689 + fc_port_regs = get_fc_port_regs(cfg, i); 690 + 691 + reg = readq_be(&fc_port_regs[FC_CONFIG2 / 8]); 764 692 reg |= SISL_FC_SHUTDOWN_NORMAL; 765 - writeq_be(reg, &global->fc_regs[i][FC_CONFIG2 / 8]); 693 + writeq_be(reg, &fc_port_regs[FC_CONFIG2 / 8]); 766 694 } 767 695 768 696 if (!wait) 769 697 return; 770 698 771 699 /* Wait up to 1.5 seconds for shutdown processing to complete */ 772 - for (i = 0; i < NUM_FC_PORTS; i++) { 700 + for (i = 0; i < cfg->num_fc_ports; i++) { 701 + fc_port_regs = get_fc_port_regs(cfg, i); 773 702 retry_cnt = 0; 703 + 774 704 while (true) { 775 - status = readq_be(&global->fc_regs[i][FC_STATUS / 8]); 705 + status = readq_be(&fc_port_regs[FC_STATUS / 8]); 776 706 if (status & SISL_STATUS_SHUTDOWN_COMPLETE) 777 707 break; 778 708 if (++retry_cnt >= MC_RETRY_CNT) { ··· 791 717 * cxlflash_remove() - PCI entry point to tear down host 792 718 * 
@pdev: PCI device associated with the host. 793 719 * 794 - * Safe to use as a cleanup in partially allocated/initialized state. 720 + * Safe to use as a cleanup in partially allocated/initialized state. Note that 721 + * the reset_waitq is flushed as part of the stop/termination of user contexts. 795 722 */ 796 723 static void cxlflash_remove(struct pci_dev *pdev) 797 724 { ··· 825 750 case INIT_STATE_SCSI: 826 751 cxlflash_term_local_luns(cfg); 827 752 scsi_remove_host(cfg->host); 828 - /* fall through */ 829 753 case INIT_STATE_AFU: 830 754 term_afu(cfg); 831 755 case INIT_STATE_PCI: ··· 863 789 goto out; 864 790 } 865 791 cfg->afu->parent = cfg; 792 + cfg->afu->desired_hwqs = CXLFLASH_DEF_HWQS; 866 793 cfg->afu->afu_map = NULL; 867 794 out: 868 795 return rc; ··· 1099 1024 dev_dbg(dev, "%s: returning port_sel=%016llx\n", __func__, port_sel); 1100 1025 } 1101 1026 1102 - /* 1103 - * Asynchronous interrupt information table 1104 - */ 1105 - static const struct asyc_intr_info ainfo[] = { 1106 - {SISL_ASTATUS_FC0_OTHER, "other error", 0, CLR_FC_ERROR | LINK_RESET}, 1107 - {SISL_ASTATUS_FC0_LOGO, "target initiated LOGO", 0, 0}, 1108 - {SISL_ASTATUS_FC0_CRC_T, "CRC threshold exceeded", 0, LINK_RESET}, 1109 - {SISL_ASTATUS_FC0_LOGI_R, "login timed out, retrying", 0, LINK_RESET}, 1110 - {SISL_ASTATUS_FC0_LOGI_F, "login failed", 0, CLR_FC_ERROR}, 1111 - {SISL_ASTATUS_FC0_LOGI_S, "login succeeded", 0, SCAN_HOST}, 1112 - {SISL_ASTATUS_FC0_LINK_DN, "link down", 0, 0}, 1113 - {SISL_ASTATUS_FC0_LINK_UP, "link up", 0, 0}, 1114 - {SISL_ASTATUS_FC1_OTHER, "other error", 1, CLR_FC_ERROR | LINK_RESET}, 1115 - {SISL_ASTATUS_FC1_LOGO, "target initiated LOGO", 1, 0}, 1116 - {SISL_ASTATUS_FC1_CRC_T, "CRC threshold exceeded", 1, LINK_RESET}, 1117 - {SISL_ASTATUS_FC1_LOGI_R, "login timed out, retrying", 1, LINK_RESET}, 1118 - {SISL_ASTATUS_FC1_LOGI_F, "login failed", 1, CLR_FC_ERROR}, 1119 - {SISL_ASTATUS_FC1_LOGI_S, "login succeeded", 1, SCAN_HOST}, 1120 - {SISL_ASTATUS_FC1_LINK_DN, 
"link down", 1, 0}, 1121 - {SISL_ASTATUS_FC1_LINK_UP, "link up", 1, 0}, 1122 - {0x0, "", 0, 0} /* terminator */ 1123 - }; 1124 - 1125 - /** 1126 - * find_ainfo() - locates and returns asynchronous interrupt information 1127 - * @status: Status code set by AFU on error. 1128 - * 1129 - * Return: The located information or NULL when the status code is invalid. 1130 - */ 1131 - static const struct asyc_intr_info *find_ainfo(u64 status) 1132 - { 1133 - const struct asyc_intr_info *info; 1134 - 1135 - for (info = &ainfo[0]; info->status; info++) 1136 - if (info->status == status) 1137 - return info; 1138 - 1139 - return NULL; 1140 - } 1141 - 1142 1027 /** 1143 1028 * afu_err_intr_init() - clears and initializes the AFU for error interrupts 1144 1029 * @afu: AFU associated with the host. 1145 1030 */ 1146 1031 static void afu_err_intr_init(struct afu *afu) 1147 1032 { 1033 + struct cxlflash_cfg *cfg = afu->parent; 1034 + __be64 __iomem *fc_port_regs; 1148 1035 int i; 1036 + struct hwq *hwq = get_hwq(afu, PRIMARY_HWQ); 1149 1037 u64 reg; 1150 1038 1151 1039 /* global async interrupts: AFU clears afu_ctrl on context exit ··· 1120 1082 1121 1083 /* mask all */ 1122 1084 writeq_be(-1ULL, &afu->afu_map->global.regs.aintr_mask); 1123 - /* set LISN# to send and point to master context */ 1124 - reg = ((u64) (((afu->ctx_hndl << 8) | SISL_MSI_ASYNC_ERROR)) << 40); 1085 + /* set LISN# to send and point to primary master context */ 1086 + reg = ((u64) (((hwq->ctx_hndl << 8) | SISL_MSI_ASYNC_ERROR)) << 40); 1125 1087 1126 1088 if (afu->internal_lun) 1127 1089 reg |= 1; /* Bit 63 indicates local lun */ ··· 1136 1098 writeq_be(-1ULL, &afu->afu_map->global.regs.aintr_clear); 1137 1099 1138 1100 /* Clear/Set internal lun bits */ 1139 - reg = readq_be(&afu->afu_map->global.fc_regs[0][FC_CONFIG2 / 8]); 1101 + fc_port_regs = get_fc_port_regs(cfg, 0); 1102 + reg = readq_be(&fc_port_regs[FC_CONFIG2 / 8]); 1140 1103 reg &= SISL_FC_INTERNAL_MASK; 1141 1104 if (afu->internal_lun) 1142 1105 reg 
|= ((u64)(afu->internal_lun - 1) << SISL_FC_INTERNAL_SHIFT); 1143 - writeq_be(reg, &afu->afu_map->global.fc_regs[0][FC_CONFIG2 / 8]); 1106 + writeq_be(reg, &fc_port_regs[FC_CONFIG2 / 8]); 1144 1107 1145 1108 /* now clear FC errors */ 1146 - for (i = 0; i < NUM_FC_PORTS; i++) { 1147 - writeq_be(0xFFFFFFFFU, 1148 - &afu->afu_map->global.fc_regs[i][FC_ERROR / 8]); 1149 - writeq_be(0, &afu->afu_map->global.fc_regs[i][FC_ERRCAP / 8]); 1109 + for (i = 0; i < cfg->num_fc_ports; i++) { 1110 + fc_port_regs = get_fc_port_regs(cfg, i); 1111 + 1112 + writeq_be(0xFFFFFFFFU, &fc_port_regs[FC_ERROR / 8]); 1113 + writeq_be(0, &fc_port_regs[FC_ERRCAP / 8]); 1150 1114 } 1151 1115 1152 1116 /* sync interrupts for master's IOARRIN write */ ··· 1157 1117 /* IOARRIN yet), so there is nothing to clear. */ 1158 1118 1159 1119 /* set LISN#, it is always sent to the context that wrote IOARRIN */ 1160 - writeq_be(SISL_MSI_SYNC_ERROR, &afu->host_map->ctx_ctrl); 1161 - writeq_be(SISL_ISTATUS_MASK, &afu->host_map->intr_mask); 1120 + for (i = 0; i < afu->num_hwqs; i++) { 1121 + hwq = get_hwq(afu, i); 1122 + 1123 + writeq_be(SISL_MSI_SYNC_ERROR, &hwq->host_map->ctx_ctrl); 1124 + writeq_be(SISL_ISTATUS_MASK, &hwq->host_map->intr_mask); 1125 + } 1162 1126 } 1163 1127 1164 1128 /** ··· 1174 1130 */ 1175 1131 static irqreturn_t cxlflash_sync_err_irq(int irq, void *data) 1176 1132 { 1177 - struct afu *afu = (struct afu *)data; 1178 - struct cxlflash_cfg *cfg = afu->parent; 1133 + struct hwq *hwq = (struct hwq *)data; 1134 + struct cxlflash_cfg *cfg = hwq->afu->parent; 1179 1135 struct device *dev = &cfg->dev->dev; 1180 1136 u64 reg; 1181 1137 u64 reg_unmasked; 1182 1138 1183 - reg = readq_be(&afu->host_map->intr_status); 1139 + reg = readq_be(&hwq->host_map->intr_status); 1184 1140 reg_unmasked = (reg & SISL_ISTATUS_UNMASK); 1185 1141 1186 1142 if (reg_unmasked == 0UL) { ··· 1192 1148 dev_err(dev, "%s: unexpected interrupt, intr_status=%016llx\n", 1193 1149 __func__, reg); 1194 1150 1195 - 
writeq_be(reg_unmasked, &afu->host_map->intr_clear); 1151 + writeq_be(reg_unmasked, &hwq->host_map->intr_clear); 1196 1152 1197 1153 cxlflash_sync_err_irq_exit: 1198 1154 return IRQ_HANDLED; 1199 1155 } 1200 1156 1201 1157 /** 1202 - * cxlflash_rrq_irq() - interrupt handler for read-response queue (normal path) 1203 - * @irq: Interrupt number. 1204 - * @data: Private data provided at interrupt registration, the AFU. 1158 + * process_hrrq() - process the read-response queue 1159 + * @afu: AFU associated with the host. 1160 + * @doneq: Queue of commands harvested from the RRQ. 1161 + * @budget: Threshold of RRQ entries to process. 1205 1162 * 1206 - * Return: Always return IRQ_HANDLED. 1163 + * This routine must be called holding the disabled RRQ spin lock. 1164 + * 1165 + * Return: The number of entries processed. 1207 1166 */ 1208 - static irqreturn_t cxlflash_rrq_irq(int irq, void *data) 1167 + static int process_hrrq(struct hwq *hwq, struct list_head *doneq, int budget) 1209 1168 { 1210 - struct afu *afu = (struct afu *)data; 1169 + struct afu *afu = hwq->afu; 1211 1170 struct afu_cmd *cmd; 1212 1171 struct sisl_ioasa *ioasa; 1213 1172 struct sisl_ioarcb *ioarcb; 1214 - bool toggle = afu->toggle; 1173 + bool toggle = hwq->toggle; 1174 + int num_hrrq = 0; 1215 1175 u64 entry, 1216 - *hrrq_start = afu->hrrq_start, 1217 - *hrrq_end = afu->hrrq_end, 1218 - *hrrq_curr = afu->hrrq_curr; 1176 + *hrrq_start = hwq->hrrq_start, 1177 + *hrrq_end = hwq->hrrq_end, 1178 + *hrrq_curr = hwq->hrrq_curr; 1219 1179 1220 - /* Process however many RRQ entries that are ready */ 1180 + /* Process ready RRQ entries up to the specified budget (if any) */ 1221 1181 while (true) { 1222 1182 entry = *hrrq_curr; 1223 1183 ··· 1238 1190 cmd = container_of(ioarcb, struct afu_cmd, rcb); 1239 1191 } 1240 1192 1241 - cmd_complete(cmd); 1193 + list_add_tail(&cmd->queue, doneq); 1242 1194 1243 1195 /* Advance to next entry or wrap and flip the toggle bit */ 1244 1196 if (hrrq_curr < hrrq_end) ··· 
1248 1200 toggle ^= SISL_RESP_HANDLE_T_BIT; 1249 1201 } 1250 1202 1251 - atomic_inc(&afu->hsq_credits); 1203 + atomic_inc(&hwq->hsq_credits); 1204 + num_hrrq++; 1205 + 1206 + if (budget > 0 && num_hrrq >= budget) 1207 + break; 1252 1208 } 1253 1209 1254 - afu->hrrq_curr = hrrq_curr; 1255 - afu->toggle = toggle; 1210 + hwq->hrrq_curr = hrrq_curr; 1211 + hwq->toggle = toggle; 1256 1212 1213 + return num_hrrq; 1214 + } 1215 + 1216 + /** 1217 + * process_cmd_doneq() - process a queue of harvested RRQ commands 1218 + * @doneq: Queue of completed commands. 1219 + * 1220 + * Note that upon return the queue can no longer be trusted. 1221 + */ 1222 + static void process_cmd_doneq(struct list_head *doneq) 1223 + { 1224 + struct afu_cmd *cmd, *tmp; 1225 + 1226 + WARN_ON(list_empty(doneq)); 1227 + 1228 + list_for_each_entry_safe(cmd, tmp, doneq, queue) 1229 + cmd_complete(cmd); 1230 + } 1231 + 1232 + /** 1233 + * cxlflash_irqpoll() - process a queue of harvested RRQ commands 1234 + * @irqpoll: IRQ poll structure associated with queue to poll. 1235 + * @budget: Threshold of RRQ entries to process per poll. 1236 + * 1237 + * Return: The number of entries processed. 1238 + */ 1239 + static int cxlflash_irqpoll(struct irq_poll *irqpoll, int budget) 1240 + { 1241 + struct hwq *hwq = container_of(irqpoll, struct hwq, irqpoll); 1242 + unsigned long hrrq_flags; 1243 + LIST_HEAD(doneq); 1244 + int num_entries = 0; 1245 + 1246 + spin_lock_irqsave(&hwq->hrrq_slock, hrrq_flags); 1247 + 1248 + num_entries = process_hrrq(hwq, &doneq, budget); 1249 + if (num_entries < budget) 1250 + irq_poll_complete(irqpoll); 1251 + 1252 + spin_unlock_irqrestore(&hwq->hrrq_slock, hrrq_flags); 1253 + 1254 + process_cmd_doneq(&doneq); 1255 + return num_entries; 1256 + } 1257 + 1258 + /** 1259 + * cxlflash_rrq_irq() - interrupt handler for read-response queue (normal path) 1260 + * @irq: Interrupt number. 1261 + * @data: Private data provided at interrupt registration, the AFU. 
1262 + * 1263 + * Return: IRQ_HANDLED or IRQ_NONE when no ready entries found. 1264 + */ 1265 + static irqreturn_t cxlflash_rrq_irq(int irq, void *data) 1266 + { 1267 + struct hwq *hwq = (struct hwq *)data; 1268 + struct afu *afu = hwq->afu; 1269 + unsigned long hrrq_flags; 1270 + LIST_HEAD(doneq); 1271 + int num_entries = 0; 1272 + 1273 + spin_lock_irqsave(&hwq->hrrq_slock, hrrq_flags); 1274 + 1275 + if (afu_is_irqpoll_enabled(afu)) { 1276 + irq_poll_sched(&hwq->irqpoll); 1277 + spin_unlock_irqrestore(&hwq->hrrq_slock, hrrq_flags); 1278 + return IRQ_HANDLED; 1279 + } 1280 + 1281 + num_entries = process_hrrq(hwq, &doneq, -1); 1282 + spin_unlock_irqrestore(&hwq->hrrq_slock, hrrq_flags); 1283 + 1284 + if (num_entries == 0) 1285 + return IRQ_NONE; 1286 + 1287 + process_cmd_doneq(&doneq); 1257 1288 return IRQ_HANDLED; 1258 1289 } 1290 + 1291 + /* 1292 + * Asynchronous interrupt information table 1293 + * 1294 + * NOTE: 1295 + * - Order matters here as this array is indexed by bit position. 1296 + * 1297 + * - The checkpatch script considers the BUILD_SISL_ASTATUS_FC_PORT macro 1298 + * as complex and complains due to a lack of parentheses/braces. 
1299 + */ 1300 + #define ASTATUS_FC(_a, _b, _c, _d) \ 1301 + { SISL_ASTATUS_FC##_a##_##_b, _c, _a, (_d) } 1302 + 1303 + #define BUILD_SISL_ASTATUS_FC_PORT(_a) \ 1304 + ASTATUS_FC(_a, LINK_UP, "link up", 0), \ 1305 + ASTATUS_FC(_a, LINK_DN, "link down", 0), \ 1306 + ASTATUS_FC(_a, LOGI_S, "login succeeded", SCAN_HOST), \ 1307 + ASTATUS_FC(_a, LOGI_F, "login failed", CLR_FC_ERROR), \ 1308 + ASTATUS_FC(_a, LOGI_R, "login timed out, retrying", LINK_RESET), \ 1309 + ASTATUS_FC(_a, CRC_T, "CRC threshold exceeded", LINK_RESET), \ 1310 + ASTATUS_FC(_a, LOGO, "target initiated LOGO", 0), \ 1311 + ASTATUS_FC(_a, OTHER, "other error", CLR_FC_ERROR | LINK_RESET) 1312 + 1313 + static const struct asyc_intr_info ainfo[] = { 1314 + BUILD_SISL_ASTATUS_FC_PORT(1), 1315 + BUILD_SISL_ASTATUS_FC_PORT(0), 1316 + BUILD_SISL_ASTATUS_FC_PORT(3), 1317 + BUILD_SISL_ASTATUS_FC_PORT(2) 1318 + }; 1259 1319 1260 1320 /** 1261 1321 * cxlflash_async_err_irq() - interrupt handler for asynchronous errors ··· 1374 1218 */ 1375 1219 static irqreturn_t cxlflash_async_err_irq(int irq, void *data) 1376 1220 { 1377 - struct afu *afu = (struct afu *)data; 1221 + struct hwq *hwq = (struct hwq *)data; 1222 + struct afu *afu = hwq->afu; 1378 1223 struct cxlflash_cfg *cfg = afu->parent; 1379 1224 struct device *dev = &cfg->dev->dev; 1380 - u64 reg_unmasked; 1381 1225 const struct asyc_intr_info *info; 1382 1226 struct sisl_global_map __iomem *global = &afu->afu_map->global; 1227 + __be64 __iomem *fc_port_regs; 1228 + u64 reg_unmasked; 1383 1229 u64 reg; 1230 + u64 bit; 1384 1231 u8 port; 1385 - int i; 1386 1232 1387 1233 reg = readq_be(&global->regs.aintr_status); 1388 1234 reg_unmasked = (reg & SISL_ASTATUS_UNMASK); 1389 1235 1390 - if (reg_unmasked == 0) { 1236 + if (unlikely(reg_unmasked == 0)) { 1391 1237 dev_err(dev, "%s: spurious interrupt, aintr_status=%016llx\n", 1392 1238 __func__, reg); 1393 1239 goto out; ··· 1399 1241 writeq_be(reg_unmasked, &global->regs.aintr_clear); 1400 1242 1401 1243 /* Check 
each bit that is on */ 1402 - for (i = 0; reg_unmasked; i++, reg_unmasked = (reg_unmasked >> 1)) { 1403 - info = find_ainfo(1ULL << i); 1404 - if (((reg_unmasked & 0x1) == 0) || !info) 1244 + for_each_set_bit(bit, (ulong *)&reg_unmasked, BITS_PER_LONG) { 1245 + if (unlikely(bit >= ARRAY_SIZE(ainfo))) { 1246 + WARN_ON_ONCE(1); 1405 1247 continue; 1248 + } 1249 + 1250 + info = &ainfo[bit]; 1251 + if (unlikely(info->status != 1ULL << bit)) { 1252 + WARN_ON_ONCE(1); 1253 + continue; 1254 + } 1406 1255 1407 1256 port = info->port; 1257 + fc_port_regs = get_fc_port_regs(cfg, port); 1408 1258 1409 1259 dev_err(dev, "%s: FC Port %d -> %s, fc_status=%016llx\n", 1410 1260 __func__, port, info->desc, 1411 - readq_be(&global->fc_regs[port][FC_STATUS / 8])); 1261 + readq_be(&fc_port_regs[FC_STATUS / 8])); 1412 1262 1413 1263 /* 1414 1264 * Do link reset first, some OTHER errors will set FC_ERROR ··· 1431 1265 } 1432 1266 1433 1267 if (info->action & CLR_FC_ERROR) { 1434 - reg = readq_be(&global->fc_regs[port][FC_ERROR / 8]); 1268 + reg = readq_be(&fc_port_regs[FC_ERROR / 8]); 1435 1269 1436 1270 /* 1437 1271 * Since all errors are unmasked, FC_ERROR and FC_ERRCAP ··· 1441 1275 dev_err(dev, "%s: fc %d: clearing fc_error=%016llx\n", 1442 1276 __func__, port, reg); 1443 1277 1444 - writeq_be(reg, &global->fc_regs[port][FC_ERROR / 8]); 1445 - writeq_be(0, &global->fc_regs[port][FC_ERRCAP / 8]); 1278 + writeq_be(reg, &fc_port_regs[FC_ERROR / 8]); 1279 + writeq_be(0, &fc_port_regs[FC_ERRCAP / 8]); 1446 1280 } 1447 1281 1448 1282 if (info->action & SCAN_HOST) { ··· 1458 1292 /** 1459 1293 * start_context() - starts the master context 1460 1294 * @cfg: Internal structure associated with the host. 1295 + * @index: Index of the hardware queue. 1461 1296 * 1462 1297 * Return: A success or failure value from CXL services. 
1463 1298 */ 1464 - static int start_context(struct cxlflash_cfg *cfg) 1299 + static int start_context(struct cxlflash_cfg *cfg, u32 index) 1465 1300 { 1466 1301 struct device *dev = &cfg->dev->dev; 1302 + struct hwq *hwq = get_hwq(cfg->afu, index); 1467 1303 int rc = 0; 1468 1304 1469 - rc = cxl_start_context(cfg->mcctx, 1470 - cfg->afu->work.work_element_descriptor, 1305 + rc = cxl_start_context(hwq->ctx, 1306 + hwq->work.work_element_descriptor, 1471 1307 NULL); 1472 1308 1473 1309 dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); ··· 1479 1311 /** 1480 1312 * read_vpd() - obtains the WWPNs from VPD 1481 1313 * @cfg: Internal structure associated with the host. 1482 - * @wwpn: Array of size NUM_FC_PORTS to pass back WWPNs 1314 + * @wwpn: Array of size MAX_FC_PORTS to pass back WWPNs 1483 1315 * 1484 1316 * Return: 0 on success, -errno on failure 1485 1317 */ ··· 1492 1324 ssize_t vpd_size; 1493 1325 char vpd_data[CXLFLASH_VPD_LEN]; 1494 1326 char tmp_buf[WWPN_BUF_LEN] = { 0 }; 1495 - char *wwpn_vpd_tags[NUM_FC_PORTS] = { "V5", "V6" }; 1327 + char *wwpn_vpd_tags[MAX_FC_PORTS] = { "V5", "V6", "V7", "V8" }; 1496 1328 1497 1329 /* Get the VPD data from the device */ 1498 1330 vpd_size = cxl_read_adapter_vpd(pdev, vpd_data, sizeof(vpd_data)); ··· 1530 1362 * because the conversion service requires that the ASCII 1531 1363 * string be terminated. 
1532 1364 */ 1533 - for (k = 0; k < NUM_FC_PORTS; k++) { 1365 + for (k = 0; k < cfg->num_fc_ports; k++) { 1534 1366 j = ro_size; 1535 1367 i = ro_start + PCI_VPD_LRDT_TAG_SIZE; 1536 1368 ··· 1559 1391 rc = -ENODEV; 1560 1392 goto out; 1561 1393 } 1394 + 1395 + dev_dbg(dev, "%s: wwpn%d=%016llx\n", __func__, k, wwpn[k]); 1562 1396 } 1563 1397 1564 1398 out: ··· 1579 1409 { 1580 1410 struct afu *afu = cfg->afu; 1581 1411 struct sisl_ctrl_map __iomem *ctrl_map; 1412 + struct hwq *hwq; 1582 1413 int i; 1583 1414 1584 1415 for (i = 0; i < MAX_CONTEXT; i++) { ··· 1591 1420 writeq_be(0, &ctrl_map->ctx_cap); 1592 1421 } 1593 1422 1594 - /* Copy frequently used fields into afu */ 1595 - afu->ctx_hndl = (u16) cxl_process_element(cfg->mcctx); 1596 - afu->host_map = &afu->afu_map->hosts[afu->ctx_hndl].host; 1597 - afu->ctrl_map = &afu->afu_map->ctrls[afu->ctx_hndl].ctrl; 1423 + /* Copy frequently used fields into hwq */ 1424 + for (i = 0; i < afu->num_hwqs; i++) { 1425 + hwq = get_hwq(afu, i); 1598 1426 1599 - /* Program the Endian Control for the master context */ 1600 - writeq_be(SISL_ENDIAN_CTRL, &afu->host_map->endian_ctrl); 1427 + hwq->ctx_hndl = (u16) cxl_process_element(hwq->ctx); 1428 + hwq->host_map = &afu->afu_map->hosts[hwq->ctx_hndl].host; 1429 + hwq->ctrl_map = &afu->afu_map->ctrls[hwq->ctx_hndl].ctrl; 1430 + 1431 + /* Program the Endian Control for the master context */ 1432 + writeq_be(SISL_ENDIAN_CTRL, &hwq->host_map->endian_ctrl); 1433 + } 1601 1434 } 1602 1435 1603 1436 /** ··· 1612 1437 { 1613 1438 struct afu *afu = cfg->afu; 1614 1439 struct device *dev = &cfg->dev->dev; 1615 - u64 wwpn[NUM_FC_PORTS]; /* wwpn of AFU ports */ 1440 + struct hwq *hwq; 1441 + struct sisl_host_map __iomem *hmap; 1442 + __be64 __iomem *fc_port_regs; 1443 + u64 wwpn[MAX_FC_PORTS]; /* wwpn of AFU ports */ 1616 1444 int i = 0, num_ports = 0; 1617 1445 int rc = 0; 1618 1446 u64 reg; ··· 1626 1448 goto out; 1627 1449 } 1628 1450 1629 - dev_dbg(dev, "%s: wwpn0=%016llx wwpn1=%016llx\n", 
1630 - __func__, wwpn[0], wwpn[1]); 1451 + /* Set up RRQ and SQ in HWQ for master issued cmds */ 1452 + for (i = 0; i < afu->num_hwqs; i++) { 1453 + hwq = get_hwq(afu, i); 1454 + hmap = hwq->host_map; 1631 1455 1632 - /* Set up RRQ and SQ in AFU for master issued cmds */ 1633 - writeq_be((u64) afu->hrrq_start, &afu->host_map->rrq_start); 1634 - writeq_be((u64) afu->hrrq_end, &afu->host_map->rrq_end); 1456 + writeq_be((u64) hwq->hrrq_start, &hmap->rrq_start); 1457 + writeq_be((u64) hwq->hrrq_end, &hmap->rrq_end); 1635 1458 1636 - if (afu_is_sq_cmd_mode(afu)) { 1637 - writeq_be((u64)afu->hsq_start, &afu->host_map->sq_start); 1638 - writeq_be((u64)afu->hsq_end, &afu->host_map->sq_end); 1459 + if (afu_is_sq_cmd_mode(afu)) { 1460 + writeq_be((u64)hwq->hsq_start, &hmap->sq_start); 1461 + writeq_be((u64)hwq->hsq_end, &hmap->sq_end); 1462 + } 1639 1463 } 1640 1464 1641 1465 /* AFU configuration */ ··· 1653 1473 if (afu->internal_lun) { 1654 1474 /* Only use port 0 */ 1655 1475 writeq_be(PORT0, &afu->afu_map->global.regs.afu_port_sel); 1656 - num_ports = NUM_FC_PORTS - 1; 1476 + num_ports = 0; 1657 1477 } else { 1658 - writeq_be(BOTH_PORTS, &afu->afu_map->global.regs.afu_port_sel); 1659 - num_ports = NUM_FC_PORTS; 1478 + writeq_be(PORT_MASK(cfg->num_fc_ports), 1479 + &afu->afu_map->global.regs.afu_port_sel); 1480 + num_ports = cfg->num_fc_ports; 1660 1481 } 1661 1482 1662 1483 for (i = 0; i < num_ports; i++) { 1484 + fc_port_regs = get_fc_port_regs(cfg, i); 1485 + 1663 1486 /* Unmask all errors (but they are still masked at AFU) */ 1664 - writeq_be(0, &afu->afu_map->global.fc_regs[i][FC_ERRMSK / 8]); 1487 + writeq_be(0, &fc_port_regs[FC_ERRMSK / 8]); 1665 1488 /* Clear CRC error cnt & set a threshold */ 1666 - (void)readq_be(&afu->afu_map->global. 
1667 - fc_regs[i][FC_CNT_CRCERR / 8]); 1668 - writeq_be(MC_CRC_THRESH, &afu->afu_map->global.fc_regs[i] 1669 - [FC_CRC_THRESH / 8]); 1489 + (void)readq_be(&fc_port_regs[FC_CNT_CRCERR / 8]); 1490 + writeq_be(MC_CRC_THRESH, &fc_port_regs[FC_CRC_THRESH / 8]); 1670 1491 1671 1492 /* Set WWPNs. If already programmed, wwpn[i] is 0 */ 1672 1493 if (wwpn[i] != 0) 1673 - afu_set_wwpn(afu, i, 1674 - &afu->afu_map->global.fc_regs[i][0], 1675 - wwpn[i]); 1494 + afu_set_wwpn(afu, i, &fc_port_regs[0], wwpn[i]); 1676 1495 /* Programming WWPN back to back causes additional 1677 1496 * offline/online transitions and a PLOGI 1678 1497 */ ··· 1681 1502 /* Set up master's own CTX_CAP to allow real mode, host translation */ 1682 1503 /* tables, afu cmds and read/write GSCSI cmds. */ 1683 1504 /* First, unlock ctx_cap write by reading mbox */ 1684 - (void)readq_be(&afu->ctrl_map->mbox_r); /* unlock ctx_cap */ 1685 - writeq_be((SISL_CTX_CAP_REAL_MODE | SISL_CTX_CAP_HOST_XLATE | 1686 - SISL_CTX_CAP_READ_CMD | SISL_CTX_CAP_WRITE_CMD | 1687 - SISL_CTX_CAP_AFU_CMD | SISL_CTX_CAP_GSCSI_CMD), 1688 - &afu->ctrl_map->ctx_cap); 1505 + for (i = 0; i < afu->num_hwqs; i++) { 1506 + hwq = get_hwq(afu, i); 1507 + 1508 + (void)readq_be(&hwq->ctrl_map->mbox_r); /* unlock ctx_cap */ 1509 + writeq_be((SISL_CTX_CAP_REAL_MODE | SISL_CTX_CAP_HOST_XLATE | 1510 + SISL_CTX_CAP_READ_CMD | SISL_CTX_CAP_WRITE_CMD | 1511 + SISL_CTX_CAP_AFU_CMD | SISL_CTX_CAP_GSCSI_CMD), 1512 + &hwq->ctrl_map->ctx_cap); 1513 + } 1689 1514 /* Initialize heartbeat */ 1690 1515 afu->hb = readq_be(&afu->afu_map->global.regs.afu_hb); 1691 1516 out: ··· 1704 1521 { 1705 1522 struct afu *afu = cfg->afu; 1706 1523 struct device *dev = &cfg->dev->dev; 1524 + struct hwq *hwq; 1707 1525 int rc = 0; 1526 + int i; 1708 1527 1709 1528 init_pcr(cfg); 1710 1529 1711 - /* After an AFU reset, RRQ entries are stale, clear them */ 1712 - memset(&afu->rrq_entry, 0, sizeof(afu->rrq_entry)); 1530 + /* Initialize each HWQ */ 1531 + for (i = 0; i < 
afu->num_hwqs; i++) { 1532 + hwq = get_hwq(afu, i); 1713 1533 1714 - /* Initialize RRQ pointers */ 1715 - afu->hrrq_start = &afu->rrq_entry[0]; 1716 - afu->hrrq_end = &afu->rrq_entry[NUM_RRQ_ENTRY - 1]; 1717 - afu->hrrq_curr = afu->hrrq_start; 1718 - afu->toggle = 1; 1534 + /* After an AFU reset, RRQ entries are stale, clear them */ 1535 + memset(&hwq->rrq_entry, 0, sizeof(hwq->rrq_entry)); 1719 1536 1720 - /* Initialize SQ */ 1721 - if (afu_is_sq_cmd_mode(afu)) { 1722 - memset(&afu->sq, 0, sizeof(afu->sq)); 1723 - afu->hsq_start = &afu->sq[0]; 1724 - afu->hsq_end = &afu->sq[NUM_SQ_ENTRY - 1]; 1725 - afu->hsq_curr = afu->hsq_start; 1537 + /* Initialize RRQ pointers */ 1538 + hwq->hrrq_start = &hwq->rrq_entry[0]; 1539 + hwq->hrrq_end = &hwq->rrq_entry[NUM_RRQ_ENTRY - 1]; 1540 + hwq->hrrq_curr = hwq->hrrq_start; 1541 + hwq->toggle = 1; 1542 + spin_lock_init(&hwq->hrrq_slock); 1726 1543 1727 - spin_lock_init(&afu->hsq_slock); 1728 - atomic_set(&afu->hsq_credits, NUM_SQ_ENTRY - 1); 1544 + /* Initialize SQ */ 1545 + if (afu_is_sq_cmd_mode(afu)) { 1546 + memset(&hwq->sq, 0, sizeof(hwq->sq)); 1547 + hwq->hsq_start = &hwq->sq[0]; 1548 + hwq->hsq_end = &hwq->sq[NUM_SQ_ENTRY - 1]; 1549 + hwq->hsq_curr = hwq->hsq_start; 1550 + 1551 + spin_lock_init(&hwq->hsq_slock); 1552 + atomic_set(&hwq->hsq_credits, NUM_SQ_ENTRY - 1); 1553 + } 1554 + 1555 + /* Initialize IRQ poll */ 1556 + if (afu_is_irqpoll_enabled(afu)) 1557 + irq_poll_init(&hwq->irqpoll, afu->irqpoll_weight, 1558 + cxlflash_irqpoll); 1559 + 1729 1560 } 1730 1561 1731 1562 rc = init_global(cfg); ··· 1751 1554 /** 1752 1555 * init_intr() - setup interrupt handlers for the master context 1753 1556 * @cfg: Internal structure associated with the host. 1557 + * @hwq: Hardware queue to initialize. 
1754 1558 * 1755 1559 * Return: 0 on success, -errno on failure 1756 1560 */ 1757 1561 static enum undo_level init_intr(struct cxlflash_cfg *cfg, 1758 - struct cxl_context *ctx) 1562 + struct hwq *hwq) 1759 1563 { 1760 - struct afu *afu = cfg->afu; 1761 1564 struct device *dev = &cfg->dev->dev; 1565 + struct cxl_context *ctx = hwq->ctx; 1762 1566 int rc = 0; 1763 1567 enum undo_level level = UNDO_NOOP; 1568 + bool is_primary_hwq = (hwq->index == PRIMARY_HWQ); 1569 + int num_irqs = is_primary_hwq ? 3 : 2; 1764 1570 1765 - rc = cxl_allocate_afu_irqs(ctx, 3); 1571 + rc = cxl_allocate_afu_irqs(ctx, num_irqs); 1766 1572 if (unlikely(rc)) { 1767 1573 dev_err(dev, "%s: allocate_afu_irqs failed rc=%d\n", 1768 1574 __func__, rc); ··· 1773 1573 goto out; 1774 1574 } 1775 1575 1776 - rc = cxl_map_afu_irq(ctx, 1, cxlflash_sync_err_irq, afu, 1576 + rc = cxl_map_afu_irq(ctx, 1, cxlflash_sync_err_irq, hwq, 1777 1577 "SISL_MSI_SYNC_ERROR"); 1778 1578 if (unlikely(rc <= 0)) { 1779 1579 dev_err(dev, "%s: SISL_MSI_SYNC_ERROR map failed\n", __func__); ··· 1781 1581 goto out; 1782 1582 } 1783 1583 1784 - rc = cxl_map_afu_irq(ctx, 2, cxlflash_rrq_irq, afu, 1584 + rc = cxl_map_afu_irq(ctx, 2, cxlflash_rrq_irq, hwq, 1785 1585 "SISL_MSI_RRQ_UPDATED"); 1786 1586 if (unlikely(rc <= 0)) { 1787 1587 dev_err(dev, "%s: SISL_MSI_RRQ_UPDATED map failed\n", __func__); ··· 1789 1589 goto out; 1790 1590 } 1791 1591 1792 - rc = cxl_map_afu_irq(ctx, 3, cxlflash_async_err_irq, afu, 1592 + /* SISL_MSI_ASYNC_ERROR is setup only for the primary HWQ */ 1593 + if (!is_primary_hwq) 1594 + goto out; 1595 + 1596 + rc = cxl_map_afu_irq(ctx, 3, cxlflash_async_err_irq, hwq, 1793 1597 "SISL_MSI_ASYNC_ERROR"); 1794 1598 if (unlikely(rc <= 0)) { 1795 1599 dev_err(dev, "%s: SISL_MSI_ASYNC_ERROR map failed\n", __func__); ··· 1807 1603 /** 1808 1604 * init_mc() - create and register as the master context 1809 1605 * @cfg: Internal structure associated with the host. 1606 + * index: HWQ Index of the master context. 
1810 1607 * 1811 1608 * Return: 0 on success, -errno on failure 1812 1609 */ 1813 - static int init_mc(struct cxlflash_cfg *cfg) 1610 + static int init_mc(struct cxlflash_cfg *cfg, u32 index) 1814 1611 { 1815 1612 struct cxl_context *ctx; 1816 1613 struct device *dev = &cfg->dev->dev; 1614 + struct hwq *hwq = get_hwq(cfg->afu, index); 1817 1615 int rc = 0; 1818 1616 enum undo_level level; 1819 1617 1820 - ctx = cxl_get_context(cfg->dev); 1618 + hwq->afu = cfg->afu; 1619 + hwq->index = index; 1620 + 1621 + if (index == PRIMARY_HWQ) 1622 + ctx = cxl_get_context(cfg->dev); 1623 + else 1624 + ctx = cxl_dev_context_init(cfg->dev); 1821 1625 if (unlikely(!ctx)) { 1822 1626 rc = -ENOMEM; 1823 - goto ret; 1627 + goto err1; 1824 1628 } 1825 - cfg->mcctx = ctx; 1629 + 1630 + WARN_ON(hwq->ctx); 1631 + hwq->ctx = ctx; 1826 1632 1827 1633 /* Set it up as a master with the CXL */ 1828 1634 cxl_set_master(ctx); 1829 1635 1830 - /* During initialization reset the AFU to start from a clean slate */ 1831 - rc = cxl_afu_reset(cfg->mcctx); 1832 - if (unlikely(rc)) { 1833 - dev_err(dev, "%s: AFU reset failed rc=%d\n", __func__, rc); 1834 - goto ret; 1636 + /* Reset AFU when initializing primary context */ 1637 + if (index == PRIMARY_HWQ) { 1638 + rc = cxl_afu_reset(ctx); 1639 + if (unlikely(rc)) { 1640 + dev_err(dev, "%s: AFU reset failed rc=%d\n", 1641 + __func__, rc); 1642 + goto err1; 1643 + } 1835 1644 } 1836 1645 1837 - level = init_intr(cfg, ctx); 1646 + level = init_intr(cfg, hwq); 1838 1647 if (unlikely(level)) { 1839 1648 dev_err(dev, "%s: interrupt init failed rc=%d\n", __func__, rc); 1840 - goto out; 1649 + goto err2; 1841 1650 } 1842 1651 1843 1652 /* This performs the equivalent of the CXL_IOCTL_START_WORK. 
1844 1653 * The CXL_IOCTL_GET_PROCESS_ELEMENT is implicit in the process 1845 1654 * element (pe) that is embedded in the context (ctx) 1846 1655 */ 1847 - rc = start_context(cfg); 1656 + rc = start_context(cfg, index); 1848 1657 if (unlikely(rc)) { 1849 1658 dev_err(dev, "%s: start context failed rc=%d\n", __func__, rc); 1850 1659 level = UNMAP_THREE; 1851 - goto out; 1660 + goto err2; 1852 1661 } 1853 - ret: 1662 + 1663 + out: 1854 1664 dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 1855 1665 return rc; 1856 - out: 1857 - term_intr(cfg, level); 1858 - goto ret; 1666 + err2: 1667 + term_intr(cfg, level, index); 1668 + if (index != PRIMARY_HWQ) 1669 + cxl_release_context(ctx); 1670 + err1: 1671 + hwq->ctx = NULL; 1672 + goto out; 1673 + } 1674 + 1675 + /** 1676 + * get_num_afu_ports() - determines and configures the number of AFU ports 1677 + * @cfg: Internal structure associated with the host. 1678 + * 1679 + * This routine determines the number of AFU ports by converting the global 1680 + * port selection mask. The converted value is only valid following an AFU 1681 + * reset (explicit or power-on). This routine must be invoked shortly after 1682 + * mapping as other routines are dependent on the number of ports during the 1683 + * initialization sequence. 1684 + * 1685 + * To support legacy AFUs that might not have reflected an initial global 1686 + * port mask (value read is 0), default to the number of ports originally 1687 + * supported by the cxlflash driver (2) before hardware with other port 1688 + * offerings was introduced. 
1689 + */ 1690 + static void get_num_afu_ports(struct cxlflash_cfg *cfg) 1691 + { 1692 + struct afu *afu = cfg->afu; 1693 + struct device *dev = &cfg->dev->dev; 1694 + u64 port_mask; 1695 + int num_fc_ports = LEGACY_FC_PORTS; 1696 + 1697 + port_mask = readq_be(&afu->afu_map->global.regs.afu_port_sel); 1698 + if (port_mask != 0ULL) 1699 + num_fc_ports = min(ilog2(port_mask) + 1, MAX_FC_PORTS); 1700 + 1701 + dev_dbg(dev, "%s: port_mask=%016llx num_fc_ports=%d\n", 1702 + __func__, port_mask, num_fc_ports); 1703 + 1704 + cfg->num_fc_ports = num_fc_ports; 1705 + cfg->host->max_channel = PORTNUM2CHAN(num_fc_ports); 1859 1706 } 1860 1707 1861 1708 /** ··· 1924 1669 int rc = 0; 1925 1670 struct afu *afu = cfg->afu; 1926 1671 struct device *dev = &cfg->dev->dev; 1672 + struct hwq *hwq; 1673 + int i; 1927 1674 1928 1675 cxl_perst_reloads_same_image(cfg->cxl_afu, true); 1929 1676 1930 - rc = init_mc(cfg); 1931 - if (rc) { 1932 - dev_err(dev, "%s: init_mc failed rc=%d\n", 1933 - __func__, rc); 1934 - goto out; 1677 + afu->num_hwqs = afu->desired_hwqs; 1678 + for (i = 0; i < afu->num_hwqs; i++) { 1679 + rc = init_mc(cfg, i); 1680 + if (rc) { 1681 + dev_err(dev, "%s: init_mc failed rc=%d index=%d\n", 1682 + __func__, rc, i); 1683 + goto err1; 1684 + } 1935 1685 } 1936 1686 1937 - /* Map the entire MMIO space of the AFU */ 1938 - afu->afu_map = cxl_psa_map(cfg->mcctx); 1687 + /* Map the entire MMIO space of the AFU using the first context */ 1688 + hwq = get_hwq(afu, PRIMARY_HWQ); 1689 + afu->afu_map = cxl_psa_map(hwq->ctx); 1939 1690 if (!afu->afu_map) { 1940 1691 dev_err(dev, "%s: cxl_psa_map failed\n", __func__); 1941 1692 rc = -ENOMEM; ··· 1972 1711 dev_dbg(dev, "%s: afu_ver=%s interface_ver=%016llx\n", __func__, 1973 1712 afu->version, afu->interface_version); 1974 1713 1714 + get_num_afu_ports(cfg); 1715 + 1975 1716 rc = start_afu(cfg); 1976 1717 if (rc) { 1977 1718 dev_err(dev, "%s: start_afu failed, rc=%d\n", __func__, rc); ··· 1981 1718 } 1982 1719 1983 1720 
afu_err_intr_init(cfg->afu); 1984 - spin_lock_init(&afu->rrin_slock); 1985 - afu->room = readq_be(&afu->host_map->cmd_room); 1721 + for (i = 0; i < afu->num_hwqs; i++) { 1722 + hwq = get_hwq(afu, i); 1723 + 1724 + spin_lock_init(&hwq->rrin_slock); 1725 + hwq->room = readq_be(&hwq->host_map->cmd_room); 1726 + } 1986 1727 1987 1728 /* Restore the LUN mappings */ 1988 1729 cxlflash_restore_luntable(cfg); ··· 1995 1728 return rc; 1996 1729 1997 1730 err1: 1998 - term_intr(cfg, UNMAP_THREE); 1999 - term_mc(cfg); 1731 + for (i = afu->num_hwqs - 1; i >= 0; i--) { 1732 + term_intr(cfg, UNMAP_THREE, i); 1733 + term_mc(cfg, i); 1734 + } 2000 1735 goto out; 2001 1736 } 2002 1737 ··· 2030 1761 struct cxlflash_cfg *cfg = afu->parent; 2031 1762 struct device *dev = &cfg->dev->dev; 2032 1763 struct afu_cmd *cmd = NULL; 1764 + struct hwq *hwq = get_hwq(afu, PRIMARY_HWQ); 2033 1765 char *buf = NULL; 2034 1766 int rc = 0; 2035 1767 static DEFINE_MUTEX(sync_active); ··· 2053 1783 cmd = (struct afu_cmd *)PTR_ALIGN(buf, __alignof__(*cmd)); 2054 1784 init_completion(&cmd->cevent); 2055 1785 cmd->parent = afu; 1786 + cmd->hwq_index = hwq->index; 2056 1787 2057 1788 dev_dbg(dev, "%s: afu=%p cmd=%p %d\n", __func__, afu, cmd, ctx_hndl_u); 2058 1789 2059 1790 cmd->rcb.req_flags = SISL_REQ_FLAGS_AFU_CMD; 2060 - cmd->rcb.ctx_id = afu->ctx_hndl; 1791 + cmd->rcb.ctx_id = hwq->ctx_hndl; 2061 1792 cmd->rcb.msi = SISL_MSI_RRQ_UPDATED; 2062 1793 cmd->rcb.timeout = MC_AFU_SYNC_TIMEOUT; 2063 1794 ··· 2242 1971 /** 2243 1972 * cxlflash_show_port_status() - queries and presents the current port status 2244 1973 * @port: Desired port for status reporting. 2245 - * @afu: AFU owning the specified port. 1974 + * @cfg: Internal structure associated with the host. 2246 1975 * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII. 2247 1976 * 2248 - * Return: The size of the ASCII string returned in @buf. 1977 + * Return: The size of the ASCII string returned in @buf or -EINVAL. 
2249 1978 */ 2250 - static ssize_t cxlflash_show_port_status(u32 port, struct afu *afu, char *buf) 1979 + static ssize_t cxlflash_show_port_status(u32 port, 1980 + struct cxlflash_cfg *cfg, 1981 + char *buf) 2251 1982 { 1983 + struct device *dev = &cfg->dev->dev; 2252 1984 char *disp_status; 2253 1985 u64 status; 2254 - __be64 __iomem *fc_regs; 1986 + __be64 __iomem *fc_port_regs; 2255 1987 2256 - if (port >= NUM_FC_PORTS) 2257 - return 0; 1988 + WARN_ON(port >= MAX_FC_PORTS); 2258 1989 2259 - fc_regs = &afu->afu_map->global.fc_regs[port][0]; 2260 - status = readq_be(&fc_regs[FC_MTIP_STATUS / 8]); 1990 + if (port >= cfg->num_fc_ports) { 1991 + dev_info(dev, "%s: Port %d not supported on this card.\n", 1992 + __func__, port); 1993 + return -EINVAL; 1994 + } 1995 + 1996 + fc_port_regs = get_fc_port_regs(cfg, port); 1997 + status = readq_be(&fc_port_regs[FC_MTIP_STATUS / 8]); 2261 1998 status &= FC_MTIP_STATUS_MASK; 2262 1999 2263 2000 if (status == FC_MTIP_STATUS_ONLINE) ··· 2291 2012 char *buf) 2292 2013 { 2293 2014 struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2294 - struct afu *afu = cfg->afu; 2295 2015 2296 - return cxlflash_show_port_status(0, afu, buf); 2016 + return cxlflash_show_port_status(0, cfg, buf); 2297 2017 } 2298 2018 2299 2019 /** ··· 2308 2030 char *buf) 2309 2031 { 2310 2032 struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2311 - struct afu *afu = cfg->afu; 2312 2033 2313 - return cxlflash_show_port_status(1, afu, buf); 2034 + return cxlflash_show_port_status(1, cfg, buf); 2035 + } 2036 + 2037 + /** 2038 + * port2_show() - queries and presents the current status of port 2 2039 + * @dev: Generic device associated with the host owning the port. 2040 + * @attr: Device attribute representing the port. 2041 + * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII. 2042 + * 2043 + * Return: The size of the ASCII string returned in @buf. 
2044 + */ 2045 + static ssize_t port2_show(struct device *dev, 2046 + struct device_attribute *attr, 2047 + char *buf) 2048 + { 2049 + struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2050 + 2051 + return cxlflash_show_port_status(2, cfg, buf); 2052 + } 2053 + 2054 + /** 2055 + * port3_show() - queries and presents the current status of port 3 2056 + * @dev: Generic device associated with the host owning the port. 2057 + * @attr: Device attribute representing the port. 2058 + * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII. 2059 + * 2060 + * Return: The size of the ASCII string returned in @buf. 2061 + */ 2062 + static ssize_t port3_show(struct device *dev, 2063 + struct device_attribute *attr, 2064 + char *buf) 2065 + { 2066 + struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2067 + 2068 + return cxlflash_show_port_status(3, cfg, buf); 2314 2069 } 2315 2070 2316 2071 /** ··· 2401 2090 2402 2091 /* 2403 2092 * When configured for internal LUN, there is only one channel, 2404 - * channel number 0, else there will be 2 (default). 2093 + * channel number 0, else there will be one less than the number 2094 + * of fc ports for this card. 2405 2095 */ 2406 2096 if (afu->internal_lun) 2407 2097 shost->max_channel = 0; 2408 2098 else 2409 - shost->max_channel = NUM_FC_PORTS - 1; 2099 + shost->max_channel = PORTNUM2CHAN(cfg->num_fc_ports); 2410 2100 2411 2101 afu_reset(cfg); 2412 2102 scsi_scan_host(cfg->host); ··· 2433 2121 /** 2434 2122 * cxlflash_show_port_lun_table() - queries and presents the port LUN table 2435 2123 * @port: Desired port for status reporting. 2436 - * @afu: AFU owning the specified port. 2124 + * @cfg: Internal structure associated with the host. 2437 2125 * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII. 2438 2126 * 2439 - * Return: The size of the ASCII string returned in @buf. 2127 + * Return: The size of the ASCII string returned in @buf or -EINVAL. 
2440 2128 */ 2441 2129 static ssize_t cxlflash_show_port_lun_table(u32 port, 2442 - struct afu *afu, 2130 + struct cxlflash_cfg *cfg, 2443 2131 char *buf) 2444 2132 { 2133 + struct device *dev = &cfg->dev->dev; 2134 + __be64 __iomem *fc_port_luns; 2445 2135 int i; 2446 2136 ssize_t bytes = 0; 2447 - __be64 __iomem *fc_port; 2448 2137 2449 - if (port >= NUM_FC_PORTS) 2450 - return 0; 2138 + WARN_ON(port >= MAX_FC_PORTS); 2451 2139 2452 - fc_port = &afu->afu_map->global.fc_port[port][0]; 2140 + if (port >= cfg->num_fc_ports) { 2141 + dev_info(dev, "%s: Port %d not supported on this card.\n", 2142 + __func__, port); 2143 + return -EINVAL; 2144 + } 2145 + 2146 + fc_port_luns = get_fc_port_luns(cfg, port); 2453 2147 2454 2148 for (i = 0; i < CXLFLASH_NUM_VLUNS; i++) 2455 2149 bytes += scnprintf(buf + bytes, PAGE_SIZE - bytes, 2456 - "%03d: %016llx\n", i, readq_be(&fc_port[i])); 2150 + "%03d: %016llx\n", 2151 + i, readq_be(&fc_port_luns[i])); 2457 2152 return bytes; 2458 2153 } 2459 2154 ··· 2477 2158 char *buf) 2478 2159 { 2479 2160 struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2480 - struct afu *afu = cfg->afu; 2481 2161 2482 - return cxlflash_show_port_lun_table(0, afu, buf); 2162 + return cxlflash_show_port_lun_table(0, cfg, buf); 2483 2163 } 2484 2164 2485 2165 /** ··· 2494 2176 char *buf) 2495 2177 { 2496 2178 struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2179 + 2180 + return cxlflash_show_port_lun_table(1, cfg, buf); 2181 + } 2182 + 2183 + /** 2184 + * port2_lun_table_show() - presents the current LUN table of port 2 2185 + * @dev: Generic device associated with the host owning the port. 2186 + * @attr: Device attribute representing the port. 2187 + * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII. 2188 + * 2189 + * Return: The size of the ASCII string returned in @buf. 
2190 + */ 2191 + static ssize_t port2_lun_table_show(struct device *dev, 2192 + struct device_attribute *attr, 2193 + char *buf) 2194 + { 2195 + struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2196 + 2197 + return cxlflash_show_port_lun_table(2, cfg, buf); 2198 + } 2199 + 2200 + /** 2201 + * port3_lun_table_show() - presents the current LUN table of port 3 2202 + * @dev: Generic device associated with the host owning the port. 2203 + * @attr: Device attribute representing the port. 2204 + * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII. 2205 + * 2206 + * Return: The size of the ASCII string returned in @buf. 2207 + */ 2208 + static ssize_t port3_lun_table_show(struct device *dev, 2209 + struct device_attribute *attr, 2210 + char *buf) 2211 + { 2212 + struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2213 + 2214 + return cxlflash_show_port_lun_table(3, cfg, buf); 2215 + } 2216 + 2217 + /** 2218 + * irqpoll_weight_show() - presents the current IRQ poll weight for the host 2219 + * @dev: Generic device associated with the host. 2220 + * @attr: Device attribute representing the IRQ poll weight. 2221 + * @buf: Buffer of length PAGE_SIZE to report back the current IRQ poll 2222 + * weight in ASCII. 2223 + * 2224 + * An IRQ poll weight of 0 indicates polling is disabled. 2225 + * 2226 + * Return: The size of the ASCII string returned in @buf. 2227 + */ 2228 + static ssize_t irqpoll_weight_show(struct device *dev, 2229 + struct device_attribute *attr, char *buf) 2230 + { 2231 + struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2497 2232 struct afu *afu = cfg->afu; 2498 2233 2499 - return cxlflash_show_port_lun_table(1, afu, buf); 2234 + return scnprintf(buf, PAGE_SIZE, "%u\n", afu->irqpoll_weight); 2235 + } 2236 + 2237 + /** 2238 + * irqpoll_weight_store() - sets the current IRQ poll weight for the host 2239 + * @dev: Generic device associated with the host. 
2240 + * @attr: Device attribute representing the IRQ poll weight. 2241 + * @buf: Buffer of length PAGE_SIZE containing the desired IRQ poll 2242 + * weight in ASCII. 2243 + * @count: Length of data residing in @buf. 2244 + * 2245 + * An IRQ poll weight of 0 indicates polling is disabled. 2246 + * 2247 + * Return: The number of bytes consumed from @buf, or -EINVAL on invalid input. 2248 + */ 2249 + static ssize_t irqpoll_weight_store(struct device *dev, 2250 + struct device_attribute *attr, 2251 + const char *buf, size_t count) 2252 + { 2253 + struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2254 + struct device *cfgdev = &cfg->dev->dev; 2255 + struct afu *afu = cfg->afu; 2256 + struct hwq *hwq; 2257 + u32 weight; 2258 + int rc, i; 2259 + 2260 + rc = kstrtouint(buf, 10, &weight); 2261 + if (rc) 2262 + return -EINVAL; 2263 + 2264 + if (weight > 256) { 2265 + dev_info(cfgdev, 2266 + "Invalid IRQ poll weight. It must be 256 or less.\n"); 2267 + return -EINVAL; 2268 + } 2269 + 2270 + if (weight == afu->irqpoll_weight) { 2271 + dev_info(cfgdev, 2272 + "New IRQ poll weight matches the current weight.\n"); 2273 + return -EINVAL; 2274 + } 2275 + 2276 + if (afu_is_irqpoll_enabled(afu)) { 2277 + for (i = 0; i < afu->num_hwqs; i++) { 2278 + hwq = get_hwq(afu, i); 2279 + 2280 + irq_poll_disable(&hwq->irqpoll); 2281 + } 2282 + } 2283 + 2284 + afu->irqpoll_weight = weight; 2285 + 2286 + if (weight > 0) { 2287 + for (i = 0; i < afu->num_hwqs; i++) { 2288 + hwq = get_hwq(afu, i); 2289 + 2290 + irq_poll_init(&hwq->irqpoll, weight, cxlflash_irqpoll); 2291 + } 2292 + } 2293 + 2294 + return count; 2295 + } 2296 + 2297 + /** 2298 + * num_hwqs_show() - presents the number of hardware queues for the host 2299 + * @dev: Generic device associated with the host. 2300 + * @attr: Device attribute representing the number of hardware queues. 2301 + * @buf: Buffer of length PAGE_SIZE to report back the number of hardware 2302 + * queues in ASCII.
2303 + * 2304 + * Return: The size of the ASCII string returned in @buf. 2305 + */ 2306 + static ssize_t num_hwqs_show(struct device *dev, 2307 + struct device_attribute *attr, char *buf) 2308 + { 2309 + struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2310 + struct afu *afu = cfg->afu; 2311 + 2312 + return scnprintf(buf, PAGE_SIZE, "%u\n", afu->num_hwqs); 2313 + } 2314 + 2315 + /** 2316 + * num_hwqs_store() - sets the number of hardware queues for the host 2317 + * @dev: Generic device associated with the host. 2318 + * @attr: Device attribute representing the number of hardware queues. 2319 + * @buf: Buffer of length PAGE_SIZE containing the number of hardware 2320 + * queues in ASCII. 2321 + * @count: Length of data residing in @buf. 2322 + * 2323 + * n > 0: num_hwqs = n 2324 + * n = 0: num_hwqs = num_online_cpus() 2325 + * n < 0: num_hwqs = num_online_cpus() / abs(n) 2326 + * 2327 + * Return: The number of bytes consumed from @buf, or -EINVAL on invalid input. 2328 + */ 2329 + static ssize_t num_hwqs_store(struct device *dev, 2330 + struct device_attribute *attr, 2331 + const char *buf, size_t count) 2332 + { 2333 + struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2334 + struct afu *afu = cfg->afu; 2335 + int rc; 2336 + int nhwqs, num_hwqs; 2337 + 2338 + rc = kstrtoint(buf, 10, &nhwqs); 2339 + if (rc) 2340 + return -EINVAL; 2341 + 2342 + if (nhwqs >= 1) 2343 + num_hwqs = nhwqs; 2344 + else if (nhwqs == 0) 2345 + num_hwqs = num_online_cpus(); 2346 + else 2347 + num_hwqs = num_online_cpus() / abs(nhwqs); 2348 + 2349 + afu->desired_hwqs = min(num_hwqs, CXLFLASH_MAX_HWQS); 2350 + WARN_ON_ONCE(afu->desired_hwqs == 0); 2351 + 2352 + retry: 2353 + switch (cfg->state) { 2354 + case STATE_NORMAL: 2355 + cfg->state = STATE_RESET; 2356 + drain_ioctls(cfg); 2357 + cxlflash_mark_contexts_error(cfg); 2358 + rc = afu_reset(cfg); 2359 + if (rc) 2360 + cfg->state = STATE_FAILTERM; 2361 + else 2362 + cfg->state = STATE_NORMAL; 2363 + wake_up_all(&cfg->reset_waitq); 2364 + break; 2365 +
case STATE_RESET: 2366 + wait_event(cfg->reset_waitq, cfg->state != STATE_RESET); 2367 + if (cfg->state == STATE_NORMAL) 2368 + goto retry; 2369 + default: 2370 + /* Ideally should not happen */ 2371 + dev_err(dev, "%s: Device is not ready, state=%d\n", 2372 + __func__, cfg->state); 2373 + break; 2374 + } 2375 + 2376 + return count; 2377 + } 2378 + 2379 + static const char *hwq_mode_name[MAX_HWQ_MODE] = { "rr", "tag", "cpu" }; 2380 + 2381 + /** 2382 + * hwq_mode_show() - presents the HWQ steering mode for the host 2383 + * @dev: Generic device associated with the host. 2384 + * @attr: Device attribute representing the HWQ steering mode. 2385 + * @buf: Buffer of length PAGE_SIZE to report back the HWQ steering mode 2386 + * as a character string. 2387 + * 2388 + * Return: The size of the ASCII string returned in @buf. 2389 + */ 2390 + static ssize_t hwq_mode_show(struct device *dev, 2391 + struct device_attribute *attr, char *buf) 2392 + { 2393 + struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2394 + struct afu *afu = cfg->afu; 2395 + 2396 + return scnprintf(buf, PAGE_SIZE, "%s\n", hwq_mode_name[afu->hwq_mode]); 2397 + } 2398 + 2399 + /** 2400 + * hwq_mode_store() - sets the HWQ steering mode for the host 2401 + * @dev: Generic device associated with the host. 2402 + * @attr: Device attribute representing the HWQ steering mode. 2403 + * @buf: Buffer of length PAGE_SIZE containing the HWQ steering mode 2404 + * as a character string. 2405 + * @count: Length of data residing in @buf. 2406 + * 2407 + * rr = Round-Robin 2408 + * tag = Block MQ Tagging 2409 + * cpu = CPU Affinity 2410 + * 2411 + * Return: The number of bytes consumed from @buf, -errno on failure. 
2412 + */ 2413 + static ssize_t hwq_mode_store(struct device *dev, 2414 + struct device_attribute *attr, 2415 + const char *buf, size_t count) 2416 + { 2417 + struct Scsi_Host *shost = class_to_shost(dev); 2418 + struct cxlflash_cfg *cfg = shost_priv(shost); 2419 + struct device *cfgdev = &cfg->dev->dev; 2420 + struct afu *afu = cfg->afu; 2421 + int i; 2422 + u32 mode = MAX_HWQ_MODE; 2423 + 2424 + for (i = 0; i < MAX_HWQ_MODE; i++) { 2425 + if (!strncmp(hwq_mode_name[i], buf, strlen(hwq_mode_name[i]))) { 2426 + mode = i; 2427 + break; 2428 + } 2429 + } 2430 + 2431 + if (mode >= MAX_HWQ_MODE) { 2432 + dev_info(cfgdev, "Invalid HWQ steering mode.\n"); 2433 + return -EINVAL; 2434 + } 2435 + 2436 + if ((mode == HWQ_MODE_TAG) && !shost_use_blk_mq(shost)) { 2437 + dev_info(cfgdev, "SCSI-MQ is not enabled, use a different " 2438 + "HWQ steering mode.\n"); 2439 + return -EINVAL; 2440 + } 2441 + 2442 + afu->hwq_mode = mode; 2443 + 2444 + return count; 2500 2445 } 2501 2446 2502 2447 /** ··· 2784 2203 */ 2785 2204 static DEVICE_ATTR_RO(port0); 2786 2205 static DEVICE_ATTR_RO(port1); 2206 + static DEVICE_ATTR_RO(port2); 2207 + static DEVICE_ATTR_RO(port3); 2787 2208 static DEVICE_ATTR_RW(lun_mode); 2788 2209 static DEVICE_ATTR_RO(ioctl_version); 2789 2210 static DEVICE_ATTR_RO(port0_lun_table); 2790 2211 static DEVICE_ATTR_RO(port1_lun_table); 2212 + static DEVICE_ATTR_RO(port2_lun_table); 2213 + static DEVICE_ATTR_RO(port3_lun_table); 2214 + static DEVICE_ATTR_RW(irqpoll_weight); 2215 + static DEVICE_ATTR_RW(num_hwqs); 2216 + static DEVICE_ATTR_RW(hwq_mode); 2791 2217 2792 2218 static struct device_attribute *cxlflash_host_attrs[] = { 2793 2219 &dev_attr_port0, 2794 2220 &dev_attr_port1, 2221 + &dev_attr_port2, 2222 + &dev_attr_port3, 2795 2223 &dev_attr_lun_mode, 2796 2224 &dev_attr_ioctl_version, 2797 2225 &dev_attr_port0_lun_table, 2798 2226 &dev_attr_port1_lun_table, 2227 + &dev_attr_port2_lun_table, 2228 + &dev_attr_port3_lun_table, 2229 + &dev_attr_irqpoll_weight, 2230 
+ &dev_attr_num_hwqs, 2231 + &dev_attr_hwq_mode, 2799 2232 NULL 2800 2233 }; 2801 2234 ··· 2887 2292 work_q); 2888 2293 struct afu *afu = cfg->afu; 2889 2294 struct device *dev = &cfg->dev->dev; 2295 + __be64 __iomem *fc_port_regs; 2890 2296 int port; 2891 2297 ulong lock_flags; 2892 2298 ··· 2908 2312 lock_flags); 2909 2313 2910 2314 /* The reset can block... */ 2911 - afu_link_reset(afu, port, 2912 - &afu->afu_map->global.fc_regs[port][0]); 2315 + fc_port_regs = get_fc_port_regs(cfg, port); 2316 + afu_link_reset(afu, port, fc_port_regs); 2913 2317 spin_lock_irqsave(cfg->host->host_lock, lock_flags); 2914 2318 } 2915 2319 ··· 2927 2331 * @pdev: PCI device associated with the host. 2928 2332 * @dev_id: PCI device id associated with device. 2929 2333 * 2334 + * The device will initially start out in a 'probing' state and 2335 + * transition to the 'normal' state at the end of a successful 2336 + * probe. Should an EEH event occur during probe, the notification 2337 + * thread (error_detected()) will wait until the probe handler 2338 + * is nearly complete. At that time, the device will be moved to 2339 + * a 'probed' state and the EEH thread woken up to drive the slot 2340 + * reset and recovery (device moves to 'normal' state). Meanwhile, 2341 + * the probe will be allowed to exit successfully. 
2342 + * 2930 2343 * Return: 0 on success, -errno on failure 2931 2344 */ 2932 2345 static int cxlflash_probe(struct pci_dev *pdev, ··· 2946 2341 struct device *dev = &pdev->dev; 2947 2342 struct dev_dependent_vals *ddv; 2948 2343 int rc = 0; 2344 + int k; 2949 2345 2950 2346 dev_dbg(&pdev->dev, "%s: Found CXLFLASH with IRQ: %d\n", 2951 2347 __func__, pdev->irq); ··· 2963 2357 2964 2358 host->max_id = CXLFLASH_MAX_NUM_TARGETS_PER_BUS; 2965 2359 host->max_lun = CXLFLASH_MAX_NUM_LUNS_PER_TARGET; 2966 - host->max_channel = NUM_FC_PORTS - 1; 2967 2360 host->unique_id = host->host_no; 2968 2361 host->max_cmd_len = CXLFLASH_MAX_CDB_LEN; 2969 2362 ··· 2981 2376 cfg->cxl_fops = cxlflash_cxl_fops; 2982 2377 2983 2378 /* 2984 - * The promoted LUNs move to the top of the LUN table. The rest stay 2985 - * on the bottom half. The bottom half grows from the end 2986 - * (index = 255), whereas the top half grows from the beginning 2987 - * (index = 0). 2379 + * Promoted LUNs move to the top of the LUN table. The rest stay on 2380 + * the bottom half. The bottom half grows from the end (index = 255), 2381 + * whereas the top half grows from the beginning (index = 0). 2382 + * 2383 + * Initialize the last LUN index for all possible ports. 
2988 2384 */ 2989 - cfg->promote_lun_index = 0; 2990 - cfg->last_lun_index[0] = CXLFLASH_NUM_VLUNS/2 - 1; 2991 - cfg->last_lun_index[1] = CXLFLASH_NUM_VLUNS/2 - 1; 2385 + cfg->promote_lun_index = 0; 2386 + 2387 + for (k = 0; k < MAX_FC_PORTS; k++) 2388 + cfg->last_lun_index[k] = CXLFLASH_NUM_VLUNS/2 - 1; 2992 2389 2993 2390 cfg->dev_id = (struct pci_device_id *)dev_id; 2994 2391 ··· 3019 2412 cfg->init_state = INIT_STATE_PCI; 3020 2413 3021 2414 rc = init_afu(cfg); 3022 - if (rc) { 2415 + if (rc && !wq_has_sleeper(&cfg->reset_waitq)) { 3023 2416 dev_err(dev, "%s: init_afu failed rc=%d\n", __func__, rc); 3024 2417 goto out_remove; 3025 2418 } ··· 3032 2425 } 3033 2426 cfg->init_state = INIT_STATE_SCSI; 3034 2427 2428 + if (wq_has_sleeper(&cfg->reset_waitq)) { 2429 + cfg->state = STATE_PROBED; 2430 + wake_up_all(&cfg->reset_waitq); 2431 + } else 2432 + cfg->state = STATE_NORMAL; 3035 2433 out: 3036 2434 dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 3037 2435 return rc; ··· 3067 2455 3068 2456 switch (state) { 3069 2457 case pci_channel_io_frozen: 3070 - wait_event(cfg->reset_waitq, cfg->state != STATE_RESET); 2458 + wait_event(cfg->reset_waitq, cfg->state != STATE_RESET && 2459 + cfg->state != STATE_PROBING); 3071 2460 if (cfg->state == STATE_FAILTERM) 3072 2461 return PCI_ERS_RESULT_DISCONNECT; 3073 2462 ··· 3159 2546 */ 3160 2547 static int __init init_cxlflash(void) 3161 2548 { 2549 + check_sizes(); 3162 2550 cxlflash_list_init(); 3163 2551 3164 2552 return pci_register_driver(&cxlflash_driver);
-2
drivers/scsi/cxlflash/main.h
··· 37 37 38 38 #define CXLFLASH_PCI_ERROR_RECOVERY_TIMEOUT (120 * HZ) 39 39 40 - #define NUM_FC_PORTS CXLFLASH_NUM_FC_PORTS /* ports per AFU */ 41 - 42 40 /* FC defines */ 43 41 #define FC_MTIP_CMDCONFIG 0x010 44 42 #define FC_MTIP_STATUS 0x018
+79 -43
drivers/scsi/cxlflash/sislite.h
··· 90 90 #define SISL_AFU_RC_RHT_UNALIGNED 0x02U /* should never happen */ 91 91 #define SISL_AFU_RC_RHT_OUT_OF_BOUNDS 0x03u /* user error */ 92 92 #define SISL_AFU_RC_RHT_DMA_ERR 0x04u /* see afu_extra 93 - may retry if afu_retry is off 94 - possible on master exit 93 + * may retry if afu_retry is off 94 + * possible on master exit 95 95 */ 96 96 #define SISL_AFU_RC_RHT_RW_PERM 0x05u /* no RW perms, user error */ 97 97 #define SISL_AFU_RC_LXT_UNALIGNED 0x12U /* should never happen */ 98 98 #define SISL_AFU_RC_LXT_OUT_OF_BOUNDS 0x13u /* user error */ 99 99 #define SISL_AFU_RC_LXT_DMA_ERR 0x14u /* see afu_extra 100 - may retry if afu_retry is off 101 - possible on master exit 100 + * may retry if afu_retry is off 101 + * possible on master exit 102 102 */ 103 103 #define SISL_AFU_RC_LXT_RW_PERM 0x15u /* no RW perms, user error */ 104 104 ··· 111 111 */ 112 112 #define SISL_AFU_RC_NO_CHANNELS 0x20U /* see afu_extra, may retry */ 113 113 #define SISL_AFU_RC_CAP_VIOLATION 0x21U /* either user error or 114 - afu reset/master restart 114 + * afu reset/master restart 115 115 */ 116 116 #define SISL_AFU_RC_OUT_OF_DATA_BUFS 0x30U /* always retry */ 117 117 #define SISL_AFU_RC_DATA_DMA_ERR 0x31U /* see afu_extra 118 - may retry if afu_retry is off 118 + * may retry if afu_retry is off 119 119 */ 120 120 121 121 u8 scsi_rc; /* SCSI status byte, retry as appropriate */ ··· 149 149 #define SISL_FC_RC_ABORTFAIL 0x59 /* pending abort completed w/fail */ 150 150 #define SISL_FC_RC_RESID 0x5A /* ioasa underrun/overrun flags set */ 151 151 #define SISL_FC_RC_RESIDERR 0x5B /* actual data len does not match SCSI 152 - reported len, possibly due to dropped 153 - frames */ 152 + * reported len, possibly due to dropped 153 + * frames 154 + */ 154 155 #define SISL_FC_RC_TGTABORT 0x5C /* command aborted by target */ 155 156 }; 156 157 ··· 228 227 229 228 /* per context host transport MMIO */ 230 229 struct sisl_host_map { 231 - __be64 endian_ctrl; /* Per context Endian Control. 
The AFU will 232 - * operate on whatever the context is of the 233 - * host application. 234 - */ 230 + __be64 endian_ctrl; /* Per context Endian Control. The AFU will 231 + * operate on whatever the context is of the 232 + * host application. 233 + */ 235 234 236 235 __be64 intr_status; /* this sends LISN# programmed in ctx_ctrl. 237 236 * Only recovery in a PERM_ERR is a context ··· 293 292 /* single copy global regs */ 294 293 struct sisl_global_regs { 295 294 __be64 aintr_status; 296 - /* In cxlflash, each FC port/link gets a byte of status */ 297 - #define SISL_ASTATUS_FC0_OTHER 0x8000ULL /* b48, other err, 298 - FC_ERRCAP[31:20] */ 299 - #define SISL_ASTATUS_FC0_LOGO 0x4000ULL /* b49, target sent FLOGI/PLOGI/LOGO 300 - while logged in */ 301 - #define SISL_ASTATUS_FC0_CRC_T 0x2000ULL /* b50, CRC threshold exceeded */ 302 - #define SISL_ASTATUS_FC0_LOGI_R 0x1000ULL /* b51, login state machine timed out 303 - and retrying */ 304 - #define SISL_ASTATUS_FC0_LOGI_F 0x0800ULL /* b52, login failed, 305 - FC_ERROR[19:0] */ 306 - #define SISL_ASTATUS_FC0_LOGI_S 0x0400ULL /* b53, login succeeded */ 307 - #define SISL_ASTATUS_FC0_LINK_DN 0x0200ULL /* b54, link online to offline */ 308 - #define SISL_ASTATUS_FC0_LINK_UP 0x0100ULL /* b55, link offline to online */ 295 + /* 296 + * In cxlflash, FC port/link are arranged in port pairs, each 297 + * gets a byte of status: 298 + * 299 + * *_OTHER: other err, FC_ERRCAP[31:20] 300 + * *_LOGO: target sent FLOGI/PLOGI/LOGO while logged in 301 + * *_CRC_T: CRC threshold exceeded 302 + * *_LOGI_R: login state machine timed out and retrying 303 + * *_LOGI_F: login failed, FC_ERROR[19:0] 304 + * *_LOGI_S: login succeeded 305 + * *_LINK_DN: link online to offline 306 + * *_LINK_UP: link offline to online 307 + */ 308 + #define SISL_ASTATUS_FC2_OTHER 0x80000000ULL /* b32 */ 309 + #define SISL_ASTATUS_FC2_LOGO 0x40000000ULL /* b33 */ 310 + #define SISL_ASTATUS_FC2_CRC_T 0x20000000ULL /* b34 */ 311 + #define SISL_ASTATUS_FC2_LOGI_R 
0x10000000ULL /* b35 */ 312 + #define SISL_ASTATUS_FC2_LOGI_F 0x08000000ULL /* b36 */ 313 + #define SISL_ASTATUS_FC2_LOGI_S 0x04000000ULL /* b37 */ 314 + #define SISL_ASTATUS_FC2_LINK_DN 0x02000000ULL /* b38 */ 315 + #define SISL_ASTATUS_FC2_LINK_UP 0x01000000ULL /* b39 */ 309 316 310 - #define SISL_ASTATUS_FC1_OTHER 0x0080ULL /* b56 */ 311 - #define SISL_ASTATUS_FC1_LOGO 0x0040ULL /* b57 */ 312 - #define SISL_ASTATUS_FC1_CRC_T 0x0020ULL /* b58 */ 313 - #define SISL_ASTATUS_FC1_LOGI_R 0x0010ULL /* b59 */ 314 - #define SISL_ASTATUS_FC1_LOGI_F 0x0008ULL /* b60 */ 315 - #define SISL_ASTATUS_FC1_LOGI_S 0x0004ULL /* b61 */ 316 - #define SISL_ASTATUS_FC1_LINK_DN 0x0002ULL /* b62 */ 317 - #define SISL_ASTATUS_FC1_LINK_UP 0x0001ULL /* b63 */ 317 + #define SISL_ASTATUS_FC3_OTHER 0x00800000ULL /* b40 */ 318 + #define SISL_ASTATUS_FC3_LOGO 0x00400000ULL /* b41 */ 319 + #define SISL_ASTATUS_FC3_CRC_T 0x00200000ULL /* b42 */ 320 + #define SISL_ASTATUS_FC3_LOGI_R 0x00100000ULL /* b43 */ 321 + #define SISL_ASTATUS_FC3_LOGI_F 0x00080000ULL /* b44 */ 322 + #define SISL_ASTATUS_FC3_LOGI_S 0x00040000ULL /* b45 */ 323 + #define SISL_ASTATUS_FC3_LINK_DN 0x00020000ULL /* b46 */ 324 + #define SISL_ASTATUS_FC3_LINK_UP 0x00010000ULL /* b47 */ 325 + 326 + #define SISL_ASTATUS_FC0_OTHER 0x00008000ULL /* b48 */ 327 + #define SISL_ASTATUS_FC0_LOGO 0x00004000ULL /* b49 */ 328 + #define SISL_ASTATUS_FC0_CRC_T 0x00002000ULL /* b50 */ 329 + #define SISL_ASTATUS_FC0_LOGI_R 0x00001000ULL /* b51 */ 330 + #define SISL_ASTATUS_FC0_LOGI_F 0x00000800ULL /* b52 */ 331 + #define SISL_ASTATUS_FC0_LOGI_S 0x00000400ULL /* b53 */ 332 + #define SISL_ASTATUS_FC0_LINK_DN 0x00000200ULL /* b54 */ 333 + #define SISL_ASTATUS_FC0_LINK_UP 0x00000100ULL /* b55 */ 334 + 335 + #define SISL_ASTATUS_FC1_OTHER 0x00000080ULL /* b56 */ 336 + #define SISL_ASTATUS_FC1_LOGO 0x00000040ULL /* b57 */ 337 + #define SISL_ASTATUS_FC1_CRC_T 0x00000020ULL /* b58 */ 338 + #define SISL_ASTATUS_FC1_LOGI_R 0x00000010ULL /* b59 */ 339 + 
#define SISL_ASTATUS_FC1_LOGI_F 0x00000008ULL /* b60 */ 340 + #define SISL_ASTATUS_FC1_LOGI_S 0x00000004ULL /* b61 */ 341 + #define SISL_ASTATUS_FC1_LINK_DN 0x00000002ULL /* b62 */ 342 + #define SISL_ASTATUS_FC1_LINK_UP 0x00000001ULL /* b63 */ 318 343 319 344 #define SISL_FC_INTERNAL_UNMASK 0x0000000300000000ULL /* 1 means unmasked */ 320 345 #define SISL_FC_INTERNAL_MASK ~(SISL_FC_INTERNAL_UNMASK) ··· 352 325 #define SISL_STATUS_SHUTDOWN_ACTIVE 0x0000000000000010ULL 353 326 #define SISL_STATUS_SHUTDOWN_COMPLETE 0x0000000000000020ULL 354 327 355 - #define SISL_ASTATUS_UNMASK 0xFFFFULL /* 1 means unmasked */ 328 + #define SISL_ASTATUS_UNMASK 0xFFFFFFFFULL /* 1 means unmasked */ 356 329 #define SISL_ASTATUS_MASK ~(SISL_ASTATUS_UNMASK) /* 1 means masked */ 357 330 358 331 __be64 aintr_clear; ··· 394 367 #define SISL_INTVER_CAP_RESERVED_CMD_MODE_B 0x100000000000ULL 395 368 }; 396 369 397 - #define CXLFLASH_NUM_FC_PORTS 2 398 - #define CXLFLASH_MAX_CONTEXT 512 /* how many contexts per afu */ 399 - #define CXLFLASH_NUM_VLUNS 512 370 + #define CXLFLASH_NUM_FC_PORTS_PER_BANK 2 /* fixed # of ports per bank */ 371 + #define CXLFLASH_MAX_FC_BANKS 2 /* max # of banks supported */ 372 + #define CXLFLASH_MAX_FC_PORTS (CXLFLASH_NUM_FC_PORTS_PER_BANK * \ 373 + CXLFLASH_MAX_FC_BANKS) 374 + #define CXLFLASH_MAX_CONTEXT 512 /* number of contexts per AFU */ 375 + #define CXLFLASH_NUM_VLUNS 512 /* number of vluns per AFU/port */ 376 + #define CXLFLASH_NUM_REGS 512 /* number of registers per port */ 377 + 378 + struct fc_port_bank { 379 + __be64 fc_port_regs[CXLFLASH_NUM_FC_PORTS_PER_BANK][CXLFLASH_NUM_REGS]; 380 + __be64 fc_port_luns[CXLFLASH_NUM_FC_PORTS_PER_BANK][CXLFLASH_NUM_VLUNS]; 381 + }; 400 382 401 383 struct sisl_global_map { 402 384 union { ··· 415 379 416 380 char page1[SIZE_4K]; /* page 1 */ 417 381 418 - /* pages 2 & 3 */ 419 - __be64 fc_regs[CXLFLASH_NUM_FC_PORTS][CXLFLASH_NUM_VLUNS]; 382 + struct fc_port_bank bank[CXLFLASH_MAX_FC_BANKS]; /* pages 2 - 9 */ 420 383 421 - 
/* pages 4 & 5 (lun tbl) */ 422 - __be64 fc_port[CXLFLASH_NUM_FC_PORTS][CXLFLASH_NUM_VLUNS]; 384 + /* pages 10 - 15 are reserved */ 423 385 424 386 }; 425 387 ··· 436 402 * | 64 KB Global | 437 403 * | Trusted Process accessible | 438 404 * +-------------------------------+ 439 - */ 405 + */ 440 406 struct cxlflash_afu_map { 441 407 union { 442 408 struct sisl_host_map host; ··· 512 478 513 479 #define PORT0 0x01U 514 480 #define PORT1 0x02U 515 - #define BOTH_PORTS (PORT0 | PORT1) 481 + #define PORT2 0x04U 482 + #define PORT3 0x08U 483 + #define PORT_MASK(_n) ((1 << (_n)) - 1) 516 484 517 485 /* AFU Sync Mode byte */ 518 486 #define AFU_LW_SYNC 0x0U
+10 -6
drivers/scsi/cxlflash/superpipe.c
··· 78 78 * memory freed. This is accomplished by putting the contexts in error 79 79 * state which will notify the user and let them 'drive' the tear down. 80 80 * Meanwhile, this routine camps until all user contexts have been removed. 81 + * 82 + * Note that the main loop in this routine will always execute at least once 83 + * to flush the reset_waitq. 81 84 */ 82 85 void cxlflash_stop_term_user_contexts(struct cxlflash_cfg *cfg) 83 86 { 84 87 struct device *dev = &cfg->dev->dev; 85 - int i, found; 88 + int i, found = true; 86 89 87 90 cxlflash_mark_contexts_error(cfg); 88 91 89 92 while (true) { 90 - found = false; 91 - 92 93 for (i = 0; i < MAX_CONTEXT; i++) 93 94 if (cfg->ctx_tbl[i]) { 94 95 found = true; ··· 103 102 __func__); 104 103 wake_up_all(&cfg->reset_waitq); 105 104 ssleep(1); 105 + found = false; 106 106 } 107 107 } 108 108 ··· 254 252 struct afu *afu = cfg->afu; 255 253 struct sisl_ctrl_map __iomem *ctrl_map = ctxi->ctrl_map; 256 254 int rc = 0; 255 + struct hwq *hwq = get_hwq(afu, PRIMARY_HWQ); 257 256 u64 val; 258 257 259 258 /* Unlock cap and restrict user to read/write cmds in translated mode */ ··· 271 268 272 269 /* Set up MMIO registers pointing to the RHT */ 273 270 writeq_be((u64)ctxi->rht_start, &ctrl_map->rht_start); 274 - val = SISL_RHT_CNT_ID((u64)MAX_RHT_PER_CONTEXT, (u64)(afu->ctx_hndl)); 271 + val = SISL_RHT_CNT_ID((u64)MAX_RHT_PER_CONTEXT, (u64)(hwq->ctx_hndl)); 275 272 writeq_be(val, &ctrl_map->rht_cnt_id); 276 273 out: 277 274 dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); ··· 1627 1624 struct afu *afu = cfg->afu; 1628 1625 struct ctx_info *ctxi = NULL; 1629 1626 struct mutex *mutex = &cfg->ctx_recovery_mutex; 1627 + struct hwq *hwq = get_hwq(afu, PRIMARY_HWQ); 1630 1628 u64 flags; 1631 1629 u64 ctxid = DECODE_CTXID(recover->context_id), 1632 1630 rctxid = recover->context_id; ··· 1698 1694 } 1699 1695 1700 1696 /* Test if in error state */ 1701 - reg = readq_be(&afu->ctrl_map->mbox_r); 1697 + reg = 
readq_be(&hwq->ctrl_map->mbox_r); 1702 1698 if (reg == -1) { 1703 1699 dev_dbg(dev, "%s: MMIO fail, wait for recovery.\n", __func__); 1704 1700 ··· 1937 1933 u64 lun_size = 0; 1938 1934 u64 last_lba = 0; 1939 1935 u64 rsrc_handle = -1; 1940 - u32 port = CHAN2PORT(sdev->channel); 1936 + u32 port = CHAN2PORTMASK(sdev->channel); 1941 1937 1942 1938 int rc = 0; 1943 1939
+30 -26
drivers/scsi/cxlflash/superpipe.h
··· 24 24 */ 25 25 26 26 /* Chunk size parms: note sislite minimum chunk size is 27 - 0x10000 LBAs corresponding to a NMASK or 16. 28 - */ 27 + * 0x10000 LBAs corresponding to a NMASK or 16. 28 + */ 29 29 #define MC_CHUNK_SIZE (1 << MC_RHT_NMASK) /* in LBAs */ 30 30 31 31 #define CMD_TIMEOUT 30 /* 30 secs */ 32 32 #define CMD_RETRIES 5 /* 5 retries for scsi_execute */ 33 33 34 34 #define MAX_SECTOR_UNIT 512 /* max_sector is in 512 byte multiples */ 35 - 36 - #define CHAN2PORT(_x) ((_x) + 1) 37 - #define PORT2CHAN(_x) ((_x) - 1) 38 35 39 36 enum lun_mode { 40 37 MODE_NONE = 0, ··· 56 59 57 60 /* Local (per-adapter) lun_info structure */ 58 61 struct llun_info { 59 - u64 lun_id[CXLFLASH_NUM_FC_PORTS]; /* from REPORT_LUNS */ 62 + u64 lun_id[MAX_FC_PORTS]; /* from REPORT_LUNS */ 60 63 u32 lun_index; /* Index in the LUN table */ 61 64 u32 host_no; /* host_no from Scsi_host */ 62 65 u32 port_sel; /* What port to use for this LUN */ ··· 89 92 struct ctx_info { 90 93 struct sisl_ctrl_map __iomem *ctrl_map; /* initialized at startup */ 91 94 struct sisl_rht_entry *rht_start; /* 1 page (req'd for alignment), 92 - alloc/free on attach/detach */ 95 + * alloc/free on attach/detach 96 + */ 93 97 u32 rht_out; /* Number of checked out RHT entries */ 94 98 u32 rht_perms; /* User-defined permissions for RHT entries */ 95 99 struct llun_info **rht_lun; /* Mapping of RHT entries to LUNs */ ··· 118 120 struct page *err_page; /* One page of all 0xF for error notification */ 119 121 }; 120 122 121 - int cxlflash_vlun_resize(struct scsi_device *, struct dk_cxlflash_resize *); 122 - int _cxlflash_vlun_resize(struct scsi_device *, struct ctx_info *, 123 - struct dk_cxlflash_resize *); 123 + int cxlflash_vlun_resize(struct scsi_device *sdev, 124 + struct dk_cxlflash_resize *resize); 125 + int _cxlflash_vlun_resize(struct scsi_device *sdev, struct ctx_info *ctxi, 126 + struct dk_cxlflash_resize *resize); 124 127 125 - int cxlflash_disk_release(struct scsi_device *, struct dk_cxlflash_release 
*); 126 - int _cxlflash_disk_release(struct scsi_device *, struct ctx_info *, 127 - struct dk_cxlflash_release *); 128 + int cxlflash_disk_release(struct scsi_device *sdev, 129 + struct dk_cxlflash_release *release); 130 + int _cxlflash_disk_release(struct scsi_device *sdev, struct ctx_info *ctxi, 131 + struct dk_cxlflash_release *release); 128 132 129 - int cxlflash_disk_clone(struct scsi_device *, struct dk_cxlflash_clone *); 133 + int cxlflash_disk_clone(struct scsi_device *sdev, 134 + struct dk_cxlflash_clone *clone); 130 135 131 - int cxlflash_disk_virtual_open(struct scsi_device *, void *); 136 + int cxlflash_disk_virtual_open(struct scsi_device *sdev, void *arg); 132 137 133 - int cxlflash_lun_attach(struct glun_info *, enum lun_mode, bool); 134 - void cxlflash_lun_detach(struct glun_info *); 138 + int cxlflash_lun_attach(struct glun_info *gli, enum lun_mode mode, bool locked); 139 + void cxlflash_lun_detach(struct glun_info *gli); 135 140 136 - struct ctx_info *get_context(struct cxlflash_cfg *, u64, void *, enum ctx_ctrl); 137 - void put_context(struct ctx_info *); 141 + struct ctx_info *get_context(struct cxlflash_cfg *cfg, u64 rctxit, void *arg, 142 + enum ctx_ctrl ctrl); 143 + void put_context(struct ctx_info *ctxi); 138 144 139 - struct sisl_rht_entry *get_rhte(struct ctx_info *, res_hndl_t, 140 - struct llun_info *); 145 + struct sisl_rht_entry *get_rhte(struct ctx_info *ctxi, res_hndl_t rhndl, 146 + struct llun_info *lli); 141 147 142 - struct sisl_rht_entry *rhte_checkout(struct ctx_info *, struct llun_info *); 143 - void rhte_checkin(struct ctx_info *, struct sisl_rht_entry *); 148 + struct sisl_rht_entry *rhte_checkout(struct ctx_info *ctxi, 149 + struct llun_info *lli); 150 + void rhte_checkin(struct ctx_info *ctxi, struct sisl_rht_entry *rhte); 144 151 145 - void cxlflash_ba_terminate(struct ba_lun *); 152 + void cxlflash_ba_terminate(struct ba_lun *ba_lun); 146 153 147 - int cxlflash_manage_lun(struct scsi_device *, struct 
dk_cxlflash_manage_lun *); 154 + int cxlflash_manage_lun(struct scsi_device *sdev, 155 + struct dk_cxlflash_manage_lun *manage); 148 156 149 - int check_state(struct cxlflash_cfg *); 157 + int check_state(struct cxlflash_cfg *cfg); 150 158 151 159 #endif /* ifndef _CXLFLASH_SUPERPIPE_H */
+63 -36
drivers/scsi/cxlflash/vlun.c
··· 819 819 void cxlflash_restore_luntable(struct cxlflash_cfg *cfg) 820 820 { 821 821 struct llun_info *lli, *temp; 822 - u32 chan; 823 822 u32 lind; 824 - struct afu *afu = cfg->afu; 823 + int k; 825 824 struct device *dev = &cfg->dev->dev; 826 - struct sisl_global_map __iomem *agm = &afu->afu_map->global; 825 + __be64 __iomem *fc_port_luns; 827 826 828 827 mutex_lock(&global.mutex); 829 828 ··· 831 832 continue; 832 833 833 834 lind = lli->lun_index; 835 + dev_dbg(dev, "%s: Virtual LUNs on slot %d:\n", __func__, lind); 834 836 835 - if (lli->port_sel == BOTH_PORTS) { 836 - writeq_be(lli->lun_id[0], &agm->fc_port[0][lind]); 837 - writeq_be(lli->lun_id[1], &agm->fc_port[1][lind]); 838 - dev_dbg(dev, "%s: Virtual LUN on slot %d id0=%llx " 839 - "id1=%llx\n", __func__, lind, 840 - lli->lun_id[0], lli->lun_id[1]); 841 - } else { 842 - chan = PORT2CHAN(lli->port_sel); 843 - writeq_be(lli->lun_id[chan], &agm->fc_port[chan][lind]); 844 - dev_dbg(dev, "%s: Virtual LUN on slot %d chan=%d " 845 - "id=%llx\n", __func__, lind, chan, 846 - lli->lun_id[chan]); 847 - } 837 + for (k = 0; k < cfg->num_fc_ports; k++) 838 + if (lli->port_sel & (1 << k)) { 839 + fc_port_luns = get_fc_port_luns(cfg, k); 840 + writeq_be(lli->lun_id[k], &fc_port_luns[lind]); 841 + dev_dbg(dev, "\t%d=%llx\n", k, lli->lun_id[k]); 842 + } 848 843 } 849 844 850 845 mutex_unlock(&global.mutex); 846 + } 847 + 848 + /** 849 + * get_num_ports() - compute number of ports from port selection mask 850 + * @psm: Port selection mask. 851 + * 852 + * Return: Population count of port selection mask 853 + */ 854 + static inline u8 get_num_ports(u32 psm) 855 + { 856 + static const u8 bits[16] = { 0, 1, 1, 2, 1, 2, 2, 3, 857 + 1, 2, 2, 3, 2, 3, 3, 4 }; 858 + 859 + return bits[psm & 0xf]; 851 860 } 852 861 853 862 /** ··· 863 856 * @cfg: Internal structure associated with the host. 864 857 * @lli: Per adapter LUN information structure. 865 858 * 866 - * On successful return, a LUN table entry is created. 
867 - * At the top for LUNs visible on both ports. 868 - * At the bottom for LUNs visible only on one port. 859 + * On successful return, a LUN table entry is created: 860 + * - at the top for LUNs visible on multiple ports. 861 + * - at the bottom for LUNs visible only on one port. 869 862 * 870 863 * Return: 0 on success, -errno on failure 871 864 */ ··· 873 866 { 874 867 u32 chan; 875 868 u32 lind; 869 + u32 nports; 876 870 int rc = 0; 877 - struct afu *afu = cfg->afu; 871 + int k; 878 872 struct device *dev = &cfg->dev->dev; 879 - struct sisl_global_map __iomem *agm = &afu->afu_map->global; 873 + __be64 __iomem *fc_port_luns; 880 874 881 875 mutex_lock(&global.mutex); 882 876 883 877 if (lli->in_table) 884 878 goto out; 885 879 886 - if (lli->port_sel == BOTH_PORTS) { 880 + nports = get_num_ports(lli->port_sel); 881 + if (nports == 0 || nports > cfg->num_fc_ports) { 882 + WARN(1, "Unsupported port configuration nports=%u", nports); 883 + rc = -EIO; 884 + goto out; 885 + } 886 + 887 + if (nports > 1) { 887 888 /* 888 - * If this LUN is visible from both ports, we will put 889 + * When LUN is visible from multiple ports, we will put 889 890 * it in the top half of the LUN table. 
890 891 */ 891 - if ((cfg->promote_lun_index == cfg->last_lun_index[0]) || 892 - (cfg->promote_lun_index == cfg->last_lun_index[1])) { 893 - rc = -ENOSPC; 894 - goto out; 892 + for (k = 0; k < cfg->num_fc_ports; k++) { 893 + if (!(lli->port_sel & (1 << k))) 894 + continue; 895 + 896 + if (cfg->promote_lun_index == cfg->last_lun_index[k]) { 897 + rc = -ENOSPC; 898 + goto out; 899 + } 895 900 } 896 901 897 902 lind = lli->lun_index = cfg->promote_lun_index; 898 - writeq_be(lli->lun_id[0], &agm->fc_port[0][lind]); 899 - writeq_be(lli->lun_id[1], &agm->fc_port[1][lind]); 903 + dev_dbg(dev, "%s: Virtual LUNs on slot %d:\n", __func__, lind); 904 + 905 + for (k = 0; k < cfg->num_fc_ports; k++) { 906 + if (!(lli->port_sel & (1 << k))) 907 + continue; 908 + 909 + fc_port_luns = get_fc_port_luns(cfg, k); 910 + writeq_be(lli->lun_id[k], &fc_port_luns[lind]); 911 + dev_dbg(dev, "\t%d=%llx\n", k, lli->lun_id[k]); 912 + } 913 + 900 914 cfg->promote_lun_index++; 901 - dev_dbg(dev, "%s: Virtual LUN on slot %d id0=%llx id1=%llx\n", 902 - __func__, lind, lli->lun_id[0], lli->lun_id[1]); 903 915 } else { 904 916 /* 905 - * If this LUN is visible only from one port, we will put 917 + * When LUN is visible only from one port, we will put 906 918 * it in the bottom half of the LUN table. 
907 919 */ 908 - chan = PORT2CHAN(lli->port_sel); 920 + chan = PORTMASK2CHAN(lli->port_sel); 909 921 if (cfg->promote_lun_index == cfg->last_lun_index[chan]) { 910 922 rc = -ENOSPC; 911 923 goto out; 912 924 } 913 925 914 926 lind = lli->lun_index = cfg->last_lun_index[chan]; 915 - writeq_be(lli->lun_id[chan], &agm->fc_port[chan][lind]); 927 + fc_port_luns = get_fc_port_luns(cfg, chan); 928 + writeq_be(lli->lun_id[chan], &fc_port_luns[lind]); 916 929 cfg->last_lun_index[chan]--; 917 - dev_dbg(dev, "%s: Virtual LUN on slot %d chan=%d id=%llx\n", 930 + dev_dbg(dev, "%s: Virtual LUNs on slot %d:\n\t%d=%llx\n", 918 931 __func__, lind, chan, lli->lun_id[chan]); 919 932 } 920 933 ··· 1043 1016 virt->last_lba = last_lba; 1044 1017 virt->rsrc_handle = rsrc_handle; 1045 1018 1046 - if (lli->port_sel == BOTH_PORTS) 1019 + if (get_num_ports(lli->port_sel) > 1) 1047 1020 virt->hdr.return_flags |= DK_CXLFLASH_ALL_PORTS_ACTIVE; 1048 1021 out: 1049 1022 if (likely(ctxi))
+1 -1
drivers/scsi/cxlflash/vlun.h
··· 47 47 * not stored anywhere. 48 48 * 49 49 * The LXT table is re-allocated whenever it needs to cross into another group. 50 - */ 50 + */ 51 51 #define LXT_GROUP_SIZE 8 52 52 #define LXT_NUM_GROUPS(lxt_cnt) (((lxt_cnt) + 7)/8) /* alloc'ed groups */ 53 53 #define LXT_LUNIDX_SHIFT 8 /* LXT entry, shift for LUN index */
-5
drivers/scsi/esas2r/esas2r_log.c
··· 130 130 131 131 spin_lock_irqsave(&event_buffer_lock, flags); 132 132 133 - if (buffer == NULL) { 134 - spin_unlock_irqrestore(&event_buffer_lock, flags); 135 - return -1; 136 - } 137 - 138 133 memset(buffer, 0, buflen); 139 134 140 135 /*
+2 -2
drivers/scsi/fcoe/fcoe.c
··· 63 63 module_param_named(debug_logging, fcoe_debug_logging, int, S_IRUGO|S_IWUSR); 64 64 MODULE_PARM_DESC(debug_logging, "a bit mask of logging levels"); 65 65 66 - unsigned int fcoe_e_d_tov = 2 * 1000; 66 + static unsigned int fcoe_e_d_tov = 2 * 1000; 67 67 module_param_named(e_d_tov, fcoe_e_d_tov, int, S_IRUGO|S_IWUSR); 68 68 MODULE_PARM_DESC(e_d_tov, "E_D_TOV in ms, default 2000"); 69 69 70 - unsigned int fcoe_r_a_tov = 2 * 2 * 1000; 70 + static unsigned int fcoe_r_a_tov = 2 * 2 * 1000; 71 71 module_param_named(r_a_tov, fcoe_r_a_tov, int, S_IRUGO|S_IWUSR); 72 72 MODULE_PARM_DESC(r_a_tov, "R_A_TOV in ms, default 4000"); 73 73
+1 -2
drivers/scsi/fnic/fnic.h
··· 39 39 40 40 #define DRV_NAME "fnic" 41 41 #define DRV_DESCRIPTION "Cisco FCoE HBA Driver" 42 - #define DRV_VERSION "1.6.0.21" 42 + #define DRV_VERSION "1.6.0.34" 43 43 #define PFX DRV_NAME ": " 44 44 #define DFX DRV_NAME "%d: " 45 45 ··· 217 217 struct fcoe_ctlr ctlr; /* FIP FCoE controller structure */ 218 218 struct vnic_dev_bar bar0; 219 219 220 - struct msix_entry msix_entry[FNIC_MSIX_INTR_MAX]; 221 220 struct fnic_msix_entry msix[FNIC_MSIX_INTR_MAX]; 222 221 223 222 struct vnic_stats *stats;
+14 -9
drivers/scsi/fnic/fnic_fcs.c
··· 342 342 343 343 fnic_fcoe_reset_vlans(fnic); 344 344 fnic->set_vlan(fnic, 0); 345 - FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, 346 - "Sending VLAN request...\n"); 345 + 346 + if (printk_ratelimit()) 347 + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, 348 + "Sending VLAN request...\n"); 349 + 347 350 skb = dev_alloc_skb(sizeof(struct fip_vlan)); 348 351 if (!skb) 349 352 return; ··· 362 359 363 360 vlan->fip.fip_ver = FIP_VER_ENCAPS(FIP_VER); 364 361 vlan->fip.fip_op = htons(FIP_OP_VLAN); 365 - vlan->fip.fip_subcode = FIP_SC_VL_NOTE; 362 + vlan->fip.fip_subcode = FIP_SC_VL_REQ; 366 363 vlan->fip.fip_dl_len = htons(sizeof(vlan->desc) / FIP_BPW); 367 364 368 365 vlan->desc.mac.fd_desc.fip_dtype = FIP_DT_MAC; ··· 1316 1313 1317 1314 spin_lock_irqsave(&fnic->vlans_lock, flags); 1318 1315 if (list_empty(&fnic->vlans)) { 1319 - /* no vlans available, try again */ 1320 - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, 1321 - "Start VLAN Discovery\n"); 1322 1316 spin_unlock_irqrestore(&fnic->vlans_lock, flags); 1317 + /* no vlans available, try again */ 1318 + if (printk_ratelimit()) 1319 + FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, 1320 + "Start VLAN Discovery\n"); 1323 1321 fnic_event_enq(fnic, FNIC_EVT_START_VLAN_DISC); 1324 1322 return; 1325 1323 } ··· 1336 1332 spin_unlock_irqrestore(&fnic->vlans_lock, flags); 1337 1333 break; 1338 1334 case FIP_VLAN_FAILED: 1339 - /* if all vlans are in failed state, restart vlan disc */ 1340 - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, 1341 - "Start VLAN Discovery\n"); 1342 1335 spin_unlock_irqrestore(&fnic->vlans_lock, flags); 1336 + /* if all vlans are in failed state, restart vlan disc */ 1337 + if (printk_ratelimit()) 1338 + FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, 1339 + "Start VLAN Discovery\n"); 1343 1340 fnic_event_enq(fnic, FNIC_EVT_START_VLAN_DISC); 1344 1341 break; 1345 1342 case FIP_VLAN_SENT:
+13 -28
drivers/scsi/fnic/fnic_isr.c
··· 154 154 switch (vnic_dev_get_intr_mode(fnic->vdev)) { 155 155 case VNIC_DEV_INTR_MODE_INTX: 156 156 case VNIC_DEV_INTR_MODE_MSI: 157 - free_irq(fnic->pdev->irq, fnic); 157 + free_irq(pci_irq_vector(fnic->pdev, 0), fnic); 158 158 break; 159 159 160 160 case VNIC_DEV_INTR_MODE_MSIX: 161 161 for (i = 0; i < ARRAY_SIZE(fnic->msix); i++) 162 162 if (fnic->msix[i].requested) 163 - free_irq(fnic->msix_entry[i].vector, 163 + free_irq(pci_irq_vector(fnic->pdev, i), 164 164 fnic->msix[i].devid); 165 165 break; 166 166 ··· 177 177 switch (vnic_dev_get_intr_mode(fnic->vdev)) { 178 178 179 179 case VNIC_DEV_INTR_MODE_INTX: 180 - err = request_irq(fnic->pdev->irq, &fnic_isr_legacy, 181 - IRQF_SHARED, DRV_NAME, fnic); 180 + err = request_irq(pci_irq_vector(fnic->pdev, 0), 181 + &fnic_isr_legacy, IRQF_SHARED, DRV_NAME, fnic); 182 182 break; 183 183 184 184 case VNIC_DEV_INTR_MODE_MSI: 185 - err = request_irq(fnic->pdev->irq, &fnic_isr_msi, 185 + err = request_irq(pci_irq_vector(fnic->pdev, 0), &fnic_isr_msi, 186 186 0, fnic->name, fnic); 187 187 break; 188 188 ··· 210 210 fnic->msix[FNIC_MSIX_ERR_NOTIFY].devid = fnic; 211 211 212 212 for (i = 0; i < ARRAY_SIZE(fnic->msix); i++) { 213 - err = request_irq(fnic->msix_entry[i].vector, 213 + err = request_irq(pci_irq_vector(fnic->pdev, i), 214 214 fnic->msix[i].isr, 0, 215 215 fnic->msix[i].devname, 216 216 fnic->msix[i].devid); ··· 237 237 unsigned int n = ARRAY_SIZE(fnic->rq); 238 238 unsigned int m = ARRAY_SIZE(fnic->wq); 239 239 unsigned int o = ARRAY_SIZE(fnic->wq_copy); 240 - unsigned int i; 241 240 242 241 /* 243 242 * Set interrupt mode (INTx, MSI, MSI-X) depending ··· 247 248 * We need n RQs, m WQs, o Copy WQs, n+m+o CQs, and n+m+o+1 INTRs 248 249 * (last INTR is used for WQ/RQ errors and notification area) 249 250 */ 250 - 251 - BUG_ON(ARRAY_SIZE(fnic->msix_entry) < n + m + o + 1); 252 - for (i = 0; i < n + m + o + 1; i++) 253 - fnic->msix_entry[i].entry = i; 254 - 255 251 if (fnic->rq_count >= n && 256 252 
fnic->raw_wq_count >= m && 257 253 fnic->wq_copy_count >= o && 258 254 fnic->cq_count >= n + m + o) { 259 - if (!pci_enable_msix_exact(fnic->pdev, fnic->msix_entry, 260 - n + m + o + 1)) { 255 + int vecs = n + m + o + 1; 256 + 257 + if (pci_alloc_irq_vectors(fnic->pdev, vecs, vecs, 258 + PCI_IRQ_MSIX) == vecs) { 261 259 fnic->rq_count = n; 262 260 fnic->raw_wq_count = m; 263 261 fnic->wq_copy_count = o; 264 262 fnic->wq_count = m + o; 265 263 fnic->cq_count = n + m + o; 266 - fnic->intr_count = n + m + o + 1; 264 + fnic->intr_count = vecs; 267 265 fnic->err_intr_offset = FNIC_MSIX_ERR_NOTIFY; 268 266 269 267 FNIC_ISR_DBG(KERN_DEBUG, fnic->lport->host, ··· 280 284 fnic->wq_copy_count >= 1 && 281 285 fnic->cq_count >= 3 && 282 286 fnic->intr_count >= 1 && 283 - !pci_enable_msi(fnic->pdev)) { 284 - 287 + pci_alloc_irq_vectors(fnic->pdev, 1, 1, PCI_IRQ_MSI) == 1) { 285 288 fnic->rq_count = 1; 286 289 fnic->raw_wq_count = 1; 287 290 fnic->wq_copy_count = 1; ··· 329 334 330 335 void fnic_clear_intr_mode(struct fnic *fnic) 331 336 { 332 - switch (vnic_dev_get_intr_mode(fnic->vdev)) { 333 - case VNIC_DEV_INTR_MODE_MSIX: 334 - pci_disable_msix(fnic->pdev); 335 - break; 336 - case VNIC_DEV_INTR_MODE_MSI: 337 - pci_disable_msi(fnic->pdev); 338 - break; 339 - default: 340 - break; 341 - } 342 - 337 + pci_free_irq_vectors(fnic->pdev); 343 338 vnic_dev_set_intr_mode(fnic->vdev, VNIC_DEV_INTR_MODE_INTX); 344 339 }
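The fnic_isr.c changes above replace pci_enable_msix_exact()/pci_enable_msi() with the unified pci_alloc_irq_vectors() API: request the full MSI-X vector count first, then fall back to a single MSI vector, and finally to legacy INTx. A hedged userspace sketch of that fallback order (pick_intr_mode and its inputs are stand-ins, not PCI API):

```c
/* Interrupt-mode fallback, mirroring the order fnic_set_intr_mode()
 * tries: MSI-X with the full vector count, then single-vector MSI,
 * then the legacy pin-based interrupt. */
enum irq_mode { MODE_INTX, MODE_MSI, MODE_MSIX };

static enum irq_mode pick_intr_mode(int msix_vecs_avail, int want_vecs,
                                    int msi_avail)
{
    if (msix_vecs_avail >= want_vecs)  /* all-or-nothing MSI-X request */
        return MODE_MSIX;
    if (msi_avail)                     /* single MSI vector */
        return MODE_MSI;
    return MODE_INTX;                  /* legacy pin-based interrupt */
}
```

The real API folds allocation and fallback into one call; passing min == max to pci_alloc_irq_vectors() gives the same all-or-nothing behaviour pci_enable_msix_exact() had, which is why the driver checks for the exact vector count.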
+75 -30
drivers/scsi/fnic/fnic_scsi.c
··· 823 823 spinlock_t *io_lock; 824 824 u64 cmd_trace; 825 825 unsigned long start_time; 826 + unsigned long io_duration_time; 826 827 827 828 /* Decode the cmpl description to get the io_req id */ 828 829 fcpio_header_dec(&desc->hdr, &type, &hdr_status, &tag); ··· 877 876 878 877 /* 879 878 * if SCSI-ML has already issued abort on this command, 880 - * ignore completion of the IO. The abts path will clean it up 879 + * set completion of the IO. The abts path will clean it up 881 880 */ 882 881 if (CMD_STATE(sc) == FNIC_IOREQ_ABTS_PENDING) { 883 - spin_unlock_irqrestore(io_lock, flags); 882 + 883 + /* 884 + * set the FNIC_IO_DONE so that this doesn't get 885 + * flagged as 'out of order' if it was not aborted 886 + */ 887 + CMD_FLAGS(sc) |= FNIC_IO_DONE; 884 888 CMD_FLAGS(sc) |= FNIC_IO_ABTS_PENDING; 885 - switch (hdr_status) { 886 - case FCPIO_SUCCESS: 887 - CMD_FLAGS(sc) |= FNIC_IO_DONE; 888 - FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, 889 - "icmnd_cmpl ABTS pending hdr status = %s " 890 - "sc 0x%p scsi_status %x residual %d\n", 891 - fnic_fcpio_status_to_str(hdr_status), sc, 892 - icmnd_cmpl->scsi_status, 893 - icmnd_cmpl->residual); 894 - break; 895 - case FCPIO_ABORTED: 889 + spin_unlock_irqrestore(io_lock, flags); 890 + if(FCPIO_ABORTED == hdr_status) 896 891 CMD_FLAGS(sc) |= FNIC_IO_ABORTED; 897 - break; 898 - default: 899 - FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, 900 - "icmnd_cmpl abts pending " 901 - "hdr status = %s tag = 0x%x sc = 0x%p\n", 902 - fnic_fcpio_status_to_str(hdr_status), 903 - id, sc); 904 - break; 905 - } 892 + 893 + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, 894 + "icmnd_cmpl abts pending " 895 + "hdr status = %s tag = 0x%x sc = 0x%p" 896 + "scsi_status = %x residual = %d\n", 897 + fnic_fcpio_status_to_str(hdr_status), 898 + id, sc, 899 + icmnd_cmpl->scsi_status, 900 + icmnd_cmpl->residual); 906 901 return; 907 902 } 908 903 ··· 915 918 916 919 if (icmnd_cmpl->flags & FCPIO_ICMND_CMPL_RESID_UNDER) 917 920 xfer_len -= 
icmnd_cmpl->residual; 921 + 922 + if (icmnd_cmpl->scsi_status == SAM_STAT_CHECK_CONDITION) 923 + atomic64_inc(&fnic_stats->misc_stats.check_condition); 918 924 919 925 if (icmnd_cmpl->scsi_status == SAM_STAT_TASK_SET_FULL) 920 926 atomic64_inc(&fnic_stats->misc_stats.queue_fulls); ··· 1016 1016 atomic64_dec(&fnic->io_cmpl_skip); 1017 1017 else 1018 1018 atomic64_inc(&fnic_stats->io_stats.io_completions); 1019 + 1020 + 1021 + io_duration_time = jiffies_to_msecs(jiffies) - jiffies_to_msecs(io_req->start_time); 1022 + 1023 + if(io_duration_time <= 10) 1024 + atomic64_inc(&fnic_stats->io_stats.io_btw_0_to_10_msec); 1025 + else if(io_duration_time <= 100) 1026 + atomic64_inc(&fnic_stats->io_stats.io_btw_10_to_100_msec); 1027 + else if(io_duration_time <= 500) 1028 + atomic64_inc(&fnic_stats->io_stats.io_btw_100_to_500_msec); 1029 + else if(io_duration_time <= 5000) 1030 + atomic64_inc(&fnic_stats->io_stats.io_btw_500_to_5000_msec); 1031 + else if(io_duration_time <= 10000) 1032 + atomic64_inc(&fnic_stats->io_stats.io_btw_5000_to_10000_msec); 1033 + else if(io_duration_time <= 30000) 1034 + atomic64_inc(&fnic_stats->io_stats.io_btw_10000_to_30000_msec); 1035 + else { 1036 + atomic64_inc(&fnic_stats->io_stats.io_greater_than_30000_msec); 1037 + 1038 + if(io_duration_time > atomic64_read(&fnic_stats->io_stats.current_max_io_time)) 1039 + atomic64_set(&fnic_stats->io_stats.current_max_io_time, io_duration_time); 1040 + } 1019 1041 1020 1042 /* Call SCSI completion function to complete the IO */ 1021 1043 if (sc->scsi_done) ··· 1150 1128 } 1151 1129 1152 1130 CMD_FLAGS(sc) |= FNIC_IO_ABT_TERM_DONE; 1131 + CMD_ABTS_STATUS(sc) = hdr_status; 1153 1132 1154 1133 /* If the status is IO not found consider it as success */ 1155 1134 if (hdr_status == FCPIO_IO_NOT_FOUND) 1156 1135 CMD_ABTS_STATUS(sc) = FCPIO_SUCCESS; 1157 - else 1158 - CMD_ABTS_STATUS(sc) = hdr_status; 1159 - 1160 - atomic64_dec(&fnic_stats->io_stats.active_ios); 1161 - if (atomic64_read(&fnic->io_cmpl_skip)) 1162 - 
atomic64_dec(&fnic->io_cmpl_skip); 1163 - else 1164 - atomic64_inc(&fnic_stats->io_stats.io_completions); 1165 1136 1166 1137 if (!(CMD_FLAGS(sc) & (FNIC_IO_ABORTED | FNIC_IO_DONE))) 1167 1138 atomic64_inc(&misc_stats->no_icmnd_itmf_cmpls); ··· 1196 1181 (((u64)CMD_FLAGS(sc) << 32) | 1197 1182 CMD_STATE(sc))); 1198 1183 sc->scsi_done(sc); 1184 + atomic64_dec(&fnic_stats->io_stats.active_ios); 1185 + if (atomic64_read(&fnic->io_cmpl_skip)) 1186 + atomic64_dec(&fnic->io_cmpl_skip); 1187 + else 1188 + atomic64_inc(&fnic_stats->io_stats.io_completions); 1199 1189 } 1200 1190 } 1201 1191 ··· 1813 1793 struct terminate_stats *term_stats; 1814 1794 enum fnic_ioreq_state old_ioreq_state; 1815 1795 int tag; 1796 + unsigned long abt_issued_time; 1816 1797 DECLARE_COMPLETION_ONSTACK(tm_done); 1817 1798 1818 1799 /* Wait for rport to unblock */ ··· 1867 1846 spin_unlock_irqrestore(io_lock, flags); 1868 1847 goto wait_pending; 1869 1848 } 1849 + 1850 + abt_issued_time = jiffies_to_msecs(jiffies) - jiffies_to_msecs(io_req->start_time); 1851 + if (abt_issued_time <= 6000) 1852 + atomic64_inc(&abts_stats->abort_issued_btw_0_to_6_sec); 1853 + else if (abt_issued_time > 6000 && abt_issued_time <= 20000) 1854 + atomic64_inc(&abts_stats->abort_issued_btw_6_to_20_sec); 1855 + else if (abt_issued_time > 20000 && abt_issued_time <= 30000) 1856 + atomic64_inc(&abts_stats->abort_issued_btw_20_to_30_sec); 1857 + else if (abt_issued_time > 30000 && abt_issued_time <= 40000) 1858 + atomic64_inc(&abts_stats->abort_issued_btw_30_to_40_sec); 1859 + else if (abt_issued_time > 40000 && abt_issued_time <= 50000) 1860 + atomic64_inc(&abts_stats->abort_issued_btw_40_to_50_sec); 1861 + else if (abt_issued_time > 50000 && abt_issued_time <= 60000) 1862 + atomic64_inc(&abts_stats->abort_issued_btw_50_to_60_sec); 1863 + else 1864 + atomic64_inc(&abts_stats->abort_issued_greater_than_60_sec); 1865 + 1866 + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, 1867 + "CBD Opcode: %02x Abort issued time: %lu msec\n", 
sc->cmnd[0], abt_issued_time); 1870 1868 /* 1871 1869 * Command is still pending, need to abort it 1872 1870 * If the firmware completes the command after this point, ··· 2010 1970 /* Call SCSI completion function to complete the IO */ 2011 1971 sc->result = (DID_ABORT << 16); 2012 1972 sc->scsi_done(sc); 1973 + atomic64_dec(&fnic_stats->io_stats.active_ios); 1974 + if (atomic64_read(&fnic->io_cmpl_skip)) 1975 + atomic64_dec(&fnic->io_cmpl_skip); 1976 + else 1977 + atomic64_inc(&fnic_stats->io_stats.io_completions); 2013 1978 } 2014 1979 2015 1980 fnic_abort_cmd_end:
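The fnic_scsi.c hunks above bucket each completed IO (and each issued abort) into a latency histogram kept in atomic64 counters. The bucketing itself is a simple threshold chain; a self-contained sketch with the same millisecond boundaries as the IO-completion path (the function name io_time_bucket is illustrative, not from the driver):

```c
#include <stdint.h>

/* Map an IO duration in milliseconds to one of the seven histogram
 * buckets the patch adds to struct io_path_stats. */
static int io_time_bucket(uint64_t msec)
{
    if (msec <= 10)    return 0;  /* <= 10 ms */
    if (msec <= 100)   return 1;  /* 10 ms - 100 ms */
    if (msec <= 500)   return 2;  /* 100 ms - 500 ms */
    if (msec <= 5000)  return 3;  /* 500 ms - 5 s */
    if (msec <= 10000) return 4;  /* 5 s - 10 s */
    if (msec <= 30000) return 5;  /* 10 s - 30 s */
    return 6;                     /* > 30 s; driver also tracks the max */
}
```

In the driver each bucket is an atomic64_t incremented in the completion handler, so the histogram can be read lock-free from the debugfs stats file.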
+16
drivers/scsi/fnic/fnic_stats.h
··· 26 26 atomic64_t sc_null; 27 27 atomic64_t io_not_found; 28 28 atomic64_t num_ios; 29 + atomic64_t io_btw_0_to_10_msec; 30 + atomic64_t io_btw_10_to_100_msec; 31 + atomic64_t io_btw_100_to_500_msec; 32 + atomic64_t io_btw_500_to_5000_msec; 33 + atomic64_t io_btw_5000_to_10000_msec; 34 + atomic64_t io_btw_10000_to_30000_msec; 35 + atomic64_t io_greater_than_30000_msec; 36 + atomic64_t current_max_io_time; 29 37 }; 30 38 31 39 struct abort_stats { ··· 42 34 atomic64_t abort_drv_timeouts; 43 35 atomic64_t abort_fw_timeouts; 44 36 atomic64_t abort_io_not_found; 37 + atomic64_t abort_issued_btw_0_to_6_sec; 38 + atomic64_t abort_issued_btw_6_to_20_sec; 39 + atomic64_t abort_issued_btw_20_to_30_sec; 40 + atomic64_t abort_issued_btw_30_to_40_sec; 41 + atomic64_t abort_issued_btw_40_to_50_sec; 42 + atomic64_t abort_issued_btw_50_to_60_sec; 43 + atomic64_t abort_issued_greater_than_60_sec; 45 44 }; 46 45 47 46 struct terminate_stats { ··· 103 88 atomic64_t devrst_cpwq_alloc_failures; 104 89 atomic64_t io_cpwq_alloc_failures; 105 90 atomic64_t no_icmnd_itmf_cmpls; 91 + atomic64_t check_condition; 106 92 atomic64_t queue_fulls; 107 93 atomic64_t rport_not_ready; 108 94 atomic64_t frame_errors;
+45 -4
drivers/scsi/fnic/fnic_trace.c
··· 229 229 "Number of IO Failures: %lld\nNumber of IO NOT Found: %lld\n" 230 230 "Number of Memory alloc Failures: %lld\n" 231 231 "Number of IOREQ Null: %lld\n" 232 - "Number of SCSI cmd pointer Null: %lld\n", 232 + "Number of SCSI cmd pointer Null: %lld\n" 233 + 234 + "\nIO completion times: \n" 235 + " < 10 ms : %lld\n" 236 + " 10 ms - 100 ms : %lld\n" 237 + " 100 ms - 500 ms : %lld\n" 238 + " 500 ms - 5 sec: %lld\n" 239 + " 5 sec - 10 sec: %lld\n" 240 + " 10 sec - 30 sec: %lld\n" 241 + " > 30 sec: %lld\n", 233 242 (u64)atomic64_read(&stats->io_stats.active_ios), 234 243 (u64)atomic64_read(&stats->io_stats.max_active_ios), 235 244 (u64)atomic64_read(&stats->io_stats.num_ios), ··· 247 238 (u64)atomic64_read(&stats->io_stats.io_not_found), 248 239 (u64)atomic64_read(&stats->io_stats.alloc_failures), 249 240 (u64)atomic64_read(&stats->io_stats.ioreq_null), 250 - (u64)atomic64_read(&stats->io_stats.sc_null)); 241 + (u64)atomic64_read(&stats->io_stats.sc_null), 242 + (u64)atomic64_read(&stats->io_stats.io_btw_0_to_10_msec), 243 + (u64)atomic64_read(&stats->io_stats.io_btw_10_to_100_msec), 244 + (u64)atomic64_read(&stats->io_stats.io_btw_100_to_500_msec), 245 + (u64)atomic64_read(&stats->io_stats.io_btw_500_to_5000_msec), 246 + (u64)atomic64_read(&stats->io_stats.io_btw_5000_to_10000_msec), 247 + (u64)atomic64_read(&stats->io_stats.io_btw_10000_to_30000_msec), 248 + (u64)atomic64_read(&stats->io_stats.io_greater_than_30000_msec)); 249 + 250 + len += snprintf(debug->debug_buffer + len, buf_size - len, 251 + "\nCurrent Max IO time : %lld\n", 252 + (u64)atomic64_read(&stats->io_stats.current_max_io_time)); 251 253 252 254 len += snprintf(debug->debug_buffer + len, buf_size - len, 253 255 "\n------------------------------------------\n" 254 256 "\t\tAbort Statistics\n" 255 257 "------------------------------------------\n"); 258 + 256 259 len += snprintf(debug->debug_buffer + len, buf_size - len, 257 260 "Number of Aborts: %lld\n" 258 261 "Number of Abort Failures: 
%lld\n" 259 262 "Number of Abort Driver Timeouts: %lld\n" 260 263 "Number of Abort FW Timeouts: %lld\n" 261 - "Number of Abort IO NOT Found: %lld\n", 264 + "Number of Abort IO NOT Found: %lld\n" 265 + 266 + "Abort issued times: \n" 267 + " < 6 sec : %lld\n" 268 + " 6 sec - 20 sec : %lld\n" 269 + " 20 sec - 30 sec : %lld\n" 270 + " 30 sec - 40 sec : %lld\n" 271 + " 40 sec - 50 sec : %lld\n" 272 + " 50 sec - 60 sec : %lld\n" 273 + " > 60 sec: %lld\n", 274 + 262 275 (u64)atomic64_read(&stats->abts_stats.aborts), 263 276 (u64)atomic64_read(&stats->abts_stats.abort_failures), 264 277 (u64)atomic64_read(&stats->abts_stats.abort_drv_timeouts), 265 278 (u64)atomic64_read(&stats->abts_stats.abort_fw_timeouts), 266 - (u64)atomic64_read(&stats->abts_stats.abort_io_not_found)); 279 + (u64)atomic64_read(&stats->abts_stats.abort_io_not_found), 280 + (u64)atomic64_read(&stats->abts_stats.abort_issued_btw_0_to_6_sec), 281 + (u64)atomic64_read(&stats->abts_stats.abort_issued_btw_6_to_20_sec), 282 + (u64)atomic64_read(&stats->abts_stats.abort_issued_btw_20_to_30_sec), 283 + (u64)atomic64_read(&stats->abts_stats.abort_issued_btw_30_to_40_sec), 284 + (u64)atomic64_read(&stats->abts_stats.abort_issued_btw_40_to_50_sec), 285 + (u64)atomic64_read(&stats->abts_stats.abort_issued_btw_50_to_60_sec), 286 + (u64)atomic64_read(&stats->abts_stats.abort_issued_greater_than_60_sec)); 267 287 268 288 len += snprintf(debug->debug_buffer + len, buf_size - len, 269 289 "\n------------------------------------------\n" 270 290 "\t\tTerminate Statistics\n" 271 291 "------------------------------------------\n"); 292 + 272 293 len += snprintf(debug->debug_buffer + len, buf_size - len, 273 294 "Number of Terminates: %lld\n" 274 295 "Maximum Terminates: %lld\n" ··· 396 357 "Number of Copy WQ Alloc Failures for Device Reset: %lld\n" 397 358 "Number of Copy WQ Alloc Failures for IOs: %lld\n" 398 359 "Number of no icmnd itmf Completions: %lld\n" 360 + "Number of Check Conditions encountered: %lld\n" 399 361
"Number of QUEUE Fulls: %lld\n" 400 362 "Number of rport not ready: %lld\n" 401 363 "Number of receive frame errors: %lld\n", ··· 417 377 &stats->misc_stats.devrst_cpwq_alloc_failures), 418 378 (u64)atomic64_read(&stats->misc_stats.io_cpwq_alloc_failures), 419 379 (u64)atomic64_read(&stats->misc_stats.no_icmnd_itmf_cmpls), 380 + (u64)atomic64_read(&stats->misc_stats.check_condition), 420 381 (u64)atomic64_read(&stats->misc_stats.queue_fulls), 421 382 (u64)atomic64_read(&stats->misc_stats.rport_not_ready), 422 383 (u64)atomic64_read(&stats->misc_stats.frame_errors));
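The fnic_trace.c hunk above extends the debugfs stats text using the common incremental-snprintf pattern: each call writes at offset `len` with `buf_size - len` bytes of remaining space, so sections append without overrunning the buffer. A small userspace sketch of the idiom (append_stat is an illustrative helper, not driver code):

```c
#include <stdio.h>
#include <string.h>

/* Append "name: value\n" at offset `len` of `buf`, returning the new
 * offset. The guard avoids the size_t underflow that would occur if a
 * previous write had already filled the buffer. */
static size_t append_stat(char *buf, size_t len, size_t buf_size,
                          const char *name, unsigned long long val)
{
    if (len >= buf_size)
        return len;  /* buffer full: silently drop further stats */
    len += snprintf(buf + len, buf_size - len, "%s: %llu\n", name, val);
    return len;
}
```

Note that snprintf() returns the length the output *would* have had, so once the buffer fills, `len` can exceed `buf_size`; the driver relies on the same `buf_size - len` arithmetic, which is why the guard (or a clamp) matters in this pattern.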
+1
drivers/scsi/hisi_sas/Kconfig
··· 4 4 depends on ARM64 || COMPILE_TEST 5 5 select SCSI_SAS_LIBSAS 6 6 select BLK_DEV_INTEGRITY 7 + depends on ATA 7 8 help 8 9 This driver supports HiSilicon's SAS HBA
+16 -3
drivers/scsi/hisi_sas/hisi_sas.h
··· 31 31 #define HISI_SAS_QUEUE_SLOTS 512 32 32 #define HISI_SAS_MAX_ITCT_ENTRIES 2048 33 33 #define HISI_SAS_MAX_DEVICES HISI_SAS_MAX_ITCT_ENTRIES 34 + #define HISI_SAS_RESET_BIT 0 34 35 35 36 #define HISI_SAS_STATUS_BUF_SZ \ 36 37 (sizeof(struct hisi_sas_err_record) + 1024) ··· 91 90 struct asd_sas_port sas_port; 92 91 u8 port_attached; 93 92 u8 id; /* from hw */ 94 - struct list_head list; 95 93 }; 96 94 97 95 struct hisi_sas_cq { ··· 113 113 u64 attached_phy; 114 114 u64 device_id; 115 115 atomic64_t running_req; 116 + struct list_head list; 116 117 u8 dev_status; 118 + int sata_idx; 117 119 }; 118 120 119 121 struct hisi_sas_slot { ··· 138 136 struct hisi_sas_sge_page *sge_page; 139 137 dma_addr_t sge_page_dma; 140 138 struct work_struct abort_slot; 139 + struct timer_list internal_abort_timer; 141 140 }; 142 141 143 142 struct hisi_sas_tmf_task { ··· 168 165 struct hisi_sas_slot *slot, 169 166 int device_id, int abort_flag, int tag_to_abort); 170 167 int (*slot_complete)(struct hisi_hba *hisi_hba, 171 - struct hisi_sas_slot *slot, int abort); 168 + struct hisi_sas_slot *slot); 169 + void (*phys_init)(struct hisi_hba *hisi_hba); 172 170 void (*phy_enable)(struct hisi_hba *hisi_hba, int phy_no); 173 171 void (*phy_disable)(struct hisi_hba *hisi_hba, int phy_no); 174 172 void (*phy_hard_reset)(struct hisi_hba *hisi_hba, int phy_no); ··· 179 175 void (*free_device)(struct hisi_hba *hisi_hba, 180 176 struct hisi_sas_device *dev); 181 177 int (*get_wideport_bitmap)(struct hisi_hba *hisi_hba, int port_id); 178 + int (*soft_reset)(struct hisi_hba *hisi_hba); 182 179 int max_command_entries; 183 180 int complete_hdr_size; 184 181 }; ··· 198 193 u8 sas_addr[SAS_ADDR_SIZE]; 199 194 200 195 int n_phy; 201 - int scan_finished; 202 196 spinlock_t lock; 203 197 204 198 struct timer_list timer; ··· 205 201 206 202 int slot_index_count; 207 203 unsigned long *slot_index_tags; 204 + unsigned long reject_stp_links_msk; 208 205 209 206 /* SCSI/SAS glue */ 210 207 struct 
sas_ha_struct sha; ··· 238 233 struct hisi_sas_breakpoint *sata_breakpoint; 239 234 dma_addr_t sata_breakpoint_dma; 240 235 struct hisi_sas_slot *slot_info; 236 + unsigned long flags; 241 237 const struct hisi_sas_hw *hw; /* Low level hw interface */ 238 + unsigned long sata_dev_bitmap[BITS_TO_LONGS(HISI_SAS_MAX_DEVICES)]; 239 + struct work_struct rst_work; 242 240 }; 243 241 244 242 /* Generic HW DMA host memory structures */ ··· 354 346 struct hisi_sas_command_table_smp smp; 355 347 struct hisi_sas_command_table_stp stp; 356 348 }; 349 + 350 + extern struct hisi_sas_port *to_hisi_sas_port(struct asd_sas_port *sas_port); 357 351 extern int hisi_sas_probe(struct platform_device *pdev, 358 352 const struct hisi_sas_hw *ops); 359 353 extern int hisi_sas_remove(struct platform_device *pdev); ··· 364 354 extern void hisi_sas_slot_task_free(struct hisi_hba *hisi_hba, 365 355 struct sas_task *task, 366 356 struct hisi_sas_slot *slot); 357 + extern void hisi_sas_init_mem(struct hisi_hba *hisi_hba); 358 + extern void hisi_sas_rescan_topology(struct hisi_hba *hisi_hba, u32 old_state, 359 + u32 state); 367 360 #endif
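The hisi_sas.h hunk above exports a new to_hisi_sas_port() helper, which is the standard container_of() pattern: recover the driver's outer hisi_sas_port from a pointer to its embedded libsas asd_sas_port. A userspace sketch with trimmed stand-in structures (only the embedding relationship matches the driver):

```c
#include <stddef.h>

/* container_of(): subtract the member's offset to get back to the
 * enclosing structure. This mirrors the kernel macro. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct asd_sas_port { int id; };          /* stand-in for libsas type */

struct hisi_sas_port {
    struct asd_sas_port sas_port;         /* embedded libsas port */
    int port_attached;
};

static struct hisi_sas_port *to_hisi_sas_port(struct asd_sas_port *sas_port)
{
    return container_of(sas_port, struct hisi_sas_port, sas_port);
}
```

libsas hands callbacks an asd_sas_port; wrapping the cast in a named helper (and exporting it to the hw-layer modules) is tidier than open-coding container_of() at every call site, which is what the rest of this series switches to.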
+338 -135
drivers/scsi/hisi_sas/hisi_sas_main.c
··· 21 21 hisi_sas_internal_task_abort(struct hisi_hba *hisi_hba, 22 22 struct domain_device *device, 23 23 int abort_flag, int tag); 24 + static int hisi_sas_softreset_ata_disk(struct domain_device *device); 24 25 25 26 static struct hisi_hba *dev_to_hisi_hba(struct domain_device *device) 26 27 { 27 28 return device->port->ha->lldd_ha; 28 29 } 30 + 31 + struct hisi_sas_port *to_hisi_sas_port(struct asd_sas_port *sas_port) 32 + { 33 + return container_of(sas_port, struct hisi_sas_port, sas_port); 34 + } 35 + EXPORT_SYMBOL_GPL(to_hisi_sas_port); 29 36 30 37 static void hisi_sas_slot_index_clear(struct hisi_hba *hisi_hba, int slot_idx) 31 38 { ··· 77 70 void hisi_sas_slot_task_free(struct hisi_hba *hisi_hba, struct sas_task *task, 78 71 struct hisi_sas_slot *slot) 79 72 { 80 - struct device *dev = &hisi_hba->pdev->dev; 81 - struct domain_device *device = task->dev; 82 - struct hisi_sas_device *sas_dev = device->lldd_dev; 83 73 84 - if (!slot->task) 85 - return; 74 + if (task) { 75 + struct device *dev = &hisi_hba->pdev->dev; 76 + struct domain_device *device = task->dev; 77 + struct hisi_sas_device *sas_dev = device->lldd_dev; 86 78 87 - if (!sas_protocol_ata(task->task_proto)) 88 - if (slot->n_elem) 89 - dma_unmap_sg(dev, task->scatter, slot->n_elem, 90 - task->data_dir); 79 + if (!sas_protocol_ata(task->task_proto)) 80 + if (slot->n_elem) 81 + dma_unmap_sg(dev, task->scatter, slot->n_elem, 82 + task->data_dir); 83 + 84 + task->lldd_task = NULL; 85 + 86 + if (sas_dev) 87 + atomic64_dec(&sas_dev->running_req); 88 + } 91 89 92 90 if (slot->command_table) 93 91 dma_pool_free(hisi_hba->command_table_pool, ··· 107 95 slot->sge_page_dma); 108 96 109 97 list_del_init(&slot->entry); 110 - task->lldd_task = NULL; 111 98 slot->task = NULL; 112 99 slot->port = NULL; 113 100 hisi_sas_slot_index_free(hisi_hba, slot->idx); 114 - if (sas_dev) 115 - atomic64_dec(&sas_dev->running_req); 101 + 116 102 /* slot memory is fully zeroed when it is reused */ 117 103 } 118 104 
EXPORT_SYMBOL_GPL(hisi_sas_slot_task_free); ··· 188 178 struct hisi_sas_port *port; 189 179 struct hisi_sas_slot *slot; 190 180 struct hisi_sas_cmd_hdr *cmd_hdr_base; 181 + struct asd_sas_port *sas_port = device->port; 191 182 struct device *dev = &hisi_hba->pdev->dev; 192 183 int dlvry_queue_slot, dlvry_queue, n_elem = 0, rc, slot_idx; 184 + unsigned long flags; 193 185 194 - if (!device->port) { 186 + if (!sas_port) { 195 187 struct task_status_struct *ts = &task->task_status; 196 188 197 189 ts->resp = SAS_TASK_UNDELIVERED; ··· 204 192 */ 205 193 if (device->dev_type != SAS_SATA_DEV) 206 194 task->task_done(task); 207 - return 0; 195 + return SAS_PHY_DOWN; 208 196 } 209 197 210 198 if (DEV_IS_GONE(sas_dev)) { ··· 215 203 dev_info(dev, "task prep: device %016llx not ready\n", 216 204 SAS_ADDR(device->sas_addr)); 217 205 218 - rc = SAS_PHY_DOWN; 219 - return rc; 206 + return SAS_PHY_DOWN; 220 207 } 221 - port = device->port->lldd_port; 208 + 209 + port = to_hisi_sas_port(sas_port); 222 210 if (port && !port->port_attached) { 223 211 dev_info(dev, "task prep: %s port%d not attach device\n", 224 - (sas_protocol_ata(task->task_proto)) ? 212 + (dev_is_sata(device)) ? 
225 213 "SATA/STP" : "SAS", 226 214 device->port->id); 227 215 ··· 311 299 goto err_out_command_table; 312 300 } 313 301 314 - list_add_tail(&slot->entry, &port->list); 315 - spin_lock(&task->task_state_lock); 302 + list_add_tail(&slot->entry, &sas_dev->list); 303 + spin_lock_irqsave(&task->task_state_lock, flags); 316 304 task->task_state_flags |= SAS_TASK_AT_INITIATOR; 317 - spin_unlock(&task->task_state_lock); 305 + spin_unlock_irqrestore(&task->task_state_lock, flags); 318 306 319 307 hisi_hba->slot_prep = slot; 320 308 ··· 354 342 unsigned long flags; 355 343 struct hisi_hba *hisi_hba = dev_to_hisi_hba(task->dev); 356 344 struct device *dev = &hisi_hba->pdev->dev; 345 + 346 + if (unlikely(test_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))) 347 + return -EINVAL; 357 348 358 349 /* protect task_prep and start_delivery sequence */ 359 350 spin_lock_irqsave(&hisi_hba->lock, flags); ··· 427 412 sas_dev->dev_type = device->dev_type; 428 413 sas_dev->hisi_hba = hisi_hba; 429 414 sas_dev->sas_device = device; 415 + INIT_LIST_HEAD(&hisi_hba->devices[i].list); 430 416 break; 431 417 } 432 418 } ··· 498 482 static void hisi_sas_scan_start(struct Scsi_Host *shost) 499 483 { 500 484 struct hisi_hba *hisi_hba = shost_priv(shost); 501 - int i; 502 485 503 - for (i = 0; i < hisi_hba->n_phy; ++i) 504 - hisi_sas_bytes_dmaed(hisi_hba, i); 505 - 506 - hisi_hba->scan_finished = 1; 486 + hisi_hba->hw->phys_init(hisi_hba); 507 487 } 508 488 509 489 static int hisi_sas_scan_finished(struct Scsi_Host *shost, unsigned long time) ··· 507 495 struct hisi_hba *hisi_hba = shost_priv(shost); 508 496 struct sas_ha_struct *sha = &hisi_hba->sha; 509 497 510 - if (hisi_hba->scan_finished == 0) 498 + /* Wait for PHY up interrupt to occur */ 499 + if (time < HZ) 511 500 return 0; 512 501 513 502 sas_drain_work(sha); ··· 558 545 struct hisi_hba *hisi_hba = sas_ha->lldd_ha; 559 546 struct hisi_sas_phy *phy = sas_phy->lldd_phy; 560 547 struct asd_sas_port *sas_port = sas_phy->port; 561 - struct 
hisi_sas_port *port = &hisi_hba->port[phy->port_id]; 548 + struct hisi_sas_port *port = to_hisi_sas_port(sas_port); 562 549 unsigned long flags; 563 550 564 551 if (!sas_port) ··· 572 559 spin_unlock_irqrestore(&hisi_hba->lock, flags); 573 560 } 574 561 575 - static void hisi_sas_do_release_task(struct hisi_hba *hisi_hba, int phy_no, 576 - struct domain_device *device) 562 + static void hisi_sas_do_release_task(struct hisi_hba *hisi_hba, struct sas_task *task, 563 + struct hisi_sas_slot *slot) 577 564 { 578 - struct hisi_sas_phy *phy; 579 - struct hisi_sas_port *port; 580 - struct hisi_sas_slot *slot, *slot2; 581 - struct device *dev = &hisi_hba->pdev->dev; 565 + if (task) { 566 + unsigned long flags; 567 + struct task_status_struct *ts; 582 568 583 - phy = &hisi_hba->phy[phy_no]; 584 - port = phy->port; 585 - if (!port) 586 - return; 569 + ts = &task->task_status; 587 570 588 - list_for_each_entry_safe(slot, slot2, &port->list, entry) { 589 - struct sas_task *task; 590 - 591 - task = slot->task; 592 - if (device && task->dev != device) 593 - continue; 594 - 595 - dev_info(dev, "Release slot [%d:%d], task [%p]:\n", 596 - slot->dlvry_queue, slot->dlvry_queue_slot, task); 597 - hisi_hba->hw->slot_complete(hisi_hba, slot, 1); 571 + ts->resp = SAS_TASK_COMPLETE; 572 + ts->stat = SAS_ABORTED_TASK; 573 + spin_lock_irqsave(&task->task_state_lock, flags); 574 + task->task_state_flags &= 575 + ~(SAS_TASK_STATE_PENDING | SAS_TASK_AT_INITIATOR); 576 + task->task_state_flags |= SAS_TASK_STATE_DONE; 577 + spin_unlock_irqrestore(&task->task_state_lock, flags); 598 578 } 579 + 580 + hisi_sas_slot_task_free(hisi_hba, task, slot); 599 581 } 600 582 601 - static void hisi_sas_port_notify_deformed(struct asd_sas_phy *sas_phy) 602 - { 603 - struct domain_device *device; 604 - struct hisi_sas_phy *phy = sas_phy->lldd_phy; 605 - struct asd_sas_port *sas_port = sas_phy->port; 606 - 607 - list_for_each_entry(device, &sas_port->dev_list, dev_list_node) 608 - 
hisi_sas_do_release_task(phy->hisi_hba, sas_phy->id, device); 609 - } 610 - 583 + /* hisi_hba.lock should be locked */ 611 584 static void hisi_sas_release_task(struct hisi_hba *hisi_hba, 612 585 struct domain_device *device) 613 586 { 614 - struct asd_sas_port *port = device->port; 615 - struct asd_sas_phy *sas_phy; 587 + struct hisi_sas_slot *slot, *slot2; 588 + struct hisi_sas_device *sas_dev = device->lldd_dev; 616 589 617 - list_for_each_entry(sas_phy, &port->phy_list, port_phy_el) 618 - hisi_sas_do_release_task(hisi_hba, sas_phy->id, device); 590 + list_for_each_entry_safe(slot, slot2, &sas_dev->list, entry) 591 + hisi_sas_do_release_task(hisi_hba, slot->task, slot); 592 + } 593 + 594 + static void hisi_sas_release_tasks(struct hisi_hba *hisi_hba) 595 + { 596 + struct hisi_sas_device *sas_dev; 597 + struct domain_device *device; 598 + int i; 599 + 600 + for (i = 0; i < HISI_SAS_MAX_DEVICES; i++) { 601 + sas_dev = &hisi_hba->devices[i]; 602 + device = sas_dev->sas_device; 603 + 604 + if ((sas_dev->dev_type == SAS_PHY_UNUSED) || 605 + !device) 606 + continue; 607 + 608 + hisi_sas_release_task(hisi_hba, device); 609 + } 619 610 } 620 611 621 612 static void hisi_sas_dev_gone(struct domain_device *device) ··· 661 644 break; 662 645 663 646 case PHY_FUNC_LINK_RESET: 647 + hisi_hba->hw->phy_disable(hisi_hba, phy_no); 648 + msleep(100); 664 649 hisi_hba->hw->phy_enable(hisi_hba, phy_no); 665 - hisi_hba->hw->phy_hard_reset(hisi_hba, phy_no); 666 650 break; 667 651 668 652 case PHY_FUNC_DISABLE: ··· 716 698 task->dev = device; 717 699 task->task_proto = device->tproto; 718 700 719 - memcpy(&task->ssp_task, parameter, para_len); 701 + if (dev_is_sata(device)) { 702 + task->ata_task.device_control_reg_update = 1; 703 + memcpy(&task->ata_task.fis, parameter, para_len); 704 + } else { 705 + memcpy(&task->ssp_task, parameter, para_len); 706 + } 720 707 task->task_done = hisi_sas_task_done; 721 708 722 709 task->slow_task->timer.data = (unsigned long) task; ··· 743 720 /* 
Even TMF timed out, return direct. */ 744 721 if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) { 745 722 if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) { 746 - dev_err(dev, "abort tmf: TMF task[%d] timeout\n", 747 - tmf->tag_of_task_to_be_managed); 748 - if (task->lldd_task) { 749 - struct hisi_sas_slot *slot = 750 - task->lldd_task; 723 + struct hisi_sas_slot *slot = task->lldd_task; 751 724 752 - hisi_sas_slot_task_free(hisi_hba, 753 - task, slot); 754 - } 725 + dev_err(dev, "abort tmf: TMF task timeout\n"); 726 + if (slot) 727 + slot->task = NULL; 755 728 756 729 goto ex_err; 757 730 } ··· 800 781 return res; 801 782 } 802 783 784 + static void hisi_sas_fill_ata_reset_cmd(struct ata_device *dev, 785 + bool reset, int pmp, u8 *fis) 786 + { 787 + struct ata_taskfile tf; 788 + 789 + ata_tf_init(dev, &tf); 790 + if (reset) 791 + tf.ctl |= ATA_SRST; 792 + else 793 + tf.ctl &= ~ATA_SRST; 794 + tf.command = ATA_CMD_DEV_RESET; 795 + ata_tf_to_fis(&tf, pmp, 0, fis); 796 + } 797 + 798 + static int hisi_sas_softreset_ata_disk(struct domain_device *device) 799 + { 800 + u8 fis[20] = {0}; 801 + struct ata_port *ap = device->sata_dev.ap; 802 + struct ata_link *link; 803 + int rc = TMF_RESP_FUNC_FAILED; 804 + struct hisi_hba *hisi_hba = dev_to_hisi_hba(device); 805 + struct device *dev = &hisi_hba->pdev->dev; 806 + int s = sizeof(struct host_to_dev_fis); 807 + unsigned long flags; 808 + 809 + ata_for_each_link(link, ap, EDGE) { 810 + int pmp = sata_srst_pmp(link); 811 + 812 + hisi_sas_fill_ata_reset_cmd(link->device, 1, pmp, fis); 813 + rc = hisi_sas_exec_internal_tmf_task(device, fis, s, NULL); 814 + if (rc != TMF_RESP_FUNC_COMPLETE) 815 + break; 816 + } 817 + 818 + if (rc == TMF_RESP_FUNC_COMPLETE) { 819 + ata_for_each_link(link, ap, EDGE) { 820 + int pmp = sata_srst_pmp(link); 821 + 822 + hisi_sas_fill_ata_reset_cmd(link->device, 0, pmp, fis); 823 + rc = hisi_sas_exec_internal_tmf_task(device, fis, 824 + s, NULL); 825 + if (rc != TMF_RESP_FUNC_COMPLETE) 826 + 
dev_err(dev, "ata disk de-reset failed\n"); 827 + } 828 + } else { 829 + dev_err(dev, "ata disk reset failed\n"); 830 + } 831 + 832 + if (rc == TMF_RESP_FUNC_COMPLETE) { 833 + spin_lock_irqsave(&hisi_hba->lock, flags); 834 + hisi_sas_release_task(hisi_hba, device); 835 + spin_unlock_irqrestore(&hisi_hba->lock, flags); 836 + } 837 + 838 + return rc; 839 + } 840 + 803 841 static int hisi_sas_debug_issue_ssp_tmf(struct domain_device *device, 804 842 u8 *lun, struct hisi_sas_tmf_task *tmf) 805 843 { ··· 869 793 870 794 return hisi_sas_exec_internal_tmf_task(device, &ssp_task, 871 795 sizeof(ssp_task), tmf); 796 + } 797 + 798 + static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba) 799 + { 800 + int rc; 801 + 802 + if (!hisi_hba->hw->soft_reset) 803 + return -1; 804 + 805 + if (!test_and_set_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags)) { 806 + struct device *dev = &hisi_hba->pdev->dev; 807 + struct sas_ha_struct *sas_ha = &hisi_hba->sha; 808 + unsigned long flags; 809 + 810 + dev_dbg(dev, "controller reset begins!\n"); 811 + scsi_block_requests(hisi_hba->shost); 812 + rc = hisi_hba->hw->soft_reset(hisi_hba); 813 + if (rc) { 814 + dev_warn(dev, "controller reset failed (%d)\n", rc); 815 + goto out; 816 + } 817 + spin_lock_irqsave(&hisi_hba->lock, flags); 818 + hisi_sas_release_tasks(hisi_hba); 819 + spin_unlock_irqrestore(&hisi_hba->lock, flags); 820 + 821 + sas_ha->notify_ha_event(sas_ha, HAE_RESET); 822 + dev_dbg(dev, "controller reset successful!\n"); 823 + } else 824 + return -1; 825 + 826 + out: 827 + scsi_unblock_requests(hisi_hba->shost); 828 + clear_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags); 829 + return rc; 872 830 } 873 831 874 832 static int hisi_sas_abort_task(struct sas_task *task) ··· 921 811 return TMF_RESP_FUNC_FAILED; 922 812 } 923 813 924 - spin_lock_irqsave(&task->task_state_lock, flags); 925 814 if (task->task_state_flags & SAS_TASK_STATE_DONE) { 926 - spin_unlock_irqrestore(&task->task_state_lock, flags); 927 815 rc = TMF_RESP_FUNC_COMPLETE; 
928 816 goto out; 929 817 } 930 818 931 - spin_unlock_irqrestore(&task->task_state_lock, flags); 932 819 sas_dev->dev_status = HISI_SAS_DEV_EH; 933 820 if (task->lldd_task && task->task_proto & SAS_PROTOCOL_SSP) { 934 821 struct scsi_cmnd *cmnd = task->uldd_task; 935 822 struct hisi_sas_slot *slot = task->lldd_task; 936 823 u32 tag = slot->idx; 824 + int rc2; 937 825 938 826 int_to_scsilun(cmnd->device->lun, &lun); 939 827 tmf_task.tmf = TMF_ABORT_TASK; ··· 940 832 rc = hisi_sas_debug_issue_ssp_tmf(task->dev, lun.scsi_lun, 941 833 &tmf_task); 942 834 943 - /* if successful, clear the task and callback forwards.*/ 944 - if (rc == TMF_RESP_FUNC_COMPLETE) { 835 + rc2 = hisi_sas_internal_task_abort(hisi_hba, device, 836 + HISI_SAS_INT_ABT_CMD, tag); 837 + /* 838 + * If the TMF finds that the IO is not in the device and also 839 + * the internal abort does not succeed, then it is safe to 840 + * free the slot. 841 + * Note: if the internal abort succeeds then the slot 842 + * will have already been completed 843 + */ 844 + if (rc == TMF_RESP_FUNC_COMPLETE && rc2 != TMF_RESP_FUNC_SUCC) { 945 845 if (task->lldd_task) { 946 - struct hisi_sas_slot *slot; 947 - 948 - slot = &hisi_hba->slot_info 949 - [tmf_task.tag_of_task_to_be_managed]; 950 846 spin_lock_irqsave(&hisi_hba->lock, flags); 951 - hisi_hba->hw->slot_complete(hisi_hba, slot, 1); 847 + hisi_sas_do_release_task(hisi_hba, task, slot); 952 848 spin_unlock_irqrestore(&hisi_hba->lock, flags); 953 849 } 954 850 } 955 - 956 - hisi_sas_internal_task_abort(hisi_hba, device, 957 - HISI_SAS_INT_ABT_CMD, tag); 958 851 } else if (task->task_proto & SAS_PROTOCOL_SATA || 959 852 task->task_proto & SAS_PROTOCOL_STP) { 960 853 if (task->dev->dev_type == SAS_SATA_DEV) { 961 854 hisi_sas_internal_task_abort(hisi_hba, device, 962 855 HISI_SAS_INT_ABT_DEV, 0); 963 - rc = TMF_RESP_FUNC_COMPLETE; 856 + rc = hisi_sas_softreset_ata_disk(device); 964 857 } 965 858 } else if (task->task_proto & SAS_PROTOCOL_SMP) { 966 859 /* SMP */ 967 860 
struct hisi_sas_slot *slot = task->lldd_task; 968 861 u32 tag = slot->idx; 969 862 970 - hisi_sas_internal_task_abort(hisi_hba, device, 971 - HISI_SAS_INT_ABT_CMD, tag); 863 + rc = hisi_sas_internal_task_abort(hisi_hba, device, 864 + HISI_SAS_INT_ABT_CMD, tag); 865 + if (rc == TMF_RESP_FUNC_FAILED) { 866 + spin_lock_irqsave(&hisi_hba->lock, flags); 867 + hisi_sas_do_release_task(hisi_hba, task, slot); 868 + spin_unlock_irqrestore(&hisi_hba->lock, flags); 869 + } 972 870 } 973 871 974 872 out: ··· 1029 915 1030 916 rc = hisi_sas_debug_I_T_nexus_reset(device); 1031 917 1032 - spin_lock_irqsave(&hisi_hba->lock, flags); 1033 - hisi_sas_release_task(hisi_hba, device); 1034 - spin_unlock_irqrestore(&hisi_hba->lock, flags); 1035 - 1036 - return 0; 918 + if (rc == TMF_RESP_FUNC_COMPLETE) { 919 + spin_lock_irqsave(&hisi_hba->lock, flags); 920 + hisi_sas_release_task(hisi_hba, device); 921 + spin_unlock_irqrestore(&hisi_hba->lock, flags); 922 + } 923 + return rc; 1037 924 } 1038 925 1039 926 static int hisi_sas_lu_reset(struct domain_device *device, u8 *lun) 1040 927 { 1041 - struct hisi_sas_tmf_task tmf_task; 1042 928 struct hisi_sas_device *sas_dev = device->lldd_dev; 1043 929 struct hisi_hba *hisi_hba = dev_to_hisi_hba(device); 1044 930 struct device *dev = &hisi_hba->pdev->dev; 1045 931 unsigned long flags; 1046 932 int rc = TMF_RESP_FUNC_FAILED; 1047 933 1048 - tmf_task.tmf = TMF_LU_RESET; 1049 934 sas_dev->dev_status = HISI_SAS_DEV_EH; 1050 - rc = hisi_sas_debug_issue_ssp_tmf(device, lun, &tmf_task); 1051 - if (rc == TMF_RESP_FUNC_COMPLETE) { 1052 - spin_lock_irqsave(&hisi_hba->lock, flags); 1053 - hisi_sas_release_task(hisi_hba, device); 1054 - spin_unlock_irqrestore(&hisi_hba->lock, flags); 1055 - } 935 + if (dev_is_sata(device)) { 936 + struct sas_phy *phy; 1056 937 1057 - /* If failed, fall-through I_T_Nexus reset */ 1058 - dev_err(dev, "lu_reset: for device[%llx]:rc= %d\n", 1059 - sas_dev->device_id, rc); 938 + /* Clear internal IO and then hardreset */ 939 + rc = 
hisi_sas_internal_task_abort(hisi_hba, device, 940 + HISI_SAS_INT_ABT_DEV, 0); 941 + if (rc == TMF_RESP_FUNC_FAILED) 942 + goto out; 943 + 944 + phy = sas_get_local_phy(device); 945 + 946 + rc = sas_phy_reset(phy, 1); 947 + 948 + if (rc == 0) { 949 + spin_lock_irqsave(&hisi_hba->lock, flags); 950 + hisi_sas_release_task(hisi_hba, device); 951 + spin_unlock_irqrestore(&hisi_hba->lock, flags); 952 + } 953 + sas_put_local_phy(phy); 954 + } else { 955 + struct hisi_sas_tmf_task tmf_task = { .tmf = TMF_LU_RESET }; 956 + 957 + rc = hisi_sas_debug_issue_ssp_tmf(device, lun, &tmf_task); 958 + if (rc == TMF_RESP_FUNC_COMPLETE) { 959 + spin_lock_irqsave(&hisi_hba->lock, flags); 960 + hisi_sas_release_task(hisi_hba, device); 961 + spin_unlock_irqrestore(&hisi_hba->lock, flags); 962 + } 963 + } 964 + out: 965 + if (rc != TMF_RESP_FUNC_COMPLETE) 966 + dev_err(dev, "lu_reset: for device[%llx]:rc= %d\n", 967 + sas_dev->device_id, rc); 1060 968 return rc; 969 + } 970 + 971 + static int hisi_sas_clear_nexus_ha(struct sas_ha_struct *sas_ha) 972 + { 973 + struct hisi_hba *hisi_hba = sas_ha->lldd_ha; 974 + 975 + return hisi_sas_controller_reset(hisi_hba); 1061 976 } 1062 977 1063 978 static int hisi_sas_query_task(struct sas_task *task) ··· 1133 990 struct device *dev = &hisi_hba->pdev->dev; 1134 991 struct hisi_sas_port *port; 1135 992 struct hisi_sas_slot *slot; 993 + struct asd_sas_port *sas_port = device->port; 1136 994 struct hisi_sas_cmd_hdr *cmd_hdr_base; 1137 995 int dlvry_queue_slot, dlvry_queue, n_elem = 0, rc, slot_idx; 996 + unsigned long flags; 997 + 998 + if (unlikely(test_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))) 999 + return -EINVAL; 1138 1000 1139 1001 if (!device->port) 1140 1002 return -1; 1141 1003 1142 - port = device->port->lldd_port; 1004 + port = to_hisi_sas_port(sas_port); 1143 1005 1144 1006 /* simply get a slot and send abort command */ 1145 1007 rc = hisi_sas_slot_index_alloc(hisi_hba, &slot_idx); ··· 1175 1027 if (rc) 1176 1028 goto err_out_tag; 1177 
1029 1178 - /* Port structure is static for the HBA, so 1179 - * even if the port is deformed it is ok 1180 - * to reference. 1181 - */ 1182 - list_add_tail(&slot->entry, &port->list); 1183 - spin_lock(&task->task_state_lock); 1030 + 1031 + list_add_tail(&slot->entry, &sas_dev->list); 1032 + spin_lock_irqsave(&task->task_state_lock, flags); 1184 1033 task->task_state_flags |= SAS_TASK_AT_INITIATOR; 1185 - spin_unlock(&task->task_state_lock); 1034 + spin_unlock_irqrestore(&task->task_state_lock, flags); 1186 1035 1187 1036 hisi_hba->slot_prep = slot; 1188 1037 ··· 1230 1085 task->task_done = hisi_sas_task_done; 1231 1086 task->slow_task->timer.data = (unsigned long)task; 1232 1087 task->slow_task->timer.function = hisi_sas_tmf_timedout; 1233 - task->slow_task->timer.expires = jiffies + 20*HZ; 1088 + task->slow_task->timer.expires = jiffies + msecs_to_jiffies(110); 1234 1089 add_timer(&task->slow_task->timer); 1235 1090 1236 1091 /* Lock as we are alloc'ing a slot, which cannot be interrupted */ ··· 1253 1108 goto exit; 1254 1109 } 1255 1110 1256 - /* TMF timed out, return direct. 
*/ 1111 + if (task->task_status.resp == SAS_TASK_COMPLETE && 1112 + task->task_status.stat == TMF_RESP_FUNC_SUCC) { 1113 + res = TMF_RESP_FUNC_SUCC; 1114 + goto exit; 1115 + } 1116 + 1117 + /* Internal abort timed out */ 1257 1118 if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) { 1258 1119 if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) { 1259 1120 dev_err(dev, "internal task abort: timeout.\n"); 1260 - if (task->lldd_task) { 1261 - struct hisi_sas_slot *slot = task->lldd_task; 1262 - 1263 - hisi_sas_slot_task_free(hisi_hba, task, slot); 1264 - } 1265 1121 } 1266 1122 } 1267 1123 ··· 1281 1135 static void hisi_sas_port_formed(struct asd_sas_phy *sas_phy) 1282 1136 { 1283 1137 hisi_sas_port_notify_formed(sas_phy); 1284 - } 1285 - 1286 - static void hisi_sas_port_deformed(struct asd_sas_phy *sas_phy) 1287 - { 1288 - hisi_sas_port_notify_deformed(sas_phy); 1289 1138 } 1290 1139 1291 1140 static void hisi_sas_phy_disconnected(struct hisi_sas_phy *phy) ··· 1322 1181 } 1323 1182 EXPORT_SYMBOL_GPL(hisi_sas_phy_down); 1324 1183 1184 + void hisi_sas_rescan_topology(struct hisi_hba *hisi_hba, u32 old_state, 1185 + u32 state) 1186 + { 1187 + struct sas_ha_struct *sas_ha = &hisi_hba->sha; 1188 + int phy_no; 1189 + 1190 + for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) { 1191 + struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no]; 1192 + struct asd_sas_phy *sas_phy = &phy->sas_phy; 1193 + struct asd_sas_port *sas_port = sas_phy->port; 1194 + struct domain_device *dev; 1195 + 1196 + if (sas_phy->enabled) { 1197 + /* Report PHY state change to libsas */ 1198 + if (state & (1 << phy_no)) 1199 + continue; 1200 + 1201 + if (old_state & (1 << phy_no)) 1202 + /* PHY down but was up before */ 1203 + hisi_sas_phy_down(hisi_hba, phy_no, 0); 1204 + } 1205 + if (!sas_port) 1206 + continue; 1207 + dev = sas_port->port_dev; 1208 + 1209 + if (DEV_IS_EXPANDER(dev->dev_type)) 1210 + sas_ha->notify_phy_event(sas_phy, PORTE_BROADCAST_RCVD); 1211 + } 1212 + } 1213 + 
EXPORT_SYMBOL_GPL(hisi_sas_rescan_topology); 1214 + 1325 1215 static struct scsi_transport_template *hisi_sas_stt; 1326 1216 1327 1217 static struct scsi_host_template hisi_sas_sht = { ··· 1387 1215 .lldd_I_T_nexus_reset = hisi_sas_I_T_nexus_reset, 1388 1216 .lldd_lu_reset = hisi_sas_lu_reset, 1389 1217 .lldd_query_task = hisi_sas_query_task, 1218 + .lldd_clear_nexus_ha = hisi_sas_clear_nexus_ha, 1390 1219 .lldd_port_formed = hisi_sas_port_formed, 1391 - .lldd_port_deformed = hisi_sas_port_deformed, 1392 1220 }; 1221 + 1222 + void hisi_sas_init_mem(struct hisi_hba *hisi_hba) 1223 + { 1224 + int i, s, max_command_entries = hisi_hba->hw->max_command_entries; 1225 + 1226 + for (i = 0; i < hisi_hba->queue_count; i++) { 1227 + struct hisi_sas_cq *cq = &hisi_hba->cq[i]; 1228 + struct hisi_sas_dq *dq = &hisi_hba->dq[i]; 1229 + 1230 + s = sizeof(struct hisi_sas_cmd_hdr) * HISI_SAS_QUEUE_SLOTS; 1231 + memset(hisi_hba->cmd_hdr[i], 0, s); 1232 + dq->wr_point = 0; 1233 + 1234 + s = hisi_hba->hw->complete_hdr_size * HISI_SAS_QUEUE_SLOTS; 1235 + memset(hisi_hba->complete_hdr[i], 0, s); 1236 + cq->rd_point = 0; 1237 + } 1238 + 1239 + s = sizeof(struct hisi_sas_initial_fis) * hisi_hba->n_phy; 1240 + memset(hisi_hba->initial_fis, 0, s); 1241 + 1242 + s = max_command_entries * sizeof(struct hisi_sas_iost); 1243 + memset(hisi_hba->iost, 0, s); 1244 + 1245 + s = max_command_entries * sizeof(struct hisi_sas_breakpoint); 1246 + memset(hisi_hba->breakpoint, 0, s); 1247 + 1248 + s = max_command_entries * sizeof(struct hisi_sas_breakpoint) * 2; 1249 + memset(hisi_hba->sata_breakpoint, 0, s); 1250 + } 1251 + EXPORT_SYMBOL_GPL(hisi_sas_init_mem); 1393 1252 1394 1253 static int hisi_sas_alloc(struct hisi_hba *hisi_hba, struct Scsi_Host *shost) 1395 1254 { ··· 1433 1230 hisi_sas_phy_init(hisi_hba, i); 1434 1231 hisi_hba->port[i].port_attached = 0; 1435 1232 hisi_hba->port[i].id = -1; 1436 - INIT_LIST_HEAD(&hisi_hba->port[i].list); 1437 1233 } 1438 1234 1439 1235 for (i = 0; i < 
HISI_SAS_MAX_DEVICES; i++) { ··· 1459 1257 &hisi_hba->cmd_hdr_dma[i], GFP_KERNEL); 1460 1258 if (!hisi_hba->cmd_hdr[i]) 1461 1259 goto err_out; 1462 - memset(hisi_hba->cmd_hdr[i], 0, s); 1463 1260 1464 1261 /* Completion queue */ 1465 1262 s = hisi_hba->hw->complete_hdr_size * HISI_SAS_QUEUE_SLOTS; ··· 1466 1265 &hisi_hba->complete_hdr_dma[i], GFP_KERNEL); 1467 1266 if (!hisi_hba->complete_hdr[i]) 1468 1267 goto err_out; 1469 - memset(hisi_hba->complete_hdr[i], 0, s); 1470 1268 } 1471 1269 1472 1270 s = HISI_SAS_STATUS_BUF_SZ; ··· 1500 1300 if (!hisi_hba->iost) 1501 1301 goto err_out; 1502 1302 1503 - memset(hisi_hba->iost, 0, s); 1504 - 1505 1303 s = max_command_entries * sizeof(struct hisi_sas_breakpoint); 1506 1304 hisi_hba->breakpoint = dma_alloc_coherent(dev, s, 1507 1305 &hisi_hba->breakpoint_dma, GFP_KERNEL); 1508 1306 if (!hisi_hba->breakpoint) 1509 1307 goto err_out; 1510 - 1511 - memset(hisi_hba->breakpoint, 0, s); 1512 1308 1513 1309 hisi_hba->slot_index_count = max_command_entries; 1514 1310 s = hisi_hba->slot_index_count / BITS_PER_BYTE; ··· 1522 1326 &hisi_hba->initial_fis_dma, GFP_KERNEL); 1523 1327 if (!hisi_hba->initial_fis) 1524 1328 goto err_out; 1525 - memset(hisi_hba->initial_fis, 0, s); 1526 1329 1527 1330 s = max_command_entries * sizeof(struct hisi_sas_breakpoint) * 2; 1528 1331 hisi_hba->sata_breakpoint = dma_alloc_coherent(dev, s, 1529 1332 &hisi_hba->sata_breakpoint_dma, GFP_KERNEL); 1530 1333 if (!hisi_hba->sata_breakpoint) 1531 1334 goto err_out; 1532 - memset(hisi_hba->sata_breakpoint, 0, s); 1335 + hisi_sas_init_mem(hisi_hba); 1533 1336 1534 1337 hisi_sas_slot_index_init(hisi_hba); 1535 1338 ··· 1599 1404 destroy_workqueue(hisi_hba->wq); 1600 1405 } 1601 1406 1407 + static void hisi_sas_rst_work_handler(struct work_struct *work) 1408 + { 1409 + struct hisi_hba *hisi_hba = 1410 + container_of(work, struct hisi_hba, rst_work); 1411 + 1412 + hisi_sas_controller_reset(hisi_hba); 1413 + } 1414 + 1602 1415 static struct Scsi_Host 
*hisi_sas_shost_alloc(struct platform_device *pdev, 1603 1416 const struct hisi_sas_hw *hw) 1604 1417 { ··· 1624 1421 } 1625 1422 hisi_hba = shost_priv(shost); 1626 1423 1424 + INIT_WORK(&hisi_hba->rst_work, hisi_sas_rst_work_handler); 1627 1425 hisi_hba->hw = hw; 1628 1426 hisi_hba->pdev = pdev; 1629 1427 hisi_hba->shost = shost; ··· 1787 1583 struct hisi_hba *hisi_hba = sha->lldd_ha; 1788 1584 struct Scsi_Host *shost = sha->core.shost; 1789 1585 1790 - scsi_remove_host(sha->core.shost); 1791 1586 sas_unregister_ha(sha); 1792 1587 sas_remove_host(sha->core.shost); 1793 1588
+11 -8
drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
··· 508 508 struct device *dev = &hisi_hba->pdev->dev; 509 509 u64 qw0, device_id = sas_dev->device_id; 510 510 struct hisi_sas_itct *itct = &hisi_hba->itct[device_id]; 511 + struct asd_sas_port *sas_port = device->port; 512 + struct hisi_sas_port *port = to_hisi_sas_port(sas_port); 511 513 512 514 memset(itct, 0, sizeof(*itct)); 513 515 ··· 530 528 (1 << ITCT_HDR_AWT_CONTROL_OFF) | 531 529 (device->max_linkrate << ITCT_HDR_MAX_CONN_RATE_OFF) | 532 530 (1 << ITCT_HDR_VALID_LINK_NUM_OFF) | 533 - (device->port->id << ITCT_HDR_PORT_ID_OFF)); 531 + (port->id << ITCT_HDR_PORT_ID_OFF)); 534 532 itct->qw0 = cpu_to_le64(qw0); 535 533 536 534 /* qw1 */ ··· 1277 1275 } 1278 1276 1279 1277 static int slot_complete_v1_hw(struct hisi_hba *hisi_hba, 1280 - struct hisi_sas_slot *slot, int abort) 1278 + struct hisi_sas_slot *slot) 1281 1279 { 1282 1280 struct sas_task *task = slot->task; 1283 1281 struct hisi_sas_device *sas_dev; ··· 1288 1286 struct hisi_sas_complete_v1_hdr *complete_queue = 1289 1287 hisi_hba->complete_hdr[slot->cmplt_queue]; 1290 1288 struct hisi_sas_complete_v1_hdr *complete_hdr; 1289 + unsigned long flags; 1291 1290 u32 cmplt_hdr_data; 1292 1291 1293 1292 complete_hdr = &complete_queue[slot->cmplt_queue_slot]; ··· 1301 1298 device = task->dev; 1302 1299 sas_dev = device->lldd_dev; 1303 1300 1301 + spin_lock_irqsave(&task->task_state_lock, flags); 1304 1302 task->task_state_flags &= 1305 1303 ~(SAS_TASK_STATE_PENDING | SAS_TASK_AT_INITIATOR); 1306 1304 task->task_state_flags |= SAS_TASK_STATE_DONE; 1305 + spin_unlock_irqrestore(&task->task_state_lock, flags); 1307 1306 1308 1307 memset(ts, 0, sizeof(*ts)); 1309 1308 ts->resp = SAS_TASK_COMPLETE; 1310 1309 1311 - if (unlikely(!sas_dev || abort)) { 1312 - if (!sas_dev) 1313 - dev_dbg(dev, "slot complete: port has not device\n"); 1310 + if (unlikely(!sas_dev)) { 1311 + dev_dbg(dev, "slot complete: port has no device\n"); 1314 1312 ts->stat = SAS_PHY_DOWN; 1315 1313 goto out; 1316 1314 } ··· 1624 1620 */ 1625 1621 
slot->cmplt_queue_slot = rd_point; 1626 1622 slot->cmplt_queue = queue; 1627 - slot_complete_v1_hw(hisi_hba, slot, 0); 1623 + slot_complete_v1_hw(hisi_hba, slot); 1628 1624 1629 1625 if (++rd_point >= HISI_SAS_QUEUE_SLOTS) 1630 1626 rd_point = 0; ··· 1849 1845 if (rc) 1850 1846 return rc; 1851 1847 1852 - phys_init_v1_hw(hisi_hba); 1853 - 1854 1848 return 0; 1855 1849 } 1856 1850 ··· 1862 1860 .get_free_slot = get_free_slot_v1_hw, 1863 1861 .start_delivery = start_delivery_v1_hw, 1864 1862 .slot_complete = slot_complete_v1_hw, 1863 + .phys_init = phys_init_v1_hw, 1865 1864 .phy_enable = enable_phy_v1_hw, 1866 1865 .phy_disable = disable_phy_v1_hw, 1867 1866 .phy_hard_reset = phy_hard_reset_v1_hw,
+945 -262
drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
··· 194 194 #define SL_CONTROL_NOTIFY_EN_MSK (0x1 << SL_CONTROL_NOTIFY_EN_OFF) 195 195 #define SL_CONTROL_CTA_OFF 17 196 196 #define SL_CONTROL_CTA_MSK (0x1 << SL_CONTROL_CTA_OFF) 197 - #define RX_PRIMS_STATUS (PORT_BASE + 0x98) 198 - #define RX_BCAST_CHG_OFF 1 199 - #define RX_BCAST_CHG_MSK (0x1 << RX_BCAST_CHG_OFF) 197 + #define RX_PRIMS_STATUS (PORT_BASE + 0x98) 198 + #define RX_BCAST_CHG_OFF 1 199 + #define RX_BCAST_CHG_MSK (0x1 << RX_BCAST_CHG_OFF) 200 200 #define TX_ID_DWORD0 (PORT_BASE + 0x9c) 201 201 #define TX_ID_DWORD1 (PORT_BASE + 0xa0) 202 202 #define TX_ID_DWORD2 (PORT_BASE + 0xa4) ··· 207 207 #define TXID_AUTO (PORT_BASE + 0xb8) 208 208 #define TXID_AUTO_CT3_OFF 1 209 209 #define TXID_AUTO_CT3_MSK (0x1 << TXID_AUTO_CT3_OFF) 210 - #define TX_HARDRST_OFF 2 211 - #define TX_HARDRST_MSK (0x1 << TX_HARDRST_OFF) 210 + #define TXID_AUTO_CTB_OFF 11 211 + #define TXID_AUTO_CTB_MSK (0x1 << TXID_AUTO_CTB_OFF) 212 + #define TX_HARDRST_OFF 2 213 + #define TX_HARDRST_MSK (0x1 << TX_HARDRST_OFF) 212 214 #define RX_IDAF_DWORD0 (PORT_BASE + 0xc4) 213 215 #define RX_IDAF_DWORD1 (PORT_BASE + 0xc8) 214 216 #define RX_IDAF_DWORD2 (PORT_BASE + 0xcc) ··· 220 218 #define RX_IDAF_DWORD6 (PORT_BASE + 0xdc) 221 219 #define RXOP_CHECK_CFG_H (PORT_BASE + 0xfc) 222 220 #define CON_CONTROL (PORT_BASE + 0x118) 221 + #define CON_CONTROL_CFG_OPEN_ACC_STP_OFF 0 222 + #define CON_CONTROL_CFG_OPEN_ACC_STP_MSK \ 223 + (0x01 << CON_CONTROL_CFG_OPEN_ACC_STP_OFF) 223 224 #define DONE_RECEIVED_TIME (PORT_BASE + 0x11c) 224 225 #define CHL_INT0 (PORT_BASE + 0x1b4) 225 226 #define CHL_INT0_HOTPLUG_TOUT_OFF 0 ··· 245 240 #define CHL_INT1_MSK (PORT_BASE + 0x1c4) 246 241 #define CHL_INT2_MSK (PORT_BASE + 0x1c8) 247 242 #define CHL_INT_COAL_EN (PORT_BASE + 0x1d0) 243 + #define DMA_TX_DFX0 (PORT_BASE + 0x200) 244 + #define DMA_TX_DFX1 (PORT_BASE + 0x204) 245 + #define DMA_TX_DFX1_IPTT_OFF 0 246 + #define DMA_TX_DFX1_IPTT_MSK (0xffff << DMA_TX_DFX1_IPTT_OFF) 247 + #define DMA_TX_FIFO_DFX0 (PORT_BASE + 
0x240) 248 + #define PORT_DFX0 (PORT_BASE + 0x258) 249 + #define LINK_DFX2 (PORT_BASE + 0X264) 250 + #define LINK_DFX2_RCVR_HOLD_STS_OFF 9 251 + #define LINK_DFX2_RCVR_HOLD_STS_MSK (0x1 << LINK_DFX2_RCVR_HOLD_STS_OFF) 252 + #define LINK_DFX2_SEND_HOLD_STS_OFF 10 253 + #define LINK_DFX2_SEND_HOLD_STS_MSK (0x1 << LINK_DFX2_SEND_HOLD_STS_OFF) 248 254 #define PHY_CTRL_RDY_MSK (PORT_BASE + 0x2b0) 249 255 #define PHYCTRL_NOT_RDY_MSK (PORT_BASE + 0x2b4) 250 256 #define PHYCTRL_DWS_RESET_MSK (PORT_BASE + 0x2b8) ··· 272 256 #define AXI_CFG (0x5100) 273 257 #define AM_CFG_MAX_TRANS (0x5010) 274 258 #define AM_CFG_SINGLE_PORT_MAX_TRANS (0x5014) 259 + 260 + #define AXI_MASTER_CFG_BASE (0x5000) 261 + #define AM_CTRL_GLOBAL (0x0) 262 + #define AM_CURR_TRANS_RETURN (0x150) 275 263 276 264 /* HW dma structures */ 277 265 /* Delivery queue header */ ··· 329 309 330 310 /* Completion header */ 331 311 /* dw0 */ 312 + #define CMPLT_HDR_ERR_PHASE_OFF 2 313 + #define CMPLT_HDR_ERR_PHASE_MSK (0xff << CMPLT_HDR_ERR_PHASE_OFF) 332 314 #define CMPLT_HDR_RSPNS_XFRD_OFF 10 333 315 #define CMPLT_HDR_RSPNS_XFRD_MSK (0x1 << CMPLT_HDR_RSPNS_XFRD_OFF) 334 316 #define CMPLT_HDR_ERX_OFF 12 ··· 407 385 408 386 enum { 409 387 TRANS_TX_FAIL_BASE = 0x0, /* dw0 */ 410 - TRANS_RX_FAIL_BASE = 0x100, /* dw1 */ 411 - DMA_TX_ERR_BASE = 0x200, /* dw2 bit 15-0 */ 412 - SIPC_RX_ERR_BASE = 0x300, /* dw2 bit 31-16*/ 413 - DMA_RX_ERR_BASE = 0x400, /* dw3 */ 388 + TRANS_RX_FAIL_BASE = 0x20, /* dw1 */ 389 + DMA_TX_ERR_BASE = 0x40, /* dw2 bit 15-0 */ 390 + SIPC_RX_ERR_BASE = 0x50, /* dw2 bit 31-16*/ 391 + DMA_RX_ERR_BASE = 0x60, /* dw3 */ 414 392 415 393 /* trans tx*/ 416 394 TRANS_TX_OPEN_FAIL_WITH_IT_NEXUS_LOSS = TRANS_TX_FAIL_BASE, /* 0x0 */ ··· 450 428 TRANS_TX_ERR_WITH_WAIT_RECV_TIMEOUT, /* 0x1f for sata/stp */ 451 429 452 430 /* trans rx */ 453 - TRANS_RX_ERR_WITH_RXFRAME_CRC_ERR = TRANS_RX_FAIL_BASE, /* 0x100 */ 454 - TRANS_RX_ERR_WITH_RXFIS_8B10B_DISP_ERR, /* 0x101 for sata/stp */ 455 - 
TRANS_RX_ERR_WITH_RXFRAME_HAVE_ERRPRM, /* 0x102 for ssp/smp */ 456 - /*IO_ERR_WITH_RXFIS_8B10B_CODE_ERR, [> 0x102 <] for sata/stp */ 457 - TRANS_RX_ERR_WITH_RXFIS_DECODE_ERROR, /* 0x103 for sata/stp */ 458 - TRANS_RX_ERR_WITH_RXFIS_CRC_ERR, /* 0x104 for sata/stp */ 459 - TRANS_RX_ERR_WITH_RXFRAME_LENGTH_OVERRUN, /* 0x105 for smp */ 460 - /*IO_ERR_WITH_RXFIS_TX SYNCP, [> 0x105 <] for sata/stp */ 461 - TRANS_RX_ERR_WITH_RXFIS_RX_SYNCP, /* 0x106 for sata/stp*/ 462 - TRANS_RX_ERR_WITH_LINK_BUF_OVERRUN, /* 0x107 */ 463 - TRANS_RX_ERR_WITH_BREAK_TIMEOUT, /* 0x108 */ 464 - TRANS_RX_ERR_WITH_BREAK_REQUEST, /* 0x109 */ 465 - TRANS_RX_ERR_WITH_BREAK_RECEVIED, /* 0x10a */ 466 - RESERVED1, /* 0x10b */ 467 - TRANS_RX_ERR_WITH_CLOSE_NORMAL, /* 0x10c */ 468 - TRANS_RX_ERR_WITH_CLOSE_PHY_DISABLE, /* 0x10d */ 469 - TRANS_RX_ERR_WITH_CLOSE_DWS_TIMEOUT, /* 0x10e */ 470 - TRANS_RX_ERR_WITH_CLOSE_COMINIT, /* 0x10f */ 471 - TRANS_RX_ERR_WITH_DATA_LEN0, /* 0x110 for ssp/smp */ 472 - TRANS_RX_ERR_WITH_BAD_HASH, /* 0x111 for ssp */ 473 - /*IO_RX_ERR_WITH_FIS_TOO_SHORT, [> 0x111 <] for sata/stp */ 474 - TRANS_RX_XRDY_WLEN_ZERO_ERR, /* 0x112 for ssp*/ 475 - /*IO_RX_ERR_WITH_FIS_TOO_LONG, [> 0x112 <] for sata/stp */ 476 - TRANS_RX_SSP_FRM_LEN_ERR, /* 0x113 for ssp */ 477 - /*IO_RX_ERR_WITH_SATA_DEVICE_LOST, [> 0x113 <] for sata */ 478 - RESERVED2, /* 0x114 */ 479 - RESERVED3, /* 0x115 */ 480 - RESERVED4, /* 0x116 */ 481 - RESERVED5, /* 0x117 */ 482 - TRANS_RX_ERR_WITH_BAD_FRM_TYPE, /* 0x118 */ 483 - TRANS_RX_SMP_FRM_LEN_ERR, /* 0x119 */ 484 - TRANS_RX_SMP_RESP_TIMEOUT_ERR, /* 0x11a */ 485 - RESERVED6, /* 0x11b */ 486 - RESERVED7, /* 0x11c */ 487 - RESERVED8, /* 0x11d */ 488 - RESERVED9, /* 0x11e */ 489 - TRANS_RX_R_ERR, /* 0x11f */ 431 + TRANS_RX_ERR_WITH_RXFRAME_CRC_ERR = TRANS_RX_FAIL_BASE, /* 0x20 */ 432 + TRANS_RX_ERR_WITH_RXFIS_8B10B_DISP_ERR, /* 0x21 for sata/stp */ 433 + TRANS_RX_ERR_WITH_RXFRAME_HAVE_ERRPRM, /* 0x22 for ssp/smp */ 434 + /*IO_ERR_WITH_RXFIS_8B10B_CODE_ERR, [> 0x22 <] 
for sata/stp */ 435 + TRANS_RX_ERR_WITH_RXFIS_DECODE_ERROR, /* 0x23 for sata/stp */ 436 + TRANS_RX_ERR_WITH_RXFIS_CRC_ERR, /* 0x24 for sata/stp */ 437 + TRANS_RX_ERR_WITH_RXFRAME_LENGTH_OVERRUN, /* 0x25 for smp */ 438 + /*IO_ERR_WITH_RXFIS_TX SYNCP, [> 0x25 <] for sata/stp */ 439 + TRANS_RX_ERR_WITH_RXFIS_RX_SYNCP, /* 0x26 for sata/stp*/ 440 + TRANS_RX_ERR_WITH_LINK_BUF_OVERRUN, /* 0x27 */ 441 + TRANS_RX_ERR_WITH_BREAK_TIMEOUT, /* 0x28 */ 442 + TRANS_RX_ERR_WITH_BREAK_REQUEST, /* 0x29 */ 443 + TRANS_RX_ERR_WITH_BREAK_RECEVIED, /* 0x2a */ 444 + RESERVED1, /* 0x2b */ 445 + TRANS_RX_ERR_WITH_CLOSE_NORMAL, /* 0x2c */ 446 + TRANS_RX_ERR_WITH_CLOSE_PHY_DISABLE, /* 0x2d */ 447 + TRANS_RX_ERR_WITH_CLOSE_DWS_TIMEOUT, /* 0x2e */ 448 + TRANS_RX_ERR_WITH_CLOSE_COMINIT, /* 0x2f */ 449 + TRANS_RX_ERR_WITH_DATA_LEN0, /* 0x30 for ssp/smp */ 450 + TRANS_RX_ERR_WITH_BAD_HASH, /* 0x31 for ssp */ 451 + /*IO_RX_ERR_WITH_FIS_TOO_SHORT, [> 0x31 <] for sata/stp */ 452 + TRANS_RX_XRDY_WLEN_ZERO_ERR, /* 0x32 for ssp*/ 453 + /*IO_RX_ERR_WITH_FIS_TOO_LONG, [> 0x32 <] for sata/stp */ 454 + TRANS_RX_SSP_FRM_LEN_ERR, /* 0x33 for ssp */ 455 + /*IO_RX_ERR_WITH_SATA_DEVICE_LOST, [> 0x33 <] for sata */ 456 + RESERVED2, /* 0x34 */ 457 + RESERVED3, /* 0x35 */ 458 + RESERVED4, /* 0x36 */ 459 + RESERVED5, /* 0x37 */ 460 + TRANS_RX_ERR_WITH_BAD_FRM_TYPE, /* 0x38 */ 461 + TRANS_RX_SMP_FRM_LEN_ERR, /* 0x39 */ 462 + TRANS_RX_SMP_RESP_TIMEOUT_ERR, /* 0x3a */ 463 + RESERVED6, /* 0x3b */ 464 + RESERVED7, /* 0x3c */ 465 + RESERVED8, /* 0x3d */ 466 + RESERVED9, /* 0x3e */ 467 + TRANS_RX_R_ERR, /* 0x3f */ 490 468 491 469 /* dma tx */ 492 - DMA_TX_DIF_CRC_ERR = DMA_TX_ERR_BASE, /* 0x200 */ 493 - DMA_TX_DIF_APP_ERR, /* 0x201 */ 494 - DMA_TX_DIF_RPP_ERR, /* 0x202 */ 495 - DMA_TX_DATA_SGL_OVERFLOW, /* 0x203 */ 496 - DMA_TX_DIF_SGL_OVERFLOW, /* 0x204 */ 497 - DMA_TX_UNEXP_XFER_ERR, /* 0x205 */ 498 - DMA_TX_UNEXP_RETRANS_ERR, /* 0x206 */ 499 - DMA_TX_XFER_LEN_OVERFLOW, /* 0x207 */ 500 - DMA_TX_XFER_OFFSET_ERR, /* 0x208 
*/ 501 - DMA_TX_RAM_ECC_ERR, /* 0x209 */ 502 - DMA_TX_DIF_LEN_ALIGN_ERR, /* 0x20a */ 470 + DMA_TX_DIF_CRC_ERR = DMA_TX_ERR_BASE, /* 0x40 */ 471 + DMA_TX_DIF_APP_ERR, /* 0x41 */ 472 + DMA_TX_DIF_RPP_ERR, /* 0x42 */ 473 + DMA_TX_DATA_SGL_OVERFLOW, /* 0x43 */ 474 + DMA_TX_DIF_SGL_OVERFLOW, /* 0x44 */ 475 + DMA_TX_UNEXP_XFER_ERR, /* 0x45 */ 476 + DMA_TX_UNEXP_RETRANS_ERR, /* 0x46 */ 477 + DMA_TX_XFER_LEN_OVERFLOW, /* 0x47 */ 478 + DMA_TX_XFER_OFFSET_ERR, /* 0x48 */ 479 + DMA_TX_RAM_ECC_ERR, /* 0x49 */ 480 + DMA_TX_DIF_LEN_ALIGN_ERR, /* 0x4a */ 481 + DMA_TX_MAX_ERR_CODE, 503 482 504 483 /* sipc rx */ 505 - SIPC_RX_FIS_STATUS_ERR_BIT_VLD = SIPC_RX_ERR_BASE, /* 0x300 */ 506 - SIPC_RX_PIO_WRSETUP_STATUS_DRQ_ERR, /* 0x301 */ 507 - SIPC_RX_FIS_STATUS_BSY_BIT_ERR, /* 0x302 */ 508 - SIPC_RX_WRSETUP_LEN_ODD_ERR, /* 0x303 */ 509 - SIPC_RX_WRSETUP_LEN_ZERO_ERR, /* 0x304 */ 510 - SIPC_RX_WRDATA_LEN_NOT_MATCH_ERR, /* 0x305 */ 511 - SIPC_RX_NCQ_WRSETUP_OFFSET_ERR, /* 0x306 */ 512 - SIPC_RX_NCQ_WRSETUP_AUTO_ACTIVE_ERR, /* 0x307 */ 513 - SIPC_RX_SATA_UNEXP_FIS_ERR, /* 0x308 */ 514 - SIPC_RX_WRSETUP_ESTATUS_ERR, /* 0x309 */ 515 - SIPC_RX_DATA_UNDERFLOW_ERR, /* 0x30a */ 484 + SIPC_RX_FIS_STATUS_ERR_BIT_VLD = SIPC_RX_ERR_BASE, /* 0x50 */ 485 + SIPC_RX_PIO_WRSETUP_STATUS_DRQ_ERR, /* 0x51 */ 486 + SIPC_RX_FIS_STATUS_BSY_BIT_ERR, /* 0x52 */ 487 + SIPC_RX_WRSETUP_LEN_ODD_ERR, /* 0x53 */ 488 + SIPC_RX_WRSETUP_LEN_ZERO_ERR, /* 0x54 */ 489 + SIPC_RX_WRDATA_LEN_NOT_MATCH_ERR, /* 0x55 */ 490 + SIPC_RX_NCQ_WRSETUP_OFFSET_ERR, /* 0x56 */ 491 + SIPC_RX_NCQ_WRSETUP_AUTO_ACTIVE_ERR, /* 0x57 */ 492 + SIPC_RX_SATA_UNEXP_FIS_ERR, /* 0x58 */ 493 + SIPC_RX_WRSETUP_ESTATUS_ERR, /* 0x59 */ 494 + SIPC_RX_DATA_UNDERFLOW_ERR, /* 0x5a */ 495 + SIPC_RX_MAX_ERR_CODE, 516 496 517 497 /* dma rx */ 518 - DMA_RX_DIF_CRC_ERR = DMA_RX_ERR_BASE, /* 0x400 */ 519 - DMA_RX_DIF_APP_ERR, /* 0x401 */ 520 - DMA_RX_DIF_RPP_ERR, /* 0x402 */ 521 - DMA_RX_DATA_SGL_OVERFLOW, /* 0x403 */ 522 - DMA_RX_DIF_SGL_OVERFLOW, /* 0x404 */ 523 
- DMA_RX_DATA_LEN_OVERFLOW, /* 0x405 */ 524 - DMA_RX_DATA_LEN_UNDERFLOW, /* 0x406 */ 525 - DMA_RX_DATA_OFFSET_ERR, /* 0x407 */ 526 - RESERVED10, /* 0x408 */ 527 - DMA_RX_SATA_FRAME_TYPE_ERR, /* 0x409 */ 528 - DMA_RX_RESP_BUF_OVERFLOW, /* 0x40a */ 529 - DMA_RX_UNEXP_RETRANS_RESP_ERR, /* 0x40b */ 530 - DMA_RX_UNEXP_NORM_RESP_ERR, /* 0x40c */ 531 - DMA_RX_UNEXP_RDFRAME_ERR, /* 0x40d */ 532 - DMA_RX_PIO_DATA_LEN_ERR, /* 0x40e */ 533 - DMA_RX_RDSETUP_STATUS_ERR, /* 0x40f */ 534 - DMA_RX_RDSETUP_STATUS_DRQ_ERR, /* 0x410 */ 535 - DMA_RX_RDSETUP_STATUS_BSY_ERR, /* 0x411 */ 536 - DMA_RX_RDSETUP_LEN_ODD_ERR, /* 0x412 */ 537 - DMA_RX_RDSETUP_LEN_ZERO_ERR, /* 0x413 */ 538 - DMA_RX_RDSETUP_LEN_OVER_ERR, /* 0x414 */ 539 - DMA_RX_RDSETUP_OFFSET_ERR, /* 0x415 */ 540 - DMA_RX_RDSETUP_ACTIVE_ERR, /* 0x416 */ 541 - DMA_RX_RDSETUP_ESTATUS_ERR, /* 0x417 */ 542 - DMA_RX_RAM_ECC_ERR, /* 0x418 */ 543 - DMA_RX_UNKNOWN_FRM_ERR, /* 0x419 */ 498 + DMA_RX_DIF_CRC_ERR = DMA_RX_ERR_BASE, /* 0x60 */ 499 + DMA_RX_DIF_APP_ERR, /* 0x61 */ 500 + DMA_RX_DIF_RPP_ERR, /* 0x62 */ 501 + DMA_RX_DATA_SGL_OVERFLOW, /* 0x63 */ 502 + DMA_RX_DIF_SGL_OVERFLOW, /* 0x64 */ 503 + DMA_RX_DATA_LEN_OVERFLOW, /* 0x65 */ 504 + DMA_RX_DATA_LEN_UNDERFLOW, /* 0x66 */ 505 + DMA_RX_DATA_OFFSET_ERR, /* 0x67 */ 506 + RESERVED10, /* 0x68 */ 507 + DMA_RX_SATA_FRAME_TYPE_ERR, /* 0x69 */ 508 + DMA_RX_RESP_BUF_OVERFLOW, /* 0x6a */ 509 + DMA_RX_UNEXP_RETRANS_RESP_ERR, /* 0x6b */ 510 + DMA_RX_UNEXP_NORM_RESP_ERR, /* 0x6c */ 511 + DMA_RX_UNEXP_RDFRAME_ERR, /* 0x6d */ 512 + DMA_RX_PIO_DATA_LEN_ERR, /* 0x6e */ 513 + DMA_RX_RDSETUP_STATUS_ERR, /* 0x6f */ 514 + DMA_RX_RDSETUP_STATUS_DRQ_ERR, /* 0x70 */ 515 + DMA_RX_RDSETUP_STATUS_BSY_ERR, /* 0x71 */ 516 + DMA_RX_RDSETUP_LEN_ODD_ERR, /* 0x72 */ 517 + DMA_RX_RDSETUP_LEN_ZERO_ERR, /* 0x73 */ 518 + DMA_RX_RDSETUP_LEN_OVER_ERR, /* 0x74 */ 519 + DMA_RX_RDSETUP_OFFSET_ERR, /* 0x75 */ 520 + DMA_RX_RDSETUP_ACTIVE_ERR, /* 0x76 */ 521 + DMA_RX_RDSETUP_ESTATUS_ERR, /* 0x77 */ 522 + 
DMA_RX_RAM_ECC_ERR, /* 0x78 */ 523 + DMA_RX_UNKNOWN_FRM_ERR, /* 0x79 */ 524 + DMA_RX_MAX_ERR_CODE, 544 525 }; 545 526 546 527 #define HISI_SAS_COMMAND_ENTRIES_V2_HW 4096 528 + #define HISI_MAX_SATA_SUPPORT_V2_HW (HISI_SAS_COMMAND_ENTRIES_V2_HW/64 - 1) 547 529 548 530 #define DIR_NO_DATA 0 549 531 #define DIR_TO_INI 1 ··· 560 534 #define SATA_PROTOCOL_FPDMA 0x8 561 535 #define SATA_PROTOCOL_ATAPI 0x10 562 536 563 - static void hisi_sas_link_timeout_disable_link(unsigned long data); 537 + #define ERR_ON_TX_PHASE(err_phase) (err_phase == 0x2 || \ 538 + err_phase == 0x4 || err_phase == 0x8 ||\ 539 + err_phase == 0x6 || err_phase == 0xa) 540 + #define ERR_ON_RX_PHASE(err_phase) (err_phase == 0x10 || \ 541 + err_phase == 0x20 || err_phase == 0x40) 542 + 543 + static void link_timeout_disable_link(unsigned long data); 564 544 565 545 static u32 hisi_sas_read32(struct hisi_hba *hisi_hba, u32 off) 566 546 { ··· 608 576 /* This function needs to be protected from pre-emption. */ 609 577 static int 610 578 slot_index_alloc_quirk_v2_hw(struct hisi_hba *hisi_hba, int *slot_idx, 611 - struct domain_device *device) 579 + struct domain_device *device) 612 580 { 613 - unsigned int index = 0; 614 - void *bitmap = hisi_hba->slot_index_tags; 615 581 int sata_dev = dev_is_sata(device); 582 + void *bitmap = hisi_hba->slot_index_tags; 583 + struct hisi_sas_device *sas_dev = device->lldd_dev; 584 + int sata_idx = sas_dev->sata_idx; 585 + int start, end; 586 + 587 + if (!sata_dev) { 588 + /* 589 + * STP link SoC bug workaround: index starts from 1. 590 + * additionally, we can only allocate odd IPTT(1~4095) 591 + * for SAS/SMP device. 592 + */ 593 + start = 1; 594 + end = hisi_hba->slot_index_count; 595 + } else { 596 + if (sata_idx >= HISI_MAX_SATA_SUPPORT_V2_HW) 597 + return -EINVAL; 598 + 599 + /* 600 + * For SATA device: allocate even IPTT in this interval 601 + * [64*(sata_idx+1), 64*(sata_idx+2)], then each SATA device 602 + * own 32 IPTTs. 
IPTT 0 shall not be used duing to STP link 603 + * SoC bug workaround. So we ignore the first 32 even IPTTs. 604 + */ 605 + start = 64 * (sata_idx + 1); 606 + end = 64 * (sata_idx + 2); 607 + } 616 608 617 609 while (1) { 618 - index = find_next_zero_bit(bitmap, hisi_hba->slot_index_count, 619 - index); 620 - if (index >= hisi_hba->slot_index_count) 610 + start = find_next_zero_bit(bitmap, 611 + hisi_hba->slot_index_count, start); 612 + if (start >= end) 621 613 return -SAS_QUEUE_FULL; 622 614 /* 623 - * SAS IPTT bit0 should be 1 624 - */ 625 - if (sata_dev || (index & 1)) 615 + * SAS IPTT bit0 should be 1, and SATA IPTT bit0 should be 0. 616 + */ 617 + if (sata_dev ^ (start & 1)) 626 618 break; 627 - index++; 619 + start++; 620 + } 621 + 622 + set_bit(start, bitmap); 623 + *slot_idx = start; 624 + return 0; 625 + } 626 + 627 + static bool sata_index_alloc_v2_hw(struct hisi_hba *hisi_hba, int *idx) 628 + { 629 + unsigned int index; 630 + struct device *dev = &hisi_hba->pdev->dev; 631 + void *bitmap = hisi_hba->sata_dev_bitmap; 632 + 633 + index = find_first_zero_bit(bitmap, HISI_MAX_SATA_SUPPORT_V2_HW); 634 + if (index >= HISI_MAX_SATA_SUPPORT_V2_HW) { 635 + dev_warn(dev, "alloc sata index failed, index=%d\n", index); 636 + return false; 628 637 } 629 638 630 639 set_bit(index, bitmap); 631 - *slot_idx = index; 632 - return 0; 640 + *idx = index; 641 + return true; 633 642 } 643 + 634 644 635 645 static struct 636 646 hisi_sas_device *alloc_dev_quirk_v2_hw(struct domain_device *device) ··· 680 606 struct hisi_hba *hisi_hba = device->port->ha->lldd_ha; 681 607 struct hisi_sas_device *sas_dev = NULL; 682 608 int i, sata_dev = dev_is_sata(device); 609 + int sata_idx = -1; 683 610 684 611 spin_lock(&hisi_hba->lock); 612 + 613 + if (sata_dev) 614 + if (!sata_index_alloc_v2_hw(hisi_hba, &sata_idx)) 615 + goto out; 616 + 685 617 for (i = 0; i < HISI_SAS_MAX_DEVICES; i++) { 686 618 /* 687 619 * SATA device id bit0 should be 0 ··· 701 621 sas_dev->dev_type = 
device->dev_type; 702 622 sas_dev->hisi_hba = hisi_hba; 703 623 sas_dev->sas_device = device; 624 + sas_dev->sata_idx = sata_idx; 625 + INIT_LIST_HEAD(&hisi_hba->devices[i].list); 704 626 break; 705 627 } 706 628 } 629 + 630 + out: 707 631 spin_unlock(&hisi_hba->lock); 708 632 709 633 return sas_dev; ··· 760 676 u64 qw0, device_id = sas_dev->device_id; 761 677 struct hisi_sas_itct *itct = &hisi_hba->itct[device_id]; 762 678 struct domain_device *parent_dev = device->parent; 763 - struct hisi_sas_port *port = device->port->lldd_port; 679 + struct asd_sas_port *sas_port = device->port; 680 + struct hisi_sas_port *port = to_hisi_sas_port(sas_port); 764 681 765 682 memset(itct, 0, sizeof(*itct)); 766 683 ··· 813 728 struct hisi_sas_itct *itct = &hisi_hba->itct[dev_id]; 814 729 u32 reg_val = hisi_sas_read32(hisi_hba, ENT_INT_SRC3); 815 730 int i; 731 + 732 + /* SoC bug workaround */ 733 + if (dev_is_sata(sas_dev->sas_device)) 734 + clear_bit(sas_dev->sata_idx, hisi_hba->sata_dev_bitmap); 816 735 817 736 /* clear the itct interrupt state */ 818 737 if (ENT_INT_SRC3_ITC_INT_MSK & reg_val) ··· 947 858 return 0; 948 859 } 949 860 861 + /* This function needs to be called after resetting SAS controller. 
*/ 862 + static void phys_reject_stp_links_v2_hw(struct hisi_hba *hisi_hba) 863 + { 864 + u32 cfg; 865 + int phy_no; 866 + 867 + hisi_hba->reject_stp_links_msk = (1 << hisi_hba->n_phy) - 1; 868 + for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) { 869 + cfg = hisi_sas_phy_read32(hisi_hba, phy_no, CON_CONTROL); 870 + if (!(cfg & CON_CONTROL_CFG_OPEN_ACC_STP_MSK)) 871 + continue; 872 + 873 + cfg &= ~CON_CONTROL_CFG_OPEN_ACC_STP_MSK; 874 + hisi_sas_phy_write32(hisi_hba, phy_no, CON_CONTROL, cfg); 875 + } 876 + } 877 + 878 + static void phys_try_accept_stp_links_v2_hw(struct hisi_hba *hisi_hba) 879 + { 880 + int phy_no; 881 + u32 dma_tx_dfx1; 882 + 883 + for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) { 884 + if (!(hisi_hba->reject_stp_links_msk & BIT(phy_no))) 885 + continue; 886 + 887 + dma_tx_dfx1 = hisi_sas_phy_read32(hisi_hba, phy_no, 888 + DMA_TX_DFX1); 889 + if (dma_tx_dfx1 & DMA_TX_DFX1_IPTT_MSK) { 890 + u32 cfg = hisi_sas_phy_read32(hisi_hba, 891 + phy_no, CON_CONTROL); 892 + 893 + cfg |= CON_CONTROL_CFG_OPEN_ACC_STP_MSK; 894 + hisi_sas_phy_write32(hisi_hba, phy_no, 895 + CON_CONTROL, cfg); 896 + clear_bit(phy_no, &hisi_hba->reject_stp_links_msk); 897 + } 898 + } 899 + } 900 + 950 901 static void init_reg_v2_hw(struct hisi_hba *hisi_hba) 951 902 { 952 903 struct device *dev = &hisi_hba->pdev->dev; ··· 1005 876 (u32)((1ULL << hisi_hba->queue_count) - 1)); 1006 877 hisi_sas_write32(hisi_hba, AXI_USER1, 0xc0000000); 1007 878 hisi_sas_write32(hisi_hba, AXI_USER2, 0x10000); 1008 - hisi_sas_write32(hisi_hba, HGC_SAS_TXFAIL_RETRY_CTRL, 0x108); 879 + hisi_sas_write32(hisi_hba, HGC_SAS_TXFAIL_RETRY_CTRL, 0x0); 1009 880 hisi_sas_write32(hisi_hba, HGC_SAS_TX_OPEN_FAIL_RETRY_CTRL, 0x7FF); 1010 881 hisi_sas_write32(hisi_hba, OPENA_WT_CONTI_TIME, 0x1); 1011 882 hisi_sas_write32(hisi_hba, I_T_NEXUS_LOSS_TIME, 0x1F4); ··· 1014 885 hisi_sas_write32(hisi_hba, CFG_AGING_TIME, 0x1); 1015 886 hisi_sas_write32(hisi_hba, HGC_ERR_STAT_EN, 0x1); 1016 887 
hisi_sas_write32(hisi_hba, HGC_GET_ITV_TIME, 0x1); 1017 - hisi_sas_write32(hisi_hba, INT_COAL_EN, 0x1); 1018 - hisi_sas_write32(hisi_hba, OQ_INT_COAL_TIME, 0x1); 1019 - hisi_sas_write32(hisi_hba, OQ_INT_COAL_CNT, 0x1); 888 + hisi_sas_write32(hisi_hba, INT_COAL_EN, 0xc); 889 + hisi_sas_write32(hisi_hba, OQ_INT_COAL_TIME, 0x60); 890 + hisi_sas_write32(hisi_hba, OQ_INT_COAL_CNT, 0x3); 1020 891 hisi_sas_write32(hisi_hba, ENT_INT_COAL_TIME, 0x1); 1021 892 hisi_sas_write32(hisi_hba, ENT_INT_COAL_CNT, 0x1); 1022 893 hisi_sas_write32(hisi_hba, OQ_INT_SRC, 0x0); ··· 1039 910 hisi_sas_phy_write32(hisi_hba, i, SL_TOUT_CFG, 0x7d7d7d7d); 1040 911 hisi_sas_phy_write32(hisi_hba, i, SL_CONTROL, 0x0); 1041 912 hisi_sas_phy_write32(hisi_hba, i, TXID_AUTO, 0x2); 1042 - hisi_sas_phy_write32(hisi_hba, i, DONE_RECEIVED_TIME, 0x10); 913 + hisi_sas_phy_write32(hisi_hba, i, DONE_RECEIVED_TIME, 0x8); 1043 914 hisi_sas_phy_write32(hisi_hba, i, CHL_INT0, 0xffffffff); 1044 915 hisi_sas_phy_write32(hisi_hba, i, CHL_INT1, 0xffffffff); 1045 916 hisi_sas_phy_write32(hisi_hba, i, CHL_INT2, 0xfff87fff); 1046 917 hisi_sas_phy_write32(hisi_hba, i, RXOP_CHECK_CFG_H, 0x1000); 1047 918 hisi_sas_phy_write32(hisi_hba, i, CHL_INT1_MSK, 0xffffffff); 1048 919 hisi_sas_phy_write32(hisi_hba, i, CHL_INT2_MSK, 0x8ffffbff); 1049 - hisi_sas_phy_write32(hisi_hba, i, SL_CFG, 0x23f801fc); 920 + hisi_sas_phy_write32(hisi_hba, i, SL_CFG, 0x13f801fc); 1050 921 hisi_sas_phy_write32(hisi_hba, i, PHY_CTRL_RDY_MSK, 0x0); 1051 922 hisi_sas_phy_write32(hisi_hba, i, PHYCTRL_NOT_RDY_MSK, 0x0); 1052 923 hisi_sas_phy_write32(hisi_hba, i, PHYCTRL_DWS_RESET_MSK, 0x0); ··· 1118 989 upper_32_bits(hisi_hba->initial_fis_dma)); 1119 990 } 1120 991 1121 - static void hisi_sas_link_timeout_enable_link(unsigned long data) 992 + static void link_timeout_enable_link(unsigned long data) 1122 993 { 1123 994 struct hisi_hba *hisi_hba = (struct hisi_hba *)data; 1124 995 int i, reg_val; 1125 996 1126 997 for (i = 0; i < hisi_hba->n_phy; i++) { 998 
+ if (hisi_hba->reject_stp_links_msk & BIT(i)) 999 + continue; 1000 + 1127 1001 reg_val = hisi_sas_phy_read32(hisi_hba, i, CON_CONTROL); 1128 1002 if (!(reg_val & BIT(0))) { 1129 1003 hisi_sas_phy_write32(hisi_hba, i, ··· 1135 1003 } 1136 1004 } 1137 1005 1138 - hisi_hba->timer.function = hisi_sas_link_timeout_disable_link; 1006 + hisi_hba->timer.function = link_timeout_disable_link; 1139 1007 mod_timer(&hisi_hba->timer, jiffies + msecs_to_jiffies(900)); 1140 1008 } 1141 1009 1142 - static void hisi_sas_link_timeout_disable_link(unsigned long data) 1010 + static void link_timeout_disable_link(unsigned long data) 1143 1011 { 1144 1012 struct hisi_hba *hisi_hba = (struct hisi_hba *)data; 1145 1013 int i, reg_val; 1146 1014 1147 1015 reg_val = hisi_sas_read32(hisi_hba, PHY_STATE); 1148 1016 for (i = 0; i < hisi_hba->n_phy && reg_val; i++) { 1017 + if (hisi_hba->reject_stp_links_msk & BIT(i)) 1018 + continue; 1019 + 1149 1020 if (reg_val & BIT(i)) { 1150 1021 hisi_sas_phy_write32(hisi_hba, i, 1151 1022 CON_CONTROL, 0x6); ··· 1156 1021 } 1157 1022 } 1158 1023 1159 - hisi_hba->timer.function = hisi_sas_link_timeout_enable_link; 1024 + hisi_hba->timer.function = link_timeout_enable_link; 1160 1025 mod_timer(&hisi_hba->timer, jiffies + msecs_to_jiffies(100)); 1161 1026 } 1162 1027 1163 1028 static void set_link_timer_quirk(struct hisi_hba *hisi_hba) 1164 1029 { 1165 1030 hisi_hba->timer.data = (unsigned long)hisi_hba; 1166 - hisi_hba->timer.function = hisi_sas_link_timeout_disable_link; 1031 + hisi_hba->timer.function = link_timeout_disable_link; 1167 1032 hisi_hba->timer.expires = jiffies + msecs_to_jiffies(1000); 1168 1033 add_timer(&hisi_hba->timer); 1169 1034 } ··· 1193 1058 hisi_sas_phy_write32(hisi_hba, phy_no, PHY_CFG, cfg); 1194 1059 } 1195 1060 1061 + static bool is_sata_phy_v2_hw(struct hisi_hba *hisi_hba, int phy_no) 1062 + { 1063 + u32 context; 1064 + 1065 + context = hisi_sas_read32(hisi_hba, PHY_CONTEXT); 1066 + if (context & (1 << phy_no)) 1067 + return 
true; 1068 + 1069 + return false; 1070 + } 1071 + 1072 + static bool tx_fifo_is_empty_v2_hw(struct hisi_hba *hisi_hba, int phy_no) 1073 + { 1074 + u32 dfx_val; 1075 + 1076 + dfx_val = hisi_sas_phy_read32(hisi_hba, phy_no, DMA_TX_DFX1); 1077 + 1078 + if (dfx_val & BIT(16)) 1079 + return false; 1080 + 1081 + return true; 1082 + } 1083 + 1084 + static bool axi_bus_is_idle_v2_hw(struct hisi_hba *hisi_hba, int phy_no) 1085 + { 1086 + int i, max_loop = 1000; 1087 + struct device *dev = &hisi_hba->pdev->dev; 1088 + u32 status, axi_status, dfx_val, dfx_tx_val; 1089 + 1090 + for (i = 0; i < max_loop; i++) { 1091 + status = hisi_sas_read32_relaxed(hisi_hba, 1092 + AXI_MASTER_CFG_BASE + AM_CURR_TRANS_RETURN); 1093 + 1094 + axi_status = hisi_sas_read32(hisi_hba, AXI_CFG); 1095 + dfx_val = hisi_sas_phy_read32(hisi_hba, phy_no, DMA_TX_DFX1); 1096 + dfx_tx_val = hisi_sas_phy_read32(hisi_hba, 1097 + phy_no, DMA_TX_FIFO_DFX0); 1098 + 1099 + if ((status == 0x3) && (axi_status == 0x0) && 1100 + (dfx_val & BIT(20)) && (dfx_tx_val & BIT(10))) 1101 + return true; 1102 + udelay(10); 1103 + } 1104 + dev_err(dev, "bus is not idle phy%d, axi150:0x%x axi100:0x%x port204:0x%x port240:0x%x\n", 1105 + phy_no, status, axi_status, 1106 + dfx_val, dfx_tx_val); 1107 + return false; 1108 + } 1109 + 1110 + static bool wait_io_done_v2_hw(struct hisi_hba *hisi_hba, int phy_no) 1111 + { 1112 + int i, max_loop = 1000; 1113 + struct device *dev = &hisi_hba->pdev->dev; 1114 + u32 status, tx_dfx0; 1115 + 1116 + for (i = 0; i < max_loop; i++) { 1117 + status = hisi_sas_phy_read32(hisi_hba, phy_no, LINK_DFX2); 1118 + status = (status & 0x3fc0) >> 6; 1119 + 1120 + if (status != 0x1) 1121 + return true; 1122 + 1123 + tx_dfx0 = hisi_sas_phy_read32(hisi_hba, phy_no, DMA_TX_DFX0); 1124 + if ((tx_dfx0 & 0x1ff) == 0x2) 1125 + return true; 1126 + udelay(10); 1127 + } 1128 + dev_err(dev, "IO not done phy%d, port264:0x%x port200:0x%x\n", 1129 + phy_no, status, tx_dfx0); 1130 + return false; 1131 + } 1132 + 1133 + 
static bool allowed_disable_phy_v2_hw(struct hisi_hba *hisi_hba, int phy_no) 1134 + { 1135 + if (tx_fifo_is_empty_v2_hw(hisi_hba, phy_no)) 1136 + return true; 1137 + 1138 + if (!axi_bus_is_idle_v2_hw(hisi_hba, phy_no)) 1139 + return false; 1140 + 1141 + if (!wait_io_done_v2_hw(hisi_hba, phy_no)) 1142 + return false; 1143 + 1144 + return true; 1145 + } 1146 + 1147 + 1196 1148 static void disable_phy_v2_hw(struct hisi_hba *hisi_hba, int phy_no) 1197 1149 { 1198 - u32 cfg = hisi_sas_phy_read32(hisi_hba, phy_no, PHY_CFG); 1150 + u32 cfg, axi_val, dfx0_val, txid_auto; 1151 + struct device *dev = &hisi_hba->pdev->dev; 1199 1152 1153 + /* Close axi bus. */ 1154 + axi_val = hisi_sas_read32(hisi_hba, AXI_MASTER_CFG_BASE + 1155 + AM_CTRL_GLOBAL); 1156 + axi_val |= 0x1; 1157 + hisi_sas_write32(hisi_hba, AXI_MASTER_CFG_BASE + 1158 + AM_CTRL_GLOBAL, axi_val); 1159 + 1160 + if (is_sata_phy_v2_hw(hisi_hba, phy_no)) { 1161 + if (allowed_disable_phy_v2_hw(hisi_hba, phy_no)) 1162 + goto do_disable; 1163 + 1164 + /* Reset host controller. */ 1165 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 1166 + return; 1167 + } 1168 + 1169 + dfx0_val = hisi_sas_phy_read32(hisi_hba, phy_no, PORT_DFX0); 1170 + dfx0_val = (dfx0_val & 0x1fc0) >> 6; 1171 + if (dfx0_val != 0x4) 1172 + goto do_disable; 1173 + 1174 + if (!tx_fifo_is_empty_v2_hw(hisi_hba, phy_no)) { 1175 + dev_warn(dev, "phy%d, wait tx fifo need send break\n", 1176 + phy_no); 1177 + txid_auto = hisi_sas_phy_read32(hisi_hba, phy_no, 1178 + TXID_AUTO); 1179 + txid_auto |= TXID_AUTO_CTB_MSK; 1180 + hisi_sas_phy_write32(hisi_hba, phy_no, TXID_AUTO, 1181 + txid_auto); 1182 + } 1183 + 1184 + do_disable: 1185 + cfg = hisi_sas_phy_read32(hisi_hba, phy_no, PHY_CFG); 1200 1186 cfg &= ~PHY_CFG_ENA_MSK; 1201 1187 hisi_sas_phy_write32(hisi_hba, phy_no, PHY_CFG, cfg); 1188 + 1189 + /* Open axi bus. 
*/ 1190 + axi_val &= ~0x1; 1191 + hisi_sas_write32(hisi_hba, AXI_MASTER_CFG_BASE + 1192 + AM_CTRL_GLOBAL, axi_val); 1202 1193 } 1203 1194 1204 1195 static void start_phy_v2_hw(struct hisi_hba *hisi_hba, int phy_no) ··· 1337 1076 static void stop_phy_v2_hw(struct hisi_hba *hisi_hba, int phy_no) 1338 1077 { 1339 1078 disable_phy_v2_hw(hisi_hba, phy_no); 1079 + } 1080 + 1081 + static void stop_phys_v2_hw(struct hisi_hba *hisi_hba) 1082 + { 1083 + int i; 1084 + 1085 + for (i = 0; i < hisi_hba->n_phy; i++) 1086 + stop_phy_v2_hw(hisi_hba, i); 1340 1087 } 1341 1088 1342 1089 static void phy_hard_reset_v2_hw(struct hisi_hba *hisi_hba, int phy_no) ··· 1706 1437 ts->buf_valid_size = sizeof(*resp); 1707 1438 } 1708 1439 1440 + #define TRANS_TX_ERR 0 1441 + #define TRANS_RX_ERR 1 1442 + #define DMA_TX_ERR 2 1443 + #define SIPC_RX_ERR 3 1444 + #define DMA_RX_ERR 4 1445 + 1446 + #define DMA_TX_ERR_OFF 0 1447 + #define DMA_TX_ERR_MSK (0xffff << DMA_TX_ERR_OFF) 1448 + #define SIPC_RX_ERR_OFF 16 1449 + #define SIPC_RX_ERR_MSK (0xffff << SIPC_RX_ERR_OFF) 1450 + 1451 + static int parse_trans_tx_err_code_v2_hw(u32 err_msk) 1452 + { 1453 + const u8 trans_tx_err_code_prio[] = { 1454 + TRANS_TX_OPEN_FAIL_WITH_IT_NEXUS_LOSS, 1455 + TRANS_TX_ERR_PHY_NOT_ENABLE, 1456 + TRANS_TX_OPEN_CNX_ERR_WRONG_DESTINATION, 1457 + TRANS_TX_OPEN_CNX_ERR_ZONE_VIOLATION, 1458 + TRANS_TX_OPEN_CNX_ERR_BY_OTHER, 1459 + RESERVED0, 1460 + TRANS_TX_OPEN_CNX_ERR_AIP_TIMEOUT, 1461 + TRANS_TX_OPEN_CNX_ERR_STP_RESOURCES_BUSY, 1462 + TRANS_TX_OPEN_CNX_ERR_PROTOCOL_NOT_SUPPORTED, 1463 + TRANS_TX_OPEN_CNX_ERR_CONNECTION_RATE_NOT_SUPPORTED, 1464 + TRANS_TX_OPEN_CNX_ERR_BAD_DESTINATION, 1465 + TRANS_TX_OPEN_CNX_ERR_BREAK_RCVD, 1466 + TRANS_TX_OPEN_CNX_ERR_LOW_PHY_POWER, 1467 + TRANS_TX_OPEN_CNX_ERR_PATHWAY_BLOCKED, 1468 + TRANS_TX_OPEN_CNX_ERR_OPEN_TIMEOUT, 1469 + TRANS_TX_OPEN_CNX_ERR_NO_DESTINATION, 1470 + TRANS_TX_OPEN_RETRY_ERR_THRESHOLD_REACHED, 1471 + TRANS_TX_ERR_WITH_CLOSE_PHYDISALE, 1472 + 
TRANS_TX_ERR_WITH_CLOSE_DWS_TIMEOUT, 1473 + TRANS_TX_ERR_WITH_CLOSE_COMINIT, 1474 + TRANS_TX_ERR_WITH_BREAK_TIMEOUT, 1475 + TRANS_TX_ERR_WITH_BREAK_REQUEST, 1476 + TRANS_TX_ERR_WITH_BREAK_RECEVIED, 1477 + TRANS_TX_ERR_WITH_CLOSE_TIMEOUT, 1478 + TRANS_TX_ERR_WITH_CLOSE_NORMAL, 1479 + TRANS_TX_ERR_WITH_NAK_RECEVIED, 1480 + TRANS_TX_ERR_WITH_ACK_NAK_TIMEOUT, 1481 + TRANS_TX_ERR_WITH_CREDIT_TIMEOUT, 1482 + TRANS_TX_ERR_WITH_IPTT_CONFLICT, 1483 + TRANS_TX_ERR_WITH_OPEN_BY_DES_OR_OTHERS, 1484 + TRANS_TX_ERR_WITH_WAIT_RECV_TIMEOUT, 1485 + }; 1486 + int index, i; 1487 + 1488 + for (i = 0; i < ARRAY_SIZE(trans_tx_err_code_prio); i++) { 1489 + index = trans_tx_err_code_prio[i] - TRANS_TX_FAIL_BASE; 1490 + if (err_msk & (1 << index)) 1491 + return trans_tx_err_code_prio[i]; 1492 + } 1493 + return -1; 1494 + } 1495 + 1496 + static int parse_trans_rx_err_code_v2_hw(u32 err_msk) 1497 + { 1498 + const u8 trans_rx_err_code_prio[] = { 1499 + TRANS_RX_ERR_WITH_RXFRAME_CRC_ERR, 1500 + TRANS_RX_ERR_WITH_RXFIS_8B10B_DISP_ERR, 1501 + TRANS_RX_ERR_WITH_RXFRAME_HAVE_ERRPRM, 1502 + TRANS_RX_ERR_WITH_RXFIS_DECODE_ERROR, 1503 + TRANS_RX_ERR_WITH_RXFIS_CRC_ERR, 1504 + TRANS_RX_ERR_WITH_RXFRAME_LENGTH_OVERRUN, 1505 + TRANS_RX_ERR_WITH_RXFIS_RX_SYNCP, 1506 + TRANS_RX_ERR_WITH_LINK_BUF_OVERRUN, 1507 + TRANS_RX_ERR_WITH_CLOSE_PHY_DISABLE, 1508 + TRANS_RX_ERR_WITH_CLOSE_DWS_TIMEOUT, 1509 + TRANS_RX_ERR_WITH_CLOSE_COMINIT, 1510 + TRANS_RX_ERR_WITH_BREAK_TIMEOUT, 1511 + TRANS_RX_ERR_WITH_BREAK_REQUEST, 1512 + TRANS_RX_ERR_WITH_BREAK_RECEVIED, 1513 + RESERVED1, 1514 + TRANS_RX_ERR_WITH_CLOSE_NORMAL, 1515 + TRANS_RX_ERR_WITH_DATA_LEN0, 1516 + TRANS_RX_ERR_WITH_BAD_HASH, 1517 + TRANS_RX_XRDY_WLEN_ZERO_ERR, 1518 + TRANS_RX_SSP_FRM_LEN_ERR, 1519 + RESERVED2, 1520 + RESERVED3, 1521 + RESERVED4, 1522 + RESERVED5, 1523 + TRANS_RX_ERR_WITH_BAD_FRM_TYPE, 1524 + TRANS_RX_SMP_FRM_LEN_ERR, 1525 + TRANS_RX_SMP_RESP_TIMEOUT_ERR, 1526 + RESERVED6, 1527 + RESERVED7, 1528 + RESERVED8, 1529 + RESERVED9, 1530 + 
TRANS_RX_R_ERR, 1531 + }; 1532 + int index, i; 1533 + 1534 + for (i = 0; i < ARRAY_SIZE(trans_rx_err_code_prio); i++) { 1535 + index = trans_rx_err_code_prio[i] - TRANS_RX_FAIL_BASE; 1536 + if (err_msk & (1 << index)) 1537 + return trans_rx_err_code_prio[i]; 1538 + } 1539 + return -1; 1540 + } 1541 + 1542 + static int parse_dma_tx_err_code_v2_hw(u32 err_msk) 1543 + { 1544 + const u8 dma_tx_err_code_prio[] = { 1545 + DMA_TX_UNEXP_XFER_ERR, 1546 + DMA_TX_UNEXP_RETRANS_ERR, 1547 + DMA_TX_XFER_LEN_OVERFLOW, 1548 + DMA_TX_XFER_OFFSET_ERR, 1549 + DMA_TX_RAM_ECC_ERR, 1550 + DMA_TX_DIF_LEN_ALIGN_ERR, 1551 + DMA_TX_DIF_CRC_ERR, 1552 + DMA_TX_DIF_APP_ERR, 1553 + DMA_TX_DIF_RPP_ERR, 1554 + DMA_TX_DATA_SGL_OVERFLOW, 1555 + DMA_TX_DIF_SGL_OVERFLOW, 1556 + }; 1557 + int index, i; 1558 + 1559 + for (i = 0; i < ARRAY_SIZE(dma_tx_err_code_prio); i++) { 1560 + index = dma_tx_err_code_prio[i] - DMA_TX_ERR_BASE; 1561 + err_msk = err_msk & DMA_TX_ERR_MSK; 1562 + if (err_msk & (1 << index)) 1563 + return dma_tx_err_code_prio[i]; 1564 + } 1565 + return -1; 1566 + } 1567 + 1568 + static int parse_sipc_rx_err_code_v2_hw(u32 err_msk) 1569 + { 1570 + const u8 sipc_rx_err_code_prio[] = { 1571 + SIPC_RX_FIS_STATUS_ERR_BIT_VLD, 1572 + SIPC_RX_PIO_WRSETUP_STATUS_DRQ_ERR, 1573 + SIPC_RX_FIS_STATUS_BSY_BIT_ERR, 1574 + SIPC_RX_WRSETUP_LEN_ODD_ERR, 1575 + SIPC_RX_WRSETUP_LEN_ZERO_ERR, 1576 + SIPC_RX_WRDATA_LEN_NOT_MATCH_ERR, 1577 + SIPC_RX_NCQ_WRSETUP_OFFSET_ERR, 1578 + SIPC_RX_NCQ_WRSETUP_AUTO_ACTIVE_ERR, 1579 + SIPC_RX_SATA_UNEXP_FIS_ERR, 1580 + SIPC_RX_WRSETUP_ESTATUS_ERR, 1581 + SIPC_RX_DATA_UNDERFLOW_ERR, 1582 + }; 1583 + int index, i; 1584 + 1585 + for (i = 0; i < ARRAY_SIZE(sipc_rx_err_code_prio); i++) { 1586 + index = sipc_rx_err_code_prio[i] - SIPC_RX_ERR_BASE; 1587 + err_msk = err_msk & SIPC_RX_ERR_MSK; 1588 + if (err_msk & (1 << (index + 0x10))) 1589 + return sipc_rx_err_code_prio[i]; 1590 + } 1591 + return -1; 1592 + } 1593 + 1594 + static int parse_dma_rx_err_code_v2_hw(u32 err_msk) 
1595 + { 1596 + const u8 dma_rx_err_code_prio[] = { 1597 + DMA_RX_UNKNOWN_FRM_ERR, 1598 + DMA_RX_DATA_LEN_OVERFLOW, 1599 + DMA_RX_DATA_LEN_UNDERFLOW, 1600 + DMA_RX_DATA_OFFSET_ERR, 1601 + RESERVED10, 1602 + DMA_RX_SATA_FRAME_TYPE_ERR, 1603 + DMA_RX_RESP_BUF_OVERFLOW, 1604 + DMA_RX_UNEXP_RETRANS_RESP_ERR, 1605 + DMA_RX_UNEXP_NORM_RESP_ERR, 1606 + DMA_RX_UNEXP_RDFRAME_ERR, 1607 + DMA_RX_PIO_DATA_LEN_ERR, 1608 + DMA_RX_RDSETUP_STATUS_ERR, 1609 + DMA_RX_RDSETUP_STATUS_DRQ_ERR, 1610 + DMA_RX_RDSETUP_STATUS_BSY_ERR, 1611 + DMA_RX_RDSETUP_LEN_ODD_ERR, 1612 + DMA_RX_RDSETUP_LEN_ZERO_ERR, 1613 + DMA_RX_RDSETUP_LEN_OVER_ERR, 1614 + DMA_RX_RDSETUP_OFFSET_ERR, 1615 + DMA_RX_RDSETUP_ACTIVE_ERR, 1616 + DMA_RX_RDSETUP_ESTATUS_ERR, 1617 + DMA_RX_RAM_ECC_ERR, 1618 + DMA_RX_DIF_CRC_ERR, 1619 + DMA_RX_DIF_APP_ERR, 1620 + DMA_RX_DIF_RPP_ERR, 1621 + DMA_RX_DATA_SGL_OVERFLOW, 1622 + DMA_RX_DIF_SGL_OVERFLOW, 1623 + }; 1624 + int index, i; 1625 + 1626 + for (i = 0; i < ARRAY_SIZE(dma_rx_err_code_prio); i++) { 1627 + index = dma_rx_err_code_prio[i] - DMA_RX_ERR_BASE; 1628 + if (err_msk & (1 << index)) 1629 + return dma_rx_err_code_prio[i]; 1630 + } 1631 + return -1; 1632 + } 1633 + 1709 1634 /* by default, task resp is complete */ 1710 1635 static void slot_err_v2_hw(struct hisi_hba *hisi_hba, 1711 1636 struct sas_task *task, 1712 - struct hisi_sas_slot *slot) 1637 + struct hisi_sas_slot *slot, 1638 + int err_phase) 1713 1639 { 1714 1640 struct task_status_struct *ts = &task->task_status; 1715 1641 struct hisi_sas_err_record_v2 *err_record = slot->status_buffer; ··· 1915 1451 u32 dma_rx_err_type = cpu_to_le32(err_record->dma_rx_err_type); 1916 1452 int error = -1; 1917 1453 1918 - if (dma_rx_err_type) { 1919 - error = ffs(dma_rx_err_type) 1920 - - 1 + DMA_RX_ERR_BASE; 1921 - } else if (sipc_rx_err_type) { 1922 - error = ffs(sipc_rx_err_type) 1923 - - 1 + SIPC_RX_ERR_BASE; 1924 - } else if (dma_tx_err_type) { 1925 - error = ffs(dma_tx_err_type) 1926 - - 1 + DMA_TX_ERR_BASE; 1927 - } else if 
(trans_rx_fail_type) { 1928 - error = ffs(trans_rx_fail_type) 1929 - - 1 + TRANS_RX_FAIL_BASE; 1930 - } else if (trans_tx_fail_type) { 1931 - error = ffs(trans_tx_fail_type) 1932 - - 1 + TRANS_TX_FAIL_BASE; 1454 + if (err_phase == 1) { 1455 + /* error in TX phase, the priority of error is: DW2 > DW0 */ 1456 + error = parse_dma_tx_err_code_v2_hw(dma_tx_err_type); 1457 + if (error == -1) 1458 + error = parse_trans_tx_err_code_v2_hw( 1459 + trans_tx_fail_type); 1460 + } else if (err_phase == 2) { 1461 + /* error in RX phase, the priority is: DW1 > DW3 > DW2 */ 1462 + error = parse_trans_rx_err_code_v2_hw( 1463 + trans_rx_fail_type); 1464 + if (error == -1) { 1465 + error = parse_dma_rx_err_code_v2_hw( 1466 + dma_rx_err_type); 1467 + if (error == -1) 1468 + error = parse_sipc_rx_err_code_v2_hw( 1469 + sipc_rx_err_type); 1470 + } 1933 1471 } 1934 1472 1935 1473 switch (task->task_proto) { ··· 1942 1476 { 1943 1477 ts->stat = SAS_OPEN_REJECT; 1944 1478 ts->open_rej_reason = SAS_OREJ_NO_DEST; 1945 - break; 1946 - } 1947 - case TRANS_TX_OPEN_CNX_ERR_PATHWAY_BLOCKED: 1948 - { 1949 - ts->stat = SAS_OPEN_REJECT; 1950 - ts->open_rej_reason = SAS_OREJ_PATH_BLOCKED; 1951 1479 break; 1952 1480 } 1953 1481 case TRANS_TX_OPEN_CNX_ERR_PROTOCOL_NOT_SUPPORTED: ··· 1962 1502 ts->open_rej_reason = SAS_OREJ_BAD_DEST; 1963 1503 break; 1964 1504 } 1965 - case TRANS_TX_OPEN_CNX_ERR_BREAK_RCVD: 1966 - { 1967 - ts->stat = SAS_OPEN_REJECT; 1968 - ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; 1969 - break; 1970 - } 1971 1505 case TRANS_TX_OPEN_CNX_ERR_WRONG_DESTINATION: 1972 1506 { 1973 1507 ts->stat = SAS_OPEN_REJECT; 1974 1508 ts->open_rej_reason = SAS_OREJ_WRONG_DEST; 1975 1509 break; 1976 1510 } 1511 + case DMA_RX_UNEXP_NORM_RESP_ERR: 1977 1512 case TRANS_TX_OPEN_CNX_ERR_ZONE_VIOLATION: 1513 + case DMA_RX_RESP_BUF_OVERFLOW: 1978 1514 { 1979 1515 ts->stat = SAS_OPEN_REJECT; 1980 1516 ts->open_rej_reason = SAS_OREJ_UNKNOWN; ··· 1982 1526 ts->stat = SAS_DEV_NO_RESPONSE; 1983 1527 break; 1984 1528 
} 1985 - case TRANS_RX_ERR_WITH_CLOSE_PHY_DISABLE: 1986 - { 1987 - ts->stat = SAS_PHY_DOWN; 1988 - break; 1989 - } 1990 - case TRANS_TX_OPEN_CNX_ERR_OPEN_TIMEOUT: 1991 - { 1992 - ts->stat = SAS_OPEN_TO; 1993 - break; 1994 - } 1995 1529 case DMA_RX_DATA_LEN_OVERFLOW: 1996 1530 { 1997 1531 ts->stat = SAS_DATA_OVERRUN; ··· 1989 1543 break; 1990 1544 } 1991 1545 case DMA_RX_DATA_LEN_UNDERFLOW: 1992 - case SIPC_RX_DATA_UNDERFLOW_ERR: 1993 1546 { 1994 - ts->residual = trans_tx_fail_type; 1547 + ts->residual = dma_rx_err_type; 1995 1548 ts->stat = SAS_DATA_UNDERRUN; 1996 - break; 1997 - } 1998 - case TRANS_TX_ERR_FRAME_TXED: 1999 - { 2000 - /* This will request a retry */ 2001 - ts->stat = SAS_QUEUE_FULL; 2002 - slot->abort = 1; 2003 1549 break; 2004 1550 } 2005 1551 case TRANS_TX_OPEN_FAIL_WITH_IT_NEXUS_LOSS: 2006 1552 case TRANS_TX_ERR_PHY_NOT_ENABLE: 2007 1553 case TRANS_TX_OPEN_CNX_ERR_BY_OTHER: 2008 1554 case TRANS_TX_OPEN_CNX_ERR_AIP_TIMEOUT: 1555 + case TRANS_TX_OPEN_CNX_ERR_BREAK_RCVD: 1556 + case TRANS_TX_OPEN_CNX_ERR_PATHWAY_BLOCKED: 1557 + case TRANS_TX_OPEN_CNX_ERR_OPEN_TIMEOUT: 2009 1558 case TRANS_TX_OPEN_RETRY_ERR_THRESHOLD_REACHED: 2010 1559 case TRANS_TX_ERR_WITH_BREAK_TIMEOUT: 2011 1560 case TRANS_TX_ERR_WITH_BREAK_REQUEST: 2012 1561 case TRANS_TX_ERR_WITH_BREAK_RECEVIED: 2013 1562 case TRANS_TX_ERR_WITH_CLOSE_TIMEOUT: 2014 1563 case TRANS_TX_ERR_WITH_CLOSE_NORMAL: 1564 + case TRANS_TX_ERR_WITH_CLOSE_PHYDISALE: 2015 1565 case TRANS_TX_ERR_WITH_CLOSE_DWS_TIMEOUT: 2016 1566 case TRANS_TX_ERR_WITH_CLOSE_COMINIT: 2017 1567 case TRANS_TX_ERR_WITH_NAK_RECEVIED: 2018 1568 case TRANS_TX_ERR_WITH_ACK_NAK_TIMEOUT: 2019 - case TRANS_TX_ERR_WITH_IPTT_CONFLICT: 2020 1569 case TRANS_TX_ERR_WITH_CREDIT_TIMEOUT: 1570 + case TRANS_TX_ERR_WITH_IPTT_CONFLICT: 2021 1571 case TRANS_RX_ERR_WITH_RXFRAME_CRC_ERR: 2022 1572 case TRANS_RX_ERR_WITH_RXFIS_8B10B_DISP_ERR: 2023 1573 case TRANS_RX_ERR_WITH_RXFRAME_HAVE_ERRPRM: 1574 + case TRANS_RX_ERR_WITH_LINK_BUF_OVERRUN: 2024 1575 
case TRANS_RX_ERR_WITH_BREAK_TIMEOUT: 2025 1576 case TRANS_RX_ERR_WITH_BREAK_REQUEST: 2026 1577 case TRANS_RX_ERR_WITH_BREAK_RECEVIED: 2027 1578 case TRANS_RX_ERR_WITH_CLOSE_NORMAL: 2028 1579 case TRANS_RX_ERR_WITH_CLOSE_DWS_TIMEOUT: 2029 1580 case TRANS_RX_ERR_WITH_CLOSE_COMINIT: 1581 + case TRANS_TX_ERR_FRAME_TXED: 1582 + case TRANS_RX_ERR_WITH_CLOSE_PHY_DISABLE: 2030 1583 case TRANS_RX_ERR_WITH_DATA_LEN0: 2031 1584 case TRANS_RX_ERR_WITH_BAD_HASH: 2032 1585 case TRANS_RX_XRDY_WLEN_ZERO_ERR: 2033 1586 case TRANS_RX_SSP_FRM_LEN_ERR: 2034 1587 case TRANS_RX_ERR_WITH_BAD_FRM_TYPE: 1588 + case DMA_TX_DATA_SGL_OVERFLOW: 2035 1589 case DMA_TX_UNEXP_XFER_ERR: 2036 1590 case DMA_TX_UNEXP_RETRANS_ERR: 2037 1591 case DMA_TX_XFER_LEN_OVERFLOW: 2038 1592 case DMA_TX_XFER_OFFSET_ERR: 1593 + case SIPC_RX_DATA_UNDERFLOW_ERR: 1594 + case DMA_RX_DATA_SGL_OVERFLOW: 2039 1595 case DMA_RX_DATA_OFFSET_ERR: 2040 - case DMA_RX_UNEXP_NORM_RESP_ERR: 2041 - case DMA_RX_UNEXP_RDFRAME_ERR: 1596 + case DMA_RX_RDSETUP_LEN_ODD_ERR: 1597 + case DMA_RX_RDSETUP_LEN_ZERO_ERR: 1598 + case DMA_RX_RDSETUP_LEN_OVER_ERR: 1599 + case DMA_RX_SATA_FRAME_TYPE_ERR: 2042 1600 case DMA_RX_UNKNOWN_FRM_ERR: 2043 1601 { 2044 - ts->stat = SAS_OPEN_REJECT; 2045 - ts->open_rej_reason = SAS_OREJ_UNKNOWN; 1602 + /* This will request a retry */ 1603 + ts->stat = SAS_QUEUE_FULL; 1604 + slot->abort = 1; 2046 1605 break; 2047 1606 } 2048 1607 default: ··· 2064 1613 case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP: 2065 1614 { 2066 1615 switch (error) { 2067 - case TRANS_TX_OPEN_CNX_ERR_LOW_PHY_POWER: 2068 - case TRANS_TX_OPEN_CNX_ERR_PATHWAY_BLOCKED: 2069 1616 case TRANS_TX_OPEN_CNX_ERR_NO_DESTINATION: 1617 + { 1618 + ts->stat = SAS_OPEN_REJECT; 1619 + ts->open_rej_reason = SAS_OREJ_NO_DEST; 1620 + break; 1621 + } 1622 + case TRANS_TX_OPEN_CNX_ERR_LOW_PHY_POWER: 2070 1623 { 2071 1624 ts->resp = SAS_TASK_UNDELIVERED; 2072 1625 ts->stat = SAS_DEV_NO_RESPONSE; 2073 1626 break; 2074 1627 } 2075 1628 case 
TRANS_TX_OPEN_CNX_ERR_PROTOCOL_NOT_SUPPORTED: 2076 - case TRANS_TX_OPEN_CNX_ERR_CONNECTION_RATE_NOT_SUPPORTED: 2077 - case TRANS_TX_OPEN_CNX_ERR_BAD_DESTINATION: 2078 - case TRANS_TX_OPEN_CNX_ERR_BREAK_RCVD: 2079 - case TRANS_TX_OPEN_CNX_ERR_WRONG_DESTINATION: 2080 - case TRANS_TX_OPEN_CNX_ERR_ZONE_VIOLATION: 2081 - case TRANS_TX_OPEN_CNX_ERR_STP_RESOURCES_BUSY: 2082 1629 { 2083 1630 ts->stat = SAS_OPEN_REJECT; 1631 + ts->open_rej_reason = SAS_OREJ_EPROTO; 2084 1632 break; 2085 1633 } 2086 - case TRANS_TX_OPEN_CNX_ERR_OPEN_TIMEOUT: 1634 + case TRANS_TX_OPEN_CNX_ERR_CONNECTION_RATE_NOT_SUPPORTED: 2087 1635 { 2088 - ts->stat = SAS_OPEN_TO; 1636 + ts->stat = SAS_OPEN_REJECT; 1637 + ts->open_rej_reason = SAS_OREJ_CONN_RATE; 1638 + break; 1639 + } 1640 + case TRANS_TX_OPEN_CNX_ERR_BAD_DESTINATION: 1641 + { 1642 + ts->stat = SAS_OPEN_REJECT; 1643 + ts->open_rej_reason = SAS_OREJ_CONN_RATE; 1644 + break; 1645 + } 1646 + case TRANS_TX_OPEN_CNX_ERR_WRONG_DESTINATION: 1647 + { 1648 + ts->stat = SAS_OPEN_REJECT; 1649 + ts->open_rej_reason = SAS_OREJ_WRONG_DEST; 1650 + break; 1651 + } 1652 + case DMA_RX_RESP_BUF_OVERFLOW: 1653 + case DMA_RX_UNEXP_NORM_RESP_ERR: 1654 + case TRANS_TX_OPEN_CNX_ERR_ZONE_VIOLATION: 1655 + { 1656 + ts->stat = SAS_OPEN_REJECT; 1657 + ts->open_rej_reason = SAS_OREJ_UNKNOWN; 2089 1658 break; 2090 1659 } 2091 1660 case DMA_RX_DATA_LEN_OVERFLOW: 2092 1661 { 2093 1662 ts->stat = SAS_DATA_OVERRUN; 1663 + ts->residual = 0; 1664 + break; 1665 + } 1666 + case DMA_RX_DATA_LEN_UNDERFLOW: 1667 + { 1668 + ts->residual = dma_rx_err_type; 1669 + ts->stat = SAS_DATA_UNDERRUN; 2094 1670 break; 2095 1671 } 2096 1672 case TRANS_TX_OPEN_FAIL_WITH_IT_NEXUS_LOSS: 2097 1673 case TRANS_TX_ERR_PHY_NOT_ENABLE: 2098 1674 case TRANS_TX_OPEN_CNX_ERR_BY_OTHER: 2099 1675 case TRANS_TX_OPEN_CNX_ERR_AIP_TIMEOUT: 1676 + case TRANS_TX_OPEN_CNX_ERR_BREAK_RCVD: 1677 + case TRANS_TX_OPEN_CNX_ERR_PATHWAY_BLOCKED: 1678 + case TRANS_TX_OPEN_CNX_ERR_OPEN_TIMEOUT: 2100 1679 case 
TRANS_TX_OPEN_RETRY_ERR_THRESHOLD_REACHED: 2101 1680 case TRANS_TX_ERR_WITH_BREAK_TIMEOUT: 2102 1681 case TRANS_TX_ERR_WITH_BREAK_REQUEST: 2103 1682 case TRANS_TX_ERR_WITH_BREAK_RECEVIED: 2104 1683 case TRANS_TX_ERR_WITH_CLOSE_TIMEOUT: 2105 1684 case TRANS_TX_ERR_WITH_CLOSE_NORMAL: 1685 + case TRANS_TX_ERR_WITH_CLOSE_PHYDISALE: 2106 1686 case TRANS_TX_ERR_WITH_CLOSE_DWS_TIMEOUT: 2107 1687 case TRANS_TX_ERR_WITH_CLOSE_COMINIT: 2108 - case TRANS_TX_ERR_WITH_NAK_RECEVIED: 2109 1688 case TRANS_TX_ERR_WITH_ACK_NAK_TIMEOUT: 2110 1689 case TRANS_TX_ERR_WITH_CREDIT_TIMEOUT: 1690 + case TRANS_TX_ERR_WITH_OPEN_BY_DES_OR_OTHERS: 2111 1691 case TRANS_TX_ERR_WITH_WAIT_RECV_TIMEOUT: 2112 - case TRANS_RX_ERR_WITH_RXFIS_8B10B_DISP_ERR: 2113 1692 case TRANS_RX_ERR_WITH_RXFRAME_HAVE_ERRPRM: 1693 + case TRANS_RX_ERR_WITH_RXFIS_8B10B_DISP_ERR: 2114 1694 case TRANS_RX_ERR_WITH_RXFIS_DECODE_ERROR: 2115 1695 case TRANS_RX_ERR_WITH_RXFIS_CRC_ERR: 2116 1696 case TRANS_RX_ERR_WITH_RXFRAME_LENGTH_OVERRUN: 2117 1697 case TRANS_RX_ERR_WITH_RXFIS_RX_SYNCP: 1698 + case TRANS_RX_ERR_WITH_LINK_BUF_OVERRUN: 1699 + case TRANS_RX_ERR_WITH_BREAK_TIMEOUT: 1700 + case TRANS_RX_ERR_WITH_BREAK_REQUEST: 1701 + case TRANS_RX_ERR_WITH_BREAK_RECEVIED: 2118 1702 case TRANS_RX_ERR_WITH_CLOSE_NORMAL: 2119 1703 case TRANS_RX_ERR_WITH_CLOSE_PHY_DISABLE: 2120 1704 case TRANS_RX_ERR_WITH_CLOSE_DWS_TIMEOUT: ··· 2157 1671 case TRANS_RX_ERR_WITH_DATA_LEN0: 2158 1672 case TRANS_RX_ERR_WITH_BAD_HASH: 2159 1673 case TRANS_RX_XRDY_WLEN_ZERO_ERR: 2160 - case TRANS_RX_SSP_FRM_LEN_ERR: 1674 + case TRANS_RX_ERR_WITH_BAD_FRM_TYPE: 1675 + case DMA_TX_DATA_SGL_OVERFLOW: 1676 + case DMA_TX_UNEXP_XFER_ERR: 1677 + case DMA_TX_UNEXP_RETRANS_ERR: 1678 + case DMA_TX_XFER_LEN_OVERFLOW: 1679 + case DMA_TX_XFER_OFFSET_ERR: 2161 1680 case SIPC_RX_FIS_STATUS_ERR_BIT_VLD: 2162 1681 case SIPC_RX_PIO_WRSETUP_STATUS_DRQ_ERR: 2163 1682 case SIPC_RX_FIS_STATUS_BSY_BIT_ERR: ··· 2170 1679 case SIPC_RX_WRSETUP_LEN_ZERO_ERR: 2171 1680 case 
SIPC_RX_WRDATA_LEN_NOT_MATCH_ERR: 2172 1681 case SIPC_RX_SATA_UNEXP_FIS_ERR: 1682 + case DMA_RX_DATA_SGL_OVERFLOW: 1683 + case DMA_RX_DATA_OFFSET_ERR: 2173 1684 case DMA_RX_SATA_FRAME_TYPE_ERR: 2174 1685 case DMA_RX_UNEXP_RDFRAME_ERR: 2175 1686 case DMA_RX_PIO_DATA_LEN_ERR: ··· 2185 1692 case DMA_RX_RDSETUP_ACTIVE_ERR: 2186 1693 case DMA_RX_RDSETUP_ESTATUS_ERR: 2187 1694 case DMA_RX_UNKNOWN_FRM_ERR: 1695 + case TRANS_RX_SSP_FRM_LEN_ERR: 1696 + case TRANS_TX_OPEN_CNX_ERR_STP_RESOURCES_BUSY: 2188 1697 { 2189 - ts->stat = SAS_OPEN_REJECT; 1698 + slot->abort = 1; 1699 + ts->stat = SAS_PHY_DOWN; 2190 1700 break; 2191 1701 } 2192 1702 default: ··· 2207 1711 } 2208 1712 2209 1713 static int 2210 - slot_complete_v2_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot, 2211 - int abort) 1714 + slot_complete_v2_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot) 2212 1715 { 2213 1716 struct sas_task *task = slot->task; 2214 1717 struct hisi_sas_device *sas_dev; ··· 2219 1724 hisi_hba->complete_hdr[slot->cmplt_queue]; 2220 1725 struct hisi_sas_complete_v2_hdr *complete_hdr = 2221 1726 &complete_queue[slot->cmplt_queue_slot]; 1727 + unsigned long flags; 1728 + int aborted; 2222 1729 2223 1730 if (unlikely(!task || !task->lldd_task || !task->dev)) 2224 1731 return -EINVAL; ··· 2229 1732 device = task->dev; 2230 1733 sas_dev = device->lldd_dev; 2231 1734 1735 + spin_lock_irqsave(&task->task_state_lock, flags); 1736 + aborted = task->task_state_flags & SAS_TASK_STATE_ABORTED; 2232 1737 task->task_state_flags &= 2233 1738 ~(SAS_TASK_STATE_PENDING | SAS_TASK_AT_INITIATOR); 2234 - task->task_state_flags |= SAS_TASK_STATE_DONE; 1739 + spin_unlock_irqrestore(&task->task_state_lock, flags); 2235 1740 2236 1741 memset(ts, 0, sizeof(*ts)); 2237 1742 ts->resp = SAS_TASK_COMPLETE; 2238 1743 2239 - if (unlikely(!sas_dev || abort)) { 2240 - if (!sas_dev) 2241 - dev_dbg(dev, "slot complete: port has not device\n"); 1744 + if (unlikely(aborted)) { 1745 + ts->stat = SAS_ABORTED_TASK; 
1746 + hisi_sas_slot_task_free(hisi_hba, task, slot); 1747 + return -1; 1748 + } 1749 + 1750 + if (unlikely(!sas_dev)) { 1751 + dev_dbg(dev, "slot complete: port has no device\n"); 2242 1752 ts->stat = SAS_PHY_DOWN; 2243 1753 goto out; 2244 1754 } ··· 2259 1755 goto out; 2260 1756 case STAT_IO_COMPLETE: 2261 1757 /* internal abort command complete */ 2262 - ts->stat = TMF_RESP_FUNC_COMPLETE; 1758 + ts->stat = TMF_RESP_FUNC_SUCC; 1759 + del_timer(&slot->internal_abort_timer); 2263 1760 goto out; 2264 1761 case STAT_IO_NO_DEVICE: 2265 1762 ts->stat = TMF_RESP_FUNC_COMPLETE; 1763 + del_timer(&slot->internal_abort_timer); 2266 1764 goto out; 2267 1765 case STAT_IO_NOT_VALID: 2268 1766 /* abort single io: the controller can't find 2269 1767 * the io to abort 2270 1768 */ 2271 1769 ts->stat = TMF_RESP_FUNC_FAILED; 1770 + del_timer(&slot->internal_abort_timer); 2272 1771 goto out; 2273 1772 default: 2274 1773 break; ··· 2279 1772 2280 1773 if ((complete_hdr->dw0 & CMPLT_HDR_ERX_MSK) && 2281 1774 (!(complete_hdr->dw0 & CMPLT_HDR_RSPNS_XFRD_MSK))) { 1775 + u32 err_phase = (complete_hdr->dw0 & CMPLT_HDR_ERR_PHASE_MSK) 1776 + >> CMPLT_HDR_ERR_PHASE_OFF; 2282 1777 2283 - slot_err_v2_hw(hisi_hba, task, slot); 2284 - if (unlikely(slot->abort)) { 2285 - queue_work(hisi_hba->wq, &slot->abort_slot); 2286 - /* immediately return and do not complete */ 1778 + /* Analyse in which phase, TX or RX, the error happened */ 1779 + if (ERR_ON_TX_PHASE(err_phase)) 1780 + slot_err_v2_hw(hisi_hba, task, slot, 1); 1781 + else if (ERR_ON_RX_PHASE(err_phase)) 1782 + slot_err_v2_hw(hisi_hba, task, slot, 2); 1783 + 1784 + if (unlikely(slot->abort)) 2287 1785 return ts->stat; 2288 - goto out; 2289 - } 2290 1786 goto out; 2291 1787 } ··· 2341 1830 } 2342 1831 2343 1832 out: 2344 - 1833 + spin_lock_irqsave(&task->task_state_lock, flags); 1834 + task->task_state_flags |= SAS_TASK_STATE_DONE; 1835 + spin_unlock_irqrestore(&task->task_state_lock, flags); 2345 1836 hisi_sas_slot_task_free(hisi_hba, task, slot); 2346 
1837 sts = ts->stat; 2347 1838 ··· 2433 1920 struct domain_device *parent_dev = device->parent; 2434 1921 struct hisi_sas_device *sas_dev = device->lldd_dev; 2435 1922 struct hisi_sas_cmd_hdr *hdr = slot->cmd_hdr; 2436 - struct hisi_sas_port *port = device->port->lldd_port; 1923 + struct asd_sas_port *sas_port = device->port; 1924 + struct hisi_sas_port *port = to_hisi_sas_port(sas_port); 2437 1925 u8 *buf_cmd; 2438 1926 int has_data = 0, rc = 0, hdr_tag = 0; 2439 1927 u32 dw1 = 0, dw2 = 0; ··· 2461 1947 dw1 &= ~CMD_HDR_DIR_MSK; 2462 1948 } 2463 1949 2464 - if (0 == task->ata_task.fis.command) 1950 + if ((task->ata_task.fis.command == ATA_CMD_DEV_RESET) && 1951 + (task->ata_task.fis.control & ATA_SRST)) 2465 1952 dw1 |= 1 << CMD_HDR_RESET_OFF; 2466 1953 2467 1954 dw1 |= (get_ata_protocol(task->ata_task.fis.command, task->data_dir)) ··· 2505 1990 return 0; 2506 1991 } 2507 1992 1993 + static void hisi_sas_internal_abort_quirk_timeout(unsigned long data) 1994 + { 1995 + struct hisi_sas_slot *slot = (struct hisi_sas_slot *)data; 1996 + struct hisi_sas_port *port = slot->port; 1997 + struct asd_sas_port *asd_sas_port; 1998 + struct asd_sas_phy *sas_phy; 1999 + 2000 + if (!port) 2001 + return; 2002 + 2003 + asd_sas_port = &port->sas_port; 2004 + 2005 + /* Kick the hardware - send break command */ 2006 + list_for_each_entry(sas_phy, &asd_sas_port->phy_list, port_phy_el) { 2007 + struct hisi_sas_phy *phy = sas_phy->lldd_phy; 2008 + struct hisi_hba *hisi_hba = phy->hisi_hba; 2009 + int phy_no = sas_phy->id; 2010 + u32 link_dfx2; 2011 + 2012 + link_dfx2 = hisi_sas_phy_read32(hisi_hba, phy_no, LINK_DFX2); 2013 + if ((link_dfx2 == LINK_DFX2_RCVR_HOLD_STS_MSK) || 2014 + (link_dfx2 & LINK_DFX2_SEND_HOLD_STS_MSK)) { 2015 + u32 txid_auto; 2016 + 2017 + txid_auto = hisi_sas_phy_read32(hisi_hba, phy_no, 2018 + TXID_AUTO); 2019 + txid_auto |= TXID_AUTO_CTB_MSK; 2020 + hisi_sas_phy_write32(hisi_hba, phy_no, TXID_AUTO, 2021 + txid_auto); 2022 + return; 2023 + } 2024 + } 2025 + } 2026 
+ 2508 2027 static int prep_abort_v2_hw(struct hisi_hba *hisi_hba, 2509 2028 struct hisi_sas_slot *slot, 2510 2029 int device_id, int abort_flag, int tag_to_abort) ··· 2547 1998 struct domain_device *dev = task->dev; 2548 1999 struct hisi_sas_cmd_hdr *hdr = slot->cmd_hdr; 2549 2000 struct hisi_sas_port *port = slot->port; 2001 + struct timer_list *timer = &slot->internal_abort_timer; 2002 + 2003 + /* setup the quirk timer */ 2004 + setup_timer(timer, hisi_sas_internal_abort_quirk_timeout, 2005 + (unsigned long)slot); 2006 + /* Set the timeout to 10ms less than internal abort timeout */ 2007 + mod_timer(timer, jiffies + msecs_to_jiffies(100)); 2550 2008 2551 2009 /* dw0 */ 2552 2010 hdr->dw0 = cpu_to_le32((5 << CMD_HDR_CMD_OFF) | /*abort*/ ··· 2574 2018 2575 2019 static int phy_up_v2_hw(int phy_no, struct hisi_hba *hisi_hba) 2576 2020 { 2577 - int i, res = 0; 2578 - u32 context, port_id, link_rate, hard_phy_linkrate; 2021 + int i, res = IRQ_HANDLED; 2022 + u32 port_id, link_rate, hard_phy_linkrate; 2579 2023 struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no]; 2580 2024 struct asd_sas_phy *sas_phy = &phy->sas_phy; 2581 2025 struct device *dev = &hisi_hba->pdev->dev; ··· 2584 2028 2585 2029 hisi_sas_phy_write32(hisi_hba, phy_no, PHYCTRL_PHY_ENA_MSK, 1); 2586 2030 2587 - /* Check for SATA dev */ 2588 - context = hisi_sas_read32(hisi_hba, PHY_CONTEXT); 2589 - if (context & (1 << phy_no)) 2031 + if (is_sata_phy_v2_hw(hisi_hba, phy_no)) 2590 2032 goto end; 2591 2033 2592 2034 if (phy_no == 8) { ··· 2660 2106 2661 2107 static int phy_down_v2_hw(int phy_no, struct hisi_hba *hisi_hba) 2662 2108 { 2663 - int res = 0; 2664 2109 u32 phy_state, sl_ctrl, txid_auto; 2665 2110 struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no]; 2666 2111 struct hisi_sas_port *port = phy->port; ··· 2684 2131 hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT0, CHL_INT0_NOT_RDY_MSK); 2685 2132 hisi_sas_phy_write32(hisi_hba, phy_no, PHYCTRL_NOT_RDY_MSK, 0); 2686 2133 2687 - return res; 2134 + return 
IRQ_HANDLED; 2688 2135 } 2689 2136 2690 2137 static irqreturn_t int_phy_updown_v2_hw(int irq_no, void *p) ··· 2692 2139 struct hisi_hba *hisi_hba = p; 2693 2140 u32 irq_msk; 2694 2141 int phy_no = 0; 2695 - irqreturn_t res = IRQ_HANDLED; 2696 2142 2697 2143 irq_msk = (hisi_sas_read32(hisi_hba, HGC_INVLD_DQE_INFO) 2698 2144 >> HGC_INVLD_DQE_INFO_FB_CH0_OFF) & 0x1ff; 2699 2145 while (irq_msk) { 2700 2146 if (irq_msk & 1) { 2701 - u32 irq_value = hisi_sas_phy_read32(hisi_hba, phy_no, 2702 - CHL_INT0); 2147 + u32 reg_value = hisi_sas_phy_read32(hisi_hba, phy_no, 2148 + CHL_INT0); 2703 2149 2704 - if (irq_value & CHL_INT0_SL_PHY_ENABLE_MSK) 2150 + switch (reg_value & (CHL_INT0_NOT_RDY_MSK | 2151 + CHL_INT0_SL_PHY_ENABLE_MSK)) { 2152 + 2153 + case CHL_INT0_SL_PHY_ENABLE_MSK: 2705 2154 /* phy up */ 2706 - if (phy_up_v2_hw(phy_no, hisi_hba)) { 2707 - res = IRQ_NONE; 2708 - goto end; 2709 - } 2155 + if (phy_up_v2_hw(phy_no, hisi_hba) == 2156 + IRQ_NONE) 2157 + return IRQ_NONE; 2158 + break; 2710 2159 2711 - if (irq_value & CHL_INT0_NOT_RDY_MSK) 2160 + case CHL_INT0_NOT_RDY_MSK: 2712 2161 /* phy down */ 2713 - if (phy_down_v2_hw(phy_no, hisi_hba)) { 2714 - res = IRQ_NONE; 2715 - goto end; 2162 + if (phy_down_v2_hw(phy_no, hisi_hba) == 2163 + IRQ_NONE) 2164 + return IRQ_NONE; 2165 + break; 2166 + 2167 + case (CHL_INT0_NOT_RDY_MSK | 2168 + CHL_INT0_SL_PHY_ENABLE_MSK): 2169 + reg_value = hisi_sas_read32(hisi_hba, 2170 + PHY_STATE); 2171 + if (reg_value & BIT(phy_no)) { 2172 + /* phy up */ 2173 + if (phy_up_v2_hw(phy_no, hisi_hba) == 2174 + IRQ_NONE) 2175 + return IRQ_NONE; 2176 + } else { 2177 + /* phy down */ 2178 + if (phy_down_v2_hw(phy_no, hisi_hba) == 2179 + IRQ_NONE) 2180 + return IRQ_NONE; 2716 2181 } 2182 + break; 2183 + 2184 + default: 2185 + break; 2186 + } 2187 + 2717 2188 } 2718 2189 irq_msk >>= 1; 2719 2190 phy_no++; 2720 2191 } 2721 2192 2722 - end: 2723 - return res; 2193 + return IRQ_HANDLED; 2724 2194 } 2725 2195 2726 2196 static void phy_bcast_v2_hw(int 
phy_no, struct hisi_hba *hisi_hba) ··· 2918 2342 2919 2343 if (irq_value & BIT(SAS_ECC_INTR_DQE_ECC_MB_OFF)) { 2920 2344 reg_val = hisi_sas_read32(hisi_hba, HGC_DQE_ECC_ADDR); 2921 - panic("%s: hgc_dqe_accbad_intr (0x%x) found: \ 2345 + dev_warn(dev, "hgc_dqe_accbad_intr (0x%x) found: \ 2922 2346 Ram address is 0x%08X\n", 2923 - dev_name(dev), irq_value, 2347 + irq_value, 2924 2348 (reg_val & HGC_DQE_ECC_MB_ADDR_MSK) >> 2925 2349 HGC_DQE_ECC_MB_ADDR_OFF); 2350 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 2926 2351 } 2927 2352 2928 2353 if (irq_value & BIT(SAS_ECC_INTR_IOST_ECC_MB_OFF)) { 2929 2354 reg_val = hisi_sas_read32(hisi_hba, HGC_IOST_ECC_ADDR); 2930 - panic("%s: hgc_iost_accbad_intr (0x%x) found: \ 2355 + dev_warn(dev, "hgc_iost_accbad_intr (0x%x) found: \ 2931 2356 Ram address is 0x%08X\n", 2932 - dev_name(dev), irq_value, 2357 + irq_value, 2933 2358 (reg_val & HGC_IOST_ECC_MB_ADDR_MSK) >> 2934 2359 HGC_IOST_ECC_MB_ADDR_OFF); 2360 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 2935 2361 } 2936 2362 2937 2363 if (irq_value & BIT(SAS_ECC_INTR_ITCT_ECC_MB_OFF)) { 2938 2364 reg_val = hisi_sas_read32(hisi_hba, HGC_ITCT_ECC_ADDR); 2939 - panic("%s: hgc_itct_accbad_intr (0x%x) found: \ 2365 + dev_warn(dev,"hgc_itct_accbad_intr (0x%x) found: \ 2940 2366 Ram address is 0x%08X\n", 2941 - dev_name(dev), irq_value, 2367 + irq_value, 2942 2368 (reg_val & HGC_ITCT_ECC_MB_ADDR_MSK) >> 2943 2369 HGC_ITCT_ECC_MB_ADDR_OFF); 2370 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 2944 2371 } 2945 2372 2946 2373 if (irq_value & BIT(SAS_ECC_INTR_IOSTLIST_ECC_MB_OFF)) { 2947 2374 reg_val = hisi_sas_read32(hisi_hba, HGC_LM_DFX_STATUS2); 2948 - panic("%s: hgc_iostl_accbad_intr (0x%x) found: \ 2375 + dev_warn(dev, "hgc_iostl_accbad_intr (0x%x) found: \ 2949 2376 memory address is 0x%08X\n", 2950 - dev_name(dev), irq_value, 2377 + irq_value, 2951 2378 (reg_val & HGC_LM_DFX_STATUS2_IOSTLIST_MSK) >> 2952 2379 HGC_LM_DFX_STATUS2_IOSTLIST_OFF); 2380 + queue_work(hisi_hba->wq, 
&hisi_hba->rst_work); 2953 2381 } 2954 2382 2955 2383 if (irq_value & BIT(SAS_ECC_INTR_ITCTLIST_ECC_MB_OFF)) { 2956 2384 reg_val = hisi_sas_read32(hisi_hba, HGC_LM_DFX_STATUS2); 2957 - panic("%s: hgc_itctl_accbad_intr (0x%x) found: \ 2385 + dev_warn(dev, "hgc_itctl_accbad_intr (0x%x) found: \ 2958 2386 memory address is 0x%08X\n", 2959 - dev_name(dev), irq_value, 2387 + irq_value, 2960 2388 (reg_val & HGC_LM_DFX_STATUS2_ITCTLIST_MSK) >> 2961 2389 HGC_LM_DFX_STATUS2_ITCTLIST_OFF); 2390 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 2962 2391 } 2963 2392 2964 2393 if (irq_value & BIT(SAS_ECC_INTR_CQE_ECC_MB_OFF)) { 2965 2394 reg_val = hisi_sas_read32(hisi_hba, HGC_CQE_ECC_ADDR); 2966 - panic("%s: hgc_cqe_accbad_intr (0x%x) found: \ 2395 + dev_warn(dev, "hgc_cqe_accbad_intr (0x%x) found: \ 2967 2396 Ram address is 0x%08X\n", 2968 - dev_name(dev), irq_value, 2397 + irq_value, 2969 2398 (reg_val & HGC_CQE_ECC_MB_ADDR_MSK) >> 2970 2399 HGC_CQE_ECC_MB_ADDR_OFF); 2400 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 2971 2401 } 2972 2402 2973 2403 if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_MB_OFF)) { 2974 2404 reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14); 2975 - panic("%s: rxm_mem0_accbad_intr (0x%x) found: \ 2405 + dev_warn(dev, "rxm_mem0_accbad_intr (0x%x) found: \ 2976 2406 memory address is 0x%08X\n", 2977 - dev_name(dev), irq_value, 2407 + irq_value, 2978 2408 (reg_val & HGC_RXM_DFX_STATUS14_MEM0_MSK) >> 2979 2409 HGC_RXM_DFX_STATUS14_MEM0_OFF); 2410 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 2980 2411 } 2981 2412 2982 2413 if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_MB_OFF)) { 2983 2414 reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14); 2984 - panic("%s: rxm_mem1_accbad_intr (0x%x) found: \ 2415 + dev_warn(dev, "rxm_mem1_accbad_intr (0x%x) found: \ 2985 2416 memory address is 0x%08X\n", 2986 - dev_name(dev), irq_value, 2417 + irq_value, 2987 2418 (reg_val & HGC_RXM_DFX_STATUS14_MEM1_MSK) >> 2988 2419 HGC_RXM_DFX_STATUS14_MEM1_OFF); 
2420 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 2989 2421 } 2990 2422 2991 2423 if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_MB_OFF)) { 2992 2424 reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14); 2993 - panic("%s: rxm_mem2_accbad_intr (0x%x) found: \ 2425 + dev_warn(dev, "rxm_mem2_accbad_intr (0x%x) found: \ 2994 2426 memory address is 0x%08X\n", 2995 - dev_name(dev), irq_value, 2427 + irq_value, 2996 2428 (reg_val & HGC_RXM_DFX_STATUS14_MEM2_MSK) >> 2997 2429 HGC_RXM_DFX_STATUS14_MEM2_OFF); 2430 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 2998 2431 } 2999 2432 3000 2433 if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_MB_OFF)) { 3001 2434 reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS15); 3002 - panic("%s: rxm_mem3_accbad_intr (0x%x) found: \ 2435 + dev_warn(dev, "rxm_mem3_accbad_intr (0x%x) found: \ 3003 2436 memory address is 0x%08X\n", 3004 - dev_name(dev), irq_value, 2437 + irq_value, 3005 2438 (reg_val & HGC_RXM_DFX_STATUS15_MEM3_MSK) >> 3006 2439 HGC_RXM_DFX_STATUS15_MEM3_OFF); 2440 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 3007 2441 } 3008 2442 2443 + return; 3009 2444 } 3010 2445 3011 2446 static irqreturn_t fatal_ecc_int_v2_hw(int irq_no, void *p) ··· 3074 2487 if (irq_value & BIT(ENT_INT_SRC3_WP_DEPTH_OFF)) { 3075 2488 hisi_sas_write32(hisi_hba, ENT_INT_SRC3, 3076 2489 1 << ENT_INT_SRC3_WP_DEPTH_OFF); 3077 - panic("%s: write pointer and depth error (0x%x) \ 2490 + dev_warn(dev, "write pointer and depth error (0x%x) \ 3078 2491 found!\n", 3079 - dev_name(dev), irq_value); 2492 + irq_value); 2493 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 3080 2494 } 3081 2495 3082 2496 if (irq_value & BIT(ENT_INT_SRC3_IPTT_SLOT_NOMATCH_OFF)) { 3083 2497 hisi_sas_write32(hisi_hba, ENT_INT_SRC3, 3084 2498 1 << 3085 2499 ENT_INT_SRC3_IPTT_SLOT_NOMATCH_OFF); 3086 - panic("%s: iptt no match slot error (0x%x) found!\n", 3087 - dev_name(dev), irq_value); 2500 + dev_warn(dev, "iptt no match slot error (0x%x) found!\n", 2501 + irq_value); 
2502 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 3088 2503 } 3089 2504 3090 - if (irq_value & BIT(ENT_INT_SRC3_RP_DEPTH_OFF)) 3091 - panic("%s: read pointer and depth error (0x%x) \ 2505 + if (irq_value & BIT(ENT_INT_SRC3_RP_DEPTH_OFF)) { 2506 + dev_warn(dev, "read pointer and depth error (0x%x) \ 3092 2507 found!\n", 3093 - dev_name(dev), irq_value); 2508 + irq_value); 2509 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 2510 + } 3094 2511 3095 2512 if (irq_value & BIT(ENT_INT_SRC3_AXI_OFF)) { 3096 2513 int i; ··· 3105 2514 HGC_AXI_FIFO_ERR_INFO); 3106 2515 3107 2516 for (i = 0; i < AXI_ERR_NR; i++) { 3108 - if (err_value & BIT(i)) 3109 - panic("%s: %s (0x%x) found!\n", 3110 - dev_name(dev), 2517 + if (err_value & BIT(i)) { 2518 + dev_warn(dev, "%s (0x%x) found!\n", 3111 2519 axi_err_info[i], irq_value); 2520 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 2521 + } 3112 2522 } 3113 2523 } 3114 2524 ··· 3122 2530 HGC_AXI_FIFO_ERR_INFO); 3123 2531 3124 2532 for (i = 0; i < FIFO_ERR_NR; i++) { 3125 - if (err_value & BIT(AXI_ERR_NR + i)) 3126 - panic("%s: %s (0x%x) found!\n", 3127 - dev_name(dev), 2533 + if (err_value & BIT(AXI_ERR_NR + i)) { 2534 + dev_warn(dev, "%s (0x%x) found!\n", 3128 2535 fifo_err_info[i], irq_value); 2536 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 2537 + } 3129 2538 } 3130 2539 3131 2540 } ··· 3134 2541 if (irq_value & BIT(ENT_INT_SRC3_LM_OFF)) { 3135 2542 hisi_sas_write32(hisi_hba, ENT_INT_SRC3, 3136 2543 1 << ENT_INT_SRC3_LM_OFF); 3137 - panic("%s: LM add/fetch list error (0x%x) found!\n", 3138 - dev_name(dev), irq_value); 2544 + dev_warn(dev, "LM add/fetch list error (0x%x) found!\n", 2545 + irq_value); 2546 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 3139 2547 } 3140 2548 3141 2549 if (irq_value & BIT(ENT_INT_SRC3_ABT_OFF)) { 3142 2550 hisi_sas_write32(hisi_hba, ENT_INT_SRC3, 3143 2551 1 << ENT_INT_SRC3_ABT_OFF); 3144 - panic("%s: SAS_HGC_ABT fetch LM list error (0x%x) found!\n", 3145 - dev_name(dev), irq_value); 2552 + 
dev_warn(dev, "SAS_HGC_ABT fetch LM list error (0x%x) found!\n", 2553 + irq_value); 2554 + queue_work(hisi_hba->wq, &hisi_hba->rst_work); 3146 2555 } 3147 2556 } 3148 2557 ··· 3162 2567 struct hisi_sas_complete_v2_hdr *complete_queue; 3163 2568 u32 rd_point = cq->rd_point, wr_point, dev_id; 3164 2569 int queue = cq->id; 2570 + 2571 + if (unlikely(hisi_hba->reject_stp_links_msk)) 2572 + phys_try_accept_stp_links_v2_hw(hisi_hba); 3165 2573 3166 2574 complete_queue = hisi_hba->complete_hdr[queue]; 3167 2575 ··· 3198 2600 slot = &hisi_hba->slot_info[iptt]; 3199 2601 slot->cmplt_queue_slot = rd_point; 3200 2602 slot->cmplt_queue = queue; 3201 - slot_complete_v2_hw(hisi_hba, slot, 0); 2603 + slot_complete_v2_hw(hisi_hba, slot); 3202 2604 3203 2605 act_tmp &= ~(1 << ncq_tag_count); 3204 2606 ncq_tag_count = ffs(act_tmp); ··· 3208 2610 slot = &hisi_hba->slot_info[iptt]; 3209 2611 slot->cmplt_queue_slot = rd_point; 3210 2612 slot->cmplt_queue = queue; 3211 - slot_complete_v2_hw(hisi_hba, slot, 0); 2613 + slot_complete_v2_hw(hisi_hba, slot); 3212 2614 } 3213 2615 3214 2616 if (++rd_point >= HISI_SAS_QUEUE_SLOTS) ··· 3440 2842 { 3441 2843 int rc; 3442 2844 2845 + memset(hisi_hba->sata_dev_bitmap, 0, sizeof(hisi_hba->sata_dev_bitmap)); 2846 + 3443 2847 rc = hw_init_v2_hw(hisi_hba); 3444 2848 if (rc) 3445 2849 return rc; ··· 3450 2850 if (rc) 3451 2851 return rc; 3452 2852 3453 - phys_init_v2_hw(hisi_hba); 2853 + return 0; 2854 + } 2855 + 2856 + static void interrupt_disable_v2_hw(struct hisi_hba *hisi_hba) 2857 + { 2858 + struct platform_device *pdev = hisi_hba->pdev; 2859 + int i; 2860 + 2861 + for (i = 0; i < hisi_hba->queue_count; i++) 2862 + hisi_sas_write32(hisi_hba, OQ0_INT_SRC_MSK + 0x4 * i, 0x1); 2863 + 2864 + hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK1, 0xffffffff); 2865 + hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK2, 0xffffffff); 2866 + hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK3, 0xffffffff); 2867 + hisi_sas_write32(hisi_hba, SAS_ECC_INTR_MSK, 0xffffffff); 2868 + 
2869 + for (i = 0; i < hisi_hba->n_phy; i++) { 2870 + hisi_sas_phy_write32(hisi_hba, i, CHL_INT1_MSK, 0xffffffff); 2871 + hisi_sas_phy_write32(hisi_hba, i, CHL_INT2_MSK, 0xffffffff); 2872 + } 2873 + 2874 + for (i = 0; i < 128; i++) 2875 + synchronize_irq(platform_get_irq(pdev, i)); 2876 + } 2877 + 2878 + static int soft_reset_v2_hw(struct hisi_hba *hisi_hba) 2879 + { 2880 + struct device *dev = &hisi_hba->pdev->dev; 2881 + u32 old_state, state; 2882 + int rc, cnt; 2883 + int phy_no; 2884 + 2885 + old_state = hisi_sas_read32(hisi_hba, PHY_STATE); 2886 + 2887 + interrupt_disable_v2_hw(hisi_hba); 2888 + hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE, 0x0); 2889 + 2890 + stop_phys_v2_hw(hisi_hba); 2891 + 2892 + mdelay(10); 2893 + 2894 + hisi_sas_write32(hisi_hba, AXI_MASTER_CFG_BASE + AM_CTRL_GLOBAL, 0x1); 2895 + 2896 + /* wait until bus idle */ 2897 + cnt = 0; 2898 + while (1) { 2899 + u32 status = hisi_sas_read32_relaxed(hisi_hba, 2900 + AXI_MASTER_CFG_BASE + AM_CURR_TRANS_RETURN); 2901 + 2902 + if (status == 0x3) 2903 + break; 2904 + 2905 + udelay(10); 2906 + if (cnt++ > 10) { 2907 + dev_info(dev, "wait axi bus state to idle timeout!\n"); 2908 + return -1; 2909 + } 2910 + } 2911 + 2912 + hisi_sas_init_mem(hisi_hba); 2913 + 2914 + rc = hw_init_v2_hw(hisi_hba); 2915 + if (rc) 2916 + return rc; 2917 + 2918 + phys_reject_stp_links_v2_hw(hisi_hba); 2919 + 2920 + /* Re-enable the PHYs */ 2921 + for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) { 2922 + struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no]; 2923 + struct asd_sas_phy *sas_phy = &phy->sas_phy; 2924 + 2925 + if (sas_phy->enabled) 2926 + start_phy_v2_hw(hisi_hba, phy_no); 2927 + } 2928 + 2929 + /* Wait for the PHYs to come up and read the PHY state */ 2930 + msleep(1000); 2931 + 2932 + state = hisi_sas_read32(hisi_hba, PHY_STATE); 2933 + 2934 + hisi_sas_rescan_topology(hisi_hba, old_state, state); 3454 2935 3455 2936 return 0; 3456 2937 } ··· 3551 2870 .get_free_slot = get_free_slot_v2_hw, 3552 2871 
.start_delivery = start_delivery_v2_hw, 3553 2872 .slot_complete = slot_complete_v2_hw, 2873 + .phys_init = phys_init_v2_hw, 3554 2874 .phy_enable = enable_phy_v2_hw, 3555 2875 .phy_disable = disable_phy_v2_hw, 3556 2876 .phy_hard_reset = phy_hard_reset_v2_hw, ··· 3559 2877 .phy_get_max_linkrate = phy_get_max_linkrate_v2_hw, 3560 2878 .max_command_entries = HISI_SAS_COMMAND_ENTRIES_V2_HW, 3561 2879 .complete_hdr_size = sizeof(struct hisi_sas_complete_v2_hdr), 2880 + .soft_reset = soft_reset_v2_hw, 3562 2881 }; 3563 2882 3564 2883 static int hisi_sas_v2_probe(struct platform_device *pdev)
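The reworked int_phy_updown_v2_hw() above switches on the combination of the "PHY not ready" and "PHY enable" status bits, and when both fire in the same interrupt it consults the current PHY state register to decide which event actually happened last. A minimal userspace sketch of that dispatch logic (the mask values, enum, and helper name are invented for illustration, not the driver's real register layout):

```c
#include <assert.h>
#include <stdint.h>

/* Invented stand-ins for the CHL_INT0 status bits */
#define CHL_INT0_NOT_RDY_MSK        (1u << 0)
#define CHL_INT0_SL_PHY_ENABLE_MSK  (1u << 1)

enum phy_event { PHY_NONE, PHY_UP, PHY_DOWN };

/* Classify one phy's interrupt status; when both the up and down
 * bits are set, the per-phy bit in the state register decides. */
static enum phy_event classify_phy_int(uint32_t chl_int0,
				       uint32_t phy_state, int phy_no)
{
	switch (chl_int0 & (CHL_INT0_NOT_RDY_MSK |
			    CHL_INT0_SL_PHY_ENABLE_MSK)) {
	case CHL_INT0_SL_PHY_ENABLE_MSK:
		return PHY_UP;
	case CHL_INT0_NOT_RDY_MSK:
		return PHY_DOWN;
	case (CHL_INT0_NOT_RDY_MSK | CHL_INT0_SL_PHY_ENABLE_MSK):
		/* both events seen: current link state wins */
		return (phy_state & (1u << phy_no)) ? PHY_UP : PHY_DOWN;
	default:
		return PHY_NONE;
	}
}
```

The same structure as the driver's switch, but side-effect free so the three-way decision is easy to see and test.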
+5 -1
drivers/scsi/hpsa.c
···
  * HPSA_DRIVER_VERSION must be 3 byte values (0-255) separated by '.'
  * with an optional trailing '-' followed by a byte value (0-255).
  */
-#define HPSA_DRIVER_VERSION "3.4.16-0"
+#define HPSA_DRIVER_VERSION "3.4.18-0"
 #define DRIVER_NAME "HP HPSA Driver (v " HPSA_DRIVER_VERSION ")"
 #define HPSA "hpsa"
 
···
 	{PCI_VENDOR_ID_HP,     PCI_DEVICE_ID_HP_CISSF,     0x103C, 0x3354},
 	{PCI_VENDOR_ID_HP,     PCI_DEVICE_ID_HP_CISSF,     0x103C, 0x3355},
 	{PCI_VENDOR_ID_HP,     PCI_DEVICE_ID_HP_CISSF,     0x103C, 0x3356},
+	{PCI_VENDOR_ID_HP,     PCI_DEVICE_ID_HP_CISSH,     0x103c, 0x1920},
 	{PCI_VENDOR_ID_HP,     PCI_DEVICE_ID_HP_CISSH,     0x103C, 0x1921},
 	{PCI_VENDOR_ID_HP,     PCI_DEVICE_ID_HP_CISSH,     0x103C, 0x1922},
 	{PCI_VENDOR_ID_HP,     PCI_DEVICE_ID_HP_CISSH,     0x103C, 0x1923},
 	{PCI_VENDOR_ID_HP,     PCI_DEVICE_ID_HP_CISSH,     0x103C, 0x1924},
+	{PCI_VENDOR_ID_HP,     PCI_DEVICE_ID_HP_CISSH,     0x103c, 0x1925},
 	{PCI_VENDOR_ID_HP,     PCI_DEVICE_ID_HP_CISSH,     0x103C, 0x1926},
 	{PCI_VENDOR_ID_HP,     PCI_DEVICE_ID_HP_CISSH,     0x103C, 0x1928},
 	{PCI_VENDOR_ID_HP,     PCI_DEVICE_ID_HP_CISSH,     0x103C, 0x1929},
···
 	{0x3354103C, "Smart Array P420i", &SA5_access},
 	{0x3355103C, "Smart Array P220i", &SA5_access},
 	{0x3356103C, "Smart Array P721m", &SA5_access},
+	{0x1920103C, "Smart Array P430i", &SA5_access},
 	{0x1921103C, "Smart Array P830i", &SA5_access},
 	{0x1922103C, "Smart Array P430", &SA5_access},
 	{0x1923103C, "Smart Array P431", &SA5_access},
 	{0x1924103C, "Smart Array P830", &SA5_access},
+	{0x1925103C, "Smart Array P831", &SA5_access},
 	{0x1926103C, "Smart Array P731m", &SA5_access},
 	{0x1928103C, "Smart Array P230i", &SA5_access},
 	{0x1929103C, "Smart Array P530", &SA5_access},
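The hpsa hunk adds each new controller in two places: the PCI ID table and the board-name table keyed by a 32-bit board id (subsystem device id in the high half, subsystem vendor id in the low half). A minimal sketch of how such a table is typically searched, using a two-entry subset of the data above (the lookup helper itself is invented for illustration, not hpsa's real code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct board_type {
	uint32_t board_id;        /* subsystem device id << 16 | vendor id */
	const char *product_name;
};

/* Subset of the products[] entries added by this patch */
static const struct board_type products[] = {
	{0x1920103C, "Smart Array P430i"},
	{0x1925103C, "Smart Array P831"},
};

/* Linear scan of the table; returns NULL for an unknown board */
static const char *board_name(uint32_t board_id)
{
	size_t i;

	for (i = 0; i < sizeof(products) / sizeof(products[0]); i++)
		if (products[i].board_id == board_id)
			return products[i].product_name;
	return NULL;
}
```

Keeping the two tables in sync is the whole point of the patch: a PCI ID entry without a matching board-id entry would probe but report an unknown board.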
-6
drivers/scsi/ibmvscsi/ibmvfc.c
···
 	spin_unlock_irqrestore(vhost->host->host_lock, flags);
 
 	tgt = mempool_alloc(vhost->tgt_pool, GFP_NOIO);
-	if (!tgt) {
-		dev_err(vhost->dev, "Target allocation failure for scsi id %08llx\n",
-			scsi_id);
-		return -ENOMEM;
-	}
-
 	memset(tgt, 0, sizeof(*tgt));
 	tgt->scsi_id = scsi_id;
 	tgt->new_scsi_id = scsi_id;
+206 -69
drivers/scsi/ipr.c
··· 820 820 } 821 821 822 822 /** 823 + * __ipr_sata_eh_done - done function for aborted SATA commands 824 + * @ipr_cmd: ipr command struct 825 + * 826 + * This function is invoked for ops generated to SATA 827 + * devices which are being aborted. 828 + * 829 + * Return value: 830 + * none 831 + **/ 832 + static void __ipr_sata_eh_done(struct ipr_cmnd *ipr_cmd) 833 + { 834 + struct ata_queued_cmd *qc = ipr_cmd->qc; 835 + struct ipr_sata_port *sata_port = qc->ap->private_data; 836 + 837 + qc->err_mask |= AC_ERR_OTHER; 838 + sata_port->ioasa.status |= ATA_BUSY; 839 + ata_qc_complete(qc); 840 + if (ipr_cmd->eh_comp) 841 + complete(ipr_cmd->eh_comp); 842 + list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q); 843 + } 844 + 845 + /** 823 846 * ipr_sata_eh_done - done function for aborted SATA commands 824 847 * @ipr_cmd: ipr command struct 825 848 * ··· 854 831 **/ 855 832 static void ipr_sata_eh_done(struct ipr_cmnd *ipr_cmd) 856 833 { 857 - struct ata_queued_cmd *qc = ipr_cmd->qc; 858 - struct ipr_sata_port *sata_port = qc->ap->private_data; 834 + struct ipr_hrr_queue *hrrq = ipr_cmd->hrrq; 835 + unsigned long hrrq_flags; 859 836 860 - qc->err_mask |= AC_ERR_OTHER; 861 - sata_port->ioasa.status |= ATA_BUSY; 837 + spin_lock_irqsave(&hrrq->_lock, hrrq_flags); 838 + __ipr_sata_eh_done(ipr_cmd); 839 + spin_unlock_irqrestore(&hrrq->_lock, hrrq_flags); 840 + } 841 + 842 + /** 843 + * __ipr_scsi_eh_done - mid-layer done function for aborted ops 844 + * @ipr_cmd: ipr command struct 845 + * 846 + * This function is invoked by the interrupt handler for 847 + * ops generated by the SCSI mid-layer which are being aborted. 
848 + * 849 + * Return value: 850 + * none 851 + **/ 852 + static void __ipr_scsi_eh_done(struct ipr_cmnd *ipr_cmd) 853 + { 854 + struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd; 855 + 856 + scsi_cmd->result |= (DID_ERROR << 16); 857 + 858 + scsi_dma_unmap(ipr_cmd->scsi_cmd); 859 + scsi_cmd->scsi_done(scsi_cmd); 860 + if (ipr_cmd->eh_comp) 861 + complete(ipr_cmd->eh_comp); 862 862 list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q); 863 - ata_qc_complete(qc); 864 863 } 865 864 866 865 /** ··· 897 852 **/ 898 853 static void ipr_scsi_eh_done(struct ipr_cmnd *ipr_cmd) 899 854 { 900 - struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd; 855 + unsigned long hrrq_flags; 856 + struct ipr_hrr_queue *hrrq = ipr_cmd->hrrq; 901 857 902 - scsi_cmd->result |= (DID_ERROR << 16); 903 - 904 - scsi_dma_unmap(ipr_cmd->scsi_cmd); 905 - scsi_cmd->scsi_done(scsi_cmd); 906 - if (ipr_cmd->eh_comp) 907 - complete(ipr_cmd->eh_comp); 908 - list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q); 858 + spin_lock_irqsave(&hrrq->_lock, hrrq_flags); 859 + __ipr_scsi_eh_done(ipr_cmd); 860 + spin_unlock_irqrestore(&hrrq->_lock, hrrq_flags); 909 861 } 910 862 911 863 /** ··· 932 890 cpu_to_be32(IPR_DRIVER_ILID); 933 891 934 892 if (ipr_cmd->scsi_cmd) 935 - ipr_cmd->done = ipr_scsi_eh_done; 893 + ipr_cmd->done = __ipr_scsi_eh_done; 936 894 else if (ipr_cmd->qc) 937 - ipr_cmd->done = ipr_sata_eh_done; 895 + ipr_cmd->done = __ipr_sata_eh_done; 938 896 939 897 ipr_trc_hook(ipr_cmd, IPR_TRACE_FINISH, 940 898 IPR_IOASC_IOA_WAS_RESET); ··· 5048 5006 } 5049 5007 5050 5008 /** 5009 + * ipr_cmnd_is_free - Check if a command is free or not 5010 + * @ipr_cmd ipr command struct 5011 + * 5012 + * Returns: 5013 + * true / false 5014 + **/ 5015 + static bool ipr_cmnd_is_free(struct ipr_cmnd *ipr_cmd) 5016 + { 5017 + struct ipr_cmnd *loop_cmd; 5018 + 5019 + list_for_each_entry(loop_cmd, &ipr_cmd->hrrq->hrrq_free_q, queue) { 5020 + if (loop_cmd == ipr_cmd) 5021 + return true; 5022 + } 5023 + 5024 + return 
false; 5025 + } 5026 + 5027 + /** 5028 + * ipr_match_res - Match function for specified resource entry 5029 + * @ipr_cmd: ipr command struct 5030 + * @resource: resource entry to match 5031 + * 5032 + * Returns: 5033 + * 1 if command matches sdev / 0 if command does not match sdev 5034 + **/ 5035 + static int ipr_match_res(struct ipr_cmnd *ipr_cmd, void *resource) 5036 + { 5037 + struct ipr_resource_entry *res = resource; 5038 + 5039 + if (res && ipr_cmd->ioarcb.res_handle == res->res_handle) 5040 + return 1; 5041 + return 0; 5042 + } 5043 + 5044 + /** 5051 5045 * ipr_wait_for_ops - Wait for matching commands to complete 5052 5046 * @ipr_cmd: ipr command struct 5053 5047 * @device: device to match (sdev) ··· 5096 5018 int (*match)(struct ipr_cmnd *, void *)) 5097 5019 { 5098 5020 struct ipr_cmnd *ipr_cmd; 5099 - int wait; 5021 + int wait, i; 5100 5022 unsigned long flags; 5101 5023 struct ipr_hrr_queue *hrrq; 5102 5024 signed long timeout = IPR_ABORT_TASK_TIMEOUT; ··· 5108 5030 5109 5031 for_each_hrrq(hrrq, ioa_cfg) { 5110 5032 spin_lock_irqsave(hrrq->lock, flags); 5111 - list_for_each_entry(ipr_cmd, &hrrq->hrrq_pending_q, queue) { 5112 - if (match(ipr_cmd, device)) { 5113 - ipr_cmd->eh_comp = &comp; 5114 - wait++; 5033 + for (i = hrrq->min_cmd_id; i <= hrrq->max_cmd_id; i++) { 5034 + ipr_cmd = ioa_cfg->ipr_cmnd_list[i]; 5035 + if (!ipr_cmnd_is_free(ipr_cmd)) { 5036 + if (match(ipr_cmd, device)) { 5037 + ipr_cmd->eh_comp = &comp; 5038 + wait++; 5039 + } 5115 5040 } 5116 5041 } 5117 5042 spin_unlock_irqrestore(hrrq->lock, flags); ··· 5128 5047 5129 5048 for_each_hrrq(hrrq, ioa_cfg) { 5130 5049 spin_lock_irqsave(hrrq->lock, flags); 5131 - list_for_each_entry(ipr_cmd, &hrrq->hrrq_pending_q, queue) { 5132 - if (match(ipr_cmd, device)) { 5133 - ipr_cmd->eh_comp = NULL; 5134 - wait++; 5050 + for (i = hrrq->min_cmd_id; i <= hrrq->max_cmd_id; i++) { 5051 + ipr_cmd = ioa_cfg->ipr_cmnd_list[i]; 5052 + if (!ipr_cmnd_is_free(ipr_cmd)) { 5053 + if (match(ipr_cmd, device)) { 
5054 + ipr_cmd->eh_comp = NULL; 5055 + wait++; 5056 + } 5135 5057 } 5136 5058 } 5137 5059 spin_unlock_irqrestore(hrrq->lock, flags); ··· 5263 5179 struct ipr_ioa_cfg *ioa_cfg = sata_port->ioa_cfg; 5264 5180 struct ipr_resource_entry *res; 5265 5181 unsigned long lock_flags = 0; 5266 - int rc = -ENXIO; 5182 + int rc = -ENXIO, ret; 5267 5183 5268 5184 ENTER; 5269 5185 spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); ··· 5277 5193 if (res) { 5278 5194 rc = ipr_device_reset(ioa_cfg, res); 5279 5195 *classes = res->ata_class; 5280 - } 5196 + spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 5281 5197 5282 - spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 5198 + ret = ipr_wait_for_ops(ioa_cfg, res, ipr_match_res); 5199 + if (ret != SUCCESS) { 5200 + spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); 5201 + ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_ABBREV); 5202 + spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 5203 + 5204 + wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload); 5205 + } 5206 + } else 5207 + spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 5208 + 5283 5209 LEAVE; 5284 5210 return rc; 5285 5211 } ··· 5311 5217 struct ipr_ioa_cfg *ioa_cfg; 5312 5218 struct ipr_resource_entry *res; 5313 5219 struct ata_port *ap; 5314 - int rc = 0; 5220 + int rc = 0, i; 5315 5221 struct ipr_hrr_queue *hrrq; 5316 5222 5317 5223 ENTER; 5318 5224 ioa_cfg = (struct ipr_ioa_cfg *) scsi_cmd->device->host->hostdata; 5319 5225 res = scsi_cmd->device->hostdata; 5320 - 5321 - if (!res) 5322 - return FAILED; 5323 5226 5324 5227 /* 5325 5228 * If we are currently going through reset/reload, return failed. 
This will force the ··· 5330 5239 5331 5240 for_each_hrrq(hrrq, ioa_cfg) { 5332 5241 spin_lock(&hrrq->_lock); 5333 - list_for_each_entry(ipr_cmd, &hrrq->hrrq_pending_q, queue) { 5242 + for (i = hrrq->min_cmd_id; i <= hrrq->max_cmd_id; i++) { 5243 + ipr_cmd = ioa_cfg->ipr_cmnd_list[i]; 5244 + 5334 5245 if (ipr_cmd->ioarcb.res_handle == res->res_handle) { 5335 - if (ipr_cmd->scsi_cmd) 5336 - ipr_cmd->done = ipr_scsi_eh_done; 5337 - if (ipr_cmd->qc) 5338 - ipr_cmd->done = ipr_sata_eh_done; 5339 - if (ipr_cmd->qc && 5340 - !(ipr_cmd->qc->flags & ATA_QCFLAG_FAILED)) { 5246 + if (!ipr_cmd->qc) 5247 + continue; 5248 + if (ipr_cmnd_is_free(ipr_cmd)) 5249 + continue; 5250 + 5251 + ipr_cmd->done = ipr_sata_eh_done; 5252 + if (!(ipr_cmd->qc->flags & ATA_QCFLAG_FAILED)) { 5341 5253 ipr_cmd->qc->err_mask |= AC_ERR_TIMEOUT; 5342 5254 ipr_cmd->qc->flags |= ATA_QCFLAG_FAILED; 5343 5255 } ··· 5356 5262 spin_unlock_irq(scsi_cmd->device->host->host_lock); 5357 5263 ata_std_error_handler(ap); 5358 5264 spin_lock_irq(scsi_cmd->device->host->host_lock); 5359 - 5360 - for_each_hrrq(hrrq, ioa_cfg) { 5361 - spin_lock(&hrrq->_lock); 5362 - list_for_each_entry(ipr_cmd, 5363 - &hrrq->hrrq_pending_q, queue) { 5364 - if (ipr_cmd->ioarcb.res_handle == 5365 - res->res_handle) { 5366 - rc = -EIO; 5367 - break; 5368 - } 5369 - } 5370 - spin_unlock(&hrrq->_lock); 5371 - } 5372 5265 } else 5373 5266 rc = ipr_device_reset(ioa_cfg, res); 5374 5267 res->resetting_device = 0; ··· 5369 5288 { 5370 5289 int rc; 5371 5290 struct ipr_ioa_cfg *ioa_cfg; 5291 + struct ipr_resource_entry *res; 5372 5292 5373 5293 ioa_cfg = (struct ipr_ioa_cfg *) cmd->device->host->hostdata; 5294 + res = cmd->device->hostdata; 5295 + 5296 + if (!res) 5297 + return FAILED; 5374 5298 5375 5299 spin_lock_irq(cmd->device->host->host_lock); 5376 5300 rc = __ipr_eh_dev_reset(cmd); 5377 5301 spin_unlock_irq(cmd->device->host->host_lock); 5378 5302 5379 - if (rc == SUCCESS) 5380 - rc = ipr_wait_for_ops(ioa_cfg, cmd->device, 
ipr_match_lun); 5303 + if (rc == SUCCESS) { 5304 + if (ipr_is_gata(res) && res->sata_port) 5305 + rc = ipr_wait_for_ops(ioa_cfg, res, ipr_match_res); 5306 + else 5307 + rc = ipr_wait_for_ops(ioa_cfg, cmd->device, ipr_match_lun); 5308 + } 5381 5309 5382 5310 return rc; 5383 5311 } ··· 5483 5393 struct ipr_resource_entry *res; 5484 5394 struct ipr_cmd_pkt *cmd_pkt; 5485 5395 u32 ioasc, int_reg; 5486 - int op_found = 0; 5396 + int i, op_found = 0; 5487 5397 struct ipr_hrr_queue *hrrq; 5488 5398 5489 5399 ENTER; ··· 5512 5422 5513 5423 for_each_hrrq(hrrq, ioa_cfg) { 5514 5424 spin_lock(&hrrq->_lock); 5515 - list_for_each_entry(ipr_cmd, &hrrq->hrrq_pending_q, queue) { 5516 - if (ipr_cmd->scsi_cmd == scsi_cmd) { 5517 - ipr_cmd->done = ipr_scsi_eh_done; 5518 - op_found = 1; 5519 - break; 5425 + for (i = hrrq->min_cmd_id; i <= hrrq->max_cmd_id; i++) { 5426 + if (ioa_cfg->ipr_cmnd_list[i]->scsi_cmd == scsi_cmd) { 5427 + if (!ipr_cmnd_is_free(ioa_cfg->ipr_cmnd_list[i])) { 5428 + op_found = 1; 5429 + break; 5430 + } 5520 5431 } 5521 5432 } 5522 5433 spin_unlock(&hrrq->_lock); ··· 6008 5917 } 6009 5918 6010 5919 /** 6011 - * ipr_erp_done - Process completion of ERP for a device 5920 + * __ipr_erp_done - Process completion of ERP for a device 6012 5921 * @ipr_cmd: ipr command struct 6013 5922 * 6014 5923 * This function copies the sense buffer into the scsi_cmd ··· 6017 5926 * Return value: 6018 5927 * nothing 6019 5928 **/ 6020 - static void ipr_erp_done(struct ipr_cmnd *ipr_cmd) 5929 + static void __ipr_erp_done(struct ipr_cmnd *ipr_cmd) 6021 5930 { 6022 5931 struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd; 6023 5932 struct ipr_resource_entry *res = scsi_cmd->device->hostdata; ··· 6038 5947 res->in_erp = 0; 6039 5948 } 6040 5949 scsi_dma_unmap(ipr_cmd->scsi_cmd); 6041 - list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q); 6042 5950 scsi_cmd->scsi_done(scsi_cmd); 5951 + if (ipr_cmd->eh_comp) 5952 + complete(ipr_cmd->eh_comp); 5953 + list_add_tail(&ipr_cmd->queue, 
&ipr_cmd->hrrq->hrrq_free_q); 5954 + } 5955 + 5956 + /** 5957 + * ipr_erp_done - Process completion of ERP for a device 5958 + * @ipr_cmd: ipr command struct 5959 + * 5960 + * This function copies the sense buffer into the scsi_cmd 5961 + * struct and pushes the scsi_done function. 5962 + * 5963 + * Return value: 5964 + * nothing 5965 + **/ 5966 + static void ipr_erp_done(struct ipr_cmnd *ipr_cmd) 5967 + { 5968 + struct ipr_hrr_queue *hrrq = ipr_cmd->hrrq; 5969 + unsigned long hrrq_flags; 5970 + 5971 + spin_lock_irqsave(&hrrq->_lock, hrrq_flags); 5972 + __ipr_erp_done(ipr_cmd); 5973 + spin_unlock_irqrestore(&hrrq->_lock, hrrq_flags); 6043 5974 } 6044 5975 6045 5976 /** ··· 6096 5983 } 6097 5984 6098 5985 /** 6099 - * ipr_erp_request_sense - Send request sense to a device 5986 + * __ipr_erp_request_sense - Send request sense to a device 6100 5987 * @ipr_cmd: ipr command struct 6101 5988 * 6102 5989 * This function sends a request sense to a device as a result ··· 6105 5992 * Return value: 6106 5993 * nothing 6107 5994 **/ 6108 - static void ipr_erp_request_sense(struct ipr_cmnd *ipr_cmd) 5995 + static void __ipr_erp_request_sense(struct ipr_cmnd *ipr_cmd) 6109 5996 { 6110 5997 struct ipr_cmd_pkt *cmd_pkt = &ipr_cmd->ioarcb.cmd_pkt; 6111 5998 u32 ioasc = be32_to_cpu(ipr_cmd->s.ioasa.hdr.ioasc); 6112 5999 6113 6000 if (IPR_IOASC_SENSE_KEY(ioasc) > 0) { 6114 - ipr_erp_done(ipr_cmd); 6001 + __ipr_erp_done(ipr_cmd); 6115 6002 return; 6116 6003 } 6117 6004 ··· 6129 6016 6130 6017 ipr_do_req(ipr_cmd, ipr_erp_done, ipr_timeout, 6131 6018 IPR_REQUEST_SENSE_TIMEOUT * 2); 6019 + } 6020 + 6021 + /** 6022 + * ipr_erp_request_sense - Send request sense to a device 6023 + * @ipr_cmd: ipr command struct 6024 + * 6025 + * This function sends a request sense to a device as a result 6026 + * of a check condition. 
6027 + * 6028 + * Return value: 6029 + * nothing 6030 + **/ 6031 + static void ipr_erp_request_sense(struct ipr_cmnd *ipr_cmd) 6032 + { 6033 + struct ipr_hrr_queue *hrrq = ipr_cmd->hrrq; 6034 + unsigned long hrrq_flags; 6035 + 6036 + spin_lock_irqsave(&hrrq->_lock, hrrq_flags); 6037 + __ipr_erp_request_sense(ipr_cmd); 6038 + spin_unlock_irqrestore(&hrrq->_lock, hrrq_flags); 6132 6039 } 6133 6040 6134 6041 /** ··· 6174 6041 ipr_reinit_ipr_cmnd_for_erp(ipr_cmd); 6175 6042 6176 6043 if (!scsi_cmd->device->simple_tags) { 6177 - ipr_erp_request_sense(ipr_cmd); 6044 + __ipr_erp_request_sense(ipr_cmd); 6178 6045 return; 6179 6046 } 6180 6047 ··· 6394 6261 u32 masked_ioasc = ioasc & IPR_IOASC_IOASC_MASK; 6395 6262 6396 6263 if (!res) { 6397 - ipr_scsi_eh_done(ipr_cmd); 6264 + __ipr_scsi_eh_done(ipr_cmd); 6398 6265 return; 6399 6266 } 6400 6267 ··· 6476 6343 } 6477 6344 6478 6345 scsi_dma_unmap(ipr_cmd->scsi_cmd); 6479 - list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q); 6480 6346 scsi_cmd->scsi_done(scsi_cmd); 6347 + if (ipr_cmd->eh_comp) 6348 + complete(ipr_cmd->eh_comp); 6349 + list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q); 6481 6350 } 6482 6351 6483 6352 /** ··· 6505 6370 scsi_dma_unmap(scsi_cmd); 6506 6371 6507 6372 spin_lock_irqsave(ipr_cmd->hrrq->lock, lock_flags); 6508 - list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q); 6509 6373 scsi_cmd->scsi_done(scsi_cmd); 6374 + if (ipr_cmd->eh_comp) 6375 + complete(ipr_cmd->eh_comp); 6376 + list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q); 6510 6377 spin_unlock_irqrestore(ipr_cmd->hrrq->lock, lock_flags); 6511 6378 } else { 6512 6379 spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
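The ipr.c hunks above repeatedly split a completion handler into a `__`-prefixed inner function that assumes the queue lock is held and a plain-named wrapper that takes the lock around it (e.g. `__ipr_erp_done()` vs `ipr_erp_done()`). A minimal userspace sketch of that locked/unlocked convention, with a C11 `atomic_flag` spin loop standing in for `spin_lock_irqsave()` and a counter standing in for the real completion work (all names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch of the __ipr_erp_done()/ipr_erp_done() split: the
 * double-underscore variant assumes the per-queue lock is already
 * held, and the plain-named wrapper is the lock-taking entry point.
 * Callers already inside the hrrq lock call the inner function; all
 * others call the wrapper. */
static atomic_flag hrrq_lock = ATOMIC_FLAG_INIT;
static int completions;

static void __erp_done(void)
{
    completions++;              /* caller must hold hrrq_lock */
}

static void erp_done(void)
{
    while (atomic_flag_test_and_set(&hrrq_lock))
        ;                       /* take the lock, like spin_lock_irqsave() */
    __erp_done();
    atomic_flag_clear(&hrrq_lock);
}
```

The pattern lets a caller that already holds the lock (here, the new `ipr_erp_start()` path calling `__ipr_erp_request_sense()`) reuse the same body without self-deadlocking.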
+2 -2
drivers/scsi/ipr.h
··· 39 39 /* 40 40 * Literals 41 41 */ 42 - #define IPR_DRIVER_VERSION "2.6.3" 43 - #define IPR_DRIVER_DATE "(October 17, 2015)" 42 + #define IPR_DRIVER_VERSION "2.6.4" 43 + #define IPR_DRIVER_DATE "(March 14, 2017)" 44 44 45 45 /* 46 46 * IPR_MAX_CMD_PER_LUN: This defines the maximum number of outstanding
-1
drivers/scsi/isci/init.c
··· 272 272 return; 273 273 274 274 shost = to_shost(isci_host); 275 - scsi_remove_host(shost); 276 275 sas_unregister_ha(&isci_host->sas_ha); 277 276 278 277 sas_remove_host(shost);
+3 -3
drivers/scsi/libfc/fc_fcp.c
··· 154 154 memset(fsp, 0, sizeof(*fsp)); 155 155 fsp->lp = lport; 156 156 fsp->xfer_ddp = FC_XID_UNKNOWN; 157 - atomic_set(&fsp->ref_cnt, 1); 157 + refcount_set(&fsp->ref_cnt, 1); 158 158 init_timer(&fsp->timer); 159 159 fsp->timer.data = (unsigned long)fsp; 160 160 INIT_LIST_HEAD(&fsp->list); ··· 175 175 */ 176 176 static void fc_fcp_pkt_release(struct fc_fcp_pkt *fsp) 177 177 { 178 - if (atomic_dec_and_test(&fsp->ref_cnt)) { 178 + if (refcount_dec_and_test(&fsp->ref_cnt)) { 179 179 struct fc_fcp_internal *si = fc_get_scsi_internal(fsp->lp); 180 180 181 181 mempool_free(fsp, si->scsi_pkt_pool); ··· 188 188 */ 189 189 static void fc_fcp_pkt_hold(struct fc_fcp_pkt *fsp) 190 190 { 191 - atomic_inc(&fsp->ref_cnt); 191 + refcount_inc(&fsp->ref_cnt); 192 192 } 193 193 194 194 /**
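The fc_fcp.c hunk above is part of the tree-wide conversion of reference counts from plain `atomic_t` to `refcount_t`. A userspace model of why `refcount_t` is preferred is sketched below: the point is saturation, so an over- or under-flowed count gets pinned and leaked instead of wrapping back to zero and freeing an object still in use. This models the semantics only; the kernel type is a lockless atomic, and these names are stand-ins:

```c
#include <assert.h>
#include <limits.h>

/* Userspace sketch of refcount_t saturation semantics. */
typedef struct { unsigned int v; } sketch_refcount_t;

#define SKETCH_REFCOUNT_SATURATED UINT_MAX

static void sketch_refcount_set(sketch_refcount_t *r, unsigned int n)
{
    r->v = n;
}

static void sketch_refcount_inc(sketch_refcount_t *r)
{
    if (r->v == 0 || r->v == SKETCH_REFCOUNT_SATURATED)
        r->v = SKETCH_REFCOUNT_SATURATED; /* refuse to resurrect or wrap */
    else
        r->v++;
}

/* returns 1 when the count hits zero and the object may be freed */
static int sketch_refcount_dec_and_test(sketch_refcount_t *r)
{
    if (r->v == SKETCH_REFCOUNT_SATURATED)
        return 0;               /* leak rather than risk use-after-free */
    return --r->v == 0;
}
```

With a plain `atomic_t`, an increment on a freed packet would silently resurrect it; the saturating variant turns that bug into a bounded leak.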
+9 -11
drivers/scsi/libfc/fc_lport.c
··· 887 887 static void fc_lport_recv_els_req(struct fc_lport *lport, 888 888 struct fc_frame *fp) 889 889 { 890 - void (*recv)(struct fc_lport *, struct fc_frame *); 891 - 892 890 mutex_lock(&lport->lp_mutex); 893 891 894 892 /* ··· 900 902 /* 901 903 * Check opcode. 902 904 */ 903 - recv = fc_rport_recv_req; 904 905 switch (fc_frame_payload_op(fp)) { 905 906 case ELS_FLOGI: 906 907 if (!lport->point_to_multipoint) 907 - recv = fc_lport_recv_flogi_req; 908 + fc_lport_recv_flogi_req(lport, fp); 908 909 break; 909 910 case ELS_LOGO: 910 911 if (fc_frame_sid(fp) == FC_FID_FLOGI) 911 - recv = fc_lport_recv_logo_req; 912 + fc_lport_recv_logo_req(lport, fp); 912 913 break; 913 914 case ELS_RSCN: 914 - recv = lport->tt.disc_recv_req; 915 + lport->tt.disc_recv_req(lport, fp); 915 916 break; 916 917 case ELS_ECHO: 917 - recv = fc_lport_recv_echo_req; 918 + fc_lport_recv_echo_req(lport, fp); 918 919 break; 919 920 case ELS_RLIR: 920 - recv = fc_lport_recv_rlir_req; 921 + fc_lport_recv_rlir_req(lport, fp); 921 922 break; 922 923 case ELS_RNID: 923 - recv = fc_lport_recv_rnid_req; 924 + fc_lport_recv_rnid_req(lport, fp); 925 + break; 926 + default: 927 + fc_rport_recv_req(lport, fp); 924 928 break; 925 929 } 926 - 927 - recv(lport, fp); 928 930 } 929 931 mutex_unlock(&lport->lp_mutex); 930 932 }
+4 -4
drivers/scsi/libiscsi.c
··· 517 517 518 518 void __iscsi_get_task(struct iscsi_task *task) 519 519 { 520 - atomic_inc(&task->refcount); 520 + refcount_inc(&task->refcount); 521 521 } 522 522 EXPORT_SYMBOL_GPL(__iscsi_get_task); 523 523 524 524 void __iscsi_put_task(struct iscsi_task *task) 525 525 { 526 - if (atomic_dec_and_test(&task->refcount)) 526 + if (refcount_dec_and_test(&task->refcount)) 527 527 iscsi_free_task(task); 528 528 } 529 529 EXPORT_SYMBOL_GPL(__iscsi_put_task); ··· 749 749 * released by the lld when it has transmitted the task for 750 750 * pdus we do not expect a response for. 751 751 */ 752 - atomic_set(&task->refcount, 1); 752 + refcount_set(&task->refcount, 1); 753 753 task->conn = conn; 754 754 task->sc = NULL; 755 755 INIT_LIST_HEAD(&task->running); ··· 1638 1638 sc->SCp.phase = conn->session->age; 1639 1639 sc->SCp.ptr = (char *) task; 1640 1640 1641 - atomic_set(&task->refcount, 1); 1641 + refcount_set(&task->refcount, 1); 1642 1642 task->state = ISCSI_TASK_PENDING; 1643 1643 task->conn = conn; 1644 1644 task->sc = sc;
-7
drivers/scsi/libsas/sas_init.c
··· 566 566 } 567 567 EXPORT_SYMBOL_GPL(sas_domain_attach_transport); 568 568 569 - 570 - void sas_domain_release_transport(struct scsi_transport_template *stt) 571 - { 572 - sas_release_transport(stt); 573 - } 574 - EXPORT_SYMBOL_GPL(sas_domain_release_transport); 575 - 576 569 /* ---------- SAS Class register/unregister ---------- */ 577 570 578 571 static int __init sas_class_init(void)
-5
drivers/scsi/libsas/sas_scsi_host.c
··· 491 491 struct Scsi_Host *host = cmd->device->host; 492 492 struct sas_internal *i = to_sas_internal(host->transportt); 493 493 494 - if (current != host->ehandler) 495 - return FAILED; 496 - 497 494 if (!i->dft->lldd_abort_task) 498 495 return FAILED; 499 496 ··· 612 615 613 616 SAS_DPRINTK("trying to find task 0x%p\n", task); 614 617 res = sas_scsi_find_task(task); 615 - 616 - cmd->eh_eflags = 0; 617 618 618 619 switch (res) { 619 620 case TASK_IS_DONE:
+2 -2
drivers/scsi/lpfc/lpfc_attr.c
··· 181 181 wwn_to_u64(vport->fc_nodename.u.wwn), 182 182 phba->targetport->port_id); 183 183 184 - len += snprintf(buf + len, PAGE_SIZE, 184 + len += snprintf(buf + len, PAGE_SIZE - len, 185 185 "\nNVME Target: Statistics\n"); 186 186 tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private; 187 187 len += snprintf(buf+len, PAGE_SIZE-len, ··· 326 326 } 327 327 spin_unlock_irq(shost->host_lock); 328 328 329 - len += snprintf(buf + len, PAGE_SIZE, "\nNVME Statistics\n"); 329 + len += snprintf(buf + len, PAGE_SIZE - len, "\nNVME Statistics\n"); 330 330 len += snprintf(buf+len, PAGE_SIZE-len, 331 331 "LS: Xmt %016llx Cmpl %016llx\n", 332 332 phba->fc4NvmeLsRequests,
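The lpfc_attr.c fix restores the usual sysfs `show()` accumulation idiom: every `snprintf()` after the first must be given the space that remains, `size - len`, not the full buffer size, or a nearly full buffer lets a later call scribble past the end. A self-contained sketch of the correct pattern (`report_stats()` and its fields are illustrative stand-ins for the lpfc statistics output, with an extra truncation clamp the kernel code does not need at `PAGE_SIZE`):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Accumulate formatted output into buf, always passing the remaining
 * room (size - len) to each snprintf(), exactly as the fix does. */
static size_t report_stats(char *buf, size_t size, int xmt, int cmpl)
{
    size_t len = 0;

    len += (size_t)snprintf(buf + len, size - len, "NVME Statistics\n");
    if (len >= size)
        len = size - 1;         /* snprintf reported truncation */
    len += (size_t)snprintf(buf + len, size - len, "LS: Xmt %d Cmpl %d\n",
                            xmt, cmpl);
    if (len >= size)
        len = size - 1;
    return len;
}
```

Passing the full `size` on the second call, as the pre-fix code did, would allow up to `size` bytes to be written starting at `buf + len`, i.e. `len` bytes beyond the buffer.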
+24 -11
drivers/scsi/mac_esp.c
··· 55 55 int error; 56 56 }; 57 57 static struct esp *esp_chips[2]; 58 + static DEFINE_SPINLOCK(esp_chips_lock); 58 59 59 60 #define MAC_ESP_GET_PRIV(esp) ((struct mac_esp_priv *) \ 60 61 platform_get_drvdata((struct platform_device *) \ ··· 563 562 } 564 563 565 564 host->irq = IRQ_MAC_SCSI; 566 - esp_chips[dev->id] = esp; 567 - mb(); 568 - if (esp_chips[!dev->id] == NULL) { 569 - err = request_irq(host->irq, mac_scsi_esp_intr, 0, "ESP", NULL); 570 - if (err < 0) { 571 - esp_chips[dev->id] = NULL; 572 - goto fail_free_priv; 573 - } 565 + 566 + /* The request_irq() call is intended to succeed for the first device 567 + * and fail for the second device. 568 + */ 569 + err = request_irq(host->irq, mac_scsi_esp_intr, 0, "ESP", NULL); 570 + spin_lock(&esp_chips_lock); 571 + if (err < 0 && esp_chips[!dev->id] == NULL) { 572 + spin_unlock(&esp_chips_lock); 573 + goto fail_free_priv; 574 574 } 575 + esp_chips[dev->id] = esp; 576 + spin_unlock(&esp_chips_lock); 575 577 576 578 err = scsi_esp_register(esp, &dev->dev); 577 579 if (err) ··· 583 579 return 0; 584 580 585 581 fail_free_irq: 586 - if (esp_chips[!dev->id] == NULL) 587 - free_irq(host->irq, esp); 582 + spin_lock(&esp_chips_lock); 583 + esp_chips[dev->id] = NULL; 584 + if (esp_chips[!dev->id] == NULL) { 585 + spin_unlock(&esp_chips_lock); 586 + free_irq(host->irq, NULL); 587 + } else 588 + spin_unlock(&esp_chips_lock); 588 589 fail_free_priv: 589 590 kfree(mep); 590 591 fail_free_command_block: ··· 608 599 609 600 scsi_esp_unregister(esp); 610 601 602 + spin_lock(&esp_chips_lock); 611 603 esp_chips[dev->id] = NULL; 612 - if (!(esp_chips[0] || esp_chips[1])) 604 + if (esp_chips[!dev->id] == NULL) { 605 + spin_unlock(&esp_chips_lock); 613 606 free_irq(irq, NULL); 607 + } else 608 + spin_unlock(&esp_chips_lock); 614 609 615 610 kfree(mep); 616 611
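The mac_esp change replaces a bare `mb()` with a spinlock around the `esp_chips[]` slots. A minimal userspace sketch of why: a lock, not a barrier, is what makes "look at the other slot, then claim the shared IRQ" atomic; a barrier only orders one CPU's accesses and cannot stop two probes from both seeing the other slot empty. The `chip_slots`/`probe_chip` names are illustrative, and a C11 `atomic_flag` spin loop stands in for `spin_lock(&esp_chips_lock)`:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

static void *chip_slots[2];
static atomic_flag slots_lock = ATOMIC_FLAG_INIT;
static int irq_claims;          /* how many probes claimed the shared IRQ */

static void probe_chip(int id, void *chip)
{
    while (atomic_flag_test_and_set(&slots_lock))
        ;                       /* spin, like spin_lock(&esp_chips_lock) */
    if (chip_slots[!id] == NULL)
        irq_claims++;           /* first probe in claims the IRQ */
    chip_slots[id] = chip;
    atomic_flag_clear(&slots_lock);
}
```

Under the lock, exactly one of the two probes observes the other slot empty, which is the invariant the driver needs so that `free_irq()` runs exactly once.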
-2
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 1025 1025 0 : ioc->reply_free_host_index + 1; 1026 1026 ioc->reply_free[ioc->reply_free_host_index] = 1027 1027 cpu_to_le32(reply); 1028 - wmb(); 1029 1028 writel(ioc->reply_free_host_index, 1030 1029 &ioc->chip->ReplyFreeHostIndex); 1031 1030 } ··· 1073 1074 return IRQ_NONE; 1074 1075 } 1075 1076 1076 - wmb(); 1077 1077 if (ioc->is_warpdrive) { 1078 1078 writel(reply_q->reply_post_host_index, 1079 1079 ioc->reply_post_host_index[msix_index]);
-1
drivers/scsi/mpt3sas/mpt3sas_scsih.c
··· 8283 8283 } 8284 8284 8285 8285 sas_remove_host(shost); 8286 - scsi_remove_host(shost); 8287 8286 mpt3sas_base_detach(ioc); 8288 8287 spin_lock(&gioc_lock); 8289 8288 list_del(&ioc->list);
-1
drivers/scsi/mvsas/mv_init.c
··· 642 642 tasklet_kill(&((struct mvs_prv_info *)sha->lldd_ha)->mv_tasklet); 643 643 #endif 644 644 645 - scsi_remove_host(mvi->shost); 646 645 sas_unregister_ha(sha); 647 646 sas_remove_host(mvi->shost); 648 647
+26 -59
drivers/scsi/mvumi.c
··· 210 210 unsigned int sgnum = scsi_sg_count(scmd); 211 211 dma_addr_t busaddr; 212 212 213 - if (sgnum) { 214 - sg = scsi_sglist(scmd); 215 - *sg_count = pci_map_sg(mhba->pdev, sg, sgnum, 216 - (int) scmd->sc_data_direction); 217 - if (*sg_count > mhba->max_sge) { 218 - dev_err(&mhba->pdev->dev, "sg count[0x%x] is bigger " 219 - "than max sg[0x%x].\n", 220 - *sg_count, mhba->max_sge); 221 - return -1; 222 - } 223 - for (i = 0; i < *sg_count; i++) { 224 - busaddr = sg_dma_address(&sg[i]); 225 - m_sg->baseaddr_l = cpu_to_le32(lower_32_bits(busaddr)); 226 - m_sg->baseaddr_h = cpu_to_le32(upper_32_bits(busaddr)); 227 - m_sg->flags = 0; 228 - sgd_setsz(mhba, m_sg, cpu_to_le32(sg_dma_len(&sg[i]))); 229 - if ((i + 1) == *sg_count) 230 - m_sg->flags |= 1U << mhba->eot_flag; 231 - 232 - sgd_inc(mhba, m_sg); 233 - } 234 - } else { 235 - scmd->SCp.dma_handle = scsi_bufflen(scmd) ? 236 - pci_map_single(mhba->pdev, scsi_sglist(scmd), 237 - scsi_bufflen(scmd), 238 - (int) scmd->sc_data_direction) 239 - : 0; 240 - busaddr = scmd->SCp.dma_handle; 213 + sg = scsi_sglist(scmd); 214 + *sg_count = pci_map_sg(mhba->pdev, sg, sgnum, 215 + (int) scmd->sc_data_direction); 216 + if (*sg_count > mhba->max_sge) { 217 + dev_err(&mhba->pdev->dev, 218 + "sg count[0x%x] is bigger than max sg[0x%x].\n", 219 + *sg_count, mhba->max_sge); 220 + pci_unmap_sg(mhba->pdev, sg, sgnum, 221 + (int) scmd->sc_data_direction); 222 + return -1; 223 + } 224 + for (i = 0; i < *sg_count; i++) { 225 + busaddr = sg_dma_address(&sg[i]); 241 226 m_sg->baseaddr_l = cpu_to_le32(lower_32_bits(busaddr)); 242 227 m_sg->baseaddr_h = cpu_to_le32(upper_32_bits(busaddr)); 243 - m_sg->flags = 1U << mhba->eot_flag; 244 - sgd_setsz(mhba, m_sg, cpu_to_le32(scsi_bufflen(scmd))); 245 - *sg_count = 1; 228 + m_sg->flags = 0; 229 + sgd_setsz(mhba, m_sg, cpu_to_le32(sg_dma_len(&sg[i]))); 230 + if ((i + 1) == *sg_count) 231 + m_sg->flags |= 1U << mhba->eot_flag; 232 + 233 + sgd_inc(mhba, m_sg); 246 234 } 247 235 248 236 return 0; ··· 
1338 1350 break; 1339 1351 } 1340 1352 1341 - if (scsi_bufflen(scmd)) { 1342 - if (scsi_sg_count(scmd)) { 1343 - pci_unmap_sg(mhba->pdev, 1344 - scsi_sglist(scmd), 1345 - scsi_sg_count(scmd), 1346 - (int) scmd->sc_data_direction); 1347 - } else { 1348 - pci_unmap_single(mhba->pdev, 1349 - scmd->SCp.dma_handle, 1350 - scsi_bufflen(scmd), 1351 - (int) scmd->sc_data_direction); 1352 - 1353 - scmd->SCp.dma_handle = 0; 1354 - } 1355 - } 1353 + if (scsi_bufflen(scmd)) 1354 + pci_unmap_sg(mhba->pdev, scsi_sglist(scmd), 1355 + scsi_sg_count(scmd), 1356 + (int) scmd->sc_data_direction); 1356 1357 cmd->scmd->scsi_done(scmd); 1357 1358 mvumi_return_cmd(mhba, cmd); 1358 1359 } ··· 2148 2171 scmd->result = (DRIVER_INVALID << 24) | (DID_ABORT << 16); 2149 2172 scmd->SCp.ptr = NULL; 2150 2173 if (scsi_bufflen(scmd)) { 2151 - if (scsi_sg_count(scmd)) { 2152 - pci_unmap_sg(mhba->pdev, 2153 - scsi_sglist(scmd), 2154 - scsi_sg_count(scmd), 2155 - (int)scmd->sc_data_direction); 2156 - } else { 2157 - pci_unmap_single(mhba->pdev, 2158 - scmd->SCp.dma_handle, 2159 - scsi_bufflen(scmd), 2160 - (int)scmd->sc_data_direction); 2161 - 2162 - scmd->SCp.dma_handle = 0; 2163 - } 2174 + pci_unmap_sg(mhba->pdev, scsi_sglist(scmd), 2175 + scsi_sg_count(scmd), 2176 + (int)scmd->sc_data_direction); 2164 2177 } 2165 2178 mvumi_return_cmd(mhba, cmd); 2166 2179 spin_unlock_irqrestore(mhba->shost->host_lock, flags);
+8 -7
drivers/scsi/osd/osd_uld.c
··· 464 464 /* hold one more reference to the scsi_device that will get released 465 465 * in __release, in case a logout is happening while fs is mounted 466 466 */ 467 - scsi_device_get(scsi_device); 467 + if (scsi_device_get(scsi_device)) 468 + goto err_put_disk; 468 469 osd_dev_init(&oud->od, scsi_device); 469 470 470 471 /* Detect the OSD Version */ 471 472 error = __detect_osd(oud); 472 473 if (error) { 473 474 OSD_ERR("osd detection failed, non-compatible OSD device\n"); 474 - goto err_put_disk; 475 + goto err_put_sdev; 475 476 } 476 477 477 478 /* init the char-device for communication with user-mode */ ··· 509 508 510 509 err_put_cdev: 511 510 cdev_del(&oud->cdev); 512 - err_put_disk: 511 + err_put_sdev: 513 512 scsi_device_put(scsi_device); 513 + err_put_disk: 514 514 put_disk(disk); 515 515 err_free_osd: 516 516 dev_set_drvdata(dev, NULL); ··· 526 524 struct scsi_device *scsi_device = to_scsi_device(dev); 527 525 struct osd_uld_device *oud = dev_get_drvdata(dev); 528 526 529 - if (!oud || (oud->od.scsi_device != scsi_device)) { 530 - OSD_ERR("Half cooked osd-device %p,%p || %p!=%p", 531 - dev, oud, oud ? oud->od.scsi_device : NULL, 532 - scsi_device); 527 + if (oud->od.scsi_device != scsi_device) { 528 + OSD_ERR("Half cooked osd-device %p, || %p!=%p", 529 + dev, oud->od.scsi_device, scsi_device); 533 530 } 534 531 535 532 device_unregister(&oud->class_dev);
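The osd_uld.c hunk reworks the error-unwinding ladder so that a failed `scsi_device_get()` is checked and the labels release resources strictly in reverse acquisition order. A compact sketch of that goto-ladder discipline, where `take()`/`drop()` are illustrative stand-ins for the disk, scsi_device reference and cdev acquisitions:

```c
#include <assert.h>
#include <stddef.h>

static int allocs;              /* outstanding resources, for checking */

static void *take(int ok) { if (!ok) return NULL; allocs++; return &allocs; }
static void drop(void)    { allocs--; }

/* Each acquisition gets its own label; a failure jumps to the label
 * that releases only what was already taken, in reverse order. */
static int probe(int ok_disk, int ok_sdev, int ok_cdev)
{
    void *disk, *sdev, *cdev;

    disk = take(ok_disk);
    if (!disk)
        goto err_out;
    sdev = take(ok_sdev);       /* mirrors the now-checked scsi_device_get() */
    if (!sdev)
        goto err_put_disk;
    cdev = take(ok_cdev);
    if (!cdev)
        goto err_put_sdev;
    drop(); drop(); drop();     /* normal teardown, just for the demo */
    return 0;

err_put_sdev:
    drop();
err_put_disk:
    drop();
err_out:
    return -1;
}
```

The pre-fix bug was exactly a label in the wrong position: jumping to a label that dropped a reference that had not been taken (or skipping one that had), which this ordering rule prevents.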
-1
drivers/scsi/pm8001/pm8001_init.c
··· 1088 1088 struct pm8001_hba_info *pm8001_ha; 1089 1089 int i, j; 1090 1090 pm8001_ha = sha->lldd_ha; 1091 - scsi_remove_host(pm8001_ha->shost); 1092 1091 sas_unregister_ha(sha); 1093 1092 sas_remove_host(pm8001_ha->shost); 1094 1093 list_del(&pm8001_ha->list);
+98 -140
drivers/scsi/pmcraid.c
··· 77 77 */ 78 78 static unsigned int pmcraid_major; 79 79 static struct class *pmcraid_class; 80 - DECLARE_BITMAP(pmcraid_minor, PMCRAID_MAX_ADAPTERS); 80 + static DECLARE_BITMAP(pmcraid_minor, PMCRAID_MAX_ADAPTERS); 81 81 82 82 /* 83 83 * Module parameters ··· 175 175 if (fw_version <= PMCRAID_FW_VERSION_1) 176 176 target = temp->cfg_entry.unique_flags1; 177 177 else 178 - target = temp->cfg_entry.array_id & 0xFF; 178 + target = le16_to_cpu(temp->cfg_entry.array_id) & 0xFF; 179 179 180 180 if (target > PMCRAID_MAX_VSET_TARGETS) 181 181 continue; ··· 330 330 ioarcb->request_flags0 = 0; 331 331 ioarcb->request_flags1 = 0; 332 332 ioarcb->cmd_timeout = 0; 333 - ioarcb->ioarcb_bus_addr &= (~0x1FULL); 333 + ioarcb->ioarcb_bus_addr &= cpu_to_le64(~0x1FULL); 334 334 ioarcb->ioadl_bus_addr = 0; 335 335 ioarcb->ioadl_length = 0; 336 336 ioarcb->data_transfer_length = 0; ··· 345 345 cmd->scsi_cmd = NULL; 346 346 cmd->release = 0; 347 347 cmd->completion_req = 0; 348 - cmd->sense_buffer = 0; 348 + cmd->sense_buffer = NULL; 349 349 cmd->sense_buffer_dma = 0; 350 350 cmd->dma_handle = 0; 351 351 init_timer(&cmd->timer); ··· 898 898 899 899 /* driver writes lower 32-bit value of IOARCB address only */ 900 900 mb(); 901 - iowrite32(le32_to_cpu(cmd->ioa_cb->ioarcb.ioarcb_bus_addr), 902 - pinstance->ioarrin); 901 + iowrite32(le64_to_cpu(cmd->ioa_cb->ioarcb.ioarcb_bus_addr), pinstance->ioarrin); 903 902 } 904 903 905 904 /** ··· 1050 1051 offsetof(struct pmcraid_ioarcb, 1051 1052 add_data.u.ioadl[0])); 1052 1053 ioarcb->ioadl_length = cpu_to_le32(sizeof(struct pmcraid_ioadl_desc)); 1053 - ioarcb->ioarcb_bus_addr &= ~(0x1FULL); 1054 + ioarcb->ioarcb_bus_addr &= cpu_to_le64(~(0x1FULL)); 1054 1055 1055 1056 ioarcb->request_flags0 |= NO_LINK_DESCS; 1056 1057 ioarcb->data_transfer_length = cpu_to_le32(data_size); ··· 1076 1077 struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb; 1077 1078 int index = cmd->hrrq_index; 1078 1079 __be64 hrrq_addr = 
cpu_to_be64(pinstance->hrrq_start_bus_addr[index]); 1079 - u32 hrrq_size = cpu_to_be32(sizeof(u32) * PMCRAID_MAX_CMD); 1080 + __be32 hrrq_size = cpu_to_be32(sizeof(u32) * PMCRAID_MAX_CMD); 1080 1081 void (*done_function)(struct pmcraid_cmd *); 1081 1082 1082 1083 pmcraid_reinit_cmdblk(cmd); ··· 1201 1202 1202 1203 ioadl[0].flags |= IOADL_FLAGS_READ_LAST; 1203 1204 ioadl[0].data_len = cpu_to_le32(rcb_size); 1204 - ioadl[0].address = cpu_to_le32(dma); 1205 + ioadl[0].address = cpu_to_le64(dma); 1205 1206 1206 1207 cmd->cmd_done = cmd_done; 1207 1208 return cmd; ··· 1236 1237 ) 1237 1238 { 1238 1239 struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb; 1239 - __be64 ioarcb_addr = cmd_to_cancel->ioa_cb->ioarcb.ioarcb_bus_addr; 1240 + __be64 ioarcb_addr; 1241 + 1242 + /* IOARCB address of the command to be cancelled is given in 1243 + * cdb[2]..cdb[9] is Big-Endian format. Note that length bits in 1244 + * IOARCB address are not masked. 1245 + */ 1246 + ioarcb_addr = cpu_to_be64(le64_to_cpu(cmd_to_cancel->ioa_cb->ioarcb.ioarcb_bus_addr)); 1240 1247 1241 1248 /* Get the resource handle to where the command to be aborted has been 1242 1249 * sent. ··· 1252 1247 memset(ioarcb->cdb, 0, PMCRAID_MAX_CDB_LEN); 1253 1248 ioarcb->cdb[0] = PMCRAID_ABORT_CMD; 1254 1249 1255 - /* IOARCB address of the command to be cancelled is given in 1256 - * cdb[2]..cdb[9] is Big-Endian format. Note that length bits in 1257 - * IOARCB address are not masked. 
1258 - */ 1259 - ioarcb_addr = cpu_to_be64(ioarcb_addr); 1260 1250 memcpy(&(ioarcb->cdb[2]), &ioarcb_addr, sizeof(ioarcb_addr)); 1261 1251 } 1262 1252 ··· 1493 1493 { 1494 1494 return pmcraid_notify_aen(pinstance, 1495 1495 pinstance->ccn.msg, 1496 - pinstance->ccn.hcam->data_len + 1496 + le32_to_cpu(pinstance->ccn.hcam->data_len) + 1497 1497 sizeof(struct pmcraid_hcam_hdr)); 1498 1498 } 1499 1499 ··· 1508 1508 { 1509 1509 return pmcraid_notify_aen(pinstance, 1510 1510 pinstance->ldn.msg, 1511 - pinstance->ldn.hcam->data_len + 1511 + le32_to_cpu(pinstance->ldn.hcam->data_len) + 1512 1512 sizeof(struct pmcraid_hcam_hdr)); 1513 1513 } 1514 1514 ··· 1556 1556 1557 1557 pmcraid_info("CCN(%x): %x timestamp: %llx type: %x lost: %x flags: %x \ 1558 1558 res: %x:%x:%x:%x\n", 1559 - pinstance->ccn.hcam->ilid, 1559 + le32_to_cpu(pinstance->ccn.hcam->ilid), 1560 1560 pinstance->ccn.hcam->op_code, 1561 - ((pinstance->ccn.hcam->timestamp1) | 1562 - ((pinstance->ccn.hcam->timestamp2 & 0xffffffffLL) << 32)), 1561 + (le32_to_cpu(pinstance->ccn.hcam->timestamp1) | 1562 + ((le32_to_cpu(pinstance->ccn.hcam->timestamp2) & 0xffffffffLL) << 32)), 1563 1563 pinstance->ccn.hcam->notification_type, 1564 1564 pinstance->ccn.hcam->notification_lost, 1565 1565 pinstance->ccn.hcam->flags, ··· 1570 1570 RES_IS_VSET(*cfg_entry) ? 1571 1571 (fw_version <= PMCRAID_FW_VERSION_1 ? 
1572 1572 cfg_entry->unique_flags1 : 1573 - cfg_entry->array_id & 0xFF) : 1573 + le16_to_cpu(cfg_entry->array_id) & 0xFF) : 1574 1574 RES_TARGET(cfg_entry->resource_address), 1575 1575 RES_LUN(cfg_entry->resource_address)); 1576 1576 ··· 1658 1658 if (fw_version <= PMCRAID_FW_VERSION_1) 1659 1659 res->cfg_entry.unique_flags1 &= 0x7F; 1660 1660 else 1661 - res->cfg_entry.array_id &= 0xFF; 1661 + res->cfg_entry.array_id &= cpu_to_le16(0xFF); 1662 1662 res->change_detected = RES_CHANGE_DEL; 1663 1663 res->cfg_entry.resource_handle = 1664 1664 PMCRAID_INVALID_RES_HANDLE; ··· 1716 1716 /* log the error string */ 1717 1717 pmcraid_err("cmd [%x] for resource %x failed with %x(%s)\n", 1718 1718 cmd->ioa_cb->ioarcb.cdb[0], 1719 - cmd->ioa_cb->ioarcb.resource_handle, 1720 - le32_to_cpu(ioasc), error_info->error_string); 1719 + le32_to_cpu(cmd->ioa_cb->ioarcb.resource_handle), 1720 + ioasc, error_info->error_string); 1721 1721 } 1722 1722 1723 1723 /** ··· 2034 2034 cmd->ioa_cb->ioasa.ioasc = 2035 2035 cpu_to_le32(PMCRAID_IOASC_IOA_WAS_RESET); 2036 2036 cmd->ioa_cb->ioasa.ilid = 2037 - cpu_to_be32(PMCRAID_DRIVER_ILID); 2037 + cpu_to_le32(PMCRAID_DRIVER_ILID); 2038 2038 2039 2039 /* In case the command timer is still running */ 2040 2040 del_timer(&cmd->timer); ··· 2373 2373 spin_lock_irqsave(pinstance->host->host_lock, lock_flags); 2374 2374 2375 2375 if (pinstance->ioa_state == IOA_STATE_DEAD) { 2376 - spin_unlock_irqrestore(pinstance->host->host_lock, 2377 - lock_flags); 2378 2376 pmcraid_info("reset_reload: IOA is dead\n"); 2379 - return reset; 2380 - } else if (pinstance->ioa_state == target_state) { 2377 + goto out_unlock; 2378 + } 2379 + 2380 + if (pinstance->ioa_state == target_state) { 2381 2381 reset = 0; 2382 + goto out_unlock; 2382 2383 } 2383 2384 } 2384 2385 2385 - if (reset) { 2386 - pmcraid_info("reset_reload: proceeding with reset\n"); 2387 - scsi_block_requests(pinstance->host); 2388 - reset_cmd = pmcraid_get_free_cmd(pinstance); 2389 - 2390 - if (reset_cmd 
== NULL) { 2391 - pmcraid_err("no free cmnd for reset_reload\n"); 2392 - spin_unlock_irqrestore(pinstance->host->host_lock, 2393 - lock_flags); 2394 - return reset; 2395 - } 2396 - 2397 - if (shutdown_type == SHUTDOWN_NORMAL) 2398 - pinstance->ioa_bringdown = 1; 2399 - 2400 - pinstance->ioa_shutdown_type = shutdown_type; 2401 - pinstance->reset_cmd = reset_cmd; 2402 - pinstance->force_ioa_reset = reset; 2403 - pmcraid_info("reset_reload: initiating reset\n"); 2404 - pmcraid_ioa_reset(reset_cmd); 2405 - spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags); 2406 - pmcraid_info("reset_reload: waiting for reset to complete\n"); 2407 - wait_event(pinstance->reset_wait_q, 2408 - !pinstance->ioa_reset_in_progress); 2409 - 2410 - pmcraid_info("reset_reload: reset is complete !!\n"); 2411 - scsi_unblock_requests(pinstance->host); 2412 - if (pinstance->ioa_state == target_state) 2413 - reset = 0; 2386 + pmcraid_info("reset_reload: proceeding with reset\n"); 2387 + scsi_block_requests(pinstance->host); 2388 + reset_cmd = pmcraid_get_free_cmd(pinstance); 2389 + if (reset_cmd == NULL) { 2390 + pmcraid_err("no free cmnd for reset_reload\n"); 2391 + goto out_unlock; 2414 2392 } 2415 2393 2394 + if (shutdown_type == SHUTDOWN_NORMAL) 2395 + pinstance->ioa_bringdown = 1; 2396 + 2397 + pinstance->ioa_shutdown_type = shutdown_type; 2398 + pinstance->reset_cmd = reset_cmd; 2399 + pinstance->force_ioa_reset = reset; 2400 + pmcraid_info("reset_reload: initiating reset\n"); 2401 + pmcraid_ioa_reset(reset_cmd); 2402 + spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags); 2403 + pmcraid_info("reset_reload: waiting for reset to complete\n"); 2404 + wait_event(pinstance->reset_wait_q, 2405 + !pinstance->ioa_reset_in_progress); 2406 + 2407 + pmcraid_info("reset_reload: reset is complete !!\n"); 2408 + scsi_unblock_requests(pinstance->host); 2409 + return pinstance->ioa_state != target_state; 2410 + 2411 + out_unlock: 2412 + spin_unlock_irqrestore(pinstance->host->host_lock, 
lock_flags); 2416 2413 return reset; 2417 2414 } 2418 2415 ··· 2526 2529 ioarcb->ioadl_bus_addr = 0; 2527 2530 ioarcb->ioadl_length = 0; 2528 2531 ioarcb->data_transfer_length = 0; 2529 - ioarcb->ioarcb_bus_addr &= (~0x1FULL); 2532 + ioarcb->ioarcb_bus_addr &= cpu_to_le64((~0x1FULL)); 2530 2533 2531 2534 /* writing to IOARRIN must be protected by host_lock, as mid-layer 2532 2535 * schedule queuecommand while we are doing this ··· 2689 2692 * mid-layer 2690 2693 */ 2691 2694 if (ioasa->auto_sense_length != 0) { 2692 - short sense_len = ioasa->auto_sense_length; 2693 - int data_size = min_t(u16, le16_to_cpu(sense_len), 2695 + short sense_len = le16_to_cpu(ioasa->auto_sense_length); 2696 + int data_size = min_t(u16, sense_len, 2694 2697 SCSI_SENSE_BUFFERSIZE); 2695 2698 2696 2699 memcpy(scsi_cmd->sense_buffer, ··· 2912 2915 2913 2916 pmcraid_info("aborting command CDB[0]= %x with index = %d\n", 2914 2917 cmd->ioa_cb->ioarcb.cdb[0], 2915 - cmd->ioa_cb->ioarcb.response_handle >> 2); 2918 + le32_to_cpu(cmd->ioa_cb->ioarcb.response_handle) >> 2); 2916 2919 2917 2920 init_completion(&cancel_cmd->wait_for_completion); 2918 2921 cancel_cmd->completion_req = 1; ··· 3137 3140 int ioadl_count = 0; 3138 3141 3139 3142 if (ioarcb->add_cmd_param_length) 3140 - ioadl_count = DIV_ROUND_UP(ioarcb->add_cmd_param_length, 16); 3141 - ioarcb->ioadl_length = 3142 - sizeof(struct pmcraid_ioadl_desc) * sgcount; 3143 + ioadl_count = DIV_ROUND_UP(le16_to_cpu(ioarcb->add_cmd_param_length), 16); 3144 + ioarcb->ioadl_length = cpu_to_le32(sizeof(struct pmcraid_ioadl_desc) * sgcount); 3143 3145 3144 3146 if ((sgcount + ioadl_count) > (ARRAY_SIZE(ioarcb->add_data.u.ioadl))) { 3145 3147 /* external ioadls start at offset 0x80 from control_block ··· 3146 3150 * It is necessary to indicate to firmware that driver is 3147 3151 * using ioadls to be treated as external to IOARCB. 
3148 3152 */ 3149 - ioarcb->ioarcb_bus_addr &= ~(0x1FULL); 3153 + ioarcb->ioarcb_bus_addr &= cpu_to_le64(~(0x1FULL)); 3150 3154 ioarcb->ioadl_bus_addr = 3151 3155 cpu_to_le64((cmd->ioa_cb_bus_addr) + 3152 3156 offsetof(struct pmcraid_ioarcb, ··· 3160 3164 3161 3165 ioadl = &ioarcb->add_data.u.ioadl[ioadl_count]; 3162 3166 ioarcb->ioarcb_bus_addr |= 3163 - DIV_ROUND_CLOSEST(sgcount + ioadl_count, 8); 3167 + cpu_to_le64(DIV_ROUND_CLOSEST(sgcount + ioadl_count, 8)); 3164 3168 } 3165 3169 3166 3170 return ioadl; ··· 3321 3325 */ 3322 3326 static int pmcraid_copy_sglist( 3323 3327 struct pmcraid_sglist *sglist, 3324 - unsigned long buffer, 3328 + void __user *buffer, 3325 3329 u32 len, 3326 3330 int direction 3327 3331 ) ··· 3342 3346 3343 3347 kaddr = kmap(page); 3344 3348 if (direction == DMA_TO_DEVICE) 3345 - rc = __copy_from_user(kaddr, 3346 - (void *)buffer, 3347 - bsize_elem); 3349 + rc = copy_from_user(kaddr, buffer, bsize_elem); 3348 3350 else 3349 - rc = __copy_to_user((void *)buffer, kaddr, bsize_elem); 3351 + rc = copy_to_user(buffer, kaddr, bsize_elem); 3350 3352 3351 3353 kunmap(page); 3352 3354 ··· 3362 3368 kaddr = kmap(page); 3363 3369 3364 3370 if (direction == DMA_TO_DEVICE) 3365 - rc = __copy_from_user(kaddr, 3366 - (void *)buffer, 3367 - len % bsize_elem); 3371 + rc = copy_from_user(kaddr, buffer, len % bsize_elem); 3368 3372 else 3369 - rc = __copy_to_user((void *)buffer, 3370 - kaddr, 3371 - len % bsize_elem); 3373 + rc = copy_to_user(buffer, kaddr, len % bsize_elem); 3372 3374 3373 3375 kunmap(page); 3374 3376 ··· 3486 3496 RES_IS_VSET(res->cfg_entry) ? 3487 3497 (fw_version <= PMCRAID_FW_VERSION_1 ? 
3488 3498 res->cfg_entry.unique_flags1 : 3489 - res->cfg_entry.array_id & 0xFF) : 3499 + le16_to_cpu(res->cfg_entry.array_id) & 0xFF) : 3490 3500 RES_TARGET(res->cfg_entry.resource_address), 3491 3501 RES_LUN(res->cfg_entry.resource_address)); 3492 3502 ··· 3642 3652 struct pmcraid_instance *pinstance, 3643 3653 unsigned int ioctl_cmd, 3644 3654 unsigned int buflen, 3645 - unsigned long arg 3655 + void __user *arg 3646 3656 ) 3647 3657 { 3648 3658 struct pmcraid_passthrough_ioctl_buffer *buffer; 3649 3659 struct pmcraid_ioarcb *ioarcb; 3650 3660 struct pmcraid_cmd *cmd; 3651 3661 struct pmcraid_cmd *cancel_cmd; 3652 - unsigned long request_buffer; 3662 + void __user *request_buffer; 3653 3663 unsigned long request_offset; 3654 3664 unsigned long lock_flags; 3655 - void *ioasa; 3665 + void __user *ioasa; 3656 3666 u32 ioasc; 3657 3667 int request_size; 3658 3668 int buffer_size; ··· 3691 3701 3692 3702 request_buffer = arg + request_offset; 3693 3703 3694 - rc = __copy_from_user(buffer, 3695 - (struct pmcraid_passthrough_ioctl_buffer *) arg, 3704 + rc = copy_from_user(buffer, arg, 3696 3705 sizeof(struct pmcraid_passthrough_ioctl_buffer)); 3697 3706 3698 - ioasa = 3699 - (void *)(arg + 3700 - offsetof(struct pmcraid_passthrough_ioctl_buffer, ioasa)); 3707 + ioasa = arg + offsetof(struct pmcraid_passthrough_ioctl_buffer, ioasa); 3701 3708 3702 3709 if (rc) { 3703 3710 pmcraid_err("ioctl: can't copy passthrough buffer\n"); ··· 3702 3715 goto out_free_buffer; 3703 3716 } 3704 3717 3705 - request_size = buffer->ioarcb.data_transfer_length; 3718 + request_size = le32_to_cpu(buffer->ioarcb.data_transfer_length); 3706 3719 3707 3720 if (buffer->ioarcb.request_flags0 & TRANSFER_DIR_WRITE) { 3708 3721 access = VERIFY_READ; ··· 3712 3725 direction = DMA_FROM_DEVICE; 3713 3726 } 3714 3727 3715 - if (request_size > 0) { 3716 - rc = access_ok(access, arg, request_offset + request_size); 3717 - 3718 - if (!rc) { 3719 - rc = -EFAULT; 3720 - goto out_free_buffer; 3721 - } 3722 - } 
else if (request_size < 0) { 3728 + if (request_size < 0) { 3723 3729 rc = -EINVAL; 3724 3730 goto out_free_buffer; 3725 3731 } 3726 3732 3727 3733 /* check if we have any additional command parameters */ 3728 - if (buffer->ioarcb.add_cmd_param_length > PMCRAID_ADD_CMD_PARAM_LEN) { 3734 + if (le16_to_cpu(buffer->ioarcb.add_cmd_param_length) 3735 + > PMCRAID_ADD_CMD_PARAM_LEN) { 3729 3736 rc = -EINVAL; 3730 3737 goto out_free_buffer; 3731 3738 } ··· 3751 3770 buffer->ioarcb.add_cmd_param_offset; 3752 3771 memcpy(ioarcb->add_data.u.add_cmd_params, 3753 3772 buffer->ioarcb.add_data.u.add_cmd_params, 3754 - buffer->ioarcb.add_cmd_param_length); 3773 + le16_to_cpu(buffer->ioarcb.add_cmd_param_length)); 3755 3774 } 3756 3775 3757 3776 /* set hrrq number where the IOA should respond to. Note that all cmds ··· 3821 3840 wait_for_completion(&cmd->wait_for_completion); 3822 3841 } else if (!wait_for_completion_timeout( 3823 3842 &cmd->wait_for_completion, 3824 - msecs_to_jiffies(buffer->ioarcb.cmd_timeout * 1000))) { 3843 + msecs_to_jiffies(le16_to_cpu(buffer->ioarcb.cmd_timeout) * 1000))) { 3825 3844 3826 3845 pmcraid_info("aborting cmd %d (CDB[0] = %x) due to timeout\n", 3827 - le32_to_cpu(cmd->ioa_cb->ioarcb.response_handle >> 2), 3846 + le32_to_cpu(cmd->ioa_cb->ioarcb.response_handle) >> 2, 3828 3847 cmd->ioa_cb->ioarcb.cdb[0]); 3829 3848 3830 3849 spin_lock_irqsave(pinstance->host->host_lock, lock_flags); ··· 3833 3852 3834 3853 if (cancel_cmd) { 3835 3854 wait_for_completion(&cancel_cmd->wait_for_completion); 3836 - ioasc = cancel_cmd->ioa_cb->ioasa.ioasc; 3855 + ioasc = le32_to_cpu(cancel_cmd->ioa_cb->ioasa.ioasc); 3837 3856 pmcraid_return_cmd(cancel_cmd); 3838 3857 3839 3858 /* if abort task couldn't find the command i.e it got ··· 3922 3941 { 3923 3942 int rc = -ENOSYS; 3924 3943 3925 - if (!access_ok(VERIFY_READ, user_buffer, _IOC_SIZE(cmd))) { 3926 - pmcraid_err("ioctl_driver: access fault in request buffer\n"); 3927 - return -EFAULT; 3928 - } 3929 - 3930 3944 
switch (cmd) { 3931 3945 case PMCRAID_IOCTL_RESET_ADAPTER: 3932 3946 pmcraid_reset_bringup(pinstance); ··· 3953 3977 struct pmcraid_ioctl_header *hdr 3954 3978 ) 3955 3979 { 3956 - int rc = 0; 3957 - int access = VERIFY_READ; 3980 + int rc; 3958 3981 3959 3982 if (copy_from_user(hdr, arg, sizeof(struct pmcraid_ioctl_header))) { 3960 3983 pmcraid_err("couldn't copy ioctl header from user buffer\n"); ··· 3967 3992 if (rc) { 3968 3993 pmcraid_err("signature verification failed\n"); 3969 3994 return -EINVAL; 3970 - } 3971 - 3972 - /* check for appropriate buffer access */ 3973 - if ((_IOC_DIR(cmd) & _IOC_READ) == _IOC_READ) 3974 - access = VERIFY_WRITE; 3975 - 3976 - rc = access_ok(access, 3977 - (arg + sizeof(struct pmcraid_ioctl_header)), 3978 - hdr->buffer_length); 3979 - if (!rc) { 3980 - pmcraid_err("access failed for user buffer of size %d\n", 3981 - hdr->buffer_length); 3982 - return -EFAULT; 3983 3995 } 3984 3996 3985 3997 return 0; ··· 3983 4021 { 3984 4022 struct pmcraid_instance *pinstance = NULL; 3985 4023 struct pmcraid_ioctl_header *hdr = NULL; 4024 + void __user *argp = (void __user *)arg; 3986 4025 int retval = -ENOTTY; 3987 4026 3988 4027 hdr = kmalloc(sizeof(struct pmcraid_ioctl_header), GFP_KERNEL); ··· 3993 4030 return -ENOMEM; 3994 4031 } 3995 4032 3996 - retval = pmcraid_check_ioctl_buffer(cmd, (void *)arg, hdr); 4033 + retval = pmcraid_check_ioctl_buffer(cmd, argp, hdr); 3997 4034 3998 4035 if (retval) { 3999 4036 pmcraid_info("chr_ioctl: header check failed\n"); ··· 4018 4055 if (cmd == PMCRAID_IOCTL_DOWNLOAD_MICROCODE) 4019 4056 scsi_block_requests(pinstance->host); 4020 4057 4021 - retval = pmcraid_ioctl_passthrough(pinstance, 4022 - cmd, 4023 - hdr->buffer_length, 4024 - arg); 4058 + retval = pmcraid_ioctl_passthrough(pinstance, cmd, 4059 + hdr->buffer_length, argp); 4025 4060 4026 4061 if (cmd == PMCRAID_IOCTL_DOWNLOAD_MICROCODE) 4027 4062 scsi_unblock_requests(pinstance->host); ··· 4027 4066 4028 4067 case PMCRAID_DRIVER_IOCTL: 4029 4068 
arg += sizeof(struct pmcraid_ioctl_header); 4030 - retval = pmcraid_ioctl_driver(pinstance, 4031 - cmd, 4032 - hdr->buffer_length, 4033 - (void __user *)arg); 4069 + retval = pmcraid_ioctl_driver(pinstance, cmd, 4070 + hdr->buffer_length, argp); 4034 4071 break; 4035 4072 4036 4073 default: ··· 4429 4470 if (fw_version <= PMCRAID_FW_VERSION_1) 4430 4471 target = res->cfg_entry.unique_flags1; 4431 4472 else 4432 - target = res->cfg_entry.array_id & 0xFF; 4473 + target = le16_to_cpu(res->cfg_entry.array_id) & 0xFF; 4433 4474 lun = PMCRAID_VSET_LUN_ID; 4434 4475 } else { 4435 4476 bus = PMCRAID_PHYS_BUS_ID; ··· 4468 4509 unsigned long host_lock_flags; 4469 4510 spinlock_t *lockp; /* hrrq buffer lock */ 4470 4511 int id; 4471 - __le32 resp; 4512 + u32 resp; 4472 4513 4473 4514 hrrq_vector = (struct pmcraid_isr_param *)instance; 4474 4515 pinstance = hrrq_vector->drv_inst; ··· 4792 4833 buffer_size, 4793 4834 &(pinstance->hrrq_start_bus_addr[i])); 4794 4835 4795 - if (pinstance->hrrq_start[i] == 0) { 4836 + if (!pinstance->hrrq_start[i]) { 4796 4837 pmcraid_err("pci_alloc failed for hrrq vector : %d\n", 4797 4838 i); 4798 4839 pmcraid_release_host_rrqs(pinstance, i); ··· 5508 5549 struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb; 5509 5550 __be32 time_stamp_len = cpu_to_be32(PMCRAID_TIMESTAMP_LEN); 5510 5551 struct pmcraid_ioadl_desc *ioadl = ioarcb->add_data.u.ioadl; 5511 - 5512 - __le64 timestamp; 5552 + u64 timestamp; 5513 5553 5514 5554 timestamp = ktime_get_real_seconds() * 1000; 5515 5555 ··· 5530 5572 offsetof(struct pmcraid_ioarcb, 5531 5573 add_data.u.ioadl[0])); 5532 5574 ioarcb->ioadl_length = cpu_to_le32(sizeof(struct pmcraid_ioadl_desc)); 5533 - ioarcb->ioarcb_bus_addr &= ~(0x1FULL); 5575 + ioarcb->ioarcb_bus_addr &= cpu_to_le64(~(0x1FULL)); 5534 5576 5535 5577 ioarcb->request_flags0 |= NO_LINK_DESCS; 5536 5578 ioarcb->request_flags0 |= TRANSFER_DIR_WRITE; ··· 5589 5631 list_for_each_entry_safe(res, temp, &pinstance->used_res_q, queue) 5590 5632 
list_move_tail(&res->queue, &old_res); 5591 5633 5592 - for (i = 0; i < pinstance->cfg_table->num_entries; i++) { 5634 + for (i = 0; i < le16_to_cpu(pinstance->cfg_table->num_entries); i++) { 5593 5635 if (be16_to_cpu(pinstance->inq_data->fw_version) <= 5594 5636 PMCRAID_FW_VERSION_1) 5595 5637 cfgte = &pinstance->cfg_table->entries[i]; ··· 5644 5686 res->cfg_entry.resource_type, 5645 5687 (fw_version <= PMCRAID_FW_VERSION_1 ? 5646 5688 res->cfg_entry.unique_flags1 : 5647 - res->cfg_entry.array_id & 0xFF), 5689 + le16_to_cpu(res->cfg_entry.array_id) & 0xFF), 5648 5690 le32_to_cpu(res->cfg_entry.resource_address)); 5649 5691 } 5650 5692 } ··· 5682 5724 struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb; 5683 5725 struct pmcraid_ioadl_desc *ioadl = ioarcb->add_data.u.ioadl; 5684 5726 struct pmcraid_instance *pinstance = cmd->drv_inst; 5685 - int cfg_table_size = cpu_to_be32(sizeof(struct pmcraid_config_table)); 5727 + __be32 cfg_table_size = cpu_to_be32(sizeof(struct pmcraid_config_table)); 5686 5728 5687 5729 if (be16_to_cpu(pinstance->inq_data->fw_version) <= 5688 5730 PMCRAID_FW_VERSION_1) ··· 5707 5749 offsetof(struct pmcraid_ioarcb, 5708 5750 add_data.u.ioadl[0])); 5709 5751 ioarcb->ioadl_length = cpu_to_le32(sizeof(struct pmcraid_ioadl_desc)); 5710 - ioarcb->ioarcb_bus_addr &= ~(0x1FULL); 5752 + ioarcb->ioarcb_bus_addr &= cpu_to_le64(~0x1FULL); 5711 5753 5712 5754 ioarcb->request_flags0 |= NO_LINK_DESCS; 5713 5755 ioarcb->data_transfer_length =
+4 -4
drivers/scsi/pmcraid.h
··· 554 554 __u8 add_page_len; 555 555 __u8 length; 556 556 __u8 reserved2; 557 - __le16 fw_version; 557 + __be16 fw_version; 558 558 __u8 reserved3[16]; 559 559 }; 560 560 ··· 697 697 dma_addr_t hrrq_start_bus_addr[PMCRAID_NUM_MSIX_VECTORS]; 698 698 699 699 /* Pointer to 1st entry of HRRQ */ 700 - __be32 *hrrq_start[PMCRAID_NUM_MSIX_VECTORS]; 700 + __le32 *hrrq_start[PMCRAID_NUM_MSIX_VECTORS]; 701 701 702 702 /* Pointer to last entry of HRRQ */ 703 - __be32 *hrrq_end[PMCRAID_NUM_MSIX_VECTORS]; 703 + __le32 *hrrq_end[PMCRAID_NUM_MSIX_VECTORS]; 704 704 705 705 /* Pointer to current pointer of hrrq */ 706 - __be32 *hrrq_curr[PMCRAID_NUM_MSIX_VECTORS]; 706 + __le32 *hrrq_curr[PMCRAID_NUM_MSIX_VECTORS]; 707 707 708 708 /* Lock for HRRQ access */ 709 709 spinlock_t hrrq_lock[PMCRAID_NUM_MSIX_VECTORS];
+1 -1
drivers/scsi/qedf/qedf_debugfs.c
··· 449 449 qedf_dbg_fileops(qedf, clear_stats), 450 450 qedf_dbg_fileops_seq(qedf, offload_stats), 451 451 /* This must be last */ 452 - { NULL, NULL }, 452 + { }, 453 453 }; 454 454 455 455 #else /* CONFIG_DEBUG_FS */
+1 -1
drivers/scsi/qedi/qedi_debugfs.c
··· 240 240 qedi_dbg_fileops_seq(qedi, gbl_ctx), 241 241 qedi_dbg_fileops(qedi, do_not_recover), 242 242 qedi_dbg_fileops_seq(qedi, io_trace), 243 - { NULL, NULL }, 243 + { }, 244 244 };
+1 -1
drivers/scsi/qedi/qedi_iscsi.c
··· 1370 1370 { 1371 1371 if (!task->sc || task->state == ISCSI_TASK_PENDING) { 1372 1372 QEDI_INFO(NULL, QEDI_LOG_IO, "Returning ref_cnt=%d\n", 1373 - atomic_read(&task->refcount)); 1373 + refcount_read(&task->refcount)); 1374 1374 return; 1375 1375 } 1376 1376
+1 -1
drivers/scsi/qla2xxx/qla_attr.c
··· 695 695 case 0x2025e: 696 696 if (!IS_P3P_TYPE(ha) || vha != base_vha) { 697 697 ql_log(ql_log_info, vha, 0x7071, 698 - "FCoE ctx reset no supported.\n"); 698 + "FCoE ctx reset not supported.\n"); 699 699 return -EPERM; 700 700 } 701 701
+1 -1
drivers/scsi/qla2xxx/qla_bsg.c
··· 1822 1822 /* Check if operating mode is P2P */ 1823 1823 if (ha->operating_mode != P2P) { 1824 1824 ql_log(ql_log_warn, vha, 0x70a4, 1825 - "Host is operating mode is not P2p\n"); 1825 + "Host operating mode is not P2p\n"); 1826 1826 rval = EXT_STATUS_INVALID_CFG; 1827 1827 goto done; 1828 1828 }
+1 -1
drivers/scsi/qla2xxx/qla_gs.c
··· 144 144 if (ct_rsp->header.response != 145 145 cpu_to_be16(CT_ACCEPT_RESPONSE)) { 146 146 ql_dbg(ql_dbg_disc + ql_dbg_buffer, vha, 0x2077, 147 - "%s failed rejected request on port_id: %02x%02x%02x Compeltion status 0x%x, response 0x%x\n", 147 + "%s failed rejected request on port_id: %02x%02x%02x Completion status 0x%x, response 0x%x\n", 148 148 routine, vha->d_id.b.domain, 149 149 vha->d_id.b.area, vha->d_id.b.al_pa, 150 150 comp_status, ct_rsp->header.response);
+1 -1
drivers/scsi/qla2xxx/qla_init.c
··· 2289 2289 goto chip_diag_failed; 2290 2290 2291 2291 /* Check product ID of chip */ 2292 - ql_dbg(ql_dbg_init, vha, 0x007d, "Checking product Id of chip.\n"); 2292 + ql_dbg(ql_dbg_init, vha, 0x007d, "Checking product ID of chip.\n"); 2293 2293 2294 2294 mb[1] = RD_MAILBOX_REG(ha, reg, 1); 2295 2295 mb[2] = RD_MAILBOX_REG(ha, reg, 2);
+3 -3
drivers/scsi/qla2xxx/qla_isr.c
··· 2100 2100 2101 2101 case CS_DATA_OVERRUN: 2102 2102 ql_dbg(ql_dbg_user, vha, 0x70b1, 2103 - "Command completed with date overrun thread_id=%d\n", 2103 + "Command completed with data overrun thread_id=%d\n", 2104 2104 thread_id); 2105 2105 rval = EXT_STATUS_DATA_OVERRUN; 2106 2106 break; 2107 2107 2108 2108 case CS_DATA_UNDERRUN: 2109 2109 ql_dbg(ql_dbg_user, vha, 0x70b2, 2110 - "Command completed with date underrun thread_id=%d\n", 2110 + "Command completed with data underrun thread_id=%d\n", 2111 2111 thread_id); 2112 2112 rval = EXT_STATUS_DATA_UNDERRUN; 2113 2113 break; ··· 2134 2134 2135 2135 case CS_BIDIR_RD_UNDERRUN: 2136 2136 ql_dbg(ql_dbg_user, vha, 0x70b6, 2137 - "Command completed with read data data underrun " 2137 + "Command completed with read data underrun " 2138 2138 "thread_id=%d\n", thread_id); 2139 2139 rval = EXT_STATUS_DATA_UNDERRUN; 2140 2140 break;
-6
drivers/scsi/qla2xxx/qla_os.c
··· 423 423 kfree(req->outstanding_cmds); 424 424 425 425 kfree(req); 426 - req = NULL; 427 426 } 428 427 429 428 static void qla2x00_free_rsp_que(struct qla_hw_data *ha, struct rsp_que *rsp) ··· 438 439 rsp->ring, rsp->dma); 439 440 } 440 441 kfree(rsp); 441 - rsp = NULL; 442 442 } 443 443 444 444 static void qla2x00_free_queues(struct qla_hw_data *ha) ··· 651 653 ha->gbl_dsd_inuse -= ctx1->dsd_use_cnt; 652 654 ha->gbl_dsd_avail += ctx1->dsd_use_cnt; 653 655 mempool_free(ctx1, ha->ctx_mempool); 654 - ctx1 = NULL; 655 656 } 656 657 657 658 CMD_SP(cmd) = NULL; ··· 3253 3256 } 3254 3257 pci_release_selected_regions(ha->pdev, ha->bars); 3255 3258 kfree(ha); 3256 - ha = NULL; 3257 3259 3258 3260 probe_out: 3259 3261 pci_disable_device(pdev); ··· 3500 3504 3501 3505 pci_release_selected_regions(ha->pdev, ha->bars); 3502 3506 kfree(ha); 3503 - ha = NULL; 3504 3507 3505 3508 pci_disable_pcie_error_reporting(pdev); 3506 3509 ··· 3563 3568 list_del(&fcport->list); 3564 3569 qla2x00_clear_loop_id(fcport); 3565 3570 kfree(fcport); 3566 - fcport = NULL; 3567 3571 } 3568 3572 } 3569 3573
+1 -1
drivers/scsi/qla4xxx/ql4_init.c
··· 389 389 goto alloc_cleanup; 390 390 391 391 DEBUG2(ql4_printk(KERN_INFO, ha, 392 - "Minidump Tempalate Size = 0x%x KB\n", 392 + "Minidump Template Size = 0x%x KB\n", 393 393 ha->fw_dump_tmplt_size)); 394 394 DEBUG2(ql4_printk(KERN_INFO, ha, 395 395 "Total Minidump size = 0x%x KB\n", ha->fw_dump_size));
-1
drivers/scsi/qla4xxx/ql4_os.c
··· 8664 8664 init_completion(&ha->disable_acb_comp); 8665 8665 init_completion(&ha->idc_comp); 8666 8666 init_completion(&ha->link_up_comp); 8667 - init_completion(&ha->disable_acb_comp); 8668 8667 8669 8668 spin_lock_init(&ha->hardware_lock); 8670 8669 spin_lock_init(&ha->work_lock);
+50 -134
drivers/scsi/scsi_error.c
··· 46 46 47 47 #include <trace/events/scsi.h> 48 48 49 + #include <asm/unaligned.h> 50 + 49 51 static void scsi_eh_done(struct scsi_cmnd *scmd); 50 52 51 53 /* ··· 164 162 } 165 163 } 166 164 167 - if (!scsi_eh_scmd_add(scmd, 0)) { 168 - SCSI_LOG_ERROR_RECOVERY(3, 169 - scmd_printk(KERN_WARNING, scmd, 170 - "terminate aborted command\n")); 171 - set_host_byte(scmd, DID_TIME_OUT); 172 - scsi_finish_command(scmd); 173 - } 165 + scsi_eh_scmd_add(scmd); 174 166 } 175 167 176 168 /** ··· 184 188 /* 185 189 * Retry after abort failed, escalate to next level. 186 190 */ 187 - scmd->eh_eflags &= ~SCSI_EH_ABORT_SCHEDULED; 188 191 SCSI_LOG_ERROR_RECOVERY(3, 189 192 scmd_printk(KERN_INFO, scmd, 190 193 "previous abort failed\n")); ··· 191 196 return FAILED; 192 197 } 193 198 194 - /* 195 - * Do not try a command abort if 196 - * SCSI EH has already started. 197 - */ 198 199 spin_lock_irqsave(shost->host_lock, flags); 199 - if (scsi_host_in_recovery(shost)) { 200 - spin_unlock_irqrestore(shost->host_lock, flags); 201 - SCSI_LOG_ERROR_RECOVERY(3, 202 - scmd_printk(KERN_INFO, scmd, 203 - "not aborting, host in recovery\n")); 204 - return FAILED; 205 - } 206 - 207 200 if (shost->eh_deadline != -1 && !shost->last_reset) 208 201 shost->last_reset = jiffies; 209 202 spin_unlock_irqrestore(shost->host_lock, flags); ··· 204 221 } 205 222 206 223 /** 224 + * scsi_eh_reset - call into ->eh_action to reset internal counters 225 + * @scmd: scmd to run eh on. 226 + * 227 + * The scsi driver might be carrying internal state about the 228 + * devices, so we need to call into the driver to reset the 229 + * internal state once the error handler is started. 230 + */ 231 + static void scsi_eh_reset(struct scsi_cmnd *scmd) 232 + { 233 + if (!blk_rq_is_passthrough(scmd->request)) { 234 + struct scsi_driver *sdrv = scsi_cmd_to_driver(scmd); 235 + if (sdrv->eh_reset) 236 + sdrv->eh_reset(scmd); 237 + } 238 + } 239 + 240 + /** 207 241 * scsi_eh_scmd_add - add scsi cmd to error handling. 
208 242 * @scmd: scmd to run eh on. 209 - * @eh_flag: optional SCSI_EH flag. 210 - * 211 - * Return value: 212 - * 0 on failure. 213 243 */ 214 - int scsi_eh_scmd_add(struct scsi_cmnd *scmd, int eh_flag) 244 + void scsi_eh_scmd_add(struct scsi_cmnd *scmd) 215 245 { 216 246 struct Scsi_Host *shost = scmd->device->host; 217 247 unsigned long flags; 218 - int ret = 0; 248 + int ret; 219 249 220 - if (!shost->ehandler) 221 - return 0; 250 + WARN_ON_ONCE(!shost->ehandler); 222 251 223 252 spin_lock_irqsave(shost->host_lock, flags); 224 - if (scsi_host_set_state(shost, SHOST_RECOVERY)) 225 - if (scsi_host_set_state(shost, SHOST_CANCEL_RECOVERY)) 226 - goto out_unlock; 227 - 253 + if (scsi_host_set_state(shost, SHOST_RECOVERY)) { 254 + ret = scsi_host_set_state(shost, SHOST_CANCEL_RECOVERY); 255 + WARN_ON_ONCE(ret); 256 + } 228 257 if (shost->eh_deadline != -1 && !shost->last_reset) 229 258 shost->last_reset = jiffies; 230 259 231 - ret = 1; 232 - if (scmd->eh_eflags & SCSI_EH_ABORT_SCHEDULED) 233 - eh_flag &= ~SCSI_EH_CANCEL_CMD; 234 - scmd->eh_eflags |= eh_flag; 260 + scsi_eh_reset(scmd); 235 261 list_add_tail(&scmd->eh_entry, &shost->eh_cmd_q); 236 262 shost->host_failed++; 237 263 scsi_eh_wakeup(shost); 238 - out_unlock: 239 264 spin_unlock_irqrestore(shost->host_lock, flags); 240 - return ret; 241 265 } 242 266 243 267 /** ··· 273 283 rtn = host->hostt->eh_timed_out(scmd); 274 284 275 285 if (rtn == BLK_EH_NOT_HANDLED) { 276 - if (!host->hostt->no_async_abort && 277 - scsi_abort_command(scmd) == SUCCESS) 278 - return BLK_EH_NOT_HANDLED; 279 - 280 - set_host_byte(scmd, DID_TIME_OUT); 281 - if (!scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD)) 282 - rtn = BLK_EH_HANDLED; 286 + if (scsi_abort_command(scmd) != SUCCESS) { 287 + set_host_byte(scmd, DID_TIME_OUT); 288 + scsi_eh_scmd_add(scmd); 289 + } 283 290 } 284 291 285 292 return rtn; ··· 328 341 list_for_each_entry(scmd, work_q, eh_entry) { 329 342 if (scmd->device == sdev) { 330 343 ++total_failures; 331 - if 
(scmd->eh_eflags & SCSI_EH_CANCEL_CMD) 344 + if (scmd->eh_eflags & SCSI_EH_ABORT_SCHEDULED) 332 345 ++cmd_cancel; 333 346 else 334 347 ++cmd_failed; ··· 918 931 ses->result = scmd->result; 919 932 ses->underflow = scmd->underflow; 920 933 ses->prot_op = scmd->prot_op; 934 + ses->eh_eflags = scmd->eh_eflags; 921 935 922 936 scmd->prot_op = SCSI_PROT_NORMAL; 923 937 scmd->eh_eflags = 0; ··· 982 994 scmd->result = ses->result; 983 995 scmd->underflow = ses->underflow; 984 996 scmd->prot_op = ses->prot_op; 997 + scmd->eh_eflags = ses->eh_eflags; 985 998 } 986 999 EXPORT_SYMBOL(scsi_eh_restore_cmnd); 987 1000 ··· 1115 1126 */ 1116 1127 void scsi_eh_finish_cmd(struct scsi_cmnd *scmd, struct list_head *done_q) 1117 1128 { 1118 - scmd->eh_eflags = 0; 1119 1129 list_move_tail(&scmd->eh_entry, done_q); 1120 1130 } 1121 1131 EXPORT_SYMBOL(scsi_eh_finish_cmd); ··· 1151 1163 * should not get sense. 1152 1164 */ 1153 1165 list_for_each_entry_safe(scmd, next, work_q, eh_entry) { 1154 - if ((scmd->eh_eflags & SCSI_EH_CANCEL_CMD) || 1155 - (scmd->eh_eflags & SCSI_EH_ABORT_SCHEDULED) || 1166 + if ((scmd->eh_eflags & SCSI_EH_ABORT_SCHEDULED) || 1156 1167 SCSI_SENSE_VALID(scmd)) 1157 1168 continue; 1158 1169 ··· 1289 1302 } 1290 1303 } 1291 1304 return list_empty(work_q); 1292 - } 1293 - 1294 - 1295 - /** 1296 - * scsi_eh_abort_cmds - abort pending commands. 1297 - * @work_q: &list_head for pending commands. 1298 - * @done_q: &list_head for processed commands. 1299 - * 1300 - * Decription: 1301 - * Try and see whether or not it makes sense to try and abort the 1302 - * running command. This only works out to be the case if we have one 1303 - * command that has timed out. If the command simply failed, it makes 1304 - * no sense to try and abort the command, since as far as the shost 1305 - * adapter is concerned, it isn't running. 
1306 - */ 1307 - static int scsi_eh_abort_cmds(struct list_head *work_q, 1308 - struct list_head *done_q) 1309 - { 1310 - struct scsi_cmnd *scmd, *next; 1311 - LIST_HEAD(check_list); 1312 - int rtn; 1313 - struct Scsi_Host *shost; 1314 - 1315 - list_for_each_entry_safe(scmd, next, work_q, eh_entry) { 1316 - if (!(scmd->eh_eflags & SCSI_EH_CANCEL_CMD)) 1317 - continue; 1318 - shost = scmd->device->host; 1319 - if (scsi_host_eh_past_deadline(shost)) { 1320 - list_splice_init(&check_list, work_q); 1321 - SCSI_LOG_ERROR_RECOVERY(3, 1322 - scmd_printk(KERN_INFO, scmd, 1323 - "%s: skip aborting cmd, past eh deadline\n", 1324 - current->comm)); 1325 - return list_empty(work_q); 1326 - } 1327 - SCSI_LOG_ERROR_RECOVERY(3, 1328 - scmd_printk(KERN_INFO, scmd, 1329 - "%s: aborting cmd\n", current->comm)); 1330 - rtn = scsi_try_to_abort_cmd(shost->hostt, scmd); 1331 - if (rtn == FAILED) { 1332 - SCSI_LOG_ERROR_RECOVERY(3, 1333 - scmd_printk(KERN_INFO, scmd, 1334 - "%s: aborting cmd failed\n", 1335 - current->comm)); 1336 - list_splice_init(&check_list, work_q); 1337 - return list_empty(work_q); 1338 - } 1339 - scmd->eh_eflags &= ~SCSI_EH_CANCEL_CMD; 1340 - if (rtn == FAST_IO_FAIL) 1341 - scsi_eh_finish_cmd(scmd, done_q); 1342 - else 1343 - list_move_tail(&scmd->eh_entry, &check_list); 1344 - } 1345 - 1346 - return scsi_eh_test_devices(&check_list, work_q, done_q, 0); 1347 1305 } 1348 1306 1349 1307 /** ··· 1633 1701 sdev_printk(KERN_INFO, scmd->device, "Device offlined - " 1634 1702 "not ready after error recovery\n"); 1635 1703 scsi_device_set_state(scmd->device, SDEV_OFFLINE); 1636 - if (scmd->eh_eflags & SCSI_EH_CANCEL_CMD) { 1637 - /* 1638 - * FIXME: Handle lost cmds. 
1639 - */ 1640 - } 1641 1704 scsi_eh_finish_cmd(scmd, done_q); 1642 1705 } 1643 1706 return; ··· 2076 2149 SCSI_LOG_ERROR_RECOVERY(1, scsi_eh_prt_fail_stats(shost, &eh_work_q)); 2077 2150 2078 2151 if (!scsi_eh_get_sense(&eh_work_q, &eh_done_q)) 2079 - if (!scsi_eh_abort_cmds(&eh_work_q, &eh_done_q)) 2080 - scsi_eh_ready_devs(shost, &eh_work_q, &eh_done_q); 2152 + scsi_eh_ready_devs(shost, &eh_work_q, &eh_done_q); 2081 2153 2082 2154 spin_lock_irqsave(shost->host_lock, flags); 2083 2155 if (shost->eh_deadline != -1) ··· 2363 2437 * field will be placed if found. 2364 2438 * 2365 2439 * Return value: 2366 - * 1 if information field found, 0 if not found. 2440 + * true if information field found, false if not found. 2367 2441 */ 2368 - int scsi_get_sense_info_fld(const u8 * sense_buffer, int sb_len, 2369 - u64 * info_out) 2442 + bool scsi_get_sense_info_fld(const u8 *sense_buffer, int sb_len, 2443 + u64 *info_out) 2370 2444 { 2371 - int j; 2372 2445 const u8 * ucp; 2373 - u64 ull; 2374 2446 2375 2447 if (sb_len < 7) 2376 - return 0; 2448 + return false; 2377 2449 switch (sense_buffer[0] & 0x7f) { 2378 2450 case 0x70: 2379 2451 case 0x71: 2380 2452 if (sense_buffer[0] & 0x80) { 2381 - *info_out = (sense_buffer[3] << 24) + 2382 - (sense_buffer[4] << 16) + 2383 - (sense_buffer[5] << 8) + sense_buffer[6]; 2384 - return 1; 2385 - } else 2386 - return 0; 2453 + *info_out = get_unaligned_be32(&sense_buffer[3]); 2454 + return true; 2455 + } 2456 + return false; 2387 2457 case 0x72: 2388 2458 case 0x73: 2389 2459 ucp = scsi_sense_desc_find(sense_buffer, sb_len, 2390 2460 0 /* info desc */); 2391 2461 if (ucp && (0xa == ucp[1])) { 2392 - ull = 0; 2393 - for (j = 0; j < 8; ++j) { 2394 - if (j > 0) 2395 - ull <<= 8; 2396 - ull |= ucp[4 + j]; 2397 - } 2398 - *info_out = ull; 2399 - return 1; 2400 - } else 2401 - return 0; 2462 + *info_out = get_unaligned_be64(&ucp[4]); 2463 + return true; 2464 + } 2465 + return false; 2402 2466 default: 2403 - return 0; 2467 + return false; 2404 
2468 } 2405 2469 } 2406 2470 EXPORT_SYMBOL(scsi_get_sense_info_fld);
+2 -2
drivers/scsi/scsi_lib.c
··· 1593 1593 scsi_queue_insert(cmd, SCSI_MLQUEUE_DEVICE_BUSY); 1594 1594 break; 1595 1595 default: 1596 - if (!scsi_eh_scmd_add(cmd, 0)) 1597 - scsi_finish_command(cmd); 1596 + scsi_eh_scmd_add(cmd); 1597 + break; 1598 1598 } 1599 1599 } 1600 1600
+1 -2
drivers/scsi/scsi_priv.h
··· 18 18 /* 19 19 * Scsi Error Handler Flags 20 20 */ 21 - #define SCSI_EH_CANCEL_CMD 0x0001 /* Cancel this cmd */ 22 21 #define SCSI_EH_ABORT_SCHEDULED 0x0002 /* Abort has been scheduled */ 23 22 24 23 #define SCSI_SENSE_VALID(scmd) \ ··· 71 72 extern int scsi_error_handler(void *host); 72 73 extern int scsi_decide_disposition(struct scsi_cmnd *cmd); 73 74 extern void scsi_eh_wakeup(struct Scsi_Host *shost); 74 - extern int scsi_eh_scmd_add(struct scsi_cmnd *, int); 75 + extern void scsi_eh_scmd_add(struct scsi_cmnd *); 75 76 void scsi_eh_ready_devs(struct Scsi_Host *shost, 76 77 struct list_head *work_q, 77 78 struct list_head *done_q);
+7 -5
drivers/scsi/scsi_transport_fc.c
··· 289 289 u32 value; 290 290 char *name; 291 291 } fc_port_role_names[] = { 292 - { FC_PORT_ROLE_FCP_TARGET, "FCP Target" }, 293 - { FC_PORT_ROLE_FCP_INITIATOR, "FCP Initiator" }, 294 - { FC_PORT_ROLE_IP_PORT, "IP Port" }, 292 + { FC_PORT_ROLE_FCP_TARGET, "FCP Target" }, 293 + { FC_PORT_ROLE_FCP_INITIATOR, "FCP Initiator" }, 294 + { FC_PORT_ROLE_IP_PORT, "IP Port" }, 295 + { FC_PORT_ROLE_FCP_DUMMY_INITIATOR, "FCP Dummy Initiator" }, 295 296 }; 296 297 fc_bitfield_name_search(port_roles, fc_port_role_names) 297 298 ··· 851 850 char *cp; 852 851 853 852 *val = simple_strtoul(buf, &cp, 0); 854 - if ((*cp && (*cp != '\n')) || (*val < 0)) 853 + if (*cp && (*cp != '\n')) 855 854 return -EINVAL; 856 855 /* 857 856 * Check for overflow; dev_loss_tmo is u32 ··· 2629 2628 spin_lock_irqsave(shost->host_lock, flags); 2630 2629 2631 2630 rport->number = fc_host->next_rport_number++; 2632 - if (rport->roles & FC_PORT_ROLE_FCP_TARGET) 2631 + if ((rport->roles & FC_PORT_ROLE_FCP_TARGET) || 2632 + (rport->roles & FC_PORT_ROLE_FCP_DUMMY_INITIATOR)) 2633 2633 rport->scsi_target_id = fc_host->next_target_id++; 2634 2634 else 2635 2635 rport->scsi_target_id = -1;
+1 -2
drivers/scsi/scsi_transport_iscsi.c
··· 2158 2158 2159 2159 void iscsi_remove_session(struct iscsi_cls_session *session) 2160 2160 { 2161 - struct Scsi_Host *shost = iscsi_session_to_shost(session); 2162 2161 unsigned long flags; 2163 2162 int err; 2164 2163 ··· 2184 2185 2185 2186 scsi_target_unblock(&session->dev, SDEV_TRANSPORT_OFFLINE); 2186 2187 /* flush running scans then delete devices */ 2187 - scsi_flush_work(shost); 2188 + flush_work(&session->scan_work); 2188 2189 __iscsi_unbind_session(&session->unbind_work); 2189 2190 2190 2191 /* hw iscsi may not have removed all connections from session */
+6 -2
drivers/scsi/scsi_transport_sas.c
··· 370 370 * sas_remove_host - tear down a Scsi_Host's SAS data structures 371 371 * @shost: Scsi Host that is torn down 372 372 * 373 - * Removes all SAS PHYs and remote PHYs for a given Scsi_Host. 374 - * Must be called just before scsi_remove_host for SAS HBAs. 373 + * Removes all SAS PHYs and remote PHYs for a given Scsi_Host and removes the 374 + * Scsi_Host as well. 375 + * 376 + * Note: Do not call scsi_remove_host() on the Scsi_Host any more, as it is 377 + * already removed. 375 378 */ 376 379 void sas_remove_host(struct Scsi_Host *shost) 377 380 { 378 381 sas_remove_children(&shost->shost_gendev); 382 + scsi_remove_host(shost); 379 383 } 380 384 EXPORT_SYMBOL(sas_remove_host); 381 385
+81 -54
drivers/scsi/sd.c
··· 115 115 static int sd_init_command(struct scsi_cmnd *SCpnt); 116 116 static void sd_uninit_command(struct scsi_cmnd *SCpnt); 117 117 static int sd_done(struct scsi_cmnd *); 118 + static void sd_eh_reset(struct scsi_cmnd *); 118 119 static int sd_eh_action(struct scsi_cmnd *, int); 119 120 static void sd_read_capacity(struct scsi_disk *sdkp, unsigned char *buffer); 120 121 static void scsi_disk_release(struct device *cdev); ··· 574 573 .uninit_command = sd_uninit_command, 575 574 .done = sd_done, 576 575 .eh_action = sd_eh_action, 576 + .eh_reset = sd_eh_reset, 577 577 }; 578 578 579 579 /* ··· 890 888 * sd_setup_write_same_cmnd - write the same data to multiple blocks 891 889 * @cmd: command to prepare 892 890 * 893 - * Will issue either WRITE SAME(10) or WRITE SAME(16) depending on 894 - * preference indicated by target device. 891 + * Will set up either WRITE SAME(10) or WRITE SAME(16) depending on 892 + * the preference indicated by the target device. 895 893 **/ 896 894 static int sd_setup_write_same_cmnd(struct scsi_cmnd *cmd) 897 895 { ··· 910 908 BUG_ON(bio_offset(bio) || bio_iovec(bio).bv_len != sdp->sector_size); 911 909 912 910 if (sd_is_zoned(sdkp)) { 913 - ret = sd_zbc_setup_write_cmnd(cmd); 911 + ret = sd_zbc_write_lock_zone(cmd); 914 912 if (ret != BLKPREP_OK) 915 913 return ret; 916 914 } ··· 982 980 unsigned char protect; 983 981 984 982 if (zoned_write) { 985 - ret = sd_zbc_setup_write_cmnd(SCpnt); 983 + ret = sd_zbc_write_lock_zone(SCpnt); 986 984 if (ret != BLKPREP_OK) 987 985 return ret; 988 986 } ··· 1209 1207 ret = BLKPREP_OK; 1210 1208 out: 1211 1209 if (zoned_write && ret != BLKPREP_OK) 1212 - sd_zbc_cancel_write_cmnd(SCpnt); 1210 + sd_zbc_write_unlock_zone(SCpnt); 1213 1211 1214 1212 return ret; 1215 1213 } ··· 1266 1264 1267 1265 /** 1268 1266 * sd_open - open a scsi disk device 1269 - * @inode: only i_rdev member may be used 1270 - * @filp: only f_mode and f_flags may be used 1267 + * @bdev: Block device of the scsi disk to open 1268 
+ * @mode: FMODE_* mask 1271 1269 * 1272 1270 * Returns 0 if successful. Returns a negated errno value in case 1273 1271 * of error. ··· 1343 1341 /** 1344 1342 * sd_release - invoked when the (last) close(2) is called on this 1345 1343 * scsi disk. 1346 - * @inode: only i_rdev member may be used 1347 - * @filp: only f_mode and f_flags may be used 1344 + * @disk: disk to release 1345 + * @mode: FMODE_* mask 1348 1346 * 1349 1347 * Returns 0. 1350 1348 * ··· 1400 1398 1401 1399 /** 1402 1400 * sd_ioctl - process an ioctl 1403 - * @inode: only i_rdev/i_bdev members may be used 1404 - * @filp: only f_mode and f_flags may be used 1401 + * @bdev: target block device 1402 + * @mode: FMODE_* mask 1405 1403 * @cmd: ioctl command number 1406 1404 * @arg: this is third argument given to ioctl(2) system call. 1407 1405 * Often contains a pointer. ··· 1764 1762 }; 1765 1763 1766 1764 /** 1765 + * sd_eh_reset - reset error handling callback 1766 + * @scmd: sd-issued command that has failed 1767 + * 1768 + * This function is called by the SCSI midlayer before starting 1769 + * SCSI EH. When counting medium access failures we have to be 1770 + * careful to register it only once per device and SCSI EH run; 1771 + * there might be several timed out commands which will cause the 1772 + * 'max_medium_access_timeouts' counter to trigger after the first 1773 + * SCSI EH run already and set the device to offline. 1774 + * So this function resets the internal counter before starting SCSI EH. 
1775 + **/ 1776 + static void sd_eh_reset(struct scsi_cmnd *scmd) 1777 + { 1778 + struct scsi_disk *sdkp = scsi_disk(scmd->request->rq_disk); 1779 + 1780 + /* New SCSI EH run, reset gate variable */ 1781 + sdkp->ignore_medium_access_errors = false; 1782 + } 1783 + 1784 + /** 1767 1785 * sd_eh_action - error handling callback 1768 1786 * @scmd: sd-issued command that has failed 1769 1787 * @eh_disp: The recovery disposition suggested by the midlayer ··· 1812 1790 * process of recovering or has it suffered an internal failure 1813 1791 * that prevents access to the storage medium. 1814 1792 */ 1815 - sdkp->medium_access_timed_out++; 1793 + if (!sdkp->ignore_medium_access_errors) { 1794 + sdkp->medium_access_timed_out++; 1795 + sdkp->ignore_medium_access_errors = true; 1796 + } 1816 1797 1817 1798 /* 1818 1799 * If the device keeps failing read/write commands but TEST UNIT ··· 1827 1802 "Medium access timeout failure. Offlining disk!\n"); 1828 1803 scsi_device_set_state(scmd->device, SDEV_OFFLINE); 1829 1804 1830 - return FAILED; 1805 + return SUCCESS; 1831 1806 } 1832 1807 1833 1808 return eh_disp; ··· 1835 1810 1836 1811 static unsigned int sd_completed_bytes(struct scsi_cmnd *scmd) 1837 1812 { 1838 - u64 start_lba = blk_rq_pos(scmd->request); 1839 - u64 end_lba = blk_rq_pos(scmd->request) + (scsi_bufflen(scmd) / 512); 1840 - u64 factor = scmd->device->sector_size / 512; 1841 - u64 bad_lba; 1842 - int info_valid; 1813 + struct request *req = scmd->request; 1814 + struct scsi_device *sdev = scmd->device; 1815 + unsigned int transferred, good_bytes; 1816 + u64 start_lba, end_lba, bad_lba; 1817 + 1818 + /* 1819 + * Some commands have a payload smaller than the device logical 1820 + * block size (e.g. INQUIRY on a 4K disk). 
1821 + */ 1822 + if (scsi_bufflen(scmd) <= sdev->sector_size) 1823 + return 0; 1824 + 1825 + /* Check if we have a 'bad_lba' information */ 1826 + if (!scsi_get_sense_info_fld(scmd->sense_buffer, 1827 + SCSI_SENSE_BUFFERSIZE, 1828 + &bad_lba)) 1829 + return 0; 1830 + 1831 + /* 1832 + * If the bad lba was reported incorrectly, we have no idea where 1833 + * the error is. 1834 + */ 1835 + start_lba = sectors_to_logical(sdev, blk_rq_pos(req)); 1836 + end_lba = start_lba + bytes_to_logical(sdev, scsi_bufflen(scmd)); 1837 + if (bad_lba < start_lba || bad_lba >= end_lba) 1838 + return 0; 1839 + 1843 1840 /* 1844 1841 * resid is optional but mostly filled in. When it's unused, 1845 1842 * its value is zero, so we assume the whole buffer transferred 1846 1843 */ 1847 - unsigned int transferred = scsi_bufflen(scmd) - scsi_get_resid(scmd); 1848 - unsigned int good_bytes; 1844 + transferred = scsi_bufflen(scmd) - scsi_get_resid(scmd); 1849 1845 1850 - info_valid = scsi_get_sense_info_fld(scmd->sense_buffer, 1851 - SCSI_SENSE_BUFFERSIZE, 1852 - &bad_lba); 1853 - if (!info_valid) 1854 - return 0; 1855 - 1856 - if (scsi_bufflen(scmd) <= scmd->device->sector_size) 1857 - return 0; 1858 - 1859 - /* be careful ... don't want any overflows */ 1860 - do_div(start_lba, factor); 1861 - do_div(end_lba, factor); 1862 - 1863 - /* The bad lba was reported incorrectly, we have no idea where 1864 - * the error is. 1846 + /* This computation should always be done in terms of the 1847 + * resolution of the device's medium. 1865 1848 */ 1866 - if (bad_lba < start_lba || bad_lba >= end_lba) 1867 - return 0; 1849 + good_bytes = logical_to_bytes(sdev, bad_lba - start_lba); 1868 1850 1869 - /* This computation should always be done in terms of 1870 - * the resolution of the device's medium. 
1871 - */ 1872 - good_bytes = (bad_lba - start_lba) * scmd->device->sector_size; 1873 1851 return min(good_bytes, transferred); 1874 1852 } 1875 1853 ··· 1894 1866 struct request *req = SCpnt->request; 1895 1867 int sense_valid = 0; 1896 1868 int sense_deferred = 0; 1897 - unsigned char op = SCpnt->cmnd[0]; 1898 - unsigned char unmap = SCpnt->cmnd[1] & 8; 1899 1869 1900 1870 switch (req_op(req)) { 1901 1871 case REQ_OP_DISCARD: ··· 1967 1941 good_bytes = sd_completed_bytes(SCpnt); 1968 1942 break; 1969 1943 case ILLEGAL_REQUEST: 1970 - if (sshdr.asc == 0x10) /* DIX: Host detected corruption */ 1944 + switch (sshdr.asc) { 1945 + case 0x10: /* DIX: Host detected corruption */ 1971 1946 good_bytes = sd_completed_bytes(SCpnt); 1972 - /* INVALID COMMAND OPCODE or INVALID FIELD IN CDB */ 1973 - if (sshdr.asc == 0x20 || sshdr.asc == 0x24) { 1974 - switch (op) { 1947 + break; 1948 + case 0x20: /* INVALID COMMAND OPCODE */ 1949 + case 0x24: /* INVALID FIELD IN CDB */ 1950 + switch (SCpnt->cmnd[0]) { 1975 1951 case UNMAP: 1976 1952 sd_config_discard(sdkp, SD_LBP_DISABLE); 1977 1953 break; 1978 1954 case WRITE_SAME_16: 1979 1955 case WRITE_SAME: 1980 - if (unmap) 1956 + if (SCpnt->cmnd[1] & 8) { /* UNMAP */ 1981 1957 sd_config_discard(sdkp, SD_LBP_DISABLE); 1982 - else { 1958 + } else { 1983 1959 sdkp->device->no_write_same = 1; 1984 1960 sd_config_write_same(sdkp); 1985 - 1986 - good_bytes = 0; 1987 1961 req->__data_len = blk_rq_bytes(req); 1988 1962 req->rq_flags |= RQF_QUIET; 1989 1963 } 1964 + break; 1990 1965 } 1991 1966 } 1992 1967 break; ··· 2825 2798 2826 2799 /** 2827 2800 * sd_read_block_limits - Query disk device for preferred I/O sizes. 2828 - * @disk: disk to query 2801 + * @sdkp: disk to query 2829 2802 */ 2830 2803 static void sd_read_block_limits(struct scsi_disk *sdkp) 2831 2804 { ··· 2891 2864 2892 2865 /** 2893 2866 * sd_read_block_characteristics - Query block dev. 
characteristics 2894 - * @disk: disk to query 2867 + * @sdkp: disk to query 2895 2868 */ 2896 2869 static void sd_read_block_characteristics(struct scsi_disk *sdkp) 2897 2870 { ··· 2939 2912 2940 2913 /** 2941 2914 * sd_read_block_provisioning - Query provisioning VPD page 2942 - * @disk: disk to query 2915 + * @sdkp: disk to query 2943 2916 */ 2944 2917 static void sd_read_block_provisioning(struct scsi_disk *sdkp) 2945 2918 {
+10 -4
drivers/scsi/sd.h
··· 114 114 unsigned rc_basis: 2; 115 115 unsigned zoned: 2; 116 116 unsigned urswrz : 1; 117 + unsigned ignore_medium_access_errors : 1; 117 118 }; 118 119 #define to_scsi_disk(obj) container_of(obj,struct scsi_disk,dev) 119 120 ··· 175 174 static inline unsigned int logical_to_bytes(struct scsi_device *sdev, sector_t blocks) 176 175 { 177 176 return blocks * sdev->sector_size; 177 + } 178 + 179 + static inline sector_t bytes_to_logical(struct scsi_device *sdev, unsigned int bytes) 180 + { 181 + return bytes >> ilog2(sdev->sector_size); 178 182 } 179 183 180 184 static inline sector_t sectors_to_logical(struct scsi_device *sdev, sector_t sector) ··· 280 274 extern int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buffer); 281 275 extern void sd_zbc_remove(struct scsi_disk *sdkp); 282 276 extern void sd_zbc_print_zones(struct scsi_disk *sdkp); 283 - extern int sd_zbc_setup_write_cmnd(struct scsi_cmnd *cmd); 284 - extern void sd_zbc_cancel_write_cmnd(struct scsi_cmnd *cmd); 277 + extern int sd_zbc_write_lock_zone(struct scsi_cmnd *cmd); 278 + extern void sd_zbc_write_unlock_zone(struct scsi_cmnd *cmd); 285 279 extern int sd_zbc_setup_report_cmnd(struct scsi_cmnd *cmd); 286 280 extern int sd_zbc_setup_reset_cmnd(struct scsi_cmnd *cmd); 287 281 extern void sd_zbc_complete(struct scsi_cmnd *cmd, unsigned int good_bytes, ··· 299 293 300 294 static inline void sd_zbc_print_zones(struct scsi_disk *sdkp) {} 301 295 302 - static inline int sd_zbc_setup_write_cmnd(struct scsi_cmnd *cmd) 296 + static inline int sd_zbc_write_lock_zone(struct scsi_cmnd *cmd) 303 297 { 304 298 /* Let the drive fail requests */ 305 299 return BLKPREP_OK; 306 300 } 307 301 308 - static inline void sd_zbc_cancel_write_cmnd(struct scsi_cmnd *cmd) {} 302 + static inline void sd_zbc_write_unlock_zone(struct scsi_cmnd *cmd) {} 309 303 310 304 static inline int sd_zbc_setup_report_cmnd(struct scsi_cmnd *cmd) 311 305 {
+21 -37
drivers/scsi/sd_zbc.c
··· 237 237 struct scsi_disk *sdkp = scsi_disk(rq->rq_disk); 238 238 sector_t sector = blk_rq_pos(rq); 239 239 sector_t block = sectors_to_logical(sdkp->device, sector); 240 - unsigned int zno = block >> sdkp->zone_shift; 241 240 242 241 if (!sd_is_zoned(sdkp)) 243 242 /* Not a zoned device */ ··· 248 249 if (sector & (sd_zbc_zone_sectors(sdkp) - 1)) 249 250 /* Unaligned request */ 250 251 return BLKPREP_KILL; 251 - 252 - /* Do not allow concurrent reset and writes */ 253 - if (sdkp->zones_wlock && 254 - test_and_set_bit(zno, sdkp->zones_wlock)) 255 - return BLKPREP_DEFER; 256 252 257 253 cmd->cmd_len = 16; 258 254 memset(cmd->cmnd, 0, cmd->cmd_len); ··· 263 269 return BLKPREP_OK; 264 270 } 265 271 266 - int sd_zbc_setup_write_cmnd(struct scsi_cmnd *cmd) 272 + int sd_zbc_write_lock_zone(struct scsi_cmnd *cmd) 267 273 { 268 274 struct request *rq = cmd->request; 269 275 struct scsi_disk *sdkp = scsi_disk(rq->rq_disk); ··· 297 303 return BLKPREP_OK; 298 304 } 299 305 300 - static void sd_zbc_unlock_zone(struct request *rq) 306 + void sd_zbc_write_unlock_zone(struct scsi_cmnd *cmd) 301 307 { 308 + struct request *rq = cmd->request; 302 309 struct scsi_disk *sdkp = scsi_disk(rq->rq_disk); 303 310 304 311 if (sdkp->zones_wlock) { ··· 310 315 } 311 316 } 312 317 313 - void sd_zbc_cancel_write_cmnd(struct scsi_cmnd *cmd) 314 - { 315 - sd_zbc_unlock_zone(cmd->request); 316 - } 317 - 318 318 void sd_zbc_complete(struct scsi_cmnd *cmd, 319 319 unsigned int good_bytes, 320 320 struct scsi_sense_hdr *sshdr) ··· 318 328 struct request *rq = cmd->request; 319 329 320 330 switch (req_op(rq)) { 331 + case REQ_OP_ZONE_RESET: 332 + 333 + if (result && 334 + sshdr->sense_key == ILLEGAL_REQUEST && 335 + sshdr->asc == 0x24) 336 + /* 337 + * INVALID FIELD IN CDB error: reset of a conventional 338 + * zone was attempted. Nothing to worry about, so be 339 + * quiet about the error. 
340 + */ 341 + rq->rq_flags |= RQF_QUIET; 342 + break; 343 + 321 344 case REQ_OP_WRITE: 322 345 case REQ_OP_WRITE_ZEROES: 323 346 case REQ_OP_WRITE_SAME: 324 - case REQ_OP_ZONE_RESET: 325 347 326 348 /* Unlock the zone */ 327 - sd_zbc_unlock_zone(rq); 349 + sd_zbc_write_unlock_zone(cmd); 328 350 329 - if (!result || 330 - sshdr->sense_key != ILLEGAL_REQUEST) 331 - break; 332 - 333 - switch (sshdr->asc) { 334 - case 0x24: 335 - /* 336 - * INVALID FIELD IN CDB error: For a zone reset, 337 - * this means that a reset of a conventional 338 - * zone was attempted. Nothing to worry about in 339 - * this case, so be quiet about the error. 340 - */ 341 - if (req_op(rq) == REQ_OP_ZONE_RESET) 342 - rq->rq_flags |= RQF_QUIET; 343 - break; 344 - case 0x21: 351 + if (result && 352 + sshdr->sense_key == ILLEGAL_REQUEST && 353 + sshdr->asc == 0x21) 345 354 /* 346 355 * INVALID ADDRESS FOR WRITE error: It is unlikely that 347 356 * retrying write requests failed with any kind of 348 357 * alignement error will result in success. So don't. 349 358 */ 350 359 cmd->allowed = 0; 351 - break; 352 - } 353 - 354 360 break; 355 361 356 362 case REQ_OP_ZONE_REPORT: ··· 551 565 int sd_zbc_read_zones(struct scsi_disk *sdkp, 552 566 unsigned char *buf) 553 567 { 554 - sector_t capacity; 555 - int ret = 0; 568 + int ret; 556 569 557 570 if (!sd_is_zoned(sdkp)) 558 571 /* ··· 583 598 ret = sd_zbc_check_capacity(sdkp, buf); 584 599 if (ret) 585 600 goto err; 586 - capacity = logical_to_sectors(sdkp->device, sdkp->capacity); 587 601 588 602 /* 589 603 * Check zone size: only devices with a constant zone size (except
-1
drivers/scsi/ses.c
··· 548 548 ecomp = &edev->component[components++]; 549 549 550 550 if (!IS_ERR(ecomp)) { 551 - ses_get_power_status(edev, ecomp); 552 551 if (addl_desc_ptr) 553 552 ses_process_descriptor( 554 553 ecomp,
+141 -143
drivers/scsi/sg.c
··· 122 122 struct sg_fd; 123 123 124 124 typedef struct sg_request { /* SG_MAX_QUEUE requests outstanding per file */ 125 - struct sg_request *nextrp; /* NULL -> tail request (slist) */ 125 + struct list_head entry; /* list entry */ 126 126 struct sg_fd *parentfp; /* NULL -> not in use */ 127 127 Sg_scatter_hold data; /* hold buffer, perhaps scatter list */ 128 128 sg_io_hdr_t header; /* scsi command+info, see <scsi/sg.h> */ ··· 142 142 struct sg_device *parentdp; /* owning device */ 143 143 wait_queue_head_t read_wait; /* queue read until command done */ 144 144 rwlock_t rq_list_lock; /* protect access to list in req_arr */ 145 + struct mutex f_mutex; /* protect against changes in this fd */ 145 146 int timeout; /* defaults to SG_DEFAULT_TIMEOUT */ 146 147 int timeout_user; /* defaults to SG_DEFAULT_TIMEOUT_USER */ 147 148 Sg_scatter_hold reserve; /* buffer held for this file descriptor */ 148 - unsigned save_scat_len; /* original length of trunc. scat. element */ 149 - Sg_request *headrp; /* head of request slist, NULL->empty */ 149 + struct list_head rq_list; /* head of request list */ 150 150 struct fasync_struct *async_qp; /* used by asynchronous notification */ 151 151 Sg_request req_arr[SG_MAX_QUEUE]; /* used as singly-linked list */ 152 - char low_dma; /* as in parent but possibly overridden to 1 */ 153 152 char force_packid; /* 1 -> pack_id input to read(), 0 -> ignored */ 154 153 char cmd_q; /* 1 -> allow command queuing, 0 -> don't */ 155 154 unsigned char next_cmd_len; /* 0: automatic, >0: use on next write() */ 156 155 char keep_orphan; /* 0 -> drop orphan (def), 1 -> keep for read() */ 157 156 char mmap_called; /* 0 -> mmap() never called on this fd */ 157 + char res_in_use; /* 1 -> 'reserve' array in use */ 158 158 struct kref f_ref; 159 159 struct execute_work ew; 160 160 } Sg_fd; ··· 198 198 static Sg_request *sg_get_rq_mark(Sg_fd * sfp, int pack_id); 199 199 static Sg_request *sg_add_request(Sg_fd * sfp); 200 200 static int 
sg_remove_request(Sg_fd * sfp, Sg_request * srp); 201 - static int sg_res_in_use(Sg_fd * sfp); 202 201 static Sg_device *sg_get_dev(int dev); 203 202 static void sg_device_destroy(struct kref *kref); 204 203 ··· 524 525 } else 525 526 count = (old_hdr->result == 0) ? 0 : -EIO; 526 527 sg_finish_rem_req(srp); 528 + sg_remove_request(sfp, srp); 527 529 retval = count; 528 530 free_old_hdr: 529 531 kfree(old_hdr); ··· 565 565 } 566 566 err_out: 567 567 err2 = sg_finish_rem_req(srp); 568 + sg_remove_request(sfp, srp); 568 569 return err ? : err2 ? : count; 569 570 } 570 571 ··· 615 614 } 616 615 buf += SZ_SG_HEADER; 617 616 __get_user(opcode, buf); 617 + mutex_lock(&sfp->f_mutex); 618 618 if (sfp->next_cmd_len > 0) { 619 619 cmd_size = sfp->next_cmd_len; 620 620 sfp->next_cmd_len = 0; /* reset so only this write() effected */ ··· 624 622 if ((opcode >= 0xc0) && old_hdr.twelve_byte) 625 623 cmd_size = 12; 626 624 } 625 + mutex_unlock(&sfp->f_mutex); 627 626 SCSI_LOG_TIMEOUT(4, sg_printk(KERN_INFO, sdp, 628 627 "sg_write: scsi opcode=0x%02x, cmd_size=%d\n", (int) opcode, cmd_size)); 629 628 /* Determine buffer size. */ ··· 665 662 * is a non-zero input_size, so emit a warning. 
666 663 */ 667 664 if (hp->dxfer_direction == SG_DXFER_TO_FROM_DEV) { 668 - static char cmd[TASK_COMM_LEN]; 669 - if (strcmp(current->comm, cmd)) { 670 - printk_ratelimited(KERN_WARNING 671 - "sg_write: data in/out %d/%d bytes " 672 - "for SCSI command 0x%x-- guessing " 673 - "data in;\n program %s not setting " 674 - "count and/or reply_len properly\n", 675 - old_hdr.reply_len - (int)SZ_SG_HEADER, 676 - input_size, (unsigned int) cmnd[0], 677 - current->comm); 678 - strcpy(cmd, current->comm); 679 - } 665 + printk_ratelimited(KERN_WARNING 666 + "sg_write: data in/out %d/%d bytes " 667 + "for SCSI command 0x%x-- guessing " 668 + "data in;\n program %s not setting " 669 + "count and/or reply_len properly\n", 670 + old_hdr.reply_len - (int)SZ_SG_HEADER, 671 + input_size, (unsigned int) cmnd[0], 672 + current->comm); 680 673 } 681 674 k = sg_common_write(sfp, srp, cmnd, sfp->timeout, blocking); 682 675 return (k < 0) ? k : count; ··· 720 721 sg_remove_request(sfp, srp); 721 722 return -EINVAL; /* either MMAP_IO or DIRECT_IO (not both) */ 722 723 } 723 - if (sg_res_in_use(sfp)) { 724 + if (sfp->res_in_use) { 724 725 sg_remove_request(sfp, srp); 725 726 return -EBUSY; /* reserve buffer already being used */ 726 727 } ··· 751 752 return count; 752 753 } 753 754 755 + static bool sg_is_valid_dxfer(sg_io_hdr_t *hp) 756 + { 757 + switch (hp->dxfer_direction) { 758 + case SG_DXFER_NONE: 759 + if (hp->dxferp || hp->dxfer_len > 0) 760 + return false; 761 + return true; 762 + case SG_DXFER_TO_DEV: 763 + case SG_DXFER_FROM_DEV: 764 + case SG_DXFER_TO_FROM_DEV: 765 + if (!hp->dxferp || hp->dxfer_len == 0) 766 + return false; 767 + return true; 768 + case SG_DXFER_UNKNOWN: 769 + if ((!hp->dxferp && hp->dxfer_len) || 770 + (hp->dxferp && hp->dxfer_len == 0)) 771 + return false; 772 + return true; 773 + default: 774 + return false; 775 + } 776 + } 777 + 754 778 static int 755 779 sg_common_write(Sg_fd * sfp, Sg_request * srp, 756 780 unsigned char *cmnd, int timeout, int blocking) 
··· 794 772 "sg_common_write: scsi opcode=0x%02x, cmd_size=%d\n", 795 773 (int) cmnd[0], (int) hp->cmd_len)); 796 774 775 + if (!sg_is_valid_dxfer(hp)) 776 + return -EINVAL; 777 + 797 778 k = sg_start_req(srp, cmnd); 798 779 if (k) { 799 780 SCSI_LOG_TIMEOUT(1, sg_printk(KERN_INFO, sfp->parentdp, 800 781 "sg_common_write: start_req err=%d\n", k)); 801 782 sg_finish_rem_req(srp); 783 + sg_remove_request(sfp, srp); 802 784 return k; /* probably out of space --> ENOMEM */ 803 785 } 804 786 if (atomic_read(&sdp->detaching)) { ··· 813 787 } 814 788 815 789 sg_finish_rem_req(srp); 790 + sg_remove_request(sfp, srp); 816 791 return -ENODEV; 817 792 } 818 793 ··· 912 885 /* strange ..., for backward compatibility */ 913 886 return sfp->timeout_user; 914 887 case SG_SET_FORCE_LOW_DMA: 915 - result = get_user(val, ip); 916 - if (result) 917 - return result; 918 - if (val) { 919 - sfp->low_dma = 1; 920 - if ((0 == sfp->low_dma) && (0 == sg_res_in_use(sfp))) { 921 - val = (int) sfp->reserve.bufflen; 922 - sg_remove_scat(sfp, &sfp->reserve); 923 - sg_build_reserve(sfp, val); 924 - } 925 - } else { 926 - if (atomic_read(&sdp->detaching)) 927 - return -ENODEV; 928 - sfp->low_dma = sdp->device->host->unchecked_isa_dma; 929 - } 888 + /* 889 + * N.B. This ioctl never worked properly, but failed to 890 + * return an error value. So returning '0' to keep compability 891 + * with legacy applications. 
892 + */ 930 893 return 0; 931 894 case SG_GET_LOW_DMA: 932 - return put_user((int) sfp->low_dma, ip); 895 + return put_user((int) sdp->device->host->unchecked_isa_dma, ip); 933 896 case SG_GET_SCSI_ID: 934 897 if (!access_ok(VERIFY_WRITE, p, sizeof (sg_scsi_id_t))) 935 898 return -EFAULT; ··· 953 936 if (!access_ok(VERIFY_WRITE, ip, sizeof (int))) 954 937 return -EFAULT; 955 938 read_lock_irqsave(&sfp->rq_list_lock, iflags); 956 - for (srp = sfp->headrp; srp; srp = srp->nextrp) { 939 + list_for_each_entry(srp, &sfp->rq_list, entry) { 957 940 if ((1 == srp->done) && (!srp->sg_io_owned)) { 958 941 read_unlock_irqrestore(&sfp->rq_list_lock, 959 942 iflags); ··· 966 949 return 0; 967 950 case SG_GET_NUM_WAITING: 968 951 read_lock_irqsave(&sfp->rq_list_lock, iflags); 969 - for (val = 0, srp = sfp->headrp; srp; srp = srp->nextrp) { 952 + val = 0; 953 + list_for_each_entry(srp, &sfp->rq_list, entry) { 970 954 if ((1 == srp->done) && (!srp->sg_io_owned)) 971 955 ++val; 972 956 } ··· 983 965 return -EINVAL; 984 966 val = min_t(int, val, 985 967 max_sectors_bytes(sdp->device->request_queue)); 968 + mutex_lock(&sfp->f_mutex); 986 969 if (val != sfp->reserve.bufflen) { 987 - if (sg_res_in_use(sfp) || sfp->mmap_called) 970 + if (sfp->mmap_called || 971 + sfp->res_in_use) { 972 + mutex_unlock(&sfp->f_mutex); 988 973 return -EBUSY; 974 + } 975 + 989 976 sg_remove_scat(sfp, &sfp->reserve); 990 977 sg_build_reserve(sfp, val); 991 978 } 979 + mutex_unlock(&sfp->f_mutex); 992 980 return 0; 993 981 case SG_GET_RESERVED_SIZE: 994 982 val = min_t(int, sfp->reserve.bufflen, ··· 1042 1018 if (!rinfo) 1043 1019 return -ENOMEM; 1044 1020 read_lock_irqsave(&sfp->rq_list_lock, iflags); 1045 - for (srp = sfp->headrp, val = 0; val < SG_MAX_QUEUE; 1046 - ++val, srp = srp ? 
srp->nextrp : srp) { 1021 + val = 0; 1022 + list_for_each_entry(srp, &sfp->rq_list, entry) { 1023 + if (val > SG_MAX_QUEUE) 1024 + break; 1047 1025 memset(&rinfo[val], 0, SZ_SG_REQ_INFO); 1048 - if (srp) { 1049 - rinfo[val].req_state = srp->done + 1; 1050 - rinfo[val].problem = 1051 - srp->header.masked_status & 1052 - srp->header.host_status & 1053 - srp->header.driver_status; 1054 - if (srp->done) 1055 - rinfo[val].duration = 1056 - srp->header.duration; 1057 - else { 1058 - ms = jiffies_to_msecs(jiffies); 1059 - rinfo[val].duration = 1060 - (ms > srp->header.duration) ? 1061 - (ms - srp->header.duration) : 0; 1062 - } 1063 - rinfo[val].orphan = srp->orphan; 1064 - rinfo[val].sg_io_owned = 1065 - srp->sg_io_owned; 1066 - rinfo[val].pack_id = 1067 - srp->header.pack_id; 1068 - rinfo[val].usr_ptr = 1069 - srp->header.usr_ptr; 1026 + rinfo[val].req_state = srp->done + 1; 1027 + rinfo[val].problem = 1028 + srp->header.masked_status & 1029 + srp->header.host_status & 1030 + srp->header.driver_status; 1031 + if (srp->done) 1032 + rinfo[val].duration = 1033 + srp->header.duration; 1034 + else { 1035 + ms = jiffies_to_msecs(jiffies); 1036 + rinfo[val].duration = 1037 + (ms > srp->header.duration) ? 1038 + (ms - srp->header.duration) : 0; 1070 1039 } 1040 + rinfo[val].orphan = srp->orphan; 1041 + rinfo[val].sg_io_owned = srp->sg_io_owned; 1042 + rinfo[val].pack_id = srp->header.pack_id; 1043 + rinfo[val].usr_ptr = srp->header.usr_ptr; 1044 + val++; 1071 1045 } 1072 1046 read_unlock_irqrestore(&sfp->rq_list_lock, iflags); 1073 - result = __copy_to_user(p, rinfo, 1047 + result = __copy_to_user(p, rinfo, 1074 1048 SZ_SG_REQ_INFO * SG_MAX_QUEUE); 1075 1049 result = result ? 
-EFAULT : 0; 1076 1050 kfree(rinfo); ··· 1174 1152 return POLLERR; 1175 1153 poll_wait(filp, &sfp->read_wait, wait); 1176 1154 read_lock_irqsave(&sfp->rq_list_lock, iflags); 1177 - for (srp = sfp->headrp; srp; srp = srp->nextrp) { 1155 + list_for_each_entry(srp, &sfp->rq_list, entry) { 1178 1156 /* if any read waiting, flag it */ 1179 1157 if ((0 == res) && (1 == srp->done) && (!srp->sg_io_owned)) 1180 1158 res = POLLIN | POLLRDNORM; ··· 1291 1269 struct sg_fd *sfp = srp->parentfp; 1292 1270 1293 1271 sg_finish_rem_req(srp); 1272 + sg_remove_request(sfp, srp); 1294 1273 kref_put(&sfp->f_ref, sg_remove_sfp); 1295 1274 } 1296 1275 ··· 1755 1732 md = &map_data; 1756 1733 1757 1734 if (md) { 1758 - if (!sg_res_in_use(sfp) && dxfer_len <= rsv_schp->bufflen) 1735 + mutex_lock(&sfp->f_mutex); 1736 + if (dxfer_len <= rsv_schp->bufflen && 1737 + !sfp->res_in_use) { 1738 + sfp->res_in_use = 1; 1759 1739 sg_link_reserve(sfp, srp, dxfer_len); 1760 - else { 1740 + } else if ((hp->flags & SG_FLAG_MMAP_IO) && sfp->res_in_use) { 1741 + mutex_unlock(&sfp->f_mutex); 1742 + return -EBUSY; 1743 + } else { 1761 1744 res = sg_build_indirect(req_schp, sfp, dxfer_len); 1762 - if (res) 1745 + if (res) { 1746 + mutex_unlock(&sfp->f_mutex); 1763 1747 return res; 1748 + } 1764 1749 } 1750 + mutex_unlock(&sfp->f_mutex); 1765 1751 1766 1752 md->pages = req_schp->pages; 1767 1753 md->page_order = req_schp->page_order; ··· 1838 1806 else 1839 1807 sg_remove_scat(sfp, req_schp); 1840 1808 1841 - sg_remove_request(sfp, srp); 1842 - 1843 1809 return ret; 1844 1810 } 1845 1811 ··· 1861 1831 int sg_tablesize = sfp->parentdp->sg_tablesize; 1862 1832 int blk_size = buff_size, order; 1863 1833 gfp_t gfp_mask = GFP_ATOMIC | __GFP_COMP | __GFP_NOWARN; 1834 + struct sg_device *sdp = sfp->parentdp; 1864 1835 1865 1836 if (blk_size < 0) 1866 1837 return -EFAULT; ··· 1887 1856 scatter_elem_sz_prev = num; 1888 1857 } 1889 1858 1890 - if (sfp->low_dma) 1859 + if (sdp->device->host->unchecked_isa_dma) 1891 1860 
gfp_mask |= GFP_DMA; 1892 1861 1893 1862 if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SYS_RAWIO)) ··· 2057 2026 req_schp->pages = NULL; 2058 2027 req_schp->page_order = 0; 2059 2028 req_schp->sglist_len = 0; 2060 - sfp->save_scat_len = 0; 2061 2029 srp->res_used = 0; 2030 + /* Called without mutex lock to avoid deadlock */ 2031 + sfp->res_in_use = 0; 2062 2032 } 2063 2033 2064 2034 static Sg_request * ··· 2069 2037 unsigned long iflags; 2070 2038 2071 2039 write_lock_irqsave(&sfp->rq_list_lock, iflags); 2072 - for (resp = sfp->headrp; resp; resp = resp->nextrp) { 2040 + list_for_each_entry(resp, &sfp->rq_list, entry) { 2073 2041 /* look for requests that are ready + not SG_IO owned */ 2074 2042 if ((1 == resp->done) && (!resp->sg_io_owned) && 2075 2043 ((-1 == pack_id) || (resp->header.pack_id == pack_id))) { ··· 2087 2055 { 2088 2056 int k; 2089 2057 unsigned long iflags; 2090 - Sg_request *resp; 2091 2058 Sg_request *rp = sfp->req_arr; 2092 2059 2093 2060 write_lock_irqsave(&sfp->rq_list_lock, iflags); 2094 - resp = sfp->headrp; 2095 - if (!resp) { 2096 - memset(rp, 0, sizeof (Sg_request)); 2097 - rp->parentfp = sfp; 2098 - resp = rp; 2099 - sfp->headrp = resp; 2100 - } else { 2101 - if (0 == sfp->cmd_q) 2102 - resp = NULL; /* command queuing disallowed */ 2103 - else { 2104 - for (k = 0; k < SG_MAX_QUEUE; ++k, ++rp) { 2105 - if (!rp->parentfp) 2106 - break; 2107 - } 2108 - if (k < SG_MAX_QUEUE) { 2109 - memset(rp, 0, sizeof (Sg_request)); 2110 - rp->parentfp = sfp; 2111 - while (resp->nextrp) 2112 - resp = resp->nextrp; 2113 - resp->nextrp = rp; 2114 - resp = rp; 2115 - } else 2116 - resp = NULL; 2061 + if (!list_empty(&sfp->rq_list)) { 2062 + if (!sfp->cmd_q) 2063 + goto out_unlock; 2064 + 2065 + for (k = 0; k < SG_MAX_QUEUE; ++k, ++rp) { 2066 + if (!rp->parentfp) 2067 + break; 2117 2068 } 2069 + if (k >= SG_MAX_QUEUE) 2070 + goto out_unlock; 2118 2071 } 2119 - if (resp) { 2120 - resp->nextrp = NULL; 2121 - resp->header.duration = jiffies_to_msecs(jiffies); 2122 
- } 2072 + memset(rp, 0, sizeof (Sg_request)); 2073 + rp->parentfp = sfp; 2074 + rp->header.duration = jiffies_to_msecs(jiffies); 2075 + list_add_tail(&rp->entry, &sfp->rq_list); 2123 2076 write_unlock_irqrestore(&sfp->rq_list_lock, iflags); 2124 - return resp; 2077 + return rp; 2078 + out_unlock: 2079 + write_unlock_irqrestore(&sfp->rq_list_lock, iflags); 2080 + return NULL; 2125 2081 } 2126 2082 2127 2083 /* Return of 1 for found; 0 for not found */ 2128 2084 static int 2129 2085 sg_remove_request(Sg_fd * sfp, Sg_request * srp) 2130 2086 { 2131 - Sg_request *prev_rp; 2132 - Sg_request *rp; 2133 2087 unsigned long iflags; 2134 2088 int res = 0; 2135 2089 2136 - if ((!sfp) || (!srp) || (!sfp->headrp)) 2090 + if (!sfp || !srp || list_empty(&sfp->rq_list)) 2137 2091 return res; 2138 2092 write_lock_irqsave(&sfp->rq_list_lock, iflags); 2139 - prev_rp = sfp->headrp; 2140 - if (srp == prev_rp) { 2141 - sfp->headrp = prev_rp->nextrp; 2142 - prev_rp->parentfp = NULL; 2093 + if (!list_empty(&srp->entry)) { 2094 + list_del(&srp->entry); 2095 + srp->parentfp = NULL; 2143 2096 res = 1; 2144 - } else { 2145 - while ((rp = prev_rp->nextrp)) { 2146 - if (srp == rp) { 2147 - prev_rp->nextrp = rp->nextrp; 2148 - rp->parentfp = NULL; 2149 - res = 1; 2150 - break; 2151 - } 2152 - prev_rp = rp; 2153 - } 2154 2097 } 2155 2098 write_unlock_irqrestore(&sfp->rq_list_lock, iflags); 2156 2099 return res; ··· 2144 2137 2145 2138 init_waitqueue_head(&sfp->read_wait); 2146 2139 rwlock_init(&sfp->rq_list_lock); 2147 - 2140 + INIT_LIST_HEAD(&sfp->rq_list); 2148 2141 kref_init(&sfp->f_ref); 2142 + mutex_init(&sfp->f_mutex); 2149 2143 sfp->timeout = SG_DEFAULT_TIMEOUT; 2150 2144 sfp->timeout_user = SG_DEFAULT_TIMEOUT_USER; 2151 2145 sfp->force_packid = SG_DEF_FORCE_PACK_ID; 2152 - sfp->low_dma = (SG_DEF_FORCE_LOW_DMA == 0) ? 
2153 - sdp->device->host->unchecked_isa_dma : 1; 2154 2146 sfp->cmd_q = SG_DEF_COMMAND_Q; 2155 2147 sfp->keep_orphan = SG_DEF_KEEP_ORPHAN; 2156 2148 sfp->parentdp = sdp; ··· 2183 2177 { 2184 2178 struct sg_fd *sfp = container_of(work, struct sg_fd, ew.work); 2185 2179 struct sg_device *sdp = sfp->parentdp; 2180 + Sg_request *srp; 2181 + unsigned long iflags; 2186 2182 2187 2183 /* Cleanup any responses which were never read(). */ 2188 - while (sfp->headrp) 2189 - sg_finish_rem_req(sfp->headrp); 2184 + write_lock_irqsave(&sfp->rq_list_lock, iflags); 2185 + while (!list_empty(&sfp->rq_list)) { 2186 + srp = list_first_entry(&sfp->rq_list, Sg_request, entry); 2187 + sg_finish_rem_req(srp); 2188 + list_del(&srp->entry); 2189 + srp->parentfp = NULL; 2190 + } 2191 + write_unlock_irqrestore(&sfp->rq_list_lock, iflags); 2190 2192 2191 2193 if (sfp->reserve.bufflen > 0) { 2192 2194 SCSI_LOG_TIMEOUT(6, sg_printk(KERN_INFO, sdp, ··· 2226 2212 2227 2213 INIT_WORK(&sfp->ew.work, sg_remove_sfp_usercontext); 2228 2214 schedule_work(&sfp->ew.work); 2229 - } 2230 - 2231 - static int 2232 - sg_res_in_use(Sg_fd * sfp) 2233 - { 2234 - const Sg_request *srp; 2235 - unsigned long iflags; 2236 - 2237 - read_lock_irqsave(&sfp->rq_list_lock, iflags); 2238 - for (srp = sfp->headrp; srp; srp = srp->nextrp) 2239 - if (srp->res_used) 2240 - break; 2241 - read_unlock_irqrestore(&sfp->rq_list_lock, iflags); 2242 - return srp ? 
1 : 0; 2243 2215 } 2244 2216 2245 2217 #ifdef CONFIG_SCSI_PROC_FS ··· 2597 2597 /* must be called while holding sg_index_lock */ 2598 2598 static void sg_proc_debug_helper(struct seq_file *s, Sg_device * sdp) 2599 2599 { 2600 - int k, m, new_interface, blen, usg; 2600 + int k, new_interface, blen, usg; 2601 2601 Sg_request *srp; 2602 2602 Sg_fd *fp; 2603 2603 const sg_io_hdr_t *hp; ··· 2613 2613 jiffies_to_msecs(fp->timeout), 2614 2614 fp->reserve.bufflen, 2615 2615 (int) fp->reserve.k_use_sg, 2616 - (int) fp->low_dma); 2616 + (int) sdp->device->host->unchecked_isa_dma); 2617 2617 seq_printf(s, " cmd_q=%d f_packid=%d k_orphan=%d closed=0\n", 2618 2618 (int) fp->cmd_q, (int) fp->force_packid, 2619 2619 (int) fp->keep_orphan); 2620 - for (m = 0, srp = fp->headrp; 2621 - srp != NULL; 2622 - ++m, srp = srp->nextrp) { 2620 + list_for_each_entry(srp, &fp->rq_list, entry) { 2623 2621 hp = &srp->header; 2624 2622 new_interface = (hp->interface_id == '\0') ? 0 : 1; 2625 2623 if (srp->res_used) { 2626 - if (new_interface && 2624 + if (new_interface && 2627 2625 (SG_FLAG_MMAP_IO & hp->flags)) 2628 2626 cp = " mmap>> "; 2629 2627 else ··· 2652 2654 seq_printf(s, "ms sgat=%d op=0x%02x\n", usg, 2653 2655 (int) srp->data.cmd_opcode); 2654 2656 } 2655 - if (0 == m) 2657 + if (list_empty(&fp->rq_list)) 2656 2658 seq_puts(s, " No requests active\n"); 2657 2659 read_unlock(&fp->rq_list_lock); 2658 2660 }
+1 -1
drivers/scsi/sgiwd93.c
··· 297 297 return err; 298 298 } 299 299 300 - static int __exit sgiwd93_remove(struct platform_device *pdev) 300 + static int sgiwd93_remove(struct platform_device *pdev) 301 301 { 302 302 struct Scsi_Host *host = platform_get_drvdata(pdev); 303 303 struct ip22_hostdata *hdata = (struct ip22_hostdata *) host->hostdata;
+1 -1
drivers/scsi/sni_53c710.c
··· 117 117 return -ENODEV; 118 118 } 119 119 120 - static int __exit snirm710_driver_remove(struct platform_device *dev) 120 + static int snirm710_driver_remove(struct platform_device *dev) 121 121 { 122 122 struct Scsi_Host *host = dev_get_drvdata(&dev->dev); 123 123 struct NCR_700_Host_Parameters *hostdata =
+1 -1
drivers/scsi/snic/snic_debugfs.c
··· 548 548 &snic_trc_fops); 549 549 550 550 if (!de) { 551 - SNIC_ERR("Cann't create trace file.\n"); 551 + SNIC_ERR("Cannot create trace file.\n"); 552 552 553 553 return ret; 554 554 }
+219 -70
drivers/scsi/stex.c
··· 26 26 #include <linux/module.h> 27 27 #include <linux/spinlock.h> 28 28 #include <linux/ktime.h> 29 + #include <linux/reboot.h> 29 30 #include <asm/io.h> 30 31 #include <asm/irq.h> 31 32 #include <asm/byteorder.h> ··· 39 38 #include <scsi/scsi_eh.h> 40 39 41 40 #define DRV_NAME "stex" 42 - #define ST_DRIVER_VERSION "5.00.0000.01" 43 - #define ST_VER_MAJOR 5 44 - #define ST_VER_MINOR 00 41 + #define ST_DRIVER_VERSION "6.02.0000.01" 42 + #define ST_VER_MAJOR 6 43 + #define ST_VER_MINOR 02 45 44 #define ST_OEM 0000 46 45 #define ST_BUILD_VER 01 47 46 ··· 65 64 YI2H_INT_C = 0xa0, 66 65 YH2I_REQ = 0xc0, 67 66 YH2I_REQ_HI = 0xc4, 67 + PSCRATCH0 = 0xb0, 68 + PSCRATCH1 = 0xb4, 69 + PSCRATCH2 = 0xb8, 70 + PSCRATCH3 = 0xbc, 71 + PSCRATCH4 = 0xc8, 72 + MAILBOX_BASE = 0x1000, 73 + MAILBOX_HNDSHK_STS = 0x0, 68 74 69 75 /* MU register value */ 70 76 MU_INBOUND_DOORBELL_HANDSHAKE = (1 << 0), ··· 95 87 MU_STATE_STOP = 5, 96 88 MU_STATE_NOCONNECT = 6, 97 89 98 - MU_MAX_DELAY = 120, 90 + MU_MAX_DELAY = 50, 99 91 MU_HANDSHAKE_SIGNATURE = 0x55aaaa55, 100 92 MU_HANDSHAKE_SIGNATURE_HALF = 0x5a5a0000, 101 93 MU_HARD_RESET_WAIT = 30000, ··· 143 135 st_yosemite = 2, 144 136 st_seq = 3, 145 137 st_yel = 4, 138 + st_P3 = 5, 146 139 147 140 PASSTHRU_REQ_TYPE = 0x00000001, 148 141 PASSTHRU_REQ_NO_WAKEUP = 0x00000100, ··· 348 339 u16 rq_size; 349 340 u16 sts_count; 350 341 u8 supports_pm; 342 + int msi_lock; 351 343 }; 352 344 353 345 struct st_card_info { ··· 361 351 u16 rq_count; 362 352 u16 rq_size; 363 353 u16 sts_count; 354 + }; 355 + 356 + static int S6flag; 357 + static int stex_halt(struct notifier_block *nb, ulong event, void *buf); 358 + static struct notifier_block stex_notifier = { 359 + stex_halt, NULL, 0 364 360 }; 365 361 366 362 static int msi; ··· 556 540 557 541 ++hba->req_head; 558 542 hba->req_head %= hba->rq_count+1; 559 - 560 - writel((addr >> 16) >> 16, hba->mmio_base + YH2I_REQ_HI); 561 - readl(hba->mmio_base + YH2I_REQ_HI); /* flush */ 562 - writel(addr, 
hba->mmio_base + YH2I_REQ); 563 - readl(hba->mmio_base + YH2I_REQ); /* flush */ 543 + if (hba->cardtype == st_P3) { 544 + writel((addr >> 16) >> 16, hba->mmio_base + YH2I_REQ_HI); 545 + writel(addr, hba->mmio_base + YH2I_REQ); 546 + } else { 547 + writel((addr >> 16) >> 16, hba->mmio_base + YH2I_REQ_HI); 548 + readl(hba->mmio_base + YH2I_REQ_HI); /* flush */ 549 + writel(addr, hba->mmio_base + YH2I_REQ); 550 + readl(hba->mmio_base + YH2I_REQ); /* flush */ 551 + } 564 552 } 565 553 566 554 static void return_abnormal_state(struct st_hba *hba, int status) ··· 994 974 995 975 spin_lock_irqsave(hba->host->host_lock, flags); 996 976 997 - data = readl(base + YI2H_INT); 998 - if (data && data != 0xffffffff) { 999 - /* clear the interrupt */ 1000 - writel(data, base + YI2H_INT_C); 1001 - stex_ss_mu_intr(hba); 1002 - spin_unlock_irqrestore(hba->host->host_lock, flags); 1003 - if (unlikely(data & SS_I2H_REQUEST_RESET)) 1004 - queue_work(hba->work_q, &hba->reset_work); 1005 - return IRQ_HANDLED; 977 + if (hba->cardtype == st_yel) { 978 + data = readl(base + YI2H_INT); 979 + if (data && data != 0xffffffff) { 980 + /* clear the interrupt */ 981 + writel(data, base + YI2H_INT_C); 982 + stex_ss_mu_intr(hba); 983 + spin_unlock_irqrestore(hba->host->host_lock, flags); 984 + if (unlikely(data & SS_I2H_REQUEST_RESET)) 985 + queue_work(hba->work_q, &hba->reset_work); 986 + return IRQ_HANDLED; 987 + } 988 + } else { 989 + data = readl(base + PSCRATCH4); 990 + if (data != 0xffffffff) { 991 + if (data != 0) { 992 + /* clear the interrupt */ 993 + writel(data, base + PSCRATCH1); 994 + writel((1 << 22), base + YH2I_INT); 995 + } 996 + stex_ss_mu_intr(hba); 997 + spin_unlock_irqrestore(hba->host->host_lock, flags); 998 + if (unlikely(data & SS_I2H_REQUEST_RESET)) 999 + queue_work(hba->work_q, &hba->reset_work); 1000 + return IRQ_HANDLED; 1001 + } 1006 1002 } 1007 1003 1008 1004 spin_unlock_irqrestore(hba->host->host_lock, flags); ··· 1116 1080 struct st_msg_header *msg_h; 1117 1081 struct 
handshake_frame *h; 1118 1082 __le32 *scratch; 1119 - u32 data, scratch_size; 1083 + u32 data, scratch_size, mailboxdata, operationaldata; 1120 1084 unsigned long before; 1121 1085 int ret = 0; 1122 1086 1123 1087 before = jiffies; 1124 - while ((readl(base + YIOA_STATUS) & SS_MU_OPERATIONAL) == 0) { 1125 - if (time_after(jiffies, before + MU_MAX_DELAY * HZ)) { 1126 - printk(KERN_ERR DRV_NAME 1127 - "(%s): firmware not operational\n", 1128 - pci_name(hba->pdev)); 1129 - return -1; 1088 + 1089 + if (hba->cardtype == st_yel) { 1090 + operationaldata = readl(base + YIOA_STATUS); 1091 + while (operationaldata != SS_MU_OPERATIONAL) { 1092 + if (time_after(jiffies, before + MU_MAX_DELAY * HZ)) { 1093 + printk(KERN_ERR DRV_NAME 1094 + "(%s): firmware not operational\n", 1095 + pci_name(hba->pdev)); 1096 + return -1; 1097 + } 1098 + msleep(1); 1099 + operationaldata = readl(base + YIOA_STATUS); 1130 1100 } 1131 - msleep(1); 1101 + } else { 1102 + operationaldata = readl(base + PSCRATCH3); 1103 + while (operationaldata != SS_MU_OPERATIONAL) { 1104 + if (time_after(jiffies, before + MU_MAX_DELAY * HZ)) { 1105 + printk(KERN_ERR DRV_NAME 1106 + "(%s): firmware not operational\n", 1107 + pci_name(hba->pdev)); 1108 + return -1; 1109 + } 1110 + msleep(1); 1111 + operationaldata = readl(base + PSCRATCH3); 1112 + } 1132 1113 } 1133 1114 1134 1115 msg_h = (struct st_msg_header *)hba->dma_mem; ··· 1164 1111 scratch_size = (hba->sts_count+1)*sizeof(u32); 1165 1112 h->scratch_size = cpu_to_le32(scratch_size); 1166 1113 1167 - data = readl(base + YINT_EN); 1168 - data &= ~4; 1169 - writel(data, base + YINT_EN); 1170 - writel((hba->dma_handle >> 16) >> 16, base + YH2I_REQ_HI); 1171 - readl(base + YH2I_REQ_HI); 1172 - writel(hba->dma_handle, base + YH2I_REQ); 1173 - readl(base + YH2I_REQ); /* flush */ 1174 - 1175 - scratch = hba->scratch; 1176 - before = jiffies; 1177 - while (!(le32_to_cpu(*scratch) & SS_STS_HANDSHAKE)) { 1178 - if (time_after(jiffies, before + MU_MAX_DELAY * HZ)) { 1179 
- printk(KERN_ERR DRV_NAME 1180 - "(%s): no signature after handshake frame\n", 1181 - pci_name(hba->pdev)); 1182 - ret = -1; 1183 - break; 1114 + if (hba->cardtype == st_yel) { 1115 + data = readl(base + YINT_EN); 1116 + data &= ~4; 1117 + writel(data, base + YINT_EN); 1118 + writel((hba->dma_handle >> 16) >> 16, base + YH2I_REQ_HI); 1119 + readl(base + YH2I_REQ_HI); 1120 + writel(hba->dma_handle, base + YH2I_REQ); 1121 + readl(base + YH2I_REQ); /* flush */ 1122 + } else { 1123 + data = readl(base + YINT_EN); 1124 + data &= ~(1 << 0); 1125 + data &= ~(1 << 2); 1126 + writel(data, base + YINT_EN); 1127 + if (hba->msi_lock == 0) { 1128 + /* P3 MSI Register cannot access twice */ 1129 + writel((1 << 6), base + YH2I_INT); 1130 + hba->msi_lock = 1; 1184 1131 } 1185 - rmb(); 1186 - msleep(1); 1132 + writel((hba->dma_handle >> 16) >> 16, base + YH2I_REQ_HI); 1133 + writel(hba->dma_handle, base + YH2I_REQ); 1187 1134 } 1188 1135 1136 + before = jiffies; 1137 + scratch = hba->scratch; 1138 + if (hba->cardtype == st_yel) { 1139 + while (!(le32_to_cpu(*scratch) & SS_STS_HANDSHAKE)) { 1140 + if (time_after(jiffies, before + MU_MAX_DELAY * HZ)) { 1141 + printk(KERN_ERR DRV_NAME 1142 + "(%s): no signature after handshake frame\n", 1143 + pci_name(hba->pdev)); 1144 + ret = -1; 1145 + break; 1146 + } 1147 + rmb(); 1148 + msleep(1); 1149 + } 1150 + } else { 1151 + mailboxdata = readl(base + MAILBOX_BASE + MAILBOX_HNDSHK_STS); 1152 + while (mailboxdata != SS_STS_HANDSHAKE) { 1153 + if (time_after(jiffies, before + MU_MAX_DELAY * HZ)) { 1154 + printk(KERN_ERR DRV_NAME 1155 + "(%s): no signature after handshake frame\n", 1156 + pci_name(hba->pdev)); 1157 + ret = -1; 1158 + break; 1159 + } 1160 + rmb(); 1161 + msleep(1); 1162 + mailboxdata = readl(base + MAILBOX_BASE + MAILBOX_HNDSHK_STS); 1163 + } 1164 + } 1189 1165 memset(scratch, 0, scratch_size); 1190 1166 msg_h->flag = 0; 1167 + 1191 1168 return ret; 1192 1169 } 1193 1170 ··· 1227 1144 unsigned long flags; 1228 1145 unsigned int 
mu_status; 1229 1146 1230 - err = (hba->cardtype == st_yel) ? 1231 - stex_ss_handshake(hba) : stex_common_handshake(hba); 1147 + if (hba->cardtype == st_yel || hba->cardtype == st_P3) 1148 + err = stex_ss_handshake(hba); 1149 + else 1150 + err = stex_common_handshake(hba); 1232 1151 spin_lock_irqsave(hba->host->host_lock, flags); 1233 1152 mu_status = hba->mu_status; 1234 1153 if (err == 0) { ··· 1275 1190 1276 1191 writel(data, base + YI2H_INT_C); 1277 1192 stex_ss_mu_intr(hba); 1193 + } else if (hba->cardtype == st_P3) { 1194 + data = readl(base + PSCRATCH4); 1195 + if (data == 0xffffffff) 1196 + goto fail_out; 1197 + if (data != 0) { 1198 + writel(data, base + PSCRATCH1); 1199 + writel((1 << 22), base + YH2I_INT); 1200 + } 1201 + stex_ss_mu_intr(hba); 1278 1202 } else { 1279 1203 data = readl(base + ODBL); 1280 1204 if (data == 0 || data == 0xffffffff) ··· 1291 1197 1292 1198 writel(data, base + ODBL); 1293 1199 readl(base + ODBL); /* flush */ 1294 - 1295 1200 stex_mu_intr(hba, data); 1296 1201 } 1297 1202 if (hba->wait_ccb == NULL) { ··· 1386 1293 ssleep(5); 1387 1294 } 1388 1295 1296 + static void stex_p3_reset(struct st_hba *hba) 1297 + { 1298 + writel(SS_H2I_INT_RESET, hba->mmio_base + YH2I_INT); 1299 + ssleep(5); 1300 + } 1301 + 1389 1302 static int stex_do_reset(struct st_hba *hba) 1390 1303 { 1391 1304 unsigned long flags; ··· 1428 1329 stex_hard_reset(hba); 1429 1330 else if (hba->cardtype == st_yel) 1430 1331 stex_ss_reset(hba); 1431 - 1332 + else if (hba->cardtype == st_P3) 1333 + stex_p3_reset(hba); 1432 1334 1433 1335 return_abnormal_state(hba, DID_RESET); 1434 1336 ··· 1514 1414 /* st_yel */ 1515 1415 { 0x105a, 0x8650, 0x1033, PCI_ANY_ID, 0, 0, st_yel }, 1516 1416 { 0x105a, 0x8760, PCI_ANY_ID, PCI_ANY_ID, 0, 0, st_yel }, 1417 + 1418 + /* st_P3, pluto */ 1419 + { PCI_VENDOR_ID_PROMISE, 0x8870, PCI_VENDOR_ID_PROMISE, 1420 + 0x8870, 0, 0, st_P3 }, 1421 + /* st_P3, p3 */ 1422 + { PCI_VENDOR_ID_PROMISE, 0x8870, PCI_VENDOR_ID_PROMISE, 1423 + 0x4300, 0, 0, 
st_P3 }, 1424 + 1425 + /* st_P3, SymplyStor4E */ 1426 + { PCI_VENDOR_ID_PROMISE, 0x8871, PCI_VENDOR_ID_PROMISE, 1427 + 0x4311, 0, 0, st_P3 }, 1428 + /* st_P3, SymplyStor8E */ 1429 + { PCI_VENDOR_ID_PROMISE, 0x8871, PCI_VENDOR_ID_PROMISE, 1430 + 0x4312, 0, 0, st_P3 }, 1431 + /* st_P3, SymplyStor4 */ 1432 + { PCI_VENDOR_ID_PROMISE, 0x8871, PCI_VENDOR_ID_PROMISE, 1433 + 0x4321, 0, 0, st_P3 }, 1434 + /* st_P3, SymplyStor8 */ 1435 + { PCI_VENDOR_ID_PROMISE, 0x8871, PCI_VENDOR_ID_PROMISE, 1436 + 0x4322, 0, 0, st_P3 }, 1517 1437 { } /* terminate list */ 1518 1438 }; 1519 1439 ··· 1602 1482 .map_sg = stex_ss_map_sg, 1603 1483 .send = stex_ss_send_cmd, 1604 1484 }, 1485 + 1486 + /* st_P3 */ 1487 + { 1488 + .max_id = 129, 1489 + .max_lun = 256, 1490 + .max_channel = 0, 1491 + .rq_count = 801, 1492 + .rq_size = 512, 1493 + .sts_count = 801, 1494 + .alloc_rq = stex_ss_alloc_req, 1495 + .map_sg = stex_ss_map_sg, 1496 + .send = stex_ss_send_cmd, 1497 + }, 1605 1498 }; 1606 1499 1607 1500 static int stex_set_dma_mask(struct pci_dev * pdev) ··· 1635 1502 struct pci_dev *pdev = hba->pdev; 1636 1503 int status; 1637 1504 1638 - if (msi) { 1505 + if (msi || hba->cardtype == st_P3) { 1639 1506 status = pci_enable_msi(pdev); 1640 1507 if (status != 0) 1641 1508 printk(KERN_ERR DRV_NAME ··· 1646 1513 } else 1647 1514 hba->msi_enabled = 0; 1648 1515 1649 - status = request_irq(pdev->irq, hba->cardtype == st_yel ? 1516 + status = request_irq(pdev->irq, 1517 + (hba->cardtype == st_yel || hba->cardtype == st_P3) ? 
1650 1518 stex_ss_intr : stex_intr, IRQF_SHARED, DRV_NAME, hba); 1651 1519 1652 1520 if (status != 0) { ··· 1679 1545 return err; 1680 1546 1681 1547 pci_set_master(pdev); 1548 + 1549 + S6flag = 0; 1550 + register_reboot_notifier(&stex_notifier); 1682 1551 1683 1552 host = scsi_host_alloc(&driver_template, sizeof(struct st_hba)); 1684 1553 ··· 1734 1597 case 0x4265: 1735 1598 break; 1736 1599 default: 1737 - if (hba->cardtype == st_yel) 1600 + if (hba->cardtype == st_yel || hba->cardtype == st_P3) 1738 1601 hba->supports_pm = 1; 1739 1602 } 1740 1603 1741 1604 sts_offset = scratch_offset = (ci->rq_count+1) * ci->rq_size; 1742 - if (hba->cardtype == st_yel) 1605 + if (hba->cardtype == st_yel || hba->cardtype == st_P3) 1743 1606 sts_offset += (ci->sts_count+1) * sizeof(u32); 1744 1607 cp_offset = sts_offset + (ci->sts_count+1) * sizeof(struct status_msg); 1745 1608 hba->dma_size = cp_offset + sizeof(struct st_frame); ··· 1779 1642 goto out_pci_free; 1780 1643 } 1781 1644 1782 - if (hba->cardtype == st_yel) 1645 + if (hba->cardtype == st_yel || hba->cardtype == st_P3) 1783 1646 hba->scratch = (__le32 *)(hba->dma_mem + scratch_offset); 1784 1647 hba->status_buffer = (struct status_msg *)(hba->dma_mem + sts_offset); 1785 1648 hba->copy_buffer = hba->dma_mem + cp_offset; ··· 1790 1653 hba->map_sg = ci->map_sg; 1791 1654 hba->send = ci->send; 1792 1655 hba->mu_status = MU_STATE_STARTING; 1656 + hba->msi_lock = 0; 1793 1657 1794 - if (hba->cardtype == st_yel) 1658 + if (hba->cardtype == st_yel || hba->cardtype == st_P3) 1795 1659 host->sg_tablesize = 38; 1796 1660 else 1797 1661 host->sg_tablesize = 32; ··· 1874 1736 1875 1737 spin_lock_irqsave(hba->host->host_lock, flags); 1876 1738 1877 - if (hba->cardtype == st_yel && hba->supports_pm == 1) 1878 - { 1879 - if(st_sleep_mic == ST_NOTHANDLED) 1880 - { 1739 + if ((hba->cardtype == st_yel || hba->cardtype == st_P3) && 1740 + hba->supports_pm == 1) { 1741 + if (st_sleep_mic == ST_NOTHANDLED) { 1881 1742 
spin_unlock_irqrestore(hba->host->host_lock, flags); 1882 1743 return; 1883 1744 } 1884 1745 } 1885 1746 req = hba->alloc_rq(hba); 1886 - if (hba->cardtype == st_yel) { 1747 + if (hba->cardtype == st_yel || hba->cardtype == st_P3) { 1887 1748 msg_h = (struct st_msg_header *)req - 1; 1888 1749 memset(msg_h, 0, hba->rq_size); 1889 1750 } else 1890 1751 memset(req, 0, hba->rq_size); 1891 1752 1892 - if ((hba->cardtype == st_yosemite || hba->cardtype == st_yel) 1753 + if ((hba->cardtype == st_yosemite || hba->cardtype == st_yel 1754 + || hba->cardtype == st_P3) 1893 1755 && st_sleep_mic == ST_IGNORED) { 1894 1756 req->cdb[0] = MGT_CMD; 1895 1757 req->cdb[1] = MGT_CMD_SIGNATURE; 1896 1758 req->cdb[2] = CTLR_CONFIG_CMD; 1897 1759 req->cdb[3] = CTLR_SHUTDOWN; 1898 - } else if (hba->cardtype == st_yel && st_sleep_mic != ST_IGNORED) { 1760 + } else if ((hba->cardtype == st_yel || hba->cardtype == st_P3) 1761 + && st_sleep_mic != ST_IGNORED) { 1899 1762 req->cdb[0] = MGT_CMD; 1900 1763 req->cdb[1] = MGT_CMD_SIGNATURE; 1901 1764 req->cdb[2] = CTLR_CONFIG_CMD; ··· 1907 1768 req->cdb[1] = CTLR_POWER_STATE_CHANGE; 1908 1769 req->cdb[2] = CTLR_POWER_SAVING; 1909 1770 } 1910 - 1911 1771 hba->ccb[tag].cmd = NULL; 1912 1772 hba->ccb[tag].sg_count = 0; 1913 1773 hba->ccb[tag].sense_bufflen = 0; 1914 1774 hba->ccb[tag].sense_buffer = NULL; 1915 1775 hba->ccb[tag].req_type = PASSTHRU_REQ_TYPE; 1916 - 1917 1776 hba->send(hba, req, tag); 1918 1777 spin_unlock_irqrestore(hba->host->host_lock, flags); 1919 - 1920 1778 before = jiffies; 1921 1779 while (hba->ccb[tag].req_type & PASSTHRU_REQ_TYPE) { 1922 1780 if (time_after(jiffies, before + ST_INTERNAL_TIMEOUT * HZ)) { ··· 1957 1821 scsi_host_put(hba->host); 1958 1822 1959 1823 pci_disable_device(pdev); 1824 + 1825 + unregister_reboot_notifier(&stex_notifier); 1960 1826 } 1961 1827 1962 1828 static void stex_shutdown(struct pci_dev *pdev) 1963 1829 { 1964 1830 struct st_hba *hba = pci_get_drvdata(pdev); 1965 1831 1966 - if (hba->supports_pm 
== 0) 1832 + if (hba->supports_pm == 0) { 1967 1833 stex_hba_stop(hba, ST_IGNORED); 1968 - else 1834 + } else if (hba->supports_pm == 1 && S6flag) { 1835 + unregister_reboot_notifier(&stex_notifier); 1836 + stex_hba_stop(hba, ST_S6); 1837 + } else 1969 1838 stex_hba_stop(hba, ST_S5); 1970 1839 } 1971 1840 1972 - static int stex_choice_sleep_mic(pm_message_t state) 1841 + static int stex_choice_sleep_mic(struct st_hba *hba, pm_message_t state) 1973 1842 { 1974 1843 switch (state.event) { 1975 1844 case PM_EVENT_SUSPEND: 1976 1845 return ST_S3; 1977 1846 case PM_EVENT_HIBERNATE: 1847 + hba->msi_lock = 0; 1978 1848 return ST_S4; 1979 1849 default: 1980 1850 return ST_NOTHANDLED; ··· 1991 1849 { 1992 1850 struct st_hba *hba = pci_get_drvdata(pdev); 1993 1851 1994 - if (hba->cardtype == st_yel && hba->supports_pm == 1) 1995 - stex_hba_stop(hba, stex_choice_sleep_mic(state)); 1852 + if ((hba->cardtype == st_yel || hba->cardtype == st_P3) 1853 + && hba->supports_pm == 1) 1854 + stex_hba_stop(hba, stex_choice_sleep_mic(hba, state)); 1996 1855 else 1997 1856 stex_hba_stop(hba, ST_IGNORED); 1998 1857 return 0; ··· 2006 1863 hba->mu_status = MU_STATE_STARTING; 2007 1864 stex_handshake(hba); 2008 1865 return 0; 1866 + } 1867 + 1868 + static int stex_halt(struct notifier_block *nb, unsigned long event, void *buf) 1869 + { 1870 + S6flag = 1; 1871 + return NOTIFY_OK; 2009 1872 } 2010 1873 MODULE_DEVICE_TABLE(pci, stex_pci_tbl); 2011 1874
+19 -8
drivers/scsi/storvsc_drv.c
··· 476 476 */ 477 477 u64 node_name; 478 478 u64 port_name; 479 + #if IS_ENABLED(CONFIG_SCSI_FC_ATTRS) 480 + struct fc_rport *rport; 481 + #endif 479 482 }; 480 483 481 484 struct hv_host_device { ··· 867 864 * We will however populate all the slots to evenly distribute 868 865 * the load. 869 866 */ 870 - stor_device->stor_chns = kzalloc(sizeof(void *) * num_possible_cpus(), 867 + stor_device->stor_chns = kcalloc(num_possible_cpus(), sizeof(void *), 871 868 GFP_KERNEL); 872 869 if (stor_device->stor_chns == NULL) 873 870 return -ENOMEM; ··· 1192 1189 break; 1193 1190 } 1194 1191 } while (1); 1195 - 1196 - return; 1197 1192 } 1198 1193 1199 1194 static int storvsc_connect_to_vsp(struct hv_device *device, u32 ring_size, ··· 1824 1823 target = (device->dev_instance.b[5] << 8 | 1825 1824 device->dev_instance.b[4]); 1826 1825 ret = scsi_add_device(host, 0, target, 0); 1827 - if (ret) { 1828 - scsi_remove_host(host); 1829 - goto err_out2; 1830 - } 1826 + if (ret) 1827 + goto err_out3; 1831 1828 } 1832 1829 #if IS_ENABLED(CONFIG_SCSI_FC_ATTRS) 1833 1830 if (host->transportt == fc_transport_template) { 1831 + struct fc_rport_identifiers ids = { 1832 + .roles = FC_PORT_ROLE_FCP_DUMMY_INITIATOR, 1833 + }; 1834 + 1834 1835 fc_host_node_name(host) = stor_device->node_name; 1835 1836 fc_host_port_name(host) = stor_device->port_name; 1837 + stor_device->rport = fc_remote_port_add(host, 0, &ids); 1838 + if (!stor_device->rport) 1839 + goto err_out3; 1836 1840 } 1837 1841 #endif 1838 1842 return 0; 1843 + 1844 + err_out3: 1845 + scsi_remove_host(host); 1839 1846 1840 1847 err_out2: 1841 1848 /* ··· 1870 1861 struct Scsi_Host *host = stor_device->host; 1871 1862 1872 1863 #if IS_ENABLED(CONFIG_SCSI_FC_ATTRS) 1873 - if (host->transportt == fc_transport_template) 1864 + if (host->transportt == fc_transport_template) { 1865 + fc_remote_port_delete(stor_device->rport); 1874 1866 fc_remove_host(host); 1867 + } 1875 1868 #endif 1876 1869 scsi_remove_host(host); 1877 1870 
storvsc_dev_remove(dev);
+25 -77
drivers/scsi/ufs/ufshcd.c
··· 130 130 UFSHCD_UIC_DME_ERROR = (1 << 5), /* DME error */ 131 131 }; 132 132 133 - /* Interrupt configuration options */ 134 - enum { 135 - UFSHCD_INT_DISABLE, 136 - UFSHCD_INT_ENABLE, 137 - UFSHCD_INT_CLEAR, 138 - }; 139 - 140 133 #define ufshcd_set_eh_in_progress(h) \ 141 - (h->eh_flags |= UFSHCD_EH_IN_PROGRESS) 134 + ((h)->eh_flags |= UFSHCD_EH_IN_PROGRESS) 142 135 #define ufshcd_eh_in_progress(h) \ 143 - (h->eh_flags & UFSHCD_EH_IN_PROGRESS) 136 + ((h)->eh_flags & UFSHCD_EH_IN_PROGRESS) 144 137 #define ufshcd_clear_eh_in_progress(h) \ 145 - (h->eh_flags &= ~UFSHCD_EH_IN_PROGRESS) 138 + ((h)->eh_flags &= ~UFSHCD_EH_IN_PROGRESS) 146 139 147 140 #define ufshcd_set_ufs_dev_active(h) \ 148 141 ((h)->curr_dev_pwr_mode = UFS_ACTIVE_PWR_MODE) ··· 533 540 case UFSHCI_VERSION_10: 534 541 intr_mask = INTERRUPT_MASK_ALL_VER_10; 535 542 break; 536 - /* allow fall through */ 537 543 case UFSHCI_VERSION_11: 538 544 case UFSHCI_VERSION_20: 539 545 intr_mask = INTERRUPT_MASK_ALL_VER_11; 540 546 break; 541 - /* allow fall through */ 542 547 case UFSHCI_VERSION_21: 543 548 default: 544 549 intr_mask = INTERRUPT_MASK_ALL_VER_21; 550 + break; 545 551 } 546 552 547 553 return intr_mask; ··· 565 573 * the host controller 566 574 * @hba: pointer to adapter instance 567 575 * 568 - * Returns 1 if device present, 0 if no device detected 576 + * Returns true if device present, false if no device detected 569 577 */ 570 - static inline int ufshcd_is_device_present(struct ufs_hba *hba) 578 + static inline bool ufshcd_is_device_present(struct ufs_hba *hba) 571 579 { 572 580 return (ufshcd_readl(hba, REG_CONTROLLER_STATUS) & 573 - DEVICE_PRESENT) ? 1 : 0; 581 + DEVICE_PRESENT) ? 
true : false; 574 582 } 575 583 576 584 /** ··· 660 668 */ 661 669 static inline int ufshcd_get_lists_status(u32 reg) 662 670 { 663 - /* 664 - * The mask 0xFF is for the following HCS register bits 665 - * Bit Description 666 - * 0 Device Present 667 - * 1 UTRLRDY 668 - * 2 UTMRLRDY 669 - * 3 UCRDY 670 - * 4-7 reserved 671 - */ 672 - return ((reg & 0xFF) >> 1) ^ 0x07; 671 + return !((reg & UFSHCD_STATUS_READY) == UFSHCD_STATUS_READY); 673 672 } 674 673 675 674 /** ··· 803 820 * ufshcd_is_hba_active - Get controller state 804 821 * @hba: per adapter instance 805 822 * 806 - * Returns zero if controller is active, 1 otherwise 823 + * Returns false if controller is active, true otherwise 807 824 */ 808 - static inline int ufshcd_is_hba_active(struct ufs_hba *hba) 825 + static inline bool ufshcd_is_hba_active(struct ufs_hba *hba) 809 826 { 810 - return (ufshcd_readl(hba, REG_CONTROLLER_ENABLE) & 0x1) ? 0 : 1; 827 + return (ufshcd_readl(hba, REG_CONTROLLER_ENABLE) & CONTROLLER_ENABLE) 828 + ? false : true; 811 829 } 812 830 813 831 static const char *ufschd_uic_link_state_to_string( ··· 1462 1478 break; 1463 1479 } 1464 1480 /* 1465 - * If we here, it means gating work is either done or 1481 + * If we are here, it means gating work is either done or 1466 1482 * currently running. Hence, fall through to cancel gating 1467 1483 * work and to enable clocks. 
1468 1484 */ ··· 3087 3103 u8 *buf, 3088 3104 u32 size) 3089 3105 { 3090 - int err = 0; 3091 - int retries; 3092 - 3093 - for (retries = QUERY_REQ_RETRIES; retries > 0; retries--) { 3094 - /* Read descriptor*/ 3095 - err = ufshcd_read_desc(hba, QUERY_DESC_IDN_POWER, 0, buf, size); 3096 - if (!err) 3097 - break; 3098 - dev_dbg(hba->dev, "%s: error %d retrying\n", __func__, err); 3099 - } 3100 - 3101 - return err; 3106 + return ufshcd_read_desc(hba, QUERY_DESC_IDN_POWER, 0, buf, size); 3102 3107 } 3103 3108 3104 3109 static int ufshcd_read_device_desc(struct ufs_hba *hba, u8 *buf, u32 size) ··· 4245 4272 { 4246 4273 int ret = 0; 4247 4274 u8 lun_qdepth; 4248 - int retries; 4249 4275 struct ufs_hba *hba; 4250 4276 4251 4277 hba = shost_priv(sdev->host); 4252 4278 4253 4279 lun_qdepth = hba->nutrs; 4254 - for (retries = QUERY_REQ_RETRIES; retries > 0; retries--) { 4255 - /* Read descriptor*/ 4256 - ret = ufshcd_read_unit_desc_param(hba, 4257 - ufshcd_scsi_to_upiu_lun(sdev->lun), 4258 - UNIT_DESC_PARAM_LU_Q_DEPTH, 4259 - &lun_qdepth, 4260 - sizeof(lun_qdepth)); 4261 - if (!ret || ret == -ENOTSUPP) 4262 - break; 4263 - 4264 - dev_dbg(hba->dev, "%s: error %d retrying\n", __func__, ret); 4265 - } 4280 + ret = ufshcd_read_unit_desc_param(hba, 4281 + ufshcd_scsi_to_upiu_lun(sdev->lun), 4282 + UNIT_DESC_PARAM_LU_Q_DEPTH, 4283 + &lun_qdepth, 4284 + sizeof(lun_qdepth)); 4266 4285 4267 4286 /* Some WLUN doesn't support unit descriptor */ 4268 4287 if (ret == -EOPNOTSUPP) ··· 4682 4717 goto out; 4683 4718 4684 4719 val = hba->ee_ctrl_mask & ~mask; 4685 - val &= 0xFFFF; /* 2 bytes */ 4720 + val &= MASK_EE_STATUS; 4686 4721 err = ufshcd_query_attr_retry(hba, UPIU_QUERY_OPCODE_WRITE_ATTR, 4687 4722 QUERY_ATTR_IDN_EE_CONTROL, 0, 0, &val); 4688 4723 if (!err) ··· 4710 4745 goto out; 4711 4746 4712 4747 val = hba->ee_ctrl_mask | mask; 4713 - val &= 0xFFFF; /* 2 bytes */ 4748 + val &= MASK_EE_STATUS; 4714 4749 err = ufshcd_query_attr_retry(hba, UPIU_QUERY_OPCODE_WRITE_ATTR, 4715 4750 
QUERY_ATTR_IDN_EE_CONTROL, 0, 0, &val); 4716 4751 if (!err) ··· 5925 5960 return icc_level; 5926 5961 } 5927 5962 5928 - static int ufshcd_set_icc_levels_attr(struct ufs_hba *hba, u32 icc_level) 5929 - { 5930 - int ret = 0; 5931 - int retries; 5932 - 5933 - for (retries = QUERY_REQ_RETRIES; retries > 0; retries--) { 5934 - /* write attribute */ 5935 - ret = ufshcd_query_attr(hba, UPIU_QUERY_OPCODE_WRITE_ATTR, 5936 - QUERY_ATTR_IDN_ACTIVE_ICC_LVL, 0, 0, &icc_level); 5937 - if (!ret) 5938 - break; 5939 - 5940 - dev_dbg(hba->dev, "%s: failed with error %d\n", __func__, ret); 5941 - } 5942 - 5943 - return ret; 5944 - } 5945 - 5946 5963 static void ufshcd_init_icc_levels(struct ufs_hba *hba) 5947 5964 { 5948 5965 int ret; ··· 5945 5998 dev_dbg(hba->dev, "%s: setting icc_level 0x%x", 5946 5999 __func__, hba->init_prefetch_data.icc_level); 5947 6000 5948 - ret = ufshcd_set_icc_levels_attr(hba, 5949 - hba->init_prefetch_data.icc_level); 6001 + ret = ufshcd_query_attr_retry(hba, UPIU_QUERY_OPCODE_WRITE_ATTR, 6002 + QUERY_ATTR_IDN_ACTIVE_ICC_LVL, 0, 0, 6003 + &hba->init_prefetch_data.icc_level); 5950 6004 5951 6005 if (ret) 5952 6006 dev_err(hba->dev, ··· 7948 8000 INIT_WORK(&hba->clk_scaling.resume_work, 7949 8001 ufshcd_clk_scaling_resume_work); 7950 8002 7951 - snprintf(wq_name, ARRAY_SIZE(wq_name), "ufs_clkscaling_%d", 8003 + snprintf(wq_name, sizeof(wq_name), "ufs_clkscaling_%d", 7952 8004 host->host_no); 7953 8005 hba->clk_scaling.workq = create_singlethread_workqueue(wq_name); 7954 8006
+6
drivers/scsi/ufs/ufshci.h
··· 48 48 REG_UFS_VERSION = 0x08, 49 49 REG_CONTROLLER_DEV_ID = 0x10, 50 50 REG_CONTROLLER_PROD_ID = 0x14, 51 + REG_AUTO_HIBERNATE_IDLE_TIMER = 0x18, 51 52 REG_INTERRUPT_STATUS = 0x20, 52 53 REG_INTERRUPT_ENABLE = 0x24, 53 54 REG_CONTROLLER_STATUS = 0x30, ··· 160 159 #define DEVICE_ERROR_INDICATOR UFS_BIT(5) 161 160 #define UIC_POWER_MODE_CHANGE_REQ_STATUS_MASK UFS_MASK(0x7, 8) 162 161 162 + #define UFSHCD_STATUS_READY (UTP_TRANSFER_REQ_LIST_READY |\ 163 + UTP_TASK_REQ_LIST_READY |\ 164 + UIC_COMMAND_READY) 165 + 163 166 enum { 164 167 PWR_OK = 0x0, 165 168 PWR_LOCAL = 0x01, ··· 176 171 /* HCE - Host Controller Enable 34h */ 177 172 #define CONTROLLER_ENABLE UFS_BIT(0) 178 173 #define CONTROLLER_DISABLE 0x0 174 + #define CRYPTO_GENERAL_ENABLE UFS_BIT(1) 179 175 180 176 /* UECPA - Host UIC Error Code PHY Adapter Layer 38h */ 181 177 #define UIC_PHY_ADAPTER_LAYER_ERROR UFS_BIT(31)
+24
drivers/scsi/virtio_scsi.c
··· 29 29 #include <scsi/scsi_device.h> 30 30 #include <scsi/scsi_cmnd.h> 31 31 #include <scsi/scsi_tcq.h> 32 + #include <scsi/scsi_devinfo.h> 32 33 #include <linux/seqlock.h> 33 34 #include <linux/blk-mq-virtio.h> 34 35 ··· 706 705 return virtscsi_tmf(vscsi, cmd); 707 706 } 708 707 708 + static int virtscsi_device_alloc(struct scsi_device *sdevice) 709 + { 710 + /* 711 + * Passed through SCSI targets (e.g. with qemu's 'scsi-block') 712 + * may have transfer limits which come from the host SCSI 713 + * controller or something on the host side other than the 714 + * target itself. 715 + * 716 + * To make this work properly, the hypervisor can adjust the 717 + * target's VPD information to advertise these limits. But 718 + * for that to work, the guest has to look at the VPD pages, 719 + * which we won't do by default if it is an SPC-2 device, even 720 + * if it does actually support it. 721 + * 722 + * So, set the blist to always try to read the VPD pages. 723 + */ 724 + sdevice->sdev_bflags = BLIST_TRY_VPD_PAGES; 725 + 726 + return 0; 727 + } 728 + 729 + 709 730 /** 710 731 * virtscsi_change_queue_depth() - Change a virtscsi target's queue depth 711 732 * @sdev: Virtscsi target whose queue depth to change ··· 806 783 .change_queue_depth = virtscsi_change_queue_depth, 807 784 .eh_abort_handler = virtscsi_abort, 808 785 .eh_device_reset_handler = virtscsi_device_reset, 786 + .slave_alloc = virtscsi_device_alloc, 809 787 810 788 .can_queue = 1024, 811 789 .dma_boundary = UINT_MAX,
+1 -1
drivers/scsi/zalon.c
··· 167 167 168 168 MODULE_DEVICE_TABLE(parisc, zalon_tbl); 169 169 170 - static int __exit zalon_remove(struct parisc_device *dev) 170 + static int zalon_remove(struct parisc_device *dev) 171 171 { 172 172 struct Scsi_Host *host = dev_get_drvdata(&dev->dev); 173 173
+2 -1
include/scsi/libfc.h
··· 23 23 #include <linux/timer.h> 24 24 #include <linux/if.h> 25 25 #include <linux/percpu.h> 26 + #include <linux/refcount.h> 26 27 27 28 #include <scsi/scsi_transport.h> 28 29 #include <scsi/scsi_transport_fc.h> ··· 322 321 */ 323 322 struct fc_fcp_pkt { 324 323 spinlock_t scsi_pkt_lock; 325 - atomic_t ref_cnt; 324 + refcount_t ref_cnt; 326 325 327 326 /* SCSI command and data transfer information */ 328 327 u32 data_len;
+2 -1
include/scsi/libiscsi.h
··· 29 29 #include <linux/timer.h> 30 30 #include <linux/workqueue.h> 31 31 #include <linux/kfifo.h> 32 + #include <linux/refcount.h> 32 33 #include <scsi/iscsi_proto.h> 33 34 #include <scsi/iscsi_if.h> 34 35 #include <scsi/scsi_transport_iscsi.h> ··· 140 139 141 140 /* state set/tested under session->lock */ 142 141 int state; 143 - atomic_t refcount; 142 + refcount_t refcount; 144 143 struct list_head running; /* running cmd list */ 145 144 void *dd_data; /* driver/transport data */ 146 145 };
-1
include/scsi/libsas.h
··· 693 693 sector_t capacity, int *hsc); 694 694 extern struct scsi_transport_template * 695 695 sas_domain_attach_transport(struct sas_domain_function_template *); 696 - extern void sas_domain_release_transport(struct scsi_transport_template *); 697 696 698 697 int sas_discover_root_expander(struct domain_device *); 699 698
+1 -1
include/scsi/scsi_device.h
··· 316 316 void scsi_attach_vpd(struct scsi_device *sdev); 317 317 318 318 extern struct scsi_device *scsi_device_from_queue(struct request_queue *q); 319 - extern int scsi_device_get(struct scsi_device *); 319 + extern int __must_check scsi_device_get(struct scsi_device *); 320 320 extern void scsi_device_put(struct scsi_device *); 321 321 extern struct scsi_device *scsi_device_lookup(struct Scsi_Host *, 322 322 uint, uint, u64);
+1
include/scsi/scsi_driver.h
··· 16 16 void (*uninit_command)(struct scsi_cmnd *); 17 17 int (*done)(struct scsi_cmnd *); 18 18 int (*eh_action)(struct scsi_cmnd *, int); 19 + void (*eh_reset)(struct scsi_cmnd *); 19 20 }; 20 21 #define to_scsi_driver(drv) \ 21 22 container_of((drv), struct scsi_driver, gendrv)
+3 -2
include/scsi/scsi_eh.h
··· 23 23 return ((sshdr->response_code >= 0x70) && (sshdr->response_code & 1)); 24 24 } 25 25 26 - extern int scsi_get_sense_info_fld(const u8 * sense_buffer, int sb_len, 27 - u64 * info_out); 26 + extern bool scsi_get_sense_info_fld(const u8 *sense_buffer, int sb_len, 27 + u64 *info_out); 28 28 29 29 extern int scsi_ioctl_reset(struct scsi_device *, int __user *); 30 30 31 31 struct scsi_eh_save { 32 32 /* saved state */ 33 33 int result; 34 + int eh_eflags; 34 35 enum dma_data_direction data_direction; 35 36 unsigned underflow; 36 37 unsigned char cmd_len;
-5
include/scsi/scsi_host.h
··· 452 452 unsigned no_write_same:1; 453 453 454 454 /* 455 - * True if asynchronous aborts are not supported 456 - */ 457 - unsigned no_async_abort:1; 458 - 459 - /* 460 455 * Countdown for host blocking with no commands outstanding. 461 456 */ 462 457 unsigned int max_host_blocked;
+1
include/scsi/scsi_transport_fc.h
··· 162 162 #define FC_PORT_ROLE_FCP_TARGET 0x01 163 163 #define FC_PORT_ROLE_FCP_INITIATOR 0x02 164 164 #define FC_PORT_ROLE_IP_PORT 0x04 165 + #define FC_PORT_ROLE_FCP_DUMMY_INITIATOR 0x08 165 166 166 167 /* The following are for compatibility */ 167 168 #define FC_RPORT_ROLE_UNKNOWN FC_PORT_ROLE_UNKNOWN
-1
include/scsi/sg.h
··· 197 197 #define SG_DEFAULT_RETRIES 0 198 198 199 199 /* Defaults, commented if they differ from original sg driver */ 200 - #define SG_DEF_FORCE_LOW_DMA 0 /* was 1 -> memory below 16MB on i386 */ 201 200 #define SG_DEF_FORCE_PACK_ID 0 202 201 #define SG_DEF_KEEP_ORPHAN 0 203 202 #define SG_DEF_RESERVED_SIZE SG_SCATTER_SZ /* load time option */