Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
"Updates to the usual drivers (ufs, megaraid_sas, lpfc, target, ibmvfc,
scsi_debug) plus the usual assorted minor fixes and updates.

The major change this time around is a prep patch for rethreading the
driver reset handler API so that it no longer takes a scsi_cmnd
structure, which starts to reduce various drivers' dependence on
scsi_cmnd in error handling"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (132 commits)
scsi: ufs: core: Leave space for '\0' in utf8 desc string
scsi: ufs: core: Conversion to bool not necessary
scsi: ufs: core: Fix race between force complete and ISR
scsi: megaraid: Fix up debug message in megaraid_abort_and_reset()
scsi: aic79xx: Fix up NULL command in ahd_done()
scsi: message: fusion: Initialize return value in mptfc_bus_reset()
scsi: mpt3sas: Fix loop logic
scsi: snic: Remove useless code in snic_dr_clean_pending_req()
scsi: core: Add comment to target_destroy in scsi_host_template
scsi: core: Clean up scsi_dev_queue_ready()
scsi: pmcraid: Add missing scsi_device_put() in pmcraid_eh_target_reset_handler()
scsi: target: core: Fix kernel-doc comment
scsi: pmcraid: Fix kernel-doc comment
scsi: core: Handle depopulation and restoration in progress
scsi: ufs: core: Add support for parsing OPP
scsi: ufs: core: Add OPP support for scaling clocks and regulators
scsi: ufs: dt-bindings: common: Add OPP table
scsi: scsi_debug: Add param to control sdev's allow_restart
scsi: scsi_debug: Add debugfs interface to fail target reset
scsi: scsi_debug: Add new error injection type: Reset LUN failed
...

+2755 -1370
+32 -3
Documentation/devicetree/bindings/ufs/ufs-common.yaml
··· 20 20 items: 21 21 - description: Minimum frequency for given clock in Hz 22 22 - description: Maximum frequency for given clock in Hz 23 + deprecated: true 23 24 description: | 25 + Preferred is operating-points-v2. 26 + 24 27 Array of <min max> operating frequencies in Hz stored in the same order 25 - as the clocks property. If this property is not defined or a value in the 26 - array is "0" then it is assumed that the frequency is set by the parent 27 - clock or a fixed rate clock source. 28 + as the clocks property. If either this property or operating-points-v2 is 29 + not defined or a value in the array is "0" then it is assumed that the 30 + frequency is set by the parent clock or a fixed rate clock source. 31 + 32 + operating-points-v2: 33 + description: 34 + Preferred over freq-table-hz. 35 + If present, each OPP must contain array of frequencies stored in the same 36 + order for each clock. If clock frequency in the array is "0" then it is 37 + assumed that the frequency is set by the parent clock or a fixed rate 38 + clock source. 39 + 40 + opp-table: 41 + type: object 28 42 29 43 interrupts: 30 44 maxItems: 1 ··· 89 75 90 76 dependencies: 91 77 freq-table-hz: [ clocks ] 78 + operating-points-v2: [ clocks, clock-names ] 92 79 93 80 required: 94 81 - interrupts 82 + 83 + allOf: 84 + - if: 85 + required: 86 + - freq-table-hz 87 + then: 88 + properties: 89 + operating-points-v2: false 90 + - if: 91 + required: 92 + - operating-points-v2 93 + then: 94 + properties: 95 + freq-table-hz: false 95 96 96 97 additionalProperties: true
+1 -2
MAINTAINERS
··· 11221 11221 L: linux-rdma@vger.kernel.org 11222 11222 L: target-devel@vger.kernel.org 11223 11223 S: Supported 11224 - W: http://www.linux-iscsi.org 11225 11224 T: git git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git master 11226 11225 F: drivers/infiniband/ulp/isert 11227 11226 ··· 13622 13623 M: Kashyap Desai <kashyap.desai@broadcom.com> 13623 13624 M: Sumit Saxena <sumit.saxena@broadcom.com> 13624 13625 M: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> 13626 + M: Chandrakanth patil <chandrakanth.patil@broadcom.com> 13625 13627 L: megaraidlinux.pdl@broadcom.com 13626 13628 L: linux-scsi@vger.kernel.org 13627 13629 S: Maintained ··· 19275 19275 L: linux-scsi@vger.kernel.org 19276 19276 L: target-devel@vger.kernel.org 19277 19277 S: Supported 19278 - W: http://www.linux-iscsi.org 19279 19278 Q: https://patchwork.kernel.org/project/target-devel/list/ 19280 19279 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git 19281 19280 F: Documentation/target/
+3
drivers/infiniband/ulp/srpt/ib_srpt.c
··· 3867 3867 .tfc_discovery_attrs = srpt_da_attrs, 3868 3868 .tfc_wwn_attrs = srpt_wwn_attrs, 3869 3869 .tfc_tpg_attrib_attrs = srpt_tpg_attrib_attrs, 3870 + 3871 + .default_submit_type = TARGET_DIRECT_SUBMIT, 3872 + .direct_submit_supp = 1, 3870 3873 }; 3871 3874 3872 3875 /**
+9 -10
drivers/message/fusion/mptctl.c
··· 1328 1328 1329 1329 /* Set the Version Strings. 1330 1330 */ 1331 - strncpy (karg->driverVersion, MPT_LINUX_PACKAGE_NAME, MPT_IOCTL_VERSION_LENGTH); 1332 - karg->driverVersion[MPT_IOCTL_VERSION_LENGTH-1]='\0'; 1331 + strscpy_pad(karg->driverVersion, MPT_LINUX_PACKAGE_NAME, 1332 + sizeof(karg->driverVersion)); 1333 1333 1334 1334 karg->busChangeEvent = 0; 1335 1335 karg->hostId = ioc->pfacts[port].PortSCSIID; ··· 1493 1493 #else 1494 1494 karg.chip_type = ioc->pcidev->device; 1495 1495 #endif 1496 - strncpy (karg.name, ioc->name, MPT_MAX_NAME); 1497 - karg.name[MPT_MAX_NAME-1]='\0'; 1498 - strncpy (karg.product, ioc->prod_name, MPT_PRODUCT_LENGTH); 1499 - karg.product[MPT_PRODUCT_LENGTH-1]='\0'; 1496 + strscpy_pad(karg.name, ioc->name, sizeof(karg.name)); 1497 + strscpy_pad(karg.product, ioc->prod_name, sizeof(karg.product)); 1500 1498 1501 1499 /* Copy the data from kernel memory to user memory 1502 1500 */ ··· 2392 2394 cfg.dir = 0; /* read */ 2393 2395 cfg.timeout = 10; 2394 2396 2395 - strncpy(karg.serial_number, " ", 24); 2397 + strscpy_pad(karg.serial_number, " ", sizeof(karg.serial_number)); 2396 2398 if (mpt_config(ioc, &cfg) == 0) { 2397 2399 if (cfg.cfghdr.hdr->PageLength > 0) { 2398 2400 /* Issue the second config page request */ ··· 2406 2408 if (mpt_config(ioc, &cfg) == 0) { 2407 2409 ManufacturingPage0_t *pdata = (ManufacturingPage0_t *) pbuf; 2408 2410 if (strlen(pdata->BoardTracerNumber) > 1) { 2409 - strscpy(karg.serial_number, 2410 - pdata->BoardTracerNumber, 24); 2411 + strscpy_pad(karg.serial_number, 2412 + pdata->BoardTracerNumber, 2413 + sizeof(karg.serial_number)); 2411 2414 } 2412 2415 } 2413 2416 dma_free_coherent(&ioc->pcidev->dev, ··· 2455 2456 } 2456 2457 } 2457 2458 2458 - /* 2459 + /* 2459 2460 * Gather ISTWI(Industry Standard Two Wire Interface) Data 2460 2461 */ 2461 2462 if ((mf = mpt_get_msg_frame(mptctl_id, ioc)) == NULL) {
+65 -29
drivers/message/fusion/mptfc.c
···
 };
 
 static int
-mptfc_block_error_handler(struct scsi_cmnd *SCpnt,
-		int (*func)(struct scsi_cmnd *SCpnt),
-		const char *caller)
+mptfc_block_error_handler(struct fc_rport *rport)
 {
 	MPT_SCSI_HOST *hd;
-	struct scsi_device *sdev = SCpnt->device;
-	struct Scsi_Host *shost = sdev->host;
-	struct fc_rport *rport = starget_to_rport(scsi_target(sdev));
+	struct Scsi_Host *shost = rport_to_shost(rport);
 	unsigned long flags;
 	int ready;
-	MPT_ADAPTER *ioc;
+	MPT_ADAPTER *ioc;
 	int loops = 40;	/* seconds */
 
-	hd = shost_priv(SCpnt->device->host);
+	hd = shost_priv(shost);
 	ioc = hd->ioc;
 	spin_lock_irqsave(shost->host_lock, flags);
 	while ((ready = fc_remote_port_chkready(rport) >> 16) == DID_IMM_RETRY
 	 || (loops > 0 && ioc->active == 0)) {
 		spin_unlock_irqrestore(shost->host_lock, flags);
 		dfcprintk (ioc, printk(MYIOC_s_DEBUG_FMT
-			"mptfc_block_error_handler.%d: %d:%llu, port status is "
-			"%x, active flag %d, deferring %s recovery.\n",
+			"mptfc_block_error_handler.%d: %s, port status is "
+			"%x, active flag %d, deferring recovery.\n",
 			ioc->name, ioc->sh->host_no,
-			SCpnt->device->id, SCpnt->device->lun,
-			ready, ioc->active, caller));
+			dev_name(&rport->dev), ready, ioc->active));
 		msleep(1000);
 		spin_lock_irqsave(shost->host_lock, flags);
 		loops --;
 	}
 	spin_unlock_irqrestore(shost->host_lock, flags);
 
-	if (ready == DID_NO_CONNECT || !SCpnt->device->hostdata
-	 || ioc->active == 0) {
+	if (ready == DID_NO_CONNECT || ioc->active == 0) {
 		dfcprintk (ioc, printk(MYIOC_s_DEBUG_FMT
-			"%s.%d: %d:%llu, failing recovery, "
-			"port state %x, active %d, vdevice %p.\n", caller,
+			"mpt_block_error_handler.%d: %s, failing recovery, "
+			"port state %x, active %d.\n",
 			ioc->name, ioc->sh->host_no,
-			SCpnt->device->id, SCpnt->device->lun, ready,
-			ioc->active, SCpnt->device->hostdata));
+			dev_name(&rport->dev), ready, ioc->active));
 		return FAILED;
 	}
-	dfcprintk (ioc, printk(MYIOC_s_DEBUG_FMT
-		"%s.%d: %d:%llu, executing recovery.\n", caller,
-		ioc->name, ioc->sh->host_no,
-		SCpnt->device->id, SCpnt->device->lun));
-	return (*func)(SCpnt);
+	return SUCCESS;
 }
 
 static int
 mptfc_abort(struct scsi_cmnd *SCpnt)
 {
-	return
-	    mptfc_block_error_handler(SCpnt, mptscsih_abort, __func__);
+	struct Scsi_Host *shost = SCpnt->device->host;
+	struct fc_rport *rport = starget_to_rport(scsi_target(SCpnt->device));
+	MPT_SCSI_HOST __maybe_unused *hd = shost_priv(shost);
+	int rtn;
+
+	rtn = mptfc_block_error_handler(rport);
+	if (rtn == SUCCESS) {
+		dfcprintk (hd->ioc, printk(MYIOC_s_DEBUG_FMT
+			"%s.%d: %d:%llu, executing recovery.\n", __func__,
+			hd->ioc->name, shost->host_no,
+			SCpnt->device->id, SCpnt->device->lun));
+		rtn = mptscsih_abort(SCpnt);
+	}
+	return rtn;
 }
 
 static int
 mptfc_dev_reset(struct scsi_cmnd *SCpnt)
 {
-	return
-	    mptfc_block_error_handler(SCpnt, mptscsih_dev_reset, __func__);
+	struct Scsi_Host *shost = SCpnt->device->host;
+	struct fc_rport *rport = starget_to_rport(scsi_target(SCpnt->device));
+	MPT_SCSI_HOST __maybe_unused *hd = shost_priv(shost);
+	int rtn;
+
+	rtn = mptfc_block_error_handler(rport);
+	if (rtn == SUCCESS) {
+		dfcprintk (hd->ioc, printk(MYIOC_s_DEBUG_FMT
+			"%s.%d: %d:%llu, executing recovery.\n", __func__,
+			hd->ioc->name, shost->host_no,
+			SCpnt->device->id, SCpnt->device->lun));
+		rtn = mptscsih_dev_reset(SCpnt);
+	}
+	return rtn;
 }
 
 static int
 mptfc_bus_reset(struct scsi_cmnd *SCpnt)
 {
-	return
-	    mptfc_block_error_handler(SCpnt, mptscsih_bus_reset, __func__);
+	struct Scsi_Host *shost = SCpnt->device->host;
+	MPT_SCSI_HOST __maybe_unused *hd = shost_priv(shost);
+	int channel = SCpnt->device->channel;
+	struct mptfc_rport_info *ri;
+	int rtn = FAILED;
+
+	list_for_each_entry(ri, &hd->ioc->fc_rports, list) {
+		if (ri->flags & MPT_RPORT_INFO_FLAGS_REGISTERED) {
+			VirtTarget *vtarget = ri->starget->hostdata;
+
+			if (!vtarget || vtarget->channel != channel)
+				continue;
+			rtn = fc_block_rport(ri->rport);
+			if (rtn != 0)
+				break;
+		}
+	}
+	if (rtn == 0) {
+		dfcprintk (hd->ioc, printk(MYIOC_s_DEBUG_FMT
+			"%s.%d: %d:%llu, executing recovery.\n", __func__,
+			hd->ioc->name, shost->host_no,
+			SCpnt->device->id, SCpnt->device->lun));
+		rtn = mptscsih_bus_reset(SCpnt);
+	}
+	return rtn;
 }
 
 static void
+8 -8
drivers/message/fusion/mptsas.c
··· 2964 2964 goto out_free; 2965 2965 2966 2966 manufacture_reply = data_out + sizeof(struct rep_manu_request); 2967 - strncpy(edev->vendor_id, manufacture_reply->vendor_id, 2968 - SAS_EXPANDER_VENDOR_ID_LEN); 2969 - strncpy(edev->product_id, manufacture_reply->product_id, 2970 - SAS_EXPANDER_PRODUCT_ID_LEN); 2971 - strncpy(edev->product_rev, manufacture_reply->product_rev, 2972 - SAS_EXPANDER_PRODUCT_REV_LEN); 2967 + strscpy(edev->vendor_id, manufacture_reply->vendor_id, 2968 + sizeof(edev->vendor_id)); 2969 + strscpy(edev->product_id, manufacture_reply->product_id, 2970 + sizeof(edev->product_id)); 2971 + strscpy(edev->product_rev, manufacture_reply->product_rev, 2972 + sizeof(edev->product_rev)); 2973 2973 edev->level = manufacture_reply->sas_format; 2974 2974 if (manufacture_reply->sas_format) { 2975 - strncpy(edev->component_vendor_id, 2975 + strscpy(edev->component_vendor_id, 2976 2976 manufacture_reply->component_vendor_id, 2977 - SAS_EXPANDER_COMPONENT_VENDOR_ID_LEN); 2977 + sizeof(edev->component_vendor_id)); 2978 2978 tmp = (u8 *)&manufacture_reply->component_id; 2979 2979 edev->component_id = tmp[0] << 8 | tmp[1]; 2980 2980 edev->component_revision_id =
+54 -1
drivers/message/fusion/mptscsih.c
··· 1793 1793 1794 1794 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 1795 1795 /** 1796 - * mptscsih_dev_reset - Perform a SCSI TARGET_RESET! new_eh variant 1796 + * mptscsih_dev_reset - Perform a SCSI LOGICAL_UNIT_RESET! 1797 1797 * @SCpnt: Pointer to scsi_cmnd structure, IO which reset is due to 1798 1798 * 1799 1799 * (linux scsi_host_template.eh_dev_reset_handler routine) ··· 1802 1802 **/ 1803 1803 int 1804 1804 mptscsih_dev_reset(struct scsi_cmnd * SCpnt) 1805 + { 1806 + MPT_SCSI_HOST *hd; 1807 + int retval; 1808 + VirtDevice *vdevice; 1809 + MPT_ADAPTER *ioc; 1810 + 1811 + /* If we can't locate our host adapter structure, return FAILED status. 1812 + */ 1813 + if ((hd = shost_priv(SCpnt->device->host)) == NULL){ 1814 + printk(KERN_ERR MYNAM ": lun reset: " 1815 + "Can't locate host! (sc=%p)\n", SCpnt); 1816 + return FAILED; 1817 + } 1818 + 1819 + ioc = hd->ioc; 1820 + printk(MYIOC_s_INFO_FMT "attempting lun reset! (sc=%p)\n", 1821 + ioc->name, SCpnt); 1822 + scsi_print_command(SCpnt); 1823 + 1824 + vdevice = SCpnt->device->hostdata; 1825 + if (!vdevice || !vdevice->vtarget) { 1826 + retval = 0; 1827 + goto out; 1828 + } 1829 + 1830 + retval = mptscsih_IssueTaskMgmt(hd, 1831 + MPI_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET, 1832 + vdevice->vtarget->channel, 1833 + vdevice->vtarget->id, vdevice->lun, 0, 1834 + mptscsih_get_tm_timeout(ioc)); 1835 + 1836 + out: 1837 + printk (MYIOC_s_INFO_FMT "lun reset: %s (sc=%p)\n", 1838 + ioc->name, ((retval == 0) ? "SUCCESS" : "FAILED" ), SCpnt); 1839 + 1840 + if (retval == 0) 1841 + return SUCCESS; 1842 + else 1843 + return FAILED; 1844 + } 1845 + 1846 + /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 1847 + /** 1848 + * mptscsih_target_reset - Perform a SCSI TARGET_RESET! 
1849 + * @SCpnt: Pointer to scsi_cmnd structure, IO which reset is due to 1850 + * 1851 + * (linux scsi_host_template.eh_target_reset_handler routine) 1852 + * 1853 + * Returns SUCCESS or FAILED. 1854 + **/ 1855 + int 1856 + mptscsih_target_reset(struct scsi_cmnd * SCpnt) 1805 1857 { 1806 1858 MPT_SCSI_HOST *hd; 1807 1859 int retval; ··· 3308 3256 EXPORT_SYMBOL(mptscsih_slave_configure); 3309 3257 EXPORT_SYMBOL(mptscsih_abort); 3310 3258 EXPORT_SYMBOL(mptscsih_dev_reset); 3259 + EXPORT_SYMBOL(mptscsih_target_reset); 3311 3260 EXPORT_SYMBOL(mptscsih_bus_reset); 3312 3261 EXPORT_SYMBOL(mptscsih_host_reset); 3313 3262 EXPORT_SYMBOL(mptscsih_bios_param);
+1
drivers/message/fusion/mptscsih.h
··· 120 120 extern int mptscsih_slave_configure(struct scsi_device *device); 121 121 extern int mptscsih_abort(struct scsi_cmnd * SCpnt); 122 122 extern int mptscsih_dev_reset(struct scsi_cmnd * SCpnt); 123 + extern int mptscsih_target_reset(struct scsi_cmnd * SCpnt); 123 124 extern int mptscsih_bus_reset(struct scsi_cmnd * SCpnt); 124 125 extern int mptscsih_host_reset(struct scsi_cmnd *SCpnt); 125 126 extern int mptscsih_bios_param(struct scsi_device * sdev, struct block_device *bdev, sector_t capacity, int geom[]);
-15
drivers/scsi/Kconfig
··· 834 834 To compile this driver as a module, choose M here: the 835 835 module will be called imm. 836 836 837 - config SCSI_IZIP_EPP16 838 - bool "ppa/imm option - Use slow (but safe) EPP-16" 839 - depends on SCSI_IMM 840 - help 841 - EPP (Enhanced Parallel Port) is a standard for parallel ports which 842 - allows them to act as expansion buses that can handle up to 64 843 - peripheral devices. 844 - 845 - Some parallel port chipsets are slower than their motherboard, and 846 - so we have to control the state of the chipset's FIFO queue every 847 - now and then to avoid data loss. This will be done if you say Y 848 - here. 849 - 850 - Generally, saying Y is the safe option and slows things down a bit. 851 - 852 837 config SCSI_IZIP_SLOW_CTR 853 838 bool "ppa/imm option - Assume slow parport control register" 854 839 depends on SCSI_PPA || SCSI_IMM
+26 -12
drivers/scsi/aic7xxx/aic79xx_osm.c
··· 536 536 struct scsi_cmnd *cmd; 537 537 538 538 cmd = scb->io_ctx; 539 - ahd_sync_sglist(ahd, scb, BUS_DMASYNC_POSTWRITE); 540 - scsi_dma_unmap(cmd); 539 + if (cmd) { 540 + ahd_sync_sglist(ahd, scb, BUS_DMASYNC_POSTWRITE); 541 + scsi_dma_unmap(cmd); 542 + } 541 543 } 542 544 543 545 /******************************** Macros **************************************/ 544 - #define BUILD_SCSIID(ahd, cmd) \ 545 - (((scmd_id(cmd) << TID_SHIFT) & TID) | (ahd)->our_id) 546 + static inline unsigned int ahd_build_scsiid(struct ahd_softc *ahd, 547 + struct scsi_device *sdev) 548 + { 549 + return ((sdev_id(sdev) << TID_SHIFT) & TID) | (ahd)->our_id; 550 + } 546 551 547 552 /* 548 553 * Return a string describing the driver. ··· 816 811 817 812 tinfo = ahd_fetch_transinfo(ahd, 'A', ahd->our_id, 818 813 cmd->device->id, &tstate); 819 - reset_scb->io_ctx = cmd; 814 + reset_scb->io_ctx = NULL; 820 815 reset_scb->platform_data->dev = dev; 821 816 reset_scb->sg_count = 0; 822 817 ahd_set_residual(reset_scb, 0); 823 818 ahd_set_sense_residual(reset_scb, 0); 824 819 reset_scb->platform_data->xfer_len = 0; 825 820 reset_scb->hscb->control = 0; 826 - reset_scb->hscb->scsiid = BUILD_SCSIID(ahd,cmd); 821 + reset_scb->hscb->scsiid = ahd_build_scsiid(ahd, cmd->device); 827 822 reset_scb->hscb->lun = cmd->device->lun; 828 823 reset_scb->hscb->cdb_len = 0; 829 824 reset_scb->hscb->task_management = SIU_TASKMGMT_LUN_RESET; ··· 1582 1577 * Fill out basics of the HSCB. 
1583 1578 */ 1584 1579 hscb->control = 0; 1585 - hscb->scsiid = BUILD_SCSIID(ahd, cmd); 1580 + hscb->scsiid = ahd_build_scsiid(ahd, cmd->device); 1586 1581 hscb->lun = cmd->device->lun; 1587 1582 scb->hscb->task_management = 0; 1588 1583 mask = SCB_GET_TARGET_MASK(ahd, scb); ··· 1771 1766 dev = scb->platform_data->dev; 1772 1767 dev->active--; 1773 1768 dev->openings++; 1774 - if ((cmd->result & (CAM_DEV_QFRZN << 16)) != 0) { 1775 - cmd->result &= ~(CAM_DEV_QFRZN << 16); 1776 - dev->qfrozen--; 1769 + if (cmd) { 1770 + if ((cmd->result & (CAM_DEV_QFRZN << 16)) != 0) { 1771 + cmd->result &= ~(CAM_DEV_QFRZN << 16); 1772 + dev->qfrozen--; 1773 + } 1774 + } else if (scb->flags & SCB_DEVICE_RESET) { 1775 + if (ahd->platform_data->eh_done) 1776 + complete(ahd->platform_data->eh_done); 1777 + ahd_free_scb(ahd, scb); 1778 + return; 1777 1779 } 1778 1780 ahd_linux_unmap_scb(ahd, scb); 1779 1781 ··· 1834 1822 } else { 1835 1823 ahd_set_transaction_status(scb, CAM_REQ_CMP); 1836 1824 } 1837 - } else if (ahd_get_transaction_status(scb) == CAM_SCSI_STATUS_ERROR) { 1825 + } else if (cmd && 1826 + ahd_get_transaction_status(scb) == CAM_SCSI_STATUS_ERROR) { 1838 1827 ahd_linux_handle_scsi_status(ahd, cmd->device, scb); 1839 1828 } 1840 1829 ··· 1869 1856 } 1870 1857 1871 1858 ahd_free_scb(ahd, scb); 1872 - ahd_linux_queue_cmd_complete(ahd, cmd); 1859 + if (cmd) 1860 + ahd_linux_queue_cmd_complete(ahd, cmd); 1873 1861 } 1874 1862 1875 1863 static void
+72 -59
drivers/scsi/aic7xxx/aic7xxx_osm.c
··· 366 366 struct scsi_cmnd *cmd); 367 367 static void ahc_linux_freeze_simq(struct ahc_softc *ahc); 368 368 static void ahc_linux_release_simq(struct ahc_softc *ahc); 369 - static int ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag); 369 + static int ahc_linux_queue_recovery_cmd(struct scsi_device *sdev, 370 + struct scsi_cmnd *cmd); 370 371 static void ahc_linux_initialize_scsi_bus(struct ahc_softc *ahc); 371 372 static u_int ahc_linux_user_tagdepth(struct ahc_softc *ahc, 372 373 struct ahc_devinfo *devinfo); ··· 729 728 { 730 729 int error; 731 730 732 - error = ahc_linux_queue_recovery_cmd(cmd, SCB_ABORT); 731 + error = ahc_linux_queue_recovery_cmd(cmd->device, cmd); 733 732 if (error != SUCCESS) 734 733 printk("aic7xxx_abort returns 0x%x\n", error); 735 734 return (error); ··· 743 742 { 744 743 int error; 745 744 746 - error = ahc_linux_queue_recovery_cmd(cmd, SCB_DEVICE_RESET); 745 + error = ahc_linux_queue_recovery_cmd(cmd->device, NULL); 747 746 if (error != SUCCESS) 748 747 printk("aic7xxx_dev_reset returns 0x%x\n", error); 749 748 return (error); ··· 799 798 800 799 /**************************** Tasklet Handler *********************************/ 801 800 802 - /******************************** Macros **************************************/ 803 - #define BUILD_SCSIID(ahc, cmd) \ 804 - ((((cmd)->device->id << TID_SHIFT) & TID) \ 805 - | (((cmd)->device->channel == 0) ? (ahc)->our_id : (ahc)->our_id_b) \ 806 - | (((cmd)->device->channel == 0) ? 0 : TWIN_CHNLB)) 801 + 802 + static inline unsigned int ahc_build_scsiid(struct ahc_softc *ahc, 803 + struct scsi_device *sdev) 804 + { 805 + unsigned int scsiid = (sdev->id << TID_SHIFT) & TID; 806 + 807 + if (sdev->channel == 0) 808 + scsiid |= ahc->our_id; 809 + else 810 + scsiid |= ahc->our_id_b | TWIN_CHNLB; 811 + return scsiid; 812 + } 807 813 808 814 /******************************** Bus DMA *************************************/ 809 815 int ··· 1465 1457 * Fill out basics of the HSCB. 
1466 1458 */ 1467 1459 hscb->control = 0; 1468 - hscb->scsiid = BUILD_SCSIID(ahc, cmd); 1460 + hscb->scsiid = ahc_build_scsiid(ahc, cmd->device); 1469 1461 hscb->lun = cmd->device->lun; 1470 1462 mask = SCB_GET_TARGET_MASK(ahc, scb); 1471 1463 tinfo = ahc_fetch_transinfo(ahc, SCB_GET_CHANNEL(ahc, scb), ··· 2037 2029 } 2038 2030 2039 2031 static int 2040 - ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag) 2032 + ahc_linux_queue_recovery_cmd(struct scsi_device *sdev, 2033 + struct scsi_cmnd *cmd) 2041 2034 { 2042 2035 struct ahc_softc *ahc; 2043 2036 struct ahc_linux_device *dev; 2044 - struct scb *pending_scb; 2037 + struct scb *pending_scb = NULL, *scb; 2045 2038 u_int saved_scbptr; 2046 2039 u_int active_scb_index; 2047 2040 u_int last_phase; ··· 2055 2046 int disconnected; 2056 2047 unsigned long flags; 2057 2048 2058 - pending_scb = NULL; 2059 2049 paused = FALSE; 2060 2050 wait = FALSE; 2061 - ahc = *(struct ahc_softc **)cmd->device->host->hostdata; 2051 + ahc = *(struct ahc_softc **)sdev->host->hostdata; 2062 2052 2063 - scmd_printk(KERN_INFO, cmd, "Attempting to queue a%s message\n", 2064 - flag == SCB_ABORT ? "n ABORT" : " TARGET RESET"); 2053 + sdev_printk(KERN_INFO, sdev, "Attempting to queue a%s message\n", 2054 + cmd ? "n ABORT" : " TARGET RESET"); 2065 2055 2066 - printk("CDB:"); 2067 - for (cdb_byte = 0; cdb_byte < cmd->cmd_len; cdb_byte++) 2068 - printk(" 0x%x", cmd->cmnd[cdb_byte]); 2069 - printk("\n"); 2056 + if (cmd) { 2057 + printk("CDB:"); 2058 + for (cdb_byte = 0; cdb_byte < cmd->cmd_len; cdb_byte++) 2059 + printk(" 0x%x", cmd->cmnd[cdb_byte]); 2060 + printk("\n"); 2061 + } 2070 2062 2071 2063 ahc_lock(ahc, &flags); 2072 2064 ··· 2078 2068 * at all, and the system wanted us to just abort the 2079 2069 * command, return success. 
2080 2070 */ 2081 - dev = scsi_transport_device_data(cmd->device); 2071 + dev = scsi_transport_device_data(sdev); 2082 2072 2083 2073 if (dev == NULL) { 2084 2074 /* ··· 2086 2076 * so we must not still own the command. 2087 2077 */ 2088 2078 printk("%s:%d:%d:%d: Is not an active device\n", 2089 - ahc_name(ahc), cmd->device->channel, cmd->device->id, 2090 - (u8)cmd->device->lun); 2079 + ahc_name(ahc), sdev->channel, sdev->id, (u8)sdev->lun); 2091 2080 retval = SUCCESS; 2092 2081 goto no_cmd; 2093 2082 } 2094 2083 2095 - if ((dev->flags & (AHC_DEV_Q_BASIC|AHC_DEV_Q_TAGGED)) == 0 2084 + if (cmd && (dev->flags & (AHC_DEV_Q_BASIC|AHC_DEV_Q_TAGGED)) == 0 2096 2085 && ahc_search_untagged_queues(ahc, cmd, cmd->device->id, 2097 2086 cmd->device->channel + 'A', 2098 2087 (u8)cmd->device->lun, ··· 2106 2097 /* 2107 2098 * See if we can find a matching cmd in the pending list. 2108 2099 */ 2109 - LIST_FOREACH(pending_scb, &ahc->pending_scbs, pending_links) { 2110 - if (pending_scb->io_ctx == cmd) 2111 - break; 2112 - } 2113 - 2114 - if (pending_scb == NULL && flag == SCB_DEVICE_RESET) { 2115 - 2116 - /* Any SCB for this device will do for a target reset */ 2117 - LIST_FOREACH(pending_scb, &ahc->pending_scbs, pending_links) { 2118 - if (ahc_match_scb(ahc, pending_scb, scmd_id(cmd), 2119 - scmd_channel(cmd) + 'A', 2120 - CAM_LUN_WILDCARD, 2121 - SCB_LIST_NULL, ROLE_INITIATOR)) 2100 + if (cmd) { 2101 + LIST_FOREACH(scb, &ahc->pending_scbs, pending_links) { 2102 + if (scb->io_ctx == cmd) { 2103 + pending_scb = scb; 2122 2104 break; 2105 + } 2106 + } 2107 + } else { 2108 + /* Any SCB for this device will do for a target reset */ 2109 + LIST_FOREACH(scb, &ahc->pending_scbs, pending_links) { 2110 + if (ahc_match_scb(ahc, scb, sdev->id, 2111 + sdev->channel + 'A', 2112 + CAM_LUN_WILDCARD, 2113 + SCB_LIST_NULL, ROLE_INITIATOR)) { 2114 + pending_scb = scb; 2115 + break; 2116 + } 2123 2117 } 2124 2118 } 2125 2119 2126 2120 if (pending_scb == NULL) { 2127 - scmd_printk(KERN_INFO, cmd, 
"Command not found\n"); 2121 + sdev_printk(KERN_INFO, sdev, "Command not found\n"); 2128 2122 goto no_cmd; 2129 2123 } 2130 2124 ··· 2158 2146 ahc_dump_card_state(ahc); 2159 2147 2160 2148 disconnected = TRUE; 2161 - if (flag == SCB_ABORT) { 2162 - if (ahc_search_qinfifo(ahc, cmd->device->id, 2163 - cmd->device->channel + 'A', 2164 - cmd->device->lun, 2149 + if (cmd) { 2150 + if (ahc_search_qinfifo(ahc, sdev->id, 2151 + sdev->channel + 'A', 2152 + sdev->lun, 2165 2153 pending_scb->hscb->tag, 2166 2154 ROLE_INITIATOR, CAM_REQ_ABORTED, 2167 2155 SEARCH_COMPLETE) > 0) { 2168 2156 printk("%s:%d:%d:%d: Cmd aborted from QINFIFO\n", 2169 - ahc_name(ahc), cmd->device->channel, 2170 - cmd->device->id, (u8)cmd->device->lun); 2157 + ahc_name(ahc), sdev->channel, 2158 + sdev->id, (u8)sdev->lun); 2171 2159 retval = SUCCESS; 2172 2160 goto done; 2173 2161 } 2174 - } else if (ahc_search_qinfifo(ahc, cmd->device->id, 2175 - cmd->device->channel + 'A', 2176 - cmd->device->lun, 2162 + } else if (ahc_search_qinfifo(ahc, sdev->id, 2163 + sdev->channel + 'A', 2164 + sdev->lun, 2177 2165 pending_scb->hscb->tag, 2178 2166 ROLE_INITIATOR, /*status*/0, 2179 2167 SEARCH_COUNT) > 0) { ··· 2186 2174 bus_scb = ahc_lookup_scb(ahc, ahc_inb(ahc, SCB_TAG)); 2187 2175 if (bus_scb == pending_scb) 2188 2176 disconnected = FALSE; 2189 - else if (flag != SCB_ABORT 2177 + else if (!cmd 2190 2178 && ahc_inb(ahc, SAVED_SCSIID) == pending_scb->hscb->scsiid 2191 2179 && ahc_inb(ahc, SAVED_LUN) == SCB_GET_LUN(pending_scb)) 2192 2180 disconnected = FALSE; ··· 2206 2194 saved_scsiid = ahc_inb(ahc, SAVED_SCSIID); 2207 2195 if (last_phase != P_BUSFREE 2208 2196 && (pending_scb->hscb->tag == active_scb_index 2209 - || (flag == SCB_DEVICE_RESET 2210 - && SCSIID_TARGET(ahc, saved_scsiid) == scmd_id(cmd)))) { 2197 + || (!cmd && SCSIID_TARGET(ahc, saved_scsiid) == sdev->id))) { 2211 2198 2212 2199 /* 2213 2200 * We're active on the bus, so assert ATN 2214 2201 * and hope that the target responds. 
2215 2202 */ 2216 2203 pending_scb = ahc_lookup_scb(ahc, active_scb_index); 2217 - pending_scb->flags |= SCB_RECOVERY_SCB|flag; 2204 + pending_scb->flags |= SCB_RECOVERY_SCB; 2205 + pending_scb->flags |= cmd ? SCB_ABORT : SCB_DEVICE_RESET; 2218 2206 ahc_outb(ahc, MSG_OUT, HOST_MSG); 2219 2207 ahc_outb(ahc, SCSISIGO, last_phase|ATNO); 2220 - scmd_printk(KERN_INFO, cmd, "Device is active, asserting ATN\n"); 2208 + sdev_printk(KERN_INFO, sdev, "Device is active, asserting ATN\n"); 2221 2209 wait = TRUE; 2222 2210 } else if (disconnected) { 2223 2211 ··· 2238 2226 * an unsolicited reselection occurred. 2239 2227 */ 2240 2228 pending_scb->hscb->control |= MK_MESSAGE|DISCONNECTED; 2241 - pending_scb->flags |= SCB_RECOVERY_SCB|flag; 2229 + pending_scb->flags |= SCB_RECOVERY_SCB; 2230 + pending_scb->flags |= cmd ? SCB_ABORT : SCB_DEVICE_RESET; 2242 2231 2243 2232 /* 2244 2233 * Remove any cached copy of this SCB in the ··· 2248 2235 * same element in the SCB, SCB_NEXT, for 2249 2236 * both the qinfifo and the disconnected list. 2250 2237 */ 2251 - ahc_search_disc_list(ahc, cmd->device->id, 2252 - cmd->device->channel + 'A', 2253 - cmd->device->lun, pending_scb->hscb->tag, 2238 + ahc_search_disc_list(ahc, sdev->id, 2239 + sdev->channel + 'A', 2240 + sdev->lun, pending_scb->hscb->tag, 2254 2241 /*stop_on_first*/TRUE, 2255 2242 /*remove*/TRUE, 2256 2243 /*save_state*/FALSE); ··· 2273 2260 * so we are the next SCB for this target 2274 2261 * to run. 
2275 2262 */ 2276 - ahc_search_qinfifo(ahc, cmd->device->id, 2277 - cmd->device->channel + 'A', 2278 - cmd->device->lun, SCB_LIST_NULL, 2263 + ahc_search_qinfifo(ahc, sdev->id, 2264 + sdev->channel + 'A', 2265 + (u8)sdev->lun, SCB_LIST_NULL, 2279 2266 ROLE_INITIATOR, CAM_REQUEUE_REQ, 2280 2267 SEARCH_COMPLETE); 2281 2268 ahc_qinfifo_requeue_tail(ahc, pending_scb); ··· 2284 2271 printk("Device is disconnected, re-queuing SCB\n"); 2285 2272 wait = TRUE; 2286 2273 } else { 2287 - scmd_printk(KERN_INFO, cmd, "Unable to deliver message\n"); 2274 + sdev_printk(KERN_INFO, sdev, "Unable to deliver message\n"); 2288 2275 retval = FAILED; 2289 2276 goto done; 2290 2277 }
+1
drivers/scsi/bnx2fc/bnx2fc.h
··· 384 384 }; 385 385 386 386 struct bnx2fc_mp_req { 387 + u64 tm_lun; 387 388 u8 tm_flags; 388 389 389 390 u32 req_len;
+9 -5
drivers/scsi/bnx2fc/bnx2fc_hwi.c
··· 1709 1709 struct fcoe_cached_sge_ctx *cached_sge; 1710 1710 struct fcoe_ext_mul_sges_ctx *sgl; 1711 1711 int dev_type = tgt->dev_type; 1712 - u64 *fcp_cmnd; 1712 + struct fcp_cmnd *fcp_cmnd; 1713 + u64 *raw_fcp_cmnd; 1713 1714 u64 tmp_fcp_cmnd[4]; 1714 1715 u32 context_id; 1715 1716 int cnt, i; ··· 1779 1778 task->txwr_rxrd.union_ctx.tx_seq.ctx.seq_cnt = 1; 1780 1779 1781 1780 /* Fill FCP_CMND IU */ 1782 - fcp_cmnd = (u64 *) 1781 + fcp_cmnd = (struct fcp_cmnd *)&tmp_fcp_cmnd; 1782 + bnx2fc_build_fcp_cmnd(io_req, fcp_cmnd); 1783 + int_to_scsilun(sc_cmd->device->lun, &fcp_cmnd->fc_lun); 1784 + memcpy(fcp_cmnd->fc_cdb, sc_cmd->cmnd, sc_cmd->cmd_len); 1785 + raw_fcp_cmnd = (u64 *) 1783 1786 task->txwr_rxrd.union_ctx.fcp_cmd.opaque; 1784 - bnx2fc_build_fcp_cmnd(io_req, (struct fcp_cmnd *)&tmp_fcp_cmnd); 1785 1787 1786 1788 /* swap fcp_cmnd */ 1787 1789 cnt = sizeof(struct fcp_cmnd) / sizeof(u64); 1788 1790 1789 1791 for (i = 0; i < cnt; i++) { 1790 - *fcp_cmnd = cpu_to_be64(tmp_fcp_cmnd[i]); 1791 - fcp_cmnd++; 1792 + *raw_fcp_cmnd = cpu_to_be64(tmp_fcp_cmnd[i]); 1793 + raw_fcp_cmnd++; 1792 1794 } 1793 1795 1794 1796 /* Rx Write Tx Read */
+47 -47
drivers/scsi/bnx2fc/bnx2fc_io.c
··· 656 656 return SUCCESS; 657 657 } 658 658 659 - static int bnx2fc_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags) 659 + static int bnx2fc_initiate_tmf(struct fc_lport *lport, struct fc_rport *rport, 660 + u64 tm_lun, u8 tm_flags) 660 661 { 661 - struct fc_lport *lport; 662 - struct fc_rport *rport; 663 662 struct fc_rport_libfc_priv *rp; 664 663 struct fcoe_port *port; 665 664 struct bnx2fc_interface *interface; ··· 667 668 struct bnx2fc_mp_req *tm_req; 668 669 struct fcoe_task_ctx_entry *task; 669 670 struct fcoe_task_ctx_entry *task_page; 670 - struct Scsi_Host *host = sc_cmd->device->host; 671 671 struct fc_frame_header *fc_hdr; 672 672 struct fcp_cmnd *fcp_cmnd; 673 673 int task_idx, index; ··· 675 677 u32 sid, did; 676 678 unsigned long start = jiffies; 677 679 678 - lport = shost_priv(host); 679 - rport = starget_to_rport(scsi_target(sc_cmd->device)); 680 680 port = lport_priv(lport); 681 681 interface = port->priv; 682 682 ··· 685 689 } 686 690 rp = rport->dd_data; 687 691 688 - rc = fc_block_scsi_eh(sc_cmd); 692 + rc = fc_block_rport(rport); 689 693 if (rc) 690 694 return rc; 691 695 ··· 714 718 goto retry_tmf; 715 719 } 716 720 /* Initialize rest of io_req fields */ 717 - io_req->sc_cmd = sc_cmd; 721 + io_req->sc_cmd = NULL; 718 722 io_req->port = port; 719 723 io_req->tgt = tgt; 720 724 ··· 732 736 /* Set TM flags */ 733 737 io_req->io_req_flags = 0; 734 738 tm_req->tm_flags = tm_flags; 739 + tm_req->tm_lun = tm_lun; 735 740 736 741 /* Fill FCP_CMND */ 737 742 bnx2fc_build_fcp_cmnd(io_req, (struct fcp_cmnd *)tm_req->req_buf); 738 743 fcp_cmnd = (struct fcp_cmnd *)tm_req->req_buf; 739 - memset(fcp_cmnd->fc_cdb, 0, sc_cmd->cmd_len); 744 + int_to_scsilun(tm_lun, &fcp_cmnd->fc_lun); 745 + memset(fcp_cmnd->fc_cdb, 0, BNX2FC_MAX_CMD_LEN); 740 746 fcp_cmnd->fc_dl = 0; 741 747 742 748 /* Fill FC header */ ··· 760 762 interface->hba->task_ctx[task_idx]; 761 763 task = &(task_page[index]); 762 764 bnx2fc_init_mp_task(io_req, task); 763 - 764 - 
bnx2fc_priv(sc_cmd)->io_req = io_req; 765 765 766 766 /* Obtain free SQ entry */ 767 767 spin_lock_bh(&tgt->tgt_lock); ··· 1058 1062 */ 1059 1063 int bnx2fc_eh_target_reset(struct scsi_cmnd *sc_cmd) 1060 1064 { 1061 - return bnx2fc_initiate_tmf(sc_cmd, FCP_TMF_TGT_RESET); 1065 + struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device)); 1066 + struct fc_lport *lport = shost_priv(rport_to_shost(rport)); 1067 + 1068 + return bnx2fc_initiate_tmf(lport, rport, 0, FCP_TMF_TGT_RESET); 1062 1069 } 1063 1070 1064 1071 /** ··· 1074 1075 */ 1075 1076 int bnx2fc_eh_device_reset(struct scsi_cmnd *sc_cmd) 1076 1077 { 1077 - return bnx2fc_initiate_tmf(sc_cmd, FCP_TMF_LUN_RESET); 1078 + struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device)); 1079 + struct fc_lport *lport = shost_priv(rport_to_shost(rport)); 1080 + 1081 + return bnx2fc_initiate_tmf(lport, rport, sc_cmd->device->lun, 1082 + FCP_TMF_LUN_RESET); 1078 1083 } 1079 1084 1080 1085 static int bnx2fc_abts_cleanup(struct bnx2fc_cmd *io_req) ··· 1453 1450 1454 1451 static void bnx2fc_lun_reset_cmpl(struct bnx2fc_cmd *io_req) 1455 1452 { 1456 - struct scsi_cmnd *sc_cmd = io_req->sc_cmd; 1457 1453 struct bnx2fc_rport *tgt = io_req->tgt; 1458 1454 struct bnx2fc_cmd *cmd, *tmp; 1459 - u64 tm_lun = sc_cmd->device->lun; 1455 + struct bnx2fc_mp_req *tm_req = &io_req->mp_req; 1460 1456 u64 lun; 1461 1457 int rc = 0; 1462 1458 ··· 1467 1465 */ 1468 1466 list_for_each_entry_safe(cmd, tmp, &tgt->active_cmd_queue, link) { 1469 1467 BNX2FC_TGT_DBG(tgt, "LUN RST cmpl: scan for pending IOs\n"); 1468 + if (!cmd->sc_cmd) 1469 + continue; 1470 1470 lun = cmd->sc_cmd->device->lun; 1471 - if (lun == tm_lun) { 1471 + if (lun == tm_req->tm_lun) { 1472 1472 /* Initiate ABTS on this cmd */ 1473 1473 if (!test_and_set_bit(BNX2FC_FLAG_ISSUE_ABTS, 1474 1474 &cmd->req_flags)) { ··· 1574 1570 printk(KERN_ERR PFX "tmf's fc_hdr r_ctl = 0x%x\n", 1575 1571 fc_hdr->fh_r_ctl); 1576 1572 } 1577 - if (!bnx2fc_priv(sc_cmd)->io_req) { 
1578 - printk(KERN_ERR PFX "tm_compl: io_req is NULL\n"); 1579 - return; 1580 - } 1581 - switch (io_req->fcp_status) { 1582 - case FC_GOOD: 1583 - if (io_req->cdb_status == 0) { 1584 - /* Good IO completion */ 1585 - sc_cmd->result = DID_OK << 16; 1586 - } else { 1587 - /* Transport status is good, SCSI status not good */ 1588 - sc_cmd->result = (DID_OK << 16) | io_req->cdb_status; 1573 + if (sc_cmd) { 1574 + if (!bnx2fc_priv(sc_cmd)->io_req) { 1575 + printk(KERN_ERR PFX "tm_compl: io_req is NULL\n"); 1576 + return; 1589 1577 } 1590 - if (io_req->fcp_resid) 1591 - scsi_set_resid(sc_cmd, io_req->fcp_resid); 1592 - break; 1578 + switch (io_req->fcp_status) { 1579 + case FC_GOOD: 1580 + if (io_req->cdb_status == 0) { 1581 + /* Good IO completion */ 1582 + sc_cmd->result = DID_OK << 16; 1583 + } else { 1584 + /* Transport status is good, SCSI status not good */ 1585 + sc_cmd->result = (DID_OK << 16) | io_req->cdb_status; 1586 + } 1587 + if (io_req->fcp_resid) 1588 + scsi_set_resid(sc_cmd, io_req->fcp_resid); 1589 + break; 1593 1590 1594 - default: 1595 - BNX2FC_IO_DBG(io_req, "process_tm_compl: fcp_status = %d\n", 1596 - io_req->fcp_status); 1597 - break; 1591 + default: 1592 + BNX2FC_IO_DBG(io_req, "process_tm_compl: fcp_status = %d\n", 1593 + io_req->fcp_status); 1594 + break; 1595 + } 1596 + 1597 + sc_cmd = io_req->sc_cmd; 1598 + io_req->sc_cmd = NULL; 1599 + 1600 + bnx2fc_priv(sc_cmd)->io_req = NULL; 1601 + scsi_done(sc_cmd); 1598 1602 } 1599 - 1600 - sc_cmd = io_req->sc_cmd; 1601 - io_req->sc_cmd = NULL; 1602 1603 1603 1604 /* check if the io_req exists in tgt's tmf_q */ 1604 1605 if (io_req->on_tmf_queue) { ··· 1615 1606 printk(KERN_ERR PFX "Command not on active_cmd_queue!\n"); 1616 1607 return; 1617 1608 } 1618 - 1619 - bnx2fc_priv(sc_cmd)->io_req = NULL; 1620 - scsi_done(sc_cmd); 1621 1609 1622 1610 kref_put(&io_req->refcount, bnx2fc_cmd_release); 1623 1611 if (io_req->wait_for_abts_comp) { ··· 1744 1738 void bnx2fc_build_fcp_cmnd(struct bnx2fc_cmd *io_req, 
1745 1739 struct fcp_cmnd *fcp_cmnd) 1746 1740 { 1747 - struct scsi_cmnd *sc_cmd = io_req->sc_cmd; 1748 - 1749 1741 memset(fcp_cmnd, 0, sizeof(struct fcp_cmnd)); 1750 1742 1751 - int_to_scsilun(sc_cmd->device->lun, &fcp_cmnd->fc_lun); 1752 - 1753 1743 fcp_cmnd->fc_dl = htonl(io_req->data_xfer_len); 1754 - memcpy(fcp_cmnd->fc_cdb, sc_cmd->cmnd, sc_cmd->cmd_len); 1755 - 1756 1744 fcp_cmnd->fc_cmdref = 0; 1757 1745 fcp_cmnd->fc_pri_ta = 0; 1758 1746 fcp_cmnd->fc_tm_flags = io_req->mp_req.tm_flags;
+1 -1
drivers/scsi/cxgbi/libcxgbi.c
··· 1294 1294 1295 1295 /* 1296 1296 * the ddp tag will be used for the itt in the outgoing pdu, 1297 - * the itt genrated by libiscsi is saved in the ppm and can be 1297 + * the itt generated by libiscsi is saved in the ppm and can be 1298 1298 * retrieved via the ddp tag 1299 1299 */ 1300 1300 err = cxgbi_ppm_ppods_reserve(ppm, ttinfo->nr_pages, 0, &ttinfo->idx,
+41 -40
drivers/scsi/device_handler/scsi_dh_hp_sw.c
··· 82 82 { 83 83 unsigned char cmd[6] = { TEST_UNIT_READY }; 84 84 struct scsi_sense_hdr sshdr; 85 - int ret = SCSI_DH_OK, res; 85 + int ret, res; 86 86 blk_opf_t opf = REQ_OP_DRV_IN | REQ_FAILFAST_DEV | 87 87 REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER; 88 88 const struct scsi_exec_args exec_args = { ··· 92 92 retry: 93 93 res = scsi_execute_cmd(sdev, cmd, opf, NULL, 0, HP_SW_TIMEOUT, 94 94 HP_SW_RETRIES, &exec_args); 95 - if (res) { 96 - if (scsi_sense_valid(&sshdr)) 97 - ret = tur_done(sdev, h, &sshdr); 98 - else { 99 - sdev_printk(KERN_WARNING, sdev, 100 - "%s: sending tur failed with %x\n", 101 - HP_SW_NAME, res); 102 - ret = SCSI_DH_IO; 103 - } 104 - } else { 95 + if (res > 0 && scsi_sense_valid(&sshdr)) { 96 + ret = tur_done(sdev, h, &sshdr); 97 + } else if (res == 0) { 105 98 h->path_state = HP_SW_PATH_ACTIVE; 106 99 ret = SCSI_DH_OK; 100 + } else { 101 + sdev_printk(KERN_WARNING, sdev, 102 + "%s: sending tur failed with %x\n", 103 + HP_SW_NAME, res); 104 + ret = SCSI_DH_IO; 107 105 } 106 + 108 107 if (ret == SCSI_DH_IMM_RETRY) 109 108 goto retry; 110 109 ··· 121 122 unsigned char cmd[6] = { START_STOP, 0, 0, 0, 1, 0 }; 122 123 struct scsi_sense_hdr sshdr; 123 124 struct scsi_device *sdev = h->sdev; 124 - int res, rc = SCSI_DH_OK; 125 + int res, rc; 125 126 int retry_cnt = HP_SW_RETRIES; 126 127 blk_opf_t opf = REQ_OP_DRV_IN | REQ_FAILFAST_DEV | 127 128 REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER; ··· 132 133 retry: 133 134 res = scsi_execute_cmd(sdev, cmd, opf, NULL, 0, HP_SW_TIMEOUT, 134 135 HP_SW_RETRIES, &exec_args); 135 - if (res) { 136 - if (!scsi_sense_valid(&sshdr)) { 137 - sdev_printk(KERN_WARNING, sdev, 138 - "%s: sending start_stop_unit failed, " 139 - "no sense available\n", HP_SW_NAME); 140 - return SCSI_DH_IO; 141 - } 142 - switch (sshdr.sense_key) { 143 - case NOT_READY: 144 - if (sshdr.asc == 0x04 && sshdr.ascq == 3) { 145 - /* 146 - * LUN not ready - manual intervention required 147 - * 148 - * Switch-over in progress, retry. 
149 - */ 150 - if (--retry_cnt) 151 - goto retry; 152 - rc = SCSI_DH_RETRY; 153 - break; 154 - } 155 - fallthrough; 156 - default: 157 - sdev_printk(KERN_WARNING, sdev, 158 - "%s: sending start_stop_unit failed, " 159 - "sense %x/%x/%x\n", HP_SW_NAME, 160 - sshdr.sense_key, sshdr.asc, sshdr.ascq); 161 - rc = SCSI_DH_IO; 162 - } 136 + if (!res) { 137 + return SCSI_DH_OK; 138 + } else if (res < 0 || !scsi_sense_valid(&sshdr)) { 139 + sdev_printk(KERN_WARNING, sdev, 140 + "%s: sending start_stop_unit failed, " 141 + "no sense available\n", HP_SW_NAME); 142 + return SCSI_DH_IO; 163 143 } 144 + 145 + switch (sshdr.sense_key) { 146 + case NOT_READY: 147 + if (sshdr.asc == 0x04 && sshdr.ascq == 3) { 148 + /* 149 + * LUN not ready - manual intervention required 150 + * 151 + * Switch-over in progress, retry. 152 + */ 153 + if (--retry_cnt) 154 + goto retry; 155 + rc = SCSI_DH_RETRY; 156 + break; 157 + } 158 + fallthrough; 159 + default: 160 + sdev_printk(KERN_WARNING, sdev, 161 + "%s: sending start_stop_unit failed, " 162 + "sense %x/%x/%x\n", HP_SW_NAME, 163 + sshdr.sense_key, sshdr.asc, sshdr.ascq); 164 + rc = SCSI_DH_IO; 165 + } 166 + 164 167 return rc; 165 168 } 166 169
+12 -9
drivers/scsi/device_handler/scsi_dh_rdac.c
··· 530 530 container_of(work, struct rdac_controller, ms_work); 531 531 struct scsi_device *sdev = ctlr->ms_sdev; 532 532 struct rdac_dh_data *h = sdev->handler_data; 533 - int err = SCSI_DH_OK, retry_cnt = RDAC_RETRY_COUNT; 533 + int rc, err, retry_cnt = RDAC_RETRY_COUNT; 534 534 struct rdac_queue_data *tmp, *qdata; 535 535 LIST_HEAD(list); 536 536 unsigned char cdb[MAX_COMMAND_SIZE]; ··· 558 558 (char *) h->ctlr->array_name, h->ctlr->index, 559 559 (retry_cnt == RDAC_RETRY_COUNT) ? "queueing" : "retrying"); 560 560 561 - if (scsi_execute_cmd(sdev, cdb, opf, &h->ctlr->mode_select, data_size, 562 - RDAC_TIMEOUT * HZ, RDAC_RETRIES, &exec_args)) { 561 + rc = scsi_execute_cmd(sdev, cdb, opf, &h->ctlr->mode_select, data_size, 562 + RDAC_TIMEOUT * HZ, RDAC_RETRIES, &exec_args); 563 + if (!rc) { 564 + h->state = RDAC_STATE_ACTIVE; 565 + RDAC_LOG(RDAC_LOG_FAILOVER, sdev, "array %s, ctlr %d, " 566 + "MODE_SELECT completed", 567 + (char *) h->ctlr->array_name, h->ctlr->index); 568 + err = SCSI_DH_OK; 569 + } else if (rc < 0) { 570 + err = SCSI_DH_IO; 571 + } else { 563 572 err = mode_select_handle_sense(sdev, &sshdr); 564 573 if (err == SCSI_DH_RETRY && retry_cnt--) 565 574 goto retry; 566 575 if (err == SCSI_DH_IMM_RETRY) 567 576 goto retry; 568 - } 569 - if (err == SCSI_DH_OK) { 570 - h->state = RDAC_STATE_ACTIVE; 571 - RDAC_LOG(RDAC_LOG_FAILOVER, sdev, "array %s, ctlr %d, " 572 - "MODE_SELECT completed", 573 - (char *) h->ctlr->array_name, h->ctlr->index); 574 577 } 575 578 576 579 list_for_each_entry_safe(qdata, tmp, &list, entry) {
+5
drivers/scsi/elx/efct/efct_lio.c
··· 1611 1611 .sess_get_initiator_sid = NULL, 1612 1612 .tfc_tpg_base_attrs = efct_lio_tpg_attrs, 1613 1613 .tfc_tpg_attrib_attrs = efct_lio_tpg_attrib_attrs, 1614 + .default_submit_type = TARGET_DIRECT_SUBMIT, 1615 + .direct_submit_supp = 1, 1614 1616 }; 1615 1617 1616 1618 static const struct target_core_fabric_ops efct_lio_npiv_ops = { ··· 1648 1646 .sess_get_initiator_sid = NULL, 1649 1647 .tfc_tpg_base_attrs = efct_lio_npiv_tpg_attrs, 1650 1648 .tfc_tpg_attrib_attrs = efct_lio_npiv_tpg_attrib_attrs, 1649 + 1650 + .default_submit_type = TARGET_DIRECT_SUBMIT, 1651 + .direct_submit_supp = 1, 1651 1652 }; 1652 1653 1653 1654 int efct_scsi_tgt_driver_init(void)
+6 -10
drivers/scsi/esas2r/esas2r_ioctl.c
··· 41 41 * USA. 42 42 */ 43 43 44 + #include <linux/bitfield.h> 45 + 44 46 #include "esas2r.h" 45 47 46 48 /* ··· 794 792 pcie_capability_read_dword(a->pcid, PCI_EXP_LNKCAP, 795 793 &caps); 796 794 797 - gai->pci.link_speed_curr = 798 - (u8)(stat & PCI_EXP_LNKSTA_CLS); 799 - gai->pci.link_speed_max = 800 - (u8)(caps & PCI_EXP_LNKCAP_SLS); 801 - gai->pci.link_width_curr = 802 - (u8)((stat & PCI_EXP_LNKSTA_NLW) 803 - >> PCI_EXP_LNKSTA_NLW_SHIFT); 804 - gai->pci.link_width_max = 805 - (u8)((caps & PCI_EXP_LNKCAP_MLW) 806 - >> 4); 795 + gai->pci.link_speed_curr = FIELD_GET(PCI_EXP_LNKSTA_CLS, stat); 796 + gai->pci.link_speed_max = FIELD_GET(PCI_EXP_LNKCAP_SLS, caps); 797 + gai->pci.link_width_curr = FIELD_GET(PCI_EXP_LNKSTA_NLW, stat); 798 + gai->pci.link_width_max = FIELD_GET(PCI_EXP_LNKCAP_MLW, caps); 807 799 } 808 800 809 801 gai->pci.msi_vector_cnt = 1;
+6 -5
drivers/scsi/fnic/fnic_fcs.c
··· 145 145 spin_unlock_irqrestore(&fnic->fnic_lock, flags); 146 146 if (fnic->config.flags & VFCF_FIP_CAPABLE) { 147 147 /* start FCoE VLAN discovery */ 148 - fnic_fc_trace_set_data( 149 - fnic->lport->host->host_no, 150 - FNIC_FC_LE, "Link Status: DOWN_UP_VLAN", 151 - strlen("Link Status: DOWN_UP_VLAN")); 148 + fnic_fc_trace_set_data(fnic->lport->host->host_no, 149 + FNIC_FC_LE, "Link Status: DOWN_UP_VLAN", 150 + strlen("Link Status: DOWN_UP_VLAN")); 152 151 fnic_fcoe_send_vlan_req(fnic); 152 + 153 153 return; 154 154 } 155 + 155 156 FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, "link up\n"); 156 157 fnic_fc_trace_set_data(fnic->lport->host->host_no, FNIC_FC_LE, 157 - "Link Status: DOWN_UP", strlen("Link Status: DOWN_UP")); 158 + "Link Status: DOWN_UP", strlen("Link Status: DOWN_UP")); 158 159 fcoe_ctlr_link_up(&fnic->ctlr); 159 160 } else { 160 161 /* UP -> DOWN */
+1 -2
drivers/scsi/hisi_sas/hisi_sas.h
··· 343 343 u8 reg_index, u8 reg_count, u8 *write_data); 344 344 void (*wait_cmds_complete_timeout)(struct hisi_hba *hisi_hba, 345 345 int delay_ms, int timeout_ms); 346 - void (*debugfs_snapshot_regs)(struct hisi_hba *hisi_hba); 346 + int (*debugfs_snapshot_regs)(struct hisi_hba *hisi_hba); 347 347 int complete_hdr_size; 348 348 const struct scsi_host_template *sht; 349 349 }; ··· 451 451 const struct hisi_sas_hw *hw; /* Low level hw interface */ 452 452 unsigned long sata_dev_bitmap[BITS_TO_LONGS(HISI_SAS_MAX_DEVICES)]; 453 453 struct work_struct rst_work; 454 - struct work_struct debugfs_work; 455 454 u32 phy_state; 456 455 u32 intr_coal_ticks; /* Time of interrupt coalesce in us */ 457 456 u32 intr_coal_count; /* Interrupt count to coalesce */
+5 -2
drivers/scsi/hisi_sas/hisi_sas_main.c
··· 1958 1958 struct hisi_hba *hisi_hba = dev_to_hisi_hba(device); 1959 1959 struct hisi_sas_internal_abort_data *timeout = data; 1960 1960 1961 - if (hisi_sas_debugfs_enable && hisi_hba->debugfs_itct[0].itct) 1962 - queue_work(hisi_hba->wq, &hisi_hba->debugfs_work); 1961 + if (hisi_sas_debugfs_enable && hisi_hba->debugfs_itct[0].itct) { 1962 + down(&hisi_hba->sem); 1963 + hisi_hba->hw->debugfs_snapshot_regs(hisi_hba); 1964 + up(&hisi_hba->sem); 1965 + } 1963 1966 1964 1967 if (task->task_state_flags & SAS_TASK_STATE_DONE) { 1965 1968 pr_err("Internal abort: timeout %016llx\n",
+53 -63
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
··· 558 558 module_param(experimental_iopoll_q_cnt, int, 0444); 559 559 MODULE_PARM_DESC(experimental_iopoll_q_cnt, "number of queues to be used as poll mode, def=0"); 560 560 561 - static void debugfs_work_handler_v3_hw(struct work_struct *work); 562 - static void debugfs_snapshot_regs_v3_hw(struct hisi_hba *hisi_hba); 561 + static int debugfs_snapshot_regs_v3_hw(struct hisi_hba *hisi_hba); 563 562 564 563 static u32 hisi_sas_read32(struct hisi_hba *hisi_hba, u32 off) 565 564 { ··· 3387 3388 hisi_hba = shost_priv(shost); 3388 3389 3389 3390 INIT_WORK(&hisi_hba->rst_work, hisi_sas_rst_work_handler); 3390 - INIT_WORK(&hisi_hba->debugfs_work, debugfs_work_handler_v3_hw); 3391 3391 hisi_hba->hw = &hisi_sas_v3_hw; 3392 3392 hisi_hba->pci_dev = pdev; 3393 3393 hisi_hba->dev = dev; ··· 3858 3860 &debugfs_ras_v3_hw_fops); 3859 3861 } 3860 3862 3861 - static void debugfs_snapshot_regs_v3_hw(struct hisi_hba *hisi_hba) 3862 - { 3863 - int debugfs_dump_index = hisi_hba->debugfs_dump_index; 3864 - struct device *dev = hisi_hba->dev; 3865 - u64 timestamp = local_clock(); 3866 - 3867 - if (debugfs_dump_index >= hisi_sas_debugfs_dump_count) { 3868 - dev_warn(dev, "dump count exceeded!\n"); 3869 - return; 3870 - } 3871 - 3872 - do_div(timestamp, NSEC_PER_MSEC); 3873 - hisi_hba->debugfs_timestamp[debugfs_dump_index] = timestamp; 3874 - 3875 - debugfs_snapshot_prepare_v3_hw(hisi_hba); 3876 - 3877 - debugfs_snapshot_global_reg_v3_hw(hisi_hba); 3878 - debugfs_snapshot_port_reg_v3_hw(hisi_hba); 3879 - debugfs_snapshot_axi_reg_v3_hw(hisi_hba); 3880 - debugfs_snapshot_ras_reg_v3_hw(hisi_hba); 3881 - debugfs_snapshot_cq_reg_v3_hw(hisi_hba); 3882 - debugfs_snapshot_dq_reg_v3_hw(hisi_hba); 3883 - debugfs_snapshot_itct_reg_v3_hw(hisi_hba); 3884 - debugfs_snapshot_iost_reg_v3_hw(hisi_hba); 3885 - 3886 - debugfs_create_files_v3_hw(hisi_hba); 3887 - 3888 - debugfs_snapshot_restore_v3_hw(hisi_hba); 3889 - hisi_hba->debugfs_dump_index++; 3890 - } 3891 - 3892 3863 static ssize_t 
debugfs_trigger_dump_v3_hw_write(struct file *file, 3893 3864 const char __user *user_buf, 3894 3865 size_t count, loff_t *ppos) 3895 3866 { 3896 3867 struct hisi_hba *hisi_hba = file->f_inode->i_private; 3897 3868 char buf[8]; 3898 - 3899 - if (hisi_hba->debugfs_dump_index >= hisi_sas_debugfs_dump_count) 3900 - return -EFAULT; 3901 3869 3902 3870 if (count > 8) 3903 3871 return -EFAULT; ··· 3874 3910 if (buf[0] != '1') 3875 3911 return -EFAULT; 3876 3912 3877 - queue_work(hisi_hba->wq, &hisi_hba->debugfs_work); 3913 + down(&hisi_hba->sem); 3914 + if (debugfs_snapshot_regs_v3_hw(hisi_hba)) { 3915 + up(&hisi_hba->sem); 3916 + return -EFAULT; 3917 + } 3918 + up(&hisi_hba->sem); 3878 3919 3879 3920 return count; 3880 3921 } ··· 4630 4661 } 4631 4662 } 4632 4663 4633 - static void debugfs_work_handler_v3_hw(struct work_struct *work) 4634 - { 4635 - struct hisi_hba *hisi_hba = 4636 - container_of(work, struct hisi_hba, debugfs_work); 4637 - 4638 - debugfs_snapshot_regs_v3_hw(hisi_hba); 4639 - } 4640 - 4641 4664 static void debugfs_release_v3_hw(struct hisi_hba *hisi_hba, int dump_index) 4642 4665 { 4643 4666 struct device *dev = hisi_hba->dev; ··· 4664 4703 { 4665 4704 const struct hisi_sas_hw *hw = hisi_hba->hw; 4666 4705 struct device *dev = hisi_hba->dev; 4667 - int p, c, d, r, i; 4706 + int p, c, d, r; 4668 4707 size_t sz; 4669 4708 4670 4709 for (r = 0; r < DEBUGFS_REGS_NUM; r++) { ··· 4744 4783 4745 4784 return 0; 4746 4785 fail: 4747 - for (i = 0; i < hisi_sas_debugfs_dump_count; i++) 4748 - debugfs_release_v3_hw(hisi_hba, i); 4786 + debugfs_release_v3_hw(hisi_hba, dump_index); 4749 4787 return -ENOMEM; 4788 + } 4789 + 4790 + static int debugfs_snapshot_regs_v3_hw(struct hisi_hba *hisi_hba) 4791 + { 4792 + int debugfs_dump_index = hisi_hba->debugfs_dump_index; 4793 + struct device *dev = hisi_hba->dev; 4794 + u64 timestamp = local_clock(); 4795 + 4796 + if (debugfs_dump_index >= hisi_sas_debugfs_dump_count) { 4797 + dev_warn(dev, "dump count exceeded!\n"); 4798 + 
return -EINVAL; 4799 + } 4800 + 4801 + if (debugfs_alloc_v3_hw(hisi_hba, debugfs_dump_index)) { 4802 + dev_warn(dev, "failed to alloc memory\n"); 4803 + return -ENOMEM; 4804 + } 4805 + 4806 + do_div(timestamp, NSEC_PER_MSEC); 4807 + hisi_hba->debugfs_timestamp[debugfs_dump_index] = timestamp; 4808 + 4809 + debugfs_snapshot_prepare_v3_hw(hisi_hba); 4810 + 4811 + debugfs_snapshot_global_reg_v3_hw(hisi_hba); 4812 + debugfs_snapshot_port_reg_v3_hw(hisi_hba); 4813 + debugfs_snapshot_axi_reg_v3_hw(hisi_hba); 4814 + debugfs_snapshot_ras_reg_v3_hw(hisi_hba); 4815 + debugfs_snapshot_cq_reg_v3_hw(hisi_hba); 4816 + debugfs_snapshot_dq_reg_v3_hw(hisi_hba); 4817 + debugfs_snapshot_itct_reg_v3_hw(hisi_hba); 4818 + debugfs_snapshot_iost_reg_v3_hw(hisi_hba); 4819 + 4820 + debugfs_create_files_v3_hw(hisi_hba); 4821 + 4822 + debugfs_snapshot_restore_v3_hw(hisi_hba); 4823 + hisi_hba->debugfs_dump_index++; 4824 + 4825 + return 0; 4750 4826 } 4751 4827 4752 4828 static void debugfs_phy_down_cnt_init_v3_hw(struct hisi_hba *hisi_hba) ··· 4863 4865 hisi_hba->debugfs_bist_linkrate = SAS_LINK_RATE_1_5_GBPS; 4864 4866 } 4865 4867 4868 + static void debugfs_exit_v3_hw(struct hisi_hba *hisi_hba) 4869 + { 4870 + debugfs_remove_recursive(hisi_hba->debugfs_dir); 4871 + hisi_hba->debugfs_dir = NULL; 4872 + } 4873 + 4866 4874 static void debugfs_init_v3_hw(struct hisi_hba *hisi_hba) 4867 4875 { 4868 4876 struct device *dev = hisi_hba->dev; 4869 - int i; 4870 4877 4871 4878 hisi_hba->debugfs_dir = debugfs_create_dir(dev_name(dev), 4872 4879 hisi_sas_debugfs_dir); ··· 4888 4885 4889 4886 debugfs_phy_down_cnt_init_v3_hw(hisi_hba); 4890 4887 debugfs_fifo_init_v3_hw(hisi_hba); 4891 - 4892 - for (i = 0; i < hisi_sas_debugfs_dump_count; i++) { 4893 - if (debugfs_alloc_v3_hw(hisi_hba, i)) { 4894 - debugfs_remove_recursive(hisi_hba->debugfs_dir); 4895 - dev_dbg(dev, "failed to init debugfs!\n"); 4896 - break; 4897 - } 4898 - } 4899 - } 4900 - 4901 - static void debugfs_exit_v3_hw(struct hisi_hba *hisi_hba) 
4902 - { 4903 - debugfs_remove_recursive(hisi_hba->debugfs_dir); 4904 4888 } 4905 4889 4906 4890 static int
+351 -142
drivers/scsi/ibmvscsi/ibmvfc.c
··· 22 22 #include <linux/bsg-lib.h> 23 23 #include <asm/firmware.h> 24 24 #include <asm/irq.h> 25 - #include <asm/rtas.h> 26 25 #include <asm/vio.h> 27 26 #include <scsi/scsi.h> 28 27 #include <scsi/scsi_cmnd.h> ··· 37 38 static u64 max_lun = IBMVFC_MAX_LUN; 38 39 static unsigned int max_targets = IBMVFC_MAX_TARGETS; 39 40 static unsigned int max_requests = IBMVFC_MAX_REQUESTS_DEFAULT; 41 + static u16 scsi_qdepth = IBMVFC_SCSI_QDEPTH; 40 42 static unsigned int disc_threads = IBMVFC_MAX_DISC_THREADS; 41 43 static unsigned int ibmvfc_debug = IBMVFC_DEBUG; 42 44 static unsigned int log_level = IBMVFC_DEFAULT_LOG_LEVEL; ··· 83 83 module_param_named(max_requests, max_requests, uint, S_IRUGO); 84 84 MODULE_PARM_DESC(max_requests, "Maximum requests for this adapter. " 85 85 "[Default=" __stringify(IBMVFC_MAX_REQUESTS_DEFAULT) "]"); 86 + module_param_named(scsi_qdepth, scsi_qdepth, ushort, S_IRUGO); 87 + MODULE_PARM_DESC(scsi_qdepth, "Maximum scsi command depth per adapter queue. " 88 + "[Default=" __stringify(IBMVFC_SCSI_QDEPTH) "]"); 86 89 module_param_named(max_lun, max_lun, ullong, S_IRUGO); 87 90 MODULE_PARM_DESC(max_lun, "Maximum allowed LUN. 
" 88 91 "[Default=" __stringify(IBMVFC_MAX_LUN) "]"); ··· 163 160 static void ibmvfc_tgt_implicit_logout_and_del(struct ibmvfc_target *); 164 161 static void ibmvfc_tgt_move_login(struct ibmvfc_target *); 165 162 166 - static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *); 167 - static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *); 163 + static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *, struct ibmvfc_channels *); 164 + static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *, struct ibmvfc_channels *); 168 165 169 166 static const char *unknown_error = "unknown error"; 170 167 ··· 779 776 * ibmvfc_init_event_pool - Allocates and initializes the event pool for a host 780 777 * @vhost: ibmvfc host who owns the event pool 781 778 * @queue: ibmvfc queue struct 782 - * @size: pool size 783 779 * 784 780 * Returns zero on success. 785 781 **/ 786 782 static int ibmvfc_init_event_pool(struct ibmvfc_host *vhost, 787 - struct ibmvfc_queue *queue, 788 - unsigned int size) 783 + struct ibmvfc_queue *queue) 789 784 { 790 785 int i; 791 786 struct ibmvfc_event_pool *pool = &queue->evt_pool; 792 787 793 788 ENTER; 794 - if (!size) 789 + if (!queue->total_depth) 795 790 return 0; 796 791 797 - pool->size = size; 798 - pool->events = kcalloc(size, sizeof(*pool->events), GFP_KERNEL); 792 + pool->size = queue->total_depth; 793 + pool->events = kcalloc(pool->size, sizeof(*pool->events), GFP_KERNEL); 799 794 if (!pool->events) 800 795 return -ENOMEM; 801 796 802 797 pool->iu_storage = dma_alloc_coherent(vhost->dev, 803 - size * sizeof(*pool->iu_storage), 798 + pool->size * sizeof(*pool->iu_storage), 804 799 &pool->iu_token, 0); 805 800 806 801 if (!pool->iu_storage) { ··· 808 807 809 808 INIT_LIST_HEAD(&queue->sent); 810 809 INIT_LIST_HEAD(&queue->free); 810 + queue->evt_free = queue->evt_depth; 811 + queue->reserved_free = queue->reserved_depth; 811 812 spin_lock_init(&queue->l_lock); 812 813 813 - for (i = 0; i < size; ++i) { 814 + for (i = 0; i < pool->size; ++i) { 814 815 
struct ibmvfc_event *evt = &pool->events[i]; 815 816 816 817 /* ··· 925 922 struct vio_dev *vdev = to_vio_dev(vhost->dev); 926 923 unsigned long flags; 927 924 928 - ibmvfc_dereg_sub_crqs(vhost); 925 + ibmvfc_dereg_sub_crqs(vhost, &vhost->scsi_scrqs); 929 926 930 927 /* Re-enable the CRQ */ 931 928 do { ··· 944 941 spin_unlock(vhost->crq.q_lock); 945 942 spin_unlock_irqrestore(vhost->host->host_lock, flags); 946 943 947 - ibmvfc_reg_sub_crqs(vhost); 944 + ibmvfc_reg_sub_crqs(vhost, &vhost->scsi_scrqs); 948 945 949 946 return rc; 950 947 } ··· 963 960 struct vio_dev *vdev = to_vio_dev(vhost->dev); 964 961 struct ibmvfc_queue *crq = &vhost->crq; 965 962 966 - ibmvfc_dereg_sub_crqs(vhost); 963 + ibmvfc_dereg_sub_crqs(vhost, &vhost->scsi_scrqs); 967 964 968 965 /* Close the CRQ */ 969 966 do { ··· 996 993 spin_unlock(vhost->crq.q_lock); 997 994 spin_unlock_irqrestore(vhost->host->host_lock, flags); 998 995 999 - ibmvfc_reg_sub_crqs(vhost); 996 + ibmvfc_reg_sub_crqs(vhost, &vhost->scsi_scrqs); 1000 997 1001 998 return rc; 1002 999 } ··· 1036 1033 1037 1034 spin_lock_irqsave(&evt->queue->l_lock, flags); 1038 1035 list_add_tail(&evt->queue_list, &evt->queue->free); 1036 + if (evt->reserved) { 1037 + evt->reserved = 0; 1038 + evt->queue->reserved_free++; 1039 + } else { 1040 + evt->queue->evt_free++; 1041 + } 1039 1042 if (evt->eh_comp) 1040 1043 complete(evt->eh_comp); 1041 1044 spin_unlock_irqrestore(&evt->queue->l_lock, flags); ··· 1484 1475 struct ibmvfc_queue *async_crq = &vhost->async_crq; 1485 1476 struct device_node *of_node = vhost->dev->of_node; 1486 1477 const char *location; 1478 + u16 max_cmds; 1479 + 1480 + max_cmds = scsi_qdepth + IBMVFC_NUM_INTERNAL_REQ; 1481 + if (mq_enabled) 1482 + max_cmds += (scsi_qdepth + IBMVFC_NUM_INTERNAL_SUBQ_REQ) * 1483 + vhost->scsi_scrqs.desired_queues; 1487 1484 1488 1485 memset(login_info, 0, sizeof(*login_info)); 1489 1486 ··· 1504 1489 if (vhost->client_migrated) 1505 1490 login_info->flags |= 
cpu_to_be16(IBMVFC_CLIENT_MIGRATED); 1506 1491 1507 - login_info->max_cmds = cpu_to_be32(max_requests + IBMVFC_NUM_INTERNAL_REQ); 1492 + login_info->max_cmds = cpu_to_be32(max_cmds); 1508 1493 login_info->capabilities = cpu_to_be64(IBMVFC_CAN_MIGRATE | IBMVFC_CAN_SEND_VF_WWPN); 1509 1494 1510 1495 if (vhost->mq_enabled || vhost->using_channels) ··· 1523 1508 } 1524 1509 1525 1510 /** 1526 - * ibmvfc_get_event - Gets the next free event in pool 1511 + * __ibmvfc_get_event - Gets the next free event in pool 1527 1512 * @queue: ibmvfc queue struct 1513 + * @reserved: event is for a reserved management command 1528 1514 * 1529 1515 * Returns a free event from the pool. 1530 1516 **/ 1531 - static struct ibmvfc_event *ibmvfc_get_event(struct ibmvfc_queue *queue) 1517 + static struct ibmvfc_event *__ibmvfc_get_event(struct ibmvfc_queue *queue, int reserved) 1532 1518 { 1533 - struct ibmvfc_event *evt; 1519 + struct ibmvfc_event *evt = NULL; 1534 1520 unsigned long flags; 1535 1521 1536 1522 spin_lock_irqsave(&queue->l_lock, flags); 1537 - BUG_ON(list_empty(&queue->free)); 1538 - evt = list_entry(queue->free.next, struct ibmvfc_event, queue_list); 1523 + if (reserved && queue->reserved_free) { 1524 + evt = list_entry(queue->free.next, struct ibmvfc_event, queue_list); 1525 + evt->reserved = 1; 1526 + queue->reserved_free--; 1527 + } else if (queue->evt_free) { 1528 + evt = list_entry(queue->free.next, struct ibmvfc_event, queue_list); 1529 + queue->evt_free--; 1530 + } else { 1531 + goto out; 1532 + } 1533 + 1539 1534 atomic_set(&evt->free, 0); 1540 1535 list_del(&evt->queue_list); 1536 + out: 1541 1537 spin_unlock_irqrestore(&queue->l_lock, flags); 1542 1538 return evt; 1543 1539 } 1540 + 1541 + #define ibmvfc_get_event(queue) __ibmvfc_get_event(queue, 0) 1542 + #define ibmvfc_get_reserved_event(queue) __ibmvfc_get_event(queue, 1) 1544 1543 1545 1544 /** 1546 1545 * ibmvfc_locked_done - Calls evt completion with host_lock held ··· 1977 1948 if (vhost->using_channels) { 
1978 1949 scsi_channel = hwq % vhost->scsi_scrqs.active_queues; 1979 1950 evt = ibmvfc_get_event(&vhost->scsi_scrqs.scrqs[scsi_channel]); 1951 + if (!evt) 1952 + return SCSI_MLQUEUE_HOST_BUSY; 1953 + 1980 1954 evt->hwq = hwq % vhost->scsi_scrqs.active_queues; 1981 - } else 1955 + } else { 1982 1956 evt = ibmvfc_get_event(&vhost->crq); 1957 + if (!evt) 1958 + return SCSI_MLQUEUE_HOST_BUSY; 1959 + } 1983 1960 1984 1961 ibmvfc_init_event(evt, ibmvfc_scsi_done, IBMVFC_CMD_FORMAT); 1985 1962 evt->cmnd = cmnd; ··· 2072 2037 } 2073 2038 2074 2039 vhost->aborting_passthru = 1; 2075 - evt = ibmvfc_get_event(&vhost->crq); 2040 + evt = ibmvfc_get_reserved_event(&vhost->crq); 2041 + if (!evt) { 2042 + spin_unlock_irqrestore(vhost->host->host_lock, flags); 2043 + return -ENOMEM; 2044 + } 2045 + 2076 2046 ibmvfc_init_event(evt, ibmvfc_bsg_timeout_done, IBMVFC_MAD_FORMAT); 2077 2047 2078 2048 tmf = &evt->iu.tmf; ··· 2135 2095 if (unlikely((rc = ibmvfc_host_chkready(vhost)))) 2136 2096 goto unlock_out; 2137 2097 2138 - evt = ibmvfc_get_event(&vhost->crq); 2098 + evt = ibmvfc_get_reserved_event(&vhost->crq); 2099 + if (!evt) { 2100 + rc = -ENOMEM; 2101 + goto unlock_out; 2102 + } 2139 2103 ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_MAD_FORMAT); 2140 2104 plogi = &evt->iu.plogi; 2141 2105 memset(plogi, 0, sizeof(*plogi)); ··· 2257 2213 goto out; 2258 2214 } 2259 2215 2260 - evt = ibmvfc_get_event(&vhost->crq); 2216 + evt = ibmvfc_get_reserved_event(&vhost->crq); 2217 + if (!evt) { 2218 + spin_unlock_irqrestore(vhost->host->host_lock, flags); 2219 + rc = -ENOMEM; 2220 + goto out; 2221 + } 2261 2222 ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_MAD_FORMAT); 2262 2223 mad = &evt->iu.passthru; 2263 2224 ··· 2350 2301 evt = ibmvfc_get_event(&vhost->scsi_scrqs.scrqs[0]); 2351 2302 else 2352 2303 evt = ibmvfc_get_event(&vhost->crq); 2304 + 2305 + if (!evt) { 2306 + spin_unlock_irqrestore(vhost->host->host_lock, flags); 2307 + return -ENOMEM; 2308 + } 2353 2309 2354 2310 
ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_CMD_FORMAT); 2355 2311 tmf = ibmvfc_init_vfc_cmd(evt, sdev); ··· 2558 2504 struct ibmvfc_event *evt; 2559 2505 struct ibmvfc_tmf *tmf; 2560 2506 2561 - evt = ibmvfc_get_event(queue); 2507 + evt = ibmvfc_get_reserved_event(queue); 2508 + if (!evt) 2509 + return NULL; 2562 2510 ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_MAD_FORMAT); 2563 2511 2564 2512 tmf = &evt->iu.tmf; ··· 2617 2561 2618 2562 if (found_evt && vhost->logged_in) { 2619 2563 evt = ibmvfc_init_tmf(&queues[i], sdev, type); 2564 + if (!evt) { 2565 + spin_unlock(queues[i].q_lock); 2566 + spin_unlock_irqrestore(vhost->host->host_lock, flags); 2567 + return -ENOMEM; 2568 + } 2620 2569 evt->sync_iu = &queues[i].cancel_rsp; 2621 2570 ibmvfc_send_event(evt, vhost, default_timeout); 2622 2571 list_add_tail(&evt->cancel, &cancelq); ··· 2835 2774 2836 2775 if (vhost->state == IBMVFC_ACTIVE) { 2837 2776 evt = ibmvfc_get_event(&vhost->crq); 2777 + if (!evt) { 2778 + spin_unlock_irqrestore(vhost->host->host_lock, flags); 2779 + return -ENOMEM; 2780 + } 2838 2781 ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_CMD_FORMAT); 2839 2782 tmf = ibmvfc_init_vfc_cmd(evt, sdev); 2840 2783 iu = ibmvfc_get_fcp_iu(vhost, tmf); ··· 2996 2931 } 2997 2932 2998 2933 /** 2999 - * ibmvfc_dev_cancel_all_reset - Device iterated cancel all function 3000 - * @sdev: scsi device struct 3001 - * @data: return code 3002 - * 3003 - **/ 3004 - static void ibmvfc_dev_cancel_all_reset(struct scsi_device *sdev, void *data) 3005 - { 3006 - unsigned long *rc = data; 3007 - *rc |= ibmvfc_cancel_all(sdev, IBMVFC_TMF_TGT_RESET); 3008 - } 3009 - 3010 - /** 3011 2934 * ibmvfc_eh_target_reset_handler - Reset the target 3012 2935 * @cmd: scsi command struct 3013 2936 * ··· 3004 2951 **/ 3005 2952 static int ibmvfc_eh_target_reset_handler(struct scsi_cmnd *cmd) 3006 2953 { 3007 - struct scsi_device *sdev = cmd->device; 3008 - struct ibmvfc_host *vhost = shost_priv(sdev->host); 3009 - 
struct scsi_target *starget = scsi_target(sdev); 2954 + struct scsi_target *starget = scsi_target(cmd->device); 2955 + struct fc_rport *rport = starget_to_rport(starget); 2956 + struct Scsi_Host *shost = rport_to_shost(rport); 2957 + struct ibmvfc_host *vhost = shost_priv(shost); 3010 2958 int block_rc; 3011 2959 int reset_rc = 0; 3012 2960 int rc = FAILED; 3013 2961 unsigned long cancel_rc = 0; 2962 + bool tgt_reset = false; 3014 2963 3015 2964 ENTER; 3016 - block_rc = fc_block_scsi_eh(cmd); 2965 + block_rc = fc_block_rport(rport); 3017 2966 ibmvfc_wait_while_resetting(vhost); 3018 2967 if (block_rc != FAST_IO_FAIL) { 3019 - starget_for_each_device(starget, &cancel_rc, ibmvfc_dev_cancel_all_reset); 3020 - reset_rc = ibmvfc_reset_device(sdev, IBMVFC_TARGET_RESET, "target"); 2968 + struct scsi_device *sdev; 2969 + 2970 + shost_for_each_device(sdev, shost) { 2971 + if ((sdev->channel != starget->channel) || 2972 + (sdev->id != starget->id)) 2973 + continue; 2974 + 2975 + cancel_rc |= ibmvfc_cancel_all(sdev, 2976 + IBMVFC_TMF_TGT_RESET); 2977 + if (!tgt_reset) { 2978 + reset_rc = ibmvfc_reset_device(sdev, 2979 + IBMVFC_TARGET_RESET, "target"); 2980 + tgt_reset = true; 2981 + } 2982 + } 3021 2983 } else 3022 - starget_for_each_device(starget, &cancel_rc, ibmvfc_dev_cancel_all_noreset); 2984 + starget_for_each_device(starget, &cancel_rc, 2985 + ibmvfc_dev_cancel_all_noreset); 3023 2986 3024 2987 if (!cancel_rc && !reset_rc) 3025 2988 rc = ibmvfc_wait_for_ops(vhost, starget, ibmvfc_match_target); ··· 3582 3513 { 3583 3514 struct Scsi_Host *shost = class_to_shost(dev); 3584 3515 struct ibmvfc_host *vhost = shost_priv(shost); 3516 + struct ibmvfc_channels *scsi = &vhost->scsi_scrqs; 3585 3517 unsigned long flags = 0; 3586 3518 int len; 3587 3519 3588 3520 spin_lock_irqsave(shost->host_lock, flags); 3589 - len = snprintf(buf, PAGE_SIZE, "%d\n", vhost->client_scsi_channels); 3521 + len = snprintf(buf, PAGE_SIZE, "%d\n", scsi->desired_queues); 3590 3522 
spin_unlock_irqrestore(shost->host_lock, flags); 3591 3523 return len; 3592 3524 } ··· 3598 3528 { 3599 3529 struct Scsi_Host *shost = class_to_shost(dev); 3600 3530 struct ibmvfc_host *vhost = shost_priv(shost); 3531 + struct ibmvfc_channels *scsi = &vhost->scsi_scrqs; 3601 3532 unsigned long flags = 0; 3602 3533 unsigned int channels; 3603 3534 3604 3535 spin_lock_irqsave(shost->host_lock, flags); 3605 3536 channels = simple_strtoul(buf, NULL, 10); 3606 - vhost->client_scsi_channels = min(channels, nr_scsi_hw_queues); 3537 + scsi->desired_queues = min(channels, shost->nr_hw_queues); 3607 3538 ibmvfc_hard_reset_host(vhost); 3608 3539 spin_unlock_irqrestore(shost->host_lock, flags); 3609 3540 return strlen(buf); ··· 3704 3633 .max_sectors = IBMVFC_MAX_SECTORS, 3705 3634 .shost_groups = ibmvfc_host_groups, 3706 3635 .track_queue_depth = 1, 3707 - .host_tagset = 1, 3708 3636 }; 3709 3637 3710 3638 /** ··· 3939 3869 } 3940 3870 } 3941 3871 3942 - static irqreturn_t ibmvfc_interrupt_scsi(int irq, void *scrq_instance) 3872 + static irqreturn_t ibmvfc_interrupt_mq(int irq, void *scrq_instance) 3943 3873 { 3944 3874 struct ibmvfc_queue *scrq = (struct ibmvfc_queue *)scrq_instance; 3945 3875 ··· 4101 4031 return; 4102 4032 4103 4033 kref_get(&tgt->kref); 4104 - evt = ibmvfc_get_event(&vhost->crq); 4034 + evt = ibmvfc_get_reserved_event(&vhost->crq); 4035 + if (!evt) { 4036 + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); 4037 + kref_put(&tgt->kref, ibmvfc_release_tgt); 4038 + __ibmvfc_reset_host(vhost); 4039 + return; 4040 + } 4105 4041 vhost->discovery_threads++; 4106 4042 ibmvfc_init_event(evt, ibmvfc_tgt_prli_done, IBMVFC_MAD_FORMAT); 4107 4043 evt->tgt = tgt; ··· 4214 4138 4215 4139 kref_get(&tgt->kref); 4216 4140 tgt->logo_rcvd = 0; 4217 - evt = ibmvfc_get_event(&vhost->crq); 4141 + evt = ibmvfc_get_reserved_event(&vhost->crq); 4142 + if (!evt) { 4143 + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); 4144 + kref_put(&tgt->kref, ibmvfc_release_tgt); 4145 + 
__ibmvfc_reset_host(vhost); 4146 + return; 4147 + } 4218 4148 vhost->discovery_threads++; 4219 4149 ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_INIT_WAIT); 4220 4150 ibmvfc_init_event(evt, ibmvfc_tgt_plogi_done, IBMVFC_MAD_FORMAT); ··· 4296 4214 struct ibmvfc_event *evt; 4297 4215 4298 4216 kref_get(&tgt->kref); 4299 - evt = ibmvfc_get_event(&vhost->crq); 4217 + evt = ibmvfc_get_reserved_event(&vhost->crq); 4218 + if (!evt) 4219 + return NULL; 4300 4220 ibmvfc_init_event(evt, done, IBMVFC_MAD_FORMAT); 4301 4221 evt->tgt = tgt; 4302 4222 mad = &evt->iu.implicit_logout; ··· 4326 4242 vhost->discovery_threads++; 4327 4243 evt = __ibmvfc_tgt_get_implicit_logout_evt(tgt, 4328 4244 ibmvfc_tgt_implicit_logout_done); 4245 + if (!evt) { 4246 + vhost->discovery_threads--; 4247 + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); 4248 + kref_put(&tgt->kref, ibmvfc_release_tgt); 4249 + __ibmvfc_reset_host(vhost); 4250 + return; 4251 + } 4329 4252 4330 4253 ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_INIT_WAIT); 4331 4254 if (ibmvfc_send_event(evt, vhost, default_timeout)) { ··· 4471 4380 return; 4472 4381 4473 4382 kref_get(&tgt->kref); 4474 - evt = ibmvfc_get_event(&vhost->crq); 4383 + evt = ibmvfc_get_reserved_event(&vhost->crq); 4384 + if (!evt) { 4385 + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_DEL_RPORT); 4386 + kref_put(&tgt->kref, ibmvfc_release_tgt); 4387 + __ibmvfc_reset_host(vhost); 4388 + return; 4389 + } 4475 4390 vhost->discovery_threads++; 4476 4391 ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_INIT_WAIT); 4477 4392 ibmvfc_init_event(evt, ibmvfc_tgt_move_login_done, IBMVFC_MAD_FORMAT); ··· 4643 4546 4644 4547 vhost->abort_threads++; 4645 4548 kref_get(&tgt->kref); 4646 - evt = ibmvfc_get_event(&vhost->crq); 4549 + evt = ibmvfc_get_reserved_event(&vhost->crq); 4550 + if (!evt) { 4551 + tgt_err(tgt, "Failed to get cancel event for ADISC.\n"); 4552 + vhost->abort_threads--; 4553 + kref_put(&tgt->kref, ibmvfc_release_tgt); 4554 + __ibmvfc_reset_host(vhost); 4555 
+ spin_unlock_irqrestore(vhost->host->host_lock, flags); 4556 + return; 4557 + } 4647 4558 ibmvfc_init_event(evt, ibmvfc_tgt_adisc_cancel_done, IBMVFC_MAD_FORMAT); 4648 4559 4649 4560 evt->tgt = tgt; ··· 4701 4596 return; 4702 4597 4703 4598 kref_get(&tgt->kref); 4704 - evt = ibmvfc_get_event(&vhost->crq); 4599 + evt = ibmvfc_get_reserved_event(&vhost->crq); 4600 + if (!evt) { 4601 + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); 4602 + kref_put(&tgt->kref, ibmvfc_release_tgt); 4603 + __ibmvfc_reset_host(vhost); 4604 + return; 4605 + } 4705 4606 vhost->discovery_threads++; 4706 4607 ibmvfc_init_event(evt, ibmvfc_tgt_adisc_done, IBMVFC_MAD_FORMAT); 4707 4608 evt->tgt = tgt; ··· 4810 4699 return; 4811 4700 4812 4701 kref_get(&tgt->kref); 4813 - evt = ibmvfc_get_event(&vhost->crq); 4702 + evt = ibmvfc_get_reserved_event(&vhost->crq); 4703 + if (!evt) { 4704 + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); 4705 + kref_put(&tgt->kref, ibmvfc_release_tgt); 4706 + __ibmvfc_reset_host(vhost); 4707 + return; 4708 + } 4814 4709 vhost->discovery_threads++; 4815 4710 evt->tgt = tgt; 4816 4711 ibmvfc_init_event(evt, ibmvfc_tgt_query_target_done, IBMVFC_MAD_FORMAT); ··· 4939 4822 int i, rc; 4940 4823 4941 4824 for (i = 0, rc = 0; !rc && i < vhost->num_targets; i++) 4942 - rc = ibmvfc_alloc_target(vhost, &vhost->disc_buf[i]); 4825 + rc = ibmvfc_alloc_target(vhost, &vhost->scsi_scrqs.disc_buf[i]); 4943 4826 4944 4827 return rc; 4945 4828 } ··· 4988 4871 static void ibmvfc_discover_targets(struct ibmvfc_host *vhost) 4989 4872 { 4990 4873 struct ibmvfc_discover_targets *mad; 4991 - struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq); 4874 + struct ibmvfc_event *evt = ibmvfc_get_reserved_event(&vhost->crq); 4875 + int level = IBMVFC_DEFAULT_LOG_LEVEL; 4876 + 4877 + if (!evt) { 4878 + ibmvfc_log(vhost, level, "Discover Targets failed: no available events\n"); 4879 + ibmvfc_hard_reset_host(vhost); 4880 + return; 4881 + } 4992 4882 4993 4883 ibmvfc_init_event(evt, 
ibmvfc_discover_targets_done, IBMVFC_MAD_FORMAT); 4994 4884 mad = &evt->iu.discover_targets; ··· 5003 4879 mad->common.version = cpu_to_be32(1); 5004 4880 mad->common.opcode = cpu_to_be32(IBMVFC_DISC_TARGETS); 5005 4881 mad->common.length = cpu_to_be16(sizeof(*mad)); 5006 - mad->bufflen = cpu_to_be32(vhost->disc_buf_sz); 5007 - mad->buffer.va = cpu_to_be64(vhost->disc_buf_dma); 5008 - mad->buffer.len = cpu_to_be32(vhost->disc_buf_sz); 4882 + mad->bufflen = cpu_to_be32(vhost->scsi_scrqs.disc_buf_sz); 4883 + mad->buffer.va = cpu_to_be64(vhost->scsi_scrqs.disc_buf_dma); 4884 + mad->buffer.len = cpu_to_be32(vhost->scsi_scrqs.disc_buf_sz); 5009 4885 mad->flags = cpu_to_be32(IBMVFC_DISC_TGT_PORT_ID_WWPN_LIST); 5010 4886 ibmvfc_set_host_action(vhost, IBMVFC_HOST_ACTION_INIT_WAIT); 5011 4887 ··· 5019 4895 { 5020 4896 struct ibmvfc_host *vhost = evt->vhost; 5021 4897 struct ibmvfc_channel_setup *setup = vhost->channel_setup_buf; 5022 - struct ibmvfc_scsi_channels *scrqs = &vhost->scsi_scrqs; 4898 + struct ibmvfc_channels *scrqs = &vhost->scsi_scrqs; 5023 4899 u32 mad_status = be16_to_cpu(evt->xfer_iu->channel_setup.common.status); 5024 4900 int level = IBMVFC_DEFAULT_LOG_LEVEL; 5025 4901 int flags, active_queues, i; ··· 5069 4945 { 5070 4946 struct ibmvfc_channel_setup_mad *mad; 5071 4947 struct ibmvfc_channel_setup *setup_buf = vhost->channel_setup_buf; 5072 - struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq); 5073 - struct ibmvfc_scsi_channels *scrqs = &vhost->scsi_scrqs; 4948 + struct ibmvfc_event *evt = ibmvfc_get_reserved_event(&vhost->crq); 4949 + struct ibmvfc_channels *scrqs = &vhost->scsi_scrqs; 5074 4950 unsigned int num_channels = 5075 - min(vhost->client_scsi_channels, vhost->max_vios_scsi_channels); 4951 + min(scrqs->desired_queues, vhost->max_vios_scsi_channels); 4952 + int level = IBMVFC_DEFAULT_LOG_LEVEL; 5076 4953 int i; 4954 + 4955 + if (!evt) { 4956 + ibmvfc_log(vhost, level, "Channel Setup failed: no available events\n"); 4957 + 
ibmvfc_hard_reset_host(vhost); 4958 + return; 4959 + } 5077 4960 5078 4961 memset(setup_buf, 0, sizeof(*setup_buf)); 5079 4962 if (num_channels == 0) ··· 5142 5011 static void ibmvfc_channel_enquiry(struct ibmvfc_host *vhost) 5143 5012 { 5144 5013 struct ibmvfc_channel_enquiry *mad; 5145 - struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq); 5014 + struct ibmvfc_event *evt = ibmvfc_get_reserved_event(&vhost->crq); 5015 + int level = IBMVFC_DEFAULT_LOG_LEVEL; 5016 + 5017 + if (!evt) { 5018 + ibmvfc_log(vhost, level, "Channel Enquiry failed: no available events\n"); 5019 + ibmvfc_hard_reset_host(vhost); 5020 + return; 5021 + } 5146 5022 5147 5023 ibmvfc_init_event(evt, ibmvfc_channel_enquiry_done, IBMVFC_MAD_FORMAT); 5148 5024 mad = &evt->iu.channel_enquiry; ··· 5270 5132 static void ibmvfc_npiv_login(struct ibmvfc_host *vhost) 5271 5133 { 5272 5134 struct ibmvfc_npiv_login_mad *mad; 5273 - struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq); 5135 + struct ibmvfc_event *evt = ibmvfc_get_reserved_event(&vhost->crq); 5136 + 5137 + if (!evt) { 5138 + ibmvfc_dbg(vhost, "NPIV Login failed: no available events\n"); 5139 + ibmvfc_hard_reset_host(vhost); 5140 + return; 5141 + } 5274 5142 5275 5143 ibmvfc_gather_partition_info(vhost); 5276 5144 ibmvfc_set_login_info(vhost); ··· 5341 5197 struct ibmvfc_npiv_logout_mad *mad; 5342 5198 struct ibmvfc_event *evt; 5343 5199 5344 - evt = ibmvfc_get_event(&vhost->crq); 5200 + evt = ibmvfc_get_reserved_event(&vhost->crq); 5201 + if (!evt) { 5202 + ibmvfc_dbg(vhost, "NPIV Logout failed: no available events\n"); 5203 + ibmvfc_hard_reset_host(vhost); 5204 + return; 5205 + } 5206 + 5345 5207 ibmvfc_init_event(evt, ibmvfc_npiv_logout_done, IBMVFC_MAD_FORMAT); 5346 5208 5347 5209 mad = &evt->iu.npiv_logout; ··· 5795 5645 { 5796 5646 struct device *dev = vhost->dev; 5797 5647 size_t fmt_size; 5798 - unsigned int pool_size = 0; 5799 5648 5800 5649 ENTER; 5801 5650 spin_lock_init(&queue->_lock); ··· 5803 5654 switch (fmt) { 5804 5655 
case IBMVFC_CRQ_FMT: 5805 5656 fmt_size = sizeof(*queue->msgs.crq); 5806 - pool_size = max_requests + IBMVFC_NUM_INTERNAL_REQ; 5657 + queue->total_depth = scsi_qdepth + IBMVFC_NUM_INTERNAL_REQ; 5658 + queue->evt_depth = scsi_qdepth; 5659 + queue->reserved_depth = IBMVFC_NUM_INTERNAL_REQ; 5807 5660 break; 5808 5661 case IBMVFC_ASYNC_FMT: 5809 5662 fmt_size = sizeof(*queue->msgs.async); ··· 5813 5662 case IBMVFC_SUB_CRQ_FMT: 5814 5663 fmt_size = sizeof(*queue->msgs.scrq); 5815 5664 /* We need one extra event for Cancel Commands */ 5816 - pool_size = max_requests + 1; 5665 + queue->total_depth = scsi_qdepth + IBMVFC_NUM_INTERNAL_SUBQ_REQ; 5666 + queue->evt_depth = scsi_qdepth; 5667 + queue->reserved_depth = IBMVFC_NUM_INTERNAL_SUBQ_REQ; 5817 5668 break; 5818 5669 default: 5819 5670 dev_warn(dev, "Unknown command/response queue message format: %d\n", fmt); 5820 5671 return -EINVAL; 5821 5672 } 5822 5673 5823 - if (ibmvfc_init_event_pool(vhost, queue, pool_size)) { 5674 + queue->fmt = fmt; 5675 + if (ibmvfc_init_event_pool(vhost, queue)) { 5824 5676 dev_err(dev, "Couldn't initialize event pool.\n"); 5825 5677 return -ENOMEM; 5826 5678 } ··· 5842 5688 } 5843 5689 5844 5690 queue->cur = 0; 5845 - queue->fmt = fmt; 5846 5691 queue->size = PAGE_SIZE / fmt_size; 5847 5692 5848 5693 queue->vhost = vhost; ··· 5910 5757 return retrc; 5911 5758 } 5912 5759 5913 - static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost, 5914 - int index) 5760 + static int ibmvfc_register_channel(struct ibmvfc_host *vhost, 5761 + struct ibmvfc_channels *channels, 5762 + int index) 5915 5763 { 5916 5764 struct device *dev = vhost->dev; 5917 5765 struct vio_dev *vdev = to_vio_dev(dev); 5918 - struct ibmvfc_queue *scrq = &vhost->scsi_scrqs.scrqs[index]; 5766 + struct ibmvfc_queue *scrq = &channels->scrqs[index]; 5919 5767 int rc = -ENOMEM; 5920 5768 5921 5769 ENTER; ··· 5940 5786 goto irq_failed; 5941 5787 } 5942 5788 5943 - snprintf(scrq->name, sizeof(scrq->name), "ibmvfc-%x-scsi%d", 5944 
- vdev->unit_address, index); 5945 - rc = request_irq(scrq->irq, ibmvfc_interrupt_scsi, 0, scrq->name, scrq); 5789 + switch (channels->protocol) { 5790 + case IBMVFC_PROTO_SCSI: 5791 + snprintf(scrq->name, sizeof(scrq->name), "ibmvfc-%x-scsi%d", 5792 + vdev->unit_address, index); 5793 + scrq->handler = ibmvfc_interrupt_mq; 5794 + break; 5795 + case IBMVFC_PROTO_NVME: 5796 + snprintf(scrq->name, sizeof(scrq->name), "ibmvfc-%x-nvmf%d", 5797 + vdev->unit_address, index); 5798 + scrq->handler = ibmvfc_interrupt_mq; 5799 + break; 5800 + default: 5801 + dev_err(dev, "Unknown channel protocol (%d)\n", 5802 + channels->protocol); 5803 + goto irq_failed; 5804 + } 5805 + 5806 + rc = request_irq(scrq->irq, scrq->handler, 0, scrq->name, scrq); 5946 5807 5947 5808 if (rc) { 5948 5809 dev_err(dev, "Couldn't register sub-crq[%d] irq\n", index); ··· 5973 5804 irq_failed: 5974 5805 do { 5975 5806 rc = plpar_hcall_norets(H_FREE_SUB_CRQ, vdev->unit_address, scrq->cookie); 5976 - } while (rtas_busy_delay(rc)); 5807 + } while (rc == H_BUSY || H_IS_LONG_BUSY(rc)); 5977 5808 reg_failed: 5978 5809 LEAVE; 5979 5810 return rc; 5980 5811 } 5981 5812 5982 - static void ibmvfc_deregister_scsi_channel(struct ibmvfc_host *vhost, int index) 5813 + static void ibmvfc_deregister_channel(struct ibmvfc_host *vhost, 5814 + struct ibmvfc_channels *channels, 5815 + int index) 5983 5816 { 5984 5817 struct device *dev = vhost->dev; 5985 5818 struct vio_dev *vdev = to_vio_dev(dev); 5986 - struct ibmvfc_queue *scrq = &vhost->scsi_scrqs.scrqs[index]; 5819 + struct ibmvfc_queue *scrq = &channels->scrqs[index]; 5987 5820 long rc; 5988 5821 5989 5822 ENTER; ··· 6009 5838 LEAVE; 6010 5839 } 6011 5840 6012 - static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *vhost) 5841 + static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *vhost, 5842 + struct ibmvfc_channels *channels) 6013 5843 { 6014 5844 int i, j; 6015 5845 6016 5846 ENTER; 6017 - if (!vhost->mq_enabled || !vhost->scsi_scrqs.scrqs) 5847 + if 
(!vhost->mq_enabled || !channels->scrqs) 6018 5848 return; 6019 5849 6020 - for (i = 0; i < nr_scsi_hw_queues; i++) { 6021 - if (ibmvfc_register_scsi_channel(vhost, i)) { 5850 + for (i = 0; i < channels->max_queues; i++) { 5851 + if (ibmvfc_register_channel(vhost, channels, i)) { 6022 5852 for (j = i; j > 0; j--) 6023 - ibmvfc_deregister_scsi_channel(vhost, j - 1); 5853 + ibmvfc_deregister_channel(vhost, channels, j - 1); 6024 5854 vhost->do_enquiry = 0; 6025 5855 return; 6026 5856 } ··· 6030 5858 LEAVE; 6031 5859 } 6032 5860 6033 - static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *vhost) 5861 + static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *vhost, 5862 + struct ibmvfc_channels *channels) 6034 5863 { 6035 5864 int i; 6036 5865 6037 5866 ENTER; 6038 - if (!vhost->mq_enabled || !vhost->scsi_scrqs.scrqs) 5867 + if (!vhost->mq_enabled || !channels->scrqs) 6039 5868 return; 6040 5869 6041 - for (i = 0; i < nr_scsi_hw_queues; i++) 6042 - ibmvfc_deregister_scsi_channel(vhost, i); 5870 + for (i = 0; i < channels->max_queues; i++) 5871 + ibmvfc_deregister_channel(vhost, channels, i); 6043 5872 6044 5873 LEAVE; 5874 + } 5875 + 5876 + static int ibmvfc_alloc_channels(struct ibmvfc_host *vhost, 5877 + struct ibmvfc_channels *channels) 5878 + { 5879 + struct ibmvfc_queue *scrq; 5880 + int i, j; 5881 + int rc = 0; 5882 + 5883 + channels->scrqs = kcalloc(channels->max_queues, 5884 + sizeof(*channels->scrqs), 5885 + GFP_KERNEL); 5886 + if (!channels->scrqs) 5887 + return -ENOMEM; 5888 + 5889 + for (i = 0; i < channels->max_queues; i++) { 5890 + scrq = &channels->scrqs[i]; 5891 + rc = ibmvfc_alloc_queue(vhost, scrq, IBMVFC_SUB_CRQ_FMT); 5892 + if (rc) { 5893 + for (j = i; j > 0; j--) { 5894 + scrq = &channels->scrqs[j - 1]; 5895 + ibmvfc_free_queue(vhost, scrq); 5896 + } 5897 + kfree(channels->scrqs); 5898 + channels->scrqs = NULL; 5899 + channels->active_queues = 0; 5900 + return rc; 5901 + } 5902 + } 5903 + 5904 + return rc; 6045 5905 } 6046 5906 6047 5907 static void 
ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost) 6048 5908 { 6049 - struct ibmvfc_queue *scrq; 6050 - int i, j; 6051 - 6052 5909 ENTER; 6053 5910 if (!vhost->mq_enabled) 6054 5911 return; 6055 5912 6056 - vhost->scsi_scrqs.scrqs = kcalloc(nr_scsi_hw_queues, 6057 - sizeof(*vhost->scsi_scrqs.scrqs), 6058 - GFP_KERNEL); 6059 - if (!vhost->scsi_scrqs.scrqs) { 5913 + if (ibmvfc_alloc_channels(vhost, &vhost->scsi_scrqs)) { 6060 5914 vhost->do_enquiry = 0; 5915 + vhost->mq_enabled = 0; 6061 5916 return; 6062 5917 } 6063 5918 6064 - for (i = 0; i < nr_scsi_hw_queues; i++) { 6065 - scrq = &vhost->scsi_scrqs.scrqs[i]; 6066 - if (ibmvfc_alloc_queue(vhost, scrq, IBMVFC_SUB_CRQ_FMT)) { 6067 - for (j = i; j > 0; j--) { 6068 - scrq = &vhost->scsi_scrqs.scrqs[j - 1]; 6069 - ibmvfc_free_queue(vhost, scrq); 6070 - } 6071 - kfree(vhost->scsi_scrqs.scrqs); 6072 - vhost->scsi_scrqs.scrqs = NULL; 6073 - vhost->scsi_scrqs.active_queues = 0; 6074 - vhost->do_enquiry = 0; 6075 - vhost->mq_enabled = 0; 6076 - return; 6077 - } 6078 - } 6079 - 6080 - ibmvfc_reg_sub_crqs(vhost); 5919 + ibmvfc_reg_sub_crqs(vhost, &vhost->scsi_scrqs); 6081 5920 6082 5921 LEAVE; 6083 5922 } 6084 5923 6085 - static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost) 5924 + static void ibmvfc_release_channels(struct ibmvfc_host *vhost, 5925 + struct ibmvfc_channels *channels) 6086 5926 { 6087 5927 struct ibmvfc_queue *scrq; 6088 5928 int i; 6089 5929 5930 + if (channels->scrqs) { 5931 + for (i = 0; i < channels->max_queues; i++) { 5932 + scrq = &channels->scrqs[i]; 5933 + ibmvfc_free_queue(vhost, scrq); 5934 + } 5935 + 5936 + kfree(channels->scrqs); 5937 + channels->scrqs = NULL; 5938 + channels->active_queues = 0; 5939 + } 5940 + } 5941 + 5942 + static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost) 5943 + { 6090 5944 ENTER; 6091 5945 if (!vhost->scsi_scrqs.scrqs) 6092 5946 return; 6093 5947 6094 - ibmvfc_dereg_sub_crqs(vhost); 5948 + ibmvfc_dereg_sub_crqs(vhost, &vhost->scsi_scrqs); 6095 5949 6096 - 
for (i = 0; i < nr_scsi_hw_queues; i++) { 6097 - scrq = &vhost->scsi_scrqs.scrqs[i]; 6098 - ibmvfc_free_queue(vhost, scrq); 6099 - } 6100 - 6101 - kfree(vhost->scsi_scrqs.scrqs); 6102 - vhost->scsi_scrqs.scrqs = NULL; 6103 - vhost->scsi_scrqs.active_queues = 0; 5950 + ibmvfc_release_channels(vhost, &vhost->scsi_scrqs); 6104 5951 LEAVE; 5952 + } 5953 + 5954 + static void ibmvfc_free_disc_buf(struct device *dev, struct ibmvfc_channels *channels) 5955 + { 5956 + dma_free_coherent(dev, channels->disc_buf_sz, channels->disc_buf, 5957 + channels->disc_buf_dma); 6105 5958 } 6106 5959 6107 5960 /** ··· 6143 5946 ENTER; 6144 5947 mempool_destroy(vhost->tgt_pool); 6145 5948 kfree(vhost->trace); 6146 - dma_free_coherent(vhost->dev, vhost->disc_buf_sz, vhost->disc_buf, 6147 - vhost->disc_buf_dma); 5949 + ibmvfc_free_disc_buf(vhost->dev, &vhost->scsi_scrqs); 6148 5950 dma_free_coherent(vhost->dev, sizeof(*vhost->login_buf), 6149 5951 vhost->login_buf, vhost->login_buf_dma); 6150 5952 dma_free_coherent(vhost->dev, sizeof(*vhost->channel_setup_buf), ··· 6151 5955 dma_pool_destroy(vhost->sg_pool); 6152 5956 ibmvfc_free_queue(vhost, async_q); 6153 5957 LEAVE; 5958 + } 5959 + 5960 + static int ibmvfc_alloc_disc_buf(struct device *dev, struct ibmvfc_channels *channels) 5961 + { 5962 + channels->disc_buf_sz = sizeof(*channels->disc_buf) * max_targets; 5963 + channels->disc_buf = dma_alloc_coherent(dev, channels->disc_buf_sz, 5964 + &channels->disc_buf_dma, GFP_KERNEL); 5965 + 5966 + if (!channels->disc_buf) { 5967 + dev_err(dev, "Couldn't allocate %s Discover Targets buffer\n", 5968 + (channels->protocol == IBMVFC_PROTO_SCSI) ? 
"SCSI" : "NVMe"); 5969 + return -ENOMEM; 5970 + } 5971 + 5972 + return 0; 6154 5973 } 6155 5974 6156 5975 /** ··· 6203 5992 goto free_sg_pool; 6204 5993 } 6205 5994 6206 - vhost->disc_buf_sz = sizeof(*vhost->disc_buf) * max_targets; 6207 - vhost->disc_buf = dma_alloc_coherent(dev, vhost->disc_buf_sz, 6208 - &vhost->disc_buf_dma, GFP_KERNEL); 6209 - 6210 - if (!vhost->disc_buf) { 6211 - dev_err(dev, "Couldn't allocate Discover Targets buffer\n"); 5995 + if (ibmvfc_alloc_disc_buf(dev, &vhost->scsi_scrqs)) 6212 5996 goto free_login_buffer; 6213 - } 6214 5997 6215 5998 vhost->trace = kcalloc(IBMVFC_NUM_TRACE_ENTRIES, 6216 5999 sizeof(struct ibmvfc_trace_entry), GFP_KERNEL); 6217 6000 atomic_set(&vhost->trace_index, -1); 6218 6001 6219 6002 if (!vhost->trace) 6220 - goto free_disc_buffer; 6003 + goto free_scsi_disc_buffer; 6221 6004 6222 6005 vhost->tgt_pool = mempool_create_kmalloc_pool(IBMVFC_TGT_MEMPOOL_SZ, 6223 6006 sizeof(struct ibmvfc_target)); ··· 6237 6032 mempool_destroy(vhost->tgt_pool); 6238 6033 free_trace: 6239 6034 kfree(vhost->trace); 6240 - free_disc_buffer: 6241 - dma_free_coherent(dev, vhost->disc_buf_sz, vhost->disc_buf, 6242 - vhost->disc_buf_dma); 6035 + free_scsi_disc_buffer: 6036 + ibmvfc_free_disc_buf(dev, &vhost->scsi_scrqs); 6243 6037 free_login_buffer: 6244 6038 dma_free_coherent(dev, sizeof(*vhost->login_buf), 6245 6039 vhost->login_buf, vhost->login_buf_dma); ··· 6317 6113 struct Scsi_Host *shost; 6318 6114 struct device *dev = &vdev->dev; 6319 6115 int rc = -ENOMEM; 6320 - unsigned int max_scsi_queues = IBMVFC_MAX_SCSI_QUEUES; 6116 + unsigned int online_cpus = num_online_cpus(); 6117 + unsigned int max_scsi_queues = min((unsigned int)IBMVFC_MAX_SCSI_QUEUES, online_cpus); 6321 6118 6322 6119 ENTER; 6323 6120 shost = scsi_host_alloc(&driver_template, sizeof(*vhost)); ··· 6328 6123 } 6329 6124 6330 6125 shost->transportt = ibmvfc_transport_template; 6331 - shost->can_queue = max_requests; 6126 + shost->can_queue = scsi_qdepth; 6332 6127 
shost->max_lun = max_lun; 6333 6128 shost->max_id = max_targets; 6334 6129 shost->max_sectors = IBMVFC_MAX_SECTORS; ··· 6347 6142 vhost->task_set = 1; 6348 6143 6349 6144 vhost->mq_enabled = mq_enabled; 6350 - vhost->client_scsi_channels = min(shost->nr_hw_queues, nr_scsi_channels); 6145 + vhost->scsi_scrqs.desired_queues = min(shost->nr_hw_queues, nr_scsi_channels); 6146 + vhost->scsi_scrqs.max_queues = shost->nr_hw_queues; 6147 + vhost->scsi_scrqs.protocol = IBMVFC_PROTO_SCSI; 6351 6148 vhost->using_channels = 0; 6352 6149 vhost->do_enquiry = 1; 6353 6150 vhost->scan_timeout = 0; ··· 6489 6282 */ 6490 6283 static unsigned long ibmvfc_get_desired_dma(struct vio_dev *vdev) 6491 6284 { 6492 - unsigned long pool_dma = max_requests * sizeof(union ibmvfc_iu); 6285 + unsigned long pool_dma; 6286 + 6287 + pool_dma = (IBMVFC_MAX_SCSI_QUEUES * scsi_qdepth) * sizeof(union ibmvfc_iu); 6493 6288 return pool_dma + ((512 * 1024) * driver_template.cmd_per_lun); 6494 6289 } 6495 6290
+34 -16
drivers/scsi/ibmvscsi/ibmvfc.h
··· 27 27 #define IBMVFC_ABORT_TIMEOUT 8 28 28 #define IBMVFC_ABORT_WAIT_TIMEOUT 40 29 29 #define IBMVFC_MAX_REQUESTS_DEFAULT 100 30 + #define IBMVFC_SCSI_QDEPTH 128 30 31 31 32 #define IBMVFC_DEBUG 0 32 33 #define IBMVFC_MAX_TARGETS 1024 ··· 58 57 * 2 for each discovery thread 59 58 */ 60 59 #define IBMVFC_NUM_INTERNAL_REQ (1 + 1 + 1 + 2 + (disc_threads * 2)) 60 + /* Reserved subset of events for cancelling channelized IO commands */ 61 + #define IBMVFC_NUM_INTERNAL_SUBQ_REQ 4 61 62 62 63 #define IBMVFC_MAD_SUCCESS 0x00 63 64 #define IBMVFC_MAD_NOT_SUPPORTED 0xF1 ··· 716 713 IBMVFC_TGT_ACTION_LOGOUT_DELETED_RPORT, 717 714 }; 718 715 716 + enum ibmvfc_protocol { 717 + IBMVFC_PROTO_SCSI = 0, 718 + IBMVFC_PROTO_NVME = 1, 719 + }; 720 + 719 721 struct ibmvfc_target { 720 722 struct list_head queue; 721 723 struct ibmvfc_host *vhost; 724 + enum ibmvfc_protocol protocol; 722 725 u64 scsi_id; 723 726 u64 wwpn; 724 727 u64 new_scsi_id; ··· 767 758 struct completion *eh_comp; 768 759 struct timer_list timer; 769 760 u16 hwq; 761 + u8 reserved; 770 762 }; 771 763 772 764 /* a pool of event structs for use */ ··· 803 793 struct ibmvfc_event_pool evt_pool; 804 794 struct list_head sent; 805 795 struct list_head free; 796 + u16 total_depth; 797 + u16 evt_depth; 798 + u16 reserved_depth; 799 + u16 evt_free; 800 + u16 reserved_free; 806 801 spinlock_t l_lock; 807 802 808 803 union ibmvfc_iu cancel_rsp; ··· 819 804 unsigned long irq; 820 805 unsigned long hwq_id; 821 806 char name[32]; 807 + irq_handler_t handler; 822 808 }; 823 809 824 - struct ibmvfc_scsi_channels { 810 + struct ibmvfc_channels { 825 811 struct ibmvfc_queue *scrqs; 812 + enum ibmvfc_protocol protocol; 826 813 unsigned int active_queues; 814 + unsigned int desired_queues; 815 + unsigned int max_queues; 816 + int disc_buf_sz; 817 + struct ibmvfc_discover_targets_entry *disc_buf; 818 + dma_addr_t disc_buf_dma; 827 819 }; 828 820 829 821 enum ibmvfc_host_action { ··· 879 857 mempool_t *tgt_pool; 880 858 struct 
ibmvfc_queue crq; 881 859 struct ibmvfc_queue async_crq; 882 - struct ibmvfc_scsi_channels scsi_scrqs; 860 + struct ibmvfc_channels scsi_scrqs; 883 861 struct ibmvfc_npiv_login login_info; 884 862 union ibmvfc_npiv_login_data *login_buf; 885 863 dma_addr_t login_buf_dma; 886 864 struct ibmvfc_channel_setup *channel_setup_buf; 887 865 dma_addr_t channel_setup_dma; 888 - int disc_buf_sz; 889 866 int log_level; 890 - struct ibmvfc_discover_targets_entry *disc_buf; 891 867 struct mutex passthru_mutex; 892 - int max_vios_scsi_channels; 868 + unsigned int max_vios_scsi_channels; 893 869 int task_set; 894 870 int init_retries; 895 871 int discovery_threads; 896 872 int abort_threads; 897 - int client_migrated; 898 - int reinit; 899 - int delay_init; 900 - int scan_complete; 873 + unsigned int client_migrated:1; 874 + unsigned int reinit:1; 875 + unsigned int delay_init:1; 876 + unsigned int logged_in:1; 877 + unsigned int mq_enabled:1; 878 + unsigned int using_channels:1; 879 + unsigned int do_enquiry:1; 880 + unsigned int aborting_passthru:1; 881 + unsigned int scan_complete:1; 901 882 int scan_timeout; 902 - int logged_in; 903 - int mq_enabled; 904 - int using_channels; 905 - int do_enquiry; 906 - int client_scsi_channels; 907 - int aborting_passthru; 908 883 int events_to_log; 909 884 #define IBMVFC_AE_LINKUP 0x0001 910 885 #define IBMVFC_AE_LINKDOWN 0x0002 911 886 #define IBMVFC_AE_RSCN 0x0004 912 - dma_addr_t disc_buf_dma; 913 887 unsigned int partition_number; 914 888 char partition_name[97]; 915 889 void (*job_step) (struct ibmvfc_host *);
+3
drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
··· 3975 3975 .fabric_drop_tpg = ibmvscsis_drop_tpg, 3976 3976 3977 3977 .tfc_wwn_attrs = ibmvscsis_wwn_attrs, 3978 + 3979 + .default_submit_type = TARGET_DIRECT_SUBMIT, 3980 + .direct_submit_supp = 1, 3978 3981 }; 3979 3982 3980 3983 static void ibmvscsis_dev_release(struct device *dev) {};
+37 -33
drivers/scsi/imm.c
··· 51 51 } imm_struct; 52 52 53 53 static void imm_reset_pulse(unsigned int base); 54 - static int device_check(imm_struct *dev); 54 + static int device_check(imm_struct *dev, bool autodetect); 55 55 56 56 #include "imm.h" 57 + 58 + static unsigned int mode = IMM_AUTODETECT; 59 + module_param(mode, uint, 0644); 60 + MODULE_PARM_DESC(mode, "Transfer mode (0 = Autodetect, 1 = SPP 4-bit, " 61 + "2 = SPP 8-bit, 3 = EPP 8-bit, 4 = EPP 16-bit, 5 = EPP 32-bit)"); 57 62 58 63 static inline imm_struct *imm_dev(struct Scsi_Host *host) 59 64 { ··· 371 366 case IMM_EPP_8: 372 367 epp_reset(ppb); 373 368 w_ctr(ppb, 0x4); 374 - #ifdef CONFIG_SCSI_IZIP_EPP16 375 - if (!(((long) buffer | len) & 0x01)) 376 - outsw(ppb + 4, buffer, len >> 1); 377 - #else 378 - if (!(((long) buffer | len) & 0x03)) 369 + if (dev->mode == IMM_EPP_32 && !(((long) buffer | len) & 0x03)) 379 370 outsl(ppb + 4, buffer, len >> 2); 380 - #endif 371 + else if (dev->mode == IMM_EPP_16 && !(((long) buffer | len) & 0x01)) 372 + outsw(ppb + 4, buffer, len >> 1); 381 373 else 382 374 outsb(ppb + 4, buffer, len); 383 375 w_ctr(ppb, 0xc); ··· 428 426 case IMM_EPP_8: 429 427 epp_reset(ppb); 430 428 w_ctr(ppb, 0x24); 431 - #ifdef CONFIG_SCSI_IZIP_EPP16 432 - if (!(((long) buffer | len) & 0x01)) 433 - insw(ppb + 4, buffer, len >> 1); 434 - #else 435 - if (!(((long) buffer | len) & 0x03)) 436 - insl(ppb + 4, buffer, len >> 2); 437 - #endif 429 + if (dev->mode == IMM_EPP_32 && !(((long) buffer | len) & 0x03)) 430 + insl(ppb + 4, buffer, len >> 2); 431 + else if (dev->mode == IMM_EPP_16 && !(((long) buffer | len) & 0x01)) 432 + insw(ppb + 4, buffer, len >> 1); 438 433 else 439 434 insb(ppb + 4, buffer, len); 440 435 w_ctr(ppb, 0x2c); ··· 588 589 589 590 static int imm_init(imm_struct *dev) 590 591 { 592 + bool autodetect = dev->mode == IMM_AUTODETECT; 593 + 594 + if (autodetect) { 595 + int modes = dev->dev->port->modes; 596 + 597 + /* Mode detection works up the chain of speed 598 + * This avoids a nasty 
if-then-else-if-... tree 599 + */ 600 + dev->mode = IMM_NIBBLE; 601 + 602 + if (modes & PARPORT_MODE_TRISTATE) 603 + dev->mode = IMM_PS2; 604 + } 605 + 591 606 if (imm_connect(dev, 0) != 1) 592 607 return -EIO; 593 608 imm_reset_pulse(dev->base); 594 609 mdelay(1); /* Delay to allow devices to settle */ 595 610 imm_disconnect(dev); 596 611 mdelay(1); /* Another delay to allow devices to settle */ 597 - return device_check(dev); 612 + 613 + return device_check(dev, autodetect); 598 614 } 599 615 600 616 static inline int imm_send_command(struct scsi_cmnd *cmd) ··· 1014 1000 return SUCCESS; 1015 1001 } 1016 1002 1017 - static int device_check(imm_struct *dev) 1003 + static int device_check(imm_struct *dev, bool autodetect) 1018 1004 { 1019 1005 /* This routine looks for a device and then attempts to use EPP 1020 1006 to send a command. If all goes as planned then EPP is available. */ ··· 1026 1012 old_mode = dev->mode; 1027 1013 for (loop = 0; loop < 8; loop++) { 1028 1014 /* Attempt to use EPP for Test Unit Ready */ 1029 - if ((ppb & 0x0007) == 0x0000) 1030 - dev->mode = IMM_EPP_32; 1015 + if (autodetect && (ppb & 0x0007) == 0x0000) 1016 + dev->mode = IMM_EPP_8; 1031 1017 1032 1018 second_pass: 1033 1019 imm_connect(dev, CONNECT_EPP_MAYBE); ··· 1052 1038 udelay(1000); 1053 1039 imm_disconnect(dev); 1054 1040 udelay(1000); 1055 - if (dev->mode == IMM_EPP_32) { 1041 + if (dev->mode != old_mode) { 1056 1042 dev->mode = old_mode; 1057 1043 goto second_pass; 1058 1044 } ··· 1077 1063 udelay(1000); 1078 1064 imm_disconnect(dev); 1079 1065 udelay(1000); 1080 - if (dev->mode == IMM_EPP_32) { 1066 + if (dev->mode != old_mode) { 1081 1067 dev->mode = old_mode; 1082 1068 goto second_pass; 1083 1069 } ··· 1164 1150 DECLARE_WAIT_QUEUE_HEAD_ONSTACK(waiting); 1165 1151 DEFINE_WAIT(wait); 1166 1152 int ports; 1167 - int modes, ppb; 1168 1153 int err = -ENOMEM; 1169 1154 struct pardev_cb imm_cb; 1170 1155 ··· 1175 1162 1176 1163 1177 1164 dev->base = -1; 1178 - dev->mode = 
IMM_AUTODETECT; 1165 + dev->mode = mode < IMM_UNKNOWN ? mode : IMM_AUTODETECT; 1179 1166 INIT_LIST_HEAD(&dev->list); 1180 1167 1181 1168 temp = find_parent(); ··· 1210 1197 } 1211 1198 dev->waiting = NULL; 1212 1199 finish_wait(&waiting, &wait); 1213 - ppb = dev->base = dev->dev->port->base; 1200 + dev->base = dev->dev->port->base; 1214 1201 dev->base_hi = dev->dev->port->base_hi; 1215 - w_ctr(ppb, 0x0c); 1216 - modes = dev->dev->port->modes; 1217 - 1218 - /* Mode detection works up the chain of speed 1219 - * This avoids a nasty if-then-else-if-... tree 1220 - */ 1221 - dev->mode = IMM_NIBBLE; 1222 - 1223 - if (modes & PARPORT_MODE_TRISTATE) 1224 - dev->mode = IMM_PS2; 1202 + w_ctr(dev->base, 0x0c); 1225 1203 1226 1204 /* Done configuration */ 1227 1205
-4
drivers/scsi/imm.h
··· 100 100 [IMM_PS2] = "PS/2", 101 101 [IMM_EPP_8] = "EPP 8 bit", 102 102 [IMM_EPP_16] = "EPP 16 bit", 103 - #ifdef CONFIG_SCSI_IZIP_EPP16 104 - [IMM_EPP_32] = "EPP 16 bit", 105 - #else 106 103 [IMM_EPP_32] = "EPP 32 bit", 107 - #endif 108 104 [IMM_UNKNOWN] = "Unknown", 109 105 }; 110 106
-18
drivers/scsi/ips.c
··· 835 835 int i; 836 836 ips_ha_t *ha; 837 837 ips_scb_t *scb; 838 - ips_copp_wait_item_t *item; 839 838 840 839 METHOD_TRACE("ips_eh_reset", 1); 841 840 ··· 858 859 859 860 if (!ha->active) 860 861 return (FAILED); 861 - 862 - /* See if the command is on the copp queue */ 863 - item = ha->copp_waitlist.head; 864 - while ((item) && (item->scsi_cmd != SC)) 865 - item = item->next; 866 - 867 - if (item) { 868 - /* Found it */ 869 - ips_removeq_copp(&ha->copp_waitlist, item); 870 - return (SUCCESS); 871 - } 872 - 873 - /* See if the command is on the wait queue */ 874 - if (ips_removeq_wait(&ha->scb_waitlist, SC)) { 875 - /* command not sent yet */ 876 - return (SUCCESS); 877 - } 878 862 879 863 /* An explanation for the casual observer: */ 880 864 /* Part of the function of a RAID controller is automatic error */
+6
drivers/scsi/libfc/fc_lport.c
··· 241 241 } 242 242 mutex_lock(&lport->disc.disc_mutex); 243 243 lport->ptp_rdata = fc_rport_create(lport, remote_fid); 244 + if (!lport->ptp_rdata) { 245 + printk(KERN_WARNING "libfc: Failed to setup lport 0x%x\n", 246 + lport->port_id); 247 + mutex_unlock(&lport->disc.disc_mutex); 248 + return; 249 + } 244 250 kref_get(&lport->ptp_rdata->kref); 245 251 lport->ptp_rdata->ids.port_name = remote_wwpn; 246 252 lport->ptp_rdata->ids.node_name = remote_wwnn;
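The libfc hunk adds the missing NULL check after fc_rport_create() and, crucially, drops disc_mutex on the error path before returning. A self-contained sketch of that unlock-before-return shape, with a toy held-flag standing in for the kernel mutex and a hypothetical rport_create():

```c
#include <stdio.h>
#include <stdlib.h>

static int disc_mutex_held;	/* toy stand-in for mutex_lock()/unlock() */

static void disc_lock(void)   { disc_mutex_held = 1; }
static void disc_unlock(void) { disc_mutex_held = 0; }

/* Hypothetical stand-in for fc_rport_create(); may return NULL. */
static void *rport_create(int fail)
{
	return fail ? NULL : malloc(16);
}

/*
 * Mirrors the libfc fix: on allocation failure, log, release the
 * lock, and bail out -- the lock must not stay held on the error path.
 * Returns 0 on success, -1 on failure.
 */
static int setup_rport(int fail)
{
	void *rdata;

	disc_lock();
	rdata = rport_create(fail);
	if (!rdata) {
		fprintf(stderr, "sketch: failed to set up rport\n");
		disc_unlock();
		return -1;
	}
	/* ... initialize rdata under the lock ... */
	disc_unlock();
	free(rdata);
	return 0;
}
```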
+1 -1
drivers/scsi/libsas/sas_discover.c
··· 275 275 * 276 276 * See comment in sas_discover_sata(). 277 277 */ 278 - int sas_discover_end_dev(struct domain_device *dev) 278 + static int sas_discover_end_dev(struct domain_device *dev) 279 279 { 280 280 return sas_notify_lldd_dev_found(dev); 281 281 }
+2 -2
drivers/scsi/libsas/sas_init.c
··· 315 315 } 316 316 EXPORT_SYMBOL_GPL(sas_phy_reset); 317 317 318 - int sas_set_phy_speed(struct sas_phy *phy, 319 - struct sas_phy_linkrates *rates) 318 + static int sas_set_phy_speed(struct sas_phy *phy, 319 + struct sas_phy_linkrates *rates) 320 320 { 321 321 int ret; 322 322
+12
drivers/scsi/libsas/sas_internal.h
··· 39 39 struct sas_work enable_work; 40 40 }; 41 41 42 + void sas_hash_addr(u8 *hashed, const u8 *sas_addr); 43 + 44 + int sas_discover_root_expander(struct domain_device *dev); 45 + 46 + int sas_ex_revalidate_domain(struct domain_device *dev); 47 + void sas_unregister_domain_devices(struct asd_sas_port *port, int gone); 48 + void sas_init_disc(struct sas_discovery *disc, struct asd_sas_port *port); 49 + void sas_discover_event(struct asd_sas_port *port, enum discover_event ev); 50 + 51 + void sas_init_dev(struct domain_device *dev); 52 + void sas_unregister_dev(struct asd_sas_port *port, struct domain_device *dev); 53 + 42 54 void sas_scsi_recover_host(struct Scsi_Host *shost); 43 55 44 56 int sas_register_phys(struct sas_ha_struct *sas_ha);
+23
drivers/scsi/lpfc/lpfc_els.c
··· 131 131 return 1; 132 132 } 133 133 134 + static bool lpfc_is_els_acc_rsp(struct lpfc_dmabuf *buf) 135 + { 136 + struct fc_els_ls_acc *rsp = buf->virt; 137 + 138 + if (rsp && rsp->la_cmd == ELS_LS_ACC) 139 + return true; 140 + return false; 141 + } 142 + 134 143 /** 135 144 * lpfc_prep_els_iocb - Allocate and prepare a lpfc iocb data structure 136 145 * @vport: pointer to a host virtual N_Port data structure. ··· 1115 1106 */ 1116 1107 prsp = list_get_first(&pcmd->list, struct lpfc_dmabuf, list); 1117 1108 if (!prsp) 1109 + goto out; 1110 + if (!lpfc_is_els_acc_rsp(prsp)) 1118 1111 goto out; 1119 1112 sp = prsp->virt + sizeof(uint32_t); 1120 1113 ··· 2130 2119 /* Good status, call state machine */ 2131 2120 prsp = list_entry(cmdiocb->cmd_dmabuf->list.next, 2132 2121 struct lpfc_dmabuf, list); 2122 + if (!prsp) 2123 + goto out; 2124 + if (!lpfc_is_els_acc_rsp(prsp)) 2125 + goto out; 2133 2126 ndlp = lpfc_plogi_confirm_nport(phba, prsp->virt, ndlp); 2134 2127 2135 2128 sp = (struct serv_parm *)((u8 *)prsp->virt + ··· 3460 3445 prdf = (struct lpfc_els_rdf_rsp *)prsp->virt; 3461 3446 if (!prdf) 3462 3447 goto out; 3448 + if (!lpfc_is_els_acc_rsp(prsp)) 3449 + goto out; 3463 3450 3464 3451 for (i = 0; i < ELS_RDF_REG_TAG_CNT && 3465 3452 i < be32_to_cpu(prdf->reg_d1.reg_desc.count); i++) ··· 4059 4042 "0x%02x, 0x%08x\n", 4060 4043 edc_rsp->acc_hdr.la_cmd, 4061 4044 be32_to_cpu(edc_rsp->desc_list_len)); 4045 + 4046 + if (!lpfc_is_els_acc_rsp(prsp)) 4047 + goto out; 4062 4048 4063 4049 /* 4064 4050 * Payload length in bytes is the response descriptor list ··· 11359 11339 prsp = list_get_first(&pcmd->list, struct lpfc_dmabuf, list); 11360 11340 if (!prsp) 11361 11341 goto out; 11342 + if (!lpfc_is_els_acc_rsp(prsp)) 11343 + goto out; 11344 + 11362 11345 sp = prsp->virt + sizeof(uint32_t); 11363 11346 fabric_param_changed = lpfc_check_clean_addr_bit(vport, sp); 11364 11347 memcpy(&vport->fabric_portname, &sp->portName,
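The new lpfc_is_els_acc_rsp() helper rejects any completion whose first payload byte is not LS_ACC before the rest of the response buffer is parsed. A rough standalone model of that validate-before-parse step — the struct layout and parsing are simplified, and the two response codes are illustrative stand-ins for the fc_els.h definitions:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define ELS_LS_RJT 0x01		/* illustrative: reject */
#define ELS_LS_ACC 0x02		/* illustrative: accept */

struct els_rsp {
	uint8_t la_cmd;		/* response code: first byte of payload */
	uint8_t payload[31];
};

/* Mirror of lpfc_is_els_acc_rsp(): accept only LS_ACC frames. */
static bool is_els_acc_rsp(const struct els_rsp *rsp)
{
	return rsp && rsp->la_cmd == ELS_LS_ACC;
}

/* Parse service parameters only after the code check; -1 otherwise. */
static int parse_service_params(const struct els_rsp *rsp, uint8_t *out)
{
	if (!is_els_acc_rsp(rsp))
		return -1;	/* never interpret a non-ACC payload */
	memcpy(out, rsp->payload, sizeof(rsp->payload));
	return 0;
}
```

Gating every parse site on one helper, as the series does at the PLOGI, RDF, EDC and FLOGI completions, keeps a rejected or garbled frame from being read as service parameters.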
+4 -4
drivers/scsi/lpfc/lpfc_hbadisc.c
··· 5654 5654 ((uint32_t)ndlp->nlp_xri << 16) | 5655 5655 ((uint32_t)ndlp->nlp_type << 8) 5656 5656 ); 5657 - lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE, 5657 + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE_VERBOSE, 5658 5658 "0929 FIND node DID " 5659 5659 "Data: x%px x%x x%x x%x x%x x%px\n", 5660 5660 ndlp, ndlp->nlp_DID, ··· 5701 5701 ((uint32_t)ndlp->nlp_type << 8) | 5702 5702 ((uint32_t)ndlp->nlp_rpi & 0xff)); 5703 5703 spin_unlock_irqrestore(shost->host_lock, iflags); 5704 - lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE, 5705 - "2025 FIND node DID " 5704 + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE_VERBOSE, 5705 + "2025 FIND node DID MAPPED " 5706 5706 "Data: x%px x%x x%x x%x x%px\n", 5707 5707 ndlp, ndlp->nlp_DID, 5708 5708 ndlp->nlp_flag, data1, ··· 6468 6468 6469 6469 list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) { 6470 6470 if (filter(ndlp, param)) { 6471 - lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE, 6471 + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE_VERBOSE, 6472 6472 "3185 FIND node filter %ps DID " 6473 6473 "ndlp x%px did x%x flg x%x st x%x " 6474 6474 "xri x%x type x%x rpi x%x\n",
+1 -1
drivers/scsi/lpfc/lpfc_logmsg.h
··· 25 25 #define LOG_MBOX 0x00000004 /* Mailbox events */ 26 26 #define LOG_INIT 0x00000008 /* Initialization events */ 27 27 #define LOG_LINK_EVENT 0x00000010 /* Link events */ 28 - #define LOG_IP 0x00000020 /* IP traffic history */ 28 + #define LOG_NODE_VERBOSE 0x00000020 /* Node verbose events */ 29 29 #define LOG_FCP 0x00000040 /* FCP traffic history */ 30 30 #define LOG_NODE 0x00000080 /* Node table events */ 31 31 #define LOG_TEMP 0x00000100 /* Temperature sensor events */
+14 -4
drivers/scsi/lpfc/lpfc_nportdisc.c
··· 934 934 struct ls_rjt stat; 935 935 uint32_t *payload; 936 936 uint32_t cmd; 937 + PRLI *npr; 937 938 938 939 payload = cmdiocb->cmd_dmabuf->virt; 939 940 cmd = *payload; 941 + npr = (PRLI *)((uint8_t *)payload + sizeof(uint32_t)); 942 + 940 943 if (vport->phba->nvmet_support) { 941 944 /* Must be a NVME PRLI */ 942 - if (cmd == ELS_CMD_PRLI) 945 + if (cmd == ELS_CMD_PRLI) 943 946 goto out; 944 947 } else { 945 948 /* Initiator mode. */ 946 949 if (!vport->nvmei_support && (cmd == ELS_CMD_NVMEPRLI)) 947 950 goto out; 951 + 952 + /* NPIV ports will RJT initiator only functions */ 953 + if (vport->port_type == LPFC_NPIV_PORT && 954 + npr->initiatorFunc && !npr->targetFunc) 955 + goto out; 948 956 } 949 957 return 1; 950 958 out: 951 - lpfc_printf_vlog(vport, KERN_WARNING, LOG_NVME_DISC, 959 + lpfc_printf_vlog(vport, KERN_WARNING, LOG_DISCOVERY, 952 960 "6115 Rcv PRLI (%x) check failed: ndlp rpi %d " 953 - "state x%x flags x%x\n", 961 + "state x%x flags x%x port_type: x%x " 962 + "npr->initfcn: x%x npr->tgtfcn: x%x\n", 954 963 cmd, ndlp->nlp_rpi, ndlp->nlp_state, 955 - ndlp->nlp_flag); 964 + ndlp->nlp_flag, vport->port_type, 965 + npr->initiatorFunc, npr->targetFunc); 956 966 memset(&stat, 0, sizeof(struct ls_rjt)); 957 967 stat.un.b.lsRjtRsnCode = LSRJT_CMD_UNSUPPORTED; 958 968 stat.un.b.lsRjtRsnCodeExp = LSEXP_REQ_UNSUPPORTED;
+4 -2
drivers/scsi/lpfc/lpfc_nvme.c
··· 950 950 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 951 951 int cpu; 952 952 #endif 953 - int offline = 0; 953 + bool offline = false; 954 954 955 955 /* Sanity check on return of outstanding command */ 956 956 if (!lpfc_ncmd) { ··· 1124 1124 nCmd->transferred_length = 0; 1125 1125 nCmd->rcv_rsplen = 0; 1126 1126 nCmd->status = NVME_SC_INTERNAL; 1127 - offline = pci_channel_offline(vport->phba->pcidev); 1127 + if (pci_channel_offline(vport->phba->pcidev) || 1128 + lpfc_ncmd->result == IOERR_SLI_DOWN) 1129 + offline = true; 1128 1130 } 1129 1131 } 1130 1132
+1 -3
drivers/scsi/lpfc/lpfc_sli.c
··· 8571 8571 * is not fatal as the driver will use generic values. 8572 8572 */ 8573 8573 rc = lpfc_parse_vpd(phba, vpd, vpd_size); 8574 - if (unlikely(!rc)) { 8574 + if (unlikely(!rc)) 8575 8575 lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 8576 8576 "0377 Error %d parsing vpd. " 8577 8577 "Using defaults.\n", rc); 8578 - rc = 0; 8579 - } 8580 8578 kfree(vpd); 8581 8579 8582 8580 /* Save information as VPD data */
+1 -1
drivers/scsi/lpfc/lpfc_version.h
··· 20 20 * included with this package. * 21 21 *******************************************************************/ 22 22 23 - #define LPFC_DRIVER_VERSION "14.2.0.14" 23 + #define LPFC_DRIVER_VERSION "14.2.0.15" 24 24 #define LPFC_DRIVER_NAME "lpfc" 25 25 26 26 /* Used for SLI 2/3 */
+23 -30
drivers/scsi/megaraid.c
··· 1898 1898 1899 1899 spin_lock_irq(&adapter->lock); 1900 1900 1901 - rval = megaraid_abort_and_reset(adapter, cmd, SCB_RESET); 1901 + rval = megaraid_abort_and_reset(adapter, NULL, SCB_RESET); 1902 1902 1903 1903 /* 1904 1904 * This is required here to complete any completed requests ··· 1925 1925 struct list_head *pos, *next; 1926 1926 scb_t *scb; 1927 1927 1928 - dev_warn(&adapter->dev->dev, "%s cmd=%x <c=%d t=%d l=%d>\n", 1929 - (aor == SCB_ABORT)? "ABORTING":"RESET", 1930 - cmd->cmnd[0], cmd->device->channel, 1931 - cmd->device->id, (u32)cmd->device->lun); 1928 + if (aor == SCB_ABORT) 1929 + dev_warn(&adapter->dev->dev, 1930 + "ABORTING cmd=%x <c=%d t=%d l=%d>\n", 1931 + cmd->cmnd[0], cmd->device->channel, 1932 + cmd->device->id, (u32)cmd->device->lun); 1933 + else 1934 + dev_warn(&adapter->dev->dev, "RESETTING\n"); 1932 1935 1933 1936 if(list_empty(&adapter->pending_list)) 1934 1937 return FAILED; ··· 1940 1937 1941 1938 scb = list_entry(pos, scb_t, list); 1942 1939 1943 - if (scb->cmd == cmd) { /* Found command */ 1940 + if (!cmd || scb->cmd == cmd) { /* Found command */ 1944 1941 1945 1942 scb->state |= aor; 1946 1943 ··· 1959 1956 1960 1957 return FAILED; 1961 1958 } 1962 - else { 1959 + /* 1960 + * Not yet issued! Remove from the pending 1961 + * list 1962 + */ 1963 + dev_warn(&adapter->dev->dev, 1964 + "%s-[%x], driver owner\n", 1965 + (cmd) ? "ABORTING":"RESET", 1966 + scb->idx); 1967 + mega_free_scb(adapter, scb); 1963 1968 1964 - /* 1965 - * Not yet issued! Remove from the pending 1966 - * list 1967 - */ 1968 - dev_warn(&adapter->dev->dev, 1969 - "%s-[%x], driver owner\n", 1970 - (aor==SCB_ABORT) ? 
"ABORTING":"RESET", 1971 - scb->idx); 1972 - 1973 - mega_free_scb(adapter, scb); 1974 - 1975 - if( aor == SCB_ABORT ) { 1976 - cmd->result = (DID_ABORT << 16); 1977 - } 1978 - else { 1979 - cmd->result = (DID_RESET << 16); 1980 - } 1981 - 1969 + if (cmd) { 1970 + cmd->result = (DID_ABORT << 16); 1982 1971 list_add_tail(SCSI_LIST(cmd), 1983 - &adapter->completed_list); 1984 - 1985 - return SUCCESS; 1972 + &adapter->completed_list); 1986 1973 } 1974 + 1975 + return SUCCESS; 1987 1976 } 1988 1977 } 1989 1978 ··· 4109 4114 .sg_tablesize = MAX_SGLIST, 4110 4115 .cmd_per_lun = DEF_CMD_PER_LUN, 4111 4116 .eh_abort_handler = megaraid_abort, 4112 - .eh_device_reset_handler = megaraid_reset, 4113 - .eh_bus_reset_handler = megaraid_reset, 4114 4117 .eh_host_reset_handler = megaraid_reset, 4115 4118 .no_write_same = 1, 4116 4119 .cmd_size = sizeof(struct megaraid_cmd_priv),
+2 -2
drivers/scsi/megaraid/megaraid_sas.h
··· 23 23 /* 24 24 * MegaRAID SAS Driver meta data 25 25 */ 26 - #define MEGASAS_VERSION "07.725.01.00-rc1" 27 - #define MEGASAS_RELDATE "Mar 2, 2023" 26 + #define MEGASAS_VERSION "07.727.03.00-rc1" 27 + #define MEGASAS_RELDATE "Oct 03, 2023" 28 28 29 29 #define MEGASAS_MSIX_NAME_LEN 32 30 30
+2 -2
drivers/scsi/megaraid/megaraid_sas_base.c
··· 263 263 * Fusion registers could intermittently return all zeroes. 264 264 * This behavior is transient in nature and subsequent reads will 265 265 * return valid value. As a workaround in driver, retry readl for 266 - * upto three times until a non-zero value is read. 266 + * up to thirty times until a non-zero value is read. 267 267 */ 268 268 if (instance->adapter_type == AERO_SERIES) { 269 269 do { 270 270 ret_val = readl(addr); 271 271 i++; 272 - } while (ret_val == 0 && i < 3); 272 + } while (ret_val == 0 && i < 30); 273 273 return ret_val; 274 274 } else { 275 275 return readl(addr);
+3
drivers/scsi/megaraid/megaraid_sas_fusion.c
··· 4268 4268 } 4269 4269 4270 4270 out: 4271 + if (!retval && reason == SCSIIO_TIMEOUT_OCR) 4272 + dev_info(&instance->pdev->dev, "IO is completed, no OCR is required\n"); 4273 + 4271 4274 return retval; 4272 4275 } 4273 4276
+40 -23
drivers/scsi/mpi3mr/mpi3mr_os.c
··· 4012 4012 * mpi3mr_eh_host_reset - Host reset error handling callback 4013 4013 * @scmd: SCSI command reference 4014 4014 * 4015 - * Issue controller reset if the scmd is for a Physical Device, 4016 - * if the scmd is for RAID volume, then wait for 4017 - * MPI3MR_RAID_ERRREC_RESET_TIMEOUT and checke whether any 4018 - * pending I/Os prior to issuing reset to the controller. 4015 + * Issue controller reset 4019 4016 * 4020 4017 * Return: SUCCESS of successful reset else FAILED 4021 4018 */ 4022 4019 static int mpi3mr_eh_host_reset(struct scsi_cmnd *scmd) 4023 4020 { 4024 4021 struct mpi3mr_ioc *mrioc = shost_priv(scmd->device->host); 4025 - struct mpi3mr_stgt_priv_data *stgt_priv_data; 4026 - struct mpi3mr_sdev_priv_data *sdev_priv_data; 4027 - u8 dev_type = MPI3_DEVICE_DEVFORM_VD; 4028 4022 int retval = FAILED, ret; 4029 4023 4030 - sdev_priv_data = scmd->device->hostdata; 4031 - if (sdev_priv_data && sdev_priv_data->tgt_priv_data) { 4032 - stgt_priv_data = sdev_priv_data->tgt_priv_data; 4033 - dev_type = stgt_priv_data->dev_type; 4034 - } 4035 - 4036 - if (dev_type == MPI3_DEVICE_DEVFORM_VD) { 4037 - mpi3mr_wait_for_host_io(mrioc, 4038 - MPI3MR_RAID_ERRREC_RESET_TIMEOUT); 4039 - if (!mpi3mr_get_fw_pending_ios(mrioc)) { 4040 - retval = SUCCESS; 4041 - goto out; 4042 - } 4043 - } 4044 - 4045 - mpi3mr_print_pending_host_io(mrioc); 4046 4024 ret = mpi3mr_soft_reset_handler(mrioc, 4047 4025 MPI3MR_RESET_FROM_EH_HOS, 1); 4048 4026 if (ret) ··· 4032 4054 "Host reset is %s for scmd(%p)\n", 4033 4055 ((retval == SUCCESS) ? "SUCCESS" : "FAILED"), scmd); 4034 4056 4057 + return retval; 4058 + } 4059 + 4060 + /** 4061 + * mpi3mr_eh_bus_reset - Bus reset error handling callback 4062 + * @scmd: SCSI command reference 4063 + * 4064 + * Checks whether pending I/Os are present for the RAID volume; 4065 + * if not there's no need to reset the adapter. 
4066 + * 4067 + * Return: SUCCESS on successful reset else FAILED 4068 + */ 4069 + static int mpi3mr_eh_bus_reset(struct scsi_cmnd *scmd) 4070 + { 4071 + struct mpi3mr_ioc *mrioc = shost_priv(scmd->device->host); 4072 + struct mpi3mr_stgt_priv_data *stgt_priv_data; 4073 + struct mpi3mr_sdev_priv_data *sdev_priv_data; 4074 + u8 dev_type = MPI3_DEVICE_DEVFORM_VD; 4075 + int retval = FAILED; 4076 + 4077 + sdev_priv_data = scmd->device->hostdata; 4078 + if (sdev_priv_data && sdev_priv_data->tgt_priv_data) { 4079 + stgt_priv_data = sdev_priv_data->tgt_priv_data; 4080 + dev_type = stgt_priv_data->dev_type; 4081 + } 4082 + 4083 + if (dev_type == MPI3_DEVICE_DEVFORM_VD) { 4084 + mpi3mr_wait_for_host_io(mrioc, 4085 + MPI3MR_RAID_ERRREC_RESET_TIMEOUT); 4086 + if (!mpi3mr_get_fw_pending_ios(mrioc)) 4087 + retval = SUCCESS; 4088 + } 4089 + if (retval == FAILED) 4090 + mpi3mr_print_pending_host_io(mrioc); 4091 + 4092 + sdev_printk(KERN_INFO, scmd->device, 4093 + "Bus reset is %s for scmd(%p)\n", 4094 + ((retval == SUCCESS) ? "SUCCESS" : "FAILED"), scmd); 4035 4095 return retval; 4036 4096 } 4037 4097 ··· 4916 4900 .change_queue_depth = mpi3mr_change_queue_depth, 4917 4901 .eh_device_reset_handler = mpi3mr_eh_dev_reset, 4918 4902 .eh_target_reset_handler = mpi3mr_eh_target_reset, 4903 + .eh_bus_reset_handler = mpi3mr_eh_bus_reset, 4919 4904 .eh_host_reset_handler = mpi3mr_eh_host_reset, 4920 4905 .bios_param = mpi3mr_bios_param, 4921 4906 .map_queues = mpi3mr_map_queues,
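The new mpi3mr_eh_bus_reset() succeeds only when the RAID volume's outstanding I/O drains within MPI3MR_RAID_ERRREC_RESET_TIMEOUT, sparing the adapter an unnecessary controller reset. A sketch of that bounded wait-for-quiescence — the tick-based polling is a stand-in for mpi3mr_wait_for_host_io(), not the driver's actual mechanism:

```c
#include <stdbool.h>

static int pending_ios;		/* fake count of outstanding firmware I/Os */

/* Fake wait step: each poll tick retires one outstanding I/O. */
static void poll_tick(void)
{
	if (pending_ios > 0)
		pending_ios--;
}

/*
 * Poll up to `ticks` times, stopping early once everything drained.
 * Returns true when the device quiesced within the budget -- only
 * then can the handler report SUCCESS without touching the adapter.
 */
static bool wait_for_quiesce(int ticks)
{
	int i;

	for (i = 0; i < ticks && pending_ios > 0; i++)
		poll_tick();
	return pending_ios == 0;
}
```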
+2 -2
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 223 223 224 224 for (i = 0 ; i < 30 ; i++) { 225 225 ret_val = readl(addr); 226 - if (ret_val == 0) 227 - continue; 226 + if (ret_val != 0) 227 + break; 228 228 } 229 229 230 230 return ret_val;
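The mpt3sas fix matters because the old `if (ret_val == 0) continue;` was a no-op at the bottom of the loop: all thirty reads always ran and the function returned whatever the last one happened to be. The corrected loop stops at the first non-zero value. A sketch of that bounded retry with a fake register that reads zero a few times before settling (names and values hypothetical):

```c
#include <stdint.h>

static int fake_reads;		/* how many times the register was read */

/* Fake MMIO read: returns 0 for the first two reads, then 0xdead. */
static uint32_t fake_readl(void)
{
	fake_reads++;
	return fake_reads <= 2 ? 0 : 0xdeadU;
}

/* Retry up to `max_tries` reads, stopping at the first non-zero value. */
static uint32_t read_retry(int max_tries)
{
	uint32_t v = 0;
	int i;

	for (i = 0; i < max_tries; i++) {
		v = fake_readl();
		if (v != 0)
			break;	/* `continue` here would keep re-reading */
	}
	return v;
}
```

The same shape underlies the Aero readl workaround in megaraid_sas_base.c above, which the series also widens from three retries to thirty.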
+16 -73
drivers/scsi/pm8001/pm8001_hwi.c
··· 1180 1180 } 1181 1181 } 1182 1182 1183 - #ifndef PM8001_USE_MSIX 1184 - /** 1185 - * pm8001_chip_intx_interrupt_enable - enable PM8001 chip interrupt 1186 - * @pm8001_ha: our hba card information 1187 - */ 1188 - static void 1189 - pm8001_chip_intx_interrupt_enable(struct pm8001_hba_info *pm8001_ha) 1190 - { 1191 - pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_CLEAR_ALL); 1192 - pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, ODCR_CLEAR_ALL); 1193 - } 1194 - 1195 - /** 1196 - * pm8001_chip_intx_interrupt_disable - disable PM8001 chip interrupt 1197 - * @pm8001_ha: our hba card information 1198 - */ 1199 - static void 1200 - pm8001_chip_intx_interrupt_disable(struct pm8001_hba_info *pm8001_ha) 1201 - { 1202 - pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_MASK_ALL); 1203 - } 1204 - 1205 - #else 1206 - 1207 - /** 1208 - * pm8001_chip_msix_interrupt_enable - enable PM8001 chip interrupt 1209 - * @pm8001_ha: our hba card information 1210 - * @int_vec_idx: interrupt number to enable 1211 - */ 1212 - static void 1213 - pm8001_chip_msix_interrupt_enable(struct pm8001_hba_info *pm8001_ha, 1214 - u32 int_vec_idx) 1215 - { 1216 - u32 msi_index; 1217 - u32 value; 1218 - msi_index = int_vec_idx * MSIX_TABLE_ELEMENT_SIZE; 1219 - msi_index += MSIX_TABLE_BASE; 1220 - pm8001_cw32(pm8001_ha, 0, msi_index, MSIX_INTERRUPT_ENABLE); 1221 - value = (1 << int_vec_idx); 1222 - pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, value); 1223 - 1224 - } 1225 - 1226 - /** 1227 - * pm8001_chip_msix_interrupt_disable - disable PM8001 chip interrupt 1228 - * @pm8001_ha: our hba card information 1229 - * @int_vec_idx: interrupt number to disable 1230 - */ 1231 - static void 1232 - pm8001_chip_msix_interrupt_disable(struct pm8001_hba_info *pm8001_ha, 1233 - u32 int_vec_idx) 1234 - { 1235 - u32 msi_index; 1236 - msi_index = int_vec_idx * MSIX_TABLE_ELEMENT_SIZE; 1237 - msi_index += MSIX_TABLE_BASE; 1238 - pm8001_cw32(pm8001_ha, 0, msi_index, MSIX_INTERRUPT_DISABLE); 1239 - } 1240 - #endif 1241 - 1242 1183 /** 1243 1184 * 
pm8001_chip_interrupt_enable - enable PM8001 chip interrupt 1244 1185 * @pm8001_ha: our hba card information ··· 1188 1247 static void 1189 1248 pm8001_chip_interrupt_enable(struct pm8001_hba_info *pm8001_ha, u8 vec) 1190 1249 { 1191 - #ifdef PM8001_USE_MSIX 1192 - pm8001_chip_msix_interrupt_enable(pm8001_ha, 0); 1193 - #else 1194 - pm8001_chip_intx_interrupt_enable(pm8001_ha); 1195 - #endif 1250 + if (pm8001_ha->use_msix) { 1251 + pm8001_cw32(pm8001_ha, 0, MSIX_TABLE_BASE, 1252 + MSIX_INTERRUPT_ENABLE); 1253 + pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, 1); 1254 + } else { 1255 + pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_CLEAR_ALL); 1256 + pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, ODCR_CLEAR_ALL); 1257 + } 1196 1258 } 1197 1259 1198 1260 /** ··· 1206 1262 static void 1207 1263 pm8001_chip_interrupt_disable(struct pm8001_hba_info *pm8001_ha, u8 vec) 1208 1264 { 1209 - #ifdef PM8001_USE_MSIX 1210 - pm8001_chip_msix_interrupt_disable(pm8001_ha, 0); 1211 - #else 1212 - pm8001_chip_intx_interrupt_disable(pm8001_ha); 1213 - #endif 1265 + if (pm8001_ha->use_msix) 1266 + pm8001_cw32(pm8001_ha, 0, MSIX_TABLE_BASE, 1267 + MSIX_INTERRUPT_DISABLE); 1268 + else 1269 + pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_MASK_ALL); 1214 1270 } 1215 1271 1216 1272 /** ··· 4253 4309 4254 4310 static u32 pm8001_chip_is_our_interrupt(struct pm8001_hba_info *pm8001_ha) 4255 4311 { 4256 - #ifdef PM8001_USE_MSIX 4257 - return 1; 4258 - #else 4259 4312 u32 value; 4313 + 4314 + if (pm8001_ha->use_msix) 4315 + return 1; 4260 4316 4261 4317 value = pm8001_cr32(pm8001_ha, 0, MSGU_ODR); 4262 4318 if (value) 4263 4319 return 1; 4264 4320 return 0; 4265 - #endif 4266 4321 } 4267 4322 4268 4323 /**
+153 -132
drivers/scsi/pm8001/pm8001_init.c
··· 56 56 " 4: Link rate 6.0G\n" 57 57 " 8: Link rate 12.0G\n"); 58 58 59 + bool pm8001_use_msix = true; 60 + module_param_named(use_msix, pm8001_use_msix, bool, 0444); 61 + MODULE_PARM_DESC(zoned, "Use MSIX interrupts. Default: true"); 62 + 63 + static bool pm8001_use_tasklet = true; 64 + module_param_named(use_tasklet, pm8001_use_tasklet, bool, 0444); 65 + MODULE_PARM_DESC(zoned, "Use MSIX interrupts. Default: true"); 66 + 67 + static bool pm8001_read_wwn = true; 68 + module_param_named(read_wwn, pm8001_read_wwn, bool, 0444); 69 + MODULE_PARM_DESC(zoned, "Get WWN from the controller. Default: true"); 70 + 59 71 static struct scsi_transport_template *pm8001_stt; 60 72 static int pm8001_init_ccb_tag(struct pm8001_hba_info *); 61 73 ··· 212 200 kfree(pm8001_ha); 213 201 } 214 202 215 - #ifdef PM8001_USE_TASKLET 216 - 217 203 /** 218 204 * pm8001_tasklet() - tasklet for 64 msi-x interrupt handler 219 205 * @opaque: the passed general host adapter struct ··· 219 209 */ 220 210 static void pm8001_tasklet(unsigned long opaque) 221 211 { 222 - struct pm8001_hba_info *pm8001_ha; 223 - struct isr_param *irq_vector; 212 + struct isr_param *irq_vector = (struct isr_param *)opaque; 213 + struct pm8001_hba_info *pm8001_ha = irq_vector->drv_inst; 224 214 225 - irq_vector = (struct isr_param *)opaque; 226 - pm8001_ha = irq_vector->drv_inst; 227 - if (unlikely(!pm8001_ha)) 228 - BUG_ON(1); 215 + if (WARN_ON_ONCE(!pm8001_ha)) 216 + return; 217 + 229 218 PM8001_CHIP_DISP->isr(pm8001_ha, irq_vector->irq_id); 230 219 } 231 - #endif 220 + 221 + static void pm8001_init_tasklet(struct pm8001_hba_info *pm8001_ha) 222 + { 223 + int i; 224 + 225 + if (!pm8001_use_tasklet) 226 + return; 227 + 228 + /* Tasklet for non msi-x interrupt handler */ 229 + if ((!pm8001_ha->pdev->msix_cap || !pci_msi_enabled()) || 230 + (pm8001_ha->chip_id == chip_8001)) { 231 + tasklet_init(&pm8001_ha->tasklet[0], pm8001_tasklet, 232 + (unsigned long)&(pm8001_ha->irq_vector[0])); 233 + return; 234 + } 235 + for (i 
= 0; i < PM8001_MAX_MSIX_VEC; i++) 236 + tasklet_init(&pm8001_ha->tasklet[i], pm8001_tasklet, 237 + (unsigned long)&(pm8001_ha->irq_vector[i])); 238 + } 239 + 240 + static void pm8001_kill_tasklet(struct pm8001_hba_info *pm8001_ha) 241 + { 242 + int i; 243 + 244 + if (!pm8001_use_tasklet) 245 + return; 246 + 247 + /* For non-msix and msix interrupts */ 248 + if ((!pm8001_ha->pdev->msix_cap || !pci_msi_enabled()) || 249 + (pm8001_ha->chip_id == chip_8001)) { 250 + tasklet_kill(&pm8001_ha->tasklet[0]); 251 + return; 252 + } 253 + 254 + for (i = 0; i < PM8001_MAX_MSIX_VEC; i++) 255 + tasklet_kill(&pm8001_ha->tasklet[i]); 256 + } 257 + 258 + static irqreturn_t pm8001_handle_irq(struct pm8001_hba_info *pm8001_ha, 259 + int irq) 260 + { 261 + if (unlikely(!pm8001_ha)) 262 + return IRQ_NONE; 263 + 264 + if (!PM8001_CHIP_DISP->is_our_interrupt(pm8001_ha)) 265 + return IRQ_NONE; 266 + 267 + if (!pm8001_use_tasklet) 268 + return PM8001_CHIP_DISP->isr(pm8001_ha, irq); 269 + 270 + tasklet_schedule(&pm8001_ha->tasklet[irq]); 271 + return IRQ_HANDLED; 272 + } 232 273 233 274 /** 234 275 * pm8001_interrupt_handler_msix - main MSIX interrupt handler. 
··· 291 230 */ 292 231 static irqreturn_t pm8001_interrupt_handler_msix(int irq, void *opaque) 293 232 { 294 - struct isr_param *irq_vector; 295 - struct pm8001_hba_info *pm8001_ha; 296 - irqreturn_t ret = IRQ_HANDLED; 297 - irq_vector = (struct isr_param *)opaque; 298 - pm8001_ha = irq_vector->drv_inst; 233 + struct isr_param *irq_vector = (struct isr_param *)opaque; 234 + struct pm8001_hba_info *pm8001_ha = irq_vector->drv_inst; 299 235 300 - if (unlikely(!pm8001_ha)) 301 - return IRQ_NONE; 302 - if (!PM8001_CHIP_DISP->is_our_interrupt(pm8001_ha)) 303 - return IRQ_NONE; 304 - #ifdef PM8001_USE_TASKLET 305 - tasklet_schedule(&pm8001_ha->tasklet[irq_vector->irq_id]); 306 - #else 307 - ret = PM8001_CHIP_DISP->isr(pm8001_ha, irq_vector->irq_id); 308 - #endif 309 - return ret; 236 + return pm8001_handle_irq(pm8001_ha, irq_vector->irq_id); 310 237 } 311 238 312 239 /** ··· 305 256 306 257 static irqreturn_t pm8001_interrupt_handler_intx(int irq, void *dev_id) 307 258 { 308 - struct pm8001_hba_info *pm8001_ha; 309 - irqreturn_t ret = IRQ_HANDLED; 310 259 struct sas_ha_struct *sha = dev_id; 311 - pm8001_ha = sha->lldd_ha; 312 - if (unlikely(!pm8001_ha)) 313 - return IRQ_NONE; 314 - if (!PM8001_CHIP_DISP->is_our_interrupt(pm8001_ha)) 315 - return IRQ_NONE; 260 + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; 316 261 317 - #ifdef PM8001_USE_TASKLET 318 - tasklet_schedule(&pm8001_ha->tasklet[0]); 319 - #else 320 - ret = PM8001_CHIP_DISP->isr(pm8001_ha, 0); 321 - #endif 322 - return ret; 262 + return pm8001_handle_irq(pm8001_ha, 0); 323 263 } 324 264 325 265 static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha); 266 + static void pm8001_free_irq(struct pm8001_hba_info *pm8001_ha); 326 267 327 268 /** 328 269 * pm8001_alloc - initiate our hba structure and 6 DMAs area. 
··· 550 511 { 551 512 struct pm8001_hba_info *pm8001_ha; 552 513 struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); 553 - int j; 554 514 555 515 pm8001_ha = sha->lldd_ha; 556 516 if (!pm8001_ha) ··· 580 542 else 581 543 pm8001_ha->iomb_size = IOMB_SIZE_SPC; 582 544 583 - #ifdef PM8001_USE_TASKLET 584 - /* Tasklet for non msi-x interrupt handler */ 585 - if ((!pdev->msix_cap || !pci_msi_enabled()) 586 - || (pm8001_ha->chip_id == chip_8001)) 587 - tasklet_init(&pm8001_ha->tasklet[0], pm8001_tasklet, 588 - (unsigned long)&(pm8001_ha->irq_vector[0])); 589 - else 590 - for (j = 0; j < PM8001_MAX_MSIX_VEC; j++) 591 - tasklet_init(&pm8001_ha->tasklet[j], pm8001_tasklet, 592 - (unsigned long)&(pm8001_ha->irq_vector[j])); 593 - #endif 545 + pm8001_init_tasklet(pm8001_ha); 546 + 594 547 if (pm8001_ioremap(pm8001_ha)) 595 548 goto failed_pci_alloc; 596 549 if (!pm8001_alloc(pm8001_ha, ent)) ··· 687 658 */ 688 659 static int pm8001_init_sas_add(struct pm8001_hba_info *pm8001_ha) 689 660 { 690 - u8 i, j; 691 - u8 sas_add[8]; 692 - #ifdef PM8001_READ_VPD 693 - /* For new SPC controllers WWN is stored in flash vpd 694 - * For SPC/SPCve controllers WWN is stored in EEPROM 695 - * For Older SPC WWN is stored in NVMD 696 - */ 697 661 DECLARE_COMPLETION_ONSTACK(completion); 698 662 struct pm8001_ioctl_payload payload; 663 + unsigned long time_remaining; 664 + u8 sas_add[8]; 699 665 u16 deviceid; 700 666 int rc; 701 - unsigned long time_remaining; 667 + u8 i, j; 702 668 669 + if (!pm8001_read_wwn) { 670 + __be64 dev_sas_addr = cpu_to_be64(0x50010c600047f9d0ULL); 671 + 672 + for (i = 0; i < pm8001_ha->chip->n_phy; i++) 673 + memcpy(&pm8001_ha->phy[i].dev_sas_addr, &dev_sas_addr, 674 + SAS_ADDR_SIZE); 675 + memcpy(pm8001_ha->sas_addr, &pm8001_ha->phy[0].dev_sas_addr, 676 + SAS_ADDR_SIZE); 677 + return 0; 678 + } 679 + 680 + /* 681 + * For new SPC controllers WWN is stored in flash vpd. For SPC/SPCve 682 + * controllers WWN is stored in EEPROM. 
And for Older SPC WWN is stored 683 + * in NVMD. 684 + */ 703 685 if (PM8001_CHIP_DISP->fatal_errors(pm8001_ha)) { 704 686 pm8001_dbg(pm8001_ha, FAIL, "controller is in fatal error state\n"); 705 687 return -EIO; ··· 784 744 pm8001_ha->phy[i].dev_sas_addr); 785 745 } 786 746 kfree(payload.func_specific); 787 - #else 788 - for (i = 0; i < pm8001_ha->chip->n_phy; i++) { 789 - pm8001_ha->phy[i].dev_sas_addr = 0x50010c600047f9d0ULL; 790 - pm8001_ha->phy[i].dev_sas_addr = 791 - cpu_to_be64((u64) 792 - (*(u64 *)&pm8001_ha->phy[i].dev_sas_addr)); 793 - } 794 - memcpy(pm8001_ha->sas_addr, &pm8001_ha->phy[0].dev_sas_addr, 795 - SAS_ADDR_SIZE); 796 - #endif 747 + 797 748 return 0; 798 749 } 799 750 ··· 794 763 */ 795 764 static int pm8001_get_phy_settings_info(struct pm8001_hba_info *pm8001_ha) 796 765 { 797 - 798 - #ifdef PM8001_READ_VPD 799 - /*OPTION ROM FLASH read for the SPC cards */ 800 766 DECLARE_COMPLETION_ONSTACK(completion); 801 767 struct pm8001_ioctl_payload payload; 802 768 int rc; 769 + 770 + if (!pm8001_read_wwn) 771 + return 0; 803 772 804 773 pm8001_ha->nvmd_completion = &completion; 805 774 /* SAS ADDRESS read from flash / EEPROM */ ··· 819 788 wait_for_completion(&completion); 820 789 pm8001_set_phy_profile(pm8001_ha, sizeof(u8), payload.func_specific); 821 790 kfree(payload.func_specific); 822 - #endif 791 + 823 792 return 0; 824 793 } 825 794 ··· 970 939 } 971 940 } 972 941 973 - #ifdef PM8001_USE_MSIX 974 942 /** 975 943 * pm8001_setup_msix - enable MSI-X interrupt 976 944 * @pm8001_ha: our ha struct. 
··· 1051 1021 1052 1022 return rc; 1053 1023 } 1054 - #endif 1055 1024 1056 1025 /** 1057 1026 * pm8001_request_irq - register interrupt ··· 1059 1030 static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha) 1060 1031 { 1061 1032 struct pci_dev *pdev = pm8001_ha->pdev; 1062 - #ifdef PM8001_USE_MSIX 1063 1033 int rc; 1064 1034 1065 - if (pci_find_capability(pdev, PCI_CAP_ID_MSIX)) { 1035 + if (pm8001_use_msix && pci_find_capability(pdev, PCI_CAP_ID_MSIX)) { 1066 1036 rc = pm8001_setup_msix(pm8001_ha); 1067 1037 if (rc) { 1068 1038 pm8001_dbg(pm8001_ha, FAIL, ··· 1069 1041 return rc; 1070 1042 } 1071 1043 1072 - if (pdev->msix_cap && pci_msi_enabled()) 1073 - return pm8001_request_msix(pm8001_ha); 1044 + if (!pdev->msix_cap || !pci_msi_enabled()) 1045 + goto use_intx; 1046 + 1047 + rc = pm8001_request_msix(pm8001_ha); 1048 + if (rc) 1049 + return rc; 1050 + 1051 + pm8001_ha->use_msix = true; 1052 + 1053 + return 0; 1074 1054 } 1075 1055 1056 + use_intx: 1057 + /* Initialize the INT-X interrupt */ 1076 1058 pm8001_dbg(pm8001_ha, INIT, "MSIX not supported!!!\n"); 1077 - #endif 1078 - 1079 - /* initialize the INT-X interrupt */ 1059 + pm8001_ha->use_msix = false; 1080 1060 pm8001_ha->irq_vector[0].irq_id = 0; 1081 1061 pm8001_ha->irq_vector[0].drv_inst = pm8001_ha; 1082 1062 1083 1063 return request_irq(pdev->irq, pm8001_interrupt_handler_intx, 1084 1064 IRQF_SHARED, pm8001_ha->name, 1085 1065 SHOST_TO_SAS_HA(pm8001_ha->shost)); 1066 + } 1067 + 1068 + static void pm8001_free_irq(struct pm8001_hba_info *pm8001_ha) 1069 + { 1070 + struct pci_dev *pdev = pm8001_ha->pdev; 1071 + int i; 1072 + 1073 + if (pm8001_ha->use_msix) { 1074 + for (i = 0; i < pm8001_ha->number_of_intr; i++) 1075 + synchronize_irq(pci_irq_vector(pdev, i)); 1076 + 1077 + for (i = 0; i < pm8001_ha->number_of_intr; i++) 1078 + free_irq(pci_irq_vector(pdev, i), &pm8001_ha->irq_vector[i]); 1079 + 1080 + pci_free_irq_vectors(pdev); 1081 + return; 1082 + } 1083 + 1084 + /* INT-X */ 1085 + 
free_irq(pm8001_ha->irq, pm8001_ha->sas); 1086 1086 } 1087 1087 1088 1088 /** ··· 1308 1252 static void pm8001_pci_remove(struct pci_dev *pdev) 1309 1253 { 1310 1254 struct sas_ha_struct *sha = pci_get_drvdata(pdev); 1311 - struct pm8001_hba_info *pm8001_ha; 1312 - int i, j; 1313 - pm8001_ha = sha->lldd_ha; 1255 + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; 1256 + int i; 1257 + 1314 1258 sas_unregister_ha(sha); 1315 1259 sas_remove_host(pm8001_ha->shost); 1316 1260 list_del(&pm8001_ha->list); 1317 1261 PM8001_CHIP_DISP->interrupt_disable(pm8001_ha, 0xFF); 1318 1262 PM8001_CHIP_DISP->chip_soft_rst(pm8001_ha); 1319 1263 1320 - #ifdef PM8001_USE_MSIX 1321 - for (i = 0; i < pm8001_ha->number_of_intr; i++) 1322 - synchronize_irq(pci_irq_vector(pdev, i)); 1323 - for (i = 0; i < pm8001_ha->number_of_intr; i++) 1324 - free_irq(pci_irq_vector(pdev, i), &pm8001_ha->irq_vector[i]); 1325 - pci_free_irq_vectors(pdev); 1326 - #else 1327 - free_irq(pm8001_ha->irq, sha); 1328 - #endif 1329 - #ifdef PM8001_USE_TASKLET 1330 - /* For non-msix and msix interrupts */ 1331 - if ((!pdev->msix_cap || !pci_msi_enabled()) || 1332 - (pm8001_ha->chip_id == chip_8001)) 1333 - tasklet_kill(&pm8001_ha->tasklet[0]); 1334 - else 1335 - for (j = 0; j < PM8001_MAX_MSIX_VEC; j++) 1336 - tasklet_kill(&pm8001_ha->tasklet[j]); 1337 - #endif 1264 + pm8001_free_irq(pm8001_ha); 1265 + pm8001_kill_tasklet(pm8001_ha); 1338 1266 scsi_host_put(pm8001_ha->shost); 1339 1267 1340 1268 for (i = 0; i < pm8001_ha->ccb_count; i++) { ··· 1349 1309 struct pci_dev *pdev = to_pci_dev(dev); 1350 1310 struct sas_ha_struct *sha = pci_get_drvdata(pdev); 1351 1311 struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; 1352 - int i, j; 1312 + 1353 1313 sas_suspend_ha(sha); 1354 1314 flush_workqueue(pm8001_wq); 1355 1315 scsi_block_requests(pm8001_ha->shost); ··· 1359 1319 } 1360 1320 PM8001_CHIP_DISP->interrupt_disable(pm8001_ha, 0xFF); 1361 1321 PM8001_CHIP_DISP->chip_soft_rst(pm8001_ha); 1362 - #ifdef PM8001_USE_MSIX 1363 - 
for (i = 0; i < pm8001_ha->number_of_intr; i++) 1364 - synchronize_irq(pci_irq_vector(pdev, i)); 1365 - for (i = 0; i < pm8001_ha->number_of_intr; i++) 1366 - free_irq(pci_irq_vector(pdev, i), &pm8001_ha->irq_vector[i]); 1367 - pci_free_irq_vectors(pdev); 1368 - #else 1369 - free_irq(pm8001_ha->irq, sha); 1370 - #endif 1371 - #ifdef PM8001_USE_TASKLET 1372 - /* For non-msix and msix interrupts */ 1373 - if ((!pdev->msix_cap || !pci_msi_enabled()) || 1374 - (pm8001_ha->chip_id == chip_8001)) 1375 - tasklet_kill(&pm8001_ha->tasklet[0]); 1376 - else 1377 - for (j = 0; j < PM8001_MAX_MSIX_VEC; j++) 1378 - tasklet_kill(&pm8001_ha->tasklet[j]); 1379 - #endif 1322 + 1323 + pm8001_free_irq(pm8001_ha); 1324 + pm8001_kill_tasklet(pm8001_ha); 1325 + 1380 1326 pm8001_info(pm8001_ha, "pdev=0x%p, slot=%s, entering " 1381 1327 "suspended state\n", pdev, 1382 1328 pm8001_ha->name); ··· 1381 1355 struct sas_ha_struct *sha = pci_get_drvdata(pdev); 1382 1356 struct pm8001_hba_info *pm8001_ha; 1383 1357 int rc; 1384 - u8 i = 0, j; 1358 + u8 i = 0; 1385 1359 DECLARE_COMPLETION_ONSTACK(completion); 1386 1360 1387 1361 pm8001_ha = sha->lldd_ha; ··· 1409 1383 rc = pm8001_request_irq(pm8001_ha); 1410 1384 if (rc) 1411 1385 goto err_out_disable; 1412 - #ifdef PM8001_USE_TASKLET 1413 - /* Tasklet for non msi-x interrupt handler */ 1414 - if ((!pdev->msix_cap || !pci_msi_enabled()) || 1415 - (pm8001_ha->chip_id == chip_8001)) 1416 - tasklet_init(&pm8001_ha->tasklet[0], pm8001_tasklet, 1417 - (unsigned long)&(pm8001_ha->irq_vector[0])); 1418 - else 1419 - for (j = 0; j < PM8001_MAX_MSIX_VEC; j++) 1420 - tasklet_init(&pm8001_ha->tasklet[j], pm8001_tasklet, 1421 - (unsigned long)&(pm8001_ha->irq_vector[j])); 1422 - #endif 1386 + 1387 + pm8001_init_tasklet(pm8001_ha); 1388 + 1423 1389 PM8001_CHIP_DISP->interrupt_enable(pm8001_ha, 0); 1424 1390 if (pm8001_ha->chip_id != chip_8001) { 1425 1391 for (i = 1; i < pm8001_ha->number_of_intr; i++) ··· 1542 1524 static int __init pm8001_init(void) 1543 
1525 { 1544 1526 int rc = -ENOMEM; 1527 + 1528 + if (pm8001_use_tasklet && !pm8001_use_msix) 1529 + pm8001_use_tasklet = false; 1545 1530 1546 1531 pm8001_wq = alloc_workqueue("pm80xx", 0, 0); 1547 1532 if (!pm8001_wq)
+3 -8
drivers/scsi/pm8001/pm8001_sas.h
··· 83 83 pm8001_info(HBA, fmt, ##__VA_ARGS__); \ 84 84 } while (0) 85 85 86 - #define PM8001_USE_TASKLET 87 - #define PM8001_USE_MSIX 88 - #define PM8001_READ_VPD 89 - 86 + extern bool pm8001_use_msix; 90 87 91 88 #define IS_SPCV_12G(dev) ((dev->device == 0X8074) \ 92 89 || (dev->device == 0X8076) \ ··· 517 520 struct pm8001_device *devices; 518 521 struct pm8001_ccb_info *ccb_info; 519 522 u32 ccb_count; 520 - #ifdef PM8001_USE_MSIX 523 + 524 + bool use_msix; 521 525 int number_of_intr;/*will be used in remove()*/ 522 526 char intr_drvname[PM8001_MAX_MSIX_VEC] 523 527 [PM8001_NAME_LENGTH+1+3+1]; 524 - #endif 525 - #ifdef PM8001_USE_TASKLET 526 528 struct tasklet_struct tasklet[PM8001_MAX_MSIX_VEC]; 527 - #endif 528 529 u32 logging_level; 529 530 u32 link_rate; 530 531 u32 fw_status;
+19 -40
drivers/scsi/pm8001/pm80xx_hwi.c
··· 1715 1715 } 1716 1716 1717 1717 /** 1718 - * pm80xx_chip_intx_interrupt_enable - enable PM8001 chip interrupt 1719 - * @pm8001_ha: our hba card information 1720 - */ 1721 - static void 1722 - pm80xx_chip_intx_interrupt_enable(struct pm8001_hba_info *pm8001_ha) 1723 - { 1724 - pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_CLEAR_ALL); 1725 - pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, ODCR_CLEAR_ALL); 1726 - } 1727 - 1728 - /** 1729 - * pm80xx_chip_intx_interrupt_disable - disable PM8001 chip interrupt 1730 - * @pm8001_ha: our hba card information 1731 - */ 1732 - static void 1733 - pm80xx_chip_intx_interrupt_disable(struct pm8001_hba_info *pm8001_ha) 1734 - { 1735 - pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR, ODMR_MASK_ALL); 1736 - } 1737 - 1738 - /** 1739 1718 * pm80xx_chip_interrupt_enable - enable PM8001 chip interrupt 1740 1719 * @pm8001_ha: our hba card information 1741 1720 * @vec: interrupt number to enable ··· 1722 1743 static void 1723 1744 pm80xx_chip_interrupt_enable(struct pm8001_hba_info *pm8001_ha, u8 vec) 1724 1745 { 1725 - #ifdef PM8001_USE_MSIX 1746 + if (!pm8001_ha->use_msix) { 1747 + pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_CLEAR_ALL); 1748 + pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, ODCR_CLEAR_ALL); 1749 + return; 1750 + } 1751 + 1726 1752 if (vec < 32) 1727 1753 pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR, 1U << vec); 1728 1754 else 1729 - pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR_U, 1730 - 1U << (vec - 32)); 1731 - return; 1732 - #endif 1733 - pm80xx_chip_intx_interrupt_enable(pm8001_ha); 1734 - 1755 + pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR_U, 1U << (vec - 32)); 1735 1756 } 1736 1757 1737 1758 /** ··· 1742 1763 static void 1743 1764 pm80xx_chip_interrupt_disable(struct pm8001_hba_info *pm8001_ha, u8 vec) 1744 1765 { 1745 - #ifdef PM8001_USE_MSIX 1766 + if (!pm8001_ha->use_msix) { 1767 + pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR, ODMR_MASK_ALL); 1768 + return; 1769 + } 1770 + 1746 1771 if (vec == 0xFF) { 1747 1772 /* disable all vectors 0-31, 32-63 */ 1748 1773 
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, 0xFFFFFFFF); 1749 1774 pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_U, 0xFFFFFFFF); 1750 - } else if (vec < 32) 1775 + } else if (vec < 32) { 1751 1776 pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, 1U << vec); 1752 - else 1753 - pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_U, 1754 - 1U << (vec - 32)); 1755 - return; 1756 - #endif 1757 - pm80xx_chip_intx_interrupt_disable(pm8001_ha); 1777 + } else { 1778 + pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_U, 1U << (vec - 32)); 1779 + } 1758 1780 } 1759 1781 1760 1782 /** ··· 4782 4802 4783 4803 static u32 pm80xx_chip_is_our_interrupt(struct pm8001_hba_info *pm8001_ha) 4784 4804 { 4785 - #ifdef PM8001_USE_MSIX 4786 - return 1; 4787 - #else 4788 4805 u32 value; 4806 + 4807 + if (pm8001_ha->use_msix) 4808 + return 1; 4789 4809 4790 4810 value = pm8001_cr32(pm8001_ha, 0, MSGU_ODR); 4791 4811 if (value) 4792 4812 return 1; 4793 4813 return 0; 4794 - #endif 4795 4814 } 4796 4815 4797 4816 /**
+57 -12
drivers/scsi/pmcraid.c
··· 2679 2679 /** 2680 2680 * pmcraid_reset_device - device reset handler functions 2681 2681 * 2682 - * @scsi_cmd: scsi command struct 2682 + * @scsi_dev: scsi device struct 2683 2683 * @timeout: command timeout 2684 2684 * @modifier: reset modifier indicating the reset sequence to be performed 2685 2685 * ··· 2691 2691 * SUCCESS / FAILED 2692 2692 */ 2693 2693 static int pmcraid_reset_device( 2694 - struct scsi_cmnd *scsi_cmd, 2694 + struct scsi_device *scsi_dev, 2695 2695 unsigned long timeout, 2696 2696 u8 modifier) 2697 2697 { ··· 2703 2703 u32 ioasc; 2704 2704 2705 2705 pinstance = 2706 - (struct pmcraid_instance *)scsi_cmd->device->host->hostdata; 2707 - res = scsi_cmd->device->hostdata; 2706 + (struct pmcraid_instance *)scsi_dev->host->hostdata; 2707 + res = scsi_dev->hostdata; 2708 2708 2709 2709 if (!res) { 2710 - sdev_printk(KERN_ERR, scsi_cmd->device, 2710 + sdev_printk(KERN_ERR, scsi_dev, 2711 2711 "reset_device: NULL resource pointer\n"); 2712 2712 return FAILED; 2713 2713 } ··· 3018 3018 { 3019 3019 scmd_printk(KERN_INFO, scmd, 3020 3020 "resetting device due to an I/O command timeout.\n"); 3021 - return pmcraid_reset_device(scmd, 3021 + return pmcraid_reset_device(scmd->device, 3022 3022 PMCRAID_INTERNAL_TIMEOUT, 3023 3023 RESET_DEVICE_LUN); 3024 3024 } 3025 3025 3026 3026 static int pmcraid_eh_bus_reset_handler(struct scsi_cmnd *scmd) 3027 3027 { 3028 - scmd_printk(KERN_INFO, scmd, 3028 + struct Scsi_Host *host = scmd->device->host; 3029 + struct pmcraid_instance *pinstance = 3030 + (struct pmcraid_instance *)host->hostdata; 3031 + struct pmcraid_resource_entry *res = NULL; 3032 + struct pmcraid_resource_entry *temp; 3033 + struct scsi_device *sdev = NULL; 3034 + unsigned long lock_flags; 3035 + 3036 + /* 3037 + * The reset device code insists on us passing down 3038 + * a device, so grab the first device on the bus. 
3039 + */ 3040 + spin_lock_irqsave(&pinstance->resource_lock, lock_flags); 3041 + list_for_each_entry(temp, &pinstance->used_res_q, queue) { 3042 + if (scmd->device->channel == PMCRAID_VSET_BUS_ID && 3043 + RES_IS_VSET(temp->cfg_entry)) { 3044 + res = temp; 3045 + break; 3046 + } else if (scmd->device->channel == PMCRAID_PHYS_BUS_ID && 3047 + RES_IS_GSCSI(temp->cfg_entry)) { 3048 + res = temp; 3049 + break; 3050 + } 3051 + } 3052 + if (res) 3053 + sdev = res->scsi_dev; 3054 + spin_unlock_irqrestore(&pinstance->resource_lock, lock_flags); 3055 + if (!sdev) 3056 + return FAILED; 3057 + 3058 + sdev_printk(KERN_INFO, sdev, 3029 3059 "Doing bus reset due to an I/O command timeout.\n"); 3030 - return pmcraid_reset_device(scmd, 3060 + return pmcraid_reset_device(sdev, 3031 3061 PMCRAID_RESET_BUS_TIMEOUT, 3032 3062 RESET_DEVICE_BUS); 3033 3063 } 3034 3064 3035 3065 static int pmcraid_eh_target_reset_handler(struct scsi_cmnd *scmd) 3036 3066 { 3037 - scmd_printk(KERN_INFO, scmd, 3067 + struct Scsi_Host *shost = scmd->device->host; 3068 + struct scsi_device *scsi_dev = NULL, *tmp; 3069 + int ret; 3070 + 3071 + shost_for_each_device(tmp, shost) { 3072 + if ((tmp->channel == scmd->device->channel) && 3073 + (tmp->id == scmd->device->id)) { 3074 + scsi_dev = tmp; 3075 + break; 3076 + } 3077 + } 3078 + if (!scsi_dev) 3079 + return FAILED; 3080 + sdev_printk(KERN_INFO, scsi_dev, 3038 3081 "Doing target reset due to an I/O command timeout.\n"); 3039 - return pmcraid_reset_device(scmd, 3040 - PMCRAID_INTERNAL_TIMEOUT, 3041 - RESET_DEVICE_TARGET); 3082 + ret = pmcraid_reset_device(scsi_dev, 3083 + PMCRAID_INTERNAL_TIMEOUT, 3084 + RESET_DEVICE_TARGET); 3085 + scsi_device_put(scsi_dev); 3086 + return ret; 3042 3087 } 3043 3088 3044 3089 /**
+3 -2
drivers/scsi/qedf/qedf.h
··· 112 112 #define QEDF_CMD_ERR_SCSI_DONE 0x5 113 113 u8 io_req_flags; 114 114 uint8_t tm_flags; 115 + u64 tm_lun; 115 116 struct qedf_rport *fcport; 116 117 #define QEDF_CMD_ST_INACTIVE 0 117 118 #define QEDFC_CMD_ST_IO_ACTIVE 1 ··· 498 497 struct fcoe_cqe *cqe, struct qedf_ioreq *io_req); 499 498 extern void qedf_process_error_detect(struct qedf_ctx *qedf, 500 499 struct fcoe_cqe *cqe, struct qedf_ioreq *io_req); 501 - extern void qedf_flush_active_ios(struct qedf_rport *fcport, int lun); 500 + extern void qedf_flush_active_ios(struct qedf_rport *fcport, u64 lun); 502 501 extern void qedf_release_cmd(struct kref *ref); 503 502 extern int qedf_initiate_abts(struct qedf_ioreq *io_req, 504 503 bool return_scsi_cmd_on_abts); ··· 523 522 bool return_scsi_cmd_on_abts); 524 523 extern void qedf_process_cleanup_compl(struct qedf_ctx *qedf, 525 524 struct fcoe_cqe *cqe, struct qedf_ioreq *io_req); 526 - extern int qedf_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags); 525 + extern int qedf_initiate_tmf(struct fc_rport *rport, u64 lun, u8 tm_flags); 527 526 extern void qedf_process_tmf_compl(struct qedf_ctx *qedf, struct fcoe_cqe *cqe, 528 527 struct qedf_ioreq *io_req); 529 528 extern void qedf_process_cqe(struct qedf_ctx *qedf, struct fcoe_cqe *cqe);
+20 -55
drivers/scsi/qedf/qedf_io.c
··· 546 546 } 547 547 548 548 static void qedf_build_fcp_cmnd(struct qedf_ioreq *io_req, 549 - struct fcp_cmnd *fcp_cmnd) 549 + struct fcp_cmnd *fcp_cmnd) 550 550 { 551 551 struct scsi_cmnd *sc_cmd = io_req->sc_cmd; 552 552 ··· 554 554 memset(fcp_cmnd, 0, FCP_CMND_LEN); 555 555 556 556 /* 8 bytes: SCSI LUN info */ 557 - int_to_scsilun(sc_cmd->device->lun, 558 - (struct scsi_lun *)&fcp_cmnd->fc_lun); 557 + if (io_req->cmd_type == QEDF_TASK_MGMT_CMD) 558 + int_to_scsilun(io_req->tm_lun, 559 + (struct scsi_lun *)&fcp_cmnd->fc_lun); 560 + else 561 + int_to_scsilun(sc_cmd->device->lun, 562 + (struct scsi_lun *)&fcp_cmnd->fc_lun); 559 563 560 564 /* 4 bytes: flag info */ 561 565 fcp_cmnd->fc_pri_ta = 0; ··· 1099 1095 } 1100 1096 1101 1097 /* The sense buffer can be NULL for TMF commands */ 1102 - if (sc_cmd->sense_buffer) { 1098 + if (sc_cmd && sc_cmd->sense_buffer) { 1103 1099 memset(sc_cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE); 1104 1100 if (fcp_sns_len) 1105 1101 memcpy(sc_cmd->sense_buffer, sense_data, ··· 1584 1580 /* A value of -1 for lun is a wild card that means flush all 1585 1581 * active SCSI I/Os for the target. 
1586 1582 */ 1587 - void qedf_flush_active_ios(struct qedf_rport *fcport, int lun) 1583 + void qedf_flush_active_ios(struct qedf_rport *fcport, u64 lun) 1588 1584 { 1589 1585 struct qedf_ioreq *io_req; 1590 1586 struct qedf_ctx *qedf; ··· 1771 1767 qedf_initiate_cleanup(io_req, false); 1772 1768 kref_put(&io_req->refcount, qedf_release_cmd); 1773 1769 continue; 1774 - } 1775 - if (lun > -1) { 1776 - if (io_req->lun != lun) 1777 - continue; 1778 1770 } 1779 1771 1780 1772 /* ··· 2287 2287 complete(&io_req->cleanup_done); 2288 2288 } 2289 2289 2290 - static int qedf_execute_tmf(struct qedf_rport *fcport, struct scsi_cmnd *sc_cmd, 2290 + static int qedf_execute_tmf(struct qedf_rport *fcport, u64 tm_lun, 2291 2291 uint8_t tm_flags) 2292 2292 { 2293 2293 struct qedf_ioreq *io_req; ··· 2297 2297 int rc = 0; 2298 2298 uint16_t xid; 2299 2299 int tmo = 0; 2300 - int lun = 0; 2301 2300 unsigned long flags; 2302 2301 struct fcoe_wqe *sqe; 2303 2302 u16 sqe_idx; 2304 2303 2305 - if (!sc_cmd) { 2306 - QEDF_ERR(&qedf->dbg_ctx, "sc_cmd is NULL\n"); 2307 - return FAILED; 2308 - } 2309 - 2310 - lun = (int)sc_cmd->device->lun; 2311 2304 if (!test_bit(QEDF_RPORT_SESSION_READY, &fcport->flags)) { 2312 2305 QEDF_ERR(&(qedf->dbg_ctx), "fcport not offloaded\n"); 2313 2306 rc = FAILED; ··· 2320 2327 qedf->target_resets++; 2321 2328 2322 2329 /* Initialize rest of io_req fields */ 2323 - io_req->sc_cmd = sc_cmd; 2330 + io_req->sc_cmd = NULL; 2324 2331 io_req->fcport = fcport; 2325 2332 io_req->cmd_type = QEDF_TASK_MGMT_CMD; 2326 2333 ··· 2334 2341 2335 2342 /* Default is to return a SCSI command when an error occurs */ 2336 2343 io_req->return_scsi_cmd_on_abts = false; 2344 + io_req->tm_lun = tm_lun; 2337 2345 2338 2346 /* Obtain exchange id */ 2339 2347 xid = io_req->xid; ··· 2389 2395 2390 2396 2391 2397 if (tm_flags == FCP_TMF_LUN_RESET) 2392 - qedf_flush_active_ios(fcport, lun); 2398 + qedf_flush_active_ios(fcport, tm_lun); 2393 2399 else 2394 2400 qedf_flush_active_ios(fcport, -1); 
2395 2401 ··· 2404 2410 return rc; 2405 2411 } 2406 2412 2407 - int qedf_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags) 2413 + int qedf_initiate_tmf(struct fc_rport *rport, u64 lun, u8 tm_flags) 2408 2414 { 2409 - struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device)); 2410 2415 struct fc_rport_libfc_priv *rp = rport->dd_data; 2411 2416 struct qedf_rport *fcport = (struct qedf_rport *)&rp[1]; 2412 - struct qedf_ctx *qedf; 2413 - struct fc_lport *lport = shost_priv(sc_cmd->device->host); 2417 + struct qedf_ctx *qedf = fcport->qedf; 2418 + struct fc_lport *lport = rp->local_port; 2414 2419 int rc = SUCCESS; 2415 - int rval; 2416 - struct qedf_ioreq *io_req = NULL; 2417 - int ref_cnt = 0; 2418 2420 struct fc_rport_priv *rdata = fcport->rdata; 2419 2421 2420 2422 QEDF_ERR(NULL, 2421 - "tm_flags 0x%x sc_cmd %p op = 0x%02x target_id = 0x%x lun=%d\n", 2422 - tm_flags, sc_cmd, sc_cmd->cmd_len ? sc_cmd->cmnd[0] : 0xff, 2423 - rport->scsi_target_id, (int)sc_cmd->device->lun); 2423 + "tm_flags 0x%x target_id = 0x%x lun=%llu\n", 2424 + tm_flags, rport->scsi_target_id, lun); 2424 2425 2425 2426 if (!rdata || !kref_get_unless_zero(&rdata->kref)) { 2426 2427 QEDF_ERR(NULL, "stale rport\n"); ··· 2426 2437 (tm_flags == FCP_TMF_TGT_RESET) ? 
"TARGET RESET" : 2427 2438 "LUN RESET"); 2428 2439 2429 - if (qedf_priv(sc_cmd)->io_req) { 2430 - io_req = qedf_priv(sc_cmd)->io_req; 2431 - ref_cnt = kref_read(&io_req->refcount); 2432 - QEDF_ERR(NULL, 2433 - "orig io_req = %p xid = 0x%x ref_cnt = %d.\n", 2434 - io_req, io_req->xid, ref_cnt); 2435 - } 2436 - 2437 - rval = fc_remote_port_chkready(rport); 2438 - if (rval) { 2439 - QEDF_ERR(NULL, "device_reset rport not ready\n"); 2440 - rc = FAILED; 2441 - goto tmf_err; 2442 - } 2443 - 2444 - rc = fc_block_scsi_eh(sc_cmd); 2440 + rc = fc_block_rport(rport); 2445 2441 if (rc) 2446 2442 goto tmf_err; 2447 - 2448 - if (!fcport) { 2449 - QEDF_ERR(NULL, "device_reset: rport is NULL\n"); 2450 - rc = FAILED; 2451 - goto tmf_err; 2452 - } 2453 - 2454 - qedf = fcport->qedf; 2455 2443 2456 2444 if (!qedf) { 2457 2445 QEDF_ERR(NULL, "qedf is NULL.\n"); ··· 2466 2500 goto tmf_err; 2467 2501 } 2468 2502 2469 - rc = qedf_execute_tmf(fcport, sc_cmd, tm_flags); 2503 + rc = qedf_execute_tmf(fcport, lun, tm_flags); 2470 2504 2471 2505 tmf_err: 2472 2506 kref_put(&rdata->kref, fc_rport_destroy); ··· 2483 2517 fcp_rsp = &cqe->cqe_info.rsp_info; 2484 2518 qedf_parse_fcp_rsp(io_req, fcp_rsp); 2485 2519 2486 - io_req->sc_cmd = NULL; 2487 2520 complete(&io_req->tm_done); 2488 2521 } 2489 2522
+10 -9
drivers/scsi/qedf/qedf_main.c
··· 774 774 goto drop_rdata_kref; 775 775 } 776 776 777 - rc = fc_block_scsi_eh(sc_cmd); 777 + rc = fc_block_rport(rport); 778 778 if (rc) 779 779 goto drop_rdata_kref; 780 780 ··· 858 858 859 859 static int qedf_eh_target_reset(struct scsi_cmnd *sc_cmd) 860 860 { 861 - QEDF_ERR(NULL, "%d:0:%d:%lld: TARGET RESET Issued...", 862 - sc_cmd->device->host->host_no, sc_cmd->device->id, 863 - sc_cmd->device->lun); 864 - return qedf_initiate_tmf(sc_cmd, FCP_TMF_TGT_RESET); 861 + struct scsi_target *starget = scsi_target(sc_cmd->device); 862 + struct fc_rport *rport = starget_to_rport(starget); 863 + 864 + QEDF_ERR(NULL, "TARGET RESET Issued..."); 865 + return qedf_initiate_tmf(rport, 0, FCP_TMF_TGT_RESET); 865 866 } 866 867 867 868 static int qedf_eh_device_reset(struct scsi_cmnd *sc_cmd) 868 869 { 869 - QEDF_ERR(NULL, "%d:0:%d:%lld: LUN RESET Issued... ", 870 - sc_cmd->device->host->host_no, sc_cmd->device->id, 871 - sc_cmd->device->lun); 872 - return qedf_initiate_tmf(sc_cmd, FCP_TMF_LUN_RESET); 870 + struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device)); 871 + 872 + QEDF_ERR(NULL, "LUN RESET Issued...\n"); 873 + return qedf_initiate_tmf(rport, sc_cmd->device->lun, FCP_TMF_LUN_RESET); 873 874 } 874 875 875 876 bool qedf_wait_for_upload(struct qedf_ctx *qedf)
+22 -20
drivers/scsi/qla1280.c
··· 716 716 ABORT_COMMAND, 717 717 DEVICE_RESET, 718 718 BUS_RESET, 719 - ADAPTER_RESET, 720 719 }; 721 720 722 721 ··· 897 898 } 898 899 break; 899 900 900 - case ADAPTER_RESET: 901 901 default: 902 - if (qla1280_verbose) { 903 - printk(KERN_INFO 904 - "scsi(%ld): Issued ADAPTER RESET\n", 905 - ha->host_no); 906 - printk(KERN_INFO "scsi(%ld): I/O processing will " 907 - "continue automatically\n", ha->host_no); 908 - } 909 - ha->flags.reset_active = 1; 910 - 911 - if (qla1280_abort_isp(ha) != 0) { /* it's dead */ 912 - result = FAILED; 913 - } 914 - 915 - ha->flags.reset_active = 0; 902 + dprintk(1, "RESET invalid action %d\n", action); 903 + return FAILED; 916 904 } 917 905 918 906 /* ··· 997 1011 static int 998 1012 qla1280_eh_adapter_reset(struct scsi_cmnd *cmd) 999 1013 { 1000 - int rc; 1014 + int rc = SUCCESS; 1015 + struct Scsi_Host *shost = cmd->device->host; 1016 + struct scsi_qla_host *ha = (struct scsi_qla_host *)shost->hostdata; 1001 1017 1002 - spin_lock_irq(cmd->device->host->host_lock); 1003 - rc = qla1280_error_action(cmd, ADAPTER_RESET); 1004 - spin_unlock_irq(cmd->device->host->host_lock); 1018 + spin_lock_irq(shost->host_lock); 1019 + if (qla1280_verbose) { 1020 + printk(KERN_INFO 1021 + "scsi(%ld): Issued ADAPTER RESET\n", 1022 + ha->host_no); 1023 + printk(KERN_INFO "scsi(%ld): I/O processing will " 1024 + "continue automatically\n", ha->host_no); 1025 + } 1026 + ha->flags.reset_active = 1; 1027 + 1028 + if (qla1280_abort_isp(ha) != 0) { /* it's dead */ 1029 + rc = FAILED; 1030 + } 1031 + 1032 + ha->flags.reset_active = 0; 1033 + 1034 + spin_unlock_irq(shost->host_lock); 1005 1035 1006 1036 return rc; 1007 1037 }
+3 -2
drivers/scsi/qla2xxx/qla_os.c
··· 5 5 */ 6 6 #include "qla_def.h" 7 7 8 + #include <linux/bitfield.h> 8 9 #include <linux/moduleparam.h> 9 10 #include <linux/vmalloc.h> 10 11 #include <linux/delay.h> ··· 634 633 const char *speed_str; 635 634 636 635 pcie_capability_read_dword(ha->pdev, PCI_EXP_LNKCAP, &lstat); 637 - lspeed = lstat & PCI_EXP_LNKCAP_SLS; 638 - lwidth = (lstat & PCI_EXP_LNKCAP_MLW) >> 4; 636 + lspeed = FIELD_GET(PCI_EXP_LNKCAP_SLS, lstat); 637 + lwidth = FIELD_GET(PCI_EXP_LNKCAP_MLW, lstat); 639 638 640 639 switch (lspeed) { 641 640 case 1:
+6
drivers/scsi/qla2xxx/tcm_qla2xxx.c
··· 1822 1822 .tfc_wwn_attrs = tcm_qla2xxx_wwn_attrs, 1823 1823 .tfc_tpg_base_attrs = tcm_qla2xxx_tpg_attrs, 1824 1824 .tfc_tpg_attrib_attrs = tcm_qla2xxx_tpg_attrib_attrs, 1825 + 1826 + .default_submit_type = TARGET_DIRECT_SUBMIT, 1827 + .direct_submit_supp = 1, 1825 1828 }; 1826 1829 1827 1830 static const struct target_core_fabric_ops tcm_qla2xxx_npiv_ops = { ··· 1862 1859 .fabric_init_nodeacl = tcm_qla2xxx_init_nodeacl, 1863 1860 1864 1861 .tfc_wwn_attrs = tcm_qla2xxx_wwn_attrs, 1862 + 1863 + .default_submit_type = TARGET_DIRECT_SUBMIT, 1864 + .direct_submit_supp = 1, 1865 1865 }; 1866 1866 1867 1867 static int tcm_qla2xxx_register_configfs(void)
+1 -1
drivers/scsi/scsi.c
··· 703 703 ret = scsi_mode_select(sdev, 1, 0, buf_data, len, 5 * HZ, 3, 704 704 &data, &sshdr); 705 705 if (ret) { 706 - if (scsi_sense_valid(&sshdr)) 706 + if (ret > 0 && scsi_sense_valid(&sshdr)) 707 707 scsi_print_sense_hdr(sdev, 708 708 dev_name(&sdev->sdev_gendev), &sshdr); 709 709 return ret;
+570 -5
drivers/scsi/scsi_debug.c
··· 41 41 #include <linux/random.h> 42 42 #include <linux/xarray.h> 43 43 #include <linux/prefetch.h> 44 + #include <linux/debugfs.h> 45 + #include <linux/async.h> 44 46 45 47 #include <net/checksum.h> 46 48 ··· 287 285 sector_t z_wp; 288 286 }; 289 287 288 + enum sdebug_err_type { 289 + ERR_TMOUT_CMD = 0, /* make specific scsi command timeout */ 290 + ERR_FAIL_QUEUE_CMD = 1, /* make specific scsi command's */ 291 + /* queuecmd return failed */ 292 + ERR_FAIL_CMD = 2, /* make specific scsi command's */ 293 + /* queuecmd return succeed but */ 294 + /* with errors set in scsi_cmnd */ 295 + ERR_ABORT_CMD_FAILED = 3, /* control return FAILED from */ 296 + /* scsi_debug_abort() */ 297 + ERR_LUN_RESET_FAILED = 4, /* control return FAILED from */ 298 + /* scsi_debug_device_reset() */ 299 + }; 300 + 301 + struct sdebug_err_inject { 302 + int type; 303 + struct list_head list; 304 + int cnt; 305 + unsigned char cmd; 306 + struct rcu_head rcu; 307 + 308 + union { 309 + /* 310 + * For ERR_FAIL_QUEUE_CMD 311 + */ 312 + int queuecmd_ret; 313 + 314 + /* 315 + * For ERR_FAIL_CMD 316 + */ 317 + struct { 318 + unsigned char host_byte; 319 + unsigned char driver_byte; 320 + unsigned char status_byte; 321 + unsigned char sense_key; 322 + unsigned char asc; 323 + unsigned char asq; 324 + }; 325 + }; 326 + }; 327 + 290 328 struct sdebug_dev_info { 291 329 struct list_head dev_list; 292 330 unsigned int channel; ··· 352 310 unsigned int max_open; 353 311 ktime_t create_ts; /* time since bootup that this device was created */ 354 312 struct sdeb_zone_state *zstate; 313 + 314 + struct dentry *debugfs_entry; 315 + struct spinlock list_lock; 316 + struct list_head inject_err_list; 317 + }; 318 + 319 + struct sdebug_target_info { 320 + bool reset_fail; 321 + struct dentry *debugfs_entry; 355 322 }; 356 323 357 324 struct sdebug_host_info { ··· 843 792 static bool write_since_sync; 844 793 static bool sdebug_statistics = DEF_STATISTICS; 845 794 static bool sdebug_wp; 795 +
static bool sdebug_allow_restart; 846 796 /* Following enum: 0: no zbc, def; 1: host aware; 2: host managed */ 847 797 static enum blk_zoned_model sdeb_zbc_model = BLK_ZONED_NONE; 848 798 static char *sdeb_zbc_model_s; ··· 914 862 915 863 static const int condition_met_result = SAM_STAT_CONDITION_MET; 916 864 865 + static struct dentry *sdebug_debugfs_root; 866 + 867 + static void sdebug_err_free(struct rcu_head *head) 868 + { 869 + struct sdebug_err_inject *inject = 870 + container_of(head, typeof(*inject), rcu); 871 + 872 + kfree(inject); 873 + } 874 + 875 + static void sdebug_err_add(struct scsi_device *sdev, struct sdebug_err_inject *new) 876 + { 877 + struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdev->hostdata; 878 + struct sdebug_err_inject *err; 879 + 880 + spin_lock(&devip->list_lock); 881 + list_for_each_entry_rcu(err, &devip->inject_err_list, list) { 882 + if (err->type == new->type && err->cmd == new->cmd) { 883 + list_del_rcu(&err->list); 884 + call_rcu(&err->rcu, sdebug_err_free); 885 + } 886 + } 887 + 888 + list_add_tail_rcu(&new->list, &devip->inject_err_list); 889 + spin_unlock(&devip->list_lock); 890 + } 891 + 892 + static int sdebug_err_remove(struct scsi_device *sdev, const char *buf, size_t count) 893 + { 894 + struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdev->hostdata; 895 + struct sdebug_err_inject *err; 896 + int type; 897 + unsigned char cmd; 898 + 899 + if (sscanf(buf, "- %d %hhx", &type, &cmd) != 2) { 900 + kfree(buf); 901 + return -EINVAL; 902 + } 903 + 904 + spin_lock(&devip->list_lock); 905 + list_for_each_entry_rcu(err, &devip->inject_err_list, list) { 906 + if (err->type == type && err->cmd == cmd) { 907 + list_del_rcu(&err->list); 908 + call_rcu(&err->rcu, sdebug_err_free); 909 + spin_unlock(&devip->list_lock); 910 + kfree(buf); 911 + return count; 912 + } 913 + } 914 + spin_unlock(&devip->list_lock); 915 + 916 + kfree(buf); 917 + return -EINVAL; 918 + } 919 + 920 + static int sdebug_error_show(struct 
seq_file *m, void *p) 921 + { 922 + struct scsi_device *sdev = (struct scsi_device *)m->private; 923 + struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdev->hostdata; 924 + struct sdebug_err_inject *err; 925 + 926 + seq_puts(m, "Type\tCount\tCommand\n"); 927 + 928 + rcu_read_lock(); 929 + list_for_each_entry_rcu(err, &devip->inject_err_list, list) { 930 + switch (err->type) { 931 + case ERR_TMOUT_CMD: 932 + case ERR_ABORT_CMD_FAILED: 933 + case ERR_LUN_RESET_FAILED: 934 + seq_printf(m, "%d\t%d\t0x%x\n", err->type, err->cnt, 935 + err->cmd); 936 + break; 937 + 938 + case ERR_FAIL_QUEUE_CMD: 939 + seq_printf(m, "%d\t%d\t0x%x\t0x%x\n", err->type, 940 + err->cnt, err->cmd, err->queuecmd_ret); 941 + break; 942 + 943 + case ERR_FAIL_CMD: 944 + seq_printf(m, "%d\t%d\t0x%x\t0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n", 945 + err->type, err->cnt, err->cmd, 946 + err->host_byte, err->driver_byte, 947 + err->status_byte, err->sense_key, 948 + err->asc, err->asq); 949 + break; 950 + } 951 + } 952 + rcu_read_unlock(); 953 + 954 + return 0; 955 + } 956 + 957 + static int sdebug_error_open(struct inode *inode, struct file *file) 958 + { 959 + return single_open(file, sdebug_error_show, inode->i_private); 960 + } 961 + 962 + static ssize_t sdebug_error_write(struct file *file, const char __user *ubuf, 963 + size_t count, loff_t *ppos) 964 + { 965 + char *buf; 966 + unsigned int inject_type; 967 + struct sdebug_err_inject *inject; 968 + struct scsi_device *sdev = (struct scsi_device *)file->f_inode->i_private; 969 + 970 + buf = kmalloc(count, GFP_KERNEL); 971 + if (!buf) 972 + return -ENOMEM; 973 + 974 + if (copy_from_user(buf, ubuf, count)) { 975 + kfree(buf); 976 + return -EFAULT; 977 + } 978 + 979 + if (buf[0] == '-') 980 + return sdebug_err_remove(sdev, buf, count); 981 + 982 + if (sscanf(buf, "%d", &inject_type) != 1) { 983 + kfree(buf); 984 + return -EINVAL; 985 + } 986 + 987 + inject = kzalloc(sizeof(struct sdebug_err_inject), GFP_KERNEL); 988 + if (!inject) { 989 + 
kfree(buf); 990 + return -ENOMEM; 991 + } 992 + 993 + switch (inject_type) { 994 + case ERR_TMOUT_CMD: 995 + case ERR_ABORT_CMD_FAILED: 996 + case ERR_LUN_RESET_FAILED: 997 + if (sscanf(buf, "%d %d %hhx", &inject->type, &inject->cnt, 998 + &inject->cmd) != 3) 999 + goto out_error; 1000 + break; 1001 + 1002 + case ERR_FAIL_QUEUE_CMD: 1003 + if (sscanf(buf, "%d %d %hhx %x", &inject->type, &inject->cnt, 1004 + &inject->cmd, &inject->queuecmd_ret) != 4) 1005 + goto out_error; 1006 + break; 1007 + 1008 + case ERR_FAIL_CMD: 1009 + if (sscanf(buf, "%d %d %hhx %hhx %hhx %hhx %hhx %hhx %hhx", 1010 + &inject->type, &inject->cnt, &inject->cmd, 1011 + &inject->host_byte, &inject->driver_byte, 1012 + &inject->status_byte, &inject->sense_key, 1013 + &inject->asc, &inject->asq) != 9) 1014 + goto out_error; 1015 + break; 1016 + 1017 + default: 1018 + goto out_error; 1019 + break; 1020 + } 1021 + 1022 + kfree(buf); 1023 + sdebug_err_add(sdev, inject); 1024 + 1025 + return count; 1026 + 1027 + out_error: 1028 + kfree(buf); 1029 + kfree(inject); 1030 + return -EINVAL; 1031 + } 1032 + 1033 + static const struct file_operations sdebug_error_fops = { 1034 + .open = sdebug_error_open, 1035 + .read = seq_read, 1036 + .write = sdebug_error_write, 1037 + .release = single_release, 1038 + }; 1039 + 1040 + static int sdebug_target_reset_fail_show(struct seq_file *m, void *p) 1041 + { 1042 + struct scsi_target *starget = (struct scsi_target *)m->private; 1043 + struct sdebug_target_info *targetip = 1044 + (struct sdebug_target_info *)starget->hostdata; 1045 + 1046 + if (targetip) 1047 + seq_printf(m, "%c\n", targetip->reset_fail ? 
'Y' : 'N'); 1048 + 1049 + return 0; 1050 + } 1051 + 1052 + static int sdebug_target_reset_fail_open(struct inode *inode, struct file *file) 1053 + { 1054 + return single_open(file, sdebug_target_reset_fail_show, inode->i_private); 1055 + } 1056 + 1057 + static ssize_t sdebug_target_reset_fail_write(struct file *file, 1058 + const char __user *ubuf, size_t count, loff_t *ppos) 1059 + { 1060 + int ret; 1061 + struct scsi_target *starget = 1062 + (struct scsi_target *)file->f_inode->i_private; 1063 + struct sdebug_target_info *targetip = 1064 + (struct sdebug_target_info *)starget->hostdata; 1065 + 1066 + if (targetip) { 1067 + ret = kstrtobool_from_user(ubuf, count, &targetip->reset_fail); 1068 + return ret < 0 ? ret : count; 1069 + } 1070 + return -ENODEV; 1071 + } 1072 + 1073 + static const struct file_operations sdebug_target_reset_fail_fops = { 1074 + .open = sdebug_target_reset_fail_open, 1075 + .read = seq_read, 1076 + .write = sdebug_target_reset_fail_write, 1077 + .release = single_release, 1078 + }; 1079 + 1080 + static int sdebug_target_alloc(struct scsi_target *starget) 1081 + { 1082 + struct sdebug_target_info *targetip; 1083 + struct dentry *dentry; 1084 + 1085 + targetip = kzalloc(sizeof(struct sdebug_target_info), GFP_KERNEL); 1086 + if (!targetip) 1087 + return -ENOMEM; 1088 + 1089 + targetip->debugfs_entry = debugfs_create_dir(dev_name(&starget->dev), 1090 + sdebug_debugfs_root); 1091 + if (IS_ERR_OR_NULL(targetip->debugfs_entry)) 1092 + pr_info("%s: failed to create debugfs directory for target %s\n", 1093 + __func__, dev_name(&starget->dev)); 1094 + 1095 + debugfs_create_file("fail_reset", 0600, targetip->debugfs_entry, starget, 1096 + &sdebug_target_reset_fail_fops); 1097 + if (IS_ERR_OR_NULL(dentry)) 1098 + pr_info("%s: failed to create fail_reset file for target %s\n", 1099 + __func__, dev_name(&starget->dev)); 1100 + 1101 + starget->hostdata = targetip; 1102 + 1103 + return 0; 1104 + } 1105 + 1106 + static void sdebug_tartget_cleanup_async(void 
*data, async_cookie_t cookie) 1107 + { 1108 + struct sdebug_target_info *targetip = data; 1109 + 1110 + debugfs_remove(targetip->debugfs_entry); 1111 + kfree(targetip); 1112 + } 1113 + 1114 + static void sdebug_target_destroy(struct scsi_target *starget) 1115 + { 1116 + struct sdebug_target_info *targetip; 1117 + 1118 + targetip = (struct sdebug_target_info *)starget->hostdata; 1119 + if (targetip) { 1120 + starget->hostdata = NULL; 1121 + async_schedule(sdebug_tartget_cleanup_async, targetip); 1122 + } 1123 + } 917 1124 918 1125 /* Only do the extra work involved in logical block provisioning if one or 919 1126 * more of the lbpu, lbpws or lbpws10 parameters are given and we are doing ··· 5407 5096 } 5408 5097 devip->create_ts = ktime_get_boottime(); 5409 5098 atomic_set(&devip->stopped, (sdeb_tur_ms_to_ready > 0 ? 2 : 0)); 5099 + spin_lock_init(&devip->list_lock); 5100 + INIT_LIST_HEAD(&devip->inject_err_list); 5410 5101 list_add_tail(&devip->dev_list, &sdbg_host->dev_info_list); 5411 5102 } 5412 5103 return devip; ··· 5454 5141 if (sdebug_verbose) 5455 5142 pr_info("slave_alloc <%u %u %u %llu>\n", 5456 5143 sdp->host->host_no, sdp->channel, sdp->id, sdp->lun); 5144 + 5457 5145 return 0; 5458 5146 } 5459 5147 ··· 5462 5148 { 5463 5149 struct sdebug_dev_info *devip = 5464 5150 (struct sdebug_dev_info *)sdp->hostdata; 5151 + struct dentry *dentry; 5465 5152 5466 5153 if (sdebug_verbose) 5467 5154 pr_info("slave_configure <%u %u %u %llu>\n", ··· 5478 5163 if (sdebug_no_uld) 5479 5164 sdp->no_uld_attach = 1; 5480 5165 config_cdb_len(sdp); 5166 + 5167 + if (sdebug_allow_restart) 5168 + sdp->allow_restart = 1; 5169 + 5170 + devip->debugfs_entry = debugfs_create_dir(dev_name(&sdp->sdev_dev), 5171 + sdebug_debugfs_root); 5172 + if (IS_ERR_OR_NULL(devip->debugfs_entry)) 5173 + pr_info("%s: failed to create debugfs directory for device %s\n", 5174 + __func__, dev_name(&sdp->sdev_gendev)); 5175 + 5176 + dentry = debugfs_create_file("error", 0600, devip->debugfs_entry, sdp, 
5177 + &sdebug_error_fops); 5178 + if (IS_ERR_OR_NULL(dentry)) 5179 + pr_info("%s: failed to create error file for device %s\n", 5180 + __func__, dev_name(&sdp->sdev_gendev)); 5181 + 5481 5182 return 0; 5482 5183 } 5483 5184 ··· 5501 5170 { 5502 5171 struct sdebug_dev_info *devip = 5503 5172 (struct sdebug_dev_info *)sdp->hostdata; 5173 + struct sdebug_err_inject *err; 5504 5174 5505 5175 if (sdebug_verbose) 5506 5176 pr_info("slave_destroy <%u %u %u %llu>\n", 5507 5177 sdp->host->host_no, sdp->channel, sdp->id, sdp->lun); 5508 - if (devip) { 5509 - /* make this slot available for re-use */ 5510 - devip->used = false; 5511 - sdp->hostdata = NULL; 5178 + 5179 + if (!devip) 5180 + return; 5181 + 5182 + spin_lock(&devip->list_lock); 5183 + list_for_each_entry_rcu(err, &devip->inject_err_list, list) { 5184 + list_del_rcu(&err->list); 5185 + call_rcu(&err->rcu, sdebug_err_free); 5512 5186 } 5187 + spin_unlock(&devip->list_lock); 5188 + 5189 + debugfs_remove(devip->debugfs_entry); 5190 + 5191 + /* make this slot available for re-use */ 5192 + devip->used = false; 5193 + sdp->hostdata = NULL; 5513 5194 } 5514 5195 5515 5196 /* Returns true if we require the queued memory to be freed by the caller. 
*/ ··· 5615 5272 mutex_unlock(&sdebug_host_list_mutex); 5616 5273 } 5617 5274 5275 + static int sdebug_fail_abort(struct scsi_cmnd *cmnd) 5276 + { 5277 + struct scsi_device *sdp = cmnd->device; 5278 + struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdp->hostdata; 5279 + struct sdebug_err_inject *err; 5280 + unsigned char *cmd = cmnd->cmnd; 5281 + int ret = 0; 5282 + 5283 + if (devip == NULL) 5284 + return 0; 5285 + 5286 + rcu_read_lock(); 5287 + list_for_each_entry_rcu(err, &devip->inject_err_list, list) { 5288 + if (err->type == ERR_ABORT_CMD_FAILED && 5289 + (err->cmd == cmd[0] || err->cmd == 0xff)) { 5290 + ret = !!err->cnt; 5291 + if (err->cnt < 0) 5292 + err->cnt++; 5293 + 5294 + rcu_read_unlock(); 5295 + return ret; 5296 + } 5297 + } 5298 + rcu_read_unlock(); 5299 + 5300 + return 0; 5301 + } 5302 + 5618 5303 static int scsi_debug_abort(struct scsi_cmnd *SCpnt) 5619 5304 { 5620 5305 bool ok = scsi_debug_abort_cmnd(SCpnt); 5306 + u8 *cmd = SCpnt->cmnd; 5307 + u8 opcode = cmd[0]; 5621 5308 5622 5309 ++num_aborts; 5623 5310 ··· 5655 5282 sdev_printk(KERN_INFO, SCpnt->device, 5656 5283 "%s: command%s found\n", __func__, 5657 5284 ok ? 
"" : " not"); 5285 + 5286 + if (sdebug_fail_abort(SCpnt)) { 5287 + scmd_printk(KERN_INFO, SCpnt, "fail abort command 0x%x\n", 5288 + opcode); 5289 + return FAILED; 5290 + } 5658 5291 5659 5292 return SUCCESS; 5660 5293 } ··· 5685 5306 scsi_debug_stop_all_queued_iter, sdp); 5686 5307 } 5687 5308 5309 + static int sdebug_fail_lun_reset(struct scsi_cmnd *cmnd) 5310 + { 5311 + struct scsi_device *sdp = cmnd->device; 5312 + struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdp->hostdata; 5313 + struct sdebug_err_inject *err; 5314 + unsigned char *cmd = cmnd->cmnd; 5315 + int ret = 0; 5316 + 5317 + if (devip == NULL) 5318 + return 0; 5319 + 5320 + rcu_read_lock(); 5321 + list_for_each_entry_rcu(err, &devip->inject_err_list, list) { 5322 + if (err->type == ERR_LUN_RESET_FAILED && 5323 + (err->cmd == cmd[0] || err->cmd == 0xff)) { 5324 + ret = !!err->cnt; 5325 + if (err->cnt < 0) 5326 + err->cnt++; 5327 + 5328 + rcu_read_unlock(); 5329 + return ret; 5330 + } 5331 + } 5332 + rcu_read_unlock(); 5333 + 5334 + return 0; 5335 + } 5336 + 5688 5337 static int scsi_debug_device_reset(struct scsi_cmnd *SCpnt) 5689 5338 { 5690 5339 struct scsi_device *sdp = SCpnt->device; 5691 5340 struct sdebug_dev_info *devip = sdp->hostdata; 5341 + u8 *cmd = SCpnt->cmnd; 5342 + u8 opcode = cmd[0]; 5692 5343 5693 5344 ++num_dev_resets; 5694 5345 ··· 5729 5320 if (devip) 5730 5321 set_bit(SDEBUG_UA_POR, devip->uas_bm); 5731 5322 5323 + if (sdebug_fail_lun_reset(SCpnt)) { 5324 + scmd_printk(KERN_INFO, SCpnt, "fail lun reset 0x%x\n", opcode); 5325 + return FAILED; 5326 + } 5327 + 5732 5328 return SUCCESS; 5329 + } 5330 + 5331 + static int sdebug_fail_target_reset(struct scsi_cmnd *cmnd) 5332 + { 5333 + struct scsi_target *starget = scsi_target(cmnd->device); 5334 + struct sdebug_target_info *targetip = 5335 + (struct sdebug_target_info *)starget->hostdata; 5336 + 5337 + if (targetip) 5338 + return targetip->reset_fail; 5339 + 5340 + return 0; 5733 5341 } 5734 5342 5735 5343 static int 
scsi_debug_target_reset(struct scsi_cmnd *SCpnt) ··· 5754 5328 struct scsi_device *sdp = SCpnt->device; 5755 5329 struct sdebug_host_info *sdbg_host = shost_to_sdebug_host(sdp->host); 5756 5330 struct sdebug_dev_info *devip; 5331 + u8 *cmd = SCpnt->cmnd; 5332 + u8 opcode = cmd[0]; 5757 5333 int k = 0; 5758 5334 5759 5335 ++num_target_resets; ··· 5772 5344 if (SDEBUG_OPT_RESET_NOISE & sdebug_opts) 5773 5345 sdev_printk(KERN_INFO, sdp, 5774 5346 "%s: %d device(s) found in target\n", __func__, k); 5347 + 5348 + if (sdebug_fail_target_reset(SCpnt)) { 5349 + scmd_printk(KERN_INFO, SCpnt, "fail target reset 0x%x\n", 5350 + opcode); 5351 + return FAILED; 5352 + } 5775 5353 5776 5354 return SUCCESS; 5777 5355 } ··· 6206 5772 module_param_named(zone_max_open, sdeb_zbc_max_open, int, S_IRUGO); 6207 5773 module_param_named(zone_nr_conv, sdeb_zbc_nr_conv, int, S_IRUGO); 6208 5774 module_param_named(zone_size_mb, sdeb_zbc_zone_size_mb, int, S_IRUGO); 5775 + module_param_named(allow_restart, sdebug_allow_restart, bool, S_IRUGO | S_IWUSR); 6209 5776 6210 5777 MODULE_AUTHOR("Eric Youngdale + Douglas Gilbert"); 6211 5778 MODULE_DESCRIPTION("SCSI debug adapter driver"); ··· 6279 5844 MODULE_PARM_DESC(zone_max_open, "Maximum number of open zones; [0] for no limit (def=auto)"); 6280 5845 MODULE_PARM_DESC(zone_nr_conv, "Number of conventional zones (def=1)"); 6281 5846 MODULE_PARM_DESC(zone_size_mb, "Zone size in MiB (def=auto)"); 5847 + MODULE_PARM_DESC(allow_restart, "Set scsi_device's allow_restart flag(def=0)"); 6282 5848 6283 5849 #define SDEBUG_INFO_LEN 256 6284 5850 static char sdebug_info[SDEBUG_INFO_LEN]; ··· 7447 7011 goto driver_unreg; 7448 7012 } 7449 7013 7014 + sdebug_debugfs_root = debugfs_create_dir("scsi_debug", NULL); 7015 + if (IS_ERR_OR_NULL(sdebug_debugfs_root)) 7016 + pr_info("%s: failed to create initial debugfs directory\n", __func__); 7017 + 7450 7018 for (k = 0; k < hosts_to_add; k++) { 7451 7019 if (want_store && k == 0) { 7452 7020 ret = 
sdebug_add_host_helper(idx); ··· 7497 7057 7498 7058 sdebug_erase_all_stores(false); 7499 7059 xa_destroy(per_store_ap); 7060 + debugfs_remove(sdebug_debugfs_root); 7500 7061 } 7501 7062 7502 7063 device_initcall(scsi_debug_init); ··· 7937 7496 return num_entries; 7938 7497 } 7939 7498 7499 + static int sdebug_timeout_cmd(struct scsi_cmnd *cmnd) 7500 + { 7501 + struct scsi_device *sdp = cmnd->device; 7502 + struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdp->hostdata; 7503 + struct sdebug_err_inject *err; 7504 + unsigned char *cmd = cmnd->cmnd; 7505 + int ret = 0; 7506 + 7507 + if (devip == NULL) 7508 + return 0; 7509 + 7510 + rcu_read_lock(); 7511 + list_for_each_entry_rcu(err, &devip->inject_err_list, list) { 7512 + if (err->type == ERR_TMOUT_CMD && 7513 + (err->cmd == cmd[0] || err->cmd == 0xff)) { 7514 + ret = !!err->cnt; 7515 + if (err->cnt < 0) 7516 + err->cnt++; 7517 + 7518 + rcu_read_unlock(); 7519 + return ret; 7520 + } 7521 + } 7522 + rcu_read_unlock(); 7523 + 7524 + return 0; 7525 + } 7526 + 7527 + static int sdebug_fail_queue_cmd(struct scsi_cmnd *cmnd) 7528 + { 7529 + struct scsi_device *sdp = cmnd->device; 7530 + struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdp->hostdata; 7531 + struct sdebug_err_inject *err; 7532 + unsigned char *cmd = cmnd->cmnd; 7533 + int ret = 0; 7534 + 7535 + if (devip == NULL) 7536 + return 0; 7537 + 7538 + rcu_read_lock(); 7539 + list_for_each_entry_rcu(err, &devip->inject_err_list, list) { 7540 + if (err->type == ERR_FAIL_QUEUE_CMD && 7541 + (err->cmd == cmd[0] || err->cmd == 0xff)) { 7542 + ret = err->cnt ? 
err->queuecmd_ret : 0; 7543 + if (err->cnt < 0) 7544 + err->cnt++; 7545 + 7546 + rcu_read_unlock(); 7547 + return ret; 7548 + } 7549 + } 7550 + rcu_read_unlock(); 7551 + 7552 + return 0; 7553 + } 7554 + 7555 + static int sdebug_fail_cmd(struct scsi_cmnd *cmnd, int *retval, 7556 + struct sdebug_err_inject *info) 7557 + { 7558 + struct scsi_device *sdp = cmnd->device; 7559 + struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdp->hostdata; 7560 + struct sdebug_err_inject *err; 7561 + unsigned char *cmd = cmnd->cmnd; 7562 + int ret = 0; 7563 + int result; 7564 + 7565 + if (devip == NULL) 7566 + return 0; 7567 + 7568 + rcu_read_lock(); 7569 + list_for_each_entry_rcu(err, &devip->inject_err_list, list) { 7570 + if (err->type == ERR_FAIL_CMD && 7571 + (err->cmd == cmd[0] || err->cmd == 0xff)) { 7572 + if (!err->cnt) { 7573 + rcu_read_unlock(); 7574 + return 0; 7575 + } 7576 + 7577 + ret = !!err->cnt; 7578 + rcu_read_unlock(); 7579 + goto out_handle; 7580 + } 7581 + } 7582 + rcu_read_unlock(); 7583 + 7584 + return 0; 7585 + 7586 + out_handle: 7587 + if (err->cnt < 0) 7588 + err->cnt++; 7589 + mk_sense_buffer(cmnd, err->sense_key, err->asc, err->asq); 7590 + result = err->status_byte | err->host_byte << 16 | err->driver_byte << 24; 7591 + *info = *err; 7592 + *retval = schedule_resp(cmnd, devip, result, NULL, 0, 0); 7593 + 7594 + return ret; 7595 + } 7596 + 7940 7597 static int scsi_debug_queuecommand(struct Scsi_Host *shost, 7941 7598 struct scsi_cmnd *scp) 7942 7599 { ··· 8054 7515 u8 opcode = cmd[0]; 8055 7516 bool has_wlun_rl; 8056 7517 bool inject_now; 7518 + int ret = 0; 7519 + struct sdebug_err_inject err; 8057 7520 8058 7521 scsi_set_resid(scp, 0); 8059 7522 if (sdebug_statistics) { ··· 8095 7554 if (NULL == devip) 8096 7555 goto err_out; 8097 7556 } 7557 + 7558 + if (sdebug_timeout_cmd(scp)) { 7559 + scmd_printk(KERN_INFO, scp, "timeout command 0x%x\n", opcode); 7560 + return 0; 7561 + } 7562 + 7563 + ret = sdebug_fail_queue_cmd(scp); 7564 + if (ret) { 7565 
+ scmd_printk(KERN_INFO, scp, "fail queue command 0x%x with 0x%x\n", 7566 + opcode, ret); 7567 + return ret; 7568 + } 7569 + 7570 + if (sdebug_fail_cmd(scp, &ret, &err)) { 7571 + scmd_printk(KERN_INFO, scp, 7572 + "fail command 0x%x with hostbyte=0x%x, " 7573 + "driverbyte=0x%x, statusbyte=0x%x, " 7574 + "sense_key=0x%x, asc=0x%x, asq=0x%x\n", 7575 + opcode, err.host_byte, err.driver_byte, 7576 + err.status_byte, err.sense_key, err.asc, err.asq); 7577 + return ret; 7578 + } 7579 + 8098 7580 if (unlikely(inject_now && !atomic_read(&sdeb_inject_pending))) 8099 7581 atomic_set(&sdeb_inject_pending, 1); 8100 7582 ··· 8236 7672 return 0; 8237 7673 } 8238 7674 8239 - 8240 7675 static struct scsi_host_template sdebug_driver_template = { 8241 7676 .show_info = scsi_debug_show_info, 8242 7677 .write_info = scsi_debug_write_info, ··· 8265 7702 .track_queue_depth = 1, 8266 7703 .cmd_size = sizeof(struct sdebug_scsi_cmd), 8267 7704 .init_cmd_priv = sdebug_init_cmd_priv, 7705 + .target_alloc = sdebug_target_alloc, 7706 + .target_destroy = sdebug_target_destroy, 8268 7707 }; 8269 7708 8270 7709 static int sdebug_driver_probe(struct device *dev)
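The scsi_debug error-injection helpers above (`sdebug_fail_abort`, `sdebug_fail_lun_reset`, `sdebug_timeout_cmd`, `sdebug_fail_queue_cmd`) all share one counter convention: an injected entry matches when its opcode equals the command's first CDB byte or is the 0xff wildcard, then `ret = !!err->cnt` fires the error and a negative `cnt` is incremented toward zero. A minimal sketch of that convention (the `ERR_ABORT_CMD_FAILED` value and the plain-list locking-free form are illustrative; the kernel walks an RCU list):

```python
class ErrInject:
    """One scsi_debug-style injected error; opcode 0xFF matches any command."""
    def __init__(self, err_type, opcode, cnt):
        self.type = err_type
        self.cmd = opcode
        self.cnt = cnt

def should_inject(errors, err_type, opcode):
    """Mirror the cnt semantics from the hunks above:
    cnt == 0 -> disabled, cnt > 0 -> fire on every matching command,
    cnt < 0  -> fire |cnt| more times, counting up toward zero."""
    for err in errors:
        if err.type == err_type and err.cmd in (opcode, 0xFF):
            fire = bool(err.cnt)
            if err.cnt < 0:
                err.cnt += 1          # one injection consumed
            return fire
    return False

ERR_ABORT_CMD_FAILED = 3              # illustrative value, not from the diff
errs = [ErrInject(ERR_ABORT_CMD_FAILED, 0x28, -2)]   # fail READ(10) twice
hits = [should_inject(errs, ERR_ABORT_CMD_FAILED, 0x28) for _ in range(4)]
print(hits)  # → [True, True, False, False]
```

With a positive count the entry keeps firing and `cnt` is left untouched, which is why the debugfs entry can model both transient and permanent failures.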
+18 -19
drivers/scsi/scsi_lib.c
··· 774 774 case 0x1b: /* sanitize in progress */ 775 775 case 0x1d: /* configuration in progress */ 776 776 case 0x24: /* depopulation in progress */ 777 + case 0x25: /* depopulation restore in progress */ 777 778 action = ACTION_DELAYED_RETRY; 778 779 break; 779 780 case 0x0a: /* ALUA state transition */ ··· 1251 1250 int token; 1252 1251 1253 1252 token = sbitmap_get(&sdev->budget_map); 1254 - if (atomic_read(&sdev->device_blocked)) { 1255 - if (token < 0) 1256 - goto out; 1253 + if (token < 0) 1254 + return -1; 1257 1255 1258 - if (scsi_device_busy(sdev) > 1) 1259 - goto out_dec; 1256 + if (!atomic_read(&sdev->device_blocked)) 1257 + return token; 1260 1258 1261 - /* 1262 - * unblock after device_blocked iterates to zero 1263 - */ 1264 - if (atomic_dec_return(&sdev->device_blocked) > 0) 1265 - goto out_dec; 1266 - SCSI_LOG_MLQUEUE(3, sdev_printk(KERN_INFO, sdev, 1267 - "unblocking device at zero depth\n")); 1259 + /* 1260 + * Only unblock if no other commands are pending and 1261 + * if device_blocked has decreased to zero 1262 + */ 1263 + if (scsi_device_busy(sdev) > 1 || 1264 + atomic_dec_return(&sdev->device_blocked) > 0) { 1265 + sbitmap_put(&sdev->budget_map, token); 1266 + return -1; 1268 1267 } 1269 1268 1269 + SCSI_LOG_MLQUEUE(3, sdev_printk(KERN_INFO, sdev, 1270 + "unblocking device at zero depth\n")); 1271 + 1270 1272 return token; 1271 - out_dec: 1272 - if (token >= 0) 1273 - sbitmap_put(&sdev->budget_map, token); 1274 - out: 1275 - return -1; 1276 1273 } 1277 1274 1278 1275 /* ··· 2298 2299 do { 2299 2300 result = scsi_execute_cmd(sdev, cmd, REQ_OP_DRV_IN, NULL, 0, 2300 2301 timeout, 1, &exec_args); 2301 - if (sdev->removable && scsi_sense_valid(sshdr) && 2302 + if (sdev->removable && result > 0 && scsi_sense_valid(sshdr) && 2302 2303 sshdr->sense_key == UNIT_ATTENTION) 2303 2304 sdev->changed = 1; 2304 - } while (scsi_sense_valid(sshdr) && 2305 + } while (result > 0 && scsi_sense_valid(sshdr) && 2305 2306 sshdr->sense_key == UNIT_ATTENTION && 
--retries); 2306 2307 2307 2308 return result;
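The `scsi_dev_queue_ready()` cleanup above replaces the `goto`-based flow with early returns: grab a budget token first, bail out without one, take the fast path when the device is not blocked, and only let a command through the blocked state once no other command is in flight and `device_blocked` has counted down to zero. A sketch of the restructured control flow (the token pool and state dict stand in for the sbitmap and atomics):

```python
def dev_queue_ready(get_token, put_token, state):
    """Restructured scsi_dev_queue_ready() logic: token first, then the
    unblock check; the token is returned to the pool on every reject."""
    token = get_token()
    if token < 0:
        return -1                         # out of budget
    if state["device_blocked"] == 0:
        return token                      # normal fast path
    if state["busy"] > 1:                 # other commands still pending
        put_token(token)
        return -1
    state["device_blocked"] -= 1          # atomic_dec_return() stand-in
    if state["device_blocked"] > 0:
        put_token(token)
        return -1
    return token                          # "unblocking device at zero depth"

# countdown demo: device_blocked == 2 rejects once before a token sticks
tokens, state = [0, 1, 2], {"device_blocked": 2, "busy": 1}
get = lambda: tokens.pop() if tokens else -1
put = lambda t: tokens.append(t)
print(dev_queue_ready(get, put, state))  # → -1 (blocked count drops to 1)
print(dev_queue_ready(get, put, state))  # → 2 (unblocked at zero depth)
```

The behavior is unchanged from the old `out_dec`/`out` labels; the early returns just make the release-token-on-reject path explicit.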
+2 -2
drivers/scsi/scsi_transport_spi.c
··· 676 676 for (r = 0; r < retries; r++) { 677 677 result = spi_execute(sdev, spi_write_buffer, REQ_OP_DRV_OUT, 678 678 buffer, len, &sshdr); 679 - if(result || !scsi_device_online(sdev)) { 679 + if (result || !scsi_device_online(sdev)) { 680 680 681 681 scsi_device_set_state(sdev, SDEV_QUIESCE); 682 - if (scsi_sense_valid(&sshdr) 682 + if (result > 0 && scsi_sense_valid(&sshdr) 683 683 && sshdr.sense_key == ILLEGAL_REQUEST 684 684 /* INVALID FIELD IN CDB */ 685 685 && sshdr.asc == 0x24 && sshdr.ascq == 0x00)
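The `result > 0 &&` guards added in this hunk and in the scsi_lib.c, sd.c, and sr.c hunks all enforce the same `scsi_execute_cmd()` return convention: a negative value is a submission or transport error (the sense buffer was never filled), zero is clean success, and only a positive SCSI status can carry meaningful sense data. A trivial sketch of the predicate these call sites now apply:

```python
def sense_usable(result, sense_valid):
    """scsi_execute_cmd() convention as applied by these hunks:
    result < 0  -> submission error, sense buffer never filled
    result == 0 -> command succeeded, nothing to decode
    result > 0  -> SCSI status returned; trust sense if it parses."""
    return result > 0 and sense_valid

# e.g. the sr.c media-change check only honors a UNIT ATTENTION sense
# when the command actually came back with a SCSI status:
print(sense_usable(-5, True))   # → False  (stale sense after -EIO)
print(sense_usable(2, True))    # → True   (CHECK CONDITION: decode it)
```

Without the guard, stale sense data left over from an earlier command could be misread after a transport failure, which is the bug class these hunks close.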
+27 -21
drivers/scsi/sd.c
··· 143 143 struct scsi_mode_data data; 144 144 struct scsi_sense_hdr sshdr; 145 145 static const char temp[] = "temporary "; 146 - int len; 146 + int len, ret; 147 147 148 148 if (sdp->type != TYPE_DISK && sdp->type != TYPE_ZBC) 149 149 /* no cache control on RBC devices; theoretically they ··· 190 190 */ 191 191 data.device_specific = 0; 192 192 193 - if (scsi_mode_select(sdp, 1, sp, buffer_data, len, SD_TIMEOUT, 194 - sdkp->max_retries, &data, &sshdr)) { 195 - if (scsi_sense_valid(&sshdr)) 193 + ret = scsi_mode_select(sdp, 1, sp, buffer_data, len, SD_TIMEOUT, 194 + sdkp->max_retries, &data, &sshdr); 195 + if (ret) { 196 + if (ret > 0 && scsi_sense_valid(&sshdr)) 196 197 sd_print_sense_hdr(sdkp, &sshdr); 197 198 return -EINVAL; 198 199 } ··· 2259 2258 sdkp->max_retries, 2260 2259 &exec_args); 2261 2260 2262 - /* 2263 - * If the drive has indicated to us that it 2264 - * doesn't have any media in it, don't bother 2265 - * with any more polling. 2266 - */ 2267 - if (media_not_present(sdkp, &sshdr)) { 2268 - if (media_was_present) 2269 - sd_printk(KERN_NOTICE, sdkp, "Media removed, stopped polling\n"); 2270 - return; 2271 - } 2261 + if (the_result > 0) { 2262 + /* 2263 + * If the drive has indicated to us that it 2264 + * doesn't have any media in it, don't bother 2265 + * with any more polling. 
2266 + */ 2267 + if (media_not_present(sdkp, &sshdr)) { 2268 + if (media_was_present) 2269 + sd_printk(KERN_NOTICE, sdkp, 2270 + "Media removed, stopped polling\n"); 2271 + return; 2272 + } 2272 2273 2273 - if (the_result) 2274 2274 sense_valid = scsi_sense_valid(&sshdr); 2275 + } 2275 2276 retries++; 2276 2277 } while (retries < 3 && 2277 2278 (!scsi_status_is_good(the_result) || ··· 2305 2302 break; /* unavailable */ 2306 2303 if (sshdr.asc == 4 && sshdr.ascq == 0x1b) 2307 2304 break; /* sanitize in progress */ 2305 + if (sshdr.asc == 4 && sshdr.ascq == 0x24) 2306 + break; /* depopulation in progress */ 2307 + if (sshdr.asc == 4 && sshdr.ascq == 0x25) 2308 + break; /* depopulation restoration in progress */ 2308 2309 /* 2309 2310 * Issue command to spin up drive when not ready 2310 2311 */ ··· 2473 2466 the_result = scsi_execute_cmd(sdp, cmd, REQ_OP_DRV_IN, 2474 2467 buffer, RC16_LEN, SD_TIMEOUT, 2475 2468 sdkp->max_retries, &exec_args); 2476 - 2477 - if (media_not_present(sdkp, &sshdr)) 2478 - return -ENODEV; 2479 - 2480 2469 if (the_result > 0) { 2470 + if (media_not_present(sdkp, &sshdr)) 2471 + return -ENODEV; 2472 + 2481 2473 sense_valid = scsi_sense_valid(&sshdr); 2482 2474 if (sense_valid && 2483 2475 sshdr.sense_key == ILLEGAL_REQUEST && ··· 2973 2967 } 2974 2968 2975 2969 bad_sense: 2976 - if (scsi_sense_valid(&sshdr) && 2970 + if (res == -EIO && scsi_sense_valid(&sshdr) && 2977 2971 sshdr.sense_key == ILLEGAL_REQUEST && 2978 2972 sshdr.asc == 0x24 && sshdr.ascq == 0x0) 2979 2973 /* Invalid field in CDB */ ··· 3021 3015 sd_first_printk(KERN_WARNING, sdkp, 3022 3016 "getting Control mode page failed, assume no ATO\n"); 3023 3017 3024 - if (scsi_sense_valid(&sshdr)) 3018 + if (res == -EIO && scsi_sense_valid(&sshdr)) 3025 3019 sd_print_sense_hdr(sdkp, &sshdr); 3026 3020 3027 3021 return;
+4 -10
drivers/scsi/snic/snic_scsi.c
··· 1850 1850 { 1851 1851 struct scsi_device *lr_sdev = lr_sc->device; 1852 1852 u32 tag = 0; 1853 - int ret = FAILED; 1853 + int ret; 1854 1854 1855 1855 for (tag = 0; tag < snic->max_tag_id; tag++) { 1856 1856 if (tag == snic_cmd_tag(lr_sc)) ··· 1859 1859 ret = snic_dr_clean_single_req(snic, tag, lr_sdev); 1860 1860 if (ret) { 1861 1861 SNIC_HOST_ERR(snic->shost, "clean_err:tag = %d\n", tag); 1862 - 1863 1862 goto clean_err; 1864 1863 } 1865 1864 } ··· 1866 1867 schedule_timeout(msecs_to_jiffies(100)); 1867 1868 1868 1869 /* Walk through all the cmds and check abts status. */ 1869 - if (snic_is_abts_pending(snic, lr_sc)) { 1870 - ret = FAILED; 1871 - 1870 + if (snic_is_abts_pending(snic, lr_sc)) 1872 1871 goto clean_err; 1873 - } 1874 1872 1875 - ret = 0; 1876 1873 SNIC_SCSI_DBG(snic->shost, "clean_pending_req: Success.\n"); 1877 1874 1878 - return ret; 1875 + return 0; 1879 1876 1880 1877 clean_err: 1881 - ret = FAILED; 1882 1878 SNIC_HOST_ERR(snic->shost, 1883 1879 "Failed to Clean Pending IOs on %s device.\n", 1884 1880 dev_name(&lr_sdev->sdev_gendev)); 1885 1881 1886 - return ret; 1882 + return FAILED; 1887 1883 1888 1884 } /* end of snic_dr_clean_pending_req */ 1889 1885
+2 -1
drivers/scsi/sr.c
··· 177 177 178 178 result = scsi_execute_cmd(sdev, cmd, REQ_OP_DRV_IN, buf, sizeof(buf), 179 179 SR_TIMEOUT, MAX_RETRIES, &exec_args); 180 - if (scsi_sense_valid(&sshdr) && sshdr.sense_key == UNIT_ATTENTION) 180 + if (result > 0 && scsi_sense_valid(&sshdr) && 181 + sshdr.sense_key == UNIT_ATTENTION) 181 182 return DISK_EVENT_MEDIA_CHANGE; 182 183 183 184 if (result || be16_to_cpu(eh->data_len) < sizeof(*med))
+121 -68
drivers/scsi/sym53c8xx_2/sym_glue.c
··· 559 559 */ 560 560 #define SYM_EH_ABORT 0 561 561 #define SYM_EH_DEVICE_RESET 1 562 - #define SYM_EH_BUS_RESET 2 563 - #define SYM_EH_HOST_RESET 3 564 562 565 563 /* 566 564 * Generic method for our eh processing. 567 565 * The 'op' argument tells what we have to do. 568 566 */ 569 - static int sym_eh_handler(int op, char *opname, struct scsi_cmnd *cmd) 567 + /* 568 + * Error handlers called from the eh thread (one thread per HBA). 569 + */ 570 + static int sym53c8xx_eh_abort_handler(struct scsi_cmnd *cmd) 570 571 { 571 572 struct sym_ucmd *ucmd = SYM_UCMD_PTR(cmd); 572 573 struct Scsi_Host *shost = cmd->device->host; ··· 579 578 int sts = -1; 580 579 struct completion eh_done; 581 580 582 - scmd_printk(KERN_WARNING, cmd, "%s operation started\n", opname); 581 + scmd_printk(KERN_WARNING, cmd, "ABORT operation started\n"); 583 582 584 - /* We may be in an error condition because the PCI bus 585 - * went down. In this case, we need to wait until the 586 - * PCI bus is reset, the card is reset, and only then 587 - * proceed with the scsi error recovery. There's no 588 - * point in hurrying; take a leisurely wait. 
583 + /* 584 + * Escalate to host reset if the PCI bus went down 589 585 */ 590 - #define WAIT_FOR_PCI_RECOVERY 35 591 - if (pci_channel_offline(pdev)) { 592 - int finished_reset = 0; 593 - init_completion(&eh_done); 594 - spin_lock_irq(shost->host_lock); 595 - /* Make sure we didn't race */ 596 - if (pci_channel_offline(pdev)) { 597 - BUG_ON(sym_data->io_reset); 598 - sym_data->io_reset = &eh_done; 599 - } else { 600 - finished_reset = 1; 601 - } 602 - spin_unlock_irq(shost->host_lock); 603 - if (!finished_reset) 604 - finished_reset = wait_for_completion_timeout 605 - (sym_data->io_reset, 606 - WAIT_FOR_PCI_RECOVERY*HZ); 607 - spin_lock_irq(shost->host_lock); 608 - sym_data->io_reset = NULL; 609 - spin_unlock_irq(shost->host_lock); 610 - if (!finished_reset) 611 - return SCSI_FAILED; 612 - } 586 + if (pci_channel_offline(pdev)) 587 + return SCSI_FAILED; 613 588 614 589 spin_lock_irq(shost->host_lock); 615 590 /* This one is queued in some place -> to wait for completion */ ··· 597 620 } 598 621 } 599 622 600 - /* Try to proceed the operation we have been asked for */ 601 - sts = -1; 602 - switch(op) { 603 - case SYM_EH_ABORT: 604 - sts = sym_abort_scsiio(np, cmd, 1); 605 - break; 606 - case SYM_EH_DEVICE_RESET: 607 - sts = sym_reset_scsi_target(np, cmd->device->id); 608 - break; 609 - case SYM_EH_BUS_RESET: 610 - sym_reset_scsi_bus(np, 1); 611 - sts = 0; 612 - break; 613 - case SYM_EH_HOST_RESET: 614 - sym_reset_scsi_bus(np, 0); 615 - sym_start_up(shost, 1); 616 - sts = 0; 617 - break; 618 - default: 619 - break; 620 - } 621 - 623 + sts = sym_abort_scsiio(np, cmd, 1); 622 624 /* On error, restore everything and cross fingers :) */ 623 625 if (sts) 624 626 cmd_queued = 0; ··· 614 658 spin_unlock_irq(shost->host_lock); 615 659 } 616 660 617 - dev_warn(&cmd->device->sdev_gendev, "%s operation %s.\n", opname, 661 + dev_warn(&cmd->device->sdev_gendev, "ABORT operation %s.\n", 618 662 sts==0 ? "complete" :sts==-2 ? "timed-out" : "failed"); 619 663 return sts ? 
SCSI_FAILED : SCSI_SUCCESS; 620 664 } 621 665 622 - 623 - /* 624 - * Error handlers called from the eh thread (one thread per HBA). 625 - */ 626 - static int sym53c8xx_eh_abort_handler(struct scsi_cmnd *cmd) 666 + static int sym53c8xx_eh_target_reset_handler(struct scsi_cmnd *cmd) 627 667 { 628 - return sym_eh_handler(SYM_EH_ABORT, "ABORT", cmd); 629 - } 668 + struct scsi_target *starget = scsi_target(cmd->device); 669 + struct Scsi_Host *shost = dev_to_shost(starget->dev.parent); 670 + struct sym_data *sym_data = shost_priv(shost); 671 + struct pci_dev *pdev = sym_data->pdev; 672 + struct sym_hcb *np = sym_data->ncb; 673 + SYM_QUEHEAD *qp; 674 + int sts; 675 + struct completion eh_done; 630 676 631 - static int sym53c8xx_eh_device_reset_handler(struct scsi_cmnd *cmd) 632 - { 633 - return sym_eh_handler(SYM_EH_DEVICE_RESET, "DEVICE RESET", cmd); 677 + starget_printk(KERN_WARNING, starget, 678 + "TARGET RESET operation started\n"); 679 + 680 + /* 681 + * Escalate to host reset if the PCI bus went down 682 + */ 683 + if (pci_channel_offline(pdev)) 684 + return SCSI_FAILED; 685 + 686 + spin_lock_irq(shost->host_lock); 687 + sts = sym_reset_scsi_target(np, starget->id); 688 + if (!sts) { 689 + FOR_EACH_QUEUED_ELEMENT(&np->busy_ccbq, qp) { 690 + struct sym_ccb *cp = sym_que_entry(qp, struct sym_ccb, 691 + link_ccbq); 692 + struct scsi_cmnd *cmd = cp->cmd; 693 + struct sym_ucmd *ucmd; 694 + 695 + if (!cmd || cmd->device->channel != starget->channel || 696 + cmd->device->id != starget->id) 697 + continue; 698 + 699 + ucmd = SYM_UCMD_PTR(cmd); 700 + init_completion(&eh_done); 701 + ucmd->eh_done = &eh_done; 702 + spin_unlock_irq(shost->host_lock); 703 + if (!wait_for_completion_timeout(&eh_done, 5*HZ)) { 704 + ucmd->eh_done = NULL; 705 + sts = -2; 706 + } 707 + spin_lock_irq(shost->host_lock); 708 + } 709 + } 710 + spin_unlock_irq(shost->host_lock); 711 + 712 + starget_printk(KERN_WARNING, starget, "TARGET RESET operation %s.\n", 713 + sts==0 ? "complete" :sts==-2 ? 
"timed-out" : "failed"); 714 + return SCSI_SUCCESS; 634 715 } 635 716 636 717 static int sym53c8xx_eh_bus_reset_handler(struct scsi_cmnd *cmd) 637 718 { 638 - return sym_eh_handler(SYM_EH_BUS_RESET, "BUS RESET", cmd); 719 + struct Scsi_Host *shost = cmd->device->host; 720 + struct sym_data *sym_data = shost_priv(shost); 721 + struct pci_dev *pdev = sym_data->pdev; 722 + struct sym_hcb *np = sym_data->ncb; 723 + 724 + scmd_printk(KERN_WARNING, cmd, "BUS RESET operation started\n"); 725 + 726 + /* 727 + * Escalate to host reset if the PCI bus went down 728 + */ 729 + if (pci_channel_offline(pdev)) 730 + return SCSI_FAILED; 731 + 732 + spin_lock_irq(shost->host_lock); 733 + sym_reset_scsi_bus(np, 1); 734 + spin_unlock_irq(shost->host_lock); 735 + 736 + dev_warn(&cmd->device->sdev_gendev, "BUS RESET operation complete.\n"); 737 + return SCSI_SUCCESS; 639 738 } 640 739 641 740 static int sym53c8xx_eh_host_reset_handler(struct scsi_cmnd *cmd) 642 741 { 643 - return sym_eh_handler(SYM_EH_HOST_RESET, "HOST RESET", cmd); 742 + struct Scsi_Host *shost = cmd->device->host; 743 + struct sym_data *sym_data = shost_priv(shost); 744 + struct pci_dev *pdev = sym_data->pdev; 745 + struct sym_hcb *np = sym_data->ncb; 746 + struct completion eh_done; 747 + int finished_reset = 1; 748 + 749 + shost_printk(KERN_WARNING, shost, "HOST RESET operation started\n"); 750 + 751 + /* We may be in an error condition because the PCI bus 752 + * went down. In this case, we need to wait until the 753 + * PCI bus is reset, the card is reset, and only then 754 + * proceed with the scsi error recovery. There's no 755 + * point in hurrying; take a leisurely wait. 
756 + */ 757 + #define WAIT_FOR_PCI_RECOVERY 35 758 + if (pci_channel_offline(pdev)) { 759 + init_completion(&eh_done); 760 + spin_lock_irq(shost->host_lock); 761 + /* Make sure we didn't race */ 762 + if (pci_channel_offline(pdev)) { 763 + BUG_ON(sym_data->io_reset); 764 + sym_data->io_reset = &eh_done; 765 + finished_reset = 0; 766 + } 767 + spin_unlock_irq(shost->host_lock); 768 + if (!finished_reset) 769 + finished_reset = wait_for_completion_timeout 770 + (sym_data->io_reset, 771 + WAIT_FOR_PCI_RECOVERY*HZ); 772 + spin_lock_irq(shost->host_lock); 773 + sym_data->io_reset = NULL; 774 + spin_unlock_irq(shost->host_lock); 775 + } 776 + 777 + if (finished_reset) { 778 + sym_reset_scsi_bus(np, 0); 779 + sym_start_up(shost, 1); 780 + } 781 + 782 + shost_printk(KERN_WARNING, shost, "HOST RESET operation %s.\n", 783 + finished_reset==1 ? "complete" : "failed"); 784 + return finished_reset ? SCSI_SUCCESS : SCSI_FAILED; 644 785 } 645 786 646 787 /* ··· 1688 1635 .slave_configure = sym53c8xx_slave_configure, 1689 1636 .slave_destroy = sym53c8xx_slave_destroy, 1690 1637 .eh_abort_handler = sym53c8xx_eh_abort_handler, 1691 - .eh_device_reset_handler = sym53c8xx_eh_device_reset_handler, 1638 + .eh_target_reset_handler = sym53c8xx_eh_target_reset_handler, 1692 1639 .eh_bus_reset_handler = sym53c8xx_eh_bus_reset_handler, 1693 1640 .eh_host_reset_handler = sym53c8xx_eh_host_reset_handler, 1694 1641 .this_id = 7,
+6 -3
drivers/target/iscsi/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 config ISCSI_TARGET 3 - tristate "Linux-iSCSI.org iSCSI Target Mode Stack" 3 + tristate "SCSI Target Mode Stack" 4 4 depends on INET 5 5 select CRYPTO 6 6 select CRYPTO_CRC32C 7 7 select CRYPTO_CRC32C_INTEL if X86 8 8 help 9 - Say M here to enable the ConfigFS enabled Linux-iSCSI.org iSCSI 10 - Target Mode Stack. 9 + Say M to enable the SCSI target mode stack. A SCSI target mode stack 10 + is software that makes local storage available over a storage network 11 + to a SCSI initiator system. The supported storage network technologies 12 + include iSCSI, Fibre Channel and the SCSI RDMA Protocol (SRP). 13 + Configuration of the SCSI target mode stack happens through configfs. 11 14 12 15 source "drivers/target/iscsi/cxgbit/Kconfig"
-6
drivers/target/iscsi/iscsi_target.c
··· 1234 1234 spin_lock_bh(&conn->cmd_lock); 1235 1235 list_add_tail(&cmd->i_conn_node, &conn->conn_cmd_list); 1236 1236 spin_unlock_bh(&conn->cmd_lock); 1237 - /* 1238 - * Check if we need to delay processing because of ALUA 1239 - * Active/NonOptimized primary access state.. 1240 - */ 1241 - core_alua_check_nonop_delay(&cmd->se_cmd); 1242 - 1243 1237 return 0; 1244 1238 } 1245 1239 EXPORT_SYMBOL(iscsit_setup_scsi_cmd);
+4 -1
drivers/target/iscsi/iscsi_target_configfs.c
··· 1589 1589 .tfc_tpg_nacl_auth_attrs = lio_target_nacl_auth_attrs, 1590 1590 .tfc_tpg_nacl_param_attrs = lio_target_nacl_param_attrs, 1591 1591 1592 - .write_pending_must_be_called = true, 1592 + .write_pending_must_be_called = 1, 1593 + 1594 + .default_submit_type = TARGET_DIRECT_SUBMIT, 1595 + .direct_submit_supp = 1, 1593 1596 };
+1 -1
drivers/target/iscsi/iscsi_target_erl1.c
··· 948 948 949 949 iscsit_set_unsolicited_dataout(cmd); 950 950 } 951 - return transport_handle_cdb_direct(&cmd->se_cmd); 951 + return target_submit(&cmd->se_cmd); 952 952 953 953 case ISCSI_OP_NOOP_OUT: 954 954 case ISCSI_OP_TEXT:
+1 -1
drivers/target/iscsi/iscsi_target_tmr.c
··· 318 318 pr_debug("READ ITT: 0x%08x: t_state: %d never sent to" 319 319 " transport\n", cmd->init_task_tag, 320 320 cmd->se_cmd.t_state); 321 - transport_handle_cdb_direct(se_cmd); 321 + target_submit(se_cmd); 322 322 return 0; 323 323 } 324 324
+3 -1
drivers/target/loopback/tcm_loop.c
··· 154 154 GFP_ATOMIC)) 155 155 return; 156 156 157 - target_queue_submission(se_cmd); 157 + target_submit(se_cmd); 158 158 return; 159 159 160 160 out_done: ··· 1102 1102 .tfc_wwn_attrs = tcm_loop_wwn_attrs, 1103 1103 .tfc_tpg_base_attrs = tcm_loop_tpg_attrs, 1104 1104 .tfc_tpg_attrib_attrs = tcm_loop_tpg_attrib_attrs, 1105 + .default_submit_type = TARGET_QUEUE_SUBMIT, 1106 + .direct_submit_supp = 0, 1105 1107 }; 1106 1108 1107 1109 static int __init tcm_loop_fabric_init(void)
+3
drivers/target/sbp/sbp_target.c
··· 2278 2278 .tfc_wwn_attrs = sbp_wwn_attrs, 2279 2279 .tfc_tpg_base_attrs = sbp_tpg_base_attrs, 2280 2280 .tfc_tpg_attrib_attrs = sbp_tpg_attrib_attrs, 2281 + 2282 + .default_submit_type = TARGET_DIRECT_SUBMIT, 2283 + .direct_submit_supp = 1, 2281 2284 }; 2282 2285 2283 2286 static int __init sbp_init(void)
-1
drivers/target/target_core_alua.c
··· 850 850 msleep_interruptible(cmd->alua_nonop_delay); 851 851 return 0; 852 852 } 853 - EXPORT_SYMBOL(core_alua_check_nonop_delay); 854 853 855 854 static int core_alua_write_tpg_metadata( 856 855 const char *path,
+22
drivers/target/target_core_configfs.c
··· 577 577 DEF_CONFIGFS_ATTRIB_SHOW(unmap_zeroes_data); 578 578 DEF_CONFIGFS_ATTRIB_SHOW(max_write_same_len); 579 579 DEF_CONFIGFS_ATTRIB_SHOW(emulate_rsoc); 580 + DEF_CONFIGFS_ATTRIB_SHOW(submit_type); 580 581 581 582 #define DEF_CONFIGFS_ATTRIB_STORE_U32(_name) \ 582 583 static ssize_t _name##_store(struct config_item *item, const char *page,\ ··· 1232 1231 return count; 1233 1232 } 1234 1233 1234 + static ssize_t submit_type_store(struct config_item *item, const char *page, 1235 + size_t count) 1236 + { 1237 + struct se_dev_attrib *da = to_attrib(item); 1238 + int ret; 1239 + u8 val; 1240 + 1241 + ret = kstrtou8(page, 0, &val); 1242 + if (ret < 0) 1243 + return ret; 1244 + 1245 + if (val > TARGET_QUEUE_SUBMIT) 1246 + return -EINVAL; 1247 + 1248 + da->submit_type = val; 1249 + return count; 1250 + } 1251 + 1235 1252 CONFIGFS_ATTR(, emulate_model_alias); 1236 1253 CONFIGFS_ATTR(, emulate_dpo); 1237 1254 CONFIGFS_ATTR(, emulate_fua_write); ··· 1285 1266 CONFIGFS_ATTR(, max_write_same_len); 1286 1267 CONFIGFS_ATTR(, alua_support); 1287 1268 CONFIGFS_ATTR(, pgr_support); 1269 + CONFIGFS_ATTR(, submit_type); 1288 1270 1289 1271 /* 1290 1272 * dev_attrib attributes for devices using the target core SBC/SPC ··· 1328 1308 &attr_alua_support, 1329 1309 &attr_pgr_support, 1330 1310 &attr_emulate_rsoc, 1311 + &attr_submit_type, 1331 1312 NULL, 1332 1313 }; 1333 1314 EXPORT_SYMBOL(sbc_attrib_attrs); ··· 1346 1325 &attr_emulate_pr, 1347 1326 &attr_alua_support, 1348 1327 &attr_pgr_support, 1328 + &attr_submit_type, 1349 1329 NULL, 1350 1330 }; 1351 1331 EXPORT_SYMBOL(passthrough_attrib_attrs);
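The new `submit_type` attribute above parses user input with `kstrtou8(page, 0, &val)` and rejects anything above `TARGET_QUEUE_SUBMIT`. A sketch of that store path (the enum values 0/1/2 are an assumed ordering consistent with the default set in target_core_device.c; only the upper-bound check appears in the diff):

```python
# Assumed ordering; the diff shows only the default and the bound check.
TARGET_FABRIC_DEFAULT_SUBMIT = 0
TARGET_DIRECT_SUBMIT = 1
TARGET_QUEUE_SUBMIT = 2
EINVAL = 22

def submit_type_store(da, page):
    """Sketch of the submit_type configfs store: parse like
    kstrtou8(page, 0, &val), reject out-of-range values, record the rest."""
    try:
        val = int(page, 0)        # base 0: accepts "2" as well as "0x2"
    except ValueError:
        return -EINVAL
    if not 0 <= val <= TARGET_QUEUE_SUBMIT:
        return -EINVAL
    da["submit_type"] = val
    return len(page)              # configfs stores return bytes consumed

da = {"submit_type": TARGET_FABRIC_DEFAULT_SUBMIT}
print(submit_type_store(da, "2"), da["submit_type"])  # → 1 2
print(submit_type_store(da, "3"))                     # → -22
```

Together with the read-only `default_submit_type`/`direct_submit_supported` wwn attributes added further down, this lets userspace pick per-device between the fabric's default, direct submission, and workqueue submission.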
+1
drivers/target/target_core_device.c
··· 779 779 dev->dev_attrib.unmap_zeroes_data = 780 780 DA_UNMAP_ZEROES_DATA_DEFAULT; 781 781 dev->dev_attrib.max_write_same_len = DA_MAX_WRITE_SAME_LEN; 782 + dev->dev_attrib.submit_type = TARGET_FABRIC_DEFAULT_SUBMIT; 782 783 783 784 xcopy_lun = &dev->xcopy_lun; 784 785 rcu_assign_pointer(xcopy_lun->lun_se_dev, dev);
+24
drivers/target/target_core_fabric_configfs.c
··· 1065 1065 } 1066 1066 CONFIGFS_ATTR(target_fabric_wwn_, cmd_completion_affinity); 1067 1067 1068 + static ssize_t 1069 + target_fabric_wwn_default_submit_type_show(struct config_item *item, 1070 + char *page) 1071 + { 1072 + struct se_wwn *wwn = container_of(to_config_group(item), struct se_wwn, 1073 + param_group); 1074 + return sysfs_emit(page, "%u\n", 1075 + wwn->wwn_tf->tf_ops->default_submit_type); 1076 + } 1077 + CONFIGFS_ATTR_RO(target_fabric_wwn_, default_submit_type); 1078 + 1079 + static ssize_t 1080 + target_fabric_wwn_direct_submit_supported_show(struct config_item *item, 1081 + char *page) 1082 + { 1083 + struct se_wwn *wwn = container_of(to_config_group(item), struct se_wwn, 1084 + param_group); 1085 + return sysfs_emit(page, "%u\n", 1086 + wwn->wwn_tf->tf_ops->direct_submit_supp); 1087 + } 1088 + CONFIGFS_ATTR_RO(target_fabric_wwn_, direct_submit_supported); 1089 + 1068 1090 static struct configfs_attribute *target_fabric_wwn_param_attrs[] = { 1069 1091 &target_fabric_wwn_attr_cmd_completion_affinity, 1092 + &target_fabric_wwn_attr_default_submit_type, 1093 + &target_fabric_wwn_attr_direct_submit_supported, 1070 1094 NULL, 1071 1095 }; 1072 1096
+59 -57
drivers/target/target_core_transport.c
··· 1576 1576 } 1577 1577 EXPORT_SYMBOL(target_cmd_parse_cdb); 1578 1578 1579 - /* 1580 - * Used by fabric module frontends to queue tasks directly. 1581 - * May only be used from process context. 1582 - */ 1583 - int transport_handle_cdb_direct( 1584 - struct se_cmd *cmd) 1579 + static int __target_submit(struct se_cmd *cmd) 1585 1580 { 1586 1581 sense_reason_t ret; 1587 1582 1588 1583 might_sleep(); 1584 + 1585 + /* 1586 + * Check if we need to delay processing because of ALUA 1587 + * Active/NonOptimized primary access state.. 1588 + */ 1589 + core_alua_check_nonop_delay(cmd); 1590 + 1591 + if (cmd->t_data_nents != 0) { 1592 + /* 1593 + * This is primarily a hack for udev and tcm loop which sends 1594 + * INQUIRYs with a single page and expects the data to be 1595 + * cleared. 1596 + */ 1597 + if (!(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) && 1598 + cmd->data_direction == DMA_FROM_DEVICE) { 1599 + struct scatterlist *sgl = cmd->t_data_sg; 1600 + unsigned char *buf = NULL; 1601 + 1602 + BUG_ON(!sgl); 1603 + 1604 + buf = kmap_local_page(sg_page(sgl)); 1605 + if (buf) { 1606 + memset(buf + sgl->offset, 0, sgl->length); 1607 + kunmap_local(buf); 1608 + } 1609 + } 1610 + } 1589 1611 1590 1612 if (!cmd->se_lun) { 1591 1613 dump_stack(); ··· 1636 1614 transport_generic_request_failure(cmd, ret); 1637 1615 return 0; 1638 1616 } 1639 - EXPORT_SYMBOL(transport_handle_cdb_direct); 1640 1617 1641 1618 sense_reason_t 1642 1619 transport_generic_map_mem_to_cmd(struct se_cmd *cmd, struct scatterlist *sgl, ··· 1803 1782 EXPORT_SYMBOL_GPL(target_submit_prep); 1804 1783 1805 1784 /** 1806 - * target_submit - perform final initialization and submit cmd to LIO core 1807 - * @se_cmd: command descriptor to submit 1808 - * 1809 - * target_submit_prep must have been called on the cmd, and this must be 1810 - * called from process context. 
1811 - */ 1812 - void target_submit(struct se_cmd *se_cmd) 1813 - { 1814 - struct scatterlist *sgl = se_cmd->t_data_sg; 1815 - unsigned char *buf = NULL; 1816 - 1817 - might_sleep(); 1818 - 1819 - if (se_cmd->t_data_nents != 0) { 1820 - BUG_ON(!sgl); 1821 - /* 1822 - * A work-around for tcm_loop as some userspace code via 1823 - * scsi-generic do not memset their associated read buffers, 1824 - * so go ahead and do that here for type non-data CDBs. Also 1825 - * note that this is currently guaranteed to be a single SGL 1826 - * for this case by target core in target_setup_cmd_from_cdb() 1827 - * -> transport_generic_cmd_sequencer(). 1828 - */ 1829 - if (!(se_cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) && 1830 - se_cmd->data_direction == DMA_FROM_DEVICE) { 1831 - if (sgl) 1832 - buf = kmap(sg_page(sgl)) + sgl->offset; 1833 - 1834 - if (buf) { 1835 - memset(buf, 0, sgl->length); 1836 - kunmap(sg_page(sgl)); 1837 - } 1838 - } 1839 - 1840 - } 1841 - 1842 - /* 1843 - * Check if we need to delay processing because of ALUA 1844 - * Active/NonOptimized primary access state.. 
1845 - */ 1846 - core_alua_check_nonop_delay(se_cmd); 1847 - 1848 - transport_handle_cdb_direct(se_cmd); 1849 - } 1850 - EXPORT_SYMBOL_GPL(target_submit); 1851 - 1852 - /** 1853 1785 * target_submit_cmd - lookup unpacked lun and submit uninitialized se_cmd 1854 1786 * 1855 1787 * @se_cmd: command descriptor to submit ··· 1897 1923 se_plug = target_plug_device(se_dev); 1898 1924 } 1899 1925 1900 - target_submit(se_cmd); 1926 + __target_submit(se_cmd); 1901 1927 } 1902 1928 1903 1929 if (se_plug) ··· 1908 1934 * target_queue_submission - queue the cmd to run on the LIO workqueue 1909 1935 * @se_cmd: command descriptor to submit 1910 1936 */ 1911 - void target_queue_submission(struct se_cmd *se_cmd) 1937 + static void target_queue_submission(struct se_cmd *se_cmd) 1912 1938 { 1913 1939 struct se_device *se_dev = se_cmd->se_dev; 1914 1940 int cpu = se_cmd->cpuid; ··· 1918 1944 llist_add(&se_cmd->se_cmd_list, &sq->cmd_list); 1919 1945 queue_work_on(cpu, target_submission_wq, &sq->work); 1920 1946 } 1921 - EXPORT_SYMBOL_GPL(target_queue_submission); 1947 + 1948 + /** 1949 + * target_submit - perform final initialization and submit cmd to LIO core 1950 + * @se_cmd: command descriptor to submit 1951 + * 1952 + * target_submit_prep or something similar must have been called on the cmd, 1953 + * and this must be called from process context. 
1954 + */ 1955 + int target_submit(struct se_cmd *se_cmd) 1956 + { 1957 + const struct target_core_fabric_ops *tfo = se_cmd->se_sess->se_tpg->se_tpg_tfo; 1958 + struct se_dev_attrib *da = &se_cmd->se_dev->dev_attrib; 1959 + u8 submit_type; 1960 + 1961 + if (da->submit_type == TARGET_FABRIC_DEFAULT_SUBMIT) 1962 + submit_type = tfo->default_submit_type; 1963 + else if (da->submit_type == TARGET_DIRECT_SUBMIT && 1964 + tfo->direct_submit_supp) 1965 + submit_type = TARGET_DIRECT_SUBMIT; 1966 + else 1967 + submit_type = TARGET_QUEUE_SUBMIT; 1968 + 1969 + if (submit_type == TARGET_DIRECT_SUBMIT) 1970 + return __target_submit(se_cmd); 1971 + 1972 + target_queue_submission(se_cmd); 1973 + return 0; 1974 + } 1975 + EXPORT_SYMBOL_GPL(target_submit); 1922 1976 1923 1977 static void target_complete_tmr_failure(struct work_struct *work) 1924 1978 {
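The rethreaded `target_submit()` above arbitrates between the per-device `submit_type` attribute and the fabric's declared capabilities before choosing the direct or queued path. That decision reduces to a small pure function; this sketch mirrors the arbitration with assumed enum values:

```c
/* Stand-ins for the kernel's submit-type constants (assumed values) */
enum {
	TARGET_FABRIC_DEFAULT_SUBMIT,
	TARGET_DIRECT_SUBMIT,
	TARGET_QUEUE_SUBMIT,
};

/*
 * Mirrors the arbitration in target_submit(): the device attribute wins,
 * except that a request for direct submission the fabric cannot honour
 * falls back to the workqueue path.
 */
static unsigned char resolve_submit_type(unsigned char dev_attr,
					 unsigned char fabric_default,
					 int direct_submit_supp)
{
	if (dev_attr == TARGET_FABRIC_DEFAULT_SUBMIT)
		return fabric_default;
	if (dev_attr == TARGET_DIRECT_SUBMIT && direct_submit_supp)
		return TARGET_DIRECT_SUBMIT;
	return TARGET_QUEUE_SUBMIT;
}
```

Note that when the attribute is left at the fabric default, the fabric's own `default_submit_type` is trusted as-is, which is why tcm_fc can advertise direct submission below.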
+1 -1
drivers/target/target_core_user.c
··· 201 201 202 202 uint8_t tmr_type; 203 203 uint32_t tmr_cmd_cnt; 204 - int16_t tmr_cmd_ids[]; 204 + int16_t tmr_cmd_ids[] __counted_by(tmr_cmd_cnt); 205 205 }; 206 206 207 207 /*
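The `__counted_by(tmr_cmd_cnt)` annotation tells the compiler (and FORTIFY_SOURCE/UBSAN) which struct member bounds the flexible array, so out-of-range indexing can be caught at runtime. The allocation pattern it annotates looks like this userspace model (struct names and field widths follow the tcmu message above; the helper is hypothetical):

```c
#include <stdint.h>
#include <stdlib.h>

/* Userspace model of the tcmu TMR message; in the kernel the array is
 * marked __counted_by(tmr_cmd_cnt), binding its bound to the counter. */
struct tmr_msg {
	uint32_t tmr_cmd_cnt;
	int16_t tmr_cmd_ids[]; /* flexible array member */
};

/* Allocate with room for n ids and set the counter the annotation names;
 * the counter must be valid before the array is indexed. */
static struct tmr_msg *tmr_msg_alloc(uint32_t n)
{
	struct tmr_msg *m =
		calloc(1, sizeof(*m) + n * sizeof(m->tmr_cmd_ids[0]));

	if (m)
		m->tmr_cmd_cnt = n;
	return m;
}
```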
+3
drivers/target/tcm_fc/tfc_conf.c
··· 432 432 433 433 .tfc_wwn_attrs = ft_wwn_attrs, 434 434 .tfc_tpg_nacl_base_attrs = ft_nacl_base_attrs, 435 + 436 + .default_submit_type = TARGET_DIRECT_SUBMIT, 437 + .direct_submit_supp = 1, 435 438 }; 436 439 437 440 static struct notifier_block ft_notifier = {
+193 -75
drivers/ufs/core/ufshcd.c
··· 20 20 #include <linux/delay.h> 21 21 #include <linux/interrupt.h> 22 22 #include <linux/module.h> 23 + #include <linux/pm_opp.h> 23 24 #include <linux/regulator/consumer.h> 24 25 #include <linux/sched/clock.h> 25 26 #include <linux/iopoll.h> ··· 275 274 static int ufshcd_host_reset_and_restore(struct ufs_hba *hba); 276 275 static void ufshcd_resume_clkscaling(struct ufs_hba *hba); 277 276 static void ufshcd_suspend_clkscaling(struct ufs_hba *hba); 278 - static void __ufshcd_suspend_clkscaling(struct ufs_hba *hba); 279 - static int ufshcd_scale_clks(struct ufs_hba *hba, bool scale_up); 277 + static int ufshcd_scale_clks(struct ufs_hba *hba, unsigned long freq, 278 + bool scale_up); 280 279 static irqreturn_t ufshcd_intr(int irq, void *__hba); 281 280 static int ufshcd_change_power_mode(struct ufs_hba *hba, 282 281 struct ufs_pa_layer_attr *pwr_mode); ··· 448 447 } else { 449 448 doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL); 450 449 } 451 - trace_ufshcd_command(dev_name(hba->dev), str_t, tag, 452 - doorbell, hwq_id, transfer_len, intr, lba, opcode, group_id); 450 + trace_ufshcd_command(cmd->device, str_t, tag, doorbell, hwq_id, 451 + transfer_len, intr, lba, opcode, group_id); 453 452 } 454 453 455 454 static void ufshcd_print_clk_freqs(struct ufs_hba *hba) ··· 1063 1062 return ret; 1064 1063 } 1065 1064 1065 + int ufshcd_opp_config_clks(struct device *dev, struct opp_table *opp_table, 1066 + struct dev_pm_opp *opp, void *data, 1067 + bool scaling_down) 1068 + { 1069 + struct ufs_hba *hba = dev_get_drvdata(dev); 1070 + struct list_head *head = &hba->clk_list_head; 1071 + struct ufs_clk_info *clki; 1072 + unsigned long freq; 1073 + u8 idx = 0; 1074 + int ret; 1075 + 1076 + list_for_each_entry(clki, head, list) { 1077 + if (!IS_ERR_OR_NULL(clki->clk)) { 1078 + freq = dev_pm_opp_get_freq_indexed(opp, idx++); 1079 + 1080 + /* Do not set rate for clocks having frequency as 0 */ 1081 + if (!freq) 1082 + continue; 1083 + 1084 + ret = 
clk_set_rate(clki->clk, freq); 1085 + if (ret) { 1086 + dev_err(dev, "%s: %s clk set rate(%ldHz) failed, %d\n", 1087 + __func__, clki->name, freq, ret); 1088 + return ret; 1089 + } 1090 + 1091 + trace_ufshcd_clk_scaling(dev_name(dev), 1092 + (scaling_down ? "scaled down" : "scaled up"), 1093 + clki->name, hba->clk_scaling.target_freq, freq); 1094 + } 1095 + } 1096 + 1097 + return 0; 1098 + } 1099 + EXPORT_SYMBOL_GPL(ufshcd_opp_config_clks); 1100 + 1101 + static int ufshcd_opp_set_rate(struct ufs_hba *hba, unsigned long freq) 1102 + { 1103 + struct dev_pm_opp *opp; 1104 + int ret; 1105 + 1106 + opp = dev_pm_opp_find_freq_floor_indexed(hba->dev, 1107 + &freq, 0); 1108 + if (IS_ERR(opp)) 1109 + return PTR_ERR(opp); 1110 + 1111 + ret = dev_pm_opp_set_opp(hba->dev, opp); 1112 + dev_pm_opp_put(opp); 1113 + 1114 + return ret; 1115 + } 1116 + 1066 1117 /** 1067 1118 * ufshcd_scale_clks - scale up or scale down UFS controller clocks 1068 1119 * @hba: per adapter instance 1120 + * @freq: frequency to scale 1069 1121 * @scale_up: True if scaling up and false if scaling down 1070 1122 * 1071 1123 * Return: 0 if successful; < 0 upon failure. 
1072 1124 */ 1073 - static int ufshcd_scale_clks(struct ufs_hba *hba, bool scale_up) 1125 + static int ufshcd_scale_clks(struct ufs_hba *hba, unsigned long freq, 1126 + bool scale_up) 1074 1127 { 1075 1128 int ret = 0; 1076 1129 ktime_t start = ktime_get(); ··· 1133 1078 if (ret) 1134 1079 goto out; 1135 1080 1136 - ret = ufshcd_set_clk_freq(hba, scale_up); 1081 + if (hba->use_pm_opp) 1082 + ret = ufshcd_opp_set_rate(hba, freq); 1083 + else 1084 + ret = ufshcd_set_clk_freq(hba, scale_up); 1137 1085 if (ret) 1138 1086 goto out; 1139 1087 1140 1088 ret = ufshcd_vops_clk_scale_notify(hba, scale_up, POST_CHANGE); 1141 - if (ret) 1142 - ufshcd_set_clk_freq(hba, !scale_up); 1089 + if (ret) { 1090 + if (hba->use_pm_opp) 1091 + ufshcd_opp_set_rate(hba, 1092 + hba->devfreq->previous_freq); 1093 + else 1094 + ufshcd_set_clk_freq(hba, !scale_up); 1095 + } 1143 1096 1144 1097 out: 1145 1098 trace_ufshcd_profile_clk_scaling(dev_name(hba->dev), ··· 1159 1096 /** 1160 1097 * ufshcd_is_devfreq_scaling_required - check if scaling is required or not 1161 1098 * @hba: per adapter instance 1099 + * @freq: frequency to scale 1162 1100 * @scale_up: True if scaling up and false if scaling down 1163 1101 * 1164 1102 * Return: true if scaling is required, false otherwise. 
1165 1103 */ 1166 1104 static bool ufshcd_is_devfreq_scaling_required(struct ufs_hba *hba, 1167 - bool scale_up) 1105 + unsigned long freq, bool scale_up) 1168 1106 { 1169 1107 struct ufs_clk_info *clki; 1170 1108 struct list_head *head = &hba->clk_list_head; 1171 1109 1172 1110 if (list_empty(head)) 1173 1111 return false; 1112 + 1113 + if (hba->use_pm_opp) 1114 + return freq != hba->clk_scaling.target_freq; 1174 1115 1175 1116 list_for_each_entry(clki, head, list) { 1176 1117 if (!IS_ERR_OR_NULL(clki->clk)) { ··· 1371 1304 /** 1372 1305 * ufshcd_devfreq_scale - scale up/down UFS clocks and gear 1373 1306 * @hba: per adapter instance 1307 + * @freq: frequency to scale 1374 1308 * @scale_up: True for scaling up and false for scalin down 1375 1309 * 1376 1310 * Return: 0 for success; -EBUSY if scaling can't happen at this time; non-zero 1377 1311 * for any other errors. 1378 1312 */ 1379 - static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up) 1313 + static int ufshcd_devfreq_scale(struct ufs_hba *hba, unsigned long freq, 1314 + bool scale_up) 1380 1315 { 1381 1316 int ret = 0; 1382 1317 ··· 1393 1324 goto out_unprepare; 1394 1325 } 1395 1326 1396 - ret = ufshcd_scale_clks(hba, scale_up); 1327 + ret = ufshcd_scale_clks(hba, freq, scale_up); 1397 1328 if (ret) { 1398 1329 if (!scale_up) 1399 1330 ufshcd_scale_gear(hba, true); ··· 1404 1335 if (scale_up) { 1405 1336 ret = ufshcd_scale_gear(hba, true); 1406 1337 if (ret) { 1407 - ufshcd_scale_clks(hba, false); 1338 + ufshcd_scale_clks(hba, hba->devfreq->previous_freq, 1339 + false); 1408 1340 goto out_unprepare; 1409 1341 } 1410 1342 } ··· 1427 1357 return; 1428 1358 } 1429 1359 hba->clk_scaling.is_suspended = true; 1360 + hba->clk_scaling.window_start_t = 0; 1430 1361 spin_unlock_irqrestore(hba->host->host_lock, irq_flags); 1431 1362 1432 - __ufshcd_suspend_clkscaling(hba); 1363 + devfreq_suspend_device(hba->devfreq); 1433 1364 } 1434 1365 1435 1366 static void ufshcd_clk_scaling_resume_work(struct 
work_struct *work) ··· 1464 1393 if (!ufshcd_is_clkscaling_supported(hba)) 1465 1394 return -EINVAL; 1466 1395 1467 - clki = list_first_entry(&hba->clk_list_head, struct ufs_clk_info, list); 1468 - /* Override with the closest supported frequency */ 1469 - *freq = (unsigned long) clk_round_rate(clki->clk, *freq); 1396 + if (hba->use_pm_opp) { 1397 + struct dev_pm_opp *opp; 1398 + 1399 + /* Get the recommended frequency from OPP framework */ 1400 + opp = devfreq_recommended_opp(dev, freq, flags); 1401 + if (IS_ERR(opp)) 1402 + return PTR_ERR(opp); 1403 + 1404 + dev_pm_opp_put(opp); 1405 + } else { 1406 + /* Override with the closest supported frequency */ 1407 + clki = list_first_entry(&hba->clk_list_head, struct ufs_clk_info, 1408 + list); 1409 + *freq = (unsigned long) clk_round_rate(clki->clk, *freq); 1410 + } 1411 + 1470 1412 spin_lock_irqsave(hba->host->host_lock, irq_flags); 1471 1413 if (ufshcd_eh_in_progress(hba)) { 1472 1414 spin_unlock_irqrestore(hba->host->host_lock, irq_flags); 1415 + return 0; 1416 + } 1417 + 1418 + /* Skip scaling clock when clock scaling is suspended */ 1419 + if (hba->clk_scaling.is_suspended) { 1420 + spin_unlock_irqrestore(hba->host->host_lock, irq_flags); 1421 + dev_warn(hba->dev, "clock scaling is suspended, skip"); 1473 1422 return 0; 1474 1423 } 1475 1424 ··· 1501 1410 goto out; 1502 1411 } 1503 1412 1504 - /* Decide based on the rounded-off frequency and update */ 1505 - scale_up = *freq == clki->max_freq; 1506 - if (!scale_up) 1413 + /* Decide based on the target or rounded-off frequency and update */ 1414 + if (hba->use_pm_opp) 1415 + scale_up = *freq > hba->clk_scaling.target_freq; 1416 + else 1417 + scale_up = *freq == clki->max_freq; 1418 + 1419 + if (!hba->use_pm_opp && !scale_up) 1507 1420 *freq = clki->min_freq; 1421 + 1508 1422 /* Update the frequency */ 1509 - if (!ufshcd_is_devfreq_scaling_required(hba, scale_up)) { 1423 + if (!ufshcd_is_devfreq_scaling_required(hba, *freq, scale_up)) { 1510 1424 
spin_unlock_irqrestore(hba->host->host_lock, irq_flags); 1511 1425 ret = 0; 1512 1426 goto out; /* no state change required */ ··· 1519 1423 spin_unlock_irqrestore(hba->host->host_lock, irq_flags); 1520 1424 1521 1425 start = ktime_get(); 1522 - ret = ufshcd_devfreq_scale(hba, scale_up); 1426 + ret = ufshcd_devfreq_scale(hba, *freq, scale_up); 1427 + if (!ret) 1428 + hba->clk_scaling.target_freq = *freq; 1523 1429 1524 1430 trace_ufshcd_profile_clk_scaling(dev_name(hba->dev), 1525 1431 (scale_up ? "up" : "down"), 1526 1432 ktime_to_us(ktime_sub(ktime_get(), start)), ret); 1527 1433 1528 1434 out: 1529 - if (sched_clk_scaling_suspend_work) 1435 + if (sched_clk_scaling_suspend_work && !scale_up) 1530 1436 queue_work(hba->clk_scaling.workq, 1531 1437 &hba->clk_scaling.suspend_work); 1532 1438 ··· 1541 1443 struct ufs_hba *hba = dev_get_drvdata(dev); 1542 1444 struct ufs_clk_scaling *scaling = &hba->clk_scaling; 1543 1445 unsigned long flags; 1544 - struct list_head *clk_list = &hba->clk_list_head; 1545 - struct ufs_clk_info *clki; 1546 1446 ktime_t curr_t; 1547 1447 1548 1448 if (!ufshcd_is_clkscaling_supported(hba)) ··· 1553 1457 if (!scaling->window_start_t) 1554 1458 goto start_window; 1555 1459 1556 - clki = list_first_entry(clk_list, struct ufs_clk_info, list); 1557 1460 /* 1558 1461 * If current frequency is 0, then the ondemand governor considers 1559 1462 * there's no initial frequency set. And it always requests to set 1560 1463 * to max. frequency. 
1561 1464 */ 1562 - stat->current_frequency = clki->curr_freq; 1465 + if (hba->use_pm_opp) { 1466 + stat->current_frequency = hba->clk_scaling.target_freq; 1467 + } else { 1468 + struct list_head *clk_list = &hba->clk_list_head; 1469 + struct ufs_clk_info *clki; 1470 + 1471 + clki = list_first_entry(clk_list, struct ufs_clk_info, list); 1472 + stat->current_frequency = clki->curr_freq; 1473 + } 1474 + 1563 1475 if (scaling->is_busy_started) 1564 1476 scaling->tot_busy_t += ktime_us_delta(curr_t, 1565 1477 scaling->busy_start_t); 1566 - 1567 1478 stat->total_time = ktime_us_delta(curr_t, scaling->window_start_t); 1568 1479 stat->busy_time = scaling->tot_busy_t; 1569 1480 start_window: ··· 1599 1496 if (list_empty(clk_list)) 1600 1497 return 0; 1601 1498 1602 - clki = list_first_entry(clk_list, struct ufs_clk_info, list); 1603 - dev_pm_opp_add(hba->dev, clki->min_freq, 0); 1604 - dev_pm_opp_add(hba->dev, clki->max_freq, 0); 1499 + if (!hba->use_pm_opp) { 1500 + clki = list_first_entry(clk_list, struct ufs_clk_info, list); 1501 + dev_pm_opp_add(hba->dev, clki->min_freq, 0); 1502 + dev_pm_opp_add(hba->dev, clki->max_freq, 0); 1503 + } 1605 1504 1606 1505 ufshcd_vops_config_scaling_param(hba, &hba->vps->devfreq_profile, 1607 1506 &hba->vps->ondemand_data); ··· 1615 1510 ret = PTR_ERR(devfreq); 1616 1511 dev_err(hba->dev, "Unable to register with devfreq %d\n", ret); 1617 1512 1618 - dev_pm_opp_remove(hba->dev, clki->min_freq); 1619 - dev_pm_opp_remove(hba->dev, clki->max_freq); 1513 + if (!hba->use_pm_opp) { 1514 + dev_pm_opp_remove(hba->dev, clki->min_freq); 1515 + dev_pm_opp_remove(hba->dev, clki->max_freq); 1516 + } 1620 1517 return ret; 1621 1518 } 1622 1519 ··· 1630 1523 static void ufshcd_devfreq_remove(struct ufs_hba *hba) 1631 1524 { 1632 1525 struct list_head *clk_list = &hba->clk_list_head; 1633 - struct ufs_clk_info *clki; 1634 1526 1635 1527 if (!hba->devfreq) 1636 1528 return; ··· 1637 1531 devfreq_remove_device(hba->devfreq); 1638 1532 hba->devfreq = NULL; 
1639 1533 1640 - clki = list_first_entry(clk_list, struct ufs_clk_info, list); 1641 - dev_pm_opp_remove(hba->dev, clki->min_freq); 1642 - dev_pm_opp_remove(hba->dev, clki->max_freq); 1643 - } 1534 + if (!hba->use_pm_opp) { 1535 + struct ufs_clk_info *clki; 1644 1536 1645 - static void __ufshcd_suspend_clkscaling(struct ufs_hba *hba) 1646 - { 1647 - unsigned long flags; 1648 - 1649 - devfreq_suspend_device(hba->devfreq); 1650 - spin_lock_irqsave(hba->host->host_lock, flags); 1651 - hba->clk_scaling.window_start_t = 0; 1652 - spin_unlock_irqrestore(hba->host->host_lock, flags); 1537 + clki = list_first_entry(clk_list, struct ufs_clk_info, list); 1538 + dev_pm_opp_remove(hba->dev, clki->min_freq); 1539 + dev_pm_opp_remove(hba->dev, clki->max_freq); 1540 + } 1653 1541 } 1654 1542 1655 1543 static void ufshcd_suspend_clkscaling(struct ufs_hba *hba) ··· 1658 1558 if (!hba->clk_scaling.is_suspended) { 1659 1559 suspend = true; 1660 1560 hba->clk_scaling.is_suspended = true; 1561 + hba->clk_scaling.window_start_t = 0; 1661 1562 } 1662 1563 spin_unlock_irqrestore(hba->host->host_lock, flags); 1663 1564 1664 1565 if (suspend) 1665 - __ufshcd_suspend_clkscaling(hba); 1566 + devfreq_suspend_device(hba->devfreq); 1666 1567 } 1667 1568 1668 1569 static void ufshcd_resume_clkscaling(struct ufs_hba *hba) ··· 1719 1618 ufshcd_resume_clkscaling(hba); 1720 1619 } else { 1721 1620 ufshcd_suspend_clkscaling(hba); 1722 - err = ufshcd_devfreq_scale(hba, true); 1621 + err = ufshcd_devfreq_scale(hba, ULONG_MAX, true); 1723 1622 if (err) 1724 1623 dev_err(hba->dev, "%s: failed to scale clocks up %d\n", 1725 1624 __func__, err); ··· 2266 2165 lrbp->compl_time_stamp = ktime_set(0, 0); 2267 2166 lrbp->compl_time_stamp_local_clock = 0; 2268 2167 ufshcd_add_command_trace(hba, task_tag, UFS_CMD_SEND); 2269 - ufshcd_clk_scaling_start_busy(hba); 2168 + if (lrbp->cmd) 2169 + ufshcd_clk_scaling_start_busy(hba); 2270 2170 if (unlikely(ufshcd_should_inform_monitor(hba, lrbp))) 2271 2171 
ufshcd_start_monitor(hba, lrbp); 2272 2172 ··· 2406 2304 int ret = read_poll_timeout(ufshcd_readl, val, val & UIC_COMMAND_READY, 2407 2305 500, UIC_CMD_TIMEOUT * 1000, false, hba, 2408 2306 REG_CONTROLLER_STATUS); 2409 - return ret == 0 ? true : false; 2307 + return ret == 0; 2410 2308 } 2411 2309 2412 2310 /** ··· 2817 2715 * for SCSI Purposes 2818 2716 * @hba: per adapter instance 2819 2717 * @lrbp: pointer to local reference block 2820 - * 2821 - * Return: 0 upon success; < 0 upon failure. 2822 2718 */ 2823 - static int ufshcd_comp_scsi_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) 2719 + static void ufshcd_comp_scsi_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) 2824 2720 { 2721 + struct request *rq = scsi_cmd_to_rq(lrbp->cmd); 2722 + unsigned int ioprio_class = IOPRIO_PRIO_CLASS(req_get_ioprio(rq)); 2825 2723 u8 upiu_flags; 2826 - int ret = 0; 2827 2724 2828 2725 if (hba->ufs_version <= ufshci_version(1, 1)) 2829 2726 lrbp->command_type = UTP_CMD_TYPE_SCSI; 2830 2727 else 2831 2728 lrbp->command_type = UTP_CMD_TYPE_UFS_STORAGE; 2832 2729 2833 - if (likely(lrbp->cmd)) { 2834 - ufshcd_prepare_req_desc_hdr(lrbp, &upiu_flags, lrbp->cmd->sc_data_direction, 0); 2835 - ufshcd_prepare_utp_scsi_cmd_upiu(lrbp, upiu_flags); 2836 - } else { 2837 - ret = -EINVAL; 2838 - } 2839 - 2840 - return ret; 2730 + ufshcd_prepare_req_desc_hdr(lrbp, &upiu_flags, 2731 + lrbp->cmd->sc_data_direction, 0); 2732 + if (ioprio_class == IOPRIO_CLASS_RT) 2733 + upiu_flags |= UPIU_CMD_FLAGS_CP; 2734 + ufshcd_prepare_utp_scsi_cmd_upiu(lrbp, upiu_flags); 2841 2735 } 2842 2736 2843 2737 /** ··· 2920 2822 struct ufshcd_lrb *lrbp; 2921 2823 int err = 0; 2922 2824 struct ufs_hw_queue *hwq = NULL; 2923 - 2924 - WARN_ONCE(tag < 0 || tag >= hba->nutrs, "Invalid tag %d\n", tag); 2925 2825 2926 2826 switch (hba->ufshcd_state) { 2927 2827 case UFSHCD_STATE_OPERATIONAL: ··· 3728 3632 */ 3729 3633 ret = utf16s_to_utf8s(uc_str->uc, 3730 3634 uc_str->len - QUERY_DESC_HDR_SIZE, 3731 - UTF16_BIG_ENDIAN, 
str, ascii_len); 3635 + UTF16_BIG_ENDIAN, str, ascii_len - 1); 3732 3636 3733 3637 /* replace non-printable or non-ASCII characters with spaces */ 3734 3638 for (i = 0; i < ret; i++) ··· 5194 5098 struct request_queue *q = sdev->request_queue; 5195 5099 5196 5100 blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1); 5197 - if (hba->quirks & UFSHCD_QUIRK_4KB_DMA_ALIGNMENT) 5198 - blk_queue_update_dma_alignment(q, SZ_4K - 1); 5101 + 5199 5102 /* 5200 5103 * Block runtime-pm until all consumers are added. 5201 5104 * Refer ufshcd_setup_links(). ··· 5209 5114 * resume, causing more messages and so on. 5210 5115 */ 5211 5116 sdev->silence_suspend = 1; 5117 + 5118 + if (hba->vops && hba->vops->config_scsi_dev) 5119 + hba->vops->config_scsi_dev(sdev); 5212 5120 5213 5121 ufshcd_crypto_register(hba, q); 5214 5122 ··· 5503 5405 lrbp->utr_descriptor_ptr->header.ocs = ocs; 5504 5406 } 5505 5407 complete(hba->dev_cmd.complete); 5506 - ufshcd_clk_scaling_update_busy(hba); 5507 5408 } 5508 5409 } 5509 5410 } ··· 5615 5518 * For those cmds of which the cqes are not present 5616 5519 * in the cq, complete them explicitly. 
5617 5520 */ 5521 + spin_lock_irqsave(&hwq->cq_lock, flags); 5618 5522 if (cmd && !test_bit(SCMD_STATE_COMPLETE, &cmd->state)) { 5619 - spin_lock_irqsave(&hwq->cq_lock, flags); 5620 5523 set_host_byte(cmd, DID_REQUEUE); 5621 5524 ufshcd_release_scsi_cmd(hba, lrbp); 5622 5525 scsi_done(cmd); 5623 - spin_unlock_irqrestore(&hwq->cq_lock, flags); 5624 5526 } 5527 + spin_unlock_irqrestore(&hwq->cq_lock, flags); 5625 5528 } else { 5626 5529 ufshcd_mcq_poll_cqe_lock(hba, hwq); 5627 5530 } ··· 7021 6924 spin_lock_irqsave(host->host_lock, flags); 7022 6925 7023 6926 task_tag = req->tag; 7024 - WARN_ONCE(task_tag < 0 || task_tag >= hba->nutmrs, "Invalid tag %d\n", 7025 - task_tag); 7026 6927 hba->tmf_rqs[req->tag] = req; 7027 6928 treq->upiu_req.req_header.task_tag = task_tag; 7028 6929 ··· 7594 7499 bool outstanding; 7595 7500 u32 reg; 7596 7501 7597 - WARN_ONCE(tag < 0, "Invalid tag %d\n", tag); 7598 - 7599 7502 ufshcd_hold(hba); 7600 7503 7601 7504 if (!is_mcq_enabled(hba)) { ··· 7720 7627 hba->silence_err_logs = false; 7721 7628 7722 7629 /* scale up clocks to max frequency before full reinitialization */ 7723 - ufshcd_scale_clks(hba, true); 7630 + ufshcd_scale_clks(hba, ULONG_MAX, true); 7724 7631 7725 7632 err = ufshcd_hba_enable(hba); 7726 7633 ··· 7808 7715 struct ufs_hba *hba; 7809 7716 7810 7717 hba = shost_priv(cmd->device->host); 7718 + 7719 + /* 7720 + * If runtime PM sent SSU and got a timeout, scsi_error_handler is 7721 + * stuck in this function waiting for flush_work(&hba->eh_work). And 7722 + * ufshcd_err_handler(eh_work) is stuck waiting for runtime PM. Do 7723 + * ufshcd_link_recovery instead of eh_work to prevent deadlock. 
7724 + */ 7725 + if (hba->pm_op_in_progress) { 7726 + if (ufshcd_link_recovery(hba)) 7727 + err = FAILED; 7728 + 7729 + return err; 7730 + } 7811 7731 7812 7732 spin_lock_irqsave(hba->host->host_lock, flags); 7813 7733 hba->force_reset = true; ··· 8829 8723 if (ret) 8830 8724 goto out; 8831 8725 8832 - if (hba->quirks & UFSHCD_QUIRK_REINIT_AFTER_MAX_GEAR_SWITCH) { 8726 + if (!hba->pm_op_in_progress && 8727 + (hba->quirks & UFSHCD_QUIRK_REINIT_AFTER_MAX_GEAR_SWITCH)) { 8833 8728 /* Reset the device and controller before doing reinit */ 8834 8729 ufshcd_device_reset(hba); 8835 8730 ufshcd_hba_stop(hba); ··· 9266 9159 dev_dbg(dev, "%s: clk: %s, rate: %lu\n", __func__, 9267 9160 clki->name, clk_get_rate(clki->clk)); 9268 9161 } 9162 + 9163 + /* Set Max. frequency for all clocks */ 9164 + if (hba->use_pm_opp) { 9165 + ret = ufshcd_opp_set_rate(hba, ULONG_MAX); 9166 + if (ret) { 9167 + dev_err(hba->dev, "%s: failed to set OPP: %d", __func__, 9168 + ret); 9169 + goto out; 9170 + } 9171 + } 9172 + 9269 9173 out: 9270 9174 return ret; 9271 9175 }
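Much of the ufshcd churn above replaces the old two-point min/max clock table with OPP lookups: `ufshcd_opp_set_rate()` asks `dev_pm_opp_find_freq_floor_indexed()` for the highest OPP not exceeding the requested rate, and `ufshcd_scale_clks(hba, ULONG_MAX, true)` relies on that floor behaviour to land on the fastest OPP. A toy floor search over an ascending frequency table illustrates the selection rule (table contents and the `0` sentinel for "no OPP low enough" are assumptions of this sketch; the kernel API returns an error pointer instead):

```c
/* Toy analogue of dev_pm_opp_find_freq_floor_indexed(): given an
 * ascending table of OPP frequencies, return the largest entry <= freq,
 * or 0 when the request is below the lowest OPP. */
static unsigned long opp_floor(const unsigned long *tbl, int n,
			       unsigned long freq)
{
	unsigned long best = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (tbl[i] > freq)
			break;
		best = tbl[i];
	}
	return best;
}
```

Passing `ULONG_MAX` as `freq` always selects the last (highest) table entry, which is exactly how the error-handler path scales clocks to maximum before reinitialization.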
+2 -3
drivers/ufs/host/cdns-pltfrm.c
··· 305 305 * 306 306 * Return: 0 (success). 307 307 */ 308 - static int cdns_ufs_pltfrm_remove(struct platform_device *pdev) 308 + static void cdns_ufs_pltfrm_remove(struct platform_device *pdev) 309 309 { 310 310 struct ufs_hba *hba = platform_get_drvdata(pdev); 311 311 312 312 ufshcd_remove(hba); 313 - return 0; 314 313 } 315 314 316 315 static const struct dev_pm_ops cdns_ufs_dev_pm_ops = { ··· 321 322 322 323 static struct platform_driver cdns_ufs_pltfrm_driver = { 323 324 .probe = cdns_ufs_pltfrm_probe, 324 - .remove = cdns_ufs_pltfrm_remove, 325 + .remove_new = cdns_ufs_pltfrm_remove, 325 326 .driver = { 326 327 .name = "cdns-ufshcd", 327 328 .pm = &cdns_ufs_dev_pm_ops,
+2 -4
drivers/ufs/host/tc-dwc-g210-pltfrm.c
··· 74 74 * @pdev: pointer to platform device structure 75 75 * 76 76 */ 77 - static int tc_dwc_g210_pltfm_remove(struct platform_device *pdev) 77 + static void tc_dwc_g210_pltfm_remove(struct platform_device *pdev) 78 78 { 79 79 struct ufs_hba *hba = platform_get_drvdata(pdev); 80 80 81 81 pm_runtime_get_sync(&(pdev)->dev); 82 82 ufshcd_remove(hba); 83 - 84 - return 0; 85 83 } 86 84 87 85 static const struct dev_pm_ops tc_dwc_g210_pltfm_pm_ops = { ··· 89 91 90 92 static struct platform_driver tc_dwc_g210_pltfm_driver = { 91 93 .probe = tc_dwc_g210_pltfm_probe, 92 - .remove = tc_dwc_g210_pltfm_remove, 94 + .remove_new = tc_dwc_g210_pltfm_remove, 93 95 .driver = { 94 96 .name = "tc-dwc-g210-pltfm", 95 97 .pm = &tc_dwc_g210_pltfm_pm_ops,
+2 -4
drivers/ufs/host/ti-j721e-ufs.c
··· 65 65 return ret; 66 66 } 67 67 68 - static int ti_j721e_ufs_remove(struct platform_device *pdev) 68 + static void ti_j721e_ufs_remove(struct platform_device *pdev) 69 69 { 70 70 of_platform_depopulate(&pdev->dev); 71 71 pm_runtime_put_sync(&pdev->dev); 72 72 pm_runtime_disable(&pdev->dev); 73 - 74 - return 0; 75 73 } 76 74 77 75 static const struct of_device_id ti_j721e_ufs_of_match[] = { ··· 83 85 84 86 static struct platform_driver ti_j721e_ufs_driver = { 85 87 .probe = ti_j721e_ufs_probe, 86 - .remove = ti_j721e_ufs_remove, 88 + .remove_new = ti_j721e_ufs_remove, 87 89 .driver = { 88 90 .name = "ti-j721e-ufs", 89 91 .of_match_table = ti_j721e_ufs_of_match,
+9 -6
drivers/ufs/host/ufs-exynos.c
··· 1511 1511 return 0; 1512 1512 } 1513 1513 1514 + static void exynos_ufs_config_scsi_dev(struct scsi_device *sdev) 1515 + { 1516 + blk_queue_update_dma_alignment(sdev->request_queue, SZ_4K - 1); 1517 + } 1518 + 1514 1519 static int fsd_ufs_post_link(struct exynos_ufs *ufs) 1515 1520 { 1516 1521 int i; ··· 1584 1579 .hibern8_notify = exynos_ufs_hibern8_notify, 1585 1580 .suspend = exynos_ufs_suspend, 1586 1581 .resume = exynos_ufs_resume, 1582 + .config_scsi_dev = exynos_ufs_config_scsi_dev, 1587 1583 }; 1588 1584 1589 1585 static struct ufs_hba_variant_ops ufs_hba_exynosauto_vh_ops = { ··· 1611 1605 return err; 1612 1606 } 1613 1607 1614 - static int exynos_ufs_remove(struct platform_device *pdev) 1608 + static void exynos_ufs_remove(struct platform_device *pdev) 1615 1609 { 1616 1610 struct ufs_hba *hba = platform_get_drvdata(pdev); 1617 1611 struct exynos_ufs *ufs = ufshcd_get_variant(hba); ··· 1621 1615 1622 1616 phy_power_off(ufs->phy); 1623 1617 phy_exit(ufs->phy); 1624 - 1625 - return 0; 1626 1618 } 1627 1619 1628 1620 static struct exynos_ufs_uic_attr exynos7_uic_attr = { ··· 1684 1680 UFSHCI_QUIRK_SKIP_RESET_INTR_AGGR | 1685 1681 UFSHCD_QUIRK_BROKEN_OCS_FATAL_ERROR | 1686 1682 UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL | 1687 - UFSHCD_QUIRK_SKIP_DEF_UNIPRO_TIMEOUT_SETTING | 1688 - UFSHCD_QUIRK_4KB_DMA_ALIGNMENT, 1683 + UFSHCD_QUIRK_SKIP_DEF_UNIPRO_TIMEOUT_SETTING, 1689 1684 .opts = EXYNOS_UFS_OPT_HAS_APB_CLK_CTRL | 1690 1685 EXYNOS_UFS_OPT_BROKEN_AUTO_CLK_CTRL | 1691 1686 EXYNOS_UFS_OPT_BROKEN_RX_SEL_IDX | ··· 1759 1756 1760 1757 static struct platform_driver exynos_ufs_pltform = { 1761 1758 .probe = exynos_ufs_probe, 1762 - .remove = exynos_ufs_remove, 1759 + .remove_new = exynos_ufs_remove, 1763 1760 .driver = { 1764 1761 .name = "exynos-ufshc", 1765 1762 .pm = &exynos_ufs_pm_ops,
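With the hunk above, exynos applies its 4 KiB DMA restriction per-sdev through the new `config_scsi_dev` hook instead of the global `UFSHCD_QUIRK_4KB_DMA_ALIGNMENT`. `blk_queue_update_dma_alignment()` takes a mask of `SZ_4K - 1`, i.e. the low address bits that must be clear; a minimal sketch of that mask convention:

```c
#define SZ_4K 4096UL

/* An address satisfies an alignment mask when its masked bits are zero;
 * blk_queue_update_dma_alignment(q, SZ_4K - 1) encodes 4 KiB alignment. */
static int dma_addr_ok(unsigned long addr, unsigned long mask)
{
	return (addr & mask) == 0;
}
```

Expressing alignment as `size - 1` only works for power-of-two sizes, which is why the block layer API is mask-based rather than size-based.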
+2 -3
drivers/ufs/host/ufs-hisi.c
··· 575 575 return ufshcd_pltfrm_init(pdev, of_id->data); 576 576 } 577 577 578 - static int ufs_hisi_remove(struct platform_device *pdev) 578 + static void ufs_hisi_remove(struct platform_device *pdev) 579 579 { 580 580 struct ufs_hba *hba = platform_get_drvdata(pdev); 581 581 582 582 ufshcd_remove(hba); 583 - return 0; 584 583 } 585 584 586 585 static const struct dev_pm_ops ufs_hisi_pm_ops = { ··· 591 592 592 593 static struct platform_driver ufs_hisi_pltform = { 593 594 .probe = ufs_hisi_probe, 594 - .remove = ufs_hisi_remove, 595 + .remove_new = ufs_hisi_remove, 595 596 .driver = { 596 597 .name = "ufshcd-hisi", 597 598 .pm = &ufs_hisi_pm_ops,
+3 -4
drivers/ufs/host/ufs-mediatek.c
··· 806 806 return 0; 807 807 } 808 808 809 - err = ufshcd_populate_vreg(dev, vcc_name, &info->vcc); 809 + err = ufshcd_populate_vreg(dev, vcc_name, &info->vcc, false); 810 810 if (err) 811 811 return err; 812 812 ··· 1748 1748 * 1749 1749 * Always return 0 1750 1750 */ 1751 - static int ufs_mtk_remove(struct platform_device *pdev) 1751 + static void ufs_mtk_remove(struct platform_device *pdev) 1752 1752 { 1753 1753 struct ufs_hba *hba = platform_get_drvdata(pdev); 1754 1754 1755 1755 pm_runtime_get_sync(&(pdev)->dev); 1756 1756 ufshcd_remove(hba); 1757 - return 0; 1758 1757 } 1759 1758 1760 1759 #ifdef CONFIG_PM_SLEEP ··· 1817 1818 1818 1819 static struct platform_driver ufs_mtk_pltform = { 1819 1820 .probe = ufs_mtk_probe, 1820 - .remove = ufs_mtk_remove, 1821 + .remove_new = ufs_mtk_remove, 1821 1822 .driver = { 1822 1823 .name = "ufshcd-mtk", 1823 1824 .pm = &ufs_mtk_pm_ops,
+166 -50
drivers/ufs/host/ufs-qcom.c
··· 93 93 static struct ufs_qcom_host *ufs_qcom_hosts[MAX_UFS_QCOM_HOSTS]; 94 94 95 95 static void ufs_qcom_get_default_testbus_cfg(struct ufs_qcom_host *host); 96 - static int ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div(struct ufs_hba *hba, 97 - u32 clk_cycles); 96 + static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, bool is_scale_up); 98 97 99 98 static struct ufs_qcom_host *rcdev_to_ufs_host(struct reset_controller_dev *rcd) 100 99 { ··· 459 460 return ret; 460 461 } 461 462 462 - phy_set_mode_ext(phy, PHY_MODE_UFS_HS_B, host->hs_gear); 463 + phy_set_mode_ext(phy, PHY_MODE_UFS_HS_B, host->phy_gear); 463 464 464 465 /* power on phy - start serdes and phy's power and clocks */ 465 466 ret = phy_power_on(phy); ··· 527 528 return err; 528 529 } 529 530 530 - /* 531 + /** 532 + * ufs_qcom_cfg_timers - Configure ufs qcom cfg timers 533 + * 534 + * @hba: host controller instance 535 + * @gear: Current operating gear 536 + * @hs: current power mode 537 + * @rate: current operating rate (A or B) 538 + * @update_link_startup_timer: indicate if link_start ongoing 539 + * @is_pre_scale_up: flag to check if pre scale up condition. 531 540 * Return: zero for success and non-zero in case of a failure. 532 541 */ 533 542 static int ufs_qcom_cfg_timers(struct ufs_hba *hba, u32 gear, 534 - u32 hs, u32 rate, bool update_link_startup_timer) 543 + u32 hs, u32 rate, bool update_link_startup_timer, 544 + bool is_pre_scale_up) 535 545 { 536 546 struct ufs_qcom_host *host = ufshcd_get_variant(hba); 537 547 struct ufs_clk_info *clki; ··· 571 563 /* 572 564 * The Qunipro controller does not use following registers: 573 565 * SYS1CLK_1US_REG, TX_SYMBOL_CLK_1US_REG, CLK_NS_REG & 574 - * UFS_REG_PA_LINK_STARTUP_TIMER 575 - * But UTP controller uses SYS1CLK_1US_REG register for Interrupt 566 + * UFS_REG_PA_LINK_STARTUP_TIMER. 567 + * However UTP controller uses SYS1CLK_1US_REG register for Interrupt 576 568 * Aggregation logic. 
577 - */ 578 - if (ufs_qcom_cap_qunipro(host) && !ufshcd_is_intr_aggr_allowed(hba)) 569 + * It is mandatory to write SYS1CLK_1US_REG register on UFS host 570 + * controller V4.0.0 onwards. 571 + */ 572 + if (host->hw_ver.major < 4 && ufs_qcom_cap_qunipro(host) && 573 + !ufshcd_is_intr_aggr_allowed(hba)) 579 574 return 0; 580 575 581 576 if (gear == 0) { ··· 587 576 } 588 577 589 578 list_for_each_entry(clki, &hba->clk_list_head, list) { 590 - if (!strcmp(clki->name, "core_clk")) 591 - core_clk_rate = clk_get_rate(clki->clk); 579 + if (!strcmp(clki->name, "core_clk")) { 580 + if (is_pre_scale_up) 581 + core_clk_rate = clki->max_freq; 582 + else 583 + core_clk_rate = clk_get_rate(clki->clk); 584 + break; 585 + } 586 + 592 587 } 593 588 594 589 /* If frequency is smaller than 1MHz, set to 1MHz */ ··· 696 679 switch (status) { 697 680 case PRE_CHANGE: 698 681 if (ufs_qcom_cfg_timers(hba, UFS_PWM_G1, SLOWAUTO_MODE, 699 - 0, true)) { 682 + 0, true, false)) { 700 683 dev_err(hba->dev, "%s: ufs_qcom_cfg_timers() failed\n", 701 684 __func__); 702 685 return -EINVAL; 703 686 } 704 687 705 - if (ufs_qcom_cap_qunipro(host)) 706 - /* 707 - * set unipro core clock cycles to 150 & clear clock 708 - * divider 709 - */ 710 - err = ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div(hba, 711 - 150); 712 - 688 + if (ufs_qcom_cap_qunipro(host)) { 689 + err = ufs_qcom_set_core_clk_ctrl(hba, true); 690 + if (err) 691 + dev_err(hba->dev, "cfg core clk ctrl failed\n"); 692 + } 713 693 /* 714 694 * Some UFS devices (and may be host) have issues if LCC is 715 695 * enabled. So we are setting PA_Local_TX_LCC_Enable to 0 ··· 923 909 return ret; 924 910 } 925 911 926 - /* Use the agreed gear */ 927 - host->hs_gear = dev_req_params->gear_tx; 912 + /* 913 + * Update phy_gear only when the gears are scaled to a higher value. This is 914 + * because, the PHY gear settings are backwards compatible and we only need to 915 + * change the PHY gear settings while scaling to higher gears. 
916 + */ 917 + if (dev_req_params->gear_tx > host->phy_gear) 918 + host->phy_gear = dev_req_params->gear_tx; 928 919 929 920 /* enable the device ref clock before changing to HS mode */ 930 921 if (!ufshcd_is_hs_mode(&hba->pwr_info) && ··· 945 926 case POST_CHANGE: 946 927 if (ufs_qcom_cfg_timers(hba, dev_req_params->gear_rx, 947 928 dev_req_params->pwr_rx, 948 - dev_req_params->hs_rate, false)) { 929 + dev_req_params->hs_rate, false, false)) { 949 930 dev_err(hba->dev, "%s: ufs_qcom_cfg_timers() failed\n", 950 931 __func__); 951 932 /* ··· 1296 1277 * Power up the PHY using the minimum supported gear (UFS_HS_G2). 1297 1278 * Switching to max gear will be performed during reinit if supported. 1298 1279 */ 1299 - host->hs_gear = UFS_HS_G2; 1280 + host->phy_gear = UFS_HS_G2; 1300 1281 1301 1282 return 0; 1302 1283 ··· 1315 1296 phy_exit(host->generic_phy); 1316 1297 } 1317 1298 1318 - static int ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div(struct ufs_hba *hba, 1319 - u32 clk_cycles) 1299 + /** 1300 + * ufs_qcom_set_clk_40ns_cycles - Configure 40ns clk cycles 1301 + * 1302 + * @hba: host controller instance 1303 + * @cycles_in_1us: No of cycles in 1us to be configured 1304 + * 1305 + * Returns error if dme get/set configuration for 40ns fails 1306 + * and returns zero on success. 1307 + */ 1308 + static int ufs_qcom_set_clk_40ns_cycles(struct ufs_hba *hba, 1309 + u32 cycles_in_1us) 1320 1310 { 1311 + struct ufs_qcom_host *host = ufshcd_get_variant(hba); 1312 + u32 cycles_in_40ns; 1313 + u32 reg; 1321 1314 int err; 1322 - u32 core_clk_ctrl_reg; 1323 1315 1324 - if (clk_cycles > DME_VS_CORE_CLK_CTRL_MAX_CORE_CLK_1US_CYCLES_MASK) 1316 + /* 1317 + * UFS host controller V4.0.0 onwards needs to program 1318 + * PA_VS_CORE_CLK_40NS_CYCLES attribute per programmed 1319 + * frequency of unipro core clk of UFS host controller. 
1320 + */ 1321 + if (host->hw_ver.major < 4) 1322 + return 0; 1323 + 1324 + /* 1325 + * Generic formulae for cycles_in_40ns = (freq_unipro/25) is not 1326 + * applicable for all frequencies. For ex: ceil(37.5 MHz/25) will 1327 + * be 2 and ceil(403 MHZ/25) will be 17 whereas Hardware 1328 + * specification expect to be 16. Hence use exact hardware spec 1329 + * mandated value for cycles_in_40ns instead of calculating using 1330 + * generic formulae. 1331 + */ 1332 + switch (cycles_in_1us) { 1333 + case UNIPRO_CORE_CLK_FREQ_403_MHZ: 1334 + cycles_in_40ns = 16; 1335 + break; 1336 + case UNIPRO_CORE_CLK_FREQ_300_MHZ: 1337 + cycles_in_40ns = 12; 1338 + break; 1339 + case UNIPRO_CORE_CLK_FREQ_201_5_MHZ: 1340 + cycles_in_40ns = 8; 1341 + break; 1342 + case UNIPRO_CORE_CLK_FREQ_150_MHZ: 1343 + cycles_in_40ns = 6; 1344 + break; 1345 + case UNIPRO_CORE_CLK_FREQ_100_MHZ: 1346 + cycles_in_40ns = 4; 1347 + break; 1348 + case UNIPRO_CORE_CLK_FREQ_75_MHZ: 1349 + cycles_in_40ns = 3; 1350 + break; 1351 + case UNIPRO_CORE_CLK_FREQ_37_5_MHZ: 1352 + cycles_in_40ns = 2; 1353 + break; 1354 + default: 1355 + dev_err(hba->dev, "UNIPRO clk freq %u MHz not supported\n", 1356 + cycles_in_1us); 1325 1357 return -EINVAL; 1358 + } 1359 + 1360 + err = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_VS_CORE_CLK_40NS_CYCLES), &reg); 1361 + if (err) 1362 + return err; 1363 + 1364 + reg &= ~PA_VS_CORE_CLK_40NS_CYCLES_MASK; 1365 + reg |= cycles_in_40ns; 1366 + 1367 + return ufshcd_dme_set(hba, UIC_ARG_MIB(PA_VS_CORE_CLK_40NS_CYCLES), reg); 1368 + } 1369 + 1370 + static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, bool is_scale_up) 1371 + { 1372 + struct ufs_qcom_host *host = ufshcd_get_variant(hba); 1373 + struct list_head *head = &hba->clk_list_head; 1374 + struct ufs_clk_info *clki; 1375 + u32 cycles_in_1us; 1376 + u32 core_clk_ctrl_reg; 1377 + int err; 1378 + 1379 + list_for_each_entry(clki, head, list) { 1380 + if (!IS_ERR_OR_NULL(clki->clk) && 1381 + !strcmp(clki->name, "core_clk_unipro")) { 1382 + 
if (is_scale_up) 1383 + cycles_in_1us = ceil(clki->max_freq, (1000 * 1000)); 1384 + else 1385 + cycles_in_1us = ceil(clk_get_rate(clki->clk), (1000 * 1000)); 1386 + break; 1387 + } 1388 + } 1326 1389 1327 1390 err = ufshcd_dme_get(hba, 1328 1391 UIC_ARG_MIB(DME_VS_CORE_CLK_CTRL), ··· 1412 1311 if (err) 1413 1312 return err; 1414 1313 1415 - core_clk_ctrl_reg &= ~DME_VS_CORE_CLK_CTRL_MAX_CORE_CLK_1US_CYCLES_MASK; 1416 - core_clk_ctrl_reg |= clk_cycles; 1314 + /* Bit mask is different for UFS host controller V4.0.0 onwards */ 1315 + if (host->hw_ver.major >= 4) { 1316 + if (!FIELD_FIT(CLK_1US_CYCLES_MASK_V4, cycles_in_1us)) 1317 + return -ERANGE; 1318 + core_clk_ctrl_reg &= ~CLK_1US_CYCLES_MASK_V4; 1319 + core_clk_ctrl_reg |= FIELD_PREP(CLK_1US_CYCLES_MASK_V4, cycles_in_1us); 1320 + } else { 1321 + if (!FIELD_FIT(CLK_1US_CYCLES_MASK, cycles_in_1us)) 1322 + return -ERANGE; 1323 + core_clk_ctrl_reg &= ~CLK_1US_CYCLES_MASK; 1324 + core_clk_ctrl_reg |= FIELD_PREP(CLK_1US_CYCLES_MASK, cycles_in_1us); 1325 + } 1417 1326 1418 1327 /* Clear CORE_CLK_DIV_EN */ 1419 1328 core_clk_ctrl_reg &= ~DME_VS_CORE_CLK_CTRL_CORE_CLK_DIV_EN_BIT; 1420 1329 1421 - return ufshcd_dme_set(hba, 1330 + err = ufshcd_dme_set(hba, 1422 1331 UIC_ARG_MIB(DME_VS_CORE_CLK_CTRL), 1423 1332 core_clk_ctrl_reg); 1333 + if (err) 1334 + return err; 1335 + 1336 + /* Configure unipro core clk 40ns attribute */ 1337 + return ufs_qcom_set_clk_40ns_cycles(hba, cycles_in_1us); 1424 1338 } 1425 1339 1426 1340 static int ufs_qcom_clk_scale_up_pre_change(struct ufs_hba *hba) 1427 1341 { 1428 - /* nothing to do as of now */ 1429 - return 0; 1430 - } 1431 - 1432 - static int ufs_qcom_clk_scale_up_post_change(struct ufs_hba *hba) 1433 - { 1434 1342 struct ufs_qcom_host *host = ufshcd_get_variant(hba); 1343 + struct ufs_pa_layer_attr *attr = &host->dev_req_params; 1344 + int ret; 1435 1345 1436 1346 if (!ufs_qcom_cap_qunipro(host)) 1437 1347 return 0; 1438 1348 1439 - /* set unipro core clock cycles to 150 and clear 
clock divider */ 1440 - return ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div(hba, 150); 1349 + ret = ufs_qcom_cfg_timers(hba, attr->gear_rx, attr->pwr_rx, 1350 + attr->hs_rate, false, true); 1351 + if (ret) { 1352 + dev_err(hba->dev, "%s ufs cfg timer failed\n", __func__); 1353 + return ret; 1354 + } 1355 + /* set unipro core clock attributes and clear clock divider */ 1356 + return ufs_qcom_set_core_clk_ctrl(hba, true); 1357 + } 1358 + 1359 + static int ufs_qcom_clk_scale_up_post_change(struct ufs_hba *hba) 1360 + { 1361 + return 0; 1441 1362 } 1442 1363 1443 1364 static int ufs_qcom_clk_scale_down_pre_change(struct ufs_hba *hba) ··· 1494 1371 if (!ufs_qcom_cap_qunipro(host)) 1495 1372 return 0; 1496 1373 1497 - /* set unipro core clock cycles to 75 and clear clock divider */ 1498 - return ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div(hba, 75); 1374 + /* set unipro core clock attributes and clear clock divider */ 1375 + return ufs_qcom_set_core_clk_ctrl(hba, false); 1499 1376 } 1500 1377 1501 1378 static int ufs_qcom_clk_scale_notify(struct ufs_hba *hba, 1502 1379 bool scale_up, enum ufs_notify_change_status status) 1503 1380 { 1504 1381 struct ufs_qcom_host *host = ufshcd_get_variant(hba); 1505 - struct ufs_pa_layer_attr *dev_req_params = &host->dev_req_params; 1506 1382 int err = 0; 1507 1383 1508 1384 /* check the host controller state before sending hibern8 cmd */ ··· 1531 1409 return err; 1532 1410 } 1533 1411 1534 - ufs_qcom_cfg_timers(hba, 1535 - dev_req_params->gear_rx, 1536 - dev_req_params->pwr_rx, 1537 - dev_req_params->hs_rate, 1538 - false); 1539 1412 ufs_qcom_icc_update_bw(host); 1540 1413 ufshcd_uic_hibern8_exit(hba); 1541 1414 } ··· 2027 1910 * 2028 1911 * Always returns 0 2029 1912 */ 2030 - static int ufs_qcom_remove(struct platform_device *pdev) 1913 + static void ufs_qcom_remove(struct platform_device *pdev) 2031 1914 { 2032 1915 struct ufs_hba *hba = platform_get_drvdata(pdev); 2033 1916 2034 1917 pm_runtime_get_sync(&(pdev)->dev); 2035 1918 
ufshcd_remove(hba); 2036 1919 platform_msi_domain_free_irqs(hba->dev); 2037 - return 0; 2038 1920 } 2039 1921 2040 1922 static const struct of_device_id ufs_qcom_of_match[] __maybe_unused = { ··· 2065 1949 2066 1950 static struct platform_driver ufs_qcom_pltform = { 2067 1951 .probe = ufs_qcom_probe, 2068 - .remove = ufs_qcom_remove, 1952 + .remove_new = ufs_qcom_remove, 2069 1953 .driver = { 2070 1954 .name = "ufshcd-qcom", 2071 1955 .pm = &ufs_qcom_pm_ops,
+17 -3
drivers/ufs/host/ufs-qcom.h
··· 129 129 #define PA_VS_CONFIG_REG1 0x9000 130 130 #define DME_VS_CORE_CLK_CTRL 0xD002 131 131 /* bit and mask definitions for DME_VS_CORE_CLK_CTRL attribute */ 132 - #define DME_VS_CORE_CLK_CTRL_CORE_CLK_DIV_EN_BIT BIT(8) 133 - #define DME_VS_CORE_CLK_CTRL_MAX_CORE_CLK_1US_CYCLES_MASK 0xFF 132 + #define CLK_1US_CYCLES_MASK_V4 GENMASK(27, 16) 133 + #define CLK_1US_CYCLES_MASK GENMASK(7, 0) 134 + #define DME_VS_CORE_CLK_CTRL_CORE_CLK_DIV_EN_BIT BIT(8) 135 + #define PA_VS_CORE_CLK_40NS_CYCLES 0x9007 136 + #define PA_VS_CORE_CLK_40NS_CYCLES_MASK GENMASK(6, 0) 137 + 138 + 139 + /* QCOM UFS host controller core clk frequencies */ 140 + #define UNIPRO_CORE_CLK_FREQ_37_5_MHZ 38 141 + #define UNIPRO_CORE_CLK_FREQ_75_MHZ 75 142 + #define UNIPRO_CORE_CLK_FREQ_100_MHZ 100 143 + #define UNIPRO_CORE_CLK_FREQ_150_MHZ 150 144 + #define UNIPRO_CORE_CLK_FREQ_300_MHZ 300 145 + #define UNIPRO_CORE_CLK_FREQ_201_5_MHZ 202 146 + #define UNIPRO_CORE_CLK_FREQ_403_MHZ 403 134 147 135 148 static inline void 136 149 ufs_qcom_get_controller_revision(struct ufs_hba *hba, ··· 240 227 241 228 struct gpio_desc *device_reset; 242 229 243 - u32 hs_gear; 230 + u32 phy_gear; 244 231 245 232 bool esi_enabled; 246 233 }; ··· 257 244 #define ufs_qcom_is_link_off(hba) ufshcd_is_link_off(hba) 258 245 #define ufs_qcom_is_link_active(hba) ufshcd_is_link_active(hba) 259 246 #define ufs_qcom_is_link_hibern8(hba) ufshcd_is_link_hibern8(hba) 247 + #define ceil(freq, div) ((freq) % (div) == 0 ? ((freq)/(div)) : ((freq)/(div) + 1)) 260 248 261 249 int ufs_qcom_testbus_config(struct ufs_qcom_host *host); 262 250
+2 -4
drivers/ufs/host/ufs-renesas.c
··· 388 388 return ufshcd_pltfrm_init(pdev, &ufs_renesas_vops); 389 389 } 390 390 391 - static int ufs_renesas_remove(struct platform_device *pdev) 391 + static void ufs_renesas_remove(struct platform_device *pdev) 392 392 { 393 393 struct ufs_hba *hba = platform_get_drvdata(pdev); 394 394 395 395 ufshcd_remove(hba); 396 - 397 - return 0; 398 396 } 399 397 400 398 static struct platform_driver ufs_renesas_platform = { 401 399 .probe = ufs_renesas_probe, 402 - .remove = ufs_renesas_remove, 400 + .remove_new = ufs_renesas_remove, 403 401 .driver = { 404 402 .name = "ufshcd-renesas", 405 403 .of_match_table = of_match_ptr(ufs_renesas_of_match),
+2 -3
drivers/ufs/host/ufs-sprd.c
··· 425 425 return err; 426 426 } 427 427 428 - static int ufs_sprd_remove(struct platform_device *pdev) 428 + static void ufs_sprd_remove(struct platform_device *pdev) 429 429 { 430 430 struct ufs_hba *hba = platform_get_drvdata(pdev); 431 431 432 432 pm_runtime_get_sync(&(pdev)->dev); 433 433 ufshcd_remove(hba); 434 - return 0; 435 434 } 436 435 437 436 static const struct dev_pm_ops ufs_sprd_pm_ops = { ··· 442 443 443 444 static struct platform_driver ufs_sprd_pltform = { 444 445 .probe = ufs_sprd_probe, 445 - .remove = ufs_sprd_remove, 446 + .remove_new = ufs_sprd_remove, 446 447 .driver = { 447 448 .name = "ufshcd-sprd", 448 449 .pm = &ufs_sprd_pm_ops,
+3 -2
drivers/ufs/host/ufshcd-pci.c
··· 58 58 int err = 0; 59 59 size_t len; 60 60 61 - obj = acpi_evaluate_dsm(ACPI_HANDLE(dev), &intel_dsm_guid, 0, fn, NULL); 61 + obj = acpi_evaluate_dsm_typed(ACPI_HANDLE(dev), &intel_dsm_guid, 0, fn, NULL, 62 + ACPI_TYPE_BUFFER); 62 63 if (!obj) 63 64 return -EOPNOTSUPP; 64 65 65 - if (obj->type != ACPI_TYPE_BUFFER || obj->buffer.length < 1) { 66 + if (obj->buffer.length < 1) { 66 67 err = -EINVAL; 67 68 goto out; 68 69 }
+88 -5
drivers/ufs/host/ufshcd-pltfrm.c
··· 10 10 11 11 #include <linux/module.h> 12 12 #include <linux/platform_device.h> 13 + #include <linux/pm_opp.h> 13 14 #include <linux/pm_runtime.h> 14 15 #include <linux/of.h> 15 16 ··· 122 121 123 122 #define MAX_PROP_SIZE 32 124 123 int ufshcd_populate_vreg(struct device *dev, const char *name, 125 - struct ufs_vreg **out_vreg) 124 + struct ufs_vreg **out_vreg, bool skip_current) 126 125 { 127 126 char prop_name[MAX_PROP_SIZE]; 128 127 struct ufs_vreg *vreg = NULL; ··· 147 146 vreg->name = devm_kstrdup(dev, name, GFP_KERNEL); 148 147 if (!vreg->name) 149 148 return -ENOMEM; 149 + 150 + if (skip_current) { 151 + vreg->max_uA = 0; 152 + goto out; 153 + } 150 154 151 155 snprintf(prop_name, MAX_PROP_SIZE, "%s-max-microamp", name); 152 156 if (of_property_read_u32(np, prop_name, &vreg->max_uA)) { ··· 181 175 struct device *dev = hba->dev; 182 176 struct ufs_vreg_info *info = &hba->vreg_info; 183 177 184 - err = ufshcd_populate_vreg(dev, "vdd-hba", &info->vdd_hba); 178 + err = ufshcd_populate_vreg(dev, "vdd-hba", &info->vdd_hba, true); 185 179 if (err) 186 180 goto out; 187 181 188 - err = ufshcd_populate_vreg(dev, "vcc", &info->vcc); 182 + err = ufshcd_populate_vreg(dev, "vcc", &info->vcc, false); 189 183 if (err) 190 184 goto out; 191 185 192 - err = ufshcd_populate_vreg(dev, "vccq", &info->vccq); 186 + err = ufshcd_populate_vreg(dev, "vccq", &info->vccq, false); 193 187 if (err) 194 188 goto out; 195 189 196 - err = ufshcd_populate_vreg(dev, "vccq2", &info->vccq2); 190 + err = ufshcd_populate_vreg(dev, "vccq2", &info->vccq2, false); 197 191 out: 198 192 return err; 199 193 } ··· 211 205 __func__, ret); 212 206 hba->lanes_per_direction = UFSHCD_DEFAULT_LANES_PER_DIRECTION; 213 207 } 208 + } 209 + 210 + static int ufshcd_parse_operating_points(struct ufs_hba *hba) 211 + { 212 + struct device *dev = hba->dev; 213 + struct device_node *np = dev->of_node; 214 + struct dev_pm_opp_config config = {}; 215 + struct ufs_clk_info *clki; 216 + const char **clk_names; 217 + 
int cnt, i, ret; 218 + 219 + if (!of_find_property(np, "operating-points-v2", NULL)) 220 + return 0; 221 + 222 + if (of_find_property(np, "freq-table-hz", NULL)) { 223 + dev_err(dev, "%s: operating-points and freq-table-hz are incompatible\n", 224 + __func__); 225 + return -EINVAL; 226 + } 227 + 228 + cnt = of_property_count_strings(np, "clock-names"); 229 + if (cnt <= 0) { 230 + dev_err(dev, "%s: Missing clock-names\n", __func__); 231 + return -ENODEV; 232 + } 233 + 234 + /* OPP expects clk_names to be NULL terminated */ 235 + clk_names = devm_kcalloc(dev, cnt + 1, sizeof(*clk_names), GFP_KERNEL); 236 + if (!clk_names) 237 + return -ENOMEM; 238 + 239 + /* 240 + * We still need to get reference to all clocks as the UFS core uses 241 + * them separately. 242 + */ 243 + for (i = 0; i < cnt; i++) { 244 + ret = of_property_read_string_index(np, "clock-names", i, 245 + &clk_names[i]); 246 + if (ret) 247 + return ret; 248 + 249 + clki = devm_kzalloc(dev, sizeof(*clki), GFP_KERNEL); 250 + if (!clki) 251 + return -ENOMEM; 252 + 253 + clki->name = devm_kstrdup(dev, clk_names[i], GFP_KERNEL); 254 + if (!clki->name) 255 + return -ENOMEM; 256 + 257 + if (!strcmp(clk_names[i], "ref_clk")) 258 + clki->keep_link_active = true; 259 + 260 + list_add_tail(&clki->list, &hba->clk_list_head); 261 + } 262 + 263 + config.clk_names = clk_names, 264 + config.config_clks = ufshcd_opp_config_clks; 265 + 266 + ret = devm_pm_opp_set_config(dev, &config); 267 + if (ret) 268 + return ret; 269 + 270 + ret = devm_pm_opp_of_add_table(dev); 271 + if (ret) { 272 + dev_err(dev, "Failed to add OPP table: %d\n", ret); 273 + return ret; 274 + } 275 + 276 + hba->use_pm_opp = true; 277 + 278 + return 0; 214 279 } 215 280 216 281 /** ··· 449 372 } 450 373 451 374 ufshcd_init_lanes_per_dir(hba); 375 + 376 + err = ufshcd_parse_operating_points(hba); 377 + if (err) { 378 + dev_err(dev, "%s: OPP parse failed %d\n", __func__, err); 379 + goto dealloc_host; 380 + } 452 381 453 382 err = ufshcd_init(hba, 
mmio_base, irq); 454 383 if (err) {
+1 -1
drivers/ufs/host/ufshcd-pltfrm.h
··· 32 32 int ufshcd_pltfrm_init(struct platform_device *pdev, 33 33 const struct ufs_hba_variant_ops *vops); 34 34 int ufshcd_populate_vreg(struct device *dev, const char *name, 35 - struct ufs_vreg **out_vreg); 35 + struct ufs_vreg **out_vreg, bool skip_current); 36 36 37 37 #endif /* UFSHCD_PLTFRM_H_ */
+3
drivers/usb/gadget/function/f_tcm.c
··· 1687 1687 1688 1688 .tfc_wwn_attrs = usbg_wwn_attrs, 1689 1689 .tfc_tpg_base_attrs = usbg_base_attrs, 1690 + 1691 + .default_submit_type = TARGET_DIRECT_SUBMIT, 1692 + .direct_submit_supp = 1, 1690 1693 }; 1691 1694 1692 1695 /* Start gadget.c code */
+4 -1
drivers/vhost/scsi.c
··· 909 909 cmd->tvc_prot_sgl_count, GFP_KERNEL)) 910 910 return; 911 911 912 - target_queue_submission(se_cmd); 912 + target_submit(se_cmd); 913 913 } 914 914 915 915 static void ··· 2598 2598 .tfc_wwn_attrs = vhost_scsi_wwn_attrs, 2599 2599 .tfc_tpg_base_attrs = vhost_scsi_tpg_attrs, 2600 2600 .tfc_tpg_attrib_attrs = vhost_scsi_tpg_attrib_attrs, 2601 + 2602 + .default_submit_type = TARGET_QUEUE_SUBMIT, 2603 + .direct_submit_supp = 1, 2601 2604 }; 2602 2605 2603 2606 static int __init vhost_scsi_init(void)
+3
drivers/xen/xen-scsiback.c
··· 1832 1832 .tfc_wwn_attrs = scsiback_wwn_attrs, 1833 1833 .tfc_tpg_base_attrs = scsiback_tpg_attrs, 1834 1834 .tfc_tpg_param_attrs = scsiback_param_attrs, 1835 + 1836 + .default_submit_type = TARGET_DIRECT_SUBMIT, 1837 + .direct_submit_supp = 1, 1835 1838 }; 1836 1839 1837 1840 static const struct xenbus_device_id scsiback_ids[] = {
-17
include/scsi/libsas.h
··· 404 404 return sdev_to_domain_dev(cmd->device); 405 405 } 406 406 407 - void sas_hash_addr(u8 *hashed, const u8 *sas_addr); 408 - 409 407 /* Before calling a notify event, LLDD should use this function 410 408 * when the link is severed (possibly from its tasklet). 411 409 * The idea is that the Class only reads those, while the LLDD, ··· 679 681 extern void sas_resume_ha_no_sync(struct sas_ha_struct *sas_ha); 680 682 extern void sas_suspend_ha(struct sas_ha_struct *sas_ha); 681 683 682 - int sas_set_phy_speed(struct sas_phy *phy, struct sas_phy_linkrates *rates); 683 684 int sas_phy_reset(struct sas_phy *phy, int hard_reset); 684 685 int sas_phy_enable(struct sas_phy *phy, int enable); 685 686 extern int sas_queuecommand(struct Scsi_Host *, struct scsi_cmnd *); ··· 695 698 extern struct scsi_transport_template * 696 699 sas_domain_attach_transport(struct sas_domain_function_template *); 697 700 extern struct device_attribute dev_attr_phy_event_threshold; 698 - 699 - int sas_discover_root_expander(struct domain_device *); 700 - 701 - int sas_ex_revalidate_domain(struct domain_device *); 702 - 703 - void sas_unregister_domain_devices(struct asd_sas_port *port, int gone); 704 - void sas_init_disc(struct sas_discovery *disc, struct asd_sas_port *); 705 - void sas_discover_event(struct asd_sas_port *, enum discover_event ev); 706 - 707 - int sas_discover_end_dev(struct domain_device *); 708 - 709 - void sas_unregister_dev(struct asd_sas_port *port, struct domain_device *); 710 - 711 - void sas_init_dev(struct domain_device *); 712 701 713 702 void sas_task_abort(struct sas_task *); 714 703 int sas_eh_abort_handler(struct scsi_cmnd *cmd);
+3
include/scsi/scsi_host.h
··· 245 245 * midlayer calls this point so that the driver may deallocate 246 246 * and terminate any references to the target. 247 247 * 248 + * Note: This callback is called with the host lock held and hence 249 + * must not sleep. 250 + * 248 251 * Status: OPTIONAL 249 252 */ 250 253 void (* target_destroy)(struct scsi_target *);
+10
include/target/target_core_base.h
··· 108 108 #define SE_MODE_PAGE_BUF 512 109 109 #define SE_SENSE_BUF 96 110 110 111 + enum target_submit_type { 112 + /* Use the fabric driver's default submission type */ 113 + TARGET_FABRIC_DEFAULT_SUBMIT, 114 + /* Submit from the calling context */ 115 + TARGET_DIRECT_SUBMIT, 116 + /* Defer submission to the LIO workqueue */ 117 + TARGET_QUEUE_SUBMIT, 118 + }; 119 + 111 120 /* struct se_hba->hba_flags */ 112 121 enum hba_flags_table { 113 122 HBA_FLAGS_INTERNAL_USE = 0x01, ··· 726 717 u32 unmap_granularity; 727 718 u32 unmap_granularity_alignment; 728 719 u32 max_write_same_len; 720 + u8 submit_type; 729 721 struct se_device *da_dev; 730 722 struct config_group da_group; 731 723 };
+12 -7
include/target/target_core_fabric.h
··· 113 113 struct configfs_attribute **tfc_tpg_nacl_param_attrs; 114 114 115 115 /* 116 - * Set this member variable to true if the SCSI transport protocol 116 + * Set this member variable if the SCSI transport protocol 117 117 * (e.g. iSCSI) requires that the Data-Out buffer is transferred in 118 118 * its entirety before a command is aborted. 119 119 */ 120 - bool write_pending_must_be_called; 120 + unsigned int write_pending_must_be_called:1; 121 + /* 122 + * Set this if the driver supports submitting commands to the backend 123 + * from target_submit/target_submit_cmd. 124 + */ 125 + unsigned int direct_submit_supp:1; 126 + /* 127 + * Set this to a target_submit_type value. 128 + */ 129 + u8 default_submit_type; 121 130 }; 122 131 123 132 int target_register_template(const struct target_core_fabric_ops *fo); ··· 175 166 struct scatterlist *sgl, u32 sgl_count, 176 167 struct scatterlist *sgl_bidi, u32 sgl_bidi_count, 177 168 struct scatterlist *sgl_prot, u32 sgl_prot_count, gfp_t gfp); 178 - void target_submit(struct se_cmd *se_cmd); 169 + int target_submit(struct se_cmd *se_cmd); 179 170 sense_reason_t transport_lookup_cmd_lun(struct se_cmd *); 180 171 sense_reason_t target_cmd_init_cdb(struct se_cmd *se_cmd, unsigned char *cdb, 181 172 gfp_t gfp); 182 173 sense_reason_t target_cmd_parse_cdb(struct se_cmd *); 183 174 void target_submit_cmd(struct se_cmd *, struct se_session *, unsigned char *, 184 175 unsigned char *, u64, u32, int, int, int); 185 - void target_queue_submission(struct se_cmd *se_cmd); 186 176 187 177 int target_submit_tmr(struct se_cmd *se_cmd, struct se_session *se_sess, 188 178 unsigned char *sense, u64 unpacked_lun, 189 179 void *fabric_tmr_ptr, unsigned char tm_type, 190 180 gfp_t, u64, int); 191 - int transport_handle_cdb_direct(struct se_cmd *); 192 181 sense_reason_t transport_generic_new_cmd(struct se_cmd *); 193 182 194 183 void target_put_cmd_and_wait(struct se_cmd *cmd); ··· 203 196 void target_stop_session(struct se_session 
*se_sess); 204 197 void target_wait_for_sess_cmds(struct se_session *); 205 198 void target_show_cmd(const char *pfx, struct se_cmd *cmd); 206 - 207 - int core_alua_check_nonop_delay(struct se_cmd *); 208 199 209 200 int core_tmr_alloc_req(struct se_cmd *, void *, u8, gfp_t); 210 201 void core_tmr_release_req(struct se_tmr_req *);
+8 -7
include/trace/events/ufs.h
··· 267 267 TP_ARGS(dev_name, err, usecs, dev_state, link_state)); 268 268 269 269 TRACE_EVENT(ufshcd_command, 270 - TP_PROTO(const char *dev_name, enum ufs_trace_str_t str_t, 270 + TP_PROTO(struct scsi_device *sdev, enum ufs_trace_str_t str_t, 271 271 unsigned int tag, u32 doorbell, u32 hwq_id, int transfer_len, 272 272 u32 intr, u64 lba, u8 opcode, u8 group_id), 273 273 274 - TP_ARGS(dev_name, str_t, tag, doorbell, hwq_id, transfer_len, 275 - intr, lba, opcode, group_id), 274 + TP_ARGS(sdev, str_t, tag, doorbell, hwq_id, transfer_len, intr, lba, 275 + opcode, group_id), 276 276 277 277 TP_STRUCT__entry( 278 - __string(dev_name, dev_name) 278 + __field(struct scsi_device *, sdev) 279 279 __field(enum ufs_trace_str_t, str_t) 280 280 __field(unsigned int, tag) 281 281 __field(u32, doorbell) ··· 288 288 ), 289 289 290 290 TP_fast_assign( 291 - __assign_str(dev_name, dev_name); 291 + __entry->sdev = sdev; 292 292 __entry->str_t = str_t; 293 293 __entry->tag = tag; 294 294 __entry->doorbell = doorbell; ··· 302 302 303 303 TP_printk( 304 304 "%s: %s: tag: %u, DB: 0x%x, size: %d, IS: %u, LBA: %llu, opcode: 0x%x (%s), group_id: 0x%x, hwq_id: %d", 305 - show_ufs_cmd_trace_str(__entry->str_t), __get_str(dev_name), 306 - __entry->tag, __entry->doorbell, __entry->transfer_len, __entry->intr, 305 + show_ufs_cmd_trace_str(__entry->str_t), 306 + dev_name(&__entry->sdev->sdev_dev), __entry->tag, 307 + __entry->doorbell, __entry->transfer_len, __entry->intr, 307 308 __entry->lba, (u32)__entry->opcode, str_opcode(__entry->opcode), 308 309 (u32)__entry->group_id, __entry->hwq_id 309 310 )
+2 -1
include/ufs/ufs.h
··· 98 98 UPIU_TRANSACTION_REJECT_UPIU = 0x3F, 99 99 }; 100 100 101 - /* UPIU Read/Write flags */ 101 + /* UPIU Read/Write flags. See also table "UPIU Flags" in the UFS standard. */ 102 102 enum { 103 103 UPIU_CMD_FLAGS_NONE = 0x00, 104 + UPIU_CMD_FLAGS_CP = 0x04, 104 105 UPIU_CMD_FLAGS_WRITE = 0x20, 105 106 UPIU_CMD_FLAGS_READ = 0x40, 106 107 };
+9 -5
include/ufs/ufshcd.h
··· 28 28 29 29 #define UFSHCD "ufshcd" 30 30 31 + struct scsi_device; 31 32 struct ufs_hba; 32 33 33 34 enum dev_cmd_type { ··· 372 371 int (*get_outstanding_cqs)(struct ufs_hba *hba, 373 372 unsigned long *ocqs); 374 373 int (*config_esi)(struct ufs_hba *hba); 374 + void (*config_scsi_dev)(struct scsi_device *sdev); 375 375 }; 376 376 377 377 /* clock gating state */ ··· 429 427 * @workq: workqueue to schedule devfreq suspend/resume work 430 428 * @suspend_work: worker to suspend devfreq 431 429 * @resume_work: worker to resume devfreq 430 + * @target_freq: frequency requested by devfreq framework 432 431 * @min_gear: lowest HS gear to scale down to 433 432 * @is_enabled: tracks if scaling is currently enabled or not, controlled by 434 433 * clkscale_enable sysfs node ··· 449 446 struct workqueue_struct *workq; 450 447 struct work_struct suspend_work; 451 448 struct work_struct resume_work; 449 + unsigned long target_freq; 452 450 u32 min_gear; 453 451 bool is_enabled; 454 452 bool is_allowed; ··· 599 595 * before power mode change 600 596 */ 601 597 UFSHCD_QUIRK_SKIP_DEF_UNIPRO_TIMEOUT_SETTING = 1 << 13, 602 - 603 - /* 604 - * Align DMA SG entries on a 4 KiB boundary. 605 - */ 606 - UFSHCD_QUIRK_4KB_DMA_ALIGNMENT = 1 << 14, 607 598 608 599 /* 609 600 * This quirk needs to be enabled if the host controller does not ··· 864 865 * @auto_bkops_enabled: to track whether bkops is enabled in device 865 866 * @vreg_info: UFS device voltage regulator information 866 867 * @clk_list_head: UFS host controller clocks list node head 868 + * @use_pm_opp: Indicates whether OPP based scaling is used or not 867 869 * @req_abort_count: number of times ufshcd_abort() has been called 868 870 * @lanes_per_direction: number of lanes per data direction between the UFS 869 871 * controller and the UFS device. 
··· 1015 1015 bool auto_bkops_enabled; 1016 1016 struct ufs_vreg_info vreg_info; 1017 1017 struct list_head clk_list_head; 1018 + bool use_pm_opp; 1018 1019 1019 1020 /* Number of requests aborts */ 1020 1021 int req_abort_count; ··· 1254 1253 void ufshcd_mcq_enable_esi(struct ufs_hba *hba); 1255 1254 void ufshcd_mcq_config_esi(struct ufs_hba *hba, struct msi_msg *msg); 1256 1255 1256 + int ufshcd_opp_config_clks(struct device *dev, struct opp_table *opp_table, 1257 + struct dev_pm_opp *opp, void *data, 1258 + bool scaling_down); 1257 1259 /** 1258 1260 * ufshcd_set_variant - set variant specific data to the hba 1259 1261 * @hba: per adapter instance