Merge git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6: (170 commits)
[SCSI] scsi_dh_rdac: Add MD36xxf into device list
[SCSI] scsi_debug: add consecutive medium errors
[SCSI] libsas: fix ata list corruption issue
[SCSI] hpsa: export resettable host attribute
[SCSI] hpsa: move device attributes to avoid forward declarations
[SCSI] scsi_debug: Logical Block Provisioning (SBC3r26)
[SCSI] sd: Logical Block Provisioning update
[SCSI] Include protection operation in SCSI command trace
[SCSI] hpsa: fix incorrect PCI IDs and add two new ones (2nd try)
[SCSI] target: Fix volume size misreporting for volumes > 2TB
[SCSI] bnx2fc: Broadcom FCoE offload driver
[SCSI] fcoe: fix broken fcoe interface reset
[SCSI] fcoe: precedence bug in fcoe_filter_frames()
[SCSI] libfcoe: Remove stale fcoe-netdev entries
[SCSI] libfcoe: Move FCOE_MTU definition from fcoe.h to libfcoe.h
[SCSI] libfc: introduce __fc_fill_fc_hdr that accepts fc_hdr as an argument
[SCSI] fcoe, libfc: initialize EM anchors list and then update npiv EMs
[SCSI] Revert "[SCSI] libfc: fix exchange being deleted when the abort itself is timed out"
[SCSI] libfc: Fixing a memory leak when destroying an interface
[SCSI] megaraid_sas: Version and Changelog update
...

Fix up trivial conflicts due to whitespace differences in
drivers/scsi/libsas/{sas_ata.c,sas_scsi_host.c}

+15391 -2740
+23
Documentation/scsi/ChangeLog.megaraid_sas
··· 1 + Release Date : Thu. Feb 24, 2011 17:00:00 PST 2010 - 2 + (emaild-id:megaraidlinux@lsi.com) 3 + Adam Radford 4 + Current Version : 00.00.05.34-rc1 5 + Old Version : 00.00.05.29-rc1 6 + 1. Fix some failure gotos from megasas_probe_one(), etc. 7 + 2. Add missing check_and_restore_queue_depth() call in 8 + complete_cmd_fusion(). 9 + 3. Enable MSI-X before calling megasas_init_fw(). 10 + 4. Call tasklet_schedule() even if outbound_intr_status == 0 for MFI based 11 + boards in MSI-X mode. 12 + 5. Fix megasas_probe_one() to clear PCI_MSIX_FLAGS_ENABLE in msi control 13 + register in kdump kernel. 14 + 6. Fix megasas_get_cmd() to only print "Command pool empty" if 15 + megasas_dbg_lvl is set. 16 + 7. Fix megasas_build_dcdb_fusion() to not filter by TYPE_DISK. 17 + 8. Fix megasas_build_dcdb_fusion() to use io_request->LUN[1] field. 18 + 9. Add MR_EVT_CFG_CLEARED to megasas_aen_polling(). 19 + 10. Fix tasklet_init() in megasas_init_fw() to use instancet->tasklet. 20 + 11. Fix fault state handling in megasas_transition_to_ready(). 21 + 12. Fix max_sectors setting for IEEE SGL's. 22 + 13. Fix iMR OCR support to work correctly. 23 + ------------------------------------------------------------------------------- 1 24 Release Date : Tues. Dec 14, 2010 17:00:00 PST 2010 - 2 25 (emaild-id:megaraidlinux@lsi.com) 3 26 Adam Radford
+23
Documentation/scsi/hpsa.txt
··· 28 28 nor supported by HP with this driver. For older Smart Arrays, the cciss 29 29 driver should still be used. 30 30 31 + The "hpsa_simple_mode=1" boot parameter may be used to prevent the driver from 32 + putting the controller into "performant" mode. The difference is that with simple 33 + mode, each command completion requires an interrupt, while with "performant mode" 34 + (the default, and ordinarily better performing) it is possible to have multiple 35 + command completions indicated by a single interrupt. 36 + 31 37 HPSA specific entries in /sys 32 38 ----------------------------- 33 39 ··· 45 39 46 40 /sys/class/scsi_host/host*/rescan 47 41 /sys/class/scsi_host/host*/firmware_revision 42 + /sys/class/scsi_host/host*/resettable 43 + /sys/class/scsi_host/host*/transport_mode 48 44 49 45 the host "rescan" attribute is a write only attribute. Writing to this 50 46 attribute will cause the driver to scan for new, changed, or removed devices ··· 62 54 63 55 root@host:/sys/class/scsi_host/host4# cat firmware_revision 64 56 7.14 57 + 58 + The transport_mode indicates whether the controller is in "performant" 59 + or "simple" mode. This is controlled by the "hpsa_simple_mode" module 60 + parameter. 61 + 62 + The "resettable" read-only attribute indicates whether a particular 63 + controller is able to honor the "reset_devices" kernel parameter. If the 64 + device is resettable, this file will contain a "1", otherwise, a "0". This 65 + parameter is used by kdump, for example, to reset the controller at driver 66 + load time to eliminate any outstanding commands on the controller and get the 67 + controller into a known state so that the kdump initiated i/o will work right 68 + and not be disrupted in any way by stale commands or other stale state 69 + remaining on the controller from the previous kernel. This attribute enables 70 + kexec tools to warn the user if they attempt to designate a device which is 71 + unable to honor the reset_devices kernel parameter as a dump device. 65 72 66 73 HPSA specific disk attributes: 67 74 ------------------------------
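The "resettable" check described in the documentation above can be done from user space by reading the sysfs attribute. A minimal sketch of such a check (the helper name is illustrative, not part of kexec-tools or the hpsa driver):

	#include <stdio.h>

	/* Returns 1 if hostN can honor reset_devices, 0 if it cannot,
	 * -1 if the attribute is absent (e.g. an older driver). */
	static int hpsa_host_resettable(int host_no)
	{
		char path[64];
		int flag;
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/class/scsi_host/host%d/resettable", host_no);
		f = fopen(path, "r");
		if (!f)
			return -1;
		flag = fgetc(f);
		fclose(f);
		return flag == '1';
	}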
+13 -1
Documentation/scsi/scsi_mid_low_api.txt
··· 1343 1343 underruns (overruns should be rare). If possible an LLD 1344 1344 should set 'resid' prior to invoking 'done'. The most 1345 1345 interesting case is data transfers from a SCSI target 1346 - device device (i.e. READs) that underrun. 1346 + device (e.g. READs) that underrun. 1347 1347 underflow - LLD should place (DID_ERROR << 16) in 'result' if 1348 1348 actual number of bytes transferred is less than this 1349 1349 figure. Not many LLDs implement this check and some that 1350 1350 do just output an error message to the log rather than 1351 1351 report a DID_ERROR. Better for an LLD to implement 1352 1352 'resid'. 1353 + 1354 + It is recommended that a LLD set 'resid' on data transfers from a SCSI 1355 + target device (e.g. READs). It is especially important that 'resid' is set 1356 + when such data transfers have sense keys of MEDIUM ERROR and HARDWARE ERROR 1357 + (and possibly RECOVERED ERROR). In these cases if a LLD is in doubt how much 1358 + data has been received then the safest approach is to indicate no bytes have 1359 + been received. For example: to indicate that no valid data has been received 1360 + a LLD might use these helpers: 1361 + scsi_set_resid(SCpnt, scsi_bufflen(SCpnt)); 1362 + where 'SCpnt' is a pointer to a scsi_cmnd object. To indicate only three 512 1363 + bytes blocks has been received 'resid' could be set like this: 1364 + scsi_set_resid(SCpnt, scsi_bufflen(SCpnt) - (3 * 512)); 1353 1365 1354 1366 The scsi_cmnd structure is defined in include/scsi/scsi_cmnd.h 1355 1367
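A sketch of how the recommendation above might look in an LLD's read completion path (the function and its arguments are illustrative, not from a real driver; scsi_set_resid() and scsi_bufflen() are the helpers named in the text):

	static void lld_complete_read(struct scsi_cmnd *SCpnt,
				      unsigned int bytes_rxed,
				      bool count_uncertain)
	{
		if (count_uncertain)
			/* in doubt (e.g. MEDIUM ERROR sense): safest is to
			 * claim that no bytes were received */
			scsi_set_resid(SCpnt, scsi_bufflen(SCpnt));
		else
			scsi_set_resid(SCpnt,
				       scsi_bufflen(SCpnt) - bytes_rxed);
		SCpnt->scsi_done(SCpnt);
	}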
+1 -2
MAINTAINERS
··· 5359 5359 F: drivers/s390/crypto/ 5360 5360 5361 5361 S390 ZFCP DRIVER 5362 - M: Christof Schmitt <christof.schmitt@de.ibm.com> 5363 - M: Swen Schillig <swen@vnet.ibm.com> 5362 + M: Steffen Maier <maier@linux.vnet.ibm.com> 5364 5363 M: linux390@de.ibm.com 5365 5364 L: linux-s390@vger.kernel.org 5366 5365 W: http://www.ibm.com/developerworks/linux/linux390/
+20 -3
block/blk-core.c
··· 2045 2045 2046 2046 if (error && req->cmd_type == REQ_TYPE_FS && 2047 2047 !(req->cmd_flags & REQ_QUIET)) { 2048 - printk(KERN_ERR "end_request: I/O error, dev %s, sector %llu\n", 2049 - req->rq_disk ? req->rq_disk->disk_name : "?", 2050 - (unsigned long long)blk_rq_pos(req)); 2048 + char *error_type; 2049 + 2050 + switch (error) { 2051 + case -ENOLINK: 2052 + error_type = "recoverable transport"; 2053 + break; 2054 + case -EREMOTEIO: 2055 + error_type = "critical target"; 2056 + break; 2057 + case -EBADE: 2058 + error_type = "critical nexus"; 2059 + break; 2060 + case -EIO: 2061 + default: 2062 + error_type = "I/O"; 2063 + break; 2064 + } 2065 + printk(KERN_ERR "end_request: %s error, dev %s, sector %llu\n", 2066 + error_type, req->rq_disk ? req->rq_disk->disk_name : "?", 2067 + (unsigned long long)blk_rq_pos(req)); 2051 2068 } 2052 2069 2053 2070 blk_account_io_completion(req, nr_bytes);
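With the classification above, the errno a driver passes when completing a request selects the logged error type. A small illustrative sketch of the caller side (blk_end_request_all() is the block-layer helper for completing a whole request):

	/* Fail the whole request as a critical target error; with the
	 * change above this is logged as
	 * "end_request: critical target error, dev ..., sector ..."
	 * rather than the generic I/O error text. */
	static void fail_req_critical_target(struct request *req)
	{
		blk_end_request_all(req, -EREMOTEIO);
	}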
+26
drivers/infiniband/ulp/iser/iscsi_iser.c
··· 532 532 stats->custom[3].value = conn->fmr_unalign_cnt; 533 533 } 534 534 535 + static int iscsi_iser_get_ep_param(struct iscsi_endpoint *ep, 536 + enum iscsi_param param, char *buf) 537 + { 538 + struct iser_conn *ib_conn = ep->dd_data; 539 + int len; 540 + 541 + switch (param) { 542 + case ISCSI_PARAM_CONN_PORT: 543 + case ISCSI_PARAM_CONN_ADDRESS: 544 + if (!ib_conn || !ib_conn->cma_id) 545 + return -ENOTCONN; 546 + 547 + return iscsi_conn_get_addr_param((struct sockaddr_storage *) 548 + &ib_conn->cma_id->route.addr.dst_addr, 549 + param, buf); 550 + break; 551 + default: 552 + return -ENOSYS; 553 + } 554 + 555 + return len; 556 + } 557 + 535 558 static struct iscsi_endpoint * 536 559 iscsi_iser_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr, 537 560 int non_blocking) ··· 660 637 ISCSI_MAX_BURST | 661 638 ISCSI_PDU_INORDER_EN | 662 639 ISCSI_DATASEQ_INORDER_EN | 640 + ISCSI_CONN_PORT | 641 + ISCSI_CONN_ADDRESS | 663 642 ISCSI_EXP_STATSN | 664 643 ISCSI_PERSISTENT_PORT | 665 644 ISCSI_PERSISTENT_ADDRESS | ··· 684 659 .destroy_conn = iscsi_iser_conn_destroy, 685 660 .set_param = iscsi_iser_set_param, 686 661 .get_conn_param = iscsi_conn_get_param, 662 + .get_ep_param = iscsi_iser_get_ep_param, 687 663 .get_session_param = iscsi_session_get_param, 688 664 .start_conn = iscsi_iser_conn_start, 689 665 .stop_conn = iscsi_iser_conn_stop,
+10 -12
drivers/md/dm-mpath.c
··· 1283 1283 if (!error && !clone->errors) 1284 1284 return 0; /* I/O complete */ 1285 1285 1286 - if (error == -EOPNOTSUPP) 1287 - return error; 1288 - 1289 - if (clone->cmd_flags & REQ_DISCARD) 1290 - /* 1291 - * Pass all discard request failures up. 1292 - * FIXME: only fail_path if the discard failed due to a 1293 - * transport problem. This requires precise understanding 1294 - * of the underlying failure (e.g. the SCSI sense). 1295 - */ 1286 + if (error == -EOPNOTSUPP || error == -EREMOTEIO) 1296 1287 return error; 1297 1288 1298 1289 if (mpio->pgpath) 1299 1290 fail_path(mpio->pgpath); 1300 1291 1301 1292 spin_lock_irqsave(&m->lock, flags); 1302 - if (!m->nr_valid_paths && !m->queue_if_no_path && !__must_push_back(m)) 1303 - r = -EIO; 1293 + if (!m->nr_valid_paths) { 1294 + if (!m->queue_if_no_path) { 1295 + if (!__must_push_back(m)) 1296 + r = -EIO; 1297 + } else { 1298 + if (error == -EBADE) 1299 + r = error; 1300 + } 1301 + } 1304 1302 spin_unlock_irqrestore(&m->lock, flags); 1305 1303 1306 1304 return r;
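The resulting do_end_io() policy, restated as a sketch (illustrative, not the dm-mpath source; per the surrounding function, r defaults to DM_ENDIO_REQUEUE from <linux/device-mapper.h>, which asks the core to retry the clone on another path):

	static int mpath_end_io_disposition(int error, unsigned nr_valid_paths,
					    bool queue_if_no_path,
					    bool must_push_back)
	{
		if (error == -EOPNOTSUPP || error == -EREMOTEIO)
			return error;		/* retrying another path cannot help */
		if (!nr_valid_paths) {
			if (!queue_if_no_path && !must_push_back)
				return -EIO;	/* no path left, fail the I/O */
			if (queue_if_no_path && error == -EBADE)
				return error;	/* reservation conflict: pass up */
		}
		return DM_ENDIO_REQUEUE;	/* retry on another path */
	}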
+1
drivers/message/fusion/lsi/mpi_cnfg.h
··· 2593 2593 #define MPI_SAS_IOUNIT0_RATE_SATA_OOB_COMPLETE (0x03) 2594 2594 #define MPI_SAS_IOUNIT0_RATE_1_5 (0x08) 2595 2595 #define MPI_SAS_IOUNIT0_RATE_3_0 (0x09) 2596 + #define MPI_SAS_IOUNIT0_RATE_6_0 (0x0A) 2596 2597 2597 2598 /* see mpi_sas.h for values for SAS IO Unit Page 0 ControllerPhyDeviceInfo values */ 2598 2599
+1
drivers/message/fusion/lsi/mpi_ioc.h
··· 841 841 #define MPI_EVENT_SAS_PLS_LR_RATE_SATA_OOB_COMPLETE (0x03) 842 842 #define MPI_EVENT_SAS_PLS_LR_RATE_1_5 (0x08) 843 843 #define MPI_EVENT_SAS_PLS_LR_RATE_3_0 (0x09) 844 + #define MPI_EVENT_SAS_PLS_LR_RATE_6_0 (0x0A) 844 845 845 846 /* SAS Discovery Event data */ 846 847
+6 -1
drivers/message/fusion/mptbase.c
··· 7418 7418 case MPI_EVENT_SAS_PLS_LR_RATE_3_0: 7419 7419 snprintf(evStr, EVENT_DESCR_STR_SZ, 7420 7420 "SAS PHY Link Status: Phy=%d:" 7421 - " Rate 3.0 Gpbs",PhyNumber); 7421 + " Rate 3.0 Gbps", PhyNumber); 7422 + break; 7423 + case MPI_EVENT_SAS_PLS_LR_RATE_6_0: 7424 + snprintf(evStr, EVENT_DESCR_STR_SZ, 7425 + "SAS PHY Link Status: Phy=%d:" 7426 + " Rate 6.0 Gbps", PhyNumber); 7422 7427 break; 7423 7428 default: 7424 7429 snprintf(evStr, EVENT_DESCR_STR_SZ,
+3 -1
drivers/message/fusion/mptctl.c
··· 1314 1314 else 1315 1315 karg->adapterType = MPT_IOCTL_INTERFACE_SCSI; 1316 1316 1317 - if (karg->hdr.port > 1) 1317 + if (karg->hdr.port > 1) { 1318 + kfree(karg); 1318 1319 return -EINVAL; 1320 + } 1319 1321 port = karg->hdr.port; 1320 1322 1321 1323 karg->port = port;
+5 -2
drivers/message/fusion/mptsas.c
··· 1973 1973 .change_queue_depth = mptscsih_change_queue_depth, 1974 1974 .eh_abort_handler = mptscsih_abort, 1975 1975 .eh_device_reset_handler = mptscsih_dev_reset, 1976 - .eh_bus_reset_handler = mptscsih_bus_reset, 1977 1976 .eh_host_reset_handler = mptscsih_host_reset, 1978 1977 .bios_param = mptscsih_bios_param, 1979 1978 .can_queue = MPT_SAS_CAN_QUEUE, ··· 3062 3063 case MPI_SAS_IOUNIT0_RATE_3_0: 3063 3064 phy->negotiated_linkrate = SAS_LINK_RATE_3_0_GBPS; 3064 3065 break; 3066 + case MPI_SAS_IOUNIT0_RATE_6_0: 3067 + phy->negotiated_linkrate = SAS_LINK_RATE_6_0_GBPS; 3068 + break; 3065 3069 case MPI_SAS_IOUNIT0_RATE_SATA_OOB_COMPLETE: 3066 3070 case MPI_SAS_IOUNIT0_RATE_UNKNOWN: 3067 3071 default: ··· 3693 3691 } 3694 3692 3695 3693 if (link_rate == MPI_SAS_IOUNIT0_RATE_1_5 || 3696 - link_rate == MPI_SAS_IOUNIT0_RATE_3_0) { 3694 + link_rate == MPI_SAS_IOUNIT0_RATE_3_0 || 3695 + link_rate == MPI_SAS_IOUNIT0_RATE_6_0) { 3697 3696 3698 3697 if (!port_info) { 3699 3698 if (ioc->old_sas_discovery_protocal) {
-7
drivers/net/bnx2x/bnx2x_main.c
··· 145 145 { "Broadcom NetXtreme II BCM57712E XGb" } 146 146 }; 147 147 148 - #ifndef PCI_DEVICE_ID_NX2_57712 149 - #define PCI_DEVICE_ID_NX2_57712 0x1662 150 - #endif 151 - #ifndef PCI_DEVICE_ID_NX2_57712E 152 - #define PCI_DEVICE_ID_NX2_57712E 0x1663 153 - #endif 154 - 155 148 static DEFINE_PCI_DEVICE_TABLE(bnx2x_pci_tbl) = { 156 149 { PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57710), BCM57710 }, 157 150 { PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57711), BCM57711 },
+29 -51
drivers/s390/scsi/zfcp_aux.c
··· 122 122 { 123 123 int retval = -ENOMEM; 124 124 125 - zfcp_data.gpn_ft_cache = zfcp_cache_hw_align("zfcp_gpn", 126 - sizeof(struct zfcp_fc_gpn_ft_req)); 127 - if (!zfcp_data.gpn_ft_cache) 128 - goto out; 129 - 130 - zfcp_data.qtcb_cache = zfcp_cache_hw_align("zfcp_qtcb", 131 - sizeof(struct fsf_qtcb)); 132 - if (!zfcp_data.qtcb_cache) 125 + zfcp_fsf_qtcb_cache = zfcp_cache_hw_align("zfcp_fsf_qtcb", 126 + sizeof(struct fsf_qtcb)); 127 + if (!zfcp_fsf_qtcb_cache) 133 128 goto out_qtcb_cache; 134 129 135 - zfcp_data.sr_buffer_cache = zfcp_cache_hw_align("zfcp_sr", 136 - sizeof(struct fsf_status_read_buffer)); 137 - if (!zfcp_data.sr_buffer_cache) 138 - goto out_sr_cache; 130 + zfcp_fc_req_cache = zfcp_cache_hw_align("zfcp_fc_req", 131 + sizeof(struct zfcp_fc_req)); 132 + if (!zfcp_fc_req_cache) 133 + goto out_fc_cache; 139 134 140 - zfcp_data.gid_pn_cache = zfcp_cache_hw_align("zfcp_gid", 141 - sizeof(struct zfcp_fc_gid_pn)); 142 - if (!zfcp_data.gid_pn_cache) 143 - goto out_gid_cache; 144 - 145 - zfcp_data.adisc_cache = zfcp_cache_hw_align("zfcp_adisc", 146 - sizeof(struct zfcp_fc_els_adisc)); 147 - if (!zfcp_data.adisc_cache) 148 - goto out_adisc_cache; 149 - 150 - zfcp_data.scsi_transport_template = 135 + zfcp_scsi_transport_template = 151 136 fc_attach_transport(&zfcp_transport_functions); 152 - if (!zfcp_data.scsi_transport_template) 137 + if (!zfcp_scsi_transport_template) 153 138 goto out_transport; 154 - scsi_transport_reserve_device(zfcp_data.scsi_transport_template, 139 + scsi_transport_reserve_device(zfcp_scsi_transport_template, 155 140 sizeof(struct zfcp_scsi_dev)); 156 141 157 142 ··· 160 175 out_ccw_register: 161 176 misc_deregister(&zfcp_cfdc_misc); 162 177 out_misc: 163 - fc_release_transport(zfcp_data.scsi_transport_template); 178 + fc_release_transport(zfcp_scsi_transport_template); 164 179 out_transport: 165 - kmem_cache_destroy(zfcp_data.adisc_cache); 166 - out_adisc_cache: 167 - kmem_cache_destroy(zfcp_data.gid_pn_cache); 168 - out_gid_cache: 169 - kmem_cache_destroy(zfcp_data.sr_buffer_cache); 170 - out_sr_cache: 171 - kmem_cache_destroy(zfcp_data.qtcb_cache); 180 + kmem_cache_destroy(zfcp_fc_req_cache); 181 + out_fc_cache: 182 + kmem_cache_destroy(zfcp_fsf_qtcb_cache); 172 183 out_qtcb_cache: 173 - kmem_cache_destroy(zfcp_data.gpn_ft_cache); 174 - out: 175 184 return retval; 176 185 } 177 186 ··· 175 196 { 176 197 ccw_driver_unregister(&zfcp_ccw_driver); 177 198 misc_deregister(&zfcp_cfdc_misc); 178 - fc_release_transport(zfcp_data.scsi_transport_template); 179 - kmem_cache_destroy(zfcp_data.adisc_cache); 180 - kmem_cache_destroy(zfcp_data.gid_pn_cache); 181 - kmem_cache_destroy(zfcp_data.sr_buffer_cache); 182 - kmem_cache_destroy(zfcp_data.qtcb_cache); 199 - fc_release_transport(zfcp_scsi_transport_template); 200 + kmem_cache_destroy(zfcp_fc_req_cache); 201 + kmem_cache_destroy(zfcp_fsf_qtcb_cache); 184 202 } 185 203 186 204 module_exit(zfcp_module_exit); ··· 236 260 return -ENOMEM; 237 261 238 262 adapter->pool.qtcb_pool = 239 - mempool_create_slab_pool(4, zfcp_data.qtcb_cache); 263 + mempool_create_slab_pool(4, zfcp_fsf_qtcb_cache); 240 264 if (!adapter->pool.qtcb_pool) 241 265 return -ENOMEM; 242 266 243 - adapter->pool.status_read_data = 244 - mempool_create_slab_pool(FSF_STATUS_READS_RECOM, 245 - zfcp_data.sr_buffer_cache); 246 - if (!adapter->pool.status_read_data) 267 + BUILD_BUG_ON(sizeof(struct fsf_status_read_buffer) > PAGE_SIZE); 268 + adapter->pool.sr_data = 269 + mempool_create_page_pool(FSF_STATUS_READS_RECOM, 0); 270 + if (!adapter->pool.sr_data) 247 271 return -ENOMEM; 248 272 249 273 adapter->pool.gid_pn = 250 - mempool_create_slab_pool(1, zfcp_data.gid_pn_cache); 274 + mempool_create_slab_pool(1, zfcp_fc_req_cache); 251 275 if (!adapter->pool.gid_pn) 252 276 return -ENOMEM; 253 277 ··· 266 290 mempool_destroy(adapter->pool.qtcb_pool); 267 291 if (adapter->pool.status_read_req) 268 292 mempool_destroy(adapter->pool.status_read_req); 269 - if (adapter->pool.status_read_data) 270 - mempool_destroy(adapter->pool.status_read_data); 293 + if (adapter->pool.sr_data) 294 + mempool_destroy(adapter->pool.sr_data); 271 295 if (adapter->pool.gid_pn) 272 296 mempool_destroy(adapter->pool.gid_pn); 273 297 } ··· 362 386 363 387 INIT_WORK(&adapter->stat_work, _zfcp_status_read_scheduler); 364 388 INIT_WORK(&adapter->scan_work, zfcp_fc_scan_ports); 389 + INIT_WORK(&adapter->ns_up_work, zfcp_fc_sym_name_update); 365 390 366 391 if (zfcp_qdio_setup(adapter)) 367 392 goto failed; ··· 414 437 adapter->dma_parms.max_segment_size = ZFCP_QDIO_SBALE_LEN; 415 438 adapter->ccw_device->dev.dma_parms = &adapter->dma_parms; 416 439 417 - if (!zfcp_adapter_scsi_register(adapter)) 440 + if (!zfcp_scsi_adapter_register(adapter)) 418 441 return adapter; 419 442 420 443 failed: ··· 428 451 429 452 cancel_work_sync(&adapter->scan_work); 430 453 cancel_work_sync(&adapter->stat_work); 454 + cancel_work_sync(&adapter->ns_up_work); 431 455 zfcp_destroy_adapter_work_queue(adapter); 432 456 433 457 zfcp_fc_wka_ports_force_offline(adapter->gs); 434 - zfcp_adapter_scsi_unregister(adapter); 458 + zfcp_scsi_adapter_unregister(adapter); 435 459 sysfs_remove_group(&cdev->dev.kobj, &zfcp_sysfs_adapter_attrs); 436 460 437 461 zfcp_erp_thread_kill(adapter);
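zfcp_cache_hw_align(), used throughout the hunk above, is a zfcp-internal helper whose definition is not part of this diff; its presumed shape (an assumption here; see zfcp_def.h for the real one) is a thin kmem_cache_create() wrapper that aligns objects for hardware access:

	static inline struct kmem_cache *
	zfcp_cache_hw_align(const char *name, unsigned long size)
	{
		/* presumed: power-of-two alignment so hardware-accessed
		 * structures do not straddle DMA boundaries */
		return kmem_cache_create(name, size,
					 roundup_pow_of_two(size), 0, NULL);
	}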
+2 -13
drivers/s390/scsi/zfcp_def.h
··· 89 89 #define ZFCP_STATUS_LUN_READONLY 0x00000008 90 90 91 91 /* FSF request status (this does not have a common part) */ 92 - #define ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT 0x00000002 93 92 #define ZFCP_STATUS_FSFREQ_ERROR 0x00000008 94 93 #define ZFCP_STATUS_FSFREQ_CLEANUP 0x00000010 95 94 #define ZFCP_STATUS_FSFREQ_ABORTSUCCEEDED 0x00000040 ··· 107 108 mempool_t *scsi_req; 108 109 mempool_t *scsi_abort; 109 110 mempool_t *status_read_req; 110 - mempool_t *status_read_data; 111 + mempool_t *sr_data; 111 112 mempool_t *gid_pn; 112 113 mempool_t *qtcb_pool; 113 114 }; ··· 189 190 struct fsf_qtcb_bottom_port *stats_reset_data; 190 191 unsigned long stats_reset; 191 192 struct work_struct scan_work; 193 + struct work_struct ns_up_work; 192 194 struct service_level service_level; 193 195 struct workqueue_struct *work_queue; 194 196 struct device_dma_parameters dma_parms; ··· 312 312 mempool_t *pool; 313 313 unsigned long long issued; 314 314 void (*handler)(struct zfcp_fsf_req *); 315 - }; 316 - 317 - /* driver data */ 318 - struct zfcp_data { 319 - struct scsi_host_template scsi_host_template; 320 - struct scsi_transport_template *scsi_transport_template; 321 - struct kmem_cache *gpn_ft_cache; 322 - struct kmem_cache *qtcb_cache; 323 - struct kmem_cache *sr_buffer_cache; 324 - struct kmem_cache *gid_pn_cache; 325 - struct kmem_cache *adisc_cache; 326 315 }; 327 316 328 317 #endif /* ZFCP_DEF_H */
+3 -1
drivers/s390/scsi/zfcp_erp.c
··· 732 732 if (zfcp_erp_adapter_strategy_open_fsf_xport(act) == ZFCP_ERP_FAILED) 733 733 return ZFCP_ERP_FAILED; 734 734 735 - if (mempool_resize(act->adapter->pool.status_read_data, 735 + if (mempool_resize(act->adapter->pool.sr_data, 736 736 act->adapter->stat_read_buf_num, GFP_KERNEL)) 737 737 return ZFCP_ERP_FAILED; 738 738 ··· 1231 1231 if (result == ZFCP_ERP_SUCCEEDED) { 1232 1232 register_service_level(&adapter->service_level); 1233 1233 queue_work(adapter->work_queue, &adapter->scan_work); 1234 + queue_work(adapter->work_queue, &adapter->ns_up_work); 1234 1235 } else 1235 1236 unregister_service_level(&adapter->service_level); 1237 + 1236 1238 kref_put(&adapter->ref, zfcp_adapter_release); 1237 1239 break; 1238 1240 }
+6 -3
drivers/s390/scsi/zfcp_ext.h
··· 80 80 extern void zfcp_erp_timeout_handler(unsigned long); 81 81 82 82 /* zfcp_fc.c */ 83 + extern struct kmem_cache *zfcp_fc_req_cache; 83 84 extern void zfcp_fc_enqueue_event(struct zfcp_adapter *, 84 85 enum fc_host_event_code event_code, u32); 85 86 extern void zfcp_fc_post_event(struct work_struct *); ··· 96 95 extern void zfcp_fc_gs_destroy(struct zfcp_adapter *); 97 96 extern int zfcp_fc_exec_bsg_job(struct fc_bsg_job *); 98 97 extern int zfcp_fc_timeout_bsg_job(struct fc_bsg_job *); 98 + extern void zfcp_fc_sym_name_update(struct work_struct *); 99 99 100 100 /* zfcp_fsf.c */ 101 + extern struct kmem_cache *zfcp_fsf_qtcb_cache; 101 102 extern int zfcp_fsf_open_port(struct zfcp_erp_action *); 102 103 extern int zfcp_fsf_open_wka_port(struct zfcp_fc_wka_port *); 103 104 extern int zfcp_fsf_close_wka_port(struct zfcp_fc_wka_port *); ··· 142 139 struct qdio_buffer *); 143 140 144 141 /* zfcp_scsi.c */ 145 - extern struct zfcp_data zfcp_data; 146 - extern int zfcp_adapter_scsi_register(struct zfcp_adapter *); 147 - extern void zfcp_adapter_scsi_unregister(struct zfcp_adapter *); 142 + extern struct scsi_transport_template *zfcp_scsi_transport_template; 143 + extern int zfcp_scsi_adapter_register(struct zfcp_adapter *); 144 + extern void zfcp_scsi_adapter_unregister(struct zfcp_adapter *); 148 145 extern struct fc_function_template zfcp_transport_functions; 149 146 extern void zfcp_scsi_rport_work(struct work_struct *); 150 147 extern void zfcp_scsi_schedule_rport_register(struct zfcp_port *);
+212 -117
drivers/s390/scsi/zfcp_fc.c
··· 11 11 12 12 #include <linux/types.h> 13 13 #include <linux/slab.h> 14 + #include <linux/utsname.h> 14 15 #include <scsi/fc/fc_els.h> 15 16 #include <scsi/libfc.h> 16 17 #include "zfcp_ext.h" 17 18 #include "zfcp_fc.h" 19 + 20 + struct kmem_cache *zfcp_fc_req_cache; 18 21 19 22 static u32 zfcp_fc_rscn_range_mask[] = { 20 23 [ELS_ADDR_FMT_PORT] = 0xFFFFFF, ··· 263 260 zfcp_fc_incoming_rscn(fsf_req); 264 261 } 265 262 266 - static void zfcp_fc_ns_gid_pn_eval(void *data) 263 + static void zfcp_fc_ns_gid_pn_eval(struct zfcp_fc_req *fc_req) 267 264 { 268 - struct zfcp_fc_gid_pn *gid_pn = data; 269 - struct zfcp_fsf_ct_els *ct = &gid_pn->ct; 270 - struct zfcp_fc_gid_pn_req *gid_pn_req = sg_virt(ct->req); 271 - struct zfcp_fc_gid_pn_resp *gid_pn_resp = sg_virt(ct->resp); 272 - struct zfcp_port *port = gid_pn->port; 265 + struct zfcp_fsf_ct_els *ct_els = &fc_req->ct_els; 266 + struct zfcp_fc_gid_pn_rsp *gid_pn_rsp = &fc_req->u.gid_pn.rsp; 273 267 274 - if (ct->status) 268 + if (ct_els->status) 275 269 return; 276 - if (gid_pn_resp->ct_hdr.ct_cmd != FC_FS_ACC) 270 + if (gid_pn_rsp->ct_hdr.ct_cmd != FC_FS_ACC) 277 271 return; 278 272 279 - /* paranoia */ 280 - if (gid_pn_req->gid_pn.fn_wwpn != port->wwpn) 281 - return; 282 273 /* looks like a valid d_id */ 283 - port->d_id = ntoh24(gid_pn_resp->gid_pn.fp_fid); 274 + ct_els->port->d_id = ntoh24(gid_pn_rsp->gid_pn.fp_fid); 284 275 285 276 } 286 277 287 278 static void zfcp_fc_complete(void *data) ··· 282 285 complete(data); 283 286 } 284 287 288 + static void zfcp_fc_ct_ns_init(struct fc_ct_hdr *ct_hdr, u16 cmd, u16 mr_size) 289 + { 290 + ct_hdr->ct_rev = FC_CT_REV; 291 + ct_hdr->ct_fs_type = FC_FST_DIR; 292 + ct_hdr->ct_fs_subtype = FC_NS_SUBTYPE; 293 + ct_hdr->ct_cmd = cmd; 294 + ct_hdr->ct_mr_size = mr_size / 4; 295 + } 296 + 285 297 static int zfcp_fc_ns_gid_pn_request(struct zfcp_port *port, 286 - struct zfcp_fc_gid_pn *gid_pn) 298 + struct zfcp_fc_req *fc_req) 287 299 { 288 300 struct zfcp_adapter *adapter = port->adapter; 289 301 DECLARE_COMPLETION_ONSTACK(completion); 302 + struct zfcp_fc_gid_pn_req *gid_pn_req = &fc_req->u.gid_pn.req; 303 + struct zfcp_fc_gid_pn_rsp *gid_pn_rsp = &fc_req->u.gid_pn.rsp; 290 304 int ret; 291 305 292 306 /* setup parameters for send generic command */ 293 - gid_pn->port = port; 294 - gid_pn->ct.handler = zfcp_fc_complete; 295 - gid_pn->ct.handler_data = &completion; 296 - gid_pn->ct.req = &gid_pn->sg_req; 297 - gid_pn->ct.resp = &gid_pn->sg_resp; 298 - sg_init_one(&gid_pn->sg_req, &gid_pn->gid_pn_req, 299 - sizeof(struct zfcp_fc_gid_pn_req)); 300 - sg_init_one(&gid_pn->sg_resp, &gid_pn->gid_pn_resp, 301 - sizeof(struct zfcp_fc_gid_pn_resp)); 307 + fc_req->ct_els.port = port; 308 + fc_req->ct_els.handler = zfcp_fc_complete; 309 + fc_req->ct_els.handler_data = &completion; 310 + fc_req->ct_els.req = &fc_req->sg_req; 311 + fc_req->ct_els.resp = &fc_req->sg_rsp; 312 + sg_init_one(&fc_req->sg_req, gid_pn_req, sizeof(*gid_pn_req)); 313 + sg_init_one(&fc_req->sg_rsp, gid_pn_rsp, sizeof(*gid_pn_rsp)); 302 314 303 - /* setup nameserver request */ 304 - gid_pn->gid_pn_req.ct_hdr.ct_rev = FC_CT_REV; 305 - gid_pn->gid_pn_req.ct_hdr.ct_fs_type = FC_FST_DIR; 306 - gid_pn->gid_pn_req.ct_hdr.ct_fs_subtype = FC_NS_SUBTYPE; 307 - gid_pn->gid_pn_req.ct_hdr.ct_options = 0; 308 - gid_pn->gid_pn_req.ct_hdr.ct_cmd = FC_NS_GID_PN; 309 - gid_pn->gid_pn_req.ct_hdr.ct_mr_size = ZFCP_FC_CT_SIZE_PAGE / 4; 310 - gid_pn->gid_pn_req.gid_pn.fn_wwpn = port->wwpn; 315 + zfcp_fc_ct_ns_init(&gid_pn_req->ct_hdr, 316 + FC_NS_GID_PN, ZFCP_FC_CT_SIZE_PAGE); 317 + gid_pn_req->gid_pn.fn_wwpn = port->wwpn; 311 318 312 319 ret = zfcp_fsf_send_ct(&adapter->gs->ds, &fc_req->ct_els, 313 320 adapter->pool.gid_pn_req, 314 321 ZFCP_FC_CTELS_TMO); 315 322 if (!ret) { 316 323 wait_for_completion(&completion); 317 - zfcp_fc_ns_gid_pn_eval(gid_pn); 324 + zfcp_fc_ns_gid_pn_eval(fc_req); 318 325 } 319 326 return ret; 320 327 } 321 328 322 329 /** 323 - * zfcp_fc_ns_gid_pn_request - initiate GID_PN nameserver request 330 + * zfcp_fc_ns_gid_pn - initiate GID_PN nameserver request 324 331 * @port: port where GID_PN request is needed 325 332 * return: -ENOMEM on error, 0 otherwise 326 333 */ 327 334 static int zfcp_fc_ns_gid_pn(struct zfcp_port *port) 328 335 { 329 336 int ret; 330 - struct zfcp_fc_gid_pn *gid_pn; 337 + struct zfcp_fc_req *fc_req; 331 338 struct zfcp_adapter *adapter = port->adapter; 332 339 333 - gid_pn = mempool_alloc(adapter->pool.gid_pn, GFP_ATOMIC); 334 - if (!gid_pn) 340 + fc_req = mempool_alloc(adapter->pool.gid_pn, GFP_ATOMIC); 341 + if (!fc_req) 335 342 return -ENOMEM; 336 343 337 - memset(gid_pn, 0, sizeof(*gid_pn)); 344 + memset(fc_req, 0, sizeof(*fc_req)); 338 345 339 346 ret = zfcp_fc_wka_port_get(&adapter->gs->ds); 340 347 if (ret) 341 348 goto out; 342 349 343 - ret = zfcp_fc_ns_gid_pn_request(port, gid_pn); 350 + ret = zfcp_fc_ns_gid_pn_request(port, fc_req); 344 351 345 352 zfcp_fc_wka_port_put(&adapter->gs->ds); 346 353 out: 347 - mempool_free(gid_pn, adapter->pool.gid_pn); 354 + mempool_free(fc_req, adapter->pool.gid_pn); 348 355 return ret; 349 356 } 350 357 ··· 420 419 421 420 static void zfcp_fc_adisc_handler(void *data) 422 421 { 423 - struct zfcp_fc_els_adisc *adisc = data; 424 - struct zfcp_port *port = adisc->els.port; 425 - struct fc_els_adisc *adisc_resp = &adisc->adisc_resp; 422 + struct zfcp_fc_req *fc_req = data; 423 + struct zfcp_port *port = fc_req->ct_els.port; 424 + struct fc_els_adisc *adisc_resp = &fc_req->u.adisc.rsp; 426 425 427 - if (adisc->els.status) { 426 + if (fc_req->ct_els.status) { 428 427 /* request rejected or timed out */ 429 428 zfcp_erp_port_forced_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED, 430 429 "fcadh_1"); ··· 446 445 out: 447 446 atomic_clear_mask(ZFCP_STATUS_PORT_LINK_TEST, &port->status); 448 447 put_device(&port->dev); 449 - kmem_cache_free(zfcp_data.adisc_cache, adisc); 448 + kmem_cache_free(zfcp_fc_req_cache, fc_req); 450 449 } 451 450 452 451 static int zfcp_fc_adisc(struct zfcp_port *port) 453 452 { 454 - struct zfcp_fc_els_adisc *adisc; 453 + struct zfcp_fc_req *fc_req; 455 454 struct zfcp_adapter *adapter = port->adapter; 455 + struct Scsi_Host *shost = adapter->scsi_host; 456 456 int ret; 457 457 458 - adisc = kmem_cache_zalloc(zfcp_data.adisc_cache, GFP_ATOMIC); 459 - if (!adisc) 458 + fc_req = kmem_cache_zalloc(zfcp_fc_req_cache, GFP_ATOMIC); 459 + if (!fc_req) 460 460 return -ENOMEM; 461 461 462 - adisc->els.port = port; 463 - adisc->els.req = &adisc->req; 464 - adisc->els.resp = &adisc->resp; 465 - sg_init_one(adisc->els.req, &adisc->adisc_req, 462 + fc_req->ct_els.port = port; 463 + fc_req->ct_els.req = &fc_req->sg_req; 464 + fc_req->ct_els.resp = &fc_req->sg_rsp; 465 + sg_init_one(&fc_req->sg_req, &fc_req->u.adisc.req, 466 466 sizeof(struct fc_els_adisc)); 467 - sg_init_one(adisc->els.resp, &adisc->adisc_resp, 467 + sg_init_one(&fc_req->sg_rsp, &fc_req->u.adisc.rsp, 468 468 sizeof(struct fc_els_adisc)); 469 469 470 - adisc->els.handler = zfcp_fc_adisc_handler; 471 - adisc->els.handler_data = adisc; 470 + fc_req->ct_els.handler = zfcp_fc_adisc_handler; 471 + fc_req->ct_els.handler_data = fc_req; 472 472 473 473 /* acc. to FC-FS, hard_nport_id in ADISC should not be set for ports 474 474 without FC-AL-2 capability, so we don't set it */ 475 - adisc->adisc_req.adisc_wwpn = fc_host_port_name(adapter->scsi_host); 476 - adisc->adisc_req.adisc_wwnn = fc_host_node_name(adapter->scsi_host); 477 - adisc->adisc_req.adisc_cmd = ELS_ADISC; 478 - hton24(adisc->adisc_req.adisc_port_id, 479 - fc_host_port_id(adapter->scsi_host)); 475 + fc_req->u.adisc.req.adisc_wwpn = fc_host_port_name(shost); 476 + fc_req->u.adisc.req.adisc_wwnn = fc_host_node_name(shost); 477 + fc_req->u.adisc.req.adisc_cmd = ELS_ADISC; 478 + hton24(fc_req->u.adisc.req.adisc_port_id, fc_host_port_id(shost)); 480 479 481 - ret = zfcp_fsf_send_els(adapter, port->d_id, &adisc->els, 480 + ret = zfcp_fsf_send_els(adapter, port->d_id, &fc_req->ct_els, 482 481 ZFCP_FC_CTELS_TMO); 483 482 if (ret) 484 - kmem_cache_free(zfcp_data.adisc_cache, adisc); 483 + kmem_cache_free(zfcp_fc_req_cache, fc_req); 485 484 486 485 return ret; 487 486 } ··· 529 528 put_device(&port->dev); 530 529 } 531 530 532 - static void zfcp_free_sg_env(struct zfcp_fc_gpn_ft *gpn_ft, int buf_num) 531 + static struct zfcp_fc_req *zfcp_alloc_sg_env(int buf_num) 533 532 { 534 - struct scatterlist *sg = &gpn_ft->sg_req; 533 + struct zfcp_fc_req *fc_req; 535 534 536 - kmem_cache_free(zfcp_data.gpn_ft_cache, sg_virt(sg)); 537 - zfcp_sg_free_table(gpn_ft->sg_resp, buf_num); 538 - 539 - kfree(gpn_ft); 540 - } 541 - 542 - static struct zfcp_fc_gpn_ft *zfcp_alloc_sg_env(int buf_num) 543 - { 544 - struct zfcp_fc_gpn_ft *gpn_ft; 545 - struct zfcp_fc_gpn_ft_req *req; 546 - 547 - gpn_ft = kzalloc(sizeof(*gpn_ft), GFP_KERNEL); 548 - if (!gpn_ft) 535 + fc_req = kmem_cache_zalloc(zfcp_fc_req_cache, GFP_KERNEL); 536 + if (!fc_req) 549 537 return NULL; 550 538 551 - req = kmem_cache_zalloc(zfcp_data.gpn_ft_cache, GFP_KERNEL); 552 - if (!req) { 553 - kfree(gpn_ft); 554 - gpn_ft = NULL; 555 - goto out; 539 + if (zfcp_sg_setup_table(&fc_req->sg_rsp, buf_num)) { 540 + kmem_cache_free(zfcp_fc_req_cache, fc_req); 541 + return NULL; 556 542 } 557 - sg_init_one(&gpn_ft->sg_req, req, sizeof(*req)); 558 543 559 - if (zfcp_sg_setup_table(gpn_ft->sg_resp, buf_num)) { 560 - zfcp_free_sg_env(gpn_ft, buf_num); 561 - gpn_ft = NULL; 562 - } 563 - out: 564 - return gpn_ft; 544 + sg_init_one(&fc_req->sg_req, &fc_req->u.gpn_ft.req, 545 + sizeof(struct zfcp_fc_gpn_ft_req)); 546 + 547 + return fc_req; 565 548 } 566 549 567 - 568 - static int zfcp_fc_send_gpn_ft(struct zfcp_fc_gpn_ft *gpn_ft, 550 + static int zfcp_fc_send_gpn_ft(struct zfcp_fc_req *fc_req, 569 551 struct zfcp_adapter *adapter, int max_bytes) 570 552 { 571 - struct zfcp_fsf_ct_els *ct = &gpn_ft->ct; 572 - struct zfcp_fc_gpn_ft_req *req = sg_virt(&gpn_ft->sg_req); 553 + struct zfcp_fsf_ct_els *ct_els = &fc_req->ct_els; 554 + struct zfcp_fc_gpn_ft_req *req = &fc_req->u.gpn_ft.req; 573 555 DECLARE_COMPLETION_ONSTACK(completion); 574 556 int ret; 575 557 576 - /* prepare CT IU for GPN_FT */ 577 - req->ct_hdr.ct_rev = FC_CT_REV; 578 - req->ct_hdr.ct_fs_type = FC_FST_DIR; 579 - req->ct_hdr.ct_fs_subtype = FC_NS_SUBTYPE; 580 - req->ct_hdr.ct_options = 0; 581 - req->ct_hdr.ct_cmd = FC_NS_GPN_FT; 582 - req->ct_hdr.ct_mr_size = max_bytes / 4; 583 - req->gpn_ft.fn_domain_id_scope = 0; 584 - req->gpn_ft.fn_area_id_scope = 0; 558 + zfcp_fc_ct_ns_init(&req->ct_hdr, FC_NS_GPN_FT, max_bytes); 585 559 req->gpn_ft.fn_fc4_type = FC_TYPE_FCP; 586 560 587 - /* prepare zfcp_send_ct */ 588 - ct->handler = zfcp_fc_complete; 589 - ct->handler_data = &completion; 590 - ct->req = &gpn_ft->sg_req; 591 - ct->resp = gpn_ft->sg_resp; 561 + ct_els->handler = zfcp_fc_complete; 562 + ct_els->handler_data = &completion; 563 + ct_els->req = &fc_req->sg_req; 564 + ct_els->resp = &fc_req->sg_rsp; 592 565 593 - ret = zfcp_fsf_send_ct(&adapter->gs->ds, ct, NULL, 566 + ret = zfcp_fsf_send_ct(&adapter->gs->ds, ct_els, NULL, 594 567 ZFCP_FC_CTELS_TMO); 595 568 if (!ret) 596 569 wait_for_completion(&completion); ··· 585 610 list_move_tail(&port->list, lh); 586 611 } 587 612 588 - static int zfcp_fc_eval_gpn_ft(struct zfcp_fc_gpn_ft *gpn_ft, 613 + static int zfcp_fc_eval_gpn_ft(struct zfcp_fc_req *fc_req, 589 614 struct zfcp_adapter *adapter, int max_entries) 590 615 { 591 - struct zfcp_fsf_ct_els *ct = &gpn_ft->ct; 592 - struct scatterlist *sg = gpn_ft->sg_resp; 616 + struct zfcp_fsf_ct_els *ct_els = &fc_req->ct_els; 617 + struct scatterlist *sg = &fc_req->sg_rsp; 593 618 struct fc_ct_hdr *hdr = sg_virt(sg); 594 619 struct fc_gpn_ft_resp *acc = sg_virt(sg); 595 620 struct zfcp_port *port, *tmp; ··· 598 623 u32 d_id; 599 624 int ret = 0, x, last = 0; 600 625 601 - if (ct->status) 626 + if (ct_els->status) 602 627 return -EIO; 603 628 604 629 if (hdr->ct_cmd != FC_FS_ACC) { ··· 662 687 struct zfcp_adapter *adapter = container_of(work, struct zfcp_adapter, 663 688 scan_work); 664 689 int ret, i; 665 - struct zfcp_fc_gpn_ft *gpn_ft; 690 + struct zfcp_fc_req *fc_req; 666 691 int chain, max_entries, buf_num, max_bytes; 667 692 668 693 chain = adapter->adapter_features & FSF_FEATURE_ELS_CT_CHAINED_SBALS; ··· 677 702 if (zfcp_fc_wka_port_get(&adapter->gs->ds)) 678 703 return; 679 704 680 - gpn_ft = zfcp_alloc_sg_env(buf_num); 681 - if (!gpn_ft) 705 + fc_req = zfcp_alloc_sg_env(buf_num); 706 + if (!fc_req) 682 707 goto out; 683 708 684 709 for (i = 0; i < 3; i++) { 685 - ret = zfcp_fc_send_gpn_ft(gpn_ft, adapter, max_bytes); 710 + ret = zfcp_fc_send_gpn_ft(fc_req, adapter, max_bytes); 686 711 if (!ret) { 687 - ret = zfcp_fc_eval_gpn_ft(gpn_ft, adapter, max_entries); 712 + ret = zfcp_fc_eval_gpn_ft(fc_req, adapter, max_entries); 688 713 if (ret == -EAGAIN) 689 714 ssleep(1); 690 715 else 691 716 break; 692 717 } 693 718 } 694 - zfcp_free_sg_env(gpn_ft, buf_num); 719 + zfcp_sg_free_table(&fc_req->sg_rsp, buf_num); 720 + kmem_cache_free(zfcp_fc_req_cache, fc_req); 695 721 out: 696 722 zfcp_fc_wka_port_put(&adapter->gs->ds); 723 + } 724 + 725 + static int zfcp_fc_gspn(struct zfcp_adapter *adapter, 726 + struct zfcp_fc_req *fc_req) 727 + { 728 + DECLARE_COMPLETION_ONSTACK(completion); 729 + char devno[] = "DEVNO:"; 730 + struct zfcp_fsf_ct_els *ct_els = &fc_req->ct_els; 731 + struct zfcp_fc_gspn_req *gspn_req = &fc_req->u.gspn.req; 732 + struct zfcp_fc_gspn_rsp *gspn_rsp = &fc_req->u.gspn.rsp; 733 + int ret; 734 + 735 + zfcp_fc_ct_ns_init(&gspn_req->ct_hdr, FC_NS_GSPN_ID, 736 + FC_SYMBOLIC_NAME_SIZE); 737 + hton24(gspn_req->gspn.fp_fid, fc_host_port_id(adapter->scsi_host)); 738 + 739 + sg_init_one(&fc_req->sg_req, gspn_req, sizeof(*gspn_req)); 740 + sg_init_one(&fc_req->sg_rsp, gspn_rsp, sizeof(*gspn_rsp)); 741 + 742 + ct_els->handler = zfcp_fc_complete; 743 + ct_els->handler_data = &completion; 744 + ct_els->req = &fc_req->sg_req; 745 + ct_els->resp = &fc_req->sg_rsp; 746 + 747 + ret = zfcp_fsf_send_ct(&adapter->gs->ds, ct_els, NULL, 748 + ZFCP_FC_CTELS_TMO); 749 + if (ret) 750 + return ret; 751 + 752 + wait_for_completion(&completion); 753 + if (ct_els->status) 754 + return ct_els->status; 755 + 756 + if (fc_host_port_type(adapter->scsi_host) == FC_PORTTYPE_NPIV && 757 + !(strstr(gspn_rsp->gspn.fp_name, devno))) 758 + snprintf(fc_host_symbolic_name(adapter->scsi_host), 759 + FC_SYMBOLIC_NAME_SIZE, "%s%s %s NAME: %s", 760 + gspn_rsp->gspn.fp_name, devno, 761 + dev_name(&adapter->ccw_device->dev), 762 + init_utsname()->nodename); 763 + else 764 + strlcpy(fc_host_symbolic_name(adapter->scsi_host), 765 + gspn_rsp->gspn.fp_name, FC_SYMBOLIC_NAME_SIZE); 766 + 767 + return 0; 768 + } 769 + 770 + static void zfcp_fc_rspn(struct zfcp_adapter *adapter, 771 + struct zfcp_fc_req *fc_req) 772 + { 773 + DECLARE_COMPLETION_ONSTACK(completion); 774 + struct Scsi_Host *shost = adapter->scsi_host; 775 + struct zfcp_fsf_ct_els *ct_els = &fc_req->ct_els; 776 + struct zfcp_fc_rspn_req *rspn_req = &fc_req->u.rspn.req; 777 + struct fc_ct_hdr *rspn_rsp = &fc_req->u.rspn.rsp; 778 + int ret, len; 779 + 780 + zfcp_fc_ct_ns_init(&rspn_req->ct_hdr, FC_NS_RSPN_ID, 781 + FC_SYMBOLIC_NAME_SIZE); 782 + hton24(rspn_req->rspn.fr_fid.fp_fid, fc_host_port_id(shost)); 783 + len = strlcpy(rspn_req->rspn.fr_name, fc_host_symbolic_name(shost), 784 + FC_SYMBOLIC_NAME_SIZE); 785 + rspn_req->rspn.fr_name_len = len; 786 + 787 + sg_init_one(&fc_req->sg_req, rspn_req, sizeof(*rspn_req)); 788 + sg_init_one(&fc_req->sg_rsp, rspn_rsp, sizeof(*rspn_rsp)); 789 + 790 + ct_els->handler = zfcp_fc_complete; 791 + ct_els->handler_data = &completion; 792 + ct_els->req = &fc_req->sg_req; 793 + ct_els->resp = &fc_req->sg_rsp; 794 + 795 + ret = zfcp_fsf_send_ct(&adapter->gs->ds, ct_els, NULL, 796 + ZFCP_FC_CTELS_TMO); 797 + if (!ret) 798 + wait_for_completion(&completion); 799 + } 800 + 801 + /** 802 + * zfcp_fc_sym_name_update - Retrieve and update the symbolic port name 803 + * @work: ns_up_work of the adapter where to update the symbolic port name 804 + * 805 + * Retrieve the current symbolic port name that may have been set by 806 + * the hardware using the GSPN request and update the fc_host 807 + * symbolic_name sysfs attribute. When running in NPIV mode (and hence 808 + * the port name is unique for this system), update the symbolic port 809 + * name to add Linux specific information and update the FC nameserver 810 + * using the RSPN request. 811 + */ 812 + void zfcp_fc_sym_name_update(struct work_struct *work) 813 + { 814 + struct zfcp_adapter *adapter = container_of(work, struct zfcp_adapter, 815 + ns_up_work); 816 + int ret; 817 + struct zfcp_fc_req *fc_req; 818 + 819 + if (fc_host_port_type(adapter->scsi_host) != FC_PORTTYPE_NPORT && 820 + fc_host_port_type(adapter->scsi_host) != FC_PORTTYPE_NPIV) 821 + return; 822 + 823 + fc_req = kmem_cache_zalloc(zfcp_fc_req_cache, GFP_KERNEL); 824 + if (!fc_req) 825 + return; 826 + 827 + ret = zfcp_fc_wka_port_get(&adapter->gs->ds); 828 + if (ret) 829 + goto out_free; 830 + 831 + ret = zfcp_fc_gspn(adapter, fc_req); 832 + if (ret || fc_host_port_type(adapter->scsi_host) != FC_PORTTYPE_NPIV) 833 + goto out_ds_put; 834 + 835 + memset(fc_req, 0, sizeof(*fc_req)); 836 + zfcp_fc_rspn(adapter, fc_req); 837 + 838 + out_ds_put: 839 + zfcp_fc_wka_port_put(&adapter->gs->ds); 840 + out_free: 841 + kmem_cache_free(zfcp_fc_req_cache, fc_req); 697 842 } 698 843 699 844 static void zfcp_fc_ct_els_job_handler(void *data)
+66 -58
drivers/s390/scsi/zfcp_fc.h
··· 64 64 } __packed; 65 65 66 66 /** 67 - * struct zfcp_fc_gid_pn_resp - container for ct header plus gid_pn response 67 + * struct zfcp_fc_gid_pn_rsp - container for ct header plus gid_pn response 68 68 * @ct_hdr: FC GS common transport header 69 69 * @gid_pn: GID_PN response 70 70 */ 71 - struct zfcp_fc_gid_pn_resp { 71 + struct zfcp_fc_gid_pn_rsp { 72 72 struct fc_ct_hdr ct_hdr; 73 73 struct fc_gid_pn_resp gid_pn; 74 74 } __packed; 75 - 76 - /** 77 - * struct zfcp_fc_gid_pn - everything required in zfcp for gid_pn request 78 - * @ct: data passed to zfcp_fsf for issuing fsf request 79 - * @sg_req: scatterlist entry for request data 80 - * @sg_resp: scatterlist entry for response data 81 - * @gid_pn_req: GID_PN request data 82 - * @gid_pn_resp: GID_PN response data 83 - */ 84 - struct zfcp_fc_gid_pn { 85 - struct zfcp_fsf_ct_els ct; 86 - struct scatterlist sg_req; 87 - struct scatterlist sg_resp; 88 - struct zfcp_fc_gid_pn_req gid_pn_req; 89 - struct zfcp_fc_gid_pn_resp gid_pn_resp; 90 - struct zfcp_port *port; 91 - }; 92 75 93 76 /** 94 77 * struct zfcp_fc_gpn_ft - container for ct header plus gpn_ft request ··· 84 101 } __packed; 85 102 86 103 /** 87 - * struct zfcp_fc_gpn_ft_resp - container for ct header plus gpn_ft response 104 + * struct zfcp_fc_gspn_req - container for ct header plus GSPN_ID request 88 105 * @ct_hdr: FC GS common transport header 89 - * @gpn_ft: Array of gpn_ft response data to fill one memory page 106 + * @gspn: GSPN_ID request 90 107 */ 91 - struct zfcp_fc_gpn_ft_resp { 108 + struct zfcp_fc_gspn_req { 92 109 struct fc_ct_hdr ct_hdr; 93 - struct fc_gpn_ft_resp gpn_ft[ZFCP_FC_GPN_FT_ENT_PAGE]; 110 + struct fc_gid_pn_resp gspn; 94 111 } __packed; 95 112 96 113 /** 97 - * struct zfcp_fc_gpn_ft - zfcp data for gpn_ft request 98 - * @ct: data passed to zfcp_fsf for issuing fsf request 99 - * @sg_req: scatter list entry for gpn_ft request 100 - * @sg_resp: scatter list entries for gpn_ft responses (per memory page) 114 + * struct zfcp_fc_gspn_rsp - container for ct header plus GSPN_ID response 115 + * @ct_hdr: FC GS common transport header 116 + * @gspn: GSPN_ID response 117 + * @name: The name string of the GSPN_ID response 101 118 */ 102 - struct zfcp_fc_gpn_ft { 103 - struct zfcp_fsf_ct_els ct; 104 - struct scatterlist sg_req; 105 - struct scatterlist sg_resp[ZFCP_FC_GPN_FT_NUM_BUFS]; 106 - }; 119 + struct zfcp_fc_gspn_rsp { 120 + struct fc_ct_hdr ct_hdr; 121 + struct fc_gspn_resp gspn; 122 + char name[FC_SYMBOLIC_NAME_SIZE]; 123 + } __packed; 107 124 108 125 /** 109 - * struct zfcp_fc_els_adisc - everything required in zfcp for issuing ELS ADISC 110 - * @els: data required for issuing els fsf command 111 - * @req: scatterlist entry for ELS ADISC request 112 - * @resp: scatterlist entry for ELS ADISC response 113 - * @adisc_req: ELS ADISC request data 114 - * @adisc_resp: ELS ADISC response data 126 + * struct zfcp_fc_rspn_req - container for ct header plus RSPN_ID request 127 + * @ct_hdr: FC GS common transport header 128 + * @rspn: RSPN_ID request 129 + * @name: The name string of the RSPN_ID request 115 130 */ 116 - struct zfcp_fc_els_adisc { 117 - struct zfcp_fsf_ct_els els; 118 - struct scatterlist req; 119 - struct scatterlist resp; 120 - struct fc_els_adisc adisc_req; 121 - struct fc_els_adisc adisc_resp; 131 + struct zfcp_fc_rspn_req { 132 + struct fc_ct_hdr ct_hdr; 133 + struct fc_ns_rspn rspn; 134 + char name[FC_SYMBOLIC_NAME_SIZE]; 135 + } __packed; 136 + 137 + /** 138 + * struct zfcp_fc_req - Container for FC ELS and CT requests sent from zfcp 139 + * @ct_els: data required for issuing fsf command 140 + * @sg_req: scatterlist entry for request data 141 + * @sg_rsp: scatterlist entry for response data 142 + * @u: request specific data 143 + */ 144 + struct zfcp_fc_req { 145 + struct zfcp_fsf_ct_els ct_els; 146 + struct scatterlist sg_req; 147 + struct scatterlist sg_rsp; 148 + union { 149 + struct { 150 + struct fc_els_adisc req; 151 + struct fc_els_adisc rsp; 152 + } adisc; 153 + struct { 154 + struct zfcp_fc_gid_pn_req req; 155 + struct zfcp_fc_gid_pn_rsp rsp; 156 + } gid_pn; 157 + struct { 158 + struct scatterlist sg_rsp2[ZFCP_FC_GPN_FT_NUM_BUFS - 1]; 159 + struct zfcp_fc_gpn_ft_req req; 160 + } gpn_ft; 161 + struct { 162 + struct zfcp_fc_gspn_req req; 163 + struct zfcp_fc_gspn_rsp rsp; 164 + } gspn; 165 + struct { 166 + struct zfcp_fc_rspn_req req; 167 + struct fc_ct_hdr rsp; 168 + } rspn; 169 + } u; 122 170 }; 123 171 124 172 /** ··· 206 192 * zfcp_fc_scsi_to_fcp - setup FCP command with data from scsi_cmnd 207 193 * @fcp: fcp_cmnd to setup 208 194 * @scsi: scsi_cmnd where to get LUN, task attributes/flags and CDB 195 + * @tm: task management flags to setup task management command 209 196 */ 210 197 static inline 211 - void zfcp_fc_scsi_to_fcp(struct fcp_cmnd *fcp, struct scsi_cmnd *scsi) 198 + void zfcp_fc_scsi_to_fcp(struct fcp_cmnd *fcp, struct scsi_cmnd *scsi, 199 + u8 tm_flags) 212 200 { 213 201 char tag[2]; 214 202 215 203 int_to_scsilun(scsi->device->lun, (struct scsi_lun *) &fcp->fc_lun); 204 + 205 + if (unlikely(tm_flags)) { 206 + fcp->fc_tm_flags = tm_flags; 207 + return; 208 + } 216 209 217 210 if (scsi_populate_tag_msg(scsi, tag)) { 218 211 switch (tag[0]) { ··· 244 223 245 224 if (scsi_get_prot_type(scsi) == SCSI_PROT_DIF_TYPE1) 246 225 fcp->fc_dl += fcp->fc_dl / scsi->device->sector_size * 8; 247 - } 248 - 249 - /** 250 - * zfcp_fc_fcp_tm - setup FCP command as task management command 251 - * @fcp: fcp_cmnd to setup 252 - * @dev: scsi_device where to send the task management command 253 - * @tm: task management flags to setup tm command 254 - */ 255 - static inline 256 - void zfcp_fc_fcp_tm(struct fcp_cmnd *fcp, struct scsi_device *dev, u8 tm_flags) 257 - { 258 - int_to_scsilun(dev->lun, (struct scsi_lun *) &fcp->fc_lun); 259 - fcp->fc_tm_flags |= tm_flags; 260 226 } 261 227 262 228 /**
+15 -12
drivers/s390/scsi/zfcp_fsf.c
··· 18 18 #include "zfcp_qdio.h" 19 19 #include "zfcp_reqlist.h" 20 20 21 + struct kmem_cache *zfcp_fsf_qtcb_cache; 22 + 21 23 static void zfcp_fsf_request_timeout_handler(unsigned long data) 22 24 { 23 25 struct zfcp_adapter *adapter = (struct zfcp_adapter *) data; ··· 85 83 } 86 84 87 85 if (likely(req->qtcb)) 88 - kmem_cache_free(zfcp_data.qtcb_cache, req->qtcb); 86 + kmem_cache_free(zfcp_fsf_qtcb_cache, req->qtcb); 89 87 kfree(req); 90 88 } 91 89 ··· 214 212 215 213 if (req->status & ZFCP_STATUS_FSFREQ_DISMISSED) { 216 214 zfcp_dbf_hba_fsf_uss("fssrh_1", req); 217 - mempool_free(sr_buf, adapter->pool.status_read_data); 215 + mempool_free(virt_to_page(sr_buf), adapter->pool.sr_data); 218 216 zfcp_fsf_req_free(req); 219 217 return; 220 218 } ··· 267 265 break; 268 266 } 269 267 270 - mempool_free(sr_buf, adapter->pool.status_read_data); 268 + mempool_free(virt_to_page(sr_buf), adapter->pool.sr_data); 271 269 zfcp_fsf_req_free(req); 272 270 273 271 atomic_inc(&adapter->stat_miss); ··· 630 628 if (likely(pool)) 631 629 qtcb = mempool_alloc(pool, GFP_ATOMIC); 632 630 else 633 - qtcb = kmem_cache_alloc(zfcp_data.qtcb_cache, GFP_ATOMIC); 631 + qtcb = kmem_cache_alloc(zfcp_fsf_qtcb_cache, GFP_ATOMIC); 634 632 635 633 if (unlikely(!qtcb)) 636 634 return NULL; ··· 725 723 struct zfcp_adapter *adapter = qdio->adapter; 726 724 struct zfcp_fsf_req *req; 727 725 struct fsf_status_read_buffer *sr_buf; 726 + struct page *page; 728 727 int retval = -EIO; 729 728 730 729 spin_lock_irq(&qdio->req_q_lock); ··· 739 736 goto out; 740 737 } 741 738 742 - sr_buf = mempool_alloc(adapter->pool.status_read_data, GFP_ATOMIC); 743 - if (!sr_buf) { 739 + page = mempool_alloc(adapter->pool.sr_data, GFP_ATOMIC); 740 + if (!page) { 744 741 retval = -ENOMEM; 745 742 goto failed_buf; 746 743 } 744 + sr_buf = page_address(page); 747 745 memset(sr_buf, 0, sizeof(*sr_buf)); 748 746 req->data = sr_buf; 749 747 ··· 759 755 760 756 failed_req_send: 761 757 req->data = NULL; 762 - mempool_free(sr_buf, adapter->pool.status_read_data); 758 + mempool_free(virt_to_page(sr_buf), adapter->pool.sr_data); 763 759 failed_buf: 764 760 zfcp_dbf_hba_fsf_uss("fssr__1", req); 765 761 zfcp_fsf_req_free(req); ··· 1556 1552 SBAL_FLAGS0_TYPE_READ, 1557 1553 qdio->adapter->pool.erp_req); 1558 1554 1559 - if (unlikely(IS_ERR(req))) { 1555 + if (IS_ERR(req)) { 1560 1556 retval = PTR_ERR(req); 1561 1557 goto out; 1562 1558 } ··· 1609 1605 SBAL_FLAGS0_TYPE_READ, 1610 1606 qdio->adapter->pool.erp_req); 1611 1607 1612 - if (unlikely(IS_ERR(req))) { 1608 + if (IS_ERR(req)) { 1613 1609 retval = PTR_ERR(req); 1614 1610 goto out; 1615 1611 } ··· 2210 2206 zfcp_fsf_set_data_dir(scsi_cmnd, &io->data_direction); 2211 2207 2212 2208 fcp_cmnd = (struct fcp_cmnd *) &req->qtcb->bottom.io.fcp_cmnd; 2213 - zfcp_fc_scsi_to_fcp(fcp_cmnd, scsi_cmnd); 2209 + zfcp_fc_scsi_to_fcp(fcp_cmnd, scsi_cmnd, 0); 2214 2210 2215 2211 if (scsi_prot_sg_count(scsi_cmnd)) { 2216 2212 zfcp_qdio_set_data_div(qdio, &req->qdio_req, ··· 2288 2284 goto out; 2289 2285 } 2290 2286 2291 - req->status |= ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT; 2292 2287 req->data = scmnd; 2293 2288 req->handler = zfcp_fsf_fcp_task_mgmt_handler; 2294 2289 req->qtcb->header.lun_handle = zfcp_sdev->lun_handle; ··· 2299 2296 zfcp_qdio_set_sbale_last(qdio, &req->qdio_req); 2300 2297 2301 2298 fcp_cmnd = (struct fcp_cmnd *) &req->qtcb->bottom.io.fcp_cmnd; 2302 - zfcp_fc_fcp_tm(fcp_cmnd, scmnd->device, tm_flags); 2299 + zfcp_fc_scsi_to_fcp(fcp_cmnd, scmnd, tm_flags); 2303 2300 2304 2301 zfcp_fsf_start_timer(req, ZFCP_SCSI_ER_TIMEOUT); 2305 2302 2306 2303 if (!zfcp_fsf_req_send(req))
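The sr_data pool used above is created with mempool_create_page_pool() in zfcp_aux.c, so its elements are struct page pointers rather than kernel virtual addresses, and the BUILD_BUG_ON() there guarantees one status read buffer fits in a page. Allocation and free therefore have to convert between the two, exactly as the hunk does; the pairing in isolation:

	struct page *page = mempool_alloc(adapter->pool.sr_data, GFP_ATOMIC);
	struct fsf_status_read_buffer *sr_buf;

	if (!page)
		return -ENOMEM;
	sr_buf = page_address(page);	/* struct page -> virtual address */
	/* ... use sr_buf ... */
	mempool_free(virt_to_page(sr_buf),	/* virtual address -> page */
		     adapter->pool.sr_data);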
+39 -32
drivers/s390/scsi/zfcp_scsi.c
··· 292 292 return SUCCESS; 293 293 } 294 294 295 - int zfcp_adapter_scsi_register(struct zfcp_adapter *adapter) 295 + struct scsi_transport_template *zfcp_scsi_transport_template; 296 + 297 + static struct scsi_host_template zfcp_scsi_host_template = { 298 + .module = THIS_MODULE, 299 + .name = "zfcp", 300 + .queuecommand = zfcp_scsi_queuecommand, 301 + .eh_abort_handler = zfcp_scsi_eh_abort_handler, 302 + .eh_device_reset_handler = zfcp_scsi_eh_device_reset_handler, 303 + .eh_target_reset_handler = zfcp_scsi_eh_target_reset_handler, 304 + .eh_host_reset_handler = zfcp_scsi_eh_host_reset_handler, 305 + .slave_alloc = zfcp_scsi_slave_alloc, 306 + .slave_configure = zfcp_scsi_slave_configure, 307 + .slave_destroy = zfcp_scsi_slave_destroy, 308 + .change_queue_depth = zfcp_scsi_change_queue_depth, 309 + .proc_name = "zfcp", 310 + .can_queue = 4096, 311 + .this_id = -1, 312 + .sg_tablesize = ZFCP_QDIO_MAX_SBALES_PER_REQ, 313 + .max_sectors = (ZFCP_QDIO_MAX_SBALES_PER_REQ * 8), 314 + .dma_boundary = ZFCP_QDIO_SBALE_LEN - 1, 315 + .cmd_per_lun = 1, 316 + .use_clustering = 1, 317 + .shost_attrs = zfcp_sysfs_shost_attrs, 318 + .sdev_attrs = zfcp_sysfs_sdev_attrs, 319 + }; 320 + 321 + /** 322 + * zfcp_scsi_adapter_register - Register SCSI and FC host with SCSI midlayer 323 + * @adapter: The zfcp adapter to register with the SCSI midlayer 324 + */ 325 + int zfcp_scsi_adapter_register(struct zfcp_adapter *adapter) 296 326 { 297 327 struct ccw_dev_id dev_id; 298 328 ··· 331 301 332 302 ccw_device_get_id(adapter->ccw_device, &dev_id); 333 303 /* register adapter as SCSI host with mid layer of SCSI stack */ 334 - adapter->scsi_host = scsi_host_alloc(&zfcp_data.scsi_host_template, 304 + adapter->scsi_host = scsi_host_alloc(&zfcp_scsi_host_template, 335 305 sizeof (struct zfcp_adapter *)); 336 306 if (!adapter->scsi_host) { 337 307 dev_err(&adapter->ccw_device->dev, ··· 346 316 adapter->scsi_host->max_channel = 0; 347 317 adapter->scsi_host->unique_id = dev_id.devno; 348 318 adapter->scsi_host->max_cmd_len = 16; /* in struct fcp_cmnd */ 349 - adapter->scsi_host->transportt = zfcp_data.scsi_transport_template; 319 + adapter->scsi_host->transportt = zfcp_scsi_transport_template; 350 320 351 321 adapter->scsi_host->hostdata[0] = (unsigned long) adapter; 352 322 ··· 358 328 return 0; 359 329 } 360 330 361 - void zfcp_adapter_scsi_unregister(struct zfcp_adapter *adapter) 331 + /** 332 + * zfcp_scsi_adapter_unregister - Unregister SCSI and FC host from SCSI midlayer 333 + * @adapter: The zfcp adapter to unregister. 334 + */ 335 + void zfcp_scsi_adapter_unregister(struct zfcp_adapter *adapter) 362 336 { 363 337 struct Scsi_Host *shost; 364 338 struct zfcp_port *port; ··· 380 346 scsi_remove_host(shost); 381 347 scsi_host_put(shost); 382 348 adapter->scsi_host = NULL; 383 - 384 - return; 385 349 } 386 350 387 351 static struct fc_host_statistics* ··· 720 688 /* no functions registered for following dynamic attributes but 721 689 directly set by LLDD */ 722 690 .show_host_port_type = 1, 691 + .show_host_symbolic_name = 1, 723 692 .show_host_speed = 1, 724 693 .show_host_port_id = 1, 725 694 .dd_bsg_size = sizeof(struct zfcp_fsf_ct_els), 726 - }; 727 - 728 - struct zfcp_data zfcp_data = { 729 - .scsi_host_template = { 730 - .name = "zfcp", 731 - .module = THIS_MODULE, 732 - .proc_name = "zfcp", 733 - .change_queue_depth = zfcp_scsi_change_queue_depth, 734 - .slave_alloc = zfcp_scsi_slave_alloc, 735 - .slave_configure = zfcp_scsi_slave_configure, 736 - .slave_destroy = zfcp_scsi_slave_destroy, 737 - .queuecommand = zfcp_scsi_queuecommand, 738 - .eh_abort_handler = zfcp_scsi_eh_abort_handler, 739 - .eh_device_reset_handler = zfcp_scsi_eh_device_reset_handler, 740 - .eh_target_reset_handler = zfcp_scsi_eh_target_reset_handler, 741 - .eh_host_reset_handler = zfcp_scsi_eh_host_reset_handler, 742 - .can_queue = 4096, 743 - .this_id = -1, 744 - .sg_tablesize = ZFCP_QDIO_MAX_SBALES_PER_REQ, 745 - .cmd_per_lun = 1, 746 - .use_clustering = 1, 747 - .sdev_attrs = zfcp_sysfs_sdev_attrs, 748 - .max_sectors = (ZFCP_QDIO_MAX_SBALES_PER_REQ * 8), 749 - .dma_boundary = ZFCP_QDIO_SBALE_LEN - 1, 750 - .shost_attrs = zfcp_sysfs_shost_attrs, 751 - }, 752 695 };
+1
drivers/scsi/Kconfig
··· 381 381 382 382 source "drivers/scsi/cxgbi/Kconfig" 383 383 source "drivers/scsi/bnx2i/Kconfig" 384 + source "drivers/scsi/bnx2fc/Kconfig" 384 385 source "drivers/scsi/be2iscsi/Kconfig" 385 386 386 387 config SGIWD93_SCSI
+1
drivers/scsi/Makefile
··· 40 40 obj-$(CONFIG_LIBFCOE) += fcoe/ 41 41 obj-$(CONFIG_FCOE) += fcoe/ 42 42 obj-$(CONFIG_FCOE_FNIC) += fnic/ 43 + obj-$(CONFIG_SCSI_BNX2X_FCOE) += libfc/ fcoe/ bnx2fc/ 43 44 obj-$(CONFIG_ISCSI_TCP) += libiscsi.o libiscsi_tcp.o iscsi_tcp.o 44 45 obj-$(CONFIG_INFINIBAND_ISER) += libiscsi.o 45 46 obj-$(CONFIG_ISCSI_BOOT_SYSFS) += iscsi_boot_sysfs.o
+1 -2
drivers/scsi/NCR5380.c
··· 936 936 { 937 937 struct NCR5380_hostdata *hostdata = (struct NCR5380_hostdata *) instance->hostdata; 938 938 939 - cancel_delayed_work(&hostdata->coroutine); 940 - flush_scheduled_work(); 939 + cancel_delayed_work_sync(&hostdata->coroutine); 941 940 } 942 941 943 942 /**
+2 -2
drivers/scsi/arcmsr/arcmsr_hba.c
··· 1020 1020 int poll_count = 0; 1021 1021 arcmsr_free_sysfs_attr(acb); 1022 1022 scsi_remove_host(host); 1023 - flush_scheduled_work(); 1023 + flush_work_sync(&acb->arcmsr_do_message_isr_bh); 1024 1024 del_timer_sync(&acb->eternal_timer); 1025 1025 arcmsr_disable_outbound_ints(acb); 1026 1026 arcmsr_stop_adapter_bgrb(acb); ··· 1066 1066 (struct AdapterControlBlock *)host->hostdata; 1067 1067 del_timer_sync(&acb->eternal_timer); 1068 1068 arcmsr_disable_outbound_ints(acb); 1069 - flush_scheduled_work(); 1069 + flush_work_sync(&acb->arcmsr_do_message_isr_bh); 1070 1070 arcmsr_stop_adapter_bgrb(acb); 1071 1071 arcmsr_flush_adapter_cache(acb); 1072 1072 }
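Both this hunk and the NCR5380 one above apply the same workqueue cleanup: instead of flushing the entire shared global workqueue (flush_scheduled_work(), which also waits on unrelated work and is deadlock-prone when called with locks held that queued work may take), the driver waits only for its own work items. A sketch of the replacement pattern (names illustrative):

	#include <linux/workqueue.h>

	static void drv_teardown(struct delayed_work *dwork,
				 struct work_struct *work)
	{
		/* old style: cancel_delayed_work(dwork); flush_scheduled_work(); */
		cancel_delayed_work_sync(dwork); /* cancel and wait, this item only */
		flush_work_sync(work);           /* wait for one specific item */
	}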
+5 -13
drivers/scsi/be2iscsi/be_iscsi.c
··· 210 210 } 211 211 212 212 /** 213 - * beiscsi_conn_get_param - get the iscsi parameter 214 - * @cls_conn: pointer to iscsi cls conn 213 + * beiscsi_ep_get_param - get the iscsi parameter 214 + * @ep: pointer to iscsi ep 215 215 * @param: parameter type identifier 216 216 * @buf: buffer pointer 217 217 * 218 218 * returns iscsi parameter 219 219 */ 220 - int beiscsi_conn_get_param(struct iscsi_cls_conn *cls_conn, 220 + int beiscsi_ep_get_param(struct iscsi_endpoint *ep, 221 221 enum iscsi_param param, char *buf) 222 222 { 223 - struct beiscsi_endpoint *beiscsi_ep; 224 - struct iscsi_conn *conn = cls_conn->dd_data; 225 - struct beiscsi_conn *beiscsi_conn = conn->dd_data; 223 + struct beiscsi_endpoint *beiscsi_ep = ep->dd_data; 226 224 int len = 0; 227 225 228 226 SE_DEBUG(DBG_LVL_8, "In beiscsi_conn_get_param, param= %d\n", param); 229 - beiscsi_ep = beiscsi_conn->ep; 230 - if (!beiscsi_ep) { 231 - SE_DEBUG(DBG_LVL_1, 232 - "In beiscsi_conn_get_param , no beiscsi_ep\n"); 233 - return -ENODEV; 234 - } 235 227 236 228 switch (param) { 237 229 case ISCSI_PARAM_CONN_PORT: ··· 236 244 len = sprintf(buf, "%pI6\n", &beiscsi_ep->dst6_addr); 237 245 break; 238 246 default: 239 - return iscsi_conn_get_param(cls_conn, param, buf); 247 + return -ENOSYS; 240 248 } 241 249 return len; 242 250 }
+2 -2
drivers/scsi/be2iscsi/be_iscsi.h
··· 48 48 struct iscsi_cls_conn *cls_conn, 49 49 uint64_t transport_fd, int is_leading); 50 50 51 - int beiscsi_conn_get_param(struct iscsi_cls_conn *cls_conn, 52 - enum iscsi_param param, char *buf); 51 + int beiscsi_ep_get_param(struct iscsi_endpoint *ep, enum iscsi_param param, 52 + char *buf); 53 53 54 54 int beiscsi_get_host_param(struct Scsi_Host *shost, 55 55 enum iscsi_host_param param, char *buf);
+2 -1
drivers/scsi/be2iscsi/be_main.c
··· 4384 4384 .bind_conn = beiscsi_conn_bind, 4385 4385 .destroy_conn = iscsi_conn_teardown, 4386 4386 .set_param = beiscsi_set_param, 4387 - .get_conn_param = beiscsi_conn_get_param, 4387 + .get_conn_param = iscsi_conn_get_param, 4388 4388 .get_session_param = iscsi_session_get_param, 4389 4389 .get_host_param = beiscsi_get_host_param, 4390 4390 .start_conn = beiscsi_conn_start, ··· 4395 4395 .alloc_pdu = beiscsi_alloc_pdu, 4396 4396 .parse_pdu_itt = beiscsi_parse_pdu, 4397 4397 .get_stats = beiscsi_conn_get_stats, 4398 + .get_ep_param = beiscsi_ep_get_param, 4398 4399 .ep_connect = beiscsi_ep_connect, 4399 4400 .ep_poll = beiscsi_ep_poll, 4400 4401 .ep_disconnect = beiscsi_ep_disconnect,
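The wiring above splits parameter lookup between two callbacks: endpoint-scoped values come from .get_ep_param while .get_conn_param can drop back to the generic library helper. A sketch of how such an ops table is shaped (everything except the field names is illustrative, and a real transport fills many more handlers):

#include <linux/module.h>
#include <scsi/scsi_transport_iscsi.h>

/* my_ep_get_param as sketched for the be_iscsi.c hunk above */
static int my_ep_get_param(struct iscsi_endpoint *ep,
			   enum iscsi_param param, char *buf);

static struct iscsi_transport my_iscsi_transport = {
	.owner		= THIS_MODULE,
	.name		= "my_iscsi",
	.get_ep_param	= my_ep_get_param,	/* endpoint-scoped params */
	.get_conn_param	= iscsi_conn_get_param,	/* generic connection path */
};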
+1080
drivers/scsi/bnx2fc/57xx_hsi_bnx2fc.h
··· 1 + #ifndef __57XX_FCOE_HSI_LINUX_LE__ 2 + #define __57XX_FCOE_HSI_LINUX_LE__ 3 + 4 + /* 5 + * common data for all protocols 6 + */ 7 + struct b577xx_doorbell_hdr { 8 + u8 header; 9 + #define B577XX_DOORBELL_HDR_RX (0x1<<0) 10 + #define B577XX_DOORBELL_HDR_RX_SHIFT 0 11 + #define B577XX_DOORBELL_HDR_DB_TYPE (0x1<<1) 12 + #define B577XX_DOORBELL_HDR_DB_TYPE_SHIFT 1 13 + #define B577XX_DOORBELL_HDR_DPM_SIZE (0x3<<2) 14 + #define B577XX_DOORBELL_HDR_DPM_SIZE_SHIFT 2 15 + #define B577XX_DOORBELL_HDR_CONN_TYPE (0xF<<4) 16 + #define B577XX_DOORBELL_HDR_CONN_TYPE_SHIFT 4 17 + }; 18 + 19 + /* 20 + * doorbell message sent to the chip 21 + */ 22 + struct b577xx_doorbell_set_prod { 23 + #if defined(__BIG_ENDIAN) 24 + u16 prod; 25 + u8 zero_fill1; 26 + struct b577xx_doorbell_hdr header; 27 + #elif defined(__LITTLE_ENDIAN) 28 + struct b577xx_doorbell_hdr header; 29 + u8 zero_fill1; 30 + u16 prod; 31 + #endif 32 + }; 33 + 34 + 35 + struct regpair { 36 + __le32 lo; 37 + __le32 hi; 38 + }; 39 + 40 + 41 + /* 42 + * Fixed size structure in order to plant it in Union structure 43 + */ 44 + struct fcoe_abts_rsp_union { 45 + u32 r_ctl; 46 + u32 abts_rsp_payload[7]; 47 + }; 48 + 49 + 50 + /* 51 + * 4 regs size 52 + */ 53 + struct fcoe_bd_ctx { 54 + u32 buf_addr_hi; 55 + u32 buf_addr_lo; 56 + #if defined(__BIG_ENDIAN) 57 + u16 rsrv0; 58 + u16 buf_len; 59 + #elif defined(__LITTLE_ENDIAN) 60 + u16 buf_len; 61 + u16 rsrv0; 62 + #endif 63 + #if defined(__BIG_ENDIAN) 64 + u16 rsrv1; 65 + u16 flags; 66 + #elif defined(__LITTLE_ENDIAN) 67 + u16 flags; 68 + u16 rsrv1; 69 + #endif 70 + }; 71 + 72 + 73 + struct fcoe_cleanup_flow_info { 74 + #if defined(__BIG_ENDIAN) 75 + u16 reserved1; 76 + u16 task_id; 77 + #elif defined(__LITTLE_ENDIAN) 78 + u16 task_id; 79 + u16 reserved1; 80 + #endif 81 + u32 reserved2[7]; 82 + }; 83 + 84 + 85 + struct fcoe_fcp_cmd_payload { 86 + u32 opaque[8]; 87 + }; 88 + 89 + struct fcoe_fc_hdr { 90 + #if defined(__BIG_ENDIAN) 91 + u8 cs_ctl; 92 + u8 s_id[3]; 93 + #elif defined(__LITTLE_ENDIAN) 94 + u8 s_id[3]; 95 + u8 cs_ctl; 96 + #endif 97 + #if defined(__BIG_ENDIAN) 98 + u8 r_ctl; 99 + u8 d_id[3]; 100 + #elif defined(__LITTLE_ENDIAN) 101 + u8 d_id[3]; 102 + u8 r_ctl; 103 + #endif 104 + #if defined(__BIG_ENDIAN) 105 + u8 seq_id; 106 + u8 df_ctl; 107 + u16 seq_cnt; 108 + #elif defined(__LITTLE_ENDIAN) 109 + u16 seq_cnt; 110 + u8 df_ctl; 111 + u8 seq_id; 112 + #endif 113 + #if defined(__BIG_ENDIAN) 114 + u8 type; 115 + u8 f_ctl[3]; 116 + #elif defined(__LITTLE_ENDIAN) 117 + u8 f_ctl[3]; 118 + u8 type; 119 + #endif 120 + u32 parameters; 121 + #if defined(__BIG_ENDIAN) 122 + u16 ox_id; 123 + u16 rx_id; 124 + #elif defined(__LITTLE_ENDIAN) 125 + u16 rx_id; 126 + u16 ox_id; 127 + #endif 128 + }; 129 + 130 + struct fcoe_fc_frame { 131 + struct fcoe_fc_hdr fc_hdr; 132 + u32 reserved0[2]; 133 + }; 134 + 135 + union fcoe_cmd_flow_info { 136 + struct fcoe_fcp_cmd_payload fcp_cmd_payload; 137 + struct fcoe_fc_frame mp_fc_frame; 138 + }; 139 + 140 + 141 + 142 + struct fcoe_fcp_rsp_flags { 143 + u8 flags; 144 + #define FCOE_FCP_RSP_FLAGS_FCP_RSP_LEN_VALID (0x1<<0) 145 + #define FCOE_FCP_RSP_FLAGS_FCP_RSP_LEN_VALID_SHIFT 0 146 + #define FCOE_FCP_RSP_FLAGS_FCP_SNS_LEN_VALID (0x1<<1) 147 + #define FCOE_FCP_RSP_FLAGS_FCP_SNS_LEN_VALID_SHIFT 1 148 + #define FCOE_FCP_RSP_FLAGS_FCP_RESID_OVER (0x1<<2) 149 + #define FCOE_FCP_RSP_FLAGS_FCP_RESID_OVER_SHIFT 2 150 + #define FCOE_FCP_RSP_FLAGS_FCP_RESID_UNDER (0x1<<3) 151 + #define FCOE_FCP_RSP_FLAGS_FCP_RESID_UNDER_SHIFT 3 152 + #define 
FCOE_FCP_RSP_FLAGS_FCP_CONF_REQ (0x1<<4) 153 + #define FCOE_FCP_RSP_FLAGS_FCP_CONF_REQ_SHIFT 4 154 + #define FCOE_FCP_RSP_FLAGS_FCP_BIDI_FLAGS (0x7<<5) 155 + #define FCOE_FCP_RSP_FLAGS_FCP_BIDI_FLAGS_SHIFT 5 156 + }; 157 + 158 + 159 + struct fcoe_fcp_rsp_payload { 160 + struct regpair reserved0; 161 + u32 fcp_resid; 162 + #if defined(__BIG_ENDIAN) 163 + u16 retry_delay_timer; 164 + struct fcoe_fcp_rsp_flags fcp_flags; 165 + u8 scsi_status_code; 166 + #elif defined(__LITTLE_ENDIAN) 167 + u8 scsi_status_code; 168 + struct fcoe_fcp_rsp_flags fcp_flags; 169 + u16 retry_delay_timer; 170 + #endif 171 + u32 fcp_rsp_len; 172 + u32 fcp_sns_len; 173 + }; 174 + 175 + 176 + /* 177 + * Fixed size structure in order to plant it in Union structure 178 + */ 179 + struct fcoe_fcp_rsp_union { 180 + struct fcoe_fcp_rsp_payload payload; 181 + struct regpair reserved0; 182 + }; 183 + 184 + 185 + struct fcoe_fcp_xfr_rdy_payload { 186 + u32 burst_len; 187 + u32 data_ro; 188 + }; 189 + 190 + struct fcoe_read_flow_info { 191 + struct fcoe_fc_hdr fc_data_in_hdr; 192 + u32 reserved[2]; 193 + }; 194 + 195 + struct fcoe_write_flow_info { 196 + struct fcoe_fc_hdr fc_data_out_hdr; 197 + struct fcoe_fcp_xfr_rdy_payload fcp_xfr_payload; 198 + }; 199 + 200 + union fcoe_rsp_flow_info { 201 + struct fcoe_fcp_rsp_union fcp_rsp; 202 + struct fcoe_abts_rsp_union abts_rsp; 203 + }; 204 + 205 + /* 206 + * 32 bytes used for general purposes 207 + */ 208 + union fcoe_general_task_ctx { 209 + union fcoe_cmd_flow_info cmd_info; 210 + struct fcoe_read_flow_info read_info; 211 + struct fcoe_write_flow_info write_info; 212 + union fcoe_rsp_flow_info rsp_info; 213 + struct fcoe_cleanup_flow_info cleanup_info; 214 + u32 comp_info[8]; 215 + }; 216 + 217 + 218 + /* 219 + * FCoE KCQ CQE parameters 220 + */ 221 + union fcoe_kcqe_params { 222 + u32 reserved0[4]; 223 + }; 224 + 225 + /* 226 + * FCoE KCQ CQE 227 + */ 228 + struct fcoe_kcqe { 229 + u32 fcoe_conn_id; 230 + u32 completion_status; 231 + u32 fcoe_conn_context_id; 232 + union fcoe_kcqe_params params; 233 + #if defined(__BIG_ENDIAN) 234 + u8 flags; 235 + #define FCOE_KCQE_RESERVED0 (0x7<<0) 236 + #define FCOE_KCQE_RESERVED0_SHIFT 0 237 + #define FCOE_KCQE_RAMROD_COMPLETION (0x1<<3) 238 + #define FCOE_KCQE_RAMROD_COMPLETION_SHIFT 3 239 + #define FCOE_KCQE_LAYER_CODE (0x7<<4) 240 + #define FCOE_KCQE_LAYER_CODE_SHIFT 4 241 + #define FCOE_KCQE_LINKED_WITH_NEXT (0x1<<7) 242 + #define FCOE_KCQE_LINKED_WITH_NEXT_SHIFT 7 243 + u8 op_code; 244 + u16 qe_self_seq; 245 + #elif defined(__LITTLE_ENDIAN) 246 + u16 qe_self_seq; 247 + u8 op_code; 248 + u8 flags; 249 + #define FCOE_KCQE_RESERVED0 (0x7<<0) 250 + #define FCOE_KCQE_RESERVED0_SHIFT 0 251 + #define FCOE_KCQE_RAMROD_COMPLETION (0x1<<3) 252 + #define FCOE_KCQE_RAMROD_COMPLETION_SHIFT 3 253 + #define FCOE_KCQE_LAYER_CODE (0x7<<4) 254 + #define FCOE_KCQE_LAYER_CODE_SHIFT 4 255 + #define FCOE_KCQE_LINKED_WITH_NEXT (0x1<<7) 256 + #define FCOE_KCQE_LINKED_WITH_NEXT_SHIFT 7 257 + #endif 258 + }; 259 + 260 + /* 261 + * FCoE KWQE header 262 + */ 263 + struct fcoe_kwqe_header { 264 + #if defined(__BIG_ENDIAN) 265 + u8 flags; 266 + #define FCOE_KWQE_HEADER_RESERVED0 (0xF<<0) 267 + #define FCOE_KWQE_HEADER_RESERVED0_SHIFT 0 268 + #define FCOE_KWQE_HEADER_LAYER_CODE (0x7<<4) 269 + #define FCOE_KWQE_HEADER_LAYER_CODE_SHIFT 4 270 + #define FCOE_KWQE_HEADER_RESERVED1 (0x1<<7) 271 + #define FCOE_KWQE_HEADER_RESERVED1_SHIFT 7 272 + u8 op_code; 273 + #elif defined(__LITTLE_ENDIAN) 274 + u8 op_code; 275 + u8 flags; 276 + #define FCOE_KWQE_HEADER_RESERVED0 
(0xF<<0) 277 + #define FCOE_KWQE_HEADER_RESERVED0_SHIFT 0 278 + #define FCOE_KWQE_HEADER_LAYER_CODE (0x7<<4) 279 + #define FCOE_KWQE_HEADER_LAYER_CODE_SHIFT 4 280 + #define FCOE_KWQE_HEADER_RESERVED1 (0x1<<7) 281 + #define FCOE_KWQE_HEADER_RESERVED1_SHIFT 7 282 + #endif 283 + }; 284 + 285 + /* 286 + * FCoE firmware init request 1 287 + */ 288 + struct fcoe_kwqe_init1 { 289 + #if defined(__BIG_ENDIAN) 290 + struct fcoe_kwqe_header hdr; 291 + u16 num_tasks; 292 + #elif defined(__LITTLE_ENDIAN) 293 + u16 num_tasks; 294 + struct fcoe_kwqe_header hdr; 295 + #endif 296 + u32 task_list_pbl_addr_lo; 297 + u32 task_list_pbl_addr_hi; 298 + u32 dummy_buffer_addr_lo; 299 + u32 dummy_buffer_addr_hi; 300 + #if defined(__BIG_ENDIAN) 301 + u16 rq_num_wqes; 302 + u16 sq_num_wqes; 303 + #elif defined(__LITTLE_ENDIAN) 304 + u16 sq_num_wqes; 305 + u16 rq_num_wqes; 306 + #endif 307 + #if defined(__BIG_ENDIAN) 308 + u16 cq_num_wqes; 309 + u16 rq_buffer_log_size; 310 + #elif defined(__LITTLE_ENDIAN) 311 + u16 rq_buffer_log_size; 312 + u16 cq_num_wqes; 313 + #endif 314 + #if defined(__BIG_ENDIAN) 315 + u8 flags; 316 + #define FCOE_KWQE_INIT1_LOG_PAGE_SIZE (0xF<<0) 317 + #define FCOE_KWQE_INIT1_LOG_PAGE_SIZE_SHIFT 0 318 + #define FCOE_KWQE_INIT1_LOG_CACHED_PBES_PER_FUNC (0x7<<4) 319 + #define FCOE_KWQE_INIT1_LOG_CACHED_PBES_PER_FUNC_SHIFT 4 320 + #define FCOE_KWQE_INIT1_RESERVED1 (0x1<<7) 321 + #define FCOE_KWQE_INIT1_RESERVED1_SHIFT 7 322 + u8 num_sessions_log; 323 + u16 mtu; 324 + #elif defined(__LITTLE_ENDIAN) 325 + u16 mtu; 326 + u8 num_sessions_log; 327 + u8 flags; 328 + #define FCOE_KWQE_INIT1_LOG_PAGE_SIZE (0xF<<0) 329 + #define FCOE_KWQE_INIT1_LOG_PAGE_SIZE_SHIFT 0 330 + #define FCOE_KWQE_INIT1_LOG_CACHED_PBES_PER_FUNC (0x7<<4) 331 + #define FCOE_KWQE_INIT1_LOG_CACHED_PBES_PER_FUNC_SHIFT 4 332 + #define FCOE_KWQE_INIT1_RESERVED1 (0x1<<7) 333 + #define FCOE_KWQE_INIT1_RESERVED1_SHIFT 7 334 + #endif 335 + }; 336 + 337 + /* 338 + * FCoE firmware init request 2 339 + */ 340 + struct fcoe_kwqe_init2 { 341 + #if defined(__BIG_ENDIAN) 342 + struct fcoe_kwqe_header hdr; 343 + u16 reserved0; 344 + #elif defined(__LITTLE_ENDIAN) 345 + u16 reserved0; 346 + struct fcoe_kwqe_header hdr; 347 + #endif 348 + u32 hash_tbl_pbl_addr_lo; 349 + u32 hash_tbl_pbl_addr_hi; 350 + u32 t2_hash_tbl_addr_lo; 351 + u32 t2_hash_tbl_addr_hi; 352 + u32 t2_ptr_hash_tbl_addr_lo; 353 + u32 t2_ptr_hash_tbl_addr_hi; 354 + u32 free_list_count; 355 + }; 356 + 357 + /* 358 + * FCoE firmware init request 3 359 + */ 360 + struct fcoe_kwqe_init3 { 361 + #if defined(__BIG_ENDIAN) 362 + struct fcoe_kwqe_header hdr; 363 + u16 reserved0; 364 + #elif defined(__LITTLE_ENDIAN) 365 + u16 reserved0; 366 + struct fcoe_kwqe_header hdr; 367 + #endif 368 + u32 error_bit_map_lo; 369 + u32 error_bit_map_hi; 370 + #if defined(__BIG_ENDIAN) 371 + u8 reserved21[3]; 372 + u8 cached_session_enable; 373 + #elif defined(__LITTLE_ENDIAN) 374 + u8 cached_session_enable; 375 + u8 reserved21[3]; 376 + #endif 377 + u32 reserved2[4]; 378 + }; 379 + 380 + /* 381 + * FCoE connection offload request 1 382 + */ 383 + struct fcoe_kwqe_conn_offload1 { 384 + #if defined(__BIG_ENDIAN) 385 + struct fcoe_kwqe_header hdr; 386 + u16 fcoe_conn_id; 387 + #elif defined(__LITTLE_ENDIAN) 388 + u16 fcoe_conn_id; 389 + struct fcoe_kwqe_header hdr; 390 + #endif 391 + u32 sq_addr_lo; 392 + u32 sq_addr_hi; 393 + u32 rq_pbl_addr_lo; 394 + u32 rq_pbl_addr_hi; 395 + u32 rq_first_pbe_addr_lo; 396 + u32 rq_first_pbe_addr_hi; 397 + #if defined(__BIG_ENDIAN) 398 + u16 reserved0; 399 + u16 rq_prod; 400 + 
#elif defined(__LITTLE_ENDIAN) 401 + u16 rq_prod; 402 + u16 reserved0; 403 + #endif 404 + }; 405 + 406 + /* 407 + * FCoE connection offload request 2 408 + */ 409 + struct fcoe_kwqe_conn_offload2 { 410 + #if defined(__BIG_ENDIAN) 411 + struct fcoe_kwqe_header hdr; 412 + u16 tx_max_fc_pay_len; 413 + #elif defined(__LITTLE_ENDIAN) 414 + u16 tx_max_fc_pay_len; 415 + struct fcoe_kwqe_header hdr; 416 + #endif 417 + u32 cq_addr_lo; 418 + u32 cq_addr_hi; 419 + u32 xferq_addr_lo; 420 + u32 xferq_addr_hi; 421 + u32 conn_db_addr_lo; 422 + u32 conn_db_addr_hi; 423 + u32 reserved1; 424 + }; 425 + 426 + /* 427 + * FCoE connection offload request 3 428 + */ 429 + struct fcoe_kwqe_conn_offload3 { 430 + #if defined(__BIG_ENDIAN) 431 + struct fcoe_kwqe_header hdr; 432 + u16 vlan_tag; 433 + #define FCOE_KWQE_CONN_OFFLOAD3_VLAN_ID (0xFFF<<0) 434 + #define FCOE_KWQE_CONN_OFFLOAD3_VLAN_ID_SHIFT 0 435 + #define FCOE_KWQE_CONN_OFFLOAD3_CFI (0x1<<12) 436 + #define FCOE_KWQE_CONN_OFFLOAD3_CFI_SHIFT 12 437 + #define FCOE_KWQE_CONN_OFFLOAD3_PRIORITY (0x7<<13) 438 + #define FCOE_KWQE_CONN_OFFLOAD3_PRIORITY_SHIFT 13 439 + #elif defined(__LITTLE_ENDIAN) 440 + u16 vlan_tag; 441 + #define FCOE_KWQE_CONN_OFFLOAD3_VLAN_ID (0xFFF<<0) 442 + #define FCOE_KWQE_CONN_OFFLOAD3_VLAN_ID_SHIFT 0 443 + #define FCOE_KWQE_CONN_OFFLOAD3_CFI (0x1<<12) 444 + #define FCOE_KWQE_CONN_OFFLOAD3_CFI_SHIFT 12 445 + #define FCOE_KWQE_CONN_OFFLOAD3_PRIORITY (0x7<<13) 446 + #define FCOE_KWQE_CONN_OFFLOAD3_PRIORITY_SHIFT 13 447 + struct fcoe_kwqe_header hdr; 448 + #endif 449 + #if defined(__BIG_ENDIAN) 450 + u8 tx_max_conc_seqs_c3; 451 + u8 s_id[3]; 452 + #elif defined(__LITTLE_ENDIAN) 453 + u8 s_id[3]; 454 + u8 tx_max_conc_seqs_c3; 455 + #endif 456 + #if defined(__BIG_ENDIAN) 457 + u8 flags; 458 + #define FCOE_KWQE_CONN_OFFLOAD3_B_MUL_N_PORT_IDS (0x1<<0) 459 + #define FCOE_KWQE_CONN_OFFLOAD3_B_MUL_N_PORT_IDS_SHIFT 0 460 + #define FCOE_KWQE_CONN_OFFLOAD3_B_E_D_TOV_RES (0x1<<1) 461 + #define FCOE_KWQE_CONN_OFFLOAD3_B_E_D_TOV_RES_SHIFT 1 462 + #define FCOE_KWQE_CONN_OFFLOAD3_B_CONT_INCR_SEQ_CNT (0x1<<2) 463 + #define FCOE_KWQE_CONN_OFFLOAD3_B_CONT_INCR_SEQ_CNT_SHIFT 2 464 + #define FCOE_KWQE_CONN_OFFLOAD3_B_CONF_REQ (0x1<<3) 465 + #define FCOE_KWQE_CONN_OFFLOAD3_B_CONF_REQ_SHIFT 3 466 + #define FCOE_KWQE_CONN_OFFLOAD3_B_REC_VALID (0x1<<4) 467 + #define FCOE_KWQE_CONN_OFFLOAD3_B_REC_VALID_SHIFT 4 468 + #define FCOE_KWQE_CONN_OFFLOAD3_B_C2_VALID (0x1<<5) 469 + #define FCOE_KWQE_CONN_OFFLOAD3_B_C2_VALID_SHIFT 5 470 + #define FCOE_KWQE_CONN_OFFLOAD3_B_ACK_0 (0x1<<6) 471 + #define FCOE_KWQE_CONN_OFFLOAD3_B_ACK_0_SHIFT 6 472 + #define FCOE_KWQE_CONN_OFFLOAD3_B_VLAN_FLAG (0x1<<7) 473 + #define FCOE_KWQE_CONN_OFFLOAD3_B_VLAN_FLAG_SHIFT 7 474 + u8 d_id[3]; 475 + #elif defined(__LITTLE_ENDIAN) 476 + u8 d_id[3]; 477 + u8 flags; 478 + #define FCOE_KWQE_CONN_OFFLOAD3_B_MUL_N_PORT_IDS (0x1<<0) 479 + #define FCOE_KWQE_CONN_OFFLOAD3_B_MUL_N_PORT_IDS_SHIFT 0 480 + #define FCOE_KWQE_CONN_OFFLOAD3_B_E_D_TOV_RES (0x1<<1) 481 + #define FCOE_KWQE_CONN_OFFLOAD3_B_E_D_TOV_RES_SHIFT 1 482 + #define FCOE_KWQE_CONN_OFFLOAD3_B_CONT_INCR_SEQ_CNT (0x1<<2) 483 + #define FCOE_KWQE_CONN_OFFLOAD3_B_CONT_INCR_SEQ_CNT_SHIFT 2 484 + #define FCOE_KWQE_CONN_OFFLOAD3_B_CONF_REQ (0x1<<3) 485 + #define FCOE_KWQE_CONN_OFFLOAD3_B_CONF_REQ_SHIFT 3 486 + #define FCOE_KWQE_CONN_OFFLOAD3_B_REC_VALID (0x1<<4) 487 + #define FCOE_KWQE_CONN_OFFLOAD3_B_REC_VALID_SHIFT 4 488 + #define FCOE_KWQE_CONN_OFFLOAD3_B_C2_VALID (0x1<<5) 489 + #define FCOE_KWQE_CONN_OFFLOAD3_B_C2_VALID_SHIFT 5 490 + #define 
FCOE_KWQE_CONN_OFFLOAD3_B_ACK_0 (0x1<<6) 491 + #define FCOE_KWQE_CONN_OFFLOAD3_B_ACK_0_SHIFT 6 492 + #define FCOE_KWQE_CONN_OFFLOAD3_B_VLAN_FLAG (0x1<<7) 493 + #define FCOE_KWQE_CONN_OFFLOAD3_B_VLAN_FLAG_SHIFT 7 494 + #endif 495 + u32 reserved; 496 + u32 confq_first_pbe_addr_lo; 497 + u32 confq_first_pbe_addr_hi; 498 + #if defined(__BIG_ENDIAN) 499 + u16 rx_max_fc_pay_len; 500 + u16 tx_total_conc_seqs; 501 + #elif defined(__LITTLE_ENDIAN) 502 + u16 tx_total_conc_seqs; 503 + u16 rx_max_fc_pay_len; 504 + #endif 505 + #if defined(__BIG_ENDIAN) 506 + u8 rx_open_seqs_exch_c3; 507 + u8 rx_max_conc_seqs_c3; 508 + u16 rx_total_conc_seqs; 509 + #elif defined(__LITTLE_ENDIAN) 510 + u16 rx_total_conc_seqs; 511 + u8 rx_max_conc_seqs_c3; 512 + u8 rx_open_seqs_exch_c3; 513 + #endif 514 + }; 515 + 516 + /* 517 + * FCoE connection offload request 4 518 + */ 519 + struct fcoe_kwqe_conn_offload4 { 520 + #if defined(__BIG_ENDIAN) 521 + struct fcoe_kwqe_header hdr; 522 + u8 reserved2; 523 + u8 e_d_tov_timer_val; 524 + #elif defined(__LITTLE_ENDIAN) 525 + u8 e_d_tov_timer_val; 526 + u8 reserved2; 527 + struct fcoe_kwqe_header hdr; 528 + #endif 529 + u8 src_mac_addr_lo32[4]; 530 + #if defined(__BIG_ENDIAN) 531 + u8 dst_mac_addr_hi16[2]; 532 + u8 src_mac_addr_hi16[2]; 533 + #elif defined(__LITTLE_ENDIAN) 534 + u8 src_mac_addr_hi16[2]; 535 + u8 dst_mac_addr_hi16[2]; 536 + #endif 537 + u8 dst_mac_addr_lo32[4]; 538 + u32 lcq_addr_lo; 539 + u32 lcq_addr_hi; 540 + u32 confq_pbl_base_addr_lo; 541 + u32 confq_pbl_base_addr_hi; 542 + }; 543 + 544 + /* 545 + * FCoE connection enable request 546 + */ 547 + struct fcoe_kwqe_conn_enable_disable { 548 + #if defined(__BIG_ENDIAN) 549 + struct fcoe_kwqe_header hdr; 550 + u16 reserved0; 551 + #elif defined(__LITTLE_ENDIAN) 552 + u16 reserved0; 553 + struct fcoe_kwqe_header hdr; 554 + #endif 555 + u8 src_mac_addr_lo32[4]; 556 + #if defined(__BIG_ENDIAN) 557 + u16 vlan_tag; 558 + #define FCOE_KWQE_CONN_ENABLE_DISABLE_VLAN_ID (0xFFF<<0) 559 + #define FCOE_KWQE_CONN_ENABLE_DISABLE_VLAN_ID_SHIFT 0 560 + #define FCOE_KWQE_CONN_ENABLE_DISABLE_CFI (0x1<<12) 561 + #define FCOE_KWQE_CONN_ENABLE_DISABLE_CFI_SHIFT 12 562 + #define FCOE_KWQE_CONN_ENABLE_DISABLE_PRIORITY (0x7<<13) 563 + #define FCOE_KWQE_CONN_ENABLE_DISABLE_PRIORITY_SHIFT 13 564 + u8 src_mac_addr_hi16[2]; 565 + #elif defined(__LITTLE_ENDIAN) 566 + u8 src_mac_addr_hi16[2]; 567 + u16 vlan_tag; 568 + #define FCOE_KWQE_CONN_ENABLE_DISABLE_VLAN_ID (0xFFF<<0) 569 + #define FCOE_KWQE_CONN_ENABLE_DISABLE_VLAN_ID_SHIFT 0 570 + #define FCOE_KWQE_CONN_ENABLE_DISABLE_CFI (0x1<<12) 571 + #define FCOE_KWQE_CONN_ENABLE_DISABLE_CFI_SHIFT 12 572 + #define FCOE_KWQE_CONN_ENABLE_DISABLE_PRIORITY (0x7<<13) 573 + #define FCOE_KWQE_CONN_ENABLE_DISABLE_PRIORITY_SHIFT 13 574 + #endif 575 + u8 dst_mac_addr_lo32[4]; 576 + #if defined(__BIG_ENDIAN) 577 + u16 reserved1; 578 + u8 dst_mac_addr_hi16[2]; 579 + #elif defined(__LITTLE_ENDIAN) 580 + u8 dst_mac_addr_hi16[2]; 581 + u16 reserved1; 582 + #endif 583 + #if defined(__BIG_ENDIAN) 584 + u8 vlan_flag; 585 + u8 s_id[3]; 586 + #elif defined(__LITTLE_ENDIAN) 587 + u8 s_id[3]; 588 + u8 vlan_flag; 589 + #endif 590 + #if defined(__BIG_ENDIAN) 591 + u8 reserved3; 592 + u8 d_id[3]; 593 + #elif defined(__LITTLE_ENDIAN) 594 + u8 d_id[3]; 595 + u8 reserved3; 596 + #endif 597 + u32 context_id; 598 + u32 conn_id; 599 + u32 reserved4; 600 + }; 601 + 602 + /* 603 + * FCoE connection destroy request 604 + */ 605 + struct fcoe_kwqe_conn_destroy { 606 + #if defined(__BIG_ENDIAN) 607 + struct fcoe_kwqe_header hdr; 608 + 
u16 reserved0; 609 + #elif defined(__LITTLE_ENDIAN) 610 + u16 reserved0; 611 + struct fcoe_kwqe_header hdr; 612 + #endif 613 + u32 context_id; 614 + u32 conn_id; 615 + u32 reserved1[5]; 616 + }; 617 + 618 + /* 619 + * FCoe destroy request 620 + */ 621 + struct fcoe_kwqe_destroy { 622 + #if defined(__BIG_ENDIAN) 623 + struct fcoe_kwqe_header hdr; 624 + u16 reserved0; 625 + #elif defined(__LITTLE_ENDIAN) 626 + u16 reserved0; 627 + struct fcoe_kwqe_header hdr; 628 + #endif 629 + u32 reserved1[7]; 630 + }; 631 + 632 + /* 633 + * FCoe statistics request 634 + */ 635 + struct fcoe_kwqe_stat { 636 + #if defined(__BIG_ENDIAN) 637 + struct fcoe_kwqe_header hdr; 638 + u16 reserved0; 639 + #elif defined(__LITTLE_ENDIAN) 640 + u16 reserved0; 641 + struct fcoe_kwqe_header hdr; 642 + #endif 643 + u32 stat_params_addr_lo; 644 + u32 stat_params_addr_hi; 645 + u32 reserved1[5]; 646 + }; 647 + 648 + /* 649 + * FCoE KWQ WQE 650 + */ 651 + union fcoe_kwqe { 652 + struct fcoe_kwqe_init1 init1; 653 + struct fcoe_kwqe_init2 init2; 654 + struct fcoe_kwqe_init3 init3; 655 + struct fcoe_kwqe_conn_offload1 conn_offload1; 656 + struct fcoe_kwqe_conn_offload2 conn_offload2; 657 + struct fcoe_kwqe_conn_offload3 conn_offload3; 658 + struct fcoe_kwqe_conn_offload4 conn_offload4; 659 + struct fcoe_kwqe_conn_enable_disable conn_enable_disable; 660 + struct fcoe_kwqe_conn_destroy conn_destroy; 661 + struct fcoe_kwqe_destroy destroy; 662 + struct fcoe_kwqe_stat statistics; 663 + }; 664 + 665 + struct fcoe_mul_sges_ctx { 666 + struct regpair cur_sge_addr; 667 + #if defined(__BIG_ENDIAN) 668 + u8 sgl_size; 669 + u8 cur_sge_idx; 670 + u16 cur_sge_off; 671 + #elif defined(__LITTLE_ENDIAN) 672 + u16 cur_sge_off; 673 + u8 cur_sge_idx; 674 + u8 sgl_size; 675 + #endif 676 + }; 677 + 678 + struct fcoe_s_stat_ctx { 679 + u8 flags; 680 + #define FCOE_S_STAT_CTX_ACTIVE (0x1<<0) 681 + #define FCOE_S_STAT_CTX_ACTIVE_SHIFT 0 682 + #define FCOE_S_STAT_CTX_ACK_ABORT_SEQ_COND (0x1<<1) 683 + #define FCOE_S_STAT_CTX_ACK_ABORT_SEQ_COND_SHIFT 1 684 + #define FCOE_S_STAT_CTX_ABTS_PERFORMED (0x1<<2) 685 + #define FCOE_S_STAT_CTX_ABTS_PERFORMED_SHIFT 2 686 + #define FCOE_S_STAT_CTX_SEQ_TIMEOUT (0x1<<3) 687 + #define FCOE_S_STAT_CTX_SEQ_TIMEOUT_SHIFT 3 688 + #define FCOE_S_STAT_CTX_P_RJT (0x1<<4) 689 + #define FCOE_S_STAT_CTX_P_RJT_SHIFT 4 690 + #define FCOE_S_STAT_CTX_ACK_EOFT (0x1<<5) 691 + #define FCOE_S_STAT_CTX_ACK_EOFT_SHIFT 5 692 + #define FCOE_S_STAT_CTX_RSRV1 (0x3<<6) 693 + #define FCOE_S_STAT_CTX_RSRV1_SHIFT 6 694 + }; 695 + 696 + struct fcoe_seq_ctx { 697 + #if defined(__BIG_ENDIAN) 698 + u16 low_seq_cnt; 699 + struct fcoe_s_stat_ctx s_stat; 700 + u8 seq_id; 701 + #elif defined(__LITTLE_ENDIAN) 702 + u8 seq_id; 703 + struct fcoe_s_stat_ctx s_stat; 704 + u16 low_seq_cnt; 705 + #endif 706 + #if defined(__BIG_ENDIAN) 707 + u16 err_seq_cnt; 708 + u16 high_seq_cnt; 709 + #elif defined(__LITTLE_ENDIAN) 710 + u16 high_seq_cnt; 711 + u16 err_seq_cnt; 712 + #endif 713 + u32 low_exp_ro; 714 + u32 high_exp_ro; 715 + }; 716 + 717 + 718 + struct fcoe_single_sge_ctx { 719 + struct regpair cur_buf_addr; 720 + #if defined(__BIG_ENDIAN) 721 + u16 reserved0; 722 + u16 cur_buf_rem; 723 + #elif defined(__LITTLE_ENDIAN) 724 + u16 cur_buf_rem; 725 + u16 reserved0; 726 + #endif 727 + }; 728 + 729 + union fcoe_sgl_ctx { 730 + struct fcoe_single_sge_ctx single_sge; 731 + struct fcoe_mul_sges_ctx mul_sges; 732 + }; 733 + 734 + 735 + 736 + /* 737 + * FCoE SQ element 738 + */ 739 + struct fcoe_sqe { 740 + u16 wqe; 741 + #define FCOE_SQE_TASK_ID (0x7FFF<<0) 742 + 
#define FCOE_SQE_TASK_ID_SHIFT 0 743 + #define FCOE_SQE_TOGGLE_BIT (0x1<<15) 744 + #define FCOE_SQE_TOGGLE_BIT_SHIFT 15 745 + }; 746 + 747 + 748 + 749 + struct fcoe_task_ctx_entry_tx_only { 750 + union fcoe_sgl_ctx sgl_ctx; 751 + }; 752 + 753 + struct fcoe_task_ctx_entry_txwr_rxrd { 754 + #if defined(__BIG_ENDIAN) 755 + u16 verify_tx_seq; 756 + u8 init_flags; 757 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_TASK_TYPE (0x7<<0) 758 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_TASK_TYPE_SHIFT 0 759 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_DEV_TYPE (0x1<<3) 760 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_DEV_TYPE_SHIFT 3 761 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_CLASS_TYPE (0x1<<4) 762 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_CLASS_TYPE_SHIFT 4 763 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_SINGLE_SGE (0x1<<5) 764 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_SINGLE_SGE_SHIFT 5 765 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_RSRV5 (0x3<<6) 766 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_RSRV5_SHIFT 6 767 + u8 tx_flags; 768 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_TX_STATE (0xF<<0) 769 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_TX_STATE_SHIFT 0 770 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_RSRV4 (0xF<<4) 771 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_RSRV4_SHIFT 4 772 + #elif defined(__LITTLE_ENDIAN) 773 + u8 tx_flags; 774 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_TX_STATE (0xF<<0) 775 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_TX_STATE_SHIFT 0 776 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_RSRV4 (0xF<<4) 777 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_RSRV4_SHIFT 4 778 + u8 init_flags; 779 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_TASK_TYPE (0x7<<0) 780 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_TASK_TYPE_SHIFT 0 781 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_DEV_TYPE (0x1<<3) 782 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_DEV_TYPE_SHIFT 3 783 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_CLASS_TYPE (0x1<<4) 784 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_CLASS_TYPE_SHIFT 4 785 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_SINGLE_SGE (0x1<<5) 786 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_SINGLE_SGE_SHIFT 5 787 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_RSRV5 (0x3<<6) 788 + #define FCOE_TASK_CTX_ENTRY_TXWR_RXRD_RSRV5_SHIFT 6 789 + u16 verify_tx_seq; 790 + #endif 791 + }; 792 + 793 + /* 794 + * Common section. 
Both TX and RX processing might write and read from it in 795 + * different flows 796 + */ 797 + struct fcoe_task_ctx_entry_tx_rx_cmn { 798 + u32 data_2_trns; 799 + union fcoe_general_task_ctx general; 800 + #if defined(__BIG_ENDIAN) 801 + u16 tx_low_seq_cnt; 802 + struct fcoe_s_stat_ctx tx_s_stat; 803 + u8 tx_seq_id; 804 + #elif defined(__LITTLE_ENDIAN) 805 + u8 tx_seq_id; 806 + struct fcoe_s_stat_ctx tx_s_stat; 807 + u16 tx_low_seq_cnt; 808 + #endif 809 + u32 common_flags; 810 + #define FCOE_TASK_CTX_ENTRY_TX_RX_CMN_CID (0xFFFFFF<<0) 811 + #define FCOE_TASK_CTX_ENTRY_TX_RX_CMN_CID_SHIFT 0 812 + #define FCOE_TASK_CTX_ENTRY_TX_RX_CMN_VALID (0x1<<24) 813 + #define FCOE_TASK_CTX_ENTRY_TX_RX_CMN_VALID_SHIFT 24 814 + #define FCOE_TASK_CTX_ENTRY_TX_RX_CMN_SEQ_INIT (0x1<<25) 815 + #define FCOE_TASK_CTX_ENTRY_TX_RX_CMN_SEQ_INIT_SHIFT 25 816 + #define FCOE_TASK_CTX_ENTRY_TX_RX_CMN_PEND_XFER (0x1<<26) 817 + #define FCOE_TASK_CTX_ENTRY_TX_RX_CMN_PEND_XFER_SHIFT 26 818 + #define FCOE_TASK_CTX_ENTRY_TX_RX_CMN_PEND_CONF (0x1<<27) 819 + #define FCOE_TASK_CTX_ENTRY_TX_RX_CMN_PEND_CONF_SHIFT 27 820 + #define FCOE_TASK_CTX_ENTRY_TX_RX_CMN_EXP_FIRST_FRAME (0x1<<28) 821 + #define FCOE_TASK_CTX_ENTRY_TX_RX_CMN_EXP_FIRST_FRAME_SHIFT 28 822 + #define FCOE_TASK_CTX_ENTRY_TX_RX_CMN_RSRV (0x7<<29) 823 + #define FCOE_TASK_CTX_ENTRY_TX_RX_CMN_RSRV_SHIFT 29 824 + }; 825 + 826 + struct fcoe_task_ctx_entry_rxwr_txrd { 827 + #if defined(__BIG_ENDIAN) 828 + u16 rx_id; 829 + u16 rx_flags; 830 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_RX_STATE (0xF<<0) 831 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_RX_STATE_SHIFT 0 832 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_NUM_RQ_WQE (0x7<<4) 833 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_NUM_RQ_WQE_SHIFT 4 834 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_CONF_REQ (0x1<<7) 835 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_CONF_REQ_SHIFT 7 836 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_MISS_FRAME (0x1<<8) 837 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_MISS_FRAME_SHIFT 8 838 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_RESERVED0 (0x7F<<9) 839 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_RESERVED0_SHIFT 9 840 + #elif defined(__LITTLE_ENDIAN) 841 + u16 rx_flags; 842 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_RX_STATE (0xF<<0) 843 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_RX_STATE_SHIFT 0 844 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_NUM_RQ_WQE (0x7<<4) 845 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_NUM_RQ_WQE_SHIFT 4 846 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_CONF_REQ (0x1<<7) 847 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_CONF_REQ_SHIFT 7 848 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_MISS_FRAME (0x1<<8) 849 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_MISS_FRAME_SHIFT 8 850 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_RESERVED0 (0x7F<<9) 851 + #define FCOE_TASK_CTX_ENTRY_RXWR_TXRD_RESERVED0_SHIFT 9 852 + u16 rx_id; 853 + #endif 854 + }; 855 + 856 + struct fcoe_task_ctx_entry_rx_only { 857 + struct fcoe_seq_ctx seq_ctx; 858 + struct fcoe_seq_ctx ooo_seq_ctx; 859 + u32 rsrv3; 860 + union fcoe_sgl_ctx sgl_ctx; 861 + }; 862 + 863 + struct fcoe_task_ctx_entry { 864 + struct fcoe_task_ctx_entry_tx_only tx_wr_only; 865 + struct fcoe_task_ctx_entry_txwr_rxrd tx_wr_rx_rd; 866 + struct fcoe_task_ctx_entry_tx_rx_cmn cmn; 867 + struct fcoe_task_ctx_entry_rxwr_txrd rx_wr_tx_rd; 868 + struct fcoe_task_ctx_entry_rx_only rx_wr_only; 869 + u32 reserved[4]; 870 + }; 871 + 872 + 873 + /* 874 + * FCoE XFRQ element 875 + */ 876 + struct fcoe_xfrqe { 877 + u16 wqe; 878 + #define FCOE_XFRQE_TASK_ID (0x7FFF<<0) 879 + #define FCOE_XFRQE_TASK_ID_SHIFT 0 880 + #define 
FCOE_XFRQE_TOGGLE_BIT (0x1<<15) 881 + #define FCOE_XFRQE_TOGGLE_BIT_SHIFT 15 882 + }; 883 + 884 + 885 + /* 886 + * FCoE CONFQ element 887 + */ 888 + struct fcoe_confqe { 889 + #if defined(__BIG_ENDIAN) 890 + u16 rx_id; 891 + u16 ox_id; 892 + #elif defined(__LITTLE_ENDIAN) 893 + u16 ox_id; 894 + u16 rx_id; 895 + #endif 896 + u32 param; 897 + }; 898 + 899 + 900 + /* 901 + * FCoE conection data base 902 + */ 903 + struct fcoe_conn_db { 904 + #if defined(__BIG_ENDIAN) 905 + u16 rsrv0; 906 + u16 rq_prod; 907 + #elif defined(__LITTLE_ENDIAN) 908 + u16 rq_prod; 909 + u16 rsrv0; 910 + #endif 911 + u32 rsrv1; 912 + struct regpair cq_arm; 913 + }; 914 + 915 + 916 + /* 917 + * FCoE CQ element 918 + */ 919 + struct fcoe_cqe { 920 + u16 wqe; 921 + #define FCOE_CQE_CQE_INFO (0x3FFF<<0) 922 + #define FCOE_CQE_CQE_INFO_SHIFT 0 923 + #define FCOE_CQE_CQE_TYPE (0x1<<14) 924 + #define FCOE_CQE_CQE_TYPE_SHIFT 14 925 + #define FCOE_CQE_TOGGLE_BIT (0x1<<15) 926 + #define FCOE_CQE_TOGGLE_BIT_SHIFT 15 927 + }; 928 + 929 + 930 + /* 931 + * FCoE error/warning resporting entry 932 + */ 933 + struct fcoe_err_report_entry { 934 + u32 err_warn_bitmap_lo; 935 + u32 err_warn_bitmap_hi; 936 + u32 tx_buf_off; 937 + u32 rx_buf_off; 938 + struct fcoe_fc_hdr fc_hdr; 939 + }; 940 + 941 + 942 + /* 943 + * FCoE hash table entry (32 bytes) 944 + */ 945 + struct fcoe_hash_table_entry { 946 + #if defined(__BIG_ENDIAN) 947 + u8 d_id_0; 948 + u8 s_id_2; 949 + u8 s_id_1; 950 + u8 s_id_0; 951 + #elif defined(__LITTLE_ENDIAN) 952 + u8 s_id_0; 953 + u8 s_id_1; 954 + u8 s_id_2; 955 + u8 d_id_0; 956 + #endif 957 + #if defined(__BIG_ENDIAN) 958 + u16 dst_mac_addr_hi; 959 + u8 d_id_2; 960 + u8 d_id_1; 961 + #elif defined(__LITTLE_ENDIAN) 962 + u8 d_id_1; 963 + u8 d_id_2; 964 + u16 dst_mac_addr_hi; 965 + #endif 966 + u32 dst_mac_addr_lo; 967 + #if defined(__BIG_ENDIAN) 968 + u16 vlan_id; 969 + u16 src_mac_addr_hi; 970 + #elif defined(__LITTLE_ENDIAN) 971 + u16 src_mac_addr_hi; 972 + u16 vlan_id; 973 + #endif 974 + u32 src_mac_addr_lo; 975 + #if defined(__BIG_ENDIAN) 976 + u16 reserved1; 977 + u8 reserved0; 978 + u8 vlan_flag; 979 + #elif defined(__LITTLE_ENDIAN) 980 + u8 vlan_flag; 981 + u8 reserved0; 982 + u16 reserved1; 983 + #endif 984 + u32 reserved2; 985 + u32 field_id; 986 + #define FCOE_HASH_TABLE_ENTRY_CID (0xFFFFFF<<0) 987 + #define FCOE_HASH_TABLE_ENTRY_CID_SHIFT 0 988 + #define FCOE_HASH_TABLE_ENTRY_RESERVED3 (0x7F<<24) 989 + #define FCOE_HASH_TABLE_ENTRY_RESERVED3_SHIFT 24 990 + #define FCOE_HASH_TABLE_ENTRY_VALID (0x1<<31) 991 + #define FCOE_HASH_TABLE_ENTRY_VALID_SHIFT 31 992 + }; 993 + 994 + /* 995 + * FCoE pending work request CQE 996 + */ 997 + struct fcoe_pend_wq_cqe { 998 + u16 wqe; 999 + #define FCOE_PEND_WQ_CQE_TASK_ID (0x3FFF<<0) 1000 + #define FCOE_PEND_WQ_CQE_TASK_ID_SHIFT 0 1001 + #define FCOE_PEND_WQ_CQE_CQE_TYPE (0x1<<14) 1002 + #define FCOE_PEND_WQ_CQE_CQE_TYPE_SHIFT 14 1003 + #define FCOE_PEND_WQ_CQE_TOGGLE_BIT (0x1<<15) 1004 + #define FCOE_PEND_WQ_CQE_TOGGLE_BIT_SHIFT 15 1005 + }; 1006 + 1007 + 1008 + /* 1009 + * FCoE RX statistics parameters section#0 1010 + */ 1011 + struct fcoe_rx_stat_params_section0 { 1012 + u32 fcoe_ver_cnt; 1013 + u32 fcoe_rx_pkt_cnt; 1014 + u32 fcoe_rx_byte_cnt; 1015 + u32 fcoe_rx_drop_pkt_cnt; 1016 + }; 1017 + 1018 + 1019 + /* 1020 + * FCoE RX statistics parameters section#1 1021 + */ 1022 + struct fcoe_rx_stat_params_section1 { 1023 + u32 fc_crc_cnt; 1024 + u32 eofa_del_cnt; 1025 + u32 miss_frame_cnt; 1026 + u32 seq_timeout_cnt; 1027 + u32 drop_seq_cnt; 1028 + u32 fcoe_rx_drop_pkt_cnt; 
1029 + u32 fcp_rx_pkt_cnt; 1030 + u32 reserved0; 1031 + }; 1032 + 1033 + 1034 + /* 1035 + * FCoE TX statistics parameters 1036 + */ 1037 + struct fcoe_tx_stat_params { 1038 + u32 fcoe_tx_pkt_cnt; 1039 + u32 fcoe_tx_byte_cnt; 1040 + u32 fcp_tx_pkt_cnt; 1041 + u32 reserved0; 1042 + }; 1043 + 1044 + /* 1045 + * FCoE statistics parameters 1046 + */ 1047 + struct fcoe_statistics_params { 1048 + struct fcoe_tx_stat_params tx_stat; 1049 + struct fcoe_rx_stat_params_section0 rx_stat0; 1050 + struct fcoe_rx_stat_params_section1 rx_stat1; 1051 + }; 1052 + 1053 + 1054 + /* 1055 + * FCoE t2 hash table entry (64 bytes) 1056 + */ 1057 + struct fcoe_t2_hash_table_entry { 1058 + struct fcoe_hash_table_entry data; 1059 + struct regpair next; 1060 + struct regpair reserved0[3]; 1061 + }; 1062 + 1063 + /* 1064 + * FCoE unsolicited CQE 1065 + */ 1066 + struct fcoe_unsolicited_cqe { 1067 + u16 wqe; 1068 + #define FCOE_UNSOLICITED_CQE_SUBTYPE (0x3<<0) 1069 + #define FCOE_UNSOLICITED_CQE_SUBTYPE_SHIFT 0 1070 + #define FCOE_UNSOLICITED_CQE_PKT_LEN (0xFFF<<2) 1071 + #define FCOE_UNSOLICITED_CQE_PKT_LEN_SHIFT 2 1072 + #define FCOE_UNSOLICITED_CQE_CQE_TYPE (0x1<<14) 1073 + #define FCOE_UNSOLICITED_CQE_CQE_TYPE_SHIFT 14 1074 + #define FCOE_UNSOLICITED_CQE_TOGGLE_BIT (0x1<<15) 1075 + #define FCOE_UNSOLICITED_CQE_TOGGLE_BIT_SHIFT 15 1076 + }; 1077 + 1078 + 1079 + 1080 + #endif /* __57XX_FCOE_HSI_LINUX_LE__ */
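The mask/_SHIFT macro pairs throughout this header are consumed by open-coded field extraction, which keeps the wire layout explicit on both endiannesses. A minimal sketch against the CQ element defined above (the function names are illustrative; a real CQE would come off the completion ring):

#include <linux/types.h>

/* Hedged sketch: pull the info field and toggle bit out of a CQE
 * using the header's mask/shift pairs. */
static inline u16 my_cqe_info(const struct fcoe_cqe *cqe)
{
	return (cqe->wqe & FCOE_CQE_CQE_INFO) >> FCOE_CQE_CQE_INFO_SHIFT;
}

static inline bool my_cqe_toggle(const struct fcoe_cqe *cqe)
{
	return !!(cqe->wqe & FCOE_CQE_TOGGLE_BIT);
}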
+11
drivers/scsi/bnx2fc/Kconfig
··· 1 + config SCSI_BNX2X_FCOE 2 + tristate "Broadcom NetXtreme II FCoE support" 3 + depends on PCI 4 + select NETDEVICES 5 + select NETDEV_1000 6 + select LIBFC 7 + select LIBFCOE 8 + select CNIC 9 + ---help--- 10 + This driver supports FCoE offload for the Broadcom NetXtreme II 11 + devices.
+3
drivers/scsi/bnx2fc/Makefile
··· 1 + obj-$(CONFIG_SCSI_BNX2X_FCOE) += bnx2fc.o 2 + 3 + bnx2fc-y := bnx2fc_els.o bnx2fc_fcoe.o bnx2fc_hwi.o bnx2fc_io.o bnx2fc_tgt.o
+511
drivers/scsi/bnx2fc/bnx2fc.h
··· 1 + #ifndef _BNX2FC_H_ 2 + #define _BNX2FC_H_ 3 + /* bnx2fc.h: Broadcom NetXtreme II Linux FCoE offload driver. 4 + * 5 + * Copyright (c) 2008 - 2010 Broadcom Corporation 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation. 10 + * 11 + * Written by: Bhanu Prakash Gollapudi (bprakash@broadcom.com) 12 + */ 13 + 14 + #include <linux/module.h> 15 + #include <linux/moduleparam.h> 16 + #include <linux/kernel.h> 17 + #include <linux/skbuff.h> 18 + #include <linux/netdevice.h> 19 + #include <linux/etherdevice.h> 20 + #include <linux/if_ether.h> 21 + #include <linux/if_vlan.h> 22 + #include <linux/kthread.h> 23 + #include <linux/crc32.h> 24 + #include <linux/cpu.h> 25 + #include <linux/types.h> 26 + #include <linux/list.h> 27 + #include <linux/delay.h> 28 + #include <linux/timer.h> 29 + #include <linux/errno.h> 30 + #include <linux/pci.h> 31 + #include <linux/init.h> 32 + #include <linux/dma-mapping.h> 33 + #include <linux/workqueue.h> 34 + #include <linux/mutex.h> 35 + #include <linux/spinlock.h> 36 + #include <linux/bitops.h> 37 + #include <linux/log2.h> 38 + #include <linux/interrupt.h> 39 + #include <linux/sched.h> 40 + #include <linux/io.h> 41 + 42 + #include <scsi/scsi.h> 43 + #include <scsi/scsi_host.h> 44 + #include <scsi/scsi_device.h> 45 + #include <scsi/scsi_cmnd.h> 46 + #include <scsi/scsi_eh.h> 47 + #include <scsi/scsi_tcq.h> 48 + #include <scsi/libfc.h> 49 + #include <scsi/libfcoe.h> 50 + #include <scsi/fc_encode.h> 51 + #include <scsi/scsi_transport.h> 52 + #include <scsi/scsi_transport_fc.h> 53 + #include <scsi/fc/fc_fip.h> 54 + #include <scsi/fc/fc_fc2.h> 55 + #include <scsi/fc_frame.h> 56 + #include <scsi/fc/fc_fcoe.h> 57 + #include <scsi/fc/fc_fcp.h> 58 + 59 + #include "57xx_hsi_bnx2fc.h" 60 + #include "bnx2fc_debug.h" 61 + #include "../../net/cnic_if.h" 62 + #include "bnx2fc_constants.h" 63 + 64 + #define BNX2FC_NAME "bnx2fc" 65 + #define BNX2FC_VERSION "1.0.0" 66 + 67 + #define PFX "bnx2fc: " 68 + 69 + #define BNX2X_DOORBELL_PCI_BAR 2 70 + 71 + #define BNX2FC_MAX_BD_LEN 0xffff 72 + #define BNX2FC_BD_SPLIT_SZ 0x8000 73 + #define BNX2FC_MAX_BDS_PER_CMD 256 74 + 75 + #define BNX2FC_SQ_WQES_MAX 256 76 + 77 + #define BNX2FC_SCSI_MAX_SQES ((3 * BNX2FC_SQ_WQES_MAX) / 8) 78 + #define BNX2FC_TM_MAX_SQES ((BNX2FC_SQ_WQES_MAX) / 2) 79 + #define BNX2FC_ELS_MAX_SQES (BNX2FC_TM_MAX_SQES - 1) 80 + 81 + #define BNX2FC_RQ_WQES_MAX 16 82 + #define BNX2FC_CQ_WQES_MAX (BNX2FC_SQ_WQES_MAX + BNX2FC_RQ_WQES_MAX) 83 + 84 + #define BNX2FC_NUM_MAX_SESS 128 85 + #define BNX2FC_NUM_MAX_SESS_LOG (ilog2(BNX2FC_NUM_MAX_SESS)) 86 + 87 + #define BNX2FC_MAX_OUTSTANDING_CMNDS 4096 88 + #define BNX2FC_MIN_PAYLOAD 256 89 + #define BNX2FC_MAX_PAYLOAD 2048 90 + 91 + #define BNX2FC_RQ_BUF_SZ 256 92 + #define BNX2FC_RQ_BUF_LOG_SZ (ilog2(BNX2FC_RQ_BUF_SZ)) 93 + 94 + #define BNX2FC_SQ_WQE_SIZE (sizeof(struct fcoe_sqe)) 95 + #define BNX2FC_CQ_WQE_SIZE (sizeof(struct fcoe_cqe)) 96 + #define BNX2FC_RQ_WQE_SIZE (BNX2FC_RQ_BUF_SZ) 97 + #define BNX2FC_XFERQ_WQE_SIZE (sizeof(struct fcoe_xfrqe)) 98 + #define BNX2FC_CONFQ_WQE_SIZE (sizeof(struct fcoe_confqe)) 99 + #define BNX2FC_5771X_DB_PAGE_SIZE 128 100 + 101 + #define BNX2FC_MAX_TASKS BNX2FC_MAX_OUTSTANDING_CMNDS 102 + #define BNX2FC_TASK_SIZE 128 103 + #define BNX2FC_TASKS_PER_PAGE (PAGE_SIZE/BNX2FC_TASK_SIZE) 104 + #define BNX2FC_TASK_CTX_ARR_SZ (BNX2FC_MAX_TASKS/BNX2FC_TASKS_PER_PAGE) 105 + 106 + #define BNX2FC_MAX_ROWS_IN_HASH_TBL 
8 107 + #define BNX2FC_HASH_TBL_CHUNK_SIZE (16 * 1024) 108 + 109 + #define BNX2FC_MAX_SEQS 255 110 + 111 + #define BNX2FC_READ (1 << 1) 112 + #define BNX2FC_WRITE (1 << 0) 113 + 114 + #define BNX2FC_MIN_XID 0 115 + #define BNX2FC_MAX_XID (BNX2FC_MAX_OUTSTANDING_CMNDS - 1) 116 + #define FCOE_MIN_XID (BNX2FC_MAX_OUTSTANDING_CMNDS) 117 + #define FCOE_MAX_XID \ 118 + (BNX2FC_MAX_OUTSTANDING_CMNDS + (nr_cpu_ids * 256)) 119 + #define BNX2FC_MAX_LUN 0xFFFF 120 + #define BNX2FC_MAX_FCP_TGT 256 121 + #define BNX2FC_MAX_CMD_LEN 16 122 + 123 + #define BNX2FC_TM_TIMEOUT 60 /* secs */ 124 + #define BNX2FC_IO_TIMEOUT 20000UL /* msecs */ 125 + 126 + #define BNX2FC_WAIT_CNT 120 127 + #define BNX2FC_FW_TIMEOUT (3 * HZ) 128 + 129 + #define PORT_MAX 2 130 + 131 + #define CMD_SCSI_STATUS(Cmnd) ((Cmnd)->SCp.Status) 132 + 133 + /* FC FCP Status */ 134 + #define FC_GOOD 0 135 + 136 + #define BNX2FC_RNID_HBA 0x7 137 + 138 + /* bnx2fc driver uses only one instance of fcoe_percpu_s */ 139 + extern struct fcoe_percpu_s bnx2fc_global; 140 + 141 + extern struct workqueue_struct *bnx2fc_wq; 142 + 143 + struct bnx2fc_percpu_s { 144 + struct task_struct *iothread; 145 + struct list_head work_list; 146 + spinlock_t fp_work_lock; 147 + }; 148 + 149 + 150 + struct bnx2fc_hba { 151 + struct list_head link; 152 + struct cnic_dev *cnic; 153 + struct pci_dev *pcidev; 154 + struct net_device *netdev; 155 + struct net_device *phys_dev; 156 + unsigned long reg_with_cnic; 157 + #define BNX2FC_CNIC_REGISTERED 1 158 + struct packet_type fcoe_packet_type; 159 + struct packet_type fip_packet_type; 160 + struct bnx2fc_cmd_mgr *cmd_mgr; 161 + struct workqueue_struct *timer_work_queue; 162 + struct kref kref; 163 + spinlock_t hba_lock; 164 + struct mutex hba_mutex; 165 + unsigned long adapter_state; 166 + #define ADAPTER_STATE_UP 0 167 + #define ADAPTER_STATE_GOING_DOWN 1 168 + #define ADAPTER_STATE_LINK_DOWN 2 169 + #define ADAPTER_STATE_READY 3 170 + u32 flags; 171 + unsigned long init_done; 172 + #define BNX2FC_FW_INIT_DONE 0 173 + #define BNX2FC_CTLR_INIT_DONE 1 174 + #define BNX2FC_CREATE_DONE 2 175 + struct fcoe_ctlr ctlr; 176 + u8 vlan_enabled; 177 + int vlan_id; 178 + u32 next_conn_id; 179 + struct fcoe_task_ctx_entry **task_ctx; 180 + dma_addr_t *task_ctx_dma; 181 + struct regpair *task_ctx_bd_tbl; 182 + dma_addr_t task_ctx_bd_dma; 183 + 184 + int hash_tbl_segment_count; 185 + void **hash_tbl_segments; 186 + void *hash_tbl_pbl; 187 + dma_addr_t hash_tbl_pbl_dma; 188 + struct fcoe_t2_hash_table_entry *t2_hash_tbl; 189 + dma_addr_t t2_hash_tbl_dma; 190 + char *t2_hash_tbl_ptr; 191 + dma_addr_t t2_hash_tbl_ptr_dma; 192 + 193 + char *dummy_buffer; 194 + dma_addr_t dummy_buf_dma; 195 + 196 + struct fcoe_statistics_params *stats_buffer; 197 + dma_addr_t stats_buf_dma; 198 + 199 + /* 200 + * PCI related info. 
201 + */ 202 + u16 pci_did; 203 + u16 pci_vid; 204 + u16 pci_sdid; 205 + u16 pci_svid; 206 + u16 pci_func; 207 + u16 pci_devno; 208 + 209 + struct task_struct *l2_thread; 210 + 211 + /* linkdown handling */ 212 + wait_queue_head_t shutdown_wait; 213 + int wait_for_link_down; 214 + 215 + /*destroy handling */ 216 + struct timer_list destroy_timer; 217 + wait_queue_head_t destroy_wait; 218 + 219 + /* Active list of offloaded sessions */ 220 + struct bnx2fc_rport *tgt_ofld_list[BNX2FC_NUM_MAX_SESS]; 221 + int num_ofld_sess; 222 + 223 + /* statistics */ 224 + struct completion stat_req_done; 225 + }; 226 + 227 + #define bnx2fc_from_ctlr(fip) container_of(fip, struct bnx2fc_hba, ctlr) 228 + 229 + struct bnx2fc_cmd_mgr { 230 + struct bnx2fc_hba *hba; 231 + u16 next_idx; 232 + struct list_head *free_list; 233 + spinlock_t *free_list_lock; 234 + struct io_bdt **io_bdt_pool; 235 + struct bnx2fc_cmd **cmds; 236 + }; 237 + 238 + struct bnx2fc_rport { 239 + struct fcoe_port *port; 240 + struct fc_rport *rport; 241 + struct fc_rport_priv *rdata; 242 + void __iomem *ctx_base; 243 + #define DPM_TRIGER_TYPE 0x40 244 + u32 fcoe_conn_id; 245 + u32 context_id; 246 + u32 sid; 247 + 248 + unsigned long flags; 249 + #define BNX2FC_FLAG_SESSION_READY 0x1 250 + #define BNX2FC_FLAG_OFFLOADED 0x2 251 + #define BNX2FC_FLAG_DISABLED 0x3 252 + #define BNX2FC_FLAG_DESTROYED 0x4 253 + #define BNX2FC_FLAG_OFLD_REQ_CMPL 0x5 254 + #define BNX2FC_FLAG_DESTROY_CMPL 0x6 255 + #define BNX2FC_FLAG_CTX_ALLOC_FAILURE 0x7 256 + #define BNX2FC_FLAG_UPLD_REQ_COMPL 0x8 257 + #define BNX2FC_FLAG_EXPL_LOGO 0x9 258 + 259 + u32 max_sqes; 260 + u32 max_rqes; 261 + u32 max_cqes; 262 + 263 + struct fcoe_sqe *sq; 264 + dma_addr_t sq_dma; 265 + u16 sq_prod_idx; 266 + u8 sq_curr_toggle_bit; 267 + u32 sq_mem_size; 268 + 269 + struct fcoe_cqe *cq; 270 + dma_addr_t cq_dma; 271 + u32 cq_cons_idx; 272 + u8 cq_curr_toggle_bit; 273 + u32 cq_mem_size; 274 + 275 + void *rq; 276 + dma_addr_t rq_dma; 277 + u32 rq_prod_idx; 278 + u32 rq_cons_idx; 279 + u32 rq_mem_size; 280 + 281 + void *rq_pbl; 282 + dma_addr_t rq_pbl_dma; 283 + u32 rq_pbl_size; 284 + 285 + struct fcoe_xfrqe *xferq; 286 + dma_addr_t xferq_dma; 287 + u32 xferq_mem_size; 288 + 289 + struct fcoe_confqe *confq; 290 + dma_addr_t confq_dma; 291 + u32 confq_mem_size; 292 + 293 + void *confq_pbl; 294 + dma_addr_t confq_pbl_dma; 295 + u32 confq_pbl_size; 296 + 297 + struct fcoe_conn_db *conn_db; 298 + dma_addr_t conn_db_dma; 299 + u32 conn_db_mem_size; 300 + 301 + struct fcoe_sqe *lcq; 302 + dma_addr_t lcq_dma; 303 + u32 lcq_mem_size; 304 + 305 + void *ofld_req[4]; 306 + dma_addr_t ofld_req_dma[4]; 307 + void *enbl_req; 308 + dma_addr_t enbl_req_dma; 309 + 310 + spinlock_t tgt_lock; 311 + spinlock_t cq_lock; 312 + atomic_t num_active_ios; 313 + u32 flush_in_prog; 314 + unsigned long work_time_slice; 315 + unsigned long timestamp; 316 + struct list_head free_task_list; 317 + struct bnx2fc_cmd *pending_queue[BNX2FC_SQ_WQES_MAX+1]; 318 + atomic_t pi; 319 + atomic_t ci; 320 + struct list_head active_cmd_queue; 321 + struct list_head els_queue; 322 + struct list_head io_retire_queue; 323 + struct list_head active_tm_queue; 324 + 325 + struct timer_list ofld_timer; 326 + wait_queue_head_t ofld_wait; 327 + 328 + struct timer_list upld_timer; 329 + wait_queue_head_t upld_wait; 330 + }; 331 + 332 + struct bnx2fc_mp_req { 333 + u8 tm_flags; 334 + 335 + u32 req_len; 336 + void *req_buf; 337 + dma_addr_t req_buf_dma; 338 + struct fcoe_bd_ctx *mp_req_bd; 339 + dma_addr_t mp_req_bd_dma; 340 + struct 
fc_frame_header req_fc_hdr; 341 + 342 + u32 resp_len; 343 + void *resp_buf; 344 + dma_addr_t resp_buf_dma; 345 + struct fcoe_bd_ctx *mp_resp_bd; 346 + dma_addr_t mp_resp_bd_dma; 347 + struct fc_frame_header resp_fc_hdr; 348 + }; 349 + 350 + struct bnx2fc_els_cb_arg { 351 + struct bnx2fc_cmd *aborted_io_req; 352 + struct bnx2fc_cmd *io_req; 353 + u16 l2_oxid; 354 + }; 355 + 356 + /* bnx2fc command structure */ 357 + struct bnx2fc_cmd { 358 + struct list_head link; 359 + u8 on_active_queue; 360 + u8 on_tmf_queue; 361 + u8 cmd_type; 362 + #define BNX2FC_SCSI_CMD 1 363 + #define BNX2FC_TASK_MGMT_CMD 2 364 + #define BNX2FC_ABTS 3 365 + #define BNX2FC_ELS 4 366 + #define BNX2FC_CLEANUP 5 367 + u8 io_req_flags; 368 + struct kref refcount; 369 + struct fcoe_port *port; 370 + struct bnx2fc_rport *tgt; 371 + struct scsi_cmnd *sc_cmd; 372 + struct bnx2fc_cmd_mgr *cmd_mgr; 373 + struct bnx2fc_mp_req mp_req; 374 + void (*cb_func)(struct bnx2fc_els_cb_arg *cb_arg); 375 + struct bnx2fc_els_cb_arg *cb_arg; 376 + struct delayed_work timeout_work; /* timer for ULP timeouts */ 377 + struct completion tm_done; 378 + int wait_for_comp; 379 + u16 xid; 380 + struct fcoe_task_ctx_entry *task; 381 + struct io_bdt *bd_tbl; 382 + struct fcp_rsp *rsp; 383 + size_t data_xfer_len; 384 + unsigned long req_flags; 385 + #define BNX2FC_FLAG_ISSUE_RRQ 0x1 386 + #define BNX2FC_FLAG_ISSUE_ABTS 0x2 387 + #define BNX2FC_FLAG_ABTS_DONE 0x3 388 + #define BNX2FC_FLAG_TM_COMPL 0x4 389 + #define BNX2FC_FLAG_TM_TIMEOUT 0x5 390 + #define BNX2FC_FLAG_IO_CLEANUP 0x6 391 + #define BNX2FC_FLAG_RETIRE_OXID 0x7 392 + #define BNX2FC_FLAG_EH_ABORT 0x8 393 + #define BNX2FC_FLAG_IO_COMPL 0x9 394 + #define BNX2FC_FLAG_ELS_DONE 0xa 395 + #define BNX2FC_FLAG_ELS_TIMEOUT 0xb 396 + u32 fcp_resid; 397 + u32 fcp_rsp_len; 398 + u32 fcp_sns_len; 399 + u8 cdb_status; /* SCSI IO status */ 400 + u8 fcp_status; /* FCP IO status */ 401 + u8 fcp_rsp_code; 402 + u8 scsi_comp_flags; 403 + }; 404 + 405 + struct io_bdt { 406 + struct bnx2fc_cmd *io_req; 407 + struct fcoe_bd_ctx *bd_tbl; 408 + dma_addr_t bd_tbl_dma; 409 + u16 bd_valid; 410 + }; 411 + 412 + struct bnx2fc_work { 413 + struct list_head list; 414 + struct bnx2fc_rport *tgt; 415 + u16 wqe; 416 + }; 417 + struct bnx2fc_unsol_els { 418 + struct fc_lport *lport; 419 + struct fc_frame *fp; 420 + struct work_struct unsol_els_work; 421 + }; 422 + 423 + 424 + 425 + struct bnx2fc_cmd *bnx2fc_elstm_alloc(struct bnx2fc_rport *tgt, int type); 426 + void bnx2fc_cmd_release(struct kref *ref); 427 + int bnx2fc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *sc_cmd); 428 + int bnx2fc_send_fw_fcoe_init_msg(struct bnx2fc_hba *hba); 429 + int bnx2fc_send_fw_fcoe_destroy_msg(struct bnx2fc_hba *hba); 430 + int bnx2fc_send_session_ofld_req(struct fcoe_port *port, 431 + struct bnx2fc_rport *tgt); 432 + int bnx2fc_send_session_disable_req(struct fcoe_port *port, 433 + struct bnx2fc_rport *tgt); 434 + int bnx2fc_send_session_destroy_req(struct bnx2fc_hba *hba, 435 + struct bnx2fc_rport *tgt); 436 + int bnx2fc_map_doorbell(struct bnx2fc_rport *tgt); 437 + void bnx2fc_indicate_kcqe(void *context, struct kcqe *kcq[], 438 + u32 num_cqe); 439 + int bnx2fc_setup_task_ctx(struct bnx2fc_hba *hba); 440 + void bnx2fc_free_task_ctx(struct bnx2fc_hba *hba); 441 + int bnx2fc_setup_fw_resc(struct bnx2fc_hba *hba); 442 + void bnx2fc_free_fw_resc(struct bnx2fc_hba *hba); 443 + struct bnx2fc_cmd_mgr *bnx2fc_cmd_mgr_alloc(struct bnx2fc_hba *hba, 444 + u16 min_xid, u16 max_xid); 445 + void bnx2fc_cmd_mgr_free(struct bnx2fc_cmd_mgr 
*cmgr); 446 + void bnx2fc_get_link_state(struct bnx2fc_hba *hba); 447 + char *bnx2fc_get_next_rqe(struct bnx2fc_rport *tgt, u8 num_items); 448 + void bnx2fc_return_rqe(struct bnx2fc_rport *tgt, u8 num_items); 449 + int bnx2fc_get_paged_crc_eof(struct sk_buff *skb, int tlen); 450 + int bnx2fc_send_rrq(struct bnx2fc_cmd *aborted_io_req); 451 + int bnx2fc_send_adisc(struct bnx2fc_rport *tgt, struct fc_frame *fp); 452 + int bnx2fc_send_logo(struct bnx2fc_rport *tgt, struct fc_frame *fp); 453 + int bnx2fc_send_rls(struct bnx2fc_rport *tgt, struct fc_frame *fp); 454 + int bnx2fc_initiate_cleanup(struct bnx2fc_cmd *io_req); 455 + int bnx2fc_initiate_abts(struct bnx2fc_cmd *io_req); 456 + void bnx2fc_cmd_timer_set(struct bnx2fc_cmd *io_req, 457 + unsigned int timer_msec); 458 + int bnx2fc_init_mp_req(struct bnx2fc_cmd *io_req); 459 + void bnx2fc_init_cleanup_task(struct bnx2fc_cmd *io_req, 460 + struct fcoe_task_ctx_entry *task, 461 + u16 orig_xid); 462 + void bnx2fc_init_mp_task(struct bnx2fc_cmd *io_req, 463 + struct fcoe_task_ctx_entry *task); 464 + void bnx2fc_init_task(struct bnx2fc_cmd *io_req, 465 + struct fcoe_task_ctx_entry *task); 466 + void bnx2fc_add_2_sq(struct bnx2fc_rport *tgt, u16 xid); 467 + void bnx2fc_ring_doorbell(struct bnx2fc_rport *tgt); 468 + int bnx2fc_eh_abort(struct scsi_cmnd *sc_cmd); 469 + int bnx2fc_eh_host_reset(struct scsi_cmnd *sc_cmd); 470 + int bnx2fc_eh_target_reset(struct scsi_cmnd *sc_cmd); 471 + int bnx2fc_eh_device_reset(struct scsi_cmnd *sc_cmd); 472 + void bnx2fc_rport_event_handler(struct fc_lport *lport, 473 + struct fc_rport_priv *rport, 474 + enum fc_rport_event event); 475 + void bnx2fc_process_scsi_cmd_compl(struct bnx2fc_cmd *io_req, 476 + struct fcoe_task_ctx_entry *task, 477 + u8 num_rq); 478 + void bnx2fc_process_cleanup_compl(struct bnx2fc_cmd *io_req, 479 + struct fcoe_task_ctx_entry *task, 480 + u8 num_rq); 481 + void bnx2fc_process_abts_compl(struct bnx2fc_cmd *io_req, 482 + struct fcoe_task_ctx_entry *task, 483 + u8 num_rq); 484 + void bnx2fc_process_tm_compl(struct bnx2fc_cmd *io_req, 485 + struct fcoe_task_ctx_entry *task, 486 + u8 num_rq); 487 + void bnx2fc_process_els_compl(struct bnx2fc_cmd *els_req, 488 + struct fcoe_task_ctx_entry *task, 489 + u8 num_rq); 490 + void bnx2fc_build_fcp_cmnd(struct bnx2fc_cmd *io_req, 491 + struct fcp_cmnd *fcp_cmnd); 492 + 493 + 494 + 495 + void bnx2fc_flush_active_ios(struct bnx2fc_rport *tgt); 496 + struct fc_seq *bnx2fc_elsct_send(struct fc_lport *lport, u32 did, 497 + struct fc_frame *fp, unsigned int op, 498 + void (*resp)(struct fc_seq *, 499 + struct fc_frame *, 500 + void *), 501 + void *arg, u32 timeout); 502 + int bnx2fc_process_new_cqes(struct bnx2fc_rport *tgt); 503 + void bnx2fc_process_cq_compl(struct bnx2fc_rport *tgt, u16 wqe); 504 + struct bnx2fc_rport *bnx2fc_tgt_lookup(struct fcoe_port *port, 505 + u32 port_id); 506 + void bnx2fc_process_l2_frame_compl(struct bnx2fc_rport *tgt, 507 + unsigned char *buf, 508 + u32 frame_len, u16 l2_oxid); 509 + int bnx2fc_send_stat_req(struct bnx2fc_hba *hba); 510 + 511 + #endif
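Note that the BNX2FC_FLAG_* values attached to req_flags are bit numbers rather than masks (bnx2fc_els.c below uses test_and_clear_bit() on them), so they pair with the atomic bitops. A short illustrative sketch:

#include <linux/bitops.h>

/* Hedged sketch: req_flags is an unsigned long manipulated with the
 * atomic bit helpers, not with |= on mask values. */
static void my_mark_abts_issued(struct bnx2fc_cmd *io_req)
{
	set_bit(BNX2FC_FLAG_ISSUE_ABTS, &io_req->req_flags);
}

static bool my_abts_done(struct bnx2fc_cmd *io_req)
{
	return test_bit(BNX2FC_FLAG_ABTS_DONE, &io_req->req_flags);
}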
+206
drivers/scsi/bnx2fc/bnx2fc_constants.h
··· 1 + #ifndef __BNX2FC_CONSTANTS_H_ 2 + #define __BNX2FC_CONSTANTS_H_ 3 + 4 + /** 5 + * This file defines HSI constants for the FCoE flows 6 + */ 7 + 8 + /* KWQ/KCQ FCoE layer code */ 9 + #define FCOE_KWQE_LAYER_CODE (7) 10 + 11 + /* KWQ (kernel work queue) request op codes */ 12 + #define FCOE_KWQE_OPCODE_INIT1 (0) 13 + #define FCOE_KWQE_OPCODE_INIT2 (1) 14 + #define FCOE_KWQE_OPCODE_INIT3 (2) 15 + #define FCOE_KWQE_OPCODE_OFFLOAD_CONN1 (3) 16 + #define FCOE_KWQE_OPCODE_OFFLOAD_CONN2 (4) 17 + #define FCOE_KWQE_OPCODE_OFFLOAD_CONN3 (5) 18 + #define FCOE_KWQE_OPCODE_OFFLOAD_CONN4 (6) 19 + #define FCOE_KWQE_OPCODE_ENABLE_CONN (7) 20 + #define FCOE_KWQE_OPCODE_DISABLE_CONN (8) 21 + #define FCOE_KWQE_OPCODE_DESTROY_CONN (9) 22 + #define FCOE_KWQE_OPCODE_DESTROY (10) 23 + #define FCOE_KWQE_OPCODE_STAT (11) 24 + 25 + /* KCQ (kernel completion queue) response op codes */ 26 + #define FCOE_KCQE_OPCODE_INIT_FUNC (0x10) 27 + #define FCOE_KCQE_OPCODE_DESTROY_FUNC (0x11) 28 + #define FCOE_KCQE_OPCODE_STAT_FUNC (0x12) 29 + #define FCOE_KCQE_OPCODE_OFFLOAD_CONN (0x15) 30 + #define FCOE_KCQE_OPCODE_ENABLE_CONN (0x16) 31 + #define FCOE_KCQE_OPCODE_DISABLE_CONN (0x17) 32 + #define FCOE_KCQE_OPCODE_DESTROY_CONN (0x18) 33 + #define FCOE_KCQE_OPCODE_CQ_EVENT_NOTIFICATION (0x20) 34 + #define FCOE_KCQE_OPCODE_FCOE_ERROR (0x21) 35 + 36 + /* KCQ (kernel completion queue) completion status */ 37 + #define FCOE_KCQE_COMPLETION_STATUS_SUCCESS (0x0) 38 + #define FCOE_KCQE_COMPLETION_STATUS_ERROR (0x1) 39 + #define FCOE_KCQE_COMPLETION_STATUS_INVALID_OPCODE (0x2) 40 + #define FCOE_KCQE_COMPLETION_STATUS_CTX_ALLOC_FAILURE (0x3) 41 + #define FCOE_KCQE_COMPLETION_STATUS_CTX_FREE_FAILURE (0x4) 42 + #define FCOE_KCQE_COMPLETION_STATUS_NIC_ERROR (0x5) 43 + 44 + /* Unsolicited CQE type */ 45 + #define FCOE_UNSOLICITED_FRAME_CQE_TYPE 0 46 + #define FCOE_ERROR_DETECTION_CQE_TYPE 1 47 + #define FCOE_WARNING_DETECTION_CQE_TYPE 2 48 + 49 + /* Task context constants */ 50 + /* After driver has initialize the task in case timer services required */ 51 + #define FCOE_TASK_TX_STATE_INIT 0 52 + /* In case timer services are required then shall be updated by Xstorm after 53 + * start processing the task. In case no timer facilities are required then the 54 + * driver would initialize the state to this value */ 55 + #define FCOE_TASK_TX_STATE_NORMAL 1 56 + /* Task is under abort procedure. Updated in order to stop processing of 57 + * pending WQEs on this task */ 58 + #define FCOE_TASK_TX_STATE_ABORT 2 59 + /* For E_D_T_TOV timer expiration in Xstorm (Class 2 only) */ 60 + #define FCOE_TASK_TX_STATE_ERROR 3 61 + /* For REC_TOV timer expiration indication received from Xstorm */ 62 + #define FCOE_TASK_TX_STATE_WARNING 4 63 + /* For completed unsolicited task */ 64 + #define FCOE_TASK_TX_STATE_UNSOLICITED_COMPLETED 5 65 + /* For exchange cleanup request task */ 66 + #define FCOE_TASK_TX_STATE_EXCHANGE_CLEANUP 6 67 + /* For sequence cleanup request task */ 68 + #define FCOE_TASK_TX_STATE_SEQUENCE_CLEANUP 7 69 + /* Mark task as aborted and indicate that ABTS was not transmitted */ 70 + #define FCOE_TASK_TX_STATE_BEFORE_ABTS_TX 8 71 + /* Mark task as aborted and indicate that ABTS was transmitted */ 72 + #define FCOE_TASK_TX_STATE_AFTER_ABTS_TX 9 73 + /* For completion the ABTS task. 
*/ 74 + #define FCOE_TASK_TX_STATE_ABTS_TX_COMPLETED 10 75 + /* Mark task as aborted and indicate that Exchange cleanup was not transmitted 76 + */ 77 + #define FCOE_TASK_TX_STATE_BEFORE_EXCHANGE_CLEANUP_TX 11 78 + /* Mark task as aborted and indicate that Exchange cleanup was transmitted */ 79 + #define FCOE_TASK_TX_STATE_AFTER_EXCHANGE_CLEANUP_TX 12 80 + 81 + #define FCOE_TASK_RX_STATE_NORMAL 0 82 + #define FCOE_TASK_RX_STATE_COMPLETED 1 83 + /* Obsolete: Intermediate completion (middle path with local completion) */ 84 + #define FCOE_TASK_RX_STATE_INTER_COMP 2 85 + /* For REC_TOV timer expiration indication received from Xstorm */ 86 + #define FCOE_TASK_RX_STATE_WARNING 3 87 + /* For E_D_T_TOV timer expiration in Ustorm */ 88 + #define FCOE_TASK_RX_STATE_ERROR 4 89 + /* ABTS ACC arrived wait for local completion to finally complete the task. */ 90 + #define FCOE_TASK_RX_STATE_ABTS_ACC_ARRIVED 5 91 + /* local completion arrived wait for ABTS ACC to finally complete the task. */ 92 + #define FCOE_TASK_RX_STATE_ABTS_LOCAL_COMP_ARRIVED 6 93 + /* Special completion indication in case of task was aborted. */ 94 + #define FCOE_TASK_RX_STATE_ABTS_COMPLETED 7 95 + /* Special completion indication in case of task was cleaned. */ 96 + #define FCOE_TASK_RX_STATE_EXCHANGE_CLEANUP_COMPLETED 8 97 + /* Special completion indication (in task requested the exchange cleanup) in 98 + * case cleaned task is in non-valid. */ 99 + #define FCOE_TASK_RX_STATE_ABORT_CLEANUP_COMPLETED 9 100 + /* Special completion indication (in task requested the sequence cleanup) in 101 + * case cleaned task was already returned to normal. */ 102 + #define FCOE_TASK_RX_STATE_IGNORED_SEQUENCE_CLEANUP 10 103 + /* Exchange cleanup arrived wait until xfer will be handled to finally 104 + * complete the task. */ 105 + #define FCOE_TASK_RX_STATE_EXCHANGE_CLEANUP_ARRIVED 11 106 + /* Xfer handled, wait for exchange cleanup to finally complete the task. 
*/ 107 + #define FCOE_TASK_RX_STATE_EXCHANGE_CLEANUP_HANDLED_XFER 12 108 + 109 + #define FCOE_TASK_TYPE_WRITE 0 110 + #define FCOE_TASK_TYPE_READ 1 111 + #define FCOE_TASK_TYPE_MIDPATH 2 112 + #define FCOE_TASK_TYPE_UNSOLICITED 3 113 + #define FCOE_TASK_TYPE_ABTS 4 114 + #define FCOE_TASK_TYPE_EXCHANGE_CLEANUP 5 115 + #define FCOE_TASK_TYPE_SEQUENCE_CLEANUP 6 116 + 117 + #define FCOE_TASK_DEV_TYPE_DISK 0 118 + #define FCOE_TASK_DEV_TYPE_TAPE 1 119 + 120 + #define FCOE_TASK_CLASS_TYPE_3 0 121 + #define FCOE_TASK_CLASS_TYPE_2 1 122 + 123 + /* Everest FCoE connection type */ 124 + #define B577XX_FCOE_CONNECTION_TYPE 4 125 + 126 + /* Error codes for Error Reporting in fast path flows */ 127 + /* XFER error codes */ 128 + #define FCOE_ERROR_CODE_XFER_OOO_RO 0 129 + #define FCOE_ERROR_CODE_XFER_RO_NOT_ALIGNED 1 130 + #define FCOE_ERROR_CODE_XFER_NULL_BURST_LEN 2 131 + #define FCOE_ERROR_CODE_XFER_RO_GREATER_THAN_DATA2TRNS 3 132 + #define FCOE_ERROR_CODE_XFER_INVALID_PAYLOAD_SIZE 4 133 + #define FCOE_ERROR_CODE_XFER_TASK_TYPE_NOT_WRITE 5 134 + #define FCOE_ERROR_CODE_XFER_PEND_XFER_SET 6 135 + #define FCOE_ERROR_CODE_XFER_OPENED_SEQ 7 136 + #define FCOE_ERROR_CODE_XFER_FCTL 8 137 + 138 + /* FCP RSP error codes */ 139 + #define FCOE_ERROR_CODE_FCP_RSP_BIDI_FLAGS_SET 9 140 + #define FCOE_ERROR_CODE_FCP_RSP_UNDERFLOW 10 141 + #define FCOE_ERROR_CODE_FCP_RSP_OVERFLOW 11 142 + #define FCOE_ERROR_CODE_FCP_RSP_INVALID_LENGTH_FIELD 12 143 + #define FCOE_ERROR_CODE_FCP_RSP_INVALID_SNS_FIELD 13 144 + #define FCOE_ERROR_CODE_FCP_RSP_INVALID_PAYLOAD_SIZE 14 145 + #define FCOE_ERROR_CODE_FCP_RSP_PEND_XFER_SET 15 146 + #define FCOE_ERROR_CODE_FCP_RSP_OPENED_SEQ 16 147 + #define FCOE_ERROR_CODE_FCP_RSP_FCTL 17 148 + #define FCOE_ERROR_CODE_FCP_RSP_LAST_SEQ_RESET 18 149 + #define FCOE_ERROR_CODE_FCP_RSP_CONF_REQ_NOT_SUPPORTED_YET 19 150 + 151 + /* FCP DATA error codes */ 152 + #define FCOE_ERROR_CODE_DATA_OOO_RO 20 153 + #define FCOE_ERROR_CODE_DATA_EXCEEDS_DEFINED_MAX_FRAME_SIZE 21 154 + #define FCOE_ERROR_CODE_DATA_EXCEEDS_DATA2TRNS 22 155 + #define FCOE_ERROR_CODE_DATA_SOFI3_SEQ_ACTIVE_SET 23 156 + #define FCOE_ERROR_CODE_DATA_SOFN_SEQ_ACTIVE_RESET 24 157 + #define FCOE_ERROR_CODE_DATA_EOFN_END_SEQ_SET 25 158 + #define FCOE_ERROR_CODE_DATA_EOFT_END_SEQ_RESET 26 159 + #define FCOE_ERROR_CODE_DATA_TASK_TYPE_NOT_READ 27 160 + #define FCOE_ERROR_CODE_DATA_FCTL 28 161 + 162 + /* Middle path error codes */ 163 + #define FCOE_ERROR_CODE_MIDPATH_TYPE_NOT_ELS 29 164 + #define FCOE_ERROR_CODE_MIDPATH_SOFI3_SEQ_ACTIVE_SET 30 165 + #define FCOE_ERROR_CODE_MIDPATH_SOFN_SEQ_ACTIVE_RESET 31 166 + #define FCOE_ERROR_CODE_MIDPATH_EOFN_END_SEQ_SET 32 167 + #define FCOE_ERROR_CODE_MIDPATH_EOFT_END_SEQ_RESET 33 168 + #define FCOE_ERROR_CODE_MIDPATH_ELS_REPLY_FCTL 34 169 + #define FCOE_ERROR_CODE_MIDPATH_INVALID_REPLY 35 170 + #define FCOE_ERROR_CODE_MIDPATH_ELS_REPLY_RCTL 36 171 + 172 + /* ABTS error codes */ 173 + #define FCOE_ERROR_CODE_ABTS_REPLY_F_CTL 37 174 + #define FCOE_ERROR_CODE_ABTS_REPLY_DDF_RCTL_FIELD 38 175 + #define FCOE_ERROR_CODE_ABTS_REPLY_INVALID_BLS_RCTL 39 176 + #define FCOE_ERROR_CODE_ABTS_REPLY_INVALID_RCTL 40 177 + #define FCOE_ERROR_CODE_ABTS_REPLY_RCTL_GENERAL_MISMATCH 41 178 + 179 + /* Common error codes */ 180 + #define FCOE_ERROR_CODE_COMMON_MIDDLE_FRAME_WITH_PAD 42 181 + #define FCOE_ERROR_CODE_COMMON_SEQ_INIT_IN_TCE 43 182 + #define FCOE_ERROR_CODE_COMMON_FC_HDR_RX_ID_MISMATCH 44 183 + #define FCOE_ERROR_CODE_COMMON_INCORRECT_SEQ_CNT 45 184 + #define 
FCOE_ERROR_CODE_COMMON_DATA_FC_HDR_FCP_TYPE_MISMATCH 46 185 + #define FCOE_ERROR_CODE_COMMON_DATA_NO_MORE_SGES 47 186 + #define FCOE_ERROR_CODE_COMMON_OPTIONAL_FC_HDR 48 187 + #define FCOE_ERROR_CODE_COMMON_READ_TCE_OX_ID_TOO_BIG 49 188 + #define FCOE_ERROR_CODE_COMMON_DATA_WAS_NOT_TRANSMITTED 50 189 + 190 + /* Unsolicited Rx error codes */ 191 + #define FCOE_ERROR_CODE_UNSOLICITED_TYPE_NOT_ELS 51 192 + #define FCOE_ERROR_CODE_UNSOLICITED_TYPE_NOT_BLS 52 193 + #define FCOE_ERROR_CODE_UNSOLICITED_FCTL_ELS 53 194 + #define FCOE_ERROR_CODE_UNSOLICITED_FCTL_BLS 54 195 + #define FCOE_ERROR_CODE_UNSOLICITED_R_CTL 55 196 + 197 + #define FCOE_ERROR_CODE_RW_TASK_DDF_RCTL_INFO_FIELD 56 198 + #define FCOE_ERROR_CODE_RW_TASK_INVALID_RCTL 57 199 + #define FCOE_ERROR_CODE_RW_TASK_RCTL_GENERAL_MISMATCH 58 200 + 201 + /* Timer error codes */ 202 + #define FCOE_ERROR_CODE_E_D_TOV_TIMER_EXPIRATION 60 203 + #define FCOE_ERROR_CODE_REC_TOV_TIMER_EXPIRATION 61 204 + 205 + 206 + #endif /* BNX2FC_CONSTANTS_H_ */
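Since the values above form the driver/firmware contract for fast-path error
reporting, decoding them at the log site helps debugging. The following is an
illustrative sketch only, not part of the driver: a hypothetical helper that
maps a few of the XFER error codes to readable strings, assuming this header
and the usual kernel types are in scope (the function name and message text
are invented for the example).

/* Illustrative only -- maps selected firmware XFER error codes to strings. */
static const char *bnx2fc_xfer_err_str(u32 err_code)
{
	switch (err_code) {
	case FCOE_ERROR_CODE_XFER_OOO_RO:
		return "out-of-order relative offset in XFER_RDY";
	case FCOE_ERROR_CODE_XFER_RO_NOT_ALIGNED:
		return "relative offset not aligned";
	case FCOE_ERROR_CODE_XFER_NULL_BURST_LEN:
		return "zero burst length";
	case FCOE_ERROR_CODE_XFER_TASK_TYPE_NOT_WRITE:
		return "XFER_RDY received for a non-write task";
	default:
		return "unrecognized xfer error";
	}
}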
+70
drivers/scsi/bnx2fc/bnx2fc_debug.h
··· 1 + #ifndef __BNX2FC_DEBUG__
2 + #define __BNX2FC_DEBUG__
3 + 
4 + /* Log level bit mask */
5 + #define LOG_IO 0x01 /* scsi cmd error, cleanup */
6 + #define LOG_TGT 0x02 /* Session setup, cleanup, etc. */
7 + #define LOG_HBA 0x04 /* lport events, link, mtu, etc. */
8 + #define LOG_ELS 0x08 /* ELS logs */
9 + #define LOG_MISC 0x10 /* fcoe L2 frame related logs */
10 + #define LOG_ALL 0xff /* LOG all messages */
11 + 
12 + extern unsigned int bnx2fc_debug_level;
13 + 
14 + #define BNX2FC_CHK_LOGGING(LEVEL, CMD) \
15 + do { \
16 + if (unlikely(bnx2fc_debug_level & LEVEL)) \
17 + do { \
18 + CMD; \
19 + } while (0); \
20 + } while (0)
21 + 
22 + #define BNX2FC_ELS_DBG(fmt, arg...) \
23 + BNX2FC_CHK_LOGGING(LOG_ELS, \
24 + printk(KERN_ALERT PFX fmt, ##arg))
25 + 
26 + #define BNX2FC_MISC_DBG(fmt, arg...) \
27 + BNX2FC_CHK_LOGGING(LOG_MISC, \
28 + printk(KERN_ALERT PFX fmt, ##arg))
29 + 
30 + #define BNX2FC_IO_DBG(io_req, fmt, arg...) \
31 + do { \
32 + if (!io_req || !io_req->port || !io_req->port->lport || \
33 + !io_req->port->lport->host) \
34 + BNX2FC_CHK_LOGGING(LOG_IO, \
35 + printk(KERN_ALERT PFX "NULL " fmt, ##arg)); \
36 + else \
37 + BNX2FC_CHK_LOGGING(LOG_IO, \
38 + shost_printk(KERN_ALERT, \
39 + (io_req)->port->lport->host, \
40 + PFX "xid:0x%x " fmt, \
41 + (io_req)->xid, ##arg)); \
42 + } while (0)
43 + 
44 + #define BNX2FC_TGT_DBG(tgt, fmt, arg...) \
45 + do { \
46 + if (!tgt || !tgt->port || !tgt->port->lport || \
47 + !tgt->port->lport->host || !tgt->rport) \
48 + BNX2FC_CHK_LOGGING(LOG_TGT, \
49 + printk(KERN_ALERT PFX "NULL " fmt, ##arg)); \
50 + else \
51 + BNX2FC_CHK_LOGGING(LOG_TGT, \
52 + shost_printk(KERN_ALERT, \
53 + (tgt)->port->lport->host, \
54 + PFX "port:%x " fmt, \
55 + (tgt)->rport->port_id, ##arg)); \
56 + } while (0)
57 + 
58 + 
59 + #define BNX2FC_HBA_DBG(lport, fmt, arg...) \
60 + do { \
61 + if (!lport || !lport->host) \
62 + BNX2FC_CHK_LOGGING(LOG_HBA, \
63 + printk(KERN_ALERT PFX "NULL " fmt, ##arg)); \
64 + else \
65 + BNX2FC_CHK_LOGGING(LOG_HBA, \
66 + shost_printk(KERN_ALERT, lport->host, \
67 + PFX fmt, ##arg)); \
68 + } while (0)
69 + 
70 + #endif
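The macros above compile their log sites in unconditionally and gate them at
run time on bnx2fc_debug_level, which bnx2fc_fcoe.c exposes as the writable
debug_logging module parameter. A usage sketch, with illustrative values and
call sites (the sysfs path assumes the standard location for module
parameters):

/* Enable session and ELS logging at load time:
 *   modprobe bnx2fc debug_logging=0x0a    (LOG_TGT | LOG_ELS)
 * or at run time:
 *   echo 0x0a > /sys/module/bnx2fc/parameters/debug_logging
 */
BNX2FC_TGT_DBG(tgt, "session offload failed, rc = %d\n", rc);
BNX2FC_ELS_DBG("sent RRQ for xid 0x%x\n", xid);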
+515
drivers/scsi/bnx2fc/bnx2fc_els.c
··· 1 + /* 2 + * bnx2fc_els.c: Broadcom NetXtreme II Linux FCoE offload driver. 3 + * This file contains helper routines that handle ELS requests 4 + * and responses. 5 + * 6 + * Copyright (c) 2008 - 2010 Broadcom Corporation 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License as published by 10 + * the Free Software Foundation. 11 + * 12 + * Written by: Bhanu Prakash Gollapudi (bprakash@broadcom.com) 13 + */ 14 + 15 + #include "bnx2fc.h" 16 + 17 + static void bnx2fc_logo_resp(struct fc_seq *seq, struct fc_frame *fp, 18 + void *arg); 19 + static void bnx2fc_flogi_resp(struct fc_seq *seq, struct fc_frame *fp, 20 + void *arg); 21 + static int bnx2fc_initiate_els(struct bnx2fc_rport *tgt, unsigned int op, 22 + void *data, u32 data_len, 23 + void (*cb_func)(struct bnx2fc_els_cb_arg *cb_arg), 24 + struct bnx2fc_els_cb_arg *cb_arg, u32 timer_msec); 25 + 26 + static void bnx2fc_rrq_compl(struct bnx2fc_els_cb_arg *cb_arg) 27 + { 28 + struct bnx2fc_cmd *orig_io_req; 29 + struct bnx2fc_cmd *rrq_req; 30 + int rc = 0; 31 + 32 + BUG_ON(!cb_arg); 33 + rrq_req = cb_arg->io_req; 34 + orig_io_req = cb_arg->aborted_io_req; 35 + BUG_ON(!orig_io_req); 36 + BNX2FC_ELS_DBG("rrq_compl: orig xid = 0x%x, rrq_xid = 0x%x\n", 37 + orig_io_req->xid, rrq_req->xid); 38 + 39 + kref_put(&orig_io_req->refcount, bnx2fc_cmd_release); 40 + 41 + if (test_and_clear_bit(BNX2FC_FLAG_ELS_TIMEOUT, &rrq_req->req_flags)) { 42 + /* 43 + * els req is timed out. cleanup the IO with FW and 44 + * drop the completion. Remove from active_cmd_queue. 45 + */ 46 + BNX2FC_ELS_DBG("rrq xid - 0x%x timed out, clean it up\n", 47 + rrq_req->xid); 48 + 49 + if (rrq_req->on_active_queue) { 50 + list_del_init(&rrq_req->link); 51 + rrq_req->on_active_queue = 0; 52 + rc = bnx2fc_initiate_cleanup(rrq_req); 53 + BUG_ON(rc); 54 + } 55 + } 56 + kfree(cb_arg); 57 + } 58 + int bnx2fc_send_rrq(struct bnx2fc_cmd *aborted_io_req) 59 + { 60 + 61 + struct fc_els_rrq rrq; 62 + struct bnx2fc_rport *tgt = aborted_io_req->tgt; 63 + struct fc_lport *lport = tgt->rdata->local_port; 64 + struct bnx2fc_els_cb_arg *cb_arg = NULL; 65 + u32 sid = tgt->sid; 66 + u32 r_a_tov = lport->r_a_tov; 67 + unsigned long start = jiffies; 68 + int rc; 69 + 70 + BNX2FC_ELS_DBG("Sending RRQ orig_xid = 0x%x\n", 71 + aborted_io_req->xid); 72 + memset(&rrq, 0, sizeof(rrq)); 73 + 74 + cb_arg = kzalloc(sizeof(struct bnx2fc_els_cb_arg), GFP_NOIO); 75 + if (!cb_arg) { 76 + printk(KERN_ERR PFX "Unable to allocate cb_arg for RRQ\n"); 77 + rc = -ENOMEM; 78 + goto rrq_err; 79 + } 80 + 81 + cb_arg->aborted_io_req = aborted_io_req; 82 + 83 + rrq.rrq_cmd = ELS_RRQ; 84 + hton24(rrq.rrq_s_id, sid); 85 + rrq.rrq_ox_id = htons(aborted_io_req->xid); 86 + rrq.rrq_rx_id = htons(aborted_io_req->task->rx_wr_tx_rd.rx_id); 87 + 88 + retry_rrq: 89 + rc = bnx2fc_initiate_els(tgt, ELS_RRQ, &rrq, sizeof(rrq), 90 + bnx2fc_rrq_compl, cb_arg, 91 + r_a_tov); 92 + if (rc == -ENOMEM) { 93 + if (time_after(jiffies, start + (10 * HZ))) { 94 + BNX2FC_ELS_DBG("rrq Failed\n"); 95 + rc = FAILED; 96 + goto rrq_err; 97 + } 98 + msleep(20); 99 + goto retry_rrq; 100 + } 101 + rrq_err: 102 + if (rc) { 103 + BNX2FC_ELS_DBG("RRQ failed - release orig io req 0x%x\n", 104 + aborted_io_req->xid); 105 + kfree(cb_arg); 106 + spin_lock_bh(&tgt->tgt_lock); 107 + kref_put(&aborted_io_req->refcount, bnx2fc_cmd_release); 108 + spin_unlock_bh(&tgt->tgt_lock); 109 + } 110 + return rc; 111 + } 112 + 113 + static void bnx2fc_l2_els_compl(struct bnx2fc_els_cb_arg 
*cb_arg) 114 + { 115 + struct bnx2fc_cmd *els_req; 116 + struct bnx2fc_rport *tgt; 117 + struct bnx2fc_mp_req *mp_req; 118 + struct fc_frame_header *fc_hdr; 119 + unsigned char *buf; 120 + void *resp_buf; 121 + u32 resp_len, hdr_len; 122 + u16 l2_oxid; 123 + int frame_len; 124 + int rc = 0; 125 + 126 + l2_oxid = cb_arg->l2_oxid; 127 + BNX2FC_ELS_DBG("ELS COMPL - l2_oxid = 0x%x\n", l2_oxid); 128 + 129 + els_req = cb_arg->io_req; 130 + if (test_and_clear_bit(BNX2FC_FLAG_ELS_TIMEOUT, &els_req->req_flags)) { 131 + /* 132 + * els req is timed out. cleanup the IO with FW and 133 + * drop the completion. libfc will handle the els timeout 134 + */ 135 + if (els_req->on_active_queue) { 136 + list_del_init(&els_req->link); 137 + els_req->on_active_queue = 0; 138 + rc = bnx2fc_initiate_cleanup(els_req); 139 + BUG_ON(rc); 140 + } 141 + goto free_arg; 142 + } 143 + 144 + tgt = els_req->tgt; 145 + mp_req = &(els_req->mp_req); 146 + fc_hdr = &(mp_req->resp_fc_hdr); 147 + resp_len = mp_req->resp_len; 148 + resp_buf = mp_req->resp_buf; 149 + 150 + buf = kzalloc(PAGE_SIZE, GFP_ATOMIC); 151 + if (!buf) { 152 + printk(KERN_ERR PFX "Unable to alloc mp buf\n"); 153 + goto free_arg; 154 + } 155 + hdr_len = sizeof(*fc_hdr); 156 + if (hdr_len + resp_len > PAGE_SIZE) { 157 + printk(KERN_ERR PFX "l2_els_compl: resp len is " 158 + "beyond page size\n"); 159 + goto free_buf; 160 + } 161 + memcpy(buf, fc_hdr, hdr_len); 162 + memcpy(buf + hdr_len, resp_buf, resp_len); 163 + frame_len = hdr_len + resp_len; 164 + 165 + bnx2fc_process_l2_frame_compl(tgt, buf, frame_len, l2_oxid); 166 + 167 + free_buf: 168 + kfree(buf); 169 + free_arg: 170 + kfree(cb_arg); 171 + } 172 + 173 + int bnx2fc_send_adisc(struct bnx2fc_rport *tgt, struct fc_frame *fp) 174 + { 175 + struct fc_els_adisc *adisc; 176 + struct fc_frame_header *fh; 177 + struct bnx2fc_els_cb_arg *cb_arg; 178 + struct fc_lport *lport = tgt->rdata->local_port; 179 + u32 r_a_tov = lport->r_a_tov; 180 + int rc; 181 + 182 + fh = fc_frame_header_get(fp); 183 + cb_arg = kzalloc(sizeof(struct bnx2fc_els_cb_arg), GFP_ATOMIC); 184 + if (!cb_arg) { 185 + printk(KERN_ERR PFX "Unable to allocate cb_arg for ADISC\n"); 186 + return -ENOMEM; 187 + } 188 + 189 + cb_arg->l2_oxid = ntohs(fh->fh_ox_id); 190 + 191 + BNX2FC_ELS_DBG("send ADISC: l2_oxid = 0x%x\n", cb_arg->l2_oxid); 192 + adisc = fc_frame_payload_get(fp, sizeof(*adisc)); 193 + /* adisc is initialized by libfc */ 194 + rc = bnx2fc_initiate_els(tgt, ELS_ADISC, adisc, sizeof(*adisc), 195 + bnx2fc_l2_els_compl, cb_arg, 2 * r_a_tov); 196 + if (rc) 197 + kfree(cb_arg); 198 + return rc; 199 + } 200 + 201 + int bnx2fc_send_logo(struct bnx2fc_rport *tgt, struct fc_frame *fp) 202 + { 203 + struct fc_els_logo *logo; 204 + struct fc_frame_header *fh; 205 + struct bnx2fc_els_cb_arg *cb_arg; 206 + struct fc_lport *lport = tgt->rdata->local_port; 207 + u32 r_a_tov = lport->r_a_tov; 208 + int rc; 209 + 210 + fh = fc_frame_header_get(fp); 211 + cb_arg = kzalloc(sizeof(struct bnx2fc_els_cb_arg), GFP_ATOMIC); 212 + if (!cb_arg) { 213 + printk(KERN_ERR PFX "Unable to allocate cb_arg for LOGO\n"); 214 + return -ENOMEM; 215 + } 216 + 217 + cb_arg->l2_oxid = ntohs(fh->fh_ox_id); 218 + 219 + BNX2FC_ELS_DBG("Send LOGO: l2_oxid = 0x%x\n", cb_arg->l2_oxid); 220 + logo = fc_frame_payload_get(fp, sizeof(*logo)); 221 + /* logo is initialized by libfc */ 222 + rc = bnx2fc_initiate_els(tgt, ELS_LOGO, logo, sizeof(*logo), 223 + bnx2fc_l2_els_compl, cb_arg, 2 * r_a_tov); 224 + if (rc) 225 + kfree(cb_arg); 226 + return rc; 227 + } 228 + 229 + int 
bnx2fc_send_rls(struct bnx2fc_rport *tgt, struct fc_frame *fp)
230 + {
231 + struct fc_els_rls *rls;
232 + struct fc_frame_header *fh;
233 + struct bnx2fc_els_cb_arg *cb_arg;
234 + struct fc_lport *lport = tgt->rdata->local_port;
235 + u32 r_a_tov = lport->r_a_tov;
236 + int rc;
237 + 
238 + fh = fc_frame_header_get(fp);
239 + cb_arg = kzalloc(sizeof(struct bnx2fc_els_cb_arg), GFP_ATOMIC);
240 + if (!cb_arg) {
241 + printk(KERN_ERR PFX "Unable to allocate cb_arg for RLS\n");
242 + return -ENOMEM;
243 + }
244 + 
245 + cb_arg->l2_oxid = ntohs(fh->fh_ox_id);
246 + 
247 + rls = fc_frame_payload_get(fp, sizeof(*rls));
248 + /* rls is initialized by libfc */
249 + rc = bnx2fc_initiate_els(tgt, ELS_RLS, rls, sizeof(*rls),
250 + bnx2fc_l2_els_compl, cb_arg, 2 * r_a_tov);
251 + if (rc)
252 + kfree(cb_arg);
253 + return rc;
254 + }
255 + 
256 + static int bnx2fc_initiate_els(struct bnx2fc_rport *tgt, unsigned int op,
257 + void *data, u32 data_len,
258 + void (*cb_func)(struct bnx2fc_els_cb_arg *cb_arg),
259 + struct bnx2fc_els_cb_arg *cb_arg, u32 timer_msec)
260 + {
261 + struct fcoe_port *port = tgt->port;
262 + struct bnx2fc_hba *hba = port->priv;
263 + struct fc_rport *rport = tgt->rport;
264 + struct fc_lport *lport = port->lport;
265 + struct bnx2fc_cmd *els_req;
266 + struct bnx2fc_mp_req *mp_req;
267 + struct fc_frame_header *fc_hdr;
268 + struct fcoe_task_ctx_entry *task;
269 + struct fcoe_task_ctx_entry *task_page;
270 + int rc = 0;
271 + int task_idx, index;
272 + u32 did, sid;
273 + u16 xid;
274 + 
275 + rc = fc_remote_port_chkready(rport);
276 + if (rc) {
277 + printk(KERN_ALERT PFX "els 0x%x: rport not ready\n", op);
278 + rc = -EINVAL;
279 + goto els_err;
280 + }
281 + if (lport->state != LPORT_ST_READY || !(lport->link_up)) {
282 + printk(KERN_ALERT PFX "els 0x%x: link is not ready\n", op);
283 + rc = -EINVAL;
284 + goto els_err;
285 + }
286 + if (!(test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags)) ||
287 + (test_bit(BNX2FC_FLAG_EXPL_LOGO, &tgt->flags))) {
288 + printk(KERN_ERR PFX "els 0x%x: tgt not ready\n", op);
289 + rc = -EINVAL;
290 + goto els_err;
291 + }
292 + els_req = bnx2fc_elstm_alloc(tgt, BNX2FC_ELS);
293 + if (!els_req) {
294 + rc = -ENOMEM;
295 + goto els_err;
296 + }
297 + 
298 + els_req->sc_cmd = NULL;
299 + els_req->port = port;
300 + els_req->tgt = tgt;
301 + els_req->cb_func = cb_func;
302 + cb_arg->io_req = els_req;
303 + els_req->cb_arg = cb_arg;
304 + 
305 + mp_req = (struct bnx2fc_mp_req *)&(els_req->mp_req);
306 + rc = bnx2fc_init_mp_req(els_req);
307 + if (rc == FAILED) {
308 + printk(KERN_ALERT PFX "ELS MP request init failed\n");
309 + spin_lock_bh(&tgt->tgt_lock);
310 + kref_put(&els_req->refcount, bnx2fc_cmd_release);
311 + spin_unlock_bh(&tgt->tgt_lock);
312 + rc = -ENOMEM;
313 + goto els_err;
314 + } else {
315 + /* rc SUCCESS */
316 + rc = 0;
317 + }
318 + 
319 + /* Set the data_xfer_len to the size of ELS payload */
320 + mp_req->req_len = data_len;
321 + els_req->data_xfer_len = mp_req->req_len;
322 + 
323 + /* Fill ELS Payload */
324 + if ((op >= ELS_LS_RJT) && (op <= ELS_AUTH_ELS)) {
325 + memcpy(mp_req->req_buf, data, data_len);
326 + } else {
327 + printk(KERN_ALERT PFX "Invalid ELS op 0x%x\n", op);
328 + els_req->cb_func = NULL;
329 + els_req->cb_arg = NULL;
330 + spin_lock_bh(&tgt->tgt_lock);
331 + kref_put(&els_req->refcount, bnx2fc_cmd_release);
332 + spin_unlock_bh(&tgt->tgt_lock);
333 + rc = -EINVAL;
334 + }
335 + 
336 + if (rc)
337 + goto els_err;
338 + 
339 + /* Fill FC header */
340 + fc_hdr = &(mp_req->req_fc_hdr);
341 + 
342 + did =
tgt->rport->port_id;
343 + sid = tgt->sid;
344 + 
345 + __fc_fill_fc_hdr(fc_hdr, FC_RCTL_ELS_REQ, did, sid,
346 + FC_TYPE_ELS, FC_FC_FIRST_SEQ | FC_FC_END_SEQ |
347 + FC_FC_SEQ_INIT, 0);
348 + 
349 + /* Obtain exchange id */
350 + xid = els_req->xid;
351 + task_idx = xid/BNX2FC_TASKS_PER_PAGE;
352 + index = xid % BNX2FC_TASKS_PER_PAGE;
353 + 
354 + /* Initialize task context for this IO request */
355 + task_page = (struct fcoe_task_ctx_entry *) hba->task_ctx[task_idx];
356 + task = &(task_page[index]);
357 + bnx2fc_init_mp_task(els_req, task);
358 + 
359 + spin_lock_bh(&tgt->tgt_lock);
360 + 
361 + if (!test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags)) {
362 + printk(KERN_ERR PFX "initiate_els: session not ready\n");
363 + els_req->cb_func = NULL;
364 + els_req->cb_arg = NULL;
365 + kref_put(&els_req->refcount, bnx2fc_cmd_release);
366 + spin_unlock_bh(&tgt->tgt_lock);
367 + return -EINVAL;
368 + }
369 + 
370 + if (timer_msec)
371 + bnx2fc_cmd_timer_set(els_req, timer_msec);
372 + bnx2fc_add_2_sq(tgt, xid);
373 + 
374 + els_req->on_active_queue = 1;
375 + list_add_tail(&els_req->link, &tgt->els_queue);
376 + 
377 + /* Ring doorbell */
378 + bnx2fc_ring_doorbell(tgt);
379 + spin_unlock_bh(&tgt->tgt_lock);
380 + 
381 + els_err:
382 + return rc;
383 + }
384 + 
385 + void bnx2fc_process_els_compl(struct bnx2fc_cmd *els_req,
386 + struct fcoe_task_ctx_entry *task, u8 num_rq)
387 + {
388 + struct bnx2fc_mp_req *mp_req;
389 + struct fc_frame_header *fc_hdr;
390 + u64 *hdr;
391 + u64 *temp_hdr;
392 + 
393 + BNX2FC_ELS_DBG("Entered process_els_compl xid = 0x%x, "
394 + "cmd_type = %d\n", els_req->xid, els_req->cmd_type);
395 + 
396 + if (test_and_set_bit(BNX2FC_FLAG_ELS_DONE,
397 + &els_req->req_flags)) {
398 + BNX2FC_ELS_DBG("Timer context finished processing this "
399 + "els - 0x%x\n", els_req->xid);
400 + /* This IO doesn't receive cleanup completion */
401 + kref_put(&els_req->refcount, bnx2fc_cmd_release);
402 + return;
403 + }
404 + 
405 + /* Cancel the timeout_work, as we received the response */
406 + if (cancel_delayed_work(&els_req->timeout_work))
407 + kref_put(&els_req->refcount,
408 + bnx2fc_cmd_release); /* drop timer hold */
409 + 
410 + if (els_req->on_active_queue) {
411 + list_del_init(&els_req->link);
412 + els_req->on_active_queue = 0;
413 + }
414 + 
415 + mp_req = &(els_req->mp_req);
416 + fc_hdr = &(mp_req->resp_fc_hdr);
417 + 
418 + hdr = (u64 *)fc_hdr;
419 + temp_hdr = (u64 *)
420 + &task->cmn.general.cmd_info.mp_fc_frame.fc_hdr;
421 + hdr[0] = cpu_to_be64(temp_hdr[0]);
422 + hdr[1] = cpu_to_be64(temp_hdr[1]);
423 + hdr[2] = cpu_to_be64(temp_hdr[2]);
424 + 
425 + mp_req->resp_len = task->rx_wr_only.sgl_ctx.mul_sges.cur_sge_off;
426 + 
427 + /* Parse ELS response */
428 + if ((els_req->cb_func) && (els_req->cb_arg)) {
429 + els_req->cb_func(els_req->cb_arg);
430 + els_req->cb_arg = NULL;
431 + }
432 + 
433 + kref_put(&els_req->refcount, bnx2fc_cmd_release);
434 + }
435 + 
436 + static void bnx2fc_flogi_resp(struct fc_seq *seq, struct fc_frame *fp,
437 + void *arg)
438 + {
439 + struct fcoe_ctlr *fip = arg;
440 + struct fc_exch *exch = fc_seq_exch(seq);
441 + struct fc_lport *lport = exch->lp;
442 + u8 *mac;
443 + struct fc_frame_header *fh;
444 + u8 op;
445 + 
446 + if (IS_ERR(fp))
447 + goto done;
448 + 
449 + mac = fr_cb(fp)->granted_mac;
450 + if (is_zero_ether_addr(mac)) {
451 + fh = fc_frame_header_get(fp);
452 + if (fh->fh_type != FC_TYPE_ELS) {
453 + printk(KERN_ERR PFX "bnx2fc_flogi_resp: "
454 + "fh_type != FC_TYPE_ELS\n");
455 + fc_frame_free(fp);
456 + return;
457 + }
458 + op =
fc_frame_payload_op(fp); 459 + if (lport->vport) { 460 + if (op == ELS_LS_RJT) { 461 + printk(KERN_ERR PFX "bnx2fc_flogi_resp is LS_RJT\n"); 462 + fc_vport_terminate(lport->vport); 463 + fc_frame_free(fp); 464 + return; 465 + } 466 + } 467 + if (fcoe_ctlr_recv_flogi(fip, lport, fp)) { 468 + fc_frame_free(fp); 469 + return; 470 + } 471 + } 472 + fip->update_mac(lport, mac); 473 + done: 474 + fc_lport_flogi_resp(seq, fp, lport); 475 + } 476 + 477 + static void bnx2fc_logo_resp(struct fc_seq *seq, struct fc_frame *fp, 478 + void *arg) 479 + { 480 + struct fcoe_ctlr *fip = arg; 481 + struct fc_exch *exch = fc_seq_exch(seq); 482 + struct fc_lport *lport = exch->lp; 483 + static u8 zero_mac[ETH_ALEN] = { 0 }; 484 + 485 + if (!IS_ERR(fp)) 486 + fip->update_mac(lport, zero_mac); 487 + fc_lport_logo_resp(seq, fp, lport); 488 + } 489 + 490 + struct fc_seq *bnx2fc_elsct_send(struct fc_lport *lport, u32 did, 491 + struct fc_frame *fp, unsigned int op, 492 + void (*resp)(struct fc_seq *, 493 + struct fc_frame *, 494 + void *), 495 + void *arg, u32 timeout) 496 + { 497 + struct fcoe_port *port = lport_priv(lport); 498 + struct bnx2fc_hba *hba = port->priv; 499 + struct fcoe_ctlr *fip = &hba->ctlr; 500 + struct fc_frame_header *fh = fc_frame_header_get(fp); 501 + 502 + switch (op) { 503 + case ELS_FLOGI: 504 + case ELS_FDISC: 505 + return fc_elsct_send(lport, did, fp, op, bnx2fc_flogi_resp, 506 + fip, timeout); 507 + case ELS_LOGO: 508 + /* only hook onto fabric logouts, not port logouts */ 509 + if (ntoh24(fh->fh_d_id) != FC_FID_FLOGI) 510 + break; 511 + return fc_elsct_send(lport, did, fp, op, bnx2fc_logo_resp, 512 + fip, timeout); 513 + } 514 + return fc_elsct_send(lport, did, fp, op, resp, arg, timeout); 515 + }
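One pattern worth calling out in this file is the bounded retry in
bnx2fc_send_rrq(): a transient -ENOMEM from bnx2fc_initiate_els() is retried
every 20ms and abandoned after roughly ten seconds. A distilled sketch of
that pattern, assuming the usual kernel headers; try_send() is a hypothetical
stand-in for the transmit attempt:

/* Bounded retry of a transient allocation failure, as in bnx2fc_send_rrq(). */
unsigned long start = jiffies;
int rc;

do {
	rc = try_send();	/* hypothetical transmit attempt */
	if (rc != -ENOMEM)
		break;		/* success, or a non-transient error */
	if (time_after(jiffies, start + (10 * HZ))) {
		rc = FAILED;	/* give up, matching the driver's convention */
		break;
	}
	msleep(20);		/* back off before retrying */
} while (1);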
+2535
drivers/scsi/bnx2fc/bnx2fc_fcoe.c
··· 1 + /* bnx2fc_fcoe.c: Broadcom NetXtreme II Linux FCoE offload driver. 2 + * This file contains the code that interacts with libfc, libfcoe, 3 + * cnic modules to create FCoE instances, send/receive non-offloaded 4 + * FIP/FCoE packets, listen to link events etc. 5 + * 6 + * Copyright (c) 2008 - 2010 Broadcom Corporation 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License as published by 10 + * the Free Software Foundation. 11 + * 12 + * Written by: Bhanu Prakash Gollapudi (bprakash@broadcom.com) 13 + */ 14 + 15 + #include "bnx2fc.h" 16 + 17 + static struct list_head adapter_list; 18 + static u32 adapter_count; 19 + static DEFINE_MUTEX(bnx2fc_dev_lock); 20 + DEFINE_PER_CPU(struct bnx2fc_percpu_s, bnx2fc_percpu); 21 + 22 + #define DRV_MODULE_NAME "bnx2fc" 23 + #define DRV_MODULE_VERSION BNX2FC_VERSION 24 + #define DRV_MODULE_RELDATE "Jan 25, 2011" 25 + 26 + 27 + static char version[] __devinitdata = 28 + "Broadcom NetXtreme II FCoE Driver " DRV_MODULE_NAME \ 29 + " v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n"; 30 + 31 + 32 + MODULE_AUTHOR("Bhanu Prakash Gollapudi <bprakash@broadcom.com>"); 33 + MODULE_DESCRIPTION("Broadcom NetXtreme II BCM57710 FCoE Driver"); 34 + MODULE_LICENSE("GPL"); 35 + MODULE_VERSION(DRV_MODULE_VERSION); 36 + 37 + #define BNX2FC_MAX_QUEUE_DEPTH 256 38 + #define BNX2FC_MIN_QUEUE_DEPTH 32 39 + #define FCOE_WORD_TO_BYTE 4 40 + 41 + static struct scsi_transport_template *bnx2fc_transport_template; 42 + static struct scsi_transport_template *bnx2fc_vport_xport_template; 43 + 44 + struct workqueue_struct *bnx2fc_wq; 45 + 46 + /* bnx2fc structure needs only one instance of the fcoe_percpu_s structure. 47 + * Here the io threads are per cpu but the l2 thread is just one 48 + */ 49 + struct fcoe_percpu_s bnx2fc_global; 50 + DEFINE_SPINLOCK(bnx2fc_global_lock); 51 + 52 + static struct cnic_ulp_ops bnx2fc_cnic_cb; 53 + static struct libfc_function_template bnx2fc_libfc_fcn_templ; 54 + static struct scsi_host_template bnx2fc_shost_template; 55 + static struct fc_function_template bnx2fc_transport_function; 56 + static struct fc_function_template bnx2fc_vport_xport_function; 57 + static int bnx2fc_create(struct net_device *netdev, enum fip_state fip_mode); 58 + static int bnx2fc_destroy(struct net_device *net_device); 59 + static int bnx2fc_enable(struct net_device *netdev); 60 + static int bnx2fc_disable(struct net_device *netdev); 61 + 62 + static void bnx2fc_recv_frame(struct sk_buff *skb); 63 + 64 + static void bnx2fc_start_disc(struct bnx2fc_hba *hba); 65 + static int bnx2fc_shost_config(struct fc_lport *lport, struct device *dev); 66 + static int bnx2fc_net_config(struct fc_lport *lp); 67 + static int bnx2fc_lport_config(struct fc_lport *lport); 68 + static int bnx2fc_em_config(struct fc_lport *lport); 69 + static int bnx2fc_bind_adapter_devices(struct bnx2fc_hba *hba); 70 + static void bnx2fc_unbind_adapter_devices(struct bnx2fc_hba *hba); 71 + static int bnx2fc_bind_pcidev(struct bnx2fc_hba *hba); 72 + static void bnx2fc_unbind_pcidev(struct bnx2fc_hba *hba); 73 + static struct fc_lport *bnx2fc_if_create(struct bnx2fc_hba *hba, 74 + struct device *parent, int npiv); 75 + static void bnx2fc_destroy_work(struct work_struct *work); 76 + 77 + static struct bnx2fc_hba *bnx2fc_hba_lookup(struct net_device *phys_dev); 78 + static struct bnx2fc_hba *bnx2fc_find_hba_for_cnic(struct cnic_dev *cnic); 79 + 80 + static int bnx2fc_fw_init(struct bnx2fc_hba *hba); 81 + static void 
bnx2fc_fw_destroy(struct bnx2fc_hba *hba); 82 + 83 + static void bnx2fc_port_shutdown(struct fc_lport *lport); 84 + static void bnx2fc_stop(struct bnx2fc_hba *hba); 85 + static int __init bnx2fc_mod_init(void); 86 + static void __exit bnx2fc_mod_exit(void); 87 + 88 + unsigned int bnx2fc_debug_level; 89 + module_param_named(debug_logging, bnx2fc_debug_level, int, S_IRUGO|S_IWUSR); 90 + 91 + static int bnx2fc_cpu_callback(struct notifier_block *nfb, 92 + unsigned long action, void *hcpu); 93 + /* notification function for CPU hotplug events */ 94 + static struct notifier_block bnx2fc_cpu_notifier = { 95 + .notifier_call = bnx2fc_cpu_callback, 96 + }; 97 + 98 + static void bnx2fc_clean_rx_queue(struct fc_lport *lp) 99 + { 100 + struct fcoe_percpu_s *bg; 101 + struct fcoe_rcv_info *fr; 102 + struct sk_buff_head *list; 103 + struct sk_buff *skb, *next; 104 + struct sk_buff *head; 105 + 106 + bg = &bnx2fc_global; 107 + spin_lock_bh(&bg->fcoe_rx_list.lock); 108 + list = &bg->fcoe_rx_list; 109 + head = list->next; 110 + for (skb = head; skb != (struct sk_buff *)list; 111 + skb = next) { 112 + next = skb->next; 113 + fr = fcoe_dev_from_skb(skb); 114 + if (fr->fr_dev == lp) { 115 + __skb_unlink(skb, list); 116 + kfree_skb(skb); 117 + } 118 + } 119 + spin_unlock_bh(&bg->fcoe_rx_list.lock); 120 + } 121 + 122 + int bnx2fc_get_paged_crc_eof(struct sk_buff *skb, int tlen) 123 + { 124 + int rc; 125 + spin_lock(&bnx2fc_global_lock); 126 + rc = fcoe_get_paged_crc_eof(skb, tlen, &bnx2fc_global); 127 + spin_unlock(&bnx2fc_global_lock); 128 + 129 + return rc; 130 + } 131 + 132 + static void bnx2fc_abort_io(struct fc_lport *lport) 133 + { 134 + /* 135 + * This function is no-op for bnx2fc, but we do 136 + * not want to leave it as NULL either, as libfc 137 + * can call the default function which is 138 + * fc_fcp_abort_io. 
139 + */ 140 + } 141 + 142 + static void bnx2fc_cleanup(struct fc_lport *lport) 143 + { 144 + struct fcoe_port *port = lport_priv(lport); 145 + struct bnx2fc_hba *hba = port->priv; 146 + struct bnx2fc_rport *tgt; 147 + int i; 148 + 149 + BNX2FC_MISC_DBG("Entered %s\n", __func__); 150 + mutex_lock(&hba->hba_mutex); 151 + spin_lock_bh(&hba->hba_lock); 152 + for (i = 0; i < BNX2FC_NUM_MAX_SESS; i++) { 153 + tgt = hba->tgt_ofld_list[i]; 154 + if (tgt) { 155 + /* Cleanup IOs belonging to requested vport */ 156 + if (tgt->port == port) { 157 + spin_unlock_bh(&hba->hba_lock); 158 + BNX2FC_TGT_DBG(tgt, "flush/cleanup\n"); 159 + bnx2fc_flush_active_ios(tgt); 160 + spin_lock_bh(&hba->hba_lock); 161 + } 162 + } 163 + } 164 + spin_unlock_bh(&hba->hba_lock); 165 + mutex_unlock(&hba->hba_mutex); 166 + } 167 + 168 + static int bnx2fc_xmit_l2_frame(struct bnx2fc_rport *tgt, 169 + struct fc_frame *fp) 170 + { 171 + struct fc_rport_priv *rdata = tgt->rdata; 172 + struct fc_frame_header *fh; 173 + int rc = 0; 174 + 175 + fh = fc_frame_header_get(fp); 176 + BNX2FC_TGT_DBG(tgt, "Xmit L2 frame rport = 0x%x, oxid = 0x%x, " 177 + "r_ctl = 0x%x\n", rdata->ids.port_id, 178 + ntohs(fh->fh_ox_id), fh->fh_r_ctl); 179 + if ((fh->fh_type == FC_TYPE_ELS) && 180 + (fh->fh_r_ctl == FC_RCTL_ELS_REQ)) { 181 + 182 + switch (fc_frame_payload_op(fp)) { 183 + case ELS_ADISC: 184 + rc = bnx2fc_send_adisc(tgt, fp); 185 + break; 186 + case ELS_LOGO: 187 + rc = bnx2fc_send_logo(tgt, fp); 188 + break; 189 + case ELS_RLS: 190 + rc = bnx2fc_send_rls(tgt, fp); 191 + break; 192 + default: 193 + break; 194 + } 195 + } else if ((fh->fh_type == FC_TYPE_BLS) && 196 + (fh->fh_r_ctl == FC_RCTL_BA_ABTS)) 197 + BNX2FC_TGT_DBG(tgt, "ABTS frame\n"); 198 + else { 199 + BNX2FC_TGT_DBG(tgt, "Send L2 frame type 0x%x " 200 + "rctl 0x%x thru non-offload path\n", 201 + fh->fh_type, fh->fh_r_ctl); 202 + return -ENODEV; 203 + } 204 + if (rc) 205 + return -ENOMEM; 206 + else 207 + return 0; 208 + } 209 + 210 + /** 211 + * bnx2fc_xmit - bnx2fc's FCoE frame transmit function 212 + * 213 + * @lport: the associated local port 214 + * @fp: the fc_frame to be transmitted 215 + */ 216 + static int bnx2fc_xmit(struct fc_lport *lport, struct fc_frame *fp) 217 + { 218 + struct ethhdr *eh; 219 + struct fcoe_crc_eof *cp; 220 + struct sk_buff *skb; 221 + struct fc_frame_header *fh; 222 + struct bnx2fc_hba *hba; 223 + struct fcoe_port *port; 224 + struct fcoe_hdr *hp; 225 + struct bnx2fc_rport *tgt; 226 + struct fcoe_dev_stats *stats; 227 + u8 sof, eof; 228 + u32 crc; 229 + unsigned int hlen, tlen, elen; 230 + int wlen, rc = 0; 231 + 232 + port = (struct fcoe_port *)lport_priv(lport); 233 + hba = port->priv; 234 + 235 + fh = fc_frame_header_get(fp); 236 + 237 + skb = fp_skb(fp); 238 + if (!lport->link_up) { 239 + BNX2FC_HBA_DBG(lport, "bnx2fc_xmit link down\n"); 240 + kfree_skb(skb); 241 + return 0; 242 + } 243 + 244 + if (unlikely(fh->fh_r_ctl == FC_RCTL_ELS_REQ)) { 245 + if (!hba->ctlr.sel_fcf) { 246 + BNX2FC_HBA_DBG(lport, "FCF not selected yet!\n"); 247 + kfree_skb(skb); 248 + return -EINVAL; 249 + } 250 + if (fcoe_ctlr_els_send(&hba->ctlr, lport, skb)) 251 + return 0; 252 + } 253 + 254 + sof = fr_sof(fp); 255 + eof = fr_eof(fp); 256 + 257 + /* 258 + * Snoop the frame header to check if the frame is for 259 + * an offloaded session 260 + */ 261 + /* 262 + * tgt_ofld_list access is synchronized using 263 + * both hba mutex and hba lock. Atleast hba mutex or 264 + * hba lock needs to be held for read access. 
265 + */ 266 + 267 + spin_lock_bh(&hba->hba_lock); 268 + tgt = bnx2fc_tgt_lookup(port, ntoh24(fh->fh_d_id)); 269 + if (tgt && (test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags))) { 270 + /* This frame is for offloaded session */ 271 + BNX2FC_HBA_DBG(lport, "xmit: Frame is for offloaded session " 272 + "port_id = 0x%x\n", ntoh24(fh->fh_d_id)); 273 + spin_unlock_bh(&hba->hba_lock); 274 + rc = bnx2fc_xmit_l2_frame(tgt, fp); 275 + if (rc != -ENODEV) { 276 + kfree_skb(skb); 277 + return rc; 278 + } 279 + } else { 280 + spin_unlock_bh(&hba->hba_lock); 281 + } 282 + 283 + elen = sizeof(struct ethhdr); 284 + hlen = sizeof(struct fcoe_hdr); 285 + tlen = sizeof(struct fcoe_crc_eof); 286 + wlen = (skb->len - tlen + sizeof(crc)) / FCOE_WORD_TO_BYTE; 287 + 288 + skb->ip_summed = CHECKSUM_NONE; 289 + crc = fcoe_fc_crc(fp); 290 + 291 + /* copy port crc and eof to the skb buff */ 292 + if (skb_is_nonlinear(skb)) { 293 + skb_frag_t *frag; 294 + if (bnx2fc_get_paged_crc_eof(skb, tlen)) { 295 + kfree_skb(skb); 296 + return -ENOMEM; 297 + } 298 + frag = &skb_shinfo(skb)->frags[skb_shinfo(skb)->nr_frags - 1]; 299 + cp = kmap_atomic(frag->page, KM_SKB_DATA_SOFTIRQ) 300 + + frag->page_offset; 301 + } else { 302 + cp = (struct fcoe_crc_eof *)skb_put(skb, tlen); 303 + } 304 + 305 + memset(cp, 0, sizeof(*cp)); 306 + cp->fcoe_eof = eof; 307 + cp->fcoe_crc32 = cpu_to_le32(~crc); 308 + if (skb_is_nonlinear(skb)) { 309 + kunmap_atomic(cp, KM_SKB_DATA_SOFTIRQ); 310 + cp = NULL; 311 + } 312 + 313 + /* adjust skb network/transport offsets to match mac/fcoe/port */ 314 + skb_push(skb, elen + hlen); 315 + skb_reset_mac_header(skb); 316 + skb_reset_network_header(skb); 317 + skb->mac_len = elen; 318 + skb->protocol = htons(ETH_P_FCOE); 319 + skb->dev = hba->netdev; 320 + 321 + /* fill up mac and fcoe headers */ 322 + eh = eth_hdr(skb); 323 + eh->h_proto = htons(ETH_P_FCOE); 324 + if (hba->ctlr.map_dest) 325 + fc_fcoe_set_mac(eh->h_dest, fh->fh_d_id); 326 + else 327 + /* insert GW address */ 328 + memcpy(eh->h_dest, hba->ctlr.dest_addr, ETH_ALEN); 329 + 330 + if (unlikely(hba->ctlr.flogi_oxid != FC_XID_UNKNOWN)) 331 + memcpy(eh->h_source, hba->ctlr.ctl_src_addr, ETH_ALEN); 332 + else 333 + memcpy(eh->h_source, port->data_src_addr, ETH_ALEN); 334 + 335 + hp = (struct fcoe_hdr *)(eh + 1); 336 + memset(hp, 0, sizeof(*hp)); 337 + if (FC_FCOE_VER) 338 + FC_FCOE_ENCAPS_VER(hp, FC_FCOE_VER); 339 + hp->fcoe_sof = sof; 340 + 341 + /* fcoe lso, mss is in max_payload which is non-zero for FCP data */ 342 + if (lport->seq_offload && fr_max_payload(fp)) { 343 + skb_shinfo(skb)->gso_type = SKB_GSO_FCOE; 344 + skb_shinfo(skb)->gso_size = fr_max_payload(fp); 345 + } else { 346 + skb_shinfo(skb)->gso_type = 0; 347 + skb_shinfo(skb)->gso_size = 0; 348 + } 349 + 350 + /*update tx stats */ 351 + stats = per_cpu_ptr(lport->dev_stats, get_cpu()); 352 + stats->TxFrames++; 353 + stats->TxWords += wlen; 354 + put_cpu(); 355 + 356 + /* send down to lld */ 357 + fr_dev(fp) = lport; 358 + if (port->fcoe_pending_queue.qlen) 359 + fcoe_check_wait_queue(lport, skb); 360 + else if (fcoe_start_io(skb)) 361 + fcoe_check_wait_queue(lport, skb); 362 + 363 + return 0; 364 + } 365 + 366 + /** 367 + * bnx2fc_rcv - This is bnx2fc's receive function called by NET_RX_SOFTIRQ 368 + * 369 + * @skb: the receive socket buffer 370 + * @dev: associated net device 371 + * @ptype: context 372 + * @olddev: last device 373 + * 374 + * This function receives the packet and builds FC frame and passes it up 375 + */ 376 + static int bnx2fc_rcv(struct sk_buff *skb, struct 
net_device *dev, 377 + struct packet_type *ptype, struct net_device *olddev) 378 + { 379 + struct fc_lport *lport; 380 + struct bnx2fc_hba *hba; 381 + struct fc_frame_header *fh; 382 + struct fcoe_rcv_info *fr; 383 + struct fcoe_percpu_s *bg; 384 + unsigned short oxid; 385 + 386 + hba = container_of(ptype, struct bnx2fc_hba, fcoe_packet_type); 387 + lport = hba->ctlr.lp; 388 + 389 + if (unlikely(lport == NULL)) { 390 + printk(KERN_ALERT PFX "bnx2fc_rcv: lport is NULL\n"); 391 + goto err; 392 + } 393 + 394 + if (unlikely(eth_hdr(skb)->h_proto != htons(ETH_P_FCOE))) { 395 + printk(KERN_ALERT PFX "bnx2fc_rcv: Wrong FC type frame\n"); 396 + goto err; 397 + } 398 + 399 + /* 400 + * Check for minimum frame length, and make sure required FCoE 401 + * and FC headers are pulled into the linear data area. 402 + */ 403 + if (unlikely((skb->len < FCOE_MIN_FRAME) || 404 + !pskb_may_pull(skb, FCOE_HEADER_LEN))) 405 + goto err; 406 + 407 + skb_set_transport_header(skb, sizeof(struct fcoe_hdr)); 408 + fh = (struct fc_frame_header *) skb_transport_header(skb); 409 + 410 + oxid = ntohs(fh->fh_ox_id); 411 + 412 + fr = fcoe_dev_from_skb(skb); 413 + fr->fr_dev = lport; 414 + fr->ptype = ptype; 415 + 416 + bg = &bnx2fc_global; 417 + spin_lock_bh(&bg->fcoe_rx_list.lock); 418 + 419 + __skb_queue_tail(&bg->fcoe_rx_list, skb); 420 + if (bg->fcoe_rx_list.qlen == 1) 421 + wake_up_process(bg->thread); 422 + 423 + spin_unlock_bh(&bg->fcoe_rx_list.lock); 424 + 425 + return 0; 426 + err: 427 + kfree_skb(skb); 428 + return -1; 429 + } 430 + 431 + static int bnx2fc_l2_rcv_thread(void *arg) 432 + { 433 + struct fcoe_percpu_s *bg = arg; 434 + struct sk_buff *skb; 435 + 436 + set_user_nice(current, -20); 437 + set_current_state(TASK_INTERRUPTIBLE); 438 + while (!kthread_should_stop()) { 439 + schedule(); 440 + set_current_state(TASK_RUNNING); 441 + spin_lock_bh(&bg->fcoe_rx_list.lock); 442 + while ((skb = __skb_dequeue(&bg->fcoe_rx_list)) != NULL) { 443 + spin_unlock_bh(&bg->fcoe_rx_list.lock); 444 + bnx2fc_recv_frame(skb); 445 + spin_lock_bh(&bg->fcoe_rx_list.lock); 446 + } 447 + spin_unlock_bh(&bg->fcoe_rx_list.lock); 448 + set_current_state(TASK_INTERRUPTIBLE); 449 + } 450 + set_current_state(TASK_RUNNING); 451 + return 0; 452 + } 453 + 454 + 455 + static void bnx2fc_recv_frame(struct sk_buff *skb) 456 + { 457 + u32 fr_len; 458 + struct fc_lport *lport; 459 + struct fcoe_rcv_info *fr; 460 + struct fcoe_dev_stats *stats; 461 + struct fc_frame_header *fh; 462 + struct fcoe_crc_eof crc_eof; 463 + struct fc_frame *fp; 464 + struct fc_lport *vn_port; 465 + struct fcoe_port *port; 466 + u8 *mac = NULL; 467 + u8 *dest_mac = NULL; 468 + struct fcoe_hdr *hp; 469 + 470 + fr = fcoe_dev_from_skb(skb); 471 + lport = fr->fr_dev; 472 + if (unlikely(lport == NULL)) { 473 + printk(KERN_ALERT PFX "Invalid lport struct\n"); 474 + kfree_skb(skb); 475 + return; 476 + } 477 + 478 + if (skb_is_nonlinear(skb)) 479 + skb_linearize(skb); 480 + mac = eth_hdr(skb)->h_source; 481 + dest_mac = eth_hdr(skb)->h_dest; 482 + 483 + /* Pull the header */ 484 + hp = (struct fcoe_hdr *) skb_network_header(skb); 485 + fh = (struct fc_frame_header *) skb_transport_header(skb); 486 + skb_pull(skb, sizeof(struct fcoe_hdr)); 487 + fr_len = skb->len - sizeof(struct fcoe_crc_eof); 488 + 489 + stats = per_cpu_ptr(lport->dev_stats, get_cpu()); 490 + stats->RxFrames++; 491 + stats->RxWords += fr_len / FCOE_WORD_TO_BYTE; 492 + 493 + fp = (struct fc_frame *)skb; 494 + fc_frame_init(fp); 495 + fr_dev(fp) = lport; 496 + fr_sof(fp) = hp->fcoe_sof; 497 + if 
(skb_copy_bits(skb, fr_len, &crc_eof, sizeof(crc_eof))) { 498 + put_cpu(); 499 + kfree_skb(skb); 500 + return; 501 + } 502 + fr_eof(fp) = crc_eof.fcoe_eof; 503 + fr_crc(fp) = crc_eof.fcoe_crc32; 504 + if (pskb_trim(skb, fr_len)) { 505 + put_cpu(); 506 + kfree_skb(skb); 507 + return; 508 + } 509 + 510 + fh = fc_frame_header_get(fp); 511 + 512 + vn_port = fc_vport_id_lookup(lport, ntoh24(fh->fh_d_id)); 513 + if (vn_port) { 514 + port = lport_priv(vn_port); 515 + if (compare_ether_addr(port->data_src_addr, dest_mac) 516 + != 0) { 517 + BNX2FC_HBA_DBG(lport, "fpma mismatch\n"); 518 + put_cpu(); 519 + kfree_skb(skb); 520 + return; 521 + } 522 + } 523 + if (fh->fh_r_ctl == FC_RCTL_DD_SOL_DATA && 524 + fh->fh_type == FC_TYPE_FCP) { 525 + /* Drop FCP data. We dont this in L2 path */ 526 + put_cpu(); 527 + kfree_skb(skb); 528 + return; 529 + } 530 + if (fh->fh_r_ctl == FC_RCTL_ELS_REQ && 531 + fh->fh_type == FC_TYPE_ELS) { 532 + switch (fc_frame_payload_op(fp)) { 533 + case ELS_LOGO: 534 + if (ntoh24(fh->fh_s_id) == FC_FID_FLOGI) { 535 + /* drop non-FIP LOGO */ 536 + put_cpu(); 537 + kfree_skb(skb); 538 + return; 539 + } 540 + break; 541 + } 542 + } 543 + if (le32_to_cpu(fr_crc(fp)) != 544 + ~crc32(~0, skb->data, fr_len)) { 545 + if (stats->InvalidCRCCount < 5) 546 + printk(KERN_WARNING PFX "dropping frame with " 547 + "CRC error\n"); 548 + stats->InvalidCRCCount++; 549 + put_cpu(); 550 + kfree_skb(skb); 551 + return; 552 + } 553 + put_cpu(); 554 + fc_exch_recv(lport, fp); 555 + } 556 + 557 + /** 558 + * bnx2fc_percpu_io_thread - thread per cpu for ios 559 + * 560 + * @arg: ptr to bnx2fc_percpu_info structure 561 + */ 562 + int bnx2fc_percpu_io_thread(void *arg) 563 + { 564 + struct bnx2fc_percpu_s *p = arg; 565 + struct bnx2fc_work *work, *tmp; 566 + LIST_HEAD(work_list); 567 + 568 + set_user_nice(current, -20); 569 + set_current_state(TASK_INTERRUPTIBLE); 570 + while (!kthread_should_stop()) { 571 + schedule(); 572 + set_current_state(TASK_RUNNING); 573 + spin_lock_bh(&p->fp_work_lock); 574 + while (!list_empty(&p->work_list)) { 575 + list_splice_init(&p->work_list, &work_list); 576 + spin_unlock_bh(&p->fp_work_lock); 577 + 578 + list_for_each_entry_safe(work, tmp, &work_list, list) { 579 + list_del_init(&work->list); 580 + bnx2fc_process_cq_compl(work->tgt, work->wqe); 581 + kfree(work); 582 + } 583 + 584 + spin_lock_bh(&p->fp_work_lock); 585 + } 586 + spin_unlock_bh(&p->fp_work_lock); 587 + set_current_state(TASK_INTERRUPTIBLE); 588 + } 589 + set_current_state(TASK_RUNNING); 590 + 591 + return 0; 592 + } 593 + 594 + static struct fc_host_statistics *bnx2fc_get_host_stats(struct Scsi_Host *shost) 595 + { 596 + struct fc_host_statistics *bnx2fc_stats; 597 + struct fc_lport *lport = shost_priv(shost); 598 + struct fcoe_port *port = lport_priv(lport); 599 + struct bnx2fc_hba *hba = port->priv; 600 + struct fcoe_statistics_params *fw_stats; 601 + int rc = 0; 602 + 603 + fw_stats = (struct fcoe_statistics_params *)hba->stats_buffer; 604 + if (!fw_stats) 605 + return NULL; 606 + 607 + bnx2fc_stats = fc_get_host_stats(shost); 608 + 609 + init_completion(&hba->stat_req_done); 610 + if (bnx2fc_send_stat_req(hba)) 611 + return bnx2fc_stats; 612 + rc = wait_for_completion_timeout(&hba->stat_req_done, (2 * HZ)); 613 + if (!rc) { 614 + BNX2FC_HBA_DBG(lport, "FW stat req timed out\n"); 615 + return bnx2fc_stats; 616 + } 617 + bnx2fc_stats->invalid_crc_count += fw_stats->rx_stat1.fc_crc_cnt; 618 + bnx2fc_stats->tx_frames += fw_stats->tx_stat.fcoe_tx_pkt_cnt; 619 + bnx2fc_stats->tx_words += 
(fw_stats->tx_stat.fcoe_tx_byte_cnt) / 4; 620 + bnx2fc_stats->rx_frames += fw_stats->rx_stat0.fcoe_rx_pkt_cnt; 621 + bnx2fc_stats->rx_words += (fw_stats->rx_stat0.fcoe_rx_byte_cnt) / 4; 622 + 623 + bnx2fc_stats->dumped_frames = 0; 624 + bnx2fc_stats->lip_count = 0; 625 + bnx2fc_stats->nos_count = 0; 626 + bnx2fc_stats->loss_of_sync_count = 0; 627 + bnx2fc_stats->loss_of_signal_count = 0; 628 + bnx2fc_stats->prim_seq_protocol_err_count = 0; 629 + 630 + return bnx2fc_stats; 631 + } 632 + 633 + static int bnx2fc_shost_config(struct fc_lport *lport, struct device *dev) 634 + { 635 + struct fcoe_port *port = lport_priv(lport); 636 + struct bnx2fc_hba *hba = port->priv; 637 + struct Scsi_Host *shost = lport->host; 638 + int rc = 0; 639 + 640 + shost->max_cmd_len = BNX2FC_MAX_CMD_LEN; 641 + shost->max_lun = BNX2FC_MAX_LUN; 642 + shost->max_id = BNX2FC_MAX_FCP_TGT; 643 + shost->max_channel = 0; 644 + if (lport->vport) 645 + shost->transportt = bnx2fc_vport_xport_template; 646 + else 647 + shost->transportt = bnx2fc_transport_template; 648 + 649 + /* Add the new host to SCSI-ml */ 650 + rc = scsi_add_host(lport->host, dev); 651 + if (rc) { 652 + printk(KERN_ERR PFX "Error on scsi_add_host\n"); 653 + return rc; 654 + } 655 + if (!lport->vport) 656 + fc_host_max_npiv_vports(lport->host) = USHRT_MAX; 657 + sprintf(fc_host_symbolic_name(lport->host), "%s v%s over %s", 658 + BNX2FC_NAME, BNX2FC_VERSION, 659 + hba->netdev->name); 660 + 661 + return 0; 662 + } 663 + 664 + static int bnx2fc_mfs_update(struct fc_lport *lport) 665 + { 666 + struct fcoe_port *port = lport_priv(lport); 667 + struct bnx2fc_hba *hba = port->priv; 668 + struct net_device *netdev = hba->netdev; 669 + u32 mfs; 670 + u32 max_mfs; 671 + 672 + mfs = netdev->mtu - (sizeof(struct fcoe_hdr) + 673 + sizeof(struct fcoe_crc_eof)); 674 + max_mfs = BNX2FC_MAX_PAYLOAD + sizeof(struct fc_frame_header); 675 + BNX2FC_HBA_DBG(lport, "mfs = %d, max_mfs = %d\n", mfs, max_mfs); 676 + if (mfs > max_mfs) 677 + mfs = max_mfs; 678 + 679 + /* Adjust mfs to be a multiple of 256 bytes */ 680 + mfs = (((mfs - sizeof(struct fc_frame_header)) / BNX2FC_MIN_PAYLOAD) * 681 + BNX2FC_MIN_PAYLOAD); 682 + mfs = mfs + sizeof(struct fc_frame_header); 683 + 684 + BNX2FC_HBA_DBG(lport, "Set MFS = %d\n", mfs); 685 + if (fc_set_mfs(lport, mfs)) 686 + return -EINVAL; 687 + return 0; 688 + } 689 + static void bnx2fc_link_speed_update(struct fc_lport *lport) 690 + { 691 + struct fcoe_port *port = lport_priv(lport); 692 + struct bnx2fc_hba *hba = port->priv; 693 + struct net_device *netdev = hba->netdev; 694 + struct ethtool_cmd ecmd = { ETHTOOL_GSET }; 695 + 696 + if (!dev_ethtool_get_settings(netdev, &ecmd)) { 697 + lport->link_supported_speeds &= 698 + ~(FC_PORTSPEED_1GBIT | FC_PORTSPEED_10GBIT); 699 + if (ecmd.supported & (SUPPORTED_1000baseT_Half | 700 + SUPPORTED_1000baseT_Full)) 701 + lport->link_supported_speeds |= FC_PORTSPEED_1GBIT; 702 + if (ecmd.supported & SUPPORTED_10000baseT_Full) 703 + lport->link_supported_speeds |= FC_PORTSPEED_10GBIT; 704 + 705 + if (ecmd.speed == SPEED_1000) 706 + lport->link_speed = FC_PORTSPEED_1GBIT; 707 + if (ecmd.speed == SPEED_10000) 708 + lport->link_speed = FC_PORTSPEED_10GBIT; 709 + } 710 + return; 711 + } 712 + static int bnx2fc_link_ok(struct fc_lport *lport) 713 + { 714 + struct fcoe_port *port = lport_priv(lport); 715 + struct bnx2fc_hba *hba = port->priv; 716 + struct net_device *dev = hba->phys_dev; 717 + int rc = 0; 718 + 719 + if ((dev->flags & IFF_UP) && netif_carrier_ok(dev)) 720 + clear_bit(ADAPTER_STATE_LINK_DOWN, 
&hba->adapter_state); 721 + else { 722 + set_bit(ADAPTER_STATE_LINK_DOWN, &hba->adapter_state); 723 + rc = -1; 724 + } 725 + return rc; 726 + } 727 + 728 + /** 729 + * bnx2fc_get_link_state - get network link state 730 + * 731 + * @hba: adapter instance pointer 732 + * 733 + * updates adapter structure flag based on netdev state 734 + */ 735 + void bnx2fc_get_link_state(struct bnx2fc_hba *hba) 736 + { 737 + if (test_bit(__LINK_STATE_NOCARRIER, &hba->netdev->state)) 738 + set_bit(ADAPTER_STATE_LINK_DOWN, &hba->adapter_state); 739 + else 740 + clear_bit(ADAPTER_STATE_LINK_DOWN, &hba->adapter_state); 741 + } 742 + 743 + static int bnx2fc_net_config(struct fc_lport *lport) 744 + { 745 + struct bnx2fc_hba *hba; 746 + struct fcoe_port *port; 747 + u64 wwnn, wwpn; 748 + 749 + port = lport_priv(lport); 750 + hba = port->priv; 751 + 752 + /* require support for get_pauseparam ethtool op. */ 753 + if (!hba->phys_dev->ethtool_ops || 754 + !hba->phys_dev->ethtool_ops->get_pauseparam) 755 + return -EOPNOTSUPP; 756 + 757 + if (bnx2fc_mfs_update(lport)) 758 + return -EINVAL; 759 + 760 + skb_queue_head_init(&port->fcoe_pending_queue); 761 + port->fcoe_pending_queue_active = 0; 762 + setup_timer(&port->timer, fcoe_queue_timer, (unsigned long) lport); 763 + 764 + bnx2fc_link_speed_update(lport); 765 + 766 + if (!lport->vport) { 767 + wwnn = fcoe_wwn_from_mac(hba->ctlr.ctl_src_addr, 1, 0); 768 + BNX2FC_HBA_DBG(lport, "WWNN = 0x%llx\n", wwnn); 769 + fc_set_wwnn(lport, wwnn); 770 + 771 + wwpn = fcoe_wwn_from_mac(hba->ctlr.ctl_src_addr, 2, 0); 772 + BNX2FC_HBA_DBG(lport, "WWPN = 0x%llx\n", wwpn); 773 + fc_set_wwpn(lport, wwpn); 774 + } 775 + 776 + return 0; 777 + } 778 + 779 + static void bnx2fc_destroy_timer(unsigned long data) 780 + { 781 + struct bnx2fc_hba *hba = (struct bnx2fc_hba *)data; 782 + 783 + BNX2FC_HBA_DBG(hba->ctlr.lp, "ERROR:bnx2fc_destroy_timer - " 784 + "Destroy compl not received!!\n"); 785 + hba->flags |= BNX2FC_FLAG_DESTROY_CMPL; 786 + wake_up_interruptible(&hba->destroy_wait); 787 + } 788 + 789 + /** 790 + * bnx2fc_indicate_netevent - Generic netdev event handler 791 + * 792 + * @context: adapter structure pointer 793 + * @event: event type 794 + * 795 + * Handles NETDEV_UP, NETDEV_DOWN, NETDEV_GOING_DOWN,NETDEV_CHANGE and 796 + * NETDEV_CHANGE_MTU events 797 + */ 798 + static void bnx2fc_indicate_netevent(void *context, unsigned long event) 799 + { 800 + struct bnx2fc_hba *hba = (struct bnx2fc_hba *)context; 801 + struct fc_lport *lport = hba->ctlr.lp; 802 + struct fc_lport *vport; 803 + u32 link_possible = 1; 804 + 805 + if (!test_bit(BNX2FC_CREATE_DONE, &hba->init_done)) { 806 + BNX2FC_MISC_DBG("driver not ready. 
event=%s %ld\n",
807 + hba->netdev->name, event);
808 + return;
809 + }
810 + 
811 + /*
812 + * ASSUMPTION:
813 + * indicate_netevent cannot be called from cnic unless bnx2fc
814 + * does register_device
815 + */
816 + BUG_ON(!lport);
817 + 
818 + BNX2FC_HBA_DBG(lport, "enter netevent handler - event=%s %ld\n",
819 + hba->netdev->name, event);
820 + 
821 + switch (event) {
822 + case NETDEV_UP:
823 + BNX2FC_HBA_DBG(lport, "Port up, adapter_state = %ld\n",
824 + hba->adapter_state);
825 + if (!test_bit(ADAPTER_STATE_UP, &hba->adapter_state))
826 + printk(KERN_ERR "indicate_netevent: "\
827 + "adapter is not UP!!\n");
828 + /* fall thru to update mfs if MTU has changed */
829 + case NETDEV_CHANGEMTU:
830 + BNX2FC_HBA_DBG(lport, "NETDEV_CHANGEMTU event\n");
831 + bnx2fc_mfs_update(lport);
832 + mutex_lock(&lport->lp_mutex);
833 + list_for_each_entry(vport, &lport->vports, list)
834 + bnx2fc_mfs_update(vport);
835 + mutex_unlock(&lport->lp_mutex);
836 + break;
837 + 
838 + case NETDEV_DOWN:
839 + BNX2FC_HBA_DBG(lport, "Port down\n");
840 + clear_bit(ADAPTER_STATE_GOING_DOWN, &hba->adapter_state);
841 + clear_bit(ADAPTER_STATE_UP, &hba->adapter_state);
842 + link_possible = 0;
843 + break;
844 + 
845 + case NETDEV_GOING_DOWN:
846 + BNX2FC_HBA_DBG(lport, "Port going down\n");
847 + set_bit(ADAPTER_STATE_GOING_DOWN, &hba->adapter_state);
848 + link_possible = 0;
849 + break;
850 + 
851 + case NETDEV_CHANGE:
852 + BNX2FC_HBA_DBG(lport, "NETDEV_CHANGE\n");
853 + break;
854 + 
855 + default:
856 + printk(KERN_ERR PFX "Unknown netevent %ld\n", event);
857 + return;
858 + }
859 + 
860 + bnx2fc_link_speed_update(lport);
861 + 
862 + if (link_possible && !bnx2fc_link_ok(lport)) {
863 + printk(KERN_ERR "indicate_netevent: call ctlr_link_up\n");
864 + fcoe_ctlr_link_up(&hba->ctlr);
865 + } else {
866 + printk(KERN_ERR "indicate_netevent: call ctlr_link_down\n");
867 + if (fcoe_ctlr_link_down(&hba->ctlr)) {
868 + clear_bit(ADAPTER_STATE_READY, &hba->adapter_state);
869 + mutex_lock(&lport->lp_mutex);
870 + list_for_each_entry(vport, &lport->vports, list)
871 + fc_host_port_type(vport->host) =
872 + FC_PORTTYPE_UNKNOWN;
873 + mutex_unlock(&lport->lp_mutex);
874 + fc_host_port_type(lport->host) = FC_PORTTYPE_UNKNOWN;
875 + per_cpu_ptr(lport->dev_stats,
876 + get_cpu())->LinkFailureCount++;
877 + put_cpu();
878 + fcoe_clean_pending_queue(lport);
879 + 
880 + init_waitqueue_head(&hba->shutdown_wait);
881 + BNX2FC_HBA_DBG(lport, "indicate_netevent "
882 + "num_ofld_sess = %d\n",
883 + hba->num_ofld_sess);
884 + hba->wait_for_link_down = 1;
885 + BNX2FC_HBA_DBG(lport, "waiting for uploads to "
886 + "compl proc = %s\n",
887 + current->comm);
888 + wait_event_interruptible(hba->shutdown_wait,
889 + (hba->num_ofld_sess == 0));
890 + BNX2FC_HBA_DBG(lport, "wakeup - num_ofld_sess = %d\n",
891 + hba->num_ofld_sess);
892 + hba->wait_for_link_down = 0;
893 + 
894 + if (signal_pending(current))
895 + flush_signals(current);
896 + }
897 + }
898 + }
899 + 
900 + static int bnx2fc_libfc_config(struct fc_lport *lport)
901 + {
902 + 
903 + /* Set the function pointers set by bnx2fc driver */
904 + memcpy(&lport->tt, &bnx2fc_libfc_fcn_templ,
905 + sizeof(struct libfc_function_template));
906 + fc_elsct_init(lport);
907 + fc_exch_init(lport);
908 + fc_rport_init(lport);
909 + fc_disc_init(lport);
910 + return 0;
911 + }
912 + 
913 + static int bnx2fc_em_config(struct fc_lport *lport)
914 + {
915 + struct fcoe_port *port = lport_priv(lport);
916 + struct bnx2fc_hba *hba = port->priv;
917 + 
918 + if (!fc_exch_mgr_alloc(lport, FC_CLASS_3,
FCOE_MIN_XID, 919 + FCOE_MAX_XID, NULL)) { 920 + printk(KERN_ERR PFX "em_config:fc_exch_mgr_alloc failed\n"); 921 + return -ENOMEM; 922 + } 923 + 924 + hba->cmd_mgr = bnx2fc_cmd_mgr_alloc(hba, BNX2FC_MIN_XID, 925 + BNX2FC_MAX_XID); 926 + 927 + if (!hba->cmd_mgr) { 928 + printk(KERN_ERR PFX "em_config:bnx2fc_cmd_mgr_alloc failed\n"); 929 + fc_exch_mgr_free(lport); 930 + return -ENOMEM; 931 + } 932 + return 0; 933 + } 934 + 935 + static int bnx2fc_lport_config(struct fc_lport *lport) 936 + { 937 + lport->link_up = 0; 938 + lport->qfull = 0; 939 + lport->max_retry_count = 3; 940 + lport->max_rport_retry_count = 3; 941 + lport->e_d_tov = 2 * 1000; 942 + lport->r_a_tov = 10 * 1000; 943 + 944 + /* REVISIT: enable when supporting tape devices 945 + lport->service_params = (FCP_SPPF_INIT_FCN | FCP_SPPF_RD_XRDY_DIS | 946 + FCP_SPPF_RETRY | FCP_SPPF_CONF_COMPL); 947 + */ 948 + lport->service_params = (FCP_SPPF_INIT_FCN | FCP_SPPF_RD_XRDY_DIS); 949 + lport->does_npiv = 1; 950 + 951 + memset(&lport->rnid_gen, 0, sizeof(struct fc_els_rnid_gen)); 952 + lport->rnid_gen.rnid_atype = BNX2FC_RNID_HBA; 953 + 954 + /* alloc stats structure */ 955 + if (fc_lport_init_stats(lport)) 956 + return -ENOMEM; 957 + 958 + /* Finish fc_lport configuration */ 959 + fc_lport_config(lport); 960 + 961 + return 0; 962 + } 963 + 964 + /** 965 + * bnx2fc_fip_recv - handle a received FIP frame. 966 + * 967 + * @skb: the received skb 968 + * @dev: associated &net_device 969 + * @ptype: the &packet_type structure which was used to register this handler. 970 + * @orig_dev: original receive &net_device, in case @ dev is a bond. 971 + * 972 + * Returns: 0 for success 973 + */ 974 + static int bnx2fc_fip_recv(struct sk_buff *skb, struct net_device *dev, 975 + struct packet_type *ptype, 976 + struct net_device *orig_dev) 977 + { 978 + struct bnx2fc_hba *hba; 979 + hba = container_of(ptype, struct bnx2fc_hba, fip_packet_type); 980 + fcoe_ctlr_recv(&hba->ctlr, skb); 981 + return 0; 982 + } 983 + 984 + /** 985 + * bnx2fc_update_src_mac - Update Ethernet MAC filters. 986 + * 987 + * @fip: FCoE controller. 988 + * @old: Unicast MAC address to delete if the MAC is non-zero. 989 + * @new: Unicast MAC address to add. 990 + * 991 + * Remove any previously-set unicast MAC filter. 992 + * Add secondary FCoE MAC address filter for our OUI. 993 + */ 994 + static void bnx2fc_update_src_mac(struct fc_lport *lport, u8 *addr) 995 + { 996 + struct fcoe_port *port = lport_priv(lport); 997 + 998 + memcpy(port->data_src_addr, addr, ETH_ALEN); 999 + } 1000 + 1001 + /** 1002 + * bnx2fc_get_src_mac - return the ethernet source address for an lport 1003 + * 1004 + * @lport: libfc port 1005 + */ 1006 + static u8 *bnx2fc_get_src_mac(struct fc_lport *lport) 1007 + { 1008 + struct fcoe_port *port; 1009 + 1010 + port = (struct fcoe_port *)lport_priv(lport); 1011 + return port->data_src_addr; 1012 + } 1013 + 1014 + /** 1015 + * bnx2fc_fip_send - send an Ethernet-encapsulated FIP frame. 1016 + * 1017 + * @fip: FCoE controller. 1018 + * @skb: FIP Packet. 
1019 + */ 1020 + static void bnx2fc_fip_send(struct fcoe_ctlr *fip, struct sk_buff *skb) 1021 + { 1022 + skb->dev = bnx2fc_from_ctlr(fip)->netdev; 1023 + dev_queue_xmit(skb); 1024 + } 1025 + 1026 + static int bnx2fc_vport_create(struct fc_vport *vport, bool disabled) 1027 + { 1028 + struct Scsi_Host *shost = vport_to_shost(vport); 1029 + struct fc_lport *n_port = shost_priv(shost); 1030 + struct fcoe_port *port = lport_priv(n_port); 1031 + struct bnx2fc_hba *hba = port->priv; 1032 + struct net_device *netdev = hba->netdev; 1033 + struct fc_lport *vn_port; 1034 + 1035 + if (!test_bit(BNX2FC_FW_INIT_DONE, &hba->init_done)) { 1036 + printk(KERN_ERR PFX "vn ports cannot be created on" 1037 + "this hba\n"); 1038 + return -EIO; 1039 + } 1040 + mutex_lock(&bnx2fc_dev_lock); 1041 + vn_port = bnx2fc_if_create(hba, &vport->dev, 1); 1042 + mutex_unlock(&bnx2fc_dev_lock); 1043 + 1044 + if (IS_ERR(vn_port)) { 1045 + printk(KERN_ERR PFX "bnx2fc_vport_create (%s) failed\n", 1046 + netdev->name); 1047 + return -EIO; 1048 + } 1049 + 1050 + if (disabled) { 1051 + fc_vport_set_state(vport, FC_VPORT_DISABLED); 1052 + } else { 1053 + vn_port->boot_time = jiffies; 1054 + fc_lport_init(vn_port); 1055 + fc_fabric_login(vn_port); 1056 + fc_vport_setlink(vn_port); 1057 + } 1058 + return 0; 1059 + } 1060 + 1061 + static int bnx2fc_vport_destroy(struct fc_vport *vport) 1062 + { 1063 + struct Scsi_Host *shost = vport_to_shost(vport); 1064 + struct fc_lport *n_port = shost_priv(shost); 1065 + struct fc_lport *vn_port = vport->dd_data; 1066 + struct fcoe_port *port = lport_priv(vn_port); 1067 + 1068 + mutex_lock(&n_port->lp_mutex); 1069 + list_del(&vn_port->list); 1070 + mutex_unlock(&n_port->lp_mutex); 1071 + queue_work(bnx2fc_wq, &port->destroy_work); 1072 + return 0; 1073 + } 1074 + 1075 + static int bnx2fc_vport_disable(struct fc_vport *vport, bool disable) 1076 + { 1077 + struct fc_lport *lport = vport->dd_data; 1078 + 1079 + if (disable) { 1080 + fc_vport_set_state(vport, FC_VPORT_DISABLED); 1081 + fc_fabric_logoff(lport); 1082 + } else { 1083 + lport->boot_time = jiffies; 1084 + fc_fabric_login(lport); 1085 + fc_vport_setlink(lport); 1086 + } 1087 + return 0; 1088 + } 1089 + 1090 + 1091 + static int bnx2fc_netdev_setup(struct bnx2fc_hba *hba) 1092 + { 1093 + struct net_device *netdev = hba->netdev; 1094 + struct net_device *physdev = hba->phys_dev; 1095 + struct netdev_hw_addr *ha; 1096 + int sel_san_mac = 0; 1097 + 1098 + /* Do not support for bonding device */ 1099 + if ((netdev->priv_flags & IFF_MASTER_ALB) || 1100 + (netdev->priv_flags & IFF_SLAVE_INACTIVE) || 1101 + (netdev->priv_flags & IFF_MASTER_8023AD)) { 1102 + return -EOPNOTSUPP; 1103 + } 1104 + 1105 + /* setup Source MAC Address */ 1106 + rcu_read_lock(); 1107 + for_each_dev_addr(physdev, ha) { 1108 + BNX2FC_MISC_DBG("net_config: ha->type = %d, fip_mac = ", 1109 + ha->type); 1110 + printk(KERN_INFO "%2x:%2x:%2x:%2x:%2x:%2x\n", ha->addr[0], 1111 + ha->addr[1], ha->addr[2], ha->addr[3], 1112 + ha->addr[4], ha->addr[5]); 1113 + 1114 + if ((ha->type == NETDEV_HW_ADDR_T_SAN) && 1115 + (is_valid_ether_addr(ha->addr))) { 1116 + memcpy(hba->ctlr.ctl_src_addr, ha->addr, ETH_ALEN); 1117 + sel_san_mac = 1; 1118 + BNX2FC_MISC_DBG("Found SAN MAC\n"); 1119 + } 1120 + } 1121 + rcu_read_unlock(); 1122 + 1123 + if (!sel_san_mac) 1124 + return -ENODEV; 1125 + 1126 + hba->fip_packet_type.func = bnx2fc_fip_recv; 1127 + hba->fip_packet_type.type = htons(ETH_P_FIP); 1128 + hba->fip_packet_type.dev = netdev; 1129 + dev_add_pack(&hba->fip_packet_type); 1130 + 1131 + 
hba->fcoe_packet_type.func = bnx2fc_rcv; 1132 + hba->fcoe_packet_type.type = __constant_htons(ETH_P_FCOE); 1133 + hba->fcoe_packet_type.dev = netdev; 1134 + dev_add_pack(&hba->fcoe_packet_type); 1135 + 1136 + return 0; 1137 + } 1138 + 1139 + static int bnx2fc_attach_transport(void) 1140 + { 1141 + bnx2fc_transport_template = 1142 + fc_attach_transport(&bnx2fc_transport_function); 1143 + 1144 + if (bnx2fc_transport_template == NULL) { 1145 + printk(KERN_ERR PFX "Failed to attach FC transport\n"); 1146 + return -ENODEV; 1147 + } 1148 + 1149 + bnx2fc_vport_xport_template = 1150 + fc_attach_transport(&bnx2fc_vport_xport_function); 1151 + if (bnx2fc_vport_xport_template == NULL) { 1152 + printk(KERN_ERR PFX 1153 + "Failed to attach FC transport for vport\n"); 1154 + fc_release_transport(bnx2fc_transport_template); 1155 + bnx2fc_transport_template = NULL; 1156 + return -ENODEV; 1157 + } 1158 + return 0; 1159 + } 1160 + static void bnx2fc_release_transport(void) 1161 + { 1162 + fc_release_transport(bnx2fc_transport_template); 1163 + fc_release_transport(bnx2fc_vport_xport_template); 1164 + bnx2fc_transport_template = NULL; 1165 + bnx2fc_vport_xport_template = NULL; 1166 + } 1167 + 1168 + static void bnx2fc_interface_release(struct kref *kref) 1169 + { 1170 + struct bnx2fc_hba *hba; 1171 + struct net_device *netdev; 1172 + struct net_device *phys_dev; 1173 + 1174 + hba = container_of(kref, struct bnx2fc_hba, kref); 1175 + BNX2FC_HBA_DBG(hba->ctlr.lp, "Interface is being released\n"); 1176 + 1177 + netdev = hba->netdev; 1178 + phys_dev = hba->phys_dev; 1179 + 1180 + /* tear-down FIP controller */ 1181 + if (test_and_clear_bit(BNX2FC_CTLR_INIT_DONE, &hba->init_done)) 1182 + fcoe_ctlr_destroy(&hba->ctlr); 1183 + 1184 + /* Free the command manager */ 1185 + if (hba->cmd_mgr) { 1186 + bnx2fc_cmd_mgr_free(hba->cmd_mgr); 1187 + hba->cmd_mgr = NULL; 1188 + } 1189 + dev_put(netdev); 1190 + module_put(THIS_MODULE); 1191 + } 1192 + 1193 + static inline void bnx2fc_interface_get(struct bnx2fc_hba *hba) 1194 + { 1195 + kref_get(&hba->kref); 1196 + } 1197 + 1198 + static inline void bnx2fc_interface_put(struct bnx2fc_hba *hba) 1199 + { 1200 + kref_put(&hba->kref, bnx2fc_interface_release); 1201 + } 1202 + static void bnx2fc_interface_destroy(struct bnx2fc_hba *hba) 1203 + { 1204 + bnx2fc_unbind_pcidev(hba); 1205 + kfree(hba); 1206 + } 1207 + 1208 + /** 1209 + * bnx2fc_interface_create - create a new fcoe instance 1210 + * 1211 + * @cnic: pointer to cnic device 1212 + * 1213 + * Creates a new FCoE instance on the given device which include allocating 1214 + * hba structure, scsi_host and lport structures. 
1215 + */ 1216 + static struct bnx2fc_hba *bnx2fc_interface_create(struct cnic_dev *cnic) 1217 + { 1218 + struct bnx2fc_hba *hba; 1219 + int rc; 1220 + 1221 + hba = kzalloc(sizeof(*hba), GFP_KERNEL); 1222 + if (!hba) { 1223 + printk(KERN_ERR PFX "Unable to allocate hba structure\n"); 1224 + return NULL; 1225 + } 1226 + spin_lock_init(&hba->hba_lock); 1227 + mutex_init(&hba->hba_mutex); 1228 + 1229 + hba->cnic = cnic; 1230 + rc = bnx2fc_bind_pcidev(hba); 1231 + if (rc) 1232 + goto bind_err; 1233 + hba->phys_dev = cnic->netdev; 1234 + /* will get overwritten after we do vlan discovery */ 1235 + hba->netdev = hba->phys_dev; 1236 + 1237 + init_waitqueue_head(&hba->shutdown_wait); 1238 + init_waitqueue_head(&hba->destroy_wait); 1239 + 1240 + return hba; 1241 + bind_err: 1242 + printk(KERN_ERR PFX "create_interface: bind error\n"); 1243 + kfree(hba); 1244 + return NULL; 1245 + } 1246 + 1247 + static int bnx2fc_interface_setup(struct bnx2fc_hba *hba, 1248 + enum fip_state fip_mode) 1249 + { 1250 + int rc = 0; 1251 + struct net_device *netdev = hba->netdev; 1252 + struct fcoe_ctlr *fip = &hba->ctlr; 1253 + 1254 + dev_hold(netdev); 1255 + kref_init(&hba->kref); 1256 + 1257 + hba->flags = 0; 1258 + 1259 + /* Initialize FIP */ 1260 + memset(fip, 0, sizeof(*fip)); 1261 + fcoe_ctlr_init(fip, fip_mode); 1262 + hba->ctlr.send = bnx2fc_fip_send; 1263 + hba->ctlr.update_mac = bnx2fc_update_src_mac; 1264 + hba->ctlr.get_src_addr = bnx2fc_get_src_mac; 1265 + set_bit(BNX2FC_CTLR_INIT_DONE, &hba->init_done); 1266 + 1267 + rc = bnx2fc_netdev_setup(hba); 1268 + if (rc) 1269 + goto setup_err; 1270 + 1271 + hba->next_conn_id = 0; 1272 + 1273 + memset(hba->tgt_ofld_list, 0, sizeof(hba->tgt_ofld_list)); 1274 + hba->num_ofld_sess = 0; 1275 + 1276 + return 0; 1277 + 1278 + setup_err: 1279 + fcoe_ctlr_destroy(&hba->ctlr); 1280 + dev_put(netdev); 1281 + bnx2fc_interface_put(hba); 1282 + return rc; 1283 + } 1284 + 1285 + /** 1286 + * bnx2fc_if_create - Create FCoE instance on a given interface 1287 + * 1288 + * @hba: FCoE interface to create a local port on 1289 + * @parent: Device pointer to be the parent in sysfs for the SCSI host 1290 + * @npiv: Indicates if the port is vport or not 1291 + * 1292 + * Creates a fc_lport instance and a Scsi_Host instance and configure them. 
1293 + *
1294 + * Returns: Allocated fc_lport or NULL on failure
1295 + */
1296 + static struct fc_lport *bnx2fc_if_create(struct bnx2fc_hba *hba,
1297 + struct device *parent, int npiv)
1298 + {
1299 + struct fc_lport *lport = NULL;
1300 + struct fcoe_port *port;
1301 + struct Scsi_Host *shost;
1302 + struct fc_vport *vport = dev_to_vport(parent);
1303 + int rc = 0;
1304 +
1305 + /* Allocate Scsi_Host structure */
1306 + if (!npiv) {
1307 + lport = libfc_host_alloc(&bnx2fc_shost_template,
1308 + sizeof(struct fcoe_port));
1309 + } else {
1310 + lport = libfc_vport_create(vport,
1311 + sizeof(struct fcoe_port));
1312 + }
1313 +
1314 + if (!lport) {
1315 + printk(KERN_ERR PFX "could not allocate scsi host structure\n");
1316 + return NULL;
1317 + }
1318 + shost = lport->host;
1319 + port = lport_priv(lport);
1320 + port->lport = lport;
1321 + port->priv = hba;
1322 + INIT_WORK(&port->destroy_work, bnx2fc_destroy_work);
1323 +
1324 + /* Configure fcoe_port */
1325 + rc = bnx2fc_lport_config(lport);
1326 + if (rc)
1327 + goto lp_config_err;
1328 +
1329 + if (npiv) {
1330 + vport = dev_to_vport(parent);
1331 + printk(KERN_ERR PFX "Setting vport names, 0x%llX 0x%llX\n",
1332 + vport->node_name, vport->port_name);
1333 + fc_set_wwnn(lport, vport->node_name);
1334 + fc_set_wwpn(lport, vport->port_name);
1335 + }
1336 + /* Configure netdev and networking properties of the lport */
1337 + rc = bnx2fc_net_config(lport);
1338 + if (rc) {
1339 + printk(KERN_ERR PFX "Error on bnx2fc_net_config\n");
1340 + goto lp_config_err;
1341 + }
1342 +
1343 + rc = bnx2fc_shost_config(lport, parent);
1344 + if (rc) {
1345 + printk(KERN_ERR PFX "Couldn't configure shost for %s\n",
1346 + hba->netdev->name);
1347 + goto lp_config_err;
1348 + }
1349 +
1350 + /* Initialize the libfc library */
1351 + rc = bnx2fc_libfc_config(lport);
1352 + if (rc) {
1353 + printk(KERN_ERR PFX "Couldn't configure libfc\n");
1354 + goto shost_err;
1355 + }
1356 + fc_host_port_type(lport->host) = FC_PORTTYPE_UNKNOWN;
1357 +
1358 + /* Allocate exchange manager */
1359 + if (!npiv) {
1360 + rc = bnx2fc_em_config(lport);
1361 + if (rc) {
1362 + printk(KERN_ERR PFX "Error on bnx2fc_em_config\n");
1363 + goto shost_err;
1364 + }
1365 + }
1366 +
1367 + bnx2fc_interface_get(hba);
1368 + return lport;
1369 +
1370 + shost_err:
1371 + scsi_remove_host(shost);
1372 + lp_config_err:
1373 + scsi_host_put(lport->host);
1374 + return NULL;
1375 + }
1376 +
1377 + static void bnx2fc_netdev_cleanup(struct bnx2fc_hba *hba)
1378 + {
1379 + /* Don't listen for Ethernet packets anymore */
1380 + __dev_remove_pack(&hba->fcoe_packet_type);
1381 + __dev_remove_pack(&hba->fip_packet_type);
1382 + synchronize_net();
1383 + }
1384 +
1385 + static void bnx2fc_if_destroy(struct fc_lport *lport)
1386 + {
1387 + struct fcoe_port *port = lport_priv(lport);
1388 + struct bnx2fc_hba *hba = port->priv;
1389 +
1390 + BNX2FC_HBA_DBG(hba->ctlr.lp, "ENTERED bnx2fc_if_destroy\n");
1391 + /* Stop the transmit retry timer */
1392 + del_timer_sync(&port->timer);
1393 +
1394 + /* Free existing transmit skbs */
1395 + fcoe_clean_pending_queue(lport);
1396 +
1397 + bnx2fc_interface_put(hba);
1398 +
1399 + /* Free queued packets for the receive thread */
1400 + bnx2fc_clean_rx_queue(lport);
1401 +
1402 + /* Detach from scsi-ml */
1403 + fc_remove_host(lport->host);
1404 + scsi_remove_host(lport->host);
1405 +
1406 + /*
1407 + * Note that only the physical lport will have the exchange manager.
1408 + * for vports, this function is NOP 1409 + */ 1410 + fc_exch_mgr_free(lport); 1411 + 1412 + /* Free memory used by statistical counters */ 1413 + fc_lport_free_stats(lport); 1414 + 1415 + /* Release Scsi_Host */ 1416 + scsi_host_put(lport->host); 1417 + } 1418 + 1419 + /** 1420 + * bnx2fc_destroy - Destroy a bnx2fc FCoE interface 1421 + * 1422 + * @buffer: The name of the Ethernet interface to be destroyed 1423 + * @kp: The associated kernel parameter 1424 + * 1425 + * Called from sysfs. 1426 + * 1427 + * Returns: 0 for success 1428 + */ 1429 + static int bnx2fc_destroy(struct net_device *netdev) 1430 + { 1431 + struct bnx2fc_hba *hba = NULL; 1432 + struct net_device *phys_dev; 1433 + int rc = 0; 1434 + 1435 + if (!rtnl_trylock()) 1436 + return restart_syscall(); 1437 + 1438 + mutex_lock(&bnx2fc_dev_lock); 1439 + #ifdef CONFIG_SCSI_BNX2X_FCOE_MODULE 1440 + if (THIS_MODULE->state != MODULE_STATE_LIVE) { 1441 + rc = -ENODEV; 1442 + goto netdev_err; 1443 + } 1444 + #endif 1445 + /* obtain physical netdev */ 1446 + if (netdev->priv_flags & IFF_802_1Q_VLAN) 1447 + phys_dev = vlan_dev_real_dev(netdev); 1448 + else { 1449 + printk(KERN_ERR PFX "Not a vlan device\n"); 1450 + rc = -ENODEV; 1451 + goto netdev_err; 1452 + } 1453 + 1454 + hba = bnx2fc_hba_lookup(phys_dev); 1455 + if (!hba || !hba->ctlr.lp) { 1456 + rc = -ENODEV; 1457 + printk(KERN_ERR PFX "bnx2fc_destroy: hba or lport not found\n"); 1458 + goto netdev_err; 1459 + } 1460 + 1461 + if (!test_bit(BNX2FC_CREATE_DONE, &hba->init_done)) { 1462 + printk(KERN_ERR PFX "bnx2fc_destroy: Create not called\n"); 1463 + goto netdev_err; 1464 + } 1465 + 1466 + bnx2fc_netdev_cleanup(hba); 1467 + 1468 + bnx2fc_stop(hba); 1469 + 1470 + bnx2fc_if_destroy(hba->ctlr.lp); 1471 + 1472 + destroy_workqueue(hba->timer_work_queue); 1473 + 1474 + if (test_bit(BNX2FC_FW_INIT_DONE, &hba->init_done)) 1475 + bnx2fc_fw_destroy(hba); 1476 + 1477 + clear_bit(BNX2FC_CREATE_DONE, &hba->init_done); 1478 + netdev_err: 1479 + mutex_unlock(&bnx2fc_dev_lock); 1480 + rtnl_unlock(); 1481 + return rc; 1482 + } 1483 + 1484 + static void bnx2fc_destroy_work(struct work_struct *work) 1485 + { 1486 + struct fcoe_port *port; 1487 + struct fc_lport *lport; 1488 + 1489 + port = container_of(work, struct fcoe_port, destroy_work); 1490 + lport = port->lport; 1491 + 1492 + BNX2FC_HBA_DBG(lport, "Entered bnx2fc_destroy_work\n"); 1493 + 1494 + bnx2fc_port_shutdown(lport); 1495 + rtnl_lock(); 1496 + mutex_lock(&bnx2fc_dev_lock); 1497 + bnx2fc_if_destroy(lport); 1498 + mutex_unlock(&bnx2fc_dev_lock); 1499 + rtnl_unlock(); 1500 + } 1501 + 1502 + static void bnx2fc_unbind_adapter_devices(struct bnx2fc_hba *hba) 1503 + { 1504 + bnx2fc_free_fw_resc(hba); 1505 + bnx2fc_free_task_ctx(hba); 1506 + } 1507 + 1508 + /** 1509 + * bnx2fc_bind_adapter_devices - binds bnx2fc adapter with the associated 1510 + * pci structure 1511 + * 1512 + * @hba: Adapter instance 1513 + */ 1514 + static int bnx2fc_bind_adapter_devices(struct bnx2fc_hba *hba) 1515 + { 1516 + if (bnx2fc_setup_task_ctx(hba)) 1517 + goto mem_err; 1518 + 1519 + if (bnx2fc_setup_fw_resc(hba)) 1520 + goto mem_err; 1521 + 1522 + return 0; 1523 + mem_err: 1524 + bnx2fc_unbind_adapter_devices(hba); 1525 + return -ENOMEM; 1526 + } 1527 + 1528 + static int bnx2fc_bind_pcidev(struct bnx2fc_hba *hba) 1529 + { 1530 + struct cnic_dev *cnic; 1531 + 1532 + if (!hba->cnic) { 1533 + printk(KERN_ERR PFX "cnic is NULL\n"); 1534 + return -ENODEV; 1535 + } 1536 + cnic = hba->cnic; 1537 + hba->pcidev = cnic->pcidev; 1538 + if (hba->pcidev) 1539 + 
pci_dev_get(hba->pcidev);
1540 +
1541 + return 0;
1542 + }
1543 +
1544 + static void bnx2fc_unbind_pcidev(struct bnx2fc_hba *hba)
1545 + {
1546 + if (hba->pcidev)
1547 + pci_dev_put(hba->pcidev);
1548 + hba->pcidev = NULL;
1549 + }
1550 +
1551 +
1552 +
1553 + /**
1554 + * bnx2fc_ulp_start - cnic callback to initialize & start adapter instance
1555 + *
1556 + * @handle: transport handle pointing to adapter structure
1557 + *
1558 + * This function maps adapter structure to pcidev structure and initiates
1559 + * firmware handshake to enable/initialize on-chip FCoE components.
1560 + * This bnx2fc - cnic interface API callback is used after the following
1561 + * conditions are met -
1562 + * a) underlying network interface is up (marked by event NETDEV_UP
1563 + * from netdev)
1564 + * b) bnx2fc adapter structure is registered.
1565 + */
1566 + static void bnx2fc_ulp_start(void *handle)
1567 + {
1568 + struct bnx2fc_hba *hba = handle;
1569 + struct fc_lport *lport = hba->ctlr.lp;
1570 +
1571 + BNX2FC_MISC_DBG("Entered %s\n", __func__);
1572 + mutex_lock(&bnx2fc_dev_lock);
1573 +
1574 + if (test_bit(BNX2FC_FW_INIT_DONE, &hba->init_done))
1575 + goto start_disc;
1576 +
1577 + if (test_bit(BNX2FC_CREATE_DONE, &hba->init_done))
1578 + bnx2fc_fw_init(hba);
1579 +
1580 + start_disc:
1581 + mutex_unlock(&bnx2fc_dev_lock);
1582 +
1583 + BNX2FC_MISC_DBG("bnx2fc started.\n");
1584 +
1585 + /* Kick off Fabric discovery */
1586 + if (test_bit(BNX2FC_CREATE_DONE, &hba->init_done)) {
1587 + printk(KERN_ERR PFX "ulp_init: start discovery\n");
1588 + lport->tt.frame_send = bnx2fc_xmit;
1589 + bnx2fc_start_disc(hba);
1590 + }
1591 + }
1592 +
1593 + static void bnx2fc_port_shutdown(struct fc_lport *lport)
1594 + {
1595 + BNX2FC_MISC_DBG("Entered %s\n", __func__);
1596 + fc_fabric_logoff(lport);
1597 + fc_lport_destroy(lport);
1598 + }
1599 +
1600 + static void bnx2fc_stop(struct bnx2fc_hba *hba)
1601 + {
1602 + struct fc_lport *lport;
1603 + struct fc_lport *vport;
1604 +
1605 + BNX2FC_MISC_DBG("ENTERED %s - init_done = %ld\n", __func__,
1606 + hba->init_done);
1607 + if (test_bit(BNX2FC_FW_INIT_DONE, &hba->init_done) &&
1608 + test_bit(BNX2FC_CREATE_DONE, &hba->init_done)) {
1609 + lport = hba->ctlr.lp;
1610 + bnx2fc_port_shutdown(lport);
1611 + BNX2FC_HBA_DBG(lport, "bnx2fc_stop: waiting for %d "
1612 + "offloaded sessions\n",
1613 + hba->num_ofld_sess);
1614 + wait_event_interruptible(hba->shutdown_wait,
1615 + (hba->num_ofld_sess == 0));
1616 + mutex_lock(&lport->lp_mutex);
1617 + list_for_each_entry(vport, &lport->vports, list)
1618 + fc_host_port_type(vport->host) = FC_PORTTYPE_UNKNOWN;
1619 + mutex_unlock(&lport->lp_mutex);
1620 + fc_host_port_type(lport->host) = FC_PORTTYPE_UNKNOWN;
1621 + fcoe_ctlr_link_down(&hba->ctlr);
1622 + fcoe_clean_pending_queue(lport);
1623 +
1624 + mutex_lock(&hba->hba_mutex);
1625 + clear_bit(ADAPTER_STATE_UP, &hba->adapter_state);
1626 + clear_bit(ADAPTER_STATE_GOING_DOWN, &hba->adapter_state);
1627 +
1628 + clear_bit(ADAPTER_STATE_READY, &hba->adapter_state);
1629 + mutex_unlock(&hba->hba_mutex);
1630 + }
1631 + }
1632 +
1633 + static int bnx2fc_fw_init(struct bnx2fc_hba *hba)
1634 + {
1635 + #define BNX2FC_INIT_POLL_TIME (1000 / HZ)
1636 + int rc = -1;
1637 + int i = HZ;
1638 +
1639 + rc = bnx2fc_bind_adapter_devices(hba);
1640 + if (rc) {
1641 + printk(KERN_ALERT PFX
1642 + "bnx2fc_bind_adapter_devices failed - rc = %d\n", rc);
1643 + goto err_out;
1644 + }
1645 +
1646 + rc = bnx2fc_send_fw_fcoe_init_msg(hba);
1647 + if (rc) {
1648 + printk(KERN_ALERT PFX
1649 +
"bnx2fc_send_fw_fcoe_init_msg failed - rc = %d\n", rc); 1650 + goto err_unbind; 1651 + } 1652 + 1653 + /* 1654 + * Wait until the adapter init message is complete, and adapter 1655 + * state is UP. 1656 + */ 1657 + while (!test_bit(ADAPTER_STATE_UP, &hba->adapter_state) && i--) 1658 + msleep(BNX2FC_INIT_POLL_TIME); 1659 + 1660 + if (!test_bit(ADAPTER_STATE_UP, &hba->adapter_state)) { 1661 + printk(KERN_ERR PFX "bnx2fc_start: %s failed to initialize. " 1662 + "Ignoring...\n", 1663 + hba->cnic->netdev->name); 1664 + rc = -1; 1665 + goto err_unbind; 1666 + } 1667 + 1668 + 1669 + /* Mark HBA to indicate that the FW INIT is done */ 1670 + set_bit(BNX2FC_FW_INIT_DONE, &hba->init_done); 1671 + return 0; 1672 + 1673 + err_unbind: 1674 + bnx2fc_unbind_adapter_devices(hba); 1675 + err_out: 1676 + return rc; 1677 + } 1678 + 1679 + static void bnx2fc_fw_destroy(struct bnx2fc_hba *hba) 1680 + { 1681 + if (test_and_clear_bit(BNX2FC_FW_INIT_DONE, &hba->init_done)) { 1682 + if (bnx2fc_send_fw_fcoe_destroy_msg(hba) == 0) { 1683 + init_timer(&hba->destroy_timer); 1684 + hba->destroy_timer.expires = BNX2FC_FW_TIMEOUT + 1685 + jiffies; 1686 + hba->destroy_timer.function = bnx2fc_destroy_timer; 1687 + hba->destroy_timer.data = (unsigned long)hba; 1688 + add_timer(&hba->destroy_timer); 1689 + wait_event_interruptible(hba->destroy_wait, 1690 + (hba->flags & 1691 + BNX2FC_FLAG_DESTROY_CMPL)); 1692 + /* This should never happen */ 1693 + if (signal_pending(current)) 1694 + flush_signals(current); 1695 + 1696 + del_timer_sync(&hba->destroy_timer); 1697 + } 1698 + bnx2fc_unbind_adapter_devices(hba); 1699 + } 1700 + } 1701 + 1702 + /** 1703 + * bnx2fc_ulp_stop - cnic callback to shutdown adapter instance 1704 + * 1705 + * @handle: transport handle pointing to adapter structure 1706 + * 1707 + * Driver checks if adapter is already in shutdown mode, if not start 1708 + * the shutdown process. 1709 + */ 1710 + static void bnx2fc_ulp_stop(void *handle) 1711 + { 1712 + struct bnx2fc_hba *hba = (struct bnx2fc_hba *)handle; 1713 + 1714 + printk(KERN_ERR "ULP_STOP\n"); 1715 + 1716 + mutex_lock(&bnx2fc_dev_lock); 1717 + bnx2fc_stop(hba); 1718 + bnx2fc_fw_destroy(hba); 1719 + mutex_unlock(&bnx2fc_dev_lock); 1720 + } 1721 + 1722 + static void bnx2fc_start_disc(struct bnx2fc_hba *hba) 1723 + { 1724 + struct fc_lport *lport; 1725 + int wait_cnt = 0; 1726 + 1727 + BNX2FC_MISC_DBG("Entered %s\n", __func__); 1728 + /* Kick off FIP/FLOGI */ 1729 + if (!test_bit(BNX2FC_FW_INIT_DONE, &hba->init_done)) { 1730 + printk(KERN_ERR PFX "Init not done yet\n"); 1731 + return; 1732 + } 1733 + 1734 + lport = hba->ctlr.lp; 1735 + BNX2FC_HBA_DBG(lport, "calling fc_fabric_login\n"); 1736 + 1737 + if (!bnx2fc_link_ok(lport)) { 1738 + BNX2FC_HBA_DBG(lport, "ctlr_link_up\n"); 1739 + fcoe_ctlr_link_up(&hba->ctlr); 1740 + fc_host_port_type(lport->host) = FC_PORTTYPE_NPORT; 1741 + set_bit(ADAPTER_STATE_READY, &hba->adapter_state); 1742 + } 1743 + 1744 + /* wait for the FCF to be selected before issuing FLOGI */ 1745 + while (!hba->ctlr.sel_fcf) { 1746 + msleep(250); 1747 + /* give up after 3 secs */ 1748 + if (++wait_cnt > 12) 1749 + break; 1750 + } 1751 + fc_lport_init(lport); 1752 + fc_fabric_login(lport); 1753 + } 1754 + 1755 + 1756 + /** 1757 + * bnx2fc_ulp_init - Initialize an adapter instance 1758 + * 1759 + * @dev : cnic device handle 1760 + * Called from cnic_register_driver() context to initialize all 1761 + * enumerated cnic devices. This routine allocates adapter structure 1762 + * and other device specific resources. 
1763 + */ 1764 + static void bnx2fc_ulp_init(struct cnic_dev *dev) 1765 + { 1766 + struct bnx2fc_hba *hba; 1767 + int rc = 0; 1768 + 1769 + BNX2FC_MISC_DBG("Entered %s\n", __func__); 1770 + /* bnx2fc works only when bnx2x is loaded */ 1771 + if (!test_bit(CNIC_F_BNX2X_CLASS, &dev->flags)) { 1772 + printk(KERN_ERR PFX "bnx2fc FCoE not supported on %s," 1773 + " flags: %lx\n", 1774 + dev->netdev->name, dev->flags); 1775 + return; 1776 + } 1777 + 1778 + /* Configure FCoE interface */ 1779 + hba = bnx2fc_interface_create(dev); 1780 + if (!hba) { 1781 + printk(KERN_ERR PFX "hba initialization failed\n"); 1782 + return; 1783 + } 1784 + 1785 + /* Add HBA to the adapter list */ 1786 + mutex_lock(&bnx2fc_dev_lock); 1787 + list_add_tail(&hba->link, &adapter_list); 1788 + adapter_count++; 1789 + mutex_unlock(&bnx2fc_dev_lock); 1790 + 1791 + clear_bit(BNX2FC_CNIC_REGISTERED, &hba->reg_with_cnic); 1792 + rc = dev->register_device(dev, CNIC_ULP_FCOE, 1793 + (void *) hba); 1794 + if (rc) 1795 + printk(KERN_ALERT PFX "register_device failed, rc = %d\n", rc); 1796 + else 1797 + set_bit(BNX2FC_CNIC_REGISTERED, &hba->reg_with_cnic); 1798 + } 1799 + 1800 + 1801 + static int bnx2fc_disable(struct net_device *netdev) 1802 + { 1803 + struct bnx2fc_hba *hba; 1804 + struct net_device *phys_dev; 1805 + struct ethtool_drvinfo drvinfo; 1806 + int rc = 0; 1807 + 1808 + if (!rtnl_trylock()) { 1809 + printk(KERN_ERR PFX "retrying for rtnl_lock\n"); 1810 + return -EIO; 1811 + } 1812 + 1813 + mutex_lock(&bnx2fc_dev_lock); 1814 + 1815 + if (THIS_MODULE->state != MODULE_STATE_LIVE) { 1816 + rc = -ENODEV; 1817 + goto nodev; 1818 + } 1819 + 1820 + /* obtain physical netdev */ 1821 + if (netdev->priv_flags & IFF_802_1Q_VLAN) 1822 + phys_dev = vlan_dev_real_dev(netdev); 1823 + else { 1824 + printk(KERN_ERR PFX "Not a vlan device\n"); 1825 + rc = -ENODEV; 1826 + goto nodev; 1827 + } 1828 + 1829 + /* verify if the physical device is a netxtreme2 device */ 1830 + if (phys_dev->ethtool_ops && phys_dev->ethtool_ops->get_drvinfo) { 1831 + memset(&drvinfo, 0, sizeof(drvinfo)); 1832 + phys_dev->ethtool_ops->get_drvinfo(phys_dev, &drvinfo); 1833 + if (strcmp(drvinfo.driver, "bnx2x")) { 1834 + printk(KERN_ERR PFX "Not a netxtreme2 device\n"); 1835 + rc = -ENODEV; 1836 + goto nodev; 1837 + } 1838 + } else { 1839 + printk(KERN_ERR PFX "unable to obtain drv_info\n"); 1840 + rc = -ENODEV; 1841 + goto nodev; 1842 + } 1843 + 1844 + printk(KERN_ERR PFX "phys_dev is netxtreme2 device\n"); 1845 + 1846 + /* obtain hba and initialize rest of the structure */ 1847 + hba = bnx2fc_hba_lookup(phys_dev); 1848 + if (!hba || !hba->ctlr.lp) { 1849 + rc = -ENODEV; 1850 + printk(KERN_ERR PFX "bnx2fc_disable: hba or lport not found\n"); 1851 + } else { 1852 + fcoe_ctlr_link_down(&hba->ctlr); 1853 + fcoe_clean_pending_queue(hba->ctlr.lp); 1854 + } 1855 + 1856 + nodev: 1857 + mutex_unlock(&bnx2fc_dev_lock); 1858 + rtnl_unlock(); 1859 + return rc; 1860 + } 1861 + 1862 + 1863 + static int bnx2fc_enable(struct net_device *netdev) 1864 + { 1865 + struct bnx2fc_hba *hba; 1866 + struct net_device *phys_dev; 1867 + struct ethtool_drvinfo drvinfo; 1868 + int rc = 0; 1869 + 1870 + if (!rtnl_trylock()) { 1871 + printk(KERN_ERR PFX "retrying for rtnl_lock\n"); 1872 + return -EIO; 1873 + } 1874 + 1875 + BNX2FC_MISC_DBG("Entered %s\n", __func__); 1876 + mutex_lock(&bnx2fc_dev_lock); 1877 + 1878 + if (THIS_MODULE->state != MODULE_STATE_LIVE) { 1879 + rc = -ENODEV; 1880 + goto nodev; 1881 + } 1882 + 1883 + /* obtain physical netdev */ 1884 + if (netdev->priv_flags & 
IFF_802_1Q_VLAN) 1885 + phys_dev = vlan_dev_real_dev(netdev); 1886 + else { 1887 + printk(KERN_ERR PFX "Not a vlan device\n"); 1888 + rc = -ENODEV; 1889 + goto nodev; 1890 + } 1891 + /* verify if the physical device is a netxtreme2 device */ 1892 + if (phys_dev->ethtool_ops && phys_dev->ethtool_ops->get_drvinfo) { 1893 + memset(&drvinfo, 0, sizeof(drvinfo)); 1894 + phys_dev->ethtool_ops->get_drvinfo(phys_dev, &drvinfo); 1895 + if (strcmp(drvinfo.driver, "bnx2x")) { 1896 + printk(KERN_ERR PFX "Not a netxtreme2 device\n"); 1897 + rc = -ENODEV; 1898 + goto nodev; 1899 + } 1900 + } else { 1901 + printk(KERN_ERR PFX "unable to obtain drv_info\n"); 1902 + rc = -ENODEV; 1903 + goto nodev; 1904 + } 1905 + 1906 + /* obtain hba and initialize rest of the structure */ 1907 + hba = bnx2fc_hba_lookup(phys_dev); 1908 + if (!hba || !hba->ctlr.lp) { 1909 + rc = -ENODEV; 1910 + printk(KERN_ERR PFX "bnx2fc_enable: hba or lport not found\n"); 1911 + } else if (!bnx2fc_link_ok(hba->ctlr.lp)) 1912 + fcoe_ctlr_link_up(&hba->ctlr); 1913 + 1914 + nodev: 1915 + mutex_unlock(&bnx2fc_dev_lock); 1916 + rtnl_unlock(); 1917 + return rc; 1918 + } 1919 + 1920 + /** 1921 + * bnx2fc_create - Create bnx2fc FCoE interface 1922 + * 1923 + * @buffer: The name of Ethernet interface to create on 1924 + * @kp: The associated kernel param 1925 + * 1926 + * Called from sysfs. 1927 + * 1928 + * Returns: 0 for success 1929 + */ 1930 + static int bnx2fc_create(struct net_device *netdev, enum fip_state fip_mode) 1931 + { 1932 + struct bnx2fc_hba *hba; 1933 + struct net_device *phys_dev; 1934 + struct fc_lport *lport; 1935 + struct ethtool_drvinfo drvinfo; 1936 + int rc = 0; 1937 + int vlan_id; 1938 + 1939 + BNX2FC_MISC_DBG("Entered bnx2fc_create\n"); 1940 + if (fip_mode != FIP_MODE_FABRIC) { 1941 + printk(KERN_ERR "fip mode not FABRIC\n"); 1942 + return -EIO; 1943 + } 1944 + 1945 + if (!rtnl_trylock()) { 1946 + printk(KERN_ERR "trying for rtnl_lock\n"); 1947 + return -EIO; 1948 + } 1949 + mutex_lock(&bnx2fc_dev_lock); 1950 + 1951 + #ifdef CONFIG_SCSI_BNX2X_FCOE_MODULE 1952 + if (THIS_MODULE->state != MODULE_STATE_LIVE) { 1953 + rc = -ENODEV; 1954 + goto mod_err; 1955 + } 1956 + #endif 1957 + 1958 + if (!try_module_get(THIS_MODULE)) { 1959 + rc = -EINVAL; 1960 + goto mod_err; 1961 + } 1962 + 1963 + /* obtain physical netdev */ 1964 + if (netdev->priv_flags & IFF_802_1Q_VLAN) { 1965 + phys_dev = vlan_dev_real_dev(netdev); 1966 + vlan_id = vlan_dev_vlan_id(netdev); 1967 + } else { 1968 + printk(KERN_ERR PFX "Not a vlan device\n"); 1969 + rc = -EINVAL; 1970 + goto netdev_err; 1971 + } 1972 + /* verify if the physical device is a netxtreme2 device */ 1973 + if (phys_dev->ethtool_ops && phys_dev->ethtool_ops->get_drvinfo) { 1974 + memset(&drvinfo, 0, sizeof(drvinfo)); 1975 + phys_dev->ethtool_ops->get_drvinfo(phys_dev, &drvinfo); 1976 + if (strcmp(drvinfo.driver, "bnx2x")) { 1977 + printk(KERN_ERR PFX "Not a netxtreme2 device\n"); 1978 + rc = -EINVAL; 1979 + goto netdev_err; 1980 + } 1981 + } else { 1982 + printk(KERN_ERR PFX "unable to obtain drv_info\n"); 1983 + rc = -EINVAL; 1984 + goto netdev_err; 1985 + } 1986 + 1987 + /* obtain hba and initialize rest of the structure */ 1988 + hba = bnx2fc_hba_lookup(phys_dev); 1989 + if (!hba) { 1990 + rc = -ENODEV; 1991 + printk(KERN_ERR PFX "bnx2fc_create: hba not found\n"); 1992 + goto netdev_err; 1993 + } 1994 + 1995 + if (!test_bit(BNX2FC_FW_INIT_DONE, &hba->init_done)) { 1996 + rc = bnx2fc_fw_init(hba); 1997 + if (rc) 1998 + goto netdev_err; 1999 + } 2000 + 2001 + if 
(test_bit(BNX2FC_CREATE_DONE, &hba->init_done)) { 2002 + rc = -EEXIST; 2003 + goto netdev_err; 2004 + } 2005 + 2006 + /* update netdev with vlan netdev */ 2007 + hba->netdev = netdev; 2008 + hba->vlan_id = vlan_id; 2009 + hba->vlan_enabled = 1; 2010 + 2011 + rc = bnx2fc_interface_setup(hba, fip_mode); 2012 + if (rc) { 2013 + printk(KERN_ERR PFX "bnx2fc_interface_setup failed\n"); 2014 + goto ifput_err; 2015 + } 2016 + 2017 + hba->timer_work_queue = 2018 + create_singlethread_workqueue("bnx2fc_timer_wq"); 2019 + if (!hba->timer_work_queue) { 2020 + printk(KERN_ERR PFX "ulp_init could not create timer_wq\n"); 2021 + rc = -EINVAL; 2022 + goto ifput_err; 2023 + } 2024 + 2025 + lport = bnx2fc_if_create(hba, &hba->pcidev->dev, 0); 2026 + if (!lport) { 2027 + printk(KERN_ERR PFX "Failed to create interface (%s)\n", 2028 + netdev->name); 2029 + bnx2fc_netdev_cleanup(hba); 2030 + rc = -EINVAL; 2031 + goto if_create_err; 2032 + } 2033 + 2034 + lport->boot_time = jiffies; 2035 + 2036 + /* Make this master N_port */ 2037 + hba->ctlr.lp = lport; 2038 + 2039 + set_bit(BNX2FC_CREATE_DONE, &hba->init_done); 2040 + printk(KERN_ERR PFX "create: START DISC\n"); 2041 + bnx2fc_start_disc(hba); 2042 + /* 2043 + * Release from kref_init in bnx2fc_interface_setup, on success 2044 + * lport should be holding a reference taken in bnx2fc_if_create 2045 + */ 2046 + bnx2fc_interface_put(hba); 2047 + /* put netdev that was held while calling dev_get_by_name */ 2048 + mutex_unlock(&bnx2fc_dev_lock); 2049 + rtnl_unlock(); 2050 + return 0; 2051 + 2052 + if_create_err: 2053 + destroy_workqueue(hba->timer_work_queue); 2054 + ifput_err: 2055 + bnx2fc_interface_put(hba); 2056 + netdev_err: 2057 + module_put(THIS_MODULE); 2058 + mod_err: 2059 + mutex_unlock(&bnx2fc_dev_lock); 2060 + rtnl_unlock(); 2061 + return rc; 2062 + } 2063 + 2064 + /** 2065 + * bnx2fc_find_hba_for_cnic - maps cnic instance to bnx2fc adapter instance 2066 + * 2067 + * @cnic: Pointer to cnic device instance 2068 + * 2069 + **/ 2070 + static struct bnx2fc_hba *bnx2fc_find_hba_for_cnic(struct cnic_dev *cnic) 2071 + { 2072 + struct list_head *list; 2073 + struct list_head *temp; 2074 + struct bnx2fc_hba *hba; 2075 + 2076 + /* Called with bnx2fc_dev_lock held */ 2077 + list_for_each_safe(list, temp, &adapter_list) { 2078 + hba = (struct bnx2fc_hba *)list; 2079 + if (hba->cnic == cnic) 2080 + return hba; 2081 + } 2082 + return NULL; 2083 + } 2084 + 2085 + static struct bnx2fc_hba *bnx2fc_hba_lookup(struct net_device *phys_dev) 2086 + { 2087 + struct list_head *list; 2088 + struct list_head *temp; 2089 + struct bnx2fc_hba *hba; 2090 + 2091 + /* Called with bnx2fc_dev_lock held */ 2092 + list_for_each_safe(list, temp, &adapter_list) { 2093 + hba = (struct bnx2fc_hba *)list; 2094 + if (hba->phys_dev == phys_dev) 2095 + return hba; 2096 + } 2097 + printk(KERN_ERR PFX "hba_lookup: hba NULL\n"); 2098 + return NULL; 2099 + } 2100 + 2101 + /** 2102 + * bnx2fc_ulp_exit - shuts down adapter instance and frees all resources 2103 + * 2104 + * @dev cnic device handle 2105 + */ 2106 + static void bnx2fc_ulp_exit(struct cnic_dev *dev) 2107 + { 2108 + struct bnx2fc_hba *hba; 2109 + 2110 + BNX2FC_MISC_DBG("Entered bnx2fc_ulp_exit\n"); 2111 + 2112 + if (!test_bit(CNIC_F_BNX2X_CLASS, &dev->flags)) { 2113 + printk(KERN_ERR PFX "bnx2fc port check: %s, flags: %lx\n", 2114 + dev->netdev->name, dev->flags); 2115 + return; 2116 + } 2117 + 2118 + mutex_lock(&bnx2fc_dev_lock); 2119 + hba = bnx2fc_find_hba_for_cnic(dev); 2120 + if (!hba) { 2121 + printk(KERN_ERR PFX "bnx2fc_ulp_exit: hba 
not found, dev 0x%p\n",
2122 + dev);
2123 + mutex_unlock(&bnx2fc_dev_lock);
2124 + return;
2125 + }
2126 +
2127 + list_del_init(&hba->link);
2128 + adapter_count--;
2129 +
2130 + if (test_bit(BNX2FC_CREATE_DONE, &hba->init_done)) {
2131 + /* destroy not called yet, move to quiesced list */
2132 + bnx2fc_netdev_cleanup(hba);
2133 + bnx2fc_if_destroy(hba->ctlr.lp);
2134 + }
2135 + mutex_unlock(&bnx2fc_dev_lock);
2136 +
2137 + bnx2fc_ulp_stop(hba);
2138 + /* unregister cnic device */
2139 + if (test_and_clear_bit(BNX2FC_CNIC_REGISTERED, &hba->reg_with_cnic))
2140 + hba->cnic->unregister_device(hba->cnic, CNIC_ULP_FCOE);
2141 + bnx2fc_interface_destroy(hba);
2142 + }
2143 +
2144 + /**
2145 + * bnx2fc_fcoe_reset - Resets the fcoe
2146 + *
2147 + * @shost: shost the reset is from
2148 + *
2149 + * Returns: always 0
2150 + */
2151 + static int bnx2fc_fcoe_reset(struct Scsi_Host *shost)
2152 + {
2153 + struct fc_lport *lport = shost_priv(shost);
2154 + fc_lport_reset(lport);
2155 + return 0;
2156 + }
2157 +
2158 +
2159 + static bool bnx2fc_match(struct net_device *netdev)
2160 + {
2161 + mutex_lock(&bnx2fc_dev_lock);
2162 + if (netdev->priv_flags & IFF_802_1Q_VLAN) {
2163 + struct net_device *phys_dev = vlan_dev_real_dev(netdev);
2164 +
2165 + if (bnx2fc_hba_lookup(phys_dev)) {
2166 + mutex_unlock(&bnx2fc_dev_lock);
2167 + return true;
2168 + }
2169 + }
2170 + mutex_unlock(&bnx2fc_dev_lock);
2171 + return false;
2172 + }
2173 +
2174 +
2175 + static struct fcoe_transport bnx2fc_transport = {
2176 + .name = {"bnx2fc"},
2177 + .attached = false,
2178 + .list = LIST_HEAD_INIT(bnx2fc_transport.list),
2179 + .match = bnx2fc_match,
2180 + .create = bnx2fc_create,
2181 + .destroy = bnx2fc_destroy,
2182 + .enable = bnx2fc_enable,
2183 + .disable = bnx2fc_disable,
2184 + };
2185 +
2186 + /**
2187 + * bnx2fc_percpu_thread_create - Create a receive thread for an
2188 + * online CPU
2189 + *
2190 + * @cpu: cpu index for the online cpu
2191 + */
2192 + static void bnx2fc_percpu_thread_create(unsigned int cpu)
2193 + {
2194 + struct bnx2fc_percpu_s *p;
2195 + struct task_struct *thread;
2196 +
2197 + p = &per_cpu(bnx2fc_percpu, cpu);
2198 +
2199 + thread = kthread_create(bnx2fc_percpu_io_thread,
2200 + (void *)p,
2201 + "bnx2fc_thread/%d", cpu);
2202 + /* bind thread to the cpu; test the new thread, not p->iothread */
2203 + if (likely(!IS_ERR(thread))) {
2204 + kthread_bind(thread, cpu);
2205 + p->iothread = thread;
2206 + wake_up_process(thread);
2207 + }
2208 + }
2209 +
2210 + static void bnx2fc_percpu_thread_destroy(unsigned int cpu)
2211 + {
2212 + struct bnx2fc_percpu_s *p;
2213 + struct task_struct *thread;
2214 + struct bnx2fc_work *work, *tmp;
2215 + LIST_HEAD(work_list);
2216 +
2217 + BNX2FC_MISC_DBG("destroying io thread for CPU %d\n", cpu);
2218 +
2219 + /* Prevent any new work from being queued for this CPU */
2220 + p = &per_cpu(bnx2fc_percpu, cpu);
2221 + spin_lock_bh(&p->fp_work_lock);
2222 + thread = p->iothread;
2223 + p->iothread = NULL;
2224 +
+ /* Splice pending work onto the local list before draining it */
+ list_splice_init(&p->work_list, &work_list);
2225 +
2226 + /* Free all work in the list */
2227 + list_for_each_entry_safe(work, tmp, &work_list, list) {
2228 + list_del_init(&work->list);
2229 + bnx2fc_process_cq_compl(work->tgt, work->wqe);
2230 + kfree(work);
2231 + }
2232 +
2233 + spin_unlock_bh(&p->fp_work_lock);
2234 +
2235 + if (thread)
2236 + kthread_stop(thread);
2237 + }
2238 +
2239 + /**
2240 + * bnx2fc_cpu_callback - Handler for CPU hotplug events
2241 + *
2242 + * @nfb: The callback data block
2243 + * @action: The event triggering the callback
2244 + * @hcpu: The index of the CPU that the event is for
2245 + *
2246 +
* This creates or destroys per-CPU data for fcoe 2247 + * 2248 + * Returns NOTIFY_OK always. 2249 + */ 2250 + static int bnx2fc_cpu_callback(struct notifier_block *nfb, 2251 + unsigned long action, void *hcpu) 2252 + { 2253 + unsigned cpu = (unsigned long)hcpu; 2254 + 2255 + switch (action) { 2256 + case CPU_ONLINE: 2257 + case CPU_ONLINE_FROZEN: 2258 + printk(PFX "CPU %x online: Create Rx thread\n", cpu); 2259 + bnx2fc_percpu_thread_create(cpu); 2260 + break; 2261 + case CPU_DEAD: 2262 + case CPU_DEAD_FROZEN: 2263 + printk(PFX "CPU %x offline: Remove Rx thread\n", cpu); 2264 + bnx2fc_percpu_thread_destroy(cpu); 2265 + break; 2266 + default: 2267 + break; 2268 + } 2269 + return NOTIFY_OK; 2270 + } 2271 + 2272 + /** 2273 + * bnx2fc_mod_init - module init entry point 2274 + * 2275 + * Initialize driver wide global data structures, and register 2276 + * with cnic module 2277 + **/ 2278 + static int __init bnx2fc_mod_init(void) 2279 + { 2280 + struct fcoe_percpu_s *bg; 2281 + struct task_struct *l2_thread; 2282 + int rc = 0; 2283 + unsigned int cpu = 0; 2284 + struct bnx2fc_percpu_s *p; 2285 + 2286 + printk(KERN_INFO PFX "%s", version); 2287 + 2288 + /* register as a fcoe transport */ 2289 + rc = fcoe_transport_attach(&bnx2fc_transport); 2290 + if (rc) { 2291 + printk(KERN_ERR "failed to register an fcoe transport, check " 2292 + "if libfcoe is loaded\n"); 2293 + goto out; 2294 + } 2295 + 2296 + INIT_LIST_HEAD(&adapter_list); 2297 + mutex_init(&bnx2fc_dev_lock); 2298 + adapter_count = 0; 2299 + 2300 + /* Attach FC transport template */ 2301 + rc = bnx2fc_attach_transport(); 2302 + if (rc) 2303 + goto detach_ft; 2304 + 2305 + bnx2fc_wq = alloc_workqueue("bnx2fc", 0, 0); 2306 + if (!bnx2fc_wq) { 2307 + rc = -ENOMEM; 2308 + goto release_bt; 2309 + } 2310 + 2311 + bg = &bnx2fc_global; 2312 + skb_queue_head_init(&bg->fcoe_rx_list); 2313 + l2_thread = kthread_create(bnx2fc_l2_rcv_thread, 2314 + (void *)bg, 2315 + "bnx2fc_l2_thread"); 2316 + if (IS_ERR(l2_thread)) { 2317 + rc = PTR_ERR(l2_thread); 2318 + goto free_wq; 2319 + } 2320 + wake_up_process(l2_thread); 2321 + spin_lock_bh(&bg->fcoe_rx_list.lock); 2322 + bg->thread = l2_thread; 2323 + spin_unlock_bh(&bg->fcoe_rx_list.lock); 2324 + 2325 + for_each_possible_cpu(cpu) { 2326 + p = &per_cpu(bnx2fc_percpu, cpu); 2327 + INIT_LIST_HEAD(&p->work_list); 2328 + spin_lock_init(&p->fp_work_lock); 2329 + } 2330 + 2331 + for_each_online_cpu(cpu) { 2332 + bnx2fc_percpu_thread_create(cpu); 2333 + } 2334 + 2335 + /* Initialize per CPU interrupt thread */ 2336 + register_hotcpu_notifier(&bnx2fc_cpu_notifier); 2337 + 2338 + cnic_register_driver(CNIC_ULP_FCOE, &bnx2fc_cnic_cb); 2339 + 2340 + return 0; 2341 + 2342 + free_wq: 2343 + destroy_workqueue(bnx2fc_wq); 2344 + release_bt: 2345 + bnx2fc_release_transport(); 2346 + detach_ft: 2347 + fcoe_transport_detach(&bnx2fc_transport); 2348 + out: 2349 + return rc; 2350 + } 2351 + 2352 + static void __exit bnx2fc_mod_exit(void) 2353 + { 2354 + LIST_HEAD(to_be_deleted); 2355 + struct bnx2fc_hba *hba, *next; 2356 + struct fcoe_percpu_s *bg; 2357 + struct task_struct *l2_thread; 2358 + struct sk_buff *skb; 2359 + unsigned int cpu = 0; 2360 + 2361 + /* 2362 + * NOTE: Since cnic calls register_driver routine rtnl_lock, 2363 + * it will have higher precedence than bnx2fc_dev_lock. 2364 + * unregister_device() cannot be called with bnx2fc_dev_lock 2365 + * held. 
2366 + */ 2367 + mutex_lock(&bnx2fc_dev_lock); 2368 + list_splice(&adapter_list, &to_be_deleted); 2369 + INIT_LIST_HEAD(&adapter_list); 2370 + adapter_count = 0; 2371 + mutex_unlock(&bnx2fc_dev_lock); 2372 + 2373 + /* Unregister with cnic */ 2374 + list_for_each_entry_safe(hba, next, &to_be_deleted, link) { 2375 + list_del_init(&hba->link); 2376 + printk(KERN_ERR PFX "MOD_EXIT:destroy hba = 0x%p, kref = %d\n", 2377 + hba, atomic_read(&hba->kref.refcount)); 2378 + bnx2fc_ulp_stop(hba); 2379 + /* unregister cnic device */ 2380 + if (test_and_clear_bit(BNX2FC_CNIC_REGISTERED, 2381 + &hba->reg_with_cnic)) 2382 + hba->cnic->unregister_device(hba->cnic, CNIC_ULP_FCOE); 2383 + bnx2fc_interface_destroy(hba); 2384 + } 2385 + cnic_unregister_driver(CNIC_ULP_FCOE); 2386 + 2387 + /* Destroy global thread */ 2388 + bg = &bnx2fc_global; 2389 + spin_lock_bh(&bg->fcoe_rx_list.lock); 2390 + l2_thread = bg->thread; 2391 + bg->thread = NULL; 2392 + while ((skb = __skb_dequeue(&bg->fcoe_rx_list)) != NULL) 2393 + kfree_skb(skb); 2394 + 2395 + spin_unlock_bh(&bg->fcoe_rx_list.lock); 2396 + 2397 + if (l2_thread) 2398 + kthread_stop(l2_thread); 2399 + 2400 + unregister_hotcpu_notifier(&bnx2fc_cpu_notifier); 2401 + 2402 + /* Destroy per cpu threads */ 2403 + for_each_online_cpu(cpu) { 2404 + bnx2fc_percpu_thread_destroy(cpu); 2405 + } 2406 + 2407 + destroy_workqueue(bnx2fc_wq); 2408 + /* 2409 + * detach from scsi transport 2410 + * must happen after all destroys are done 2411 + */ 2412 + bnx2fc_release_transport(); 2413 + 2414 + /* detach from fcoe transport */ 2415 + fcoe_transport_detach(&bnx2fc_transport); 2416 + } 2417 + 2418 + module_init(bnx2fc_mod_init); 2419 + module_exit(bnx2fc_mod_exit); 2420 + 2421 + static struct fc_function_template bnx2fc_transport_function = { 2422 + .show_host_node_name = 1, 2423 + .show_host_port_name = 1, 2424 + .show_host_supported_classes = 1, 2425 + .show_host_supported_fc4s = 1, 2426 + .show_host_active_fc4s = 1, 2427 + .show_host_maxframe_size = 1, 2428 + 2429 + .show_host_port_id = 1, 2430 + .show_host_supported_speeds = 1, 2431 + .get_host_speed = fc_get_host_speed, 2432 + .show_host_speed = 1, 2433 + .show_host_port_type = 1, 2434 + .get_host_port_state = fc_get_host_port_state, 2435 + .show_host_port_state = 1, 2436 + .show_host_symbolic_name = 1, 2437 + 2438 + .dd_fcrport_size = (sizeof(struct fc_rport_libfc_priv) + 2439 + sizeof(struct bnx2fc_rport)), 2440 + .show_rport_maxframe_size = 1, 2441 + .show_rport_supported_classes = 1, 2442 + 2443 + .show_host_fabric_name = 1, 2444 + .show_starget_node_name = 1, 2445 + .show_starget_port_name = 1, 2446 + .show_starget_port_id = 1, 2447 + .set_rport_dev_loss_tmo = fc_set_rport_loss_tmo, 2448 + .show_rport_dev_loss_tmo = 1, 2449 + .get_fc_host_stats = bnx2fc_get_host_stats, 2450 + 2451 + .issue_fc_host_lip = bnx2fc_fcoe_reset, 2452 + 2453 + .terminate_rport_io = fc_rport_terminate_io, 2454 + 2455 + .vport_create = bnx2fc_vport_create, 2456 + .vport_delete = bnx2fc_vport_destroy, 2457 + .vport_disable = bnx2fc_vport_disable, 2458 + }; 2459 + 2460 + static struct fc_function_template bnx2fc_vport_xport_function = { 2461 + .show_host_node_name = 1, 2462 + .show_host_port_name = 1, 2463 + .show_host_supported_classes = 1, 2464 + .show_host_supported_fc4s = 1, 2465 + .show_host_active_fc4s = 1, 2466 + .show_host_maxframe_size = 1, 2467 + 2468 + .show_host_port_id = 1, 2469 + .show_host_supported_speeds = 1, 2470 + .get_host_speed = fc_get_host_speed, 2471 + .show_host_speed = 1, 2472 + .show_host_port_type = 1, 2473 + 
.get_host_port_state = fc_get_host_port_state, 2474 + .show_host_port_state = 1, 2475 + .show_host_symbolic_name = 1, 2476 + 2477 + .dd_fcrport_size = (sizeof(struct fc_rport_libfc_priv) + 2478 + sizeof(struct bnx2fc_rport)), 2479 + .show_rport_maxframe_size = 1, 2480 + .show_rport_supported_classes = 1, 2481 + 2482 + .show_host_fabric_name = 1, 2483 + .show_starget_node_name = 1, 2484 + .show_starget_port_name = 1, 2485 + .show_starget_port_id = 1, 2486 + .set_rport_dev_loss_tmo = fc_set_rport_loss_tmo, 2487 + .show_rport_dev_loss_tmo = 1, 2488 + .get_fc_host_stats = fc_get_host_stats, 2489 + .issue_fc_host_lip = bnx2fc_fcoe_reset, 2490 + .terminate_rport_io = fc_rport_terminate_io, 2491 + }; 2492 + 2493 + /** 2494 + * scsi_host_template structure used while registering with SCSI-ml 2495 + */ 2496 + static struct scsi_host_template bnx2fc_shost_template = { 2497 + .module = THIS_MODULE, 2498 + .name = "Broadcom Offload FCoE Initiator", 2499 + .queuecommand = bnx2fc_queuecommand, 2500 + .eh_abort_handler = bnx2fc_eh_abort, /* abts */ 2501 + .eh_device_reset_handler = bnx2fc_eh_device_reset, /* lun reset */ 2502 + .eh_target_reset_handler = bnx2fc_eh_target_reset, /* tgt reset */ 2503 + .eh_host_reset_handler = fc_eh_host_reset, 2504 + .slave_alloc = fc_slave_alloc, 2505 + .change_queue_depth = fc_change_queue_depth, 2506 + .change_queue_type = fc_change_queue_type, 2507 + .this_id = -1, 2508 + .cmd_per_lun = 3, 2509 + .can_queue = (BNX2FC_MAX_OUTSTANDING_CMNDS/2), 2510 + .use_clustering = ENABLE_CLUSTERING, 2511 + .sg_tablesize = BNX2FC_MAX_BDS_PER_CMD, 2512 + .max_sectors = 512, 2513 + }; 2514 + 2515 + static struct libfc_function_template bnx2fc_libfc_fcn_templ = { 2516 + .frame_send = bnx2fc_xmit, 2517 + .elsct_send = bnx2fc_elsct_send, 2518 + .fcp_abort_io = bnx2fc_abort_io, 2519 + .fcp_cleanup = bnx2fc_cleanup, 2520 + .rport_event_callback = bnx2fc_rport_event_handler, 2521 + }; 2522 + 2523 + /** 2524 + * bnx2fc_cnic_cb - global template of bnx2fc - cnic driver interface 2525 + * structure carrying callback function pointers 2526 + */ 2527 + static struct cnic_ulp_ops bnx2fc_cnic_cb = { 2528 + .owner = THIS_MODULE, 2529 + .cnic_init = bnx2fc_ulp_init, 2530 + .cnic_exit = bnx2fc_ulp_exit, 2531 + .cnic_start = bnx2fc_ulp_start, 2532 + .cnic_stop = bnx2fc_ulp_stop, 2533 + .indicate_kcqes = bnx2fc_indicate_kcqe, 2534 + .indicate_netevent = bnx2fc_indicate_netevent, 2535 + };
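
The module wiring above follows a deliberate ordering: bnx2fc_mod_init() first registers an fcoe_transport with libfcoe, attaches the FC transport templates, spins up one bound I/O kthread per online CPU, and only then registers with cnic, so every callback in bnx2fc_cnic_cb can assume the worker threads already exist. The sketch below is a self-contained userspace analogue of that per-CPU worker hand-off (queue under a lock, drain in a dedicated thread, drain leftovers at teardown, as bnx2fc_percpu_thread_destroy() does with p->work_list). All names in it are illustrative; it is not driver code. Build with: cc -pthread percpu_worker.c

/* percpu_worker.c - userspace analogue of the bnx2fc per-CPU I/O
 * thread pattern; illustrative names only. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct work {
	int wqe;                /* stand-in for the driver's work item */
	struct work *next;
};

struct worker {
	pthread_t thread;
	pthread_mutex_t lock;   /* plays the role of fp_work_lock */
	pthread_cond_t cond;
	struct work *head;
	int stopping;
};

static void process(struct work *w)
{
	printf("processed wqe %d\n", w->wqe);
}

static void *worker_fn(void *arg)
{
	struct worker *p = arg;

	pthread_mutex_lock(&p->lock);
	while (!p->stopping) {
		while (p->head) {
			struct work *w = p->head;

			p->head = w->next;
			/* run the handler without the queue lock held */
			pthread_mutex_unlock(&p->lock);
			process(w);
			free(w);
			pthread_mutex_lock(&p->lock);
		}
		pthread_cond_wait(&p->cond, &p->lock);
	}
	pthread_mutex_unlock(&p->lock);
	return NULL;
}

static void queue_work(struct worker *p, int wqe)
{
	struct work *w = malloc(sizeof(*w));

	if (!w)
		return;
	w->wqe = wqe;
	pthread_mutex_lock(&p->lock);
	w->next = p->head;
	p->head = w;
	pthread_cond_signal(&p->cond);
	pthread_mutex_unlock(&p->lock);
}

int main(void)
{
	struct worker p = { .lock = PTHREAD_MUTEX_INITIALIZER,
			    .cond = PTHREAD_COND_INITIALIZER };

	pthread_create(&p.thread, NULL, worker_fn, &p);
	queue_work(&p, 1);
	queue_work(&p, 2);

	pthread_mutex_lock(&p.lock);
	p.stopping = 1;         /* no new work is queued past this point */
	pthread_cond_signal(&p.cond);
	pthread_mutex_unlock(&p.lock);
	pthread_join(p.thread, NULL);

	/* drain anything the worker never got to, the way the
	 * driver's thread-destroy path drains its work list */
	while (p.head) {
		struct work *w = p.head;

		p.head = w->next;
		process(w);
		free(w);
	}
	return 0;
}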
+1868
drivers/scsi/bnx2fc/bnx2fc_hwi.c
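
Throughout the file that follows, 64-bit DMA addresses are handed to the firmware as lo/hi pairs of 32-bit KWQE fields, and MAC addresses are stored low byte first (e.g. src_mac_addr_lo32[0] takes addr[5]). A minimal standalone illustration of the address split, with made-up names, for reference:

#include <stdint.h>
#include <stdio.h>

/* The lo/hi split used for every *_addr_lo / *_addr_hi pair below:
 * the low word is a plain truncation, the high word is the upper
 * 32 bits of the bus address. */
static void split64(uint64_t dma, uint32_t *lo, uint32_t *hi)
{
	*lo = (uint32_t)dma;
	*hi = (uint32_t)(dma >> 32);
}

int main(void)
{
	uint32_t lo, hi;

	split64(0x123456789abcdef0ULL, &lo, &hi);
	printf("lo=0x%08x hi=0x%08x\n", lo, hi);
	/* prints lo=0x9abcdef0 hi=0x12345678 */
	return 0;
}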
··· 1 + /* bnx2fc_hwi.c: Broadcom NetXtreme II Linux FCoE offload driver.
2 + * This file contains the low level functions that interact
3 + * with the 57712 FCoE firmware.
4 + *
5 + * Copyright (c) 2008 - 2010 Broadcom Corporation
6 + *
7 + * This program is free software; you can redistribute it and/or modify
8 + * it under the terms of the GNU General Public License as published by
9 + * the Free Software Foundation.
10 + *
11 + * Written by: Bhanu Prakash Gollapudi (bprakash@broadcom.com)
12 + */
13 +
14 + #include "bnx2fc.h"
15 +
16 + DECLARE_PER_CPU(struct bnx2fc_percpu_s, bnx2fc_percpu);
17 +
18 + static void bnx2fc_fastpath_notification(struct bnx2fc_hba *hba,
19 + struct fcoe_kcqe *new_cqe_kcqe);
20 + static void bnx2fc_process_ofld_cmpl(struct bnx2fc_hba *hba,
21 + struct fcoe_kcqe *ofld_kcqe);
22 + static void bnx2fc_process_enable_conn_cmpl(struct bnx2fc_hba *hba,
23 + struct fcoe_kcqe *ofld_kcqe);
24 + static void bnx2fc_init_failure(struct bnx2fc_hba *hba, u32 err_code);
25 + static void bnx2fc_process_conn_destroy_cmpl(struct bnx2fc_hba *hba,
26 + struct fcoe_kcqe *conn_destroy);
27 +
28 + int bnx2fc_send_stat_req(struct bnx2fc_hba *hba)
29 + {
30 + struct fcoe_kwqe_stat stat_req;
31 + struct kwqe *kwqe_arr[2];
32 + int num_kwqes = 1;
33 + int rc = 0;
34 +
35 + memset(&stat_req, 0x00, sizeof(struct fcoe_kwqe_stat));
36 + stat_req.hdr.op_code = FCOE_KWQE_OPCODE_STAT;
37 + stat_req.hdr.flags =
38 + (FCOE_KWQE_LAYER_CODE << FCOE_KWQE_HEADER_LAYER_CODE_SHIFT);
39 +
40 + stat_req.stat_params_addr_lo = (u32) hba->stats_buf_dma;
41 + stat_req.stat_params_addr_hi = (u32) ((u64)hba->stats_buf_dma >> 32);
42 +
43 + kwqe_arr[0] = (struct kwqe *) &stat_req;
44 +
45 + if (hba->cnic && hba->cnic->submit_kwqes)
46 + rc = hba->cnic->submit_kwqes(hba->cnic, kwqe_arr, num_kwqes);
47 +
48 + return rc;
49 + }
50 +
51 + /**
52 + * bnx2fc_send_fw_fcoe_init_msg - initiates initial handshake with FCoE f/w
53 + *
54 + * @hba: adapter structure pointer
55 + *
56 + * Send down FCoE firmware init KWQEs which initiate the initial handshake
57 + * with the f/w.
58 + * 59 + */ 60 + int bnx2fc_send_fw_fcoe_init_msg(struct bnx2fc_hba *hba) 61 + { 62 + struct fcoe_kwqe_init1 fcoe_init1; 63 + struct fcoe_kwqe_init2 fcoe_init2; 64 + struct fcoe_kwqe_init3 fcoe_init3; 65 + struct kwqe *kwqe_arr[3]; 66 + int num_kwqes = 3; 67 + int rc = 0; 68 + 69 + if (!hba->cnic) { 70 + printk(KERN_ALERT PFX "hba->cnic NULL during fcoe fw init\n"); 71 + return -ENODEV; 72 + } 73 + 74 + /* fill init1 KWQE */ 75 + memset(&fcoe_init1, 0x00, sizeof(struct fcoe_kwqe_init1)); 76 + fcoe_init1.hdr.op_code = FCOE_KWQE_OPCODE_INIT1; 77 + fcoe_init1.hdr.flags = (FCOE_KWQE_LAYER_CODE << 78 + FCOE_KWQE_HEADER_LAYER_CODE_SHIFT); 79 + 80 + fcoe_init1.num_tasks = BNX2FC_MAX_TASKS; 81 + fcoe_init1.sq_num_wqes = BNX2FC_SQ_WQES_MAX; 82 + fcoe_init1.rq_num_wqes = BNX2FC_RQ_WQES_MAX; 83 + fcoe_init1.rq_buffer_log_size = BNX2FC_RQ_BUF_LOG_SZ; 84 + fcoe_init1.cq_num_wqes = BNX2FC_CQ_WQES_MAX; 85 + fcoe_init1.dummy_buffer_addr_lo = (u32) hba->dummy_buf_dma; 86 + fcoe_init1.dummy_buffer_addr_hi = (u32) ((u64)hba->dummy_buf_dma >> 32); 87 + fcoe_init1.task_list_pbl_addr_lo = (u32) hba->task_ctx_bd_dma; 88 + fcoe_init1.task_list_pbl_addr_hi = 89 + (u32) ((u64) hba->task_ctx_bd_dma >> 32); 90 + fcoe_init1.mtu = hba->netdev->mtu; 91 + 92 + fcoe_init1.flags = (PAGE_SHIFT << 93 + FCOE_KWQE_INIT1_LOG_PAGE_SIZE_SHIFT); 94 + 95 + fcoe_init1.num_sessions_log = BNX2FC_NUM_MAX_SESS_LOG; 96 + 97 + /* fill init2 KWQE */ 98 + memset(&fcoe_init2, 0x00, sizeof(struct fcoe_kwqe_init2)); 99 + fcoe_init2.hdr.op_code = FCOE_KWQE_OPCODE_INIT2; 100 + fcoe_init2.hdr.flags = (FCOE_KWQE_LAYER_CODE << 101 + FCOE_KWQE_HEADER_LAYER_CODE_SHIFT); 102 + 103 + fcoe_init2.hash_tbl_pbl_addr_lo = (u32) hba->hash_tbl_pbl_dma; 104 + fcoe_init2.hash_tbl_pbl_addr_hi = (u32) 105 + ((u64) hba->hash_tbl_pbl_dma >> 32); 106 + 107 + fcoe_init2.t2_hash_tbl_addr_lo = (u32) hba->t2_hash_tbl_dma; 108 + fcoe_init2.t2_hash_tbl_addr_hi = (u32) 109 + ((u64) hba->t2_hash_tbl_dma >> 32); 110 + 111 + fcoe_init2.t2_ptr_hash_tbl_addr_lo = (u32) hba->t2_hash_tbl_ptr_dma; 112 + fcoe_init2.t2_ptr_hash_tbl_addr_hi = (u32) 113 + ((u64) hba->t2_hash_tbl_ptr_dma >> 32); 114 + 115 + fcoe_init2.free_list_count = BNX2FC_NUM_MAX_SESS; 116 + 117 + /* fill init3 KWQE */ 118 + memset(&fcoe_init3, 0x00, sizeof(struct fcoe_kwqe_init3)); 119 + fcoe_init3.hdr.op_code = FCOE_KWQE_OPCODE_INIT3; 120 + fcoe_init3.hdr.flags = (FCOE_KWQE_LAYER_CODE << 121 + FCOE_KWQE_HEADER_LAYER_CODE_SHIFT); 122 + fcoe_init3.error_bit_map_lo = 0xffffffff; 123 + fcoe_init3.error_bit_map_hi = 0xffffffff; 124 + 125 + 126 + kwqe_arr[0] = (struct kwqe *) &fcoe_init1; 127 + kwqe_arr[1] = (struct kwqe *) &fcoe_init2; 128 + kwqe_arr[2] = (struct kwqe *) &fcoe_init3; 129 + 130 + if (hba->cnic && hba->cnic->submit_kwqes) 131 + rc = hba->cnic->submit_kwqes(hba->cnic, kwqe_arr, num_kwqes); 132 + 133 + return rc; 134 + } 135 + int bnx2fc_send_fw_fcoe_destroy_msg(struct bnx2fc_hba *hba) 136 + { 137 + struct fcoe_kwqe_destroy fcoe_destroy; 138 + struct kwqe *kwqe_arr[2]; 139 + int num_kwqes = 1; 140 + int rc = -1; 141 + 142 + /* fill destroy KWQE */ 143 + memset(&fcoe_destroy, 0x00, sizeof(struct fcoe_kwqe_destroy)); 144 + fcoe_destroy.hdr.op_code = FCOE_KWQE_OPCODE_DESTROY; 145 + fcoe_destroy.hdr.flags = (FCOE_KWQE_LAYER_CODE << 146 + FCOE_KWQE_HEADER_LAYER_CODE_SHIFT); 147 + kwqe_arr[0] = (struct kwqe *) &fcoe_destroy; 148 + 149 + if (hba->cnic && hba->cnic->submit_kwqes) 150 + rc = hba->cnic->submit_kwqes(hba->cnic, kwqe_arr, num_kwqes); 151 + return rc; 152 + } 153 + 154 + /** 155 + * 
bnx2fc_send_session_ofld_req - initiates FCoE Session offload process 156 + * 157 + * @port: port structure pointer 158 + * @tgt: bnx2fc_rport structure pointer 159 + */ 160 + int bnx2fc_send_session_ofld_req(struct fcoe_port *port, 161 + struct bnx2fc_rport *tgt) 162 + { 163 + struct fc_lport *lport = port->lport; 164 + struct bnx2fc_hba *hba = port->priv; 165 + struct kwqe *kwqe_arr[4]; 166 + struct fcoe_kwqe_conn_offload1 ofld_req1; 167 + struct fcoe_kwqe_conn_offload2 ofld_req2; 168 + struct fcoe_kwqe_conn_offload3 ofld_req3; 169 + struct fcoe_kwqe_conn_offload4 ofld_req4; 170 + struct fc_rport_priv *rdata = tgt->rdata; 171 + struct fc_rport *rport = tgt->rport; 172 + int num_kwqes = 4; 173 + u32 port_id; 174 + int rc = 0; 175 + u16 conn_id; 176 + 177 + /* Initialize offload request 1 structure */ 178 + memset(&ofld_req1, 0x00, sizeof(struct fcoe_kwqe_conn_offload1)); 179 + 180 + ofld_req1.hdr.op_code = FCOE_KWQE_OPCODE_OFFLOAD_CONN1; 181 + ofld_req1.hdr.flags = 182 + (FCOE_KWQE_LAYER_CODE << FCOE_KWQE_HEADER_LAYER_CODE_SHIFT); 183 + 184 + 185 + conn_id = (u16)tgt->fcoe_conn_id; 186 + ofld_req1.fcoe_conn_id = conn_id; 187 + 188 + 189 + ofld_req1.sq_addr_lo = (u32) tgt->sq_dma; 190 + ofld_req1.sq_addr_hi = (u32)((u64) tgt->sq_dma >> 32); 191 + 192 + ofld_req1.rq_pbl_addr_lo = (u32) tgt->rq_pbl_dma; 193 + ofld_req1.rq_pbl_addr_hi = (u32)((u64) tgt->rq_pbl_dma >> 32); 194 + 195 + ofld_req1.rq_first_pbe_addr_lo = (u32) tgt->rq_dma; 196 + ofld_req1.rq_first_pbe_addr_hi = 197 + (u32)((u64) tgt->rq_dma >> 32); 198 + 199 + ofld_req1.rq_prod = 0x8000; 200 + 201 + /* Initialize offload request 2 structure */ 202 + memset(&ofld_req2, 0x00, sizeof(struct fcoe_kwqe_conn_offload2)); 203 + 204 + ofld_req2.hdr.op_code = FCOE_KWQE_OPCODE_OFFLOAD_CONN2; 205 + ofld_req2.hdr.flags = 206 + (FCOE_KWQE_LAYER_CODE << FCOE_KWQE_HEADER_LAYER_CODE_SHIFT); 207 + 208 + ofld_req2.tx_max_fc_pay_len = rdata->maxframe_size; 209 + 210 + ofld_req2.cq_addr_lo = (u32) tgt->cq_dma; 211 + ofld_req2.cq_addr_hi = (u32)((u64)tgt->cq_dma >> 32); 212 + 213 + ofld_req2.xferq_addr_lo = (u32) tgt->xferq_dma; 214 + ofld_req2.xferq_addr_hi = (u32)((u64)tgt->xferq_dma >> 32); 215 + 216 + ofld_req2.conn_db_addr_lo = (u32)tgt->conn_db_dma; 217 + ofld_req2.conn_db_addr_hi = (u32)((u64)tgt->conn_db_dma >> 32); 218 + 219 + /* Initialize offload request 3 structure */ 220 + memset(&ofld_req3, 0x00, sizeof(struct fcoe_kwqe_conn_offload3)); 221 + 222 + ofld_req3.hdr.op_code = FCOE_KWQE_OPCODE_OFFLOAD_CONN3; 223 + ofld_req3.hdr.flags = 224 + (FCOE_KWQE_LAYER_CODE << FCOE_KWQE_HEADER_LAYER_CODE_SHIFT); 225 + 226 + ofld_req3.vlan_tag = hba->vlan_id << 227 + FCOE_KWQE_CONN_OFFLOAD3_VLAN_ID_SHIFT; 228 + ofld_req3.vlan_tag |= 3 << FCOE_KWQE_CONN_OFFLOAD3_PRIORITY_SHIFT; 229 + 230 + port_id = fc_host_port_id(lport->host); 231 + if (port_id == 0) { 232 + BNX2FC_HBA_DBG(lport, "ofld_req: port_id = 0, link down?\n"); 233 + return -EINVAL; 234 + } 235 + 236 + /* 237 + * Store s_id of the initiator for further reference. 
This will
238 + * be used during disable/destroy at linkdown time, since
239 + * when the lport is reset, the port_id is also reset to 0
240 + */
241 + tgt->sid = port_id;
242 + ofld_req3.s_id[0] = (port_id & 0x000000FF);
243 + ofld_req3.s_id[1] = (port_id & 0x0000FF00) >> 8;
244 + ofld_req3.s_id[2] = (port_id & 0x00FF0000) >> 16;
245 +
246 + port_id = rport->port_id;
247 + ofld_req3.d_id[0] = (port_id & 0x000000FF);
248 + ofld_req3.d_id[1] = (port_id & 0x0000FF00) >> 8;
249 + ofld_req3.d_id[2] = (port_id & 0x00FF0000) >> 16;
250 +
251 + ofld_req3.tx_total_conc_seqs = rdata->max_seq;
252 +
253 + ofld_req3.tx_max_conc_seqs_c3 = rdata->max_seq;
254 + ofld_req3.rx_max_fc_pay_len = lport->mfs;
255 +
256 + ofld_req3.rx_total_conc_seqs = BNX2FC_MAX_SEQS;
257 + ofld_req3.rx_max_conc_seqs_c3 = BNX2FC_MAX_SEQS;
258 + ofld_req3.rx_open_seqs_exch_c3 = 1;
259 +
260 + ofld_req3.confq_first_pbe_addr_lo = tgt->confq_dma;
261 + ofld_req3.confq_first_pbe_addr_hi = (u32)((u64) tgt->confq_dma >> 32);
262 +
263 + /* set mul_n_port_ids supported flag to 0, until it is supported */
264 + ofld_req3.flags = 0;
265 + /*
266 + ofld_req3.flags |= (((lport->send_sp_features & FC_SP_FT_MNA) ? 1:0) <<
267 + FCOE_KWQE_CONN_OFFLOAD3_B_MUL_N_PORT_IDS_SHIFT);
268 + */
269 + /* Info from PLOGI response */
270 + ofld_req3.flags |= (((rdata->sp_features & FC_SP_FT_EDTR) ? 1 : 0) <<
271 + FCOE_KWQE_CONN_OFFLOAD3_B_E_D_TOV_RES_SHIFT);
272 +
273 + ofld_req3.flags |= (((rdata->sp_features & FC_SP_FT_SEQC) ? 1 : 0) <<
274 + FCOE_KWQE_CONN_OFFLOAD3_B_CONT_INCR_SEQ_CNT_SHIFT);
275 +
276 + /* vlan flag */
277 + ofld_req3.flags |= (hba->vlan_enabled <<
278 + FCOE_KWQE_CONN_OFFLOAD3_B_VLAN_FLAG_SHIFT);
279 +
280 + /* C2_VALID and ACK flags are not set as they are not supported */
281 +
282 +
283 + /* Initialize offload request 4 structure */
284 + memset(&ofld_req4, 0x00, sizeof(struct fcoe_kwqe_conn_offload4));
285 + ofld_req4.hdr.op_code = FCOE_KWQE_OPCODE_OFFLOAD_CONN4;
286 + ofld_req4.hdr.flags =
287 + (FCOE_KWQE_LAYER_CODE << FCOE_KWQE_HEADER_LAYER_CODE_SHIFT);
288 +
289 + ofld_req4.e_d_tov_timer_val = lport->e_d_tov / 20;
290 +
291 +
292 + ofld_req4.src_mac_addr_lo32[0] = port->data_src_addr[5];
293 + /* local mac */
294 + ofld_req4.src_mac_addr_lo32[1] = port->data_src_addr[4];
295 + ofld_req4.src_mac_addr_lo32[2] = port->data_src_addr[3];
296 + ofld_req4.src_mac_addr_lo32[3] = port->data_src_addr[2];
297 + ofld_req4.src_mac_addr_hi16[0] = port->data_src_addr[1];
298 + ofld_req4.src_mac_addr_hi16[1] = port->data_src_addr[0];
299 + ofld_req4.dst_mac_addr_lo32[0] = hba->ctlr.dest_addr[5];/* fcf mac */
300 + ofld_req4.dst_mac_addr_lo32[1] = hba->ctlr.dest_addr[4];
301 + ofld_req4.dst_mac_addr_lo32[2] = hba->ctlr.dest_addr[3];
302 + ofld_req4.dst_mac_addr_lo32[3] = hba->ctlr.dest_addr[2];
303 + ofld_req4.dst_mac_addr_hi16[0] = hba->ctlr.dest_addr[1];
304 + ofld_req4.dst_mac_addr_hi16[1] = hba->ctlr.dest_addr[0];
305 +
306 + ofld_req4.lcq_addr_lo = (u32) tgt->lcq_dma;
307 + ofld_req4.lcq_addr_hi = (u32)((u64) tgt->lcq_dma >> 32);
308 +
309 + ofld_req4.confq_pbl_base_addr_lo = (u32) tgt->confq_pbl_dma;
310 + ofld_req4.confq_pbl_base_addr_hi =
311 + (u32)((u64) tgt->confq_pbl_dma >> 32);
312 +
313 + kwqe_arr[0] = (struct kwqe *) &ofld_req1;
314 + kwqe_arr[1] = (struct kwqe *) &ofld_req2;
315 + kwqe_arr[2] = (struct kwqe *) &ofld_req3;
316 + kwqe_arr[3] = (struct kwqe *) &ofld_req4;
317 +
318 + if (hba->cnic && hba->cnic->submit_kwqes)
319 + rc = hba->cnic->submit_kwqes(hba->cnic, kwqe_arr, num_kwqes);
320 +
321 + return rc;
322
+ } 323 + 324 + /** 325 + * bnx2fc_send_session_enable_req - initiates FCoE Session enablement 326 + * 327 + * @port: port structure pointer 328 + * @tgt: bnx2fc_rport structure pointer 329 + */ 330 + static int bnx2fc_send_session_enable_req(struct fcoe_port *port, 331 + struct bnx2fc_rport *tgt) 332 + { 333 + struct kwqe *kwqe_arr[2]; 334 + struct bnx2fc_hba *hba = port->priv; 335 + struct fcoe_kwqe_conn_enable_disable enbl_req; 336 + struct fc_lport *lport = port->lport; 337 + struct fc_rport *rport = tgt->rport; 338 + int num_kwqes = 1; 339 + int rc = 0; 340 + u32 port_id; 341 + 342 + memset(&enbl_req, 0x00, 343 + sizeof(struct fcoe_kwqe_conn_enable_disable)); 344 + enbl_req.hdr.op_code = FCOE_KWQE_OPCODE_ENABLE_CONN; 345 + enbl_req.hdr.flags = 346 + (FCOE_KWQE_LAYER_CODE << FCOE_KWQE_HEADER_LAYER_CODE_SHIFT); 347 + 348 + enbl_req.src_mac_addr_lo32[0] = port->data_src_addr[5]; 349 + /* local mac */ 350 + enbl_req.src_mac_addr_lo32[1] = port->data_src_addr[4]; 351 + enbl_req.src_mac_addr_lo32[2] = port->data_src_addr[3]; 352 + enbl_req.src_mac_addr_lo32[3] = port->data_src_addr[2]; 353 + enbl_req.src_mac_addr_hi16[0] = port->data_src_addr[1]; 354 + enbl_req.src_mac_addr_hi16[1] = port->data_src_addr[0]; 355 + 356 + enbl_req.dst_mac_addr_lo32[0] = hba->ctlr.dest_addr[5];/* fcf mac */ 357 + enbl_req.dst_mac_addr_lo32[1] = hba->ctlr.dest_addr[4]; 358 + enbl_req.dst_mac_addr_lo32[2] = hba->ctlr.dest_addr[3]; 359 + enbl_req.dst_mac_addr_lo32[3] = hba->ctlr.dest_addr[2]; 360 + enbl_req.dst_mac_addr_hi16[0] = hba->ctlr.dest_addr[1]; 361 + enbl_req.dst_mac_addr_hi16[1] = hba->ctlr.dest_addr[0]; 362 + 363 + port_id = fc_host_port_id(lport->host); 364 + if (port_id != tgt->sid) { 365 + printk(KERN_ERR PFX "WARN: enable_req port_id = 0x%x," 366 + "sid = 0x%x\n", port_id, tgt->sid); 367 + port_id = tgt->sid; 368 + } 369 + enbl_req.s_id[0] = (port_id & 0x000000FF); 370 + enbl_req.s_id[1] = (port_id & 0x0000FF00) >> 8; 371 + enbl_req.s_id[2] = (port_id & 0x00FF0000) >> 16; 372 + 373 + port_id = rport->port_id; 374 + enbl_req.d_id[0] = (port_id & 0x000000FF); 375 + enbl_req.d_id[1] = (port_id & 0x0000FF00) >> 8; 376 + enbl_req.d_id[2] = (port_id & 0x00FF0000) >> 16; 377 + enbl_req.vlan_tag = hba->vlan_id << 378 + FCOE_KWQE_CONN_ENABLE_DISABLE_VLAN_ID_SHIFT; 379 + enbl_req.vlan_tag |= 3 << FCOE_KWQE_CONN_ENABLE_DISABLE_PRIORITY_SHIFT; 380 + enbl_req.vlan_flag = hba->vlan_enabled; 381 + enbl_req.context_id = tgt->context_id; 382 + enbl_req.conn_id = tgt->fcoe_conn_id; 383 + 384 + kwqe_arr[0] = (struct kwqe *) &enbl_req; 385 + 386 + if (hba->cnic && hba->cnic->submit_kwqes) 387 + rc = hba->cnic->submit_kwqes(hba->cnic, kwqe_arr, num_kwqes); 388 + return rc; 389 + } 390 + 391 + /** 392 + * bnx2fc_send_session_disable_req - initiates FCoE Session disable 393 + * 394 + * @port: port structure pointer 395 + * @tgt: bnx2fc_rport structure pointer 396 + */ 397 + int bnx2fc_send_session_disable_req(struct fcoe_port *port, 398 + struct bnx2fc_rport *tgt) 399 + { 400 + struct bnx2fc_hba *hba = port->priv; 401 + struct fcoe_kwqe_conn_enable_disable disable_req; 402 + struct kwqe *kwqe_arr[2]; 403 + struct fc_rport *rport = tgt->rport; 404 + int num_kwqes = 1; 405 + int rc = 0; 406 + u32 port_id; 407 + 408 + memset(&disable_req, 0x00, 409 + sizeof(struct fcoe_kwqe_conn_enable_disable)); 410 + disable_req.hdr.op_code = FCOE_KWQE_OPCODE_DISABLE_CONN; 411 + disable_req.hdr.flags = 412 + (FCOE_KWQE_LAYER_CODE << FCOE_KWQE_HEADER_LAYER_CODE_SHIFT); 413 + 414 + disable_req.src_mac_addr_lo32[0] = port->data_src_addr[5]; 
+ disable_req.src_mac_addr_lo32[1] = port->data_src_addr[4]; /* byte 4 of the local MAC; fills the gap in the lo32[0..3] sequence, mirroring the enable path above */
415 + disable_req.src_mac_addr_lo32[2] = port->data_src_addr[3]; 416 + disable_req.src_mac_addr_lo32[3] = port->data_src_addr[2]; 417 + disable_req.src_mac_addr_hi16[0] = port->data_src_addr[1]; 418 + disable_req.src_mac_addr_hi16[1] = port->data_src_addr[0]; 419 + 420 + disable_req.dst_mac_addr_lo32[0] = hba->ctlr.dest_addr[5];/* fcf mac */ 421 + disable_req.dst_mac_addr_lo32[1] = hba->ctlr.dest_addr[4]; 422 + disable_req.dst_mac_addr_lo32[2] = hba->ctlr.dest_addr[3]; 423 + disable_req.dst_mac_addr_lo32[3] = hba->ctlr.dest_addr[2]; 424 + disable_req.dst_mac_addr_hi16[0] = hba->ctlr.dest_addr[1]; 425 + disable_req.dst_mac_addr_hi16[1] = hba->ctlr.dest_addr[0]; 426 + 427 + port_id = tgt->sid; 428 + disable_req.s_id[0] = (port_id & 0x000000FF); 429 + disable_req.s_id[1] = (port_id & 0x0000FF00) >> 8; 430 + disable_req.s_id[2] = (port_id & 0x00FF0000) >> 16; 431 + 432 + 433 + port_id = rport->port_id; 434 + disable_req.d_id[0] = (port_id & 0x000000FF); 435 + disable_req.d_id[1] = (port_id & 0x0000FF00) >> 8; 436 + disable_req.d_id[2] = (port_id & 0x00FF0000) >> 16; 437 + disable_req.context_id = tgt->context_id; 438 + disable_req.conn_id = tgt->fcoe_conn_id; 439 + disable_req.vlan_tag = hba->vlan_id << 440 + FCOE_KWQE_CONN_ENABLE_DISABLE_VLAN_ID_SHIFT; 441 + disable_req.vlan_tag |= 442 + 3 << FCOE_KWQE_CONN_ENABLE_DISABLE_PRIORITY_SHIFT; 443 + disable_req.vlan_flag = hba->vlan_enabled; 444 + 445 + kwqe_arr[0] = (struct kwqe *) &disable_req; 446 + 447 + if (hba->cnic && hba->cnic->submit_kwqes) 448 + rc = hba->cnic->submit_kwqes(hba->cnic, kwqe_arr, num_kwqes); 449 + 450 + return rc; 451 + } 452 + 453 + /** 454 + * bnx2fc_send_session_destroy_req - initiates FCoE Session destroy 455 + * 456 + * @port: port structure pointer 457 + * @tgt: bnx2fc_rport structure pointer 458 + */ 459 + int bnx2fc_send_session_destroy_req(struct bnx2fc_hba *hba, 460 + struct bnx2fc_rport *tgt) 461 + { 462 + struct fcoe_kwqe_conn_destroy destroy_req; 463 + struct kwqe *kwqe_arr[2]; 464 + int num_kwqes = 1; 465 + int rc = 0; 466 + 467 + memset(&destroy_req, 0x00, sizeof(struct fcoe_kwqe_conn_destroy)); 468 + destroy_req.hdr.op_code = FCOE_KWQE_OPCODE_DESTROY_CONN; 469 + destroy_req.hdr.flags = 470 + (FCOE_KWQE_LAYER_CODE << FCOE_KWQE_HEADER_LAYER_CODE_SHIFT); 471 + 472 + destroy_req.context_id = tgt->context_id; 473 + destroy_req.conn_id = tgt->fcoe_conn_id; 474 + 475 + kwqe_arr[0] = (struct kwqe *) &destroy_req; 476 + 477 + if (hba->cnic && hba->cnic->submit_kwqes) 478 + rc = hba->cnic->submit_kwqes(hba->cnic, kwqe_arr, num_kwqes); 479 + 480 + return rc; 481 + } 482 + 483 + static void bnx2fc_unsol_els_work(struct work_struct *work) 484 + { 485 + struct bnx2fc_unsol_els *unsol_els; 486 + struct fc_lport *lport; 487 + struct fc_frame *fp; 488 + 489 + unsol_els = container_of(work, struct bnx2fc_unsol_els, unsol_els_work); 490 + lport = unsol_els->lport; 491 + fp = unsol_els->fp; 492 + fc_exch_recv(lport, fp); 493 + kfree(unsol_els); 494 + } 495 + 496 + void bnx2fc_process_l2_frame_compl(struct bnx2fc_rport *tgt, 497 + unsigned char *buf, 498 + u32 frame_len, u16 l2_oxid) 499 + { 500 + struct fcoe_port *port = tgt->port; 501 + struct fc_lport *lport = port->lport; 502 + struct bnx2fc_unsol_els *unsol_els; 503 + struct fc_frame_header *fh; 504 + struct fc_frame *fp; 505 + struct sk_buff *skb; 506 + u32 payload_len; 507 + u32 crc; 508 + u8 op; 509 + 510 + 511 + unsol_els = kzalloc(sizeof(*unsol_els), GFP_ATOMIC); 512 + if (!unsol_els) { 513 + BNX2FC_TGT_DBG(tgt, "Unable to allocate unsol_work\n"); 514 + return; 515 + } 
516 + 517 + BNX2FC_TGT_DBG(tgt, "l2_frame_compl l2_oxid = 0x%x, frame_len = %d\n", 518 + l2_oxid, frame_len); 519 + 520 + payload_len = frame_len - sizeof(struct fc_frame_header); 521 + 522 + fp = fc_frame_alloc(lport, payload_len); 523 + if (!fp) { 524 + printk(KERN_ERR PFX "fc_frame_alloc failure\n"); 525 + return; 526 + } 527 + 528 + fh = (struct fc_frame_header *) fc_frame_header_get(fp); 529 + /* Copy FC Frame header and payload into the frame */ 530 + memcpy(fh, buf, frame_len); 531 + 532 + if (l2_oxid != FC_XID_UNKNOWN) 533 + fh->fh_ox_id = htons(l2_oxid); 534 + 535 + skb = fp_skb(fp); 536 + 537 + if ((fh->fh_r_ctl == FC_RCTL_ELS_REQ) || 538 + (fh->fh_r_ctl == FC_RCTL_ELS_REP)) { 539 + 540 + if (fh->fh_type == FC_TYPE_ELS) { 541 + op = fc_frame_payload_op(fp); 542 + if ((op == ELS_TEST) || (op == ELS_ESTC) || 543 + (op == ELS_FAN) || (op == ELS_CSU)) { 544 + /* 545 + * No need to reply for these 546 + * ELS requests 547 + */ 548 + printk(KERN_ERR PFX "dropping ELS 0x%x\n", op); 549 + kfree_skb(skb); 550 + return; 551 + } 552 + } 553 + crc = fcoe_fc_crc(fp); 554 + fc_frame_init(fp); 555 + fr_dev(fp) = lport; 556 + fr_sof(fp) = FC_SOF_I3; 557 + fr_eof(fp) = FC_EOF_T; 558 + fr_crc(fp) = cpu_to_le32(~crc); 559 + unsol_els->lport = lport; 560 + unsol_els->fp = fp; 561 + INIT_WORK(&unsol_els->unsol_els_work, bnx2fc_unsol_els_work); 562 + queue_work(bnx2fc_wq, &unsol_els->unsol_els_work); 563 + } else { 564 + BNX2FC_HBA_DBG(lport, "fh_r_ctl = 0x%x\n", fh->fh_r_ctl); 565 + kfree_skb(skb); 566 + } 567 + } 568 + 569 + static void bnx2fc_process_unsol_compl(struct bnx2fc_rport *tgt, u16 wqe) 570 + { 571 + u8 num_rq; 572 + struct fcoe_err_report_entry *err_entry; 573 + unsigned char *rq_data; 574 + unsigned char *buf = NULL, *buf1; 575 + int i; 576 + u16 xid; 577 + u32 frame_len, len; 578 + struct bnx2fc_cmd *io_req = NULL; 579 + struct fcoe_task_ctx_entry *task, *task_page; 580 + struct bnx2fc_hba *hba = tgt->port->priv; 581 + int task_idx, index; 582 + int rc = 0; 583 + 584 + 585 + BNX2FC_TGT_DBG(tgt, "Entered UNSOL COMPLETION wqe = 0x%x\n", wqe); 586 + switch (wqe & FCOE_UNSOLICITED_CQE_SUBTYPE) { 587 + case FCOE_UNSOLICITED_FRAME_CQE_TYPE: 588 + frame_len = (wqe & FCOE_UNSOLICITED_CQE_PKT_LEN) >> 589 + FCOE_UNSOLICITED_CQE_PKT_LEN_SHIFT; 590 + 591 + num_rq = (frame_len + BNX2FC_RQ_BUF_SZ - 1) / BNX2FC_RQ_BUF_SZ; 592 + 593 + rq_data = (unsigned char *)bnx2fc_get_next_rqe(tgt, num_rq); 594 + if (rq_data) { 595 + buf = rq_data; 596 + } else { 597 + buf1 = buf = kmalloc((num_rq * BNX2FC_RQ_BUF_SZ), 598 + GFP_ATOMIC); 599 + 600 + if (!buf1) { 601 + BNX2FC_TGT_DBG(tgt, "Memory alloc failure\n"); 602 + break; 603 + } 604 + 605 + for (i = 0; i < num_rq; i++) { 606 + rq_data = (unsigned char *) 607 + bnx2fc_get_next_rqe(tgt, 1); 608 + len = BNX2FC_RQ_BUF_SZ; 609 + memcpy(buf1, rq_data, len); 610 + buf1 += len; 611 + } 612 + } 613 + bnx2fc_process_l2_frame_compl(tgt, buf, frame_len, 614 + FC_XID_UNKNOWN); 615 + 616 + if (buf != rq_data) 617 + kfree(buf); 618 + bnx2fc_return_rqe(tgt, num_rq); 619 + break; 620 + 621 + case FCOE_ERROR_DETECTION_CQE_TYPE: 622 + /* 623 + * In case of an error reporting CQE, a single RQ entry 624 + * is consumed.
625 + */ 626 + spin_lock_bh(&tgt->tgt_lock); 627 + num_rq = 1; 628 + err_entry = (struct fcoe_err_report_entry *) 629 + bnx2fc_get_next_rqe(tgt, 1); 630 + xid = err_entry->fc_hdr.ox_id; 631 + BNX2FC_TGT_DBG(tgt, "Unsol Error Frame OX_ID = 0x%x\n", xid); 632 + BNX2FC_TGT_DBG(tgt, "err_warn_bitmap = %08x:%08x\n", 633 + err_entry->err_warn_bitmap_hi, 634 + err_entry->err_warn_bitmap_lo); 635 + BNX2FC_TGT_DBG(tgt, "buf_offsets - tx = 0x%x, rx = 0x%x\n", 636 + err_entry->tx_buf_off, err_entry->rx_buf_off); 637 + 638 + bnx2fc_return_rqe(tgt, 1); 639 + 640 + if (xid > BNX2FC_MAX_XID) { 641 + BNX2FC_TGT_DBG(tgt, "xid(0x%x) out of FW range\n", 642 + xid); 643 + spin_unlock_bh(&tgt->tgt_lock); 644 + break; 645 + } 646 + 647 + task_idx = xid / BNX2FC_TASKS_PER_PAGE; 648 + index = xid % BNX2FC_TASKS_PER_PAGE; 649 + task_page = (struct fcoe_task_ctx_entry *) 650 + hba->task_ctx[task_idx]; 651 + task = &(task_page[index]); 652 + 653 + io_req = (struct bnx2fc_cmd *)hba->cmd_mgr->cmds[xid]; 654 + if (!io_req) { 655 + spin_unlock_bh(&tgt->tgt_lock); 656 + break; 657 + } 658 + 659 + if (io_req->cmd_type != BNX2FC_SCSI_CMD) { 660 + printk(KERN_ERR PFX "err_warn: Not a SCSI cmd\n"); 661 + spin_unlock_bh(&tgt->tgt_lock); 662 + break; 663 + } 664 + 665 + if (test_and_clear_bit(BNX2FC_FLAG_IO_CLEANUP, 666 + &io_req->req_flags)) { 667 + BNX2FC_IO_DBG(io_req, "unsol_err: cleanup in " 668 + "progress.. ignore unsol err\n"); 669 + spin_unlock_bh(&tgt->tgt_lock); 670 + break; 671 + } 672 + 673 + /* 674 + * If ABTS is already in progress, and FW error is 675 + * received after that, do not cancel the timeout_work 676 + * and let the error recovery continue by explicitly 677 + * logging out the target, when the ABTS eventually 678 + * times out. 679 + */ 680 + if (!test_and_set_bit(BNX2FC_FLAG_ISSUE_ABTS, 681 + &io_req->req_flags)) { 682 + /* 683 + * Cancel the timeout_work, as we received IO 684 + * completion with FW error. 685 + */ 686 + if (cancel_delayed_work(&io_req->timeout_work)) 687 + kref_put(&io_req->refcount, 688 + bnx2fc_cmd_release); /* timer hold */ 689 + 690 + rc = bnx2fc_initiate_abts(io_req); 691 + if (rc != SUCCESS) { 692 + BNX2FC_IO_DBG(io_req, "err_warn: initiate_abts " 693 + "failed. issue cleanup\n"); 694 + rc = bnx2fc_initiate_cleanup(io_req); 695 + BUG_ON(rc); 696 + } 697 + } else 698 + printk(KERN_ERR PFX "err_warn: io_req (0x%x) already " 699 + "in ABTS processing\n", xid); 700 + spin_unlock_bh(&tgt->tgt_lock); 701 + break; 702 + 703 + case FCOE_WARNING_DETECTION_CQE_TYPE: 704 + /* 705 + * In case of a warning reporting CQE, a single RQ entry 706 + * is consumed.
707 + */ 708 + num_rq = 1; 709 + err_entry = (struct fcoe_err_report_entry *) 710 + bnx2fc_get_next_rqe(tgt, 1); 711 + xid = cpu_to_be16(err_entry->fc_hdr.ox_id); 712 + BNX2FC_TGT_DBG(tgt, "Unsol Warning Frame OX_ID = 0x%x\n", xid); 713 + BNX2FC_TGT_DBG(tgt, "err_warn_bitmap = %08x:%08x", 714 + err_entry->err_warn_bitmap_hi, 715 + err_entry->err_warn_bitmap_lo); 716 + BNX2FC_TGT_DBG(tgt, "buf_offsets - tx = 0x%x, rx = 0x%x", 717 + err_entry->tx_buf_off, err_entry->rx_buf_off); 718 + 719 + bnx2fc_return_rqe(tgt, 1); 720 + break; 721 + 722 + default: 723 + printk(KERN_ERR PFX "Unsol Compl: Invalid CQE Subtype\n"); 724 + break; 725 + } 726 + } 727 + 728 + void bnx2fc_process_cq_compl(struct bnx2fc_rport *tgt, u16 wqe) 729 + { 730 + struct fcoe_task_ctx_entry *task; 731 + struct fcoe_task_ctx_entry *task_page; 732 + struct fcoe_port *port = tgt->port; 733 + struct bnx2fc_hba *hba = port->priv; 734 + struct bnx2fc_cmd *io_req; 735 + int task_idx, index; 736 + u16 xid; 737 + u8 cmd_type; 738 + u8 rx_state = 0; 739 + u8 num_rq; 740 + 741 + spin_lock_bh(&tgt->tgt_lock); 742 + xid = wqe & FCOE_PEND_WQ_CQE_TASK_ID; 743 + if (xid >= BNX2FC_MAX_TASKS) { 744 + printk(KERN_ALERT PFX "ERROR:xid out of range\n"); 745 + spin_unlock_bh(&tgt->tgt_lock); 746 + return; 747 + } 748 + task_idx = xid / BNX2FC_TASKS_PER_PAGE; 749 + index = xid % BNX2FC_TASKS_PER_PAGE; 750 + task_page = (struct fcoe_task_ctx_entry *)hba->task_ctx[task_idx]; 751 + task = &(task_page[index]); 752 + 753 + num_rq = ((task->rx_wr_tx_rd.rx_flags & 754 + FCOE_TASK_CTX_ENTRY_RXWR_TXRD_NUM_RQ_WQE) >> 755 + FCOE_TASK_CTX_ENTRY_RXWR_TXRD_NUM_RQ_WQE_SHIFT); 756 + 757 + io_req = (struct bnx2fc_cmd *)hba->cmd_mgr->cmds[xid]; 758 + 759 + if (io_req == NULL) { 760 + printk(KERN_ERR PFX "ERROR? cq_compl - io_req is NULL\n"); 761 + spin_unlock_bh(&tgt->tgt_lock); 762 + return; 763 + } 764 + 765 + /* Timestamp IO completion time */ 766 + cmd_type = io_req->cmd_type; 767 + 768 + /* optimized completion path */ 769 + if (cmd_type == BNX2FC_SCSI_CMD) { 770 + rx_state = ((task->rx_wr_tx_rd.rx_flags & 771 + FCOE_TASK_CTX_ENTRY_RXWR_TXRD_RX_STATE) >> 772 + FCOE_TASK_CTX_ENTRY_RXWR_TXRD_RX_STATE_SHIFT); 773 + 774 + if (rx_state == FCOE_TASK_RX_STATE_COMPLETED) { 775 + bnx2fc_process_scsi_cmd_compl(io_req, task, num_rq); 776 + spin_unlock_bh(&tgt->tgt_lock); 777 + return; 778 + } 779 + } 780 + 781 + /* Process other IO completion types */ 782 + switch (cmd_type) { 783 + case BNX2FC_SCSI_CMD: 784 + if (rx_state == FCOE_TASK_RX_STATE_ABTS_COMPLETED) 785 + bnx2fc_process_abts_compl(io_req, task, num_rq); 786 + else if (rx_state == 787 + FCOE_TASK_RX_STATE_EXCHANGE_CLEANUP_COMPLETED) 788 + bnx2fc_process_cleanup_compl(io_req, task, num_rq); 789 + else 790 + printk(KERN_ERR PFX "Invalid rx state - %d\n", 791 + rx_state); 792 + break; 793 + 794 + case BNX2FC_TASK_MGMT_CMD: 795 + BNX2FC_IO_DBG(io_req, "Processing TM complete\n"); 796 + bnx2fc_process_tm_compl(io_req, task, num_rq); 797 + break; 798 + 799 + case BNX2FC_ABTS: 800 + /* 801 + * ABTS request received by firmware. 
ABTS response 802 + * will be delivered to the task belonging to the IO 803 + * that was aborted 804 + */ 805 + BNX2FC_IO_DBG(io_req, "cq_compl- ABTS sent out by fw\n"); 806 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 807 + break; 808 + 809 + case BNX2FC_ELS: 810 + BNX2FC_IO_DBG(io_req, "cq_compl - call process_els_compl\n"); 811 + bnx2fc_process_els_compl(io_req, task, num_rq); 812 + break; 813 + 814 + case BNX2FC_CLEANUP: 815 + BNX2FC_IO_DBG(io_req, "cq_compl- cleanup resp rcvd\n"); 816 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 817 + break; 818 + 819 + default: 820 + printk(KERN_ERR PFX "Invalid cmd_type %d\n", cmd_type); 821 + break; 822 + } 823 + spin_unlock_bh(&tgt->tgt_lock); 824 + } 825 + 826 + struct bnx2fc_work *bnx2fc_alloc_work(struct bnx2fc_rport *tgt, u16 wqe) 827 + { 828 + struct bnx2fc_work *work; 829 + work = kzalloc(sizeof(struct bnx2fc_work), GFP_ATOMIC); 830 + if (!work) 831 + return NULL; 832 + 833 + INIT_LIST_HEAD(&work->list); 834 + work->tgt = tgt; 835 + work->wqe = wqe; 836 + return work; 837 + } 838 + 839 + int bnx2fc_process_new_cqes(struct bnx2fc_rport *tgt) 840 + { 841 + struct fcoe_cqe *cq; 842 + u32 cq_cons; 843 + struct fcoe_cqe *cqe; 844 + u16 wqe; 845 + bool more_cqes_found = false; 846 + 847 + /* 848 + * cq_lock is a low contention lock used to protect 849 + * the CQ data structure from being freed up during 850 + * the upload operation 851 + */ 852 + spin_lock_bh(&tgt->cq_lock); 853 + 854 + if (!tgt->cq) { 855 + printk(KERN_ERR PFX "process_new_cqes: cq is NULL\n"); 856 + spin_unlock_bh(&tgt->cq_lock); 857 + return 0; 858 + } 859 + cq = tgt->cq; 860 + cq_cons = tgt->cq_cons_idx; 861 + cqe = &cq[cq_cons]; 862 + 863 + do { 864 + more_cqes_found ^= true; 865 + 866 + while (((wqe = cqe->wqe) & FCOE_CQE_TOGGLE_BIT) == 867 + (tgt->cq_curr_toggle_bit << 868 + FCOE_CQE_TOGGLE_BIT_SHIFT)) { 869 + 870 + /* new entry on the cq */ 871 + if (wqe & FCOE_CQE_CQE_TYPE) { 872 + /* Unsolicited event notification */ 873 + bnx2fc_process_unsol_compl(tgt, wqe); 874 + } else { 875 + struct bnx2fc_work *work = NULL; 876 + struct bnx2fc_percpu_s *fps = NULL; 877 + unsigned int cpu = wqe % num_possible_cpus(); 878 + 879 + fps = &per_cpu(bnx2fc_percpu, cpu); 880 + spin_lock_bh(&fps->fp_work_lock); 881 + if (unlikely(!fps->iothread)) 882 + goto unlock; 883 + 884 + work = bnx2fc_alloc_work(tgt, wqe); 885 + if (work) 886 + list_add_tail(&work->list, 887 + &fps->work_list); 888 + unlock: 889 + spin_unlock_bh(&fps->fp_work_lock); 890 + 891 + /* Pending work request completion */ 892 + if (fps->iothread && work) 893 + wake_up_process(fps->iothread); 894 + else 895 + bnx2fc_process_cq_compl(tgt, wqe); 896 + } 897 + cqe++; 898 + tgt->cq_cons_idx++; 899 + 900 + if (tgt->cq_cons_idx == BNX2FC_CQ_WQES_MAX) { 901 + tgt->cq_cons_idx = 0; 902 + cqe = cq; 903 + tgt->cq_curr_toggle_bit = 904 + 1 - tgt->cq_curr_toggle_bit; 905 + } 906 + } 907 + /* Re-arm CQ */ 908 + if (more_cqes_found) { 909 + tgt->conn_db->cq_arm.lo = -1; 910 + wmb(); 911 + } 912 + } while (more_cqes_found); 913 + 914 + /* 915 + * Commit tgt->cq_cons_idx change to the memory 916 + * spin_lock implies full memory barrier, no need to smp_wmb 917 + */ 918 + 919 + spin_unlock_bh(&tgt->cq_lock); 920 + return 0; 921 + } 922 + 923 + /** 924 + * bnx2fc_fastpath_notification - process global event queue (KCQ) 925 + * 926 + * @hba: adapter structure pointer 927 + * @new_cqe_kcqe: pointer to newly DMA'd KCQ entry 928 + * 929 + * Fast path event notification handler 930 + */ 931 + static void 
bnx2fc_fastpath_notification(struct bnx2fc_hba *hba, 932 + struct fcoe_kcqe *new_cqe_kcqe) 933 + { 934 + u32 conn_id = new_cqe_kcqe->fcoe_conn_id; 935 + struct bnx2fc_rport *tgt = hba->tgt_ofld_list[conn_id]; 936 + 937 + if (!tgt) { 938 + printk(KERN_ALERT PFX "conn_id 0x%x not valid\n", conn_id); 939 + return; 940 + } 941 + 942 + bnx2fc_process_new_cqes(tgt); 943 + } 944 + 945 + /** 946 + * bnx2fc_process_ofld_cmpl - process FCoE session offload completion 947 + * 948 + * @hba: adapter structure pointer 949 + * @ofld_kcqe: connection offload kcqe pointer 950 + * 951 + * handle session offload completion, enable the session if offload is 952 + * successful. 953 + */ 954 + static void bnx2fc_process_ofld_cmpl(struct bnx2fc_hba *hba, 955 + struct fcoe_kcqe *ofld_kcqe) 956 + { 957 + struct bnx2fc_rport *tgt; 958 + struct fcoe_port *port; 959 + u32 conn_id; 960 + u32 context_id; 961 + int rc; 962 + 963 + conn_id = ofld_kcqe->fcoe_conn_id; 964 + context_id = ofld_kcqe->fcoe_conn_context_id; 965 + tgt = hba->tgt_ofld_list[conn_id]; 966 + if (!tgt) { 967 + printk(KERN_ALERT PFX "ERROR:ofld_cmpl: No pending ofld req\n"); 968 + return; 969 + } 970 + BNX2FC_TGT_DBG(tgt, "Entered ofld compl - context_id = 0x%x\n", 971 + ofld_kcqe->fcoe_conn_context_id); 972 + port = tgt->port; 973 + if (hba != tgt->port->priv) { 974 + printk(KERN_ALERT PFX "ERROR:ofld_cmpl: HBA mis-match\n"); 975 + goto ofld_cmpl_err; 976 + } 977 + /* 978 + * cnic has allocated a context_id for this session; use this 979 + * while enabling the session. 980 + */ 981 + tgt->context_id = context_id; 982 + if (ofld_kcqe->completion_status) { 983 + if (ofld_kcqe->completion_status == 984 + FCOE_KCQE_COMPLETION_STATUS_CTX_ALLOC_FAILURE) { 985 + printk(KERN_ERR PFX "unable to allocate FCoE context " 986 + "resources\n"); 987 + set_bit(BNX2FC_FLAG_CTX_ALLOC_FAILURE, &tgt->flags); 988 + } 989 + goto ofld_cmpl_err; 990 + } else { 991 + 992 + /* now enable the session */ 993 + rc = bnx2fc_send_session_enable_req(port, tgt); 994 + if (rc) { 995 + printk(KERN_ALERT PFX "enable session failed\n"); 996 + goto ofld_cmpl_err; 997 + } 998 + } 999 + return; 1000 + ofld_cmpl_err: 1001 + set_bit(BNX2FC_FLAG_OFLD_REQ_CMPL, &tgt->flags); 1002 + wake_up_interruptible(&tgt->ofld_wait); 1003 + } 1004 + 1005 + /** 1006 + * bnx2fc_process_enable_conn_cmpl - process FCoE session enable completion 1007 + * 1008 + * @hba: adapter structure pointer 1009 + * @ofld_kcqe: connection offload kcqe pointer 1010 + * 1011 + * handle session enable completion, mark the rport as ready 1012 + */ 1013 + 1014 + static void bnx2fc_process_enable_conn_cmpl(struct bnx2fc_hba *hba, 1015 + struct fcoe_kcqe *ofld_kcqe) 1016 + { 1017 + struct bnx2fc_rport *tgt; 1018 + u32 conn_id; 1019 + u32 context_id; 1020 + 1021 + context_id = ofld_kcqe->fcoe_conn_context_id; 1022 + conn_id = ofld_kcqe->fcoe_conn_id; 1023 + tgt = hba->tgt_ofld_list[conn_id]; 1024 + if (!tgt) { 1025 + printk(KERN_ALERT PFX "ERROR:enbl_cmpl: No pending ofld req\n"); 1026 + return; 1027 + } 1028 + 1029 + BNX2FC_TGT_DBG(tgt, "Enable compl - context_id = 0x%x\n", 1030 + ofld_kcqe->fcoe_conn_context_id); 1031 + 1032 + /* 1033 + * context_id should be the same for this target during offload 1034 + * and enable 1035 + */ 1036 + if (tgt->context_id != context_id) { 1037 + printk(KERN_ALERT PFX "context id mis-match\n"); 1038 + return; 1039 + } 1040 + if (hba != tgt->port->priv) { 1041 + printk(KERN_ALERT PFX "bnx2fc-enbl_cmpl: HBA mis-match\n"); 1042 + goto enbl_cmpl_err; 1043 + } 1044 + if (ofld_kcqe->completion_status) { 
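/*
 * A non-zero completion status means the firmware rejected the enable
 * request; jump to enbl_cmpl_err, which wakes the offload waiter
 * without BNX2FC_FLAG_OFFLOADED having been set.
 */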
1045 + goto enbl_cmpl_err; 1046 + } else { 1047 + /* enable successful - rport ready for issuing IOs */ 1048 + set_bit(BNX2FC_FLAG_OFFLOADED, &tgt->flags); 1049 + set_bit(BNX2FC_FLAG_OFLD_REQ_CMPL, &tgt->flags); 1050 + wake_up_interruptible(&tgt->ofld_wait); 1051 + } 1052 + return; 1053 + 1054 + enbl_cmpl_err: 1055 + set_bit(BNX2FC_FLAG_OFLD_REQ_CMPL, &tgt->flags); 1056 + wake_up_interruptible(&tgt->ofld_wait); 1057 + } 1058 + 1059 + static void bnx2fc_process_conn_disable_cmpl(struct bnx2fc_hba *hba, 1060 + struct fcoe_kcqe *disable_kcqe) 1061 + { 1062 + 1063 + struct bnx2fc_rport *tgt; 1064 + u32 conn_id; 1065 + 1066 + conn_id = disable_kcqe->fcoe_conn_id; 1067 + tgt = hba->tgt_ofld_list[conn_id]; 1068 + if (!tgt) { 1069 + printk(KERN_ALERT PFX "ERROR: disable_cmpl: No disable req\n"); 1070 + return; 1071 + } 1072 + 1073 + BNX2FC_TGT_DBG(tgt, "disable_cmpl: conn_id %d\n", conn_id); 1074 + 1075 + if (disable_kcqe->completion_status) { 1076 + printk(KERN_ALERT PFX "ERROR: Disable failed with cmpl status %d\n", 1077 + disable_kcqe->completion_status); 1078 + return; 1079 + } else { 1080 + /* disable successful */ 1081 + BNX2FC_TGT_DBG(tgt, "disable successful\n"); 1082 + clear_bit(BNX2FC_FLAG_OFFLOADED, &tgt->flags); 1083 + set_bit(BNX2FC_FLAG_DISABLED, &tgt->flags); 1084 + set_bit(BNX2FC_FLAG_UPLD_REQ_COMPL, &tgt->flags); 1085 + wake_up_interruptible(&tgt->upld_wait); 1086 + } 1087 + } 1088 + 1089 + static void bnx2fc_process_conn_destroy_cmpl(struct bnx2fc_hba *hba, 1090 + struct fcoe_kcqe *destroy_kcqe) 1091 + { 1092 + struct bnx2fc_rport *tgt; 1093 + u32 conn_id; 1094 + 1095 + conn_id = destroy_kcqe->fcoe_conn_id; 1096 + tgt = hba->tgt_ofld_list[conn_id]; 1097 + if (!tgt) { 1098 + printk(KERN_ALERT PFX "destroy_cmpl: No destroy req\n"); 1099 + return; 1100 + } 1101 + 1102 + BNX2FC_TGT_DBG(tgt, "destroy_cmpl: conn_id %d\n", conn_id); 1103 + 1104 + if (destroy_kcqe->completion_status) { 1105 + printk(KERN_ALERT PFX "Destroy conn failed, cmpl status %d\n", 1106 + destroy_kcqe->completion_status); 1107 + return; 1108 + } else { 1109 + /* destroy successful */ 1110 + BNX2FC_TGT_DBG(tgt, "upload successful\n"); 1111 + clear_bit(BNX2FC_FLAG_DISABLED, &tgt->flags); 1112 + set_bit(BNX2FC_FLAG_DESTROYED, &tgt->flags); 1113 + set_bit(BNX2FC_FLAG_UPLD_REQ_COMPL, &tgt->flags); 1114 + wake_up_interruptible(&tgt->upld_wait); 1115 + } 1116 + } 1117 + 1118 + static void bnx2fc_init_failure(struct bnx2fc_hba *hba, u32 err_code) 1119 + { 1120 + switch (err_code) { 1121 + case FCOE_KCQE_COMPLETION_STATUS_INVALID_OPCODE: 1122 + printk(KERN_ERR PFX "init_failure due to invalid opcode\n"); 1123 + break; 1124 + 1125 + case FCOE_KCQE_COMPLETION_STATUS_CTX_ALLOC_FAILURE: 1126 + printk(KERN_ERR PFX "init failed due to ctx alloc failure\n"); 1127 + break; 1128 + 1129 + case FCOE_KCQE_COMPLETION_STATUS_NIC_ERROR: 1130 + printk(KERN_ERR PFX "init_failure due to NIC error\n"); 1131 + break; 1132 + 1133 + default: 1134 + printk(KERN_ERR PFX "Unknown Error code %d\n", err_code); 1135 + } 1136 + } 1137 + 1138 + /** 1139 + * bnx2fc_indicate_kcqe - process KCQE 1140 + * 1141 + * @context: adapter structure pointer 1142 + * @kcq: array of KCQE pointers 1143 + * @num_cqe: Number of completion queue elements 1144 + * 1145 + * Generic KCQ event handler 1146 + */ 1147 + void bnx2fc_indicate_kcqe(void *context, struct kcqe *kcq[], 1148 + u32 num_cqe) 1149 + { 1150 + struct bnx2fc_hba *hba = (struct bnx2fc_hba *)context; 1151 + int i = 0; 1152 + struct fcoe_kcqe *kcqe = NULL; 1153 + 1154 + while (i < num_cqe) { 1155 + kcqe = (struct
fcoe_kcqe *) kcq[i++]; 1156 + 1157 + switch (kcqe->op_code) { 1158 + case FCOE_KCQE_OPCODE_CQ_EVENT_NOTIFICATION: 1159 + bnx2fc_fastpath_notification(hba, kcqe); 1160 + break; 1161 + 1162 + case FCOE_KCQE_OPCODE_OFFLOAD_CONN: 1163 + bnx2fc_process_ofld_cmpl(hba, kcqe); 1164 + break; 1165 + 1166 + case FCOE_KCQE_OPCODE_ENABLE_CONN: 1167 + bnx2fc_process_enable_conn_cmpl(hba, kcqe); 1168 + break; 1169 + 1170 + case FCOE_KCQE_OPCODE_INIT_FUNC: 1171 + if (kcqe->completion_status != 1172 + FCOE_KCQE_COMPLETION_STATUS_SUCCESS) { 1173 + bnx2fc_init_failure(hba, 1174 + kcqe->completion_status); 1175 + } else { 1176 + set_bit(ADAPTER_STATE_UP, &hba->adapter_state); 1177 + bnx2fc_get_link_state(hba); 1178 + printk(KERN_INFO PFX "[%.2x]: FCOE_INIT passed\n", 1179 + (u8)hba->pcidev->bus->number); 1180 + } 1181 + break; 1182 + 1183 + case FCOE_KCQE_OPCODE_DESTROY_FUNC: 1184 + if (kcqe->completion_status != 1185 + FCOE_KCQE_COMPLETION_STATUS_SUCCESS) { 1186 + 1187 + printk(KERN_ERR PFX "DESTROY failed\n"); 1188 + } else { 1189 + printk(KERN_ERR PFX "DESTROY success\n"); 1190 + } 1191 + hba->flags |= BNX2FC_FLAG_DESTROY_CMPL; 1192 + wake_up_interruptible(&hba->destroy_wait); 1193 + break; 1194 + 1195 + case FCOE_KCQE_OPCODE_DISABLE_CONN: 1196 + bnx2fc_process_conn_disable_cmpl(hba, kcqe); 1197 + break; 1198 + 1199 + case FCOE_KCQE_OPCODE_DESTROY_CONN: 1200 + bnx2fc_process_conn_destroy_cmpl(hba, kcqe); 1201 + break; 1202 + 1203 + case FCOE_KCQE_OPCODE_STAT_FUNC: 1204 + if (kcqe->completion_status != 1205 + FCOE_KCQE_COMPLETION_STATUS_SUCCESS) 1206 + printk(KERN_ERR PFX "STAT failed\n"); 1207 + complete(&hba->stat_req_done); 1208 + break; 1209 + 1210 + case FCOE_KCQE_OPCODE_FCOE_ERROR: 1211 + /* fall thru */ 1212 + default: 1213 + printk(KERN_ALERT PFX "unknown opcode 0x%x\n", 1214 + kcqe->op_code); 1215 + } 1216 + } 1217 + } 1218 + 1219 + void bnx2fc_add_2_sq(struct bnx2fc_rport *tgt, u16 xid) 1220 + { 1221 + struct fcoe_sqe *sqe; 1222 + 1223 + sqe = &tgt->sq[tgt->sq_prod_idx]; 1224 + 1225 + /* Fill SQ WQE */ 1226 + sqe->wqe = xid << FCOE_SQE_TASK_ID_SHIFT; 1227 + sqe->wqe |= tgt->sq_curr_toggle_bit << FCOE_SQE_TOGGLE_BIT_SHIFT; 1228 + 1229 + /* Advance SQ Prod Idx */ 1230 + if (++tgt->sq_prod_idx == BNX2FC_SQ_WQES_MAX) { 1231 + tgt->sq_prod_idx = 0; 1232 + tgt->sq_curr_toggle_bit = 1 - tgt->sq_curr_toggle_bit; 1233 + } 1234 + } 1235 + 1236 + void bnx2fc_ring_doorbell(struct bnx2fc_rport *tgt) 1237 + { 1238 + struct b577xx_doorbell_set_prod ev_doorbell; 1239 + u32 msg; 1240 + 1241 + wmb(); 1242 + 1243 + memset(&ev_doorbell, 0, sizeof(struct b577xx_doorbell_set_prod)); 1244 + ev_doorbell.header.header = B577XX_DOORBELL_HDR_DB_TYPE; 1245 + 1246 + ev_doorbell.prod = tgt->sq_prod_idx | 1247 + (tgt->sq_curr_toggle_bit << 15); 1248 + ev_doorbell.header.header |= B577XX_FCOE_CONNECTION_TYPE << 1249 + B577XX_DOORBELL_HDR_CONN_TYPE_SHIFT; 1250 + msg = *((u32 *)&ev_doorbell); 1251 + writel(cpu_to_le32(msg), tgt->ctx_base); 1252 + 1253 + mmiowb(); 1254 + 1255 + } 1256 + 1257 + int bnx2fc_map_doorbell(struct bnx2fc_rport *tgt) 1258 + { 1259 + u32 context_id = tgt->context_id; 1260 + struct fcoe_port *port = tgt->port; 1261 + u32 reg_off; 1262 + resource_size_t reg_base; 1263 + struct bnx2fc_hba *hba = port->priv; 1264 + 1265 + reg_base = pci_resource_start(hba->pcidev, 1266 + BNX2X_DOORBELL_PCI_BAR); 1267 + reg_off = BNX2FC_5771X_DB_PAGE_SIZE * 1268 + (context_id & 0x1FFFF) + DPM_TRIGER_TYPE; 1269 + tgt->ctx_base = ioremap_nocache(reg_base + reg_off, 4); 1270 + if (!tgt->ctx_base) 1271 + return -ENOMEM; 1272 + 
return 0; 1273 + } 1274 + 1275 + char *bnx2fc_get_next_rqe(struct bnx2fc_rport *tgt, u8 num_items) 1276 + { 1277 + char *buf = (char *)tgt->rq + (tgt->rq_cons_idx * BNX2FC_RQ_BUF_SZ); 1278 + 1279 + if (tgt->rq_cons_idx + num_items > BNX2FC_RQ_WQES_MAX) 1280 + return NULL; 1281 + 1282 + tgt->rq_cons_idx += num_items; 1283 + 1284 + if (tgt->rq_cons_idx >= BNX2FC_RQ_WQES_MAX) 1285 + tgt->rq_cons_idx -= BNX2FC_RQ_WQES_MAX; 1286 + 1287 + return buf; 1288 + } 1289 + 1290 + void bnx2fc_return_rqe(struct bnx2fc_rport *tgt, u8 num_items) 1291 + { 1292 + /* return the rq buffer */ 1293 + u32 next_prod_idx = tgt->rq_prod_idx + num_items; 1294 + if ((next_prod_idx & 0x7fff) == BNX2FC_RQ_WQES_MAX) { 1295 + /* Wrap around RQ */ 1296 + next_prod_idx += 0x8000 - BNX2FC_RQ_WQES_MAX; 1297 + } 1298 + tgt->rq_prod_idx = next_prod_idx; 1299 + tgt->conn_db->rq_prod = tgt->rq_prod_idx; 1300 + } 1301 + 1302 + void bnx2fc_init_cleanup_task(struct bnx2fc_cmd *io_req, 1303 + struct fcoe_task_ctx_entry *task, 1304 + u16 orig_xid) 1305 + { 1306 + u8 task_type = FCOE_TASK_TYPE_EXCHANGE_CLEANUP; 1307 + struct bnx2fc_rport *tgt = io_req->tgt; 1308 + u32 context_id = tgt->context_id; 1309 + 1310 + memset(task, 0, sizeof(struct fcoe_task_ctx_entry)); 1311 + 1312 + /* Tx Write Rx Read */ 1313 + task->tx_wr_rx_rd.tx_flags = FCOE_TASK_TX_STATE_EXCHANGE_CLEANUP << 1314 + FCOE_TASK_CTX_ENTRY_TXWR_RXRD_TX_STATE_SHIFT; 1315 + task->tx_wr_rx_rd.init_flags = task_type << 1316 + FCOE_TASK_CTX_ENTRY_TXWR_RXRD_TASK_TYPE_SHIFT; 1317 + task->tx_wr_rx_rd.init_flags |= FCOE_TASK_CLASS_TYPE_3 << 1318 + FCOE_TASK_CTX_ENTRY_TXWR_RXRD_CLASS_TYPE_SHIFT; 1319 + /* Common */ 1320 + task->cmn.common_flags = context_id << 1321 + FCOE_TASK_CTX_ENTRY_TX_RX_CMN_CID_SHIFT; 1322 + task->cmn.general.cleanup_info.task_id = orig_xid; 1323 + 1324 + 1325 + } 1326 + 1327 + void bnx2fc_init_mp_task(struct bnx2fc_cmd *io_req, 1328 + struct fcoe_task_ctx_entry *task) 1329 + { 1330 + struct bnx2fc_mp_req *mp_req = &(io_req->mp_req); 1331 + struct bnx2fc_rport *tgt = io_req->tgt; 1332 + struct fc_frame_header *fc_hdr; 1333 + u8 task_type = 0; 1334 + u64 *hdr; 1335 + u64 temp_hdr[3]; 1336 + u32 context_id; 1337 + 1338 + 1339 + /* Obtain task_type */ 1340 + if ((io_req->cmd_type == BNX2FC_TASK_MGMT_CMD) || 1341 + (io_req->cmd_type == BNX2FC_ELS)) { 1342 + task_type = FCOE_TASK_TYPE_MIDPATH; 1343 + } else if (io_req->cmd_type == BNX2FC_ABTS) { 1344 + task_type = FCOE_TASK_TYPE_ABTS; 1345 + } 1346 + 1347 + memset(task, 0, sizeof(struct fcoe_task_ctx_entry)); 1348 + 1349 + /* Setup the task from io_req for easy reference */ 1350 + io_req->task = task; 1351 + 1352 + BNX2FC_IO_DBG(io_req, "Init MP task for cmd_type = %d task_type = %d\n", 1353 + io_req->cmd_type, task_type); 1354 + 1355 + /* Tx only */ 1356 + if ((task_type == FCOE_TASK_TYPE_MIDPATH) || 1357 + (task_type == FCOE_TASK_TYPE_UNSOLICITED)) { 1358 + task->tx_wr_only.sgl_ctx.mul_sges.cur_sge_addr.lo = 1359 + (u32)mp_req->mp_req_bd_dma; 1360 + task->tx_wr_only.sgl_ctx.mul_sges.cur_sge_addr.hi = 1361 + (u32)((u64)mp_req->mp_req_bd_dma >> 32); 1362 + task->tx_wr_only.sgl_ctx.mul_sges.sgl_size = 1; 1363 + BNX2FC_IO_DBG(io_req, "init_mp_task - bd_dma = 0x%llx\n", 1364 + (unsigned long long)mp_req->mp_req_bd_dma); 1365 + } 1366 + 1367 + /* Tx Write Rx Read */ 1368 + task->tx_wr_rx_rd.tx_flags = FCOE_TASK_TX_STATE_INIT << 1369 + FCOE_TASK_CTX_ENTRY_TXWR_RXRD_TX_STATE_SHIFT; 1370 + task->tx_wr_rx_rd.init_flags = task_type << 1371 + FCOE_TASK_CTX_ENTRY_TXWR_RXRD_TASK_TYPE_SHIFT; 1372 + task->tx_wr_rx_rd.init_flags |= 
FCOE_TASK_DEV_TYPE_DISK << 1373 + FCOE_TASK_CTX_ENTRY_TXWR_RXRD_DEV_TYPE_SHIFT; 1374 + task->tx_wr_rx_rd.init_flags |= FCOE_TASK_CLASS_TYPE_3 << 1375 + FCOE_TASK_CTX_ENTRY_TXWR_RXRD_CLASS_TYPE_SHIFT; 1376 + 1377 + /* Common */ 1378 + task->cmn.data_2_trns = io_req->data_xfer_len; 1379 + context_id = tgt->context_id; 1380 + task->cmn.common_flags = context_id << 1381 + FCOE_TASK_CTX_ENTRY_TX_RX_CMN_CID_SHIFT; 1382 + task->cmn.common_flags |= 1 << 1383 + FCOE_TASK_CTX_ENTRY_TX_RX_CMN_VALID_SHIFT; 1384 + task->cmn.common_flags |= 1 << 1385 + FCOE_TASK_CTX_ENTRY_TX_RX_CMN_EXP_FIRST_FRAME_SHIFT; 1386 + 1387 + /* Rx Write Tx Read */ 1388 + fc_hdr = &(mp_req->req_fc_hdr); 1389 + if (task_type == FCOE_TASK_TYPE_MIDPATH) { 1390 + fc_hdr->fh_ox_id = cpu_to_be16(io_req->xid); 1391 + fc_hdr->fh_rx_id = htons(0xffff); 1392 + task->rx_wr_tx_rd.rx_id = 0xffff; 1393 + } else if (task_type == FCOE_TASK_TYPE_UNSOLICITED) { 1394 + fc_hdr->fh_rx_id = cpu_to_be16(io_req->xid); 1395 + } 1396 + 1397 + /* Fill FC Header into middle path buffer */ 1398 + hdr = (u64 *) &task->cmn.general.cmd_info.mp_fc_frame.fc_hdr; 1399 + memcpy(temp_hdr, fc_hdr, sizeof(temp_hdr)); 1400 + hdr[0] = cpu_to_be64(temp_hdr[0]); 1401 + hdr[1] = cpu_to_be64(temp_hdr[1]); 1402 + hdr[2] = cpu_to_be64(temp_hdr[2]); 1403 + 1404 + /* Rx Only */ 1405 + if (task_type == FCOE_TASK_TYPE_MIDPATH) { 1406 + 1407 + task->rx_wr_only.sgl_ctx.mul_sges.cur_sge_addr.lo = 1408 + (u32)mp_req->mp_resp_bd_dma; 1409 + task->rx_wr_only.sgl_ctx.mul_sges.cur_sge_addr.hi = 1410 + (u32)((u64)mp_req->mp_resp_bd_dma >> 32); 1411 + task->rx_wr_only.sgl_ctx.mul_sges.sgl_size = 1; 1412 + } 1413 + } 1414 + 1415 + void bnx2fc_init_task(struct bnx2fc_cmd *io_req, 1416 + struct fcoe_task_ctx_entry *task) 1417 + { 1418 + u8 task_type; 1419 + struct scsi_cmnd *sc_cmd = io_req->sc_cmd; 1420 + struct io_bdt *bd_tbl = io_req->bd_tbl; 1421 + struct bnx2fc_rport *tgt = io_req->tgt; 1422 + u64 *fcp_cmnd; 1423 + u64 tmp_fcp_cmnd[4]; 1424 + u32 context_id; 1425 + int cnt, i; 1426 + int bd_count; 1427 + 1428 + memset(task, 0, sizeof(struct fcoe_task_ctx_entry)); 1429 + 1430 + /* Setup the task from io_req for easy reference */ 1431 + io_req->task = task; 1432 + 1433 + if (sc_cmd->sc_data_direction == DMA_TO_DEVICE) 1434 + task_type = FCOE_TASK_TYPE_WRITE; 1435 + else 1436 + task_type = FCOE_TASK_TYPE_READ; 1437 + 1438 + /* Tx only */ 1439 + if (task_type == FCOE_TASK_TYPE_WRITE) { 1440 + task->tx_wr_only.sgl_ctx.mul_sges.cur_sge_addr.lo = 1441 + (u32)bd_tbl->bd_tbl_dma; 1442 + task->tx_wr_only.sgl_ctx.mul_sges.cur_sge_addr.hi = 1443 + (u32)((u64)bd_tbl->bd_tbl_dma >> 32); 1444 + task->tx_wr_only.sgl_ctx.mul_sges.sgl_size = 1445 + bd_tbl->bd_valid; 1446 + } 1447 + 1448 + /*Tx Write Rx Read */ 1449 + /* Init state to NORMAL */ 1450 + task->tx_wr_rx_rd.tx_flags = FCOE_TASK_TX_STATE_NORMAL << 1451 + FCOE_TASK_CTX_ENTRY_TXWR_RXRD_TX_STATE_SHIFT; 1452 + task->tx_wr_rx_rd.init_flags = task_type << 1453 + FCOE_TASK_CTX_ENTRY_TXWR_RXRD_TASK_TYPE_SHIFT; 1454 + task->tx_wr_rx_rd.init_flags |= FCOE_TASK_DEV_TYPE_DISK << 1455 + FCOE_TASK_CTX_ENTRY_TXWR_RXRD_DEV_TYPE_SHIFT; 1456 + task->tx_wr_rx_rd.init_flags |= FCOE_TASK_CLASS_TYPE_3 << 1457 + FCOE_TASK_CTX_ENTRY_TXWR_RXRD_CLASS_TYPE_SHIFT; 1458 + 1459 + /* Common */ 1460 + task->cmn.data_2_trns = io_req->data_xfer_len; 1461 + context_id = tgt->context_id; 1462 + task->cmn.common_flags = context_id << 1463 + FCOE_TASK_CTX_ENTRY_TX_RX_CMN_CID_SHIFT; 1464 + task->cmn.common_flags |= 1 << 1465 + FCOE_TASK_CTX_ENTRY_TX_RX_CMN_VALID_SHIFT; 1466 + 
task->cmn.common_flags |= 1 << 1467 + FCOE_TASK_CTX_ENTRY_TX_RX_CMN_EXP_FIRST_FRAME_SHIFT; 1468 + 1469 + /* Set initiative ownership */ 1470 + task->cmn.common_flags |= FCOE_TASK_CTX_ENTRY_TX_RX_CMN_SEQ_INIT; 1471 + 1472 + /* Set initial seq counter */ 1473 + task->cmn.tx_low_seq_cnt = 1; 1474 + 1475 + /* Set state to "waiting for the first packet" */ 1476 + task->cmn.common_flags |= FCOE_TASK_CTX_ENTRY_TX_RX_CMN_EXP_FIRST_FRAME; 1477 + 1478 + /* Fill FCP_CMND IU */ 1479 + fcp_cmnd = (u64 *) 1480 + task->cmn.general.cmd_info.fcp_cmd_payload.opaque; 1481 + bnx2fc_build_fcp_cmnd(io_req, (struct fcp_cmnd *)&tmp_fcp_cmnd); 1482 + 1483 + /* swap fcp_cmnd */ 1484 + cnt = sizeof(struct fcp_cmnd) / sizeof(u64); 1485 + 1486 + for (i = 0; i < cnt; i++) { 1487 + *fcp_cmnd = cpu_to_be64(tmp_fcp_cmnd[i]); 1488 + fcp_cmnd++; 1489 + } 1490 + 1491 + /* Rx Write Tx Read */ 1492 + task->rx_wr_tx_rd.rx_id = 0xffff; 1493 + 1494 + /* Rx Only */ 1495 + if (task_type == FCOE_TASK_TYPE_READ) { 1496 + 1497 + bd_count = bd_tbl->bd_valid; 1498 + if (bd_count == 1) { 1499 + 1500 + struct fcoe_bd_ctx *fcoe_bd_tbl = bd_tbl->bd_tbl; 1501 + 1502 + task->rx_wr_only.sgl_ctx.single_sge.cur_buf_addr.lo = 1503 + fcoe_bd_tbl->buf_addr_lo; 1504 + task->rx_wr_only.sgl_ctx.single_sge.cur_buf_addr.hi = 1505 + fcoe_bd_tbl->buf_addr_hi; 1506 + task->rx_wr_only.sgl_ctx.single_sge.cur_buf_rem = 1507 + fcoe_bd_tbl->buf_len; 1508 + task->tx_wr_rx_rd.init_flags |= 1 << 1509 + FCOE_TASK_CTX_ENTRY_TXWR_RXRD_SINGLE_SGE_SHIFT; 1510 + } else { 1511 + 1512 + task->rx_wr_only.sgl_ctx.mul_sges.cur_sge_addr.lo = 1513 + (u32)bd_tbl->bd_tbl_dma; 1514 + task->rx_wr_only.sgl_ctx.mul_sges.cur_sge_addr.hi = 1515 + (u32)((u64)bd_tbl->bd_tbl_dma >> 32); 1516 + task->rx_wr_only.sgl_ctx.mul_sges.sgl_size = 1517 + bd_tbl->bd_valid; 1518 + } 1519 + } 1520 + } 1521 + 1522 + /** 1523 + * bnx2fc_setup_task_ctx - allocate and map task context 1524 + * 1525 + * @hba: pointer to adapter structure 1526 + * 1527 + * allocate memory for task context, and associated BD table to be used 1528 + * by firmware 1529 + * 1530 + */ 1531 + int bnx2fc_setup_task_ctx(struct bnx2fc_hba *hba) 1532 + { 1533 + int rc = 0; 1534 + struct regpair *task_ctx_bdt; 1535 + dma_addr_t addr; 1536 + int i; 1537 + 1538 + /* 1539 + * Allocate task context bd table. A page size of bd table 1540 + * can map 256 buffers. Each buffer contains 32 task context 1541 + * entries. Hence the limit with one page is 8192 task context 1542 + * entries. 
1543 + */ 1544 + hba->task_ctx_bd_tbl = dma_alloc_coherent(&hba->pcidev->dev, 1545 + PAGE_SIZE, 1546 + &hba->task_ctx_bd_dma, 1547 + GFP_KERNEL); 1548 + if (!hba->task_ctx_bd_tbl) { 1549 + printk(KERN_ERR PFX "unable to allocate task context BDT\n"); 1550 + rc = -1; 1551 + goto out; 1552 + } 1553 + memset(hba->task_ctx_bd_tbl, 0, PAGE_SIZE); 1554 + 1555 + /* 1556 + * Allocate task_ctx which is an array of pointers pointing to 1557 + * a page containing 32 task contexts 1558 + */ 1559 + hba->task_ctx = kzalloc((BNX2FC_TASK_CTX_ARR_SZ * sizeof(void *)), 1560 + GFP_KERNEL); 1561 + if (!hba->task_ctx) { 1562 + printk(KERN_ERR PFX "unable to allocate task context array\n"); 1563 + rc = -1; 1564 + goto out1; 1565 + } 1566 + 1567 + /* 1568 + * Allocate task_ctx_dma which is an array of dma addresses 1569 + */ 1570 + hba->task_ctx_dma = kmalloc((BNX2FC_TASK_CTX_ARR_SZ * 1571 + sizeof(dma_addr_t)), GFP_KERNEL); 1572 + if (!hba->task_ctx_dma) { 1573 + printk(KERN_ERR PFX "unable to alloc context mapping array\n"); 1574 + rc = -1; 1575 + goto out2; 1576 + } 1577 + 1578 + task_ctx_bdt = (struct regpair *)hba->task_ctx_bd_tbl; 1579 + for (i = 0; i < BNX2FC_TASK_CTX_ARR_SZ; i++) { 1580 + 1581 + hba->task_ctx[i] = dma_alloc_coherent(&hba->pcidev->dev, 1582 + PAGE_SIZE, 1583 + &hba->task_ctx_dma[i], 1584 + GFP_KERNEL); 1585 + if (!hba->task_ctx[i]) { 1586 + printk(KERN_ERR PFX "unable to alloc task context\n"); 1587 + rc = -1; 1588 + goto out3; 1589 + } 1590 + memset(hba->task_ctx[i], 0, PAGE_SIZE); 1591 + addr = (u64)hba->task_ctx_dma[i]; 1592 + task_ctx_bdt->hi = cpu_to_le32((u64)addr >> 32); 1593 + task_ctx_bdt->lo = cpu_to_le32((u32)addr); 1594 + task_ctx_bdt++; 1595 + } 1596 + return 0; 1597 + 1598 + out3: 1599 + for (i = 0; i < BNX2FC_TASK_CTX_ARR_SZ; i++) { 1600 + if (hba->task_ctx[i]) { 1601 + 1602 + dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, 1603 + hba->task_ctx[i], hba->task_ctx_dma[i]); 1604 + hba->task_ctx[i] = NULL; 1605 + } 1606 + } 1607 + 1608 + kfree(hba->task_ctx_dma); 1609 + hba->task_ctx_dma = NULL; 1610 + out2: 1611 + kfree(hba->task_ctx); 1612 + hba->task_ctx = NULL; 1613 + out1: 1614 + dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, 1615 + hba->task_ctx_bd_tbl, hba->task_ctx_bd_dma); 1616 + hba->task_ctx_bd_tbl = NULL; 1617 + out: 1618 + return rc; 1619 + } 1620 + 1621 + void bnx2fc_free_task_ctx(struct bnx2fc_hba *hba) 1622 + { 1623 + int i; 1624 + 1625 + if (hba->task_ctx_bd_tbl) { 1626 + dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, 1627 + hba->task_ctx_bd_tbl, 1628 + hba->task_ctx_bd_dma); 1629 + hba->task_ctx_bd_tbl = NULL; 1630 + } 1631 + 1632 + if (hba->task_ctx) { 1633 + for (i = 0; i < BNX2FC_TASK_CTX_ARR_SZ; i++) { 1634 + if (hba->task_ctx[i]) { 1635 + dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, 1636 + hba->task_ctx[i], 1637 + hba->task_ctx_dma[i]); 1638 + hba->task_ctx[i] = NULL; 1639 + } 1640 + } 1641 + kfree(hba->task_ctx); 1642 + hba->task_ctx = NULL; 1643 + } 1644 + 1645 + kfree(hba->task_ctx_dma); 1646 + hba->task_ctx_dma = NULL; 1647 + } 1648 + 1649 + static void bnx2fc_free_hash_table(struct bnx2fc_hba *hba) 1650 + { 1651 + int i; 1652 + int segment_count; 1653 + int hash_table_size; 1654 + u32 *pbl; 1655 + 1656 + segment_count = hba->hash_tbl_segment_count; 1657 + hash_table_size = BNX2FC_NUM_MAX_SESS * BNX2FC_MAX_ROWS_IN_HASH_TBL * 1658 + sizeof(struct fcoe_hash_table_entry); 1659 + 1660 + pbl = hba->hash_tbl_pbl; 1661 + for (i = 0; i < segment_count; ++i) { 1662 + dma_addr_t dma_address; 1663 + 1664 + dma_address = le32_to_cpu(*pbl); 1665 + ++pbl; 1666 
+ dma_address += ((u64)le32_to_cpu(*pbl)) << 32; 1667 + ++pbl; 1668 + dma_free_coherent(&hba->pcidev->dev, 1669 + BNX2FC_HASH_TBL_CHUNK_SIZE, 1670 + hba->hash_tbl_segments[i], 1671 + dma_address); 1672 + 1673 + } 1674 + 1675 + if (hba->hash_tbl_pbl) { 1676 + dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, 1677 + hba->hash_tbl_pbl, 1678 + hba->hash_tbl_pbl_dma); 1679 + hba->hash_tbl_pbl = NULL; 1680 + } 1681 + } 1682 + 1683 + static int bnx2fc_allocate_hash_table(struct bnx2fc_hba *hba) 1684 + { 1685 + int i; 1686 + int hash_table_size; 1687 + int segment_count; 1688 + int segment_array_size; 1689 + int dma_segment_array_size; 1690 + dma_addr_t *dma_segment_array; 1691 + u32 *pbl; 1692 + 1693 + hash_table_size = BNX2FC_NUM_MAX_SESS * BNX2FC_MAX_ROWS_IN_HASH_TBL * 1694 + sizeof(struct fcoe_hash_table_entry); 1695 + 1696 + segment_count = hash_table_size + BNX2FC_HASH_TBL_CHUNK_SIZE - 1; 1697 + segment_count /= BNX2FC_HASH_TBL_CHUNK_SIZE; 1698 + hba->hash_tbl_segment_count = segment_count; 1699 + 1700 + segment_array_size = segment_count * sizeof(*hba->hash_tbl_segments); 1701 + hba->hash_tbl_segments = kzalloc(segment_array_size, GFP_KERNEL); 1702 + if (!hba->hash_tbl_segments) { 1703 + printk(KERN_ERR PFX "hash table pointers alloc failed\n"); 1704 + return -ENOMEM; 1705 + } 1706 + dma_segment_array_size = segment_count * sizeof(*dma_segment_array); 1707 + dma_segment_array = kzalloc(dma_segment_array_size, GFP_KERNEL); 1708 + if (!dma_segment_array) { 1709 + printk(KERN_ERR PFX "hash table pointers (dma) alloc failed\n"); 1710 + return -ENOMEM; 1711 + } 1712 + 1713 + for (i = 0; i < segment_count; ++i) { 1714 + hba->hash_tbl_segments[i] = 1715 + dma_alloc_coherent(&hba->pcidev->dev, 1716 + BNX2FC_HASH_TBL_CHUNK_SIZE, 1717 + &dma_segment_array[i], 1718 + GFP_KERNEL); 1719 + if (!hba->hash_tbl_segments[i]) { 1720 + printk(KERN_ERR PFX "hash segment alloc failed\n"); 1721 + while (--i >= 0) { 1722 + dma_free_coherent(&hba->pcidev->dev, 1723 + BNX2FC_HASH_TBL_CHUNK_SIZE, 1724 + hba->hash_tbl_segments[i], 1725 + dma_segment_array[i]); 1726 + hba->hash_tbl_segments[i] = NULL; 1727 + } 1728 + kfree(dma_segment_array); 1729 + return -ENOMEM; 1730 + } 1731 + memset(hba->hash_tbl_segments[i], 0, 1732 + BNX2FC_HASH_TBL_CHUNK_SIZE); 1733 + } 1734 + 1735 + hba->hash_tbl_pbl = dma_alloc_coherent(&hba->pcidev->dev, 1736 + PAGE_SIZE, 1737 + &hba->hash_tbl_pbl_dma, 1738 + GFP_KERNEL); 1739 + if (!hba->hash_tbl_pbl) { 1740 + printk(KERN_ERR PFX "hash table pbl alloc failed\n"); 1741 + kfree(dma_segment_array); 1742 + return -ENOMEM; 1743 + } 1744 + memset(hba->hash_tbl_pbl, 0, PAGE_SIZE); 1745 + 1746 + pbl = hba->hash_tbl_pbl; 1747 + for (i = 0; i < segment_count; ++i) { 1748 + u64 paddr = dma_segment_array[i]; 1749 + *pbl = cpu_to_le32((u32) paddr); 1750 + ++pbl; 1751 + *pbl = cpu_to_le32((u32) (paddr >> 32)); 1752 + ++pbl; 1753 + } 1754 + pbl = hba->hash_tbl_pbl; 1755 + i = 0; 1756 + while (*pbl && *(pbl + 1)) { 1757 + u32 lo; 1758 + u32 hi; 1759 + lo = *pbl; 1760 + ++pbl; 1761 + hi = *pbl; 1762 + ++pbl; 1763 + ++i; 1764 + } 1765 + kfree(dma_segment_array); 1766 + return 0; 1767 + } 1768 + 1769 + /** 1770 + * bnx2fc_setup_fw_resc - Allocate and map hash table and dummy buffer 1771 + * 1772 + * @hba: Pointer to adapter structure 1773 + * 1774 + */ 1775 + int bnx2fc_setup_fw_resc(struct bnx2fc_hba *hba) 1776 + { 1777 + u64 addr; 1778 + u32 mem_size; 1779 + int i; 1780 + 1781 + if (bnx2fc_allocate_hash_table(hba)) 1782 + return -ENOMEM; 1783 + 1784 + mem_size = BNX2FC_NUM_MAX_SESS * sizeof(struct regpair); 
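/*
 * The t2 hash table allocated below is chained for the firmware: each
 * entry's next.lo/next.hi pair holds the DMA address of the entry that
 * follows it, split into low and high 32-bit halves.
 */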
1785 + hba->t2_hash_tbl_ptr = dma_alloc_coherent(&hba->pcidev->dev, mem_size, 1786 + &hba->t2_hash_tbl_ptr_dma, 1787 + GFP_KERNEL); 1788 + if (!hba->t2_hash_tbl_ptr) { 1789 + printk(KERN_ERR PFX "unable to allocate t2 hash table ptr\n"); 1790 + bnx2fc_free_fw_resc(hba); 1791 + return -ENOMEM; 1792 + } 1793 + memset(hba->t2_hash_tbl_ptr, 0x00, mem_size); 1794 + 1795 + mem_size = BNX2FC_NUM_MAX_SESS * 1796 + sizeof(struct fcoe_t2_hash_table_entry); 1797 + hba->t2_hash_tbl = dma_alloc_coherent(&hba->pcidev->dev, mem_size, 1798 + &hba->t2_hash_tbl_dma, 1799 + GFP_KERNEL); 1800 + if (!hba->t2_hash_tbl) { 1801 + printk(KERN_ERR PFX "unable to allocate t2 hash table\n"); 1802 + bnx2fc_free_fw_resc(hba); 1803 + return -ENOMEM; 1804 + } 1805 + memset(hba->t2_hash_tbl, 0x00, mem_size); 1806 + for (i = 0; i < BNX2FC_NUM_MAX_SESS; i++) { 1807 + addr = (unsigned long) hba->t2_hash_tbl_dma + 1808 + ((i+1) * sizeof(struct fcoe_t2_hash_table_entry)); 1809 + hba->t2_hash_tbl[i].next.lo = addr & 0xffffffff; 1810 + hba->t2_hash_tbl[i].next.hi = addr >> 32; 1811 + } 1812 + 1813 + hba->dummy_buffer = dma_alloc_coherent(&hba->pcidev->dev, 1814 + PAGE_SIZE, &hba->dummy_buf_dma, 1815 + GFP_KERNEL); 1816 + if (!hba->dummy_buffer) { 1817 + printk(KERN_ERR PFX "unable to alloc MP Dummy Buffer\n"); 1818 + bnx2fc_free_fw_resc(hba); 1819 + return -ENOMEM; 1820 + } 1821 + 1822 + hba->stats_buffer = dma_alloc_coherent(&hba->pcidev->dev, 1823 + PAGE_SIZE, 1824 + &hba->stats_buf_dma, 1825 + GFP_KERNEL); 1826 + if (!hba->stats_buffer) { 1827 + printk(KERN_ERR PFX "unable to alloc Stats Buffer\n"); 1828 + bnx2fc_free_fw_resc(hba); 1829 + return -ENOMEM; 1830 + } 1831 + memset(hba->stats_buffer, 0x00, PAGE_SIZE); 1832 + 1833 + return 0; 1834 + } 1835 + 1836 + void bnx2fc_free_fw_resc(struct bnx2fc_hba *hba) 1837 + { 1838 + u32 mem_size; 1839 + 1840 + if (hba->stats_buffer) { 1841 + dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, 1842 + hba->stats_buffer, hba->stats_buf_dma); 1843 + hba->stats_buffer = NULL; 1844 + } 1845 + 1846 + if (hba->dummy_buffer) { 1847 + dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE, 1848 + hba->dummy_buffer, hba->dummy_buf_dma); 1849 + hba->dummy_buffer = NULL; 1850 + } 1851 + 1852 + if (hba->t2_hash_tbl_ptr) { 1853 + mem_size = BNX2FC_NUM_MAX_SESS * sizeof(struct regpair); 1854 + dma_free_coherent(&hba->pcidev->dev, mem_size, 1855 + hba->t2_hash_tbl_ptr, 1856 + hba->t2_hash_tbl_ptr_dma); 1857 + hba->t2_hash_tbl_ptr = NULL; 1858 + } 1859 + 1860 + if (hba->t2_hash_tbl) { 1861 + mem_size = BNX2FC_NUM_MAX_SESS * 1862 + sizeof(struct fcoe_t2_hash_table_entry); 1863 + dma_free_coherent(&hba->pcidev->dev, mem_size, 1864 + hba->t2_hash_tbl, hba->t2_hash_tbl_dma); 1865 + hba->t2_hash_tbl = NULL; 1866 + } 1867 + bnx2fc_free_hash_table(hba); 1868 + }
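The per-connection completion queue consumed by bnx2fc_process_new_cqes() above uses a toggle (phase) bit instead of a shared producer index: an entry is new while its toggle bit matches the consumer's expected phase, and the expectation flips on every ring wrap. A reduced stand-alone model of that consumer loop (user-space C; the ring size, bit position, and handler are placeholders, not the driver's values):

	#include <stdint.h>
	#include <stdio.h>

	#define CQ_WQES_MAX	16		/* placeholder ring size */
	#define CQE_TOGGLE_BIT	(1u << 15)	/* placeholder bit position */

	struct cqe { uint16_t wqe; };

	struct cq {
		struct cqe ring[CQ_WQES_MAX];
		uint32_t cons_idx;
		uint16_t curr_toggle_bit;	/* expected phase: 0 or 1 */
	};

	static void handle(uint16_t wqe)
	{
		printf("wqe 0x%04x\n", wqe);
	}

	/* Consume entries whose toggle bit matches the expected phase,
	 * flipping the expectation on wrap -- the scheme used on tgt->cq
	 * above. */
	static int cq_poll(struct cq *cq)
	{
		int n = 0;

		while ((cq->ring[cq->cons_idx].wqe & CQE_TOGGLE_BIT) ==
		       (cq->curr_toggle_bit ? CQE_TOGGLE_BIT : 0)) {
			handle(cq->ring[cq->cons_idx].wqe);
			n++;
			if (++cq->cons_idx == CQ_WQES_MAX) {
				cq->cons_idx = 0;
				cq->curr_toggle_bit = 1 - cq->curr_toggle_bit;
			}
		}
		return n;
	}

	int main(void)
	{
		struct cq cq = { .cons_idx = 0, .curr_toggle_bit = 1 };

		/* the producer's first pass writes entries with the bit set */
		cq.ring[0].wqe = CQE_TOGGLE_BIT | 0x01;
		cq.ring[1].wqe = CQE_TOGGLE_BIT | 0x02;
		printf("consumed %d entries\n", cq_poll(&cq));
		return 0;
	}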
+1833
drivers/scsi/bnx2fc/bnx2fc_io.c
··· 1 + /* bnx2fc_io.c: Broadcom NetXtreme II Linux FCoE offload driver. 2 + * IO manager and SCSI IO processing. 3 + * 4 + * Copyright (c) 2008 - 2010 Broadcom Corporation 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License as published by 8 + * the Free Software Foundation. 9 + * 10 + * Written by: Bhanu Prakash Gollapudi (bprakash@broadcom.com) 11 + */ 12 + 13 + #include "bnx2fc.h" 14 + static int bnx2fc_split_bd(struct bnx2fc_cmd *io_req, u64 addr, int sg_len, 15 + int bd_index); 16 + static int bnx2fc_map_sg(struct bnx2fc_cmd *io_req); 17 + static void bnx2fc_build_bd_list_from_sg(struct bnx2fc_cmd *io_req); 18 + static int bnx2fc_post_io_req(struct bnx2fc_rport *tgt, 19 + struct bnx2fc_cmd *io_req); 20 + static void bnx2fc_unmap_sg_list(struct bnx2fc_cmd *io_req); 21 + static void bnx2fc_free_mp_resc(struct bnx2fc_cmd *io_req); 22 + static void bnx2fc_parse_fcp_rsp(struct bnx2fc_cmd *io_req, 23 + struct fcoe_fcp_rsp_payload *fcp_rsp, 24 + u8 num_rq); 25 + 26 + void bnx2fc_cmd_timer_set(struct bnx2fc_cmd *io_req, 27 + unsigned int timer_msec) 28 + { 29 + struct bnx2fc_hba *hba = io_req->port->priv; 30 + 31 + if (queue_delayed_work(hba->timer_work_queue, &io_req->timeout_work, 32 + msecs_to_jiffies(timer_msec))) 33 + kref_get(&io_req->refcount); 34 + } 35 + 36 + static void bnx2fc_cmd_timeout(struct work_struct *work) 37 + { 38 + struct bnx2fc_cmd *io_req = container_of(work, struct bnx2fc_cmd, 39 + timeout_work.work); 40 + struct fc_lport *lport; 41 + struct fc_rport_priv *rdata; 42 + u8 cmd_type = io_req->cmd_type; 43 + struct bnx2fc_rport *tgt = io_req->tgt; 44 + int logo_issued; 45 + int rc; 46 + 47 + BNX2FC_IO_DBG(io_req, "cmd_timeout, cmd_type = %d, " 48 + "req_flags = %lx\n", cmd_type, io_req->req_flags); 49 + 50 + spin_lock_bh(&tgt->tgt_lock); 51 + if (test_and_clear_bit(BNX2FC_FLAG_ISSUE_RRQ, &io_req->req_flags)) { 52 + clear_bit(BNX2FC_FLAG_RETIRE_OXID, &io_req->req_flags); 53 + /* 54 + * Ideally we should hold the io_req until the RRQ completes, 55 + * and release io_req from timeout hold.
56 + */ 57 + spin_unlock_bh(&tgt->tgt_lock); 58 + bnx2fc_send_rrq(io_req); 59 + return; 60 + } 61 + if (test_and_clear_bit(BNX2FC_FLAG_RETIRE_OXID, &io_req->req_flags)) { 62 + BNX2FC_IO_DBG(io_req, "IO ready for reuse now\n"); 63 + goto done; 64 + } 65 + 66 + switch (cmd_type) { 67 + case BNX2FC_SCSI_CMD: 68 + if (test_and_clear_bit(BNX2FC_FLAG_EH_ABORT, 69 + &io_req->req_flags)) { 70 + /* Handle eh_abort timeout */ 71 + BNX2FC_IO_DBG(io_req, "eh_abort timed out\n"); 72 + complete(&io_req->tm_done); 73 + } else if (test_bit(BNX2FC_FLAG_ISSUE_ABTS, 74 + &io_req->req_flags)) { 75 + /* Handle internally generated ABTS timeout */ 76 + BNX2FC_IO_DBG(io_req, "ABTS timed out refcnt = %d\n", 77 + io_req->refcount.refcount.counter); 78 + if (!(test_and_set_bit(BNX2FC_FLAG_ABTS_DONE, 79 + &io_req->req_flags))) { 80 + 81 + lport = io_req->port->lport; 82 + rdata = io_req->tgt->rdata; 83 + logo_issued = test_and_set_bit( 84 + BNX2FC_FLAG_EXPL_LOGO, 85 + &tgt->flags); 86 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 87 + spin_unlock_bh(&tgt->tgt_lock); 88 + 89 + /* Explicitly logo the target */ 90 + if (!logo_issued) { 91 + BNX2FC_IO_DBG(io_req, "Explicit " 92 + "logo - tgt flags = 0x%lx\n", 93 + tgt->flags); 94 + 95 + mutex_lock(&lport->disc.disc_mutex); 96 + lport->tt.rport_logoff(rdata); 97 + mutex_unlock(&lport->disc.disc_mutex); 98 + } 99 + return; 100 + } 101 + } else { 102 + /* Handle IO timeout */ 103 + BNX2FC_IO_DBG(io_req, "IO timed out. issue ABTS\n"); 104 + if (test_and_set_bit(BNX2FC_FLAG_IO_COMPL, 105 + &io_req->req_flags)) { 106 + BNX2FC_IO_DBG(io_req, "IO completed before " 107 + "timer expiry\n"); 108 + goto done; 109 + } 110 + 111 + if (!test_and_set_bit(BNX2FC_FLAG_ISSUE_ABTS, 112 + &io_req->req_flags)) { 113 + rc = bnx2fc_initiate_abts(io_req); 114 + if (rc == SUCCESS) 115 + goto done; 116 + /* 117 + * Explicitly logo the target if 118 + * abts initiation fails 119 + */ 120 + lport = io_req->port->lport; 121 + rdata = io_req->tgt->rdata; 122 + logo_issued = test_and_set_bit( 123 + BNX2FC_FLAG_EXPL_LOGO, 124 + &tgt->flags); 125 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 126 + spin_unlock_bh(&tgt->tgt_lock); 127 + 128 + if (!logo_issued) { 129 + BNX2FC_IO_DBG(io_req, "Explicit " 130 + "logo - tgt flags = 0x%lx\n", 131 + tgt->flags); 132 + 133 + 134 + mutex_lock(&lport->disc.disc_mutex); 135 + lport->tt.rport_logoff(rdata); 136 + mutex_unlock(&lport->disc.disc_mutex); 137 + } 138 + return; 139 + } else { 140 + BNX2FC_IO_DBG(io_req, "IO already in " 141 + "ABTS processing\n"); 142 + } 143 + } 144 + break; 145 + case BNX2FC_ELS: 146 + 147 + if (test_bit(BNX2FC_FLAG_ISSUE_ABTS, &io_req->req_flags)) { 148 + BNX2FC_IO_DBG(io_req, "ABTS for ELS timed out\n"); 149 + 150 + if (!test_and_set_bit(BNX2FC_FLAG_ABTS_DONE, 151 + &io_req->req_flags)) { 152 + lport = io_req->port->lport; 153 + rdata = io_req->tgt->rdata; 154 + logo_issued = test_and_set_bit( 155 + BNX2FC_FLAG_EXPL_LOGO, 156 + &tgt->flags); 157 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 158 + spin_unlock_bh(&tgt->tgt_lock); 159 + 160 + /* Explicitly logo the target */ 161 + if (!logo_issued) { 162 + BNX2FC_IO_DBG(io_req, "Explicit logo " 163 + "(els)\n"); 164 + mutex_lock(&lport->disc.disc_mutex); 165 + lport->tt.rport_logoff(rdata); 166 + mutex_unlock(&lport->disc.disc_mutex); 167 + } 168 + return; 169 + } 170 + } else { 171 + /* 172 + * Handle ELS timeout. 173 + * tgt_lock is used to sync compl path and timeout 174 + * path.
If els compl path is processing this IO, we 175 + * have nothing to do here, just release the timer hold 176 + */ 177 + BNX2FC_IO_DBG(io_req, "ELS timed out\n"); 178 + if (test_and_set_bit(BNX2FC_FLAG_ELS_DONE, 179 + &io_req->req_flags)) 180 + goto done; 181 + 182 + /* Indicate the cb_func that this ELS is timed out */ 183 + set_bit(BNX2FC_FLAG_ELS_TIMEOUT, &io_req->req_flags); 184 + 185 + if ((io_req->cb_func) && (io_req->cb_arg)) { 186 + io_req->cb_func(io_req->cb_arg); 187 + io_req->cb_arg = NULL; 188 + } 189 + } 190 + break; 191 + default: 192 + printk(KERN_ERR PFX "cmd_timeout: invalid cmd_type %d\n", 193 + cmd_type); 194 + break; 195 + } 196 + 197 + done: 198 + /* release the cmd that was held when timer was set */ 199 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 200 + spin_unlock_bh(&tgt->tgt_lock); 201 + } 202 + 203 + static void bnx2fc_scsi_done(struct bnx2fc_cmd *io_req, int err_code) 204 + { 205 + /* Called with host lock held */ 206 + struct scsi_cmnd *sc_cmd = io_req->sc_cmd; 207 + 208 + /* 209 + * active_cmd_queue may have other command types as well, 210 + * and during flush operation, we want to error back only 211 + * scsi commands. 212 + */ 213 + if (io_req->cmd_type != BNX2FC_SCSI_CMD) 214 + return; 215 + 216 + BNX2FC_IO_DBG(io_req, "scsi_done. err_code = 0x%x\n", err_code); 217 + bnx2fc_unmap_sg_list(io_req); 218 + io_req->sc_cmd = NULL; 219 + if (!sc_cmd) { 220 + printk(KERN_ERR PFX "scsi_done - sc_cmd NULL. " 221 + "IO(0x%x) already cleaned up\n", 222 + io_req->xid); 223 + return; 224 + } 225 + sc_cmd->result = err_code << 16; 226 + 227 + BNX2FC_IO_DBG(io_req, "sc=%p, result=0x%x, retries=%d, allowed=%d\n", 228 + sc_cmd, host_byte(sc_cmd->result), sc_cmd->retries, 229 + sc_cmd->allowed); 230 + scsi_set_resid(sc_cmd, scsi_bufflen(sc_cmd)); 231 + sc_cmd->SCp.ptr = NULL; 232 + sc_cmd->scsi_done(sc_cmd); 233 + } 234 + 235 + struct bnx2fc_cmd_mgr *bnx2fc_cmd_mgr_alloc(struct bnx2fc_hba *hba, 236 + u16 min_xid, u16 max_xid) 237 + { 238 + struct bnx2fc_cmd_mgr *cmgr; 239 + struct io_bdt *bdt_info; 240 + struct bnx2fc_cmd *io_req; 241 + size_t len; 242 + u32 mem_size; 243 + u16 xid; 244 + int i; 245 + int num_ios; 246 + size_t bd_tbl_sz; 247 + 248 + if (max_xid <= min_xid || max_xid == FC_XID_UNKNOWN) { 249 + printk(KERN_ERR PFX "cmd_mgr_alloc: Invalid min_xid 0x%x \ 250 + and max_xid 0x%x\n", min_xid, max_xid); 251 + return NULL; 252 + } 253 + BNX2FC_MISC_DBG("min xid 0x%x, max xid 0x%x\n", min_xid, max_xid); 254 + 255 + num_ios = max_xid - min_xid + 1; 256 + len = (num_ios * (sizeof(struct bnx2fc_cmd *))); 257 + len += sizeof(struct bnx2fc_cmd_mgr); 258 + 259 + cmgr = kzalloc(len, GFP_KERNEL); 260 + if (!cmgr) { 261 + printk(KERN_ERR PFX "failed to alloc cmgr\n"); 262 + return NULL; 263 + } 264 + 265 + cmgr->free_list = kzalloc(sizeof(*cmgr->free_list) * 266 + num_possible_cpus(), GFP_KERNEL); 267 + if (!cmgr->free_list) { 268 + printk(KERN_ERR PFX "failed to alloc free_list\n"); 269 + goto mem_err; 270 + } 271 + 272 + cmgr->free_list_lock = kzalloc(sizeof(*cmgr->free_list_lock) * 273 + num_possible_cpus(), GFP_KERNEL); 274 + if (!cmgr->free_list_lock) { 275 + printk(KERN_ERR PFX "failed to alloc free_list_lock\n"); 276 + goto mem_err; 277 + } 278 + 279 + cmgr->hba = hba; 280 + cmgr->cmds = (struct bnx2fc_cmd **)(cmgr + 1); 281 + 282 + for (i = 0; i < num_possible_cpus(); i++) { 283 + INIT_LIST_HEAD(&cmgr->free_list[i]); 284 + spin_lock_init(&cmgr->free_list_lock[i]); 285 + } 286 + 287 + /* Pre-allocated pool of bnx2fc_cmds */ 288 + xid = BNX2FC_MIN_XID; 289 + for (i 
= 0; i < num_ios; i++) { 290 + io_req = kzalloc(sizeof(*io_req), GFP_KERNEL); 291 + 292 + if (!io_req) { 293 + printk(KERN_ERR PFX "failed to alloc io_req\n"); 294 + goto mem_err; 295 + } 296 + 297 + INIT_LIST_HEAD(&io_req->link); 298 + INIT_DELAYED_WORK(&io_req->timeout_work, bnx2fc_cmd_timeout); 299 + 300 + io_req->xid = xid++; 301 + if (io_req->xid >= BNX2FC_MAX_OUTSTANDING_CMNDS) 302 + printk(KERN_ERR PFX "ERROR allocating xids - 0x%x\n", 303 + io_req->xid); 304 + list_add_tail(&io_req->link, 305 + &cmgr->free_list[io_req->xid % num_possible_cpus()]); 306 + io_req++; 307 + } 308 + 309 + /* Allocate pool of io_bdts - one for each bnx2fc_cmd */ 310 + mem_size = num_ios * sizeof(struct io_bdt *); 311 + cmgr->io_bdt_pool = kmalloc(mem_size, GFP_KERNEL); 312 + if (!cmgr->io_bdt_pool) { 313 + printk(KERN_ERR PFX "failed to alloc io_bdt_pool\n"); 314 + goto mem_err; 315 + } 316 + 317 + mem_size = sizeof(struct io_bdt); 318 + for (i = 0; i < num_ios; i++) { 319 + cmgr->io_bdt_pool[i] = kmalloc(mem_size, GFP_KERNEL); 320 + if (!cmgr->io_bdt_pool[i]) { 321 + printk(KERN_ERR PFX "failed to alloc " 322 + "io_bdt_pool[%d]\n", i); 323 + goto mem_err; 324 + } 325 + } 326 + 327 + /* Allocate and map fcoe_bdt_ctx structures */ 328 + bd_tbl_sz = BNX2FC_MAX_BDS_PER_CMD * sizeof(struct fcoe_bd_ctx); 329 + for (i = 0; i < num_ios; i++) { 330 + bdt_info = cmgr->io_bdt_pool[i]; 331 + bdt_info->bd_tbl = dma_alloc_coherent(&hba->pcidev->dev, 332 + bd_tbl_sz, 333 + &bdt_info->bd_tbl_dma, 334 + GFP_KERNEL); 335 + if (!bdt_info->bd_tbl) { 336 + printk(KERN_ERR PFX "failed to alloc " 337 + "bdt_tbl[%d]\n", i); 338 + goto mem_err; 339 + } 340 + } 341 + 342 + return cmgr; 343 + 344 + mem_err: 345 + bnx2fc_cmd_mgr_free(cmgr); 346 + return NULL; 347 + } 348 + 349 + void bnx2fc_cmd_mgr_free(struct bnx2fc_cmd_mgr *cmgr) 350 + { 351 + struct io_bdt *bdt_info; 352 + struct bnx2fc_hba *hba = cmgr->hba; 353 + size_t bd_tbl_sz; 354 + u16 min_xid = BNX2FC_MIN_XID; 355 + u16 max_xid = BNX2FC_MAX_XID; 356 + int num_ios; 357 + int i; 358 + 359 + num_ios = max_xid - min_xid + 1; 360 + 361 + /* Free fcoe_bdt_ctx structures */ 362 + if (!cmgr->io_bdt_pool) 363 + goto free_cmd_pool; 364 + 365 + bd_tbl_sz = BNX2FC_MAX_BDS_PER_CMD * sizeof(struct fcoe_bd_ctx); 366 + for (i = 0; i < num_ios; i++) { 367 + bdt_info = cmgr->io_bdt_pool[i]; 368 + if (bdt_info->bd_tbl) { 369 + dma_free_coherent(&hba->pcidev->dev, bd_tbl_sz, 370 + bdt_info->bd_tbl, 371 + bdt_info->bd_tbl_dma); 372 + bdt_info->bd_tbl = NULL; 373 + } 374 + } 375 + 376 + /* Destroy io_bdt pool */ 377 + for (i = 0; i < num_ios; i++) { 378 + kfree(cmgr->io_bdt_pool[i]); 379 + cmgr->io_bdt_pool[i] = NULL; 380 + } 381 + 382 + kfree(cmgr->io_bdt_pool); 383 + cmgr->io_bdt_pool = NULL; 384 + 385 + free_cmd_pool: 386 + kfree(cmgr->free_list_lock); 387 + 388 + /* Destroy cmd pool */ 389 + if (!cmgr->free_list) 390 + goto free_cmgr; 391 + 392 + for (i = 0; i < num_possible_cpus(); i++) { 393 + struct list_head *list; 394 + struct list_head *tmp; 395 + 396 + list_for_each_safe(list, tmp, &cmgr->free_list[i]) { 397 + struct bnx2fc_cmd *io_req = (struct bnx2fc_cmd *)list; 398 + list_del(&io_req->link); 399 + kfree(io_req); 400 + } 401 + } 402 + kfree(cmgr->free_list); 403 + free_cmgr: 404 + /* Free command manager itself */ 405 + kfree(cmgr); 406 + } 407 + 408 + struct bnx2fc_cmd *bnx2fc_elstm_alloc(struct bnx2fc_rport *tgt, int type) 409 + { 410 + struct fcoe_port *port = tgt->port; 411 + struct bnx2fc_hba *hba = port->priv; 412 + struct bnx2fc_cmd_mgr *cmd_mgr = hba->cmd_mgr; 413 + struct
bnx2fc_cmd *io_req; 414 + struct list_head *listp; 415 + struct io_bdt *bd_tbl; 416 + u32 max_sqes; 417 + u16 xid; 418 + 419 + max_sqes = tgt->max_sqes; 420 + switch (type) { 421 + case BNX2FC_TASK_MGMT_CMD: 422 + max_sqes = BNX2FC_TM_MAX_SQES; 423 + break; 424 + case BNX2FC_ELS: 425 + max_sqes = BNX2FC_ELS_MAX_SQES; 426 + break; 427 + default: 428 + break; 429 + } 430 + 431 + /* 432 + * NOTE: Free list insertions and deletions are protected with 433 + * cmgr lock 434 + */ 435 + spin_lock_bh(&cmd_mgr->free_list_lock[smp_processor_id()]); 436 + if ((list_empty(&(cmd_mgr->free_list[smp_processor_id()]))) || 437 + (tgt->num_active_ios.counter >= max_sqes)) { 438 + BNX2FC_TGT_DBG(tgt, "No free els_tm cmds available " 439 + "ios(%d):sqes(%d)\n", 440 + tgt->num_active_ios.counter, tgt->max_sqes); 441 + if (list_empty(&(cmd_mgr->free_list[smp_processor_id()]))) 442 + printk(KERN_ERR PFX "elstm_alloc: list_empty\n"); 443 + spin_unlock_bh(&cmd_mgr->free_list_lock[smp_processor_id()]); 444 + return NULL; 445 + } 446 + 447 + listp = (struct list_head *) 448 + cmd_mgr->free_list[smp_processor_id()].next; 449 + list_del_init(listp); 450 + io_req = (struct bnx2fc_cmd *) listp; 451 + xid = io_req->xid; 452 + cmd_mgr->cmds[xid] = io_req; 453 + atomic_inc(&tgt->num_active_ios); 454 + spin_unlock_bh(&cmd_mgr->free_list_lock[smp_processor_id()]); 455 + 456 + INIT_LIST_HEAD(&io_req->link); 457 + 458 + io_req->port = port; 459 + io_req->cmd_mgr = cmd_mgr; 460 + io_req->req_flags = 0; 461 + io_req->cmd_type = type; 462 + 463 + /* Bind io_bdt for this io_req */ 464 + /* Have a static link between io_req and io_bdt_pool */ 465 + bd_tbl = io_req->bd_tbl = cmd_mgr->io_bdt_pool[xid]; 466 + bd_tbl->io_req = io_req; 467 + 468 + /* Hold the io_req against deletion */ 469 + kref_init(&io_req->refcount); 470 + return io_req; 471 + } 472 + static struct bnx2fc_cmd *bnx2fc_cmd_alloc(struct bnx2fc_rport *tgt) 473 + { 474 + struct fcoe_port *port = tgt->port; 475 + struct bnx2fc_hba *hba = port->priv; 476 + struct bnx2fc_cmd_mgr *cmd_mgr = hba->cmd_mgr; 477 + struct bnx2fc_cmd *io_req; 478 + struct list_head *listp; 479 + struct io_bdt *bd_tbl; 480 + u32 max_sqes; 481 + u16 xid; 482 + 483 + max_sqes = BNX2FC_SCSI_MAX_SQES; 484 + /* 485 + * NOTE: Free list insertions and deletions are protected with 486 + * cmgr lock 487 + */ 488 + spin_lock_bh(&cmd_mgr->free_list_lock[smp_processor_id()]); 489 + if ((list_empty(&cmd_mgr->free_list[smp_processor_id()])) || 490 + (tgt->num_active_ios.counter >= max_sqes)) { 491 + spin_unlock_bh(&cmd_mgr->free_list_lock[smp_processor_id()]); 492 + return NULL; 493 + } 494 + 495 + listp = (struct list_head *) 496 + cmd_mgr->free_list[smp_processor_id()].next; 497 + list_del_init(listp); 498 + io_req = (struct bnx2fc_cmd *) listp; 499 + xid = io_req->xid; 500 + cmd_mgr->cmds[xid] = io_req; 501 + atomic_inc(&tgt->num_active_ios); 502 + spin_unlock_bh(&cmd_mgr->free_list_lock[smp_processor_id()]); 503 + 504 + INIT_LIST_HEAD(&io_req->link); 505 + 506 + io_req->port = port; 507 + io_req->cmd_mgr = cmd_mgr; 508 + io_req->req_flags = 0; 509 + 510 + /* Bind io_bdt for this io_req */ 511 + /* Have a static link between io_req and io_bdt_pool */ 512 + bd_tbl = io_req->bd_tbl = cmd_mgr->io_bdt_pool[xid]; 513 + bd_tbl->io_req = io_req; 514 + 515 + /* Hold the io_req against deletion */ 516 + kref_init(&io_req->refcount); 517 + return io_req; 518 + } 519 + 520 + void bnx2fc_cmd_release(struct kref *ref) 521 + { 522 + struct bnx2fc_cmd *io_req = container_of(ref, 523 + struct bnx2fc_cmd, refcount); 524 + 
struct bnx2fc_cmd_mgr *cmd_mgr = io_req->cmd_mgr;
525 +
526 + 	spin_lock_bh(&cmd_mgr->free_list_lock[smp_processor_id()]);
527 + 	if (io_req->cmd_type != BNX2FC_SCSI_CMD)
528 + 		bnx2fc_free_mp_resc(io_req);
529 + 	cmd_mgr->cmds[io_req->xid] = NULL;
530 + 	/* Delete IO from retire queue */
531 + 	list_del_init(&io_req->link);
532 + 	/* Add it to the free list */
533 + 	list_add(&io_req->link,
534 + 		 &cmd_mgr->free_list[smp_processor_id()]);
535 + 	atomic_dec(&io_req->tgt->num_active_ios);
536 + 	spin_unlock_bh(&cmd_mgr->free_list_lock[smp_processor_id()]);
537 + }
538 +
539 + static void bnx2fc_free_mp_resc(struct bnx2fc_cmd *io_req)
540 + {
541 + 	struct bnx2fc_mp_req *mp_req = &(io_req->mp_req);
542 + 	struct bnx2fc_hba *hba = io_req->port->priv;
543 + 	size_t sz = sizeof(struct fcoe_bd_ctx);
544 +
545 + 	/* clear tm flags */
546 + 	mp_req->tm_flags = 0;
547 + 	if (mp_req->mp_req_bd) {
548 + 		dma_free_coherent(&hba->pcidev->dev, sz,
549 + 				  mp_req->mp_req_bd,
550 + 				  mp_req->mp_req_bd_dma);
551 + 		mp_req->mp_req_bd = NULL;
552 + 	}
553 + 	if (mp_req->mp_resp_bd) {
554 + 		dma_free_coherent(&hba->pcidev->dev, sz,
555 + 				  mp_req->mp_resp_bd,
556 + 				  mp_req->mp_resp_bd_dma);
557 + 		mp_req->mp_resp_bd = NULL;
558 + 	}
559 + 	if (mp_req->req_buf) {
560 + 		dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE,
561 + 				  mp_req->req_buf,
562 + 				  mp_req->req_buf_dma);
563 + 		mp_req->req_buf = NULL;
564 + 	}
565 + 	if (mp_req->resp_buf) {
566 + 		dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE,
567 + 				  mp_req->resp_buf,
568 + 				  mp_req->resp_buf_dma);
569 + 		mp_req->resp_buf = NULL;
570 + 	}
571 + }
572 +
573 + int bnx2fc_init_mp_req(struct bnx2fc_cmd *io_req)
574 + {
575 + 	struct bnx2fc_mp_req *mp_req;
576 + 	struct fcoe_bd_ctx *mp_req_bd;
577 + 	struct fcoe_bd_ctx *mp_resp_bd;
578 + 	struct bnx2fc_hba *hba = io_req->port->priv;
579 + 	dma_addr_t addr;
580 + 	size_t sz;
581 +
582 + 	mp_req = (struct bnx2fc_mp_req *)&(io_req->mp_req);
583 + 	memset(mp_req, 0, sizeof(struct bnx2fc_mp_req));
584 +
585 + 	mp_req->req_len = sizeof(struct fcp_cmnd);
586 + 	io_req->data_xfer_len = mp_req->req_len;
587 + 	mp_req->req_buf = dma_alloc_coherent(&hba->pcidev->dev, PAGE_SIZE,
588 + 					     &mp_req->req_buf_dma,
589 + 					     GFP_ATOMIC);
590 + 	if (!mp_req->req_buf) {
591 + 		printk(KERN_ERR PFX "unable to alloc MP req buffer\n");
592 + 		bnx2fc_free_mp_resc(io_req);
593 + 		return FAILED;
594 + 	}
595 +
596 + 	mp_req->resp_buf = dma_alloc_coherent(&hba->pcidev->dev, PAGE_SIZE,
597 + 					      &mp_req->resp_buf_dma,
598 + 					      GFP_ATOMIC);
599 + 	if (!mp_req->resp_buf) {
600 + 		printk(KERN_ERR PFX "unable to alloc TM resp buffer\n");
601 + 		bnx2fc_free_mp_resc(io_req);
602 + 		return FAILED;
603 + 	}
604 + 	memset(mp_req->req_buf, 0, PAGE_SIZE);
605 + 	memset(mp_req->resp_buf, 0, PAGE_SIZE);
606 +
607 + 	/* Allocate and map mp_req_bd and mp_resp_bd */
608 + 	sz = sizeof(struct fcoe_bd_ctx);
609 + 	mp_req->mp_req_bd = dma_alloc_coherent(&hba->pcidev->dev, sz,
610 + 					       &mp_req->mp_req_bd_dma,
611 + 					       GFP_ATOMIC);
612 + 	if (!mp_req->mp_req_bd) {
613 + 		printk(KERN_ERR PFX "unable to alloc MP req bd\n");
614 + 		bnx2fc_free_mp_resc(io_req);
615 + 		return FAILED;
616 + 	}
617 + 	mp_req->mp_resp_bd = dma_alloc_coherent(&hba->pcidev->dev, sz,
618 + 						&mp_req->mp_resp_bd_dma,
619 + 						GFP_ATOMIC);
620 + 	if (!mp_req->mp_resp_bd) {
621 + 		printk(KERN_ERR PFX "unable to alloc MP resp bd\n");
622 + 		bnx2fc_free_mp_resc(io_req);
623 + 		return FAILED;
624 + 	}
625 + 	/* Fill bd table */
626 + 	addr = mp_req->req_buf_dma;
627 + 	mp_req_bd = mp_req->mp_req_bd;
628 + 	mp_req_bd->buf_addr_lo = (u32)addr & 0xffffffff;
629 + 	mp_req_bd->buf_addr_hi = (u32)((u64)addr >> 32);
630 + mp_req_bd->buf_len = PAGE_SIZE; 631 + mp_req_bd->flags = 0; 632 + 633 + /* 634 + * MP buffer is either a task mgmt command or an ELS. 635 + * So the assumption is that it consumes a single bd 636 + * entry in the bd table 637 + */ 638 + mp_resp_bd = mp_req->mp_resp_bd; 639 + addr = mp_req->resp_buf_dma; 640 + mp_resp_bd->buf_addr_lo = (u32)addr & 0xffffffff; 641 + mp_resp_bd->buf_addr_hi = (u32)((u64)addr >> 32); 642 + mp_resp_bd->buf_len = PAGE_SIZE; 643 + mp_resp_bd->flags = 0; 644 + 645 + return SUCCESS; 646 + } 647 + 648 + static int bnx2fc_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags) 649 + { 650 + struct fc_lport *lport; 651 + struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device)); 652 + struct fc_rport_libfc_priv *rp = rport->dd_data; 653 + struct fcoe_port *port; 654 + struct bnx2fc_hba *hba; 655 + struct bnx2fc_rport *tgt; 656 + struct bnx2fc_cmd *io_req; 657 + struct bnx2fc_mp_req *tm_req; 658 + struct fcoe_task_ctx_entry *task; 659 + struct fcoe_task_ctx_entry *task_page; 660 + struct Scsi_Host *host = sc_cmd->device->host; 661 + struct fc_frame_header *fc_hdr; 662 + struct fcp_cmnd *fcp_cmnd; 663 + int task_idx, index; 664 + int rc = SUCCESS; 665 + u16 xid; 666 + u32 sid, did; 667 + unsigned long start = jiffies; 668 + 669 + lport = shost_priv(host); 670 + port = lport_priv(lport); 671 + hba = port->priv; 672 + 673 + if (rport == NULL) { 674 + printk(KERN_ALERT PFX "device_reset: rport is NULL\n"); 675 + rc = FAILED; 676 + goto tmf_err; 677 + } 678 + 679 + rc = fc_block_scsi_eh(sc_cmd); 680 + if (rc) 681 + return rc; 682 + 683 + if (lport->state != LPORT_ST_READY || !(lport->link_up)) { 684 + printk(KERN_ERR PFX "device_reset: link is not ready\n"); 685 + rc = FAILED; 686 + goto tmf_err; 687 + } 688 + /* rport and tgt are allocated together, so tgt should be non-NULL */ 689 + tgt = (struct bnx2fc_rport *)&rp[1]; 690 + 691 + if (!(test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags))) { 692 + printk(KERN_ERR PFX "device_reset: tgt not offloaded\n"); 693 + rc = FAILED; 694 + goto tmf_err; 695 + } 696 + retry_tmf: 697 + io_req = bnx2fc_elstm_alloc(tgt, BNX2FC_TASK_MGMT_CMD); 698 + if (!io_req) { 699 + if (time_after(jiffies, start + HZ)) { 700 + printk(KERN_ERR PFX "tmf: Failed TMF"); 701 + rc = FAILED; 702 + goto tmf_err; 703 + } 704 + msleep(20); 705 + goto retry_tmf; 706 + } 707 + /* Initialize rest of io_req fields */ 708 + io_req->sc_cmd = sc_cmd; 709 + io_req->port = port; 710 + io_req->tgt = tgt; 711 + 712 + tm_req = (struct bnx2fc_mp_req *)&(io_req->mp_req); 713 + 714 + rc = bnx2fc_init_mp_req(io_req); 715 + if (rc == FAILED) { 716 + printk(KERN_ERR PFX "Task mgmt MP request init failed\n"); 717 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 718 + goto tmf_err; 719 + } 720 + 721 + /* Set TM flags */ 722 + io_req->io_req_flags = 0; 723 + tm_req->tm_flags = tm_flags; 724 + 725 + /* Fill FCP_CMND */ 726 + bnx2fc_build_fcp_cmnd(io_req, (struct fcp_cmnd *)tm_req->req_buf); 727 + fcp_cmnd = (struct fcp_cmnd *)tm_req->req_buf; 728 + memset(fcp_cmnd->fc_cdb, 0, sc_cmd->cmd_len); 729 + fcp_cmnd->fc_dl = 0; 730 + 731 + /* Fill FC header */ 732 + fc_hdr = &(tm_req->req_fc_hdr); 733 + sid = tgt->sid; 734 + did = rport->port_id; 735 + __fc_fill_fc_hdr(fc_hdr, FC_RCTL_DD_UNSOL_CMD, did, sid, 736 + FC_TYPE_FCP, FC_FC_FIRST_SEQ | FC_FC_END_SEQ | 737 + FC_FC_SEQ_INIT, 0); 738 + /* Obtain exchange id */ 739 + xid = io_req->xid; 740 + 741 + BNX2FC_TGT_DBG(tgt, "Initiate TMF - xid = 0x%x\n", xid); 742 + task_idx = xid/BNX2FC_TASKS_PER_PAGE; 743 + index = xid % 
BNX2FC_TASKS_PER_PAGE; 744 + 745 + /* Initialize task context for this IO request */ 746 + task_page = (struct fcoe_task_ctx_entry *) hba->task_ctx[task_idx]; 747 + task = &(task_page[index]); 748 + bnx2fc_init_mp_task(io_req, task); 749 + 750 + sc_cmd->SCp.ptr = (char *)io_req; 751 + 752 + /* Obtain free SQ entry */ 753 + spin_lock_bh(&tgt->tgt_lock); 754 + bnx2fc_add_2_sq(tgt, xid); 755 + 756 + /* Enqueue the io_req to active_tm_queue */ 757 + io_req->on_tmf_queue = 1; 758 + list_add_tail(&io_req->link, &tgt->active_tm_queue); 759 + 760 + init_completion(&io_req->tm_done); 761 + io_req->wait_for_comp = 1; 762 + 763 + /* Ring doorbell */ 764 + bnx2fc_ring_doorbell(tgt); 765 + spin_unlock_bh(&tgt->tgt_lock); 766 + 767 + rc = wait_for_completion_timeout(&io_req->tm_done, 768 + BNX2FC_TM_TIMEOUT * HZ); 769 + spin_lock_bh(&tgt->tgt_lock); 770 + 771 + io_req->wait_for_comp = 0; 772 + if (!(test_bit(BNX2FC_FLAG_TM_COMPL, &io_req->req_flags))) 773 + set_bit(BNX2FC_FLAG_TM_TIMEOUT, &io_req->req_flags); 774 + 775 + spin_unlock_bh(&tgt->tgt_lock); 776 + 777 + if (!rc) { 778 + printk(KERN_ERR PFX "task mgmt command failed...\n"); 779 + rc = FAILED; 780 + } else { 781 + printk(KERN_ERR PFX "task mgmt command success...\n"); 782 + rc = SUCCESS; 783 + } 784 + tmf_err: 785 + return rc; 786 + } 787 + 788 + int bnx2fc_initiate_abts(struct bnx2fc_cmd *io_req) 789 + { 790 + struct fc_lport *lport; 791 + struct bnx2fc_rport *tgt = io_req->tgt; 792 + struct fc_rport *rport = tgt->rport; 793 + struct fc_rport_priv *rdata = tgt->rdata; 794 + struct bnx2fc_hba *hba; 795 + struct fcoe_port *port; 796 + struct bnx2fc_cmd *abts_io_req; 797 + struct fcoe_task_ctx_entry *task; 798 + struct fcoe_task_ctx_entry *task_page; 799 + struct fc_frame_header *fc_hdr; 800 + struct bnx2fc_mp_req *abts_req; 801 + int task_idx, index; 802 + u32 sid, did; 803 + u16 xid; 804 + int rc = SUCCESS; 805 + u32 r_a_tov = rdata->r_a_tov; 806 + 807 + /* called with tgt_lock held */ 808 + BNX2FC_IO_DBG(io_req, "Entered bnx2fc_initiate_abts\n"); 809 + 810 + port = io_req->port; 811 + hba = port->priv; 812 + lport = port->lport; 813 + 814 + if (!test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags)) { 815 + printk(KERN_ERR PFX "initiate_abts: tgt not offloaded\n"); 816 + rc = FAILED; 817 + goto abts_err; 818 + } 819 + 820 + if (rport == NULL) { 821 + printk(KERN_ALERT PFX "initiate_abts: rport is NULL\n"); 822 + rc = FAILED; 823 + goto abts_err; 824 + } 825 + 826 + if (lport->state != LPORT_ST_READY || !(lport->link_up)) { 827 + printk(KERN_ERR PFX "initiate_abts: link is not ready\n"); 828 + rc = FAILED; 829 + goto abts_err; 830 + } 831 + 832 + abts_io_req = bnx2fc_elstm_alloc(tgt, BNX2FC_ABTS); 833 + if (!abts_io_req) { 834 + printk(KERN_ERR PFX "abts: couldnt allocate cmd\n"); 835 + rc = FAILED; 836 + goto abts_err; 837 + } 838 + 839 + /* Initialize rest of io_req fields */ 840 + abts_io_req->sc_cmd = NULL; 841 + abts_io_req->port = port; 842 + abts_io_req->tgt = tgt; 843 + abts_io_req->data_xfer_len = 0; /* No data transfer for ABTS */ 844 + 845 + abts_req = (struct bnx2fc_mp_req *)&(abts_io_req->mp_req); 846 + memset(abts_req, 0, sizeof(struct bnx2fc_mp_req)); 847 + 848 + /* Fill FC header */ 849 + fc_hdr = &(abts_req->req_fc_hdr); 850 + 851 + /* Obtain oxid and rxid for the original exchange to be aborted */ 852 + fc_hdr->fh_ox_id = htons(io_req->xid); 853 + fc_hdr->fh_rx_id = htons(io_req->task->rx_wr_tx_rd.rx_id); 854 + 855 + sid = tgt->sid; 856 + did = rport->port_id; 857 + 858 + __fc_fill_fc_hdr(fc_hdr, FC_RCTL_BA_ABTS, did, sid, 859 + 
FC_TYPE_BLS, FC_FC_FIRST_SEQ | FC_FC_END_SEQ | 860 + FC_FC_SEQ_INIT, 0); 861 + 862 + xid = abts_io_req->xid; 863 + BNX2FC_IO_DBG(abts_io_req, "ABTS io_req\n"); 864 + task_idx = xid/BNX2FC_TASKS_PER_PAGE; 865 + index = xid % BNX2FC_TASKS_PER_PAGE; 866 + 867 + /* Initialize task context for this IO request */ 868 + task_page = (struct fcoe_task_ctx_entry *) hba->task_ctx[task_idx]; 869 + task = &(task_page[index]); 870 + bnx2fc_init_mp_task(abts_io_req, task); 871 + 872 + /* 873 + * ABTS task is a temporary task that will be cleaned up 874 + * irrespective of ABTS response. We need to start the timer 875 + * for the original exchange, as the CQE is posted for the original 876 + * IO request. 877 + * 878 + * Timer for ABTS is started only when it is originated by a 879 + * TM request. For the ABTS issued as part of ULP timeout, 880 + * scsi-ml maintains the timers. 881 + */ 882 + 883 + /* if (test_bit(BNX2FC_FLAG_ISSUE_ABTS, &io_req->req_flags))*/ 884 + bnx2fc_cmd_timer_set(io_req, 2 * r_a_tov); 885 + 886 + /* Obtain free SQ entry */ 887 + bnx2fc_add_2_sq(tgt, xid); 888 + 889 + /* Ring doorbell */ 890 + bnx2fc_ring_doorbell(tgt); 891 + 892 + abts_err: 893 + return rc; 894 + } 895 + 896 + int bnx2fc_initiate_cleanup(struct bnx2fc_cmd *io_req) 897 + { 898 + struct fc_lport *lport; 899 + struct bnx2fc_rport *tgt = io_req->tgt; 900 + struct bnx2fc_hba *hba; 901 + struct fcoe_port *port; 902 + struct bnx2fc_cmd *cleanup_io_req; 903 + struct fcoe_task_ctx_entry *task; 904 + struct fcoe_task_ctx_entry *task_page; 905 + int task_idx, index; 906 + u16 xid, orig_xid; 907 + int rc = 0; 908 + 909 + /* ASSUMPTION: called with tgt_lock held */ 910 + BNX2FC_IO_DBG(io_req, "Entered bnx2fc_initiate_cleanup\n"); 911 + 912 + port = io_req->port; 913 + hba = port->priv; 914 + lport = port->lport; 915 + 916 + cleanup_io_req = bnx2fc_elstm_alloc(tgt, BNX2FC_CLEANUP); 917 + if (!cleanup_io_req) { 918 + printk(KERN_ERR PFX "cleanup: couldnt allocate cmd\n"); 919 + rc = -1; 920 + goto cleanup_err; 921 + } 922 + 923 + /* Initialize rest of io_req fields */ 924 + cleanup_io_req->sc_cmd = NULL; 925 + cleanup_io_req->port = port; 926 + cleanup_io_req->tgt = tgt; 927 + cleanup_io_req->data_xfer_len = 0; /* No data transfer for cleanup */ 928 + 929 + xid = cleanup_io_req->xid; 930 + 931 + task_idx = xid/BNX2FC_TASKS_PER_PAGE; 932 + index = xid % BNX2FC_TASKS_PER_PAGE; 933 + 934 + /* Initialize task context for this IO request */ 935 + task_page = (struct fcoe_task_ctx_entry *) hba->task_ctx[task_idx]; 936 + task = &(task_page[index]); 937 + orig_xid = io_req->xid; 938 + 939 + BNX2FC_IO_DBG(io_req, "CLEANUP io_req xid = 0x%x\n", xid); 940 + 941 + bnx2fc_init_cleanup_task(cleanup_io_req, task, orig_xid); 942 + 943 + /* Obtain free SQ entry */ 944 + bnx2fc_add_2_sq(tgt, xid); 945 + 946 + /* Ring doorbell */ 947 + bnx2fc_ring_doorbell(tgt); 948 + 949 + cleanup_err: 950 + return rc; 951 + } 952 + 953 + /** 954 + * bnx2fc_eh_target_reset: Reset a target 955 + * 956 + * @sc_cmd: SCSI command 957 + * 958 + * Set from SCSI host template to send task mgmt command to the target 959 + * and wait for the response 960 + */ 961 + int bnx2fc_eh_target_reset(struct scsi_cmnd *sc_cmd) 962 + { 963 + return bnx2fc_initiate_tmf(sc_cmd, FCP_TMF_TGT_RESET); 964 + } 965 + 966 + /** 967 + * bnx2fc_eh_device_reset - Reset a single LUN 968 + * 969 + * @sc_cmd: SCSI command 970 + * 971 + * Set from SCSI host template to send task mgmt command to the target 972 + * and wait for the response 973 + */ 974 + int bnx2fc_eh_device_reset(struct scsi_cmnd 
*sc_cmd) 975 + { 976 + return bnx2fc_initiate_tmf(sc_cmd, FCP_TMF_LUN_RESET); 977 + } 978 + 979 + /** 980 + * bnx2fc_eh_abort - eh_abort_handler api to abort an outstanding 981 + * SCSI command 982 + * 983 + * @sc_cmd: SCSI_ML command pointer 984 + * 985 + * SCSI abort request handler 986 + */ 987 + int bnx2fc_eh_abort(struct scsi_cmnd *sc_cmd) 988 + { 989 + struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device)); 990 + struct fc_rport_libfc_priv *rp = rport->dd_data; 991 + struct bnx2fc_cmd *io_req; 992 + struct fc_lport *lport; 993 + struct bnx2fc_rport *tgt; 994 + int rc = FAILED; 995 + 996 + 997 + rc = fc_block_scsi_eh(sc_cmd); 998 + if (rc) 999 + return rc; 1000 + 1001 + lport = shost_priv(sc_cmd->device->host); 1002 + if ((lport->state != LPORT_ST_READY) || !(lport->link_up)) { 1003 + printk(KERN_ALERT PFX "eh_abort: link not ready\n"); 1004 + return rc; 1005 + } 1006 + 1007 + tgt = (struct bnx2fc_rport *)&rp[1]; 1008 + 1009 + BNX2FC_TGT_DBG(tgt, "Entered bnx2fc_eh_abort\n"); 1010 + 1011 + spin_lock_bh(&tgt->tgt_lock); 1012 + io_req = (struct bnx2fc_cmd *)sc_cmd->SCp.ptr; 1013 + if (!io_req) { 1014 + /* Command might have just completed */ 1015 + printk(KERN_ERR PFX "eh_abort: io_req is NULL\n"); 1016 + spin_unlock_bh(&tgt->tgt_lock); 1017 + return SUCCESS; 1018 + } 1019 + BNX2FC_IO_DBG(io_req, "eh_abort - refcnt = %d\n", 1020 + io_req->refcount.refcount.counter); 1021 + 1022 + /* Hold IO request across abort processing */ 1023 + kref_get(&io_req->refcount); 1024 + 1025 + BUG_ON(tgt != io_req->tgt); 1026 + 1027 + /* Remove the io_req from the active_q. */ 1028 + /* 1029 + * Task Mgmt functions (LUN RESET & TGT RESET) will not 1030 + * issue an ABTS on this particular IO req, as the 1031 + * io_req is no longer in the active_q. 1032 + */ 1033 + if (tgt->flush_in_prog) { 1034 + printk(KERN_ALERT PFX "eh_abort: io_req (xid = 0x%x) " 1035 + "flush in progress\n", io_req->xid); 1036 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 1037 + spin_unlock_bh(&tgt->tgt_lock); 1038 + return SUCCESS; 1039 + } 1040 + 1041 + if (io_req->on_active_queue == 0) { 1042 + printk(KERN_ALERT PFX "eh_abort: io_req (xid = 0x%x) " 1043 + "not on active_q\n", io_req->xid); 1044 + /* 1045 + * This condition can happen only due to the FW bug, 1046 + * where we do not receive cleanup response from 1047 + * the FW. Handle this case gracefully by erroring 1048 + * back the IO request to SCSI-ml 1049 + */ 1050 + bnx2fc_scsi_done(io_req, DID_ABORT); 1051 + 1052 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 1053 + spin_unlock_bh(&tgt->tgt_lock); 1054 + return SUCCESS; 1055 + } 1056 + 1057 + /* 1058 + * Only eh_abort processing will remove the IO from 1059 + * active_cmd_q before processing the request. 
this is 1060 + * done to avoid race conditions between IOs aborted 1061 + * as part of task management completion and eh_abort 1062 + * processing 1063 + */ 1064 + list_del_init(&io_req->link); 1065 + io_req->on_active_queue = 0; 1066 + /* Move IO req to retire queue */ 1067 + list_add_tail(&io_req->link, &tgt->io_retire_queue); 1068 + 1069 + init_completion(&io_req->tm_done); 1070 + io_req->wait_for_comp = 1; 1071 + 1072 + if (!test_and_set_bit(BNX2FC_FLAG_ISSUE_ABTS, &io_req->req_flags)) { 1073 + /* Cancel the current timer running on this io_req */ 1074 + if (cancel_delayed_work(&io_req->timeout_work)) 1075 + kref_put(&io_req->refcount, 1076 + bnx2fc_cmd_release); /* drop timer hold */ 1077 + set_bit(BNX2FC_FLAG_EH_ABORT, &io_req->req_flags); 1078 + rc = bnx2fc_initiate_abts(io_req); 1079 + } else { 1080 + printk(KERN_ALERT PFX "eh_abort: io_req (xid = 0x%x) " 1081 + "already in abts processing\n", io_req->xid); 1082 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 1083 + spin_unlock_bh(&tgt->tgt_lock); 1084 + return SUCCESS; 1085 + } 1086 + if (rc == FAILED) { 1087 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 1088 + spin_unlock_bh(&tgt->tgt_lock); 1089 + return rc; 1090 + } 1091 + spin_unlock_bh(&tgt->tgt_lock); 1092 + 1093 + wait_for_completion(&io_req->tm_done); 1094 + 1095 + spin_lock_bh(&tgt->tgt_lock); 1096 + io_req->wait_for_comp = 0; 1097 + if (!(test_and_set_bit(BNX2FC_FLAG_ABTS_DONE, 1098 + &io_req->req_flags))) { 1099 + /* Let the scsi-ml try to recover this command */ 1100 + printk(KERN_ERR PFX "abort failed, xid = 0x%x\n", 1101 + io_req->xid); 1102 + rc = FAILED; 1103 + } else { 1104 + /* 1105 + * We come here even when there was a race condition 1106 + * between timeout and abts completion, and abts 1107 + * completion happens just in time. 1108 + */ 1109 + BNX2FC_IO_DBG(io_req, "abort succeeded\n"); 1110 + rc = SUCCESS; 1111 + bnx2fc_scsi_done(io_req, DID_ABORT); 1112 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 1113 + } 1114 + 1115 + /* release the reference taken in eh_abort */ 1116 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 1117 + spin_unlock_bh(&tgt->tgt_lock); 1118 + return rc; 1119 + } 1120 + 1121 + void bnx2fc_process_cleanup_compl(struct bnx2fc_cmd *io_req, 1122 + struct fcoe_task_ctx_entry *task, 1123 + u8 num_rq) 1124 + { 1125 + BNX2FC_IO_DBG(io_req, "Entered process_cleanup_compl " 1126 + "refcnt = %d, cmd_type = %d\n", 1127 + io_req->refcount.refcount.counter, io_req->cmd_type); 1128 + bnx2fc_scsi_done(io_req, DID_ERROR); 1129 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 1130 + } 1131 + 1132 + void bnx2fc_process_abts_compl(struct bnx2fc_cmd *io_req, 1133 + struct fcoe_task_ctx_entry *task, 1134 + u8 num_rq) 1135 + { 1136 + u32 r_ctl; 1137 + u32 r_a_tov = FC_DEF_R_A_TOV; 1138 + u8 issue_rrq = 0; 1139 + struct bnx2fc_rport *tgt = io_req->tgt; 1140 + 1141 + BNX2FC_IO_DBG(io_req, "Entered process_abts_compl xid = 0x%x" 1142 + "refcnt = %d, cmd_type = %d\n", 1143 + io_req->xid, 1144 + io_req->refcount.refcount.counter, io_req->cmd_type); 1145 + 1146 + if (test_and_set_bit(BNX2FC_FLAG_ABTS_DONE, 1147 + &io_req->req_flags)) { 1148 + BNX2FC_IO_DBG(io_req, "Timer context finished processing" 1149 + " this io\n"); 1150 + return; 1151 + } 1152 + 1153 + /* Do not issue RRQ as this IO is already cleanedup */ 1154 + if (test_and_set_bit(BNX2FC_FLAG_IO_CLEANUP, 1155 + &io_req->req_flags)) 1156 + goto io_compl; 1157 + 1158 + /* 1159 + * For ABTS issued due to SCSI eh_abort_handler, timeout 1160 + * values are maintained by scsi-ml itself. 
Cancel timeout 1161 + * in case ABTS issued as part of task management function 1162 + * or due to FW error. 1163 + */ 1164 + if (test_bit(BNX2FC_FLAG_ISSUE_ABTS, &io_req->req_flags)) 1165 + if (cancel_delayed_work(&io_req->timeout_work)) 1166 + kref_put(&io_req->refcount, 1167 + bnx2fc_cmd_release); /* drop timer hold */ 1168 + 1169 + r_ctl = task->cmn.general.rsp_info.abts_rsp.r_ctl; 1170 + 1171 + switch (r_ctl) { 1172 + case FC_RCTL_BA_ACC: 1173 + /* 1174 + * Dont release this cmd yet. It will be relesed 1175 + * after we get RRQ response 1176 + */ 1177 + BNX2FC_IO_DBG(io_req, "ABTS response - ACC Send RRQ\n"); 1178 + issue_rrq = 1; 1179 + break; 1180 + 1181 + case FC_RCTL_BA_RJT: 1182 + BNX2FC_IO_DBG(io_req, "ABTS response - RJT\n"); 1183 + break; 1184 + default: 1185 + printk(KERN_ERR PFX "Unknown ABTS response\n"); 1186 + break; 1187 + } 1188 + 1189 + if (issue_rrq) { 1190 + BNX2FC_IO_DBG(io_req, "Issue RRQ after R_A_TOV\n"); 1191 + set_bit(BNX2FC_FLAG_ISSUE_RRQ, &io_req->req_flags); 1192 + } 1193 + set_bit(BNX2FC_FLAG_RETIRE_OXID, &io_req->req_flags); 1194 + bnx2fc_cmd_timer_set(io_req, r_a_tov); 1195 + 1196 + io_compl: 1197 + if (io_req->wait_for_comp) { 1198 + if (test_and_clear_bit(BNX2FC_FLAG_EH_ABORT, 1199 + &io_req->req_flags)) 1200 + complete(&io_req->tm_done); 1201 + } else { 1202 + /* 1203 + * We end up here when ABTS is issued as 1204 + * in asynchronous context, i.e., as part 1205 + * of task management completion, or 1206 + * when FW error is received or when the 1207 + * ABTS is issued when the IO is timed 1208 + * out. 1209 + */ 1210 + 1211 + if (io_req->on_active_queue) { 1212 + list_del_init(&io_req->link); 1213 + io_req->on_active_queue = 0; 1214 + /* Move IO req to retire queue */ 1215 + list_add_tail(&io_req->link, &tgt->io_retire_queue); 1216 + } 1217 + bnx2fc_scsi_done(io_req, DID_ERROR); 1218 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 1219 + } 1220 + } 1221 + 1222 + static void bnx2fc_lun_reset_cmpl(struct bnx2fc_cmd *io_req) 1223 + { 1224 + struct scsi_cmnd *sc_cmd = io_req->sc_cmd; 1225 + struct bnx2fc_rport *tgt = io_req->tgt; 1226 + struct list_head *list; 1227 + struct list_head *tmp; 1228 + struct bnx2fc_cmd *cmd; 1229 + int tm_lun = sc_cmd->device->lun; 1230 + int rc = 0; 1231 + int lun; 1232 + 1233 + /* called with tgt_lock held */ 1234 + BNX2FC_IO_DBG(io_req, "Entered bnx2fc_lun_reset_cmpl\n"); 1235 + /* 1236 + * Walk thru the active_ios queue and ABORT the IO 1237 + * that matches with the LUN that was reset 1238 + */ 1239 + list_for_each_safe(list, tmp, &tgt->active_cmd_queue) { 1240 + BNX2FC_TGT_DBG(tgt, "LUN RST cmpl: scan for pending IOs\n"); 1241 + cmd = (struct bnx2fc_cmd *)list; 1242 + lun = cmd->sc_cmd->device->lun; 1243 + if (lun == tm_lun) { 1244 + /* Initiate ABTS on this cmd */ 1245 + if (!test_and_set_bit(BNX2FC_FLAG_ISSUE_ABTS, 1246 + &cmd->req_flags)) { 1247 + /* cancel the IO timeout */ 1248 + if (cancel_delayed_work(&io_req->timeout_work)) 1249 + kref_put(&io_req->refcount, 1250 + bnx2fc_cmd_release); 1251 + /* timer hold */ 1252 + rc = bnx2fc_initiate_abts(cmd); 1253 + /* abts shouldnt fail in this context */ 1254 + WARN_ON(rc != SUCCESS); 1255 + } else 1256 + printk(KERN_ERR PFX "lun_rst: abts already in" 1257 + " progress for this IO 0x%x\n", 1258 + cmd->xid); 1259 + } 1260 + } 1261 + } 1262 + 1263 + static void bnx2fc_tgt_reset_cmpl(struct bnx2fc_cmd *io_req) 1264 + { 1265 + struct bnx2fc_rport *tgt = io_req->tgt; 1266 + struct list_head *list; 1267 + struct list_head *tmp; 1268 + struct bnx2fc_cmd *cmd; 1269 + int rc = 0; 
1270 +
1271 + 	/* called with tgt_lock held */
1272 + 	BNX2FC_IO_DBG(io_req, "Entered bnx2fc_tgt_reset_cmpl\n");
1273 + 	/*
1274 + 	 * Walk through the active_cmd_queue and ABORT every
1275 + 	 * outstanding IO on the target that was reset
1276 + 	 */
1277 + 	list_for_each_safe(list, tmp, &tgt->active_cmd_queue) {
1278 + 		BNX2FC_TGT_DBG(tgt, "TGT RST cmpl: scan for pending IOs\n");
1279 + 		cmd = (struct bnx2fc_cmd *)list;
1280 + 		/* Initiate ABTS */
1281 + 		if (!test_and_set_bit(BNX2FC_FLAG_ISSUE_ABTS,
1282 + 				      &cmd->req_flags)) {
1283 + 			/* cancel the IO timeout */
1284 + 			if (cancel_delayed_work(&io_req->timeout_work))
1285 + 				kref_put(&io_req->refcount,
1286 + 					 bnx2fc_cmd_release); /* timer hold */
1287 + 			rc = bnx2fc_initiate_abts(cmd);
1288 + 			/* ABTS shouldn't fail in this context */
1289 + 			WARN_ON(rc != SUCCESS);
1290 +
1291 + 		} else
1292 + 			printk(KERN_ERR PFX "tgt_rst: abts already in progress"
1293 + 			       " for this IO 0x%x\n", cmd->xid);
1294 + 	}
1295 + }
1296 +
1297 + void bnx2fc_process_tm_compl(struct bnx2fc_cmd *io_req,
1298 + 			     struct fcoe_task_ctx_entry *task, u8 num_rq)
1299 + {
1300 + 	struct bnx2fc_mp_req *tm_req;
1301 + 	struct fc_frame_header *fc_hdr;
1302 + 	struct scsi_cmnd *sc_cmd = io_req->sc_cmd;
1303 + 	u64 *hdr;
1304 + 	u64 *temp_hdr;
1305 + 	void *rsp_buf;
1306 +
1307 + 	/* Called with tgt_lock held */
1308 + 	BNX2FC_IO_DBG(io_req, "Entered process_tm_compl\n");
1309 +
1310 + 	if (!(test_bit(BNX2FC_FLAG_TM_TIMEOUT, &io_req->req_flags)))
1311 + 		set_bit(BNX2FC_FLAG_TM_COMPL, &io_req->req_flags);
1312 + 	else {
1313 + 		/* TM has already timed out and we got
1314 + 		 * delayed completion. Ignore completion
1315 + 		 * processing.
1316 + 		 */
1317 + 		return;
1318 + 	}
1319 +
1320 + 	tm_req = &(io_req->mp_req);
1321 + 	fc_hdr = &(tm_req->resp_fc_hdr);
1322 + 	hdr = (u64 *)fc_hdr;
1323 + 	temp_hdr = (u64 *)
1324 + 		&task->cmn.general.cmd_info.mp_fc_frame.fc_hdr;
1325 + 	hdr[0] = cpu_to_be64(temp_hdr[0]);
1326 + 	hdr[1] = cpu_to_be64(temp_hdr[1]);
1327 + 	hdr[2] = cpu_to_be64(temp_hdr[2]);
1328 +
1329 + 	tm_req->resp_len = task->rx_wr_only.sgl_ctx.mul_sges.cur_sge_off;
1330 +
1331 + 	rsp_buf = tm_req->resp_buf;
1332 +
1333 + 	if (fc_hdr->fh_r_ctl == FC_RCTL_DD_CMD_STATUS) {
1334 + 		bnx2fc_parse_fcp_rsp(io_req,
1335 + 				     (struct fcoe_fcp_rsp_payload *)
1336 + 				     rsp_buf, num_rq);
1337 + 		if (io_req->fcp_rsp_code == 0) {
1338 + 			/* TM successful */
1339 + 			if (tm_req->tm_flags & FCP_TMF_LUN_RESET)
1340 + 				bnx2fc_lun_reset_cmpl(io_req);
1341 + 			else if (tm_req->tm_flags & FCP_TMF_TGT_RESET)
1342 + 				bnx2fc_tgt_reset_cmpl(io_req);
1343 + 		}
1344 + 	} else {
1345 + 		printk(KERN_ERR PFX "tmf's fc_hdr r_ctl = 0x%x\n",
1346 + 		       fc_hdr->fh_r_ctl);
1347 + 	}
1348 + 	if (!sc_cmd->SCp.ptr) {
1349 + 		printk(KERN_ALERT PFX "tm_compl: SCp.ptr is NULL\n");
1350 + 		return;
1351 + 	}
1352 + 	switch (io_req->fcp_status) {
1353 + 	case FC_GOOD:
1354 + 		if (io_req->cdb_status == 0) {
1355 + 			/* Good IO completion */
1356 + 			sc_cmd->result = DID_OK << 16;
1357 + 		} else {
1358 + 			/* Transport status is good, SCSI status not good */
1359 + 			sc_cmd->result = (DID_OK << 16) | io_req->cdb_status;
1360 + 		}
1361 + 		if (io_req->fcp_resid)
1362 + 			scsi_set_resid(sc_cmd, io_req->fcp_resid);
1363 + 		break;
1364 +
1365 + 	default:
1366 + 		BNX2FC_IO_DBG(io_req, "process_tm_compl: fcp_status = %d\n",
1367 + 			      io_req->fcp_status);
1368 + 		break;
1369 + 	}
1370 +
1371 + 	sc_cmd = io_req->sc_cmd;
1372 + 	io_req->sc_cmd = NULL;
1373 +
1374 + 	/* check if the io_req exists in tgt's tmf_q */
1375 + 	if (io_req->on_tmf_queue) {
1376 +
1377 + 		list_del_init(&io_req->link);
1378 + 		io_req->on_tmf_queue = 0;
1379 + 	} else {
1380 + 1381 + printk(KERN_ALERT PFX "Command not on active_cmd_queue!\n"); 1382 + return; 1383 + } 1384 + 1385 + sc_cmd->SCp.ptr = NULL; 1386 + sc_cmd->scsi_done(sc_cmd); 1387 + 1388 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 1389 + if (io_req->wait_for_comp) { 1390 + BNX2FC_IO_DBG(io_req, "tm_compl - wake up the waiter\n"); 1391 + complete(&io_req->tm_done); 1392 + } 1393 + } 1394 + 1395 + static int bnx2fc_split_bd(struct bnx2fc_cmd *io_req, u64 addr, int sg_len, 1396 + int bd_index) 1397 + { 1398 + struct fcoe_bd_ctx *bd = io_req->bd_tbl->bd_tbl; 1399 + int frag_size, sg_frags; 1400 + 1401 + sg_frags = 0; 1402 + while (sg_len) { 1403 + if (sg_len >= BNX2FC_BD_SPLIT_SZ) 1404 + frag_size = BNX2FC_BD_SPLIT_SZ; 1405 + else 1406 + frag_size = sg_len; 1407 + bd[bd_index + sg_frags].buf_addr_lo = addr & 0xffffffff; 1408 + bd[bd_index + sg_frags].buf_addr_hi = addr >> 32; 1409 + bd[bd_index + sg_frags].buf_len = (u16)frag_size; 1410 + bd[bd_index + sg_frags].flags = 0; 1411 + 1412 + addr += (u64) frag_size; 1413 + sg_frags++; 1414 + sg_len -= frag_size; 1415 + } 1416 + return sg_frags; 1417 + 1418 + } 1419 + 1420 + static int bnx2fc_map_sg(struct bnx2fc_cmd *io_req) 1421 + { 1422 + struct scsi_cmnd *sc = io_req->sc_cmd; 1423 + struct fcoe_bd_ctx *bd = io_req->bd_tbl->bd_tbl; 1424 + struct scatterlist *sg; 1425 + int byte_count = 0; 1426 + int sg_count = 0; 1427 + int bd_count = 0; 1428 + int sg_frags; 1429 + unsigned int sg_len; 1430 + u64 addr; 1431 + int i; 1432 + 1433 + sg_count = scsi_dma_map(sc); 1434 + scsi_for_each_sg(sc, sg, sg_count, i) { 1435 + sg_len = sg_dma_len(sg); 1436 + addr = sg_dma_address(sg); 1437 + if (sg_len > BNX2FC_MAX_BD_LEN) { 1438 + sg_frags = bnx2fc_split_bd(io_req, addr, sg_len, 1439 + bd_count); 1440 + } else { 1441 + 1442 + sg_frags = 1; 1443 + bd[bd_count].buf_addr_lo = addr & 0xffffffff; 1444 + bd[bd_count].buf_addr_hi = addr >> 32; 1445 + bd[bd_count].buf_len = (u16)sg_len; 1446 + bd[bd_count].flags = 0; 1447 + } 1448 + bd_count += sg_frags; 1449 + byte_count += sg_len; 1450 + } 1451 + if (byte_count != scsi_bufflen(sc)) 1452 + printk(KERN_ERR PFX "byte_count = %d != scsi_bufflen = %d, " 1453 + "task_id = 0x%x\n", byte_count, scsi_bufflen(sc), 1454 + io_req->xid); 1455 + return bd_count; 1456 + } 1457 + 1458 + static void bnx2fc_build_bd_list_from_sg(struct bnx2fc_cmd *io_req) 1459 + { 1460 + struct scsi_cmnd *sc = io_req->sc_cmd; 1461 + struct fcoe_bd_ctx *bd = io_req->bd_tbl->bd_tbl; 1462 + int bd_count; 1463 + 1464 + if (scsi_sg_count(sc)) 1465 + bd_count = bnx2fc_map_sg(io_req); 1466 + else { 1467 + bd_count = 0; 1468 + bd[0].buf_addr_lo = bd[0].buf_addr_hi = 0; 1469 + bd[0].buf_len = bd[0].flags = 0; 1470 + } 1471 + io_req->bd_tbl->bd_valid = bd_count; 1472 + } 1473 + 1474 + static void bnx2fc_unmap_sg_list(struct bnx2fc_cmd *io_req) 1475 + { 1476 + struct scsi_cmnd *sc = io_req->sc_cmd; 1477 + 1478 + if (io_req->bd_tbl->bd_valid && sc) { 1479 + scsi_dma_unmap(sc); 1480 + io_req->bd_tbl->bd_valid = 0; 1481 + } 1482 + } 1483 + 1484 + void bnx2fc_build_fcp_cmnd(struct bnx2fc_cmd *io_req, 1485 + struct fcp_cmnd *fcp_cmnd) 1486 + { 1487 + struct scsi_cmnd *sc_cmd = io_req->sc_cmd; 1488 + char tag[2]; 1489 + 1490 + memset(fcp_cmnd, 0, sizeof(struct fcp_cmnd)); 1491 + 1492 + int_to_scsilun(sc_cmd->device->lun, 1493 + (struct scsi_lun *) fcp_cmnd->fc_lun); 1494 + 1495 + 1496 + fcp_cmnd->fc_dl = htonl(io_req->data_xfer_len); 1497 + memcpy(fcp_cmnd->fc_cdb, sc_cmd->cmnd, sc_cmd->cmd_len); 1498 + 1499 + fcp_cmnd->fc_cmdref = 0; 1500 + fcp_cmnd->fc_pri_ta = 
0;
1501 + 	fcp_cmnd->fc_tm_flags = io_req->mp_req.tm_flags;
1502 + 	fcp_cmnd->fc_flags = io_req->io_req_flags;
1503 +
1504 + 	if (scsi_populate_tag_msg(sc_cmd, tag)) {
1505 + 		switch (tag[0]) {
1506 + 		case HEAD_OF_QUEUE_TAG:
1507 + 			fcp_cmnd->fc_pri_ta = FCP_PTA_HEADQ;
1508 + 			break;
1509 + 		case ORDERED_QUEUE_TAG:
1510 + 			fcp_cmnd->fc_pri_ta = FCP_PTA_ORDERED;
1511 + 			break;
1512 + 		default:
1513 + 			fcp_cmnd->fc_pri_ta = FCP_PTA_SIMPLE;
1514 + 			break;
1515 + 		}
1516 + 	} else {
1517 + 		fcp_cmnd->fc_pri_ta = 0;
1518 + 	}
1519 + }
1520 +
1521 + static void bnx2fc_parse_fcp_rsp(struct bnx2fc_cmd *io_req,
1522 + 				 struct fcoe_fcp_rsp_payload *fcp_rsp,
1523 + 				 u8 num_rq)
1524 + {
1525 + 	struct scsi_cmnd *sc_cmd = io_req->sc_cmd;
1526 + 	struct bnx2fc_rport *tgt = io_req->tgt;
1527 + 	u8 rsp_flags = fcp_rsp->fcp_flags.flags;
1528 + 	u32 rq_buff_len = 0;
1529 + 	int i;
1530 + 	unsigned char *rq_data;
1531 + 	unsigned char *dummy;
1532 + 	int fcp_sns_len = 0;
1533 + 	int fcp_rsp_len = 0;
1534 +
1535 + 	io_req->fcp_status = FC_GOOD;
1536 + 	io_req->fcp_resid = fcp_rsp->fcp_resid;
1537 +
1538 + 	io_req->scsi_comp_flags = rsp_flags;
1539 + 	CMD_SCSI_STATUS(sc_cmd) = io_req->cdb_status =
1540 + 				fcp_rsp->scsi_status_code;
1541 +
1542 + 	/* Fetch fcp_rsp_info and fcp_sns_info if available */
1543 + 	if (num_rq) {
1544 +
1545 + 		/*
1546 + 		 * We do not anticipate num_rq > 1, as the Linux-defined
1547 + 		 * SCSI_SENSE_BUFFERSIZE is 96 bytes plus 8 bytes of FCP_RSP_INFO;
1548 + 		 * a single 256-byte RQ buffer is enough to hold both.
1549 + 		 */
1550 +
1551 + 		if (rsp_flags &
1552 + 		    FCOE_FCP_RSP_FLAGS_FCP_RSP_LEN_VALID) {
1553 + 			fcp_rsp_len = rq_buff_len
1554 + 					= fcp_rsp->fcp_rsp_len;
1555 + 		}
1556 +
1557 + 		if (rsp_flags &
1558 + 		    FCOE_FCP_RSP_FLAGS_FCP_SNS_LEN_VALID) {
1559 + 			fcp_sns_len = fcp_rsp->fcp_sns_len;
1560 + 			rq_buff_len += fcp_rsp->fcp_sns_len;
1561 + 		}
1562 +
1563 + 		io_req->fcp_rsp_len = fcp_rsp_len;
1564 + 		io_req->fcp_sns_len = fcp_sns_len;
1565 +
1566 + 		if (rq_buff_len > num_rq * BNX2FC_RQ_BUF_SZ) {
1567 + 			/* Invalid sense length. */
1568 + 			printk(KERN_ALERT PFX "invalid sns length %d\n",
1569 + 				rq_buff_len);
1570 + 			/* reset rq_buff_len */
1571 + 			rq_buff_len =  num_rq * BNX2FC_RQ_BUF_SZ;
1572 + 		}
1573 +
1574 + 		rq_data = bnx2fc_get_next_rqe(tgt, 1);
1575 +
1576 + 		if (num_rq > 1) {
1577 + 			/* We do not need extra sense data */
1578 + 			for (i = 1; i < num_rq; i++)
1579 + 				dummy = bnx2fc_get_next_rqe(tgt, 1);
1580 + 		}
1581 +
1582 + 		/* fetch fcp_rsp_code */
1583 + 		if ((fcp_rsp_len == 4) || (fcp_rsp_len == 8)) {
1584 + 			/* Only for task management function */
1585 + 			io_req->fcp_rsp_code = rq_data[3];
1586 + 			printk(KERN_ERR PFX "fcp_rsp_code = %d\n",
1587 + 				io_req->fcp_rsp_code);
1588 + 		}
1589 +
1590 + 		/* fetch sense data */
1591 + 		rq_data += fcp_rsp_len;
1592 +
1593 + 		if (fcp_sns_len > SCSI_SENSE_BUFFERSIZE) {
1594 + 			printk(KERN_ERR PFX "Truncating sense buffer\n");
1595 + 			fcp_sns_len = SCSI_SENSE_BUFFERSIZE;
1596 + 		}
1597 +
1598 + 		memset(sc_cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
1599 + 		if (fcp_sns_len)
1600 + 			memcpy(sc_cmd->sense_buffer, rq_data, fcp_sns_len);
1601 +
1602 + 		/* return RQ entries */
1603 + 		for (i = 0; i < num_rq; i++)
1604 + 			bnx2fc_return_rqe(tgt, 1);
1605 + 	}
1606 + }
1607 +
1608 + /**
1609 + * bnx2fc_queuecommand - Queuecommand function of the scsi template
1610 + *
1611 + * @host:	The Scsi_Host the command was issued to
1612 + * @sc_cmd:	struct scsi_cmnd to be executed
1613 + *
1614 + * This is the IO strategy routine, called by SCSI-ML
1615 + **/
1616 + int bnx2fc_queuecommand(struct Scsi_Host *host,
1617 + 			struct scsi_cmnd *sc_cmd)
1618 + {
1619 + 	struct fc_lport *lport = shost_priv(host);
1620 + 	struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device));
1621 + 	struct fc_rport_libfc_priv *rp = rport->dd_data;
1622 + 	struct bnx2fc_rport *tgt;
1623 + 	struct bnx2fc_cmd *io_req;
1624 + 	int rc = 0;
1625 + 	int rval;
1626 +
1627 + 	rval = fc_remote_port_chkready(rport);
1628 + 	if (rval) {
1629 + 		sc_cmd->result = rval;
1630 + 		sc_cmd->scsi_done(sc_cmd);
1631 + 		return 0;
1632 + 	}
1633 +
1634 + 	if ((lport->state != LPORT_ST_READY) || !(lport->link_up)) {
1635 + 		rc = SCSI_MLQUEUE_HOST_BUSY;
1636 + 		goto exit_qcmd;
1637 + 	}
1638 +
1639 + 	/* rport and tgt are allocated together, so tgt should be non-NULL */
1640 + 	tgt = (struct bnx2fc_rport *)&rp[1];
1641 +
1642 + 	if (!test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags)) {
1643 + 		/*
1644 + 		 * Session is not offloaded yet. Let SCSI-ml retry
1645 + 		 * the command.
1646 + */ 1647 + rc = SCSI_MLQUEUE_TARGET_BUSY; 1648 + goto exit_qcmd; 1649 + } 1650 + 1651 + io_req = bnx2fc_cmd_alloc(tgt); 1652 + if (!io_req) { 1653 + rc = SCSI_MLQUEUE_HOST_BUSY; 1654 + goto exit_qcmd; 1655 + } 1656 + io_req->sc_cmd = sc_cmd; 1657 + 1658 + if (bnx2fc_post_io_req(tgt, io_req)) { 1659 + printk(KERN_ERR PFX "Unable to post io_req\n"); 1660 + rc = SCSI_MLQUEUE_HOST_BUSY; 1661 + goto exit_qcmd; 1662 + } 1663 + exit_qcmd: 1664 + return rc; 1665 + } 1666 + 1667 + void bnx2fc_process_scsi_cmd_compl(struct bnx2fc_cmd *io_req, 1668 + struct fcoe_task_ctx_entry *task, 1669 + u8 num_rq) 1670 + { 1671 + struct fcoe_fcp_rsp_payload *fcp_rsp; 1672 + struct bnx2fc_rport *tgt = io_req->tgt; 1673 + struct scsi_cmnd *sc_cmd; 1674 + struct Scsi_Host *host; 1675 + 1676 + 1677 + /* scsi_cmd_cmpl is called with tgt lock held */ 1678 + 1679 + if (test_and_set_bit(BNX2FC_FLAG_IO_COMPL, &io_req->req_flags)) { 1680 + /* we will not receive ABTS response for this IO */ 1681 + BNX2FC_IO_DBG(io_req, "Timer context finished processing " 1682 + "this scsi cmd\n"); 1683 + } 1684 + 1685 + /* Cancel the timeout_work, as we received IO completion */ 1686 + if (cancel_delayed_work(&io_req->timeout_work)) 1687 + kref_put(&io_req->refcount, 1688 + bnx2fc_cmd_release); /* drop timer hold */ 1689 + 1690 + sc_cmd = io_req->sc_cmd; 1691 + if (sc_cmd == NULL) { 1692 + printk(KERN_ERR PFX "scsi_cmd_compl - sc_cmd is NULL\n"); 1693 + return; 1694 + } 1695 + 1696 + /* Fetch fcp_rsp from task context and perform cmd completion */ 1697 + fcp_rsp = (struct fcoe_fcp_rsp_payload *) 1698 + &(task->cmn.general.rsp_info.fcp_rsp.payload); 1699 + 1700 + /* parse fcp_rsp and obtain sense data from RQ if available */ 1701 + bnx2fc_parse_fcp_rsp(io_req, fcp_rsp, num_rq); 1702 + 1703 + host = sc_cmd->device->host; 1704 + if (!sc_cmd->SCp.ptr) { 1705 + printk(KERN_ERR PFX "SCp.ptr is NULL\n"); 1706 + return; 1707 + } 1708 + io_req->sc_cmd = NULL; 1709 + 1710 + if (io_req->on_active_queue) { 1711 + list_del_init(&io_req->link); 1712 + io_req->on_active_queue = 0; 1713 + /* Move IO req to retire queue */ 1714 + list_add_tail(&io_req->link, &tgt->io_retire_queue); 1715 + } else { 1716 + /* This should not happen, but could have been pulled 1717 + * by bnx2fc_flush_active_ios(), or during a race 1718 + * between command abort and (late) completion. 
1719 + */ 1720 + BNX2FC_IO_DBG(io_req, "xid not on active_cmd_queue\n"); 1721 + if (io_req->wait_for_comp) 1722 + if (test_and_clear_bit(BNX2FC_FLAG_EH_ABORT, 1723 + &io_req->req_flags)) 1724 + complete(&io_req->tm_done); 1725 + } 1726 + 1727 + bnx2fc_unmap_sg_list(io_req); 1728 + 1729 + switch (io_req->fcp_status) { 1730 + case FC_GOOD: 1731 + if (io_req->cdb_status == 0) { 1732 + /* Good IO completion */ 1733 + sc_cmd->result = DID_OK << 16; 1734 + } else { 1735 + /* Transport status is good, SCSI status not good */ 1736 + BNX2FC_IO_DBG(io_req, "scsi_cmpl: cdb_status = %d" 1737 + " fcp_resid = 0x%x\n", 1738 + io_req->cdb_status, io_req->fcp_resid); 1739 + sc_cmd->result = (DID_OK << 16) | io_req->cdb_status; 1740 + } 1741 + if (io_req->fcp_resid) 1742 + scsi_set_resid(sc_cmd, io_req->fcp_resid); 1743 + break; 1744 + default: 1745 + printk(KERN_ALERT PFX "scsi_cmd_compl: fcp_status = %d\n", 1746 + io_req->fcp_status); 1747 + break; 1748 + } 1749 + sc_cmd->SCp.ptr = NULL; 1750 + sc_cmd->scsi_done(sc_cmd); 1751 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 1752 + } 1753 + 1754 + static int bnx2fc_post_io_req(struct bnx2fc_rport *tgt, 1755 + struct bnx2fc_cmd *io_req) 1756 + { 1757 + struct fcoe_task_ctx_entry *task; 1758 + struct fcoe_task_ctx_entry *task_page; 1759 + struct scsi_cmnd *sc_cmd = io_req->sc_cmd; 1760 + struct fcoe_port *port = tgt->port; 1761 + struct bnx2fc_hba *hba = port->priv; 1762 + struct fc_lport *lport = port->lport; 1763 + struct fcoe_dev_stats *stats; 1764 + int task_idx, index; 1765 + u16 xid; 1766 + 1767 + /* Initialize rest of io_req fields */ 1768 + io_req->cmd_type = BNX2FC_SCSI_CMD; 1769 + io_req->port = port; 1770 + io_req->tgt = tgt; 1771 + io_req->data_xfer_len = scsi_bufflen(sc_cmd); 1772 + sc_cmd->SCp.ptr = (char *)io_req; 1773 + 1774 + stats = per_cpu_ptr(lport->dev_stats, get_cpu()); 1775 + if (sc_cmd->sc_data_direction == DMA_FROM_DEVICE) { 1776 + io_req->io_req_flags = BNX2FC_READ; 1777 + stats->InputRequests++; 1778 + stats->InputBytes += io_req->data_xfer_len; 1779 + } else if (sc_cmd->sc_data_direction == DMA_TO_DEVICE) { 1780 + io_req->io_req_flags = BNX2FC_WRITE; 1781 + stats->OutputRequests++; 1782 + stats->OutputBytes += io_req->data_xfer_len; 1783 + } else { 1784 + io_req->io_req_flags = 0; 1785 + stats->ControlRequests++; 1786 + } 1787 + put_cpu(); 1788 + 1789 + xid = io_req->xid; 1790 + 1791 + /* Build buffer descriptor list for firmware from sg list */ 1792 + bnx2fc_build_bd_list_from_sg(io_req); 1793 + 1794 + task_idx = xid / BNX2FC_TASKS_PER_PAGE; 1795 + index = xid % BNX2FC_TASKS_PER_PAGE; 1796 + 1797 + /* Initialize task context for this IO request */ 1798 + task_page = (struct fcoe_task_ctx_entry *) hba->task_ctx[task_idx]; 1799 + task = &(task_page[index]); 1800 + bnx2fc_init_task(io_req, task); 1801 + 1802 + spin_lock_bh(&tgt->tgt_lock); 1803 + 1804 + if (tgt->flush_in_prog) { 1805 + printk(KERN_ERR PFX "Flush in progress..Host Busy\n"); 1806 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 1807 + spin_unlock_bh(&tgt->tgt_lock); 1808 + return -EAGAIN; 1809 + } 1810 + 1811 + if (!test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags)) { 1812 + printk(KERN_ERR PFX "Session not ready...post_io\n"); 1813 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 1814 + spin_unlock_bh(&tgt->tgt_lock); 1815 + return -EAGAIN; 1816 + } 1817 + 1818 + /* Time IO req */ 1819 + bnx2fc_cmd_timer_set(io_req, BNX2FC_IO_TIMEOUT); 1820 + /* Obtain free SQ entry */ 1821 + bnx2fc_add_2_sq(tgt, xid); 1822 + 1823 + /* Enqueue the io_req to active_cmd_queue */ 
1824 +
1825 + 	io_req->on_active_queue = 1;
1826 + 	/* add the io_req to the tgt's active_cmd_queue */
1827 + 	list_add_tail(&io_req->link, &tgt->active_cmd_queue);
1828 +
1829 + 	/* Ring doorbell */
1830 + 	bnx2fc_ring_doorbell(tgt);
1831 + 	spin_unlock_bh(&tgt->tgt_lock);
1832 + 	return 0;
1833 + }
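
The bnx2fc_io.c command manager above keys everything off the exchange id
(xid): bnx2fc_cmd_mgr_alloc() spreads the preallocated bnx2fc_cmd structures
across per-CPU free lists with xid % num_possible_cpus(), and the I/O paths
locate a command's hardware task context with xid / BNX2FC_TASKS_PER_PAGE and
xid % BNX2FC_TASKS_PER_PAGE. A minimal userspace sketch of that index
arithmetic follows; NUM_CPUS and TASKS_PER_PAGE are stand-in values chosen
for the example, not the driver's real constants.

	#include <stdio.h>

	#define NUM_CPUS       4    /* stand-in for num_possible_cpus() */
	#define TASKS_PER_PAGE 256  /* stand-in for BNX2FC_TASKS_PER_PAGE */

	int main(void)
	{
		unsigned int xid;

		for (xid = 0x100; xid < 0x106; xid++) {
			unsigned int free_list = xid % NUM_CPUS;      /* per-CPU bin */
			unsigned int task_idx = xid / TASKS_PER_PAGE; /* task ctx page */
			unsigned int index = xid % TASKS_PER_PAGE;    /* entry in page */

			printf("xid 0x%03x -> free_list %u, task page %u, entry %u\n",
			       xid, free_list, task_idx, index);
		}
		return 0;
	}

Note that the xid binning only decides the initial distribution: since
bnx2fc_cmd_release() returns a command to the free list of whichever CPU it
completes on, a command can migrate between per-CPU lists over its lifetime;
only the list heads, not the xids, are per-CPU.
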
+844
drivers/scsi/bnx2fc/bnx2fc_tgt.c
··· 1 + /* bnx2fc_tgt.c: Broadcom NetXtreme II Linux FCoE offload driver. 2 + * Handles operations such as session offload/upload etc, and manages 3 + * session resources such as connection id and qp resources. 4 + * 5 + * Copyright (c) 2008 - 2010 Broadcom Corporation 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation. 10 + * 11 + * Written by: Bhanu Prakash Gollapudi (bprakash@broadcom.com) 12 + */ 13 + 14 + #include "bnx2fc.h" 15 + static void bnx2fc_upld_timer(unsigned long data); 16 + static void bnx2fc_ofld_timer(unsigned long data); 17 + static int bnx2fc_init_tgt(struct bnx2fc_rport *tgt, 18 + struct fcoe_port *port, 19 + struct fc_rport_priv *rdata); 20 + static u32 bnx2fc_alloc_conn_id(struct bnx2fc_hba *hba, 21 + struct bnx2fc_rport *tgt); 22 + static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba, 23 + struct bnx2fc_rport *tgt); 24 + static void bnx2fc_free_session_resc(struct bnx2fc_hba *hba, 25 + struct bnx2fc_rport *tgt); 26 + static void bnx2fc_free_conn_id(struct bnx2fc_hba *hba, u32 conn_id); 27 + 28 + static void bnx2fc_upld_timer(unsigned long data) 29 + { 30 + 31 + struct bnx2fc_rport *tgt = (struct bnx2fc_rport *)data; 32 + 33 + BNX2FC_TGT_DBG(tgt, "upld_timer - Upload compl not received!!\n"); 34 + /* fake upload completion */ 35 + clear_bit(BNX2FC_FLAG_OFFLOADED, &tgt->flags); 36 + set_bit(BNX2FC_FLAG_UPLD_REQ_COMPL, &tgt->flags); 37 + wake_up_interruptible(&tgt->upld_wait); 38 + } 39 + 40 + static void bnx2fc_ofld_timer(unsigned long data) 41 + { 42 + 43 + struct bnx2fc_rport *tgt = (struct bnx2fc_rport *)data; 44 + 45 + BNX2FC_TGT_DBG(tgt, "entered bnx2fc_ofld_timer\n"); 46 + /* NOTE: This function should never be called, as 47 + * offload should never timeout 48 + */ 49 + /* 50 + * If the timer has expired, this session is dead 51 + * Clear offloaded flag and logout of this device. 52 + * Since OFFLOADED flag is cleared, this case 53 + * will be considered as offload error and the 54 + * port will be logged off, and conn_id, session 55 + * resources are freed up in bnx2fc_offload_session 56 + */ 57 + clear_bit(BNX2FC_FLAG_OFFLOADED, &tgt->flags); 58 + set_bit(BNX2FC_FLAG_OFLD_REQ_CMPL, &tgt->flags); 59 + wake_up_interruptible(&tgt->ofld_wait); 60 + } 61 + 62 + static void bnx2fc_offload_session(struct fcoe_port *port, 63 + struct bnx2fc_rport *tgt, 64 + struct fc_rport_priv *rdata) 65 + { 66 + struct fc_lport *lport = rdata->local_port; 67 + struct fc_rport *rport = rdata->rport; 68 + struct bnx2fc_hba *hba = port->priv; 69 + int rval; 70 + int i = 0; 71 + 72 + /* Initialize bnx2fc_rport */ 73 + /* NOTE: tgt is already bzero'd */ 74 + rval = bnx2fc_init_tgt(tgt, port, rdata); 75 + if (rval) { 76 + printk(KERN_ERR PFX "Failed to allocate conn id for " 77 + "port_id (%6x)\n", rport->port_id); 78 + goto ofld_err; 79 + } 80 + 81 + /* Allocate session resources */ 82 + rval = bnx2fc_alloc_session_resc(hba, tgt); 83 + if (rval) { 84 + printk(KERN_ERR PFX "Failed to allocate resources\n"); 85 + goto ofld_err; 86 + } 87 + 88 + /* 89 + * Initialize FCoE session offload process. 
90 + * Upon completion of offload process add 91 + * rport to list of rports 92 + */ 93 + retry_ofld: 94 + clear_bit(BNX2FC_FLAG_OFLD_REQ_CMPL, &tgt->flags); 95 + rval = bnx2fc_send_session_ofld_req(port, tgt); 96 + if (rval) { 97 + printk(KERN_ERR PFX "ofld_req failed\n"); 98 + goto ofld_err; 99 + } 100 + 101 + /* 102 + * wait for the session is offloaded and enabled. 3 Secs 103 + * should be ample time for this process to complete. 104 + */ 105 + setup_timer(&tgt->ofld_timer, bnx2fc_ofld_timer, (unsigned long)tgt); 106 + mod_timer(&tgt->ofld_timer, jiffies + BNX2FC_FW_TIMEOUT); 107 + 108 + wait_event_interruptible(tgt->ofld_wait, 109 + (test_bit( 110 + BNX2FC_FLAG_OFLD_REQ_CMPL, 111 + &tgt->flags))); 112 + if (signal_pending(current)) 113 + flush_signals(current); 114 + 115 + del_timer_sync(&tgt->ofld_timer); 116 + 117 + if (!(test_bit(BNX2FC_FLAG_OFFLOADED, &tgt->flags))) { 118 + if (test_and_clear_bit(BNX2FC_FLAG_CTX_ALLOC_FAILURE, 119 + &tgt->flags)) { 120 + BNX2FC_TGT_DBG(tgt, "ctx_alloc_failure, " 121 + "retry ofld..%d\n", i++); 122 + msleep_interruptible(1000); 123 + if (i > 3) { 124 + i = 0; 125 + goto ofld_err; 126 + } 127 + goto retry_ofld; 128 + } 129 + goto ofld_err; 130 + } 131 + if (bnx2fc_map_doorbell(tgt)) { 132 + printk(KERN_ERR PFX "map doorbell failed - no mem\n"); 133 + /* upload will take care of cleaning up sess resc */ 134 + lport->tt.rport_logoff(rdata); 135 + } 136 + return; 137 + 138 + ofld_err: 139 + /* couldn't offload the session. log off from this rport */ 140 + BNX2FC_TGT_DBG(tgt, "bnx2fc_offload_session - offload error\n"); 141 + lport->tt.rport_logoff(rdata); 142 + /* Free session resources */ 143 + bnx2fc_free_session_resc(hba, tgt); 144 + if (tgt->fcoe_conn_id != -1) 145 + bnx2fc_free_conn_id(hba, tgt->fcoe_conn_id); 146 + } 147 + 148 + void bnx2fc_flush_active_ios(struct bnx2fc_rport *tgt) 149 + { 150 + struct bnx2fc_cmd *io_req; 151 + struct list_head *list; 152 + struct list_head *tmp; 153 + int rc; 154 + int i = 0; 155 + BNX2FC_TGT_DBG(tgt, "Entered flush_active_ios - %d\n", 156 + tgt->num_active_ios.counter); 157 + 158 + spin_lock_bh(&tgt->tgt_lock); 159 + tgt->flush_in_prog = 1; 160 + 161 + list_for_each_safe(list, tmp, &tgt->active_cmd_queue) { 162 + i++; 163 + io_req = (struct bnx2fc_cmd *)list; 164 + list_del_init(&io_req->link); 165 + io_req->on_active_queue = 0; 166 + BNX2FC_IO_DBG(io_req, "cmd_queue cleanup\n"); 167 + 168 + if (cancel_delayed_work(&io_req->timeout_work)) { 169 + if (test_and_clear_bit(BNX2FC_FLAG_EH_ABORT, 170 + &io_req->req_flags)) { 171 + /* Handle eh_abort timeout */ 172 + BNX2FC_IO_DBG(io_req, "eh_abort for IO " 173 + "cleaned up\n"); 174 + complete(&io_req->tm_done); 175 + } 176 + kref_put(&io_req->refcount, 177 + bnx2fc_cmd_release); /* drop timer hold */ 178 + } 179 + 180 + set_bit(BNX2FC_FLAG_IO_COMPL, &io_req->req_flags); 181 + set_bit(BNX2FC_FLAG_IO_CLEANUP, &io_req->req_flags); 182 + rc = bnx2fc_initiate_cleanup(io_req); 183 + BUG_ON(rc); 184 + } 185 + 186 + list_for_each_safe(list, tmp, &tgt->els_queue) { 187 + i++; 188 + io_req = (struct bnx2fc_cmd *)list; 189 + list_del_init(&io_req->link); 190 + io_req->on_active_queue = 0; 191 + 192 + BNX2FC_IO_DBG(io_req, "els_queue cleanup\n"); 193 + 194 + if (cancel_delayed_work(&io_req->timeout_work)) 195 + kref_put(&io_req->refcount, 196 + bnx2fc_cmd_release); /* drop timer hold */ 197 + 198 + if ((io_req->cb_func) && (io_req->cb_arg)) { 199 + io_req->cb_func(io_req->cb_arg); 200 + io_req->cb_arg = NULL; 201 + } 202 + 203 + rc = bnx2fc_initiate_cleanup(io_req); 204 + 
BUG_ON(rc); 205 + } 206 + 207 + list_for_each_safe(list, tmp, &tgt->io_retire_queue) { 208 + i++; 209 + io_req = (struct bnx2fc_cmd *)list; 210 + list_del_init(&io_req->link); 211 + 212 + BNX2FC_IO_DBG(io_req, "retire_queue flush\n"); 213 + 214 + if (cancel_delayed_work(&io_req->timeout_work)) 215 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 216 + 217 + clear_bit(BNX2FC_FLAG_ISSUE_RRQ, &io_req->req_flags); 218 + } 219 + 220 + BNX2FC_TGT_DBG(tgt, "IOs flushed = %d\n", i); 221 + i = 0; 222 + spin_unlock_bh(&tgt->tgt_lock); 223 + /* wait for active_ios to go to 0 */ 224 + while ((tgt->num_active_ios.counter != 0) && (i++ < BNX2FC_WAIT_CNT)) 225 + msleep(25); 226 + if (tgt->num_active_ios.counter != 0) 227 + printk(KERN_ERR PFX "CLEANUP on port 0x%x:" 228 + " active_ios = %d\n", 229 + tgt->rdata->ids.port_id, tgt->num_active_ios.counter); 230 + spin_lock_bh(&tgt->tgt_lock); 231 + tgt->flush_in_prog = 0; 232 + spin_unlock_bh(&tgt->tgt_lock); 233 + } 234 + 235 + static void bnx2fc_upload_session(struct fcoe_port *port, 236 + struct bnx2fc_rport *tgt) 237 + { 238 + struct bnx2fc_hba *hba = port->priv; 239 + 240 + BNX2FC_TGT_DBG(tgt, "upload_session: active_ios = %d\n", 241 + tgt->num_active_ios.counter); 242 + 243 + /* 244 + * Called with hba->hba_mutex held. 245 + * This is a blocking call 246 + */ 247 + clear_bit(BNX2FC_FLAG_UPLD_REQ_COMPL, &tgt->flags); 248 + bnx2fc_send_session_disable_req(port, tgt); 249 + 250 + /* 251 + * wait for upload to complete. 3 Secs 252 + * should be sufficient time for this process to complete. 253 + */ 254 + setup_timer(&tgt->upld_timer, bnx2fc_upld_timer, (unsigned long)tgt); 255 + mod_timer(&tgt->upld_timer, jiffies + BNX2FC_FW_TIMEOUT); 256 + 257 + BNX2FC_TGT_DBG(tgt, "waiting for disable compl\n"); 258 + wait_event_interruptible(tgt->upld_wait, 259 + (test_bit( 260 + BNX2FC_FLAG_UPLD_REQ_COMPL, 261 + &tgt->flags))); 262 + 263 + if (signal_pending(current)) 264 + flush_signals(current); 265 + 266 + del_timer_sync(&tgt->upld_timer); 267 + 268 + /* 269 + * traverse thru the active_q and tmf_q and cleanup 270 + * IOs in these lists 271 + */ 272 + BNX2FC_TGT_DBG(tgt, "flush/upload - disable wait flags = 0x%lx\n", 273 + tgt->flags); 274 + bnx2fc_flush_active_ios(tgt); 275 + 276 + /* Issue destroy KWQE */ 277 + if (test_bit(BNX2FC_FLAG_DISABLED, &tgt->flags)) { 278 + BNX2FC_TGT_DBG(tgt, "send destroy req\n"); 279 + clear_bit(BNX2FC_FLAG_UPLD_REQ_COMPL, &tgt->flags); 280 + bnx2fc_send_session_destroy_req(hba, tgt); 281 + 282 + /* wait for destroy to complete */ 283 + setup_timer(&tgt->upld_timer, 284 + bnx2fc_upld_timer, (unsigned long)tgt); 285 + mod_timer(&tgt->upld_timer, jiffies + BNX2FC_FW_TIMEOUT); 286 + 287 + wait_event_interruptible(tgt->upld_wait, 288 + (test_bit( 289 + BNX2FC_FLAG_UPLD_REQ_COMPL, 290 + &tgt->flags))); 291 + 292 + if (!(test_bit(BNX2FC_FLAG_DESTROYED, &tgt->flags))) 293 + printk(KERN_ERR PFX "ERROR!! destroy timed out\n"); 294 + 295 + BNX2FC_TGT_DBG(tgt, "destroy wait complete flags = 0x%lx\n", 296 + tgt->flags); 297 + if (signal_pending(current)) 298 + flush_signals(current); 299 + 300 + del_timer_sync(&tgt->upld_timer); 301 + 302 + } else 303 + printk(KERN_ERR PFX "ERROR!! 
DISABLE req timed out, destroy" 304 + " not sent to FW\n"); 305 + 306 + /* Free session resources */ 307 + spin_lock_bh(&tgt->cq_lock); 308 + bnx2fc_free_session_resc(hba, tgt); 309 + bnx2fc_free_conn_id(hba, tgt->fcoe_conn_id); 310 + spin_unlock_bh(&tgt->cq_lock); 311 + } 312 + 313 + static int bnx2fc_init_tgt(struct bnx2fc_rport *tgt, 314 + struct fcoe_port *port, 315 + struct fc_rport_priv *rdata) 316 + { 317 + 318 + struct fc_rport *rport = rdata->rport; 319 + struct bnx2fc_hba *hba = port->priv; 320 + 321 + tgt->rport = rport; 322 + tgt->rdata = rdata; 323 + tgt->port = port; 324 + 325 + if (hba->num_ofld_sess >= BNX2FC_NUM_MAX_SESS) { 326 + BNX2FC_TGT_DBG(tgt, "exceeded max sessions. logoff this tgt\n"); 327 + tgt->fcoe_conn_id = -1; 328 + return -1; 329 + } 330 + 331 + tgt->fcoe_conn_id = bnx2fc_alloc_conn_id(hba, tgt); 332 + if (tgt->fcoe_conn_id == -1) 333 + return -1; 334 + 335 + BNX2FC_TGT_DBG(tgt, "init_tgt - conn_id = 0x%x\n", tgt->fcoe_conn_id); 336 + 337 + tgt->max_sqes = BNX2FC_SQ_WQES_MAX; 338 + tgt->max_rqes = BNX2FC_RQ_WQES_MAX; 339 + tgt->max_cqes = BNX2FC_CQ_WQES_MAX; 340 + 341 + /* Initialize the toggle bit */ 342 + tgt->sq_curr_toggle_bit = 1; 343 + tgt->cq_curr_toggle_bit = 1; 344 + tgt->sq_prod_idx = 0; 345 + tgt->cq_cons_idx = 0; 346 + tgt->rq_prod_idx = 0x8000; 347 + tgt->rq_cons_idx = 0; 348 + atomic_set(&tgt->num_active_ios, 0); 349 + 350 + tgt->work_time_slice = 2; 351 + 352 + spin_lock_init(&tgt->tgt_lock); 353 + spin_lock_init(&tgt->cq_lock); 354 + 355 + /* Initialize active_cmd_queue list */ 356 + INIT_LIST_HEAD(&tgt->active_cmd_queue); 357 + 358 + /* Initialize IO retire queue */ 359 + INIT_LIST_HEAD(&tgt->io_retire_queue); 360 + 361 + INIT_LIST_HEAD(&tgt->els_queue); 362 + 363 + /* Initialize active_tm_queue list */ 364 + INIT_LIST_HEAD(&tgt->active_tm_queue); 365 + 366 + init_waitqueue_head(&tgt->ofld_wait); 367 + init_waitqueue_head(&tgt->upld_wait); 368 + 369 + return 0; 370 + } 371 + 372 + /** 373 + * This event_callback is called after successful completion of libfc 374 + * initiated target login. bnx2fc can proceed with initiating the session 375 + * establishment. 376 + */ 377 + void bnx2fc_rport_event_handler(struct fc_lport *lport, 378 + struct fc_rport_priv *rdata, 379 + enum fc_rport_event event) 380 + { 381 + struct fcoe_port *port = lport_priv(lport); 382 + struct bnx2fc_hba *hba = port->priv; 383 + struct fc_rport *rport = rdata->rport; 384 + struct fc_rport_libfc_priv *rp; 385 + struct bnx2fc_rport *tgt; 386 + u32 port_id; 387 + 388 + BNX2FC_HBA_DBG(lport, "rport_event_hdlr: event = %d, port_id = 0x%x\n", 389 + event, rdata->ids.port_id); 390 + switch (event) { 391 + case RPORT_EV_READY: 392 + if (!rport) { 393 + printk(KERN_ALERT PFX "rport is NULL: ERROR!\n"); 394 + break; 395 + } 396 + 397 + rp = rport->dd_data; 398 + if (rport->port_id == FC_FID_DIR_SERV) { 399 + /* 400 + * bnx2fc_rport structure doesnt exist for 401 + * directory server. 402 + * We should not come here, as lport will 403 + * take care of fabric login 404 + */ 405 + printk(KERN_ALERT PFX "%x - rport_event_handler ERROR\n", 406 + rdata->ids.port_id); 407 + break; 408 + } 409 + 410 + if (rdata->spp_type != FC_TYPE_FCP) { 411 + BNX2FC_HBA_DBG(lport, "not FCP type target." 412 + " not offloading\n"); 413 + break; 414 + } 415 + if (!(rdata->ids.roles & FC_RPORT_ROLE_FCP_TARGET)) { 416 + BNX2FC_HBA_DBG(lport, "not FCP_TARGET" 417 + " not offloading\n"); 418 + break; 419 + } 420 + 421 + /* 422 + * Offlaod process is protected with hba mutex. 
423 + * Use the same mutex_lock for upload process too 424 + */ 425 + mutex_lock(&hba->hba_mutex); 426 + tgt = (struct bnx2fc_rport *)&rp[1]; 427 + 428 + /* This can happen when ADISC finds the same target */ 429 + if (test_bit(BNX2FC_FLAG_OFFLOADED, &tgt->flags)) { 430 + BNX2FC_TGT_DBG(tgt, "already offloaded\n"); 431 + mutex_unlock(&hba->hba_mutex); 432 + return; 433 + } 434 + 435 + /* 436 + * Offload the session. This is a blocking call, and will 437 + * wait until the session is offloaded. 438 + */ 439 + bnx2fc_offload_session(port, tgt, rdata); 440 + 441 + BNX2FC_TGT_DBG(tgt, "OFFLOAD num_ofld_sess = %d\n", 442 + hba->num_ofld_sess); 443 + 444 + if (test_bit(BNX2FC_FLAG_OFFLOADED, &tgt->flags)) { 445 + /* 446 + * Session is offloaded and enabled. Map 447 + * doorbell register for this target 448 + */ 449 + BNX2FC_TGT_DBG(tgt, "sess offloaded\n"); 450 + /* This counter is protected with hba mutex */ 451 + hba->num_ofld_sess++; 452 + 453 + set_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags); 454 + } else { 455 + /* 456 + * Offload or enable would have failed. 457 + * In offload/enable completion path, the 458 + * rport would have already been removed 459 + */ 460 + BNX2FC_TGT_DBG(tgt, "Port is being logged off as " 461 + "offloaded flag not set\n"); 462 + } 463 + mutex_unlock(&hba->hba_mutex); 464 + break; 465 + case RPORT_EV_LOGO: 466 + case RPORT_EV_FAILED: 467 + case RPORT_EV_STOP: 468 + port_id = rdata->ids.port_id; 469 + if (port_id == FC_FID_DIR_SERV) 470 + break; 471 + 472 + if (!rport) { 473 + printk(KERN_ALERT PFX "%x - rport not created Yet!!\n", 474 + port_id); 475 + break; 476 + } 477 + rp = rport->dd_data; 478 + mutex_lock(&hba->hba_mutex); 479 + /* 480 + * Perform session upload. Note that rdata->peers is already 481 + * removed from disc->rports list before we get this event. 482 + */ 483 + tgt = (struct bnx2fc_rport *)&rp[1]; 484 + 485 + if (!(test_bit(BNX2FC_FLAG_OFFLOADED, &tgt->flags))) { 486 + mutex_unlock(&hba->hba_mutex); 487 + break; 488 + } 489 + clear_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags); 490 + 491 + bnx2fc_upload_session(port, tgt); 492 + hba->num_ofld_sess--; 493 + BNX2FC_TGT_DBG(tgt, "UPLOAD num_ofld_sess = %d\n", 494 + hba->num_ofld_sess); 495 + /* 496 + * Try to wake up the linkdown wait thread. 
If num_ofld_sess
497 + * is 0, the waiting thread wakes up
498 + */
499 + if ((hba->wait_for_link_down) &&
500 + (hba->num_ofld_sess == 0)) {
501 + wake_up_interruptible(&hba->shutdown_wait);
502 + }
503 + if (test_bit(BNX2FC_FLAG_EXPL_LOGO, &tgt->flags)) {
504 + printk(KERN_ERR PFX "Relogin to the tgt\n");
505 + mutex_lock(&lport->disc.disc_mutex);
506 + lport->tt.rport_login(rdata);
507 + mutex_unlock(&lport->disc.disc_mutex);
508 + }
509 + mutex_unlock(&hba->hba_mutex);
510 +
511 + break;
512 +
513 + case RPORT_EV_NONE:
514 + break;
515 + }
516 + }
517 +
518 + /**
519 + * bnx2fc_tgt_lookup() - Lookup a bnx2fc_rport by port_id
520 + *
521 + * @port: fcoe_port struct to lookup the target port on
522 + * @port_id: The remote port ID to look up
523 + */
524 + struct bnx2fc_rport *bnx2fc_tgt_lookup(struct fcoe_port *port,
525 + u32 port_id)
526 + {
527 + struct bnx2fc_hba *hba = port->priv;
528 + struct bnx2fc_rport *tgt;
529 + struct fc_rport_priv *rdata;
530 + int i;
531 +
532 + for (i = 0; i < BNX2FC_NUM_MAX_SESS; i++) {
533 + tgt = hba->tgt_ofld_list[i];
534 + if ((tgt) && (tgt->port == port)) {
535 + rdata = tgt->rdata;
536 + if (rdata->ids.port_id == port_id) {
537 + if (rdata->rp_state != RPORT_ST_DELETE) {
538 + BNX2FC_TGT_DBG(tgt, "rport "
539 + "obtained\n");
540 + return tgt;
541 + } else {
542 + printk(KERN_ERR PFX "rport 0x%x "
543 + "is in DELETED state\n",
544 + rdata->ids.port_id);
545 + return NULL;
546 + }
547 + }
548 + }
549 + }
550 + return NULL;
551 + }
552 +
553 +
554 + /**
555 + * bnx2fc_alloc_conn_id - allocates FCOE Connection id
556 + *
557 + * @hba: pointer to adapter structure
558 + * @tgt: pointer to bnx2fc_rport structure
559 + */
560 + static u32 bnx2fc_alloc_conn_id(struct bnx2fc_hba *hba,
561 + struct bnx2fc_rport *tgt)
562 + {
563 + u32 conn_id, next;
564 +
565 + /* called with hba mutex held */
566 +
567 + /*
568 + * tgt_ofld_list access is synchronized using
569 + * both hba mutex and hba lock. At least hba mutex or
570 + * hba lock needs to be held for read access.
571 + */ 572 + 573 + spin_lock_bh(&hba->hba_lock); 574 + next = hba->next_conn_id; 575 + conn_id = hba->next_conn_id++; 576 + if (hba->next_conn_id == BNX2FC_NUM_MAX_SESS) 577 + hba->next_conn_id = 0; 578 + 579 + while (hba->tgt_ofld_list[conn_id] != NULL) { 580 + conn_id++; 581 + if (conn_id == BNX2FC_NUM_MAX_SESS) 582 + conn_id = 0; 583 + 584 + if (conn_id == next) { 585 + /* No free conn_ids are available */ 586 + spin_unlock_bh(&hba->hba_lock); 587 + return -1; 588 + } 589 + } 590 + hba->tgt_ofld_list[conn_id] = tgt; 591 + tgt->fcoe_conn_id = conn_id; 592 + spin_unlock_bh(&hba->hba_lock); 593 + return conn_id; 594 + } 595 + 596 + static void bnx2fc_free_conn_id(struct bnx2fc_hba *hba, u32 conn_id) 597 + { 598 + /* called with hba mutex held */ 599 + spin_lock_bh(&hba->hba_lock); 600 + hba->tgt_ofld_list[conn_id] = NULL; 601 + hba->next_conn_id = conn_id; 602 + spin_unlock_bh(&hba->hba_lock); 603 + } 604 + 605 + /** 606 + *bnx2fc_alloc_session_resc - Allocate qp resources for the session 607 + * 608 + */ 609 + static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba, 610 + struct bnx2fc_rport *tgt) 611 + { 612 + dma_addr_t page; 613 + int num_pages; 614 + u32 *pbl; 615 + 616 + /* Allocate and map SQ */ 617 + tgt->sq_mem_size = tgt->max_sqes * BNX2FC_SQ_WQE_SIZE; 618 + tgt->sq_mem_size = (tgt->sq_mem_size + (PAGE_SIZE - 1)) & PAGE_MASK; 619 + 620 + tgt->sq = dma_alloc_coherent(&hba->pcidev->dev, tgt->sq_mem_size, 621 + &tgt->sq_dma, GFP_KERNEL); 622 + if (!tgt->sq) { 623 + printk(KERN_ALERT PFX "unable to allocate SQ memory %d\n", 624 + tgt->sq_mem_size); 625 + goto mem_alloc_failure; 626 + } 627 + memset(tgt->sq, 0, tgt->sq_mem_size); 628 + 629 + /* Allocate and map CQ */ 630 + tgt->cq_mem_size = tgt->max_cqes * BNX2FC_CQ_WQE_SIZE; 631 + tgt->cq_mem_size = (tgt->cq_mem_size + (PAGE_SIZE - 1)) & PAGE_MASK; 632 + 633 + tgt->cq = dma_alloc_coherent(&hba->pcidev->dev, tgt->cq_mem_size, 634 + &tgt->cq_dma, GFP_KERNEL); 635 + if (!tgt->cq) { 636 + printk(KERN_ALERT PFX "unable to allocate CQ memory %d\n", 637 + tgt->cq_mem_size); 638 + goto mem_alloc_failure; 639 + } 640 + memset(tgt->cq, 0, tgt->cq_mem_size); 641 + 642 + /* Allocate and map RQ and RQ PBL */ 643 + tgt->rq_mem_size = tgt->max_rqes * BNX2FC_RQ_WQE_SIZE; 644 + tgt->rq_mem_size = (tgt->rq_mem_size + (PAGE_SIZE - 1)) & PAGE_MASK; 645 + 646 + tgt->rq = dma_alloc_coherent(&hba->pcidev->dev, tgt->rq_mem_size, 647 + &tgt->rq_dma, GFP_KERNEL); 648 + if (!tgt->rq) { 649 + printk(KERN_ALERT PFX "unable to allocate RQ memory %d\n", 650 + tgt->rq_mem_size); 651 + goto mem_alloc_failure; 652 + } 653 + memset(tgt->rq, 0, tgt->rq_mem_size); 654 + 655 + tgt->rq_pbl_size = (tgt->rq_mem_size / PAGE_SIZE) * sizeof(void *); 656 + tgt->rq_pbl_size = (tgt->rq_pbl_size + (PAGE_SIZE - 1)) & PAGE_MASK; 657 + 658 + tgt->rq_pbl = dma_alloc_coherent(&hba->pcidev->dev, tgt->rq_pbl_size, 659 + &tgt->rq_pbl_dma, GFP_KERNEL); 660 + if (!tgt->rq_pbl) { 661 + printk(KERN_ALERT PFX "unable to allocate RQ PBL %d\n", 662 + tgt->rq_pbl_size); 663 + goto mem_alloc_failure; 664 + } 665 + 666 + memset(tgt->rq_pbl, 0, tgt->rq_pbl_size); 667 + num_pages = tgt->rq_mem_size / PAGE_SIZE; 668 + page = tgt->rq_dma; 669 + pbl = (u32 *)tgt->rq_pbl; 670 + 671 + while (num_pages--) { 672 + *pbl = (u32)page; 673 + pbl++; 674 + *pbl = (u32)((u64)page >> 32); 675 + pbl++; 676 + page += PAGE_SIZE; 677 + } 678 + 679 + /* Allocate and map XFERQ */ 680 + tgt->xferq_mem_size = tgt->max_sqes * BNX2FC_XFERQ_WQE_SIZE; 681 + tgt->xferq_mem_size = (tgt->xferq_mem_size + (PAGE_SIZE - 1)) & 
682 + PAGE_MASK;
683 +
684 + tgt->xferq = dma_alloc_coherent(&hba->pcidev->dev, tgt->xferq_mem_size,
685 + &tgt->xferq_dma, GFP_KERNEL);
686 + if (!tgt->xferq) {
687 + printk(KERN_ALERT PFX "unable to allocate XFERQ %d\n",
688 + tgt->xferq_mem_size);
689 + goto mem_alloc_failure;
690 + }
691 + memset(tgt->xferq, 0, tgt->xferq_mem_size);
692 +
693 + /* Allocate and map CONFQ & CONFQ PBL */
694 + tgt->confq_mem_size = tgt->max_sqes * BNX2FC_CONFQ_WQE_SIZE;
695 + tgt->confq_mem_size = (tgt->confq_mem_size + (PAGE_SIZE - 1)) &
696 + PAGE_MASK;
697 +
698 + tgt->confq = dma_alloc_coherent(&hba->pcidev->dev, tgt->confq_mem_size,
699 + &tgt->confq_dma, GFP_KERNEL);
700 + if (!tgt->confq) {
701 + printk(KERN_ALERT PFX "unable to allocate CONFQ %d\n",
702 + tgt->confq_mem_size);
703 + goto mem_alloc_failure;
704 + }
705 + memset(tgt->confq, 0, tgt->confq_mem_size);
706 +
707 + tgt->confq_pbl_size =
708 + (tgt->confq_mem_size / PAGE_SIZE) * sizeof(void *);
709 + tgt->confq_pbl_size =
710 + (tgt->confq_pbl_size + (PAGE_SIZE - 1)) & PAGE_MASK;
711 +
712 + tgt->confq_pbl = dma_alloc_coherent(&hba->pcidev->dev,
713 + tgt->confq_pbl_size,
714 + &tgt->confq_pbl_dma, GFP_KERNEL);
715 + if (!tgt->confq_pbl) {
716 + printk(KERN_ALERT PFX "unable to allocate CONFQ PBL %d\n",
717 + tgt->confq_pbl_size);
718 + goto mem_alloc_failure;
719 + }
720 +
721 + memset(tgt->confq_pbl, 0, tgt->confq_pbl_size);
722 + num_pages = tgt->confq_mem_size / PAGE_SIZE;
723 + page = tgt->confq_dma;
724 + pbl = (u32 *)tgt->confq_pbl;
725 +
726 + while (num_pages--) {
727 + *pbl = (u32)page;
728 + pbl++;
729 + *pbl = (u32)((u64)page >> 32);
730 + pbl++;
731 + page += PAGE_SIZE;
732 + }
733 +
734 + /* Allocate and map ConnDB */
735 + tgt->conn_db_mem_size = sizeof(struct fcoe_conn_db);
736 +
737 + tgt->conn_db = dma_alloc_coherent(&hba->pcidev->dev,
738 + tgt->conn_db_mem_size,
739 + &tgt->conn_db_dma, GFP_KERNEL);
740 + if (!tgt->conn_db) {
741 + printk(KERN_ALERT PFX "unable to allocate conn_db %d\n",
742 + tgt->conn_db_mem_size);
743 + goto mem_alloc_failure;
744 + }
745 + memset(tgt->conn_db, 0, tgt->conn_db_mem_size);
746 +
747 +
748 + /* Allocate and map LCQ */
749 + tgt->lcq_mem_size = (tgt->max_sqes + 8) * BNX2FC_SQ_WQE_SIZE;
750 + tgt->lcq_mem_size = (tgt->lcq_mem_size + (PAGE_SIZE - 1)) &
751 + PAGE_MASK;
752 +
753 + tgt->lcq = dma_alloc_coherent(&hba->pcidev->dev, tgt->lcq_mem_size,
754 + &tgt->lcq_dma, GFP_KERNEL);
755 +
756 + if (!tgt->lcq) {
757 + printk(KERN_ALERT PFX "unable to allocate lcq %d\n",
758 + tgt->lcq_mem_size);
759 + goto mem_alloc_failure;
760 + }
761 + memset(tgt->lcq, 0, tgt->lcq_mem_size);
762 +
763 + /* Arm CQ */
764 + tgt->conn_db->cq_arm.lo = -1;
765 + tgt->conn_db->rq_prod = 0x8000;
766 +
767 + return 0;
768 +
769 + mem_alloc_failure:
770 + bnx2fc_free_session_resc(hba, tgt);
771 + bnx2fc_free_conn_id(hba, tgt->fcoe_conn_id);
772 + return -ENOMEM;
773 + }
774 +
775 + /**
776 + * bnx2fc_free_session_resc - free qp resources for the session
777 + *
778 + * @hba: adapter structure pointer
779 + * @tgt: bnx2fc_rport structure pointer
780 + *
781 + * Free QP resources - SQ/RQ/CQ/XFERQ memory and PBL
782 + */
783 + static void bnx2fc_free_session_resc(struct bnx2fc_hba *hba,
784 + struct bnx2fc_rport *tgt)
785 + {
786 + BNX2FC_TGT_DBG(tgt, "Freeing up session resources\n");
787 +
788 + if (tgt->ctx_base) {
789 + iounmap(tgt->ctx_base);
790 + tgt->ctx_base = NULL;
791 + }
792 + /* Free LCQ */
793 + if (tgt->lcq) {
794 + dma_free_coherent(&hba->pcidev->dev, tgt->lcq_mem_size,
795 + tgt->lcq, tgt->lcq_dma);
796 + tgt->lcq = NULL; 797 + } 798 + /* Free connDB */ 799 + if (tgt->conn_db) { 800 + dma_free_coherent(&hba->pcidev->dev, tgt->conn_db_mem_size, 801 + tgt->conn_db, tgt->conn_db_dma); 802 + tgt->conn_db = NULL; 803 + } 804 + /* Free confq and confq pbl */ 805 + if (tgt->confq_pbl) { 806 + dma_free_coherent(&hba->pcidev->dev, tgt->confq_pbl_size, 807 + tgt->confq_pbl, tgt->confq_pbl_dma); 808 + tgt->confq_pbl = NULL; 809 + } 810 + if (tgt->confq) { 811 + dma_free_coherent(&hba->pcidev->dev, tgt->confq_mem_size, 812 + tgt->confq, tgt->confq_dma); 813 + tgt->confq = NULL; 814 + } 815 + /* Free XFERQ */ 816 + if (tgt->xferq) { 817 + dma_free_coherent(&hba->pcidev->dev, tgt->xferq_mem_size, 818 + tgt->xferq, tgt->xferq_dma); 819 + tgt->xferq = NULL; 820 + } 821 + /* Free RQ PBL and RQ */ 822 + if (tgt->rq_pbl) { 823 + dma_free_coherent(&hba->pcidev->dev, tgt->rq_pbl_size, 824 + tgt->rq_pbl, tgt->rq_pbl_dma); 825 + tgt->rq_pbl = NULL; 826 + } 827 + if (tgt->rq) { 828 + dma_free_coherent(&hba->pcidev->dev, tgt->rq_mem_size, 829 + tgt->rq, tgt->rq_dma); 830 + tgt->rq = NULL; 831 + } 832 + /* Free CQ */ 833 + if (tgt->cq) { 834 + dma_free_coherent(&hba->pcidev->dev, tgt->cq_mem_size, 835 + tgt->cq, tgt->cq_dma); 836 + tgt->cq = NULL; 837 + } 838 + /* Free SQ */ 839 + if (tgt->sq) { 840 + dma_free_coherent(&hba->pcidev->dev, tgt->sq_mem_size, 841 + tgt->sq, tgt->sq_dma); 842 + tgt->sq = NULL; 843 + } 844 + }
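
The bnx2fc_alloc_session_resc() code above builds a page buffer list (PBL) for the RQ and CONFQ: the 64-bit DMA address of every backing page is stored as two consecutive 32-bit words, low word first, one pair per page, so the chip can walk a queue that is only contiguous at page granularity. A minimal stand-alone sketch of that layout, with a fabricated address standing in for a dma_alloc_coherent() result (fill_pbl is an illustrative name, not a driver function):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u

    /* Two u32 PBL entries (low word, then high word) per backing page. */
    static void fill_pbl(uint32_t *pbl, uint64_t base, int num_pages)
    {
            uint64_t page = base;

            while (num_pages--) {
                    *pbl++ = (uint32_t)page;         /* low 32 bits */
                    *pbl++ = (uint32_t)(page >> 32); /* high 32 bits */
                    page += PAGE_SIZE;
            }
    }

    int main(void)
    {
            uint32_t pbl[8];                        /* room for 4 pages */
            uint64_t queue_dma = 0x123456789000ULL; /* pretend DMA base */
            int i;

            fill_pbl(pbl, queue_dma, 4);
            for (i = 0; i < 4; i++)
                    printf("page %d: lo=0x%08x hi=0x%08x\n", i,
                           (unsigned int)pbl[2 * i],
                           (unsigned int)pbl[2 * i + 1]);
            return 0;
    }
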
+3 -1
drivers/scsi/bnx2i/bnx2i.h
··· 360 360 #define ADAPTER_STATE_LINK_DOWN 2 361 361 #define ADAPTER_STATE_INIT_FAILED 31 362 362 unsigned int mtu_supported; 363 - #define BNX2I_MAX_MTU_SUPPORTED 1500 363 + #define BNX2I_MAX_MTU_SUPPORTED 9000 364 364 365 365 struct Scsi_Host *shost; 366 366 ··· 751 751 struct iscsi_task *mtask); 752 752 extern int bnx2i_send_iscsi_tmf(struct bnx2i_conn *conn, 753 753 struct iscsi_task *mtask); 754 + extern int bnx2i_send_iscsi_text(struct bnx2i_conn *conn, 755 + struct iscsi_task *mtask); 754 756 extern int bnx2i_send_iscsi_scsicmd(struct bnx2i_conn *conn, 755 757 struct bnx2i_cmd *cmnd); 756 758 extern int bnx2i_send_iscsi_nopout(struct bnx2i_conn *conn,
+122 -3
drivers/scsi/bnx2i/bnx2i_hwi.c
··· 445 445 }
446 446
447 447 /**
448 + * bnx2i_send_iscsi_text - post iSCSI text WQE to hardware
449 + * @conn: iscsi connection
450 + * @mtask: driver command structure which is requesting
451 + * a WQE to be sent to the chip for further processing
452 + *
453 + * prepare and post an iSCSI Text request WQE to CNIC firmware
454 + */
455 + int bnx2i_send_iscsi_text(struct bnx2i_conn *bnx2i_conn,
456 + struct iscsi_task *mtask)
457 + {
458 + struct bnx2i_cmd *bnx2i_cmd;
459 + struct bnx2i_text_request *text_wqe;
460 + struct iscsi_text *text_hdr;
461 + u32 dword;
462 +
463 + bnx2i_cmd = (struct bnx2i_cmd *)mtask->dd_data;
464 + text_hdr = (struct iscsi_text *)mtask->hdr;
465 + text_wqe = (struct bnx2i_text_request *) bnx2i_conn->ep->qp.sq_prod_qe;
466 +
467 + memset(text_wqe, 0, sizeof(struct bnx2i_text_request));
468 +
469 + text_wqe->op_code = text_hdr->opcode;
470 + text_wqe->op_attr = text_hdr->flags;
471 + text_wqe->data_length = ntoh24(text_hdr->dlength);
472 + text_wqe->itt = mtask->itt |
473 + (ISCSI_TASK_TYPE_MPATH << ISCSI_TEXT_REQUEST_TYPE_SHIFT);
474 + text_wqe->ttt = be32_to_cpu(text_hdr->ttt);
475 +
476 + text_wqe->cmd_sn = be32_to_cpu(text_hdr->cmdsn);
477 +
478 + text_wqe->resp_bd_list_addr_lo = (u32) bnx2i_conn->gen_pdu.resp_bd_dma;
479 + text_wqe->resp_bd_list_addr_hi =
480 + (u32) ((u64) bnx2i_conn->gen_pdu.resp_bd_dma >> 32);
481 +
482 + dword = ((1 << ISCSI_TEXT_REQUEST_NUM_RESP_BDS_SHIFT) |
483 + (bnx2i_conn->gen_pdu.resp_buf_size <<
484 + ISCSI_TEXT_REQUEST_RESP_BUFFER_LENGTH_SHIFT));
485 + text_wqe->resp_buffer = dword;
486 + text_wqe->bd_list_addr_lo = (u32) bnx2i_conn->gen_pdu.req_bd_dma;
487 + text_wqe->bd_list_addr_hi =
488 + (u32) ((u64) bnx2i_conn->gen_pdu.req_bd_dma >> 32);
489 + text_wqe->num_bds = 1;
490 + text_wqe->cq_index = 0; /* CQ# used for completion, 5771x only */
491 +
492 + bnx2i_ring_dbell_update_sq_params(bnx2i_conn, 1);
493 + return 0;
494 + }
495 +
496 +
497 + /**
448 498 * bnx2i_send_iscsi_scsicmd - post iSCSI scsicmd request WQE to hardware
449 499 * @conn: iscsi connection
450 500 * @cmd: driver command structure which is requesting
··· 540 490 bnx2i_cmd = (struct bnx2i_cmd *)task->dd_data;
541 491 nopout_hdr = (struct iscsi_nopout *)task->hdr;
542 492 nopout_wqe = (struct bnx2i_nop_out_request *)ep->qp.sq_prod_qe;
493 +
494 + memset(nopout_wqe, 0x00, sizeof(struct bnx2i_nop_out_request));
495 +
543 496 nopout_wqe->op_code = nopout_hdr->opcode;
544 497 nopout_wqe->op_attr = ISCSI_FLAG_CMD_FINAL;
545 498 memcpy(nopout_wqe->lun, nopout_hdr->lun, 8);
546 499
547 500 if (test_bit(BNX2I_NX2_DEV_57710, &ep->hba->cnic_dev_type)) {
548 - u32 tmp = nopout_hdr->lun[0];
501 + u32 tmp = nopout_wqe->lun[0];
549 502 /* 57710 requires LUN field to be swapped */
550 - nopout_hdr->lun[0] = nopout_hdr->lun[1];
551 - nopout_hdr->lun[1] = tmp;
503 + nopout_wqe->lun[0] = nopout_wqe->lun[1];
504 + nopout_wqe->lun[1] = tmp;
552 505 }
553 506
554 507 nopout_wqe->itt = ((u16)task->itt |
··· 1478 1425 return 0;
1479 1426 }
1480 1427
1428 +
1429 + /**
1430 + * bnx2i_process_text_resp - this function handles iscsi text response
1431 + * @session: iscsi session pointer
1432 + * @bnx2i_conn: iscsi connection pointer
1433 + * @cqe: pointer to newly DMA'ed CQE entry for processing
1434 + *
1435 + * process iSCSI Text Response CQE & complete it to open-iscsi user daemon
1436 + */
1437 + static int bnx2i_process_text_resp(struct iscsi_session *session,
1438 + struct bnx2i_conn *bnx2i_conn,
1439 + struct cqe *cqe)
1440 + {
1441 + struct iscsi_conn *conn = 
bnx2i_conn->cls_conn->dd_data; 1442 + struct iscsi_task *task; 1443 + struct bnx2i_text_response *text; 1444 + struct iscsi_text_rsp *resp_hdr; 1445 + int pld_len; 1446 + int pad_len; 1447 + 1448 + text = (struct bnx2i_text_response *) cqe; 1449 + spin_lock(&session->lock); 1450 + task = iscsi_itt_to_task(conn, text->itt & ISCSI_LOGIN_RESPONSE_INDEX); 1451 + if (!task) 1452 + goto done; 1453 + 1454 + resp_hdr = (struct iscsi_text_rsp *)&bnx2i_conn->gen_pdu.resp_hdr; 1455 + memset(resp_hdr, 0, sizeof(struct iscsi_hdr)); 1456 + resp_hdr->opcode = text->op_code; 1457 + resp_hdr->flags = text->response_flags; 1458 + resp_hdr->hlength = 0; 1459 + 1460 + hton24(resp_hdr->dlength, text->data_length); 1461 + resp_hdr->itt = task->hdr->itt; 1462 + resp_hdr->ttt = cpu_to_be32(text->ttt); 1463 + resp_hdr->statsn = task->hdr->exp_statsn; 1464 + resp_hdr->exp_cmdsn = cpu_to_be32(text->exp_cmd_sn); 1465 + resp_hdr->max_cmdsn = cpu_to_be32(text->max_cmd_sn); 1466 + pld_len = text->data_length; 1467 + bnx2i_conn->gen_pdu.resp_wr_ptr = bnx2i_conn->gen_pdu.resp_buf + 1468 + pld_len; 1469 + pad_len = 0; 1470 + if (pld_len & 0x3) 1471 + pad_len = 4 - (pld_len % 4); 1472 + 1473 + if (pad_len) { 1474 + int i = 0; 1475 + for (i = 0; i < pad_len; i++) { 1476 + bnx2i_conn->gen_pdu.resp_wr_ptr[0] = 0; 1477 + bnx2i_conn->gen_pdu.resp_wr_ptr++; 1478 + } 1479 + } 1480 + __iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr, 1481 + bnx2i_conn->gen_pdu.resp_buf, 1482 + bnx2i_conn->gen_pdu.resp_wr_ptr - 1483 + bnx2i_conn->gen_pdu.resp_buf); 1484 + done: 1485 + spin_unlock(&session->lock); 1486 + return 0; 1487 + } 1488 + 1489 + 1481 1490 /** 1482 1491 * bnx2i_process_tmf_resp - this function handles iscsi TMF response 1483 1492 * @session: iscsi session pointer ··· 1880 1765 case ISCSI_OP_SCSI_TMFUNC_RSP: 1881 1766 bnx2i_process_tmf_resp(session, bnx2i_conn, 1882 1767 qp->cq_cons_qe); 1768 + break; 1769 + case ISCSI_OP_TEXT_RSP: 1770 + bnx2i_process_text_resp(session, bnx2i_conn, 1771 + qp->cq_cons_qe); 1883 1772 break; 1884 1773 case ISCSI_OP_LOGOUT_RSP: 1885 1774 bnx2i_process_logout_resp(session, bnx2i_conn,
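
bnx2i_process_text_resp() above zero-fills the received payload out to the next 4-byte boundary before handing it to __iscsi_complete_pdu(), since iSCSI data segments are padded to word multiples. The pad computation reduces to a small helper; a stand-alone sketch (illustrative, not driver code):

    #include <stdio.h>

    /* Zero-padding bytes needed to round len up to a 4-byte boundary. */
    static int iscsi_pad_len(int len)
    {
            return (len & 0x3) ? 4 - (len % 4) : 0;
    }

    int main(void)
    {
            int len;

            for (len = 20; len <= 24; len++)
                    printf("payload %2d -> pad %d\n", len, iscsi_pad_len(len));
            return 0;
    }

The same result can be written branch-free as (-len) & 3.
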
+23 -6
drivers/scsi/bnx2i/bnx2i_init.c
··· 18 18 static u32 adapter_count; 19 19 20 20 #define DRV_MODULE_NAME "bnx2i" 21 - #define DRV_MODULE_VERSION "2.6.2.2" 22 - #define DRV_MODULE_RELDATE "Nov 23, 2010" 21 + #define DRV_MODULE_VERSION "2.6.2.3" 22 + #define DRV_MODULE_RELDATE "Dec 31, 2010" 23 23 24 24 static char version[] __devinitdata = 25 25 "Broadcom NetXtreme II iSCSI Driver " DRV_MODULE_NAME \ ··· 29 29 MODULE_AUTHOR("Anil Veerabhadrappa <anilgv@broadcom.com> and " 30 30 "Eddie Wai <eddie.wai@broadcom.com>"); 31 31 32 - MODULE_DESCRIPTION("Broadcom NetXtreme II BCM5706/5708/5709/57710/57711" 32 + MODULE_DESCRIPTION("Broadcom NetXtreme II BCM5706/5708/5709/57710/57711/57712" 33 33 " iSCSI Driver"); 34 34 MODULE_LICENSE("GPL"); 35 35 MODULE_VERSION(DRV_MODULE_VERSION); ··· 88 88 (hba->pci_did == PCI_DEVICE_ID_NX2_5709S)) { 89 89 set_bit(BNX2I_NX2_DEV_5709, &hba->cnic_dev_type); 90 90 hba->mail_queue_access = BNX2I_MQ_BIN_MODE; 91 - } else if (hba->pci_did == PCI_DEVICE_ID_NX2_57710 || 92 - hba->pci_did == PCI_DEVICE_ID_NX2_57711 || 93 - hba->pci_did == PCI_DEVICE_ID_NX2_57711E) 91 + } else if (hba->pci_did == PCI_DEVICE_ID_NX2_57710 || 92 + hba->pci_did == PCI_DEVICE_ID_NX2_57711 || 93 + hba->pci_did == PCI_DEVICE_ID_NX2_57711E || 94 + hba->pci_did == PCI_DEVICE_ID_NX2_57712 || 95 + hba->pci_did == PCI_DEVICE_ID_NX2_57712E) 94 96 set_bit(BNX2I_NX2_DEV_57710, &hba->cnic_dev_type); 95 97 else 96 98 printk(KERN_ALERT "bnx2i: unknown device, 0x%x\n", ··· 163 161 struct bnx2i_hba *hba = handle; 164 162 int i = HZ; 165 163 164 + if (!hba->cnic->max_iscsi_conn) { 165 + printk(KERN_ALERT "bnx2i: dev %s does not support " 166 + "iSCSI\n", hba->netdev->name); 167 + 168 + if (test_bit(BNX2I_CNIC_REGISTERED, &hba->reg_with_cnic)) { 169 + mutex_lock(&bnx2i_dev_lock); 170 + list_del_init(&hba->link); 171 + adapter_count--; 172 + hba->cnic->unregister_device(hba->cnic, CNIC_ULP_ISCSI); 173 + clear_bit(BNX2I_CNIC_REGISTERED, &hba->reg_with_cnic); 174 + mutex_unlock(&bnx2i_dev_lock); 175 + bnx2i_free_hba(hba); 176 + } 177 + return; 178 + } 166 179 bnx2i_send_fw_iscsi_init_msg(hba); 167 180 while (!test_bit(ADAPTER_STATE_UP, &hba->adapter_state) && i--) 168 181 msleep(BNX2I_INIT_POLL_TIME);
+28 -25
drivers/scsi/bnx2i/bnx2i_iscsi.c
··· 1092 1092 case ISCSI_OP_SCSI_TMFUNC: 1093 1093 rc = bnx2i_send_iscsi_tmf(bnx2i_conn, task); 1094 1094 break; 1095 + case ISCSI_OP_TEXT: 1096 + rc = bnx2i_send_iscsi_text(bnx2i_conn, task); 1097 + break; 1095 1098 default: 1096 1099 iscsi_conn_printk(KERN_ALERT, bnx2i_conn->cls_conn->dd_data, 1097 1100 "send_gen: unsupported op 0x%x\n", ··· 1458 1455 1459 1456 1460 1457 /** 1461 - * bnx2i_conn_get_param - return iscsi connection parameter to caller 1462 - * @cls_conn: pointer to iscsi cls conn 1458 + * bnx2i_ep_get_param - return iscsi ep parameter to caller 1459 + * @ep: pointer to iscsi endpoint 1463 1460 * @param: parameter type identifier 1464 1461 * @buf: buffer pointer 1465 1462 * 1466 - * returns iSCSI connection parameters 1463 + * returns iSCSI ep parameters 1467 1464 */ 1468 - static int bnx2i_conn_get_param(struct iscsi_cls_conn *cls_conn, 1469 - enum iscsi_param param, char *buf) 1465 + static int bnx2i_ep_get_param(struct iscsi_endpoint *ep, 1466 + enum iscsi_param param, char *buf) 1470 1467 { 1471 - struct iscsi_conn *conn = cls_conn->dd_data; 1472 - struct bnx2i_conn *bnx2i_conn = conn->dd_data; 1473 - int len = 0; 1468 + struct bnx2i_endpoint *bnx2i_ep = ep->dd_data; 1469 + struct bnx2i_hba *hba = bnx2i_ep->hba; 1470 + int len = -ENOTCONN; 1474 1471 1475 - if (!(bnx2i_conn && bnx2i_conn->ep && bnx2i_conn->ep->hba)) 1476 - goto out; 1472 + if (!hba) 1473 + return -ENOTCONN; 1477 1474 1478 1475 switch (param) { 1479 1476 case ISCSI_PARAM_CONN_PORT: 1480 - mutex_lock(&bnx2i_conn->ep->hba->net_dev_lock); 1481 - if (bnx2i_conn->ep->cm_sk) 1482 - len = sprintf(buf, "%hu\n", 1483 - bnx2i_conn->ep->cm_sk->dst_port); 1484 - mutex_unlock(&bnx2i_conn->ep->hba->net_dev_lock); 1477 + mutex_lock(&hba->net_dev_lock); 1478 + if (bnx2i_ep->cm_sk) 1479 + len = sprintf(buf, "%hu\n", bnx2i_ep->cm_sk->dst_port); 1480 + mutex_unlock(&hba->net_dev_lock); 1485 1481 break; 1486 1482 case ISCSI_PARAM_CONN_ADDRESS: 1487 - mutex_lock(&bnx2i_conn->ep->hba->net_dev_lock); 1488 - if (bnx2i_conn->ep->cm_sk) 1489 - len = sprintf(buf, "%pI4\n", 1490 - &bnx2i_conn->ep->cm_sk->dst_ip); 1491 - mutex_unlock(&bnx2i_conn->ep->hba->net_dev_lock); 1483 + mutex_lock(&hba->net_dev_lock); 1484 + if (bnx2i_ep->cm_sk) 1485 + len = sprintf(buf, "%pI4\n", &bnx2i_ep->cm_sk->dst_ip); 1486 + mutex_unlock(&hba->net_dev_lock); 1492 1487 break; 1493 1488 default: 1494 - return iscsi_conn_get_param(cls_conn, param, buf); 1489 + return -ENOSYS; 1495 1490 } 1496 - out: 1491 + 1497 1492 return len; 1498 1493 } 1499 1494 ··· 1936 1935 cnic_dev_10g = 1; 1937 1936 1938 1937 switch (bnx2i_ep->state) { 1939 - case EP_STATE_CONNECT_FAILED: 1940 1938 case EP_STATE_CLEANUP_FAILED: 1941 1939 case EP_STATE_OFLD_FAILED: 1942 1940 case EP_STATE_DISCONN_TIMEDOUT: 1943 1941 ret = 0; 1944 1942 break; 1945 1943 case EP_STATE_CONNECT_START: 1944 + case EP_STATE_CONNECT_FAILED: 1946 1945 case EP_STATE_CONNECT_COMPL: 1947 1946 case EP_STATE_ULP_UPDATE_START: 1948 1947 case EP_STATE_ULP_UPDATE_COMPL: ··· 2168 2167 .name = "bnx2i", 2169 2168 .caps = CAP_RECOVERY_L0 | CAP_HDRDGST | 2170 2169 CAP_MULTI_R2T | CAP_DATADGST | 2171 - CAP_DATA_PATH_OFFLOAD, 2170 + CAP_DATA_PATH_OFFLOAD | 2171 + CAP_TEXT_NEGO, 2172 2172 .param_mask = ISCSI_MAX_RECV_DLENGTH | 2173 2173 ISCSI_MAX_XMIT_DLENGTH | 2174 2174 ISCSI_HDRDGST_EN | ··· 2202 2200 .bind_conn = bnx2i_conn_bind, 2203 2201 .destroy_conn = bnx2i_conn_destroy, 2204 2202 .set_param = iscsi_set_param, 2205 - .get_conn_param = bnx2i_conn_get_param, 2203 + .get_conn_param = iscsi_conn_get_param, 2206 2204 
.get_session_param = iscsi_session_get_param, 2207 2205 .get_host_param = bnx2i_host_get_param, 2208 2206 .start_conn = bnx2i_conn_start, ··· 2211 2209 .xmit_task = bnx2i_task_xmit, 2212 2210 .get_stats = bnx2i_conn_get_stats, 2213 2211 /* TCP connect - disconnect - option-2 interface calls */ 2212 + .get_ep_param = bnx2i_ep_get_param, 2214 2213 .ep_connect = bnx2i_ep_connect, 2215 2214 .ep_poll = bnx2i_ep_poll, 2216 2215 .ep_disconnect = bnx2i_ep_disconnect,
+10 -46
drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
··· 105 105 /* owner and name should be set already */ 106 106 .caps = CAP_RECOVERY_L0 | CAP_MULTI_R2T | CAP_HDRDGST 107 107 | CAP_DATADGST | CAP_DIGEST_OFFLOAD | 108 - CAP_PADDING_OFFLOAD, 108 + CAP_PADDING_OFFLOAD | CAP_TEXT_NEGO, 109 109 .param_mask = ISCSI_MAX_RECV_DLENGTH | ISCSI_MAX_XMIT_DLENGTH | 110 110 ISCSI_HDRDGST_EN | ISCSI_DATADGST_EN | 111 111 ISCSI_INITIAL_R2T_EN | ISCSI_MAX_R2T | ··· 137 137 .destroy_conn = iscsi_tcp_conn_teardown, 138 138 .start_conn = iscsi_conn_start, 139 139 .stop_conn = iscsi_conn_stop, 140 - .get_conn_param = cxgbi_get_conn_param, 140 + .get_conn_param = iscsi_conn_get_param, 141 141 .set_param = cxgbi_set_conn_param, 142 142 .get_stats = cxgbi_get_conn_stats, 143 143 /* pdu xmit req from user space */ ··· 152 152 .xmit_pdu = cxgbi_conn_xmit_pdu, 153 153 .parse_pdu_itt = cxgbi_parse_pdu_itt, 154 154 /* TCP connect/disconnect */ 155 + .get_ep_param = cxgbi_get_ep_param, 155 156 .ep_connect = cxgbi_ep_connect, 156 157 .ep_poll = cxgbi_ep_poll, 157 158 .ep_disconnect = cxgbi_ep_disconnect, ··· 1109 1108 csk, idx, npods, gl); 1110 1109 1111 1110 for (i = 0; i < npods; i++, idx++, pm_addr += PPOD_SIZE) { 1112 - struct sk_buff *skb = ddp->gl_skb[idx]; 1111 + struct sk_buff *skb = alloc_wr(sizeof(struct ulp_mem_io) + 1112 + PPOD_SIZE, 0, GFP_ATOMIC); 1113 1113 1114 - /* hold on to the skb until we clear the ddp mapping */ 1115 - skb_get(skb); 1114 + if (!skb) 1115 + return -ENOMEM; 1116 1116 1117 1117 ulp_mem_io_set_hdr(skb, pm_addr); 1118 1118 cxgbi_ddp_ppod_set((struct cxgbi_pagepod *)(skb->head + ··· 1138 1136 cdev, idx, npods, tag); 1139 1137 1140 1138 for (i = 0; i < npods; i++, idx++, pm_addr += PPOD_SIZE) { 1141 - struct sk_buff *skb = ddp->gl_skb[idx]; 1139 + struct sk_buff *skb = alloc_wr(sizeof(struct ulp_mem_io) + 1140 + PPOD_SIZE, 0, GFP_ATOMIC); 1142 1141 1143 1142 if (!skb) { 1144 - pr_err("tag 0x%x, 0x%x, %d/%u, skb NULL.\n", 1143 + pr_err("tag 0x%x, 0x%x, %d/%u, skb OOM.\n", 1145 1144 tag, idx, i, npods); 1146 1145 continue; 1147 1146 } 1148 - ddp->gl_skb[idx] = NULL; 1149 - memset(skb->head + sizeof(struct ulp_mem_io), 0, PPOD_SIZE); 1150 1147 ulp_mem_io_set_hdr(skb, pm_addr); 1151 1148 skb->priority = CPL_PRIORITY_CONTROL; 1152 1149 cxgb3_ofld_send(cdev->lldev, skb); 1153 1150 } 1154 - } 1155 - 1156 - static void ddp_free_gl_skb(struct cxgbi_ddp_info *ddp, int idx, int cnt) 1157 - { 1158 - int i; 1159 - 1160 - log_debug(1 << CXGBI_DBG_DDP, 1161 - "ddp 0x%p, idx %d, cnt %d.\n", ddp, idx, cnt); 1162 - 1163 - for (i = 0; i < cnt; i++, idx++) 1164 - if (ddp->gl_skb[idx]) { 1165 - kfree_skb(ddp->gl_skb[idx]); 1166 - ddp->gl_skb[idx] = NULL; 1167 - } 1168 - } 1169 - 1170 - static int ddp_alloc_gl_skb(struct cxgbi_ddp_info *ddp, int idx, 1171 - int cnt, gfp_t gfp) 1172 - { 1173 - int i; 1174 - 1175 - log_debug(1 << CXGBI_DBG_DDP, 1176 - "ddp 0x%p, idx %d, cnt %d.\n", ddp, idx, cnt); 1177 - 1178 - for (i = 0; i < cnt; i++) { 1179 - struct sk_buff *skb = alloc_wr(sizeof(struct ulp_mem_io) + 1180 - PPOD_SIZE, 0, gfp); 1181 - if (skb) 1182 - ddp->gl_skb[idx + i] = skb; 1183 - else { 1184 - ddp_free_gl_skb(ddp, idx, i); 1185 - return -ENOMEM; 1186 - } 1187 - } 1188 - return 0; 1189 1151 } 1190 1152 1191 1153 static int ddp_setup_conn_pgidx(struct cxgbi_sock *csk, ··· 1282 1316 } 1283 1317 tdev->ulp_iscsi = ddp; 1284 1318 1285 - cdev->csk_ddp_free_gl_skb = ddp_free_gl_skb; 1286 - cdev->csk_ddp_alloc_gl_skb = ddp_alloc_gl_skb; 1287 1319 cdev->csk_ddp_setup_digest = ddp_setup_conn_digest; 1288 1320 cdev->csk_ddp_setup_pgidx = ddp_setup_conn_pgidx; 1289 
1321 cdev->csk_ddp_set = ddp_set_map;
+15 -4
drivers/scsi/cxgbi/cxgb3i/cxgb3i.h
··· 24 24 25 25 extern cxgb3_cpl_handler_func cxgb3i_cpl_handlers[NUM_CPL_CMDS]; 26 26 27 - #define cxgb3i_get_private_ipv4addr(ndev) \ 28 - (((struct port_info *)(netdev_priv(ndev)))->iscsi_ipv4addr) 29 - #define cxgb3i_set_private_ipv4addr(ndev, addr) \ 30 - (((struct port_info *)(netdev_priv(ndev)))->iscsi_ipv4addr) = addr 27 + static inline unsigned int cxgb3i_get_private_ipv4addr(struct net_device *ndev) 28 + { 29 + return ((struct port_info *)(netdev_priv(ndev)))->iscsi_ipv4addr; 30 + } 31 + 32 + static inline void cxgb3i_set_private_ipv4addr(struct net_device *ndev, 33 + unsigned int addr) 34 + { 35 + struct port_info *pi = (struct port_info *)netdev_priv(ndev); 36 + 37 + pi->iscsic.flags = addr ? 1 : 0; 38 + pi->iscsi_ipv4addr = addr; 39 + if (addr) 40 + memcpy(pi->iscsic.mac_addr, ndev->dev_addr, ETH_ALEN); 41 + } 31 42 32 43 struct cpl_iscsi_hdr_norss { 33 44 union opcode_tid ot;
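
The cxgb3i.h change above converts the two address macros into static inline helpers, mainly because the setter now needs several statements (it records an iSCSI flag and copies the MAC address in addition to storing the address) and because inline functions give typed, single-evaluation arguments. A generic before/after illustration of the pattern, using hypothetical names rather than the driver's:

    #include <stdio.h>

    struct port { unsigned int addr; int flags; };

    /* Macro form: expands to one expression, no type checking on 'p'. */
    #define SET_ADDR(p, a) (((p)->addr) = (a))

    /* Inline form: multiple statements, typed arguments. */
    static inline void set_addr(struct port *p, unsigned int a)
    {
            p->flags = a ? 1 : 0; /* extra work a simple macro can't add cleanly */
            p->addr = a;
    }

    int main(void)
    {
            struct port p = { 0, 0 };

            set_addr(&p, 0x0a000001);
            printf("addr=0x%x flags=%d\n", p.addr, p.flags);
            return 0;
    }
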
+3 -4
drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
··· 106 106 .name = DRV_MODULE_NAME, 107 107 .caps = CAP_RECOVERY_L0 | CAP_MULTI_R2T | CAP_HDRDGST | 108 108 CAP_DATADGST | CAP_DIGEST_OFFLOAD | 109 - CAP_PADDING_OFFLOAD, 109 + CAP_PADDING_OFFLOAD | CAP_TEXT_NEGO, 110 110 .param_mask = ISCSI_MAX_RECV_DLENGTH | ISCSI_MAX_XMIT_DLENGTH | 111 111 ISCSI_HDRDGST_EN | ISCSI_DATADGST_EN | 112 112 ISCSI_INITIAL_R2T_EN | ISCSI_MAX_R2T | ··· 138 138 .destroy_conn = iscsi_tcp_conn_teardown, 139 139 .start_conn = iscsi_conn_start, 140 140 .stop_conn = iscsi_conn_stop, 141 - .get_conn_param = cxgbi_get_conn_param, 141 + .get_conn_param = iscsi_conn_get_param, 142 142 .set_param = cxgbi_set_conn_param, 143 143 .get_stats = cxgbi_get_conn_stats, 144 144 /* pdu xmit req from user space */ ··· 153 153 .xmit_pdu = cxgbi_conn_xmit_pdu, 154 154 .parse_pdu_itt = cxgbi_parse_pdu_itt, 155 155 /* TCP connect/disconnect */ 156 + .get_ep_param = cxgbi_get_ep_param, 156 157 .ep_connect = cxgbi_ep_connect, 157 158 .ep_poll = cxgbi_ep_poll, 158 159 .ep_disconnect = cxgbi_ep_disconnect, ··· 1426 1425 cxgbi_ddp_page_size_factor(pgsz_factor); 1427 1426 cxgb4_iscsi_init(lldi->ports[0], tagmask, pgsz_factor); 1428 1427 1429 - cdev->csk_ddp_free_gl_skb = NULL; 1430 - cdev->csk_ddp_alloc_gl_skb = NULL; 1431 1428 cdev->csk_ddp_setup_digest = ddp_setup_conn_digest; 1432 1429 cdev->csk_ddp_setup_pgidx = ddp_setup_conn_pgidx; 1433 1430 cdev->csk_ddp_set = ddp_set_map;
+27 -39
drivers/scsi/cxgbi/libcxgbi.c
··· 530 530 csk->dst = dst; 531 531 csk->daddr.sin_addr.s_addr = daddr->sin_addr.s_addr; 532 532 csk->daddr.sin_port = daddr->sin_port; 533 + csk->daddr.sin_family = daddr->sin_family; 533 534 csk->saddr.sin_addr.s_addr = rt->rt_src; 534 535 535 536 return csk; ··· 1265 1264 return idx; 1266 1265 } 1267 1266 1268 - if (cdev->csk_ddp_alloc_gl_skb) { 1269 - err = cdev->csk_ddp_alloc_gl_skb(ddp, idx, npods, gfp); 1270 - if (err < 0) 1271 - goto unmark_entries; 1272 - } 1273 - 1274 1267 tag = cxgbi_ddp_tag_base(tformat, sw_tag); 1275 1268 tag |= idx << PPOD_IDX_SHIFT; 1276 1269 ··· 1275 1280 hdr.page_offset = htonl(gl->offset); 1276 1281 1277 1282 err = cdev->csk_ddp_set(csk, &hdr, idx, npods, gl); 1278 - if (err < 0) { 1279 - if (cdev->csk_ddp_free_gl_skb) 1280 - cdev->csk_ddp_free_gl_skb(ddp, idx, npods); 1283 + if (err < 0) 1281 1284 goto unmark_entries; 1282 - } 1283 1285 1284 1286 ddp->idx_last = idx; 1285 1287 log_debug(1 << CXGBI_DBG_DDP, ··· 1342 1350 >> PPOD_PAGES_SHIFT; 1343 1351 pr_info("cdev 0x%p, ddp %d + %d.\n", cdev, i, npods); 1344 1352 kfree(gl); 1345 - if (cdev->csk_ddp_free_gl_skb) 1346 - cdev->csk_ddp_free_gl_skb(ddp, i, npods); 1347 1353 i += npods; 1348 1354 } else 1349 1355 i++; ··· 1384 1394 return -ENOMEM; 1385 1395 } 1386 1396 ddp->gl_map = (struct cxgbi_gather_list **)(ddp + 1); 1387 - ddp->gl_skb = (struct sk_buff **)(((char *)ddp->gl_map) + 1388 - ppmax * sizeof(struct cxgbi_gather_list *)); 1389 1397 cdev->ddp = ddp; 1390 1398 1391 1399 spin_lock_init(&ddp->map_lock); ··· 1883 1895 1884 1896 static inline void tx_skb_setmode(struct sk_buff *skb, int hcrc, int dcrc) 1885 1897 { 1886 - u8 submode = 0; 1898 + if (hcrc || dcrc) { 1899 + u8 submode = 0; 1887 1900 1888 - if (hcrc) 1889 - submode |= 1; 1890 - if (dcrc) 1891 - submode |= 2; 1892 - cxgbi_skcb_ulp_mode(skb) = (ULP2_MODE_ISCSI << 4) | submode; 1901 + if (hcrc) 1902 + submode |= 1; 1903 + if (dcrc) 1904 + submode |= 2; 1905 + cxgbi_skcb_ulp_mode(skb) = (ULP2_MODE_ISCSI << 4) | submode; 1906 + } else 1907 + cxgbi_skcb_ulp_mode(skb) = 0; 1893 1908 } 1894 1909 1895 1910 int cxgbi_conn_init_pdu(struct iscsi_task *task, unsigned int offset, ··· 2188 2197 } 2189 2198 EXPORT_SYMBOL_GPL(cxgbi_set_conn_param); 2190 2199 2191 - int cxgbi_get_conn_param(struct iscsi_cls_conn *cls_conn, 2192 - enum iscsi_param param, char *buf) 2200 + int cxgbi_get_ep_param(struct iscsi_endpoint *ep, enum iscsi_param param, 2201 + char *buf) 2193 2202 { 2194 - struct iscsi_conn *iconn = cls_conn->dd_data; 2203 + struct cxgbi_endpoint *cep = ep->dd_data; 2204 + struct cxgbi_sock *csk; 2195 2205 int len; 2196 2206 2197 2207 log_debug(1 << CXGBI_DBG_ISCSI, 2198 - "cls_conn 0x%p, param %d.\n", cls_conn, param); 2208 + "cls_conn 0x%p, param %d.\n", ep, param); 2199 2209 2200 2210 switch (param) { 2201 2211 case ISCSI_PARAM_CONN_PORT: 2202 - spin_lock_bh(&iconn->session->lock); 2203 - len = sprintf(buf, "%hu\n", iconn->portal_port); 2204 - spin_unlock_bh(&iconn->session->lock); 2205 - break; 2206 2212 case ISCSI_PARAM_CONN_ADDRESS: 2207 - spin_lock_bh(&iconn->session->lock); 2208 - len = sprintf(buf, "%s\n", iconn->portal_address); 2209 - spin_unlock_bh(&iconn->session->lock); 2210 - break; 2213 + if (!cep) 2214 + return -ENOTCONN; 2215 + 2216 + csk = cep->csk; 2217 + if (!csk) 2218 + return -ENOTCONN; 2219 + 2220 + return iscsi_conn_get_addr_param((struct sockaddr_storage *) 2221 + &csk->daddr, param, buf); 2211 2222 default: 2212 - return iscsi_conn_get_param(cls_conn, param, buf); 2223 + return -ENOSYS; 2213 2224 } 2214 2225 return len; 
2215 2226 } 2216 - EXPORT_SYMBOL_GPL(cxgbi_get_conn_param); 2227 + EXPORT_SYMBOL_GPL(cxgbi_get_ep_param); 2217 2228 2218 2229 struct iscsi_cls_conn * 2219 2230 cxgbi_create_conn(struct iscsi_cls_session *cls_session, u32 cid) ··· 2281 2288 2282 2289 cxgbi_conn_max_xmit_dlength(conn); 2283 2290 cxgbi_conn_max_recv_dlength(conn); 2284 - 2285 - spin_lock_bh(&conn->session->lock); 2286 - sprintf(conn->portal_address, "%pI4", &csk->daddr.sin_addr.s_addr); 2287 - conn->portal_port = ntohs(csk->daddr.sin_port); 2288 - spin_unlock_bh(&conn->session->lock); 2289 2291 2290 2292 log_debug(1 << CXGBI_DBG_ISCSI, 2291 2293 "cls 0x%p,0x%p, ep 0x%p, cconn 0x%p, csk 0x%p.\n",
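
tx_skb_setmode() above packs the ULP mode into the high nibble of one byte and the header/data digest flags into its low bits, and after this change stores 0 when neither digest is enabled instead of always writing the mode. A stand-alone sketch of the encoding (the mode constant is illustrative):

    #include <stdint.h>
    #include <stdio.h>

    #define ULP2_MODE_ISCSI 2 /* illustrative value */

    static uint8_t ulp_mode_byte(int hcrc, int dcrc)
    {
            uint8_t submode = 0;

            if (hcrc)
                    submode |= 1; /* header digest */
            if (dcrc)
                    submode |= 2; /* data digest */
            return submode ? (ULP2_MODE_ISCSI << 4) | submode : 0;
    }

    int main(void)
    {
            printf("hcrc only: 0x%02x\n", ulp_mode_byte(1, 0));
            printf("both:      0x%02x\n", ulp_mode_byte(1, 1));
            printf("neither:   0x%02x\n", ulp_mode_byte(0, 0));
            return 0;
    }
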
+1 -4
drivers/scsi/cxgbi/libcxgbi.h
··· 131 131 unsigned int rsvd_tag_mask; 132 132 spinlock_t map_lock; 133 133 struct cxgbi_gather_list **gl_map; 134 - struct sk_buff **gl_skb; 135 134 }; 136 135 137 136 #define DDP_PGIDX_MAX 4 ··· 535 536 struct cxgbi_ddp_info *ddp; 536 537 537 538 void (*dev_ddp_cleanup)(struct cxgbi_device *); 538 - void (*csk_ddp_free_gl_skb)(struct cxgbi_ddp_info *, int, int); 539 - int (*csk_ddp_alloc_gl_skb)(struct cxgbi_ddp_info *, int, int, gfp_t); 540 539 int (*csk_ddp_set)(struct cxgbi_sock *, struct cxgbi_pagepod_hdr *, 541 540 unsigned int, unsigned int, 542 541 struct cxgbi_gather_list *); ··· 712 715 void cxgbi_get_conn_stats(struct iscsi_cls_conn *, struct iscsi_stats *); 713 716 int cxgbi_set_conn_param(struct iscsi_cls_conn *, 714 717 enum iscsi_param, char *, int); 715 - int cxgbi_get_conn_param(struct iscsi_cls_conn *, enum iscsi_param, char *); 718 + int cxgbi_get_ep_param(struct iscsi_endpoint *ep, enum iscsi_param, char *); 716 719 struct iscsi_cls_conn *cxgbi_create_conn(struct iscsi_cls_session *, u32); 717 720 int cxgbi_bind_conn(struct iscsi_cls_session *, 718 721 struct iscsi_cls_conn *, u64, int);
+37 -75
drivers/scsi/device_handler/scsi_dh.c
··· 25 25 #include <scsi/scsi_dh.h> 26 26 #include "../scsi_priv.h" 27 27 28 - struct scsi_dh_devinfo_list { 29 - struct list_head node; 30 - char vendor[9]; 31 - char model[17]; 32 - struct scsi_device_handler *handler; 33 - }; 34 - 35 28 static DEFINE_SPINLOCK(list_lock); 36 29 static LIST_HEAD(scsi_dh_list); 37 - static LIST_HEAD(scsi_dh_dev_list); 30 + static int scsi_dh_list_idx = 1; 38 31 39 32 static struct scsi_device_handler *get_device_handler(const char *name) 40 33 { ··· 44 51 return found; 45 52 } 46 53 47 - 48 - static struct scsi_device_handler * 49 - scsi_dh_cache_lookup(struct scsi_device *sdev) 54 + static struct scsi_device_handler *get_device_handler_by_idx(int idx) 50 55 { 51 - struct scsi_dh_devinfo_list *tmp; 52 - struct scsi_device_handler *found_dh = NULL; 56 + struct scsi_device_handler *tmp, *found = NULL; 53 57 54 58 spin_lock(&list_lock); 55 - list_for_each_entry(tmp, &scsi_dh_dev_list, node) { 56 - if (!strncmp(sdev->vendor, tmp->vendor, strlen(tmp->vendor)) && 57 - !strncmp(sdev->model, tmp->model, strlen(tmp->model))) { 58 - found_dh = tmp->handler; 59 + list_for_each_entry(tmp, &scsi_dh_list, list) { 60 + if (tmp->idx == idx) { 61 + found = tmp; 59 62 break; 60 63 } 61 64 } 62 65 spin_unlock(&list_lock); 63 - 64 - return found_dh; 65 - } 66 - 67 - static int scsi_dh_handler_lookup(struct scsi_device_handler *scsi_dh, 68 - struct scsi_device *sdev) 69 - { 70 - int i, found = 0; 71 - 72 - for(i = 0; scsi_dh->devlist[i].vendor; i++) { 73 - if (!strncmp(sdev->vendor, scsi_dh->devlist[i].vendor, 74 - strlen(scsi_dh->devlist[i].vendor)) && 75 - !strncmp(sdev->model, scsi_dh->devlist[i].model, 76 - strlen(scsi_dh->devlist[i].model))) { 77 - found = 1; 78 - break; 79 - } 80 - } 81 66 return found; 82 67 } 83 68 ··· 73 102 struct scsi_device *sdev) 74 103 { 75 104 struct scsi_device_handler *found_dh = NULL; 76 - struct scsi_dh_devinfo_list *tmp; 105 + int idx; 77 106 78 - found_dh = scsi_dh_cache_lookup(sdev); 79 - if (found_dh) 80 - return found_dh; 107 + idx = scsi_get_device_flags_keyed(sdev, sdev->vendor, sdev->model, 108 + SCSI_DEVINFO_DH); 109 + found_dh = get_device_handler_by_idx(idx); 81 110 82 - if (scsi_dh) { 83 - if (scsi_dh_handler_lookup(scsi_dh, sdev)) 84 - found_dh = scsi_dh; 85 - } else { 86 - struct scsi_device_handler *tmp_dh; 87 - 88 - spin_lock(&list_lock); 89 - list_for_each_entry(tmp_dh, &scsi_dh_list, list) { 90 - if (scsi_dh_handler_lookup(tmp_dh, sdev)) 91 - found_dh = tmp_dh; 92 - } 93 - spin_unlock(&list_lock); 94 - } 95 - 96 - if (found_dh) { /* If device is found, add it to the cache */ 97 - tmp = kmalloc(sizeof(*tmp), GFP_KERNEL); 98 - if (tmp) { 99 - strncpy(tmp->vendor, sdev->vendor, 8); 100 - strncpy(tmp->model, sdev->model, 16); 101 - tmp->vendor[8] = '\0'; 102 - tmp->model[16] = '\0'; 103 - tmp->handler = found_dh; 104 - spin_lock(&list_lock); 105 - list_add(&tmp->node, &scsi_dh_dev_list); 106 - spin_unlock(&list_lock); 107 - } else { 108 - found_dh = NULL; 109 - } 110 - } 111 + if (scsi_dh && found_dh != scsi_dh) 112 + found_dh = NULL; 111 113 112 114 return found_dh; 113 115 } ··· 317 373 */ 318 374 int scsi_register_device_handler(struct scsi_device_handler *scsi_dh) 319 375 { 376 + int i; 377 + 320 378 if (get_device_handler(scsi_dh->name)) 321 379 return -EBUSY; 322 380 323 381 spin_lock(&list_lock); 382 + scsi_dh->idx = scsi_dh_list_idx++; 324 383 list_add(&scsi_dh->list, &scsi_dh_list); 325 384 spin_unlock(&list_lock); 385 + 386 + for (i = 0; scsi_dh->devlist[i].vendor; i++) { 387 + scsi_dev_info_list_add_keyed(0, 388 + 
scsi_dh->devlist[i].vendor, 389 + scsi_dh->devlist[i].model, 390 + NULL, 391 + scsi_dh->idx, 392 + SCSI_DEVINFO_DH); 393 + } 394 + 326 395 bus_for_each_dev(&scsi_bus_type, NULL, scsi_dh, scsi_dh_notifier_add); 327 396 printk(KERN_INFO "%s: device handler registered\n", scsi_dh->name); 328 397 ··· 352 395 */ 353 396 int scsi_unregister_device_handler(struct scsi_device_handler *scsi_dh) 354 397 { 355 - struct scsi_dh_devinfo_list *tmp, *pos; 398 + int i; 356 399 357 400 if (!get_device_handler(scsi_dh->name)) 358 401 return -ENODEV; ··· 360 403 bus_for_each_dev(&scsi_bus_type, NULL, scsi_dh, 361 404 scsi_dh_notifier_remove); 362 405 406 + for (i = 0; scsi_dh->devlist[i].vendor; i++) { 407 + scsi_dev_info_list_del_keyed(scsi_dh->devlist[i].vendor, 408 + scsi_dh->devlist[i].model, 409 + SCSI_DEVINFO_DH); 410 + } 411 + 363 412 spin_lock(&list_lock); 364 413 list_del(&scsi_dh->list); 365 - list_for_each_entry_safe(pos, tmp, &scsi_dh_dev_list, node) { 366 - if (pos->handler == scsi_dh) { 367 - list_del(&pos->node); 368 - kfree(pos); 369 - } 370 - } 371 414 spin_unlock(&list_lock); 372 415 printk(KERN_INFO "%s: device handler unregistered\n", scsi_dh->name); 373 416 ··· 533 576 { 534 577 int r; 535 578 579 + r = scsi_dev_info_add_list(SCSI_DEVINFO_DH, "SCSI Device Handler"); 580 + if (r) 581 + return r; 582 + 536 583 r = bus_register_notifier(&scsi_bus_type, &scsi_dh_nb); 537 584 538 585 if (!r) ··· 551 590 bus_for_each_dev(&scsi_bus_type, NULL, NULL, 552 591 scsi_dh_sysfs_attr_remove); 553 592 bus_unregister_notifier(&scsi_bus_type, &scsi_dh_nb); 593 + scsi_dev_info_remove_list(SCSI_DEVINFO_DH); 554 594 } 555 595 556 596 module_init(scsi_dh_init);
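
The scsi_dh rework above drops the driver-private vendor/model cache: each handler's device list is now registered in the keyed scsi_devinfo table with the handler's index as the stored value, and device lookup resolves that index back to a handler. A condensed user-space model of the two-step lookup (fixed-size tables, prefix matching and all names here are simplifications, not the kernel API):

    #include <stdio.h>
    #include <string.h>

    struct handler { const char *name; int idx; };
    struct devinfo { const char *vendor; const char *model; int idx; };

    static const struct handler handlers[] = { { "alua", 1 }, { "rdac", 2 } };
    static const struct devinfo devinfo[] = {
            { "NETAPP", "LUN",  1 },
            { "DELL",   "MD36", 2 },
    };

    /* Step 1: map (vendor, model) to a handler index, 0 if none. */
    static int devinfo_lookup(const char *vendor, const char *model)
    {
            size_t i;

            for (i = 0; i < sizeof(devinfo) / sizeof(devinfo[0]); i++)
                    if (!strncmp(vendor, devinfo[i].vendor,
                                 strlen(devinfo[i].vendor)) &&
                        !strncmp(model, devinfo[i].model,
                                 strlen(devinfo[i].model)))
                            return devinfo[i].idx;
            return 0;
    }

    /* Step 2: map the index back to the registered handler. */
    static const struct handler *handler_by_idx(int idx)
    {
            size_t i;

            for (i = 0; i < sizeof(handlers) / sizeof(handlers[0]); i++)
                    if (handlers[i].idx == idx)
                            return &handlers[i];
            return NULL;
    }

    int main(void)
    {
            const struct handler *h =
                    handler_by_idx(devinfo_lookup("DELL", "MD36xxf"));

            printf("handler: %s\n", h ? h->name : "none");
            return 0;
    }
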
+11 -7
drivers/scsi/device_handler/scsi_dh_alua.c
··· 253 253 { 254 254 struct alua_dh_data *h = req->end_io_data; 255 255 struct scsi_sense_hdr sense_hdr; 256 - unsigned err = SCSI_DH_IO; 256 + unsigned err = SCSI_DH_OK; 257 257 258 258 if (error || host_byte(req->errors) != DID_OK || 259 - msg_byte(req->errors) != COMMAND_COMPLETE) 259 + msg_byte(req->errors) != COMMAND_COMPLETE) { 260 + err = SCSI_DH_IO; 260 261 goto done; 262 + } 261 263 262 - if (err == SCSI_DH_IO && h->senselen > 0) { 264 + if (h->senselen > 0) { 263 265 err = scsi_normalize_sense(h->sense, SCSI_SENSE_BUFFERSIZE, 264 266 &sense_hdr); 265 267 if (!err) { ··· 287 285 print_alua_state(h->state)); 288 286 } 289 287 done: 290 - blk_put_request(req); 288 + req->end_io_data = NULL; 289 + __blk_put_request(req->q, req); 291 290 if (h->callback_fn) { 292 291 h->callback_fn(h->callback_data, err); 293 292 h->callback_fn = h->callback_data = NULL; ··· 306 303 static unsigned submit_stpg(struct alua_dh_data *h) 307 304 { 308 305 struct request *rq; 309 - int err = SCSI_DH_RES_TEMP_UNAVAIL; 310 306 int stpg_len = 8; 311 307 struct scsi_device *sdev = h->sdev; 312 308 ··· 334 332 rq->end_io_data = h; 335 333 336 334 blk_execute_rq_nowait(rq->q, NULL, rq, 1, stpg_endio); 337 - return err; 335 + return SCSI_DH_OK; 338 336 } 339 337 340 338 /* ··· 732 730 {"Pillar", "Axiom" }, 733 731 {"Intel", "Multi-Flex"}, 734 732 {"NETAPP", "LUN"}, 733 + {"NETAPP", "LUN C-Mode"}, 735 734 {"AIX", "NVDISK"}, 735 + {"Promise", "VTrak"}, 736 736 {NULL, NULL} 737 737 }; 738 738 ··· 763 759 unsigned long flags; 764 760 int err = SCSI_DH_OK; 765 761 766 - scsi_dh_data = kzalloc(sizeof(struct scsi_device_handler *) 762 + scsi_dh_data = kzalloc(sizeof(*scsi_dh_data) 767 763 + sizeof(*h) , GFP_KERNEL); 768 764 if (!scsi_dh_data) { 769 765 sdev_printk(KERN_ERR, sdev, "%s: Attach failed\n",
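
The kzalloc() change above, repeated in the emc, hp_sw and rdac handlers below, fixes a sizing bug: sizeof(struct scsi_device_handler *) is the size of a pointer (typically 8 bytes), not the size of the structure that scsi_dh_data points to, so the old expression could under-allocate the attach data. The sizeof(*ptr) idiom measures the pointed-to type and keeps working if that type is ever renamed or grown. A stand-alone demonstration (the structure is a stand-in, not the kernel's):

    #include <stdio.h>
    #include <stdlib.h>

    struct dh_data {        /* stand-in for struct scsi_dh_data */
            void *scsi_dh;
            void *sdev;
            int state;
            char buf[16];
    };

    int main(void)
    {
            struct dh_data *d;

            /* Wrong: size of a pointer to the struct. */
            printf("sizeof(struct dh_data *) = %zu\n", sizeof(struct dh_data *));

            /* Right: size of what 'd' points at, plus a trailing payload. */
            d = malloc(sizeof(*d) + 32);
            if (!d)
                    return 1;
            printf("sizeof(*d) = %zu\n", sizeof(*d));
            free(d);
            return 0;
    }
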
+1 -1
drivers/scsi/device_handler/scsi_dh_emc.c
··· 650 650 unsigned long flags; 651 651 int err; 652 652 653 - scsi_dh_data = kzalloc(sizeof(struct scsi_device_handler *) 653 + scsi_dh_data = kzalloc(sizeof(*scsi_dh_data) 654 654 + sizeof(*h) , GFP_KERNEL); 655 655 if (!scsi_dh_data) { 656 656 sdev_printk(KERN_ERR, sdev, "%s: Attach failed\n",
+4 -3
drivers/scsi/device_handler/scsi_dh_hp_sw.c
··· 225 225 } 226 226 } 227 227 done: 228 - blk_put_request(req); 228 + req->end_io_data = NULL; 229 + __blk_put_request(req->q, req); 229 230 if (h->callback_fn) { 230 231 h->callback_fn(h->callback_data, err); 231 232 h->callback_fn = h->callback_data = NULL; ··· 339 338 unsigned long flags; 340 339 int ret; 341 340 342 - scsi_dh_data = kzalloc(sizeof(struct scsi_device_handler *) 343 - + sizeof(struct hp_sw_dh_data) , GFP_KERNEL); 341 + scsi_dh_data = kzalloc(sizeof(*scsi_dh_data) 342 + + sizeof(*h) , GFP_KERNEL); 344 343 if (!scsi_dh_data) { 345 344 sdev_printk(KERN_ERR, sdev, "%s: Attach Failed\n", 346 345 HP_SW_NAME);
+13 -13
drivers/scsi/device_handler/scsi_dh_rdac.c
··· 281 281 } 282 282 283 283 static struct request *rdac_failover_get(struct scsi_device *sdev, 284 - struct rdac_dh_data *h) 284 + struct rdac_dh_data *h, struct list_head *list) 285 285 { 286 286 struct request *rq; 287 287 struct rdac_mode_common *common; 288 288 unsigned data_size; 289 + struct rdac_queue_data *qdata; 290 + u8 *lun_table; 289 291 290 292 if (h->ctlr->use_ms10) { 291 293 struct rdac_pg_expanded *rdac_pg; ··· 300 298 rdac_pg->subpage_code = 0x1; 301 299 rdac_pg->page_len[0] = 0x01; 302 300 rdac_pg->page_len[1] = 0x28; 301 + lun_table = rdac_pg->lun_table; 303 302 } else { 304 303 struct rdac_pg_legacy *rdac_pg; 305 304 ··· 310 307 common = &rdac_pg->common; 311 308 rdac_pg->page_code = RDAC_PAGE_CODE_REDUNDANT_CONTROLLER; 312 309 rdac_pg->page_len = 0x68; 310 + lun_table = rdac_pg->lun_table; 313 311 } 314 312 common->rdac_mode[1] = RDAC_MODE_TRANSFER_SPECIFIED_LUNS; 315 313 common->quiescence_timeout = RDAC_QUIESCENCE_TIME; 316 314 common->rdac_options = RDAC_FORCED_QUIESENCE; 315 + 316 + list_for_each_entry(qdata, list, entry) { 317 + lun_table[qdata->h->lun] = 0x81; 318 + } 317 319 318 320 /* get request for block layer packet command */ 319 321 rq = get_rdac_req(sdev, &h->ctlr->mode_select, data_size, WRITE); ··· 573 565 int err, retry_cnt = RDAC_RETRY_COUNT; 574 566 struct rdac_queue_data *tmp, *qdata; 575 567 LIST_HEAD(list); 576 - u8 *lun_table; 577 568 578 569 spin_lock(&ctlr->ms_lock); 579 570 list_splice_init(&ctlr->ms_head, &list); ··· 580 573 ctlr->ms_sdev = NULL; 581 574 spin_unlock(&ctlr->ms_lock); 582 575 583 - if (ctlr->use_ms10) 584 - lun_table = ctlr->mode_select.expanded.lun_table; 585 - else 586 - lun_table = ctlr->mode_select.legacy.lun_table; 587 - 588 576 retry: 589 577 err = SCSI_DH_RES_TEMP_UNAVAIL; 590 - rq = rdac_failover_get(sdev, h); 578 + rq = rdac_failover_get(sdev, h, &list); 591 579 if (!rq) 592 580 goto done; 593 - 594 - list_for_each_entry(qdata, &list, entry) { 595 - lun_table[qdata->h->lun] = 0x81; 596 - } 597 581 598 582 RDAC_LOG(RDAC_LOG_FAILOVER, sdev, "array %s, ctlr %d, " 599 583 "%s MODE_SELECT command", ··· 767 769 {"DELL", "MD32xx"}, 768 770 {"DELL", "MD32xxi"}, 769 771 {"DELL", "MD36xxi"}, 772 + {"DELL", "MD36xxf"}, 770 773 {"LSI", "INF-01-00"}, 771 774 {"ENGENIO", "INF-01-00"}, 772 775 {"STK", "FLEXLINE 380"}, ··· 799 800 int err; 800 801 char array_name[ARRAY_LABEL_LEN]; 801 802 802 - scsi_dh_data = kzalloc(sizeof(struct scsi_device_handler *) 803 + scsi_dh_data = kzalloc(sizeof(*scsi_dh_data) 803 804 + sizeof(*h) , GFP_KERNEL); 804 805 if (!scsi_dh_data) { 805 806 sdev_printk(KERN_ERR, sdev, "%s: Attach failed\n", ··· 905 906 906 907 MODULE_DESCRIPTION("Multipath LSI/Engenio RDAC driver"); 907 908 MODULE_AUTHOR("Mike Christie, Chandra Seetharaman"); 909 + MODULE_VERSION("01.00.0000.0000"); 908 910 MODULE_LICENSE("GPL");
+2
drivers/scsi/fcoe/Makefile
··· 1 1 obj-$(CONFIG_FCOE) += fcoe.o 2 2 obj-$(CONFIG_LIBFCOE) += libfcoe.o 3 + 4 + libfcoe-objs := fcoe_ctlr.o fcoe_transport.o
+226 -397
drivers/scsi/fcoe/fcoe.c
··· 31 31 #include <linux/fs.h> 32 32 #include <linux/sysfs.h> 33 33 #include <linux/ctype.h> 34 + #include <linux/workqueue.h> 34 35 #include <scsi/scsi_tcq.h> 35 36 #include <scsi/scsicam.h> 36 37 #include <scsi/scsi_transport.h> ··· 59 58 60 59 DEFINE_MUTEX(fcoe_config_mutex); 61 60 61 + static struct workqueue_struct *fcoe_wq; 62 + 62 63 /* fcoe_percpu_clean completion. Waiter protected by fcoe_create_mutex */ 63 64 static DECLARE_COMPLETION(fcoe_flush_completion); 64 65 ··· 75 72 static int fcoe_rcv(struct sk_buff *, struct net_device *, 76 73 struct packet_type *, struct net_device *); 77 74 static int fcoe_percpu_receive_thread(void *); 78 - static void fcoe_clean_pending_queue(struct fc_lport *); 79 75 static void fcoe_percpu_clean(struct fc_lport *); 80 76 static int fcoe_link_speed_update(struct fc_lport *); 81 77 static int fcoe_link_ok(struct fc_lport *); ··· 82 80 static struct fc_lport *fcoe_hostlist_lookup(const struct net_device *); 83 81 static int fcoe_hostlist_add(const struct fc_lport *); 84 82 85 - static void fcoe_check_wait_queue(struct fc_lport *, struct sk_buff *); 86 83 static int fcoe_device_notification(struct notifier_block *, ulong, void *); 87 84 static void fcoe_dev_setup(void); 88 85 static void fcoe_dev_cleanup(void); ··· 102 101 103 102 static int fcoe_cpu_callback(struct notifier_block *, unsigned long, void *); 104 103 105 - static int fcoe_create(const char *, struct kernel_param *); 106 - static int fcoe_destroy(const char *, struct kernel_param *); 107 - static int fcoe_enable(const char *, struct kernel_param *); 108 - static int fcoe_disable(const char *, struct kernel_param *); 104 + static bool fcoe_match(struct net_device *netdev); 105 + static int fcoe_create(struct net_device *netdev, enum fip_state fip_mode); 106 + static int fcoe_destroy(struct net_device *netdev); 107 + static int fcoe_enable(struct net_device *netdev); 108 + static int fcoe_disable(struct net_device *netdev); 109 109 110 110 static struct fc_seq *fcoe_elsct_send(struct fc_lport *, 111 111 u32 did, struct fc_frame *, ··· 119 117 120 118 static void fcoe_get_lesb(struct fc_lport *, struct fc_els_lesb *); 121 119 122 - module_param_call(create, fcoe_create, NULL, (void *)FIP_MODE_FABRIC, S_IWUSR); 123 - __MODULE_PARM_TYPE(create, "string"); 124 - MODULE_PARM_DESC(create, " Creates fcoe instance on a ethernet interface"); 125 - module_param_call(create_vn2vn, fcoe_create, NULL, 126 - (void *)FIP_MODE_VN2VN, S_IWUSR); 127 - __MODULE_PARM_TYPE(create_vn2vn, "string"); 128 - MODULE_PARM_DESC(create_vn2vn, " Creates a VN_node to VN_node FCoE instance " 129 - "on an Ethernet interface"); 130 - module_param_call(destroy, fcoe_destroy, NULL, NULL, S_IWUSR); 131 - __MODULE_PARM_TYPE(destroy, "string"); 132 - MODULE_PARM_DESC(destroy, " Destroys fcoe instance on a ethernet interface"); 133 - module_param_call(enable, fcoe_enable, NULL, NULL, S_IWUSR); 134 - __MODULE_PARM_TYPE(enable, "string"); 135 - MODULE_PARM_DESC(enable, " Enables fcoe on a ethernet interface."); 136 - module_param_call(disable, fcoe_disable, NULL, NULL, S_IWUSR); 137 - __MODULE_PARM_TYPE(disable, "string"); 138 - MODULE_PARM_DESC(disable, " Disables fcoe on a ethernet interface."); 139 - 140 120 /* notification function for packets from net device */ 141 121 static struct notifier_block fcoe_notifier = { 142 122 .notifier_call = fcoe_device_notification, ··· 129 145 .notifier_call = fcoe_cpu_callback, 130 146 }; 131 147 132 - static struct scsi_transport_template *fcoe_transport_template; 133 - static struct 
scsi_transport_template *fcoe_vport_transport_template; 148 + static struct scsi_transport_template *fcoe_nport_scsi_transport; 149 + static struct scsi_transport_template *fcoe_vport_scsi_transport; 134 150 135 151 static int fcoe_vport_destroy(struct fc_vport *); 136 152 static int fcoe_vport_create(struct fc_vport *, bool disabled); ··· 147 163 .lport_set_port_id = fcoe_set_port_id, 148 164 }; 149 165 150 - struct fc_function_template fcoe_transport_function = { 166 + struct fc_function_template fcoe_nport_fc_functions = { 151 167 .show_host_node_name = 1, 152 168 .show_host_port_name = 1, 153 169 .show_host_supported_classes = 1, ··· 187 203 .bsg_request = fc_lport_bsg_request, 188 204 }; 189 205 190 - struct fc_function_template fcoe_vport_transport_function = { 206 + struct fc_function_template fcoe_vport_fc_functions = { 191 207 .show_host_node_name = 1, 192 208 .show_host_port_name = 1, 193 209 .show_host_supported_classes = 1, ··· 338 354 struct fcoe_interface *fcoe; 339 355 int err; 340 356 357 + if (!try_module_get(THIS_MODULE)) { 358 + FCOE_NETDEV_DBG(netdev, 359 + "Could not get a reference to the module\n"); 360 + fcoe = ERR_PTR(-EBUSY); 361 + goto out; 362 + } 363 + 341 364 fcoe = kzalloc(sizeof(*fcoe), GFP_KERNEL); 342 365 if (!fcoe) { 343 366 FCOE_NETDEV_DBG(netdev, "Could not allocate fcoe structure\n"); 344 - return NULL; 367 + fcoe = ERR_PTR(-ENOMEM); 368 + goto out_nomod; 345 369 } 346 370 347 371 dev_hold(netdev); ··· 368 376 fcoe_ctlr_destroy(&fcoe->ctlr); 369 377 kfree(fcoe); 370 378 dev_put(netdev); 371 - return NULL; 379 + fcoe = ERR_PTR(err); 380 + goto out_nomod; 372 381 } 373 382 383 + goto out; 384 + 385 + out_nomod: 386 + module_put(THIS_MODULE); 387 + out: 374 388 return fcoe; 375 389 } 376 390 ··· 438 440 fcoe_ctlr_destroy(&fcoe->ctlr); 439 441 kfree(fcoe); 440 442 dev_put(netdev); 443 + module_put(THIS_MODULE); 441 444 } 442 445 443 446 /** ··· 502 503 static void fcoe_update_src_mac(struct fc_lport *lport, u8 *addr) 503 504 { 504 505 struct fcoe_port *port = lport_priv(lport); 505 - struct fcoe_interface *fcoe = port->fcoe; 506 + struct fcoe_interface *fcoe = port->priv; 506 507 507 508 rtnl_lock(); 508 509 if (!is_zero_ether_addr(port->data_src_addr)) ··· 555 556 lport->lso_max = 0; 556 557 557 558 return 0; 558 - } 559 - 560 - /** 561 - * fcoe_queue_timer() - The fcoe queue timer 562 - * @lport: The local port 563 - * 564 - * Calls fcoe_check_wait_queue on timeout 565 - */ 566 - static void fcoe_queue_timer(ulong lport) 567 - { 568 - fcoe_check_wait_queue((struct fc_lport *)lport, NULL); 569 559 } 570 560 571 561 /** ··· 636 648 637 649 /* Setup lport private data to point to fcoe softc */ 638 650 port = lport_priv(lport); 639 - fcoe = port->fcoe; 651 + fcoe = port->priv; 640 652 641 653 /* 642 654 * Determine max frame size based on underlying device and optional ··· 694 706 lport->host->max_cmd_len = FCOE_MAX_CMD_LEN; 695 707 696 708 if (lport->vport) 697 - lport->host->transportt = fcoe_vport_transport_template; 709 + lport->host->transportt = fcoe_vport_scsi_transport; 698 710 else 699 - lport->host->transportt = fcoe_transport_template; 711 + lport->host->transportt = fcoe_nport_scsi_transport; 700 712 701 713 /* add the new host to the SCSI-ml */ 702 714 rc = scsi_add_host(lport->host, dev); ··· 746 758 static inline int fcoe_em_config(struct fc_lport *lport) 747 759 { 748 760 struct fcoe_port *port = lport_priv(lport); 749 - struct fcoe_interface *fcoe = port->fcoe; 761 + struct fcoe_interface *fcoe = port->priv; 750 762 struct fcoe_interface 
*oldfcoe = NULL;
751 763 struct net_device *old_real_dev, *cur_real_dev;
752 764 u16 min_xid = FCOE_MIN_XID;
··· 830 842 static void fcoe_if_destroy(struct fc_lport *lport)
831 843 {
832 844 struct fcoe_port *port = lport_priv(lport);
833 - struct fcoe_interface *fcoe = port->fcoe;
845 + struct fcoe_interface *fcoe = port->priv;
834 846 struct net_device *netdev = fcoe->netdev;
835 847
836 848 FCOE_NETDEV_DBG(netdev, "Destroying interface\n");
··· 872 884
873 885 /* Release the Scsi_Host */
874 886 scsi_host_put(lport->host);
875 - module_put(THIS_MODULE);
876 887 }
877 888
878 889 /**
··· 926 939 struct device *parent, int npiv)
927 940 {
928 941 struct net_device *netdev = fcoe->netdev;
929 - struct fc_lport *lport = NULL;
942 + struct fc_lport *lport, *n_port;
930 943 struct fcoe_port *port;
944 + struct Scsi_Host *shost;
931 945 int rc;
932 946 /*
933 947 * parent is only a vport if npiv is 1,
··· 938 950
939 951 FCOE_NETDEV_DBG(netdev, "Create Interface\n");
940 952
941 - if (!npiv) {
942 - lport = libfc_host_alloc(&fcoe_shost_template,
943 - sizeof(struct fcoe_port));
944 - } else {
945 - lport = libfc_vport_create(vport,
946 - sizeof(struct fcoe_port));
947 - }
953 + if (!npiv)
954 + lport = libfc_host_alloc(&fcoe_shost_template, sizeof(*port));
955 + else
956 + lport = libfc_vport_create(vport, sizeof(*port));
957 +
948 958 if (!lport) {
949 959 FCOE_NETDEV_DBG(netdev, "Could not allocate host structure\n");
950 960 rc = -ENOMEM;
··· 950 964 }
951 965 port = lport_priv(lport);
952 966 port->lport = lport;
953 - port->fcoe = fcoe;
967 + port->priv = fcoe;
968 + port->max_queue_depth = FCOE_MAX_QUEUE_DEPTH;
969 + port->min_queue_depth = FCOE_MIN_QUEUE_DEPTH;
954 970 INIT_WORK(&port->destroy_work, fcoe_destroy_work);
955 971
956 972 /* configure a fc_lport including the exchange manager */
··· 995 1007 goto out_lp_destroy;
996 1008 }
997 1009
998 - if (!npiv) {
999 - /*
1000 - * fcoe_em_alloc() and fcoe_hostlist_add() both
1001 - * need to be atomic with respect to other changes to the
1002 - * hostlist since fcoe_em_alloc() looks for an existing EM
1003 - * instance on host list updated by fcoe_hostlist_add().
1004 - *
1005 - * This is currently handled through the fcoe_config_mutex
1006 - * begin held.
1007 - */
1008 -
1010 + /*
1011 + * fcoe_em_alloc() and fcoe_hostlist_add() both
1012 + * need to be atomic with respect to other changes to the
1013 + * hostlist since fcoe_em_alloc() looks for an existing EM
1014 + * instance on host list updated by fcoe_hostlist_add().
1015 + *
1016 + * This is currently handled through the fcoe_config_mutex
1017 + * being held.
1018 + */ 1019 + if (!npiv) 1009 1020 /* lport exch manager allocation */ 1010 1021 rc = fcoe_em_config(lport); 1011 - if (rc) { 1012 - FCOE_NETDEV_DBG(netdev, "Could not configure the EM " 1013 - "for the interface\n"); 1014 - goto out_lp_destroy; 1015 - } 1022 + else { 1023 + shost = vport_to_shost(vport); 1024 + n_port = shost_priv(shost); 1025 + rc = fc_exch_mgr_list_clone(n_port, lport); 1026 + } 1027 + 1028 + if (rc) { 1029 + FCOE_NETDEV_DBG(netdev, "Could not configure the EM\n"); 1030 + goto out_lp_destroy; 1016 1031 } 1017 1032 1018 1033 fcoe_interface_get(fcoe); ··· 1039 1048 static int __init fcoe_if_init(void) 1040 1049 { 1041 1050 /* attach to scsi transport */ 1042 - fcoe_transport_template = fc_attach_transport(&fcoe_transport_function); 1043 - fcoe_vport_transport_template = 1044 - fc_attach_transport(&fcoe_vport_transport_function); 1051 + fcoe_nport_scsi_transport = 1052 + fc_attach_transport(&fcoe_nport_fc_functions); 1053 + fcoe_vport_scsi_transport = 1054 + fc_attach_transport(&fcoe_vport_fc_functions); 1045 1055 1046 - if (!fcoe_transport_template) { 1056 + if (!fcoe_nport_scsi_transport) { 1047 1057 printk(KERN_ERR "fcoe: Failed to attach to the FC transport\n"); 1048 1058 return -ENODEV; 1049 1059 } ··· 1061 1069 */ 1062 1070 int __exit fcoe_if_exit(void) 1063 1071 { 1064 - fc_release_transport(fcoe_transport_template); 1065 - fc_release_transport(fcoe_vport_transport_template); 1066 - fcoe_transport_template = NULL; 1067 - fcoe_vport_transport_template = NULL; 1072 + fc_release_transport(fcoe_nport_scsi_transport); 1073 + fc_release_transport(fcoe_vport_scsi_transport); 1074 + fcoe_nport_scsi_transport = NULL; 1075 + fcoe_vport_scsi_transport = NULL; 1068 1076 return 0; 1069 1077 } 1070 1078 ··· 1351 1359 } 1352 1360 1353 1361 /** 1354 - * fcoe_start_io() - Start FCoE I/O 1355 - * @skb: The packet to be transmitted 1356 - * 1357 - * This routine is called from the net device to start transmitting 1358 - * FCoE packets. 1359 - * 1360 - * Returns: 0 for success 1361 - */ 1362 - static inline int fcoe_start_io(struct sk_buff *skb) 1363 - { 1364 - struct sk_buff *nskb; 1365 - int rc; 1366 - 1367 - nskb = skb_clone(skb, GFP_ATOMIC); 1368 - rc = dev_queue_xmit(nskb); 1369 - if (rc != 0) 1370 - return rc; 1371 - kfree_skb(skb); 1372 - return 0; 1373 - } 1374 - 1375 - /** 1376 - * fcoe_get_paged_crc_eof() - Allocate a page to be used for the trailer CRC 1362 + * fcoe_alloc_paged_crc_eof() - Allocate a page to be used for the trailer CRC 1377 1363 * @skb: The packet to be transmitted 1378 1364 * @tlen: The total length of the trailer 1379 1365 * 1380 - * This routine allocates a page for frame trailers. The page is re-used if 1381 - * there is enough room left on it for the current trailer. If there isn't 1382 - * enough buffer left a new page is allocated for the trailer. Reference to 1383 - * the page from this function as well as the skbs using the page fragments 1384 - * ensure that the page is freed at the appropriate time. 
1385 - * 1386 1366 * Returns: 0 for success 1387 1367 */ 1388 - static int fcoe_get_paged_crc_eof(struct sk_buff *skb, int tlen) 1368 + static int fcoe_alloc_paged_crc_eof(struct sk_buff *skb, int tlen) 1389 1369 { 1390 1370 struct fcoe_percpu_s *fps; 1391 - struct page *page; 1371 + int rc; 1392 1372 1393 1373 fps = &get_cpu_var(fcoe_percpu); 1394 - page = fps->crc_eof_page; 1395 - if (!page) { 1396 - page = alloc_page(GFP_ATOMIC); 1397 - if (!page) { 1398 - put_cpu_var(fcoe_percpu); 1399 - return -ENOMEM; 1400 - } 1401 - fps->crc_eof_page = page; 1402 - fps->crc_eof_offset = 0; 1403 - } 1404 - 1405 - get_page(page); 1406 - skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags, page, 1407 - fps->crc_eof_offset, tlen); 1408 - skb->len += tlen; 1409 - skb->data_len += tlen; 1410 - skb->truesize += tlen; 1411 - fps->crc_eof_offset += sizeof(struct fcoe_crc_eof); 1412 - 1413 - if (fps->crc_eof_offset >= PAGE_SIZE) { 1414 - fps->crc_eof_page = NULL; 1415 - fps->crc_eof_offset = 0; 1416 - put_page(page); 1417 - } 1374 + rc = fcoe_get_paged_crc_eof(skb, tlen, fps); 1418 1375 put_cpu_var(fcoe_percpu); 1419 - return 0; 1420 - } 1421 1376 1422 - /** 1423 - * fcoe_fc_crc() - Calculates the CRC for a given frame 1424 - * @fp: The frame to be checksumed 1425 - * 1426 - * This uses crc32() routine to calculate the CRC for a frame 1427 - * 1428 - * Return: The 32 bit CRC value 1429 - */ 1430 - u32 fcoe_fc_crc(struct fc_frame *fp) 1431 - { 1432 - struct sk_buff *skb = fp_skb(fp); 1433 - struct skb_frag_struct *frag; 1434 - unsigned char *data; 1435 - unsigned long off, len, clen; 1436 - u32 crc; 1437 - unsigned i; 1438 - 1439 - crc = crc32(~0, skb->data, skb_headlen(skb)); 1440 - 1441 - for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { 1442 - frag = &skb_shinfo(skb)->frags[i]; 1443 - off = frag->page_offset; 1444 - len = frag->size; 1445 - while (len > 0) { 1446 - clen = min(len, PAGE_SIZE - (off & ~PAGE_MASK)); 1447 - data = kmap_atomic(frag->page + (off >> PAGE_SHIFT), 1448 - KM_SKB_DATA_SOFTIRQ); 1449 - crc = crc32(crc, data + (off & ~PAGE_MASK), clen); 1450 - kunmap_atomic(data, KM_SKB_DATA_SOFTIRQ); 1451 - off += clen; 1452 - len -= clen; 1453 - } 1454 - } 1455 - return crc; 1377 + return rc; 1456 1378 } 1457 1379 1458 1380 /** ··· 1389 1483 unsigned int tlen; /* trailer length */ 1390 1484 unsigned int elen; /* eth header, may include vlan */ 1391 1485 struct fcoe_port *port = lport_priv(lport); 1392 - struct fcoe_interface *fcoe = port->fcoe; 1486 + struct fcoe_interface *fcoe = port->priv; 1393 1487 u8 sof, eof; 1394 1488 struct fcoe_hdr *hp; 1395 1489 ··· 1430 1524 /* copy port crc and eof to the skb buff */ 1431 1525 if (skb_is_nonlinear(skb)) { 1432 1526 skb_frag_t *frag; 1433 - if (fcoe_get_paged_crc_eof(skb, tlen)) { 1527 + if (fcoe_alloc_paged_crc_eof(skb, tlen)) { 1434 1528 kfree_skb(skb); 1435 1529 return -ENOMEM; 1436 1530 } ··· 1510 1604 } 1511 1605 1512 1606 /** 1607 + * fcoe_filter_frames() - filter out bad fcoe frames, i.e. 
bad CRC 1608 + * @lport: The local port the frame was received on 1609 + * @fp: The received frame 1610 + * 1611 + * Return: 0 on passing filtering checks 1612 + */ 1613 + static inline int fcoe_filter_frames(struct fc_lport *lport, 1614 + struct fc_frame *fp) 1615 + { 1616 + struct fcoe_interface *fcoe; 1617 + struct fc_frame_header *fh; 1618 + struct sk_buff *skb = (struct sk_buff *)fp; 1619 + struct fcoe_dev_stats *stats; 1620 + 1621 + /* 1622 + * We only check CRC if no offload is available and if it is 1623 + * it's solicited data, in which case, the FCP layer would 1624 + * check it during the copy. 1625 + */ 1626 + if (lport->crc_offload && skb->ip_summed == CHECKSUM_UNNECESSARY) 1627 + fr_flags(fp) &= ~FCPHF_CRC_UNCHECKED; 1628 + else 1629 + fr_flags(fp) |= FCPHF_CRC_UNCHECKED; 1630 + 1631 + fh = (struct fc_frame_header *) skb_transport_header(skb); 1632 + fh = fc_frame_header_get(fp); 1633 + if (fh->fh_r_ctl == FC_RCTL_DD_SOL_DATA && fh->fh_type == FC_TYPE_FCP) 1634 + return 0; 1635 + 1636 + fcoe = ((struct fcoe_port *)lport_priv(lport))->priv; 1637 + if (is_fip_mode(&fcoe->ctlr) && fc_frame_payload_op(fp) == ELS_LOGO && 1638 + ntoh24(fh->fh_s_id) == FC_FID_FLOGI) { 1639 + FCOE_DBG("fcoe: dropping FCoE lport LOGO in fip mode\n"); 1640 + return -EINVAL; 1641 + } 1642 + 1643 + if (!(fr_flags(fp) & FCPHF_CRC_UNCHECKED) || 1644 + le32_to_cpu(fr_crc(fp)) == ~crc32(~0, skb->data, skb->len)) { 1645 + fr_flags(fp) &= ~FCPHF_CRC_UNCHECKED; 1646 + return 0; 1647 + } 1648 + 1649 + stats = per_cpu_ptr(lport->dev_stats, get_cpu()); 1650 + stats->InvalidCRCCount++; 1651 + if (stats->InvalidCRCCount < 5) 1652 + printk(KERN_WARNING "fcoe: dropping frame with CRC error\n"); 1653 + return -EINVAL; 1654 + } 1655 + 1656 + /** 1513 1657 * fcoe_recv_frame() - process a single received frame 1514 1658 * @skb: frame to process 1515 1659 */ ··· 1569 1613 struct fc_lport *lport; 1570 1614 struct fcoe_rcv_info *fr; 1571 1615 struct fcoe_dev_stats *stats; 1572 - struct fc_frame_header *fh; 1573 1616 struct fcoe_crc_eof crc_eof; 1574 1617 struct fc_frame *fp; 1575 1618 struct fcoe_port *port; ··· 1599 1644 * was done in fcoe_rcv already. 1600 1645 */ 1601 1646 hp = (struct fcoe_hdr *) skb_network_header(skb); 1602 - fh = (struct fc_frame_header *) skb_transport_header(skb); 1603 1647 1604 1648 stats = per_cpu_ptr(lport->dev_stats, get_cpu()); 1605 1649 if (unlikely(FC_FCOE_DECAPS_VER(hp) != FC_FCOE_VER)) { ··· 1631 1677 if (pskb_trim(skb, fr_len)) 1632 1678 goto drop; 1633 1679 1634 - /* 1635 - * We only check CRC if no offload is available and if it is 1636 - * it's solicited data, in which case, the FCP layer would 1637 - * check it during the copy. 
1638 - */ 1639 - if (lport->crc_offload && 1640 - skb->ip_summed == CHECKSUM_UNNECESSARY) 1641 - fr_flags(fp) &= ~FCPHF_CRC_UNCHECKED; 1642 - else 1643 - fr_flags(fp) |= FCPHF_CRC_UNCHECKED; 1644 - 1645 - fh = fc_frame_header_get(fp); 1646 - if ((fh->fh_r_ctl != FC_RCTL_DD_SOL_DATA || 1647 - fh->fh_type != FC_TYPE_FCP) && 1648 - (fr_flags(fp) & FCPHF_CRC_UNCHECKED)) { 1649 - if (le32_to_cpu(fr_crc(fp)) != 1650 - ~crc32(~0, skb->data, fr_len)) { 1651 - if (stats->InvalidCRCCount < 5) 1652 - printk(KERN_WARNING "fcoe: dropping " 1653 - "frame with CRC error\n"); 1654 - stats->InvalidCRCCount++; 1655 - goto drop; 1656 - } 1657 - fr_flags(fp) &= ~FCPHF_CRC_UNCHECKED; 1680 + if (!fcoe_filter_frames(lport, fp)) { 1681 + put_cpu(); 1682 + fc_exch_recv(lport, fp); 1683 + return; 1658 1684 } 1659 - put_cpu(); 1660 - fc_exch_recv(lport, fp); 1661 - return; 1662 - 1663 1685 drop: 1664 1686 stats->ErrorFrames++; 1665 1687 put_cpu(); ··· 1671 1741 fcoe_recv_frame(skb); 1672 1742 } 1673 1743 return 0; 1674 - } 1675 - 1676 - /** 1677 - * fcoe_check_wait_queue() - Attempt to clear the transmit backlog 1678 - * @lport: The local port whose backlog is to be cleared 1679 - * 1680 - * This empties the wait_queue, dequeues the head of the wait_queue queue 1681 - * and calls fcoe_start_io() for each packet. If all skb have been 1682 - * transmitted it returns the qlen. If an error occurs it restores 1683 - * wait_queue (to try again later) and returns -1. 1684 - * 1685 - * The wait_queue is used when the skb transmit fails. The failed skb 1686 - * will go in the wait_queue which will be emptied by the timer function or 1687 - * by the next skb transmit. 1688 - */ 1689 - static void fcoe_check_wait_queue(struct fc_lport *lport, struct sk_buff *skb) 1690 - { 1691 - struct fcoe_port *port = lport_priv(lport); 1692 - int rc; 1693 - 1694 - spin_lock_bh(&port->fcoe_pending_queue.lock); 1695 - 1696 - if (skb) 1697 - __skb_queue_tail(&port->fcoe_pending_queue, skb); 1698 - 1699 - if (port->fcoe_pending_queue_active) 1700 - goto out; 1701 - port->fcoe_pending_queue_active = 1; 1702 - 1703 - while (port->fcoe_pending_queue.qlen) { 1704 - /* keep qlen > 0 until fcoe_start_io succeeds */ 1705 - port->fcoe_pending_queue.qlen++; 1706 - skb = __skb_dequeue(&port->fcoe_pending_queue); 1707 - 1708 - spin_unlock_bh(&port->fcoe_pending_queue.lock); 1709 - rc = fcoe_start_io(skb); 1710 - spin_lock_bh(&port->fcoe_pending_queue.lock); 1711 - 1712 - if (rc) { 1713 - __skb_queue_head(&port->fcoe_pending_queue, skb); 1714 - /* undo temporary increment above */ 1715 - port->fcoe_pending_queue.qlen--; 1716 - break; 1717 - } 1718 - /* undo temporary increment above */ 1719 - port->fcoe_pending_queue.qlen--; 1720 - } 1721 - 1722 - if (port->fcoe_pending_queue.qlen < FCOE_LOW_QUEUE_DEPTH) 1723 - lport->qfull = 0; 1724 - if (port->fcoe_pending_queue.qlen && !timer_pending(&port->timer)) 1725 - mod_timer(&port->timer, jiffies + 2); 1726 - port->fcoe_pending_queue_active = 0; 1727 - out: 1728 - if (port->fcoe_pending_queue.qlen > FCOE_MAX_QUEUE_DEPTH) 1729 - lport->qfull = 1; 1730 - spin_unlock_bh(&port->fcoe_pending_queue.lock); 1731 - return; 1732 1744 } 1733 1745 1734 1746 /** ··· 1744 1872 list_del(&fcoe->list); 1745 1873 port = lport_priv(fcoe->ctlr.lp); 1746 1874 fcoe_interface_cleanup(fcoe); 1747 - schedule_work(&port->destroy_work); 1875 + queue_work(fcoe_wq, &port->destroy_work); 1748 1876 goto out; 1749 1877 break; 1750 1878 case NETDEV_FEAT_CHANGE: ··· 1770 1898 } 1771 1899 1772 1900 /** 1773 - * fcoe_if_to_netdev() - Parse a name 
buffer to get a net device 1774 - * @buffer: The name of the net device 1775 - * 1776 - * Returns: NULL or a ptr to net_device 1777 - */ 1778 - static struct net_device *fcoe_if_to_netdev(const char *buffer) 1779 - { 1780 - char *cp; 1781 - char ifname[IFNAMSIZ + 2]; 1782 - 1783 - if (buffer) { 1784 - strlcpy(ifname, buffer, IFNAMSIZ); 1785 - cp = ifname + strlen(ifname); 1786 - while (--cp >= ifname && *cp == '\n') 1787 - *cp = '\0'; 1788 - return dev_get_by_name(&init_net, ifname); 1789 - } 1790 - return NULL; 1791 - } 1792 - 1793 - /** 1794 1901 * fcoe_disable() - Disables a FCoE interface 1795 - * @buffer: The name of the Ethernet interface to be disabled 1796 - * @kp: The associated kernel parameter 1902 + * @netdev : The net_device object the Ethernet interface to create on 1797 1903 * 1798 - * Called from sysfs. 1904 + * Called from fcoe transport. 1799 1905 * 1800 1906 * Returns: 0 for success 1801 1907 */ 1802 - static int fcoe_disable(const char *buffer, struct kernel_param *kp) 1908 + static int fcoe_disable(struct net_device *netdev) 1803 1909 { 1804 1910 struct fcoe_interface *fcoe; 1805 - struct net_device *netdev; 1806 1911 int rc = 0; 1807 1912 1808 1913 mutex_lock(&fcoe_config_mutex); ··· 1795 1946 } 1796 1947 #endif 1797 1948 1798 - netdev = fcoe_if_to_netdev(buffer); 1799 - if (!netdev) { 1800 - rc = -ENODEV; 1801 - goto out_nodev; 1802 - } 1803 - 1804 1949 if (!rtnl_trylock()) { 1805 - dev_put(netdev); 1806 1950 mutex_unlock(&fcoe_config_mutex); 1807 - return restart_syscall(); 1951 + return -ERESTARTSYS; 1808 1952 } 1809 1953 1810 1954 fcoe = fcoe_hostlist_lookup_port(netdev); ··· 1809 1967 } else 1810 1968 rc = -ENODEV; 1811 1969 1812 - dev_put(netdev); 1813 1970 out_nodev: 1814 1971 mutex_unlock(&fcoe_config_mutex); 1815 1972 return rc; ··· 1816 1975 1817 1976 /** 1818 1977 * fcoe_enable() - Enables a FCoE interface 1819 - * @buffer: The name of the Ethernet interface to be enabled 1820 - * @kp: The associated kernel parameter 1978 + * @netdev : The net_device object the Ethernet interface to create on 1821 1979 * 1822 - * Called from sysfs. 1980 + * Called from fcoe transport. 1823 1981 * 1824 1982 * Returns: 0 for success 1825 1983 */ 1826 - static int fcoe_enable(const char *buffer, struct kernel_param *kp) 1984 + static int fcoe_enable(struct net_device *netdev) 1827 1985 { 1828 1986 struct fcoe_interface *fcoe; 1829 - struct net_device *netdev; 1830 1987 int rc = 0; 1831 1988 1832 1989 mutex_lock(&fcoe_config_mutex); ··· 1839 2000 goto out_nodev; 1840 2001 } 1841 2002 #endif 1842 - 1843 - netdev = fcoe_if_to_netdev(buffer); 1844 - if (!netdev) { 1845 - rc = -ENODEV; 1846 - goto out_nodev; 1847 - } 1848 - 1849 2003 if (!rtnl_trylock()) { 1850 - dev_put(netdev); 1851 2004 mutex_unlock(&fcoe_config_mutex); 1852 - return restart_syscall(); 2005 + return -ERESTARTSYS; 1853 2006 } 1854 2007 1855 2008 fcoe = fcoe_hostlist_lookup_port(netdev); ··· 1852 2021 else if (!fcoe_link_ok(fcoe->ctlr.lp)) 1853 2022 fcoe_ctlr_link_up(&fcoe->ctlr); 1854 2023 1855 - dev_put(netdev); 1856 2024 out_nodev: 1857 2025 mutex_unlock(&fcoe_config_mutex); 1858 2026 return rc; ··· 1859 2029 1860 2030 /** 1861 2031 * fcoe_destroy() - Destroy a FCoE interface 1862 - * @buffer: The name of the Ethernet interface to be destroyed 1863 - * @kp: The associated kernel parameter 2032 + * @netdev : The net_device object the Ethernet interface to create on 1864 2033 * 1865 - * Called from sysfs. 
2034 + * Called from fcoe transport 1866 2035 * 1867 2036 * Returns: 0 for success 1868 2037 */ 1869 - static int fcoe_destroy(const char *buffer, struct kernel_param *kp) 2038 + static int fcoe_destroy(struct net_device *netdev) 1870 2039 { 1871 2040 struct fcoe_interface *fcoe; 1872 - struct net_device *netdev; 1873 2041 int rc = 0; 1874 2042 1875 2043 mutex_lock(&fcoe_config_mutex); ··· 1882 2054 goto out_nodev; 1883 2055 } 1884 2056 #endif 1885 - 1886 - netdev = fcoe_if_to_netdev(buffer); 1887 - if (!netdev) { 1888 - rc = -ENODEV; 1889 - goto out_nodev; 1890 - } 1891 - 1892 2057 if (!rtnl_trylock()) { 1893 - dev_put(netdev); 1894 2058 mutex_unlock(&fcoe_config_mutex); 1895 - return restart_syscall(); 2059 + return -ERESTARTSYS; 1896 2060 } 1897 2061 1898 2062 fcoe = fcoe_hostlist_lookup_port(netdev); 1899 2063 if (!fcoe) { 1900 2064 rtnl_unlock(); 1901 2065 rc = -ENODEV; 1902 - goto out_putdev; 2066 + goto out_nodev; 1903 2067 } 1904 2068 fcoe_interface_cleanup(fcoe); 1905 2069 list_del(&fcoe->list); 1906 2070 /* RTNL mutex is dropped by fcoe_if_destroy */ 1907 2071 fcoe_if_destroy(fcoe->ctlr.lp); 1908 - 1909 - out_putdev: 1910 - dev_put(netdev); 1911 2072 out_nodev: 1912 2073 mutex_unlock(&fcoe_config_mutex); 1913 2074 return rc; ··· 1919 2102 } 1920 2103 1921 2104 /** 1922 - * fcoe_create() - Create a fcoe interface 1923 - * @buffer: The name of the Ethernet interface to create on 1924 - * @kp: The associated kernel param 2105 + * fcoe_match() - Check if the FCoE is supported on the given netdevice 2106 + * @netdev : The net_device object the Ethernet interface to create on 1925 2107 * 1926 - * Called from sysfs. 2108 + * Called from fcoe transport. 2109 + * 2110 + * Returns: always returns true as this is the default FCoE transport, 2111 + * i.e., support all netdevs. 
2112 + */ 2113 + static bool fcoe_match(struct net_device *netdev) 2114 + { 2115 + return true; 2116 + } 2117 + 2118 + /** 2119 + * fcoe_create() - Create a fcoe interface 2120 + * @netdev : The net_device object the Ethernet interface to create on 2121 + * @fip_mode: The FIP mode for this creation 2122 + * 2123 + * Called from fcoe transport 1927 2124 * 1928 2125 * Returns: 0 for success 1929 2126 */ 1930 - static int fcoe_create(const char *buffer, struct kernel_param *kp) 2127 + static int fcoe_create(struct net_device *netdev, enum fip_state fip_mode) 1931 2128 { 1932 - enum fip_state fip_mode = (enum fip_state)(long)kp->arg; 1933 2129 int rc; 1934 2130 struct fcoe_interface *fcoe; 1935 2131 struct fc_lport *lport; 1936 - struct net_device *netdev; 1937 2132 1938 2133 mutex_lock(&fcoe_config_mutex); 1939 2134 1940 2135 if (!rtnl_trylock()) { 1941 2136 mutex_unlock(&fcoe_config_mutex); 1942 - return restart_syscall(); 2137 + return -ERESTARTSYS; 1943 2138 } 1944 2139 1945 2140 #ifdef CONFIG_FCOE_MODULE ··· 1962 2133 */ 1963 2134 if (THIS_MODULE->state != MODULE_STATE_LIVE) { 1964 2135 rc = -ENODEV; 1965 - goto out_nomod; 1966 - } 1967 - #endif 1968 - 1969 - if (!try_module_get(THIS_MODULE)) { 1970 - rc = -EINVAL; 1971 - goto out_nomod; 1972 - } 1973 - 1974 - netdev = fcoe_if_to_netdev(buffer); 1975 - if (!netdev) { 1976 - rc = -ENODEV; 1977 2136 goto out_nodev; 1978 2137 } 2138 + #endif 1979 2139 1980 2140 /* look for existing lport */ 1981 2141 if (fcoe_hostlist_lookup(netdev)) { 1982 2142 rc = -EEXIST; 1983 - goto out_putdev; 2143 + goto out_nodev; 1984 2144 } 1985 2145 1986 2146 fcoe = fcoe_interface_create(netdev, fip_mode); 1987 - if (!fcoe) { 1988 - rc = -ENOMEM; 1989 - goto out_putdev; 2147 + if (IS_ERR(fcoe)) { 2148 + rc = PTR_ERR(fcoe); 2149 + goto out_nodev; 1990 2150 } 1991 2151 1992 2152 lport = fcoe_if_create(fcoe, &netdev->dev, 0); ··· 2004 2186 * should be holding a reference taken in fcoe_if_create(). 
2005 2187 */ 2006 2188 fcoe_interface_put(fcoe); 2007 - dev_put(netdev); 2008 2189 rtnl_unlock(); 2009 2190 mutex_unlock(&fcoe_config_mutex); 2010 2191 2011 2192 return 0; 2012 2193 out_free: 2013 2194 fcoe_interface_put(fcoe); 2014 - out_putdev: 2015 - dev_put(netdev); 2016 2195 out_nodev: 2017 - module_put(THIS_MODULE); 2018 - out_nomod: 2019 2196 rtnl_unlock(); 2020 2197 mutex_unlock(&fcoe_config_mutex); 2021 2198 return rc; ··· 2025 2212 */ 2026 2213 int fcoe_link_speed_update(struct fc_lport *lport) 2027 2214 { 2028 - struct fcoe_port *port = lport_priv(lport); 2029 - struct net_device *netdev = port->fcoe->netdev; 2215 + struct net_device *netdev = fcoe_netdev(lport); 2030 2216 struct ethtool_cmd ecmd = { ETHTOOL_GSET }; 2031 2217 2032 2218 if (!dev_ethtool_get_settings(netdev, &ecmd)) { ··· 2056 2244 */ 2057 2245 int fcoe_link_ok(struct fc_lport *lport) 2058 2246 { 2059 - struct fcoe_port *port = lport_priv(lport); 2060 - struct net_device *netdev = port->fcoe->netdev; 2247 + struct net_device *netdev = fcoe_netdev(lport); 2061 2248 2062 2249 if (netif_oper_up(netdev)) 2063 2250 return 0; ··· 2120 2309 } 2121 2310 2122 2311 /** 2123 - * fcoe_clean_pending_queue() - Dequeue a skb and free it 2124 - * @lport: The local port to dequeue a skb on 2125 - */ 2126 - void fcoe_clean_pending_queue(struct fc_lport *lport) 2127 - { 2128 - struct fcoe_port *port = lport_priv(lport); 2129 - struct sk_buff *skb; 2130 - 2131 - spin_lock_bh(&port->fcoe_pending_queue.lock); 2132 - while ((skb = __skb_dequeue(&port->fcoe_pending_queue)) != NULL) { 2133 - spin_unlock_bh(&port->fcoe_pending_queue.lock); 2134 - kfree_skb(skb); 2135 - spin_lock_bh(&port->fcoe_pending_queue.lock); 2136 - } 2137 - spin_unlock_bh(&port->fcoe_pending_queue.lock); 2138 - } 2139 - 2140 - /** 2141 2312 * fcoe_reset() - Reset a local port 2142 2313 * @shost: The SCSI host associated with the local port to be reset 2143 2314 * ··· 2128 2335 int fcoe_reset(struct Scsi_Host *shost) 2129 2336 { 2130 2337 struct fc_lport *lport = shost_priv(shost); 2131 - fc_lport_reset(lport); 2338 + struct fcoe_port *port = lport_priv(lport); 2339 + struct fcoe_interface *fcoe = port->priv; 2340 + 2341 + fcoe_ctlr_link_down(&fcoe->ctlr); 2342 + fcoe_clean_pending_queue(fcoe->ctlr.lp); 2343 + if (!fcoe_link_ok(fcoe->ctlr.lp)) 2344 + fcoe_ctlr_link_up(&fcoe->ctlr); 2132 2345 return 0; 2133 2346 } 2134 2347 ··· 2192 2393 fcoe = fcoe_hostlist_lookup_port(fcoe_netdev(lport)); 2193 2394 if (!fcoe) { 2194 2395 port = lport_priv(lport); 2195 - fcoe = port->fcoe; 2396 + fcoe = port->priv; 2196 2397 list_add_tail(&fcoe->list, &fcoe_hostlist); 2197 2398 } 2198 2399 return 0; 2199 2400 } 2401 + 2402 + 2403 + static struct fcoe_transport fcoe_sw_transport = { 2404 + .name = {FCOE_TRANSPORT_DEFAULT}, 2405 + .attached = false, 2406 + .list = LIST_HEAD_INIT(fcoe_sw_transport.list), 2407 + .match = fcoe_match, 2408 + .create = fcoe_create, 2409 + .destroy = fcoe_destroy, 2410 + .enable = fcoe_enable, 2411 + .disable = fcoe_disable, 2412 + }; 2200 2413 2201 2414 /** 2202 2415 * fcoe_init() - Initialize fcoe.ko ··· 2220 2409 struct fcoe_percpu_s *p; 2221 2410 unsigned int cpu; 2222 2411 int rc = 0; 2412 + 2413 + fcoe_wq = alloc_workqueue("fcoe", 0, 0); 2414 + if (!fcoe_wq) 2415 + return -ENOMEM; 2416 + 2417 + /* register as a fcoe transport */ 2418 + rc = fcoe_transport_attach(&fcoe_sw_transport); 2419 + if (rc) { 2420 + printk(KERN_ERR "failed to register an fcoe transport, check " 2421 + "if libfcoe is loaded\n"); 2422 + return rc; 2423 + } 2223 2424 2224 2425 
mutex_lock(&fcoe_config_mutex); 2225 2426 ··· 2263 2440 fcoe_percpu_thread_destroy(cpu); 2264 2441 } 2265 2442 mutex_unlock(&fcoe_config_mutex); 2443 + destroy_workqueue(fcoe_wq); 2266 2444 return rc; 2267 2445 } 2268 2446 module_init(fcoe_init); ··· 2289 2465 list_del(&fcoe->list); 2290 2466 port = lport_priv(fcoe->ctlr.lp); 2291 2467 fcoe_interface_cleanup(fcoe); 2292 - schedule_work(&port->destroy_work); 2468 + queue_work(fcoe_wq, &port->destroy_work); 2293 2469 } 2294 2470 rtnl_unlock(); 2295 2471 ··· 2300 2476 2301 2477 mutex_unlock(&fcoe_config_mutex); 2302 2478 2303 - /* flush any asyncronous interface destroys, 2304 - * this should happen after the netdev notifier is unregistered */ 2305 - flush_scheduled_work(); 2306 - /* That will flush out all the N_Ports on the hostlist, but now we 2307 - * may have NPIV VN_Ports scheduled for destruction */ 2308 - flush_scheduled_work(); 2479 + /* 2480 + * destroy_work's may be chained but destroy_workqueue() 2481 + * can take care of them. Just kill the fcoe_wq. 2482 + */ 2483 + destroy_workqueue(fcoe_wq); 2309 2484 2310 - /* detach from scsi transport 2311 - * must happen after all destroys are done, therefor after the flush */ 2485 + /* 2486 + * Detaching from the scsi transport must happen after all 2487 + * destroys are done on the fcoe_wq. destroy_workqueue will 2488 + * enusre the fcoe_wq is flushed. 2489 + */ 2312 2490 fcoe_if_exit(); 2491 + 2492 + /* detach from fcoe transport */ 2493 + fcoe_transport_detach(&fcoe_sw_transport); 2313 2494 } 2314 2495 module_exit(fcoe_exit); 2315 2496 ··· 2386 2557 void *arg, u32 timeout) 2387 2558 { 2388 2559 struct fcoe_port *port = lport_priv(lport); 2389 - struct fcoe_interface *fcoe = port->fcoe; 2560 + struct fcoe_interface *fcoe = port->priv; 2390 2561 struct fcoe_ctlr *fip = &fcoe->ctlr; 2391 2562 struct fc_frame_header *fh = fc_frame_header_get(fp); 2392 2563 ··· 2419 2590 struct Scsi_Host *shost = vport_to_shost(vport); 2420 2591 struct fc_lport *n_port = shost_priv(shost); 2421 2592 struct fcoe_port *port = lport_priv(n_port); 2422 - struct fcoe_interface *fcoe = port->fcoe; 2593 + struct fcoe_interface *fcoe = port->priv; 2423 2594 struct net_device *netdev = fcoe->netdev; 2424 2595 struct fc_lport *vn_port; 2425 2596 ··· 2459 2630 mutex_lock(&n_port->lp_mutex); 2460 2631 list_del(&vn_port->list); 2461 2632 mutex_unlock(&n_port->lp_mutex); 2462 - schedule_work(&port->destroy_work); 2633 + queue_work(fcoe_wq, &port->destroy_work); 2463 2634 return 0; 2464 2635 } 2465 2636 ··· 2563 2734 u32 port_id, struct fc_frame *fp) 2564 2735 { 2565 2736 struct fcoe_port *port = lport_priv(lport); 2566 - struct fcoe_interface *fcoe = port->fcoe; 2737 + struct fcoe_interface *fcoe = port->priv; 2567 2738 2568 2739 if (fp && fc_frame_payload_op(fp) == ELS_FLOGI) 2569 2740 fcoe_ctlr_recv_flogi(&fcoe->ctlr, lport, fp);
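The net effect of the fcoe.c changes is that fcoe.ko no longer owns the create/destroy/enable/disable entry points: it registers them with libfcoe as one pluggable transport, with fcoe_match() as the catch-all that claims any netdev. A minimal sketch of how another transport would plug into the same interface, using only the struct fcoe_transport fields and callback prototypes visible in this hunk (all my_fcoe_* names are hypothetical, not part of this patch):

    #include <linux/netdevice.h>
    #include <scsi/libfcoe.h>    /* struct fcoe_transport (assumed location) */

    static bool my_fcoe_match(struct net_device *netdev)
    {
            /* Claim only NICs this transport can drive; returning false
             * lets the lookup fall through to the default software
             * transport, whose fcoe_match() always returns true. */
            return my_fcoe_netdev_supported(netdev);    /* assumed helper */
    }

    static int my_fcoe_create(struct net_device *netdev, enum fip_state fip_mode)
    {
            return 0;    /* allocate an lport, set up FIP in fip_mode, ... */
    }

    static int my_fcoe_destroy(struct net_device *netdev) { return 0; }
    static int my_fcoe_enable(struct net_device *netdev)  { return 0; }
    static int my_fcoe_disable(struct net_device *netdev) { return 0; }

    static struct fcoe_transport my_fcoe_transport = {
            .name     = { "my_fcoe" },  /* anything but FCOE_TRANSPORT_DEFAULT */
            .attached = false,
            .list     = LIST_HEAD_INIT(my_fcoe_transport.list),
            .match    = my_fcoe_match,
            .create   = my_fcoe_create,
            .destroy  = my_fcoe_destroy,
            .enable   = my_fcoe_enable,
            .disable  = my_fcoe_disable,
    };

Registration is then a single fcoe_transport_attach(&my_fcoe_transport) at module init, mirroring what fcoe_init() does above; fcoe_transport_attach() adds non-default transports at the head of the list, so they are matched before the catch-all default.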
+3 -47
drivers/scsi/fcoe/fcoe.h
··· 24 24 #include <linux/kthread.h> 25 25 26 26 #define FCOE_MAX_QUEUE_DEPTH 256 27 - #define FCOE_LOW_QUEUE_DEPTH 32 27 + #define FCOE_MIN_QUEUE_DEPTH 32 28 28 29 29 #define FCOE_WORD_TO_BYTE 4 30 30 ··· 39 39 40 40 #define FCOE_MIN_XID 0x0000 /* the min xid supported by fcoe_sw */ 41 41 #define FCOE_MAX_XID 0x0FFF /* the max xid supported by fcoe_sw */ 42 - 43 - /* 44 - * Max MTU for FCoE: 14 (FCoE header) + 24 (FC header) + 2112 (max FC payload) 45 - * + 4 (FC CRC) + 4 (FCoE trailer) = 2158 bytes 46 - */ 47 - #define FCOE_MTU 2158 48 42 49 43 unsigned int fcoe_debug_logging; 50 44 module_param_named(debug_logging, fcoe_debug_logging, int, S_IRUGO|S_IWUSR); ··· 65 71 netdev->name, ##args);) 66 72 67 73 /** 68 - * struct fcoe_percpu_s - The per-CPU context for FCoE receive threads 69 - * @thread: The thread context 70 - * @fcoe_rx_list: The queue of pending packets to process 71 - * @page: The memory page for calculating frame trailer CRCs 72 - * @crc_eof_offset: The offset into the CRC page pointing to available 73 - * memory for a new trailer 74 - */ 75 - struct fcoe_percpu_s { 76 - struct task_struct *thread; 77 - struct sk_buff_head fcoe_rx_list; 78 - struct page *crc_eof_page; 79 - int crc_eof_offset; 80 - }; 81 - 82 - /** 83 74 * struct fcoe_interface - A FCoE interface 84 75 * @list: Handle for a list of FCoE interfaces 85 76 * @netdev: The associated net device ··· 87 108 struct kref kref; 88 109 }; 89 110 90 - /** 91 - * struct fcoe_port - The FCoE private structure 92 - * @fcoe: The associated fcoe interface 93 - * @lport: The associated local port 94 - * @fcoe_pending_queue: The pending Rx queue of skbs 95 - * @fcoe_pending_queue_active: Indicates if the pending queue is active 96 - * @timer: The queue timer 97 - * @destroy_work: Handle for work context 98 - * (to prevent RTNL deadlocks) 99 - * @data_srt_addr: Source address for data 100 - * 101 - * An instance of this structure is to be allocated along with the 102 - * Scsi_Host and libfc fc_lport structures. 103 - */ 104 - struct fcoe_port { 105 - struct fcoe_interface *fcoe; 106 - struct fc_lport *lport; 107 - struct sk_buff_head fcoe_pending_queue; 108 - u8 fcoe_pending_queue_active; 109 - struct timer_list timer; 110 - struct work_struct destroy_work; 111 - u8 data_src_addr[ETH_ALEN]; 112 - }; 113 - 114 111 #define fcoe_from_ctlr(fip) container_of(fip, struct fcoe_interface, ctlr) 115 112 116 113 /** ··· 95 140 */ 96 141 static inline struct net_device *fcoe_netdev(const struct fc_lport *lport) 97 142 { 98 - return ((struct fcoe_port *)lport_priv(lport))->fcoe->netdev; 143 + return ((struct fcoe_interface *) 144 + ((struct fcoe_port *)lport_priv(lport))->priv)->netdev; 99 145 } 100 146 101 147 #endif /* _FCOE_H_ */
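With struct fcoe_port gone from this header (it now has to be shared with other transports), the old @fcoe back-pointer becomes a generic @priv, and fcoe_netdev() grows the double cast seen above. Unpacked into steps, and assuming @priv is an opaque pointer in the shared definition (which is what the cast implies), the accessor is equivalent to:

    static inline struct net_device *fcoe_netdev(const struct fc_lport *lport)
    {
            struct fcoe_port *port = lport_priv(lport);  /* lport private area */
            struct fcoe_interface *fcoe = port->priv;    /* was port->fcoe */

            return fcoe->netdev;
    }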
+770
drivers/scsi/fcoe/fcoe_transport.c
··· 1 + /* 2 + * Copyright(c) 2008 - 2011 Intel Corporation. All rights reserved. 3 + * 4 + * This program is free software; you can redistribute it and/or modify it 5 + * under the terms and conditions of the GNU General Public License, 6 + * version 2, as published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope it will be useful, but WITHOUT 9 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 10 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 11 + * more details. 12 + * 13 + * You should have received a copy of the GNU General Public License along with 14 + * this program; if not, write to the Free Software Foundation, Inc., 15 + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 16 + * 17 + * Maintained at www.Open-FCoE.org 18 + */ 19 + 20 + #include <linux/types.h> 21 + #include <linux/module.h> 22 + #include <linux/kernel.h> 23 + #include <linux/list.h> 24 + #include <linux/netdevice.h> 25 + #include <linux/errno.h> 26 + #include <linux/crc32.h> 27 + #include <scsi/libfcoe.h> 28 + 29 + #include "libfcoe.h" 30 + 31 + MODULE_AUTHOR("Open-FCoE.org"); 32 + MODULE_DESCRIPTION("FIP discovery protocol and FCoE transport for FCoE HBAs"); 33 + MODULE_LICENSE("GPL v2"); 34 + 35 + static int fcoe_transport_create(const char *, struct kernel_param *); 36 + static int fcoe_transport_destroy(const char *, struct kernel_param *); 37 + static int fcoe_transport_show(char *buffer, const struct kernel_param *kp); 38 + static struct fcoe_transport *fcoe_transport_lookup(struct net_device *device); 39 + static struct fcoe_transport *fcoe_netdev_map_lookup(struct net_device *device); 40 + static int fcoe_transport_enable(const char *, struct kernel_param *); 41 + static int fcoe_transport_disable(const char *, struct kernel_param *); 42 + static int libfcoe_device_notification(struct notifier_block *notifier, 43 + ulong event, void *ptr); 44 + 45 + static LIST_HEAD(fcoe_transports); 46 + static DEFINE_MUTEX(ft_mutex); 47 + static LIST_HEAD(fcoe_netdevs); 48 + static DEFINE_MUTEX(fn_mutex); 49 + 50 + unsigned int libfcoe_debug_logging; 51 + module_param_named(debug_logging, libfcoe_debug_logging, int, S_IRUGO|S_IWUSR); 52 + MODULE_PARM_DESC(debug_logging, "a bit mask of logging levels"); 53 + 54 + module_param_call(show, NULL, fcoe_transport_show, NULL, S_IRUSR); 55 + __MODULE_PARM_TYPE(show, "string"); 56 + MODULE_PARM_DESC(show, " Show attached FCoE transports"); 57 + 58 + module_param_call(create, fcoe_transport_create, NULL, 59 + (void *)FIP_MODE_FABRIC, S_IWUSR); 60 + __MODULE_PARM_TYPE(create, "string"); 61 + MODULE_PARM_DESC(create, " Creates fcoe instance on a ethernet interface"); 62 + 63 + module_param_call(create_vn2vn, fcoe_transport_create, NULL, 64 + (void *)FIP_MODE_VN2VN, S_IWUSR); 65 + __MODULE_PARM_TYPE(create_vn2vn, "string"); 66 + MODULE_PARM_DESC(create_vn2vn, " Creates a VN_node to VN_node FCoE instance " 67 + "on an Ethernet interface"); 68 + 69 + module_param_call(destroy, fcoe_transport_destroy, NULL, NULL, S_IWUSR); 70 + __MODULE_PARM_TYPE(destroy, "string"); 71 + MODULE_PARM_DESC(destroy, " Destroys fcoe instance on a ethernet interface"); 72 + 73 + module_param_call(enable, fcoe_transport_enable, NULL, NULL, S_IWUSR); 74 + __MODULE_PARM_TYPE(enable, "string"); 75 + MODULE_PARM_DESC(enable, " Enables fcoe on a ethernet interface."); 76 + 77 + module_param_call(disable, fcoe_transport_disable, NULL, NULL, S_IWUSR); 78 + __MODULE_PARM_TYPE(disable, "string"); 79 + 
MODULE_PARM_DESC(disable, " Disables fcoe on a ethernet interface."); 80 + 81 + /* notification function for packets from net device */ 82 + static struct notifier_block libfcoe_notifier = { 83 + .notifier_call = libfcoe_device_notification, 84 + }; 85 + 86 + /** 87 + * fcoe_fc_crc() - Calculates the CRC for a given frame 88 + * @fp: The frame to be checksumed 89 + * 90 + * This uses crc32() routine to calculate the CRC for a frame 91 + * 92 + * Return: The 32 bit CRC value 93 + */ 94 + u32 fcoe_fc_crc(struct fc_frame *fp) 95 + { 96 + struct sk_buff *skb = fp_skb(fp); 97 + struct skb_frag_struct *frag; 98 + unsigned char *data; 99 + unsigned long off, len, clen; 100 + u32 crc; 101 + unsigned i; 102 + 103 + crc = crc32(~0, skb->data, skb_headlen(skb)); 104 + 105 + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { 106 + frag = &skb_shinfo(skb)->frags[i]; 107 + off = frag->page_offset; 108 + len = frag->size; 109 + while (len > 0) { 110 + clen = min(len, PAGE_SIZE - (off & ~PAGE_MASK)); 111 + data = kmap_atomic(frag->page + (off >> PAGE_SHIFT), 112 + KM_SKB_DATA_SOFTIRQ); 113 + crc = crc32(crc, data + (off & ~PAGE_MASK), clen); 114 + kunmap_atomic(data, KM_SKB_DATA_SOFTIRQ); 115 + off += clen; 116 + len -= clen; 117 + } 118 + } 119 + return crc; 120 + } 121 + EXPORT_SYMBOL_GPL(fcoe_fc_crc); 122 + 123 + /** 124 + * fcoe_start_io() - Start FCoE I/O 125 + * @skb: The packet to be transmitted 126 + * 127 + * This routine is called from the net device to start transmitting 128 + * FCoE packets. 129 + * 130 + * Returns: 0 for success 131 + */ 132 + int fcoe_start_io(struct sk_buff *skb) 133 + { 134 + struct sk_buff *nskb; 135 + int rc; 136 + 137 + nskb = skb_clone(skb, GFP_ATOMIC); 138 + if (!nskb) 139 + return -ENOMEM; 140 + rc = dev_queue_xmit(nskb); 141 + if (rc != 0) 142 + return rc; 143 + kfree_skb(skb); 144 + return 0; 145 + } 146 + EXPORT_SYMBOL_GPL(fcoe_start_io); 147 + 148 + 149 + /** 150 + * fcoe_clean_pending_queue() - Dequeue a skb and free it 151 + * @lport: The local port to dequeue a skb on 152 + */ 153 + void fcoe_clean_pending_queue(struct fc_lport *lport) 154 + { 155 + struct fcoe_port *port = lport_priv(lport); 156 + struct sk_buff *skb; 157 + 158 + spin_lock_bh(&port->fcoe_pending_queue.lock); 159 + while ((skb = __skb_dequeue(&port->fcoe_pending_queue)) != NULL) { 160 + spin_unlock_bh(&port->fcoe_pending_queue.lock); 161 + kfree_skb(skb); 162 + spin_lock_bh(&port->fcoe_pending_queue.lock); 163 + } 164 + spin_unlock_bh(&port->fcoe_pending_queue.lock); 165 + } 166 + EXPORT_SYMBOL_GPL(fcoe_clean_pending_queue); 167 + 168 + /** 169 + * fcoe_check_wait_queue() - Attempt to clear the transmit backlog 170 + * @lport: The local port whose backlog is to be cleared 171 + * 172 + * This empties the wait_queue, dequeues the head of the wait_queue queue 173 + * and calls fcoe_start_io() for each packet. If all skb have been 174 + * transmitted it returns the qlen. If an error occurs it restores 175 + * wait_queue (to try again later) and returns -1. 176 + * 177 + * The wait_queue is used when the skb transmit fails. The failed skb 178 + * will go in the wait_queue which will be emptied by the timer function or 179 + * by the next skb transmit. 
180 + */ 181 + void fcoe_check_wait_queue(struct fc_lport *lport, struct sk_buff *skb) 182 + { 183 + struct fcoe_port *port = lport_priv(lport); 184 + int rc; 185 + 186 + spin_lock_bh(&port->fcoe_pending_queue.lock); 187 + 188 + if (skb) 189 + __skb_queue_tail(&port->fcoe_pending_queue, skb); 190 + 191 + if (port->fcoe_pending_queue_active) 192 + goto out; 193 + port->fcoe_pending_queue_active = 1; 194 + 195 + while (port->fcoe_pending_queue.qlen) { 196 + /* keep qlen > 0 until fcoe_start_io succeeds */ 197 + port->fcoe_pending_queue.qlen++; 198 + skb = __skb_dequeue(&port->fcoe_pending_queue); 199 + 200 + spin_unlock_bh(&port->fcoe_pending_queue.lock); 201 + rc = fcoe_start_io(skb); 202 + spin_lock_bh(&port->fcoe_pending_queue.lock); 203 + 204 + if (rc) { 205 + __skb_queue_head(&port->fcoe_pending_queue, skb); 206 + /* undo temporary increment above */ 207 + port->fcoe_pending_queue.qlen--; 208 + break; 209 + } 210 + /* undo temporary increment above */ 211 + port->fcoe_pending_queue.qlen--; 212 + } 213 + 214 + if (port->fcoe_pending_queue.qlen < port->min_queue_depth) 215 + lport->qfull = 0; 216 + if (port->fcoe_pending_queue.qlen && !timer_pending(&port->timer)) 217 + mod_timer(&port->timer, jiffies + 2); 218 + port->fcoe_pending_queue_active = 0; 219 + out: 220 + if (port->fcoe_pending_queue.qlen > port->max_queue_depth) 221 + lport->qfull = 1; 222 + spin_unlock_bh(&port->fcoe_pending_queue.lock); 223 + } 224 + EXPORT_SYMBOL_GPL(fcoe_check_wait_queue); 225 + 226 + /** 227 + * fcoe_queue_timer() - The fcoe queue timer 228 + * @lport: The local port 229 + * 230 + * Calls fcoe_check_wait_queue on timeout 231 + */ 232 + void fcoe_queue_timer(ulong lport) 233 + { 234 + fcoe_check_wait_queue((struct fc_lport *)lport, NULL); 235 + } 236 + EXPORT_SYMBOL_GPL(fcoe_queue_timer); 237 + 238 + /** 239 + * fcoe_get_paged_crc_eof() - Allocate a page to be used for the trailer CRC 240 + * @skb: The packet to be transmitted 241 + * @tlen: The total length of the trailer 242 + * @fps: The fcoe context 243 + * 244 + * This routine allocates a page for frame trailers. The page is re-used if 245 + * there is enough room left on it for the current trailer. If there isn't 246 + * enough buffer left a new page is allocated for the trailer. Reference to 247 + * the page from this function as well as the skbs using the page fragments 248 + * ensure that the page is freed at the appropriate time. 
249 + * 250 + * Returns: 0 for success 251 + */ 252 + int fcoe_get_paged_crc_eof(struct sk_buff *skb, int tlen, 253 + struct fcoe_percpu_s *fps) 254 + { 255 + struct page *page; 256 + 257 + page = fps->crc_eof_page; 258 + if (!page) { 259 + page = alloc_page(GFP_ATOMIC); 260 + if (!page) 261 + return -ENOMEM; 262 + 263 + fps->crc_eof_page = page; 264 + fps->crc_eof_offset = 0; 265 + } 266 + 267 + get_page(page); 268 + skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags, page, 269 + fps->crc_eof_offset, tlen); 270 + skb->len += tlen; 271 + skb->data_len += tlen; 272 + skb->truesize += tlen; 273 + fps->crc_eof_offset += sizeof(struct fcoe_crc_eof); 274 + 275 + if (fps->crc_eof_offset >= PAGE_SIZE) { 276 + fps->crc_eof_page = NULL; 277 + fps->crc_eof_offset = 0; 278 + put_page(page); 279 + } 280 + 281 + return 0; 282 + } 283 + EXPORT_SYMBOL_GPL(fcoe_get_paged_crc_eof); 284 + 285 + /** 286 + * fcoe_transport_lookup - find an fcoe transport that matches a netdev 287 + * @netdev: The netdev to look for from all attached transports 288 + * 289 + * Returns : ptr to the fcoe transport that supports this netdev or NULL 290 + * if not found. 291 + * 292 + * The ft_mutex should be held when this is called 293 + */ 294 + static struct fcoe_transport *fcoe_transport_lookup(struct net_device *netdev) 295 + { 296 + struct fcoe_transport *ft = NULL; 297 + 298 + list_for_each_entry(ft, &fcoe_transports, list) 299 + if (ft->match && ft->match(netdev)) 300 + return ft; 301 + return NULL; 302 + } 303 + 304 + /** 305 + * fcoe_transport_attach - Attaches an FCoE transport 306 + * @ft: The fcoe transport to be attached 307 + * 308 + * Returns : 0 for success 309 + */ 310 + int fcoe_transport_attach(struct fcoe_transport *ft) 311 + { 312 + int rc = 0; 313 + 314 + mutex_lock(&ft_mutex); 315 + if (ft->attached) { 316 + LIBFCOE_TRANSPORT_DBG("transport %s already attached\n", 317 + ft->name); 318 + rc = -EEXIST; 319 + goto out_attach; 320 + } 321 + 322 + /* Add default transport to the tail */ 323 + if (strcmp(ft->name, FCOE_TRANSPORT_DEFAULT)) 324 + list_add(&ft->list, &fcoe_transports); 325 + else 326 + list_add_tail(&ft->list, &fcoe_transports); 327 + 328 + ft->attached = true; 329 + LIBFCOE_TRANSPORT_DBG("attaching transport %s\n", ft->name); 330 + 331 + out_attach: 332 + mutex_unlock(&ft_mutex); 333 + return rc; 334 + } 335 + EXPORT_SYMBOL(fcoe_transport_attach); 336 + 337 + /** 338 + * fcoe_transport_attach - Detaches an FCoE transport 339 + * @ft: The fcoe transport to be attached 340 + * 341 + * Returns : 0 for success 342 + */ 343 + int fcoe_transport_detach(struct fcoe_transport *ft) 344 + { 345 + int rc = 0; 346 + 347 + mutex_lock(&ft_mutex); 348 + if (!ft->attached) { 349 + LIBFCOE_TRANSPORT_DBG("transport %s already detached\n", 350 + ft->name); 351 + rc = -ENODEV; 352 + goto out_attach; 353 + } 354 + 355 + list_del(&ft->list); 356 + ft->attached = false; 357 + LIBFCOE_TRANSPORT_DBG("detaching transport %s\n", ft->name); 358 + 359 + out_attach: 360 + mutex_unlock(&ft_mutex); 361 + return rc; 362 + 363 + } 364 + EXPORT_SYMBOL(fcoe_transport_detach); 365 + 366 + static int fcoe_transport_show(char *buffer, const struct kernel_param *kp) 367 + { 368 + int i, j; 369 + struct fcoe_transport *ft = NULL; 370 + 371 + i = j = sprintf(buffer, "Attached FCoE transports:"); 372 + mutex_lock(&ft_mutex); 373 + list_for_each_entry(ft, &fcoe_transports, list) { 374 + i += snprintf(&buffer[i], IFNAMSIZ, "%s ", ft->name); 375 + if (i >= PAGE_SIZE) 376 + break; 377 + } 378 + mutex_unlock(&ft_mutex); 379 + if (i == j) 380 + i 
+= snprintf(&buffer[i], IFNAMSIZ, "none"); 381 + return i; 382 + } 383 + 384 + static int __init fcoe_transport_init(void) 385 + { 386 + register_netdevice_notifier(&libfcoe_notifier); 387 + return 0; 388 + } 389 + 390 + static int __exit fcoe_transport_exit(void) 391 + { 392 + struct fcoe_transport *ft; 393 + 394 + unregister_netdevice_notifier(&libfcoe_notifier); 395 + mutex_lock(&ft_mutex); 396 + list_for_each_entry(ft, &fcoe_transports, list) 397 + printk(KERN_ERR "FCoE transport %s is still attached!\n", 398 + ft->name); 399 + mutex_unlock(&ft_mutex); 400 + return 0; 401 + } 402 + 403 + 404 + static int fcoe_add_netdev_mapping(struct net_device *netdev, 405 + struct fcoe_transport *ft) 406 + { 407 + struct fcoe_netdev_mapping *nm; 408 + 409 + nm = kmalloc(sizeof(*nm), GFP_KERNEL); 410 + if (!nm) { 411 + printk(KERN_ERR "Unable to allocate netdev_mapping"); 412 + return -ENOMEM; 413 + } 414 + 415 + nm->netdev = netdev; 416 + nm->ft = ft; 417 + 418 + mutex_lock(&fn_mutex); 419 + list_add(&nm->list, &fcoe_netdevs); 420 + mutex_unlock(&fn_mutex); 421 + return 0; 422 + } 423 + 424 + 425 + static void fcoe_del_netdev_mapping(struct net_device *netdev) 426 + { 427 + struct fcoe_netdev_mapping *nm = NULL, *tmp; 428 + 429 + mutex_lock(&fn_mutex); 430 + list_for_each_entry_safe(nm, tmp, &fcoe_netdevs, list) { 431 + if (nm->netdev == netdev) { 432 + list_del(&nm->list); 433 + kfree(nm); 434 + mutex_unlock(&fn_mutex); 435 + return; 436 + } 437 + } 438 + mutex_unlock(&fn_mutex); 439 + } 440 + 441 + 442 + /** 443 + * fcoe_netdev_map_lookup - find the fcoe transport that matches the netdev on which 444 + * it was created 445 + * 446 + * Returns : ptr to the fcoe transport that supports this netdev or NULL 447 + * if not found. 448 + * 449 + * The ft_mutex should be held when this is called 450 + */ 451 + static struct fcoe_transport *fcoe_netdev_map_lookup(struct net_device *netdev) 452 + { 453 + struct fcoe_transport *ft = NULL; 454 + struct fcoe_netdev_mapping *nm; 455 + 456 + mutex_lock(&fn_mutex); 457 + list_for_each_entry(nm, &fcoe_netdevs, list) { 458 + if (netdev == nm->netdev) { 459 + ft = nm->ft; 460 + mutex_unlock(&fn_mutex); 461 + return ft; 462 + } 463 + } 464 + 465 + mutex_unlock(&fn_mutex); 466 + return NULL; 467 + } 468 + 469 + /** 470 + * fcoe_if_to_netdev() - Parse a name buffer to get a net device 471 + * @buffer: The name of the net device 472 + * 473 + * Returns: NULL or a ptr to net_device 474 + */ 475 + static struct net_device *fcoe_if_to_netdev(const char *buffer) 476 + { 477 + char *cp; 478 + char ifname[IFNAMSIZ + 2]; 479 + 480 + if (buffer) { 481 + strlcpy(ifname, buffer, IFNAMSIZ); 482 + cp = ifname + strlen(ifname); 483 + while (--cp >= ifname && *cp == '\n') 484 + *cp = '\0'; 485 + return dev_get_by_name(&init_net, ifname); 486 + } 487 + return NULL; 488 + } 489 + 490 + /** 491 + * libfcoe_device_notification() - Handler for net device events 492 + * @notifier: The context of the notification 493 + * @event: The type of event 494 + * @ptr: The net device that the event was on 495 + * 496 + * This function is called by the Ethernet driver in case of link change event. 
497 + * 498 + * Returns: 0 for success 499 + */ 500 + static int libfcoe_device_notification(struct notifier_block *notifier, 501 + ulong event, void *ptr) 502 + { 503 + struct net_device *netdev = ptr; 504 + 505 + switch (event) { 506 + case NETDEV_UNREGISTER: 507 + printk(KERN_ERR "libfcoe_device_notification: NETDEV_UNREGISTER %s\n", 508 + netdev->name); 509 + fcoe_del_netdev_mapping(netdev); 510 + break; 511 + } 512 + return NOTIFY_OK; 513 + } 514 + 515 + 516 + /** 517 + * fcoe_transport_create() - Create a fcoe interface 518 + * @buffer: The name of the Ethernet interface to create on 519 + * @kp: The associated kernel param 520 + * 521 + * Called from sysfs. This holds the ft_mutex while calling the 522 + * registered fcoe transport's create function. 523 + * 524 + * Returns: 0 for success 525 + */ 526 + static int fcoe_transport_create(const char *buffer, struct kernel_param *kp) 527 + { 528 + int rc = -ENODEV; 529 + struct net_device *netdev = NULL; 530 + struct fcoe_transport *ft = NULL; 531 + enum fip_state fip_mode = (enum fip_state)(long)kp->arg; 532 + 533 + if (!mutex_trylock(&ft_mutex)) 534 + return restart_syscall(); 535 + 536 + #ifdef CONFIG_LIBFCOE_MODULE 537 + /* 538 + * Make sure the module has been initialized, and is not about to be 539 + * removed. Module parameter sysfs files are writable before the 540 + * module_init function is called and after module_exit. 541 + */ 542 + if (THIS_MODULE->state != MODULE_STATE_LIVE) 543 + goto out_nodev; 544 + #endif 545 + 546 + netdev = fcoe_if_to_netdev(buffer); 547 + if (!netdev) { 548 + LIBFCOE_TRANSPORT_DBG("Invalid device %s.\n", buffer); 549 + goto out_nodev; 550 + } 551 + 552 + ft = fcoe_netdev_map_lookup(netdev); 553 + if (ft) { 554 + LIBFCOE_TRANSPORT_DBG("transport %s already has existing " 555 + "FCoE instance on %s.\n", 556 + ft->name, netdev->name); 557 + rc = -EEXIST; 558 + goto out_putdev; 559 + } 560 + 561 + ft = fcoe_transport_lookup(netdev); 562 + if (!ft) { 563 + LIBFCOE_TRANSPORT_DBG("no FCoE transport found for %s.\n", 564 + netdev->name); 565 + goto out_putdev; 566 + } 567 + 568 + rc = fcoe_add_netdev_mapping(netdev, ft); 569 + if (rc) { 570 + LIBFCOE_TRANSPORT_DBG("failed to add new netdev mapping " 571 + "for FCoE transport %s for %s.\n", 572 + ft->name, netdev->name); 573 + goto out_putdev; 574 + } 575 + 576 + /* pass to transport create */ 577 + rc = ft->create ? ft->create(netdev, fip_mode) : -ENODEV; 578 + if (rc) 579 + fcoe_del_netdev_mapping(netdev); 580 + 581 + LIBFCOE_TRANSPORT_DBG("transport %s %s to create fcoe on %s.\n", 582 + ft->name, (rc) ? "failed" : "succeeded", 583 + netdev->name); 584 + 585 + out_putdev: 586 + dev_put(netdev); 587 + out_nodev: 588 + mutex_unlock(&ft_mutex); 589 + if (rc == -ERESTARTSYS) 590 + return restart_syscall(); 591 + else 592 + return rc; 593 + } 594 + 595 + /** 596 + * fcoe_transport_destroy() - Destroy a FCoE interface 597 + * @buffer: The name of the Ethernet interface to be destroyed 598 + * @kp: The associated kernel parameter 599 + * 600 + * Called from sysfs. This holds the ft_mutex while calling the 601 + * registered fcoe transport's destroy function. 
602 + * 603 + * Returns: 0 for success 604 + */ 605 + static int fcoe_transport_destroy(const char *buffer, struct kernel_param *kp) 606 + { 607 + int rc = -ENODEV; 608 + struct net_device *netdev = NULL; 609 + struct fcoe_transport *ft = NULL; 610 + 611 + if (!mutex_trylock(&ft_mutex)) 612 + return restart_syscall(); 613 + 614 + #ifdef CONFIG_LIBFCOE_MODULE 615 + /* 616 + * Make sure the module has been initialized, and is not about to be 617 + * removed. Module parameter sysfs files are writable before the 618 + * module_init function is called and after module_exit. 619 + */ 620 + if (THIS_MODULE->state != MODULE_STATE_LIVE) 621 + goto out_nodev; 622 + #endif 623 + 624 + netdev = fcoe_if_to_netdev(buffer); 625 + if (!netdev) { 626 + LIBFCOE_TRANSPORT_DBG("invalid device %s.\n", buffer); 627 + goto out_nodev; 628 + } 629 + 630 + ft = fcoe_netdev_map_lookup(netdev); 631 + if (!ft) { 632 + LIBFCOE_TRANSPORT_DBG("no FCoE transport found for %s.\n", 633 + netdev->name); 634 + goto out_putdev; 635 + } 636 + 637 + /* pass to transport destroy */ 638 + rc = ft->destroy ? ft->destroy(netdev) : -ENODEV; 639 + fcoe_del_netdev_mapping(netdev); 640 + LIBFCOE_TRANSPORT_DBG("transport %s %s to destroy fcoe on %s.\n", 641 + ft->name, (rc) ? "failed" : "succeeded", 642 + netdev->name); 643 + 644 + out_putdev: 645 + dev_put(netdev); 646 + out_nodev: 647 + mutex_unlock(&ft_mutex); 648 + 649 + if (rc == -ERESTARTSYS) 650 + return restart_syscall(); 651 + else 652 + return rc; 653 + } 654 + 655 + /** 656 + * fcoe_transport_disable() - Disables a FCoE interface 657 + * @buffer: The name of the Ethernet interface to be disabled 658 + * @kp: The associated kernel parameter 659 + * 660 + * Called from sysfs. 661 + * 662 + * Returns: 0 for success 663 + */ 664 + static int fcoe_transport_disable(const char *buffer, struct kernel_param *kp) 665 + { 666 + int rc = -ENODEV; 667 + struct net_device *netdev = NULL; 668 + struct fcoe_transport *ft = NULL; 669 + 670 + if (!mutex_trylock(&ft_mutex)) 671 + return restart_syscall(); 672 + 673 + #ifdef CONFIG_LIBFCOE_MODULE 674 + /* 675 + * Make sure the module has been initialized, and is not about to be 676 + * removed. Module parameter sysfs files are writable before the 677 + * module_init function is called and after module_exit. 678 + */ 679 + if (THIS_MODULE->state != MODULE_STATE_LIVE) 680 + goto out_nodev; 681 + #endif 682 + 683 + netdev = fcoe_if_to_netdev(buffer); 684 + if (!netdev) 685 + goto out_nodev; 686 + 687 + ft = fcoe_netdev_map_lookup(netdev); 688 + if (!ft) 689 + goto out_putdev; 690 + 691 + rc = ft->disable ? ft->disable(netdev) : -ENODEV; 692 + 693 + out_putdev: 694 + dev_put(netdev); 695 + out_nodev: 696 + mutex_unlock(&ft_mutex); 697 + 698 + if (rc == -ERESTARTSYS) 699 + return restart_syscall(); 700 + else 701 + return rc; 702 + } 703 + 704 + /** 705 + * fcoe_transport_enable() - Enables a FCoE interface 706 + * @buffer: The name of the Ethernet interface to be enabled 707 + * @kp: The associated kernel parameter 708 + * 709 + * Called from sysfs. 710 + * 711 + * Returns: 0 for success 712 + */ 713 + static int fcoe_transport_enable(const char *buffer, struct kernel_param *kp) 714 + { 715 + int rc = -ENODEV; 716 + struct net_device *netdev = NULL; 717 + struct fcoe_transport *ft = NULL; 718 + 719 + if (!mutex_trylock(&ft_mutex)) 720 + return restart_syscall(); 721 + 722 + #ifdef CONFIG_LIBFCOE_MODULE 723 + /* 724 + * Make sure the module has been initialized, and is not about to be 725 + * removed. 
Module parameter sysfs files are writable before the 726 + * module_init function is called and after module_exit. 727 + */ 728 + if (THIS_MODULE->state != MODULE_STATE_LIVE) 729 + goto out_nodev; 730 + #endif 731 + 732 + netdev = fcoe_if_to_netdev(buffer); 733 + if (!netdev) 734 + goto out_nodev; 735 + 736 + ft = fcoe_netdev_map_lookup(netdev); 737 + if (!ft) 738 + goto out_putdev; 739 + 740 + rc = ft->enable ? ft->enable(netdev) : -ENODEV; 741 + 742 + out_putdev: 743 + dev_put(netdev); 744 + out_nodev: 745 + mutex_unlock(&ft_mutex); 746 + if (rc == -ERESTARTSYS) 747 + return restart_syscall(); 748 + else 749 + return rc; 750 + } 751 + 752 + /** 753 + * libfcoe_init() - Initialization routine for libfcoe.ko 754 + */ 755 + static int __init libfcoe_init(void) 756 + { 757 + fcoe_transport_init(); 758 + 759 + return 0; 760 + } 761 + module_init(libfcoe_init); 762 + 763 + /** 764 + * libfcoe_exit() - Tear down libfcoe.ko 765 + */ 766 + static void __exit libfcoe_exit(void) 767 + { 768 + fcoe_transport_exit(); 769 + } 770 + module_exit(libfcoe_exit);
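All five sysfs handlers above share one locking idiom: take ft_mutex with mutex_trylock() and, on contention, bounce the write back to userspace with restart_syscall() rather than sleeping, presumably to avoid lock-order problems against the RTNL taken inside the per-transport callbacks (which is also why those callbacks now return -ERESTARTSYS for translation here). A condensed sketch of the pattern, with example_store as a hypothetical stand-in for the create/destroy/enable/disable handlers:

    static int example_store(const char *buffer, struct kernel_param *kp)
    {
            int rc = -ENODEV;
            struct net_device *netdev;

            if (!mutex_trylock(&ft_mutex))
                    return restart_syscall();  /* retry the write, don't block */

            netdev = fcoe_if_to_netdev(buffer);
            if (!netdev)
                    goto out_nodev;

            /* ... fcoe_netdev_map_lookup() and delegate to the transport ... */

            dev_put(netdev);
    out_nodev:
            mutex_unlock(&ft_mutex);
            /* a callback that lost its own trylock reports -ERESTARTSYS */
            return rc == -ERESTARTSYS ? restart_syscall() : rc;
    }

With this file in place the user-visible knobs move with it; for example, writing an interface name to /sys/module/libfcoe/parameters/create now reaches whichever attached transport matches the netdev first.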
+7 -33
drivers/scsi/fcoe/{libfcoe.c => fcoe_ctlr.c}
··· 44 44 #include <scsi/libfc.h> 45 45 #include <scsi/libfcoe.h> 46 46 47 - MODULE_AUTHOR("Open-FCoE.org"); 48 - MODULE_DESCRIPTION("FIP discovery protocol support for FCoE HBAs"); 49 - MODULE_LICENSE("GPL v2"); 47 + #include "libfcoe.h" 50 48 51 49 #define FCOE_CTLR_MIN_FKA 500 /* min keep alive (mS) */ 52 50 #define FCOE_CTLR_DEF_FKA FIP_DEF_FKA /* default keep alive (mS) */ ··· 64 66 static u8 fcoe_all_vn2vn[ETH_ALEN] = FIP_ALL_VN2VN_MACS; 65 67 static u8 fcoe_all_p2p[ETH_ALEN] = FIP_ALL_P2P_MACS; 66 68 67 - unsigned int libfcoe_debug_logging; 68 - module_param_named(debug_logging, libfcoe_debug_logging, int, S_IRUGO|S_IWUSR); 69 - MODULE_PARM_DESC(debug_logging, "a bit mask of logging levels"); 70 - 71 - #define LIBFCOE_LOGGING 0x01 /* General logging, not categorized */ 72 - #define LIBFCOE_FIP_LOGGING 0x02 /* FIP logging */ 73 - 74 - #define LIBFCOE_CHECK_LOGGING(LEVEL, CMD) \ 75 - do { \ 76 - if (unlikely(libfcoe_debug_logging & LEVEL)) \ 77 - do { \ 78 - CMD; \ 79 - } while (0); \ 80 - } while (0) 81 - 82 - #define LIBFCOE_DBG(fmt, args...) \ 83 - LIBFCOE_CHECK_LOGGING(LIBFCOE_LOGGING, \ 84 - printk(KERN_INFO "libfcoe: " fmt, ##args);) 85 - 86 - #define LIBFCOE_FIP_DBG(fip, fmt, args...) \ 87 - LIBFCOE_CHECK_LOGGING(LIBFCOE_FIP_LOGGING, \ 88 - printk(KERN_INFO "host%d: fip: " fmt, \ 89 - (fip)->lp->host->host_no, ##args);) 90 - 91 - static const char *fcoe_ctlr_states[] = { 69 + static const char * const fcoe_ctlr_states[] = { 92 70 [FIP_ST_DISABLED] = "DISABLED", 93 71 [FIP_ST_LINK_WAIT] = "LINK_WAIT", 94 72 [FIP_ST_AUTO] = "AUTO", ··· 282 308 struct fip_mac_desc mac; 283 309 struct fip_wwn_desc wwnn; 284 310 struct fip_size_desc size; 285 - } __attribute__((packed)) desc; 286 - } __attribute__((packed)) *sol; 311 + } __packed desc; 312 + } __packed * sol; 287 313 u32 fcoe_size; 288 314 289 315 skb = dev_alloc_skb(sizeof(*sol)); ··· 430 456 struct ethhdr eth; 431 457 struct fip_header fip; 432 458 struct fip_mac_desc mac; 433 - } __attribute__((packed)) *kal; 459 + } __packed * kal; 434 460 struct fip_vn_desc *vn; 435 461 u32 len; 436 462 struct fc_lport *lp; ··· 501 527 struct ethhdr eth; 502 528 struct fip_header fip; 503 529 struct fip_encaps encaps; 504 - } __attribute__((packed)) *cap; 530 + } __packed * cap; 505 531 struct fc_frame_header *fh; 506 532 struct fip_mac_desc *mac; 507 533 struct fcoe_fcf *fcf; ··· 1793 1819 struct fip_mac_desc mac; 1794 1820 struct fip_wwn_desc wwnn; 1795 1821 struct fip_vn_desc vn; 1796 - } __attribute__((packed)) *frame; 1822 + } __packed * frame; 1797 1823 struct fip_fc4_feat *ff; 1798 1824 struct fip_size_desc *size; 1799 1825 u32 fcp_feat;
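Two mechanical cleanups ride along with the rename of libfcoe.c to fcoe_ctlr.c: the state-name table gains a second const (the array of pointers itself becomes read-only, not just the strings), and the open-coded attribute syntax on the on-the-wire FIP frame structs is replaced by the kernel's __packed shorthand. The spellings are equivalent; a minimal sketch, assuming the usual definition from the kernel's compiler headers:

    #define __packed __attribute__((packed))   /* kernel compiler-header shorthand */

    struct example_old { __u8 tag; __be32 value; } __attribute__((packed));
    struct example_new { __u8 tag; __be32 value; } __packed;  /* identical layout */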
+31
drivers/scsi/fcoe/libfcoe.h
··· 1 + #ifndef _FCOE_LIBFCOE_H_ 2 + #define _FCOE_LIBFCOE_H_ 3 + 4 + extern unsigned int libfcoe_debug_logging; 5 + #define LIBFCOE_LOGGING 0x01 /* General logging, not categorized */ 6 + #define LIBFCOE_FIP_LOGGING 0x02 /* FIP logging */ 7 + #define LIBFCOE_TRANSPORT_LOGGING 0x04 /* FCoE transport logging */ 8 + 9 + #define LIBFCOE_CHECK_LOGGING(LEVEL, CMD) \ 10 + do { \ 11 + if (unlikely(libfcoe_debug_logging & LEVEL)) \ 12 + do { \ 13 + CMD; \ 14 + } while (0); \ 15 + } while (0) 16 + 17 + #define LIBFCOE_DBG(fmt, args...) \ 18 + LIBFCOE_CHECK_LOGGING(LIBFCOE_LOGGING, \ 19 + printk(KERN_INFO "libfcoe: " fmt, ##args);) 20 + 21 + #define LIBFCOE_FIP_DBG(fip, fmt, args...) \ 22 + LIBFCOE_CHECK_LOGGING(LIBFCOE_FIP_LOGGING, \ 23 + printk(KERN_INFO "host%d: fip: " fmt, \ 24 + (fip)->lp->host->host_no, ##args);) 25 + 26 + #define LIBFCOE_TRANSPORT_DBG(fmt, args...) \ 27 + LIBFCOE_CHECK_LOGGING(LIBFCOE_TRANSPORT_LOGGING, \ 28 + printk(KERN_INFO "%s: " fmt, \ 29 + __func__, ##args);) 30 + 31 + #endif /* _FCOE_LIBFCOE_H_ */
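The new shared header centralizes the three debug classes behind the single debug_logging bitmask, so fcoe_ctlr.c and fcoe_transport.c compile against the same macros. Transport tracing is gated on bit 0x04; once that bit is set in the module parameter (e.g. via /sys/module/libfcoe/parameters/debug_logging), calls like this one from fcoe_transport_create() start printing:

    /* skipped at runtime unless LIBFCOE_TRANSPORT_LOGGING (0x04) is set */
    LIBFCOE_TRANSPORT_DBG("transport %s %s to create fcoe on %s.\n",
                          ft->name, rc ? "failed" : "succeeded",
                          netdev->name);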
+1 -1
drivers/scsi/fnic/fnic.h
··· 37 37 38 38 #define DRV_NAME "fnic" 39 39 #define DRV_DESCRIPTION "Cisco FCoE HBA Driver" 40 - #define DRV_VERSION "1.4.0.145" 40 + #define DRV_VERSION "1.5.0.1" 41 41 #define PFX DRV_NAME ": " 42 42 #define DFX DRV_NAME "%d: " 43 43
+1 -1
drivers/scsi/fnic/vnic_dev.c
··· 654 654 vdev->linkstatus_pa); 655 655 if (vdev->stats) 656 656 pci_free_consistent(vdev->pdev, 657 - sizeof(struct vnic_dev), 657 + sizeof(struct vnic_stats), 658 658 vdev->stats, vdev->stats_pa); 659 659 if (vdev->fw_info) 660 660 pci_free_consistent(vdev->pdev,
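The vnic_dev.c one-liner fixes a plain size mismatch: the stats buffer was being freed with the size of the wrong structure. pci_free_consistent() must be given the same size as the matching allocation; a sketch of the intended pairing (the allocation site is elsewhere in the file and is an assumption here, not shown in this hunk):

    /* allocation side, sketched for illustration: */
    vdev->stats = pci_alloc_consistent(vdev->pdev, sizeof(struct vnic_stats),
                                       &vdev->stats_pa);

    /* ... and the corrected free uses the identical size: */
    pci_free_consistent(vdev->pdev, sizeof(struct vnic_stats),
                        vdev->stats, vdev->stats_pa);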
+328 -255
drivers/scsi/hpsa.c
··· 74 74 module_param(hpsa_allow_any, int, S_IRUGO|S_IWUSR); 75 75 MODULE_PARM_DESC(hpsa_allow_any, 76 76 "Allow hpsa driver to access unknown HP Smart Array hardware"); 77 + static int hpsa_simple_mode; 78 + module_param(hpsa_simple_mode, int, S_IRUGO|S_IWUSR); 79 + MODULE_PARM_DESC(hpsa_simple_mode, 80 + "Use 'simple mode' rather than 'performant mode'"); 77 81 78 82 /* define the PCI info for the cards we can control */ 79 83 static const struct pci_device_id hpsa_pci_device_id[] = { ··· 89 85 {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x324a}, 90 86 {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x324b}, 91 87 {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x3233}, 92 - {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x3250}, 93 - {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x3251}, 94 - {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x3252}, 95 - {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x3253}, 96 - {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x3254}, 88 + {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSF, 0x103C, 0x3350}, 89 + {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSF, 0x103C, 0x3351}, 90 + {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSF, 0x103C, 0x3352}, 91 + {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSF, 0x103C, 0x3353}, 92 + {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSF, 0x103C, 0x3354}, 93 + {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSF, 0x103C, 0x3355}, 94 + {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSF, 0x103C, 0x3356}, 97 95 {PCI_VENDOR_ID_HP, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, 98 96 PCI_CLASS_STORAGE_RAID << 8, 0xffff << 8, 0}, 99 97 {0,} ··· 115 109 {0x3249103C, "Smart Array P812", &SA5_access}, 116 110 {0x324a103C, "Smart Array P712m", &SA5_access}, 117 111 {0x324b103C, "Smart Array P711m", &SA5_access}, 118 - {0x3250103C, "Smart Array", &SA5_access}, 119 - {0x3250113C, "Smart Array", &SA5_access}, 120 - {0x3250123C, "Smart Array", &SA5_access}, 121 - {0x3250133C, "Smart Array", &SA5_access}, 122 - {0x3250143C, "Smart Array", &SA5_access}, 112 + {0x3350103C, "Smart Array", &SA5_access}, 113 + {0x3351103C, "Smart Array", &SA5_access}, 114 + {0x3352103C, "Smart Array", &SA5_access}, 115 + {0x3353103C, "Smart Array", &SA5_access}, 116 + {0x3354103C, "Smart Array", &SA5_access}, 117 + {0x3355103C, "Smart Array", &SA5_access}, 118 + {0x3356103C, "Smart Array", &SA5_access}, 123 119 {0xFFFF103C, "Unknown Smart Array", &SA5_access}, 124 120 }; 125 121 ··· 155 147 static int hpsa_slave_alloc(struct scsi_device *sdev); 156 148 static void hpsa_slave_destroy(struct scsi_device *sdev); 157 149 158 - static ssize_t raid_level_show(struct device *dev, 159 - struct device_attribute *attr, char *buf); 160 - static ssize_t lunid_show(struct device *dev, 161 - struct device_attribute *attr, char *buf); 162 - static ssize_t unique_id_show(struct device *dev, 163 - struct device_attribute *attr, char *buf); 164 - static ssize_t host_show_firmware_revision(struct device *dev, 165 - struct device_attribute *attr, char *buf); 166 150 static void hpsa_update_scsi_devices(struct ctlr_info *h, int hostno); 167 - static ssize_t host_store_rescan(struct device *dev, 168 - struct device_attribute *attr, const char *buf, size_t count); 169 151 static int check_for_unit_attention(struct ctlr_info *h, 170 152 struct CommandList *c); 171 153 static void check_ioctl_unit_attention(struct ctlr_info *h, ··· 171 173 static int __devinit hpsa_pci_find_memory_BAR(struct pci_dev *pdev, 172 174 unsigned long *memory_bar); 173 175 static int __devinit hpsa_lookup_board_id(struct pci_dev *pdev, 
u32 *board_id); 174 - 175 - static DEVICE_ATTR(raid_level, S_IRUGO, raid_level_show, NULL); 176 - static DEVICE_ATTR(lunid, S_IRUGO, lunid_show, NULL); 177 - static DEVICE_ATTR(unique_id, S_IRUGO, unique_id_show, NULL); 178 - static DEVICE_ATTR(rescan, S_IWUSR, NULL, host_store_rescan); 179 - static DEVICE_ATTR(firmware_revision, S_IRUGO, 180 - host_show_firmware_revision, NULL); 181 - 182 - static struct device_attribute *hpsa_sdev_attrs[] = { 183 - &dev_attr_raid_level, 184 - &dev_attr_lunid, 185 - &dev_attr_unique_id, 186 - NULL, 187 - }; 188 - 189 - static struct device_attribute *hpsa_shost_attrs[] = { 190 - &dev_attr_rescan, 191 - &dev_attr_firmware_revision, 192 - NULL, 193 - }; 194 - 195 - static struct scsi_host_template hpsa_driver_template = { 196 - .module = THIS_MODULE, 197 - .name = "hpsa", 198 - .proc_name = "hpsa", 199 - .queuecommand = hpsa_scsi_queue_command, 200 - .scan_start = hpsa_scan_start, 201 - .scan_finished = hpsa_scan_finished, 202 - .change_queue_depth = hpsa_change_queue_depth, 203 - .this_id = -1, 204 - .use_clustering = ENABLE_CLUSTERING, 205 - .eh_device_reset_handler = hpsa_eh_device_reset_handler, 206 - .ioctl = hpsa_ioctl, 207 - .slave_alloc = hpsa_slave_alloc, 208 - .slave_destroy = hpsa_slave_destroy, 209 - #ifdef CONFIG_COMPAT 210 - .compat_ioctl = hpsa_compat_ioctl, 211 - #endif 212 - .sdev_attrs = hpsa_sdev_attrs, 213 - .shost_attrs = hpsa_shost_attrs, 214 - }; 176 + static int __devinit hpsa_wait_for_board_state(struct pci_dev *pdev, 177 + void __iomem *vaddr, int wait_for_ready); 178 + #define BOARD_NOT_READY 0 179 + #define BOARD_READY 1 215 180 216 181 static inline struct ctlr_info *sdev_to_hba(struct scsi_device *sdev) 217 182 { ··· 252 291 fwrev[0], fwrev[1], fwrev[2], fwrev[3]); 253 292 } 254 293 255 - /* Enqueuing and dequeuing functions for cmdlists. */ 256 - static inline void addQ(struct hlist_head *list, struct CommandList *c) 294 + static ssize_t host_show_commands_outstanding(struct device *dev, 295 + struct device_attribute *attr, char *buf) 257 296 { 258 - hlist_add_head(&c->list, list); 297 + struct Scsi_Host *shost = class_to_shost(dev); 298 + struct ctlr_info *h = shost_to_hba(shost); 299 + 300 + return snprintf(buf, 20, "%d\n", h->commands_outstanding); 259 301 } 260 302 261 - static inline u32 next_command(struct ctlr_info *h) 303 + static ssize_t host_show_transport_mode(struct device *dev, 304 + struct device_attribute *attr, char *buf) 262 305 { 263 - u32 a; 306 + struct ctlr_info *h; 307 + struct Scsi_Host *shost = class_to_shost(dev); 264 308 265 - if (unlikely(h->transMethod != CFGTBL_Trans_Performant)) 266 - return h->access.command_completed(h); 267 - 268 - if ((*(h->reply_pool_head) & 1) == (h->reply_pool_wraparound)) { 269 - a = *(h->reply_pool_head); /* Next cmd in ring buffer */ 270 - (h->reply_pool_head)++; 271 - h->commands_outstanding--; 272 - } else { 273 - a = FIFO_EMPTY; 274 - } 275 - /* Check for wraparound */ 276 - if (h->reply_pool_head == (h->reply_pool + h->max_commands)) { 277 - h->reply_pool_head = h->reply_pool; 278 - h->reply_pool_wraparound ^= 1; 279 - } 280 - return a; 309 + h = shost_to_hba(shost); 310 + return snprintf(buf, 20, "%s\n", 311 + h->transMethod & CFGTBL_Trans_Performant ? 
312 + "performant" : "simple"); 281 313 } 282 314 283 - /* set_performant_mode: Modify the tag for cciss performant 284 - * set bit 0 for pull model, bits 3-1 for block fetch 285 - * register number 286 - */ 287 - static void set_performant_mode(struct ctlr_info *h, struct CommandList *c) 315 + /* List of controllers which cannot be reset on kexec with reset_devices */ 316 + static u32 unresettable_controller[] = { 317 + 0x324a103C, /* Smart Array P712m */ 318 + 0x324b103C, /* SmartArray P711m */ 319 + 0x3223103C, /* Smart Array P800 */ 320 + 0x3234103C, /* Smart Array P400 */ 321 + 0x3235103C, /* Smart Array P400i */ 322 + 0x3211103C, /* Smart Array E200i */ 323 + 0x3212103C, /* Smart Array E200 */ 324 + 0x3213103C, /* Smart Array E200i */ 325 + 0x3214103C, /* Smart Array E200i */ 326 + 0x3215103C, /* Smart Array E200i */ 327 + 0x3237103C, /* Smart Array E500 */ 328 + 0x323D103C, /* Smart Array P700m */ 329 + 0x409C0E11, /* Smart Array 6400 */ 330 + 0x409D0E11, /* Smart Array 6400 EM */ 331 + }; 332 + 333 + static int ctlr_is_resettable(struct ctlr_info *h) 288 334 { 289 - if (likely(h->transMethod == CFGTBL_Trans_Performant)) 290 - c->busaddr |= 1 | (h->blockFetchTable[c->Header.SGList] << 1); 335 + int i; 336 + 337 + for (i = 0; i < ARRAY_SIZE(unresettable_controller); i++) 338 + if (unresettable_controller[i] == h->board_id) 339 + return 0; 340 + return 1; 291 341 } 292 342 293 - static void enqueue_cmd_and_start_io(struct ctlr_info *h, 294 - struct CommandList *c) 343 + static ssize_t host_show_resettable(struct device *dev, 344 + struct device_attribute *attr, char *buf) 295 345 { 296 - unsigned long flags; 346 + struct ctlr_info *h; 347 + struct Scsi_Host *shost = class_to_shost(dev); 297 348 298 - set_performant_mode(h, c); 299 - spin_lock_irqsave(&h->lock, flags); 300 - addQ(&h->reqQ, c); 301 - h->Qdepth++; 302 - start_io(h); 303 - spin_unlock_irqrestore(&h->lock, flags); 304 - } 305 - 306 - static inline void removeQ(struct CommandList *c) 307 - { 308 - if (WARN_ON(hlist_unhashed(&c->list))) 309 - return; 310 - hlist_del_init(&c->list); 311 - } 312 - 313 - static inline int is_hba_lunid(unsigned char scsi3addr[]) 314 - { 315 - return memcmp(scsi3addr, RAID_CTLR_LUNID, 8) == 0; 349 + h = shost_to_hba(shost); 350 + return snprintf(buf, 20, "%d\n", ctlr_is_resettable(h)); 316 351 } 317 352 318 353 static inline int is_logical_dev_addr_mode(unsigned char scsi3addr[]) 319 354 { 320 355 return (scsi3addr[3] & 0xC0) == 0x40; 321 - } 322 - 323 - static inline int is_scsi_rev_5(struct ctlr_info *h) 324 - { 325 - if (!h->hba_inquiry_data) 326 - return 0; 327 - if ((h->hba_inquiry_data[2] & 0x07) == 5) 328 - return 1; 329 - return 0; 330 356 } 331 357 332 358 static const char *raid_label[] = { "0", "4", "1(1+0)", "5", "5+1", "ADG", ··· 405 457 sn[4], sn[5], sn[6], sn[7], 406 458 sn[8], sn[9], sn[10], sn[11], 407 459 sn[12], sn[13], sn[14], sn[15]); 460 + } 461 + 462 + static DEVICE_ATTR(raid_level, S_IRUGO, raid_level_show, NULL); 463 + static DEVICE_ATTR(lunid, S_IRUGO, lunid_show, NULL); 464 + static DEVICE_ATTR(unique_id, S_IRUGO, unique_id_show, NULL); 465 + static DEVICE_ATTR(rescan, S_IWUSR, NULL, host_store_rescan); 466 + static DEVICE_ATTR(firmware_revision, S_IRUGO, 467 + host_show_firmware_revision, NULL); 468 + static DEVICE_ATTR(commands_outstanding, S_IRUGO, 469 + host_show_commands_outstanding, NULL); 470 + static DEVICE_ATTR(transport_mode, S_IRUGO, 471 + host_show_transport_mode, NULL); 472 + static DEVICE_ATTR(resettable, S_IRUGO, 473 + host_show_resettable, NULL); 474 + 
475 + static struct device_attribute *hpsa_sdev_attrs[] = { 476 + &dev_attr_raid_level, 477 + &dev_attr_lunid, 478 + &dev_attr_unique_id, 479 + NULL, 480 + }; 481 + 482 + static struct device_attribute *hpsa_shost_attrs[] = { 483 + &dev_attr_rescan, 484 + &dev_attr_firmware_revision, 485 + &dev_attr_commands_outstanding, 486 + &dev_attr_transport_mode, 487 + &dev_attr_resettable, 488 + NULL, 489 + }; 490 + 491 + static struct scsi_host_template hpsa_driver_template = { 492 + .module = THIS_MODULE, 493 + .name = "hpsa", 494 + .proc_name = "hpsa", 495 + .queuecommand = hpsa_scsi_queue_command, 496 + .scan_start = hpsa_scan_start, 497 + .scan_finished = hpsa_scan_finished, 498 + .change_queue_depth = hpsa_change_queue_depth, 499 + .this_id = -1, 500 + .use_clustering = ENABLE_CLUSTERING, 501 + .eh_device_reset_handler = hpsa_eh_device_reset_handler, 502 + .ioctl = hpsa_ioctl, 503 + .slave_alloc = hpsa_slave_alloc, 504 + .slave_destroy = hpsa_slave_destroy, 505 + #ifdef CONFIG_COMPAT 506 + .compat_ioctl = hpsa_compat_ioctl, 507 + #endif 508 + .sdev_attrs = hpsa_sdev_attrs, 509 + .shost_attrs = hpsa_shost_attrs, 510 + }; 511 + 512 + 513 + /* Enqueuing and dequeuing functions for cmdlists. */ 514 + static inline void addQ(struct list_head *list, struct CommandList *c) 515 + { 516 + list_add_tail(&c->list, list); 517 + } 518 + 519 + static inline u32 next_command(struct ctlr_info *h) 520 + { 521 + u32 a; 522 + 523 + if (unlikely(!(h->transMethod & CFGTBL_Trans_Performant))) 524 + return h->access.command_completed(h); 525 + 526 + if ((*(h->reply_pool_head) & 1) == (h->reply_pool_wraparound)) { 527 + a = *(h->reply_pool_head); /* Next cmd in ring buffer */ 528 + (h->reply_pool_head)++; 529 + h->commands_outstanding--; 530 + } else { 531 + a = FIFO_EMPTY; 532 + } 533 + /* Check for wraparound */ 534 + if (h->reply_pool_head == (h->reply_pool + h->max_commands)) { 535 + h->reply_pool_head = h->reply_pool; 536 + h->reply_pool_wraparound ^= 1; 537 + } 538 + return a; 539 + } 540 + 541 + /* set_performant_mode: Modify the tag for cciss performant 542 + * set bit 0 for pull model, bits 3-1 for block fetch 543 + * register number 544 + */ 545 + static void set_performant_mode(struct ctlr_info *h, struct CommandList *c) 546 + { 547 + if (likely(h->transMethod & CFGTBL_Trans_Performant)) 548 + c->busaddr |= 1 | (h->blockFetchTable[c->Header.SGList] << 1); 549 + } 550 + 551 + static void enqueue_cmd_and_start_io(struct ctlr_info *h, 552 + struct CommandList *c) 553 + { 554 + unsigned long flags; 555 + 556 + set_performant_mode(h, c); 557 + spin_lock_irqsave(&h->lock, flags); 558 + addQ(&h->reqQ, c); 559 + h->Qdepth++; 560 + start_io(h); 561 + spin_unlock_irqrestore(&h->lock, flags); 562 + } 563 + 564 + static inline void removeQ(struct CommandList *c) 565 + { 566 + if (WARN_ON(list_empty(&c->list))) 567 + return; 568 + list_del_init(&c->list); 569 + } 570 + 571 + static inline int is_hba_lunid(unsigned char scsi3addr[]) 572 + { 573 + return memcmp(scsi3addr, RAID_CTLR_LUNID, 8) == 0; 574 + } 575 + 576 + static inline int is_scsi_rev_5(struct ctlr_info *h) 577 + { 578 + if (!h->hba_inquiry_data) 579 + return 0; 580 + if ((h->hba_inquiry_data[2] & 0x07) == 5) 581 + return 1; 582 + return 0; 408 583 } 409 584 410 585 static int hpsa_find_target_lun(struct ctlr_info *h, ··· 1201 1130 cmd->result = DID_TIME_OUT << 16; 1202 1131 dev_warn(&h->pdev->dev, "cp %p timedout\n", cp); 1203 1132 break; 1133 + case CMD_UNABORTABLE: 1134 + cmd->result = DID_ERROR << 16; 1135 + dev_warn(&h->pdev->dev, "Command 
unabortable\n"); 1136 + break; 1204 1137 default: 1205 1138 cmd->result = DID_ERROR << 16; 1206 1139 dev_warn(&h->pdev->dev, "cp %p returned unknown status %x\n", ··· 1235 1160 sh->sg_tablesize = h->maxsgentries; 1236 1161 h->scsi_host = sh; 1237 1162 sh->hostdata[0] = (unsigned long) h; 1238 - sh->irq = h->intr[PERF_MODE_INT]; 1163 + sh->irq = h->intr[h->intr_mode]; 1239 1164 sh->unique_id = sh->irq; 1240 1165 error = scsi_add_host(sh, &h->pdev->dev); 1241 1166 if (error) ··· 1369 1294 break; 1370 1295 case CMD_TIMEOUT: 1371 1296 dev_warn(d, "cp %p timed out\n", cp); 1297 + break; 1298 + case CMD_UNABORTABLE: 1299 + dev_warn(d, "Command unabortable\n"); 1372 1300 break; 1373 1301 default: 1374 1302 dev_warn(d, "cp %p returned unknown status %x\n", cp, ··· 1673 1595 if (lun == 0) /* if lun is 0, then obviously we have a lun 0. */ 1674 1596 return 0; 1675 1597 1598 + memset(scsi3addr, 0, 8); 1599 + scsi3addr[3] = target; 1676 1600 if (is_hba_lunid(scsi3addr)) 1677 1601 return 0; /* Don't add the RAID controller here. */ 1678 1602 ··· 1689 1609 return 0; 1690 1610 } 1691 1611 1692 - memset(scsi3addr, 0, 8); 1693 - scsi3addr[3] = target; 1694 1612 if (hpsa_update_device_info(h, scsi3addr, this_device)) 1695 1613 return 0; 1696 1614 (*nmsa2xxx_enclosures)++; ··· 2277 2199 2278 2200 c->cmdindex = i; 2279 2201 2280 - INIT_HLIST_NODE(&c->list); 2202 + INIT_LIST_HEAD(&c->list); 2281 2203 c->busaddr = (u32) cmd_dma_handle; 2282 2204 temp64.val = (u64) err_dma_handle; 2283 2205 c->ErrDesc.Addr.lower = temp64.val32.lower; ··· 2315 2237 } 2316 2238 memset(c->err_info, 0, sizeof(*c->err_info)); 2317 2239 2318 - INIT_HLIST_NODE(&c->list); 2240 + INIT_LIST_HEAD(&c->list); 2319 2241 c->busaddr = (u32) cmd_dma_handle; 2320 2242 temp64.val = (u64) err_dma_handle; 2321 2243 c->ErrDesc.Addr.lower = temp64.val32.lower; ··· 2345 2267 pci_free_consistent(h->pdev, sizeof(*c->err_info), 2346 2268 c->err_info, (dma_addr_t) temp64.val); 2347 2269 pci_free_consistent(h->pdev, sizeof(*c), 2348 - c, (dma_addr_t) c->busaddr); 2270 + c, (dma_addr_t) (c->busaddr & DIRECT_LOOKUP_MASK)); 2349 2271 } 2350 2272 2351 2273 #ifdef CONFIG_COMPAT ··· 2359 2281 int err; 2360 2282 u32 cp; 2361 2283 2284 + memset(&arg64, 0, sizeof(arg64)); 2362 2285 err = 0; 2363 2286 err |= copy_from_user(&arg64.LUN_info, &arg32->LUN_info, 2364 2287 sizeof(arg64.LUN_info)); ··· 2396 2317 int err; 2397 2318 u32 cp; 2398 2319 2320 + memset(&arg64, 0, sizeof(arg64)); 2399 2321 err = 0; 2400 2322 err |= copy_from_user(&arg64.LUN_info, &arg32->LUN_info, 2401 2323 sizeof(arg64.LUN_info)); ··· 2513 2433 buff = kmalloc(iocommand.buf_size, GFP_KERNEL); 2514 2434 if (buff == NULL) 2515 2435 return -EFAULT; 2516 - } 2517 - if (iocommand.Request.Type.Direction == XFER_WRITE) { 2518 - /* Copy the data into the buffer we created */ 2519 - if (copy_from_user(buff, iocommand.buf, iocommand.buf_size)) { 2520 - kfree(buff); 2521 - return -EFAULT; 2436 + if (iocommand.Request.Type.Direction == XFER_WRITE) { 2437 + /* Copy the data into the buffer we created */ 2438 + if (copy_from_user(buff, iocommand.buf, 2439 + iocommand.buf_size)) { 2440 + kfree(buff); 2441 + return -EFAULT; 2442 + } 2443 + } else { 2444 + memset(buff, 0, iocommand.buf_size); 2522 2445 } 2523 - } else 2524 - memset(buff, 0, iocommand.buf_size); 2446 + } 2525 2447 c = cmd_special_alloc(h); 2526 2448 if (c == NULL) { 2527 2449 kfree(buff); ··· 2569 2487 cmd_special_free(h, c); 2570 2488 return -EFAULT; 2571 2489 } 2572 - 2573 - if (iocommand.Request.Type.Direction == XFER_READ) { 2490 + if 
(iocommand.Request.Type.Direction == XFER_READ && 2491 + iocommand.buf_size > 0) { 2574 2492 /* Copy the data out of the buffer we created */ 2575 2493 if (copy_to_user(iocommand.buf, buff, iocommand.buf_size)) { 2576 2494 kfree(buff); ··· 2663 2581 } 2664 2582 c->cmd_type = CMD_IOCTL_PEND; 2665 2583 c->Header.ReplyQueue = 0; 2666 - 2667 - if (ioc->buf_size > 0) { 2668 - c->Header.SGList = sg_used; 2669 - c->Header.SGTotal = sg_used; 2670 - } else { 2671 - c->Header.SGList = 0; 2672 - c->Header.SGTotal = 0; 2673 - } 2584 + c->Header.SGList = c->Header.SGTotal = sg_used; 2674 2585 memcpy(&c->Header.LUN, &ioc->LUN_info, sizeof(c->Header.LUN)); 2675 2586 c->Header.Tag.lower = c->busaddr; 2676 2587 memcpy(&c->Request, &ioc->Request, sizeof(c->Request)); ··· 2680 2605 } 2681 2606 } 2682 2607 hpsa_scsi_do_simple_cmd_core(h, c); 2683 - hpsa_pci_unmap(h->pdev, c, sg_used, PCI_DMA_BIDIRECTIONAL); 2608 + if (sg_used) 2609 + hpsa_pci_unmap(h->pdev, c, sg_used, PCI_DMA_BIDIRECTIONAL); 2684 2610 check_ioctl_unit_attention(h, c); 2685 2611 /* Copy the error information out */ 2686 2612 memcpy(&ioc->error_info, c->err_info, sizeof(ioc->error_info)); ··· 2690 2614 status = -EFAULT; 2691 2615 goto cleanup1; 2692 2616 } 2693 - if (ioc->Request.Type.Direction == XFER_READ) { 2617 + if (ioc->Request.Type.Direction == XFER_READ && ioc->buf_size > 0) { 2694 2618 /* Copy the data out of the buffer we created */ 2695 2619 BYTE __user *ptr = ioc->buf; 2696 2620 for (i = 0; i < sg_used; i++) { ··· 2886 2810 { 2887 2811 struct CommandList *c; 2888 2812 2889 - while (!hlist_empty(&h->reqQ)) { 2890 - c = hlist_entry(h->reqQ.first, struct CommandList, list); 2813 + while (!list_empty(&h->reqQ)) { 2814 + c = list_entry(h->reqQ.next, struct CommandList, list); 2891 2815 /* can't do anything if fifo is full */ 2892 2816 if ((h->access.fifo_full(h))) { 2893 2817 dev_warn(&h->pdev->dev, "fifo full\n"); ··· 2943 2867 2944 2868 static inline u32 hpsa_tag_contains_index(u32 tag) 2945 2869 { 2946 - #define DIRECT_LOOKUP_BIT 0x10 2947 2870 return tag & DIRECT_LOOKUP_BIT; 2948 2871 } 2949 2872 2950 2873 static inline u32 hpsa_tag_to_index(u32 tag) 2951 2874 { 2952 - #define DIRECT_LOOKUP_SHIFT 5 2953 2875 return tag >> DIRECT_LOOKUP_SHIFT; 2954 2876 } 2955 2877 2956 - static inline u32 hpsa_tag_discard_error_bits(u32 tag) 2878 + 2879 + static inline u32 hpsa_tag_discard_error_bits(struct ctlr_info *h, u32 tag) 2957 2880 { 2958 - #define HPSA_ERROR_BITS 0x03 2959 - return tag & ~HPSA_ERROR_BITS; 2881 + #define HPSA_PERF_ERROR_BITS ((1 << DIRECT_LOOKUP_SHIFT) - 1) 2882 + #define HPSA_SIMPLE_ERROR_BITS 0x03 2883 + if (unlikely(!(h->transMethod & CFGTBL_Trans_Performant))) 2884 + return tag & ~HPSA_SIMPLE_ERROR_BITS; 2885 + return tag & ~HPSA_PERF_ERROR_BITS; 2960 2886 } 2961 2887 2962 2888 /* process completion of an indexed ("direct lookup") command */ ··· 2982 2904 { 2983 2905 u32 tag; 2984 2906 struct CommandList *c = NULL; 2985 - struct hlist_node *tmp; 2986 2907 2987 - tag = hpsa_tag_discard_error_bits(raw_tag); 2988 - hlist_for_each_entry(c, tmp, &h->cmpQ, list) { 2908 + tag = hpsa_tag_discard_error_bits(h, raw_tag); 2909 + list_for_each_entry(c, &h->cmpQ, list) { 2989 2910 if ((c->busaddr & 0xFFFFFFE0) == (tag & 0xFFFFFFE0)) { 2990 2911 finish_cmd(c, raw_tag); 2991 2912 return next_command(h); ··· 3034 2957 return IRQ_HANDLED; 3035 2958 } 3036 2959 3037 - /* Send a message CDB to the firmware. */ 2960 + /* Send a message CDB to the firmware. 
Careful, this only works 2961 + * in simple mode, not performant mode due to the tag lookup. 2962 + * We only ever use this immediately after a controller reset. 2963 + */ 3038 2964 static __devinit int hpsa_message(struct pci_dev *pdev, unsigned char opcode, 3039 2965 unsigned char type) 3040 2966 { ··· 3103 3023 3104 3024 for (i = 0; i < HPSA_MSG_SEND_RETRY_LIMIT; i++) { 3105 3025 tag = readl(vaddr + SA5_REPLY_PORT_OFFSET); 3106 - if (hpsa_tag_discard_error_bits(tag) == paddr32) 3026 + if ((tag & ~HPSA_SIMPLE_ERROR_BITS) == paddr32) 3107 3027 break; 3108 3028 msleep(HPSA_MSG_SEND_RETRY_INTERVAL_MSECS); 3109 3029 } ··· 3134 3054 3135 3055 #define hpsa_soft_reset_controller(p) hpsa_message(p, 1, 0) 3136 3056 #define hpsa_noop(p) hpsa_message(p, 3, 0) 3137 - 3138 - static __devinit int hpsa_reset_msi(struct pci_dev *pdev) 3139 - { 3140 - /* the #defines are stolen from drivers/pci/msi.h. */ 3141 - #define msi_control_reg(base) (base + PCI_MSI_FLAGS) 3142 - #define PCI_MSIX_FLAGS_ENABLE (1 << 15) 3143 - 3144 - int pos; 3145 - u16 control = 0; 3146 - 3147 - pos = pci_find_capability(pdev, PCI_CAP_ID_MSI); 3148 - if (pos) { 3149 - pci_read_config_word(pdev, msi_control_reg(pos), &control); 3150 - if (control & PCI_MSI_FLAGS_ENABLE) { 3151 - dev_info(&pdev->dev, "resetting MSI\n"); 3152 - pci_write_config_word(pdev, msi_control_reg(pos), 3153 - control & ~PCI_MSI_FLAGS_ENABLE); 3154 - } 3155 - } 3156 - 3157 - pos = pci_find_capability(pdev, PCI_CAP_ID_MSIX); 3158 - if (pos) { 3159 - pci_read_config_word(pdev, msi_control_reg(pos), &control); 3160 - if (control & PCI_MSIX_FLAGS_ENABLE) { 3161 - dev_info(&pdev->dev, "resetting MSI-X\n"); 3162 - pci_write_config_word(pdev, msi_control_reg(pos), 3163 - control & ~PCI_MSIX_FLAGS_ENABLE); 3164 - } 3165 - } 3166 - 3167 - return 0; 3168 - } 3169 3057 3170 3058 static int hpsa_controller_hard_reset(struct pci_dev *pdev, 3171 3059 void * __iomem vaddr, bool use_doorbell) ··· 3190 3142 */ 3191 3143 static __devinit int hpsa_kdump_hard_reset_controller(struct pci_dev *pdev) 3192 3144 { 3193 - u16 saved_config_space[32]; 3194 3145 u64 cfg_offset; 3195 3146 u32 cfg_base_addr; 3196 3147 u64 cfg_base_addr_index; 3197 3148 void __iomem *vaddr; 3198 3149 unsigned long paddr; 3199 3150 u32 misc_fw_support, active_transport; 3200 - int rc, i; 3151 + int rc; 3201 3152 struct CfgTable __iomem *cfgtable; 3202 3153 bool use_doorbell; 3203 3154 u32 board_id; 3155 + u16 command_register; 3204 3156 3205 3157 /* For controllers as old as the P600, this is very nearly 3206 3158 * the same thing as ··· 3209 3161 * pci_set_power_state(pci_dev, PCI_D3hot); 3210 3162 * pci_set_power_state(pci_dev, PCI_D0); 3211 3163 * pci_restore_state(pci_dev); 3212 - * 3213 - * but we can't use these nice canned kernel routines on 3214 - * kexec, because they also check the MSI/MSI-X state in PCI 3215 - * configuration space and do the wrong thing when it is 3216 - * set/cleared. Also, the pci_save/restore_state functions 3217 - * violate the ordering requirements for restoring the 3218 - * configuration space from the CCISS document (see the 3219 - * comment below). So we roll our own .... 3220 3164 * 3221 3165 * For controllers newer than the P600, the pci power state 3222 3166 * method of resetting doesn't work so we have another way ··· 3222 3182 * likely not be happy. Just forbid resetting this conjoined mess. 3223 3183 * The 640x isn't really supported by hpsa anyway. 
3224 3184 */ 3225 - hpsa_lookup_board_id(pdev, &board_id); 3185 + rc = hpsa_lookup_board_id(pdev, &board_id); 3186 + if (rc < 0) { 3187 + dev_warn(&pdev->dev, "Not resetting device.\n"); 3188 + return -ENODEV; 3189 + } 3226 3190 if (board_id == 0x409C0E11 || board_id == 0x409D0E11) 3227 3191 return -ENOTSUPP; 3228 3192 3229 - for (i = 0; i < 32; i++) 3230 - pci_read_config_word(pdev, 2*i, &saved_config_space[i]); 3231 - 3193 + /* Save the PCI command register */ 3194 + pci_read_config_word(pdev, 4, &command_register); 3195 + /* Turn the board off. This is so that later pci_restore_state() 3196 + * won't turn the board on before the rest of config space is ready. 3197 + */ 3198 + pci_disable_device(pdev); 3199 + pci_save_state(pdev); 3232 3200 3233 3201 /* find the first memory BAR, so we can find the cfg table */ 3234 3202 rc = hpsa_pci_find_memory_BAR(pdev, &paddr); ··· 3262 3214 misc_fw_support = readl(&cfgtable->misc_fw_support); 3263 3215 use_doorbell = misc_fw_support & MISC_FW_DOORBELL_RESET; 3264 3216 3265 - /* The doorbell reset seems to cause lockups on some Smart 3266 - * Arrays (e.g. P410, P410i, maybe others). Until this is 3267 - * fixed or at least isolated, avoid the doorbell reset. 3268 - */ 3269 - use_doorbell = 0; 3270 - 3271 3217 rc = hpsa_controller_hard_reset(pdev, vaddr, use_doorbell); 3272 3218 if (rc) 3273 3219 goto unmap_cfgtable; 3274 3220 3275 - /* Restore the PCI configuration space. The Open CISS 3276 - * Specification says, "Restore the PCI Configuration 3277 - * Registers, offsets 00h through 60h. It is important to 3278 - * restore the command register, 16-bits at offset 04h, 3279 - * last. Do not restore the configuration status register, 3280 - * 16-bits at offset 06h." Note that the offset is 2*i. 3281 - */ 3282 - for (i = 0; i < 32; i++) { 3283 - if (i == 2 || i == 3) 3284 - continue; 3285 - pci_write_config_word(pdev, 2*i, saved_config_space[i]); 3221 + pci_restore_state(pdev); 3222 + rc = pci_enable_device(pdev); 3223 + if (rc) { 3224 + dev_warn(&pdev->dev, "failed to enable device.\n"); 3225 + goto unmap_cfgtable; 3286 3226 } 3287 - wmb(); 3288 - pci_write_config_word(pdev, 4, saved_config_space[2]); 3227 + pci_write_config_word(pdev, 4, command_register); 3289 3228 3290 3229 /* Some devices (notably the HP Smart Array 5i Controller) 3291 3230 need a little pause here */ 3292 3231 msleep(HPSA_POST_RESET_PAUSE_MSECS); 3293 3232 3233 + /* Wait for board to become not ready, then ready. */ 3234 + dev_info(&pdev->dev, "Waiting for board to become ready.\n"); 3235 + rc = hpsa_wait_for_board_state(pdev, vaddr, BOARD_NOT_READY); 3236 + if (rc) 3237 + dev_warn(&pdev->dev, 3238 + "failed waiting for board to become not ready\n"); 3239 + rc = hpsa_wait_for_board_state(pdev, vaddr, BOARD_READY); 3240 + if (rc) { 3241 + dev_warn(&pdev->dev, 3242 + "failed waiting for board to become ready\n"); 3243 + goto unmap_cfgtable; 3244 + } 3245 + dev_info(&pdev->dev, "board ready.\n"); 3246 + 3294 3247 /* Controller should be in simple mode at this point. If it's not, 3295 3248 * It means we're on one of those controllers which doesn't support 3296 3249 * the doorbell reset method and on which the PCI power management reset 3297 3250 * method doesn't work (P800, for example.) 3298 - * In those cases, pretend the reset worked and hope for the best. 3251 + * In those cases, don't try to proceed, as it generally doesn't work. 
3299 3252 */ 3300 3253 active_transport = readl(&cfgtable->TransportActive); 3301 3254 if (active_transport & PERFORMANT_MODE) { 3302 3255 dev_warn(&pdev->dev, "Unable to successfully reset controller," 3303 - " proceeding anyway.\n"); 3304 - rc = -ENOTSUPP; 3256 + " Ignoring controller.\n"); 3257 + rc = -ENODEV; 3305 3258 } 3306 3259 3307 3260 unmap_cfgtable: ··· 3435 3386 default_int_mode: 3436 3387 #endif /* CONFIG_PCI_MSI */ 3437 3388 /* if we get here we're going to use the default interrupt mode */ 3438 - h->intr[PERF_MODE_INT] = h->pdev->irq; 3389 + h->intr[h->intr_mode] = h->pdev->irq; 3439 3390 } 3440 3391 3441 3392 static int __devinit hpsa_lookup_board_id(struct pci_dev *pdev, u32 *board_id) ··· 3487 3438 return -ENODEV; 3488 3439 } 3489 3440 3490 - static int __devinit hpsa_wait_for_board_ready(struct ctlr_info *h) 3441 + static int __devinit hpsa_wait_for_board_state(struct pci_dev *pdev, 3442 + void __iomem *vaddr, int wait_for_ready) 3491 3443 { 3492 - int i; 3444 + int i, iterations; 3493 3445 u32 scratchpad; 3446 + if (wait_for_ready) 3447 + iterations = HPSA_BOARD_READY_ITERATIONS; 3448 + else 3449 + iterations = HPSA_BOARD_NOT_READY_ITERATIONS; 3494 3450 3495 - for (i = 0; i < HPSA_BOARD_READY_ITERATIONS; i++) { 3496 - scratchpad = readl(h->vaddr + SA5_SCRATCHPAD_OFFSET); 3497 - if (scratchpad == HPSA_FIRMWARE_READY) 3498 - return 0; 3451 + for (i = 0; i < iterations; i++) { 3452 + scratchpad = readl(vaddr + SA5_SCRATCHPAD_OFFSET); 3453 + if (wait_for_ready) { 3454 + if (scratchpad == HPSA_FIRMWARE_READY) 3455 + return 0; 3456 + } else { 3457 + if (scratchpad != HPSA_FIRMWARE_READY) 3458 + return 0; 3459 + } 3499 3460 msleep(HPSA_BOARD_READY_POLL_INTERVAL_MSECS); 3500 3461 } 3501 - dev_warn(&h->pdev->dev, "board not ready, timed out.\n"); 3462 + dev_warn(&pdev->dev, "board not ready, timed out.\n"); 3502 3463 return -ENODEV; 3503 3464 } 3504 3465 ··· 3556 3497 static void __devinit hpsa_get_max_perf_mode_cmds(struct ctlr_info *h) 3557 3498 { 3558 3499 h->max_commands = readl(&(h->cfgtable->MaxPerformantModeCommands)); 3500 + 3501 + /* Limit commands in memory limited kdump scenario. */ 3502 + if (reset_devices && h->max_commands > 32) 3503 + h->max_commands = 32; 3504 + 3559 3505 if (h->max_commands < 16) { 3560 3506 dev_warn(&h->pdev->dev, "Controller reports " 3561 3507 "max supported commands of %d, an obvious lie. " ··· 3635 3571 static void __devinit hpsa_wait_for_mode_change_ack(struct ctlr_info *h) 3636 3572 { 3637 3573 int i; 3574 + u32 doorbell_value; 3575 + unsigned long flags; 3638 3576 3639 3577 /* under certain very rare conditions, this can take awhile. 3640 3578 * (e.g.: hot replace a failed 144GB drive in a RAID 5 set right 3641 3579 * as we enter this code.) 
3642 3580 */ 3643 3581 for (i = 0; i < MAX_CONFIG_WAIT; i++) { 3644 - if (!(readl(h->vaddr + SA5_DOORBELL) & CFGTBL_ChangeReq)) 3582 + spin_lock_irqsave(&h->lock, flags); 3583 + doorbell_value = readl(h->vaddr + SA5_DOORBELL); 3584 + spin_unlock_irqrestore(&h->lock, flags); 3585 + if (!(doorbell_value & CFGTBL_ChangeReq)) 3645 3586 break; 3646 3587 /* delay and try again */ 3647 - msleep(10); 3588 + usleep_range(10000, 20000); 3648 3589 } 3649 3590 } 3650 3591 ··· 3672 3603 "unable to get board into simple mode\n"); 3673 3604 return -ENODEV; 3674 3605 } 3606 + h->transMethod = CFGTBL_Trans_Simple; 3675 3607 return 0; 3676 3608 } 3677 3609 ··· 3711 3641 err = -ENOMEM; 3712 3642 goto err_out_free_res; 3713 3643 } 3714 - err = hpsa_wait_for_board_ready(h); 3644 + err = hpsa_wait_for_board_state(h->pdev, h->vaddr, BOARD_READY); 3715 3645 if (err) 3716 3646 goto err_out_free_res; 3717 3647 err = hpsa_find_cfgtables(h); ··· 3780 3710 return 0; /* just try to do the kdump anyhow. */ 3781 3711 if (rc) 3782 3712 return -ENODEV; 3783 - if (hpsa_reset_msi(pdev)) 3784 - return -ENODEV; 3785 3713 3786 3714 /* Now try to get the controller to respond to a no-op */ 3787 3715 for (i = 0; i < HPSA_POST_RESET_NOOP_RETRIES; i++) { ··· 3817 3749 3818 3750 h->pdev = pdev; 3819 3751 h->busy_initializing = 1; 3820 - INIT_HLIST_HEAD(&h->cmpQ); 3821 - INIT_HLIST_HEAD(&h->reqQ); 3752 + h->intr_mode = hpsa_simple_mode ? SIMPLE_MODE_INT : PERF_MODE_INT; 3753 + INIT_LIST_HEAD(&h->cmpQ); 3754 + INIT_LIST_HEAD(&h->reqQ); 3755 + spin_lock_init(&h->lock); 3756 + spin_lock_init(&h->scan_lock); 3822 3757 rc = hpsa_pci_init(h); 3823 3758 if (rc != 0) 3824 3759 goto clean1; ··· 3848 3777 h->access.set_intr_mask(h, HPSA_INTR_OFF); 3849 3778 3850 3779 if (h->msix_vector || h->msi_vector) 3851 - rc = request_irq(h->intr[PERF_MODE_INT], do_hpsa_intr_msi, 3780 + rc = request_irq(h->intr[h->intr_mode], do_hpsa_intr_msi, 3852 3781 IRQF_DISABLED, h->devname, h); 3853 3782 else 3854 - rc = request_irq(h->intr[PERF_MODE_INT], do_hpsa_intr_intx, 3783 + rc = request_irq(h->intr[h->intr_mode], do_hpsa_intr_intx, 3855 3784 IRQF_DISABLED, h->devname, h); 3856 3785 if (rc) { 3857 3786 dev_err(&pdev->dev, "unable to get irq %d for %s\n", 3858 - h->intr[PERF_MODE_INT], h->devname); 3787 + h->intr[h->intr_mode], h->devname); 3859 3788 goto clean2; 3860 3789 } 3861 3790 3862 3791 dev_info(&pdev->dev, "%s: <0x%x> at IRQ %d%s using DAC\n", 3863 3792 h->devname, pdev->device, 3864 - h->intr[PERF_MODE_INT], dac ? "" : " not"); 3793 + h->intr[h->intr_mode], dac ? 
"" : " not"); 3865 3794 3866 3795 h->cmd_pool_bits = 3867 3796 kmalloc(((h->nr_cmds + BITS_PER_LONG - ··· 3881 3810 } 3882 3811 if (hpsa_allocate_sg_chain_blocks(h)) 3883 3812 goto clean4; 3884 - spin_lock_init(&h->lock); 3885 - spin_lock_init(&h->scan_lock); 3886 3813 init_waitqueue_head(&h->scan_wait_queue); 3887 3814 h->scan_finished = 1; /* no scan currently in progress */ 3888 3815 ··· 3912 3843 h->nr_cmds * sizeof(struct ErrorInfo), 3913 3844 h->errinfo_pool, 3914 3845 h->errinfo_pool_dhandle); 3915 - free_irq(h->intr[PERF_MODE_INT], h); 3846 + free_irq(h->intr[h->intr_mode], h); 3916 3847 clean2: 3917 3848 clean1: 3918 3849 h->busy_initializing = 0; ··· 3956 3887 */ 3957 3888 hpsa_flush_cache(h); 3958 3889 h->access.set_intr_mask(h, HPSA_INTR_OFF); 3959 - free_irq(h->intr[PERF_MODE_INT], h); 3890 + free_irq(h->intr[h->intr_mode], h); 3960 3891 #ifdef CONFIG_PCI_MSI 3961 3892 if (h->msix_vector) 3962 3893 pci_disable_msix(h->pdev); ··· 4058 3989 } 4059 3990 } 4060 3991 4061 - static __devinit void hpsa_enter_performant_mode(struct ctlr_info *h) 3992 + static __devinit void hpsa_enter_performant_mode(struct ctlr_info *h, 3993 + u32 use_short_tags) 4062 3994 { 4063 3995 int i; 4064 3996 unsigned long register_value; ··· 4107 4037 writel(0, &h->transtable->RepQCtrAddrHigh32); 4108 4038 writel(h->reply_pool_dhandle, &h->transtable->RepQAddr0Low32); 4109 4039 writel(0, &h->transtable->RepQAddr0High32); 4110 - writel(CFGTBL_Trans_Performant, 4040 + writel(CFGTBL_Trans_Performant | use_short_tags, 4111 4041 &(h->cfgtable->HostWrite.TransportRequest)); 4112 4042 writel(CFGTBL_ChangeReq, h->vaddr + SA5_DOORBELL); 4113 4043 hpsa_wait_for_mode_change_ack(h); ··· 4117 4047 " performant mode\n"); 4118 4048 return; 4119 4049 } 4050 + /* Change the access methods to the performant access methods */ 4051 + h->access = SA5_performant_access; 4052 + h->transMethod = CFGTBL_Trans_Performant; 4120 4053 } 4121 4054 4122 4055 static __devinit void hpsa_put_ctlr_into_performant_mode(struct ctlr_info *h) 4123 4056 { 4124 4057 u32 trans_support; 4058 + 4059 + if (hpsa_simple_mode) 4060 + return; 4125 4061 4126 4062 trans_support = readl(&(h->cfgtable->TransportSupport)); 4127 4063 if (!(trans_support & PERFORMANT_MODE)) ··· 4148 4072 || (h->blockFetchTable == NULL)) 4149 4073 goto clean_up; 4150 4074 4151 - hpsa_enter_performant_mode(h); 4152 - 4153 - /* Change the access methods to the performant access methods */ 4154 - h->access = SA5_performant_access; 4155 - h->transMethod = CFGTBL_Trans_Performant; 4075 + hpsa_enter_performant_mode(h, 4076 + trans_support & CFGTBL_Trans_use_short_tags); 4156 4077 4157 4078 return; 4158 4079
+7 -2
drivers/scsi/hpsa.h
··· 72 72 unsigned int intr[4]; 73 73 unsigned int msix_vector; 74 74 unsigned int msi_vector; 75 + int intr_mode; /* either PERF_MODE_INT or SIMPLE_MODE_INT */ 75 76 struct access_method access; 76 77 77 78 /* queue and queue Info */ 78 - struct hlist_head reqQ; 79 - struct hlist_head cmpQ; 79 + struct list_head reqQ; 80 + struct list_head cmpQ; 80 81 unsigned int Qdepth; 81 82 unsigned int maxQsinceinit; 82 83 unsigned int maxSG; ··· 155 154 * HPSA_BOARD_READY_ITERATIONS are derived from those. 156 155 */ 157 156 #define HPSA_BOARD_READY_WAIT_SECS (120) 157 + #define HPSA_BOARD_NOT_READY_WAIT_SECS (10) 158 158 #define HPSA_BOARD_READY_POLL_INTERVAL_MSECS (100) 159 159 #define HPSA_BOARD_READY_POLL_INTERVAL \ 160 160 ((HPSA_BOARD_READY_POLL_INTERVAL_MSECS * HZ) / 1000) 161 161 #define HPSA_BOARD_READY_ITERATIONS \ 162 162 ((HPSA_BOARD_READY_WAIT_SECS * 1000) / \ 163 + HPSA_BOARD_READY_POLL_INTERVAL_MSECS) 164 + #define HPSA_BOARD_NOT_READY_ITERATIONS \ 165 + ((HPSA_BOARD_NOT_READY_WAIT_SECS * 1000) / \ 163 166 HPSA_BOARD_READY_POLL_INTERVAL_MSECS) 164 167 #define HPSA_POST_RESET_PAUSE_MSECS (3000) 165 168 #define HPSA_POST_RESET_NOOP_RETRIES (12)
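
Note: the derived polling-iteration counts above are easy to sanity-check. A throwaway standalone sketch of the arithmetic (macros copied from the hunk; HZ only enters the unused jiffies interval, not the counts):

#include <stdio.h>

#define HPSA_BOARD_READY_WAIT_SECS		(120)
#define HPSA_BOARD_NOT_READY_WAIT_SECS		(10)
#define HPSA_BOARD_READY_POLL_INTERVAL_MSECS	(100)
#define HPSA_BOARD_READY_ITERATIONS \
	((HPSA_BOARD_READY_WAIT_SECS * 1000) / \
		HPSA_BOARD_READY_POLL_INTERVAL_MSECS)
#define HPSA_BOARD_NOT_READY_ITERATIONS \
	((HPSA_BOARD_NOT_READY_WAIT_SECS * 1000) / \
		HPSA_BOARD_READY_POLL_INTERVAL_MSECS)

int main(void)
{
	/* 120 s at one poll per 100 ms */
	printf("ready polls:     %d\n", HPSA_BOARD_READY_ITERATIONS);	/* 1200 */
	/* 10 s bound for the board to drop out of ready after reset */
	printf("not-ready polls: %d\n", HPSA_BOARD_NOT_READY_ITERATIONS);	/* 100 */
	return 0;
}
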
+3 -1
drivers/scsi/hpsa_cmd.h
··· 104 104 105 105 #define CFGTBL_Trans_Simple 0x00000002l 106 106 #define CFGTBL_Trans_Performant 0x00000004l 107 + #define CFGTBL_Trans_use_short_tags 0x20000000l 107 108 108 109 #define CFGTBL_BusType_Ultra2 0x00000001l 109 110 #define CFGTBL_BusType_Ultra3 0x00000002l ··· 266 265 267 266 #define DIRECT_LOOKUP_SHIFT 5 268 267 #define DIRECT_LOOKUP_BIT 0x10 268 + #define DIRECT_LOOKUP_MASK (~((1 << DIRECT_LOOKUP_SHIFT) - 1)) 269 269 270 270 #define HPSA_ERROR_BIT 0x02 271 271 struct ctlr_info; /* defined in hpsa.h */ ··· 293 291 struct ctlr_info *h; 294 292 int cmd_type; 295 293 long cmdindex; 296 - struct hlist_node list; 294 + struct list_head list; 297 295 struct request *rq; 298 296 struct completion *waiting; 299 297 void *scsi_cmd;
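
Note: these constants drive the tag handling in hpsa.c above. The low DIRECT_LOOKUP_SHIFT bits of a tag carry mode/error flags, so DIRECT_LOOKUP_MASK (0xffffffe0) recovers the DMA handle that cmd_special_free() must pass back to pci_free_consistent(), and it is the same 0xFFFFFFE0 that process_nonindexed_cmd() compares tags under. A minimal standalone sketch, with an invented busaddr and block-fetch value:

#include <stdio.h>
#include <stdint.h>

#define DIRECT_LOOKUP_SHIFT	5
#define DIRECT_LOOKUP_BIT	0x10
#define DIRECT_LOOKUP_MASK	(~((1 << DIRECT_LOOKUP_SHIFT) - 1))
#define HPSA_PERF_ERROR_BITS	((1 << DIRECT_LOOKUP_SHIFT) - 1)
#define HPSA_SIMPLE_ERROR_BITS	0x03

int main(void)
{
	uint32_t busaddr = 0x00400ea0 | DIRECT_LOOKUP_BIT;	/* invented */
	uint32_t block_fetch = 3;	/* stand-in for blockFetchTable[SGList] */

	/* set_performant_mode(): bit 0 = pull model, bits 3-1 = fetch reg */
	uint32_t tag = busaddr | 1 | (block_fetch << 1);

	printf("mask            0x%08x\n", (uint32_t)DIRECT_LOOKUP_MASK);
	printf("raw tag         0x%08x\n", tag);
	printf("perf-cleaned    0x%08x\n", tag & ~HPSA_PERF_ERROR_BITS);
	printf("simple-cleaned  0x%08x\n", tag & ~HPSA_SIMPLE_ERROR_BITS);
	printf("dma handle      0x%08x\n", busaddr & DIRECT_LOOKUP_MASK);
	if (tag & DIRECT_LOOKUP_BIT)	/* hpsa_tag_contains_index() */
		printf("command index   %u\n", tag >> DIRECT_LOOKUP_SHIFT);
	return 0;
}
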
+6 -3
drivers/scsi/ipr.c
··· 1301 1301 ipr_clear_res_target(res); 1302 1302 list_move_tail(&res->queue, &ioa_cfg->free_res_q); 1303 1303 } 1304 - } else if (!res->sdev) { 1304 + } else if (!res->sdev || res->del_from_ml) { 1305 1305 res->add_to_ml = 1; 1306 1306 if (ioa_cfg->allow_ml_add_del) 1307 1307 schedule_work(&ioa_cfg->work_q); ··· 3104 3104 did_work = 1; 3105 3105 sdev = res->sdev; 3106 3106 if (!scsi_device_get(sdev)) { 3107 - list_move_tail(&res->queue, &ioa_cfg->free_res_q); 3107 + if (!res->add_to_ml) 3108 + list_move_tail(&res->queue, &ioa_cfg->free_res_q); 3109 + else 3110 + res->del_from_ml = 0; 3108 3111 spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3109 3112 scsi_remove_device(sdev); 3110 3113 scsi_device_put(sdev); ··· 8867 8864 8868 8865 spin_unlock_irqrestore(ioa_cfg->host->host_lock, host_lock_flags); 8869 8866 wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload); 8870 - flush_scheduled_work(); 8867 + flush_work_sync(&ioa_cfg->work_q); 8871 8868 spin_lock_irqsave(ioa_cfg->host->host_lock, host_lock_flags); 8872 8869 8873 8870 spin_lock(&ipr_driver_lock);
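
Note: the flush_scheduled_work() -> flush_work_sync() change above narrows the wait to ipr's own work item instead of draining the entire global workqueue. A minimal module-context sketch of that pattern; struct demo_cfg and the handlers are invented:

#include <linux/workqueue.h>

struct demo_cfg {
	struct work_struct work_q;
};

static void demo_worker(struct work_struct *work)
{
	struct demo_cfg *cfg = container_of(work, struct demo_cfg, work_q);

	/* ... process device add/remove requests for cfg ... */
	(void)cfg;
}

static void demo_init(struct demo_cfg *cfg)
{
	INIT_WORK(&cfg->work_q, demo_worker);
	schedule_work(&cfg->work_q);
}

static void demo_quiesce(struct demo_cfg *cfg)
{
	/* waits only for demo_worker, whether queued or already running,
	 * rather than blocking on unrelated items like
	 * flush_scheduled_work() would */
	flush_work_sync(&cfg->work_q);
}
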
+67 -67
drivers/scsi/iscsi_tcp.c
··· 608 608 iscsi_sw_tcp_release_conn(conn); 609 609 } 610 610 611 - static int iscsi_sw_tcp_get_addr(struct iscsi_conn *conn, struct socket *sock, 612 - char *buf, int *port, 613 - int (*getname)(struct socket *, 614 - struct sockaddr *, 615 - int *addrlen)) 616 - { 617 - struct sockaddr_storage *addr; 618 - struct sockaddr_in6 *sin6; 619 - struct sockaddr_in *sin; 620 - int rc = 0, len; 621 - 622 - addr = kmalloc(sizeof(*addr), GFP_KERNEL); 623 - if (!addr) 624 - return -ENOMEM; 625 - 626 - if (getname(sock, (struct sockaddr *) addr, &len)) { 627 - rc = -ENODEV; 628 - goto free_addr; 629 - } 630 - 631 - switch (addr->ss_family) { 632 - case AF_INET: 633 - sin = (struct sockaddr_in *)addr; 634 - spin_lock_bh(&conn->session->lock); 635 - sprintf(buf, "%pI4", &sin->sin_addr.s_addr); 636 - *port = be16_to_cpu(sin->sin_port); 637 - spin_unlock_bh(&conn->session->lock); 638 - break; 639 - case AF_INET6: 640 - sin6 = (struct sockaddr_in6 *)addr; 641 - spin_lock_bh(&conn->session->lock); 642 - sprintf(buf, "%pI6", &sin6->sin6_addr); 643 - *port = be16_to_cpu(sin6->sin6_port); 644 - spin_unlock_bh(&conn->session->lock); 645 - break; 646 - } 647 - free_addr: 648 - kfree(addr); 649 - return rc; 650 - } 651 - 652 611 static int 653 612 iscsi_sw_tcp_conn_bind(struct iscsi_cls_session *cls_session, 654 613 struct iscsi_cls_conn *cls_conn, uint64_t transport_eph, 655 614 int is_leading) 656 615 { 657 - struct Scsi_Host *shost = iscsi_session_to_shost(cls_session); 658 - struct iscsi_host *ihost = shost_priv(shost); 616 + struct iscsi_session *session = cls_session->dd_data; 659 617 struct iscsi_conn *conn = cls_conn->dd_data; 660 618 struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 661 619 struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data; ··· 628 670 "sockfd_lookup failed %d\n", err); 629 671 return -EEXIST; 630 672 } 631 - /* 632 - * copy these values now because if we drop the session 633 - * userspace may still want to query the values since we will 634 - * be using them for the reconnect 635 - */ 636 - err = iscsi_sw_tcp_get_addr(conn, sock, conn->portal_address, 637 - &conn->portal_port, kernel_getpeername); 638 - if (err) 639 - goto free_socket; 640 - 641 - err = iscsi_sw_tcp_get_addr(conn, sock, ihost->local_address, 642 - &ihost->local_port, kernel_getsockname); 643 - if (err) 644 - goto free_socket; 645 673 646 674 err = iscsi_conn_bind(cls_session, cls_conn, is_leading); 647 675 if (err) 648 676 goto free_socket; 649 677 678 + spin_lock_bh(&session->lock); 650 679 /* bind iSCSI connection and socket */ 651 680 tcp_sw_conn->sock = sock; 681 + spin_unlock_bh(&session->lock); 652 682 653 683 /* setup Socket parameters */ 654 684 sk = sock->sk; ··· 698 752 enum iscsi_param param, char *buf) 699 753 { 700 754 struct iscsi_conn *conn = cls_conn->dd_data; 701 - int len; 755 + struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 756 + struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data; 757 + struct sockaddr_in6 addr; 758 + int rc, len; 702 759 703 760 switch(param) { 704 761 case ISCSI_PARAM_CONN_PORT: 705 - spin_lock_bh(&conn->session->lock); 706 - len = sprintf(buf, "%hu\n", conn->portal_port); 707 - spin_unlock_bh(&conn->session->lock); 708 - break; 709 762 case ISCSI_PARAM_CONN_ADDRESS: 710 763 spin_lock_bh(&conn->session->lock); 711 - len = sprintf(buf, "%s\n", conn->portal_address); 764 + if (!tcp_sw_conn || !tcp_sw_conn->sock) { 765 + spin_unlock_bh(&conn->session->lock); 766 + return -ENOTCONN; 767 + } 768 + rc = kernel_getpeername(tcp_sw_conn->sock, 769 + (struct sockaddr 
*)&addr, &len); 712 770 spin_unlock_bh(&conn->session->lock); 713 - break; 771 + if (rc) 772 + return rc; 773 + 774 + return iscsi_conn_get_addr_param((struct sockaddr_storage *) 775 + &addr, param, buf); 714 776 default: 715 777 return iscsi_conn_get_param(cls_conn, param, buf); 716 778 } 717 779 718 - return len; 780 + return 0; 781 + } 782 + 783 + static int iscsi_sw_tcp_host_get_param(struct Scsi_Host *shost, 784 + enum iscsi_host_param param, char *buf) 785 + { 786 + struct iscsi_sw_tcp_host *tcp_sw_host = iscsi_host_priv(shost); 787 + struct iscsi_session *session = tcp_sw_host->session; 788 + struct iscsi_conn *conn; 789 + struct iscsi_tcp_conn *tcp_conn; 790 + struct iscsi_sw_tcp_conn *tcp_sw_conn; 791 + struct sockaddr_in6 addr; 792 + int rc, len; 793 + 794 + switch (param) { 795 + case ISCSI_HOST_PARAM_IPADDRESS: 796 + spin_lock_bh(&session->lock); 797 + conn = session->leadconn; 798 + if (!conn) { 799 + spin_unlock_bh(&session->lock); 800 + return -ENOTCONN; 801 + } 802 + tcp_conn = conn->dd_data; 803 + 804 + tcp_sw_conn = tcp_conn->dd_data; 805 + if (!tcp_sw_conn->sock) { 806 + spin_unlock_bh(&session->lock); 807 + return -ENOTCONN; 808 + } 809 + 810 + rc = kernel_getsockname(tcp_sw_conn->sock, 811 + (struct sockaddr *)&addr, &len); 812 + spin_unlock_bh(&session->lock); 813 + if (rc) 814 + return rc; 815 + 816 + return iscsi_conn_get_addr_param((struct sockaddr_storage *) 817 + &addr, param, buf); 818 + default: 819 + return iscsi_host_get_param(shost, param, buf); 820 + } 821 + 822 + return 0; 719 823 } 720 824 721 825 static void ··· 793 797 { 794 798 struct iscsi_cls_session *cls_session; 795 799 struct iscsi_session *session; 800 + struct iscsi_sw_tcp_host *tcp_sw_host; 796 801 struct Scsi_Host *shost; 797 802 798 803 if (ep) { ··· 801 804 return NULL; 802 805 } 803 806 804 - shost = iscsi_host_alloc(&iscsi_sw_tcp_sht, 0, 1); 807 + shost = iscsi_host_alloc(&iscsi_sw_tcp_sht, 808 + sizeof(struct iscsi_sw_tcp_host), 1); 805 809 if (!shost) 806 810 return NULL; 807 811 shost->transportt = iscsi_sw_tcp_scsi_transport; ··· 823 825 if (!cls_session) 824 826 goto remove_host; 825 827 session = cls_session->dd_data; 828 + tcp_sw_host = iscsi_host_priv(shost); 829 + tcp_sw_host->session = session; 826 830 827 831 shost->can_queue = session->scsi_cmds_max; 828 832 if (iscsi_tcp_r2tpool_alloc(session)) ··· 929 929 .start_conn = iscsi_conn_start, 930 930 .stop_conn = iscsi_sw_tcp_conn_stop, 931 931 /* iscsi host params */ 932 - .get_host_param = iscsi_host_get_param, 932 + .get_host_param = iscsi_sw_tcp_host_get_param, 933 933 .set_host_param = iscsi_host_set_param, 934 934 /* IO */ 935 935 .send_pdu = iscsi_conn_send_pdu,
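
Note: the iscsi_tcp change above stops caching addresses at bind time and instead queries the live socket whenever userspace asks. The core pattern, reduced to a sketch (demo_peer_to_buf is invented; kernel_getpeername() and the %pI4/%pI6 printk formats are real kernel interfaces of this era):

#include <linux/errno.h>
#include <linux/in.h>
#include <linux/in6.h>
#include <linux/net.h>
#include <linux/socket.h>

static int demo_peer_to_buf(struct socket *sock, char *buf)
{
	struct sockaddr_storage addr;
	int len = sizeof(addr);
	int rc;

	/* read the address fresh from the socket, under the caller's lock */
	rc = kernel_getpeername(sock, (struct sockaddr *)&addr, &len);
	if (rc)
		return rc;

	switch (addr.ss_family) {
	case AF_INET:
		return sprintf(buf, "%pI4\n",
			       &((struct sockaddr_in *)&addr)->sin_addr.s_addr);
	case AF_INET6:
		return sprintf(buf, "%pI6\n",
			       &((struct sockaddr_in6 *)&addr)->sin6_addr);
	}
	return -EINVAL;
}
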
+4
drivers/scsi/iscsi_tcp.h
··· 55 55 ssize_t (*sendpage)(struct socket *, struct page *, int, size_t, int); 56 56 }; 57 57 58 + struct iscsi_sw_tcp_host { 59 + struct iscsi_session *session; 60 + }; 61 + 58 62 struct iscsi_sw_tcp_hdrbuf { 59 63 struct iscsi_hdr hdrbuf; 60 64 char hdrextbuf[ISCSI_MAX_AHS_SIZE +
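
Note: with iscsi_host_alloc() in iscsi_tcp.c above now passing sizeof(struct iscsi_sw_tcp_host) for the per-host private area, the owning session can be reached from any Scsi_Host. A two-line sketch, assuming the private area was filled in at session creation as the hunk does:

#include <scsi/scsi_host.h>
#include <scsi/scsi_transport_iscsi.h>

static struct iscsi_session *demo_session_of(struct Scsi_Host *shost)
{
	struct iscsi_sw_tcp_host *tcp_sw_host = iscsi_host_priv(shost);

	return tcp_sw_host->session;	/* set when the session was created */
}
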
+82 -29
drivers/scsi/libfc/fc_exch.c
··· 38 38 EXPORT_SYMBOL(fc_cpu_mask); 39 39 static u16 fc_cpu_order; /* 2's power to represent total possible cpus */ 40 40 static struct kmem_cache *fc_em_cachep; /* cache for exchanges */ 41 - struct workqueue_struct *fc_exch_workqueue; 41 + static struct workqueue_struct *fc_exch_workqueue; 42 42 43 43 /* 44 44 * Structure and function definitions for managing Fibre Channel Exchanges ··· 558 558 return sp; 559 559 } 560 560 561 + /* 562 + * Set the response handler for the exchange associated with a sequence. 563 + */ 564 + static void fc_seq_set_resp(struct fc_seq *sp, 565 + void (*resp)(struct fc_seq *, struct fc_frame *, 566 + void *), 567 + void *arg) 568 + { 569 + struct fc_exch *ep = fc_seq_exch(sp); 570 + 571 + spin_lock_bh(&ep->ex_lock); 572 + ep->resp = resp; 573 + ep->arg = arg; 574 + spin_unlock_bh(&ep->ex_lock); 575 + } 576 + 561 577 /** 562 578 * fc_seq_exch_abort() - Abort an exchange and sequence 563 579 * @req_sp: The sequence to be aborted ··· 666 650 if (e_stat & ESB_ST_ABNORMAL) 667 651 rc = fc_exch_done_locked(ep); 668 652 spin_unlock_bh(&ep->ex_lock); 653 + if (!rc) 654 + fc_exch_delete(ep); 669 655 if (resp) 670 656 resp(sp, ERR_PTR(-FC_EX_TIMEOUT), arg); 671 - if (!rc) { 672 - /* delete the exchange if it's already being aborted */ 673 - fc_exch_delete(ep); 674 - return; 675 - } 676 657 fc_seq_exch_abort(sp, 2 * ep->r_a_tov); 677 658 goto done; 678 659 } ··· 1279 1266 * @fp: The request frame 1280 1267 * 1281 1268 * On success, the sequence pointer will be returned and also in fr_seq(@fp). 1269 + * A reference will be held on the exchange/sequence for the caller, which 1270 + * must call fc_seq_release(). 1282 1271 */ 1283 1272 static struct fc_seq *fc_seq_assign(struct fc_lport *lport, struct fc_frame *fp) 1284 1273 { ··· 1295 1280 fc_seq_lookup_recip(lport, ema->mp, fp) == FC_RJT_NONE) 1296 1281 break; 1297 1282 return fr_seq(fp); 1283 + } 1284 + 1285 + /** 1286 + * fc_seq_release() - Release the hold 1287 + * @sp: The sequence. 1288 + */ 1289 + static void fc_seq_release(struct fc_seq *sp) 1290 + { 1291 + fc_exch_release(fc_seq_exch(sp)); 1298 1292 } 1299 1293 1300 1294 /** ··· 2175 2151 fc_exch_mgr_del(ema); 2176 2152 return -ENOMEM; 2177 2153 } 2154 + EXPORT_SYMBOL(fc_exch_mgr_list_clone); 2178 2155 2179 2156 /** 2180 2157 * fc_exch_mgr_alloc() - Allocate an exchange manager ··· 2279 2254 EXPORT_SYMBOL(fc_exch_mgr_free); 2280 2255 2281 2256 /** 2257 + * fc_find_ema() - Lookup and return appropriate Exchange Manager Anchor depending 2258 + * upon 'xid'. 
2259 + * @f_ctl: f_ctl 2260 + * @lport: The local port the frame was received on 2261 + * @fh: The received frame header 2262 + */ 2263 + static struct fc_exch_mgr_anchor *fc_find_ema(u32 f_ctl, 2264 + struct fc_lport *lport, 2265 + struct fc_frame_header *fh) 2266 + { 2267 + struct fc_exch_mgr_anchor *ema; 2268 + u16 xid; 2269 + 2270 + if (f_ctl & FC_FC_EX_CTX) 2271 + xid = ntohs(fh->fh_ox_id); 2272 + else { 2273 + xid = ntohs(fh->fh_rx_id); 2274 + if (xid == FC_XID_UNKNOWN) 2275 + return list_entry(lport->ema_list.prev, 2276 + typeof(*ema), ema_list); 2277 + } 2278 + 2279 + list_for_each_entry(ema, &lport->ema_list, ema_list) { 2280 + if ((xid >= ema->mp->min_xid) && 2281 + (xid <= ema->mp->max_xid)) 2282 + return ema; 2283 + } 2284 + return NULL; 2285 + } 2286 + /** 2282 2287 * fc_exch_recv() - Handler for received frames 2283 2288 * @lport: The local port the frame was received on 2284 - * @fp: The received frame 2289 + * @fp: The received frame 2285 2290 */ 2286 2291 void fc_exch_recv(struct fc_lport *lport, struct fc_frame *fp) 2287 2292 { 2288 2293 struct fc_frame_header *fh = fc_frame_header_get(fp); 2289 2294 struct fc_exch_mgr_anchor *ema; 2290 - u32 f_ctl, found = 0; 2291 - u16 oxid; 2295 + u32 f_ctl; 2292 2296 2293 2297 /* lport lock ? */ 2294 2298 if (!lport || lport->state == LPORT_ST_DISABLED) { ··· 2328 2274 } 2329 2275 2330 2276 f_ctl = ntoh24(fh->fh_f_ctl); 2331 - oxid = ntohs(fh->fh_ox_id); 2332 - if (f_ctl & FC_FC_EX_CTX) { 2333 - list_for_each_entry(ema, &lport->ema_list, ema_list) { 2334 - if ((oxid >= ema->mp->min_xid) && 2335 - (oxid <= ema->mp->max_xid)) { 2336 - found = 1; 2337 - break; 2338 - } 2339 - } 2340 - 2341 - if (!found) { 2342 - FC_LPORT_DBG(lport, "Received response for out " 2343 - "of range oxid:%hx\n", oxid); 2344 - fc_frame_free(fp); 2345 - return; 2346 - } 2347 - } else 2348 - ema = list_entry(lport->ema_list.prev, typeof(*ema), ema_list); 2277 + ema = fc_find_ema(f_ctl, lport, fh); 2278 + if (!ema) { 2279 + FC_LPORT_DBG(lport, "Unable to find Exchange Manager Anchor," 2280 + "fc_ctl <0x%x>, xid <0x%x>\n", 2281 + f_ctl, 2282 + (f_ctl & FC_FC_EX_CTX) ? 2283 + ntohs(fh->fh_ox_id) : 2284 + ntohs(fh->fh_rx_id)); 2285 + fc_frame_free(fp); 2286 + return; 2287 + } 2349 2288 2350 2289 /* 2351 2290 * If frame is marked invalid, just drop it. ··· 2376 2329 if (!lport->tt.seq_start_next) 2377 2330 lport->tt.seq_start_next = fc_seq_start_next; 2378 2331 2332 + if (!lport->tt.seq_set_resp) 2333 + lport->tt.seq_set_resp = fc_seq_set_resp; 2334 + 2379 2335 if (!lport->tt.exch_seq_send) 2380 2336 lport->tt.exch_seq_send = fc_exch_seq_send; 2381 2337 ··· 2400 2350 if (!lport->tt.seq_assign) 2401 2351 lport->tt.seq_assign = fc_seq_assign; 2402 2352 2353 + if (!lport->tt.seq_release) 2354 + lport->tt.seq_release = fc_seq_release; 2355 + 2403 2356 return 0; 2404 2357 } 2405 2358 EXPORT_SYMBOL(fc_exch_init); ··· 2410 2357 /** 2411 2358 * fc_setup_exch_mgr() - Setup an exchange manager 2412 2359 */ 2413 - int fc_setup_exch_mgr() 2360 + int fc_setup_exch_mgr(void) 2414 2361 { 2415 2362 fc_em_cachep = kmem_cache_create("libfc_em", sizeof(struct fc_exch), 2416 2363 0, SLAB_HWCACHE_ALIGN, NULL); ··· 2448 2395 /** 2449 2396 * fc_destroy_exch_mgr() - Destroy an exchange manager 2450 2397 */ 2451 - void fc_destroy_exch_mgr() 2398 + void fc_destroy_exch_mgr(void) 2452 2399 { 2453 2400 destroy_workqueue(fc_exch_workqueue); 2454 2401 kmem_cache_destroy(fc_em_cachep);
+15 -24
drivers/scsi/libfc/fc_fcp.c
··· 42 42 43 43 #include "fc_libfc.h" 44 44 45 - struct kmem_cache *scsi_pkt_cachep; 45 + static struct kmem_cache *scsi_pkt_cachep; 46 46 47 47 /* SRB state definitions */ 48 48 #define FC_SRB_FREE 0 /* cmd is free */ ··· 155 155 if (fsp) { 156 156 memset(fsp, 0, sizeof(*fsp)); 157 157 fsp->lp = lport; 158 + fsp->xfer_ddp = FC_XID_UNKNOWN; 158 159 atomic_set(&fsp->ref_cnt, 1); 159 160 init_timer(&fsp->timer); 160 161 INIT_LIST_HEAD(&fsp->list); ··· 1202 1201 static int fc_fcp_pkt_abort(struct fc_fcp_pkt *fsp) 1203 1202 { 1204 1203 int rc = FAILED; 1204 + unsigned long ticks_left; 1205 1205 1206 1206 if (fc_fcp_send_abort(fsp)) 1207 1207 return FAILED; ··· 1211 1209 fsp->wait_for_comp = 1; 1212 1210 1213 1211 spin_unlock_bh(&fsp->scsi_pkt_lock); 1214 - rc = wait_for_completion_timeout(&fsp->tm_done, FC_SCSI_TM_TOV); 1212 + ticks_left = wait_for_completion_timeout(&fsp->tm_done, 1213 + FC_SCSI_TM_TOV); 1215 1214 spin_lock_bh(&fsp->scsi_pkt_lock); 1216 1215 fsp->wait_for_comp = 0; 1217 1216 1218 - if (!rc) { 1217 + if (!ticks_left) { 1219 1218 FC_FCP_DBG(fsp, "target abort cmd failed\n"); 1220 - rc = FAILED; 1221 1219 } else if (fsp->state & FC_SRB_ABORTED) { 1222 1220 FC_FCP_DBG(fsp, "target abort cmd passed\n"); 1223 1221 rc = SUCCESS; ··· 1323 1321 * 1324 1322 * scsi-eh will escalate for when either happens. 1325 1323 */ 1326 - goto out; 1324 + return; 1327 1325 } 1328 1326 1329 1327 if (fc_fcp_lock_pkt(fsp)) ··· 1789 1787 1790 1788 /** 1791 1789 * fc_queuecommand() - The queuecommand function of the SCSI template 1790 + * @shost: The Scsi_Host that the command was issued to 1792 1791 * @cmd: The scsi_cmnd to be executed 1793 - * @done: The callback function to be called when the scsi_cmnd is complete 1794 1792 * 1795 - * This is the i/o strategy routine, called by the SCSI layer. This routine 1796 - * is called with the host_lock held. 1793 + * This is the i/o strategy routine, called by the SCSI layer. 
1797 1794 */ 1798 - static int fc_queuecommand_lck(struct scsi_cmnd *sc_cmd, void (*done)(struct scsi_cmnd *)) 1795 + int fc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc_cmd) 1799 1796 { 1800 - struct fc_lport *lport; 1797 + struct fc_lport *lport = shost_priv(shost); 1801 1798 struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device)); 1802 1799 struct fc_fcp_pkt *fsp; 1803 1800 struct fc_rport_libfc_priv *rpriv; ··· 1804 1803 int rc = 0; 1805 1804 struct fcoe_dev_stats *stats; 1806 1805 1807 - lport = shost_priv(sc_cmd->device->host); 1808 - 1809 1806 rval = fc_remote_port_chkready(rport); 1810 1807 if (rval) { 1811 1808 sc_cmd->result = rval; 1812 - done(sc_cmd); 1809 + sc_cmd->scsi_done(sc_cmd); 1813 1810 return 0; 1814 1811 } 1815 - spin_unlock_irq(lport->host->host_lock); 1816 1812 1817 1813 if (!*(struct fc_remote_port **)rport->dd_data) { 1818 1814 /* ··· 1817 1819 * online 1818 1820 */ 1819 1821 sc_cmd->result = DID_IMM_RETRY << 16; 1820 - done(sc_cmd); 1822 + sc_cmd->scsi_done(sc_cmd); 1821 1823 goto out; 1822 1824 } 1823 1825 ··· 1840 1842 * build the libfc request pkt 1841 1843 */ 1842 1844 fsp->cmd = sc_cmd; /* save the cmd */ 1843 - fsp->lp = lport; /* save the softc ptr */ 1844 1845 fsp->rport = rport; /* set the remote port ptr */ 1845 - fsp->xfer_ddp = FC_XID_UNKNOWN; 1846 - sc_cmd->scsi_done = done; 1847 1846 1848 1847 /* 1849 1848 * set up the transfer length ··· 1881 1886 rc = SCSI_MLQUEUE_HOST_BUSY; 1882 1887 } 1883 1888 out: 1884 - spin_lock_irq(lport->host->host_lock); 1885 1889 return rc; 1886 1890 } 1887 - 1888 - DEF_SCSI_QCMD(fc_queuecommand) 1889 1891 EXPORT_SYMBOL(fc_queuecommand); 1890 1892 1891 1893 /** ··· 2104 2112 * the sc passed in is not setup for execution like when sent 2105 2113 * through the queuecommand callout. 2106 2114 */ 2107 - fsp->lp = lport; /* save the softc ptr */ 2108 2115 fsp->rport = rport; /* set the remote port ptr */ 2109 2116 2110 2117 /* ··· 2236 2245 } 2237 2246 EXPORT_SYMBOL(fc_fcp_destroy); 2238 2247 2239 - int fc_setup_fcp() 2248 + int fc_setup_fcp(void) 2240 2249 { 2241 2250 int rc = 0; 2242 2251 ··· 2252 2261 return rc; 2253 2262 } 2254 2263 2255 - void fc_destroy_fcp() 2264 + void fc_destroy_fcp(void) 2256 2265 { 2257 2266 if (scsi_pkt_cachep) 2258 2267 kmem_cache_destroy(scsi_pkt_cachep);
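
Note: fc_queuecommand() above moves to the lock-free queuecommand convention: the host is passed explicitly, no host_lock dropping/retaking, and early completion goes through sc_cmd->scsi_done. A skeletal sketch of that convention; everything demo_* is invented:

#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

struct demo_port { int link_up; };

static int demo_send(struct demo_port *port, struct scsi_cmnd *sc)
{
	return 0;	/* pretend the command was issued to hardware */
}

static int demo_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc_cmd)
{
	struct demo_port *port = shost_priv(shost);

	if (!port->link_up) {
		sc_cmd->result = DID_NO_CONNECT << 16;
		sc_cmd->scsi_done(sc_cmd);	/* complete without queuing */
		return 0;
	}
	if (demo_send(port, sc_cmd))
		return SCSI_MLQUEUE_HOST_BUSY;	/* midlayer retries later */
	return 0;
}
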
+120
drivers/scsi/libfc/fc_libfc.c
··· 35 35 module_param_named(debug_logging, fc_debug_logging, int, S_IRUGO|S_IWUSR); 36 36 MODULE_PARM_DESC(debug_logging, "a bit mask of logging levels"); 37 37 38 + DEFINE_MUTEX(fc_prov_mutex); 39 + static LIST_HEAD(fc_local_ports); 40 + struct blocking_notifier_head fc_lport_notifier_head = 41 + BLOCKING_NOTIFIER_INIT(fc_lport_notifier_head); 42 + EXPORT_SYMBOL(fc_lport_notifier_head); 43 + 44 + /* 45 + * Providers which primarily send requests and PRLIs. 46 + */ 47 + struct fc4_prov *fc_active_prov[FC_FC4_PROV_SIZE] = { 48 + [0] = &fc_rport_t0_prov, 49 + [FC_TYPE_FCP] = &fc_rport_fcp_init, 50 + }; 51 + 52 + /* 53 + * Providers which receive requests. 54 + */ 55 + struct fc4_prov *fc_passive_prov[FC_FC4_PROV_SIZE] = { 56 + [FC_TYPE_ELS] = &fc_lport_els_prov, 57 + }; 58 + 38 59 /** 39 60 * libfc_init() - Initialize libfc.ko 40 61 */ ··· 231 210 fc_fill_hdr(fp, in_fp, r_ctl, FC_FCTL_RESP, 0, parm_offset); 232 211 } 233 212 EXPORT_SYMBOL(fc_fill_reply_hdr); 213 + 214 + /** 215 + * fc_fc4_conf_lport_params() - Modify "service_params" of specified lport 216 + * if there is service provider (target provider) registered with libfc 217 + * for specified "fc_ft_type" 218 + * @lport: Local port which service_params needs to be modified 219 + * @type: FC-4 type, such as FC_TYPE_FCP 220 + */ 221 + void fc_fc4_conf_lport_params(struct fc_lport *lport, enum fc_fh_type type) 222 + { 223 + struct fc4_prov *prov_entry; 224 + BUG_ON(type >= FC_FC4_PROV_SIZE); 225 + BUG_ON(!lport); 226 + prov_entry = fc_passive_prov[type]; 227 + if (type == FC_TYPE_FCP) { 228 + if (prov_entry && prov_entry->recv) 229 + lport->service_params |= FCP_SPPF_TARG_FCN; 230 + } 231 + } 232 + 233 + void fc_lport_iterate(void (*notify)(struct fc_lport *, void *), void *arg) 234 + { 235 + struct fc_lport *lport; 236 + 237 + mutex_lock(&fc_prov_mutex); 238 + list_for_each_entry(lport, &fc_local_ports, lport_list) 239 + notify(lport, arg); 240 + mutex_unlock(&fc_prov_mutex); 241 + } 242 + EXPORT_SYMBOL(fc_lport_iterate); 243 + 244 + /** 245 + * fc_fc4_register_provider() - register FC-4 upper-level provider. 246 + * @type: FC-4 type, such as FC_TYPE_FCP 247 + * @prov: structure describing provider including ops vector. 248 + * 249 + * Returns 0 on success, negative error otherwise. 250 + */ 251 + int fc_fc4_register_provider(enum fc_fh_type type, struct fc4_prov *prov) 252 + { 253 + struct fc4_prov **prov_entry; 254 + int ret = 0; 255 + 256 + if (type >= FC_FC4_PROV_SIZE) 257 + return -EINVAL; 258 + mutex_lock(&fc_prov_mutex); 259 + prov_entry = (prov->recv ? fc_passive_prov : fc_active_prov) + type; 260 + if (*prov_entry) 261 + ret = -EBUSY; 262 + else 263 + *prov_entry = prov; 264 + mutex_unlock(&fc_prov_mutex); 265 + return ret; 266 + } 267 + EXPORT_SYMBOL(fc_fc4_register_provider); 268 + 269 + /** 270 + * fc_fc4_deregister_provider() - deregister FC-4 upper-level provider. 271 + * @type: FC-4 type, such as FC_TYPE_FCP 272 + * @prov: structure describing provider including ops vector. 273 + */ 274 + void fc_fc4_deregister_provider(enum fc_fh_type type, struct fc4_prov *prov) 275 + { 276 + BUG_ON(type >= FC_FC4_PROV_SIZE); 277 + mutex_lock(&fc_prov_mutex); 278 + if (prov->recv) 279 + rcu_assign_pointer(fc_passive_prov[type], NULL); 280 + else 281 + rcu_assign_pointer(fc_active_prov[type], NULL); 282 + mutex_unlock(&fc_prov_mutex); 283 + synchronize_rcu(); 284 + } 285 + EXPORT_SYMBOL(fc_fc4_deregister_provider); 286 + 287 + /** 288 + * fc_fc4_add_lport() - add new local port to list and run notifiers. 
289 + * @lport: The new local port. 290 + */ 291 + void fc_fc4_add_lport(struct fc_lport *lport) 292 + { 293 + mutex_lock(&fc_prov_mutex); 294 + list_add_tail(&lport->lport_list, &fc_local_ports); 295 + blocking_notifier_call_chain(&fc_lport_notifier_head, 296 + FC_LPORT_EV_ADD, lport); 297 + mutex_unlock(&fc_prov_mutex); 298 + } 299 + 300 + /** 301 + * fc_fc4_del_lport() - remove local port from list and run notifiers. 302 + * @lport: The new local port. 303 + */ 304 + void fc_fc4_del_lport(struct fc_lport *lport) 305 + { 306 + mutex_lock(&fc_prov_mutex); 307 + list_del(&lport->lport_list); 308 + blocking_notifier_call_chain(&fc_lport_notifier_head, 309 + FC_LPORT_EV_DEL, lport); 310 + mutex_unlock(&fc_prov_mutex); 311 + }
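
Note: to show how the new provider table is meant to be consumed, here is a sketch of a hypothetical FC-4 target module registering a passive (recv-capable) provider. fc_fc4_register_provider() and struct fc4_prov come from the hunks above; the exact fc4_prov layout is assumed and everything demo_* is invented:

#include <linux/module.h>
#include <scsi/libfc.h>

static void demo_recv(struct fc_lport *lport, struct fc_frame *fp)
{
	/* unsolicited FCP frame for this lport; just drop it here */
	fc_frame_free(fp);
}

static struct fc4_prov demo_prov = {
	.module	= THIS_MODULE,
	.recv	= demo_recv,	/* non-NULL recv => passive table */
};

static int __init demo_init(void)
{
	/* fails with -EBUSY if FC_TYPE_FCP already has a passive provider */
	return fc_fc4_register_provider(FC_TYPE_FCP, &demo_prov);
}

static void __exit demo_exit(void)
{
	fc_fc4_deregister_provider(FC_TYPE_FCP, &demo_prov);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
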
+14
drivers/scsi/libfc/fc_libfc.h
··· 94 94 (lport)->host->host_no, ##args)) 95 95 96 96 /* 97 + * FC-4 Providers. 98 + */ 99 + extern struct fc4_prov *fc_active_prov[]; /* providers without recv */ 100 + extern struct fc4_prov *fc_passive_prov[]; /* providers with recv */ 101 + extern struct mutex fc_prov_mutex; /* lock over table changes */ 102 + 103 + extern struct fc4_prov fc_rport_t0_prov; /* type 0 provider */ 104 + extern struct fc4_prov fc_lport_els_prov; /* ELS provider */ 105 + extern struct fc4_prov fc_rport_fcp_init; /* FCP initiator provider */ 106 + 107 + /* 97 108 * Set up direct-data placement for this I/O request 98 109 */ 99 110 void fc_fcp_ddp_setup(struct fc_fcp_pkt *fsp, u16 xid); ··· 123 112 * Internal libfc functions 124 113 */ 125 114 const char *fc_els_resp_type(struct fc_frame *); 115 + extern void fc_fc4_add_lport(struct fc_lport *); 116 + extern void fc_fc4_del_lport(struct fc_lport *); 117 + extern void fc_fc4_conf_lport_params(struct fc_lport *, enum fc_fh_type); 126 118 127 119 /* 128 120 * Copies a buffer into an sg list
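
Note: the exported fc_lport_notifier_head pairs with these tables, letting a provider watch local ports come and go. A sketch of a listener; the callback body is invented, and FC_LPORT_EV_ADD/FC_LPORT_EV_DEL are assumed to be the events raised by fc_fc4_add_lport()/fc_fc4_del_lport() above:

#include <linux/notifier.h>
#include <scsi/libfc.h>

static int demo_lport_event(struct notifier_block *nb,
			    unsigned long event, void *arg)
{
	struct fc_lport *lport = arg;

	switch (event) {
	case FC_LPORT_EV_ADD:
		/* e.g. allocate per-lport target state for lport */
		break;
	case FC_LPORT_EV_DEL:
		/* and release it here */
		break;
	}
	(void)lport;
	return NOTIFY_OK;
}

static struct notifier_block demo_nb = {
	.notifier_call = demo_lport_event,
};

static int __init demo_watch_init(void)
{
	return blocking_notifier_chain_register(&fc_lport_notifier_head,
						&demo_nb);
}
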
+60 -9
drivers/scsi/libfc/fc_lport.c
··· 633 633 lport->tt.fcp_abort_io(lport);
 634 634 lport->tt.disc_stop_final(lport);
 635 635 lport->tt.exch_mgr_reset(lport, 0, 0);
 636 + fc_fc4_del_lport(lport);
 636 637 return 0;
 637 638 }
 638 639 EXPORT_SYMBOL(fc_lport_destroy);
··· 850 849 }
 851 850 
 852 851 /**
 853 - * fc_lport_recv_req() - The generic lport request handler
 852 + * fc_lport_recv_els_req() - The generic lport ELS request handler
 854 853 * @lport: The local port that received the request
 855 854 * @fp: The request frame
 856 855 *
··· 860 859 * Locking Note: This function should not be called with the lport
 861 860 * lock held because it will grab the lock.
 862 861 */
 863 - static void fc_lport_recv_req(struct fc_lport *lport, struct fc_frame *fp)
 862 + static void fc_lport_recv_els_req(struct fc_lport *lport,
 863 + struct fc_frame *fp)
 864 864 {
 865 - struct fc_frame_header *fh = fc_frame_header_get(fp);
 866 865 void (*recv)(struct fc_lport *, struct fc_frame *);
 867 866 
 868 867 mutex_lock(&lport->lp_mutex);
··· 874 873 */
 875 874 if (!lport->link_up)
 876 875 fc_frame_free(fp);
 877 - else if (fh->fh_type == FC_TYPE_ELS &&
 878 - fh->fh_r_ctl == FC_RCTL_ELS_REQ) {
 876 + else {
 879 877 /*
 880 878 * Check opcode.
 881 879 */
··· 903 903 }
 904 904 
 905 905 recv(lport, fp);
 906 - } else {
 907 - FC_LPORT_DBG(lport, "dropping invalid frame (eof %x)\n",
 908 - fr_eof(fp));
 909 - fc_frame_free(fp);
 910 906 }
 911 907 mutex_unlock(&lport->lp_mutex);
 908 + }
 909 + 
 910 + static int fc_lport_els_prli(struct fc_rport_priv *rdata, u32 spp_len,
 911 + const struct fc_els_spp *spp_in,
 912 + struct fc_els_spp *spp_out)
 913 + {
 914 + return FC_SPP_RESP_INVL;
 915 + }
 916 + 
 917 + struct fc4_prov fc_lport_els_prov = {
 918 + .prli = fc_lport_els_prli,
 919 + .recv = fc_lport_recv_els_req,
 920 + };
 921 + 
 922 + /**
 923 + * fc_lport_recv_req() - The generic lport request handler
 924 + * @lport: The lport that received the request
 925 + * @fp: The frame the request is in
 926 + *
 927 + * Locking Note: This function should not be called with the lport
 928 + * lock held because it may grab the lock.
 929 + */
 930 + static void fc_lport_recv_req(struct fc_lport *lport,
 931 + struct fc_frame *fp)
 932 + {
 933 + struct fc_frame_header *fh = fc_frame_header_get(fp);
 934 + struct fc_seq *sp = fr_seq(fp);
 935 + struct fc4_prov *prov;
 936 + 
 937 + /*
 938 + * Use RCU read lock and module_lock to be sure module doesn't
 939 + * deregister and get unloaded while we're calling it.
 940 + * try_module_get() is inlined and accepts a NULL parameter.
 941 + * Only ELSes and FCP target ops should come through here.
 942 + * The locking is unfortunate, and a better scheme is being sought. 
943 + */ 944 + 945 + rcu_read_lock(); 946 + if (fh->fh_type >= FC_FC4_PROV_SIZE) 947 + goto drop; 948 + prov = rcu_dereference(fc_passive_prov[fh->fh_type]); 949 + if (!prov || !try_module_get(prov->module)) 950 + goto drop; 951 + rcu_read_unlock(); 952 + prov->recv(lport, fp); 953 + module_put(prov->module); 954 + return; 955 + drop: 956 + rcu_read_unlock(); 957 + FC_LPORT_DBG(lport, "dropping unexpected frame type %x\n", fh->fh_type); 958 + fc_frame_free(fp); 959 + lport->tt.exch_done(sp); 912 960 } 913 961 914 962 /** ··· 1590 1542 */ 1591 1543 int fc_lport_config(struct fc_lport *lport) 1592 1544 { 1545 + INIT_LIST_HEAD(&lport->ema_list); 1593 1546 INIT_DELAYED_WORK(&lport->retry_work, fc_lport_timeout); 1594 1547 mutex_init(&lport->lp_mutex); 1595 1548 ··· 1598 1549 1599 1550 fc_lport_add_fc4_type(lport, FC_TYPE_FCP); 1600 1551 fc_lport_add_fc4_type(lport, FC_TYPE_CT); 1552 + fc_fc4_conf_lport_params(lport, FC_TYPE_FCP); 1601 1553 1602 1554 return 0; 1603 1555 } ··· 1636 1586 fc_host_supported_speeds(lport->host) |= FC_PORTSPEED_1GBIT; 1637 1587 if (lport->link_supported_speeds & FC_PORTSPEED_10GBIT) 1638 1588 fc_host_supported_speeds(lport->host) |= FC_PORTSPEED_10GBIT; 1589 + fc_fc4_add_lport(lport); 1639 1590 1640 1591 return 0; 1641 1592 }
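
The exported fc_lport_notifier_head gives other modules a hook into local port arrival and removal; a minimal listener sketch follows, assuming only the standard notifier-block idiom (my_lport_event and my_lport_nb are hypothetical names).

    #include <linux/notifier.h>
    #include <scsi/libfc.h>

    static int my_lport_event(struct notifier_block *nb, unsigned long event,
                              void *arg)
    {
            struct fc_lport *lport = arg;

            switch (event) {
            case FC_LPORT_EV_ADD:   /* raised from fc_fc4_add_lport() */
                    pr_debug("lport %p added\n", lport);
                    break;
            case FC_LPORT_EV_DEL:   /* raised from fc_fc4_del_lport() */
                    pr_debug("lport %p going away\n", lport);
                    break;
            }
            return NOTIFY_OK;
    }

    static struct notifier_block my_lport_nb = {
            .notifier_call = my_lport_event,
    };

    static int __init my_listener_init(void)
    {
            /* ports that already exist can then be walked with fc_lport_iterate() */
            return blocking_notifier_chain_register(&fc_lport_notifier_head,
                                                    &my_lport_nb);
    }
    module_init(my_listener_init);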
+2 -8
drivers/scsi/libfc/fc_npiv.c
··· 37 37 38 38 vn_port = libfc_host_alloc(shost->hostt, privsize); 39 39 if (!vn_port) 40 - goto err_out; 41 - if (fc_exch_mgr_list_clone(n_port, vn_port)) 42 - goto err_put; 40 + return vn_port; 43 41 44 42 vn_port->vport = vport; 45 43 vport->dd_data = vn_port; ··· 47 49 mutex_unlock(&n_port->lp_mutex); 48 50 49 51 return vn_port; 50 - 51 - err_put: 52 - scsi_host_put(vn_port->host); 53 - err_out: 54 - return NULL; 55 52 } 56 53 EXPORT_SYMBOL(libfc_vport_create); 57 54 ··· 79 86 80 87 return lport; 81 88 } 89 + EXPORT_SYMBOL(fc_vport_id_lookup); 82 90 83 91 /* 84 92 * When setting the link state of vports during an lport state change, it's
+154 -35
drivers/scsi/libfc/fc_rport.c
··· 58 58 59 59 #include "fc_libfc.h" 60 60 61 - struct workqueue_struct *rport_event_queue; 61 + static struct workqueue_struct *rport_event_queue; 62 62 63 63 static void fc_rport_enter_flogi(struct fc_rport_priv *); 64 64 static void fc_rport_enter_plogi(struct fc_rport_priv *); ··· 145 145 rdata->maxframe_size = FC_MIN_MAX_PAYLOAD; 146 146 INIT_DELAYED_WORK(&rdata->retry_work, fc_rport_timeout); 147 147 INIT_WORK(&rdata->event_work, fc_rport_work); 148 - if (port_id != FC_FID_DIR_SERV) 148 + if (port_id != FC_FID_DIR_SERV) { 149 + rdata->lld_event_callback = lport->tt.rport_event_callback; 149 150 list_add_rcu(&rdata->peers, &lport->disc.rports); 151 + } 150 152 return rdata; 151 153 } 152 154 ··· 259 257 struct fc_rport_operations *rport_ops; 260 258 struct fc_rport_identifiers ids; 261 259 struct fc_rport *rport; 260 + struct fc4_prov *prov; 261 + u8 type; 262 262 263 263 mutex_lock(&rdata->rp_mutex); 264 264 event = rdata->event; ··· 304 300 FC_RPORT_DBG(rdata, "callback ev %d\n", event); 305 301 rport_ops->event_callback(lport, rdata, event); 306 302 } 303 + if (rdata->lld_event_callback) { 304 + FC_RPORT_DBG(rdata, "lld callback ev %d\n", event); 305 + rdata->lld_event_callback(lport, rdata, event); 306 + } 307 307 kref_put(&rdata->kref, lport->tt.rport_destroy); 308 308 break; 309 309 310 310 case RPORT_EV_FAILED: 311 311 case RPORT_EV_LOGO: 312 312 case RPORT_EV_STOP: 313 + if (rdata->prli_count) { 314 + mutex_lock(&fc_prov_mutex); 315 + for (type = 1; type < FC_FC4_PROV_SIZE; type++) { 316 + prov = fc_passive_prov[type]; 317 + if (prov && prov->prlo) 318 + prov->prlo(rdata); 319 + } 320 + mutex_unlock(&fc_prov_mutex); 321 + } 313 322 port_id = rdata->ids.port_id; 314 323 mutex_unlock(&rdata->rp_mutex); 315 324 316 325 if (rport_ops && rport_ops->event_callback) { 317 326 FC_RPORT_DBG(rdata, "callback ev %d\n", event); 318 327 rport_ops->event_callback(lport, rdata, event); 328 + } 329 + if (rdata->lld_event_callback) { 330 + FC_RPORT_DBG(rdata, "lld callback ev %d\n", event); 331 + rdata->lld_event_callback(lport, rdata, event); 319 332 } 320 333 cancel_delayed_work_sync(&rdata->retry_work); 321 334 ··· 357 336 if (port_id == FC_FID_DIR_SERV) { 358 337 rdata->event = RPORT_EV_NONE; 359 338 mutex_unlock(&rdata->rp_mutex); 339 + kref_put(&rdata->kref, lport->tt.rport_destroy); 360 340 } else if ((rdata->flags & FC_RP_STARTED) && 361 341 rdata->major_retries < 362 342 lport->max_rport_retry_count) { ··· 597 575 598 576 /* make sure this isn't an FC_EX_CLOSED error, never retry those */ 599 577 if (PTR_ERR(fp) == -FC_EX_CLOSED) 600 - return fc_rport_error(rdata, fp); 578 + goto out; 601 579 602 580 if (rdata->retries < rdata->local_port->max_rport_retry_count) { 603 581 FC_RPORT_DBG(rdata, "Error %ld in state %s, retrying\n", ··· 610 588 return; 611 589 } 612 590 613 - return fc_rport_error(rdata, fp); 591 + out: 592 + fc_rport_error(rdata, fp); 614 593 } 615 594 616 595 /** ··· 901 878 rdata->ids.port_name = get_unaligned_be64(&plp->fl_wwpn); 902 879 rdata->ids.node_name = get_unaligned_be64(&plp->fl_wwnn); 903 880 881 + /* save plogi response sp_features for further reference */ 882 + rdata->sp_features = ntohs(plp->fl_csp.sp_features); 883 + 904 884 if (lport->point_to_multipoint) 905 885 fc_rport_login_complete(rdata, fp); 906 886 csp_seq = ntohs(plp->fl_csp.sp_tot_seq); ··· 975 949 struct fc_els_prli prli; 976 950 struct fc_els_spp spp; 977 951 } *pp; 952 + struct fc_els_spp temp_spp; 953 + struct fc4_prov *prov; 978 954 u32 roles = FC_RPORT_ROLE_UNKNOWN; 979 955 u32 fcp_parm 
= 0; 980 956 u8 op; ··· 1011 983 resp_code = (pp->spp.spp_flags & FC_SPP_RESP_MASK); 1012 984 FC_RPORT_DBG(rdata, "PRLI spp_flags = 0x%x\n", 1013 985 pp->spp.spp_flags); 986 + rdata->spp_type = pp->spp.spp_type; 1014 987 if (resp_code != FC_SPP_RESP_ACK) { 1015 988 if (resp_code == FC_SPP_RESP_CONF) 1016 989 fc_rport_error(rdata, fp); ··· 1025 996 fcp_parm = ntohl(pp->spp.spp_params); 1026 997 if (fcp_parm & FCP_SPPF_RETRY) 1027 998 rdata->flags |= FC_RP_FLAGS_RETRY; 999 + if (fcp_parm & FCP_SPPF_CONF_COMPL) 1000 + rdata->flags |= FC_RP_FLAGS_CONF_REQ; 1001 + 1002 + prov = fc_passive_prov[FC_TYPE_FCP]; 1003 + if (prov) { 1004 + memset(&temp_spp, 0, sizeof(temp_spp)); 1005 + prov->prli(rdata, pp->prli.prli_spp_len, 1006 + &pp->spp, &temp_spp); 1007 + } 1028 1008 1029 1009 rdata->supported_classes = FC_COS_CLASS3; 1030 1010 if (fcp_parm & FCP_SPPF_INIT_FCN) ··· 1071 1033 struct fc_els_spp spp; 1072 1034 } *pp; 1073 1035 struct fc_frame *fp; 1036 + struct fc4_prov *prov; 1074 1037 1075 1038 /* 1076 1039 * If the rport is one of the well known addresses ··· 1093 1054 return; 1094 1055 } 1095 1056 1096 - if (!lport->tt.elsct_send(lport, rdata->ids.port_id, fp, ELS_PRLI, 1097 - fc_rport_prli_resp, rdata, 1098 - 2 * lport->r_a_tov)) 1057 + fc_prli_fill(lport, fp); 1058 + 1059 + prov = fc_passive_prov[FC_TYPE_FCP]; 1060 + if (prov) { 1061 + pp = fc_frame_payload_get(fp, sizeof(*pp)); 1062 + prov->prli(rdata, sizeof(pp->spp), NULL, &pp->spp); 1063 + } 1064 + 1065 + fc_fill_fc_hdr(fp, FC_RCTL_ELS_REQ, rdata->ids.port_id, 1066 + fc_host_port_id(lport->host), FC_TYPE_ELS, 1067 + FC_FC_FIRST_SEQ | FC_FC_END_SEQ | FC_FC_SEQ_INIT, 0); 1068 + 1069 + if (!lport->tt.exch_seq_send(lport, fp, fc_rport_prli_resp, 1070 + NULL, rdata, 2 * lport->r_a_tov)) 1099 1071 fc_rport_error_retry(rdata, NULL); 1100 1072 else 1101 1073 kref_get(&rdata->kref); ··· 1692 1642 unsigned int len; 1693 1643 unsigned int plen; 1694 1644 enum fc_els_spp_resp resp; 1645 + enum fc_els_spp_resp passive; 1695 1646 struct fc_seq_els_data rjt_data; 1696 - u32 fcp_parm; 1697 - u32 roles = FC_RPORT_ROLE_UNKNOWN; 1647 + struct fc4_prov *prov; 1698 1648 1699 1649 FC_RPORT_DBG(rdata, "Received PRLI request while in state %s\n", 1700 1650 fc_rport_state(rdata)); ··· 1728 1678 pp->prli.prli_len = htons(len); 1729 1679 len -= sizeof(struct fc_els_prli); 1730 1680 1731 - /* reinitialize remote port roles */ 1732 - rdata->ids.roles = FC_RPORT_ROLE_UNKNOWN; 1733 - 1734 1681 /* 1735 1682 * Go through all the service parameter pages and build 1736 1683 * response. If plen indicates longer SPP than standard, 1737 1684 * use that. The entire response has been pre-cleared above. 
1738 1685 */ 1739 1686 spp = &pp->spp; 1687 + mutex_lock(&fc_prov_mutex); 1740 1688 while (len >= plen) { 1689 + rdata->spp_type = rspp->spp_type; 1741 1690 spp->spp_type = rspp->spp_type; 1742 1691 spp->spp_type_ext = rspp->spp_type_ext; 1743 - spp->spp_flags = rspp->spp_flags & FC_SPP_EST_IMG_PAIR; 1744 - resp = FC_SPP_RESP_ACK; 1692 + resp = 0; 1745 1693 1746 - switch (rspp->spp_type) { 1747 - case 0: /* common to all FC-4 types */ 1748 - break; 1749 - case FC_TYPE_FCP: 1750 - fcp_parm = ntohl(rspp->spp_params); 1751 - if (fcp_parm & FCP_SPPF_RETRY) 1752 - rdata->flags |= FC_RP_FLAGS_RETRY; 1753 - rdata->supported_classes = FC_COS_CLASS3; 1754 - if (fcp_parm & FCP_SPPF_INIT_FCN) 1755 - roles |= FC_RPORT_ROLE_FCP_INITIATOR; 1756 - if (fcp_parm & FCP_SPPF_TARG_FCN) 1757 - roles |= FC_RPORT_ROLE_FCP_TARGET; 1758 - rdata->ids.roles = roles; 1759 - 1760 - spp->spp_params = htonl(lport->service_params); 1761 - break; 1762 - default: 1763 - resp = FC_SPP_RESP_INVL; 1764 - break; 1694 + if (rspp->spp_type < FC_FC4_PROV_SIZE) { 1695 + prov = fc_active_prov[rspp->spp_type]; 1696 + if (prov) 1697 + resp = prov->prli(rdata, plen, rspp, spp); 1698 + prov = fc_passive_prov[rspp->spp_type]; 1699 + if (prov) { 1700 + passive = prov->prli(rdata, plen, rspp, spp); 1701 + if (!resp || passive == FC_SPP_RESP_ACK) 1702 + resp = passive; 1703 + } 1704 + } 1705 + if (!resp) { 1706 + if (spp->spp_flags & FC_SPP_EST_IMG_PAIR) 1707 + resp |= FC_SPP_RESP_CONF; 1708 + else 1709 + resp |= FC_SPP_RESP_INVL; 1765 1710 } 1766 1711 spp->spp_flags |= resp; 1767 1712 len -= plen; 1768 1713 rspp = (struct fc_els_spp *)((char *)rspp + plen); 1769 1714 spp = (struct fc_els_spp *)((char *)spp + plen); 1770 1715 } 1716 + mutex_unlock(&fc_prov_mutex); 1771 1717 1772 1718 /* 1773 1719 * Send LS_ACC. If this fails, the originator should retry. ··· 1933 1887 EXPORT_SYMBOL(fc_rport_init); 1934 1888 1935 1889 /** 1890 + * fc_rport_fcp_prli() - Handle incoming PRLI for the FCP initiator. 1891 + * @rdata: remote port private 1892 + * @spp_len: service parameter page length 1893 + * @rspp: received service parameter page 1894 + * @spp: response service parameter page 1895 + * 1896 + * Returns the value for the response code to be placed in spp_flags; 1897 + * Returns 0 if not an initiator. 1898 + */ 1899 + static int fc_rport_fcp_prli(struct fc_rport_priv *rdata, u32 spp_len, 1900 + const struct fc_els_spp *rspp, 1901 + struct fc_els_spp *spp) 1902 + { 1903 + struct fc_lport *lport = rdata->local_port; 1904 + u32 fcp_parm; 1905 + 1906 + fcp_parm = ntohl(rspp->spp_params); 1907 + rdata->ids.roles = FC_RPORT_ROLE_UNKNOWN; 1908 + if (fcp_parm & FCP_SPPF_INIT_FCN) 1909 + rdata->ids.roles |= FC_RPORT_ROLE_FCP_INITIATOR; 1910 + if (fcp_parm & FCP_SPPF_TARG_FCN) 1911 + rdata->ids.roles |= FC_RPORT_ROLE_FCP_TARGET; 1912 + if (fcp_parm & FCP_SPPF_RETRY) 1913 + rdata->flags |= FC_RP_FLAGS_RETRY; 1914 + rdata->supported_classes = FC_COS_CLASS3; 1915 + 1916 + if (!(lport->service_params & FC_RPORT_ROLE_FCP_INITIATOR)) 1917 + return 0; 1918 + 1919 + spp->spp_flags |= rspp->spp_flags & FC_SPP_EST_IMG_PAIR; 1920 + 1921 + /* 1922 + * OR in our service parameters with other providers (target), if any. 1923 + */ 1924 + fcp_parm = ntohl(spp->spp_params); 1925 + spp->spp_params = htonl(fcp_parm | lport->service_params); 1926 + return FC_SPP_RESP_ACK; 1927 + } 1928 + 1929 + /* 1930 + * FC-4 provider ops for FCP initiator. 
1931 + */ 1932 + struct fc4_prov fc_rport_fcp_init = { 1933 + .prli = fc_rport_fcp_prli, 1934 + }; 1935 + 1936 + /** 1937 + * fc_rport_t0_prli() - Handle incoming PRLI parameters for type 0 1938 + * @rdata: remote port private 1939 + * @spp_len: service parameter page length 1940 + * @rspp: received service parameter page 1941 + * @spp: response service parameter page 1942 + */ 1943 + static int fc_rport_t0_prli(struct fc_rport_priv *rdata, u32 spp_len, 1944 + const struct fc_els_spp *rspp, 1945 + struct fc_els_spp *spp) 1946 + { 1947 + if (rspp->spp_flags & FC_SPP_EST_IMG_PAIR) 1948 + return FC_SPP_RESP_INVL; 1949 + return FC_SPP_RESP_ACK; 1950 + } 1951 + 1952 + /* 1953 + * FC-4 provider ops for type 0 service parameters. 1954 + * 1955 + * This handles the special case of type 0 which is always successful 1956 + * but doesn't do anything otherwise. 1957 + */ 1958 + struct fc4_prov fc_rport_t0_prov = { 1959 + .prli = fc_rport_t0_prli, 1960 + }; 1961 + 1962 + /** 1936 1963 * fc_setup_rport() - Initialize the rport_event_queue 1937 1964 */ 1938 - int fc_setup_rport() 1965 + int fc_setup_rport(void) 1939 1966 { 1940 1967 rport_event_queue = create_singlethread_workqueue("fc_rport_eq"); 1941 1968 if (!rport_event_queue) ··· 2019 1900 /** 2020 1901 * fc_destroy_rport() - Destroy the rport_event_queue 2021 1902 */ 2022 - void fc_destroy_rport() 1903 + void fc_destroy_rport(void) 2023 1904 { 2024 1905 destroy_workqueue(rport_event_queue); 2025 1906 }
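
fc_rport_fcp_prli() above contributes the initiator half of a PRLI response; a target provider registered in fc_passive_prov would contribute its half through the same hook. A hypothetical sketch (my_tgt_prli is not part of this merge):

    static int my_tgt_prli(struct fc_rport_priv *rdata, u32 spp_len,
                           const struct fc_els_spp *rspp, struct fc_els_spp *spp)
    {
            u32 fcp_parm = ntohl(spp->spp_params);

            /*
             * rspp is NULL when libfc builds an outgoing PRLI (see
             * fc_rport_enter_prli()); otherwise check the peer's page.
             */
            if (rspp && !(ntohl(rspp->spp_params) & FCP_SPPF_INIT_FCN))
                    return FC_SPP_RESP_INVL;

            /* OR the target function bit into the shared response page */
            spp->spp_params = htonl(fcp_parm | FCP_SPPF_TARG_FCN);
            return FC_SPP_RESP_ACK;
    }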
+41 -3
drivers/scsi/libiscsi.c
··· 3352 3352 } 3353 3353 EXPORT_SYMBOL_GPL(iscsi_session_get_param); 3354 3354 3355 + int iscsi_conn_get_addr_param(struct sockaddr_storage *addr, 3356 + enum iscsi_param param, char *buf) 3357 + { 3358 + struct sockaddr_in6 *sin6 = NULL; 3359 + struct sockaddr_in *sin = NULL; 3360 + int len; 3361 + 3362 + switch (addr->ss_family) { 3363 + case AF_INET: 3364 + sin = (struct sockaddr_in *)addr; 3365 + break; 3366 + case AF_INET6: 3367 + sin6 = (struct sockaddr_in6 *)addr; 3368 + break; 3369 + default: 3370 + return -EINVAL; 3371 + } 3372 + 3373 + switch (param) { 3374 + case ISCSI_PARAM_CONN_ADDRESS: 3375 + case ISCSI_HOST_PARAM_IPADDRESS: 3376 + if (sin) 3377 + len = sprintf(buf, "%pI4\n", &sin->sin_addr.s_addr); 3378 + else 3379 + len = sprintf(buf, "%pI6\n", &sin6->sin6_addr); 3380 + break; 3381 + case ISCSI_PARAM_CONN_PORT: 3382 + if (sin) 3383 + len = sprintf(buf, "%hu\n", be16_to_cpu(sin->sin_port)); 3384 + else 3385 + len = sprintf(buf, "%hu\n", 3386 + be16_to_cpu(sin6->sin6_port)); 3387 + break; 3388 + default: 3389 + return -EINVAL; 3390 + } 3391 + 3392 + return len; 3393 + } 3394 + EXPORT_SYMBOL_GPL(iscsi_conn_get_addr_param); 3395 + 3355 3396 int iscsi_conn_get_param(struct iscsi_cls_conn *cls_conn, 3356 3397 enum iscsi_param param, char *buf) 3357 3398 { ··· 3456 3415 break; 3457 3416 case ISCSI_HOST_PARAM_INITIATOR_NAME: 3458 3417 len = sprintf(buf, "%s\n", ihost->initiatorname); 3459 - break; 3460 - case ISCSI_HOST_PARAM_IPADDRESS: 3461 - len = sprintf(buf, "%s\n", ihost->local_address); 3462 3418 break; 3463 3419 default: 3464 3420 return -ENOSYS;
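
With iscsi_conn_get_addr_param() exported, an LLD that keeps a sockaddr_storage per connection no longer has to format addresses by hand; a sketch under that assumption (struct my_conn and its dst_addr field are hypothetical):

    #include <linux/socket.h>
    #include <scsi/libiscsi.h>

    struct my_conn {
            struct sockaddr_storage dst_addr;       /* filled in at connect time */
    };

    static int my_get_conn_param(struct my_conn *conn, enum iscsi_param param,
                                 char *buf)
    {
            switch (param) {
            case ISCSI_PARAM_CONN_ADDRESS:
            case ISCSI_PARAM_CONN_PORT:
                    /* formats "%pI4"/"%pI6" or the port, returns the length */
                    return iscsi_conn_get_addr_param(&conn->dst_addr, param, buf);
            default:
                    return -ENOSYS;
            }
    }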
-8
drivers/scsi/libsas/Kconfig
··· 46 46 Allows sas hosts to receive SMP frames. Selecting this 47 47 option builds an SMP interpreter into libsas. Say 48 48 N here if you want to save the few kb this consumes. 49 - 50 - config SCSI_SAS_LIBSAS_DEBUG 51 - bool "Compile the SAS Domain Transport Attributes in debug mode" 52 - default y 53 - depends on SCSI_SAS_LIBSAS 54 - help 55 - Compiles the SAS Layer in debug mode. In debug mode, the 56 - SAS Layer prints diagnostic and debug messages.
-4
drivers/scsi/libsas/Makefile
··· 21 21 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 22 22 # USA 23 23 24 - ifeq ($(CONFIG_SCSI_SAS_LIBSAS_DEBUG),y) 25 - EXTRA_CFLAGS += -DSAS_DEBUG 26 - endif 27 - 28 24 obj-$(CONFIG_SCSI_SAS_LIBSAS) += libsas.o 29 25 libsas-y += sas_init.o \ 30 26 sas_phy.o \
+22 -58
drivers/scsi/libsas/sas_ata.c
··· 71 71 case SAS_SG_ERR: 72 72 return AC_ERR_INVALID; 73 73 74 - case SAM_STAT_CHECK_CONDITION: 75 74 case SAS_OPEN_TO: 76 75 case SAS_OPEN_REJECT: 77 76 SAS_DPRINTK("%s: Saw error %d. What to do?\n", 78 77 __func__, ts->stat); 79 78 return AC_ERR_OTHER; 80 79 80 + case SAM_STAT_CHECK_CONDITION: 81 81 case SAS_ABORTED_TASK: 82 82 return AC_ERR_DEV; 83 83 ··· 107 107 sas_ha = dev->port->ha; 108 108 109 109 spin_lock_irqsave(dev->sata_dev.ap->lock, flags); 110 - if (stat->stat == SAS_PROTO_RESPONSE || stat->stat == SAM_STAT_GOOD) { 110 + if (stat->stat == SAS_PROTO_RESPONSE || stat->stat == SAM_STAT_GOOD || 111 + ((stat->stat == SAM_STAT_CHECK_CONDITION && 112 + dev->sata_dev.command_set == ATAPI_COMMAND_SET))) { 111 113 ata_tf_from_fis(resp->ending_fis, &dev->sata_dev.tf); 112 114 qc->err_mask |= ac_err_mask(dev->sata_dev.tf.command); 113 115 dev->sata_dev.sstatus = resp->sstatus; 114 116 dev->sata_dev.serror = resp->serror; 115 117 dev->sata_dev.scontrol = resp->scontrol; 116 - } else if (stat->stat != SAM_STAT_GOOD) { 118 + } else { 117 119 ac = sas_to_ata_err(stat); 118 120 if (ac) { 119 121 SAS_DPRINTK("%s: SAS error %x\n", __func__, ··· 307 305 } 308 306 } 309 307 310 - static int sas_ata_scr_write(struct ata_link *link, unsigned int sc_reg_in, 311 - u32 val) 312 - { 313 - struct domain_device *dev = link->ap->private_data; 314 - 315 - SAS_DPRINTK("STUB %s\n", __func__); 316 - switch (sc_reg_in) { 317 - case SCR_STATUS: 318 - dev->sata_dev.sstatus = val; 319 - break; 320 - case SCR_CONTROL: 321 - dev->sata_dev.scontrol = val; 322 - break; 323 - case SCR_ERROR: 324 - dev->sata_dev.serror = val; 325 - break; 326 - case SCR_ACTIVE: 327 - dev->sata_dev.ap->link.sactive = val; 328 - break; 329 - default: 330 - return -EINVAL; 331 - } 332 - return 0; 333 - } 334 - 335 - static int sas_ata_scr_read(struct ata_link *link, unsigned int sc_reg_in, 336 - u32 *val) 337 - { 338 - struct domain_device *dev = link->ap->private_data; 339 - 340 - SAS_DPRINTK("STUB %s\n", __func__); 341 - switch (sc_reg_in) { 342 - case SCR_STATUS: 343 - *val = dev->sata_dev.sstatus; 344 - return 0; 345 - case SCR_CONTROL: 346 - *val = dev->sata_dev.scontrol; 347 - return 0; 348 - case SCR_ERROR: 349 - *val = dev->sata_dev.serror; 350 - return 0; 351 - case SCR_ACTIVE: 352 - *val = dev->sata_dev.ap->link.sactive; 353 - return 0; 354 - default: 355 - return -EINVAL; 356 - } 357 - } 358 - 359 308 static struct ata_port_operations sas_sata_ops = { 360 309 .prereset = ata_std_prereset, 361 310 .softreset = NULL, ··· 320 367 .qc_fill_rtf = sas_ata_qc_fill_rtf, 321 368 .port_start = ata_sas_port_start, 322 369 .port_stop = ata_sas_port_stop, 323 - .scr_read = sas_ata_scr_read, 324 - .scr_write = sas_ata_scr_write 325 370 }; 326 371 327 372 static struct ata_port_info sata_port_info = { ··· 752 801 753 802 if (!dev_is_sata(ddev)) 754 803 continue; 755 - 804 + 756 805 ata_port_printk(ap, KERN_DEBUG, "sas eh calling libata port error handler"); 757 806 ata_scsi_port_error_handler(shost, ap); 758 807 } ··· 785 834 LIST_HEAD(sata_q); 786 835 787 836 ap = NULL; 788 - 837 + 789 838 list_for_each_entry_safe(cmd, n, work_q, eh_entry) { 790 839 struct domain_device *ddev = cmd_to_domain_dev(cmd); 791 840 792 841 if (!dev_is_sata(ddev) || TO_SAS_TASK(cmd)) 793 842 continue; 794 - if(ap && ap != ddev->sata_dev.ap) 843 + if (ap && ap != ddev->sata_dev.ap) 795 844 continue; 796 845 ap = ddev->sata_dev.ap; 797 846 rtn = 1; ··· 799 848 } 800 849 801 850 if (!list_empty(&sata_q)) { 802 - ata_port_printk(ap, KERN_DEBUG,"sas eh calling libata 
cmd error handler\n"); 851 + ata_port_printk(ap, KERN_DEBUG, "sas eh calling libata cmd error handler\n"); 803 852 ata_scsi_cmd_error_handler(shost, ap, &sata_q); 853 + /* 854 + * ata's error handler may leave the cmd on the list 855 + * so make sure they don't remain on a stack list 856 + * about to go out of scope. 857 + * 858 + * This looks strange, since the commands are 859 + * now part of no list, but the next error 860 + * action will be ata_port_error_handler() 861 + * which takes no list and sweeps them up 862 + * anyway from the ata tag array. 863 + */ 864 + while (!list_empty(&sata_q)) 865 + list_del_init(sata_q.next); 804 866 } 805 867 } while (ap); 806 868
-4
drivers/scsi/libsas/sas_dump.c
··· 24 24 25 25 #include "sas_dump.h" 26 26 27 - #ifdef SAS_DEBUG 28 - 29 27 static const char *sas_hae_str[] = { 30 28 [0] = "HAE_RESET", 31 29 }; ··· 70 72 SAS_DPRINTK("port%d: oob_mode:0x%x\n", port->id, port->oob_mode); 71 73 SAS_DPRINTK("port%d: num_phys:%d\n", port->id, port->num_phys); 72 74 } 73 - 74 - #endif /* SAS_DEBUG */
-12
drivers/scsi/libsas/sas_dump.h
··· 24 24 25 25 #include "sas_internal.h" 26 26 27 - #ifdef SAS_DEBUG 28 - 29 27 void sas_dprint_porte(int phyid, enum port_event pe); 30 28 void sas_dprint_phye(int phyid, enum phy_event pe); 31 29 void sas_dprint_hae(struct sas_ha_struct *sas_ha, enum ha_event he); 32 30 void sas_dump_port(struct asd_sas_port *port); 33 - 34 - #else /* SAS_DEBUG */ 35 - 36 - static inline void sas_dprint_porte(int phyid, enum port_event pe) { } 37 - static inline void sas_dprint_phye(int phyid, enum phy_event pe) { } 38 - static inline void sas_dprint_hae(struct sas_ha_struct *sas_ha, 39 - enum ha_event he) { } 40 - static inline void sas_dump_port(struct asd_sas_port *port) { } 41 - 42 - #endif /* SAS_DEBUG */
+5
drivers/scsi/libsas/sas_expander.c
··· 244 244 * dev to host FIS as described in section G.5 of 245 245 * sas-2 r 04b */ 246 246 dr = &((struct smp_resp *)disc_resp)->disc; 247 + if (memcmp(dev->sas_addr, dr->attached_sas_addr, 248 + SAS_ADDR_SIZE) == 0) { 249 + sas_printk("Found loopback topology, just ignore it!\n"); 250 + return 0; 251 + } 247 252 if (!(dr->attached_dev_type == 0 && 248 253 dr->attached_sata_dev)) 249 254 break;
+1 -5
drivers/scsi/libsas/sas_internal.h
··· 33 33 34 34 #define sas_printk(fmt, ...) printk(KERN_NOTICE "sas: " fmt, ## __VA_ARGS__) 35 35 36 - #ifdef SAS_DEBUG 37 - #define SAS_DPRINTK(fmt, ...) printk(KERN_NOTICE "sas: " fmt, ## __VA_ARGS__) 38 - #else 39 - #define SAS_DPRINTK(fmt, ...) 40 - #endif 36 + #define SAS_DPRINTK(fmt, ...) printk(KERN_DEBUG "sas: " fmt, ## __VA_ARGS__) 41 37 42 38 #define TO_SAS_TASK(_scsi_cmd) ((void *)(_scsi_cmd)->host_scribble) 43 39 #define ASSIGN_SAS_TASK(_sc, _t) do { (_sc)->host_scribble = (void *) _t; } while (0)
+1 -2
drivers/scsi/libsas/sas_scsi_host.c
··· 681 681 { 682 682 struct sas_task *task = TO_SAS_TASK(cmd); 683 683 unsigned long flags; 684 - enum blk_eh_timer_return rtn; 684 + enum blk_eh_timer_return rtn; 685 685 686 686 if (sas_ata_timed_out(cmd, task, &rtn)) 687 687 return rtn; 688 - 689 688 690 689 if (!task) { 691 690 cmd->request->timeout /= 2;
+13 -2
drivers/scsi/lpfc/lpfc.h
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2004-2010 Emulex. All rights reserved. * 4 + * Copyright (C) 2004-2011 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * Portions Copyright (C) 2004-2005 Christoph Hellwig * ··· 325 325 #define FC_VPORT_CVL_RCVD 0x400000 /* VLink failed due to CVL */ 326 326 #define FC_VFI_REGISTERED 0x800000 /* VFI is registered */ 327 327 #define FC_FDISC_COMPLETED 0x1000000/* FDISC completed */ 328 + #define FC_DISC_DELAYED 0x2000000/* Delay NPort discovery */ 328 329 329 330 uint32_t ct_flags; 330 331 #define FC_CT_RFF_ID 0x1 /* RFF_ID accepted by switch */ ··· 349 348 350 349 uint32_t fc_myDID; /* fibre channel S_ID */ 351 350 uint32_t fc_prevDID; /* previous fibre channel S_ID */ 351 + struct lpfc_name fabric_portname; 352 + struct lpfc_name fabric_nodename; 352 353 353 354 int32_t stopped; /* HBA has not been restarted since last ERATT */ 354 355 uint8_t fc_linkspeed; /* Link speed after last READ_LA */ ··· 375 372 #define WORKER_DISC_TMO 0x1 /* vport: Discovery timeout */ 376 373 #define WORKER_ELS_TMO 0x2 /* vport: ELS timeout */ 377 374 #define WORKER_FDMI_TMO 0x4 /* vport: FDMI timeout */ 375 + #define WORKER_DELAYED_DISC_TMO 0x8 /* vport: delayed discovery */ 378 376 379 377 #define WORKER_MBOX_TMO 0x100 /* hba: MBOX timeout */ 380 378 #define WORKER_HB_TMO 0x200 /* hba: Heart beat timeout */ ··· 386 382 387 383 struct timer_list fc_fdmitmo; 388 384 struct timer_list els_tmofunc; 385 + struct timer_list delayed_disc_tmo; 389 386 390 387 int unreg_vpi_cmpl; 391 388 ··· 553 548 #define LPFC_SLI3_CRP_ENABLED 0x08 554 549 #define LPFC_SLI3_BG_ENABLED 0x20 555 550 #define LPFC_SLI3_DSS_ENABLED 0x40 551 + #define LPFC_SLI4_PERFH_ENABLED 0x80 552 + #define LPFC_SLI4_PHWQ_ENABLED 0x100 556 553 uint32_t iocb_cmd_size; 557 554 uint32_t iocb_rsp_size; 558 555 ··· 662 655 #define LPFC_INITIALIZE_LINK 0 /* do normal init_link mbox */ 663 656 #define LPFC_DELAY_INIT_LINK 1 /* layered driver hold off */ 664 657 #define LPFC_DELAY_INIT_LINK_INDEFINITELY 2 /* wait, manual intervention */ 665 - 658 + uint32_t cfg_enable_dss; 666 659 lpfc_vpd_t vpd; /* vital product data */ 667 660 668 661 struct pci_dev *pcidev; ··· 799 792 struct dentry *debug_slow_ring_trc; 800 793 struct lpfc_debugfs_trc *slow_ring_trc; 801 794 atomic_t slow_ring_trc_cnt; 795 + /* iDiag debugfs sub-directory */ 796 + struct dentry *idiag_root; 797 + struct dentry *idiag_pci_cfg; 798 + struct dentry *idiag_que_info; 802 799 #endif 803 800 804 801 /* Used for deferred freeing of ELS data buffers */
+93 -30
drivers/scsi/lpfc/lpfc_attr.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2004-2009 Emulex. All rights reserved. * 4 + * Copyright (C) 2004-2011 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * Portions Copyright (C) 2004-2005 Christoph Hellwig * ··· 623 623 int status = 0; 624 624 int cnt = 0; 625 625 int i; 626 + int rc; 626 627 627 628 init_completion(&online_compl); 628 - lpfc_workq_post_event(phba, &status, &online_compl, 629 + rc = lpfc_workq_post_event(phba, &status, &online_compl, 629 630 LPFC_EVT_OFFLINE_PREP); 631 + if (rc == 0) 632 + return -ENOMEM; 633 + 630 634 wait_for_completion(&online_compl); 631 635 632 636 if (status != 0) ··· 656 652 } 657 653 658 654 init_completion(&online_compl); 659 - lpfc_workq_post_event(phba, &status, &online_compl, type); 655 + rc = lpfc_workq_post_event(phba, &status, &online_compl, type); 656 + if (rc == 0) 657 + return -ENOMEM; 658 + 660 659 wait_for_completion(&online_compl); 661 660 662 661 if (status != 0) ··· 678 671 * 679 672 * Notes: 680 673 * Assumes any error from lpfc_do_offline() will be negative. 674 + * Do not make this function static. 681 675 * 682 676 * Returns: 683 677 * lpfc_do_offline() return code if not zero ··· 690 682 { 691 683 struct completion online_compl; 692 684 int status = 0; 685 + int rc; 693 686 694 687 if (!phba->cfg_enable_hba_reset) 695 688 return -EIO; ··· 701 692 return status; 702 693 703 694 init_completion(&online_compl); 704 - lpfc_workq_post_event(phba, &status, &online_compl, 695 + rc = lpfc_workq_post_event(phba, &status, &online_compl, 705 696 LPFC_EVT_ONLINE); 697 + if (rc == 0) 698 + return -ENOMEM; 699 + 706 700 wait_for_completion(&online_compl); 707 701 708 702 if (status != 0) ··· 824 812 struct lpfc_hba *phba = vport->phba; 825 813 struct completion online_compl; 826 814 int status=0; 815 + int rc; 827 816 828 817 if (!phba->cfg_enable_hba_reset) 829 818 return -EACCES; 830 819 init_completion(&online_compl); 831 820 832 821 if(strncmp(buf, "online", sizeof("online") - 1) == 0) { 833 - lpfc_workq_post_event(phba, &status, &online_compl, 822 + rc = lpfc_workq_post_event(phba, &status, &online_compl, 834 823 LPFC_EVT_ONLINE); 824 + if (rc == 0) 825 + return -ENOMEM; 835 826 wait_for_completion(&online_compl); 836 827 } else if (strncmp(buf, "offline", sizeof("offline") - 1) == 0) 837 828 status = lpfc_do_offline(phba, LPFC_EVT_OFFLINE); ··· 1294 1279 } 1295 1280 1296 1281 /** 1282 + * lpfc_dss_show - Return the current state of dss and the configured state 1283 + * @dev: class converted to a Scsi_host structure. 1284 + * @attr: device attribute, not used. 1285 + * @buf: on return contains the formatted text. 1286 + * 1287 + * Returns: size of formatted string. 1288 + **/ 1289 + static ssize_t 1290 + lpfc_dss_show(struct device *dev, struct device_attribute *attr, 1291 + char *buf) 1292 + { 1293 + struct Scsi_Host *shost = class_to_shost(dev); 1294 + struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata; 1295 + struct lpfc_hba *phba = vport->phba; 1296 + 1297 + return snprintf(buf, PAGE_SIZE, "%s - %sOperational\n", 1298 + (phba->cfg_enable_dss) ? "Enabled" : "Disabled", 1299 + (phba->sli3_options & LPFC_SLI3_DSS_ENABLED) ? 
1300 + "" : "Not "); 1301 + } 1302 + 1303 + /** 1297 1304 * lpfc_param_show - Return a cfg attribute value in decimal 1298 1305 * 1299 1306 * Description: ··· 1634 1597 1635 1598 #define LPFC_ATTR(name, defval, minval, maxval, desc) \ 1636 1599 static uint lpfc_##name = defval;\ 1637 - module_param(lpfc_##name, uint, 0);\ 1600 + module_param(lpfc_##name, uint, S_IRUGO);\ 1638 1601 MODULE_PARM_DESC(lpfc_##name, desc);\ 1639 1602 lpfc_param_init(name, defval, minval, maxval) 1640 1603 1641 1604 #define LPFC_ATTR_R(name, defval, minval, maxval, desc) \ 1642 1605 static uint lpfc_##name = defval;\ 1643 - module_param(lpfc_##name, uint, 0);\ 1606 + module_param(lpfc_##name, uint, S_IRUGO);\ 1644 1607 MODULE_PARM_DESC(lpfc_##name, desc);\ 1645 1608 lpfc_param_show(name)\ 1646 1609 lpfc_param_init(name, defval, minval, maxval)\ ··· 1648 1611 1649 1612 #define LPFC_ATTR_RW(name, defval, minval, maxval, desc) \ 1650 1613 static uint lpfc_##name = defval;\ 1651 - module_param(lpfc_##name, uint, 0);\ 1614 + module_param(lpfc_##name, uint, S_IRUGO);\ 1652 1615 MODULE_PARM_DESC(lpfc_##name, desc);\ 1653 1616 lpfc_param_show(name)\ 1654 1617 lpfc_param_init(name, defval, minval, maxval)\ ··· 1659 1622 1660 1623 #define LPFC_ATTR_HEX_R(name, defval, minval, maxval, desc) \ 1661 1624 static uint lpfc_##name = defval;\ 1662 - module_param(lpfc_##name, uint, 0);\ 1625 + module_param(lpfc_##name, uint, S_IRUGO);\ 1663 1626 MODULE_PARM_DESC(lpfc_##name, desc);\ 1664 1627 lpfc_param_hex_show(name)\ 1665 1628 lpfc_param_init(name, defval, minval, maxval)\ ··· 1667 1630 1668 1631 #define LPFC_ATTR_HEX_RW(name, defval, minval, maxval, desc) \ 1669 1632 static uint lpfc_##name = defval;\ 1670 - module_param(lpfc_##name, uint, 0);\ 1633 + module_param(lpfc_##name, uint, S_IRUGO);\ 1671 1634 MODULE_PARM_DESC(lpfc_##name, desc);\ 1672 1635 lpfc_param_hex_show(name)\ 1673 1636 lpfc_param_init(name, defval, minval, maxval)\ ··· 1678 1641 1679 1642 #define LPFC_VPORT_ATTR(name, defval, minval, maxval, desc) \ 1680 1643 static uint lpfc_##name = defval;\ 1681 - module_param(lpfc_##name, uint, 0);\ 1644 + module_param(lpfc_##name, uint, S_IRUGO);\ 1682 1645 MODULE_PARM_DESC(lpfc_##name, desc);\ 1683 1646 lpfc_vport_param_init(name, defval, minval, maxval) 1684 1647 1685 1648 #define LPFC_VPORT_ATTR_R(name, defval, minval, maxval, desc) \ 1686 1649 static uint lpfc_##name = defval;\ 1687 - module_param(lpfc_##name, uint, 0);\ 1650 + module_param(lpfc_##name, uint, S_IRUGO);\ 1688 1651 MODULE_PARM_DESC(lpfc_##name, desc);\ 1689 1652 lpfc_vport_param_show(name)\ 1690 1653 lpfc_vport_param_init(name, defval, minval, maxval)\ ··· 1692 1655 1693 1656 #define LPFC_VPORT_ATTR_RW(name, defval, minval, maxval, desc) \ 1694 1657 static uint lpfc_##name = defval;\ 1695 - module_param(lpfc_##name, uint, 0);\ 1658 + module_param(lpfc_##name, uint, S_IRUGO);\ 1696 1659 MODULE_PARM_DESC(lpfc_##name, desc);\ 1697 1660 lpfc_vport_param_show(name)\ 1698 1661 lpfc_vport_param_init(name, defval, minval, maxval)\ ··· 1703 1666 1704 1667 #define LPFC_VPORT_ATTR_HEX_R(name, defval, minval, maxval, desc) \ 1705 1668 static uint lpfc_##name = defval;\ 1706 - module_param(lpfc_##name, uint, 0);\ 1669 + module_param(lpfc_##name, uint, S_IRUGO);\ 1707 1670 MODULE_PARM_DESC(lpfc_##name, desc);\ 1708 1671 lpfc_vport_param_hex_show(name)\ 1709 1672 lpfc_vport_param_init(name, defval, minval, maxval)\ ··· 1711 1674 1712 1675 #define LPFC_VPORT_ATTR_HEX_RW(name, defval, minval, maxval, desc) \ 1713 1676 static uint lpfc_##name = defval;\ 1714 - 
module_param(lpfc_##name, uint, 0);\ 1677 + module_param(lpfc_##name, uint, S_IRUGO);\ 1715 1678 MODULE_PARM_DESC(lpfc_##name, desc);\ 1716 1679 lpfc_vport_param_hex_show(name)\ 1717 1680 lpfc_vport_param_init(name, defval, minval, maxval)\ ··· 1755 1718 static DEVICE_ATTR(lpfc_temp_sensor, S_IRUGO, lpfc_temp_sensor_show, NULL); 1756 1719 static DEVICE_ATTR(lpfc_fips_level, S_IRUGO, lpfc_fips_level_show, NULL); 1757 1720 static DEVICE_ATTR(lpfc_fips_rev, S_IRUGO, lpfc_fips_rev_show, NULL); 1758 - 1721 + static DEVICE_ATTR(lpfc_dss, S_IRUGO, lpfc_dss_show, NULL); 1759 1722 1760 1723 static char *lpfc_soft_wwn_key = "C99G71SL8032A"; 1761 1724 ··· 1850 1813 int stat1=0, stat2=0; 1851 1814 unsigned int i, j, cnt=count; 1852 1815 u8 wwpn[8]; 1816 + int rc; 1853 1817 1854 1818 if (!phba->cfg_enable_hba_reset) 1855 1819 return -EACCES; ··· 1901 1863 "0463 lpfc_soft_wwpn attribute set failed to " 1902 1864 "reinit adapter - %d\n", stat1); 1903 1865 init_completion(&online_compl); 1904 - lpfc_workq_post_event(phba, &stat2, &online_compl, LPFC_EVT_ONLINE); 1866 + rc = lpfc_workq_post_event(phba, &stat2, &online_compl, 1867 + LPFC_EVT_ONLINE); 1868 + if (rc == 0) 1869 + return -ENOMEM; 1870 + 1905 1871 wait_for_completion(&online_compl); 1906 1872 if (stat2) 1907 1873 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, ··· 1996 1954 1997 1955 1998 1956 static int lpfc_poll = 0; 1999 - module_param(lpfc_poll, int, 0); 1957 + module_param(lpfc_poll, int, S_IRUGO); 2000 1958 MODULE_PARM_DESC(lpfc_poll, "FCP ring polling mode control:" 2001 1959 " 0 - none," 2002 1960 " 1 - poll with interrupts enabled" ··· 2006 1964 lpfc_poll_show, lpfc_poll_store); 2007 1965 2008 1966 int lpfc_sli_mode = 0; 2009 - module_param(lpfc_sli_mode, int, 0); 1967 + module_param(lpfc_sli_mode, int, S_IRUGO); 2010 1968 MODULE_PARM_DESC(lpfc_sli_mode, "SLI mode selector:" 2011 1969 " 0 - auto (SLI-3 if supported)," 2012 1970 " 2 - select SLI-2 even on SLI-3 capable HBAs," 2013 1971 " 3 - select SLI-3"); 2014 1972 2015 1973 int lpfc_enable_npiv = 1; 2016 - module_param(lpfc_enable_npiv, int, 0); 1974 + module_param(lpfc_enable_npiv, int, S_IRUGO); 2017 1975 MODULE_PARM_DESC(lpfc_enable_npiv, "Enable NPIV functionality"); 2018 1976 lpfc_param_show(enable_npiv); 2019 1977 lpfc_param_init(enable_npiv, 1, 0, 1); 2020 1978 static DEVICE_ATTR(lpfc_enable_npiv, S_IRUGO, lpfc_enable_npiv_show, NULL); 2021 1979 2022 1980 int lpfc_enable_rrq; 2023 - module_param(lpfc_enable_rrq, int, 0); 1981 + module_param(lpfc_enable_rrq, int, S_IRUGO); 2024 1982 MODULE_PARM_DESC(lpfc_enable_rrq, "Enable RRQ functionality"); 2025 1983 lpfc_param_show(enable_rrq); 2026 1984 lpfc_param_init(enable_rrq, 0, 0, 1); ··· 2082 2040 lpfc_txcmplq_hw_show, NULL); 2083 2041 2084 2042 int lpfc_iocb_cnt = 2; 2085 - module_param(lpfc_iocb_cnt, int, 1); 2043 + module_param(lpfc_iocb_cnt, int, S_IRUGO); 2086 2044 MODULE_PARM_DESC(lpfc_iocb_cnt, 2087 2045 "Number of IOCBs alloc for ELS, CT, and ABTS: 1k to 5k IOCBs"); 2088 2046 lpfc_param_show(iocb_cnt); ··· 2234 2192 # disappear until the timer expires. Value range is [0,255]. Default 2235 2193 # value is 30. 2236 2194 */ 2237 - module_param(lpfc_devloss_tmo, int, 0); 2195 + module_param(lpfc_devloss_tmo, int, S_IRUGO); 2238 2196 MODULE_PARM_DESC(lpfc_devloss_tmo, 2239 2197 "Seconds driver will hold I/O waiting " 2240 2198 "for a device to come back"); ··· 2344 2302 # Default value of this parameter is 1. 
2345 2303 */
 2346 2304 static int lpfc_restrict_login = 1;
 2347 - module_param(lpfc_restrict_login, int, 0);
 2305 + module_param(lpfc_restrict_login, int, S_IRUGO);
 2348 2306 MODULE_PARM_DESC(lpfc_restrict_login,
 2349 2307 "Restrict virtual ports login to remote initiators.");
 2350 2308 lpfc_vport_param_show(restrict_login);
··· 2515 2473 return -EINVAL;
 2516 2474 }
 2517 2475 static int lpfc_topology = 0;
 2518 - module_param(lpfc_topology, int, 0);
 2476 + module_param(lpfc_topology, int, S_IRUGO);
 2519 2477 MODULE_PARM_DESC(lpfc_topology, "Select Fibre Channel topology");
 2520 2478 lpfc_param_show(topology)
 2521 2479 lpfc_param_init(topology, 0, 0, 6)
··· 2957 2915 }
 2958 2916 
 2959 2917 static int lpfc_link_speed = 0;
 2960 - module_param(lpfc_link_speed, int, 0);
 2918 + module_param(lpfc_link_speed, int, S_IRUGO);
 2961 2919 MODULE_PARM_DESC(lpfc_link_speed, "Select link speed");
 2962 2920 lpfc_param_show(link_speed)
 2963 2921 
··· 3085 3043 }
 3086 3044 
 3087 3045 static int lpfc_aer_support = 1;
 3088 - module_param(lpfc_aer_support, int, 1);
 3046 + module_param(lpfc_aer_support, int, S_IRUGO);
 3089 3047 MODULE_PARM_DESC(lpfc_aer_support, "Enable PCIe device AER support");
 3090 3048 lpfc_param_show(aer_support)
 3091 3049 
··· 3197 3155 # The value is set in milliseconds.
 3198 3156 */
 3199 3157 static int lpfc_max_scsicmpl_time;
 3200 - module_param(lpfc_max_scsicmpl_time, int, 0);
 3158 + module_param(lpfc_max_scsicmpl_time, int, S_IRUGO);
 3201 3159 MODULE_PARM_DESC(lpfc_max_scsicmpl_time,
 3202 3160 "Use command completion time to control queue depth");
 3203 3161 lpfc_vport_param_show(max_scsicmpl_time);
··· 3373 3331 */
 3374 3332 unsigned int lpfc_prot_mask = SHOST_DIF_TYPE1_PROTECTION;
 3375 3333 
 3376 - module_param(lpfc_prot_mask, uint, 0);
 3334 + module_param(lpfc_prot_mask, uint, S_IRUGO);
 3377 3335 MODULE_PARM_DESC(lpfc_prot_mask, "host protection mask");
 3378 3336 
 3379 3337 /*
··· 3385 3343 #
 3386 3344 */
 3387 3345 unsigned char lpfc_prot_guard = SHOST_DIX_GUARD_IP;
 3388 - module_param(lpfc_prot_guard, byte, 0);
 3346 + module_param(lpfc_prot_guard, byte, S_IRUGO);
 3389 3347 MODULE_PARM_DESC(lpfc_prot_guard, "host protection guard type");
 3390 3348 
 3349 + /*
 3350 + * Delay initial NPort discovery when Clean Address bit is cleared in
 3351 + * FLOGI/FDISC accept and FCID/Fabric name/Fabric portname is changed.
 3352 + * This parameter can have value 0 or 1.
 3353 + * When this parameter is set to 0, no delay is added to the initial
 3354 + * discovery.
 3355 + * When this parameter is set to a non-zero value, initial NPort discovery is
 3356 + * delayed by ra_tov seconds when Clean Address bit is cleared in FLOGI/FDISC
 3357 + * accept and FCID/Fabric name/Fabric portname is changed.
 3358 + * The driver always delays NPort discovery for subsequent FLOGI/FDISC completions
 3359 + * when Clean Address bit is cleared in FLOGI/FDISC
 3360 + * accept and FCID/Fabric name/Fabric portname is changed.
 3361 + * Default value is 0.
 3362 + */
 3363 + int lpfc_delay_discovery;
 3364 + module_param(lpfc_delay_discovery, int, S_IRUGO);
 3365 + MODULE_PARM_DESC(lpfc_delay_discovery,
 3366 + "Delay NPort discovery when Clean Address bit is cleared. 
" 3367 + "Allowed values: 0,1."); 3391 3368 3392 3369 /* 3393 3370 * lpfc_sg_seg_cnt - Initial Maximum DMA Segment Count ··· 3498 3437 &dev_attr_txcmplq_hw, 3499 3438 &dev_attr_lpfc_fips_level, 3500 3439 &dev_attr_lpfc_fips_rev, 3440 + &dev_attr_lpfc_dss, 3501 3441 NULL, 3502 3442 }; 3503 3443 ··· 4701 4639 lpfc_aer_support_init(phba, lpfc_aer_support); 4702 4640 lpfc_suppress_link_up_init(phba, lpfc_suppress_link_up); 4703 4641 lpfc_iocb_cnt_init(phba, lpfc_iocb_cnt); 4642 + phba->cfg_enable_dss = 1; 4704 4643 return; 4705 4644 } 4706 4645
+6 -3
drivers/scsi/lpfc/lpfc_crtn.h
··· 53 53 void lpfc_init_link(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t, uint32_t); 54 54 void lpfc_request_features(struct lpfc_hba *, struct lpfcMboxq *); 55 55 void lpfc_supported_pages(struct lpfcMboxq *); 56 - void lpfc_sli4_params(struct lpfcMboxq *); 56 + void lpfc_pc_sli4_params(struct lpfcMboxq *); 57 57 int lpfc_pc_sli4_params_get(struct lpfc_hba *, LPFC_MBOXQ_t *); 58 - 58 + int lpfc_get_sli4_parameters(struct lpfc_hba *, LPFC_MBOXQ_t *); 59 59 struct lpfc_vport *lpfc_find_vport_by_did(struct lpfc_hba *, uint32_t); 60 60 void lpfc_cleanup_rcv_buffers(struct lpfc_vport *); 61 61 void lpfc_rcv_seq_check_edtov(struct lpfc_vport *); ··· 167 167 int lpfc_fdmi_cmd(struct lpfc_vport *, struct lpfc_nodelist *, int); 168 168 void lpfc_fdmi_tmo(unsigned long); 169 169 void lpfc_fdmi_timeout_handler(struct lpfc_vport *); 170 + void lpfc_delayed_disc_tmo(unsigned long); 171 + void lpfc_delayed_disc_timeout_handler(struct lpfc_vport *); 170 172 171 173 int lpfc_config_port_prep(struct lpfc_hba *); 172 174 int lpfc_config_port_post(struct lpfc_hba *); ··· 343 341 extern struct fc_function_template lpfc_vport_transport_functions; 344 342 extern int lpfc_sli_mode; 345 343 extern int lpfc_enable_npiv; 344 + extern int lpfc_delay_discovery; 346 345 347 346 int lpfc_vport_symbolic_node_name(struct lpfc_vport *, char *, size_t); 348 347 int lpfc_vport_symbolic_port_name(struct lpfc_vport *, char *, size_t); ··· 426 423 int lpfc_set_rrq_active(struct lpfc_hba *, struct lpfc_nodelist *, 427 424 uint16_t, uint16_t, uint16_t); 428 425 void lpfc_cleanup_wt_rrqs(struct lpfc_hba *); 429 - void lpfc_cleanup_vports_rrqs(struct lpfc_vport *); 426 + void lpfc_cleanup_vports_rrqs(struct lpfc_vport *, struct lpfc_nodelist *); 430 427 struct lpfc_node_rrq *lpfc_get_active_rrq(struct lpfc_vport *, uint16_t, 431 428 uint32_t);
+49
drivers/scsi/lpfc/lpfc_ct.c
··· 1738 1738 return 1;
 1739 1739 }
 1740 1740 
 1741 + /**
 1742 + * lpfc_delayed_disc_tmo - Timeout handler for delayed discovery timer.
 1743 + * @ptr: Context object of the timer.
 1744 + *
 1745 + * This function sets the WORKER_DELAYED_DISC_TMO flag and wakes up
 1746 + * the worker thread.
 1747 + **/
 1748 + void
 1749 + lpfc_delayed_disc_tmo(unsigned long ptr)
 1750 + {
 1751 + struct lpfc_vport *vport = (struct lpfc_vport *)ptr;
 1752 + struct lpfc_hba *phba = vport->phba;
 1753 + uint32_t tmo_posted;
 1754 + unsigned long iflag;
 1755 + 
 1756 + spin_lock_irqsave(&vport->work_port_lock, iflag);
 1757 + tmo_posted = vport->work_port_events & WORKER_DELAYED_DISC_TMO;
 1758 + if (!tmo_posted)
 1759 + vport->work_port_events |= WORKER_DELAYED_DISC_TMO;
 1760 + spin_unlock_irqrestore(&vport->work_port_lock, iflag);
 1761 + 
 1762 + if (!tmo_posted)
 1763 + lpfc_worker_wake_up(phba);
 1764 + return;
 1765 + }
 1766 + 
 1767 + /**
 1768 + * lpfc_delayed_disc_timeout_handler - Function called by worker thread to
 1769 + * handle delayed discovery.
 1770 + * @vport: pointer to a host virtual N_Port data structure.
 1771 + *
 1772 + * This function starts NPort discovery for the vport.
 1773 + **/
 1774 + void
 1775 + lpfc_delayed_disc_timeout_handler(struct lpfc_vport *vport)
 1776 + {
 1777 + struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
 1778 + 
 1779 + spin_lock_irq(shost->host_lock);
 1780 + if (!(vport->fc_flag & FC_DISC_DELAYED)) {
 1781 + spin_unlock_irq(shost->host_lock);
 1782 + return;
 1783 + }
 1784 + vport->fc_flag &= ~FC_DISC_DELAYED;
 1785 + spin_unlock_irq(shost->host_lock);
 1786 + 
 1787 + lpfc_do_scr_ns_plogi(vport->phba, vport);
 1788 + }
 1789 + 
 1741 1790 void
 1742 1791 lpfc_fdmi_tmo(unsigned long ptr)
 1743 1792 {
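
For the handlers above to run, the delayed_disc_tmo timer added to struct lpfc_vport has to be wired to lpfc_delayed_disc_tmo() and armed when a FLOGI/FDISC accept arrives with the Clean Address bit cleared. A sketch with the timer API of this era; the call sites, the helper name, and the R_A_TOV-based delay are assumptions, not code from this merge:

    #include <linux/timer.h>

    /* assumed helper; not part of this merge */
    static void my_arm_delayed_disc(struct lpfc_vport *vport)
    {
            /* done once at vport setup, alongside fc_fdmitmo/els_tmofunc */
            init_timer(&vport->delayed_disc_tmo);
            vport->delayed_disc_tmo.function = lpfc_delayed_disc_tmo;
            vport->delayed_disc_tmo.data = (unsigned long)vport;

            /* on a FLOGI/FDISC accept with the Clean Address bit cleared;
             * fc_flag updates normally take shost->host_lock (elided) */
            vport->fc_flag |= FC_DISC_DELAYED;
            mod_timer(&vport->delayed_disc_tmo,
                      jiffies + HZ * vport->fc_ratov);
    }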
+906 -56
drivers/scsi/lpfc/lpfc_debugfs.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2007-2009 Emulex. All rights reserved. * 4 + * Copyright (C) 2007-2011 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * * ··· 57 57 * # mount -t debugfs none /sys/kernel/debug 58 58 * 59 59 * The lpfc debugfs directory hierarchy is: 60 - * lpfc/lpfcX/vportY 61 - * where X is the lpfc hba unique_id 60 + * /sys/kernel/debug/lpfc/fnX/vportY 61 + * where X is the lpfc hba function unique_id 62 62 * where Y is the vport VPI on that hba 63 63 * 64 64 * Debugging services available per vport: ··· 82 82 * the HBA. X MUST also be a power of 2. 83 83 */ 84 84 static int lpfc_debugfs_enable = 1; 85 - module_param(lpfc_debugfs_enable, int, 0); 85 + module_param(lpfc_debugfs_enable, int, S_IRUGO); 86 86 MODULE_PARM_DESC(lpfc_debugfs_enable, "Enable debugfs services"); 87 87 88 88 /* This MUST be a power of 2 */ 89 89 static int lpfc_debugfs_max_disc_trc; 90 - module_param(lpfc_debugfs_max_disc_trc, int, 0); 90 + module_param(lpfc_debugfs_max_disc_trc, int, S_IRUGO); 91 91 MODULE_PARM_DESC(lpfc_debugfs_max_disc_trc, 92 92 "Set debugfs discovery trace depth"); 93 93 94 94 /* This MUST be a power of 2 */ 95 95 static int lpfc_debugfs_max_slow_ring_trc; 96 - module_param(lpfc_debugfs_max_slow_ring_trc, int, 0); 96 + module_param(lpfc_debugfs_max_slow_ring_trc, int, S_IRUGO); 97 97 MODULE_PARM_DESC(lpfc_debugfs_max_slow_ring_trc, 98 98 "Set debugfs slow ring trace depth"); 99 99 100 100 static int lpfc_debugfs_mask_disc_trc; 101 - module_param(lpfc_debugfs_mask_disc_trc, int, 0); 101 + module_param(lpfc_debugfs_mask_disc_trc, int, S_IRUGO); 102 102 MODULE_PARM_DESC(lpfc_debugfs_mask_disc_trc, 103 103 "Set debugfs discovery trace mask"); 104 104 105 105 #include <linux/debugfs.h> 106 106 107 - /* size of output line, for discovery_trace and slow_ring_trace */ 108 - #define LPFC_DEBUG_TRC_ENTRY_SIZE 100 109 - 110 - /* nodelist output buffer size */ 111 - #define LPFC_NODELIST_SIZE 8192 112 - #define LPFC_NODELIST_ENTRY_SIZE 120 113 - 114 - /* dumpHBASlim output buffer size */ 115 - #define LPFC_DUMPHBASLIM_SIZE 4096 116 - 117 - /* dumpHostSlim output buffer size */ 118 - #define LPFC_DUMPHOSTSLIM_SIZE 4096 119 - 120 - /* hbqinfo output buffer size */ 121 - #define LPFC_HBQINFO_SIZE 8192 122 - 123 - struct lpfc_debug { 124 - char *buffer; 125 - int len; 126 - }; 127 - 128 107 static atomic_t lpfc_debugfs_seq_trc_cnt = ATOMIC_INIT(0); 129 108 static unsigned long lpfc_debugfs_start_time = 0L; 109 + 110 + /* iDiag */ 111 + static struct lpfc_idiag idiag; 130 112 131 113 /** 132 114 * lpfc_debugfs_disc_trc_data - Dump discovery logging to a buffer ··· 978 996 return nbytes; 979 997 } 980 998 981 - 982 - 983 999 /** 984 1000 * lpfc_debugfs_nodelist_open - Open the nodelist debugfs file 985 1001 * @inode: The inode pointer that contains a vport pointer. 
··· 1079 1099 size_t nbytes, loff_t *ppos)
 1080 1100 {
 1081 1101 struct lpfc_debug *debug = file->private_data;
 1102 + 
 1082 1103 return simple_read_from_buffer(buf, nbytes, ppos, debug->buffer,
 1083 1104 debug->len);
 1084 1105 }
··· 1116 1135 kfree(debug);
 1117 1136 
 1118 1137 return 0;
 1138 + }
 1139 + 
 1140 + /*
 1141 + * iDiag debugfs file access methods
 1142 + */
 1143 + 
 1144 + /*
 1145 + * iDiag PCI config space register access methods:
 1146 + *
 1147 + * The PCI config space register accesses of read, write, read-modify-write
 1148 + * for set bits, and read-modify-write for clear bits to SLI4 PCI functions
 1149 + * are provided. In the proper SLI4 PCI function's debugfs iDiag directory,
 1150 + *
 1151 + * /sys/kernel/debug/lpfc/fn<#>/iDiag
 1152 + *
 1153 + * the access is through the debugfs entry pciCfg:
 1154 + *
 1155 + * 1. For PCI config space register read access, there are two read methods:
 1156 + * A) read a single PCI config space register in the size of a byte
 1157 + * (8 bits), a word (16 bits), or a dword (32 bits); or B) browse through
 1158 + * the 4K extended PCI config space.
 1159 + *
 1160 + * A) Read a single PCI config space register consists of two steps:
 1161 + *
 1162 + * Step-1: Set up PCI config space register read command, the command
 1163 + * syntax is,
 1164 + *
 1165 + * echo 1 <where> <count> > pciCfg
 1166 + *
 1167 + * where, 1 is the iDiag command for PCI config space read, <where> is the
 1168 + * offset from the beginning of the device's PCI config space to read from,
 1169 + * and <count> is the size of PCI config space register data to read back,
 1170 + * it will be 1 for reading a byte (8 bits), 2 for reading a word (16 bits
 1171 + * or 2 bytes), or 4 for reading a dword (32 bits or 4 bytes).
 1172 + *
 1173 + * Step-2: Perform the debugfs read operation to execute the idiag command
 1174 + * set up in Step-1,
 1175 + *
 1176 + * cat pciCfg
 1177 + *
 1178 + * Examples:
 1179 + * To read PCI device's vendor-id and device-id from PCI config space,
 1180 + *
 1181 + * echo 1 0 4 > pciCfg
 1182 + * cat pciCfg
 1183 + *
 1184 + * To read PCI device's current command from config space,
 1185 + *
 1186 + * echo 1 4 2 > pciCfg
 1187 + * cat pciCfg
 1188 + *
 1189 + * B) Browse through the entire 4K extended PCI config space also consists
 1190 + * of two steps:
 1191 + *
 1192 + * Step-1: Set up PCI config space register browsing command, the command
 1193 + * syntax is,
 1194 + *
 1195 + * echo 1 0 4096 > pciCfg
 1196 + *
 1197 + * where, 1 is the iDiag command for PCI config space read, 0 must be used
 1198 + * as the offset for PCI config space register browse, and 4096 must be
 1199 + * used as the count for PCI config space register browse.
 1200 + *
 1201 + * Step-2: Repeatedly issue the debugfs read operation to browse through
 1202 + * the entire PCI config space registers:
 1203 + *
 1204 + * cat pciCfg
 1205 + * cat pciCfg
 1206 + * cat pciCfg
 1207 + * ...
 1208 + *
 1209 + * When browsing to the end of the 4K PCI config space, the browse method
 1210 + * shall wrap around to start reading from the beginning again, and again...
 1211 + *
 1212 + * 2. For PCI config space register write access, it supports a single PCI
 1213 + * config space register write in the size of a byte (8 bits), a word
 1214 + * (16 bits), or a dword (32 bits). 
The command syntax is,
 1215 + *
 1216 + * echo 2 <where> <count> <value> > pciCfg
 1217 + *
 1218 + * where, 2 is the iDiag command for PCI config space write, <where> is
 1219 + * the offset from the beginning of the device's PCI config space to write
 1220 + * into, <count> is the size of data to write into the PCI config space,
 1221 + * it will be 1 for writing a byte (8 bits), 2 for writing a word (16 bits
 1222 + * or 2 bytes), or 4 for writing a dword (32 bits or 4 bytes), and <value>
 1223 + * is the data to be written into the PCI config space register at the
 1224 + * offset.
 1225 + *
 1226 + * Examples:
 1227 + * To disable PCI device's interrupt assertion,
 1228 + *
 1229 + * 1) Read in device's PCI config space register command field <cmd>:
 1230 + *
 1231 + * echo 1 4 2 > pciCfg
 1232 + * cat pciCfg
 1233 + *
 1234 + * 2) Set bit 10 (Interrupt Disable bit) in the <cmd>:
 1235 + *
 1236 + * <cmd> = <cmd> | (1 << 10)
 1237 + *
 1238 + * 3) Write the modified command back:
 1239 + *
 1240 + * echo 2 4 2 <cmd> > pciCfg
 1241 + *
 1242 + * 3. For PCI config space register set bits access, it supports a single PCI
 1243 + * config space register set bits in the size of a byte (8 bits), a word
 1244 + * (16 bits), or a dword (32 bits). The command syntax is,
 1245 + *
 1246 + * echo 3 <where> <count> <bitmask> > pciCfg
 1247 + *
 1248 + * where, 3 is the iDiag command for PCI config space set bits, <where> is
 1249 + * the offset from the beginning of the device's PCI config space to set
 1250 + * bits into, <count> is the size of the bitmask to set into the PCI config
 1251 + * space, it will be 1 for setting a byte (8 bits), 2 for setting a word
 1252 + * (16 bits or 2 bytes), or 4 for setting a dword (32 bits or 4 bytes), and
 1253 + * <bitmask> is the bitmask, indicating the bits to be set into the PCI
 1254 + * config space register at the offset. The logic performed to the content
 1255 + * of the PCI config space register, regval, is,
 1256 + *
 1257 + * regval |= <bitmask>
 1258 + *
 1259 + * 4. For PCI config space register clear bits access, it supports a single
 1260 + * PCI config space register clear bits in the size of a byte (8 bits),
 1261 + * a word (16 bits), or a dword (32 bits). The command syntax is,
 1262 + *
 1263 + * echo 4 <where> <count> <bitmask> > pciCfg
 1264 + *
 1265 + * where, 4 is the iDiag command for PCI config space clear bits, <where>
 1266 + * is the offset from the beginning of the device's PCI config space to
 1267 + * clear bits from, <count> is the size of the bitmask to set into the PCI
 1268 + * config space, it will be 1 for setting a byte (8 bits), 2 for setting
 1269 + * a word (16 bits or 2 bytes), or 4 for setting a dword (32 bits or 4
 1270 + * bytes), and <bitmask> is the bitmask, indicating the bits to be cleared
 1271 + * from the PCI config space register at the offset. The logic performed
 1272 + * to the content of the PCI config space register, regval, is,
 1273 + *
 1274 + * regval &= ~<bitmask>
 1275 + *
 1276 + * Note, for all single register read, write, set bits, or clear bits access,
 1277 + * the offset (<where>) must be aligned with the size of the data:
 1278 + *
 1279 + * For data size of byte (8 bits), the offset must be aligned to the byte
 1280 + * boundary; for data size of word (16 bits), the offset must be aligned
 1281 + * to the word boundary; while for data size of dword (32 bits), the offset
 1282 + * must be aligned to the dword boundary. Otherwise, the interface will
 1283 + * return the error:
 1284 + *
 1285 + * 
1286 + * 1287 + * For example: 1288 + * 1289 + * echo 1 2 4 > pciCfg 1290 + * -bash: echo: write error: Invalid argument 1291 + * 1292 + * Note also, all of the numbers in the command fields for all read, write, 1293 + * set bits, and clear bits PCI config space register command fields can be 1294 + * either decimal or hex. 1295 + * 1296 + * For example, 1297 + * echo 1 0 4096 > pciCfg 1298 + * 1299 + * will be the same as 1300 + * echo 1 0 0x1000 > pciCfg 1301 + * 1302 + * And, 1303 + * echo 2 155 1 10 > pciCfg 1304 + * 1305 + * will be 1306 + * echo 2 0x9b 1 0xa > pciCfg 1307 + */ 1308 + 1309 + /** 1310 + * lpfc_idiag_cmd_get - Get and parse idiag debugfs comands from user space 1311 + * @buf: The pointer to the user space buffer. 1312 + * @nbytes: The number of bytes in the user space buffer. 1313 + * @idiag_cmd: pointer to the idiag command struct. 1314 + * 1315 + * This routine reads data from debugfs user space buffer and parses the 1316 + * buffer for getting the idiag command and arguments. The while space in 1317 + * between the set of data is used as the parsing separator. 1318 + * 1319 + * This routine returns 0 when successful, it returns proper error code 1320 + * back to the user space in error conditions. 1321 + */ 1322 + static int lpfc_idiag_cmd_get(const char __user *buf, size_t nbytes, 1323 + struct lpfc_idiag_cmd *idiag_cmd) 1324 + { 1325 + char mybuf[64]; 1326 + char *pbuf, *step_str; 1327 + int bsize, i; 1328 + 1329 + /* Protect copy from user */ 1330 + if (!access_ok(VERIFY_READ, buf, nbytes)) 1331 + return -EFAULT; 1332 + 1333 + memset(mybuf, 0, sizeof(mybuf)); 1334 + memset(idiag_cmd, 0, sizeof(*idiag_cmd)); 1335 + bsize = min(nbytes, (sizeof(mybuf)-1)); 1336 + 1337 + if (copy_from_user(mybuf, buf, bsize)) 1338 + return -EFAULT; 1339 + pbuf = &mybuf[0]; 1340 + step_str = strsep(&pbuf, "\t "); 1341 + 1342 + /* The opcode must present */ 1343 + if (!step_str) 1344 + return -EINVAL; 1345 + 1346 + idiag_cmd->opcode = simple_strtol(step_str, NULL, 0); 1347 + if (idiag_cmd->opcode == 0) 1348 + return -EINVAL; 1349 + 1350 + for (i = 0; i < LPFC_IDIAG_CMD_DATA_SIZE; i++) { 1351 + step_str = strsep(&pbuf, "\t "); 1352 + if (!step_str) 1353 + return 0; 1354 + idiag_cmd->data[i] = simple_strtol(step_str, NULL, 0); 1355 + } 1356 + return 0; 1357 + } 1358 + 1359 + /** 1360 + * lpfc_idiag_open - idiag open debugfs 1361 + * @inode: The inode pointer that contains a pointer to phba. 1362 + * @file: The file pointer to attach the file operation. 1363 + * 1364 + * Description: 1365 + * This routine is the entry point for the debugfs open file operation. It 1366 + * gets the reference to phba from the i_private field in @inode, it then 1367 + * allocates buffer for the file operation, performs the necessary PCI config 1368 + * space read into the allocated buffer according to the idiag user command 1369 + * setup, and then returns a pointer to buffer in the private_data field in 1370 + * @file. 1371 + * 1372 + * Returns: 1373 + * This function returns zero if successful. On error it will return an 1374 + * negative error value. 
1375 + **/
1376 + static int
1377 + lpfc_idiag_open(struct inode *inode, struct file *file)
1378 + {
1379 +         struct lpfc_debug *debug;
1380 +
1381 +         debug = kmalloc(sizeof(*debug), GFP_KERNEL);
1382 +         if (!debug)
1383 +                 return -ENOMEM;
1384 +
1385 +         debug->i_private = inode->i_private;
1386 +         debug->buffer = NULL;
1387 +         file->private_data = debug;
1388 +
1389 +         return 0;
1390 + }
1391 +
1392 + /**
1393 + * lpfc_idiag_release - Release idiag access file operation
1394 + * @inode: The inode pointer that contains a vport pointer. (unused)
1395 + * @file: The file pointer that contains the buffer to release.
1396 + *
1397 + * Description:
1398 + * This routine is the generic release routine for the idiag access file
1399 + * operation; it frees the buffer that was allocated when the debugfs file
1400 + * was opened.
1401 + *
1402 + * Returns:
1403 + * This function returns zero.
1404 + **/
1405 + static int
1406 + lpfc_idiag_release(struct inode *inode, struct file *file)
1407 + {
1408 +         struct lpfc_debug *debug = file->private_data;
1409 +
1410 +         /* Free the buffers used by the file operation */
1411 +         kfree(debug->buffer);
1412 +         kfree(debug);
1413 +
1414 +         return 0;
1415 + }
1416 +
1417 + /**
1418 + * lpfc_idiag_cmd_release - Release idiag cmd access file operation
1419 + * @inode: The inode pointer that contains a vport pointer. (unused)
1420 + * @file: The file pointer that contains the buffer to release.
1421 + *
1422 + * Description:
1423 + * This routine frees the buffer that was allocated when the debugfs file
1424 + * was opened. It also resets the fields in the idiag command struct when
1425 + * the command is not a continuous browse of the data structure.
1426 + *
1427 + * Returns:
1428 + * This function returns zero.
1429 + **/
1430 + static int
1431 + lpfc_idiag_cmd_release(struct inode *inode, struct file *file)
1432 + {
1433 +         struct lpfc_debug *debug = file->private_data;
1434 +
1435 +         /* Read PCI config register, if not read all, clear command fields */
1436 +         if ((debug->op == LPFC_IDIAG_OP_RD) &&
1437 +             (idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_RD))
1438 +                 if ((idiag.cmd.data[1] == sizeof(uint8_t)) ||
1439 +                     (idiag.cmd.data[1] == sizeof(uint16_t)) ||
1440 +                     (idiag.cmd.data[1] == sizeof(uint32_t)))
1441 +                         memset(&idiag, 0, sizeof(idiag));
1442 +
1443 +         /* Write PCI config register, clear command fields */
1444 +         if ((debug->op == LPFC_IDIAG_OP_WR) &&
1445 +             (idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_WR))
1446 +                 memset(&idiag, 0, sizeof(idiag));
1447 +
1448 +         /* Free the buffers used by the file operation */
1449 +         kfree(debug->buffer);
1450 +         kfree(debug);
1451 +
1452 +         return 0;
1453 + }
1454 +
1455 + /**
1456 + * lpfc_idiag_pcicfg_read - idiag debugfs read pcicfg
1457 + * @file: The file pointer to read from.
1458 + * @buf: The buffer to copy the data to.
1459 + * @nbytes: The number of bytes to read.
1460 + * @ppos: The position in the file to start reading from.
1461 + *
1462 + * Description:
1463 + * This routine reads data from the @phba pci config space according to the
1464 + * idiag command, and copies it to the user @buf. Depending on the PCI config
1465 + * space read command setup, it does either a single register read of a byte
1466 + * (8 bits), a word (16 bits), or a dword (32 bits), or it browses through all
1467 + * registers of the 4K extended PCI config space.
1468 + *
1469 + * Returns:
1470 + * This function returns the amount of data that was read (this could be less
1471 + * than @nbytes if the end of the file was reached) or a negative error value.
1472 + **/
1473 + static ssize_t
1474 + lpfc_idiag_pcicfg_read(struct file *file, char __user *buf, size_t nbytes,
1475 +                        loff_t *ppos)
1476 + {
1477 +         struct lpfc_debug *debug = file->private_data;
1478 +         struct lpfc_hba *phba = (struct lpfc_hba *)debug->i_private;
1479 +         int offset_label, offset, len = 0, index = LPFC_PCI_CFG_RD_SIZE;
1480 +         int where, count;
1481 +         char *pbuffer;
1482 +         struct pci_dev *pdev;
1483 +         uint32_t u32val;
1484 +         uint16_t u16val;
1485 +         uint8_t u8val;
1486 +
1487 +         pdev = phba->pcidev;
1488 +         if (!pdev)
1489 +                 return 0;
1490 +
1491 +         /* This is a user read operation */
1492 +         debug->op = LPFC_IDIAG_OP_RD;
1493 +
1494 +         if (!debug->buffer)
1495 +                 debug->buffer = kmalloc(LPFC_PCI_CFG_SIZE, GFP_KERNEL);
1496 +         if (!debug->buffer)
1497 +                 return 0;
1498 +         pbuffer = debug->buffer;
1499 +
1500 +         if (*ppos)
1501 +                 return 0;
1502 +
1503 +         if (idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_RD) {
1504 +                 where = idiag.cmd.data[0];
1505 +                 count = idiag.cmd.data[1];
1506 +         } else
1507 +                 return 0;
1508 +
1509 +         /* Read single PCI config space register */
1510 +         switch (count) {
1511 +         case SIZE_U8: /* byte (8 bits) */
1512 +                 pci_read_config_byte(pdev, where, &u8val);
1513 +                 len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
1514 +                                 "%03x: %02x\n", where, u8val);
1515 +                 break;
1516 +         case SIZE_U16: /* word (16 bits) */
1517 +                 pci_read_config_word(pdev, where, &u16val);
1518 +                 len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
1519 +                                 "%03x: %04x\n", where, u16val);
1520 +                 break;
1521 +         case SIZE_U32: /* double word (32 bits) */
1522 +                 pci_read_config_dword(pdev, where, &u32val);
1523 +                 len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
1524 +                                 "%03x: %08x\n", where, u32val);
1525 +                 break;
1526 +         case LPFC_PCI_CFG_SIZE: /* browse all */
1527 +                 goto pcicfg_browse;
1528 +                 break;
1529 +         default:
1530 +                 /* illegal count */
1531 +                 len = 0;
1532 +                 break;
1533 +         }
1534 +         return simple_read_from_buffer(buf, nbytes, ppos, pbuffer, len);
1535 +
1536 + pcicfg_browse:
1537 +
1538 +         /* Browse all PCI config space registers */
1539 +         offset_label = idiag.offset.last_rd;
1540 +         offset = offset_label;
1541 +
1542 +         /* Read PCI config space */
1543 +         len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
1544 +                         "%03x: ", offset_label);
1545 +         while (index > 0) {
1546 +                 pci_read_config_dword(pdev, offset, &u32val);
1547 +                 len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
1548 +                                 "%08x ", u32val);
1549 +                 offset += sizeof(uint32_t);
1550 +                 index -= sizeof(uint32_t);
1551 +                 if (!index)
1552 +                         len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
1553 +                                         "\n");
1554 +                 else if (!(index % (8 * sizeof(uint32_t)))) {
1555 +                         offset_label += (8 * sizeof(uint32_t));
1556 +                         len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
1557 +                                         "\n%03x: ", offset_label);
1558 +                 }
1559 +         }
1560 +
1561 +         /* Set up the offset for next portion of pci cfg read */
1562 +         idiag.offset.last_rd += LPFC_PCI_CFG_RD_SIZE;
1563 +         if (idiag.offset.last_rd >= LPFC_PCI_CFG_SIZE)
1564 +                 idiag.offset.last_rd = 0;
1565 +
1566 +         return simple_read_from_buffer(buf, nbytes, ppos, pbuffer, len);
1567 + }
1568 +
1569 + /**
1570 + * lpfc_idiag_pcicfg_write - Syntax check and set up idiag pcicfg commands
1571 + * @file: The file pointer to read from.
1572 + * @buf: The buffer to copy the user data from.
1573 + * @nbytes: The number of bytes to get.
1574 + * @ppos: The position in the file to start reading from.
1575 + *
1576 + * This routine gets the debugfs idiag command struct from user space and
1577 + * then performs the syntax check for the PCI config space read or write
1578 + * command accordingly. In the case of a PCI config space read command, it
1579 + * sets up the command in the idiag command struct for the debugfs read
1580 + * operation. In the case of a PCI config space write operation, it executes
1581 + * the write operation into the PCI config space accordingly.
1582 + *
1583 + * It returns the @nbytes passed in from debugfs user space when successful.
1584 + * In case of error conditions, it returns the proper error code back to the
1585 + * user space.
1586 + */
1587 + static ssize_t
1588 + lpfc_idiag_pcicfg_write(struct file *file, const char __user *buf,
1589 +                         size_t nbytes, loff_t *ppos)
1590 + {
1591 +         struct lpfc_debug *debug = file->private_data;
1592 +         struct lpfc_hba *phba = (struct lpfc_hba *)debug->i_private;
1593 +         uint32_t where, value, count;
1594 +         uint32_t u32val;
1595 +         uint16_t u16val;
1596 +         uint8_t u8val;
1597 +         struct pci_dev *pdev;
1598 +         int rc;
1599 +
1600 +         pdev = phba->pcidev;
1601 +         if (!pdev)
1602 +                 return -EFAULT;
1603 +
1604 +         /* This is a user write operation */
1605 +         debug->op = LPFC_IDIAG_OP_WR;
1606 +
1607 +         rc = lpfc_idiag_cmd_get(buf, nbytes, &idiag.cmd);
1608 +         if (rc)
1609 +                 return rc;
1610 +
1611 +         if (idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_RD) {
1612 +                 /* Read command from PCI config space, set up command fields */
1613 +                 where = idiag.cmd.data[0];
1614 +                 count = idiag.cmd.data[1];
1615 +                 if (count == LPFC_PCI_CFG_SIZE) {
1616 +                         if (where != 0)
1617 +                                 goto error_out;
1618 +                 } else if ((count != sizeof(uint8_t)) &&
1619 +                            (count != sizeof(uint16_t)) &&
1620 +                            (count != sizeof(uint32_t)))
1621 +                         goto error_out;
1622 +                 if (count == sizeof(uint8_t)) {
1623 +                         if (where > LPFC_PCI_CFG_SIZE - sizeof(uint8_t))
1624 +                                 goto error_out;
1625 +                         if (where % sizeof(uint8_t))
1626 +                                 goto error_out;
1627 +                 }
1628 +                 if (count == sizeof(uint16_t)) {
1629 +                         if (where > LPFC_PCI_CFG_SIZE - sizeof(uint16_t))
1630 +                                 goto error_out;
1631 +                         if (where % sizeof(uint16_t))
1632 +                                 goto error_out;
1633 +                 }
1634 +                 if (count == sizeof(uint32_t)) {
1635 +                         if (where > LPFC_PCI_CFG_SIZE - sizeof(uint32_t))
1636 +                                 goto error_out;
1637 +                         if (where % sizeof(uint32_t))
1638 +                                 goto error_out;
1639 +                 }
1640 +         } else if (idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_WR ||
1641 +                    idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_ST ||
1642 +                    idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_CL) {
1643 +                 /* Write command to PCI config space, read-modify-write */
1644 +                 where = idiag.cmd.data[0];
1645 +                 count = idiag.cmd.data[1];
1646 +                 value = idiag.cmd.data[2];
1647 +                 /* Sanity checks */
1648 +                 if ((count != sizeof(uint8_t)) &&
1649 +                     (count != sizeof(uint16_t)) &&
1650 +                     (count != sizeof(uint32_t)))
1651 +                         goto error_out;
1652 +                 if (count == sizeof(uint8_t)) {
1653 +                         if (where > LPFC_PCI_CFG_SIZE - sizeof(uint8_t))
1654 +                                 goto error_out;
1655 +                         if (where % sizeof(uint8_t))
1656 +                                 goto error_out;
1657 +                         if (idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_WR)
1658 +                                 pci_write_config_byte(pdev, where,
1659 +                                                       (uint8_t)value);
1660 +                         if (idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_ST) {
1661 +                                 rc = pci_read_config_byte(pdev, where, &u8val);
1662 +                                 if (!rc) {
1663 +                                         u8val |= (uint8_t)value;
1664 +                                         pci_write_config_byte(pdev, where,
1665 +                                                               u8val);
1666 +                                 }
1667 +                         }
1668 +                         if (idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_CL) {
1669 +                                 rc = pci_read_config_byte(pdev, where, &u8val);
1670 +                                 if (!rc) {
1671 +                                         u8val &= (uint8_t)(~value);
1672 +                                         pci_write_config_byte(pdev, where,
1673 +                                                               u8val);
1674 +                                 }
1675 +                         }
1676 +                 }
1677 +                 if (count == sizeof(uint16_t)) {
1678 +                         if (where > LPFC_PCI_CFG_SIZE - sizeof(uint16_t))
1679 +                                 goto error_out;
1680 +                         if (where % sizeof(uint16_t))
1681 +                                 goto error_out;
1682 +                         if (idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_WR)
1683 +                                 pci_write_config_word(pdev, where,
1684 +                                                       (uint16_t)value);
1685 +                         if (idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_ST) {
1686 +                                 rc = pci_read_config_word(pdev, where, &u16val);
1687 +                                 if (!rc) {
1688 +                                         u16val |= (uint16_t)value;
1689 +                                         pci_write_config_word(pdev, where,
1690 +                                                               u16val);
1691 +                                 }
1692 +                         }
1693 +                         if (idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_CL) {
1694 +                                 rc = pci_read_config_word(pdev, where, &u16val);
1695 +                                 if (!rc) {
1696 +                                         u16val &= (uint16_t)(~value);
1697 +                                         pci_write_config_word(pdev, where,
1698 +                                                               u16val);
1699 +                                 }
1700 +                         }
1701 +                 }
1702 +                 if (count == sizeof(uint32_t)) {
1703 +                         if (where > LPFC_PCI_CFG_SIZE - sizeof(uint32_t))
1704 +                                 goto error_out;
1705 +                         if (where % sizeof(uint32_t))
1706 +                                 goto error_out;
1707 +                         if (idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_WR)
1708 +                                 pci_write_config_dword(pdev, where, value);
1709 +                         if (idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_ST) {
1710 +                                 rc = pci_read_config_dword(pdev, where,
1711 +                                                            &u32val);
1712 +                                 if (!rc) {
1713 +                                         u32val |= value;
1714 +                                         pci_write_config_dword(pdev, where,
1715 +                                                                u32val);
1716 +                                 }
1717 +                         }
1718 +                         if (idiag.cmd.opcode == LPFC_IDIAG_CMD_PCICFG_CL) {
1719 +                                 rc = pci_read_config_dword(pdev, where,
1720 +                                                            &u32val);
1721 +                                 if (!rc) {
1722 +                                         u32val &= ~value;
1723 +                                         pci_write_config_dword(pdev, where,
1724 +                                                                u32val);
1725 +                                 }
1726 +                         }
1727 +                 }
1728 +         } else
1729 +                 /* All other opcodes are illegal for now */
1730 +                 goto error_out;
1731 +
1732 +         return nbytes;
1733 + error_out:
1734 +         memset(&idiag, 0, sizeof(idiag));
1735 +         return -EINVAL;
1736 + }
1737 +
1738 + /**
1739 + * lpfc_idiag_queinfo_read - idiag debugfs read queue information
1740 + * @file: The file pointer to read from.
1741 + * @buf: The buffer to copy the data to.
1742 + * @nbytes: The number of bytes to read.
1743 + * @ppos: The position in the file to start reading from.
1744 + *
1745 + * Description:
1746 + * This routine reads data from the @phba SLI4 PCI function queue information,
1747 + * and copies it to the user @buf.
1748 + *
1749 + * Returns:
1750 + * This function returns the amount of data that was read (this could be less
1751 + * than @nbytes if the end of the file was reached) or a negative error value.
1752 + **/ 1753 + static ssize_t 1754 + lpfc_idiag_queinfo_read(struct file *file, char __user *buf, size_t nbytes, 1755 + loff_t *ppos) 1756 + { 1757 + struct lpfc_debug *debug = file->private_data; 1758 + struct lpfc_hba *phba = (struct lpfc_hba *)debug->i_private; 1759 + int len = 0, fcp_qidx; 1760 + char *pbuffer; 1761 + 1762 + if (!debug->buffer) 1763 + debug->buffer = kmalloc(LPFC_QUE_INFO_GET_BUF_SIZE, GFP_KERNEL); 1764 + if (!debug->buffer) 1765 + return 0; 1766 + pbuffer = debug->buffer; 1767 + 1768 + if (*ppos) 1769 + return 0; 1770 + 1771 + /* Get slow-path event queue information */ 1772 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1773 + "Slow-path EQ information:\n"); 1774 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1775 + "\tID [%02d], EQE-COUNT [%04d], " 1776 + "HOST-INDEX [%04x], PORT-INDEX [%04x]\n\n", 1777 + phba->sli4_hba.sp_eq->queue_id, 1778 + phba->sli4_hba.sp_eq->entry_count, 1779 + phba->sli4_hba.sp_eq->host_index, 1780 + phba->sli4_hba.sp_eq->hba_index); 1781 + 1782 + /* Get fast-path event queue information */ 1783 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1784 + "Fast-path EQ information:\n"); 1785 + for (fcp_qidx = 0; fcp_qidx < phba->cfg_fcp_eq_count; fcp_qidx++) { 1786 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1787 + "\tID [%02d], EQE-COUNT [%04d], " 1788 + "HOST-INDEX [%04x], PORT-INDEX [%04x]\n", 1789 + phba->sli4_hba.fp_eq[fcp_qidx]->queue_id, 1790 + phba->sli4_hba.fp_eq[fcp_qidx]->entry_count, 1791 + phba->sli4_hba.fp_eq[fcp_qidx]->host_index, 1792 + phba->sli4_hba.fp_eq[fcp_qidx]->hba_index); 1793 + } 1794 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, "\n"); 1795 + 1796 + /* Get mailbox complete queue information */ 1797 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1798 + "Mailbox CQ information:\n"); 1799 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1800 + "\t\tAssociated EQ-ID [%02d]:\n", 1801 + phba->sli4_hba.mbx_cq->assoc_qid); 1802 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1803 + "\tID [%02d], CQE-COUNT [%04d], " 1804 + "HOST-INDEX [%04x], PORT-INDEX [%04x]\n\n", 1805 + phba->sli4_hba.mbx_cq->queue_id, 1806 + phba->sli4_hba.mbx_cq->entry_count, 1807 + phba->sli4_hba.mbx_cq->host_index, 1808 + phba->sli4_hba.mbx_cq->hba_index); 1809 + 1810 + /* Get slow-path complete queue information */ 1811 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1812 + "Slow-path CQ information:\n"); 1813 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1814 + "\t\tAssociated EQ-ID [%02d]:\n", 1815 + phba->sli4_hba.els_cq->assoc_qid); 1816 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1817 + "\tID [%02d], CQE-COUNT [%04d], " 1818 + "HOST-INDEX [%04x], PORT-INDEX [%04x]\n\n", 1819 + phba->sli4_hba.els_cq->queue_id, 1820 + phba->sli4_hba.els_cq->entry_count, 1821 + phba->sli4_hba.els_cq->host_index, 1822 + phba->sli4_hba.els_cq->hba_index); 1823 + 1824 + /* Get fast-path complete queue information */ 1825 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1826 + "Fast-path CQ information:\n"); 1827 + for (fcp_qidx = 0; fcp_qidx < phba->cfg_fcp_eq_count; fcp_qidx++) { 1828 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1829 + "\t\tAssociated EQ-ID [%02d]:\n", 1830 + phba->sli4_hba.fcp_cq[fcp_qidx]->assoc_qid); 1831 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1832 + "\tID [%02d], EQE-COUNT [%04d], " 1833 + "HOST-INDEX [%04x], 
PORT-INDEX [%04x]\n", 1834 + phba->sli4_hba.fcp_cq[fcp_qidx]->queue_id, 1835 + phba->sli4_hba.fcp_cq[fcp_qidx]->entry_count, 1836 + phba->sli4_hba.fcp_cq[fcp_qidx]->host_index, 1837 + phba->sli4_hba.fcp_cq[fcp_qidx]->hba_index); 1838 + } 1839 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, "\n"); 1840 + 1841 + /* Get mailbox queue information */ 1842 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1843 + "Mailbox MQ information:\n"); 1844 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1845 + "\t\tAssociated CQ-ID [%02d]:\n", 1846 + phba->sli4_hba.mbx_wq->assoc_qid); 1847 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1848 + "\tID [%02d], MQE-COUNT [%04d], " 1849 + "HOST-INDEX [%04x], PORT-INDEX [%04x]\n\n", 1850 + phba->sli4_hba.mbx_wq->queue_id, 1851 + phba->sli4_hba.mbx_wq->entry_count, 1852 + phba->sli4_hba.mbx_wq->host_index, 1853 + phba->sli4_hba.mbx_wq->hba_index); 1854 + 1855 + /* Get slow-path work queue information */ 1856 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1857 + "Slow-path WQ information:\n"); 1858 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1859 + "\t\tAssociated CQ-ID [%02d]:\n", 1860 + phba->sli4_hba.els_wq->assoc_qid); 1861 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1862 + "\tID [%02d], WQE-COUNT [%04d], " 1863 + "HOST-INDEX [%04x], PORT-INDEX [%04x]\n\n", 1864 + phba->sli4_hba.els_wq->queue_id, 1865 + phba->sli4_hba.els_wq->entry_count, 1866 + phba->sli4_hba.els_wq->host_index, 1867 + phba->sli4_hba.els_wq->hba_index); 1868 + 1869 + /* Get fast-path work queue information */ 1870 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1871 + "Fast-path WQ information:\n"); 1872 + for (fcp_qidx = 0; fcp_qidx < phba->cfg_fcp_wq_count; fcp_qidx++) { 1873 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1874 + "\t\tAssociated CQ-ID [%02d]:\n", 1875 + phba->sli4_hba.fcp_wq[fcp_qidx]->assoc_qid); 1876 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1877 + "\tID [%02d], WQE-COUNT [%04d], " 1878 + "HOST-INDEX [%04x], PORT-INDEX [%04x]\n", 1879 + phba->sli4_hba.fcp_wq[fcp_qidx]->queue_id, 1880 + phba->sli4_hba.fcp_wq[fcp_qidx]->entry_count, 1881 + phba->sli4_hba.fcp_wq[fcp_qidx]->host_index, 1882 + phba->sli4_hba.fcp_wq[fcp_qidx]->hba_index); 1883 + } 1884 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, "\n"); 1885 + 1886 + /* Get receive queue information */ 1887 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1888 + "Slow-path RQ information:\n"); 1889 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1890 + "\t\tAssociated CQ-ID [%02d]:\n", 1891 + phba->sli4_hba.hdr_rq->assoc_qid); 1892 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1893 + "\tID [%02d], RHQE-COUNT [%04d], " 1894 + "HOST-INDEX [%04x], PORT-INDEX [%04x]\n", 1895 + phba->sli4_hba.hdr_rq->queue_id, 1896 + phba->sli4_hba.hdr_rq->entry_count, 1897 + phba->sli4_hba.hdr_rq->host_index, 1898 + phba->sli4_hba.hdr_rq->hba_index); 1899 + len += snprintf(pbuffer+len, LPFC_QUE_INFO_GET_BUF_SIZE-len, 1900 + "\tID [%02d], RDQE-COUNT [%04d], " 1901 + "HOST-INDEX [%04x], PORT-INDEX [%04x]\n", 1902 + phba->sli4_hba.dat_rq->queue_id, 1903 + phba->sli4_hba.dat_rq->entry_count, 1904 + phba->sli4_hba.dat_rq->host_index, 1905 + phba->sli4_hba.dat_rq->hba_index); 1906 + 1907 + return simple_read_from_buffer(buf, nbytes, ppos, pbuffer, len); 1119 1908 } 1120 1909 1121 1910 #undef lpfc_debugfs_op_disc_trc ··· 1964 1213 
1965 1214 static struct dentry *lpfc_debugfs_root = NULL; 1966 1215 static atomic_t lpfc_debugfs_hba_count; 1216 + 1217 + /* 1218 + * File operations for the iDiag debugfs 1219 + */ 1220 + #undef lpfc_idiag_op_pciCfg 1221 + static const struct file_operations lpfc_idiag_op_pciCfg = { 1222 + .owner = THIS_MODULE, 1223 + .open = lpfc_idiag_open, 1224 + .llseek = lpfc_debugfs_lseek, 1225 + .read = lpfc_idiag_pcicfg_read, 1226 + .write = lpfc_idiag_pcicfg_write, 1227 + .release = lpfc_idiag_cmd_release, 1228 + }; 1229 + 1230 + #undef lpfc_idiag_op_queInfo 1231 + static const struct file_operations lpfc_idiag_op_queInfo = { 1232 + .owner = THIS_MODULE, 1233 + .open = lpfc_idiag_open, 1234 + .read = lpfc_idiag_queinfo_read, 1235 + .release = lpfc_idiag_release, 1236 + }; 1237 + 1967 1238 #endif 1968 1239 1969 1240 /** ··· 2022 1249 if (!lpfc_debugfs_start_time) 2023 1250 lpfc_debugfs_start_time = jiffies; 2024 1251 2025 - /* Setup lpfcX directory for specific HBA */ 2026 - snprintf(name, sizeof(name), "lpfc%d", phba->brd_no); 1252 + /* Setup funcX directory for specific HBA PCI function */ 1253 + snprintf(name, sizeof(name), "fn%d", phba->brd_no); 2027 1254 if (!phba->hba_debugfs_root) { 2028 1255 phba->hba_debugfs_root = 2029 1256 debugfs_create_dir(name, lpfc_debugfs_root); ··· 2048 1275 } 2049 1276 2050 1277 /* Setup dumpHBASlim */ 2051 - snprintf(name, sizeof(name), "dumpHBASlim"); 2052 - phba->debug_dumpHBASlim = 2053 - debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR, 2054 - phba->hba_debugfs_root, 2055 - phba, &lpfc_debugfs_op_dumpHBASlim); 2056 - if (!phba->debug_dumpHBASlim) { 2057 - lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, 2058 - "0413 Cannot create debugfs dumpHBASlim\n"); 2059 - goto debug_failed; 2060 - } 1278 + if (phba->sli_rev < LPFC_SLI_REV4) { 1279 + snprintf(name, sizeof(name), "dumpHBASlim"); 1280 + phba->debug_dumpHBASlim = 1281 + debugfs_create_file(name, 1282 + S_IFREG|S_IRUGO|S_IWUSR, 1283 + phba->hba_debugfs_root, 1284 + phba, &lpfc_debugfs_op_dumpHBASlim); 1285 + if (!phba->debug_dumpHBASlim) { 1286 + lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, 1287 + "0413 Cannot create debugfs " 1288 + "dumpHBASlim\n"); 1289 + goto debug_failed; 1290 + } 1291 + } else 1292 + phba->debug_dumpHBASlim = NULL; 2061 1293 2062 1294 /* Setup dumpHostSlim */ 2063 - snprintf(name, sizeof(name), "dumpHostSlim"); 2064 - phba->debug_dumpHostSlim = 2065 - debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR, 2066 - phba->hba_debugfs_root, 2067 - phba, &lpfc_debugfs_op_dumpHostSlim); 2068 - if (!phba->debug_dumpHostSlim) { 2069 - lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, 2070 - "0414 Cannot create debugfs dumpHostSlim\n"); 2071 - goto debug_failed; 2072 - } 1295 + if (phba->sli_rev < LPFC_SLI_REV4) { 1296 + snprintf(name, sizeof(name), "dumpHostSlim"); 1297 + phba->debug_dumpHostSlim = 1298 + debugfs_create_file(name, 1299 + S_IFREG|S_IRUGO|S_IWUSR, 1300 + phba->hba_debugfs_root, 1301 + phba, &lpfc_debugfs_op_dumpHostSlim); 1302 + if (!phba->debug_dumpHostSlim) { 1303 + lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, 1304 + "0414 Cannot create debugfs " 1305 + "dumpHostSlim\n"); 1306 + goto debug_failed; 1307 + } 1308 + } else 1309 + phba->debug_dumpHBASlim = NULL; 2073 1310 2074 1311 /* Setup dumpData */ 2075 1312 snprintf(name, sizeof(name), "dumpData"); ··· 2105 1322 goto debug_failed; 2106 1323 } 2107 1324 2108 - 2109 - 2110 1325 /* Setup slow ring trace */ 2111 1326 if (lpfc_debugfs_max_slow_ring_trc) { 2112 1327 num = lpfc_debugfs_max_slow_ring_trc - 1; ··· 2122 1341 "%d\n", 
lpfc_debugfs_max_disc_trc); 2123 1342 } 2124 1343 } 2125 - 2126 1344 2127 1345 snprintf(name, sizeof(name), "slow_ring_trace"); 2128 1346 phba->debug_slow_ring_trc = ··· 2214 1434 "0409 Cant create debugfs nodelist\n"); 2215 1435 goto debug_failed; 2216 1436 } 1437 + 1438 + /* 1439 + * iDiag debugfs root entry points for SLI4 device only 1440 + */ 1441 + if (phba->sli_rev < LPFC_SLI_REV4) 1442 + goto debug_failed; 1443 + 1444 + snprintf(name, sizeof(name), "iDiag"); 1445 + if (!phba->idiag_root) { 1446 + phba->idiag_root = 1447 + debugfs_create_dir(name, phba->hba_debugfs_root); 1448 + if (!phba->idiag_root) { 1449 + lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, 1450 + "2922 Can't create idiag debugfs\n"); 1451 + goto debug_failed; 1452 + } 1453 + /* Initialize iDiag data structure */ 1454 + memset(&idiag, 0, sizeof(idiag)); 1455 + } 1456 + 1457 + /* iDiag read PCI config space */ 1458 + snprintf(name, sizeof(name), "pciCfg"); 1459 + if (!phba->idiag_pci_cfg) { 1460 + phba->idiag_pci_cfg = 1461 + debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR, 1462 + phba->idiag_root, phba, &lpfc_idiag_op_pciCfg); 1463 + if (!phba->idiag_pci_cfg) { 1464 + lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, 1465 + "2923 Can't create idiag debugfs\n"); 1466 + goto debug_failed; 1467 + } 1468 + idiag.offset.last_rd = 0; 1469 + } 1470 + 1471 + /* iDiag get PCI function queue information */ 1472 + snprintf(name, sizeof(name), "queInfo"); 1473 + if (!phba->idiag_que_info) { 1474 + phba->idiag_que_info = 1475 + debugfs_create_file(name, S_IFREG|S_IRUGO, 1476 + phba->idiag_root, phba, &lpfc_idiag_op_queInfo); 1477 + if (!phba->idiag_que_info) { 1478 + lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, 1479 + "2924 Can't create idiag debugfs\n"); 1480 + goto debug_failed; 1481 + } 1482 + } 1483 + 2217 1484 debug_failed: 2218 1485 return; 2219 1486 #endif ··· 2335 1508 phba->debug_slow_ring_trc = NULL; 2336 1509 } 2337 1510 1511 + /* 1512 + * iDiag release 1513 + */ 1514 + if (phba->sli_rev == LPFC_SLI_REV4) { 1515 + if (phba->idiag_que_info) { 1516 + /* iDiag queInfo */ 1517 + debugfs_remove(phba->idiag_que_info); 1518 + phba->idiag_que_info = NULL; 1519 + } 1520 + if (phba->idiag_pci_cfg) { 1521 + /* iDiag pciCfg */ 1522 + debugfs_remove(phba->idiag_pci_cfg); 1523 + phba->idiag_pci_cfg = NULL; 1524 + } 1525 + 1526 + /* Finally remove the iDiag debugfs root */ 1527 + if (phba->idiag_root) { 1528 + /* iDiag root */ 1529 + debugfs_remove(phba->idiag_root); 1530 + phba->idiag_root = NULL; 1531 + } 1532 + } 1533 + 2338 1534 if (phba->hba_debugfs_root) { 2339 - debugfs_remove(phba->hba_debugfs_root); /* lpfcX */ 1535 + debugfs_remove(phba->hba_debugfs_root); /* fnX */ 2340 1536 phba->hba_debugfs_root = NULL; 2341 1537 atomic_dec(&lpfc_debugfs_hba_count); 2342 1538 }
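The pciCfg node documented above is driven entirely through reads and writes on the debugfs file. A minimal user-space sketch, assuming debugfs is mounted at /sys/kernel/debug and that the driver's debugfs root directory is named lpfc (the fn0 component follows the fn%d per-function naming introduced above):

    /* Set bit 10 (Interrupt Disable) of the 16-bit PCI command register at
     * offset 4 with the set-bits opcode (3), rather than the manual
     * read-modify-write sequence walked through in the comment block. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            const char *path = "/sys/kernel/debug/lpfc/fn0/iDiag/pciCfg";
            const char *cmd = "3 4 2 0x400"; /* opcode, where, count, bitmask */
            int fd = open(path, O_WRONLY);

            if (fd < 0) {
                    perror("open pciCfg");
                    return 1;
            }
            if (write(fd, cmd, strlen(cmd)) < 0)
                    perror("write pciCfg");
            close(fd);
            return 0;
    }

A single write suffices because the driver performs the read-modify-write (regval |= bitmask) internally for opcode 3.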
+59 -1
drivers/scsi/lpfc/lpfc_debugfs.h
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2007 Emulex. All rights reserved. * 4 + * Copyright (C) 2007-2011 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * * ··· 22 22 #define _H_LPFC_DEBUG_FS 23 23 24 24 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 25 + 26 + /* size of output line, for discovery_trace and slow_ring_trace */ 27 + #define LPFC_DEBUG_TRC_ENTRY_SIZE 100 28 + 29 + /* nodelist output buffer size */ 30 + #define LPFC_NODELIST_SIZE 8192 31 + #define LPFC_NODELIST_ENTRY_SIZE 120 32 + 33 + /* dumpHBASlim output buffer size */ 34 + #define LPFC_DUMPHBASLIM_SIZE 4096 35 + 36 + /* dumpHostSlim output buffer size */ 37 + #define LPFC_DUMPHOSTSLIM_SIZE 4096 38 + 39 + /* hbqinfo output buffer size */ 40 + #define LPFC_HBQINFO_SIZE 8192 41 + 42 + /* rdPciConf output buffer size */ 43 + #define LPFC_PCI_CFG_SIZE 4096 44 + #define LPFC_PCI_CFG_RD_BUF_SIZE (LPFC_PCI_CFG_SIZE/2) 45 + #define LPFC_PCI_CFG_RD_SIZE (LPFC_PCI_CFG_SIZE/4) 46 + 47 + /* queue info output buffer size */ 48 + #define LPFC_QUE_INFO_GET_BUF_SIZE 2048 49 + 50 + #define SIZE_U8 sizeof(uint8_t) 51 + #define SIZE_U16 sizeof(uint16_t) 52 + #define SIZE_U32 sizeof(uint32_t) 53 + 54 + struct lpfc_debug { 55 + char *i_private; 56 + char op; 57 + #define LPFC_IDIAG_OP_RD 1 58 + #define LPFC_IDIAG_OP_WR 2 59 + char *buffer; 60 + int len; 61 + }; 62 + 25 63 struct lpfc_debugfs_trc { 26 64 char *fmt; 27 65 uint32_t data1; ··· 67 29 uint32_t data3; 68 30 uint32_t seq_cnt; 69 31 unsigned long jif; 32 + }; 33 + 34 + struct lpfc_idiag_offset { 35 + uint32_t last_rd; 36 + }; 37 + 38 + #define LPFC_IDIAG_CMD_DATA_SIZE 4 39 + struct lpfc_idiag_cmd { 40 + uint32_t opcode; 41 + #define LPFC_IDIAG_CMD_PCICFG_RD 0x00000001 42 + #define LPFC_IDIAG_CMD_PCICFG_WR 0x00000002 43 + #define LPFC_IDIAG_CMD_PCICFG_ST 0x00000003 44 + #define LPFC_IDIAG_CMD_PCICFG_CL 0x00000004 45 + uint32_t data[LPFC_IDIAG_CMD_DATA_SIZE]; 46 + }; 47 + 48 + struct lpfc_idiag { 49 + uint32_t active; 50 + struct lpfc_idiag_cmd cmd; 51 + struct lpfc_idiag_offset offset; 70 52 }; 71 53 #endif 72 54
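Since lpfc_idiag_cmd_get() parses every field with simple_strtol(..., 0), decimal and hex are interchangeable, as the pciCfg comment block notes. As an illustration with the example values from that block, parsing "2 0x9b 1 0xa" fills the struct defined above as follows (a sketch, not driver code):

    /* One-byte PCI config space write of 0xa at offset 0x9b */
    struct lpfc_idiag_cmd cmd = {
            .opcode = LPFC_IDIAG_CMD_PCICFG_WR,     /* 2 */
            .data   = { 0x9b, 1, 0xa, 0 },          /* where, count, value */
    };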
+164 -11
drivers/scsi/lpfc/lpfc_els.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2004-2009 Emulex. All rights reserved. * 4 + * Copyright (C) 2004-2011 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * Portions Copyright (C) 2004-2005 Christoph Hellwig * ··· 485 485 } 486 486 487 487 /** 488 + * lpfc_check_clean_addr_bit - Check whether assigned FCID is clean. 489 + * @vport: pointer to a host virtual N_Port data structure. 490 + * @sp: pointer to service parameter data structure. 491 + * 492 + * This routine is called from FLOGI/FDISC completion handler functions. 493 + * lpfc_check_clean_addr_bit return 1 when FCID/Fabric portname/ Fabric 494 + * node nodename is changed in the completion service parameter else return 495 + * 0. This function also set flag in the vport data structure to delay 496 + * NP_Port discovery after the FLOGI/FDISC completion if Clean address bit 497 + * in FLOGI/FDISC response is cleared and FCID/Fabric portname/ Fabric 498 + * node nodename is changed in the completion service parameter. 499 + * 500 + * Return code 501 + * 0 - FCID and Fabric Nodename and Fabric portname is not changed. 502 + * 1 - FCID or Fabric Nodename or Fabric portname is changed. 503 + * 504 + **/ 505 + static uint8_t 506 + lpfc_check_clean_addr_bit(struct lpfc_vport *vport, 507 + struct serv_parm *sp) 508 + { 509 + uint8_t fabric_param_changed = 0; 510 + struct Scsi_Host *shost = lpfc_shost_from_vport(vport); 511 + 512 + if ((vport->fc_prevDID != vport->fc_myDID) || 513 + memcmp(&vport->fabric_portname, &sp->portName, 514 + sizeof(struct lpfc_name)) || 515 + memcmp(&vport->fabric_nodename, &sp->nodeName, 516 + sizeof(struct lpfc_name))) 517 + fabric_param_changed = 1; 518 + 519 + /* 520 + * Word 1 Bit 31 in common service parameter is overloaded. 521 + * Word 1 Bit 31 in FLOGI request is multiple NPort request 522 + * Word 1 Bit 31 in FLOGI response is clean address bit 523 + * 524 + * If fabric parameter is changed and clean address bit is 525 + * cleared delay nport discovery if 526 + * - vport->fc_prevDID != 0 (not initial discovery) OR 527 + * - lpfc_delay_discovery module parameter is set. 528 + */ 529 + if (fabric_param_changed && !sp->cmn.clean_address_bit && 530 + (vport->fc_prevDID || lpfc_delay_discovery)) { 531 + spin_lock_irq(shost->host_lock); 532 + vport->fc_flag |= FC_DISC_DELAYED; 533 + spin_unlock_irq(shost->host_lock); 534 + } 535 + 536 + return fabric_param_changed; 537 + } 538 + 539 + 540 + /** 488 541 * lpfc_cmpl_els_flogi_fabric - Completion function for flogi to a fabric port 489 542 * @vport: pointer to a host virtual N_Port data structure. 490 543 * @ndlp: pointer to a node-list data structure. 
··· 565 512 struct lpfc_hba *phba = vport->phba; 566 513 struct lpfc_nodelist *np; 567 514 struct lpfc_nodelist *next_np; 515 + uint8_t fabric_param_changed; 568 516 569 517 spin_lock_irq(shost->host_lock); 570 518 vport->fc_flag |= FC_FABRIC; ··· 598 544 ndlp->nlp_class_sup |= FC_COS_CLASS4; 599 545 ndlp->nlp_maxframe = ((sp->cmn.bbRcvSizeMsb & 0x0F) << 8) | 600 546 sp->cmn.bbRcvSizeLsb; 547 + 548 + fabric_param_changed = lpfc_check_clean_addr_bit(vport, sp); 549 + memcpy(&vport->fabric_portname, &sp->portName, 550 + sizeof(struct lpfc_name)); 551 + memcpy(&vport->fabric_nodename, &sp->nodeName, 552 + sizeof(struct lpfc_name)); 601 553 memcpy(&phba->fc_fabparam, sp, sizeof(struct serv_parm)); 602 554 603 555 if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) { ··· 625 565 } 626 566 } 627 567 628 - if ((vport->fc_prevDID != vport->fc_myDID) && 568 + if (fabric_param_changed && 629 569 !(vport->fc_flag & FC_VPORT_NEEDS_REG_VPI)) { 630 570 631 571 /* If our NportID changed, we need to ensure all ··· 2263 2203 struct Scsi_Host *shost = lpfc_shost_from_vport(vport); 2264 2204 IOCB_t *irsp; 2265 2205 struct lpfc_sli *psli; 2206 + struct lpfcMboxq *mbox; 2266 2207 2267 2208 psli = &phba->sli; 2268 2209 /* we pass cmdiocb to state machine which needs rspiocb as well */ ··· 2321 2260 NLP_EVT_CMPL_LOGO); 2322 2261 out: 2323 2262 lpfc_els_free_iocb(phba, cmdiocb); 2263 + /* If we are in pt2pt mode, we could rcv new S_ID on PLOGI */ 2264 + if ((vport->fc_flag & FC_PT2PT) && 2265 + !(vport->fc_flag & FC_PT2PT_PLOGI)) { 2266 + phba->pport->fc_myDID = 0; 2267 + mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); 2268 + if (mbox) { 2269 + lpfc_config_link(phba, mbox); 2270 + mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl; 2271 + mbox->vport = vport; 2272 + if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT) == 2273 + MBX_NOT_FINISHED) { 2274 + mempool_free(mbox, phba->mbox_mem_pool); 2275 + } 2276 + } 2277 + } 2324 2278 return; 2325 2279 } 2326 2280 ··· 2821 2745 } 2822 2746 break; 2823 2747 case ELS_CMD_FDISC: 2824 - lpfc_issue_els_fdisc(vport, ndlp, retry); 2748 + if (!(vport->fc_flag & FC_VPORT_NEEDS_INIT_VPI)) 2749 + lpfc_issue_els_fdisc(vport, ndlp, retry); 2825 2750 break; 2826 2751 } 2827 2752 return; ··· 2892 2815 2893 2816 switch (irsp->ulpStatus) { 2894 2817 case IOSTAT_FCP_RSP_ERROR: 2895 - case IOSTAT_REMOTE_STOP: 2896 2818 break; 2897 - 2819 + case IOSTAT_REMOTE_STOP: 2820 + if (phba->sli_rev == LPFC_SLI_REV4) { 2821 + /* This IO was aborted by the target, we don't 2822 + * know the rxid and because we did not send the 2823 + * ABTS we cannot generate and RRQ. 
2824 + */ 2825 + lpfc_set_rrq_active(phba, ndlp, 2826 + cmdiocb->sli4_xritag, 0, 0); 2827 + } 2828 + break; 2898 2829 case IOSTAT_LOCAL_REJECT: 2899 2830 switch ((irsp->un.ulpWord[4] & 0xff)) { 2900 2831 case IOERR_LOOP_OPEN_FAILURE: ··· 4098 4013 uint8_t *pcmd; 4099 4014 struct RRQ *rrq; 4100 4015 uint16_t rxid; 4016 + uint16_t xri; 4101 4017 struct lpfc_node_rrq *prrq; 4102 4018 4103 4019 4104 4020 pcmd = (uint8_t *) (((struct lpfc_dmabuf *) iocb->context2)->virt); 4105 4021 pcmd += sizeof(uint32_t); 4106 4022 rrq = (struct RRQ *)pcmd; 4107 - rxid = bf_get(rrq_oxid, rrq); 4023 + rrq->rrq_exchg = be32_to_cpu(rrq->rrq_exchg); 4024 + rxid = be16_to_cpu(bf_get(rrq_rxid, rrq)); 4108 4025 4109 4026 lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, 4110 4027 "2883 Clear RRQ for SID:x%x OXID:x%x RXID:x%x" 4111 4028 " x%x x%x\n", 4112 - bf_get(rrq_did, rrq), 4113 - bf_get(rrq_oxid, rrq), 4029 + be32_to_cpu(bf_get(rrq_did, rrq)), 4030 + be16_to_cpu(bf_get(rrq_oxid, rrq)), 4114 4031 rxid, 4115 4032 iocb->iotag, iocb->iocb.ulpContext); 4116 4033 4117 4034 lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP, 4118 4035 "Clear RRQ: did:x%x flg:x%x exchg:x%.08x", 4119 4036 ndlp->nlp_DID, ndlp->nlp_flag, rrq->rrq_exchg); 4120 - prrq = lpfc_get_active_rrq(vport, rxid, ndlp->nlp_DID); 4037 + if (vport->fc_myDID == be32_to_cpu(bf_get(rrq_did, rrq))) 4038 + xri = be16_to_cpu(bf_get(rrq_oxid, rrq)); 4039 + else 4040 + xri = rxid; 4041 + prrq = lpfc_get_active_rrq(vport, xri, ndlp->nlp_DID); 4121 4042 if (prrq) 4122 - lpfc_clr_rrq_active(phba, rxid, prrq); 4043 + lpfc_clr_rrq_active(phba, xri, prrq); 4123 4044 return; 4124 4045 } 4125 4046 ··· 6257 6166 if (vport->load_flag & FC_UNLOADING) 6258 6167 goto dropit; 6259 6168 6169 + /* If NPort discovery is delayed drop incoming ELS */ 6170 + if ((vport->fc_flag & FC_DISC_DELAYED) && 6171 + (cmd != ELS_CMD_PLOGI)) 6172 + goto dropit; 6173 + 6260 6174 ndlp = lpfc_findnode_did(vport, did); 6261 6175 if (!ndlp) { 6262 6176 /* Cannot find existing Fabric ndlp, so allocate a new one */ ··· 6314 6218 ndlp = lpfc_plogi_confirm_nport(phba, payload, ndlp); 6315 6219 6316 6220 lpfc_send_els_event(vport, ndlp, payload); 6221 + 6222 + /* If Nport discovery is delayed, reject PLOGIs */ 6223 + if (vport->fc_flag & FC_DISC_DELAYED) { 6224 + rjt_err = LSRJT_UNABLE_TPC; 6225 + break; 6226 + } 6317 6227 if (vport->port_state < LPFC_DISC_AUTH) { 6318 6228 if (!(phba->pport->fc_flag & FC_PT2PT) || 6319 6229 (phba->pport->fc_flag & FC_PT2PT_PLOGI)) { ··· 6698 6596 lpfc_do_scr_ns_plogi(struct lpfc_hba *phba, struct lpfc_vport *vport) 6699 6597 { 6700 6598 struct lpfc_nodelist *ndlp, *ndlp_fdmi; 6599 + struct Scsi_Host *shost = lpfc_shost_from_vport(vport); 6600 + 6601 + /* 6602 + * If lpfc_delay_discovery parameter is set and the clean address 6603 + * bit is cleared and fc fabric parameters chenged, delay FC NPort 6604 + * discovery. 
6605 + */ 6606 + spin_lock_irq(shost->host_lock); 6607 + if (vport->fc_flag & FC_DISC_DELAYED) { 6608 + spin_unlock_irq(shost->host_lock); 6609 + mod_timer(&vport->delayed_disc_tmo, 6610 + jiffies + HZ * phba->fc_ratov); 6611 + return; 6612 + } 6613 + spin_unlock_irq(shost->host_lock); 6701 6614 6702 6615 ndlp = lpfc_findnode_did(vport, NameServer_DID); 6703 6616 if (!ndlp) { ··· 7055 6938 struct lpfc_nodelist *next_np; 7056 6939 IOCB_t *irsp = &rspiocb->iocb; 7057 6940 struct lpfc_iocbq *piocb; 6941 + struct lpfc_dmabuf *pcmd = cmdiocb->context2, *prsp; 6942 + struct serv_parm *sp; 6943 + uint8_t fabric_param_changed; 7058 6944 7059 6945 lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, 7060 6946 "0123 FDISC completes. x%x/x%x prevDID: x%x\n", ··· 7101 6981 7102 6982 vport->fc_myDID = irsp->un.ulpWord[4] & Mask_DID; 7103 6983 lpfc_vport_set_state(vport, FC_VPORT_ACTIVE); 7104 - if ((vport->fc_prevDID != vport->fc_myDID) && 6984 + prsp = list_get_first(&pcmd->list, struct lpfc_dmabuf, list); 6985 + sp = prsp->virt + sizeof(uint32_t); 6986 + fabric_param_changed = lpfc_check_clean_addr_bit(vport, sp); 6987 + memcpy(&vport->fabric_portname, &sp->portName, 6988 + sizeof(struct lpfc_name)); 6989 + memcpy(&vport->fabric_nodename, &sp->nodeName, 6990 + sizeof(struct lpfc_name)); 6991 + if (fabric_param_changed && 7105 6992 !(vport->fc_flag & FC_VPORT_NEEDS_REG_VPI)) { 7106 6993 /* If our NportID changed, we need to ensure all 7107 6994 * remaining NPORTs get unreg_login'ed so we can ··· 7706 7579 /* Cancel all the IOCBs from the completions list */ 7707 7580 lpfc_sli_cancel_iocbs(phba, &completions, IOSTAT_LOCAL_REJECT, 7708 7581 IOERR_SLI_ABORTED); 7582 + } 7583 + 7584 + /** 7585 + * lpfc_sli4_vport_delete_els_xri_aborted -Remove all ndlp references for vport 7586 + * @vport: pointer to lpfc vport data structure. 7587 + * 7588 + * This routine is invoked by the vport cleanup for deletions and the cleanup 7589 + * for an ndlp on removal. 7590 + **/ 7591 + void 7592 + lpfc_sli4_vport_delete_els_xri_aborted(struct lpfc_vport *vport) 7593 + { 7594 + struct lpfc_hba *phba = vport->phba; 7595 + struct lpfc_sglq *sglq_entry = NULL, *sglq_next = NULL; 7596 + unsigned long iflag = 0; 7597 + 7598 + spin_lock_irqsave(&phba->hbalock, iflag); 7599 + spin_lock(&phba->sli4_hba.abts_sgl_list_lock); 7600 + list_for_each_entry_safe(sglq_entry, sglq_next, 7601 + &phba->sli4_hba.lpfc_abts_els_sgl_list, list) { 7602 + if (sglq_entry->ndlp && sglq_entry->ndlp->vport == vport) 7603 + sglq_entry->ndlp = NULL; 7604 + } 7605 + spin_unlock(&phba->sli4_hba.abts_sgl_list_lock); 7606 + spin_unlock_irqrestore(&phba->hbalock, iflag); 7607 + return; 7709 7608 } 7710 7609 7711 7610 /**
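The RRQ completion fix above both byte-swaps the payload fields and changes which exchange identifier is cleared. The selection rule is compact enough to restate on its own; a hedged sketch with illustrative names, not driver API:

    #include <linux/types.h>

    /* The OXID names an exchange at its originator, so when the RRQ's DID
     * is this port's own DID the exchange was locally originated and the
     * OXID is the XRI to clear; otherwise the responder's RXID is used. */
    static u16 rrq_xri_to_clear(u32 my_did, u32 rrq_did, u16 oxid, u16 rxid)
    {
            return (my_did == rrq_did) ? oxid : rxid;
    }

This mirrors the vport->fc_myDID comparison added in the hunk above.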
+14 -4
drivers/scsi/lpfc/lpfc_hbadisc.c
··· 658 658 lpfc_ramp_down_queue_handler(phba); 659 659 if (work_port_events & WORKER_RAMP_UP_QUEUE) 660 660 lpfc_ramp_up_queue_handler(phba); 661 + if (work_port_events & WORKER_DELAYED_DISC_TMO) 662 + lpfc_delayed_disc_timeout_handler(vport); 661 663 } 662 664 lpfc_destroy_vport_work_array(phba, vports); 663 665 ··· 840 838 841 839 lpfc_port_link_failure(vport); 842 840 841 + /* Stop delayed Nport discovery */ 842 + spin_lock_irq(shost->host_lock); 843 + vport->fc_flag &= ~FC_DISC_DELAYED; 844 + spin_unlock_irq(shost->host_lock); 845 + del_timer_sync(&vport->delayed_disc_tmo); 843 846 } 844 847 845 848 int ··· 3167 3160 spin_unlock_irq(shost->host_lock); 3168 3161 vport->unreg_vpi_cmpl = VPORT_OK; 3169 3162 mempool_free(pmb, phba->mbox_mem_pool); 3170 - lpfc_cleanup_vports_rrqs(vport); 3163 + lpfc_cleanup_vports_rrqs(vport, NULL); 3171 3164 /* 3172 3165 * This shost reference might have been taken at the beginning of 3173 3166 * lpfc_vport_delete() ··· 3907 3900 if (ndlp->nlp_state == NLP_STE_UNUSED_NODE) 3908 3901 return; 3909 3902 lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE); 3903 + if (vport->phba->sli_rev == LPFC_SLI_REV4) 3904 + lpfc_cleanup_vports_rrqs(vport, ndlp); 3910 3905 lpfc_nlp_put(ndlp); 3911 3906 return; 3912 3907 } ··· 4298 4289 4299 4290 list_del_init(&ndlp->els_retry_evt.evt_listp); 4300 4291 list_del_init(&ndlp->dev_loss_evt.evt_listp); 4301 - 4292 + lpfc_cleanup_vports_rrqs(vport, ndlp); 4302 4293 lpfc_unreg_rpi(vport, ndlp); 4303 4294 4304 4295 return 0; ··· 4435 4426 { 4436 4427 struct Scsi_Host *shost = lpfc_shost_from_vport(vport); 4437 4428 struct lpfc_nodelist *ndlp; 4429 + unsigned long iflags; 4438 4430 4439 - spin_lock_irq(shost->host_lock); 4431 + spin_lock_irqsave(shost->host_lock, iflags); 4440 4432 ndlp = __lpfc_findnode_did(vport, did); 4441 - spin_unlock_irq(shost->host_lock); 4433 + spin_unlock_irqrestore(shost->host_lock, iflags); 4442 4434 return ndlp; 4443 4435 } 4444 4436
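The lpfc_findnode_did() hunk above converts the lock/unlock pair to the state-saving variants. A short kernel-context sketch of the pattern (illustrative helper, not driver code):

    #include <linux/spinlock.h>

    /* spin_unlock_irq() unconditionally re-enables interrupts, which is
     * unsafe if the caller entered with interrupts already disabled; the
     * irqsave/irqrestore pair preserves the caller's interrupt state. */
    static void locked_did_lookup(spinlock_t *lock)
    {
            unsigned long iflags;

            spin_lock_irqsave(lock, iflags);
            /* ... the __lpfc_findnode_did() walk happens here ... */
            spin_unlock_irqrestore(lock, iflags);
    }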
+10 -1
drivers/scsi/lpfc/lpfc_hw.h
··· 341 341 uint8_t bbCreditMsb; 342 342 uint8_t bbCreditlsb; /* FC Word 0, byte 3 */ 343 343 344 + /* 345 + * Word 1 Bit 31 in common service parameter is overloaded. 346 + * Word 1 Bit 31 in FLOGI request is multiple NPort request 347 + * Word 1 Bit 31 in FLOGI response is clean address bit 348 + */ 349 + #define clean_address_bit request_multiple_Nport /* Word 1, bit 31 */ 344 350 #ifdef __BIG_ENDIAN_BITFIELD 345 351 uint16_t request_multiple_Nport:1; /* FC Word 1, bit 31 */ 346 352 uint16_t randomOffset:1; /* FC Word 1, bit 30 */ ··· 3204 3198 #define IOERR_SLER_RRQ_RJT_ERR 0x4C 3205 3199 #define IOERR_SLER_RRQ_RETRY_ERR 0x4D 3206 3200 #define IOERR_SLER_ABTS_ERR 0x4E 3207 - 3201 + #define IOERR_ELXSEC_KEY_UNWRAP_ERROR 0xF0 3202 + #define IOERR_ELXSEC_KEY_UNWRAP_COMPARE_ERROR 0xF1 3203 + #define IOERR_ELXSEC_CRYPTO_ERROR 0xF2 3204 + #define IOERR_ELXSEC_CRYPTO_COMPARE_ERROR 0xF3 3208 3205 #define IOERR_DRVR_MASK 0x100 3209 3206 #define IOERR_SLI_DOWN 0x101 /* ulpStatus - Driver defined */ 3210 3207 #define IOERR_SLI_BRESET 0x102
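The clean_address_bit definition above overloads an existing bitfield rather than declaring a new one. A standalone sketch of the aliasing (simplified struct, not the driver's serv_parm layout):

    /* Both names resolve to the same FC Word 1 bit 31 storage: set on the
     * FLOGI request side as "multiple NPort request", tested on the FLOGI
     * response side as the clean address bit. */
    #define clean_address_bit request_multiple_Nport

    struct csp_word1 {
            unsigned int request_multiple_Nport:1;
    };

    static int response_is_clean(struct csp_word1 *cmn)
    {
            return cmn->clean_address_bit;  /* expands to the same member */
    }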
+100 -4
drivers/scsi/lpfc/lpfc_hw4.h
··· 778 778 #define LPFC_MBOX_OPCODE_QUERY_FW_CFG 0x3A 779 779 #define LPFC_MBOX_OPCODE_FUNCTION_RESET 0x3D 780 780 #define LPFC_MBOX_OPCODE_MQ_CREATE_EXT 0x5A 781 + #define LPFC_MBOX_OPCODE_GET_SLI4_PARAMETERS 0xB5 781 782 782 783 /* FCoE Opcodes */ 783 784 #define LPFC_MBOX_OPCODE_FCOE_WQ_CREATE 0x01 ··· 1853 1852 #define lpfc_mbx_rq_ftr_rq_ifip_SHIFT 7 1854 1853 #define lpfc_mbx_rq_ftr_rq_ifip_MASK 0x00000001 1855 1854 #define lpfc_mbx_rq_ftr_rq_ifip_WORD word2 1855 + #define lpfc_mbx_rq_ftr_rq_perfh_SHIFT 11 1856 + #define lpfc_mbx_rq_ftr_rq_perfh_MASK 0x00000001 1857 + #define lpfc_mbx_rq_ftr_rq_perfh_WORD word2 1856 1858 uint32_t word3; 1857 1859 #define lpfc_mbx_rq_ftr_rsp_iaab_SHIFT 0 1858 1860 #define lpfc_mbx_rq_ftr_rsp_iaab_MASK 0x00000001 ··· 1881 1877 #define lpfc_mbx_rq_ftr_rsp_ifip_SHIFT 7 1882 1878 #define lpfc_mbx_rq_ftr_rsp_ifip_MASK 0x00000001 1883 1879 #define lpfc_mbx_rq_ftr_rsp_ifip_WORD word3 1880 + #define lpfc_mbx_rq_ftr_rsp_perfh_SHIFT 11 1881 + #define lpfc_mbx_rq_ftr_rsp_perfh_MASK 0x00000001 1882 + #define lpfc_mbx_rq_ftr_rsp_perfh_WORD word3 1884 1883 }; 1885 1884 1886 1885 struct lpfc_mbx_supp_pages { ··· 1942 1935 #define LPFC_SLI4_PARAMETERS 2 1943 1936 }; 1944 1937 1945 - struct lpfc_mbx_sli4_params { 1938 + struct lpfc_mbx_pc_sli4_params { 1946 1939 uint32_t word1; 1947 1940 #define qs_SHIFT 0 1948 1941 #define qs_MASK 0x00000001 ··· 2058 2051 uint32_t rsvd_13_63[51]; 2059 2052 }; 2060 2053 2054 + struct lpfc_sli4_parameters { 2055 + uint32_t word0; 2056 + #define cfg_prot_type_SHIFT 0 2057 + #define cfg_prot_type_MASK 0x000000FF 2058 + #define cfg_prot_type_WORD word0 2059 + uint32_t word1; 2060 + #define cfg_ft_SHIFT 0 2061 + #define cfg_ft_MASK 0x00000001 2062 + #define cfg_ft_WORD word1 2063 + #define cfg_sli_rev_SHIFT 4 2064 + #define cfg_sli_rev_MASK 0x0000000f 2065 + #define cfg_sli_rev_WORD word1 2066 + #define cfg_sli_family_SHIFT 8 2067 + #define cfg_sli_family_MASK 0x0000000f 2068 + #define cfg_sli_family_WORD word1 2069 + #define cfg_if_type_SHIFT 12 2070 + #define cfg_if_type_MASK 0x0000000f 2071 + #define cfg_if_type_WORD word1 2072 + #define cfg_sli_hint_1_SHIFT 16 2073 + #define cfg_sli_hint_1_MASK 0x000000ff 2074 + #define cfg_sli_hint_1_WORD word1 2075 + #define cfg_sli_hint_2_SHIFT 24 2076 + #define cfg_sli_hint_2_MASK 0x0000001f 2077 + #define cfg_sli_hint_2_WORD word1 2078 + uint32_t word2; 2079 + uint32_t word3; 2080 + uint32_t word4; 2081 + #define cfg_cqv_SHIFT 14 2082 + #define cfg_cqv_MASK 0x00000003 2083 + #define cfg_cqv_WORD word4 2084 + uint32_t word5; 2085 + uint32_t word6; 2086 + #define cfg_mqv_SHIFT 14 2087 + #define cfg_mqv_MASK 0x00000003 2088 + #define cfg_mqv_WORD word6 2089 + uint32_t word7; 2090 + uint32_t word8; 2091 + #define cfg_wqv_SHIFT 14 2092 + #define cfg_wqv_MASK 0x00000003 2093 + #define cfg_wqv_WORD word8 2094 + uint32_t word9; 2095 + uint32_t word10; 2096 + #define cfg_rqv_SHIFT 14 2097 + #define cfg_rqv_MASK 0x00000003 2098 + #define cfg_rqv_WORD word10 2099 + uint32_t word11; 2100 + #define cfg_rq_db_window_SHIFT 28 2101 + #define cfg_rq_db_window_MASK 0x0000000f 2102 + #define cfg_rq_db_window_WORD word11 2103 + uint32_t word12; 2104 + #define cfg_fcoe_SHIFT 0 2105 + #define cfg_fcoe_MASK 0x00000001 2106 + #define cfg_fcoe_WORD word12 2107 + #define cfg_phwq_SHIFT 15 2108 + #define cfg_phwq_MASK 0x00000001 2109 + #define cfg_phwq_WORD word12 2110 + #define cfg_loopbk_scope_SHIFT 28 2111 + #define cfg_loopbk_scope_MASK 0x0000000f 2112 + #define cfg_loopbk_scope_WORD word12 2113 + uint32_t sge_supp_len; 
2114 + uint32_t word14; 2115 + #define cfg_sgl_page_cnt_SHIFT 0 2116 + #define cfg_sgl_page_cnt_MASK 0x0000000f 2117 + #define cfg_sgl_page_cnt_WORD word14 2118 + #define cfg_sgl_page_size_SHIFT 8 2119 + #define cfg_sgl_page_size_MASK 0x000000ff 2120 + #define cfg_sgl_page_size_WORD word14 2121 + #define cfg_sgl_pp_align_SHIFT 16 2122 + #define cfg_sgl_pp_align_MASK 0x000000ff 2123 + #define cfg_sgl_pp_align_WORD word14 2124 + uint32_t word15; 2125 + uint32_t word16; 2126 + uint32_t word17; 2127 + uint32_t word18; 2128 + uint32_t word19; 2129 + }; 2130 + 2131 + struct lpfc_mbx_get_sli4_parameters { 2132 + struct mbox_header header; 2133 + struct lpfc_sli4_parameters sli4_parameters; 2134 + }; 2135 + 2061 2136 /* Mailbox Completion Queue Error Messages */ 2062 2137 #define MB_CQE_STATUS_SUCCESS 0x0 2063 2138 #define MB_CQE_STATUS_INSUFFICIENT_PRIVILEGES 0x1 ··· 2192 2103 struct lpfc_mbx_post_hdr_tmpl hdr_tmpl; 2193 2104 struct lpfc_mbx_query_fw_cfg query_fw_cfg; 2194 2105 struct lpfc_mbx_supp_pages supp_pages; 2195 - struct lpfc_mbx_sli4_params sli4_params; 2106 + struct lpfc_mbx_pc_sli4_params sli4_params; 2107 + struct lpfc_mbx_get_sli4_parameters get_sli4_parameters; 2196 2108 struct lpfc_mbx_nop nop; 2197 2109 } un; 2198 2110 }; ··· 2471 2381 #define wqe_wqes_SHIFT 15 2472 2382 #define wqe_wqes_MASK 0x00000001 2473 2383 #define wqe_wqes_WORD word10 2384 + /* Note that this field overlaps above fields */ 2385 + #define wqe_wqid_SHIFT 1 2386 + #define wqe_wqid_MASK 0x0000007f 2387 + #define wqe_wqid_WORD word10 2474 2388 #define wqe_pri_SHIFT 16 2475 2389 #define wqe_pri_MASK 0x00000007 2476 2390 #define wqe_pri_WORD word10 ··· 2693 2599 uint32_t total_xfer_len; 2694 2600 uint32_t initial_xfer_len; 2695 2601 struct wqe_common wqe_com; /* words 6-11 */ 2696 - uint32_t rsvd_12_15[4]; /* word 12-15 */ 2602 + uint32_t rsrvd12; 2603 + struct ulp_bde64 ph_bde; /* words 13-15 */ 2697 2604 }; 2698 2605 2699 2606 struct fcp_iread64_wqe { ··· 2703 2608 uint32_t total_xfer_len; /* word 4 */ 2704 2609 uint32_t rsrvd5; /* word 5 */ 2705 2610 struct wqe_common wqe_com; /* words 6-11 */ 2706 - uint32_t rsvd_12_15[4]; /* word 12-15 */ 2611 + uint32_t rsrvd12; 2612 + struct ulp_bde64 ph_bde; /* words 13-15 */ 2707 2613 }; 2708 2614 2709 2615 struct fcp_icmnd64_wqe {
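Every cfg_*_SHIFT/_MASK/_WORD triple added above is consumed through lpfc's token-pasting bitfield accessors (bf_get()/bf_set(), defined elsewhere in lpfc_hw4.h). A minimal restatement of the getter shape these triples assume:

    #define bf_get(name, ptr) \
            ((((ptr)->name##_WORD) >> name##_SHIFT) & name##_MASK)

    /* e.g. bf_get(cfg_cqv, mbx_sli4_parameters) extracts bits 15:14 of
     * word4, the completion queue version reported by the port. */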
+101 -33
drivers/scsi/lpfc/lpfc_init.c
··· 460 460 || ((phba->cfg_link_speed == LPFC_USER_LINK_SPEED_16G) 461 461 && !(phba->lmt & LMT_16Gb))) { 462 462 /* Reset link speed to auto */ 463 - lpfc_printf_log(phba, KERN_WARNING, LOG_LINK_EVENT, 463 + lpfc_printf_log(phba, KERN_ERR, LOG_LINK_EVENT, 464 464 "1302 Invalid speed for this board: " 465 465 "Reset link speed to auto: x%x\n", 466 466 phba->cfg_link_speed); ··· 945 945 lpfc_rrq_timeout(unsigned long ptr) 946 946 { 947 947 struct lpfc_hba *phba; 948 - uint32_t tmo_posted; 949 948 unsigned long iflag; 950 949 951 950 phba = (struct lpfc_hba *)ptr; 952 951 spin_lock_irqsave(&phba->pport->work_port_lock, iflag); 953 - tmo_posted = phba->hba_flag & HBA_RRQ_ACTIVE; 954 - if (!tmo_posted) 955 - phba->hba_flag |= HBA_RRQ_ACTIVE; 952 + phba->hba_flag |= HBA_RRQ_ACTIVE; 956 953 spin_unlock_irqrestore(&phba->pport->work_port_lock, iflag); 957 - if (!tmo_posted) 958 - lpfc_worker_wake_up(phba); 954 + lpfc_worker_wake_up(phba); 959 955 } 960 956 961 957 /** ··· 2276 2280 /* Wait for any activity on ndlps to settle */ 2277 2281 msleep(10); 2278 2282 } 2283 + lpfc_cleanup_vports_rrqs(vport, NULL); 2279 2284 } 2280 2285 2281 2286 /** ··· 2292 2295 { 2293 2296 del_timer_sync(&vport->els_tmofunc); 2294 2297 del_timer_sync(&vport->fc_fdmitmo); 2298 + del_timer_sync(&vport->delayed_disc_tmo); 2295 2299 lpfc_can_disctmo(vport); 2296 2300 return; 2297 2301 } ··· 2353 2355 del_timer_sync(&phba->fabric_block_timer); 2354 2356 del_timer_sync(&phba->eratt_poll); 2355 2357 del_timer_sync(&phba->hb_tmofunc); 2358 + if (phba->sli_rev == LPFC_SLI_REV4) { 2359 + del_timer_sync(&phba->rrq_tmr); 2360 + phba->hba_flag &= ~HBA_RRQ_ACTIVE; 2361 + } 2356 2362 phba->hb_outstanding = 0; 2357 2363 2358 2364 switch (phba->pci_dev_grp) { ··· 2734 2732 init_timer(&vport->els_tmofunc); 2735 2733 vport->els_tmofunc.function = lpfc_els_timeout; 2736 2734 vport->els_tmofunc.data = (unsigned long)vport; 2735 + 2736 + init_timer(&vport->delayed_disc_tmo); 2737 + vport->delayed_disc_tmo.function = lpfc_delayed_disc_tmo; 2738 + vport->delayed_disc_tmo.data = (unsigned long)vport; 2739 + 2737 2740 error = scsi_add_host_with_dma(shost, dev, &phba->pcidev->dev); 2738 2741 if (error) 2739 2742 goto out_put_shost; ··· 4290 4283 goto out_free_bsmbx; 4291 4284 } 4292 4285 4293 - /* Get the Supported Pages. It is always available. */ 4286 + /* Get the Supported Pages if PORT_CAPABILITIES is supported by port. */ 4294 4287 lpfc_supported_pages(mboxq); 4295 4288 rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL); 4296 - if (unlikely(rc)) { 4297 - rc = -EIO; 4298 - mempool_free(mboxq, phba->mbox_mem_pool); 4299 - goto out_free_bsmbx; 4300 - } 4301 - 4302 - mqe = &mboxq->u.mqe; 4303 - memcpy(&pn_page[0], ((uint8_t *)&mqe->un.supp_pages.word3), 4304 - LPFC_MAX_SUPPORTED_PAGES); 4305 - for (i = 0; i < LPFC_MAX_SUPPORTED_PAGES; i++) { 4306 - switch (pn_page[i]) { 4307 - case LPFC_SLI4_PARAMETERS: 4308 - phba->sli4_hba.pc_sli4_params.supported = 1; 4309 - break; 4310 - default: 4311 - break; 4289 + if (!rc) { 4290 + mqe = &mboxq->u.mqe; 4291 + memcpy(&pn_page[0], ((uint8_t *)&mqe->un.supp_pages.word3), 4292 + LPFC_MAX_SUPPORTED_PAGES); 4293 + for (i = 0; i < LPFC_MAX_SUPPORTED_PAGES; i++) { 4294 + switch (pn_page[i]) { 4295 + case LPFC_SLI4_PARAMETERS: 4296 + phba->sli4_hba.pc_sli4_params.supported = 1; 4297 + break; 4298 + default: 4299 + break; 4300 + } 4301 + } 4302 + /* Read the port's SLI4 Parameters capabilities if supported. 
*/ 4303 + if (phba->sli4_hba.pc_sli4_params.supported) 4304 + rc = lpfc_pc_sli4_params_get(phba, mboxq); 4305 + if (rc) { 4306 + mempool_free(mboxq, phba->mbox_mem_pool); 4307 + rc = -EIO; 4308 + goto out_free_bsmbx; 4312 4309 } 4313 4310 } 4314 - 4315 - /* Read the port's SLI4 Parameters capabilities if supported. */ 4316 - if (phba->sli4_hba.pc_sli4_params.supported) 4317 - rc = lpfc_pc_sli4_params_get(phba, mboxq); 4311 + /* 4312 + * Get sli4 parameters that override parameters from Port capabilities. 4313 + * If this call fails it is not a critical error so continue loading. 4314 + */ 4315 + lpfc_get_sli4_parameters(phba, mboxq); 4318 4316 mempool_free(mboxq, phba->mbox_mem_pool); 4319 - if (rc) { 4320 - rc = -EIO; 4321 - goto out_free_bsmbx; 4322 - } 4323 4317 /* Create all the SLI4 queues */ 4324 4318 rc = lpfc_sli4_queue_create(phba); 4325 4319 if (rc) ··· 7818 7810 mqe = &mboxq->u.mqe; 7819 7811 7820 7812 /* Read the port's SLI4 Parameters port capabilities */ 7821 - lpfc_sli4_params(mboxq); 7813 + lpfc_pc_sli4_params(mboxq); 7822 7814 if (!phba->sli4_hba.intr_enable) 7823 7815 rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL); 7824 7816 else { ··· 7859 7851 sli4_params->sgl_pages_max = bf_get(sgl_pages, &mqe->un.sli4_params); 7860 7852 sli4_params->sgl_pp_align = bf_get(sgl_pp_align, &mqe->un.sli4_params); 7861 7853 return rc; 7854 + } 7855 + 7856 + /** 7857 + * lpfc_get_sli4_parameters - Get the SLI4 Config PARAMETERS. 7858 + * @phba: Pointer to HBA context object. 7859 + * @mboxq: Pointer to the mailboxq memory for the mailbox command response. 7860 + * 7861 + * This function is called in the SLI4 code path to read the port's 7862 + * sli4 capabilities. 7863 + * 7864 + * This function may be be called from any context that can block-wait 7865 + * for the completion. The expectation is that this routine is called 7866 + * typically from probe_one or from the online routine. 
7867 + **/ 7868 + int 7869 + lpfc_get_sli4_parameters(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq) 7870 + { 7871 + int rc; 7872 + struct lpfc_mqe *mqe = &mboxq->u.mqe; 7873 + struct lpfc_pc_sli4_params *sli4_params; 7874 + int length; 7875 + struct lpfc_sli4_parameters *mbx_sli4_parameters; 7876 + 7877 + /* Read the port's SLI4 Config Parameters */ 7878 + length = (sizeof(struct lpfc_mbx_get_sli4_parameters) - 7879 + sizeof(struct lpfc_sli4_cfg_mhdr)); 7880 + lpfc_sli4_config(phba, mboxq, LPFC_MBOX_SUBSYSTEM_COMMON, 7881 + LPFC_MBOX_OPCODE_GET_SLI4_PARAMETERS, 7882 + length, LPFC_SLI4_MBX_EMBED); 7883 + if (!phba->sli4_hba.intr_enable) 7884 + rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL); 7885 + else 7886 + rc = lpfc_sli_issue_mbox_wait(phba, mboxq, 7887 + lpfc_mbox_tmo_val(phba, MBX_SLI4_CONFIG)); 7888 + if (unlikely(rc)) 7889 + return rc; 7890 + sli4_params = &phba->sli4_hba.pc_sli4_params; 7891 + mbx_sli4_parameters = &mqe->un.get_sli4_parameters.sli4_parameters; 7892 + sli4_params->if_type = bf_get(cfg_if_type, mbx_sli4_parameters); 7893 + sli4_params->sli_rev = bf_get(cfg_sli_rev, mbx_sli4_parameters); 7894 + sli4_params->sli_family = bf_get(cfg_sli_family, mbx_sli4_parameters); 7895 + sli4_params->featurelevel_1 = bf_get(cfg_sli_hint_1, 7896 + mbx_sli4_parameters); 7897 + sli4_params->featurelevel_2 = bf_get(cfg_sli_hint_2, 7898 + mbx_sli4_parameters); 7899 + if (bf_get(cfg_phwq, mbx_sli4_parameters)) 7900 + phba->sli3_options |= LPFC_SLI4_PHWQ_ENABLED; 7901 + else 7902 + phba->sli3_options &= ~LPFC_SLI4_PHWQ_ENABLED; 7903 + sli4_params->sge_supp_len = mbx_sli4_parameters->sge_supp_len; 7904 + sli4_params->loopbk_scope = bf_get(loopbk_scope, mbx_sli4_parameters); 7905 + sli4_params->cqv = bf_get(cfg_cqv, mbx_sli4_parameters); 7906 + sli4_params->mqv = bf_get(cfg_mqv, mbx_sli4_parameters); 7907 + sli4_params->wqv = bf_get(cfg_wqv, mbx_sli4_parameters); 7908 + sli4_params->rqv = bf_get(cfg_rqv, mbx_sli4_parameters); 7909 + sli4_params->sgl_pages_max = bf_get(cfg_sgl_page_cnt, 7910 + mbx_sli4_parameters); 7911 + sli4_params->sgl_pp_align = bf_get(cfg_sgl_pp_align, 7912 + mbx_sli4_parameters); 7913 + return 0; 7862 7914 } 7863 7915 7864 7916 /**
+10 -9
drivers/scsi/lpfc/lpfc_mbox.c
··· 1263 1263 if (phba->sli_rev == LPFC_SLI_REV3 && phba->vpd.sli3Feat.cerbm) { 1264 1264 if (phba->cfg_enable_bg) 1265 1265 mb->un.varCfgPort.cbg = 1; /* configure BlockGuard */ 1266 - mb->un.varCfgPort.cdss = 1; /* Configure Security */ 1266 + if (phba->cfg_enable_dss) 1267 + mb->un.varCfgPort.cdss = 1; /* Configure Security */ 1267 1268 mb->un.varCfgPort.cerbm = 1; /* Request HBQs */ 1268 1269 mb->un.varCfgPort.ccrp = 1; /* Command Ring Polling */ 1269 1270 mb->un.varCfgPort.max_hbq = lpfc_sli_hbq_count(); ··· 1693 1692 * @mbox: pointer to lpfc mbox command. 1694 1693 * @subsystem: The sli4 config sub mailbox subsystem. 1695 1694 * @opcode: The sli4 config sub mailbox command opcode. 1696 - * @length: Length of the sli4 config mailbox command. 1695 + * @length: Length of the sli4 config mailbox command (including sub-header). 1697 1696 * 1698 1697 * This routine sets up the header fields of SLI4 specific mailbox command 1699 1698 * for sending IOCTL command. ··· 1724 1723 if (emb) { 1725 1724 /* Set up main header fields */ 1726 1725 bf_set(lpfc_mbox_hdr_emb, &sli4_config->header.cfg_mhdr, 1); 1727 - sli4_config->header.cfg_mhdr.payload_length = 1728 - LPFC_MBX_CMD_HDR_LENGTH + length; 1726 + sli4_config->header.cfg_mhdr.payload_length = length; 1729 1727 /* Set up sub-header fields following main header */ 1730 1728 bf_set(lpfc_mbox_hdr_opcode, 1731 1729 &sli4_config->header.cfg_shdr.request, opcode); 1732 1730 bf_set(lpfc_mbox_hdr_subsystem, 1733 1731 &sli4_config->header.cfg_shdr.request, subsystem); 1734 - sli4_config->header.cfg_shdr.request.request_length = length; 1732 + sli4_config->header.cfg_shdr.request.request_length = 1733 + length - LPFC_MBX_CMD_HDR_LENGTH; 1735 1734 return length; 1736 1735 } 1737 1736 ··· 1903 1902 1904 1903 /* Set up host requested features. */ 1905 1904 bf_set(lpfc_mbx_rq_ftr_rq_fcpi, &mboxq->u.mqe.un.req_ftrs, 1); 1905 + bf_set(lpfc_mbx_rq_ftr_rq_perfh, &mboxq->u.mqe.un.req_ftrs, 1); 1906 1906 1907 1907 /* Enable DIF (block guard) only if configured to do so. */ 1908 1908 if (phba->cfg_enable_bg) ··· 2161 2159 } 2162 2160 2163 2161 /** 2164 - * lpfc_sli4_params - Initialize the PORT_CAPABILITIES SLI4 Params 2165 - * mailbox command. 2162 + * lpfc_pc_sli4_params - Initialize the PORT_CAPABILITIES SLI4 Params mbox cmd. 2166 2163 * @mbox: pointer to lpfc mbox command to initialize. 2167 2164 * 2168 2165 * The PORT_CAPABILITIES SLI4 parameters mailbox command is issued to 2169 2166 * retrieve the particular SLI4 features supported by the port. 2170 2167 **/ 2171 2168 void 2172 - lpfc_sli4_params(struct lpfcMboxq *mbox) 2169 + lpfc_pc_sli4_params(struct lpfcMboxq *mbox) 2173 2170 { 2174 - struct lpfc_mbx_sli4_params *sli4_params; 2171 + struct lpfc_mbx_pc_sli4_params *sli4_params; 2175 2172 2176 2173 memset(mbox, 0, sizeof(*mbox)); 2177 2174 sli4_params = &mbox->u.mqe.un.sli4_params;
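The lpfc_sli4_config() hunk above changes the length convention: callers such as lpfc_get_sli4_parameters() now pass the combined sub-header plus payload size, and both header fields are derived from that one value. A standalone restatement of the arithmetic (illustrative helper, not a driver function):

    #include <linux/types.h>

    /* length = sub-header + payload, as callers now compute it, e.g.
     * sizeof(struct lpfc_mbx_get_sli4_parameters) -
     * sizeof(struct lpfc_sli4_cfg_mhdr). */
    static void sli4_cfg_lengths(u32 length, u32 sub_hdr_len,
                                 u32 *payload_length, u32 *request_length)
    {
            *payload_length = length;                /* main header field */
            *request_length = length - sub_hdr_len;  /* sub-header field */
    }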
+8 -3
drivers/scsi/lpfc/lpfc_nportdisc.c
··· 350 350 ndlp->nlp_maxframe = 351 351 ((sp->cmn.bbRcvSizeMsb & 0x0F) << 8) | sp->cmn.bbRcvSizeLsb; 352 352 353 - /* no need to reg_login if we are already in one of these states */ 353 + /* 354 + * Need to unreg_login if we are already in one of these states and 355 + * change to NPR state. This will block the port until after the ACC 356 + * completes and the reg_login is issued and completed. 357 + */ 354 358 switch (ndlp->nlp_state) { 355 359 case NLP_STE_NPR_NODE: 356 360 if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) ··· 363 359 case NLP_STE_PRLI_ISSUE: 364 360 case NLP_STE_UNMAPPED_NODE: 365 361 case NLP_STE_MAPPED_NODE: 366 - lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, cmdiocb, ndlp, NULL); 367 - return 1; 362 + lpfc_unreg_rpi(vport, ndlp); 363 + ndlp->nlp_prev_state = ndlp->nlp_state; 364 + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); 368 365 } 369 366 370 367 if ((vport->fc_flag & FC_PT2PT) &&
+85 -28
drivers/scsi/lpfc/lpfc_scsi.c
··· 609 609 } 610 610 611 611 /** 612 + * lpfc_sli4_vport_delete_fcp_xri_aborted -Remove all ndlp references for vport 613 + * @vport: pointer to lpfc vport data structure. 614 + * 615 + * This routine is invoked by the vport cleanup for deletions and the cleanup 616 + * for an ndlp on removal. 617 + **/ 618 + void 619 + lpfc_sli4_vport_delete_fcp_xri_aborted(struct lpfc_vport *vport) 620 + { 621 + struct lpfc_hba *phba = vport->phba; 622 + struct lpfc_scsi_buf *psb, *next_psb; 623 + unsigned long iflag = 0; 624 + 625 + spin_lock_irqsave(&phba->hbalock, iflag); 626 + spin_lock(&phba->sli4_hba.abts_scsi_buf_list_lock); 627 + list_for_each_entry_safe(psb, next_psb, 628 + &phba->sli4_hba.lpfc_abts_scsi_buf_list, list) { 629 + if (psb->rdata && psb->rdata->pnode 630 + && psb->rdata->pnode->vport == vport) 631 + psb->rdata = NULL; 632 + } 633 + spin_unlock(&phba->sli4_hba.abts_scsi_buf_list_lock); 634 + spin_unlock_irqrestore(&phba->hbalock, iflag); 635 + } 636 + 637 + /** 612 638 * lpfc_sli4_fcp_xri_aborted - Fast-path process of fcp xri abort 613 639 * @phba: pointer to lpfc hba data structure. 614 640 * @axri: pointer to the fcp xri abort wcqe structure. ··· 666 640 psb->status = IOSTAT_SUCCESS; 667 641 spin_unlock( 668 642 &phba->sli4_hba.abts_scsi_buf_list_lock); 669 - ndlp = psb->rdata->pnode; 643 + if (psb->rdata && psb->rdata->pnode) 644 + ndlp = psb->rdata->pnode; 645 + else 646 + ndlp = NULL; 647 + 670 648 rrq_empty = list_empty(&phba->active_rrq_list); 671 649 spin_unlock_irqrestore(&phba->hbalock, iflag); 672 650 if (ndlp) ··· 994 964 static struct lpfc_scsi_buf* 995 965 lpfc_get_scsi_buf_s4(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp) 996 966 { 997 - struct lpfc_scsi_buf *lpfc_cmd = NULL; 998 - struct lpfc_scsi_buf *start_lpfc_cmd = NULL; 999 - struct list_head *scsi_buf_list = &phba->lpfc_scsi_buf_list; 967 + struct lpfc_scsi_buf *lpfc_cmd ; 1000 968 unsigned long iflag = 0; 1001 969 int found = 0; 1002 970 1003 971 spin_lock_irqsave(&phba->scsi_buf_list_lock, iflag); 1004 - list_remove_head(scsi_buf_list, lpfc_cmd, struct lpfc_scsi_buf, list); 1005 - spin_unlock_irqrestore(&phba->scsi_buf_list_lock, iflag); 1006 - while (!found && lpfc_cmd) { 972 + list_for_each_entry(lpfc_cmd, &phba->lpfc_scsi_buf_list, 973 + list) { 1007 974 if (lpfc_test_rrq_active(phba, ndlp, 1008 - lpfc_cmd->cur_iocbq.sli4_xritag)) { 1009 - lpfc_release_scsi_buf_s4(phba, lpfc_cmd); 1010 - spin_lock_irqsave(&phba->scsi_buf_list_lock, iflag); 1011 - list_remove_head(scsi_buf_list, lpfc_cmd, 1012 - struct lpfc_scsi_buf, list); 1013 - spin_unlock_irqrestore(&phba->scsi_buf_list_lock, 1014 - iflag); 1015 - if (lpfc_cmd == start_lpfc_cmd) { 1016 - lpfc_cmd = NULL; 1017 - break; 1018 - } else 1019 - continue; 1020 - } 975 + lpfc_cmd->cur_iocbq.sli4_xritag)) 976 + continue; 977 + list_del(&lpfc_cmd->list); 1021 978 found = 1; 1022 979 lpfc_cmd->seg_cnt = 0; 1023 980 lpfc_cmd->nonsg_phys = 0; 1024 981 lpfc_cmd->prot_seg_cnt = 0; 982 + break; 1025 983 } 1026 - return lpfc_cmd; 984 + spin_unlock_irqrestore(&phba->scsi_buf_list_lock, 985 + iflag); 986 + if (!found) 987 + return NULL; 988 + else 989 + return lpfc_cmd; 1027 990 } 1028 991 /** 1029 992 * lpfc_get_scsi_buf - Get a scsi buffer from lpfc_scsi_buf_list of the HBA ··· 2004 1981 struct scatterlist *sgel = NULL; 2005 1982 struct fcp_cmnd *fcp_cmnd = lpfc_cmd->fcp_cmnd; 2006 1983 struct sli4_sge *sgl = (struct sli4_sge *)lpfc_cmd->fcp_bpl; 1984 + struct sli4_sge *first_data_sgl; 2007 1985 IOCB_t *iocb_cmd = &lpfc_cmd->cur_iocbq.iocb; 2008 1986 dma_addr_t 
physaddr; 2009 1987 uint32_t num_bde = 0; 2010 1988 uint32_t dma_len; 2011 1989 uint32_t dma_offset = 0; 2012 1990 int nseg; 1991 + struct ulp_bde64 *bde; 2013 1992 2014 1993 /* 2015 1994 * There are three possibilities here - use scatter-gather segment, use ··· 2036 2011 bf_set(lpfc_sli4_sge_last, sgl, 0); 2037 2012 sgl->word2 = cpu_to_le32(sgl->word2); 2038 2013 sgl += 1; 2039 - 2014 + first_data_sgl = sgl; 2040 2015 lpfc_cmd->seg_cnt = nseg; 2041 2016 if (lpfc_cmd->seg_cnt > phba->cfg_sg_seg_cnt) { 2042 2017 lpfc_printf_log(phba, KERN_ERR, LOG_BG, "9074 BLKGRD:" ··· 2071 2046 sgl->sge_len = cpu_to_le32(dma_len); 2072 2047 dma_offset += dma_len; 2073 2048 sgl++; 2049 + } 2050 + /* setup the performance hint (first data BDE) if enabled */ 2051 + if (phba->sli3_options & LPFC_SLI4_PERFH_ENABLED) { 2052 + bde = (struct ulp_bde64 *) 2053 + &(iocb_cmd->unsli3.sli3Words[5]); 2054 + bde->addrLow = first_data_sgl->addr_lo; 2055 + bde->addrHigh = first_data_sgl->addr_hi; 2056 + bde->tus.f.bdeSize = 2057 + le32_to_cpu(first_data_sgl->sge_len); 2058 + bde->tus.f.bdeFlags = BUFF_TYPE_BDE_64; 2059 + bde->tus.w = cpu_to_le32(bde->tus.w); 2074 2060 } 2075 2061 } else { 2076 2062 sgl += 1; ··· 2507 2471 lpfc_worker_wake_up(phba); 2508 2472 break; 2509 2473 case IOSTAT_LOCAL_REJECT: 2474 + case IOSTAT_REMOTE_STOP: 2475 + if (lpfc_cmd->result == IOERR_ELXSEC_KEY_UNWRAP_ERROR || 2476 + lpfc_cmd->result == 2477 + IOERR_ELXSEC_KEY_UNWRAP_COMPARE_ERROR || 2478 + lpfc_cmd->result == IOERR_ELXSEC_CRYPTO_ERROR || 2479 + lpfc_cmd->result == 2480 + IOERR_ELXSEC_CRYPTO_COMPARE_ERROR) { 2481 + cmd->result = ScsiResult(DID_NO_CONNECT, 0); 2482 + break; 2483 + } 2510 2484 if (lpfc_cmd->result == IOERR_INVALID_RPI || 2511 2485 lpfc_cmd->result == IOERR_NO_RESOURCES || 2512 2486 lpfc_cmd->result == IOERR_ABORT_REQUESTED || ··· 2524 2478 cmd->result = ScsiResult(DID_REQUEUE, 0); 2525 2479 break; 2526 2480 } 2527 - 2528 2481 if ((lpfc_cmd->result == IOERR_RX_DMA_FAILED || 2529 2482 lpfc_cmd->result == IOERR_TX_DMA_FAILED) && 2530 2483 pIocbOut->iocb.unsli3.sli3_bg.bgstat) { ··· 2542 2497 "on unprotected cmd\n"); 2543 2498 } 2544 2499 } 2545 - 2500 + if ((lpfc_cmd->status == IOSTAT_REMOTE_STOP) 2501 + && (phba->sli_rev == LPFC_SLI_REV4) 2502 + && (pnode && NLP_CHK_NODE_ACT(pnode))) { 2503 + /* This IO was aborted by the target, we don't 2504 + * know the rxid and because we did not send the 2505 + * ABTS we cannot generate and RRQ. 2506 + */ 2507 + lpfc_set_rrq_active(phba, pnode, 2508 + lpfc_cmd->cur_iocbq.sli4_xritag, 2509 + 0, 0); 2510 + } 2546 2511 /* else: fall through */ 2547 2512 default: 2548 2513 cmd->result = ScsiResult(DID_ERROR, 0); ··· 2563 2508 || (pnode->nlp_state != NLP_STE_MAPPED_NODE)) 2564 2509 cmd->result = ScsiResult(DID_TRANSPORT_DISRUPTED, 2565 2510 SAM_STAT_BUSY); 2566 - } else { 2511 + } else 2567 2512 cmd->result = ScsiResult(DID_OK, 0); 2568 - } 2569 2513 2570 2514 if (cmd->result || lpfc_cmd->fcp_rsp->rspSnsLen) { 2571 2515 uint32_t *lp = (uint32_t *)cmd->sense_buffer; ··· 3058 3004 * transport is still transitioning. 
3059 3005 */ 3060 3006 if (!ndlp || !NLP_CHK_NODE_ACT(ndlp)) { 3061 - cmnd->result = ScsiResult(DID_TRANSPORT_DISRUPTED, 0); 3007 + cmnd->result = ScsiResult(DID_IMM_RETRY, 0); 3062 3008 goto out_fail_command; 3063 3009 } 3064 3010 if (atomic_read(&ndlp->cmd_pending) >= ndlp->cmd_qdepth) 3065 - goto out_host_busy; 3011 + goto out_tgt_busy; 3066 3012 3067 3013 lpfc_cmd = lpfc_get_scsi_buf(phba, ndlp); 3068 3014 if (lpfc_cmd == NULL) { ··· 3178 3124 lpfc_release_scsi_buf(phba, lpfc_cmd); 3179 3125 out_host_busy: 3180 3126 return SCSI_MLQUEUE_HOST_BUSY; 3127 + 3128 + out_tgt_busy: 3129 + return SCSI_MLQUEUE_TARGET_BUSY; 3181 3130 3182 3131 out_fail_command: 3183 3132 done(cmnd);
+109 -110
drivers/scsi/lpfc/lpfc_sli.c
··· 96 96 /* set consumption flag every once in a while */ 97 97 if (!((q->host_index + 1) % LPFC_RELEASE_NOTIFICATION_INTERVAL)) 98 98 bf_set(wqe_wqec, &wqe->generic.wqe_com, 1); 99 - 99 + if (q->phba->sli3_options & LPFC_SLI4_PHWQ_ENABLED) 100 + bf_set(wqe_wqid, &wqe->generic.wqe_com, q->queue_id); 100 101 lpfc_sli_pcimem_bcopy(wqe, temp_wqe, q->entry_size); 101 102 102 103 /* Update the host index before invoking device */ ··· 535 534 uint16_t adj_xri; 536 535 struct lpfc_node_rrq *rrq; 537 536 int empty; 537 + uint32_t did = 0; 538 + 539 + 540 + if (!ndlp) 541 + return -EINVAL; 542 + 543 + if (!phba->cfg_enable_rrq) 544 + return -EINVAL; 545 + 546 + if (phba->pport->load_flag & FC_UNLOADING) { 547 + phba->hba_flag &= ~HBA_RRQ_ACTIVE; 548 + goto out; 549 + } 550 + did = ndlp->nlp_DID; 538 551 539 552 /* 540 553 * set the active bit even if there is no mem available. 541 554 */ 542 555 adj_xri = xritag - phba->sli4_hba.max_cfg_param.xri_base; 543 - if (!ndlp) 544 - return -EINVAL; 556 + 557 + if (NLP_CHK_FREE_REQ(ndlp)) 558 + goto out; 559 + 560 + if (ndlp->vport && (ndlp->vport->load_flag & FC_UNLOADING)) 561 + goto out; 562 + 545 563 if (test_and_set_bit(adj_xri, ndlp->active_rrqs.xri_bitmap)) 546 - return -EINVAL; 564 + goto out; 565 + 547 566 rrq = mempool_alloc(phba->rrq_pool, GFP_KERNEL); 548 567 if (rrq) { 549 568 rrq->send_rrq = send_rrq; ··· 574 553 rrq->vport = ndlp->vport; 575 554 rrq->rxid = rxid; 576 555 empty = list_empty(&phba->active_rrq_list); 577 - if (phba->cfg_enable_rrq && send_rrq) 578 - /* 579 - * We need the xri before we can add this to the 580 - * phba active rrq list. 581 - */ 582 - rrq->send_rrq = send_rrq; 583 - else 584 - rrq->send_rrq = 0; 556 + rrq->send_rrq = send_rrq; 585 557 list_add_tail(&rrq->list, &phba->active_rrq_list); 586 558 if (!(phba->hba_flag & HBA_RRQ_ACTIVE)) { 587 559 phba->hba_flag |= HBA_RRQ_ACTIVE; ··· 583 569 } 584 570 return 0; 585 571 } 586 - return -ENOMEM; 572 + out: 573 + lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 574 + "2921 Can't set rrq active xri:0x%x rxid:0x%x" 575 + " DID:0x%x Send:%d\n", 576 + xritag, rxid, did, send_rrq); 577 + return -EINVAL; 587 578 } 588 579 589 580 /** 590 - * __lpfc_clr_rrq_active - Clears RRQ active bit in xri_bitmap. 581 + * lpfc_clr_rrq_active - Clears RRQ active bit in xri_bitmap. 591 582 * @phba: Pointer to HBA context object. 592 583 * @xritag: xri used in this exchange. 593 584 * @rrq: The RRQ to be cleared. 594 585 * 595 - * This function is called with hbalock held. This function 596 586 **/ 597 - static void 598 - __lpfc_clr_rrq_active(struct lpfc_hba *phba, 599 - uint16_t xritag, 600 - struct lpfc_node_rrq *rrq) 587 + void 588 + lpfc_clr_rrq_active(struct lpfc_hba *phba, 589 + uint16_t xritag, 590 + struct lpfc_node_rrq *rrq) 601 591 { 602 592 uint16_t adj_xri; 603 - struct lpfc_nodelist *ndlp; 593 + struct lpfc_nodelist *ndlp = NULL; 604 594 605 - ndlp = lpfc_findnode_did(rrq->vport, rrq->nlp_DID); 595 + if ((rrq->vport) && NLP_CHK_NODE_ACT(rrq->ndlp)) 596 + ndlp = lpfc_findnode_did(rrq->vport, rrq->nlp_DID); 606 597 607 598 /* The target DID could have been swapped (cable swap) 608 599 * we should use the ndlp from the findnode if it is 609 600 * available. 
610 601 */ 611 - if (!ndlp) 602 + if ((!ndlp) && rrq->ndlp) 612 603 ndlp = rrq->ndlp; 604 + 605 + if (!ndlp) 606 + goto out; 613 607 614 608 adj_xri = xritag - phba->sli4_hba.max_cfg_param.xri_base; 615 609 if (test_and_clear_bit(adj_xri, ndlp->active_rrqs.xri_bitmap)) { ··· 625 603 rrq->xritag = 0; 626 604 rrq->rrq_stop_time = 0; 627 605 } 606 + out: 628 607 mempool_free(rrq, phba->rrq_pool); 629 608 } 630 609 ··· 650 627 struct lpfc_node_rrq *nextrrq; 651 628 unsigned long next_time; 652 629 unsigned long iflags; 630 + LIST_HEAD(send_rrq); 653 631 654 632 spin_lock_irqsave(&phba->hbalock, iflags); 655 633 phba->hba_flag &= ~HBA_RRQ_ACTIVE; 656 634 next_time = jiffies + HZ * (phba->fc_ratov + 1); 657 635 list_for_each_entry_safe(rrq, nextrrq, 658 - &phba->active_rrq_list, list) { 659 - if (time_after(jiffies, rrq->rrq_stop_time)) { 660 - list_del(&rrq->list); 661 - if (!rrq->send_rrq) 662 - /* this call will free the rrq */ 663 - __lpfc_clr_rrq_active(phba, rrq->xritag, rrq); 664 - else { 665 - /* if we send the rrq then the completion handler 666 - * will clear the bit in the xribitmap. 667 - */ 668 - spin_unlock_irqrestore(&phba->hbalock, iflags); 669 - if (lpfc_send_rrq(phba, rrq)) { 670 - lpfc_clr_rrq_active(phba, rrq->xritag, 671 - rrq); 672 - } 673 - spin_lock_irqsave(&phba->hbalock, iflags); 674 - } 675 - } else if (time_before(rrq->rrq_stop_time, next_time)) 636 + &phba->active_rrq_list, list) { 637 + if (time_after(jiffies, rrq->rrq_stop_time)) 638 + list_move(&rrq->list, &send_rrq); 639 + else if (time_before(rrq->rrq_stop_time, next_time)) 676 640 next_time = rrq->rrq_stop_time; 677 641 } 678 642 spin_unlock_irqrestore(&phba->hbalock, iflags); 679 643 if (!list_empty(&phba->active_rrq_list)) 680 644 mod_timer(&phba->rrq_tmr, next_time); 645 + list_for_each_entry_safe(rrq, nextrrq, &send_rrq, list) { 646 + list_del(&rrq->list); 647 + if (!rrq->send_rrq) 648 + /* this call will free the rrq */ 649 + lpfc_clr_rrq_active(phba, rrq->xritag, rrq); 650 + else if (lpfc_send_rrq(phba, rrq)) { 651 + /* if we send the rrq then the completion handler 652 + * will clear the bit in the xribitmap. 653 + */ 654 + lpfc_clr_rrq_active(phba, rrq->xritag, 655 + rrq); 656 + } 657 + } 681 658 } 682 659 683 660 /** ··· 715 692 /** 716 693 * lpfc_cleanup_vports_rrqs - Remove and clear the active RRQ for this vport. 717 694 * @vport: Pointer to vport context object. 718 - * 719 - * Remove all active RRQs for this vport from the phba->active_rrq_list and 720 - * clear the rrq. 695 + * @ndlp: Pointer to the lpfc_node_list structure. 696 + * If ndlp is NULL Remove all active RRQs for this vport from the 697 + * phba->active_rrq_list and clear the rrq. 698 + * If ndlp is not NULL then only remove rrqs for this vport & this ndlp. 
721 699 **/ 722 700 void 723 - lpfc_cleanup_vports_rrqs(struct lpfc_vport *vport) 701 + lpfc_cleanup_vports_rrqs(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) 724 702 725 703 { 726 704 struct lpfc_hba *phba = vport->phba; 727 705 struct lpfc_node_rrq *rrq; 728 706 struct lpfc_node_rrq *nextrrq; 729 707 unsigned long iflags; 708 + LIST_HEAD(rrq_list); 730 709 731 710 if (phba->sli_rev != LPFC_SLI_REV4) 732 711 return; 733 - spin_lock_irqsave(&phba->hbalock, iflags); 734 - list_for_each_entry_safe(rrq, nextrrq, &phba->active_rrq_list, list) { 735 - if (rrq->vport == vport) { 736 - list_del(&rrq->list); 737 - __lpfc_clr_rrq_active(phba, rrq->xritag, rrq); 738 - } 712 + if (!ndlp) { 713 + lpfc_sli4_vport_delete_els_xri_aborted(vport); 714 + lpfc_sli4_vport_delete_fcp_xri_aborted(vport); 739 715 } 716 + spin_lock_irqsave(&phba->hbalock, iflags); 717 + list_for_each_entry_safe(rrq, nextrrq, &phba->active_rrq_list, list) 718 + if ((rrq->vport == vport) && (!ndlp || rrq->ndlp == ndlp)) 719 + list_move(&rrq->list, &rrq_list); 740 720 spin_unlock_irqrestore(&phba->hbalock, iflags); 721 + 722 + list_for_each_entry_safe(rrq, nextrrq, &rrq_list, list) { 723 + list_del(&rrq->list); 724 + lpfc_clr_rrq_active(phba, rrq->xritag, rrq); 725 + } 741 726 } 742 727 743 728 /** ··· 763 732 struct lpfc_node_rrq *nextrrq; 764 733 unsigned long next_time; 765 734 unsigned long iflags; 735 + LIST_HEAD(rrq_list); 766 736 767 737 if (phba->sli_rev != LPFC_SLI_REV4) 768 738 return; 769 739 spin_lock_irqsave(&phba->hbalock, iflags); 770 740 phba->hba_flag &= ~HBA_RRQ_ACTIVE; 771 741 next_time = jiffies + HZ * (phba->fc_ratov * 2); 772 - list_for_each_entry_safe(rrq, nextrrq, &phba->active_rrq_list, list) { 773 - list_del(&rrq->list); 774 - __lpfc_clr_rrq_active(phba, rrq->xritag, rrq); 775 - } 742 + list_splice_init(&phba->active_rrq_list, &rrq_list); 776 743 spin_unlock_irqrestore(&phba->hbalock, iflags); 744 + 745 + list_for_each_entry_safe(rrq, nextrrq, &rrq_list, list) { 746 + list_del(&rrq->list); 747 + lpfc_clr_rrq_active(phba, rrq->xritag, rrq); 748 + } 777 749 if (!list_empty(&phba->active_rrq_list)) 778 750 mod_timer(&phba->rrq_tmr, next_time); 779 751 } 780 752 781 753 782 754 /** 783 - * __lpfc_test_rrq_active - Test RRQ bit in xri_bitmap. 755 + * lpfc_test_rrq_active - Test RRQ bit in xri_bitmap. 784 756 * @phba: Pointer to HBA context object. 785 757 * @ndlp: Targets nodelist pointer for this exchange. 786 758 * @xritag the xri in the bitmap to test. ··· 792 758 * returns 0 = rrq not active for this xri 793 759 * 1 = rrq is valid for this xri. 794 760 **/ 795 - static int 796 - __lpfc_test_rrq_active(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp, 761 + int 762 + lpfc_test_rrq_active(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp, 797 763 uint16_t xritag) 798 764 { 799 765 uint16_t adj_xri; ··· 836 802 } 837 803 838 804 /** 839 - * lpfc_clr_rrq_active - Clears RRQ active bit in xri_bitmap. 840 - * @phba: Pointer to HBA context object. 841 - * @xritag: xri used in this exchange. 842 - * @rrq: The RRQ to be cleared. 843 - * 844 - * This function is takes the hbalock. 845 - **/ 846 - void 847 - lpfc_clr_rrq_active(struct lpfc_hba *phba, 848 - uint16_t xritag, 849 - struct lpfc_node_rrq *rrq) 850 - { 851 - unsigned long iflags; 852 - 853 - spin_lock_irqsave(&phba->hbalock, iflags); 854 - __lpfc_clr_rrq_active(phba, xritag, rrq); 855 - spin_unlock_irqrestore(&phba->hbalock, iflags); 856 - return; 857 - } 858 - 859 - 860 - 861 - /** 862 - * lpfc_test_rrq_active - Test RRQ bit in xri_bitmap. 
863 - * @phba: Pointer to HBA context object. 864 - * @ndlp: Targets nodelist pointer for this exchange. 865 - * @xritag the xri in the bitmap to test. 866 - * 867 - * This function takes the hbalock. 868 - * returns 0 = rrq not active for this xri 869 - * 1 = rrq is valid for this xri. 870 - **/ 871 - int 872 - lpfc_test_rrq_active(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp, 873 - uint16_t xritag) 874 - { 875 - int ret; 876 - unsigned long iflags; 877 - 878 - spin_lock_irqsave(&phba->hbalock, iflags); 879 - ret = __lpfc_test_rrq_active(phba, ndlp, xritag); 880 - spin_unlock_irqrestore(&phba->hbalock, iflags); 881 - return ret; 882 - } 883 - 884 - /** 885 805 * __lpfc_sli_get_sglq - Allocates an iocb object from sgl pool 886 806 * @phba: Pointer to HBA context object. 887 807 * @piocb: Pointer to the iocbq. ··· 872 884 return NULL; 873 885 adj_xri = sglq->sli4_xritag - 874 886 phba->sli4_hba.max_cfg_param.xri_base; 875 - if (__lpfc_test_rrq_active(phba, ndlp, sglq->sli4_xritag)) { 887 + if (lpfc_test_rrq_active(phba, ndlp, sglq->sli4_xritag)) { 876 888 /* This xri has an rrq outstanding for this DID. 877 889 * put it back in the list and get another xri. 878 890 */ ··· 957 969 } else { 958 970 sglq->state = SGL_FREED; 959 971 sglq->ndlp = NULL; 960 - list_add(&sglq->list, &phba->sli4_hba.lpfc_sgl_list); 972 + list_add_tail(&sglq->list, 973 + &phba->sli4_hba.lpfc_sgl_list); 961 974 962 975 /* Check if TXQ queue needs to be serviced */ 963 976 if (pring->txq_cnt) ··· 4806 4817 "0378 No support for fcpi mode.\n"); 4807 4818 ftr_rsp++; 4808 4819 } 4809 - 4820 + if (bf_get(lpfc_mbx_rq_ftr_rsp_perfh, &mqe->un.req_ftrs)) 4821 + phba->sli3_options |= LPFC_SLI4_PERFH_ENABLED; 4822 + else 4823 + phba->sli3_options &= ~LPFC_SLI4_PERFH_ENABLED; 4810 4824 /* 4811 4825 * If the port cannot support the host's requested features 4812 4826 * then turn off the global config parameters to disable the ··· 4996 5004 spin_lock_irq(&phba->hbalock); 4997 5005 phba->link_state = LPFC_LINK_DOWN; 4998 5006 spin_unlock_irq(&phba->hbalock); 4999 - rc = phba->lpfc_hba_init_link(phba, MBX_NOWAIT); 5007 + if (phba->cfg_suppress_link_up == LPFC_INITIALIZE_LINK) 5008 + rc = phba->lpfc_hba_init_link(phba, MBX_NOWAIT); 5000 5009 out_unset_queue: 5001 5010 /* Unset all the queues set up in this routine when error out */ 5002 5011 if (rc) ··· 10471 10478 cq->type = type; 10472 10479 cq->subtype = subtype; 10473 10480 cq->queue_id = bf_get(lpfc_mbx_cq_create_q_id, &cq_create->u.response); 10481 + cq->assoc_qid = eq->queue_id; 10474 10482 cq->host_index = 0; 10475 10483 cq->hba_index = 0; 10476 10484 ··· 10666 10672 goto out; 10667 10673 } 10668 10674 mq->type = LPFC_MQ; 10675 + mq->assoc_qid = cq->queue_id; 10669 10676 mq->subtype = subtype; 10670 10677 mq->host_index = 0; 10671 10678 mq->hba_index = 0; ··· 10754 10759 goto out; 10755 10760 } 10756 10761 wq->type = LPFC_WQ; 10762 + wq->assoc_qid = cq->queue_id; 10757 10763 wq->subtype = subtype; 10758 10764 wq->host_index = 0; 10759 10765 wq->hba_index = 0; ··· 10872 10876 goto out; 10873 10877 } 10874 10878 hrq->type = LPFC_HRQ; 10879 + hrq->assoc_qid = cq->queue_id; 10875 10880 hrq->subtype = subtype; 10876 10881 hrq->host_index = 0; 10877 10882 hrq->hba_index = 0; ··· 10933 10936 goto out; 10934 10937 } 10935 10938 drq->type = LPFC_DRQ; 10939 + drq->assoc_qid = cq->queue_id; 10936 10940 drq->subtype = subtype; 10937 10941 drq->host_index = 0; 10938 10942 drq->hba_index = 0; ··· 11187 11189 if (!mbox) 11188 11190 return -ENOMEM; 11189 11191 length = (sizeof(struct 
lpfc_mbx_rq_destroy) - 11190 - sizeof(struct mbox_header)); 11192 + sizeof(struct lpfc_sli4_cfg_mhdr)); 11191 11193 lpfc_sli4_config(phba, mbox, LPFC_MBOX_SUBSYSTEM_FCOE, 11192 11194 LPFC_MBOX_OPCODE_FCOE_RQ_DESTROY, 11193 11195 length, LPFC_SLI4_MBX_EMBED); ··· 11277 11279 lpfc_sli4_config(phba, mbox, LPFC_MBOX_SUBSYSTEM_FCOE, 11278 11280 LPFC_MBOX_OPCODE_FCOE_POST_SGL_PAGES, 11279 11281 sizeof(struct lpfc_mbx_post_sgl_pages) - 11280 - sizeof(struct mbox_header), LPFC_SLI4_MBX_EMBED); 11282 + sizeof(struct lpfc_sli4_cfg_mhdr), LPFC_SLI4_MBX_EMBED); 11281 11283 11282 11284 post_sgl_pages = (struct lpfc_mbx_post_sgl_pages *) 11283 11285 &mbox->u.mqe.un.post_sgl_pages; ··· 12400 12402 lpfc_sli4_config(phba, mboxq, LPFC_MBOX_SUBSYSTEM_FCOE, 12401 12403 LPFC_MBOX_OPCODE_FCOE_POST_HDR_TEMPLATE, 12402 12404 sizeof(struct lpfc_mbx_post_hdr_tmpl) - 12403 - sizeof(struct mbox_header), LPFC_SLI4_MBX_EMBED); 12405 + sizeof(struct lpfc_sli4_cfg_mhdr), 12406 + LPFC_SLI4_MBX_EMBED); 12404 12407 bf_set(lpfc_mbx_post_hdr_tmpl_page_cnt, 12405 12408 hdr_tmpl, rpi_page->page_count); 12406 12409 bf_set(lpfc_mbx_post_hdr_tmpl_rpi_offset, hdr_tmpl,
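A recurring idiom in the RRQ rework above: while the hbalock is held, matching entries are only moved onto a private list (list_move()/list_splice_init()); the actual freeing or RRQ send happens after the lock is dropped, since those paths may block. A compact userspace sketch of that splice-then-process pattern (the types and the handle() helper are illustrative):

    #include <pthread.h>
    #include <stddef.h>

    struct rrq { struct rrq *next; int expired; };

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static struct rrq *active;

    static void handle(struct rrq *r)
    {
        (void)r;                /* may block; the lock is not held here */
    }

    void timeout_scan(void)
    {
        struct rrq *expired = NULL, **pp, *r;

        pthread_mutex_lock(&lock);
        for (pp = &active; (r = *pp) != NULL; ) {
            if (r->expired) {
                *pp = r->next;      /* unlink from the active list */
                r->next = expired;  /* collect on a private list   */
                expired = r;
            } else {
                pp = &r->next;
            }
        }
        pthread_mutex_unlock(&lock);

        while ((r = expired) != NULL) { /* safe to sleep/send from here */
            expired = r->next;
            handle(r);
        }
    }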
+7 -1
drivers/scsi/lpfc/lpfc_sli4.h
··· 125 125 uint32_t entry_count; /* Number of entries to support on the queue */ 126 126 uint32_t entry_size; /* Size of each queue entry. */ 127 127 uint32_t queue_id; /* Queue ID assigned by the hardware */ 128 + uint32_t assoc_qid; /* Queue ID associated with, for CQ/WQ/MQ */ 128 129 struct list_head page_list; 129 130 uint32_t page_count; /* Number of pages allocated for this queue */ 130 - 131 131 uint32_t host_index; /* The host's index for putting or getting */ 132 132 uint32_t hba_index; /* The last known hba index for get or put */ 133 133 union sli4_qe qe[1]; /* array to index entries (must be last) */ ··· 359 359 uint32_t hdr_pp_align; 360 360 uint32_t sgl_pages_max; 361 361 uint32_t sgl_pp_align; 362 + uint8_t cqv; 363 + uint8_t mqv; 364 + uint8_t wqv; 365 + uint8_t rqv; 362 366 }; 363 367 364 368 /* SLI4 HBA data structure entries */ ··· 566 562 struct sli4_wcqe_xri_aborted *); 567 563 void lpfc_sli4_els_xri_aborted(struct lpfc_hba *, 568 564 struct sli4_wcqe_xri_aborted *); 565 + void lpfc_sli4_vport_delete_els_xri_aborted(struct lpfc_vport *); 566 + void lpfc_sli4_vport_delete_fcp_xri_aborted(struct lpfc_vport *); 569 567 int lpfc_sli4_brdreset(struct lpfc_hba *); 570 568 int lpfc_sli4_add_fcf_record(struct lpfc_hba *, struct fcf_record *); 571 569 void lpfc_sli_remove_dflt_fcf(struct lpfc_hba *);
+2 -2
drivers/scsi/lpfc/lpfc_version.h
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2004-2010 Emulex. All rights reserved. * 4 + * Copyright (C) 2004-2011 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * * ··· 18 18 * included with this package. * 19 19 *******************************************************************/ 20 20 21 - #define LPFC_DRIVER_VERSION "8.3.20" 21 + #define LPFC_DRIVER_VERSION "8.3.21" 22 22 #define LPFC_DRIVER_NAME "lpfc" 23 23 #define LPFC_SP_DRIVER_HANDLER_NAME "lpfc:sp" 24 24 #define LPFC_FP_DRIVER_HANDLER_NAME "lpfc:fp"
+4
drivers/scsi/lpfc/lpfc_vport.c
··· 464 464 struct lpfc_hba *phba = vport->phba; 465 465 struct lpfc_nodelist *ndlp = NULL, *next_ndlp = NULL; 466 466 long timeout; 467 + struct Scsi_Host *shost = lpfc_shost_from_vport(vport); 467 468 468 469 ndlp = lpfc_findnode_did(vport, Fabric_DID); 469 470 if (ndlp && NLP_CHK_NODE_ACT(ndlp) ··· 499 498 * scsi_host_put() to release the vport. 500 499 */ 501 500 lpfc_mbx_unreg_vpi(vport); 501 + spin_lock_irq(shost->host_lock); 502 + vport->fc_flag |= FC_VPORT_NEEDS_INIT_VPI; 503 + spin_unlock_irq(shost->host_lock); 502 504 503 505 lpfc_vport_set_state(vport, FC_VPORT_DISABLED); 504 506 lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
+7 -3
drivers/scsi/megaraid/megaraid_sas.h
··· 33 33 /* 34 34 * MegaRAID SAS Driver meta data 35 35 */ 36 - #define MEGASAS_VERSION "00.00.05.29-rc1" 37 - #define MEGASAS_RELDATE "Dec. 7, 2010" 38 - #define MEGASAS_EXT_VERSION "Tue. Dec. 7 17:00:00 PDT 2010" 36 + #define MEGASAS_VERSION "00.00.05.34-rc1" 37 + #define MEGASAS_RELDATE "Feb. 24, 2011" 38 + #define MEGASAS_EXT_VERSION "Thu. Feb. 24 17:00:00 PDT 2011" 39 39 40 40 /* 41 41 * Device IDs ··· 723 723 MEGASAS_MAX_DEV_PER_CHANNEL) 724 724 725 725 #define MEGASAS_MAX_SECTORS (2*1024) 726 + #define MEGASAS_MAX_SECTORS_IEEE (2*128) 726 727 #define MEGASAS_DBG_LVL 1 727 728 728 729 #define MEGASAS_FW_BUSY 1 ··· 1477 1476 struct megasas_instance *instance[MAX_MGMT_ADAPTERS]; 1478 1477 int max_index; 1479 1478 }; 1479 + 1480 + #define msi_control_reg(base) (base + PCI_MSI_FLAGS) 1481 + #define PCI_MSIX_FLAGS_ENABLE (1 << 15) 1480 1482 1481 1483 #endif /*LSI_MEGARAID_SAS_H */
+93 -54
drivers/scsi/megaraid/megaraid_sas_base.c
··· 18 18 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 19 19 * 20 20 * FILE: megaraid_sas_base.c 21 - * Version : v00.00.05.29-rc1 21 + * Version : v00.00.05.34-rc1 22 22 * 23 23 * Authors: LSI Corporation 24 24 * Sreenivas Bagalkote 25 25 * Sumant Patro 26 26 * Bo Yang 27 + * Adam Radford <linuxraid@lsi.com> 27 28 * 28 29 * Send feedback to: <megaraidlinux@lsi.com> 29 30 * ··· 135 134 void 136 135 megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd, 137 136 u8 alt_status); 138 - 137 + static u32 138 + megasas_read_fw_status_reg_gen2(struct megasas_register_set __iomem *regs); 139 + static int 140 + megasas_adp_reset_gen2(struct megasas_instance *instance, 141 + struct megasas_register_set __iomem *reg_set); 139 142 static irqreturn_t megasas_isr(int irq, void *devp); 140 143 static u32 141 144 megasas_init_adapter_mfi(struct megasas_instance *instance); ··· 559 554 megasas_clear_intr_skinny(struct megasas_register_set __iomem *regs) 560 555 { 561 556 u32 status; 557 + u32 mfiStatus = 0; 558 + 562 559 /* 563 560 * Check if it is our interrupt 564 561 */ ··· 569 562 if (!(status & MFI_SKINNY_ENABLE_INTERRUPT_MASK)) { 570 563 return 0; 571 564 } 565 + 566 + /* 567 + * Check if it is our interrupt 568 + */ 569 + if ((megasas_read_fw_status_reg_gen2(regs) & MFI_STATE_MASK) == 570 + MFI_STATE_FAULT) { 571 + mfiStatus = MFI_INTR_FLAG_FIRMWARE_STATE_CHANGE; 572 + } else 573 + mfiStatus = MFI_INTR_FLAG_REPLY_MESSAGE; 572 574 573 575 /* 574 576 * Clear the interrupt by writing back the same value ··· 589 573 */ 590 574 readl(&regs->outbound_intr_status); 591 575 592 - return 1; 576 + return mfiStatus; 593 577 } 594 578 595 579 /** ··· 613 597 } 614 598 615 599 /** 616 - * megasas_adp_reset_skinny - For controller reset 617 - * @regs: MFI register set 618 - */ 619 - static int 620 - megasas_adp_reset_skinny(struct megasas_instance *instance, 621 - struct megasas_register_set __iomem *regs) 622 - { 623 - return 0; 624 - } 625 - 626 - /** 627 600 * megasas_check_reset_skinny - For controller reset check 628 601 * @regs: MFI register set 629 602 */ ··· 630 625 .disable_intr = megasas_disable_intr_skinny, 631 626 .clear_intr = megasas_clear_intr_skinny, 632 627 .read_fw_status_reg = megasas_read_fw_status_reg_skinny, 633 - .adp_reset = megasas_adp_reset_skinny, 628 + .adp_reset = megasas_adp_reset_gen2, 634 629 .check_reset = megasas_check_reset_skinny, 635 630 .service_isr = megasas_isr, 636 631 .tasklet = megasas_complete_cmd_dpc, ··· 745 740 { 746 741 u32 retry = 0 ; 747 742 u32 HostDiag; 743 + u32 *seq_offset = &reg_set->seq_offset; 744 + u32 *hostdiag_offset = &reg_set->host_diag; 748 745 749 - writel(0, &reg_set->seq_offset); 750 - writel(4, &reg_set->seq_offset); 751 - writel(0xb, &reg_set->seq_offset); 752 - writel(2, &reg_set->seq_offset); 753 - writel(7, &reg_set->seq_offset); 754 - writel(0xd, &reg_set->seq_offset); 746 + if (instance->instancet == &megasas_instance_template_skinny) { 747 + seq_offset = &reg_set->fusion_seq_offset; 748 + hostdiag_offset = &reg_set->fusion_host_diag; 749 + } 750 + 751 + writel(0, seq_offset); 752 + writel(4, seq_offset); 753 + writel(0xb, seq_offset); 754 + writel(2, seq_offset); 755 + writel(7, seq_offset); 756 + writel(0xd, seq_offset); 757 + 755 758 msleep(1000); 756 759 757 - HostDiag = (u32)readl(&reg_set->host_diag); 760 + HostDiag = (u32)readl(hostdiag_offset); 758 761 759 762 while ( !( HostDiag & DIAG_WRITE_ENABLE) ) { 760 763 msleep(100); 761 - HostDiag = (u32)readl(&reg_set->host_diag); 764 + 
HostDiag = (u32)readl(hostdiag_offset); 762 765 printk(KERN_NOTICE "RESETGEN2: retry=%x, hostdiag=%x\n", 763 766 retry, HostDiag); 764 767 ··· 777 764 778 765 printk(KERN_NOTICE "ADP_RESET_GEN2: HostDiag=%x\n", HostDiag); 779 766 780 - writel((HostDiag | DIAG_RESET_ADAPTER), &reg_set->host_diag); 767 + writel((HostDiag | DIAG_RESET_ADAPTER), hostdiag_offset); 781 768 782 769 ssleep(10); 783 770 784 - HostDiag = (u32)readl(&reg_set->host_diag); 771 + HostDiag = (u32)readl(hostdiag_offset); 785 772 while ( ( HostDiag & DIAG_RESET_ADAPTER) ) { 786 773 msleep(100); 787 - HostDiag = (u32)readl(&reg_set->host_diag); 774 + HostDiag = (u32)readl(hostdiag_offset); 788 775 printk(KERN_NOTICE "RESET_GEN2: retry=%x, hostdiag=%x\n", 789 776 retry, HostDiag); 790 777 ··· 2516 2503 if ((mfiStatus = instance->instancet->clear_intr( 2517 2504 instance->reg_set) 2518 2505 ) == 0) { 2519 - return IRQ_NONE; 2506 + /* Hardware may not set outbound_intr_status in MSI-X mode */ 2507 + if (!instance->msi_flag) 2508 + return IRQ_NONE; 2520 2509 } 2521 2510 2522 2511 instance->mfiStatus = mfiStatus; ··· 2626 2611 case MFI_STATE_FAULT: 2627 2612 2628 2613 printk(KERN_DEBUG "megasas: FW in FAULT state!!\n"); 2629 - return -ENODEV; 2614 + max_wait = MEGASAS_RESET_WAIT_TIME; 2615 + cur_state = MFI_STATE_FAULT; 2616 + break; 2630 2617 2631 2618 case MFI_STATE_WAIT_HANDSHAKE: 2632 2619 /* ··· 3441 3424 megasas_free_cmds(instance); 3442 3425 3443 3426 fail_alloc_cmds: 3444 - iounmap(instance->reg_set); 3445 3427 return 1; 3446 3428 } 3447 3429 ··· 3510 3494 3511 3495 /* Get operational params, sge flags, send init cmd to controller */ 3512 3496 if (instance->instancet->init_adapter(instance)) 3513 - return -ENODEV; 3497 + goto fail_init_adapter; 3514 3498 3515 3499 printk(KERN_ERR "megasas: INIT adapter done\n"); 3516 3500 ··· 3559 3543 * Setup tasklet for cmd completion 3560 3544 */ 3561 3545 3562 - tasklet_init(&instance->isr_tasklet, megasas_complete_cmd_dpc, 3546 + tasklet_init(&instance->isr_tasklet, instance->instancet->tasklet, 3563 3547 (unsigned long)instance); 3564 3548 3565 3549 /* Initialize the cmd completion timer */ ··· 3569 3553 MEGASAS_COMPLETION_TIMER_INTERVAL); 3570 3554 return 0; 3571 3555 3556 + fail_init_adapter: 3572 3557 fail_ready_state: 3573 3558 iounmap(instance->reg_set); 3574 3559 ··· 3837 3820 instance->max_fw_cmds - MEGASAS_INT_CMDS; 3838 3821 host->this_id = instance->init_id; 3839 3822 host->sg_tablesize = instance->max_num_sge; 3823 + 3824 + if (instance->fw_support_ieee) 3825 + instance->max_sectors_per_req = MEGASAS_MAX_SECTORS_IEEE; 3826 + 3840 3827 /* 3841 3828 * Check if the module parameter value for max_sectors can be used 3842 3829 */ ··· 3920 3899 static int __devinit 3921 3900 megasas_probe_one(struct pci_dev *pdev, const struct pci_device_id *id) 3922 3901 { 3923 - int rval; 3902 + int rval, pos; 3924 3903 struct Scsi_Host *host; 3925 3904 struct megasas_instance *instance; 3905 + u16 control = 0; 3906 + 3907 + /* Reset MSI-X in the kdump kernel */ 3908 + if (reset_devices) { 3909 + pos = pci_find_capability(pdev, PCI_CAP_ID_MSIX); 3910 + if (pos) { 3911 + pci_read_config_word(pdev, msi_control_reg(pos), 3912 + &control); 3913 + if (control & PCI_MSIX_FLAGS_ENABLE) { 3914 + dev_info(&pdev->dev, "resetting MSI-X\n"); 3915 + pci_write_config_word(pdev, 3916 + msi_control_reg(pos), 3917 + control & 3918 + ~PCI_MSIX_FLAGS_ENABLE); 3919 + } 3920 + } 3921 + } 3926 3922 3927 3923 /* 3928 3924 * Announce PCI information ··· 4077 4039 else 4078 4040 INIT_WORK(&instance->work_init, 
process_fw_state_change_wq); 4079 4041 4080 - /* 4081 - * Initialize MFI Firmware 4082 - */ 4083 - if (megasas_init_fw(instance)) 4084 - goto fail_init_mfi; 4085 - 4086 4042 /* Try to enable MSI-X */ 4087 4043 if ((instance->pdev->device != PCI_DEVICE_ID_LSI_SAS1078R) && 4088 4044 (instance->pdev->device != PCI_DEVICE_ID_LSI_SAS1078DE) && ··· 4084 4052 !msix_disable && !pci_enable_msix(instance->pdev, 4085 4053 &instance->msixentry, 1)) 4086 4054 instance->msi_flag = 1; 4055 + 4056 + /* 4057 + * Initialize MFI Firmware 4058 + */ 4059 + if (megasas_init_fw(instance)) 4060 + goto fail_init_mfi; 4087 4061 4088 4062 /* 4089 4063 * Register IRQ ··· 4143 4105 instance->instancet->disable_intr(instance->reg_set); 4144 4106 free_irq(instance->msi_flag ? instance->msixentry.vector : 4145 4107 instance->pdev->irq, instance); 4108 + fail_irq: 4109 + if (instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) 4110 + megasas_release_fusion(instance); 4111 + else 4112 + megasas_release_mfi(instance); 4113 + fail_init_mfi: 4146 4114 if (instance->msi_flag) 4147 4115 pci_disable_msix(instance->pdev); 4148 - 4149 - fail_irq: 4150 - fail_init_mfi: 4151 4116 fail_alloc_dma_buf: 4152 4117 if (instance->evt_detail) 4153 4118 pci_free_consistent(pdev, sizeof(struct megasas_evt_detail), 4154 4119 instance->evt_detail, 4155 4120 instance->evt_detail_h); 4156 4121 4157 - if (instance->producer) { 4122 + if (instance->producer) 4158 4123 pci_free_consistent(pdev, sizeof(u32), instance->producer, 4159 4124 instance->producer_h); 4160 - megasas_release_mfi(instance); 4161 - } else { 4162 - megasas_release_fusion(instance); 4163 - } 4164 4125 if (instance->consumer) 4165 4126 pci_free_consistent(pdev, sizeof(u32), instance->consumer, 4166 4127 instance->consumer_h); ··· 4279 4242 /* cancel the delayed work if this work still in queue */ 4280 4243 if (instance->ev != NULL) { 4281 4244 struct megasas_aen_event *ev = instance->ev; 4282 - cancel_delayed_work( 4245 + cancel_delayed_work_sync( 4283 4246 (struct delayed_work *)&ev->hotplug_work); 4284 - flush_scheduled_work(); 4285 4247 instance->ev = NULL; 4286 4248 } 4287 4249 ··· 4333 4297 if (megasas_set_dma_mask(pdev)) 4334 4298 goto fail_set_dma_mask; 4335 4299 4300 + /* Now re-enable MSI-X */ 4301 + if (instance->msi_flag) 4302 + pci_enable_msix(instance->pdev, &instance->msixentry, 1); 4303 + 4336 4304 /* 4337 4305 * Initialize MFI Firmware 4338 4306 */ ··· 4372 4332 4373 4333 tasklet_init(&instance->isr_tasklet, instance->instancet->tasklet, 4374 4334 (unsigned long)instance); 4375 - 4376 - /* Now re-enable MSI-X */ 4377 - if (instance->msi_flag) 4378 - pci_enable_msix(instance->pdev, &instance->msixentry, 1); 4379 4335 4380 4336 /* 4381 4337 * Register IRQ ··· 4453 4417 /* cancel the delayed work if this work still in queue*/ 4454 4418 if (instance->ev != NULL) { 4455 4419 struct megasas_aen_event *ev = instance->ev; 4456 - cancel_delayed_work( 4420 + cancel_delayed_work_sync( 4457 4421 (struct delayed_work *)&ev->hotplug_work); 4458 - flush_scheduled_work(); 4459 4422 instance->ev = NULL; 4460 4423 } 4461 4424 ··· 4646 4611 * For each user buffer, create a mirror buffer and copy in 4647 4612 */ 4648 4613 for (i = 0; i < ioc->sge_count; i++) { 4614 + if (!ioc->sgl[i].iov_len) 4615 + continue; 4616 + 4649 4617 kbuff_arr[i] = dma_alloc_coherent(&instance->pdev->dev, 4650 4618 ioc->sgl[i].iov_len, 4651 4619 &buf_handle, GFP_KERNEL); ··· 5215 5177 break; 5216 5178 5217 5179 case MR_EVT_LD_OFFLINE: 5180 + case MR_EVT_CFG_CLEARED: 5218 5181 case MR_EVT_LD_DELETED: 5219 5182 
megasas_get_ld_list(instance); 5220 5183 for (i = 0; i < MEGASAS_MAX_LD_CHANNELS; i++) {
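The kdump-path change in megasas_probe_one() above clears the enable bit in the MSI-X message control word so a crash kernel starts from a known interrupt state; per the new header define, PCI_MSIX_FLAGS_ENABLE is bit 15 of that word. A small standalone sketch of the bit manipulation (the example control value is made up):

    #include <stdint.h>
    #include <stdio.h>

    #define MSIX_FLAGS_ENABLE (1u << 15)   /* bit 15 of the control word */

    int main(void)
    {
        uint16_t control = 0x8003;         /* example: enabled + low bits */

        if (control & MSIX_FLAGS_ENABLE)
            control &= ~MSIX_FLAGS_ENABLE; /* what the kdump path writes back */

        printf("control=0x%04x\n", control);
        return 0;
    }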
+12 -7
drivers/scsi/megaraid/megaraid_sas_fusion.c
··· 81 81 struct MR_LD_RAID *MR_LdRaidGet(u32 ld, struct MR_FW_RAID_MAP_ALL *map); 82 82 83 83 u16 MR_GetLDTgtId(u32 ld, struct MR_FW_RAID_MAP_ALL *map); 84 + 85 + void 86 + megasas_check_and_restore_queue_depth(struct megasas_instance *instance); 87 + 84 88 u8 MR_ValidateMapInfo(struct MR_FW_RAID_MAP_ALL *map, 85 89 struct LD_LOAD_BALANCE_INFO *lbInfo); 86 90 u16 get_updated_dev_handle(struct LD_LOAD_BALANCE_INFO *lbInfo, ··· 987 983 988 984 return 0; 989 985 990 - fail_alloc_cmds: 991 - fail_alloc_mfi_cmds: 992 986 fail_map_info: 993 987 if (i == 1) 994 988 dma_free_coherent(&instance->pdev->dev, fusion->map_sz, 995 989 fusion->ld_map[0], fusion->ld_map_phys[0]); 996 990 fail_ioc_init: 991 + megasas_free_cmds_fusion(instance); 992 + fail_alloc_cmds: 993 + megasas_free_cmds(instance); 994 + fail_alloc_mfi_cmds: 997 995 return 1; 998 996 } 999 997 ··· 1437 1431 local_map_ptr = fusion->ld_map[(instance->map_id & 1)]; 1438 1432 1439 1433 /* Check if this is a system PD I/O */ 1440 - if ((instance->pd_list[pd_index].driveState == MR_PD_STATE_SYSTEM) && 1441 - (instance->pd_list[pd_index].driveType == TYPE_DISK)) { 1434 + if (instance->pd_list[pd_index].driveState == MR_PD_STATE_SYSTEM) { 1442 1435 io_request->Function = 0; 1443 1436 io_request->DevHandle = 1444 1437 local_map_ptr->raidMap.devHndlInfo[device_id].curDevHdl; ··· 1460 1455 MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT); 1461 1456 } 1462 1457 io_request->RaidContext.VirtualDiskTgtId = device_id; 1463 - io_request->LUN[0] = scmd->device->lun; 1458 + io_request->LUN[1] = scmd->device->lun; 1464 1459 io_request->DataLength = scsi_bufflen(scmd); 1465 1460 } 1466 1461 ··· 1484 1479 device_id = MEGASAS_DEV_INDEX(instance, scp); 1485 1480 1486 1481 /* Zero out some fields so they don't get reused */ 1487 - io_request->LUN[0] = 0; 1482 + io_request->LUN[1] = 0; 1488 1483 io_request->CDB.EEDP32.PrimaryReferenceTag = 0; 1489 1484 io_request->CDB.EEDP32.PrimaryApplicationTagMask = 0; 1490 1485 io_request->EEDPFlags = 0; ··· 1748 1743 wmb(); 1749 1744 writel(fusion->last_reply_idx, 1750 1745 &instance->reg_set->reply_post_host_index); 1751 - 1746 + megasas_check_and_restore_queue_depth(instance); 1752 1747 return IRQ_HANDLED; 1753 1748 } 1754 1749
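The LUN[0] -> LUN[1] change in megasas_build_dcdb_fusion() above matches SAM-style peripheral device addressing, where a small single-level LUN is carried in the second byte of the 8-byte LUN field. A minimal sketch of encoding such a field, assuming flat addressing and LUNs below 256 (the helper name is hypothetical):

    #include <stdint.h>
    #include <string.h>

    /* Encode a small LUN using peripheral device addressing: byte 0
     * holds the address method/bus bits (0 here), byte 1 the LUN. */
    void encode_lun(uint8_t lun_field[8], uint8_t lun)
    {
        memset(lun_field, 0, 8);
        lun_field[1] = lun;
    }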
+3 -2
drivers/scsi/mpt2sas/mpi/mpi2.h
··· 8 8 * scatter/gather formats. 9 9 * Creation Date: June 21, 2006 10 10 * 11 - * mpi2.h Version: 02.00.16 11 + * mpi2.h Version: 02.00.17 12 12 * 13 13 * Version History 14 14 * --------------- ··· 63 63 * function codes, 0xF0 to 0xFF. 64 64 * 05-12-10 02.00.16 Bumped MPI2_HEADER_VERSION_UNIT. 65 65 * Added alternative defines for the SGE Direction bit. 66 + * 08-11-10 02.00.17 Bumped MPI2_HEADER_VERSION_UNIT. 66 67 * -------------------------------------------------------------------------- 67 68 */ 68 69 ··· 89 88 #define MPI2_VERSION_02_00 (0x0200) 90 89 91 90 /* versioning for this MPI header set */ 92 - #define MPI2_HEADER_VERSION_UNIT (0x10) 91 + #define MPI2_HEADER_VERSION_UNIT (0x11) 93 92 #define MPI2_HEADER_VERSION_DEV (0x00) 94 93 #define MPI2_HEADER_VERSION_UNIT_MASK (0xFF00) 95 94 #define MPI2_HEADER_VERSION_UNIT_SHIFT (8)
+3 -3
drivers/scsi/mpt2sas/mpi/mpi2_cnfg.h
··· 6 6 * Title: MPI Configuration messages and pages 7 7 * Creation Date: November 10, 2006 8 8 * 9 - * mpi2_cnfg.h Version: 02.00.15 9 + * mpi2_cnfg.h Version: 02.00.16 10 10 * 11 11 * Version History 12 12 * --------------- ··· 125 125 * define. 126 126 * Added MPI2_PHYSDISK0_INCOMPATIBLE_MEDIA_TYPE define. 127 127 * Added MPI2_SAS_NEG_LINK_RATE_UNSUPPORTED_PHY define. 128 + * 08-11-10 02.00.16 Removed IO Unit Page 1 device path (multi-pathing) 129 + * defines. 128 130 * -------------------------------------------------------------------------- 129 131 */ 130 132 ··· 747 745 #define MPI2_IOUNITPAGE1_DISABLE_IR (0x00000040) 748 746 #define MPI2_IOUNITPAGE1_DISABLE_TASK_SET_FULL_HANDLING (0x00000020) 749 747 #define MPI2_IOUNITPAGE1_IR_USE_STATIC_VOLUME_ID (0x00000004) 750 - #define MPI2_IOUNITPAGE1_MULTI_PATHING (0x00000002) 751 - #define MPI2_IOUNITPAGE1_SINGLE_PATHING (0x00000000) 752 748 753 749 754 750 /* IO Unit Page 3 */
-384
drivers/scsi/mpt2sas/mpi/mpi2_history.txt
··· 1 - ============================== 2 - Fusion-MPT MPI 2.0 Header File Change History 3 - ============================== 4 - 5 - Copyright (c) 2000-2010 LSI Corporation. 6 - 7 - --------------------------------------- 8 - Header Set Release Version: 02.00.14 9 - Header Set Release Date: 10-28-09 10 - --------------------------------------- 11 - 12 - Filename Current version Prior version 13 - ---------- --------------- ------------- 14 - mpi2.h 02.00.14 02.00.13 15 - mpi2_cnfg.h 02.00.13 02.00.12 16 - mpi2_init.h 02.00.08 02.00.07 17 - mpi2_ioc.h 02.00.13 02.00.12 18 - mpi2_raid.h 02.00.04 02.00.04 19 - mpi2_sas.h 02.00.03 02.00.02 20 - mpi2_targ.h 02.00.03 02.00.03 21 - mpi2_tool.h 02.00.04 02.00.04 22 - mpi2_type.h 02.00.00 02.00.00 23 - mpi2_ra.h 02.00.00 02.00.00 24 - mpi2_hbd.h 02.00.00 25 - mpi2_history.txt 02.00.14 02.00.13 26 - 27 - 28 - * Date Version Description 29 - * -------- -------- ------------------------------------------------------ 30 - 31 - mpi2.h 32 - * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. 33 - * 06-04-07 02.00.01 Bumped MPI2_HEADER_VERSION_UNIT. 34 - * 06-26-07 02.00.02 Bumped MPI2_HEADER_VERSION_UNIT. 35 - * 08-31-07 02.00.03 Bumped MPI2_HEADER_VERSION_UNIT. 36 - * Moved ReplyPostHostIndex register to offset 0x6C of the 37 - * MPI2_SYSTEM_INTERFACE_REGS and modified the define for 38 - * MPI2_REPLY_POST_HOST_INDEX_OFFSET. 39 - * Added union of request descriptors. 40 - * Added union of reply descriptors. 41 - * 10-31-07 02.00.04 Bumped MPI2_HEADER_VERSION_UNIT. 42 - * Added define for MPI2_VERSION_02_00. 43 - * Fixed the size of the FunctionDependent5 field in the 44 - * MPI2_DEFAULT_REPLY structure. 45 - * 12-18-07 02.00.05 Bumped MPI2_HEADER_VERSION_UNIT. 46 - * Removed the MPI-defined Fault Codes and extended the 47 - * product specific codes up to 0xEFFF. 48 - * Added a sixth key value for the WriteSequence register 49 - * and changed the flush value to 0x0. 50 - * Added message function codes for Diagnostic Buffer Post 51 - * and Diagnsotic Release. 52 - * New IOCStatus define: MPI2_IOCSTATUS_DIAGNOSTIC_RELEASED 53 - * Moved MPI2_VERSION_UNION from mpi2_ioc.h. 54 - * 02-29-08 02.00.06 Bumped MPI2_HEADER_VERSION_UNIT. 55 - * 03-03-08 02.00.07 Bumped MPI2_HEADER_VERSION_UNIT. 56 - * 05-21-08 02.00.08 Bumped MPI2_HEADER_VERSION_UNIT. 57 - * Added #defines for marking a reply descriptor as unused. 58 - * 06-27-08 02.00.09 Bumped MPI2_HEADER_VERSION_UNIT. 59 - * 10-02-08 02.00.10 Bumped MPI2_HEADER_VERSION_UNIT. 60 - * Moved LUN field defines from mpi2_init.h. 61 - * 01-19-09 02.00.11 Bumped MPI2_HEADER_VERSION_UNIT. 62 - * 05-06-09 02.00.12 Bumped MPI2_HEADER_VERSION_UNIT. 63 - * In all request and reply descriptors, replaced VF_ID 64 - * field with MSIxIndex field. 65 - * Removed DevHandle field from 66 - * MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR and made those 67 - * bytes reserved. 68 - * Added RAID Accelerator functionality. 69 - * 07-30-09 02.00.13 Bumped MPI2_HEADER_VERSION_UNIT. 70 - * 10-28-09 02.00.14 Bumped MPI2_HEADER_VERSION_UNIT. 71 - * Added MSI-x index mask and shift for Reply Post Host 72 - * Index register. 73 - * Added function code for Host Based Discovery Action. 74 - * -------------------------------------------------------------------------- 75 - 76 - mpi2_cnfg.h 77 - * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. 78 - * 06-04-07 02.00.01 Added defines for SAS IO Unit Page 2 PhyFlags. 79 - * Added Manufacturing Page 11. 
80 - * Added MPI2_SAS_EXPANDER0_FLAGS_CONNECTOR_END_DEVICE 81 - * define. 82 - * 06-26-07 02.00.02 Adding generic structure for product-specific 83 - * Manufacturing pages: MPI2_CONFIG_PAGE_MANUFACTURING_PS. 84 - * Rework of BIOS Page 2 configuration page. 85 - * Fixed MPI2_BIOSPAGE2_BOOT_DEVICE to be a union of the 86 - * forms. 87 - * Added configuration pages IOC Page 8 and Driver 88 - * Persistent Mapping Page 0. 89 - * 08-31-07 02.00.03 Modified configuration pages dealing with Integrated 90 - * RAID (Manufacturing Page 4, RAID Volume Pages 0 and 1, 91 - * RAID Physical Disk Pages 0 and 1, RAID Configuration 92 - * Page 0). 93 - * Added new value for AccessStatus field of SAS Device 94 - * Page 0 (_SATA_NEEDS_INITIALIZATION). 95 - * 10-31-07 02.00.04 Added missing SEPDevHandle field to 96 - * MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0. 97 - * 12-18-07 02.00.05 Modified IO Unit Page 0 to use 32-bit version fields for 98 - * NVDATA. 99 - * Modified IOC Page 7 to use masks and added field for 100 - * SASBroadcastPrimitiveMasks. 101 - * Added MPI2_CONFIG_PAGE_BIOS_4. 102 - * Added MPI2_CONFIG_PAGE_LOG_0. 103 - * 02-29-08 02.00.06 Modified various names to make them 32-character unique. 104 - * Added SAS Device IDs. 105 - * Updated Integrated RAID configuration pages including 106 - * Manufacturing Page 4, IOC Page 6, and RAID Configuration 107 - * Page 0. 108 - * 05-21-08 02.00.07 Added define MPI2_MANPAGE4_MIX_SSD_SAS_SATA. 109 - * Added define MPI2_MANPAGE4_PHYSDISK_128MB_COERCION. 110 - * Fixed define MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING. 111 - * Added missing MaxNumRoutedSasAddresses field to 112 - * MPI2_CONFIG_PAGE_EXPANDER_0. 113 - * Added SAS Port Page 0. 114 - * Modified structure layout for 115 - * MPI2_CONFIG_PAGE_DRIVER_MAPPING_0. 116 - * 06-27-08 02.00.08 Changed MPI2_CONFIG_PAGE_RD_PDISK_1 to use 117 - * MPI2_RAID_PHYS_DISK1_PATH_MAX to size the array. 118 - * 10-02-08 02.00.09 Changed MPI2_RAID_PGAD_CONFIGNUM_MASK from 0x0000FFFF 119 - * to 0x000000FF. 120 - * Added two new values for the Physical Disk Coercion Size 121 - * bits in the Flags field of Manufacturing Page 4. 122 - * Added product-specific Manufacturing pages 16 to 31. 123 - * Modified Flags bits for controlling write cache on SATA 124 - * drives in IO Unit Page 1. 125 - * Added new bit to AdditionalControlFlags of SAS IO Unit 126 - * Page 1 to control Invalid Topology Correction. 127 - * Added SupportedPhysDisks field to RAID Volume Page 1 and 128 - * added related defines. 129 - * Added additional defines for RAID Volume Page 0 130 - * VolumeStatusFlags field. 131 - * Modified meaning of RAID Volume Page 0 VolumeSettings 132 - * define for auto-configure of hot-swap drives. 133 - * Added PhysDiskAttributes field (and related defines) to 134 - * RAID Physical Disk Page 0. 135 - * Added MPI2_SAS_PHYINFO_PHY_VACANT define. 136 - * Added three new DiscoveryStatus bits for SAS IO Unit 137 - * Page 0 and SAS Expander Page 0. 138 - * Removed multiplexing information from SAS IO Unit pages. 139 - * Added BootDeviceWaitTime field to SAS IO Unit Page 4. 140 - * Removed Zone Address Resolved bit from PhyInfo and from 141 - * Expander Page 0 Flags field. 142 - * Added two new AccessStatus values to SAS Device Page 0 143 - * for indicating routing problems. Added 3 reserved words 144 - * to this page. 145 - * 01-19-09 02.00.10 Fixed defines for GPIOVal field of IO Unit Page 3. 146 - * Inserted missing reserved field into structure for IOC 147 - * Page 6. 
148 - * Added more pending task bits to RAID Volume Page 0 149 - * VolumeStatusFlags defines. 150 - * Added MPI2_PHYSDISK0_STATUS_FLAG_NOT_CERTIFIED define. 151 - * Added a new DiscoveryStatus bit for SAS IO Unit Page 0 152 - * and SAS Expander Page 0 to flag a downstream initiator 153 - * when in simplified routing mode. 154 - * Removed SATA Init Failure defines for DiscoveryStatus 155 - * fields of SAS IO Unit Page 0 and SAS Expander Page 0. 156 - * Added MPI2_SAS_DEVICE0_ASTATUS_DEVICE_BLOCKED define. 157 - * Added PortGroups, DmaGroup, and ControlGroup fields to 158 - * SAS Device Page 0. 159 - * 05-06-09 02.00.11 Added structures and defines for IO Unit Page 5 and IO 160 - * Unit Page 6. 161 - * Added expander reduced functionality data to SAS 162 - * Expander Page 0. 163 - * Added SAS PHY Page 2 and SAS PHY Page 3. 164 - * 07-30-09 02.00.12 Added IO Unit Page 7. 165 - * Added new device ids. 166 - * Added SAS IO Unit Page 5. 167 - * Added partial and slumber power management capable flags 168 - * to SAS Device Page 0 Flags field. 169 - * Added PhyInfo defines for power condition. 170 - * Added Ethernet configuration pages. 171 - * 10-28-09 02.00.13 Added MPI2_IOUNITPAGE1_ENABLE_HOST_BASED_DISCOVERY. 172 - * Added SAS PHY Page 4 structure and defines. 173 - * -------------------------------------------------------------------------- 174 - 175 - mpi2_init.h 176 - * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. 177 - * 10-31-07 02.00.01 Fixed name for pMpi2SCSITaskManagementRequest_t. 178 - * 12-18-07 02.00.02 Modified Task Management Target Reset Method defines. 179 - * 02-29-08 02.00.03 Added Query Task Set and Query Unit Attention. 180 - * 03-03-08 02.00.04 Fixed name of struct _MPI2_SCSI_TASK_MANAGE_REPLY. 181 - * 05-21-08 02.00.05 Fixed typo in name of Mpi2SepRequest_t. 182 - * 10-02-08 02.00.06 Removed Untagged and No Disconnect values from SCSI IO 183 - * Control field Task Attribute flags. 184 - * Moved LUN field defines to mpi2.h becasue they are 185 - * common to many structures. 186 - * 05-06-09 02.00.07 Changed task management type of Query Unit Attention to 187 - * Query Asynchronous Event. 188 - * Defined two new bits in the SlotStatus field of the SCSI 189 - * Enclosure Processor Request and Reply. 190 - * 10-28-09 02.00.08 Added defines for decoding the ResponseInfo bytes for 191 - * both SCSI IO Error Reply and SCSI Task Management Reply. 192 - * Added ResponseInfo field to MPI2_SCSI_TASK_MANAGE_REPLY. 193 - * Added MPI2_SCSITASKMGMT_RSP_TM_OVERLAPPED_TAG define. 194 - * -------------------------------------------------------------------------- 195 - 196 - mpi2_ioc.h 197 - * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. 198 - * 06-04-07 02.00.01 In IOCFacts Reply structure, renamed MaxDevices to 199 - * MaxTargets. 200 - * Added TotalImageSize field to FWDownload Request. 201 - * Added reserved words to FWUpload Request. 202 - * 06-26-07 02.00.02 Added IR Configuration Change List Event. 203 - * 08-31-07 02.00.03 Removed SystemReplyQueueDepth field from the IOCInit 204 - * request and replaced it with 205 - * ReplyDescriptorPostQueueDepth and ReplyFreeQueueDepth. 206 - * Replaced the MinReplyQueueDepth field of the IOCFacts 207 - * reply with MaxReplyDescriptorPostQueueDepth. 208 - * Added MPI2_RDPQ_DEPTH_MIN define to specify the minimum 209 - * depth for the Reply Descriptor Post Queue. 210 - * Added SASAddress field to Initiator Device Table 211 - * Overflow Event data. 
212 - * 10-31-07 02.00.04 Added ReasonCode MPI2_EVENT_SAS_INIT_RC_NOT_RESPONDING 213 - * for SAS Initiator Device Status Change Event data. 214 - * Modified Reason Code defines for SAS Topology Change 215 - * List Event data, including adding a bit for PHY Vacant 216 - * status, and adding a mask for the Reason Code. 217 - * Added define for 218 - * MPI2_EVENT_SAS_TOPO_ES_DELAY_NOT_RESPONDING. 219 - * Added define for MPI2_EXT_IMAGE_TYPE_MEGARAID. 220 - * 12-18-07 02.00.05 Added Boot Status defines for the IOCExceptions field of 221 - * the IOCFacts Reply. 222 - * Removed MPI2_IOCFACTS_CAPABILITY_EXTENDED_BUFFER define. 223 - * Moved MPI2_VERSION_UNION to mpi2.h. 224 - * Changed MPI2_EVENT_NOTIFICATION_REQUEST to use masks 225 - * instead of enables, and added SASBroadcastPrimitiveMasks 226 - * field. 227 - * Added Log Entry Added Event and related structure. 228 - * 02-29-08 02.00.06 Added define MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID. 229 - * Removed define MPI2_IOCFACTS_PROTOCOL_SMP_TARGET. 230 - * Added MaxVolumes and MaxPersistentEntries fields to 231 - * IOCFacts reply. 232 - * Added ProtocalFlags and IOCCapabilities fields to 233 - * MPI2_FW_IMAGE_HEADER. 234 - * Removed MPI2_PORTENABLE_FLAGS_ENABLE_SINGLE_PORT. 235 - * 03-03-08 02.00.07 Fixed MPI2_FW_IMAGE_HEADER by changing Reserved26 to 236 - * a U16 (from a U32). 237 - * Removed extra 's' from EventMasks name. 238 - * 06-27-08 02.00.08 Fixed an offset in a comment. 239 - * 10-02-08 02.00.09 Removed SystemReplyFrameSize from MPI2_IOC_INIT_REQUEST. 240 - * Removed CurReplyFrameSize from MPI2_IOC_FACTS_REPLY and 241 - * renamed MinReplyFrameSize to ReplyFrameSize. 242 - * Added MPI2_IOCFACTS_EXCEPT_IR_FOREIGN_CONFIG_MAX. 243 - * Added two new RAIDOperation values for Integrated RAID 244 - * Operations Status Event data. 245 - * Added four new IR Configuration Change List Event data 246 - * ReasonCode values. 247 - * Added two new ReasonCode defines for SAS Device Status 248 - * Change Event data. 249 - * Added three new DiscoveryStatus bits for the SAS 250 - * Discovery event data. 251 - * Added Multiplexing Status Change bit to the PhyStatus 252 - * field of the SAS Topology Change List event data. 253 - * Removed define for MPI2_INIT_IMAGE_BOOTFLAGS_XMEMCOPY. 254 - * BootFlags are now product-specific. 255 - * Added defines for the indivdual signature bytes 256 - * for MPI2_INIT_IMAGE_FOOTER. 257 - * 01-19-09 02.00.10 Added MPI2_IOCFACTS_CAPABILITY_EVENT_REPLAY define. 258 - * Added MPI2_EVENT_SAS_DISC_DS_DOWNSTREAM_INITIATOR 259 - * define. 260 - * Added MPI2_EVENT_SAS_DEV_STAT_RC_SATA_INIT_FAILURE 261 - * define. 262 - * Removed MPI2_EVENT_SAS_DISC_DS_SATA_INIT_FAILURE define. 263 - * 05-06-09 02.00.11 Added MPI2_IOCFACTS_CAPABILITY_RAID_ACCELERATOR define. 264 - * Added MPI2_IOCFACTS_CAPABILITY_MSI_X_INDEX define. 265 - * Added two new reason codes for SAS Device Status Change 266 - * Event. 267 - * Added new event: SAS PHY Counter. 268 - * 07-30-09 02.00.12 Added GPIO Interrupt event define and structure. 269 - * Added MPI2_IOCFACTS_CAPABILITY_EXTENDED_BUFFER define. 270 - * Added new product id family for 2208. 271 - * 10-28-09 02.00.13 Added HostMSIxVectors field to MPI2_IOC_INIT_REQUEST. 272 - * Added MaxMSIxVectors field to MPI2_IOC_FACTS_REPLY. 273 - * Added MinDevHandle field to MPI2_IOC_FACTS_REPLY. 274 - * Added MPI2_IOCFACTS_CAPABILITY_HOST_BASED_DISCOVERY. 275 - * Added MPI2_EVENT_HOST_BASED_DISCOVERY_PHY define. 276 - * Added MPI2_EVENT_SAS_TOPO_ES_NO_EXPANDER define. 
277 - * Added Host Based Discovery Phy Event data. 278 - * Added defines for ProductID Product field 279 - * (MPI2_FW_HEADER_PID_). 280 - * Modified values for SAS ProductID Family 281 - * (MPI2_FW_HEADER_PID_FAMILY_). 282 - * -------------------------------------------------------------------------- 283 - 284 - mpi2_raid.h 285 - * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. 286 - * 08-31-07 02.00.01 Modifications to RAID Action request and reply, 287 - * including the Actions and ActionData. 288 - * 02-29-08 02.00.02 Added MPI2_RAID_ACTION_ADATA_DISABL_FULL_REBUILD. 289 - * 05-21-08 02.00.03 Added MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS so that 290 - * the PhysDisk array in MPI2_RAID_VOLUME_CREATION_STRUCT 291 - * can be sized by the build environment. 292 - * 07-30-09 02.00.04 Added proper define for the Use Default Settings bit of 293 - * VolumeCreationFlags and marked the old one as obsolete. 294 - * 05-12-10 02.00.05 Added MPI2_RAID_VOL_FLAGS_OP_MDC define. 295 - * -------------------------------------------------------------------------- 296 - 297 - mpi2_sas.h 298 - * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. 299 - * 06-26-07 02.00.01 Added Clear All Persistent Operation to SAS IO Unit 300 - * Control Request. 301 - * 10-02-08 02.00.02 Added Set IOC Parameter Operation to SAS IO Unit Control 302 - * Request. 303 - * 10-28-09 02.00.03 Changed the type of SGL in MPI2_SATA_PASSTHROUGH_REQUEST 304 - * to MPI2_SGE_IO_UNION since it supports chained SGLs. 305 - * 05-12-10 02.00.04 Modified some comments. 306 - * -------------------------------------------------------------------------- 307 - 308 - mpi2_targ.h 309 - * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. 310 - * 08-31-07 02.00.01 Added Command Buffer Data Location Address Space bits to 311 - * BufferPostFlags field of CommandBufferPostBase Request. 312 - * 02-29-08 02.00.02 Modified various names to make them 32-character unique. 313 - * 10-02-08 02.00.03 Removed NextCmdBufferOffset from 314 - * MPI2_TARGET_CMD_BUF_POST_BASE_REQUEST. 315 - * Target Status Send Request only takes a single SGE for 316 - * response data. 317 - * -------------------------------------------------------------------------- 318 - 319 - mpi2_tool.h 320 - * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. 321 - * 12-18-07 02.00.01 Added Diagnostic Buffer Post and Diagnostic Release 322 - * structures and defines. 323 - * 02-29-08 02.00.02 Modified various names to make them 32-character unique. 324 - * 05-06-09 02.00.03 Added ISTWI Read Write Tool and Diagnostic CLI Tool. 325 - * 07-30-09 02.00.04 Added ExtendedType field to DiagnosticBufferPost request 326 - * and reply messages. 327 - * Added MPI2_DIAG_BUF_TYPE_EXTENDED. 328 - * Incremented MPI2_DIAG_BUF_TYPE_COUNT. 329 - * 05-12-10 02.00.05 Added Diagnostic Data Upload tool. 330 - * -------------------------------------------------------------------------- 331 - 332 - mpi2_type.h 333 - * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. 334 - * -------------------------------------------------------------------------- 335 - 336 - mpi2_ra.h 337 - * 05-06-09 02.00.00 Initial version. 338 - * -------------------------------------------------------------------------- 339 - 340 - mpi2_hbd.h 341 - * 10-28-09 02.00.00 Initial version. 
342 - * -------------------------------------------------------------------------- 343 - 344 - 345 - mpi2_history.txt Parts list history 346 - 347 - Filename 02.00.14 02.00.13 02.00.12 348 - ---------- -------- -------- -------- 349 - mpi2.h 02.00.14 02.00.13 02.00.12 350 - mpi2_cnfg.h 02.00.13 02.00.12 02.00.11 351 - mpi2_init.h 02.00.08 02.00.07 02.00.07 352 - mpi2_ioc.h 02.00.13 02.00.12 02.00.11 353 - mpi2_raid.h 02.00.04 02.00.04 02.00.03 354 - mpi2_sas.h 02.00.03 02.00.02 02.00.02 355 - mpi2_targ.h 02.00.03 02.00.03 02.00.03 356 - mpi2_tool.h 02.00.04 02.00.04 02.00.03 357 - mpi2_type.h 02.00.00 02.00.00 02.00.00 358 - mpi2_ra.h 02.00.00 02.00.00 02.00.00 359 - mpi2_hbd.h 02.00.00 360 - 361 - Filename 02.00.11 02.00.10 02.00.09 02.00.08 02.00.07 02.00.06 362 - ---------- -------- -------- -------- -------- -------- -------- 363 - mpi2.h 02.00.11 02.00.10 02.00.09 02.00.08 02.00.07 02.00.06 364 - mpi2_cnfg.h 02.00.10 02.00.09 02.00.08 02.00.07 02.00.06 02.00.06 365 - mpi2_init.h 02.00.06 02.00.06 02.00.05 02.00.05 02.00.04 02.00.03 366 - mpi2_ioc.h 02.00.10 02.00.09 02.00.08 02.00.07 02.00.07 02.00.06 367 - mpi2_raid.h 02.00.03 02.00.03 02.00.03 02.00.03 02.00.02 02.00.02 368 - mpi2_sas.h 02.00.02 02.00.02 02.00.01 02.00.01 02.00.01 02.00.01 369 - mpi2_targ.h 02.00.03 02.00.03 02.00.02 02.00.02 02.00.02 02.00.02 370 - mpi2_tool.h 02.00.02 02.00.02 02.00.02 02.00.02 02.00.02 02.00.02 371 - mpi2_type.h 02.00.00 02.00.00 02.00.00 02.00.00 02.00.00 02.00.00 372 - 373 - Filename 02.00.05 02.00.04 02.00.03 02.00.02 02.00.01 02.00.00 374 - ---------- -------- -------- -------- -------- -------- -------- 375 - mpi2.h 02.00.05 02.00.04 02.00.03 02.00.02 02.00.01 02.00.00 376 - mpi2_cnfg.h 02.00.05 02.00.04 02.00.03 02.00.02 02.00.01 02.00.00 377 - mpi2_init.h 02.00.02 02.00.01 02.00.00 02.00.00 02.00.00 02.00.00 378 - mpi2_ioc.h 02.00.05 02.00.04 02.00.03 02.00.02 02.00.01 02.00.00 379 - mpi2_raid.h 02.00.01 02.00.01 02.00.01 02.00.00 02.00.00 02.00.00 380 - mpi2_sas.h 02.00.01 02.00.01 02.00.01 02.00.01 02.00.00 02.00.00 381 - mpi2_targ.h 02.00.01 02.00.01 02.00.01 02.00.00 02.00.00 02.00.00 382 - mpi2_tool.h 02.00.01 02.00.00 02.00.00 02.00.00 02.00.00 02.00.00 383 - mpi2_type.h 02.00.00 02.00.00 02.00.00 02.00.00 02.00.00 02.00.00 384 -
+5 -2
drivers/scsi/mpt2sas/mpi/mpi2_sas.h
··· 6 6 * Title: MPI Serial Attached SCSI structures and definitions 7 7 * Creation Date: February 9, 2007 8 8 * 9 - * mpi2_sas.h Version: 02.00.04 9 + * mpi2_sas.h Version: 02.00.05 10 10 * 11 11 * Version History 12 12 * --------------- ··· 21 21 * 10-28-09 02.00.03 Changed the type of SGL in MPI2_SATA_PASSTHROUGH_REQUEST 22 22 * to MPI2_SGE_IO_UNION since it supports chained SGLs. 23 23 * 05-12-10 02.00.04 Modified some comments. 24 + * 08-11-10 02.00.05 Added NCQ operations to SAS IO Unit Control. 24 25 * -------------------------------------------------------------------------- 25 26 */ 26 27 ··· 164 163 U32 Reserved4; /* 0x14 */ 165 164 U32 DataLength; /* 0x18 */ 166 165 U8 CommandFIS[20]; /* 0x1C */ 167 - MPI2_SGE_IO_UNION SGL; /* 0x20 */ 166 + MPI2_SGE_IO_UNION SGL; /* 0x30 */ 168 167 } MPI2_SATA_PASSTHROUGH_REQUEST, MPI2_POINTER PTR_MPI2_SATA_PASSTHROUGH_REQUEST, 169 168 Mpi2SataPassthroughRequest_t, MPI2_POINTER pMpi2SataPassthroughRequest_t; 170 169 ··· 247 246 #define MPI2_SAS_OP_REMOVE_DEVICE (0x0D) 248 247 #define MPI2_SAS_OP_LOOKUP_MAPPING (0x0E) 249 248 #define MPI2_SAS_OP_SET_IOC_PARAMETER (0x0F) 249 + #define MPI2_SAS_OP_DEV_ENABLE_NCQ (0x14) 250 + #define MPI2_SAS_OP_DEV_DISABLE_NCQ (0x15) 250 251 #define MPI2_SAS_OP_PRODUCT_SPECIFIC_MIN (0x80) 251 252 252 253 /* values for the PrimFlags field */
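
The corrected SGL offset above follows directly from the frame layout: CommandFIS[] sits at request offset 0x1C and is 20 (0x14) bytes long, so the SGE union can only begin at 0x1C + 0x14 = 0x30. A minimal standalone check of that arithmetic, using stand-in field names rather than the MPI types:

    #include <stddef.h>
    #include <stdint.h>

    /* Tail of the SATA passthrough request frame, starting at offset 0x14. */
    struct sata_pt_tail {
            uint32_t reserved4;        /* request offset 0x14 */
            uint32_t data_length;      /* 0x18 */
            uint8_t  command_fis[20];  /* 0x1C, 0x14 bytes long */
            uint32_t sgl_first_word;   /* must land at 0x30 */
    };

    _Static_assert(0x14 + offsetof(struct sata_pt_tail, sgl_first_word) == 0x30,
                   "SGL begins at request offset 0x30, as the corrected comment says");
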
+7 -1
drivers/scsi/mpt2sas/mpi/mpi2_tool.h
··· 6 6 * Title: MPI diagnostic tool structures and definitions 7 7 * Creation Date: March 26, 2007 8 8 * 9 - * mpi2_tool.h Version: 02.00.05 9 + * mpi2_tool.h Version: 02.00.06 10 10 * 11 11 * Version History 12 12 * --------------- ··· 23 23 * Added MPI2_DIAG_BUF_TYPE_EXTENDED. 24 24 * Incremented MPI2_DIAG_BUF_TYPE_COUNT. 25 25 * 05-12-10 02.00.05 Added Diagnostic Data Upload tool. 26 + * 08-11-10 02.00.06 Added defines that were missing for Diagnostic Buffer 27 + * Post Request. 26 28 * -------------------------------------------------------------------------- 27 29 */ 28 30 ··· 355 353 #define MPI2_DIAG_BUF_TYPE_EXTENDED (0x02) 356 354 /* count of the number of buffer types */ 357 355 #define MPI2_DIAG_BUF_TYPE_COUNT (0x03) 356 + 357 + /* values for the Flags field */ 358 + #define MPI2_DIAG_BUF_FLAG_RELEASE_ON_FULL (0x00000002) 359 + #define MPI2_DIAG_BUF_FLAG_IMMEDIATE_RELEASE (0x00000001) 358 360 359 361 360 362 /****************************************************************************
+75 -51
drivers/scsi/mpt2sas/mpt2sas_base.c
··· 752 752 _base_get_cb_idx(struct MPT2SAS_ADAPTER *ioc, u16 smid) 753 753 { 754 754 int i; 755 - u8 cb_idx = 0xFF; 755 + u8 cb_idx; 756 756 757 - if (smid >= ioc->hi_priority_smid) { 758 - if (smid < ioc->internal_smid) { 759 - i = smid - ioc->hi_priority_smid; 760 - cb_idx = ioc->hpr_lookup[i].cb_idx; 761 - } else if (smid <= ioc->hba_queue_depth) { 762 - i = smid - ioc->internal_smid; 763 - cb_idx = ioc->internal_lookup[i].cb_idx; 764 - } 765 - } else { 757 + if (smid < ioc->hi_priority_smid) { 766 758 i = smid - 1; 767 759 cb_idx = ioc->scsi_lookup[i].cb_idx; 768 - } 760 + } else if (smid < ioc->internal_smid) { 761 + i = smid - ioc->hi_priority_smid; 762 + cb_idx = ioc->hpr_lookup[i].cb_idx; 763 + } else if (smid <= ioc->hba_queue_depth) { 764 + i = smid - ioc->internal_smid; 765 + cb_idx = ioc->internal_lookup[i].cb_idx; 766 + } else 767 + cb_idx = 0xFF; 769 768 return cb_idx; 770 769 } 771 770 ··· 1429 1430 struct scsi_cmnd *scmd) 1430 1431 { 1431 1432 unsigned long flags; 1432 - struct request_tracker *request; 1433 + struct scsiio_tracker *request; 1433 1434 u16 smid; 1434 1435 1435 1436 spin_lock_irqsave(&ioc->scsi_lookup_lock, flags); ··· 1441 1442 } 1442 1443 1443 1444 request = list_entry(ioc->free_list.next, 1444 - struct request_tracker, tracker_list); 1445 + struct scsiio_tracker, tracker_list); 1445 1446 request->scmd = scmd; 1446 1447 request->cb_idx = cb_idx; 1447 1448 smid = request->smid; ··· 1495 1496 struct chain_tracker *chain_req, *next; 1496 1497 1497 1498 spin_lock_irqsave(&ioc->scsi_lookup_lock, flags); 1498 - if (smid >= ioc->hi_priority_smid) { 1499 - if (smid < ioc->internal_smid) { 1500 - /* hi-priority */ 1501 - i = smid - ioc->hi_priority_smid; 1502 - ioc->hpr_lookup[i].cb_idx = 0xFF; 1503 - list_add_tail(&ioc->hpr_lookup[i].tracker_list, 1504 - &ioc->hpr_free_list); 1505 - } else { 1506 - /* internal queue */ 1507 - i = smid - ioc->internal_smid; 1508 - ioc->internal_lookup[i].cb_idx = 0xFF; 1509 - list_add_tail(&ioc->internal_lookup[i].tracker_list, 1510 - &ioc->internal_free_list); 1499 + if (smid < ioc->hi_priority_smid) { 1500 + /* scsiio queue */ 1501 + i = smid - 1; 1502 + if (!list_empty(&ioc->scsi_lookup[i].chain_list)) { 1503 + list_for_each_entry_safe(chain_req, next, 1504 + &ioc->scsi_lookup[i].chain_list, tracker_list) { 1505 + list_del_init(&chain_req->tracker_list); 1506 + list_add_tail(&chain_req->tracker_list, 1507 + &ioc->free_chain_list); 1508 + } 1511 1509 } 1510 + ioc->scsi_lookup[i].cb_idx = 0xFF; 1511 + ioc->scsi_lookup[i].scmd = NULL; 1512 + list_add_tail(&ioc->scsi_lookup[i].tracker_list, 1513 + &ioc->free_list); 1512 1514 spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags); 1513 - return; 1514 - } 1515 1515 1516 - /* scsiio queue */ 1517 - i = smid - 1; 1518 - if (!list_empty(&ioc->scsi_lookup[i].chain_list)) { 1519 - list_for_each_entry_safe(chain_req, next, 1520 - &ioc->scsi_lookup[i].chain_list, tracker_list) { 1521 - list_del_init(&chain_req->tracker_list); 1522 - list_add_tail(&chain_req->tracker_list, 1523 - &ioc->free_chain_list); 1516 + /* 1517 + * See _wait_for_commands_to_complete() call with regards 1518 + * to this code. 
1519 + */ 1520 + if (ioc->shost_recovery && ioc->pending_io_count) { 1521 + if (ioc->pending_io_count == 1) 1522 + wake_up(&ioc->reset_wq); 1523 + ioc->pending_io_count--; 1524 1524 } 1525 + return; 1526 + } else if (smid < ioc->internal_smid) { 1527 + /* hi-priority */ 1528 + i = smid - ioc->hi_priority_smid; 1529 + ioc->hpr_lookup[i].cb_idx = 0xFF; 1530 + list_add_tail(&ioc->hpr_lookup[i].tracker_list, 1531 + &ioc->hpr_free_list); 1532 + } else if (smid <= ioc->hba_queue_depth) { 1533 + /* internal queue */ 1534 + i = smid - ioc->internal_smid; 1535 + ioc->internal_lookup[i].cb_idx = 0xFF; 1536 + list_add_tail(&ioc->internal_lookup[i].tracker_list, 1537 + &ioc->internal_free_list); 1525 1538 } 1526 - ioc->scsi_lookup[i].cb_idx = 0xFF; 1527 - ioc->scsi_lookup[i].scmd = NULL; 1528 - list_add_tail(&ioc->scsi_lookup[i].tracker_list, 1529 - &ioc->free_list); 1530 1539 spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags); 1531 - 1532 - /* 1533 - * See _wait_for_commands_to_complete() call with regards to this code. 1534 - */ 1535 - if (ioc->shost_recovery && ioc->pending_io_count) { 1536 - if (ioc->pending_io_count == 1) 1537 - wake_up(&ioc->reset_wq); 1538 - ioc->pending_io_count--; 1539 - } 1540 1540 } 1541 1541 1542 1542 /** ··· 1723 1725 } 1724 1726 1725 1727 /** 1728 + * _base_display_intel_branding - Display branding string 1729 + * @ioc: per adapter object 1730 + * 1731 + * Return nothing. 1732 + */ 1733 + static void 1734 + _base_display_intel_branding(struct MPT2SAS_ADAPTER *ioc) 1735 + { 1736 + if (ioc->pdev->subsystem_vendor == PCI_VENDOR_ID_INTEL && 1737 + ioc->pdev->device == MPI2_MFGPAGE_DEVID_SAS2008) { 1738 + 1739 + switch (ioc->pdev->subsystem_device) { 1740 + case MPT2SAS_INTEL_RMS2LL080_SSDID: 1741 + printk(MPT2SAS_INFO_FMT "%s\n", ioc->name, 1742 + MPT2SAS_INTEL_RMS2LL080_BRANDING); 1743 + break; 1744 + case MPT2SAS_INTEL_RMS2LL040_SSDID: 1745 + printk(MPT2SAS_INFO_FMT "%s\n", ioc->name, 1746 + MPT2SAS_INTEL_RMS2LL040_BRANDING); 1747 + break; 1748 + } 1749 + } 1750 + } 1751 + 1752 + /** 1726 1753 * _base_display_ioc_capabilities - Disply IOC's capabilities. 1727 1754 * @ioc: per adapter object 1728 1755 * ··· 1777 1754 ioc->bios_pg3.BiosVersion & 0x000000FF); 1778 1755 1779 1756 _base_display_dell_branding(ioc); 1757 + _base_display_intel_branding(ioc); 1780 1758 1781 1759 printk(MPT2SAS_INFO_FMT "Protocol=(", ioc->name); 1782 1760 ··· 2276 2252 ioc->name, (unsigned long long) ioc->request_dma)); 2277 2253 total_sz += sz; 2278 2254 2279 - sz = ioc->scsiio_depth * sizeof(struct request_tracker); 2255 + sz = ioc->scsiio_depth * sizeof(struct scsiio_tracker); 2280 2256 ioc->scsi_lookup_pages = get_order(sz); 2281 - ioc->scsi_lookup = (struct request_tracker *)__get_free_pages( 2257 + ioc->scsi_lookup = (struct scsiio_tracker *)__get_free_pages( 2282 2258 GFP_KERNEL, ioc->scsi_lookup_pages); 2283 2259 if (!ioc->scsi_lookup) { 2284 2260 printk(MPT2SAS_ERR_FMT "scsi_lookup: get_free_pages failed, "
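
The _base_get_cb_idx() and mpt2sas_base_free_smid() rewrites above replace nested range tests with one flat ladder over the three contiguous SMID regions, with the hot SCSI IO case tested first. A user-space model of that dispatch; the queue boundaries below are hypothetical, since the real values are per-adapter state:

    #include <stdint.h>
    #include <stdio.h>

    enum smid_region { REGION_SCSIIO, REGION_HIPRI, REGION_INTERNAL, REGION_BAD };

    /* SMIDs are handed out as [1, hi_pri) for SCSI IO, [hi_pri, internal)
     * for high-priority requests, and [internal, depth] for internal ones. */
    static enum smid_region classify_smid(uint16_t smid, uint16_t hi_pri,
                                          uint16_t internal, uint16_t depth)
    {
            if (smid < hi_pri)
                    return REGION_SCSIIO;    /* lookup index: smid - 1 */
            else if (smid < internal)
                    return REGION_HIPRI;     /* index: smid - hi_pri */
            else if (smid <= depth)
                    return REGION_INTERNAL;  /* index: smid - internal */
            return REGION_BAD;               /* caller reports cb_idx 0xFF */
    }

    int main(void)
    {
            /* hypothetical layout: 100 SCSI IO slots, 8 hi-pri, 8 internal */
            printf("%d %d %d %d\n",
                   classify_smid(1, 101, 109, 116),
                   classify_smid(101, 101, 109, 116),
                   classify_smid(109, 101, 109, 116),
                   classify_smid(200, 101, 109, 116));
            return 0;
    }
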
+34 -6
drivers/scsi/mpt2sas/mpt2sas_base.h
··· 69 69 #define MPT2SAS_DRIVER_NAME "mpt2sas" 70 70 #define MPT2SAS_AUTHOR "LSI Corporation <DL-MPTFusionLinux@lsi.com>" 71 71 #define MPT2SAS_DESCRIPTION "LSI MPT Fusion SAS 2.0 Device Driver" 72 - #define MPT2SAS_DRIVER_VERSION "07.100.00.00" 73 - #define MPT2SAS_MAJOR_VERSION 07 72 + #define MPT2SAS_DRIVER_VERSION "08.100.00.00" 73 + #define MPT2SAS_MAJOR_VERSION 08 74 74 #define MPT2SAS_MINOR_VERSION 100 75 75 #define MPT2SAS_BUILD_VERSION 00 76 76 #define MPT2SAS_RELEASE_VERSION 00 ··· 101 101 #define MPT_NAME_LENGTH 32 /* generic length of strings */ 102 102 #define MPT_STRING_LENGTH 64 103 103 104 - #define MPT_MAX_CALLBACKS 16 104 + #define MPT_MAX_CALLBACKS 16 105 + 105 106 106 107 #define CAN_SLEEP 1 107 108 #define NO_SLEEP 0 ··· 153 152 #define MPT2SAS_DELL_PERC_H200_EMBEDDED_SSDID 0x1F20 154 153 #define MPT2SAS_DELL_PERC_H200_SSDID 0x1F21 155 154 #define MPT2SAS_DELL_6GBPS_SAS_SSDID 0x1F22 155 + 156 + /* 157 + * Intel HBA branding 158 + */ 159 + #define MPT2SAS_INTEL_RMS2LL080_BRANDING \ 160 + "Intel Integrated RAID Module RMS2LL080" 161 + #define MPT2SAS_INTEL_RMS2LL040_BRANDING \ 162 + "Intel Integrated RAID Module RMS2LL040" 163 + 164 + /* 165 + * Intel HBA SSDIDs 166 + */ 167 + #define MPT2SAS_INTEL_RMS2LL080_SSDID 0x350E 168 + #define MPT2SAS_INTEL_RMS2LL040_SSDID 0x350F 156 169 157 170 /* 158 171 * per target private data ··· 446 431 }; 447 432 448 433 /** 449 - * struct request_tracker - firmware request tracker 434 + * struct scsiio_tracker - scsi mf request tracker 450 435 * @smid: system message id 451 436 * @scmd: scsi request pointer 452 437 * @cb_idx: callback index 453 438 * @chain_list: list of chains associated to this IO 454 439 * @tracker_list: list of free request (ioc->free_list) 455 440 */ 456 - struct request_tracker { 441 + struct scsiio_tracker { 457 442 u16 smid; 458 443 struct scsi_cmnd *scmd; 459 444 u8 cb_idx; 460 445 struct list_head chain_list; 446 + struct list_head tracker_list; 447 + }; 448 + 449 + /** 450 + * struct request_tracker - misc mf request tracker 451 + * @smid: system message id 452 + * @scmd: scsi request pointer 453 + * @cb_idx: callback index 454 + * @tracker_list: list of free request (ioc->free_list) 455 + */ 456 + struct request_tracker { 457 + u16 smid; 458 + u8 cb_idx; 461 459 struct list_head tracker_list; 462 460 }; 463 461 ··· 737 709 u8 *request; 738 710 dma_addr_t request_dma; 739 711 u32 request_dma_sz; 740 - struct request_tracker *scsi_lookup; 712 + struct scsiio_tracker *scsi_lookup; 741 713 ulong scsi_lookup_pages; 742 714 spinlock_t scsi_lookup_lock; 743 715 struct list_head free_list;
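
Splitting the tracker above means the hi-priority and internal queues no longer pay for the scmd pointer and per-IO chain list that only SCSI IO requests use. A quick user-space comparison of the two shapes, with stand-in types:

    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    struct scsiio_tracker {            /* one per SCSI IO slot */
            unsigned short smid;
            void *scmd;                /* stand-in for struct scsi_cmnd * */
            unsigned char cb_idx;
            struct list_head chain_list;
            struct list_head tracker_list;
    };

    struct request_tracker {           /* hi-priority and internal slots */
            unsigned short smid;
            unsigned char cb_idx;
            struct list_head tracker_list;
    };

    int main(void)
    {
            printf("scsiio_tracker %zu bytes, request_tracker %zu bytes\n",
                   sizeof(struct scsiio_tracker), sizeof(struct request_tracker));
            return 0;
    }
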
-1
drivers/scsi/mpt2sas/mpt2sas_scsih.c
··· 6975 6975 u32 device_state; 6976 6976 6977 6977 mpt2sas_base_stop_watchdog(ioc); 6978 - flush_scheduled_work(); 6979 6978 scsi_block_requests(shost); 6980 6979 device_state = pci_choose_state(pdev, state); 6981 6980 printk(MPT2SAS_INFO_FMT "pdev=0x%p, slot=%s, entering "
+16 -4
drivers/scsi/osd/osd_initiator.c
··· 1005 1005 const struct osd_sg_entry *sglist, unsigned numentries) 1006 1006 { 1007 1007 u64 len; 1008 - int ret = _add_sg_continuation_descriptor(or, sglist, numentries, &len); 1008 + u64 off; 1009 + int ret; 1009 1010 1010 - if (ret) 1011 - return ret; 1012 - osd_req_read(or, obj, 0, bio, len); 1011 + if (numentries > 1) { 1012 + off = 0; 1013 + ret = _add_sg_continuation_descriptor(or, sglist, numentries, 1014 + &len); 1015 + if (ret) 1016 + return ret; 1017 + } else { 1018 + /* Optimize the case of single segment, read_sg is a 1019 + * bidi operation. 1020 + */ 1021 + len = sglist->len; 1022 + off = sglist->offset; 1023 + } 1024 + osd_req_read(or, obj, off, bio, len); 1013 1025 1014 1026 return 0; 1015 1027 }
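
The osd_initiator change above avoids building a scatter-gather continuation segment when there is only one entry, issuing the read directly at that entry's offset instead. The shape of the decision as a compilable sketch; the types and callbacks are stand-ins, not the osd_req_* API:

    #include <stdint.h>

    struct sg_entry { uint64_t offset; uint64_t len; };

    int read_sg(const struct sg_entry *sglist, unsigned numentries,
                int (*add_continuation)(const struct sg_entry *, unsigned,
                                        uint64_t *),
                void (*issue_read)(uint64_t off, uint64_t len))
    {
            uint64_t off = 0, len;

            if (numentries > 1) {
                    /* multi-segment: describe the list in a continuation
                     * descriptor and read the whole thing from offset 0 */
                    int ret = add_continuation(sglist, numentries, &len);
                    if (ret)
                            return ret;
            } else {
                    /* single segment: no descriptor needed, read in place */
                    len = sglist->len;
                    off = sglist->offset;
            }
            issue_read(off, len);
            return 0;
    }
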
+17 -20
drivers/scsi/pm8001/pm8001_hwi.c
··· 1382 1382 return MPI_IO_STATUS_BUSY; 1383 1383 } 1384 1384 1385 - static void pm8001_work_queue(struct work_struct *work) 1385 + static void pm8001_work_fn(struct work_struct *work) 1386 1386 { 1387 - struct delayed_work *dw = container_of(work, struct delayed_work, work); 1388 - struct pm8001_wq *wq = container_of(dw, struct pm8001_wq, work_q); 1387 + struct pm8001_work *pw = container_of(work, struct pm8001_work, work); 1389 1388 struct pm8001_device *pm8001_dev; 1390 - struct domain_device *dev; 1389 + struct domain_device *dev; 1391 1390 1392 - switch (wq->handler) { 1391 + switch (pw->handler) { 1393 1392 case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS: 1394 - pm8001_dev = wq->data; 1393 + pm8001_dev = pw->data; 1395 1394 dev = pm8001_dev->sas_device; 1396 1395 pm8001_I_T_nexus_reset(dev); 1397 1396 break; 1398 1397 case IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY: 1399 - pm8001_dev = wq->data; 1398 + pm8001_dev = pw->data; 1400 1399 dev = pm8001_dev->sas_device; 1401 1400 pm8001_I_T_nexus_reset(dev); 1402 1401 break; 1403 1402 case IO_DS_IN_ERROR: 1404 - pm8001_dev = wq->data; 1403 + pm8001_dev = pw->data; 1405 1404 dev = pm8001_dev->sas_device; 1406 1405 pm8001_I_T_nexus_reset(dev); 1407 1406 break; 1408 1407 case IO_DS_NON_OPERATIONAL: 1409 - pm8001_dev = wq->data; 1408 + pm8001_dev = pw->data; 1410 1409 dev = pm8001_dev->sas_device; 1411 1410 pm8001_I_T_nexus_reset(dev); 1412 1411 break; 1413 1412 } 1414 - list_del(&wq->entry); 1415 - kfree(wq); 1413 + kfree(pw); 1416 1414 } 1417 1415 1418 1416 static int pm8001_handle_event(struct pm8001_hba_info *pm8001_ha, void *data, 1419 1417 int handler) 1420 1418 { 1421 - struct pm8001_wq *wq; 1419 + struct pm8001_work *pw; 1422 1420 int ret = 0; 1423 1421 1424 - wq = kmalloc(sizeof(struct pm8001_wq), GFP_ATOMIC); 1425 - if (wq) { 1426 - wq->pm8001_ha = pm8001_ha; 1427 - wq->data = data; 1428 - wq->handler = handler; 1429 - INIT_DELAYED_WORK(&wq->work_q, pm8001_work_queue); 1430 - list_add_tail(&wq->entry, &pm8001_ha->wq_list); 1431 - schedule_delayed_work(&wq->work_q, 0); 1422 + pw = kmalloc(sizeof(struct pm8001_work), GFP_ATOMIC); 1423 + if (pw) { 1424 + pw->pm8001_ha = pm8001_ha; 1425 + pw->data = data; 1426 + pw->handler = handler; 1427 + INIT_WORK(&pw->work, pm8001_work_fn); 1428 + queue_work(pm8001_wq, &pw->work); 1432 1429 } else 1433 1430 ret = -ENOMEM; 1434 1431
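
The pm8001 conversion above drops the per-item delayed_work plus bookkeeping list in favor of plain work items queued on a driver-private workqueue, with each handler freeing its own item. A minimal kernel-style sketch of that pattern; the demo_* names are placeholders, not the driver's symbols:

    #include <linux/kernel.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *demo_wq;   /* created at module init */

    struct demo_work {
            struct work_struct work;
            int handler;
            void *data;
    };

    static void demo_work_fn(struct work_struct *work)
    {
            struct demo_work *pw = container_of(work, struct demo_work, work);

            /* ... dispatch on pw->handler using pw->data ... */
            kfree(pw);                 /* the handler owns and frees the item */
    }

    static int demo_handle_event(void *data, int handler)
    {
            struct demo_work *pw = kmalloc(sizeof(*pw), GFP_ATOMIC);

            if (!pw)
                    return -ENOMEM;
            pw->data = data;
            pw->handler = handler;
            INIT_WORK(&pw->work, demo_work_fn);
            queue_work(demo_wq, &pw->work);
            return 0;
    }

Because every item lands on the one queue, teardown paths can simply flush_workqueue() rather than walking a list and cancelling entries one by one, which is exactly what pm8001_free() does below.
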
+18 -9
drivers/scsi/pm8001/pm8001_init.c
··· 51 51 52 52 LIST_HEAD(hba_list); 53 53 54 + struct workqueue_struct *pm8001_wq; 55 + 54 56 /** 55 57 * The main structure which LLDD must register for scsi core. 56 58 */ ··· 136 134 static void pm8001_free(struct pm8001_hba_info *pm8001_ha) 137 135 { 138 136 int i; 139 - struct pm8001_wq *wq; 140 137 141 138 if (!pm8001_ha) 142 139 return; ··· 151 150 PM8001_CHIP_DISP->chip_iounmap(pm8001_ha); 152 151 if (pm8001_ha->shost) 153 152 scsi_host_put(pm8001_ha->shost); 154 - list_for_each_entry(wq, &pm8001_ha->wq_list, entry) 155 - cancel_delayed_work(&wq->work_q); 153 + flush_workqueue(pm8001_wq); 156 154 kfree(pm8001_ha->tags); 157 155 kfree(pm8001_ha); 158 156 } ··· 381 381 pm8001_ha->sas = sha; 382 382 pm8001_ha->shost = shost; 383 383 pm8001_ha->id = pm8001_id++; 384 - INIT_LIST_HEAD(&pm8001_ha->wq_list); 385 384 pm8001_ha->logging_level = 0x01; 386 385 sprintf(pm8001_ha->name, "%s%d", DRV_NAME, pm8001_ha->id); 387 386 #ifdef PM8001_USE_TASKLET ··· 757 758 int i , pos; 758 759 u32 device_state; 759 760 pm8001_ha = sha->lldd_ha; 760 - flush_scheduled_work(); 761 + flush_workqueue(pm8001_wq); 761 762 scsi_block_requests(pm8001_ha->shost); 762 763 pos = pci_find_capability(pdev, PCI_CAP_ID_PM); 763 764 if (pos == 0) { ··· 869 870 */ 870 871 static int __init pm8001_init(void) 871 872 { 872 - int rc; 873 + int rc = -ENOMEM; 874 + 875 + pm8001_wq = alloc_workqueue("pm8001", 0, 0); 876 + if (!pm8001_wq) 877 + goto err; 878 + 873 879 pm8001_id = 0; 874 880 pm8001_stt = sas_domain_attach_transport(&pm8001_transport_ops); 875 881 if (!pm8001_stt) 876 - return -ENOMEM; 882 + goto err_wq; 877 883 rc = pci_register_driver(&pm8001_pci_driver); 878 884 if (rc) 879 - goto err_out; 885 + goto err_tp; 880 886 return 0; 881 - err_out: 887 + 888 + err_tp: 882 889 sas_release_transport(pm8001_stt); 890 + err_wq: 891 + destroy_workqueue(pm8001_wq); 892 + err: 883 893 return rc; 884 894 } 885 895 ··· 896 888 { 897 889 pci_unregister_driver(&pm8001_pci_driver); 898 890 sas_release_transport(pm8001_stt); 891 + destroy_workqueue(pm8001_wq); 899 892 } 900 893 901 894 module_init(pm8001_init);
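
pm8001_init() above now unwinds in strict reverse order of acquisition, one label per resource. The same ladder as a standalone sketch:

    #include <stdio.h>
    #include <stdlib.h>

    static void *acquire(const char *name)
    {
            printf("acquire %s\n", name);
            return malloc(1);
    }

    static void release(const char *name, void *p)
    {
            printf("release %s\n", name);
            free(p);
    }

    static int register_driver(void) { return 0; }  /* pretend success */

    static int demo_init(void)
    {
            void *wq, *tp;
            int rc = -1;                  /* stands in for -ENOMEM */

            wq = acquire("workqueue");
            if (!wq)
                    goto err;
            tp = acquire("transport template");
            if (!tp)
                    goto err_wq;
            rc = register_driver();
            if (rc)
                    goto err_tp;
            return 0;

    err_tp:
            release("transport template", tp);
    err_wq:
            release("workqueue", wq);
    err:
            return rc;
    }

    int main(void) { return demo_init(); }
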
+6 -4
drivers/scsi/pm8001/pm8001_sas.h
··· 50 50 #include <linux/dma-mapping.h> 51 51 #include <linux/pci.h> 52 52 #include <linux/interrupt.h> 53 + #include <linux/workqueue.h> 53 54 #include <scsi/libsas.h> 54 55 #include <scsi/scsi_tcq.h> 55 56 #include <scsi/sas_ata.h> ··· 380 379 #ifdef PM8001_USE_TASKLET 381 380 struct tasklet_struct tasklet; 382 381 #endif 383 - struct list_head wq_list; 384 382 u32 logging_level; 385 383 u32 fw_status; 386 384 const struct firmware *fw_image; 387 385 }; 388 386 389 - struct pm8001_wq { 390 - struct delayed_work work_q; 387 + struct pm8001_work { 388 + struct work_struct work; 391 389 struct pm8001_hba_info *pm8001_ha; 392 390 void *data; 393 391 int handler; 394 - struct list_head entry; 395 392 }; 396 393 397 394 struct pm8001_fw_image_header { ··· 458 459 void *param2; 459 460 void *param3; 460 461 }; 462 + 463 + /* pm8001 workqueue */ 464 + extern struct workqueue_struct *pm8001_wq; 461 465 462 466 /******************** function prototype *********************/ 463 467 int pm8001_tag_alloc(struct pm8001_hba_info *pm8001_ha, u32 *tag_out);
+1 -1
drivers/scsi/pmcraid.c
··· 5454 5454 pmcraid_shutdown(pdev); 5455 5455 5456 5456 pmcraid_disable_interrupts(pinstance, ~0); 5457 - flush_scheduled_work(); 5457 + flush_work_sync(&pinstance->worker_q); 5458 5458 5459 5459 pmcraid_kill_tasklets(pinstance); 5460 5460 pmcraid_unregister_interrupt_handler(pinstance);
+6 -3
drivers/scsi/qla2xxx/qla_def.h
··· 2402 2402 volatile struct { 2403 2403 uint32_t mbox_int :1; 2404 2404 uint32_t mbox_busy :1; 2405 - 2406 2405 uint32_t disable_risc_code_load :1; 2407 2406 uint32_t enable_64bit_addressing :1; 2408 2407 uint32_t enable_lip_reset :1; 2409 2408 uint32_t enable_target_reset :1; 2410 2409 uint32_t enable_lip_full_login :1; 2411 2410 uint32_t enable_led_scheme :1; 2411 + 2412 2412 uint32_t msi_enabled :1; 2413 2413 uint32_t msix_enabled :1; 2414 2414 uint32_t disable_serdes :1; ··· 2417 2417 uint32_t pci_channel_io_perm_failure :1; 2418 2418 uint32_t fce_enabled :1; 2419 2419 uint32_t fac_supported :1; 2420 + 2420 2421 uint32_t chip_reset_done :1; 2421 2422 uint32_t port0 :1; 2422 2423 uint32_t running_gold_fw :1; ··· 2425 2424 uint32_t cpu_affinity_enabled :1; 2426 2425 uint32_t disable_msix_handshake :1; 2427 2426 uint32_t fcp_prio_enabled :1; 2428 - uint32_t fw_hung :1; 2429 - uint32_t quiesce_owner:1; 2427 + uint32_t isp82xx_fw_hung:1; 2428 + 2429 + uint32_t quiesce_owner:1; 2430 2430 uint32_t thermal_supported:1; 2431 + uint32_t isp82xx_reset_hdlr_active:1; 2431 2432 /* 26 bits */ 2432 2433 } flags; 2433 2434
+1
drivers/scsi/qla2xxx/qla_gbl.h
··· 565 565 extern int qla82xx_mbx_intr_disable(scsi_qla_host_t *); 566 566 extern void qla82xx_start_iocbs(srb_t *); 567 567 extern int qla82xx_fcoe_ctx_reset(scsi_qla_host_t *); 568 + extern void qla82xx_chip_reset_cleanup(scsi_qla_host_t *); 568 569 569 570 /* BSG related functions */ 570 571 extern int qla24xx_bsg_request(struct fc_bsg_job *);
+14 -7
drivers/scsi/qla2xxx/qla_gs.c
··· 121 121 122 122 rval = QLA_FUNCTION_FAILED; 123 123 if (ms_pkt->entry_status != 0) { 124 - DEBUG2_3(printk("scsi(%ld): %s failed, error status (%x).\n", 125 - vha->host_no, routine, ms_pkt->entry_status)); 124 + DEBUG2_3(printk(KERN_WARNING "scsi(%ld): %s failed, error status " 125 + "(%x) on port_id: %02x%02x%02x.\n", 126 + vha->host_no, routine, ms_pkt->entry_status, 127 + vha->d_id.b.domain, vha->d_id.b.area, 128 + vha->d_id.b.al_pa)); 126 129 } else { 127 130 if (IS_FWI2_CAPABLE(ha)) 128 131 comp_status = le16_to_cpu( ··· 139 136 if (ct_rsp->header.response != 140 137 __constant_cpu_to_be16(CT_ACCEPT_RESPONSE)) { 141 138 DEBUG2_3(printk("scsi(%ld): %s failed, " 142 - "rejected request:\n", vha->host_no, 143 - routine)); 139 + "rejected request on port_id: %02x%02x%02x\n", 140 + vha->host_no, routine, 141 + vha->d_id.b.domain, vha->d_id.b.area, 142 + vha->d_id.b.al_pa)); 144 143 DEBUG2_3(qla2x00_dump_buffer( 145 144 (uint8_t *)&ct_rsp->header, 146 145 sizeof(struct ct_rsp_hdr))); ··· 152 147 break; 153 148 default: 154 149 DEBUG2_3(printk("scsi(%ld): %s failed, completion " 155 - "status (%x).\n", vha->host_no, routine, 156 - comp_status)); 150 + "status (%x) on port_id: %02x%02x%02x.\n", 151 + vha->host_no, routine, comp_status, 152 + vha->d_id.b.domain, vha->d_id.b.area, 153 + vha->d_id.b.al_pa)); 157 154 break; 158 155 } 159 156 } ··· 1972 1965 "scsi(%ld): GFF_ID issue IOCB failed " 1973 1966 "(%d).\n", vha->host_no, rval)); 1974 1967 } else if (qla2x00_chk_ms_status(vha, ms_pkt, ct_rsp, 1975 - "GPN_ID") != QLA_SUCCESS) { 1968 + "GFF_ID") != QLA_SUCCESS) { 1976 1969 DEBUG2_3(printk(KERN_INFO 1977 1970 "scsi(%ld): GFF_ID IOCB status had a " 1978 1971 "failure status code\n", vha->host_no));
+33 -8
drivers/scsi/qla2xxx/qla_init.c
··· 1967 1967 } else { 1968 1968 /* Mailbox cmd failed. Timeout on min_wait. */ 1969 1969 if (time_after_eq(jiffies, mtime) || 1970 - (IS_QLA82XX(ha) && ha->flags.fw_hung)) 1970 + ha->flags.isp82xx_fw_hung) 1971 1971 break; 1972 1972 } 1973 1973 ··· 3945 3945 struct qla_hw_data *ha = vha->hw; 3946 3946 struct scsi_qla_host *vp; 3947 3947 unsigned long flags; 3948 + fc_port_t *fcport; 3948 3949 3949 - vha->flags.online = 0; 3950 + /* For ISP82XX, driver waits for completion of the commands. 3951 + * online flag should be set. 3952 + */ 3953 + if (!IS_QLA82XX(ha)) 3954 + vha->flags.online = 0; 3950 3955 ha->flags.chip_reset_done = 0; 3951 3956 clear_bit(ISP_ABORT_NEEDED, &vha->dpc_flags); 3952 3957 ha->qla_stats.total_isp_aborts++; ··· 3959 3954 qla_printk(KERN_INFO, ha, 3960 3955 "Performing ISP error recovery - ha= %p.\n", ha); 3961 3956 3962 - /* Chip reset does not apply to 82XX */ 3957 + /* For ISP82XX, reset_chip is just disabling interrupts. 3958 + * Driver waits for the completion of the commands. 3959 + * the interrupts need to be enabled. 3960 + */ 3963 3961 if (!IS_QLA82XX(ha)) 3964 3962 ha->isp_ops->reset_chip(vha); 3965 3963 ··· 3988 3980 LOOP_DOWN_TIME); 3989 3981 } 3990 3982 3983 + /* Clear all async request states across all VPs. */ 3984 + list_for_each_entry(fcport, &vha->vp_fcports, list) 3985 + fcport->flags &= ~(FCF_LOGIN_NEEDED | FCF_ASYNC_SENT); 3986 + spin_lock_irqsave(&ha->vport_slock, flags); 3987 + list_for_each_entry(vp, &ha->vp_list, list) { 3988 + atomic_inc(&vp->vref_count); 3989 + spin_unlock_irqrestore(&ha->vport_slock, flags); 3990 + 3991 + list_for_each_entry(fcport, &vp->vp_fcports, list) 3992 + fcport->flags &= ~(FCF_LOGIN_NEEDED | FCF_ASYNC_SENT); 3993 + 3994 + spin_lock_irqsave(&ha->vport_slock, flags); 3995 + atomic_dec(&vp->vref_count); 3996 + } 3997 + spin_unlock_irqrestore(&ha->vport_slock, flags); 3998 + 3991 3999 if (!ha->flags.eeh_busy) { 3992 4000 /* Make sure for ISP 82XX IO DMA is complete */ 3993 4001 if (IS_QLA82XX(ha)) { 3994 - if (qla2x00_eh_wait_for_pending_commands(vha, 0, 0, 3995 - WAIT_HOST) == QLA_SUCCESS) { 3996 - DEBUG2(qla_printk(KERN_INFO, ha, 3997 - "Done wait for pending commands\n")); 3998 - } 4002 + qla82xx_chip_reset_cleanup(vha); 4003 + 4004 + /* Done waiting for pending commands. 4005 + * Reset the online flag. 4006 + */ 4007 + vha->flags.online = 0; 3999 4008 } 4000 4009 4001 4010 /* Requeue all commands in outstanding command list. */
+50 -2
drivers/scsi/qla2xxx/qla_iocb.c
··· 328 328 struct qla_hw_data *ha; 329 329 struct req_que *req; 330 330 struct rsp_que *rsp; 331 + char tag[2]; 331 332 332 333 /* Setup device pointers. */ 333 334 ret = 0; ··· 407 406 cmd_pkt->lun = cpu_to_le16(sp->cmd->device->lun); 408 407 409 408 /* Update tagged queuing modifier */ 410 - cmd_pkt->control_flags = __constant_cpu_to_le16(CF_SIMPLE_TAG); 409 + if (scsi_populate_tag_msg(cmd, tag)) { 410 + switch (tag[0]) { 411 + case HEAD_OF_QUEUE_TAG: 412 + cmd_pkt->control_flags = 413 + __constant_cpu_to_le16(CF_HEAD_TAG); 414 + break; 415 + case ORDERED_QUEUE_TAG: 416 + cmd_pkt->control_flags = 417 + __constant_cpu_to_le16(CF_ORDERED_TAG); 418 + break; 419 + default: 420 + cmd_pkt->control_flags = 421 + __constant_cpu_to_le16(CF_SIMPLE_TAG); 422 + break; 423 + } 424 + } 411 425 412 426 /* Load SCSI command packet. */ 413 427 memcpy(cmd_pkt->scsi_cdb, cmd->cmnd, cmd->cmd_len); ··· 987 971 uint16_t fcp_cmnd_len; 988 972 struct fcp_cmnd *fcp_cmnd; 989 973 dma_addr_t crc_ctx_dma; 974 + char tag[2]; 990 975 991 976 cmd = sp->cmd; 992 977 ··· 1085 1068 LSD(crc_ctx_dma + CRC_CONTEXT_FCPCMND_OFF)); 1086 1069 cmd_pkt->fcp_cmnd_dseg_address[1] = cpu_to_le32( 1087 1070 MSD(crc_ctx_dma + CRC_CONTEXT_FCPCMND_OFF)); 1088 - fcp_cmnd->task_attribute = 0; 1089 1071 fcp_cmnd->task_management = 0; 1072 + 1073 + /* 1074 + * Update tagged queuing modifier if using command tag queuing 1075 + */ 1076 + if (scsi_populate_tag_msg(cmd, tag)) { 1077 + switch (tag[0]) { 1078 + case HEAD_OF_QUEUE_TAG: 1079 + fcp_cmnd->task_attribute = TSK_HEAD_OF_QUEUE; 1080 + break; 1081 + case ORDERED_QUEUE_TAG: 1082 + fcp_cmnd->task_attribute = TSK_ORDERED; 1083 + break; 1084 + default: 1085 + fcp_cmnd->task_attribute = 0; 1086 + break; 1087 + } 1088 + } else { 1089 + fcp_cmnd->task_attribute = 0; 1090 + } 1090 1091 1091 1092 cmd_pkt->fcp_rsp_dseg_len = 0; /* Let response come in status iocb */ 1092 1093 ··· 1212 1177 struct scsi_cmnd *cmd = sp->cmd; 1213 1178 struct scsi_qla_host *vha = sp->fcport->vha; 1214 1179 struct qla_hw_data *ha = vha->hw; 1180 + char tag[2]; 1215 1181 1216 1182 /* Setup device pointers. */ 1217 1183 ret = 0; ··· 1295 1259 1296 1260 int_to_scsilun(sp->cmd->device->lun, &cmd_pkt->lun); 1297 1261 host_to_fcp_swap((uint8_t *)&cmd_pkt->lun, sizeof(cmd_pkt->lun)); 1262 + 1263 + /* Update tagged queuing modifier -- default is TSK_SIMPLE (0). */ 1264 + if (scsi_populate_tag_msg(cmd, tag)) { 1265 + switch (tag[0]) { 1266 + case HEAD_OF_QUEUE_TAG: 1267 + cmd_pkt->task = TSK_HEAD_OF_QUEUE; 1268 + break; 1269 + case ORDERED_QUEUE_TAG: 1270 + cmd_pkt->task = TSK_ORDERED; 1271 + break; 1272 + } 1273 + } 1298 1274 1299 1275 /* Load SCSI command packet. */ 1300 1276 memcpy(cmd_pkt->fcp_cdb, cmd->cmnd, cmd->cmd_len);
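
The qla_iocb.c hunks above translate the midlayer's queue-tag message into the firmware's FCP task attribute instead of hardwiring simple tagging. A standalone model of the mapping; the numeric values are the SCSI message codes and FCP task attributes as I recall them, so treat them as assumptions rather than the driver's definitions:

    enum { HEAD_OF_QUEUE_TAG = 0x21, ORDERED_QUEUE_TAG = 0x22 }; /* SCSI msgs */
    enum { TSK_SIMPLE = 0, TSK_HEAD_OF_QUEUE = 1, TSK_ORDERED = 2 };

    int task_attribute_for_tag(int have_tag, char tag0)
    {
            if (!have_tag)
                    return TSK_SIMPLE;       /* untagged command */
            switch (tag0) {
            case HEAD_OF_QUEUE_TAG:
                    return TSK_HEAD_OF_QUEUE;
            case ORDERED_QUEUE_TAG:
                    return TSK_ORDERED;
            default:
                    return TSK_SIMPLE;
            }
    }
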
+21 -24
drivers/scsi/qla2xxx/qla_mbx.c
··· 71 71 return QLA_FUNCTION_TIMEOUT; 72 72 } 73 73 74 + if (ha->flags.isp82xx_fw_hung) { 75 + /* Setting Link-Down error */ 76 + mcp->mb[0] = MBS_LINK_DOWN_ERROR; 77 + rval = QLA_FUNCTION_FAILED; 78 + goto premature_exit; 79 + } 80 + 74 81 /* 75 82 * Wait for active mailbox commands to finish by waiting at most tov 76 83 * seconds. This is to serialize actual issuing of mailbox cmds during ··· 88 81 DEBUG2_3_11(printk("%s(%ld): cmd access timeout. " 89 82 "Exiting.\n", __func__, base_vha->host_no)); 90 83 return QLA_FUNCTION_TIMEOUT; 91 - } 92 - 93 - if (IS_QLA82XX(ha) && ha->flags.fw_hung) { 94 - /* Setting Link-Down error */ 95 - mcp->mb[0] = MBS_LINK_DOWN_ERROR; 96 - rval = QLA_FUNCTION_FAILED; 97 - goto premature_exit; 98 84 } 99 85 100 86 ha->flags.mbox_busy = 1; ··· 223 223 ha->flags.mbox_int = 0; 224 224 clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags); 225 225 226 - if (IS_QLA82XX(ha) && ha->flags.fw_hung) { 226 + if (ha->flags.isp82xx_fw_hung) { 227 227 ha->flags.mbox_busy = 0; 228 228 /* Setting Link-Down error */ 229 229 mcp->mb[0] = MBS_LINK_DOWN_ERROR; ··· 2462 2462 "-- completion status (%x).\n", __func__, 2463 2463 vha->host_no, le16_to_cpu(sts->comp_status))); 2464 2464 rval = QLA_FUNCTION_FAILED; 2465 - } else if (!(le16_to_cpu(sts->scsi_status) & 2466 - SS_RESPONSE_INFO_LEN_VALID)) { 2467 - DEBUG2_3_11(printk("%s(%ld): failed to complete IOCB " 2468 - "-- no response info (%x).\n", __func__, vha->host_no, 2469 - le16_to_cpu(sts->scsi_status))); 2470 - rval = QLA_FUNCTION_FAILED; 2471 - } else if (le32_to_cpu(sts->rsp_data_len) < 4) { 2472 - DEBUG2_3_11(printk("%s(%ld): failed to complete IOCB " 2473 - "-- not enough response info (%d).\n", __func__, 2474 - vha->host_no, le32_to_cpu(sts->rsp_data_len))); 2475 - rval = QLA_FUNCTION_FAILED; 2476 - } else if (sts->data[3]) { 2477 - DEBUG2_3_11(printk("%s(%ld): failed to complete IOCB " 2478 - "-- response (%x).\n", __func__, 2479 - vha->host_no, sts->data[3])); 2480 - rval = QLA_FUNCTION_FAILED; 2465 + } else if (le16_to_cpu(sts->scsi_status) & 2466 + SS_RESPONSE_INFO_LEN_VALID) { 2467 + if (le32_to_cpu(sts->rsp_data_len) < 4) { 2468 + DEBUG2_3_11(printk("%s(%ld): ignoring inconsistent " 2469 + "data length -- not enough response info (%d).\n", 2470 + __func__, vha->host_no, 2471 + le32_to_cpu(sts->rsp_data_len))); 2472 + } else if (sts->data[3]) { 2473 + DEBUG2_3_11(printk("%s(%ld): failed to complete IOCB " 2474 + "-- response (%x).\n", __func__, 2475 + vha->host_no, sts->data[3])); 2476 + rval = QLA_FUNCTION_FAILED; 2477 + } 2481 2478 } 2482 2479 2483 2480 /* Issue marker IOCB. */
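
The task-management completion path above now treats FCP response info as optional: it is examined only when the status flags mark it valid, an undersized response-info length is logged and ignored rather than failing the command, and only a nonzero response code fails it. The decision as a predicate; the flag value is a stand-in:

    #include <stdbool.h>
    #include <stdint.h>

    #define RSP_INFO_LEN_VALID 0x0100  /* stand-in for SS_RESPONSE_INFO_LEN_VALID */

    static bool tmf_iocb_failed(uint16_t scsi_status, uint32_t rsp_data_len,
                                uint8_t rsp_code)
    {
            if (!(scsi_status & RSP_INFO_LEN_VALID))
                    return false;          /* no response info to examine */
            if (rsp_data_len < 4)
                    return false;          /* inconsistent length: ignore */
            return rsp_code != 0;          /* nonzero FCP response code */
    }
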
+145 -46
drivers/scsi/qla2xxx/qla_nx.c
··· 7 7 #include "qla_def.h" 8 8 #include <linux/delay.h> 9 9 #include <linux/pci.h> 10 + #include <scsi/scsi_tcq.h> 10 11 11 12 #define MASK(n) ((1ULL<<(n))-1) 12 13 #define MN_WIN(addr) (((addr & 0x1fc0000) >> 1) | \ ··· 2548 2547 dsd_seg = (uint32_t *)&cmd_pkt->fcp_data_dseg_address; 2549 2548 *dsd_seg++ = cpu_to_le32(LSD(dsd_ptr->dsd_list_dma)); 2550 2549 *dsd_seg++ = cpu_to_le32(MSD(dsd_ptr->dsd_list_dma)); 2551 - *dsd_seg++ = dsd_list_len; 2550 + cmd_pkt->fcp_data_dseg_len = dsd_list_len; 2552 2551 } else { 2553 2552 *cur_dsd++ = cpu_to_le32(LSD(dsd_ptr->dsd_list_dma)); 2554 2553 *cur_dsd++ = cpu_to_le32(MSD(dsd_ptr->dsd_list_dma)); ··· 2621 2620 struct qla_hw_data *ha = vha->hw; 2622 2621 struct req_que *req = NULL; 2623 2622 struct rsp_que *rsp = NULL; 2623 + char tag[2]; 2624 2624 2625 2625 /* Setup device pointers. */ 2626 2626 ret = 0; ··· 2772 2770 int_to_scsilun(sp->cmd->device->lun, &cmd_pkt->lun); 2773 2771 host_to_fcp_swap((uint8_t *)&cmd_pkt->lun, sizeof(cmd_pkt->lun)); 2774 2772 2773 + /* 2774 + * Update tagged queuing modifier -- default is TSK_SIMPLE (0). 2775 + */ 2776 + if (scsi_populate_tag_msg(cmd, tag)) { 2777 + switch (tag[0]) { 2778 + case HEAD_OF_QUEUE_TAG: 2779 + ctx->fcp_cmnd->task_attribute = 2780 + TSK_HEAD_OF_QUEUE; 2781 + break; 2782 + case ORDERED_QUEUE_TAG: 2783 + ctx->fcp_cmnd->task_attribute = 2784 + TSK_ORDERED; 2785 + break; 2786 + } 2787 + } 2788 + 2775 2789 /* build FCP_CMND IU */ 2776 2790 memset(ctx->fcp_cmnd, 0, sizeof(struct fcp_cmnd)); 2777 2791 int_to_scsilun(sp->cmd->device->lun, &ctx->fcp_cmnd->lun); ··· 2852 2834 int_to_scsilun(sp->cmd->device->lun, &cmd_pkt->lun); 2853 2835 host_to_fcp_swap((uint8_t *)&cmd_pkt->lun, 2854 2836 sizeof(cmd_pkt->lun)); 2837 + 2838 + /* 2839 + * Update tagged queuing modifier -- default is TSK_SIMPLE (0). 2840 + */ 2841 + if (scsi_populate_tag_msg(cmd, tag)) { 2842 + switch (tag[0]) { 2843 + case HEAD_OF_QUEUE_TAG: 2844 + cmd_pkt->task = TSK_HEAD_OF_QUEUE; 2845 + break; 2846 + case ORDERED_QUEUE_TAG: 2847 + cmd_pkt->task = TSK_ORDERED; 2848 + break; 2849 + } 2850 + } 2855 2851 2856 2852 /* Load SCSI command packet. 
*/ 2857 2853 memcpy(cmd_pkt->fcp_cdb, cmd->cmnd, cmd->cmd_len); ··· 3489 3457 } 3490 3458 } 3491 3459 3492 - static void 3460 + int 3493 3461 qla82xx_check_fw_alive(scsi_qla_host_t *vha) 3494 3462 { 3495 - uint32_t fw_heartbeat_counter, halt_status; 3496 - struct qla_hw_data *ha = vha->hw; 3463 + uint32_t fw_heartbeat_counter; 3464 + int status = 0; 3497 3465 3498 - fw_heartbeat_counter = qla82xx_rd_32(ha, QLA82XX_PEG_ALIVE_COUNTER); 3466 + fw_heartbeat_counter = qla82xx_rd_32(vha->hw, 3467 + QLA82XX_PEG_ALIVE_COUNTER); 3499 3468 /* all 0xff, assume AER/EEH in progress, ignore */ 3500 3469 if (fw_heartbeat_counter == 0xffffffff) 3501 - return; 3470 + return status; 3502 3471 if (vha->fw_heartbeat_counter == fw_heartbeat_counter) { 3503 3472 vha->seconds_since_last_heartbeat++; 3504 3473 /* FW not alive after 2 seconds */ 3505 3474 if (vha->seconds_since_last_heartbeat == 2) { 3506 3475 vha->seconds_since_last_heartbeat = 0; 3507 - halt_status = qla82xx_rd_32(ha, 3508 - QLA82XX_PEG_HALT_STATUS1); 3509 - if (halt_status & HALT_STATUS_UNRECOVERABLE) { 3510 - set_bit(ISP_UNRECOVERABLE, &vha->dpc_flags); 3511 - } else { 3512 - qla_printk(KERN_INFO, ha, 3513 - "scsi(%ld): %s - detect abort needed\n", 3514 - vha->host_no, __func__); 3515 - set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags); 3516 - } 3517 - qla2xxx_wake_dpc(vha); 3518 - ha->flags.fw_hung = 1; 3519 - if (ha->flags.mbox_busy) { 3520 - ha->flags.mbox_int = 1; 3521 - DEBUG2(qla_printk(KERN_ERR, ha, 3522 - "Due to fw hung, doing premature " 3523 - "completion of mbx command\n")); 3524 - if (test_bit(MBX_INTR_WAIT, 3525 - &ha->mbx_cmd_flags)) 3526 - complete(&ha->mbx_intr_comp); 3527 - } 3476 + status = 1; 3528 3477 } 3529 3478 } else 3530 3479 vha->seconds_since_last_heartbeat = 0; 3531 3480 vha->fw_heartbeat_counter = fw_heartbeat_counter; 3481 + return status; 3532 3482 } 3533 3483 3534 3484 /* ··· 3571 3557 break; 3572 3558 case QLA82XX_DEV_NEED_RESET: 3573 3559 qla82xx_need_reset_handler(vha); 3560 + dev_init_timeout = jiffies + 3561 + (ha->nx_dev_init_timeout * HZ); 3574 3562 break; 3575 3563 case QLA82XX_DEV_NEED_QUIESCENT: 3576 3564 qla82xx_need_qsnt_handler(vha); ··· 3612 3596 3613 3597 void qla82xx_watchdog(scsi_qla_host_t *vha) 3614 3598 { 3615 - uint32_t dev_state; 3599 + uint32_t dev_state, halt_status; 3616 3600 struct qla_hw_data *ha = vha->hw; 3617 3601 3618 - dev_state = qla82xx_rd_32(ha, QLA82XX_CRB_DEV_STATE); 3619 - 3620 3602 /* don't poll if reset is going on */ 3621 - if (!(test_bit(ISP_ABORT_NEEDED, &vha->dpc_flags) || 3622 - test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags) || 3623 - test_bit(ISP_ABORT_RETRY, &vha->dpc_flags))) { 3624 - if (dev_state == QLA82XX_DEV_NEED_RESET) { 3603 + if (!ha->flags.isp82xx_reset_hdlr_active) { 3604 + dev_state = qla82xx_rd_32(ha, QLA82XX_CRB_DEV_STATE); 3605 + if (dev_state == QLA82XX_DEV_NEED_RESET && 3606 + !test_bit(ISP_ABORT_NEEDED, &vha->dpc_flags)) { 3625 3607 qla_printk(KERN_WARNING, ha, 3626 - "%s(): Adapter reset needed!\n", __func__); 3608 + "%s(): Adapter reset needed!\n", __func__); 3627 3609 set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags); 3628 3610 qla2xxx_wake_dpc(vha); 3629 - ha->flags.fw_hung = 1; 3630 - if (ha->flags.mbox_busy) { 3631 - ha->flags.mbox_int = 1; 3632 - DEBUG2(qla_printk(KERN_ERR, ha, 3633 - "Need reset, doing premature " 3634 - "completion of mbx command\n")); 3635 - if (test_bit(MBX_INTR_WAIT, 3636 - &ha->mbx_cmd_flags)) 3637 - complete(&ha->mbx_intr_comp); 3638 - } 3639 3611 } else if (dev_state == QLA82XX_DEV_NEED_QUIESCENT && 3640 3612 
!test_bit(ISP_QUIESCE_NEEDED, &vha->dpc_flags)) { 3641 3613 DEBUG(qla_printk(KERN_INFO, ha, ··· 3633 3629 qla2xxx_wake_dpc(vha); 3634 3630 } else { 3635 3631 qla82xx_check_fw_alive(vha); 3632 + if (qla82xx_check_fw_alive(vha)) { 3633 + halt_status = qla82xx_rd_32(ha, 3634 + QLA82XX_PEG_HALT_STATUS1); 3635 + if (halt_status & HALT_STATUS_UNRECOVERABLE) { 3636 + set_bit(ISP_UNRECOVERABLE, 3637 + &vha->dpc_flags); 3638 + } else { 3639 + qla_printk(KERN_INFO, ha, 3640 + "scsi(%ld): %s - detect abort needed\n", 3641 + vha->host_no, __func__); 3642 + set_bit(ISP_ABORT_NEEDED, 3643 + &vha->dpc_flags); 3644 + } 3645 + qla2xxx_wake_dpc(vha); 3646 + ha->flags.isp82xx_fw_hung = 1; 3647 + if (ha->flags.mbox_busy) { 3648 + ha->flags.mbox_int = 1; 3649 + DEBUG2(qla_printk(KERN_ERR, ha, 3650 + "Due to fw hung, doing premature " 3651 + "completion of mbx command\n")); 3652 + if (test_bit(MBX_INTR_WAIT, 3653 + &ha->mbx_cmd_flags)) 3654 + complete(&ha->mbx_intr_comp); 3655 + } 3656 + } 3636 3657 } 3637 3658 } 3638 3659 } ··· 3692 3663 "Exiting.\n", __func__, vha->host_no); 3693 3664 return QLA_SUCCESS; 3694 3665 } 3666 + ha->flags.isp82xx_reset_hdlr_active = 1; 3695 3667 3696 3668 qla82xx_idc_lock(ha); 3697 3669 dev_state = qla82xx_rd_32(ha, QLA82XX_CRB_DEV_STATE); ··· 3713 3683 qla82xx_idc_unlock(ha); 3714 3684 3715 3685 if (rval == QLA_SUCCESS) { 3716 - ha->flags.fw_hung = 0; 3686 + ha->flags.isp82xx_fw_hung = 0; 3687 + ha->flags.isp82xx_reset_hdlr_active = 0; 3717 3688 qla82xx_restart_isp(vha); 3718 3689 } 3719 3690 ··· 3821 3790 "%s status=%d\n", __func__, status)); 3822 3791 3823 3792 return status; 3793 + } 3794 + 3795 + void 3796 + qla82xx_chip_reset_cleanup(scsi_qla_host_t *vha) 3797 + { 3798 + int i; 3799 + unsigned long flags; 3800 + struct qla_hw_data *ha = vha->hw; 3801 + 3802 + /* Check if 82XX firmware is alive or not 3803 + * We may have arrived here from NEED_RESET 3804 + * detection only 3805 + */ 3806 + if (!ha->flags.isp82xx_fw_hung) { 3807 + for (i = 0; i < 2; i++) { 3808 + msleep(1000); 3809 + if (qla82xx_check_fw_alive(vha)) { 3810 + ha->flags.isp82xx_fw_hung = 1; 3811 + if (ha->flags.mbox_busy) { 3812 + ha->flags.mbox_int = 1; 3813 + complete(&ha->mbx_intr_comp); 3814 + } 3815 + break; 3816 + } 3817 + } 3818 + } 3819 + 3820 + /* Abort all commands gracefully if fw NOT hung */ 3821 + if (!ha->flags.isp82xx_fw_hung) { 3822 + int cnt, que; 3823 + srb_t *sp; 3824 + struct req_que *req; 3825 + 3826 + spin_lock_irqsave(&ha->hardware_lock, flags); 3827 + for (que = 0; que < ha->max_req_queues; que++) { 3828 + req = ha->req_q_map[que]; 3829 + if (!req) 3830 + continue; 3831 + for (cnt = 1; cnt < MAX_OUTSTANDING_COMMANDS; cnt++) { 3832 + sp = req->outstanding_cmds[cnt]; 3833 + if (sp) { 3834 + if (!sp->ctx || 3835 + (sp->flags & SRB_FCP_CMND_DMA_VALID)) { 3836 + spin_unlock_irqrestore( 3837 + &ha->hardware_lock, flags); 3838 + if (ha->isp_ops->abort_command(sp)) { 3839 + qla_printk(KERN_INFO, ha, 3840 + "scsi(%ld): mbx abort command failed in %s\n", 3841 + vha->host_no, __func__); 3842 + } else { 3843 + qla_printk(KERN_INFO, ha, 3844 + "scsi(%ld): mbx abort command success in %s\n", 3845 + vha->host_no, __func__); 3846 + } 3847 + spin_lock_irqsave(&ha->hardware_lock, flags); 3848 + } 3849 + } 3850 + } 3851 + } 3852 + spin_unlock_irqrestore(&ha->hardware_lock, flags); 3853 + 3854 + /* Wait for pending cmds (physical and virtual) to complete */ 3855 + if (!qla2x00_eh_wait_for_pending_commands(vha, 0, 0, 3856 + WAIT_HOST) == QLA_SUCCESS) { 3857 + DEBUG2(qla_printk(KERN_INFO, ha, 3858 + "Done wait 
for pending commands\n")); 3859 + } 3860 + } 3824 3861 }
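
qla82xx_check_fw_alive() above now just reports staleness and leaves the recovery policy (reading HALT_STATUS, setting dpc flags, finishing hung mailbox commands) to the watchdog and to qla82xx_chip_reset_cleanup(). The heartbeat test itself, as a user-space model:

    #include <stdint.h>

    struct hb_state {
            uint32_t last_counter;
            int stale_polls;
    };

    /* Called once per second; returns 1 when the firmware heartbeat
     * counter has not advanced for two consecutive polls. */
    static int fw_hung_after_poll(struct hb_state *s, uint32_t counter)
    {
            if (counter == 0xffffffff)
                    return 0;              /* AER/EEH in progress: ignore */
            if (counter == s->last_counter) {
                    if (++s->stale_polls == 2) {
                            s->stale_polls = 0;
                            return 1;      /* no heartbeat for 2 seconds */
                    }
            } else {
                    s->stale_polls = 0;
            }
            s->last_counter = counter;
            return 0;
    }
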
+29 -28
drivers/scsi/qla2xxx/qla_os.c
··· 506 506 507 507 static inline srb_t * 508 508 qla2x00_get_new_sp(scsi_qla_host_t *vha, fc_port_t *fcport, 509 - struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 509 + struct scsi_cmnd *cmd) 510 510 { 511 511 srb_t *sp; 512 512 struct qla_hw_data *ha = vha->hw; ··· 520 520 sp->cmd = cmd; 521 521 sp->flags = 0; 522 522 CMD_SP(cmd) = (void *)sp; 523 - cmd->scsi_done = done; 524 523 sp->ctx = NULL; 525 524 526 525 return sp; 527 526 } 528 527 529 528 static int 530 - qla2xxx_queuecommand_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 529 + qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) 531 530 { 532 531 scsi_qla_host_t *vha = shost_priv(cmd->device->host); 533 532 fc_port_t *fcport = (struct fc_port *) cmd->device->hostdata; ··· 536 537 srb_t *sp; 537 538 int rval; 538 539 539 - spin_unlock_irq(vha->host->host_lock); 540 540 if (ha->flags.eeh_busy) { 541 541 if (ha->flags.pci_channel_io_perm_failure) 542 542 cmd->result = DID_NO_CONNECT << 16; ··· 567 569 goto qc24_target_busy; 568 570 } 569 571 570 - sp = qla2x00_get_new_sp(base_vha, fcport, cmd, done); 572 + sp = qla2x00_get_new_sp(base_vha, fcport, cmd); 571 573 if (!sp) 572 - goto qc24_host_busy_lock; 574 + goto qc24_host_busy; 573 575 574 576 rval = ha->isp_ops->start_scsi(sp); 575 577 if (rval != QLA_SUCCESS) 576 578 goto qc24_host_busy_free_sp; 577 - 578 - spin_lock_irq(vha->host->host_lock); 579 579 580 580 return 0; 581 581 ··· 581 585 qla2x00_sp_free_dma(sp); 582 586 mempool_free(sp, ha->srb_mempool); 583 587 584 - qc24_host_busy_lock: 585 - spin_lock_irq(vha->host->host_lock); 588 + qc24_host_busy: 586 589 return SCSI_MLQUEUE_HOST_BUSY; 587 590 588 591 qc24_target_busy: 589 - spin_lock_irq(vha->host->host_lock); 590 592 return SCSI_MLQUEUE_TARGET_BUSY; 591 593 592 594 qc24_fail_command: 593 - spin_lock_irq(vha->host->host_lock); 594 - done(cmd); 595 + cmd->scsi_done(cmd); 595 596 596 597 return 0; 597 598 } 598 - 599 - static DEF_SCSI_QCMD(qla2xxx_queuecommand) 600 - 601 599 602 600 /* 603 601 * qla2x00_eh_wait_on_command ··· 811 821 { 812 822 scsi_qla_host_t *vha = shost_priv(cmd->device->host); 813 823 srb_t *sp; 814 - int ret = SUCCESS; 824 + int ret; 815 825 unsigned int id, lun; 816 826 unsigned long flags; 817 827 int wait = 0; 818 828 struct qla_hw_data *ha = vha->hw; 819 829 820 - fc_block_scsi_eh(cmd); 821 - 822 830 if (!CMD_SP(cmd)) 823 831 return SUCCESS; 832 + 833 + ret = fc_block_scsi_eh(cmd); 834 + if (ret != 0) 835 + return ret; 836 + ret = SUCCESS; 824 837 825 838 id = cmd->device->id; 826 839 lun = cmd->device->lun; ··· 933 940 fc_port_t *fcport = (struct fc_port *) cmd->device->hostdata; 934 941 int err; 935 942 936 - fc_block_scsi_eh(cmd); 937 - 938 943 if (!fcport) 939 944 return FAILED; 945 + 946 + err = fc_block_scsi_eh(cmd); 947 + if (err != 0) 948 + return err; 940 949 941 950 qla_printk(KERN_INFO, vha->hw, "scsi(%ld:%d:%d): %s RESET ISSUED.\n", 942 951 vha->host_no, cmd->device->id, cmd->device->lun, name); ··· 1013 1018 int ret = FAILED; 1014 1019 unsigned int id, lun; 1015 1020 1016 - fc_block_scsi_eh(cmd); 1017 - 1018 1021 id = cmd->device->id; 1019 1022 lun = cmd->device->lun; 1020 1023 1021 1024 if (!fcport) 1022 1025 return ret; 1026 + 1027 + ret = fc_block_scsi_eh(cmd); 1028 + if (ret != 0) 1029 + return ret; 1030 + ret = FAILED; 1023 1031 1024 1032 qla_printk(KERN_INFO, vha->hw, 1025 1033 "scsi(%ld:%d:%d): BUS RESET ISSUED.\n", vha->host_no, id, lun); ··· 1076 1078 unsigned int id, lun; 1077 1079 scsi_qla_host_t *base_vha = pci_get_drvdata(ha->pdev); 
1078 1080 1079 - fc_block_scsi_eh(cmd); 1080 - 1081 1081 id = cmd->device->id; 1082 1082 lun = cmd->device->lun; 1083 1083 1084 1084 if (!fcport) 1085 1085 return ret; 1086 + 1087 + ret = fc_block_scsi_eh(cmd); 1088 + if (ret != 0) 1089 + return ret; 1090 + ret = FAILED; 1086 1091 1087 1092 qla_printk(KERN_INFO, ha, 1088 1093 "scsi(%ld:%d:%d): ADAPTER RESET ISSUED.\n", vha->host_no, id, lun); ··· 3806 3805 ha->flags.eeh_busy = 1; 3807 3806 /* For ISP82XX complete any pending mailbox cmd */ 3808 3807 if (IS_QLA82XX(ha)) { 3809 - ha->flags.fw_hung = 1; 3808 + ha->flags.isp82xx_fw_hung = 1; 3810 3809 if (ha->flags.mbox_busy) { 3811 3810 ha->flags.mbox_int = 1; 3812 3811 DEBUG2(qla_printk(KERN_ERR, ha, ··· 3946 3945 qla82xx_wr_32(ha, QLA82XX_CRB_DEV_STATE, 3947 3946 QLA82XX_DEV_READY); 3948 3947 qla82xx_idc_unlock(ha); 3949 - ha->flags.fw_hung = 0; 3948 + ha->flags.isp82xx_fw_hung = 0; 3950 3949 rval = qla82xx_restart_isp(base_vha); 3951 3950 qla82xx_idc_lock(ha); 3952 3951 /* Clear driver state register */ ··· 3959 3958 "This devfn is not reset owner = 0x%x\n", ha->pdev->devfn)); 3960 3959 if ((qla82xx_rd_32(ha, QLA82XX_CRB_DEV_STATE) == 3961 3960 QLA82XX_DEV_READY)) { 3962 - ha->flags.fw_hung = 0; 3961 + ha->flags.isp82xx_fw_hung = 0; 3963 3962 rval = qla82xx_restart_isp(base_vha); 3964 3963 qla82xx_idc_lock(ha); 3965 3964 qla82xx_set_drv_active(base_vha);
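
The qla_os.c changes above adopt the host-lock push-down: qla2xxx_queuecommand() is called without host_lock, takes the (Scsi_Host, scsi_cmnd) signature, and completes failed commands through cmd->scsi_done instead of a done() argument, so the DEF_SCSI_QCMD wrapper and the unlock/relock dance go away. A kernel-style sketch of the convention; the demo_* types and helpers are hypothetical:

    #include <scsi/scsi.h>
    #include <scsi/scsi_cmnd.h>
    #include <scsi/scsi_host.h>

    struct demo_hba { int target_online; };

    static int demo_start_io(struct demo_hba *hba, struct scsi_cmnd *cmd)
    {
            return 0;                      /* pretend the IOCB was started */
    }

    static int demo_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
    {
            struct demo_hba *hba = shost_priv(host);

            if (!hba->target_online) {
                    cmd->result = DID_NO_CONNECT << 16;
                    cmd->scsi_done(cmd);   /* complete inline, no done() arg */
                    return 0;
            }
            if (demo_start_io(hba, cmd))
                    return SCSI_MLQUEUE_HOST_BUSY;  /* midlayer requeues */
            return 0;
    }
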
+2 -2
drivers/scsi/qla2xxx/qla_version.h
··· 7 7 /* 8 8 * Driver version 9 9 */ 10 - #define QLA2XXX_VERSION "8.03.05-k0" 10 + #define QLA2XXX_VERSION "8.03.07.00" 11 11 12 12 #define QLA_DRIVER_MAJOR_VER 8 13 13 #define QLA_DRIVER_MINOR_VER 3 14 - #define QLA_DRIVER_PATCH_VER 5 14 + #define QLA_DRIVER_PATCH_VER 7 15 15 #define QLA_DRIVER_BETA_VER 0
+115 -87
drivers/scsi/scsi_debug.c
··· 89 89 /* With these defaults, this driver will make 1 host with 1 target 90 90 * (id 0) containing 1 logical unit (lun 0). That is 1 device. 91 91 */ 92 + #define DEF_ATO 1 92 93 #define DEF_DELAY 1 93 94 #define DEF_DEV_SIZE_MB 8 95 + #define DEF_DIF 0 96 + #define DEF_DIX 0 97 + #define DEF_D_SENSE 0 94 98 #define DEF_EVERY_NTH 0 99 + #define DEF_FAKE_RW 0 100 + #define DEF_GUARD 0 101 + #define DEF_LBPU 0 102 + #define DEF_LBPWS 0 103 + #define DEF_LBPWS10 0 104 + #define DEF_LOWEST_ALIGNED 0 105 + #define DEF_NO_LUN_0 0 95 106 #define DEF_NUM_PARTS 0 96 107 #define DEF_OPTS 0 97 - #define DEF_SCSI_LEVEL 5 /* INQUIRY, byte2 [5->SPC-3] */ 98 - #define DEF_PTYPE 0 99 - #define DEF_D_SENSE 0 100 - #define DEF_NO_LUN_0 0 101 - #define DEF_VIRTUAL_GB 0 102 - #define DEF_FAKE_RW 0 103 - #define DEF_VPD_USE_HOSTNO 1 104 - #define DEF_SECTOR_SIZE 512 105 - #define DEF_DIX 0 106 - #define DEF_DIF 0 107 - #define DEF_GUARD 0 108 - #define DEF_ATO 1 109 - #define DEF_PHYSBLK_EXP 0 110 - #define DEF_LOWEST_ALIGNED 0 111 108 #define DEF_OPT_BLKS 64 109 + #define DEF_PHYSBLK_EXP 0 110 + #define DEF_PTYPE 0 111 + #define DEF_SCSI_LEVEL 5 /* INQUIRY, byte2 [5->SPC-3] */ 112 + #define DEF_SECTOR_SIZE 512 113 + #define DEF_UNMAP_ALIGNMENT 0 114 + #define DEF_UNMAP_GRANULARITY 1 112 115 #define DEF_UNMAP_MAX_BLOCKS 0xFFFFFFFF 113 116 #define DEF_UNMAP_MAX_DESC 256 114 - #define DEF_UNMAP_GRANULARITY 1 115 - #define DEF_UNMAP_ALIGNMENT 0 116 - #define DEF_TPWS 0 117 - #define DEF_TPU 0 117 + #define DEF_VIRTUAL_GB 0 118 + #define DEF_VPD_USE_HOSTNO 1 119 + #define DEF_WRITESAME_LENGTH 0xFFFF 118 120 119 121 /* bit mask values for scsi_debug_opts */ 120 122 #define SCSI_DEBUG_OPT_NOISE 1 ··· 146 144 /* when 1==SCSI_DEBUG_OPT_MEDIUM_ERR, a medium error is simulated at this 147 145 * sector on read commands: */ 148 146 #define OPT_MEDIUM_ERR_ADDR 0x1234 /* that's sector 4660 in decimal */ 147 + #define OPT_MEDIUM_ERR_NUM 10 /* number of consecutive medium errs */ 149 148 150 149 /* If REPORT LUNS has luns >= 256 it can choose "flat space" (value 1) 151 150 * or "peripheral device" addressing (value 0) */ ··· 158 155 #define SCSI_DEBUG_CANQUEUE 255 159 156 160 157 static int scsi_debug_add_host = DEF_NUM_HOST; 158 + static int scsi_debug_ato = DEF_ATO; 161 159 static int scsi_debug_delay = DEF_DELAY; 162 160 static int scsi_debug_dev_size_mb = DEF_DEV_SIZE_MB; 161 + static int scsi_debug_dif = DEF_DIF; 162 + static int scsi_debug_dix = DEF_DIX; 163 + static int scsi_debug_dsense = DEF_D_SENSE; 163 164 static int scsi_debug_every_nth = DEF_EVERY_NTH; 165 + static int scsi_debug_fake_rw = DEF_FAKE_RW; 166 + static int scsi_debug_guard = DEF_GUARD; 167 + static int scsi_debug_lowest_aligned = DEF_LOWEST_ALIGNED; 164 168 static int scsi_debug_max_luns = DEF_MAX_LUNS; 165 169 static int scsi_debug_max_queue = SCSI_DEBUG_CANQUEUE; 166 - static int scsi_debug_num_parts = DEF_NUM_PARTS; 167 - static int scsi_debug_no_uld = 0; 168 - static int scsi_debug_num_tgts = DEF_NUM_TGTS; /* targets per host */ 169 - static int scsi_debug_opts = DEF_OPTS; 170 - static int scsi_debug_scsi_level = DEF_SCSI_LEVEL; 171 - static int scsi_debug_ptype = DEF_PTYPE; /* SCSI peripheral type (0==disk) */ 172 - static int scsi_debug_dsense = DEF_D_SENSE; 173 170 static int scsi_debug_no_lun_0 = DEF_NO_LUN_0; 174 - static int scsi_debug_virtual_gb = DEF_VIRTUAL_GB; 175 - static int scsi_debug_fake_rw = DEF_FAKE_RW; 176 - static int scsi_debug_vpd_use_hostno = DEF_VPD_USE_HOSTNO; 177 - static int scsi_debug_sector_size = DEF_SECTOR_SIZE; 
178 - static int scsi_debug_dix = DEF_DIX; 179 - static int scsi_debug_dif = DEF_DIF; 180 - static int scsi_debug_guard = DEF_GUARD; 181 - static int scsi_debug_ato = DEF_ATO; 182 - static int scsi_debug_physblk_exp = DEF_PHYSBLK_EXP; 183 - static int scsi_debug_lowest_aligned = DEF_LOWEST_ALIGNED; 171 + static int scsi_debug_no_uld = 0; 172 + static int scsi_debug_num_parts = DEF_NUM_PARTS; 173 + static int scsi_debug_num_tgts = DEF_NUM_TGTS; /* targets per host */ 184 174 static int scsi_debug_opt_blks = DEF_OPT_BLKS; 185 - static unsigned int scsi_debug_unmap_max_desc = DEF_UNMAP_MAX_DESC; 186 - static unsigned int scsi_debug_unmap_max_blocks = DEF_UNMAP_MAX_BLOCKS; 187 - static unsigned int scsi_debug_unmap_granularity = DEF_UNMAP_GRANULARITY; 175 + static int scsi_debug_opts = DEF_OPTS; 176 + static int scsi_debug_physblk_exp = DEF_PHYSBLK_EXP; 177 + static int scsi_debug_ptype = DEF_PTYPE; /* SCSI peripheral type (0==disk) */ 178 + static int scsi_debug_scsi_level = DEF_SCSI_LEVEL; 179 + static int scsi_debug_sector_size = DEF_SECTOR_SIZE; 180 + static int scsi_debug_virtual_gb = DEF_VIRTUAL_GB; 181 + static int scsi_debug_vpd_use_hostno = DEF_VPD_USE_HOSTNO; 182 + static unsigned int scsi_debug_lbpu = DEF_LBPU; 183 + static unsigned int scsi_debug_lbpws = DEF_LBPWS; 184 + static unsigned int scsi_debug_lbpws10 = DEF_LBPWS10; 188 185 static unsigned int scsi_debug_unmap_alignment = DEF_UNMAP_ALIGNMENT; 189 - static unsigned int scsi_debug_tpws = DEF_TPWS; 190 - static unsigned int scsi_debug_tpu = DEF_TPU; 186 + static unsigned int scsi_debug_unmap_granularity = DEF_UNMAP_GRANULARITY; 187 + static unsigned int scsi_debug_unmap_max_blocks = DEF_UNMAP_MAX_BLOCKS; 188 + static unsigned int scsi_debug_unmap_max_desc = DEF_UNMAP_MAX_DESC; 189 + static unsigned int scsi_debug_write_same_length = DEF_WRITESAME_LENGTH; 191 190 192 191 static int scsi_debug_cmnd_count = 0; 193 192 ··· 210 205 #define SDEBUG_SENSE_LEN 32 211 206 212 207 #define SCSI_DEBUG_MAX_CMD_LEN 32 208 + 209 + static unsigned int scsi_debug_lbp(void) 210 + { 211 + return scsi_debug_lbpu | scsi_debug_lbpws | scsi_debug_lbpws10; 212 + } 213 213 214 214 struct sdebug_dev_info { 215 215 struct list_head dev_list; ··· 737 727 /* Optimal Transfer Length */ 738 728 put_unaligned_be32(scsi_debug_opt_blks, &arr[8]); 739 729 740 - if (scsi_debug_tpu) { 730 + if (scsi_debug_lbpu) { 741 731 /* Maximum Unmap LBA Count */ 742 732 put_unaligned_be32(scsi_debug_unmap_max_blocks, &arr[16]); 743 733 ··· 754 744 /* Optimal Unmap Granularity */ 755 745 put_unaligned_be32(scsi_debug_unmap_granularity, &arr[24]); 756 746 757 - return 0x3c; /* Mandatory page length for thin provisioning */ 747 + /* Maximum WRITE SAME Length */ 748 + put_unaligned_be64(scsi_debug_write_same_length, &arr[32]); 749 + 750 + return 0x3c; /* Mandatory page length for Logical Block Provisioning */ 758 751 759 752 return sizeof(vpdb0_data); 760 753 } ··· 780 767 memset(arr, 0, 0x8); 781 768 arr[0] = 0; /* threshold exponent */ 782 769 783 - if (scsi_debug_tpu) 770 + if (scsi_debug_lbpu) 784 771 arr[1] = 1 << 7; 785 772 786 - if (scsi_debug_tpws) 773 + if (scsi_debug_lbpws) 787 774 arr[1] |= 1 << 6; 775 + 776 + if (scsi_debug_lbpws10) 777 + arr[1] |= 1 << 5; 788 778 789 779 return 0x8; 790 780 } ··· 847 831 arr[n++] = 0x89; /* ATA information */ 848 832 arr[n++] = 0xb0; /* Block limits (SBC) */ 849 833 arr[n++] = 0xb1; /* Block characteristics (SBC) */ 850 - arr[n++] = 0xb2; /* Thin provisioning (SBC) */ 834 + if (scsi_debug_lbp()) /* Logical Block Prov. 
(SBC) */ 835 + arr[n++] = 0xb2; 851 836 arr[3] = n - 4; /* number of supported VPD pages */ 852 837 } else if (0x80 == cmd[2]) { /* unit serial number */ 853 838 arr[1] = cmd[2]; /*sanity */ ··· 896 879 } else if (0xb1 == cmd[2]) { /* Block characteristics (SBC) */ 897 880 arr[1] = cmd[2]; /*sanity */ 898 881 arr[3] = inquiry_evpd_b1(&arr[4]); 899 - } else if (0xb2 == cmd[2]) { /* Thin provisioning (SBC) */ 882 + } else if (0xb2 == cmd[2]) { /* Logical Block Prov. (SBC) */ 900 883 arr[1] = cmd[2]; /*sanity */ 901 884 arr[3] = inquiry_evpd_b2(&arr[4]); 902 885 } else { ··· 1070 1053 arr[13] = scsi_debug_physblk_exp & 0xf; 1071 1054 arr[14] = (scsi_debug_lowest_aligned >> 8) & 0x3f; 1072 1055 1073 - if (scsi_debug_tpu || scsi_debug_tpws) 1074 - arr[14] |= 0x80; /* TPE */ 1056 + if (scsi_debug_lbp()) 1057 + arr[14] |= 0x80; /* LBPME */ 1075 1058 1076 1059 arr[15] = scsi_debug_lowest_aligned & 0xff; 1077 1060 ··· 1808 1791 return ret; 1809 1792 1810 1793 if ((SCSI_DEBUG_OPT_MEDIUM_ERR & scsi_debug_opts) && 1811 - (lba <= OPT_MEDIUM_ERR_ADDR) && 1794 + (lba <= (OPT_MEDIUM_ERR_ADDR + OPT_MEDIUM_ERR_NUM - 1)) && 1812 1795 ((lba + num) > OPT_MEDIUM_ERR_ADDR)) { 1813 1796 /* claim unrecoverable read error */ 1814 - mk_sense_buffer(devip, MEDIUM_ERROR, UNRECOVERED_READ_ERR, 1815 - 0); 1797 + mk_sense_buffer(devip, MEDIUM_ERROR, UNRECOVERED_READ_ERR, 0); 1816 1798 /* set info field and valid bit for fixed descriptor */ 1817 1799 if (0x70 == (devip->sense_buff[0] & 0x7f)) { 1818 1800 devip->sense_buff[0] |= 0x80; /* Valid bit */ 1819 - ret = OPT_MEDIUM_ERR_ADDR; 1801 + ret = (lba < OPT_MEDIUM_ERR_ADDR) 1802 + ? OPT_MEDIUM_ERR_ADDR : (int)lba; 1820 1803 devip->sense_buff[3] = (ret >> 24) & 0xff; 1821 1804 devip->sense_buff[4] = (ret >> 16) & 0xff; 1822 1805 devip->sense_buff[5] = (ret >> 8) & 0xff; ··· 2100 2083 ret = check_device_access_params(devip, lba, num); 2101 2084 if (ret) 2102 2085 return ret; 2086 + 2087 + if (num > scsi_debug_write_same_length) { 2088 + mk_sense_buffer(devip, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB, 2089 + 0); 2090 + return check_condition_result; 2091 + } 2103 2092 2104 2093 write_lock_irqsave(&atomic_rw, iflags); 2105 2094 ··· 2718 2695 /sys/bus/pseudo/drivers/scsi_debug directory is changed. 
2719 2696 */ 2720 2697 module_param_named(add_host, scsi_debug_add_host, int, S_IRUGO | S_IWUSR); 2698 + module_param_named(ato, scsi_debug_ato, int, S_IRUGO); 2721 2699 module_param_named(delay, scsi_debug_delay, int, S_IRUGO | S_IWUSR); 2722 2700 module_param_named(dev_size_mb, scsi_debug_dev_size_mb, int, S_IRUGO); 2701 + module_param_named(dif, scsi_debug_dif, int, S_IRUGO); 2702 + module_param_named(dix, scsi_debug_dix, int, S_IRUGO); 2723 2703 module_param_named(dsense, scsi_debug_dsense, int, S_IRUGO | S_IWUSR); 2724 2704 module_param_named(every_nth, scsi_debug_every_nth, int, S_IRUGO | S_IWUSR); 2725 2705 module_param_named(fake_rw, scsi_debug_fake_rw, int, S_IRUGO | S_IWUSR); 2706 + module_param_named(guard, scsi_debug_guard, int, S_IRUGO); 2707 + module_param_named(lbpu, scsi_debug_lbpu, int, S_IRUGO); 2708 + module_param_named(lbpws, scsi_debug_lbpws, int, S_IRUGO); 2709 + module_param_named(lbpws10, scsi_debug_lbpws10, int, S_IRUGO); 2710 + module_param_named(lowest_aligned, scsi_debug_lowest_aligned, int, S_IRUGO); 2726 2711 module_param_named(max_luns, scsi_debug_max_luns, int, S_IRUGO | S_IWUSR); 2727 2712 module_param_named(max_queue, scsi_debug_max_queue, int, S_IRUGO | S_IWUSR); 2728 2713 module_param_named(no_lun_0, scsi_debug_no_lun_0, int, S_IRUGO | S_IWUSR); 2729 2714 module_param_named(no_uld, scsi_debug_no_uld, int, S_IRUGO); 2730 2715 module_param_named(num_parts, scsi_debug_num_parts, int, S_IRUGO); 2731 2716 module_param_named(num_tgts, scsi_debug_num_tgts, int, S_IRUGO | S_IWUSR); 2717 + module_param_named(opt_blks, scsi_debug_opt_blks, int, S_IRUGO); 2732 2718 module_param_named(opts, scsi_debug_opts, int, S_IRUGO | S_IWUSR); 2719 + module_param_named(physblk_exp, scsi_debug_physblk_exp, int, S_IRUGO); 2733 2720 module_param_named(ptype, scsi_debug_ptype, int, S_IRUGO | S_IWUSR); 2734 2721 module_param_named(scsi_level, scsi_debug_scsi_level, int, S_IRUGO); 2722 + module_param_named(sector_size, scsi_debug_sector_size, int, S_IRUGO); 2723 + module_param_named(unmap_alignment, scsi_debug_unmap_alignment, int, S_IRUGO); 2724 + module_param_named(unmap_granularity, scsi_debug_unmap_granularity, int, S_IRUGO); 2725 + module_param_named(unmap_max_blocks, scsi_debug_unmap_max_blocks, int, S_IRUGO); 2726 + module_param_named(unmap_max_desc, scsi_debug_unmap_max_desc, int, S_IRUGO); 2735 2727 module_param_named(virtual_gb, scsi_debug_virtual_gb, int, S_IRUGO | S_IWUSR); 2736 2728 module_param_named(vpd_use_hostno, scsi_debug_vpd_use_hostno, int, 2737 2729 S_IRUGO | S_IWUSR); 2738 - module_param_named(sector_size, scsi_debug_sector_size, int, S_IRUGO); 2739 - module_param_named(dix, scsi_debug_dix, int, S_IRUGO); 2740 - module_param_named(dif, scsi_debug_dif, int, S_IRUGO); 2741 - module_param_named(guard, scsi_debug_guard, int, S_IRUGO); 2742 - module_param_named(ato, scsi_debug_ato, int, S_IRUGO); 2743 - module_param_named(physblk_exp, scsi_debug_physblk_exp, int, S_IRUGO); 2744 - module_param_named(opt_blks, scsi_debug_opt_blks, int, S_IRUGO); 2745 - module_param_named(lowest_aligned, scsi_debug_lowest_aligned, int, S_IRUGO); 2746 - module_param_named(unmap_max_blocks, scsi_debug_unmap_max_blocks, int, S_IRUGO); 2747 - module_param_named(unmap_max_desc, scsi_debug_unmap_max_desc, int, S_IRUGO); 2748 - module_param_named(unmap_granularity, scsi_debug_unmap_granularity, int, S_IRUGO); 2749 - module_param_named(unmap_alignment, scsi_debug_unmap_alignment, int, S_IRUGO); 2750 - module_param_named(tpu, scsi_debug_tpu, int, S_IRUGO); 2751 - module_param_named(tpws, 
scsi_debug_tpws, int, S_IRUGO); 2730 + module_param_named(write_same_length, scsi_debug_write_same_length, int, 2731 + S_IRUGO | S_IWUSR); 2752 2732 2753 2733 MODULE_AUTHOR("Eric Youngdale + Douglas Gilbert"); 2754 2734 MODULE_DESCRIPTION("SCSI debug adapter driver"); ··· 2759 2733 MODULE_VERSION(SCSI_DEBUG_VERSION); 2760 2734 2761 2735 MODULE_PARM_DESC(add_host, "0..127 hosts allowed(def=1)"); 2736 + MODULE_PARM_DESC(ato, "application tag ownership: 0=disk 1=host (def=1)"); 2762 2737 MODULE_PARM_DESC(delay, "# of jiffies to delay response(def=1)"); 2763 2738 MODULE_PARM_DESC(dev_size_mb, "size in MB of ram shared by devs(def=8)"); 2739 + MODULE_PARM_DESC(dif, "data integrity field type: 0-3 (def=0)"); 2740 + MODULE_PARM_DESC(dix, "data integrity extensions mask (def=0)"); 2764 2741 MODULE_PARM_DESC(dsense, "use descriptor sense format(def=0 -> fixed)"); 2765 2742 MODULE_PARM_DESC(every_nth, "timeout every nth command(def=0)"); 2766 2743 MODULE_PARM_DESC(fake_rw, "fake reads/writes instead of copying (def=0)"); 2744 + MODULE_PARM_DESC(guard, "protection checksum: 0=crc, 1=ip (def=0)"); 2745 + MODULE_PARM_DESC(lbpu, "enable LBP, support UNMAP command (def=0)"); 2746 + MODULE_PARM_DESC(lbpws, "enable LBP, support WRITE SAME(16) with UNMAP bit (def=0)"); 2747 + MODULE_PARM_DESC(lbpws10, "enable LBP, support WRITE SAME(10) with UNMAP bit (def=0)"); 2748 + MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)"); 2767 2749 MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)"); 2768 2750 MODULE_PARM_DESC(max_queue, "max number of queued commands (1 to 255(def))"); 2769 2751 MODULE_PARM_DESC(no_lun_0, "no LU number 0 (def=0 -> have lun 0)"); 2770 2752 MODULE_PARM_DESC(no_uld, "stop ULD (e.g. sd driver) attaching (def=0))"); 2771 2753 MODULE_PARM_DESC(num_parts, "number of partitions(def=0)"); 2772 2754 MODULE_PARM_DESC(num_tgts, "number of targets per host to simulate(def=1)"); 2755 + MODULE_PARM_DESC(opt_blks, "optimal transfer length in block (def=64)"); 2773 2756 MODULE_PARM_DESC(opts, "1->noise, 2->medium_err, 4->timeout, 8->recovered_err... 
(def=0)"); 2757 + MODULE_PARM_DESC(physblk_exp, "physical block exponent (def=0)"); 2774 2758 MODULE_PARM_DESC(ptype, "SCSI peripheral type(def=0[disk])"); 2775 2759 MODULE_PARM_DESC(scsi_level, "SCSI level to simulate(def=5[SPC-3])"); 2776 - MODULE_PARM_DESC(virtual_gb, "virtual gigabyte size (def=0 -> use dev_size_mb)"); 2777 - MODULE_PARM_DESC(vpd_use_hostno, "0 -> dev ids ignore hostno (def=1 -> unique dev ids)"); 2778 2760 MODULE_PARM_DESC(sector_size, "logical block size in bytes (def=512)"); 2779 - MODULE_PARM_DESC(physblk_exp, "physical block exponent (def=0)"); 2780 - MODULE_PARM_DESC(opt_blks, "optimal transfer length in block (def=64)"); 2781 - MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)"); 2782 - MODULE_PARM_DESC(dix, "data integrity extensions mask (def=0)"); 2783 - MODULE_PARM_DESC(dif, "data integrity field type: 0-3 (def=0)"); 2784 - MODULE_PARM_DESC(guard, "protection checksum: 0=crc, 1=ip (def=0)"); 2785 - MODULE_PARM_DESC(ato, "application tag ownership: 0=disk 1=host (def=1)"); 2761 + MODULE_PARM_DESC(unmap_alignment, "lowest aligned thin provisioning lba (def=0)"); 2762 + MODULE_PARM_DESC(unmap_granularity, "thin provisioning granularity in blocks (def=1)"); 2786 2763 MODULE_PARM_DESC(unmap_max_blocks, "max # of blocks can be unmapped in one cmd (def=0xffffffff)"); 2787 2764 MODULE_PARM_DESC(unmap_max_desc, "max # of ranges that can be unmapped in one cmd (def=256)"); 2788 - MODULE_PARM_DESC(unmap_granularity, "thin provisioning granularity in blocks (def=1)"); 2789 - MODULE_PARM_DESC(unmap_alignment, "lowest aligned thin provisioning lba (def=0)"); 2790 - MODULE_PARM_DESC(tpu, "enable TP, support UNMAP command (def=0)"); 2791 - MODULE_PARM_DESC(tpws, "enable TP, support WRITE SAME(16) with UNMAP bit (def=0)"); 2765 + MODULE_PARM_DESC(virtual_gb, "virtual gigabyte size (def=0 -> use dev_size_mb)"); 2766 + MODULE_PARM_DESC(vpd_use_hostno, "0 -> dev ids ignore hostno (def=1 -> unique dev ids)"); 2767 + MODULE_PARM_DESC(write_same_length, "Maximum blocks per WRITE SAME cmd (def=0xffff)"); 2792 2768 2793 2769 static char sdebug_info[256]; 2794 2770 ··· 3178 3150 { 3179 3151 ssize_t count; 3180 3152 3181 - if (scsi_debug_tpu == 0 && scsi_debug_tpws == 0) 3153 + if (!scsi_debug_lbp()) 3182 3154 return scnprintf(buf, PAGE_SIZE, "0-%u\n", 3183 3155 sdebug_store_sectors); 3184 3156 ··· 3361 3333 memset(dif_storep, 0xff, dif_size); 3362 3334 } 3363 3335 3364 - /* Thin Provisioning */ 3365 - if (scsi_debug_tpu || scsi_debug_tpws) { 3336 + /* Logical Block Provisioning */ 3337 + if (scsi_debug_lbp()) { 3366 3338 unsigned int map_bytes; 3367 3339 3368 3340 scsi_debug_unmap_max_blocks = ··· 3692 3664 errsts = resp_readcap16(SCpnt, devip); 3693 3665 else if (cmd[1] == SAI_GET_LBA_STATUS) { 3694 3666 3695 - if (scsi_debug_tpu == 0 && scsi_debug_tpws == 0) { 3667 + if (scsi_debug_lbp() == 0) { 3696 3668 mk_sense_buffer(devip, ILLEGAL_REQUEST, 3697 3669 INVALID_COMMAND_OPCODE, 0); 3698 3670 errsts = check_condition_result; ··· 3803 3775 } 3804 3776 break; 3805 3777 case WRITE_SAME_16: 3778 + case WRITE_SAME: 3806 3779 if (cmd[1] & 0x8) { 3807 - if (scsi_debug_tpws == 0) { 3780 + if ((*cmd == WRITE_SAME_16 && scsi_debug_lbpws == 0) || 3781 + (*cmd == WRITE_SAME && scsi_debug_lbpws10 == 0)) { 3808 3782 mk_sense_buffer(devip, ILLEGAL_REQUEST, 3809 3783 INVALID_FIELD_IN_CDB, 0); 3810 3784 errsts = check_condition_result; ··· 3815 3785 } 3816 3786 if (errsts) 3817 3787 break; 3818 - /* fall through */ 3819 - case WRITE_SAME: 3820 3788 errsts = check_readiness(SCpnt, 0, 
devip); 3821 3789 if (errsts) 3822 3790 break; ··· 3826 3798 if (errsts) 3827 3799 break; 3828 3800 3829 - if (scsi_debug_unmap_max_desc == 0 || scsi_debug_tpu == 0) { 3801 + if (scsi_debug_unmap_max_desc == 0 || scsi_debug_lbpu == 0) { 3830 3802 mk_sense_buffer(devip, ILLEGAL_REQUEST, 3831 3803 INVALID_COMMAND_OPCODE, 0); 3832 3804 errsts = check_condition_result;
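Note on the scsi_debug changes above: the tpu/tpws knobs become lbpu/lbpws/lbpws10 to match SBC-3 Logical Block Provisioning terminology, and the simulated medium error now covers a range of consecutive LBAs instead of a single block. A compile-ready userspace sketch of the new range test and of the LBA reported in the fixed-format sense information field; the constant values below are illustrative stand-ins, not necessarily the driver's:

    #include <stdio.h>

    /* Illustrative stand-ins for scsi_debug's OPT_MEDIUM_ERR_* values. */
    #define OPT_MEDIUM_ERR_ADDR 0x1234ULL
    #define OPT_MEDIUM_ERR_NUM  10ULL

    /* Return the first bad LBA a read of [lba, lba+num) touches, or -1 if
     * the request misses the simulated bad range entirely.  A request that
     * starts inside the range reports its own starting LBA; one that
     * starts before it reports the start of the bad range, matching the
     * sense information field set in the hunk above. */
    static long long first_bad_lba(unsigned long long lba,
                                   unsigned long long num)
    {
        if (lba <= (OPT_MEDIUM_ERR_ADDR + OPT_MEDIUM_ERR_NUM - 1) &&
            (lba + num) > OPT_MEDIUM_ERR_ADDR)
            return lba < OPT_MEDIUM_ERR_ADDR ? (long long)OPT_MEDIUM_ERR_ADDR
                                             : (long long)lba;
        return -1;
    }

    int main(void)
    {
        printf("%lld\n", first_bad_lba(OPT_MEDIUM_ERR_ADDR - 4, 8)); /* range start */
        printf("%lld\n", first_bad_lba(OPT_MEDIUM_ERR_ADDR + 3, 4)); /* own LBA */
        printf("%lld\n", first_bad_lba(0, 8));                       /* miss: -1 */
        return 0;
    }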
+85
drivers/scsi/scsi_devinfo.c
··· 382 382 EXPORT_SYMBOL(scsi_dev_info_list_add_keyed); 383 383 384 384 /** 385 + * scsi_dev_info_list_del_keyed - remove one dev_info list entry. 386 + * @vendor: vendor string 387 + * @model: model (product) string 388 + * @key: specify list to use 389 + * 390 + * Description: 391 + * Remove and destroy one dev_info entry for @vendor, @model 392 + * in list specified by @key. 393 + * 394 + * Returns: 0 OK, -error on failure. 395 + **/ 396 + int scsi_dev_info_list_del_keyed(char *vendor, char *model, int key) 397 + { 398 + struct scsi_dev_info_list *devinfo, *found = NULL; 399 + struct scsi_dev_info_list_table *devinfo_table = 400 + scsi_devinfo_lookup_by_key(key); 401 + 402 + if (IS_ERR(devinfo_table)) 403 + return PTR_ERR(devinfo_table); 404 + 405 + list_for_each_entry(devinfo, &devinfo_table->scsi_dev_info_list, 406 + dev_info_list) { 407 + if (devinfo->compatible) { 408 + /* 409 + * Behave like the older version of get_device_flags. 410 + */ 411 + size_t max; 412 + /* 413 + * XXX why skip leading spaces? If an odd INQUIRY 414 + * value, that should have been part of the 415 + * scsi_static_device_list[] entry, such as " FOO" 416 + * rather than "FOO". Since this code is already 417 + * here, and we don't know what device it is 418 + * trying to work with, leave it as-is. 419 + */ 420 + max = 8; /* max length of vendor */ 421 + while ((max > 0) && *vendor == ' ') { 422 + max--; 423 + vendor++; 424 + } 425 + /* 426 + * XXX removing the following strlen() would be 427 + * good, using it means that for a an entry not in 428 + * the list, we scan every byte of every vendor 429 + * listed in scsi_static_device_list[], and never match 430 + * a single one (and still have to compare at 431 + * least the first byte of each vendor). 432 + */ 433 + if (memcmp(devinfo->vendor, vendor, 434 + min(max, strlen(devinfo->vendor)))) 435 + continue; 436 + /* 437 + * Skip spaces again. 438 + */ 439 + max = 16; /* max length of model */ 440 + while ((max > 0) && *model == ' ') { 441 + max--; 442 + model++; 443 + } 444 + if (memcmp(devinfo->model, model, 445 + min(max, strlen(devinfo->model)))) 446 + continue; 447 + found = devinfo; 448 + } else { 449 + if (!memcmp(devinfo->vendor, vendor, 450 + sizeof(devinfo->vendor)) && 451 + !memcmp(devinfo->model, model, 452 + sizeof(devinfo->model))) 453 + found = devinfo; 454 + } 455 + if (found) 456 + break; 457 + } 458 + 459 + if (found) { 460 + list_del(&found->dev_info_list); 461 + kfree(found); 462 + return 0; 463 + } 464 + 465 + return -ENOENT; 466 + } 467 + EXPORT_SYMBOL(scsi_dev_info_list_del_keyed); 468 + 469 + /** 385 470 * scsi_dev_info_list_add_str - parse dev_list and add to the scsi_dev_info_list. 386 471 * @dev_list: string of device flags to add 387 472 *
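scsi_dev_info_list_del_keyed() above reuses the "compatible" matching rules of the older get_device_flags(). A minimal userspace sketch of that vendor comparison, assuming the 8-byte INQUIRY vendor field; compatible_match() is an illustrative name, not a kernel symbol:

    #include <stdio.h>
    #include <string.h>

    /* Compare a probed INQUIRY vendor string against a list entry the way
     * the "compatible" path does: skip leading spaces in the probed value,
     * then compare at most min(8, strlen(entry)) bytes, i.e. effectively a
     * prefix match on the list entry. */
    static int compatible_match(const char *entry, const char *probed)
    {
        size_t max = 8;                 /* INQUIRY vendor field width */
        size_t len;

        while (max > 0 && *probed == ' ') {
            max--;
            probed++;
        }
        len = strlen(entry);
        if (len > max)
            len = max;
        return memcmp(entry, probed, len) == 0;
    }

    int main(void)
    {
        printf("%d\n", compatible_match("FOO", " FOO    ")); /* 1 */
        printf("%d\n", compatible_match("FOO", "FOOBAR  ")); /* 1: prefix */
        printf("%d\n", compatible_match("FOO", "BAR     ")); /* 0 */
        return 0;
    }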
+17 -7
drivers/scsi/scsi_error.c
··· 223 223 * @scmd: Cmd to have sense checked. 224 224 * 225 225 * Return value: 226 - * SUCCESS or FAILED or NEEDS_RETRY 226 + * SUCCESS or FAILED or NEEDS_RETRY or TARGET_ERROR 227 227 * 228 228 * Notes: 229 229 * When a deferred error is detected the current command has ··· 326 326 */ 327 327 return SUCCESS; 328 328 329 - /* these three are not supported */ 329 + /* these are not supported */ 330 330 case COPY_ABORTED: 331 331 case VOLUME_OVERFLOW: 332 332 case MISCOMPARE: 333 - return SUCCESS; 333 + case BLANK_CHECK: 334 + case DATA_PROTECT: 335 + return TARGET_ERROR; 334 336 335 337 case MEDIUM_ERROR: 336 338 if (sshdr.asc == 0x11 || /* UNRECOVERED READ ERR */ 337 339 sshdr.asc == 0x13 || /* AMNF DATA FIELD */ 338 340 sshdr.asc == 0x14) { /* RECORD NOT FOUND */ 339 - return SUCCESS; 341 + return TARGET_ERROR; 340 342 } 341 343 return NEEDS_RETRY; 342 344 ··· 346 344 if (scmd->device->retry_hwerror) 347 345 return ADD_TO_MLQUEUE; 348 346 else 349 - return SUCCESS; 347 + return TARGET_ERROR; 350 348 351 349 case ILLEGAL_REQUEST: 352 - case BLANK_CHECK: 353 - case DATA_PROTECT: 354 350 default: 355 351 return SUCCESS; 356 352 } ··· 787 787 case SUCCESS: 788 788 case NEEDS_RETRY: 789 789 case FAILED: 790 + case TARGET_ERROR: 790 791 break; 791 792 case ADD_TO_MLQUEUE: 792 793 rtn = NEEDS_RETRY; ··· 1470 1469 rtn = scsi_check_sense(scmd); 1471 1470 if (rtn == NEEDS_RETRY) 1472 1471 goto maybe_retry; 1472 + else if (rtn == TARGET_ERROR) { 1473 + /* 1474 + * Need to modify host byte to signal a 1475 + * permanent target failure 1476 + */ 1477 + scmd->result |= (DID_TARGET_FAILURE << 16); 1478 + rtn = SUCCESS; 1479 + } 1473 1480 /* if rtn == FAILED, we have no sense information; 1474 1481 * returning FAILED will wake the error handler thread 1475 1482 * to collect the sense and redo the decide ··· 1495 1486 case RESERVATION_CONFLICT: 1496 1487 sdev_printk(KERN_INFO, scmd->device, 1497 1488 "reservation conflict\n"); 1489 + scmd->result |= (DID_NEXUS_FAILURE << 16); 1498 1490 return SUCCESS; /* causes immediate i/o error */ 1499 1491 default: 1500 1492 return FAILED;
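The new TARGET_ERROR disposition distinguishes "this target cannot complete the command on any path" from transient failures, so dm-multipath stops retrying such commands on other paths. A hedged userspace restatement of the sense-key mapping, using the standard SPC sense key values (HARDWARE ERROR, whose handling depends on the retry_hwerror setting, is omitted for brevity):

    #include <stdio.h>

    enum disp { DISP_SUCCESS, DISP_NEEDS_RETRY, DISP_TARGET_ERROR };

    /* Restate the scsi_check_sense() change: sense keys that indicate a
     * permanent target-side condition now yield TARGET_ERROR, which the
     * completion path turns into DID_TARGET_FAILURE instead of a retry. */
    static enum disp classify_sense(unsigned char key, unsigned char asc)
    {
        switch (key) {
        case 0x07: /* DATA PROTECT */
        case 0x08: /* BLANK CHECK */
        case 0x0a: /* COPY ABORTED */
        case 0x0d: /* VOLUME OVERFLOW */
        case 0x0e: /* MISCOMPARE */
            return DISP_TARGET_ERROR;
        case 0x03: /* MEDIUM ERROR */
            if (asc == 0x11 || asc == 0x13 || asc == 0x14)
                return DISP_TARGET_ERROR; /* unrecovered read err etc. */
            return DISP_NEEDS_RETRY;
        default:   /* e.g. ILLEGAL REQUEST still completes as SUCCESS */
            return DISP_SUCCESS;
        }
    }

    int main(void)
    {
        printf("%d\n", classify_sense(0x03, 0x11)); /* 2: TARGET_ERROR */
        printf("%d\n", classify_sense(0x03, 0x00)); /* 1: NEEDS_RETRY */
        return 0;
    }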
+34 -2
drivers/scsi/scsi_lib.c
··· 667 667 } 668 668 EXPORT_SYMBOL(scsi_release_buffers); 669 669 670 + static int __scsi_error_from_host_byte(struct scsi_cmnd *cmd, int result) 671 + { 672 + int error = 0; 673 + 674 + switch(host_byte(result)) { 675 + case DID_TRANSPORT_FAILFAST: 676 + error = -ENOLINK; 677 + break; 678 + case DID_TARGET_FAILURE: 679 + cmd->result |= (DID_OK << 16); 680 + error = -EREMOTEIO; 681 + break; 682 + case DID_NEXUS_FAILURE: 683 + cmd->result |= (DID_OK << 16); 684 + error = -EBADE; 685 + break; 686 + default: 687 + error = -EIO; 688 + break; 689 + } 690 + 691 + return error; 692 + } 693 + 670 694 /* 671 695 * Function: scsi_io_completion() 672 696 * ··· 761 737 req->sense_len = len; 762 738 } 763 739 if (!sense_deferred) 764 - error = -EIO; 740 + error = __scsi_error_from_host_byte(cmd, result); 765 741 } 766 742 767 743 req->resid_len = scsi_get_resid(cmd); ··· 820 796 if (scsi_end_request(cmd, error, good_bytes, result == 0) == NULL) 821 797 return; 822 798 823 - error = -EIO; 799 + error = __scsi_error_from_host_byte(cmd, result); 824 800 825 801 if (host_byte(result) == DID_RESET) { 826 802 /* Third party bus reset or reset for error recovery ··· 867 843 description = "Host Data Integrity Failure"; 868 844 action = ACTION_FAIL; 869 845 error = -EILSEQ; 846 + /* INVALID COMMAND OPCODE or INVALID FIELD IN CDB */ 847 + } else if ((sshdr.asc == 0x20 || sshdr.asc == 0x24) && 848 + (cmd->cmnd[0] == UNMAP || 849 + cmd->cmnd[0] == WRITE_SAME_16 || 850 + cmd->cmnd[0] == WRITE_SAME)) { 851 + description = "Discard failure"; 852 + action = ACTION_FAIL; 870 853 } else 871 854 action = ACTION_FAIL; 872 855 break; ··· 1069 1038 cmd->request = req; 1070 1039 1071 1040 cmd->cmnd = req->cmd; 1041 + cmd->prot_op = SCSI_PROT_NORMAL; 1072 1042 1073 1043 return cmd; 1074 1044 }
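__scsi_error_from_host_byte() is small enough to restate standalone. A compile-ready sketch of the host-byte to errno mapping, with the DID_* values copied from the include/scsi/scsi.h hunk later in this merge; the kernel version additionally rewrites cmd->result to DID_OK for the two new codes:

    #include <errno.h>
    #include <stdio.h>

    /* DID_* values as defined in include/scsi/scsi.h (see hunk below). */
    #define DID_TRANSPORT_FAILFAST 0x0f
    #define DID_TARGET_FAILURE     0x10
    #define DID_NEXUS_FAILURE      0x11

    /* Map a host byte to the errno handed to the block layer; the caller
     * extracts the host byte from cmd->result ((result >> 16) & 0xff). */
    static int error_from_host_byte(int host_byte)
    {
        switch (host_byte) {
        case DID_TRANSPORT_FAILFAST:
            return -ENOLINK;   /* transport class fast-failed the I/O */
        case DID_TARGET_FAILURE:
            return -EREMOTEIO; /* permanent target failure: no path retry */
        case DID_NEXUS_FAILURE:
            return -EBADE;     /* e.g. reservation conflict */
        default:
            return -EIO;
        }
    }

    int main(void)
    {
        printf("%d %d\n", error_from_host_byte(DID_TARGET_FAILURE),
               error_from_host_byte(0));
        return 0;
    }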
+2
drivers/scsi/scsi_priv.h
··· 45 45 enum { 46 46 SCSI_DEVINFO_GLOBAL = 0, 47 47 SCSI_DEVINFO_SPI, 48 + SCSI_DEVINFO_DH, 48 49 }; 49 50 50 51 extern int scsi_get_device_flags(struct scsi_device *sdev, ··· 57 56 extern int scsi_dev_info_list_add_keyed(int compatible, char *vendor, 58 57 char *model, char *strflags, 59 58 int flags, int key); 59 + extern int scsi_dev_info_list_del_keyed(char *vendor, char *model, int key); 60 60 extern int scsi_dev_info_add_list(int key, const char *name); 61 61 extern int scsi_dev_info_remove_list(int key); 62 62
+87 -17
drivers/scsi/scsi_transport_iscsi.c
··· 954 954 if (dd_size) 955 955 conn->dd_data = &conn[1]; 956 956 957 + mutex_init(&conn->ep_mutex); 957 958 INIT_LIST_HEAD(&conn->conn_list); 958 959 conn->transport = transport; 959 960 conn->cid = cid; ··· 976 975 977 976 spin_lock_irqsave(&connlock, flags); 978 977 list_add(&conn->conn_list, &connlist); 979 - conn->active = 1; 980 978 spin_unlock_irqrestore(&connlock, flags); 981 979 982 980 ISCSI_DBG_TRANS_CONN(conn, "Completed conn creation\n"); ··· 1001 1001 unsigned long flags; 1002 1002 1003 1003 spin_lock_irqsave(&connlock, flags); 1004 - conn->active = 0; 1005 1004 list_del(&conn->conn_list); 1006 1005 spin_unlock_irqrestore(&connlock, flags); 1007 1006 ··· 1429 1430 return err; 1430 1431 } 1431 1432 1433 + static int iscsi_if_ep_disconnect(struct iscsi_transport *transport, 1434 + u64 ep_handle) 1435 + { 1436 + struct iscsi_cls_conn *conn; 1437 + struct iscsi_endpoint *ep; 1438 + 1439 + if (!transport->ep_disconnect) 1440 + return -EINVAL; 1441 + 1442 + ep = iscsi_lookup_endpoint(ep_handle); 1443 + if (!ep) 1444 + return -EINVAL; 1445 + conn = ep->conn; 1446 + if (conn) { 1447 + mutex_lock(&conn->ep_mutex); 1448 + conn->ep = NULL; 1449 + mutex_unlock(&conn->ep_mutex); 1450 + } 1451 + 1452 + transport->ep_disconnect(ep); 1453 + return 0; 1454 + } 1455 + 1432 1456 static int 1433 1457 iscsi_if_transport_ep(struct iscsi_transport *transport, 1434 1458 struct iscsi_uevent *ev, int msg_type) ··· 1476 1454 ev->u.ep_poll.timeout_ms); 1477 1455 break; 1478 1456 case ISCSI_UEVENT_TRANSPORT_EP_DISCONNECT: 1479 - if (!transport->ep_disconnect) 1480 - return -EINVAL; 1481 - 1482 - ep = iscsi_lookup_endpoint(ev->u.ep_disconnect.ep_handle); 1483 - if (!ep) 1484 - return -EINVAL; 1485 - 1486 - transport->ep_disconnect(ep); 1457 + rc = iscsi_if_ep_disconnect(transport, 1458 + ev->u.ep_disconnect.ep_handle); 1487 1459 break; 1488 1460 } 1489 1461 return rc; ··· 1625 1609 session = iscsi_session_lookup(ev->u.b_conn.sid); 1626 1610 conn = iscsi_conn_lookup(ev->u.b_conn.sid, ev->u.b_conn.cid); 1627 1611 1628 - if (session && conn) 1629 - ev->r.retcode = transport->bind_conn(session, conn, 1630 - ev->u.b_conn.transport_eph, 1631 - ev->u.b_conn.is_leading); 1632 - else 1612 + if (conn && conn->ep) 1613 + iscsi_if_ep_disconnect(transport, conn->ep->id); 1614 + 1615 + if (!session || !conn) { 1633 1616 err = -EINVAL; 1617 + break; 1618 + } 1619 + 1620 + ev->r.retcode = transport->bind_conn(session, conn, 1621 + ev->u.b_conn.transport_eph, 1622 + ev->u.b_conn.is_leading); 1623 + if (ev->r.retcode || !transport->ep_connect) 1624 + break; 1625 + 1626 + ep = iscsi_lookup_endpoint(ev->u.b_conn.transport_eph); 1627 + if (ep) { 1628 + ep->conn = conn; 1629 + 1630 + mutex_lock(&conn->ep_mutex); 1631 + conn->ep = ep; 1632 + mutex_unlock(&conn->ep_mutex); 1633 + } else 1634 + iscsi_cls_conn_printk(KERN_ERR, conn, 1635 + "Could not set ep conn " 1636 + "binding\n"); 1634 1637 break; 1635 1638 case ISCSI_UEVENT_SET_PARAM: 1636 1639 err = iscsi_set_param(transport, ev); ··· 1782 1747 iscsi_conn_attr(ifmarker, ISCSI_PARAM_IFMARKER_EN); 1783 1748 iscsi_conn_attr(ofmarker, ISCSI_PARAM_OFMARKER_EN); 1784 1749 iscsi_conn_attr(persistent_port, ISCSI_PARAM_PERSISTENT_PORT); 1785 - iscsi_conn_attr(port, ISCSI_PARAM_CONN_PORT); 1786 1750 iscsi_conn_attr(exp_statsn, ISCSI_PARAM_EXP_STATSN); 1787 1751 iscsi_conn_attr(persistent_address, ISCSI_PARAM_PERSISTENT_ADDRESS); 1788 - iscsi_conn_attr(address, ISCSI_PARAM_CONN_ADDRESS); 1789 1752 iscsi_conn_attr(ping_tmo, ISCSI_PARAM_PING_TMO); 1790 1753 iscsi_conn_attr(recv_tmo, 
ISCSI_PARAM_RECV_TMO); 1754 + 1755 + #define iscsi_conn_ep_attr_show(param) \ 1756 + static ssize_t show_conn_ep_param_##param(struct device *dev, \ 1757 + struct device_attribute *attr,\ 1758 + char *buf) \ 1759 + { \ 1760 + struct iscsi_cls_conn *conn = iscsi_dev_to_conn(dev->parent); \ 1761 + struct iscsi_transport *t = conn->transport; \ 1762 + struct iscsi_endpoint *ep; \ 1763 + ssize_t rc; \ 1764 + \ 1765 + /* \ 1766 + * Need to make sure ep_disconnect does not free the LLD's \ 1767 + * interconnect resources while we are trying to read them. \ 1768 + */ \ 1769 + mutex_lock(&conn->ep_mutex); \ 1770 + ep = conn->ep; \ 1771 + if (!ep && t->ep_connect) { \ 1772 + mutex_unlock(&conn->ep_mutex); \ 1773 + return -ENOTCONN; \ 1774 + } \ 1775 + \ 1776 + if (ep) \ 1777 + rc = t->get_ep_param(ep, param, buf); \ 1778 + else \ 1779 + rc = t->get_conn_param(conn, param, buf); \ 1780 + mutex_unlock(&conn->ep_mutex); \ 1781 + return rc; \ 1782 + } 1783 + 1784 + #define iscsi_conn_ep_attr(field, param) \ 1785 + iscsi_conn_ep_attr_show(param) \ 1786 + static ISCSI_CLASS_ATTR(conn, field, S_IRUGO, \ 1787 + show_conn_ep_param_##param, NULL); 1788 + 1789 + iscsi_conn_ep_attr(address, ISCSI_PARAM_CONN_ADDRESS); 1790 + iscsi_conn_ep_attr(port, ISCSI_PARAM_CONN_PORT); 1791 1791 1792 1792 /* 1793 1793 * iSCSI session attrs
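The iscsi_conn_ep_attr() macro above exists because an endpoint can be torn down by ep_disconnect while sysfs is reading its parameters; the new conn->ep_mutex serializes the two paths. A rough userspace sketch of the same pattern with pthreads, all structure and function names illustrative:

    #include <pthread.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative stand-ins for iscsi_endpoint / iscsi_cls_conn. */
    struct endpoint { const char *address; };
    struct conn {
        pthread_mutex_t ep_mutex;
        struct endpoint *ep;      /* NULL once ep_disconnect ran */
        int has_ep_connect;       /* transport implements ep_connect? */
        const char *conn_address; /* legacy per-connection fallback */
    };

    /* Mirror the show_conn_ep_param_*() shape: take the same mutex the
     * disconnect path takes, so the endpoint cannot be freed while its
     * parameters are read; fall back to the connection parameter when
     * the transport has no endpoint concept. */
    static int show_address(struct conn *c, char *buf, size_t len)
    {
        pthread_mutex_lock(&c->ep_mutex);
        if (!c->ep && c->has_ep_connect) {
            pthread_mutex_unlock(&c->ep_mutex);
            return -1; /* -ENOTCONN in the kernel version */
        }
        snprintf(buf, len, "%s", c->ep ? c->ep->address : c->conn_address);
        pthread_mutex_unlock(&c->ep_mutex);
        return 0;
    }

    int main(void)
    {
        struct endpoint ep = { "192.168.0.2" };
        struct conn c = { PTHREAD_MUTEX_INITIALIZER, &ep, 1, "10.0.0.1" };
        char buf[32];

        if (show_address(&c, buf, sizeof(buf)) == 0)
            printf("%s\n", buf);
        return 0;
    }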
+177 -54
drivers/scsi/sd.c
··· 96 96 #define SD_MINORS 0 97 97 #endif 98 98 99 + static void sd_config_discard(struct scsi_disk *, unsigned int); 99 100 static int sd_revalidate_disk(struct gendisk *); 100 101 static void sd_unlock_native_capacity(struct gendisk *disk); 101 102 static int sd_probe(struct device *); ··· 295 294 { 296 295 struct scsi_disk *sdkp = to_scsi_disk(dev); 297 296 298 - return snprintf(buf, 20, "%u\n", sdkp->thin_provisioning); 297 + return snprintf(buf, 20, "%u\n", sdkp->lbpme); 298 + } 299 + 300 + static const char *lbp_mode[] = { 301 + [SD_LBP_FULL] = "full", 302 + [SD_LBP_UNMAP] = "unmap", 303 + [SD_LBP_WS16] = "writesame_16", 304 + [SD_LBP_WS10] = "writesame_10", 305 + [SD_LBP_ZERO] = "writesame_zero", 306 + [SD_LBP_DISABLE] = "disabled", 307 + }; 308 + 309 + static ssize_t 310 + sd_show_provisioning_mode(struct device *dev, struct device_attribute *attr, 311 + char *buf) 312 + { 313 + struct scsi_disk *sdkp = to_scsi_disk(dev); 314 + 315 + return snprintf(buf, 20, "%s\n", lbp_mode[sdkp->provisioning_mode]); 316 + } 317 + 318 + static ssize_t 319 + sd_store_provisioning_mode(struct device *dev, struct device_attribute *attr, 320 + const char *buf, size_t count) 321 + { 322 + struct scsi_disk *sdkp = to_scsi_disk(dev); 323 + struct scsi_device *sdp = sdkp->device; 324 + 325 + if (!capable(CAP_SYS_ADMIN)) 326 + return -EACCES; 327 + 328 + if (sdp->type != TYPE_DISK) 329 + return -EINVAL; 330 + 331 + if (!strncmp(buf, lbp_mode[SD_LBP_UNMAP], 20)) 332 + sd_config_discard(sdkp, SD_LBP_UNMAP); 333 + else if (!strncmp(buf, lbp_mode[SD_LBP_WS16], 20)) 334 + sd_config_discard(sdkp, SD_LBP_WS16); 335 + else if (!strncmp(buf, lbp_mode[SD_LBP_WS10], 20)) 336 + sd_config_discard(sdkp, SD_LBP_WS10); 337 + else if (!strncmp(buf, lbp_mode[SD_LBP_ZERO], 20)) 338 + sd_config_discard(sdkp, SD_LBP_ZERO); 339 + else if (!strncmp(buf, lbp_mode[SD_LBP_DISABLE], 20)) 340 + sd_config_discard(sdkp, SD_LBP_DISABLE); 341 + else 342 + return -EINVAL; 343 + 344 + return count; 299 345 } 300 346 301 347 static struct device_attribute sd_disk_attrs[] = { ··· 357 309 __ATTR(protection_mode, S_IRUGO, sd_show_protection_mode, NULL), 358 310 __ATTR(app_tag_own, S_IRUGO, sd_show_app_tag_own, NULL), 359 311 __ATTR(thin_provisioning, S_IRUGO, sd_show_thin_provisioning, NULL), 312 + __ATTR(provisioning_mode, S_IRUGO|S_IWUSR, sd_show_provisioning_mode, 313 + sd_store_provisioning_mode), 360 314 __ATTR_NULL, 361 315 }; 362 316 ··· 483 433 scsi_set_prot_type(scmd, dif); 484 434 } 485 435 436 + static void sd_config_discard(struct scsi_disk *sdkp, unsigned int mode) 437 + { 438 + struct request_queue *q = sdkp->disk->queue; 439 + unsigned int logical_block_size = sdkp->device->sector_size; 440 + unsigned int max_blocks = 0; 441 + 442 + q->limits.discard_zeroes_data = sdkp->lbprz; 443 + q->limits.discard_alignment = sdkp->unmap_alignment; 444 + q->limits.discard_granularity = 445 + max(sdkp->physical_block_size, 446 + sdkp->unmap_granularity * logical_block_size); 447 + 448 + switch (mode) { 449 + 450 + case SD_LBP_DISABLE: 451 + q->limits.max_discard_sectors = 0; 452 + queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, q); 453 + return; 454 + 455 + case SD_LBP_UNMAP: 456 + max_blocks = min_not_zero(sdkp->max_unmap_blocks, 0xffffffff); 457 + break; 458 + 459 + case SD_LBP_WS16: 460 + max_blocks = min_not_zero(sdkp->max_ws_blocks, 0xffffffff); 461 + break; 462 + 463 + case SD_LBP_WS10: 464 + max_blocks = min_not_zero(sdkp->max_ws_blocks, (u32)0xffff); 465 + break; 466 + 467 + case SD_LBP_ZERO: 468 + max_blocks = 
min_not_zero(sdkp->max_ws_blocks, (u32)0xffff); 469 + q->limits.discard_zeroes_data = 1; 470 + break; 471 + } 472 + 473 + q->limits.max_discard_sectors = max_blocks * (logical_block_size >> 9); 474 + queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q); 475 + 476 + sdkp->provisioning_mode = mode; 477 + } 478 + 486 479 /** 487 480 * scsi_setup_discard_cmnd - unmap blocks on thinly provisioned device 488 481 * @sdp: scsi device to operate one ··· 542 449 unsigned int nr_sectors = bio_sectors(bio); 543 450 unsigned int len; 544 451 int ret; 452 + char *buf; 545 453 struct page *page; 546 454 547 455 if (sdkp->device->sector_size == 4096) { ··· 558 464 if (!page) 559 465 return BLKPREP_DEFER; 560 466 561 - if (sdkp->unmap) { 562 - char *buf = page_address(page); 467 + switch (sdkp->provisioning_mode) { 468 + case SD_LBP_UNMAP: 469 + buf = page_address(page); 563 470 564 471 rq->cmd_len = 10; 565 472 rq->cmd[0] = UNMAP; ··· 572 477 put_unaligned_be32(nr_sectors, &buf[16]); 573 478 574 479 len = 24; 575 - } else { 480 + break; 481 + 482 + case SD_LBP_WS16: 576 483 rq->cmd_len = 16; 577 484 rq->cmd[0] = WRITE_SAME_16; 578 485 rq->cmd[1] = 0x8; /* UNMAP */ ··· 582 485 put_unaligned_be32(nr_sectors, &rq->cmd[10]); 583 486 584 487 len = sdkp->device->sector_size; 488 + break; 489 + 490 + case SD_LBP_WS10: 491 + case SD_LBP_ZERO: 492 + rq->cmd_len = 10; 493 + rq->cmd[0] = WRITE_SAME; 494 + if (sdkp->provisioning_mode == SD_LBP_WS10) 495 + rq->cmd[1] = 0x8; /* UNMAP */ 496 + put_unaligned_be32(sector, &rq->cmd[2]); 497 + put_unaligned_be16(nr_sectors, &rq->cmd[7]); 498 + 499 + len = sdkp->device->sector_size; 500 + break; 501 + 502 + default: 503 + goto out; 585 504 } 586 505 587 506 blk_add_request_payload(rq, page, len); 588 507 ret = scsi_setup_blk_pc_cmnd(sdp, rq); 589 508 rq->buffer = page_address(page); 509 + 510 + out: 590 511 if (ret != BLKPREP_OK) { 591 512 __free_page(page); 592 513 rq->buffer = NULL; ··· 1366 1251 struct scsi_disk *sdkp = scsi_disk(SCpnt->request->rq_disk); 1367 1252 int sense_valid = 0; 1368 1253 int sense_deferred = 0; 1254 + unsigned char op = SCpnt->cmnd[0]; 1369 1255 1370 - if (SCpnt->request->cmd_flags & REQ_DISCARD) { 1371 - if (!result) 1372 - scsi_set_resid(SCpnt, 0); 1373 - return good_bytes; 1374 - } 1256 + if ((SCpnt->request->cmd_flags & REQ_DISCARD) && !result) 1257 + scsi_set_resid(SCpnt, 0); 1375 1258 1376 1259 if (result) { 1377 1260 sense_valid = scsi_command_normalize_sense(SCpnt, &sshdr); ··· 1408 1295 SCpnt->result = 0; 1409 1296 memset(SCpnt->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE); 1410 1297 break; 1411 - case ABORTED_COMMAND: /* DIF: Target detected corruption */ 1412 - case ILLEGAL_REQUEST: /* DIX: Host detected corruption */ 1413 - if (sshdr.asc == 0x10) 1298 + case ABORTED_COMMAND: 1299 + if (sshdr.asc == 0x10) /* DIF: Target detected corruption */ 1414 1300 good_bytes = sd_completed_bytes(SCpnt); 1301 + break; 1302 + case ILLEGAL_REQUEST: 1303 + if (sshdr.asc == 0x10) /* DIX: Host detected corruption */ 1304 + good_bytes = sd_completed_bytes(SCpnt); 1305 + /* INVALID COMMAND OPCODE or INVALID FIELD IN CDB */ 1306 + if ((sshdr.asc == 0x20 || sshdr.asc == 0x24) && 1307 + (op == UNMAP || op == WRITE_SAME_16 || op == WRITE_SAME)) 1308 + sd_config_discard(sdkp, SD_LBP_DISABLE); 1415 1309 break; 1416 1310 default: 1417 1311 break; ··· 1716 1596 sd_printk(KERN_NOTICE, sdkp, 1717 1597 "physical block alignment offset: %u\n", alignment); 1718 1598 1719 - if (buffer[14] & 0x80) { /* TPE */ 1720 - struct request_queue *q = sdp->request_queue; 1599 + if 
(buffer[14] & 0x80) { /* LBPME */ 1600 + sdkp->lbpme = 1; 1721 1601 1722 - sdkp->thin_provisioning = 1; 1723 - q->limits.discard_granularity = sdkp->physical_block_size; 1724 - q->limits.max_discard_sectors = 0xffffffff; 1602 + if (buffer[14] & 0x40) /* LBPRZ */ 1603 + sdkp->lbprz = 1; 1725 1604 1726 - if (buffer[14] & 0x40) /* TPRZ */ 1727 - q->limits.discard_zeroes_data = 1; 1728 - 1729 - queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q); 1605 + sd_config_discard(sdkp, SD_LBP_WS16); 1730 1606 } 1731 1607 1732 1608 sdkp->capacity = lba + 1; ··· 2207 2091 */ 2208 2092 static void sd_read_block_limits(struct scsi_disk *sdkp) 2209 2093 { 2210 - struct request_queue *q = sdkp->disk->queue; 2211 2094 unsigned int sector_sz = sdkp->device->sector_size; 2212 2095 const int vpd_len = 64; 2213 2096 unsigned char *buffer = kmalloc(vpd_len, GFP_KERNEL); ··· 2221 2106 blk_queue_io_opt(sdkp->disk->queue, 2222 2107 get_unaligned_be32(&buffer[12]) * sector_sz); 2223 2108 2224 - /* Thin provisioning enabled and page length indicates TP support */ 2225 - if (sdkp->thin_provisioning && buffer[3] == 0x3c) { 2226 - unsigned int lba_count, desc_count, granularity; 2109 + if (buffer[3] == 0x3c) { 2110 + unsigned int lba_count, desc_count; 2111 + 2112 + sdkp->max_ws_blocks = 2113 + (u32) min_not_zero(get_unaligned_be64(&buffer[36]), 2114 + (u64)0xffffffff); 2115 + 2116 + if (!sdkp->lbpme) 2117 + goto out; 2227 2118 2228 2119 lba_count = get_unaligned_be32(&buffer[20]); 2229 2120 desc_count = get_unaligned_be32(&buffer[24]); 2230 2121 2231 - if (lba_count && desc_count) { 2232 - if (sdkp->tpvpd && !sdkp->tpu) 2233 - sdkp->unmap = 0; 2234 - else 2235 - sdkp->unmap = 1; 2236 - } 2122 + if (lba_count && desc_count) 2123 + sdkp->max_unmap_blocks = lba_count; 2237 2124 2238 - if (sdkp->tpvpd && !sdkp->tpu && !sdkp->tpws) { 2239 - sd_printk(KERN_ERR, sdkp, "Thin provisioning is " \ 2240 - "enabled but neither TPU, nor TPWS are " \ 2241 - "set. 
Disabling discard!\n"); 2242 - goto out; 2243 - } 2244 - 2245 - if (lba_count) 2246 - q->limits.max_discard_sectors = 2247 - lba_count * sector_sz >> 9; 2248 - 2249 - granularity = get_unaligned_be32(&buffer[28]); 2250 - 2251 - if (granularity) 2252 - q->limits.discard_granularity = granularity * sector_sz; 2125 + sdkp->unmap_granularity = get_unaligned_be32(&buffer[28]); 2253 2126 2254 2127 if (buffer[32] & 0x80) 2255 - q->limits.discard_alignment = 2128 + sdkp->unmap_alignment = 2256 2129 get_unaligned_be32(&buffer[32]) & ~(1 << 31); 2130 + 2131 + if (!sdkp->lbpvpd) { /* LBP VPD page not provided */ 2132 + 2133 + if (sdkp->max_unmap_blocks) 2134 + sd_config_discard(sdkp, SD_LBP_UNMAP); 2135 + else 2136 + sd_config_discard(sdkp, SD_LBP_WS16); 2137 + 2138 + } else { /* LBP VPD page tells us what to use */ 2139 + 2140 + if (sdkp->lbpu && sdkp->max_unmap_blocks) 2141 + sd_config_discard(sdkp, SD_LBP_UNMAP); 2142 + else if (sdkp->lbpws) 2143 + sd_config_discard(sdkp, SD_LBP_WS16); 2144 + else if (sdkp->lbpws10) 2145 + sd_config_discard(sdkp, SD_LBP_WS10); 2146 + else 2147 + sd_config_discard(sdkp, SD_LBP_DISABLE); 2148 + } 2257 2149 } 2258 2150 2259 2151 out: ··· 2294 2172 } 2295 2173 2296 2174 /** 2297 - * sd_read_thin_provisioning - Query thin provisioning VPD page 2175 + * sd_read_block_provisioning - Query provisioning VPD page 2298 2176 * @disk: disk to query 2299 2177 */ 2300 - static void sd_read_thin_provisioning(struct scsi_disk *sdkp) 2178 + static void sd_read_block_provisioning(struct scsi_disk *sdkp) 2301 2179 { 2302 2180 unsigned char *buffer; 2303 2181 const int vpd_len = 8; 2304 2182 2305 - if (sdkp->thin_provisioning == 0) 2183 + if (sdkp->lbpme == 0) 2306 2184 return; 2307 2185 2308 2186 buffer = kmalloc(vpd_len, GFP_KERNEL); ··· 2310 2188 if (!buffer || scsi_get_vpd_page(sdkp->device, 0xb2, buffer, vpd_len)) 2311 2189 goto out; 2312 2190 2313 - sdkp->tpvpd = 1; 2314 - sdkp->tpu = (buffer[5] >> 7) & 1; /* UNMAP */ 2315 - sdkp->tpws = (buffer[5] >> 6) & 1; /* WRITE SAME(16) with UNMAP */ 2191 + sdkp->lbpvpd = 1; 2192 + sdkp->lbpu = (buffer[5] >> 7) & 1; /* UNMAP */ 2193 + sdkp->lbpws = (buffer[5] >> 6) & 1; /* WRITE SAME(16) with UNMAP */ 2194 + sdkp->lbpws10 = (buffer[5] >> 5) & 1; /* WRITE SAME(10) with UNMAP */ 2316 2195 2317 2196 out: 2318 2197 kfree(buffer); ··· 2370 2247 sd_read_capacity(sdkp, buffer); 2371 2248 2372 2249 if (sd_try_extended_inquiry(sdp)) { 2373 - sd_read_thin_provisioning(sdkp); 2250 + sd_read_block_provisioning(sdkp); 2374 2251 sd_read_block_limits(sdkp); 2375 2252 sd_read_block_characteristics(sdkp); 2376 2253 }
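The tail of sd_read_block_limits() is now a small decision tree choosing the discard backend. A standalone restatement, using the SD_LBP_* order from the sd.h hunk below; struct lbp_caps is an illustrative stand-in for the scsi_disk fields involved:

    #include <stdio.h>

    enum lbp_mode { SD_LBP_FULL, SD_LBP_UNMAP, SD_LBP_WS16, SD_LBP_WS10,
                    SD_LBP_ZERO, SD_LBP_DISABLE };

    /* Illustrative container for the scsi_disk fields the choice uses. */
    struct lbp_caps {
        int lbpvpd, lbpu, lbpws, lbpws10;
        unsigned int max_unmap_blocks;
    };

    /* Restate the selection: with no LBP VPD page sd guesses from the
     * reported UNMAP limits; with one, the page's LBPU/LBPWS/LBPWS10
     * bits pick the mechanism, otherwise discard is disabled. */
    static enum lbp_mode choose_mode(const struct lbp_caps *c)
    {
        if (!c->lbpvpd)
            return c->max_unmap_blocks ? SD_LBP_UNMAP : SD_LBP_WS16;
        if (c->lbpu && c->max_unmap_blocks)
            return SD_LBP_UNMAP;
        if (c->lbpws)
            return SD_LBP_WS16;
        if (c->lbpws10)
            return SD_LBP_WS10;
        return SD_LBP_DISABLE;
    }

    int main(void)
    {
        struct lbp_caps c = { 1, 1, 0, 0, 1024 };
        printf("%d\n", choose_mode(&c)); /* 1: SD_LBP_UNMAP */
        return 0;
    }

The same mode is also user-overridable at runtime through the new provisioning_mode sysfs attribute shown earlier in the sd.c hunk.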
+20 -5
drivers/scsi/sd.h
··· 43 43 SD_MEMPOOL_SIZE = 2, /* CDB pool size */ 44 44 }; 45 45 46 + enum { 47 + SD_LBP_FULL = 0, /* Full logical block provisioning */ 48 + SD_LBP_UNMAP, /* Use UNMAP command */ 49 + SD_LBP_WS16, /* Use WRITE SAME(16) with UNMAP bit */ 50 + SD_LBP_WS10, /* Use WRITE SAME(10) with UNMAP bit */ 51 + SD_LBP_ZERO, /* Use WRITE SAME(10) with zero payload */ 52 + SD_LBP_DISABLE, /* Discard disabled due to failed cmd */ 53 + }; 54 + 46 55 struct scsi_disk { 47 56 struct scsi_driver *driver; /* always &sd_template */ 48 57 struct scsi_device *device; ··· 59 50 struct gendisk *disk; 60 51 atomic_t openers; 61 52 sector_t capacity; /* size in 512-byte sectors */ 53 + u32 max_ws_blocks; 54 + u32 max_unmap_blocks; 55 + u32 unmap_granularity; 56 + u32 unmap_alignment; 62 57 u32 index; 63 58 unsigned int physical_block_size; 64 59 u8 media_present; 65 60 u8 write_prot; 66 61 u8 protection_type;/* Data Integrity Field */ 62 + u8 provisioning_mode; 67 63 unsigned ATO : 1; /* state of disk ATO bit */ 68 64 unsigned WCE : 1; /* state of disk WCE bit */ 69 65 unsigned RCD : 1; /* state of disk RCD bit, unused */ 70 66 unsigned DPOFUA : 1; /* state of disk DPOFUA bit */ 71 67 unsigned first_scan : 1; 72 - unsigned thin_provisioning : 1; 73 - unsigned unmap : 1; 74 - unsigned tpws : 1; 75 - unsigned tpu : 1; 76 - unsigned tpvpd : 1; 68 + unsigned lbpme : 1; 69 + unsigned lbprz : 1; 70 + unsigned lbpu : 1; 71 + unsigned lbpws : 1; 72 + unsigned lbpws10 : 1; 73 + unsigned lbpvpd : 1; 77 74 }; 78 75 #define to_scsi_disk(obj) container_of(obj,struct scsi_disk,dev) 79 76
+7 -1
drivers/target/target_core_cdb.c
··· 667 667 { 668 668 struct se_device *dev = SE_DEV(cmd); 669 669 unsigned char *buf = cmd->t_task->t_task_buf; 670 - u32 blocks = dev->transport->get_blocks(dev); 670 + unsigned long long blocks_long = dev->transport->get_blocks(dev); 671 + u32 blocks; 672 + 673 + if (blocks_long >= 0x00000000ffffffff) 674 + blocks = 0xffffffff; 675 + else 676 + blocks = (u32)blocks_long; 671 677 672 678 buf[0] = (blocks >> 24) & 0xff; 673 679 buf[1] = (blocks >> 16) & 0xff;
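The target_core_cdb.c fix above clamps the 32-bit "returned LBA" field of READ CAPACITY(10), which previously truncated on volumes of 2TB or more. A tiny sketch of the clamp together with the big-endian encoding the emulation performs; a clamped value tells the initiator to issue READ CAPACITY(16) instead:

    #include <stdio.h>

    static void encode_readcap10(unsigned char *buf,
                                 unsigned long long blocks_long)
    {
        /* Report 0xffffffff when the block count does not fit in 32 bits,
         * forcing the initiator to fall back to READ CAPACITY(16). */
        unsigned int blocks = blocks_long >= 0xffffffffULL
                            ? 0xffffffffU : (unsigned int)blocks_long;

        buf[0] = (blocks >> 24) & 0xff;
        buf[1] = (blocks >> 16) & 0xff;
        buf[2] = (blocks >> 8) & 0xff;
        buf[3] = blocks & 0xff;
    }

    int main(void)
    {
        unsigned char buf[4];

        encode_readcap10(buf, 0x1ffffffffULL); /* > 2TB: clamped */
        printf("%02x%02x%02x%02x\n", buf[0], buf[1], buf[2], buf[3]);
        return 0;
    }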
+2
include/linux/pci_ids.h
··· 2079 2079 #define PCI_DEVICE_ID_TIGON3_5723 0x165b 2080 2080 #define PCI_DEVICE_ID_TIGON3_5705M 0x165d 2081 2081 #define PCI_DEVICE_ID_TIGON3_5705M_2 0x165e 2082 + #define PCI_DEVICE_ID_NX2_57712 0x1662 2083 + #define PCI_DEVICE_ID_NX2_57712E 0x1663 2082 2084 #define PCI_DEVICE_ID_TIGON3_5714 0x1668 2083 2085 #define PCI_DEVICE_ID_TIGON3_5714S 0x1669 2084 2086 #define PCI_DEVICE_ID_TIGON3_5780 0x166a
+10 -1
include/scsi/fc/fc_ns.h
··· 41 41 FC_NS_GI_A = 0x0101, /* get identifiers - scope */ 42 42 FC_NS_GPN_ID = 0x0112, /* get port name by ID */ 43 43 FC_NS_GNN_ID = 0x0113, /* get node name by ID */ 44 + FC_NS_GSPN_ID = 0x0118, /* get symbolic port name */ 44 45 FC_NS_GID_PN = 0x0121, /* get ID for port name */ 45 46 FC_NS_GID_NN = 0x0131, /* get IDs for node name */ 46 47 FC_NS_GID_FT = 0x0171, /* get IDs by FC4 type */ ··· 145 144 }; 146 145 147 146 /* 148 - * GID_PN response 147 + * GID_PN response or GSPN_ID request 149 148 */ 150 149 struct fc_gid_pn_resp { 151 150 __u8 fp_resvd; 152 151 __u8 fp_fid[3]; /* port ID */ 152 + }; 153 + 154 + /* 155 + * GSPN_ID response 156 + */ 157 + struct fc_gspn_resp { 158 + __u8 fp_name_len; 159 + char fp_name[]; 153 160 }; 154 161 155 162 /*
+17 -9
include/scsi/fc_encode.h
··· 46 46 } payload; 47 47 }; 48 48 49 + static inline void __fc_fill_fc_hdr(struct fc_frame_header *fh, 50 + enum fc_rctl r_ctl, 51 + u32 did, u32 sid, enum fc_fh_type type, 52 + u32 f_ctl, u32 parm_offset) 53 + { 54 + WARN_ON(r_ctl == 0); 55 + fh->fh_r_ctl = r_ctl; 56 + hton24(fh->fh_d_id, did); 57 + hton24(fh->fh_s_id, sid); 58 + fh->fh_type = type; 59 + hton24(fh->fh_f_ctl, f_ctl); 60 + fh->fh_cs_ctl = 0; 61 + fh->fh_df_ctl = 0; 62 + fh->fh_parm_offset = htonl(parm_offset); 63 + } 64 + 49 65 /** 50 66 * fill FC header fields in specified fc_frame 51 67 */ ··· 72 56 struct fc_frame_header *fh; 73 57 74 58 fh = fc_frame_header_get(fp); 75 - WARN_ON(r_ctl == 0); 76 - fh->fh_r_ctl = r_ctl; 77 - hton24(fh->fh_d_id, did); 78 - hton24(fh->fh_s_id, sid); 79 - fh->fh_type = type; 80 - hton24(fh->fh_f_ctl, f_ctl); 81 - fh->fh_cs_ctl = 0; 82 - fh->fh_df_ctl = 0; 83 - fh->fh_parm_offset = htonl(parm_offset); 59 + __fc_fill_fc_hdr(fh, r_ctl, did, sid, type, f_ctl, parm_offset); 84 60 } 85 61 86 62 /**
+73 -1
include/scsi/libfc.h
··· 35 35 36 36 #include <scsi/fc_frame.h> 37 37 38 + #define FC_FC4_PROV_SIZE (FC_TYPE_FCP + 1) /* size of tables */ 39 + 38 40 /* 39 41 * libfc error codes 40 42 */ ··· 158 156 #define FC_RP_FLAGS_REC_SUPPORTED (1 << 0) 159 157 #define FC_RP_FLAGS_RETRY (1 << 1) 160 158 #define FC_RP_STARTED (1 << 2) 159 + #define FC_RP_FLAGS_CONF_REQ (1 << 3) 161 160 unsigned int e_d_tov; 162 161 unsigned int r_a_tov; 163 162 }; ··· 182 179 * @rp_mutex: The mutex that protects the remote port 183 180 * @retry_work: Handle for retries 184 181 * @event_callback: Callback when READY, FAILED or LOGO states complete 182 + * @prli_count: Count of open PRLI sessions in providers 185 183 * @rcu: Structure used for freeing in an RCU-safe manner 186 184 */ 187 185 struct fc_rport_priv { ··· 206 202 struct list_head peers; 207 203 struct work_struct event_work; 208 204 u32 supported_classes; 205 + u16 prli_count; 209 206 struct rcu_head rcu; 207 + u16 sp_features; 208 + u8 spp_type; 209 + void (*lld_event_callback)(struct fc_lport *, 210 + struct fc_rport_priv *, 211 + enum fc_rport_event); 210 212 }; 211 213 212 214 /** ··· 561 551 struct fc_seq *(*seq_start_next)(struct fc_seq *); 562 552 563 553 /* 554 + * Set a response handler for the exchange of the sequence. 555 + * 556 + * STATUS: OPTIONAL 557 + */ 558 + void (*seq_set_resp)(struct fc_seq *sp, 559 + void (*resp)(struct fc_seq *, struct fc_frame *, 560 + void *), 561 + void *arg); 562 + 563 + /* 564 564 * Assign a sequence for an incoming request frame. 565 565 * 566 566 * STATUS: OPTIONAL 567 567 */ 568 568 struct fc_seq *(*seq_assign)(struct fc_lport *, struct fc_frame *); 569 + 570 + /* 571 + * Release the reference on the sequence returned by seq_assign(). 572 + * 573 + * STATUS: OPTIONAL 574 + */ 575 + void (*seq_release)(struct fc_seq *); 569 576 570 577 /* 571 578 * Reset an exchange manager, completing all sequences and exchanges. ··· 683 656 void (*rport_destroy)(struct kref *); 684 657 685 658 /* 659 + * Callback routine after the remote port is logged in 660 + * 661 + * STATUS: OPTIONAL 662 + */ 663 + void (*rport_event_callback)(struct fc_lport *, 664 + struct fc_rport_priv *, 665 + enum fc_rport_event); 666 + 667 + /* 686 668 * Send a fcp cmd from fsp pkt. 687 669 * Called with the SCSI host lock unlocked and irqs disabled. 688 670 * ··· 785 749 enum fc_disc_event); 786 750 }; 787 751 752 + /* 753 + * Local port notifier and events. 
754 + */ 755 + extern struct blocking_notifier_head fc_lport_notifier_head; 756 + enum fc_lport_event { 757 + FC_LPORT_EV_ADD, 758 + FC_LPORT_EV_DEL, 759 + }; 760 + 788 761 /** 789 762 * struct fc_lport - Local port 790 763 * @host: The SCSI host associated with a local port ··· 834 789 * @lso_max: The maximum large offload send size 835 790 * @fcts: FC-4 type mask 836 791 * @lp_mutex: Mutex to protect the local port 837 - * @list: Handle for list of local ports 792 + * @list: Linkage on list of vport peers 838 793 * @retry_work: Handle to local port for delayed retry context 794 + * @prov: Pointers available for use by passive FC-4 providers 795 + * @lport_list: Linkage on module-wide list of local ports 839 796 */ 840 797 struct fc_lport { 841 798 /* Associations */ ··· 893 846 struct mutex lp_mutex; 894 847 struct list_head list; 895 848 struct delayed_work retry_work; 849 + void *prov[FC_FC4_PROV_SIZE]; 850 + struct list_head lport_list; 896 851 }; 852 + 853 + /** 854 + * struct fc4_prov - FC-4 provider registration 855 + * @prli: Handler for incoming PRLI 856 + * @prlo: Handler for session reset 857 + * @recv: Handler for incoming request 858 + * @module: Pointer to module. May be NULL. 859 + */ 860 + struct fc4_prov { 861 + int (*prli)(struct fc_rport_priv *, u32 spp_len, 862 + const struct fc_els_spp *spp_in, 863 + struct fc_els_spp *spp_out); 864 + void (*prlo)(struct fc_rport_priv *); 865 + void (*recv)(struct fc_lport *, struct fc_frame *); 866 + struct module *module; 867 + }; 868 + 869 + /* 870 + * Register FC-4 provider with libfc. 871 + */ 872 + int fc_fc4_register_provider(enum fc_fh_type type, struct fc4_prov *); 873 + void fc_fc4_deregister_provider(enum fc_fh_type type, struct fc4_prov *); 897 874 898 875 /* 899 876 * FC_LPORT HELPER FUNCTIONS ··· 1049 978 struct fc_lport *fc_vport_id_lookup(struct fc_lport *, u32 port_id); 1050 979 int fc_lport_bsg_request(struct fc_bsg_job *); 1051 980 void fc_lport_set_local_id(struct fc_lport *, u32 port_id); 981 + void fc_lport_iterate(void (*func)(struct fc_lport *, void *), void *); 1052 982 1053 983 /* 1054 984 * REMOTE PORT LAYER
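fc_fc4_register_provider() above installs a passive FC-4 handler into per-type slots (FC_FC4_PROV_SIZE of them, indexed by FC-4 type). A userspace sketch of that table-dispatch idea; only the register/deregister pair shown in the hunk is real kernel API, everything else here is illustrative:

    #include <stddef.h>
    #include <stdio.h>

    #define FC4_PROV_SIZE 9   /* FC_TYPE_FCP + 1, as in the real header */

    /* Illustrative provider: one recv handler per FC-4 type, looked up
     * by the frame's fh_type when an unsolicited request arrives. */
    struct fc4_prov {
        void (*recv)(unsigned char fh_type);
    };

    static struct fc4_prov *prov_table[FC4_PROV_SIZE];

    static int fc4_register(unsigned char type, struct fc4_prov *prov)
    {
        if (type >= FC4_PROV_SIZE || prov_table[type])
            return -1; /* type out of range or slot already taken */
        prov_table[type] = prov;
        return 0;
    }

    static void fc4_recv(unsigned char type)
    {
        if (type < FC4_PROV_SIZE && prov_table[type])
            prov_table[type]->recv(type);
        else
            printf("no provider for FC-4 type %u\n", type);
    }

    static void fcp_recv(unsigned char t) { printf("FCP frame, type %u\n", t); }

    int main(void)
    {
        struct fc4_prov fcp = { fcp_recv };

        fc4_register(8, &fcp); /* FC_TYPE_FCP == 8 */
        fc4_recv(8);
        fc4_recv(5);
        return 0;
    }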
+105
include/scsi/libfcoe.h
··· 33 33 #define FCOE_MAX_CMD_LEN 16 /* Supported CDB length */ 34 34 35 35 /* 36 + * Max MTU for FCoE: 14 (FCoE header) + 24 (FC header) + 2112 (max FC payload) 37 + * + 4 (FC CRC) + 4 (FCoE trailer) = 2158 bytes 38 + */ 39 + #define FCOE_MTU 2158 40 + 41 + /* 36 42 * FIP tunable parameters. 37 43 */ 38 44 #define FCOE_CTLR_START_DELAY 2000 /* mS after first adv. to choose FCF */ ··· 227 221 u64 fcoe_wwn_from_mac(unsigned char mac[], unsigned int, unsigned int); 228 222 int fcoe_libfc_config(struct fc_lport *, struct fcoe_ctlr *, 229 223 const struct libfc_function_template *, int init_fcp); 224 + u32 fcoe_fc_crc(struct fc_frame *fp); 225 + int fcoe_start_io(struct sk_buff *skb); 230 226 231 227 /** 232 228 * is_fip_mode() - returns true if FIP mode selected. ··· 239 231 return fip->state == FIP_ST_ENABLED; 240 232 } 241 233 234 + /* helper for FCoE SW HBA drivers, can include subven and subdev if needed. The 235 + * modpost would use pci_device_id table to auto-generate formatted module alias 236 + * into the corresponding .mod.c file, but there may or may not be a pci device 237 + * id table for FCoE drivers so we use the following helper for build the fcoe 238 + * driver module alias. 239 + */ 240 + #define MODULE_ALIAS_FCOE_PCI(ven, dev) \ 241 + MODULE_ALIAS("fcoe-pci:" \ 242 + "v" __stringify(ven) \ 243 + "d" __stringify(dev) "sv*sd*bc*sc*i*") 244 + 245 + /* the name of the default FCoE transport driver fcoe.ko */ 246 + #define FCOE_TRANSPORT_DEFAULT "fcoe" 247 + 248 + /* struct fcoe_transport - The FCoE transport interface 249 + * @name: a vendor specific name for their FCoE transport driver 250 + * @attached: whether this transport is already attached 251 + * @list: list linkage to all attached transports 252 + * @match: handler to allow the transport driver to match up a given netdev 253 + * @create: handler to sysfs entry of create for FCoE instances 254 + * @destroy: handler to sysfs entry of destroy for FCoE instances 255 + * @enable: handler to sysfs entry of enable for FCoE instances 256 + * @disable: handler to sysfs entry of disable for FCoE instances 257 + */ 258 + struct fcoe_transport { 259 + char name[IFNAMSIZ]; 260 + bool attached; 261 + struct list_head list; 262 + bool (*match) (struct net_device *device); 263 + int (*create) (struct net_device *device, enum fip_state fip_mode); 264 + int (*destroy) (struct net_device *device); 265 + int (*enable) (struct net_device *device); 266 + int (*disable) (struct net_device *device); 267 + }; 268 + 269 + /** 270 + * struct fcoe_percpu_s - The context for FCoE receive thread(s) 271 + * @thread: The thread context 272 + * @fcoe_rx_list: The queue of pending packets to process 273 + * @page: The memory page for calculating frame trailer CRCs 274 + * @crc_eof_offset: The offset into the CRC page pointing to available 275 + * memory for a new trailer 276 + */ 277 + struct fcoe_percpu_s { 278 + struct task_struct *thread; 279 + struct sk_buff_head fcoe_rx_list; 280 + struct page *crc_eof_page; 281 + int crc_eof_offset; 282 + }; 283 + 284 + /** 285 + * struct fcoe_port - The FCoE private structure 286 + * @priv: The associated fcoe interface. 
The structure is 287 + * defined by the low level driver 288 + * @lport: The associated local port 289 + * @fcoe_pending_queue: The pending Rx queue of skbs 290 + * @fcoe_pending_queue_active: Indicates if the pending queue is active 291 + * @max_queue_depth: Max queue depth of pending queue 292 + * @min_queue_depth: Min queue depth of pending queue 293 + * @timer: The queue timer 294 + * @destroy_work: Handle for work context 295 + * (to prevent RTNL deadlocks) 296 + * @data_srt_addr: Source address for data 297 + * 298 + * An instance of this structure is to be allocated along with the 299 + * Scsi_Host and libfc fc_lport structures. 300 + */ 301 + struct fcoe_port { 302 + void *priv; 303 + struct fc_lport *lport; 304 + struct sk_buff_head fcoe_pending_queue; 305 + u8 fcoe_pending_queue_active; 306 + u32 max_queue_depth; 307 + u32 min_queue_depth; 308 + struct timer_list timer; 309 + struct work_struct destroy_work; 310 + u8 data_src_addr[ETH_ALEN]; 311 + }; 312 + void fcoe_clean_pending_queue(struct fc_lport *); 313 + void fcoe_check_wait_queue(struct fc_lport *lport, struct sk_buff *skb); 314 + void fcoe_queue_timer(ulong lport); 315 + int fcoe_get_paged_crc_eof(struct sk_buff *skb, int tlen, 316 + struct fcoe_percpu_s *fps); 317 + 318 + /** 319 + * struct netdev_list 320 + * A mapping from netdevice to fcoe_transport 321 + */ 322 + struct fcoe_netdev_mapping { 323 + struct list_head list; 324 + struct net_device *netdev; 325 + struct fcoe_transport *ft; 326 + }; 327 + 328 + /* fcoe transports registration and deregistration */ 329 + int fcoe_transport_attach(struct fcoe_transport *ft); 330 + int fcoe_transport_detach(struct fcoe_transport *ft); 242 331 243 332 #endif /* _LIBFCOE_H */
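struct fcoe_transport lets multiple FCoE LLDs (the software fcoe.ko and offload drivers such as bnx2fc) share the libfcoe sysfs create/destroy entry points: libfcoe keeps a netdev-to-transport mapping and asks each attached transport to match() a given netdev. A userspace sketch of that selection, assuming the default software transport accepts any netdev and is consulted last; the array stands in for the kernel's attached-transport list:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Trimmed-down transport: the kernel version also carries the
     * create/destroy/enable/disable hooks and list linkage. */
    struct xport {
        const char *name;
        bool (*match)(const char *netdev);
    };

    static bool bnx2fc_match(const char *nd) { return strncmp(nd, "bnx", 3) == 0; }
    static bool sw_fcoe_match(const char *nd) { (void)nd; return true; }

    /* First attached transport whose match() accepts the netdev wins. */
    static const struct xport *pick_transport(const struct xport *t,
                                              size_t n, const char *netdev)
    {
        for (size_t i = 0; i < n; i++)
            if (t[i].match(netdev))
                return &t[i];
        return NULL;
    }

    int main(void)
    {
        const struct xport table[] = {
            { "bnx2fc", bnx2fc_match },
            { "fcoe",   sw_fcoe_match }, /* FCOE_TRANSPORT_DEFAULT */
        };
        printf("%s\n", pick_transport(table, 2, "bnx2x-eth0")->name);
        printf("%s\n", pick_transport(table, 2, "eth1")->name);
        return 0;
    }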
+2 -6
include/scsi/libiscsi.h
··· 212 212 /* values userspace uses to id a conn */ 213 213 int persistent_port; 214 214 char *persistent_address; 215 - /* remote portal currently connected to */ 216 - int portal_port; 217 - char portal_address[ISCSI_ADDRESS_BUF_LEN]; 218 215 219 216 /* MIB-statistics */ 220 217 uint64_t txdata_octets; ··· 316 319 /* hw address or netdev iscsi connection is bound to */ 317 320 char *hwaddress; 318 321 char *netdev; 319 - /* local address */ 320 - int local_port; 321 - char local_address[ISCSI_ADDRESS_BUF_LEN]; 322 322 323 323 wait_queue_head_t session_removal_wq; 324 324 /* protects sessions and state */ ··· 388 394 enum iscsi_err err); 389 395 extern int iscsi_conn_get_param(struct iscsi_cls_conn *cls_conn, 390 396 enum iscsi_param param, char *buf); 397 + extern int iscsi_conn_get_addr_param(struct sockaddr_storage *addr, 398 + enum iscsi_param param, char *buf); 391 399 extern void iscsi_suspend_tx(struct iscsi_conn *conn); 392 400 extern void iscsi_suspend_queue(struct iscsi_conn *conn); 393 401 extern void iscsi_conn_queue_work(struct iscsi_conn *conn);
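The cached portal_address/local_address buffers are removed from struct iscsi_conn; userspace now obtains addresses through the new iscsi_conn_get_addr_param(), which formats a struct sockaddr_storage on demand. A sketch of the formatting such a getter has to do, using standard inet_ntop() and assuming IPv4/IPv6 only; the kernel helper's exact output may differ:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Produce either the address or the port string from a
     * sockaddr_storage, roughly what ISCSI_PARAM_CONN_ADDRESS /
     * ISCSI_PARAM_CONN_PORT now require. Returns 0 on success. */
    static int format_addr(const struct sockaddr_storage *ss, char *buf,
                           size_t len, int want_port)
    {
        if (ss->ss_family == AF_INET) {
            const struct sockaddr_in *sin = (const struct sockaddr_in *)ss;

            if (want_port) {
                snprintf(buf, len, "%u", ntohs(sin->sin_port));
                return 0;
            }
            return inet_ntop(AF_INET, &sin->sin_addr, buf, len) ? 0 : -1;
        }
        if (ss->ss_family == AF_INET6) {
            const struct sockaddr_in6 *s6 = (const struct sockaddr_in6 *)ss;

            if (want_port) {
                snprintf(buf, len, "%u", ntohs(s6->sin6_port));
                return 0;
            }
            return inet_ntop(AF_INET6, &s6->sin6_addr, buf, len) ? 0 : -1;
        }
        return -1;
    }

    int main(void)
    {
        struct sockaddr_in sin = { .sin_family = AF_INET,
                                   .sin_port = htons(3260) };
        struct sockaddr_storage ss;
        char buf[INET6_ADDRSTRLEN];

        inet_pton(AF_INET, "192.168.1.9", &sin.sin_addr);
        memset(&ss, 0, sizeof(ss));
        memcpy(&ss, &sin, sizeof(sin));
        if (format_addr(&ss, buf, sizeof(buf), 0) == 0)
            printf("%s\n", buf);
        return 0;
    }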
+5
include/scsi/scsi.h
··· 435 435 * recover the link. Transport class will 436 436 * retry or fail IO */ 437 437 #define DID_TRANSPORT_FAILFAST 0x0f /* Transport class fastfailed the io */ 438 + #define DID_TARGET_FAILURE 0x10 /* Permanent target failure, do not retry on 439 + * other paths */ 440 + #define DID_NEXUS_FAILURE 0x11 /* Permanent nexus failure, retry on other 441 + * paths might yield different results */ 438 442 #define DRIVER_OK 0x00 /* Driver status */ 439 443 440 444 /* ··· 468 464 #define TIMEOUT_ERROR 0x2007 469 465 #define SCSI_RETURN_NOT_HANDLED 0x2008 470 466 #define FAST_IO_FAIL 0x2009 467 + #define TARGET_ERROR 0x200A 471 468 472 469 /* 473 470 * Midlevel queue return values.
+1
include/scsi/scsi_device.h
··· 184 184 struct scsi_device_handler { 185 185 /* Used by the infrastructure */ 186 186 struct list_head list; /* list of scsi_device_handlers */ 187 + int idx; 187 188 188 189 /* Filled by the hardware handler */ 189 190 struct module *module;
+5 -1
include/scsi/scsi_transport_iscsi.h
··· 101 101 void (*destroy_conn) (struct iscsi_cls_conn *conn); 102 102 int (*set_param) (struct iscsi_cls_conn *conn, enum iscsi_param param, 103 103 char *buf, int buflen); 104 + int (*get_ep_param) (struct iscsi_endpoint *ep, enum iscsi_param param, 105 + char *buf); 104 106 int (*get_conn_param) (struct iscsi_cls_conn *conn, 105 107 enum iscsi_param param, char *buf); 106 108 int (*get_session_param) (struct iscsi_cls_session *session, ··· 162 160 void *dd_data; /* LLD private data */ 163 161 struct iscsi_transport *transport; 164 162 uint32_t cid; /* connection id */ 163 + struct mutex ep_mutex; 164 + struct iscsi_endpoint *ep; 165 165 166 - int active; /* must be accessed with the connlock */ 167 166 struct device dev; /* sysfs transport/container device */ 168 167 }; 169 168 ··· 225 222 void *dd_data; /* LLD private data */ 226 223 struct device dev; 227 224 uint64_t id; 225 + struct iscsi_cls_conn *conn; 228 226 }; 229 227 230 228 /*
+24 -4
include/trace/events/scsi.h
··· 184 184 scsi_statusbyte_name(SAM_STAT_ACA_ACTIVE), \ 185 185 scsi_statusbyte_name(SAM_STAT_TASK_ABORTED)) 186 186 187 + #define scsi_prot_op_name(result) { result, #result } 188 + #define show_prot_op_name(val) \ 189 + __print_symbolic(val, \ 190 + scsi_prot_op_name(SCSI_PROT_NORMAL), \ 191 + scsi_prot_op_name(SCSI_PROT_READ_INSERT), \ 192 + scsi_prot_op_name(SCSI_PROT_WRITE_STRIP), \ 193 + scsi_prot_op_name(SCSI_PROT_READ_STRIP), \ 194 + scsi_prot_op_name(SCSI_PROT_WRITE_INSERT), \ 195 + scsi_prot_op_name(SCSI_PROT_READ_PASS), \ 196 + scsi_prot_op_name(SCSI_PROT_WRITE_PASS)) 197 + 187 198 const char *scsi_trace_parse_cdb(struct trace_seq*, unsigned char*, int); 188 199 #define __parse_cdb(cdb, len) scsi_trace_parse_cdb(p, cdb, len) 189 200 ··· 213 202 __field( unsigned int, cmd_len ) 214 203 __field( unsigned int, data_sglen ) 215 204 __field( unsigned int, prot_sglen ) 205 + __field( unsigned char, prot_op ) 216 206 __dynamic_array(unsigned char, cmnd, cmd->cmd_len) 217 207 ), 218 208 ··· 226 214 __entry->cmd_len = cmd->cmd_len; 227 215 __entry->data_sglen = scsi_sg_count(cmd); 228 216 __entry->prot_sglen = scsi_prot_sg_count(cmd); 217 + __entry->prot_op = scsi_get_prot_op(cmd); 229 218 memcpy(__get_dynamic_array(cmnd), cmd->cmnd, cmd->cmd_len); 230 219 ), 231 220 232 221 TP_printk("host_no=%u channel=%u id=%u lun=%u data_sgl=%u prot_sgl=%u" \ 233 - " cmnd=(%s %s raw=%s)", 222 + " prot_op=%s cmnd=(%s %s raw=%s)", 234 223 __entry->host_no, __entry->channel, __entry->id, 235 224 __entry->lun, __entry->data_sglen, __entry->prot_sglen, 225 + show_prot_op_name(__entry->prot_op), 236 226 show_opcode_name(__entry->opcode), 237 227 __parse_cdb(__get_dynamic_array(cmnd), __entry->cmd_len), 238 228 __print_hex(__get_dynamic_array(cmnd), __entry->cmd_len)) ··· 256 242 __field( unsigned int, cmd_len ) 257 243 __field( unsigned int, data_sglen ) 258 244 __field( unsigned int, prot_sglen ) 245 + __field( unsigned char, prot_op ) 259 246 __dynamic_array(unsigned char, cmnd, cmd->cmd_len) 260 247 ), 261 248 ··· 270 255 __entry->cmd_len = cmd->cmd_len; 271 256 __entry->data_sglen = scsi_sg_count(cmd); 272 257 __entry->prot_sglen = scsi_prot_sg_count(cmd); 258 + __entry->prot_op = scsi_get_prot_op(cmd); 273 259 memcpy(__get_dynamic_array(cmnd), cmd->cmnd, cmd->cmd_len); 274 260 ), 275 261 276 262 TP_printk("host_no=%u channel=%u id=%u lun=%u data_sgl=%u prot_sgl=%u" \ 277 - " cmnd=(%s %s raw=%s) rtn=%d", 263 + " prot_op=%s cmnd=(%s %s raw=%s) rtn=%d", 278 264 __entry->host_no, __entry->channel, __entry->id, 279 265 __entry->lun, __entry->data_sglen, __entry->prot_sglen, 266 + show_prot_op_name(__entry->prot_op), 280 267 show_opcode_name(__entry->opcode), 281 268 __parse_cdb(__get_dynamic_array(cmnd), __entry->cmd_len), 282 269 __print_hex(__get_dynamic_array(cmnd), __entry->cmd_len), ··· 301 284 __field( unsigned int, cmd_len ) 302 285 __field( unsigned int, data_sglen ) 303 286 __field( unsigned int, prot_sglen ) 287 + __field( unsigned char, prot_op ) 304 288 __dynamic_array(unsigned char, cmnd, cmd->cmd_len) 305 289 ), 306 290 ··· 315 297 __entry->cmd_len = cmd->cmd_len; 316 298 __entry->data_sglen = scsi_sg_count(cmd); 317 299 __entry->prot_sglen = scsi_prot_sg_count(cmd); 300 + __entry->prot_op = scsi_get_prot_op(cmd); 318 301 memcpy(__get_dynamic_array(cmnd), cmd->cmnd, cmd->cmd_len); 319 302 ), 320 303 321 304 TP_printk("host_no=%u channel=%u id=%u lun=%u data_sgl=%u " \ 322 - "prot_sgl=%u cmnd=(%s %s raw=%s) result=(driver=%s host=%s " \ 323 - "message=%s status=%s)", 305 + "prot_sgl=%u 
prot_op=%s cmnd=(%s %s raw=%s) result=(driver=" \ 306 + "%s host=%s message=%s status=%s)", 324 307 __entry->host_no, __entry->channel, __entry->id, 325 308 __entry->lun, __entry->data_sglen, __entry->prot_sglen, 309 + show_prot_op_name(__entry->prot_op), 326 310 show_opcode_name(__entry->opcode), 327 311 __parse_cdb(__get_dynamic_array(cmnd), __entry->cmd_len), 328 312 __print_hex(__get_dynamic_array(cmnd), __entry->cmd_len),
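show_prot_op_name() above relies on ftrace's __print_symbolic(), which is simply a value-to-string table walk performed when the trace buffer is rendered. A userspace equivalent of the mapping; the numeric values assume the SCSI_PROT_* enum order in include/scsi/scsi_cmnd.h:

    #include <stddef.h>
    #include <stdio.h>

    struct sym { unsigned char val; const char *name; };

    static const struct sym prot_ops[] = {
        { 0, "SCSI_PROT_NORMAL" },       { 1, "SCSI_PROT_READ_INSERT" },
        { 2, "SCSI_PROT_WRITE_STRIP" },  { 3, "SCSI_PROT_READ_STRIP" },
        { 4, "SCSI_PROT_WRITE_INSERT" }, { 5, "SCSI_PROT_READ_PASS" },
        { 6, "SCSI_PROT_WRITE_PASS" },
    };

    /* Walk the table and fall back to a placeholder, the same shape as
     * the __print_symbolic() expansion in the tracepoint above. */
    static const char *prot_op_name(unsigned char op)
    {
        for (size_t i = 0; i < sizeof(prot_ops) / sizeof(prot_ops[0]); i++)
            if (prot_ops[i].val == op)
                return prot_ops[i].name;
        return "(unknown)";
    }

    int main(void)
    {
        printf("prot_op=%s\n", prot_op_name(1));
        printf("prot_op=%s\n", prot_op_name(9));
        return 0;
    }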