Merge gregkh@master.kernel.org:/pub/scm/linux/kernel/git/jejb/scsi-rc-fixes-2.6

+1076 -1268
+123
Documentation/scsi/ChangeLog.megaraid
··· 1 + Release Date : Fri May 19 09:31:45 EST 2006 - Seokmann Ju <sju@lsil.com>
2 + Current Version : 2.20.4.9 (scsi module), 2.20.2.6 (cmm module)
3 + Older Version : 2.20.4.8 (scsi module), 2.20.2.6 (cmm module)
4 + 
5 + 1. Fixed a bug in megaraid_init_mbox().
6 + Customer reported "garbage in file on x86_64 platform".
7 + Root Cause: the driver registered controllers as 64-bit DMA capable
8 + for those which do not support it.
9 + Fix: Made a change in the function, inserting an identification
10 + mechanism to detect 64-bit DMA capable controllers.
11 + 
12 + > -----Original Message-----
13 + > From: Vasily Averin [mailto:vvs@sw.ru]
14 + > Sent: Thursday, May 04, 2006 2:49 PM
15 + > To: linux-scsi@vger.kernel.org; Kolli, Neela; Mukker, Atul;
16 + > Ju, Seokmann; Bagalkote, Sreenivas;
17 + > James.Bottomley@SteelEye.com; devel@openvz.org
18 + > Subject: megaraid_mbox: garbage in file
19 + > 
20 + > Hello all,
21 + > 
22 + > I've investigated customers claim on the unstable work of
23 + > their node and found a
24 + > strange effect: reading from some files leads to the
25 + > "attempt to access beyond end of device" messages.
26 + > 
27 + > I've checked filesystem, memory on the node, motherboard BIOS
28 + > version, but it
29 + > does not help and issue still has been reproduced by simple
30 + > file reading.
31 + > 
32 + > Reproducer is simple:
33 + > 
34 + > echo 0xffffffff >/proc/sys/dev/scsi/logging_level ;
35 + > cat /vz/private/101/root/etc/ld.so.cache >/tmp/ttt ;
36 + > echo 0 >/proc/sys/dev/scsi/logging
37 + > 
38 + > It leads to the following messages in dmesg
39 + > 
40 + > sd_init_command: disk=sda, block=871769260, count=26
41 + > sda : block=871769260
42 + > sda : reading 26/26 512 byte blocks.
43 + > scsi_add_timer: scmd: f79ed980, time: 7500, (c02b1420)
44 + > sd 0:1:0:0: send 0xf79ed980 sd 0:1:0:0:
45 + > command: Read (10): 28 00 33 f6 24 ac 00 00 1a 00
46 + > buffer = 0xf7cfb540, bufflen = 13312, done = 0xc0366b40,
47 + > queuecommand 0xc0344010
48 + > leaving scsi_dispatch_cmnd()
49 + > scsi_delete_timer: scmd: f79ed980, rtn: 1
50 + > sd 0:1:0:0: done 0xf79ed980 SUCCESS 0 sd 0:1:0:0:
51 + > command: Read (10): 28 00 33 f6 24 ac 00 00 1a 00
52 + > scsi host busy 1 failed 0
53 + > sd 0:1:0:0: Notifying upper driver of completion (result 0)
54 + > sd_rw_intr: sda: res=0x0
55 + > 26 sectors total, 13312 bytes done.
56 + > use_sg is 4
57 + > attempt to access beyond end of device
58 + > sda6: rw=0, want=1044134458, limit=951401367
59 + > Buffer I/O error on device sda6, logical block 522067228
60 + > attempt to access beyond end of device
61 + 
62 + 2. When an INQUIRY with the EVPD bit set is issued to the MegaRAID
63 + controller, system memory gets corrupted.
64 + Root Cause: MegaRAID F/W handles the INQUIRY with the EVPD bit set
65 + incorrectly.
66 + Fix: MegaRAID F/W has fixed the problem and is in the process of
67 + being released. Meanwhile, the driver will filter out such requests.
68 + 
69 + 3. One member in a data structure of the driver leads to an
70 + alignment issue on 64-bit platforms.
71 + Customer reported a "kernel unaligned access address" issue when an
72 + application communicates with the MegaRAID HBA driver.
73 + Root Cause: in the uioc_t structure, one member was misaligned, which
74 + led the system to display the error message.
75 + Fix: A patch was submitted to the community by the following person.
76 + 
77 + > -----Original Message-----
78 + > From: linux-scsi-owner@vger.kernel.org
79 + > [mailto:linux-scsi-owner@vger.kernel.org] On Behalf Of Sakurai Hiroomi
80 + > Sent: Wednesday, July 12, 2006 4:20 AM
81 + > To: linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org
82 + > Subject: Re: Help: strange messages from kernel on IA64 platform
83 + > 
84 + > Hi,
85 + > 
86 + > I saw same message.
87 + > 
88 + > When GAM(Global Array Manager) is started, The following
89 + > message output.
90 + > kernel: kernel unaligned access to 0xe0000001fe1080d4,
91 + > ip=0xa000000200053371
92 + > 
93 + > The uioc structure used by ioctl is defined by packed,
94 + > the allignment of each member are disturbed.
95 + > In a 64 bit structure, the allignment of member doesn't fit 64 bit
96 + > boundary. this causes this messages.
97 + > In a 32 bit structure, we don't see the message because the allinment
98 + > of member fit 32 bit boundary even if packed is specified.
99 + > 
100 + > patch
101 + > I Add 32 bit dummy member to fit 64 bit boundary. I tested.
102 + > We confirmed this patch fix the problem by IA64 server.
103 + > 
104 + > **************************************************************
105 + > ****************
106 + > --- linux-2.6.9/drivers/scsi/megaraid/megaraid_ioctl.h.orig
107 + > 2006-04-03 17:13:03.000000000 +0900
108 + > +++ linux-2.6.9/drivers/scsi/megaraid/megaraid_ioctl.h
109 + > 2006-04-03 17:14:09.000000000 +0900
110 + > @@ -132,6 +132,10 @@
111 + > /* Driver Data: */
112 + > void __user * user_data;
113 + > uint32_t user_data_len;
114 + > +
115 + > + /* 64bit alignment */
116 + > + uint32_t pad_0xBC;
117 + > +
118 + > mraid_passthru_t __user *user_pthru;
119 + > 
120 + > mraid_passthru_t *pthru32;
121 + > **************************************************************
122 + > ****************
123 + 
1 124 Release Date : Mon Apr 11 12:27:22 EST 2006 - Seokmann Ju <sju@lsil.com>
2 125 Current Version : 2.20.4.8 (scsi module), 2.20.2.6 (cmm module)
3 126 Older Version : 2.20.4.7 (scsi module), 2.20.2.6 (cmm module)
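To make the quoted fix concrete, here is a self-contained userspace C sketch of the failure mode. The reduced structs are hypothetical (head[] stands in for everything before the members shown in the hunk, and uint64_t stands in for the user-space pointers, so offsets differ from the real uioc_t); the point is only how one 32-bit pad moves the following 8-byte member back onto an 8-byte boundary inside a packed layout.

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical reduced tail of the packed ioctl structure. */
struct uioc_old {
	uint8_t  head[16];
	uint64_t user_data;	/* stand-in for void __user * */
	uint32_t user_data_len;
	uint64_t user_pthru;	/* lands on a 4-byte boundary */
} __attribute__((packed));

struct uioc_new {
	uint8_t  head[16];
	uint64_t user_data;
	uint32_t user_data_len;
	uint32_t pad;		/* the added 32-bit dummy member */
	uint64_t user_pthru;	/* back on an 8-byte boundary */
} __attribute__((packed));

int main(void)
{
	printf("old user_pthru offset: %zu (mod 8 = %zu)\n",
	       offsetof(struct uioc_old, user_pthru),
	       offsetof(struct uioc_old, user_pthru) % 8);
	printf("new user_pthru offset: %zu (mod 8 = %zu)\n",
	       offsetof(struct uioc_new, user_pthru),
	       offsetof(struct uioc_new, user_pthru) % 8);
	return 0;
}

Built with gcc, this prints offset 28 (mod 8 = 4) for the old layout and 32 (mod 8 = 0) for the padded one, which is exactly the boundary condition the IA64 unaligned-access message complains about.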
+2 -1
arch/ia64/hp/sim/simscsi.c
··· 244 244 245 245 if (scatterlen == 0) 246 246 memcpy(sc->request_buffer, buf, len); 247 - else for (slp = (struct scatterlist *)sc->request_buffer; scatterlen-- > 0 && len > 0; slp++) { 247 + else for (slp = (struct scatterlist *)sc->request_buffer; 248 + scatterlen-- > 0 && len > 0; slp++) { 248 249 unsigned thislen = min(len, slp->length); 249 250 250 251 memcpy(page_address(slp->page) + slp->offset, buf, thislen);
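The simscsi hunk above is purely a line re-wrap of the copy loop. For readers skimming the diff, a toy userspace analogue of what that loop does, distributing a flat buffer across scatter-gather segments and clamping each copy to min(len, segment length); struct sg_seg and copy_to_sg are illustrative names, not the kernel API:

#include <stdio.h>
#include <string.h>

struct sg_seg { char *addr; unsigned len; };

static void copy_to_sg(struct sg_seg *sg, int nseg,
		       const char *buf, unsigned len)
{
	for (; nseg-- > 0 && len > 0; sg++) {
		unsigned thislen = len < sg->len ? len : sg->len;

		memcpy(sg->addr, buf, thislen);
		len -= thislen;
		buf += thislen;
	}
}

int main(void)
{
	char a[4], b[4];
	struct sg_seg sg[] = { { a, sizeof(a) }, { b, sizeof(b) } };

	copy_to_sg(sg, 2, "12345678", 8);
	printf("%.4s%.4s\n", a, b);	/* prints 12345678 */
	return 0;
}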
+2 -20
drivers/infiniband/ulp/iser/iscsi_iser.c
··· 378 378 return iser_conn_set_full_featured_mode(conn); 379 379 } 380 380 381 - static void 382 - iscsi_iser_conn_terminate(struct iscsi_conn *conn) 383 - { 384 - struct iscsi_iser_conn *iser_conn = conn->dd_data; 385 - struct iser_conn *ib_conn = iser_conn->ib_conn; 386 - 387 - BUG_ON(!ib_conn); 388 - /* starts conn teardown process, waits until all previously * 389 - * posted buffers get flushed, deallocates all conn resources */ 390 - iser_conn_terminate(ib_conn); 391 - iser_conn->ib_conn = NULL; 392 - conn->recv_lock = NULL; 393 - } 394 - 395 - 396 381 static struct iscsi_transport iscsi_iser_transport; 397 382 398 383 static struct iscsi_cls_session * ··· 540 555 static void 541 556 iscsi_iser_ep_disconnect(__u64 ep_handle) 542 557 { 543 - struct iser_conn *ib_conn = iscsi_iser_ib_conn_lookup(ep_handle); 558 + struct iser_conn *ib_conn; 544 559 560 + ib_conn = iscsi_iser_ib_conn_lookup(ep_handle); 545 561 if (!ib_conn) 546 562 return; 547 563 548 564 iser_err("ib conn %p state %d\n",ib_conn, ib_conn->state); 549 - 550 565 iser_conn_terminate(ib_conn); 551 566 } 552 567 ··· 599 614 .get_session_param = iscsi_session_get_param, 600 615 .start_conn = iscsi_iser_conn_start, 601 616 .stop_conn = iscsi_conn_stop, 602 - /* these are called as part of conn recovery */ 603 - .suspend_conn_recv = NULL, /* FIXME is/how this relvant to iser? */ 604 - .terminate_conn = iscsi_iser_conn_terminate, 605 617 /* IO */ 606 618 .send_pdu = iscsi_conn_send_pdu, 607 619 .get_stats = iscsi_iser_conn_get_stats,
-1
drivers/message/fusion/mptbase.h
··· 640 640 struct work_struct fc_setup_reset_work; 641 641 struct list_head fc_rports; 642 642 spinlock_t fc_rescan_work_lock; 643 - int fc_rescan_work_count; 644 643 struct work_struct fc_rescan_work; 645 644 char fc_rescan_work_q_name[KOBJ_NAME_LEN]; 646 645 struct workqueue_struct *fc_rescan_work_q;
+41 -57
drivers/message/fusion/mptfc.c
··· 669 669 * if still doing discovery, 670 670 * hang loose a while until finished 671 671 */ 672 - if (pp0dest->PortState == MPI_FCPORTPAGE0_PORTSTATE_UNKNOWN) { 672 + if ((pp0dest->PortState == MPI_FCPORTPAGE0_PORTSTATE_UNKNOWN) || 673 + (pp0dest->PortState == MPI_FCPORTPAGE0_PORTSTATE_ONLINE && 674 + (pp0dest->Flags & MPI_FCPORTPAGE0_FLAGS_ATTACH_TYPE_MASK) 675 + == MPI_FCPORTPAGE0_FLAGS_ATTACH_NO_INIT)) { 673 676 if (count-- > 0) { 674 677 msleep(100); 675 678 goto try_again; ··· 898 895 { 899 896 MPT_ADAPTER *ioc = (MPT_ADAPTER *)arg; 900 897 int ii; 901 - int work_to_do; 902 898 u64 pn; 903 - unsigned long flags; 904 899 struct mptfc_rport_info *ri; 905 900 906 - do { 907 - /* start by tagging all ports as missing */ 908 - list_for_each_entry(ri, &ioc->fc_rports, list) { 909 - if (ri->flags & MPT_RPORT_INFO_FLAGS_REGISTERED) { 910 - ri->flags |= MPT_RPORT_INFO_FLAGS_MISSING; 911 - } 901 + /* start by tagging all ports as missing */ 902 + list_for_each_entry(ri, &ioc->fc_rports, list) { 903 + if (ri->flags & MPT_RPORT_INFO_FLAGS_REGISTERED) { 904 + ri->flags |= MPT_RPORT_INFO_FLAGS_MISSING; 912 905 } 906 + } 913 907 914 - /* 915 - * now rescan devices known to adapter, 916 - * will reregister existing rports 917 - */ 918 - for (ii=0; ii < ioc->facts.NumberOfPorts; ii++) { 919 - (void) mptfc_GetFcPortPage0(ioc, ii); 920 - mptfc_init_host_attr(ioc,ii); /* refresh */ 921 - mptfc_GetFcDevPage0(ioc,ii,mptfc_register_dev); 908 + /* 909 + * now rescan devices known to adapter, 910 + * will reregister existing rports 911 + */ 912 + for (ii=0; ii < ioc->facts.NumberOfPorts; ii++) { 913 + (void) mptfc_GetFcPortPage0(ioc, ii); 914 + mptfc_init_host_attr(ioc, ii); /* refresh */ 915 + mptfc_GetFcDevPage0(ioc, ii, mptfc_register_dev); 916 + } 917 + 918 + /* delete devices still missing */ 919 + list_for_each_entry(ri, &ioc->fc_rports, list) { 920 + /* if newly missing, delete it */ 921 + if (ri->flags & MPT_RPORT_INFO_FLAGS_MISSING) { 922 + 923 + ri->flags &= ~(MPT_RPORT_INFO_FLAGS_REGISTERED| 924 + MPT_RPORT_INFO_FLAGS_MISSING); 925 + fc_remote_port_delete(ri->rport); /* won't sleep */ 926 + ri->rport = NULL; 927 + 928 + pn = (u64)ri->pg0.WWPN.High << 32 | 929 + (u64)ri->pg0.WWPN.Low; 930 + dfcprintk ((MYIOC_s_INFO_FMT 931 + "mptfc_rescan.%d: %llx deleted\n", 932 + ioc->name, 933 + ioc->sh->host_no, 934 + (unsigned long long)pn)); 922 935 } 923 - 924 - /* delete devices still missing */ 925 - list_for_each_entry(ri, &ioc->fc_rports, list) { 926 - /* if newly missing, delete it */ 927 - if (ri->flags & MPT_RPORT_INFO_FLAGS_MISSING) { 928 - 929 - ri->flags &= ~(MPT_RPORT_INFO_FLAGS_REGISTERED| 930 - MPT_RPORT_INFO_FLAGS_MISSING); 931 - fc_remote_port_delete(ri->rport); /* won't sleep */ 932 - ri->rport = NULL; 933 - 934 - pn = (u64)ri->pg0.WWPN.High << 32 | 935 - (u64)ri->pg0.WWPN.Low; 936 - dfcprintk ((MYIOC_s_INFO_FMT 937 - "mptfc_rescan.%d: %llx deleted\n", 938 - ioc->name, 939 - ioc->sh->host_no, 940 - (unsigned long long)pn)); 941 - } 942 - } 943 - 944 - /* 945 - * allow multiple passes as target state 946 - * might have changed during scan 947 - */ 948 - spin_lock_irqsave(&ioc->fc_rescan_work_lock, flags); 949 - if (ioc->fc_rescan_work_count > 2) /* only need one more */ 950 - ioc->fc_rescan_work_count = 2; 951 - work_to_do = --ioc->fc_rescan_work_count; 952 - spin_unlock_irqrestore(&ioc->fc_rescan_work_lock, flags); 953 - } while (work_to_do); 936 + } 954 937 } 955 938 956 939 static int ··· 1148 1159 * by doing it via the workqueue, some locking is eliminated 1149 1160 */ 1150 1161 1151 - 
ioc->fc_rescan_work_count = 1; 1152 1162 queue_work(ioc->fc_rescan_work_q, &ioc->fc_rescan_work); 1153 1163 flush_workqueue(ioc->fc_rescan_work_q); 1154 1164 ··· 1190 1202 case MPI_EVENT_RESCAN: 1191 1203 spin_lock_irqsave(&ioc->fc_rescan_work_lock, flags); 1192 1204 if (ioc->fc_rescan_work_q) { 1193 - if (ioc->fc_rescan_work_count++ == 0) { 1194 - queue_work(ioc->fc_rescan_work_q, 1195 - &ioc->fc_rescan_work); 1196 - } 1205 + queue_work(ioc->fc_rescan_work_q, 1206 + &ioc->fc_rescan_work); 1197 1207 } 1198 1208 spin_unlock_irqrestore(&ioc->fc_rescan_work_lock, flags); 1199 1209 break; ··· 1234 1248 mptfc_SetFcPortPage1_defaults(ioc); 1235 1249 spin_lock_irqsave(&ioc->fc_rescan_work_lock, flags); 1236 1250 if (ioc->fc_rescan_work_q) { 1237 - if (ioc->fc_rescan_work_count++ == 0) { 1238 - queue_work(ioc->fc_rescan_work_q, 1239 - &ioc->fc_rescan_work); 1240 - } 1251 + queue_work(ioc->fc_rescan_work_q, 1252 + &ioc->fc_rescan_work); 1241 1253 } 1242 1254 spin_unlock_irqrestore(&ioc->fc_rescan_work_lock, flags); 1243 1255 }
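Why fc_rescan_work_count could be deleted: queue_work() returns nonzero only when it actually enqueued the work item and does nothing if the item is still pending, so a single queued rescan already coalesces any number of RESCAN events; the hand-rolled counter and the do/while re-scan loop duplicated that guarantee. The event-side pattern, condensed from the diff above:

/* Condensed sketch of the new event handler body; relies on
 * queue_work() silently ignoring an already-pending work item. */
spin_lock_irqsave(&ioc->fc_rescan_work_lock, flags);
if (ioc->fc_rescan_work_q)	/* NULL once the queue is torn down */
	queue_work(ioc->fc_rescan_work_q, &ioc->fc_rescan_work);
spin_unlock_irqrestore(&ioc->fc_rescan_work_lock, flags);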
+111 -9
drivers/s390/scsi/zfcp_aux.c
··· 112 112 printk("\n"); 113 113 } 114 114 115 + 116 + /****************************************************************/ 117 + /****** Functions to handle the request ID hash table ********/ 118 + /****************************************************************/ 119 + 120 + #define ZFCP_LOG_AREA ZFCP_LOG_AREA_FSF 121 + 122 + static int zfcp_reqlist_init(struct zfcp_adapter *adapter) 123 + { 124 + int i; 125 + 126 + adapter->req_list = kcalloc(REQUEST_LIST_SIZE, sizeof(struct list_head), 127 + GFP_KERNEL); 128 + 129 + if (!adapter->req_list) 130 + return -ENOMEM; 131 + 132 + for (i=0; i<REQUEST_LIST_SIZE; i++) 133 + INIT_LIST_HEAD(&adapter->req_list[i]); 134 + 135 + return 0; 136 + } 137 + 138 + static void zfcp_reqlist_free(struct zfcp_adapter *adapter) 139 + { 140 + struct zfcp_fsf_req *request, *tmp; 141 + unsigned int i; 142 + 143 + for (i=0; i<REQUEST_LIST_SIZE; i++) { 144 + if (list_empty(&adapter->req_list[i])) 145 + continue; 146 + 147 + list_for_each_entry_safe(request, tmp, 148 + &adapter->req_list[i], list) 149 + list_del(&request->list); 150 + } 151 + 152 + kfree(adapter->req_list); 153 + } 154 + 155 + void zfcp_reqlist_add(struct zfcp_adapter *adapter, 156 + struct zfcp_fsf_req *fsf_req) 157 + { 158 + unsigned int i; 159 + 160 + i = fsf_req->req_id % REQUEST_LIST_SIZE; 161 + list_add_tail(&fsf_req->list, &adapter->req_list[i]); 162 + } 163 + 164 + void zfcp_reqlist_remove(struct zfcp_adapter *adapter, unsigned long req_id) 165 + { 166 + struct zfcp_fsf_req *request, *tmp; 167 + unsigned int i, counter; 168 + u64 dbg_tmp[2]; 169 + 170 + i = req_id % REQUEST_LIST_SIZE; 171 + BUG_ON(list_empty(&adapter->req_list[i])); 172 + 173 + counter = 0; 174 + list_for_each_entry_safe(request, tmp, &adapter->req_list[i], list) { 175 + if (request->req_id == req_id) { 176 + dbg_tmp[0] = (u64) atomic_read(&adapter->reqs_active); 177 + dbg_tmp[1] = (u64) counter; 178 + debug_event(adapter->erp_dbf, 4, (void *) dbg_tmp, 16); 179 + list_del(&request->list); 180 + break; 181 + } 182 + counter++; 183 + } 184 + } 185 + 186 + struct zfcp_fsf_req *zfcp_reqlist_ismember(struct zfcp_adapter *adapter, 187 + unsigned long req_id) 188 + { 189 + struct zfcp_fsf_req *request, *tmp; 190 + unsigned int i; 191 + 192 + i = req_id % REQUEST_LIST_SIZE; 193 + 194 + list_for_each_entry_safe(request, tmp, &adapter->req_list[i], list) 195 + if (request->req_id == req_id) 196 + return request; 197 + 198 + return NULL; 199 + } 200 + 201 + int zfcp_reqlist_isempty(struct zfcp_adapter *adapter) 202 + { 203 + unsigned int i; 204 + 205 + for (i=0; i<REQUEST_LIST_SIZE; i++) 206 + if (!list_empty(&adapter->req_list[i])) 207 + return 0; 208 + 209 + return 1; 210 + } 211 + 212 + #undef ZFCP_LOG_AREA 213 + 115 214 /****************************************************************/ 116 215 /************** Uncategorised Functions *************************/ 117 216 /****************************************************************/ ··· 1060 961 INIT_LIST_HEAD(&adapter->port_remove_lh); 1061 962 1062 963 /* initialize list of fsf requests */ 1063 - spin_lock_init(&adapter->fsf_req_list_lock); 1064 - INIT_LIST_HEAD(&adapter->fsf_req_list_head); 964 + spin_lock_init(&adapter->req_list_lock); 965 + retval = zfcp_reqlist_init(adapter); 966 + if (retval) { 967 + ZFCP_LOG_INFO("request list initialization failed\n"); 968 + goto failed_low_mem_buffers; 969 + } 1065 970 1066 971 /* initialize debug locks */ 1067 972 ··· 1144 1041 * !0 - struct zfcp_adapter data structure could not be removed 1145 1042 * (e.g. 
still used) 1146 1043 * locks: adapter list write lock is assumed to be held by caller 1147 - * adapter->fsf_req_list_lock is taken and released within this 1148 - * function and must not be held on entry 1149 1044 */ 1150 1045 void 1151 1046 zfcp_adapter_dequeue(struct zfcp_adapter *adapter) ··· 1155 1054 zfcp_sysfs_adapter_remove_files(&adapter->ccw_device->dev); 1156 1055 dev_set_drvdata(&adapter->ccw_device->dev, NULL); 1157 1056 /* sanity check: no pending FSF requests */ 1158 - spin_lock_irqsave(&adapter->fsf_req_list_lock, flags); 1159 - retval = !list_empty(&adapter->fsf_req_list_head); 1160 - spin_unlock_irqrestore(&adapter->fsf_req_list_lock, flags); 1161 - if (retval) { 1057 + spin_lock_irqsave(&adapter->req_list_lock, flags); 1058 + retval = zfcp_reqlist_isempty(adapter); 1059 + spin_unlock_irqrestore(&adapter->req_list_lock, flags); 1060 + if (!retval) { 1162 1061 ZFCP_LOG_NORMAL("bug: adapter %s (%p) still in use, " 1163 1062 "%i requests outstanding\n", 1164 1063 zfcp_get_busid_by_adapter(adapter), adapter, 1165 - atomic_read(&adapter->fsf_reqs_active)); 1064 + atomic_read(&adapter->reqs_active)); 1166 1065 retval = -EBUSY; 1167 1066 goto out; 1168 1067 } ··· 1188 1087 zfcp_free_low_mem_buffers(adapter); 1189 1088 /* free memory of adapter data structure and queues */ 1190 1089 zfcp_qdio_free_queues(adapter); 1090 + zfcp_reqlist_free(adapter); 1191 1091 kfree(adapter->fc_stats); 1192 1092 kfree(adapter->stats_reset_data); 1193 1093 ZFCP_LOG_TRACE("freeing adapter structure\n");
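The new request list is a fixed-size hash table keyed by the low bits of the request ID (REQUEST_LIST_SIZE is 128, so the modulo is cheap). Below is a self-contained userspace analogue of the add/lookup cycle; the toy struct req and singly linked buckets are illustrative stand-ins for the zfcp_fsf_req list_heads, not driver code:

#include <stdio.h>

#define NBUCKETS 128	/* mirrors REQUEST_LIST_SIZE */

struct req {
	unsigned long id;
	struct req *next;	/* singly linked for brevity */
};

static struct req *table[NBUCKETS];

static void req_add(struct req *r)
{
	unsigned int i = r->id % NBUCKETS;

	r->next = table[i];
	table[i] = r;
}

static struct req *req_lookup(unsigned long id)
{
	struct req *r;

	for (r = table[id % NBUCKETS]; r; r = r->next)
		if (r->id == id)
			return r;
	return NULL;	/* unknown handle: caller can trigger recovery */
}

int main(void)
{
	struct req a = { .id = 7 }, b = { .id = 7 + NBUCKETS };

	req_add(&a);
	req_add(&b);	/* collides into the same bucket as a */
	printf("lookup(7)   -> %p (a = %p)\n", (void *)req_lookup(7), (void *)&a);
	printf("lookup(135) -> %p (b = %p)\n", (void *)req_lookup(135), (void *)&b);
	printf("lookup(999) -> %p\n", (void *)req_lookup(999));
	return 0;
}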
+5
drivers/s390/scsi/zfcp_ccw.c
··· 164 164 retval = zfcp_adapter_scsi_register(adapter); 165 165 if (retval) 166 166 goto out_scsi_register; 167 + 168 + /* initialize request counter */ 169 + BUG_ON(!zfcp_reqlist_isempty(adapter)); 170 + adapter->req_no = 0; 171 + 167 172 zfcp_erp_modify_adapter_status(adapter, ZFCP_STATUS_COMMON_RUNNING, 168 173 ZFCP_SET); 169 174 zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED);
+8 -7
drivers/s390/scsi/zfcp_def.h
··· 52 52 /********************* GENERAL DEFINES *********************************/ 53 53 54 54 /* zfcp version number, it consists of major, minor, and patch-level number */ 55 - #define ZFCP_VERSION "4.7.0" 55 + #define ZFCP_VERSION "4.8.0" 56 56 57 57 /** 58 58 * zfcp_sg_to_address - determine kernel address from struct scatterlist ··· 80 80 #define REQUEST_LIST_SIZE 128 81 81 82 82 /********************* SCSI SPECIFIC DEFINES *********************************/ 83 - #define ZFCP_SCSI_ER_TIMEOUT (100*HZ) 83 + #define ZFCP_SCSI_ER_TIMEOUT (10*HZ) 84 84 85 85 /********************* CIO/QDIO SPECIFIC DEFINES *****************************/ 86 86 ··· 886 886 struct list_head port_remove_lh; /* head of ports to be 887 887 removed */ 888 888 u32 ports; /* number of remote ports */ 889 - struct timer_list scsi_er_timer; /* SCSI err recovery watch */ 890 - struct list_head fsf_req_list_head; /* head of FSF req list */ 891 - spinlock_t fsf_req_list_lock; /* lock for ops on list of 892 - FSF requests */ 893 - atomic_t fsf_reqs_active; /* # active FSF reqs */ 889 + struct timer_list scsi_er_timer; /* SCSI err recovery watch */ 890 + atomic_t reqs_active; /* # active FSF reqs */ 891 + unsigned long req_no; /* unique FSF req number */ 892 + struct list_head *req_list; /* list of pending reqs */ 893 + spinlock_t req_list_lock; /* request list lock */ 894 894 struct zfcp_qdio_queue request_queue; /* request queue */ 895 895 u32 fsf_req_seq_no; /* FSF cmnd seq number */ 896 896 wait_queue_head_t request_wq; /* can be used to wait for ··· 986 986 /* FSF request */ 987 987 struct zfcp_fsf_req { 988 988 struct list_head list; /* list of FSF requests */ 989 + unsigned long req_id; /* unique request ID */ 989 990 struct zfcp_adapter *adapter; /* adapter request belongs to */ 990 991 u8 sbal_number; /* nr of SBALs free for use */ 991 992 u8 sbal_first; /* first SBAL for this request */
+73 -137
drivers/s390/scsi/zfcp_erp.c
··· 64 64 static int zfcp_erp_adapter_strategy(struct zfcp_erp_action *); 65 65 static int zfcp_erp_adapter_strategy_generic(struct zfcp_erp_action *, int); 66 66 static int zfcp_erp_adapter_strategy_close(struct zfcp_erp_action *); 67 - static int zfcp_erp_adapter_strategy_close_qdio(struct zfcp_erp_action *); 68 - static int zfcp_erp_adapter_strategy_close_fsf(struct zfcp_erp_action *); 67 + static void zfcp_erp_adapter_strategy_close_qdio(struct zfcp_erp_action *); 68 + static void zfcp_erp_adapter_strategy_close_fsf(struct zfcp_erp_action *); 69 69 static int zfcp_erp_adapter_strategy_open(struct zfcp_erp_action *); 70 70 static int zfcp_erp_adapter_strategy_open_qdio(struct zfcp_erp_action *); 71 71 static int zfcp_erp_adapter_strategy_open_fsf(struct zfcp_erp_action *); ··· 93 93 static int zfcp_erp_unit_strategy_close(struct zfcp_erp_action *); 94 94 static int zfcp_erp_unit_strategy_open(struct zfcp_erp_action *); 95 95 96 - static int zfcp_erp_action_dismiss_adapter(struct zfcp_adapter *); 97 - static int zfcp_erp_action_dismiss_port(struct zfcp_port *); 98 - static int zfcp_erp_action_dismiss_unit(struct zfcp_unit *); 99 - static int zfcp_erp_action_dismiss(struct zfcp_erp_action *); 96 + static void zfcp_erp_action_dismiss_port(struct zfcp_port *); 97 + static void zfcp_erp_action_dismiss_unit(struct zfcp_unit *); 98 + static void zfcp_erp_action_dismiss(struct zfcp_erp_action *); 100 99 101 100 static int zfcp_erp_action_enqueue(int, struct zfcp_adapter *, 102 101 struct zfcp_port *, struct zfcp_unit *); ··· 134 135 zfcp_erp_adapter_reopen(adapter, 0); 135 136 } 136 137 137 - /* 138 - * function: zfcp_fsf_scsi_er_timeout_handler 138 + /** 139 + * zfcp_fsf_scsi_er_timeout_handler - timeout handler for scsi eh tasks 139 140 * 140 - * purpose: This function needs to be called whenever a SCSI error recovery 141 - * action (abort/reset) does not return. 142 - * Re-opening the adapter means that the command can be returned 143 - * by zfcp (it is guarranteed that it does not return via the 144 - * adapter anymore). The buffer can then be used again. 145 - * 146 - * returns: sod all 141 + * This function needs to be called whenever a SCSI error recovery 142 + * action (abort/reset) does not return. Re-opening the adapter means 143 + * that the abort/reset command can be returned by zfcp. It won't complete 144 + * via the adapter anymore (because qdio queues are closed). If ERP is 145 + * already running on this adapter it will be stopped. 147 146 */ 148 - void 149 - zfcp_fsf_scsi_er_timeout_handler(unsigned long data) 147 + void zfcp_fsf_scsi_er_timeout_handler(unsigned long data) 150 148 { 151 149 struct zfcp_adapter *adapter = (struct zfcp_adapter *) data; 150 + unsigned long flags; 152 151 153 152 ZFCP_LOG_NORMAL("warning: SCSI error recovery timed out. 
" 154 153 "Restarting all operations on the adapter %s\n", 155 154 zfcp_get_busid_by_adapter(adapter)); 156 155 debug_text_event(adapter->erp_dbf, 1, "eh_lmem_tout"); 157 - zfcp_erp_adapter_reopen(adapter, 0); 158 156 159 - return; 157 + write_lock_irqsave(&adapter->erp_lock, flags); 158 + if (atomic_test_mask(ZFCP_STATUS_ADAPTER_ERP_PENDING, 159 + &adapter->status)) { 160 + zfcp_erp_modify_adapter_status(adapter, 161 + ZFCP_STATUS_COMMON_UNBLOCKED|ZFCP_STATUS_COMMON_OPEN, 162 + ZFCP_CLEAR); 163 + zfcp_erp_action_dismiss_adapter(adapter); 164 + write_unlock_irqrestore(&adapter->erp_lock, flags); 165 + /* dismiss all pending requests including requests for ERP */ 166 + zfcp_fsf_req_dismiss_all(adapter); 167 + adapter->fsf_req_seq_no = 0; 168 + } else 169 + write_unlock_irqrestore(&adapter->erp_lock, flags); 170 + zfcp_erp_adapter_reopen(adapter, 0); 160 171 } 161 172 162 173 /* ··· 679 670 return retval; 680 671 } 681 672 682 - /* 683 - * function: 684 - * 685 - * purpose: disable I/O, 686 - * return any open requests and clean them up, 687 - * aim: no pending and incoming I/O 688 - * 689 - * returns: 673 + /** 674 + * zfcp_erp_adapter_block - mark adapter as blocked, block scsi requests 690 675 */ 691 - static void 692 - zfcp_erp_adapter_block(struct zfcp_adapter *adapter, int clear_mask) 676 + static void zfcp_erp_adapter_block(struct zfcp_adapter *adapter, int clear_mask) 693 677 { 694 678 debug_text_event(adapter->erp_dbf, 6, "a_bl"); 695 679 zfcp_erp_modify_adapter_status(adapter, ··· 690 688 clear_mask, ZFCP_CLEAR); 691 689 } 692 690 693 - /* 694 - * function: 695 - * 696 - * purpose: enable I/O 697 - * 698 - * returns: 691 + /** 692 + * zfcp_erp_adapter_unblock - mark adapter as unblocked, allow scsi requests 699 693 */ 700 - static void 701 - zfcp_erp_adapter_unblock(struct zfcp_adapter *adapter) 694 + static void zfcp_erp_adapter_unblock(struct zfcp_adapter *adapter) 702 695 { 703 696 debug_text_event(adapter->erp_dbf, 6, "a_ubl"); 704 697 atomic_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status); ··· 845 848 struct zfcp_adapter *adapter = erp_action->adapter; 846 849 847 850 if (erp_action->fsf_req) { 848 - /* take lock to ensure that request is not being deleted meanwhile */ 849 - spin_lock(&adapter->fsf_req_list_lock); 850 - /* check whether fsf req does still exist */ 851 - list_for_each_entry(fsf_req, &adapter->fsf_req_list_head, list) 852 - if (fsf_req == erp_action->fsf_req) 853 - break; 854 - if (fsf_req && (fsf_req->erp_action == erp_action)) { 851 + /* take lock to ensure that request is not deleted meanwhile */ 852 + spin_lock(&adapter->req_list_lock); 853 + if ((!zfcp_reqlist_ismember(adapter, 854 + erp_action->fsf_req->req_id)) && 855 + (fsf_req->erp_action == erp_action)) { 855 856 /* fsf_req still exists */ 856 857 debug_text_event(adapter->erp_dbf, 3, "a_ca_req"); 857 858 debug_event(adapter->erp_dbf, 3, &fsf_req, 858 859 sizeof (unsigned long)); 859 - /* dismiss fsf_req of timed out or dismissed erp_action */ 860 + /* dismiss fsf_req of timed out/dismissed erp_action */ 860 861 if (erp_action->status & (ZFCP_STATUS_ERP_DISMISSED | 861 862 ZFCP_STATUS_ERP_TIMEDOUT)) { 862 863 debug_text_event(adapter->erp_dbf, 3, ··· 887 892 */ 888 893 erp_action->fsf_req = NULL; 889 894 } 890 - spin_unlock(&adapter->fsf_req_list_lock); 895 + spin_unlock(&adapter->req_list_lock); 891 896 } else 892 897 debug_text_event(adapter->erp_dbf, 3, "a_ca_noreq"); 893 898 894 899 return retval; 895 900 } 896 901 897 - /* 898 - * purpose: generic handler for asynchronous events related 
to erp_action events 899 - * (normal completion, time-out, dismissing, retry after 900 - * low memory condition) 902 + /** 903 + * zfcp_erp_async_handler_nolock - complete erp_action 901 904 * 902 - * note: deletion of timer is not required (e.g. in case of a time-out), 903 - * but a second try does no harm, 904 - * we leave it in here to allow for greater simplification 905 - * 906 - * returns: 0 - there was an action to handle 907 - * !0 - otherwise 905 + * Used for normal completion, time-out, dismissal and failure after 906 + * low memory condition. 908 907 */ 909 - static int 910 - zfcp_erp_async_handler_nolock(struct zfcp_erp_action *erp_action, 911 - unsigned long set_mask) 908 + static void zfcp_erp_async_handler_nolock(struct zfcp_erp_action *erp_action, 909 + unsigned long set_mask) 912 910 { 913 - int retval; 914 911 struct zfcp_adapter *adapter = erp_action->adapter; 915 912 916 913 if (zfcp_erp_action_exists(erp_action) == ZFCP_ERP_ACTION_RUNNING) { ··· 913 926 del_timer(&erp_action->timer); 914 927 erp_action->status |= set_mask; 915 928 zfcp_erp_action_ready(erp_action); 916 - retval = 0; 917 929 } else { 918 930 /* action is ready or gone - nothing to do */ 919 931 debug_text_event(adapter->erp_dbf, 3, "a_asyh_gone"); 920 932 debug_event(adapter->erp_dbf, 3, &erp_action->action, 921 933 sizeof (int)); 922 - retval = 1; 923 934 } 924 - 925 - return retval; 926 935 } 927 936 928 - /* 929 - * purpose: generic handler for asynchronous events related to erp_action 930 - * events (normal completion, time-out, dismissing, retry after 931 - * low memory condition) 932 - * 933 - * note: deletion of timer is not required (e.g. in case of a time-out), 934 - * but a second try does no harm, 935 - * we leave it in here to allow for greater simplification 936 - * 937 - * returns: 0 - there was an action to handle 938 - * !0 - otherwise 937 + /** 938 + * zfcp_erp_async_handler - wrapper for erp_async_handler_nolock w/ locking 939 939 */ 940 - int 941 - zfcp_erp_async_handler(struct zfcp_erp_action *erp_action, 942 - unsigned long set_mask) 940 + void zfcp_erp_async_handler(struct zfcp_erp_action *erp_action, 941 + unsigned long set_mask) 943 942 { 944 943 struct zfcp_adapter *adapter = erp_action->adapter; 945 944 unsigned long flags; 946 - int retval; 947 945 948 946 write_lock_irqsave(&adapter->erp_lock, flags); 949 - retval = zfcp_erp_async_handler_nolock(erp_action, set_mask); 947 + zfcp_erp_async_handler_nolock(erp_action, set_mask); 950 948 write_unlock_irqrestore(&adapter->erp_lock, flags); 951 - 952 - return retval; 953 949 } 954 950 955 951 /* ··· 969 999 zfcp_erp_async_handler(erp_action, ZFCP_STATUS_ERP_TIMEDOUT); 970 1000 } 971 1001 972 - /* 973 - * purpose: is called for an erp_action which needs to be ended 974 - * though not being done, 975 - * this is usually required if an higher is generated, 976 - * action gets an appropriate flag and will be processed 977 - * accordingly 1002 + /** 1003 + * zfcp_erp_action_dismiss - dismiss an erp_action 978 1004 * 979 - * locks: erp_lock held (thus we need to call another handler variant) 1005 + * adapter->erp_lock must be held 1006 + * 1007 + * Dismissal of an erp_action is usually required if an erp_action of 1008 + * higher priority is generated. 
980 1009 */ 981 - static int 982 - zfcp_erp_action_dismiss(struct zfcp_erp_action *erp_action) 1010 + static void zfcp_erp_action_dismiss(struct zfcp_erp_action *erp_action) 983 1011 { 984 1012 struct zfcp_adapter *adapter = erp_action->adapter; 985 1013 ··· 985 1017 debug_event(adapter->erp_dbf, 2, &erp_action->action, sizeof (int)); 986 1018 987 1019 zfcp_erp_async_handler_nolock(erp_action, ZFCP_STATUS_ERP_DISMISSED); 988 - 989 - return 0; 990 1020 } 991 1021 992 1022 int ··· 2040 2074 return retval; 2041 2075 } 2042 2076 2043 - /* 2044 - * function: zfcp_qdio_cleanup 2045 - * 2046 - * purpose: cleans up QDIO operation for the specified adapter 2047 - * 2048 - * returns: 0 - successful cleanup 2049 - * !0 - failed cleanup 2077 + /** 2078 + * zfcp_erp_adapter_strategy_close_qdio - close qdio queues for an adapter 2050 2079 */ 2051 - int 2080 + static void 2052 2081 zfcp_erp_adapter_strategy_close_qdio(struct zfcp_erp_action *erp_action) 2053 2082 { 2054 - int retval = ZFCP_ERP_SUCCEEDED; 2055 2083 int first_used; 2056 2084 int used_count; 2057 2085 struct zfcp_adapter *adapter = erp_action->adapter; ··· 2054 2094 ZFCP_LOG_DEBUG("error: attempt to shut down inactive QDIO " 2055 2095 "queues on adapter %s\n", 2056 2096 zfcp_get_busid_by_adapter(adapter)); 2057 - retval = ZFCP_ERP_FAILED; 2058 - goto out; 2097 + return; 2059 2098 } 2060 2099 2061 2100 /* 2062 2101 * Get queue_lock and clear QDIOUP flag. Thus it's guaranteed that 2063 2102 * do_QDIO won't be called while qdio_shutdown is in progress. 2064 2103 */ 2065 - 2066 2104 write_lock_irq(&adapter->request_queue.queue_lock); 2067 2105 atomic_clear_mask(ZFCP_STATUS_ADAPTER_QDIOUP, &adapter->status); 2068 2106 write_unlock_irq(&adapter->request_queue.queue_lock); ··· 2092 2134 adapter->request_queue.free_index = 0; 2093 2135 atomic_set(&adapter->request_queue.free_count, 0); 2094 2136 adapter->request_queue.distance_from_int = 0; 2095 - out: 2096 - return retval; 2097 2137 } 2098 2138 2099 2139 static int ··· 2214 2258 "%s)\n", zfcp_get_busid_by_adapter(adapter)); 2215 2259 ret = ZFCP_ERP_FAILED; 2216 2260 } 2217 - if (!atomic_test_mask(ZFCP_STATUS_ADAPTER_XPORT_OK, &adapter->status)) { 2218 - ZFCP_LOG_INFO("error: exchange port data failed (adapter " 2261 + 2262 + /* don't treat as error for the sake of compatibility */ 2263 + if (!atomic_test_mask(ZFCP_STATUS_ADAPTER_XPORT_OK, &adapter->status)) 2264 + ZFCP_LOG_INFO("warning: exchange port data failed (adapter " 2219 2265 "%s\n", zfcp_get_busid_by_adapter(adapter)); 2220 - ret = ZFCP_ERP_FAILED; 2221 - } 2222 2266 2223 2267 return ret; 2224 2268 } ··· 2248 2292 return retval; 2249 2293 } 2250 2294 2251 - /* 2252 - * function: zfcp_fsf_cleanup 2253 - * 2254 - * purpose: cleanup FSF operation for specified adapter 2255 - * 2256 - * returns: 0 - FSF operation successfully cleaned up 2257 - * !0 - failed to cleanup FSF operation for this adapter 2295 + /** 2296 + * zfcp_erp_adapter_strategy_close_fsf - stop FSF operations for an adapter 2258 2297 */ 2259 - static int 2298 + static void 2260 2299 zfcp_erp_adapter_strategy_close_fsf(struct zfcp_erp_action *erp_action) 2261 2300 { 2262 - int retval = ZFCP_ERP_SUCCEEDED; 2263 2301 struct zfcp_adapter *adapter = erp_action->adapter; 2264 2302 2265 2303 /* ··· 2267 2317 /* all ports and units are closed */ 2268 2318 zfcp_erp_modify_adapter_status(adapter, 2269 2319 ZFCP_STATUS_COMMON_OPEN, ZFCP_CLEAR); 2270 - 2271 - return retval; 2272 2320 } 2273 2321 2274 2322 /* ··· 3241 3293 } 3242 3294 3243 3295 3244 - static int 3245 - 
zfcp_erp_action_dismiss_adapter(struct zfcp_adapter *adapter) 3296 + void zfcp_erp_action_dismiss_adapter(struct zfcp_adapter *adapter) 3246 3297 { 3247 - int retval = 0; 3248 3298 struct zfcp_port *port; 3249 3299 3250 3300 debug_text_event(adapter->erp_dbf, 5, "a_actab"); ··· 3251 3305 else 3252 3306 list_for_each_entry(port, &adapter->port_list_head, list) 3253 3307 zfcp_erp_action_dismiss_port(port); 3254 - 3255 - return retval; 3256 3308 } 3257 3309 3258 - static int 3259 - zfcp_erp_action_dismiss_port(struct zfcp_port *port) 3310 + static void zfcp_erp_action_dismiss_port(struct zfcp_port *port) 3260 3311 { 3261 - int retval = 0; 3262 3312 struct zfcp_unit *unit; 3263 3313 struct zfcp_adapter *adapter = port->adapter; 3264 3314 ··· 3265 3323 else 3266 3324 list_for_each_entry(unit, &port->unit_list_head, list) 3267 3325 zfcp_erp_action_dismiss_unit(unit); 3268 - 3269 - return retval; 3270 3326 } 3271 3327 3272 - static int 3273 - zfcp_erp_action_dismiss_unit(struct zfcp_unit *unit) 3328 + static void zfcp_erp_action_dismiss_unit(struct zfcp_unit *unit) 3274 3329 { 3275 - int retval = 0; 3276 3330 struct zfcp_adapter *adapter = unit->port->adapter; 3277 3331 3278 3332 debug_text_event(adapter->erp_dbf, 5, "u_actab"); 3279 3333 debug_event(adapter->erp_dbf, 5, &unit->fcp_lun, sizeof (fcp_lun_t)); 3280 3334 if (atomic_test_mask(ZFCP_STATUS_COMMON_ERP_INUSE, &unit->status)) 3281 3335 zfcp_erp_action_dismiss(&unit->erp_action); 3282 - 3283 - return retval; 3284 3336 } 3285 3337 3286 3338 static inline void
+7 -2
drivers/s390/scsi/zfcp_ext.h
··· 63 63 extern void zfcp_qdio_free_queues(struct zfcp_adapter *); 64 64 extern int zfcp_qdio_determine_pci(struct zfcp_qdio_queue *, 65 65 struct zfcp_fsf_req *); 66 - extern int zfcp_qdio_reqid_check(struct zfcp_adapter *, void *); 67 66 68 67 extern volatile struct qdio_buffer_element *zfcp_qdio_sbale_req 69 68 (struct zfcp_fsf_req *, int, int); ··· 139 140 extern int zfcp_erp_adapter_reopen(struct zfcp_adapter *, int); 140 141 extern int zfcp_erp_adapter_shutdown(struct zfcp_adapter *, int); 141 142 extern void zfcp_erp_adapter_failed(struct zfcp_adapter *); 143 + extern void zfcp_erp_action_dismiss_adapter(struct zfcp_adapter *); 142 144 143 145 extern void zfcp_erp_modify_port_status(struct zfcp_port *, u32, int); 144 146 extern int zfcp_erp_port_reopen(struct zfcp_port *, int); ··· 156 156 extern int zfcp_erp_thread_setup(struct zfcp_adapter *); 157 157 extern int zfcp_erp_thread_kill(struct zfcp_adapter *); 158 158 extern int zfcp_erp_wait(struct zfcp_adapter *); 159 - extern int zfcp_erp_async_handler(struct zfcp_erp_action *, unsigned long); 159 + extern void zfcp_erp_async_handler(struct zfcp_erp_action *, unsigned long); 160 160 161 161 extern int zfcp_test_link(struct zfcp_port *); 162 162 ··· 190 190 struct zfcp_fsf_req *); 191 191 extern void zfcp_scsi_dbf_event_devreset(const char *, u8, struct zfcp_unit *, 192 192 struct scsi_cmnd *); 193 + extern void zfcp_reqlist_add(struct zfcp_adapter *, struct zfcp_fsf_req *); 194 + extern void zfcp_reqlist_remove(struct zfcp_adapter *, unsigned long); 195 + extern struct zfcp_fsf_req *zfcp_reqlist_ismember(struct zfcp_adapter *, 196 + unsigned long); 197 + extern int zfcp_reqlist_isempty(struct zfcp_adapter *); 193 198 194 199 #endif /* ZFCP_EXT_H */
+67 -59
drivers/s390/scsi/zfcp_fsf.c
··· 49 49 static void zfcp_fsf_link_down_info_eval(struct zfcp_adapter *, 50 50 struct fsf_link_down_info *); 51 51 static int zfcp_fsf_req_dispatch(struct zfcp_fsf_req *); 52 - static void zfcp_fsf_req_dismiss(struct zfcp_fsf_req *); 53 52 54 53 /* association between FSF command and FSF QTCB type */ 55 54 static u32 fsf_qtcb_type[] = { ··· 145 146 kfree(fsf_req); 146 147 } 147 148 148 - /* 149 - * function: 150 - * 151 - * purpose: 152 - * 153 - * returns: 154 - * 155 - * note: qdio queues shall be down (no ongoing inbound processing) 149 + /** 150 + * zfcp_fsf_req_dismiss - dismiss a single fsf request 156 151 */ 157 - int 158 - zfcp_fsf_req_dismiss_all(struct zfcp_adapter *adapter) 152 + static void zfcp_fsf_req_dismiss(struct zfcp_adapter *adapter, 153 + struct zfcp_fsf_req *fsf_req, 154 + unsigned int counter) 159 155 { 160 - struct zfcp_fsf_req *fsf_req, *tmp; 161 - unsigned long flags; 162 - LIST_HEAD(remove_queue); 156 + u64 dbg_tmp[2]; 163 157 164 - spin_lock_irqsave(&adapter->fsf_req_list_lock, flags); 165 - list_splice_init(&adapter->fsf_req_list_head, &remove_queue); 166 - atomic_set(&adapter->fsf_reqs_active, 0); 167 - spin_unlock_irqrestore(&adapter->fsf_req_list_lock, flags); 168 - 169 - list_for_each_entry_safe(fsf_req, tmp, &remove_queue, list) { 170 - list_del(&fsf_req->list); 171 - zfcp_fsf_req_dismiss(fsf_req); 172 - } 173 - 174 - return 0; 175 - } 176 - 177 - /* 178 - * function: 179 - * 180 - * purpose: 181 - * 182 - * returns: 183 - */ 184 - static void 185 - zfcp_fsf_req_dismiss(struct zfcp_fsf_req *fsf_req) 186 - { 158 + dbg_tmp[0] = (u64) atomic_read(&adapter->reqs_active); 159 + dbg_tmp[1] = (u64) counter; 160 + debug_event(adapter->erp_dbf, 4, (void *) dbg_tmp, 16); 161 + list_del(&fsf_req->list); 187 162 fsf_req->status |= ZFCP_STATUS_FSFREQ_DISMISSED; 188 163 zfcp_fsf_req_complete(fsf_req); 164 + } 165 + 166 + /** 167 + * zfcp_fsf_req_dismiss_all - dismiss all remaining fsf requests 168 + */ 169 + int zfcp_fsf_req_dismiss_all(struct zfcp_adapter *adapter) 170 + { 171 + struct zfcp_fsf_req *request, *tmp; 172 + unsigned long flags; 173 + unsigned int i, counter; 174 + 175 + spin_lock_irqsave(&adapter->req_list_lock, flags); 176 + atomic_set(&adapter->reqs_active, 0); 177 + for (i=0; i<REQUEST_LIST_SIZE; i++) { 178 + if (list_empty(&adapter->req_list[i])) 179 + continue; 180 + 181 + counter = 0; 182 + list_for_each_entry_safe(request, tmp, 183 + &adapter->req_list[i], list) { 184 + zfcp_fsf_req_dismiss(adapter, request, counter); 185 + counter++; 186 + } 187 + } 188 + spin_unlock_irqrestore(&adapter->req_list_lock, flags); 189 + 190 + return 0; 189 191 } 190 192 191 193 /* ··· 4592 4592 zfcp_fsf_req_qtcb_init(struct zfcp_fsf_req *fsf_req) 4593 4593 { 4594 4594 if (likely(fsf_req->qtcb != NULL)) { 4595 - fsf_req->qtcb->prefix.req_seq_no = fsf_req->adapter->fsf_req_seq_no; 4596 - fsf_req->qtcb->prefix.req_id = (unsigned long)fsf_req; 4595 + fsf_req->qtcb->prefix.req_seq_no = 4596 + fsf_req->adapter->fsf_req_seq_no; 4597 + fsf_req->qtcb->prefix.req_id = fsf_req->req_id; 4597 4598 fsf_req->qtcb->prefix.ulp_info = ZFCP_ULP_INFO_VERSION; 4598 - fsf_req->qtcb->prefix.qtcb_type = fsf_qtcb_type[fsf_req->fsf_command]; 4599 + fsf_req->qtcb->prefix.qtcb_type = 4600 + fsf_qtcb_type[fsf_req->fsf_command]; 4599 4601 fsf_req->qtcb->prefix.qtcb_version = ZFCP_QTCB_VERSION; 4600 - fsf_req->qtcb->header.req_handle = (unsigned long)fsf_req; 4602 + fsf_req->qtcb->header.req_handle = fsf_req->req_id; 4601 4603 fsf_req->qtcb->header.fsf_command = fsf_req->fsf_command; 4602 4604 } 
4603 4605 } ··· 4656 4654 { 4657 4655 volatile struct qdio_buffer_element *sbale; 4658 4656 struct zfcp_fsf_req *fsf_req = NULL; 4657 + unsigned long flags; 4659 4658 int ret = 0; 4660 4659 struct zfcp_qdio_queue *req_queue = &adapter->request_queue; 4661 4660 ··· 4671 4668 4672 4669 fsf_req->adapter = adapter; 4673 4670 fsf_req->fsf_command = fsf_cmd; 4671 + INIT_LIST_HEAD(&fsf_req->list); 4672 + 4673 + /* unique request id */ 4674 + spin_lock_irqsave(&adapter->req_list_lock, flags); 4675 + fsf_req->req_id = adapter->req_no++; 4676 + spin_unlock_irqrestore(&adapter->req_list_lock, flags); 4674 4677 4675 4678 zfcp_fsf_req_qtcb_init(fsf_req); 4676 4679 ··· 4716 4707 sbale = zfcp_qdio_sbale_req(fsf_req, fsf_req->sbal_curr, 0); 4717 4708 4718 4709 /* setup common SBALE fields */ 4719 - sbale[0].addr = fsf_req; 4710 + sbale[0].addr = (void *) fsf_req->req_id; 4720 4711 sbale[0].flags |= SBAL_FLAGS0_COMMAND; 4721 4712 if (likely(fsf_req->qtcb != NULL)) { 4722 4713 sbale[1].addr = (void *) fsf_req->qtcb; ··· 4756 4747 volatile struct qdio_buffer_element *sbale; 4757 4748 int inc_seq_no; 4758 4749 int new_distance_from_int; 4759 - unsigned long flags; 4750 + u64 dbg_tmp[2]; 4760 4751 int retval = 0; 4761 4752 4762 4753 adapter = fsf_req->adapter; ··· 4770 4761 ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_TRACE, (char *) sbale[1].addr, 4771 4762 sbale[1].length); 4772 4763 4773 - /* put allocated FSF request at list tail */ 4774 - spin_lock_irqsave(&adapter->fsf_req_list_lock, flags); 4775 - list_add_tail(&fsf_req->list, &adapter->fsf_req_list_head); 4776 - spin_unlock_irqrestore(&adapter->fsf_req_list_lock, flags); 4764 + /* put allocated FSF request into hash table */ 4765 + spin_lock(&adapter->req_list_lock); 4766 + zfcp_reqlist_add(adapter, fsf_req); 4767 + spin_unlock(&adapter->req_list_lock); 4777 4768 4778 4769 inc_seq_no = (fsf_req->qtcb != NULL); 4779 4770 ··· 4812 4803 QDIO_FLAG_SYNC_OUTPUT, 4813 4804 0, fsf_req->sbal_first, fsf_req->sbal_number, NULL); 4814 4805 4806 + dbg_tmp[0] = (unsigned long) sbale[0].addr; 4807 + dbg_tmp[1] = (u64) retval; 4808 + debug_event(adapter->erp_dbf, 4, (void *) dbg_tmp, 16); 4809 + 4815 4810 if (unlikely(retval)) { 4816 4811 /* Queues are down..... */ 4817 4812 retval = -EIO; ··· 4825 4812 */ 4826 4813 if (timer) 4827 4814 del_timer(timer); 4828 - spin_lock_irqsave(&adapter->fsf_req_list_lock, flags); 4829 - list_del(&fsf_req->list); 4830 - spin_unlock_irqrestore(&adapter->fsf_req_list_lock, flags); 4831 - /* 4832 - * adjust the number of free SBALs in request queue as well as 4833 - * position of first one 4834 - */ 4815 + spin_lock(&adapter->req_list_lock); 4816 + zfcp_reqlist_remove(adapter, fsf_req->req_id); 4817 + spin_unlock(&adapter->req_list_lock); 4818 + /* undo changes in request queue made for this request */ 4835 4819 zfcp_qdio_zero_sbals(req_queue->buffer, 4836 4820 fsf_req->sbal_first, fsf_req->sbal_number); 4837 4821 atomic_add(fsf_req->sbal_number, &req_queue->free_count); 4838 - req_queue->free_index -= fsf_req->sbal_number; /* increase */ 4822 + req_queue->free_index -= fsf_req->sbal_number; 4839 4823 req_queue->free_index += QDIO_MAX_BUFFERS_PER_Q; 4840 4824 req_queue->free_index %= QDIO_MAX_BUFFERS_PER_Q; /* wrap */ 4841 - ZFCP_LOG_DEBUG 4842 - ("error: do_QDIO failed. 
Buffers could not be enqueued " 4843 - "to request queue.\n"); 4825 + zfcp_erp_adapter_reopen(adapter, 0); 4844 4826 } else { 4845 4827 req_queue->distance_from_int = new_distance_from_int; 4846 4828 /* ··· 4851 4843 adapter->fsf_req_seq_no++; 4852 4844 4853 4845 /* count FSF requests pending */ 4854 - atomic_inc(&adapter->fsf_reqs_active); 4846 + atomic_inc(&adapter->reqs_active); 4855 4847 } 4856 4848 return retval; 4857 4849 }
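The net effect of the zfcp_fsf.c changes: the hardware-visible handles (the SBALE address field and the QTCB req_handle) now carry the counter-based req_id rather than the fsf_req kernel address, so the completion path validates the handle against the request table instead of dereferencing whatever value comes back. Condensed from the diff, with the inbound half implemented by zfcp_qdio_reqid_check in the next hunk:

/* outbound: publish the ID, not the pointer */
sbale[0].addr = (void *) fsf_req->req_id;
fsf_req->qtcb->header.req_handle = fsf_req->req_id;

/* inbound: the ID is looked up, and an unknown value triggers
 * adapter recovery instead of a wild pointer dereference */
fsf_req = zfcp_reqlist_ismember(adapter, req_id);
if (!fsf_req)
	zfcp_erp_adapter_reopen(adapter, 0);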
+32 -47
drivers/s390/scsi/zfcp_qdio.c
··· 282 282 return; 283 283 } 284 284 285 + /** 286 + * zfcp_qdio_reqid_check - checks for valid reqids or unsolicited status 287 + */ 288 + static int zfcp_qdio_reqid_check(struct zfcp_adapter *adapter, 289 + unsigned long req_id) 290 + { 291 + struct zfcp_fsf_req *fsf_req; 292 + unsigned long flags; 293 + 294 + debug_long_event(adapter->erp_dbf, 4, req_id); 295 + 296 + spin_lock_irqsave(&adapter->req_list_lock, flags); 297 + fsf_req = zfcp_reqlist_ismember(adapter, req_id); 298 + 299 + if (!fsf_req) { 300 + spin_unlock_irqrestore(&adapter->req_list_lock, flags); 301 + ZFCP_LOG_NORMAL("error: unknown request id (%ld).\n", req_id); 302 + zfcp_erp_adapter_reopen(adapter, 0); 303 + return -EINVAL; 304 + } 305 + 306 + zfcp_reqlist_remove(adapter, req_id); 307 + atomic_dec(&adapter->reqs_active); 308 + spin_unlock_irqrestore(&adapter->req_list_lock, flags); 309 + 310 + /* finish the FSF request */ 311 + zfcp_fsf_req_complete(fsf_req); 312 + 313 + return 0; 314 + } 315 + 285 316 /* 286 317 * function: zfcp_qdio_response_handler 287 318 * ··· 375 344 /* look for QDIO request identifiers in SB */ 376 345 buffere = &buffer->element[buffere_index]; 377 346 retval = zfcp_qdio_reqid_check(adapter, 378 - (void *) buffere->addr); 347 + (unsigned long) buffere->addr); 379 348 380 349 if (retval) { 381 350 ZFCP_LOG_NORMAL("bug: unexpected inbound " ··· 444 413 } 445 414 out: 446 415 return; 447 - } 448 - 449 - /* 450 - * function: zfcp_qdio_reqid_check 451 - * 452 - * purpose: checks for valid reqids or unsolicited status 453 - * 454 - * returns: 0 - valid request id or unsolicited status 455 - * !0 - otherwise 456 - */ 457 - int 458 - zfcp_qdio_reqid_check(struct zfcp_adapter *adapter, void *sbale_addr) 459 - { 460 - struct zfcp_fsf_req *fsf_req; 461 - unsigned long flags; 462 - 463 - /* invalid (per convention used in this driver) */ 464 - if (unlikely(!sbale_addr)) { 465 - ZFCP_LOG_NORMAL("bug: invalid reqid\n"); 466 - return -EINVAL; 467 - } 468 - 469 - /* valid request id and thus (hopefully :) valid fsf_req address */ 470 - fsf_req = (struct zfcp_fsf_req *) sbale_addr; 471 - 472 - /* serialize with zfcp_fsf_req_dismiss_all */ 473 - spin_lock_irqsave(&adapter->fsf_req_list_lock, flags); 474 - if (list_empty(&adapter->fsf_req_list_head)) { 475 - spin_unlock_irqrestore(&adapter->fsf_req_list_lock, flags); 476 - return 0; 477 - } 478 - list_del(&fsf_req->list); 479 - atomic_dec(&adapter->fsf_reqs_active); 480 - spin_unlock_irqrestore(&adapter->fsf_req_list_lock, flags); 481 - 482 - if (unlikely(adapter != fsf_req->adapter)) { 483 - ZFCP_LOG_NORMAL("bug: invalid reqid (fsf_req=%p, " 484 - "fsf_req->adapter=%p, adapter=%p)\n", 485 - fsf_req, fsf_req->adapter, adapter); 486 - return -EINVAL; 487 - } 488 - 489 - /* finish the FSF request */ 490 - zfcp_fsf_req_complete(fsf_req); 491 - 492 - return 0; 493 416 } 494 417 495 418 /**
+37 -36
drivers/s390/scsi/zfcp_scsi.c
··· 30 30 void (*done) (struct scsi_cmnd *)); 31 31 static int zfcp_scsi_eh_abort_handler(struct scsi_cmnd *); 32 32 static int zfcp_scsi_eh_device_reset_handler(struct scsi_cmnd *); 33 - static int zfcp_scsi_eh_bus_reset_handler(struct scsi_cmnd *); 34 33 static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *); 35 34 static int zfcp_task_management_function(struct zfcp_unit *, u8, 36 35 struct scsi_cmnd *); ··· 45 46 .scsi_host_template = { 46 47 .name = ZFCP_NAME, 47 48 .proc_name = "zfcp", 48 - .proc_info = NULL, 49 - .detect = NULL, 50 49 .slave_alloc = zfcp_scsi_slave_alloc, 51 50 .slave_configure = zfcp_scsi_slave_configure, 52 51 .slave_destroy = zfcp_scsi_slave_destroy, 53 52 .queuecommand = zfcp_scsi_queuecommand, 54 53 .eh_abort_handler = zfcp_scsi_eh_abort_handler, 55 54 .eh_device_reset_handler = zfcp_scsi_eh_device_reset_handler, 56 - .eh_bus_reset_handler = zfcp_scsi_eh_bus_reset_handler, 55 + .eh_bus_reset_handler = zfcp_scsi_eh_host_reset_handler, 57 56 .eh_host_reset_handler = zfcp_scsi_eh_host_reset_handler, 58 57 .can_queue = 4096, 59 58 .this_id = -1, 60 - /* 61 - * FIXME: 62 - * one less? can zfcp_create_sbale cope with it? 63 - */ 64 59 .sg_tablesize = ZFCP_MAX_SBALES_PER_REQ, 65 60 .cmd_per_lun = 1, 66 - .unchecked_isa_dma = 0, 67 61 .use_clustering = 1, 68 62 .sdev_attrs = zfcp_sysfs_sdev_attrs, 69 63 }, 70 64 .driver_version = ZFCP_VERSION, 71 - /* rest initialised with zeros */ 72 65 }; 73 66 74 67 /* Find start of Response Information in FCP response unit*/ ··· 167 176 return retval; 168 177 } 169 178 170 - static void 171 - zfcp_scsi_slave_destroy(struct scsi_device *sdpnt) 179 + /** 180 + * zfcp_scsi_slave_destroy - called when scsi device is removed 181 + * 182 + * Remove reference to associated scsi device for an zfcp_unit. 183 + * Mark zfcp_unit as failed. The scsi device might be deleted via sysfs 184 + * or a scan for this device might have failed. 185 + */ 186 + static void zfcp_scsi_slave_destroy(struct scsi_device *sdpnt) 172 187 { 173 188 struct zfcp_unit *unit = (struct zfcp_unit *) sdpnt->hostdata; 174 189 ··· 182 185 atomic_clear_mask(ZFCP_STATUS_UNIT_REGISTERED, &unit->status); 183 186 sdpnt->hostdata = NULL; 184 187 unit->device = NULL; 188 + zfcp_erp_unit_failed(unit); 185 189 zfcp_unit_put(unit); 186 190 } else { 187 191 ZFCP_LOG_NORMAL("bug: no unit associated with SCSI device at " ··· 547 549 } 548 550 549 551 /** 550 - * zfcp_scsi_eh_bus_reset_handler - reset bus (reopen adapter) 552 + * zfcp_scsi_eh_host_reset_handler - handler for host and bus reset 553 + * 554 + * If ERP is already running it will be stopped. 
551 555 */ 552 - int 553 - zfcp_scsi_eh_bus_reset_handler(struct scsi_cmnd *scpnt) 556 + int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt) 554 557 { 555 - struct zfcp_unit *unit = (struct zfcp_unit*) scpnt->device->hostdata; 556 - struct zfcp_adapter *adapter = unit->port->adapter; 558 + struct zfcp_unit *unit; 559 + struct zfcp_adapter *adapter; 560 + unsigned long flags; 557 561 558 - ZFCP_LOG_NORMAL("bus reset because of problems with " 562 + unit = (struct zfcp_unit*) scpnt->device->hostdata; 563 + adapter = unit->port->adapter; 564 + 565 + ZFCP_LOG_NORMAL("host/bus reset because of problems with " 559 566 "unit 0x%016Lx\n", unit->fcp_lun); 560 - zfcp_erp_adapter_reopen(adapter, 0); 561 - zfcp_erp_wait(adapter); 562 567 563 - return SUCCESS; 564 - } 565 - 566 - /** 567 - * zfcp_scsi_eh_host_reset_handler - reset host (reopen adapter) 568 - */ 569 - int 570 - zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt) 571 - { 572 - struct zfcp_unit *unit = (struct zfcp_unit*) scpnt->device->hostdata; 573 - struct zfcp_adapter *adapter = unit->port->adapter; 574 - 575 - ZFCP_LOG_NORMAL("host reset because of problems with " 576 - "unit 0x%016Lx\n", unit->fcp_lun); 577 - zfcp_erp_adapter_reopen(adapter, 0); 578 - zfcp_erp_wait(adapter); 568 + write_lock_irqsave(&adapter->erp_lock, flags); 569 + if (atomic_test_mask(ZFCP_STATUS_ADAPTER_ERP_PENDING, 570 + &adapter->status)) { 571 + zfcp_erp_modify_adapter_status(adapter, 572 + ZFCP_STATUS_COMMON_UNBLOCKED|ZFCP_STATUS_COMMON_OPEN, 573 + ZFCP_CLEAR); 574 + zfcp_erp_action_dismiss_adapter(adapter); 575 + write_unlock_irqrestore(&adapter->erp_lock, flags); 576 + zfcp_fsf_req_dismiss_all(adapter); 577 + adapter->fsf_req_seq_no = 0; 578 + zfcp_erp_adapter_reopen(adapter, 0); 579 + } else { 580 + write_unlock_irqrestore(&adapter->erp_lock, flags); 581 + zfcp_erp_adapter_reopen(adapter, 0); 582 + zfcp_erp_wait(adapter); 583 + } 579 584 580 585 return SUCCESS; 581 586 }
+9 -559
drivers/scsi/hptiop.c
··· 45 45 static const char driver_name_long[] = "RocketRAID 3xxx SATA Controller driver"; 46 46 static const char driver_ver[] = "v1.0 (060426)"; 47 47 48 - static DEFINE_SPINLOCK(hptiop_hba_list_lock); 49 - static LIST_HEAD(hptiop_hba_list); 50 - static int hptiop_cdev_major = -1; 51 - 52 48 static void hptiop_host_request_callback(struct hptiop_hba *hba, u32 tag); 53 49 static void hptiop_iop_request_callback(struct hptiop_hba *hba, u32 tag); 54 50 static void hptiop_message_callback(struct hptiop_hba *hba, u32 msg); ··· 573 577 if (atomic_xchg(&hba->resetting, 1) == 0) { 574 578 atomic_inc(&hba->reset_count); 575 579 writel(IOPMU_INBOUND_MSG0_RESET, 576 - &hba->iop->outbound_msgaddr0); 580 + &hba->iop->inbound_msgaddr0); 577 581 hptiop_pci_posting_flush(hba->iop); 578 582 } 579 583 ··· 616 620 return queue_depth; 617 621 } 618 622 619 - struct hptiop_getinfo { 620 - char __user *buffer; 621 - loff_t buflength; 622 - loff_t bufoffset; 623 - loff_t buffillen; 624 - loff_t filpos; 625 - }; 626 - 627 - static void hptiop_copy_mem_info(struct hptiop_getinfo *pinfo, 628 - char *data, int datalen) 629 - { 630 - if (pinfo->filpos < pinfo->bufoffset) { 631 - if (pinfo->filpos + datalen <= pinfo->bufoffset) { 632 - pinfo->filpos += datalen; 633 - return; 634 - } else { 635 - data += (pinfo->bufoffset - pinfo->filpos); 636 - datalen -= (pinfo->bufoffset - pinfo->filpos); 637 - pinfo->filpos = pinfo->bufoffset; 638 - } 639 - } 640 - 641 - pinfo->filpos += datalen; 642 - if (pinfo->buffillen == pinfo->buflength) 643 - return; 644 - 645 - if (pinfo->buflength - pinfo->buffillen < datalen) 646 - datalen = pinfo->buflength - pinfo->buffillen; 647 - 648 - if (copy_to_user(pinfo->buffer + pinfo->buffillen, data, datalen)) 649 - return; 650 - 651 - pinfo->buffillen += datalen; 652 - } 653 - 654 - static int hptiop_copy_info(struct hptiop_getinfo *pinfo, char *fmt, ...) 655 - { 656 - va_list args; 657 - char buf[128]; 658 - int len; 659 - 660 - va_start(args, fmt); 661 - len = vsnprintf(buf, sizeof(buf), fmt, args); 662 - va_end(args); 663 - hptiop_copy_mem_info(pinfo, buf, len); 664 - return len; 665 - } 666 - 667 - static void hptiop_ioctl_done(struct hpt_ioctl_k *arg) 668 - { 669 - arg->done = NULL; 670 - wake_up(&arg->hba->ioctl_wq); 671 - } 672 - 673 - static void hptiop_do_ioctl(struct hpt_ioctl_k *arg) 674 - { 675 - struct hptiop_hba *hba = arg->hba; 676 - u32 val; 677 - struct hpt_iop_request_ioctl_command __iomem *req; 678 - int ioctl_retry = 0; 679 - 680 - dprintk("scsi%d: hptiop_do_ioctl\n", hba->host->host_no); 681 - 682 - /* 683 - * check (in + out) buff size from application. 684 - * outbuf must be dword aligned. 
685 - */ 686 - if (((arg->inbuf_size + 3) & ~3) + arg->outbuf_size > 687 - hba->max_request_size 688 - - sizeof(struct hpt_iop_request_header) 689 - - 4 * sizeof(u32)) { 690 - dprintk("scsi%d: ioctl buf size (%d/%d) is too large\n", 691 - hba->host->host_no, 692 - arg->inbuf_size, arg->outbuf_size); 693 - arg->result = HPT_IOCTL_RESULT_FAILED; 694 - return; 695 - } 696 - 697 - retry: 698 - spin_lock_irq(hba->host->host_lock); 699 - 700 - val = readl(&hba->iop->inbound_queue); 701 - if (val == IOPMU_QUEUE_EMPTY) { 702 - spin_unlock_irq(hba->host->host_lock); 703 - dprintk("scsi%d: no free req for ioctl\n", hba->host->host_no); 704 - arg->result = -1; 705 - return; 706 - } 707 - 708 - req = (struct hpt_iop_request_ioctl_command __iomem *) 709 - ((unsigned long)hba->iop + val); 710 - 711 - writel(HPT_CTL_CODE_LINUX_TO_IOP(arg->ioctl_code), 712 - &req->ioctl_code); 713 - writel(arg->inbuf_size, &req->inbuf_size); 714 - writel(arg->outbuf_size, &req->outbuf_size); 715 - 716 - /* 717 - * use the buffer on the IOP local memory first, then copy it 718 - * back to host. 719 - * the caller's request buffer shoudl be little-endian. 720 - */ 721 - if (arg->inbuf_size) 722 - memcpy_toio(req->buf, arg->inbuf, arg->inbuf_size); 723 - 724 - /* correct the controller ID for IOP */ 725 - if ((arg->ioctl_code == HPT_IOCTL_GET_CHANNEL_INFO || 726 - arg->ioctl_code == HPT_IOCTL_GET_CONTROLLER_INFO_V2 || 727 - arg->ioctl_code == HPT_IOCTL_GET_CONTROLLER_INFO) 728 - && arg->inbuf_size >= sizeof(u32)) 729 - writel(0, req->buf); 730 - 731 - writel(IOP_REQUEST_TYPE_IOCTL_COMMAND, &req->header.type); 732 - writel(0, &req->header.flags); 733 - writel(offsetof(struct hpt_iop_request_ioctl_command, buf) 734 - + arg->inbuf_size, &req->header.size); 735 - writel((u32)(unsigned long)arg, &req->header.context); 736 - writel(BITS_PER_LONG > 32 ? 
(u32)((unsigned long)arg>>32) : 0, 737 - &req->header.context_hi32); 738 - writel(IOP_RESULT_PENDING, &req->header.result); 739 - 740 - arg->result = HPT_IOCTL_RESULT_FAILED; 741 - arg->done = hptiop_ioctl_done; 742 - 743 - writel(val, &hba->iop->inbound_queue); 744 - hptiop_pci_posting_flush(hba->iop); 745 - 746 - spin_unlock_irq(hba->host->host_lock); 747 - 748 - wait_event_timeout(hba->ioctl_wq, arg->done == NULL, 60 * HZ); 749 - 750 - if (arg->done != NULL) { 751 - hptiop_reset_hba(hba); 752 - if (ioctl_retry++ < 3) 753 - goto retry; 754 - } 755 - 756 - dprintk("hpt_iop_ioctl %x result %d\n", 757 - arg->ioctl_code, arg->result); 758 - } 759 - 760 - static int __hpt_do_ioctl(struct hptiop_hba *hba, u32 code, void *inbuf, 761 - u32 insize, void *outbuf, u32 outsize) 762 - { 763 - struct hpt_ioctl_k arg; 764 - arg.hba = hba; 765 - arg.ioctl_code = code; 766 - arg.inbuf = inbuf; 767 - arg.outbuf = outbuf; 768 - arg.inbuf_size = insize; 769 - arg.outbuf_size = outsize; 770 - arg.bytes_returned = NULL; 771 - hptiop_do_ioctl(&arg); 772 - return arg.result; 773 - } 774 - 775 - static inline int hpt_id_valid(__le32 id) 776 - { 777 - return id != 0 && id != cpu_to_le32(0xffffffff); 778 - } 779 - 780 - static int hptiop_get_controller_info(struct hptiop_hba *hba, 781 - struct hpt_controller_info *pinfo) 782 - { 783 - int id = 0; 784 - 785 - return __hpt_do_ioctl(hba, HPT_IOCTL_GET_CONTROLLER_INFO, 786 - &id, sizeof(int), pinfo, sizeof(*pinfo)); 787 - } 788 - 789 - 790 - static int hptiop_get_channel_info(struct hptiop_hba *hba, int bus, 791 - struct hpt_channel_info *pinfo) 792 - { 793 - u32 ids[2]; 794 - 795 - ids[0] = 0; 796 - ids[1] = bus; 797 - return __hpt_do_ioctl(hba, HPT_IOCTL_GET_CHANNEL_INFO, 798 - ids, sizeof(ids), pinfo, sizeof(*pinfo)); 799 - 800 - } 801 - 802 - static int hptiop_get_logical_devices(struct hptiop_hba *hba, 803 - __le32 *pids, int maxcount) 804 - { 805 - int i; 806 - u32 count = maxcount - 1; 807 - 808 - if (__hpt_do_ioctl(hba, HPT_IOCTL_GET_LOGICAL_DEVICES, 809 - &count, sizeof(u32), 810 - pids, sizeof(u32) * maxcount)) 811 - return -1; 812 - 813 - maxcount = le32_to_cpu(pids[0]); 814 - for (i = 0; i < maxcount; i++) 815 - pids[i] = pids[i+1]; 816 - 817 - return maxcount; 818 - } 819 - 820 - static int hptiop_get_device_info_v3(struct hptiop_hba *hba, __le32 id, 821 - struct hpt_logical_device_info_v3 *pinfo) 822 - { 823 - return __hpt_do_ioctl(hba, HPT_IOCTL_GET_DEVICE_INFO_V3, 824 - &id, sizeof(u32), 825 - pinfo, sizeof(*pinfo)); 826 - } 827 - 828 - static const char *get_array_status(struct hpt_logical_device_info_v3 *devinfo) 829 - { 830 - static char s[64]; 831 - u32 flags = le32_to_cpu(devinfo->u.array.flags); 832 - u32 trans_prog = le32_to_cpu(devinfo->u.array.transforming_progress); 833 - u32 reb_prog = le32_to_cpu(devinfo->u.array.rebuilding_progress); 834 - 835 - if (flags & ARRAY_FLAG_DISABLED) 836 - return "Disabled"; 837 - else if (flags & ARRAY_FLAG_TRANSFORMING) 838 - sprintf(s, "Expanding/Migrating %d.%d%%%s%s", 839 - trans_prog / 100, 840 - trans_prog % 100, 841 - (flags & (ARRAY_FLAG_NEEDBUILDING|ARRAY_FLAG_BROKEN))? 842 - ", Critical" : "", 843 - ((flags & ARRAY_FLAG_NEEDINITIALIZING) && 844 - !(flags & ARRAY_FLAG_REBUILDING) && 845 - !(flags & ARRAY_FLAG_INITIALIZING))? 846 - ", Unintialized" : ""); 847 - else if ((flags & ARRAY_FLAG_BROKEN) && 848 - devinfo->u.array.array_type != AT_RAID6) 849 - return "Critical"; 850 - else if (flags & ARRAY_FLAG_REBUILDING) 851 - sprintf(s, 852 - (flags & ARRAY_FLAG_NEEDINITIALIZING)? 
853 - "%sBackground initializing %d.%d%%" : 854 - "%sRebuilding %d.%d%%", 855 - (flags & ARRAY_FLAG_BROKEN)? "Critical, " : "", 856 - reb_prog / 100, 857 - reb_prog % 100); 858 - else if (flags & ARRAY_FLAG_VERIFYING) 859 - sprintf(s, "%sVerifying %d.%d%%", 860 - (flags & ARRAY_FLAG_BROKEN)? "Critical, " : "", 861 - reb_prog / 100, 862 - reb_prog % 100); 863 - else if (flags & ARRAY_FLAG_INITIALIZING) 864 - sprintf(s, "%sForground initializing %d.%d%%", 865 - (flags & ARRAY_FLAG_BROKEN)? "Critical, " : "", 866 - reb_prog / 100, 867 - reb_prog % 100); 868 - else if (flags & ARRAY_FLAG_NEEDTRANSFORM) 869 - sprintf(s,"%s%s%s", "Need Expanding/Migrating", 870 - (flags & ARRAY_FLAG_BROKEN)? "Critical, " : "", 871 - ((flags & ARRAY_FLAG_NEEDINITIALIZING) && 872 - !(flags & ARRAY_FLAG_REBUILDING) && 873 - !(flags & ARRAY_FLAG_INITIALIZING))? 874 - ", Unintialized" : ""); 875 - else if (flags & ARRAY_FLAG_NEEDINITIALIZING && 876 - !(flags & ARRAY_FLAG_REBUILDING) && 877 - !(flags & ARRAY_FLAG_INITIALIZING)) 878 - sprintf(s,"%sUninitialized", 879 - (flags & ARRAY_FLAG_BROKEN)? "Critical, " : ""); 880 - else if ((flags & ARRAY_FLAG_NEEDBUILDING) || 881 - (flags & ARRAY_FLAG_BROKEN)) 882 - return "Critical"; 883 - else 884 - return "Normal"; 885 - return s; 886 - } 887 - 888 - static void hptiop_dump_devinfo(struct hptiop_hba *hba, 889 - struct hptiop_getinfo *pinfo, __le32 id, int indent) 890 - { 891 - struct hpt_logical_device_info_v3 devinfo; 892 - int i; 893 - u64 capacity; 894 - 895 - for (i = 0; i < indent; i++) 896 - hptiop_copy_info(pinfo, "\t"); 897 - 898 - if (hptiop_get_device_info_v3(hba, id, &devinfo)) { 899 - hptiop_copy_info(pinfo, "unknown\n"); 900 - return; 901 - } 902 - 903 - switch (devinfo.type) { 904 - 905 - case LDT_DEVICE: { 906 - struct hd_driveid *driveid; 907 - u32 flags = le32_to_cpu(devinfo.u.device.flags); 908 - 909 - driveid = (struct hd_driveid *)devinfo.u.device.ident; 910 - /* model[] is 40 chars long, but we just want 20 chars here */ 911 - driveid->model[20] = 0; 912 - 913 - if (indent) 914 - if (flags & DEVICE_FLAG_DISABLED) 915 - hptiop_copy_info(pinfo,"Missing\n"); 916 - else 917 - hptiop_copy_info(pinfo, "CH%d %s\n", 918 - devinfo.u.device.path_id + 1, 919 - driveid->model); 920 - else { 921 - capacity = le64_to_cpu(devinfo.capacity) * 512; 922 - do_div(capacity, 1000000); 923 - hptiop_copy_info(pinfo, 924 - "CH%d %s, %lluMB, %s %s%s%s%s\n", 925 - devinfo.u.device.path_id + 1, 926 - driveid->model, 927 - capacity, 928 - (flags & DEVICE_FLAG_DISABLED)? 929 - "Disabled" : "Normal", 930 - devinfo.u.device.read_ahead_enabled? 931 - "[RA]" : "", 932 - devinfo.u.device.write_cache_enabled? 933 - "[WC]" : "", 934 - devinfo.u.device.TCQ_enabled? 935 - "[TCQ]" : "", 936 - devinfo.u.device.NCQ_enabled? 937 - "[NCQ]" : "" 938 - ); 939 - } 940 - break; 941 - } 942 - 943 - case LDT_ARRAY: 944 - if (devinfo.target_id != INVALID_TARGET_ID) 945 - hptiop_copy_info(pinfo, "[DISK %d_%d] ", 946 - devinfo.vbus_id, devinfo.target_id); 947 - 948 - capacity = le64_to_cpu(devinfo.capacity) * 512; 949 - do_div(capacity, 1000000); 950 - hptiop_copy_info(pinfo, "%s (%s), %lluMB, %s\n", 951 - devinfo.u.array.name, 952 - devinfo.u.array.array_type==AT_RAID0? "RAID0" : 953 - devinfo.u.array.array_type==AT_RAID1? "RAID1" : 954 - devinfo.u.array.array_type==AT_RAID5? "RAID5" : 955 - devinfo.u.array.array_type==AT_RAID6? "RAID6" : 956 - devinfo.u.array.array_type==AT_JBOD? 
"JBOD" : 957 - "unknown", 958 - capacity, 959 - get_array_status(&devinfo)); 960 - for (i = 0; i < devinfo.u.array.ndisk; i++) { 961 - if (hpt_id_valid(devinfo.u.array.members[i])) { 962 - if (cpu_to_le16(1<<i) & 963 - devinfo.u.array.critical_members) 964 - hptiop_copy_info(pinfo, "\t*"); 965 - hptiop_dump_devinfo(hba, pinfo, 966 - devinfo.u.array.members[i], indent+1); 967 - } 968 - else 969 - hptiop_copy_info(pinfo, "\tMissing\n"); 970 - } 971 - if (id == devinfo.u.array.transform_source) { 972 - hptiop_copy_info(pinfo, "\tExpanding/Migrating to:\n"); 973 - hptiop_dump_devinfo(hba, pinfo, 974 - devinfo.u.array.transform_target, indent+1); 975 - } 976 - break; 977 - } 978 - } 979 - 980 623 static ssize_t hptiop_show_version(struct class_device *class_dev, char *buf) 981 624 { 982 625 return snprintf(buf, PAGE_SIZE, "%s\n", driver_ver); 983 626 } 984 - 985 - static ssize_t hptiop_cdev_read(struct file *filp, char __user *buf, 986 - size_t count, loff_t *ppos) 987 - { 988 - struct hptiop_hba *hba = filp->private_data; 989 - struct hptiop_getinfo info; 990 - int i, j, ndev; 991 - struct hpt_controller_info con_info; 992 - struct hpt_channel_info chan_info; 993 - __le32 ids[32]; 994 - 995 - info.buffer = buf; 996 - info.buflength = count; 997 - info.bufoffset = ppos ? *ppos : 0; 998 - info.filpos = 0; 999 - info.buffillen = 0; 1000 - 1001 - if (hptiop_get_controller_info(hba, &con_info)) 1002 - return -EIO; 1003 - 1004 - for (i = 0; i < con_info.num_buses; i++) { 1005 - if (hptiop_get_channel_info(hba, i, &chan_info) == 0) { 1006 - if (hpt_id_valid(chan_info.devices[0])) 1007 - hptiop_dump_devinfo(hba, &info, 1008 - chan_info.devices[0], 0); 1009 - if (hpt_id_valid(chan_info.devices[1])) 1010 - hptiop_dump_devinfo(hba, &info, 1011 - chan_info.devices[1], 0); 1012 - } 1013 - } 1014 - 1015 - ndev = hptiop_get_logical_devices(hba, ids, 1016 - sizeof(ids) / sizeof(ids[0])); 1017 - 1018 - /* 1019 - * if hptiop_get_logical_devices fails, ndev==-1 and it just 1020 - * output nothing here 1021 - */ 1022 - for (j = 0; j < ndev; j++) 1023 - hptiop_dump_devinfo(hba, &info, ids[j], 0); 1024 - 1025 - if (ppos) 1026 - *ppos += info.buffillen; 1027 - 1028 - return info.buffillen; 1029 - } 1030 - 1031 - static int hptiop_cdev_ioctl(struct inode *inode, struct file *file, 1032 - unsigned int cmd, unsigned long arg) 1033 - { 1034 - struct hptiop_hba *hba = file->private_data; 1035 - struct hpt_ioctl_u ioctl_u; 1036 - struct hpt_ioctl_k ioctl_k; 1037 - u32 bytes_returned; 1038 - int err = -EINVAL; 1039 - 1040 - if (copy_from_user(&ioctl_u, 1041 - (void __user *)arg, sizeof(struct hpt_ioctl_u))) 1042 - return -EINVAL; 1043 - 1044 - if (ioctl_u.magic != HPT_IOCTL_MAGIC) 1045 - return -EINVAL; 1046 - 1047 - ioctl_k.ioctl_code = ioctl_u.ioctl_code; 1048 - ioctl_k.inbuf = NULL; 1049 - ioctl_k.inbuf_size = ioctl_u.inbuf_size; 1050 - ioctl_k.outbuf = NULL; 1051 - ioctl_k.outbuf_size = ioctl_u.outbuf_size; 1052 - ioctl_k.hba = hba; 1053 - ioctl_k.bytes_returned = &bytes_returned; 1054 - 1055 - /* verify user buffer */ 1056 - if ((ioctl_k.inbuf_size && !access_ok(VERIFY_READ, 1057 - ioctl_u.inbuf, ioctl_k.inbuf_size)) || 1058 - (ioctl_k.outbuf_size && !access_ok(VERIFY_WRITE, 1059 - ioctl_u.outbuf, ioctl_k.outbuf_size)) || 1060 - (ioctl_u.bytes_returned && !access_ok(VERIFY_WRITE, 1061 - ioctl_u.bytes_returned, sizeof(u32))) || 1062 - ioctl_k.inbuf_size + ioctl_k.outbuf_size > 0x10000) { 1063 - 1064 - dprintk("scsi%d: got bad user address\n", hba->host->host_no); 1065 - return -EINVAL; 1066 - } 1067 - 1068 - /* map 
buffer to kernel. */ 1069 - if (ioctl_k.inbuf_size) { 1070 - ioctl_k.inbuf = kmalloc(ioctl_k.inbuf_size, GFP_KERNEL); 1071 - if (!ioctl_k.inbuf) { 1072 - dprintk("scsi%d: fail to alloc inbuf\n", 1073 - hba->host->host_no); 1074 - err = -ENOMEM; 1075 - goto err_exit; 1076 - } 1077 - 1078 - if (copy_from_user(ioctl_k.inbuf, 1079 - ioctl_u.inbuf, ioctl_k.inbuf_size)) { 1080 - goto err_exit; 1081 - } 1082 - } 1083 - 1084 - if (ioctl_k.outbuf_size) { 1085 - ioctl_k.outbuf = kmalloc(ioctl_k.outbuf_size, GFP_KERNEL); 1086 - if (!ioctl_k.outbuf) { 1087 - dprintk("scsi%d: fail to alloc outbuf\n", 1088 - hba->host->host_no); 1089 - err = -ENOMEM; 1090 - goto err_exit; 1091 - } 1092 - } 1093 - 1094 - hptiop_do_ioctl(&ioctl_k); 1095 - 1096 - if (ioctl_k.result == HPT_IOCTL_RESULT_OK) { 1097 - if (ioctl_k.outbuf_size && 1098 - copy_to_user(ioctl_u.outbuf, 1099 - ioctl_k.outbuf, ioctl_k.outbuf_size)) 1100 - goto err_exit; 1101 - 1102 - if (ioctl_u.bytes_returned && 1103 - copy_to_user(ioctl_u.bytes_returned, 1104 - &bytes_returned, sizeof(u32))) 1105 - goto err_exit; 1106 - 1107 - err = 0; 1108 - } 1109 - 1110 - err_exit: 1111 - kfree(ioctl_k.inbuf); 1112 - kfree(ioctl_k.outbuf); 1113 - 1114 - return err; 1115 - } 1116 - 1117 - static int hptiop_cdev_open(struct inode *inode, struct file *file) 1118 - { 1119 - struct hptiop_hba *hba; 1120 - unsigned i = 0, minor = iminor(inode); 1121 - int ret = -ENODEV; 1122 - 1123 - spin_lock(&hptiop_hba_list_lock); 1124 - list_for_each_entry(hba, &hptiop_hba_list, link) { 1125 - if (i == minor) { 1126 - file->private_data = hba; 1127 - ret = 0; 1128 - goto out; 1129 - } 1130 - i++; 1131 - } 1132 - 1133 - out: 1134 - spin_unlock(&hptiop_hba_list_lock); 1135 - return ret; 1136 - } 1137 - 1138 - static struct file_operations hptiop_cdev_fops = { 1139 - .owner = THIS_MODULE, 1140 - .read = hptiop_cdev_read, 1141 - .ioctl = hptiop_cdev_ioctl, 1142 - .open = hptiop_cdev_open, 1143 - }; 1144 627 1145 628 static ssize_t hptiop_show_fw_version(struct class_device *class_dev, char *buf) 1146 629 { ··· 771 1296 goto unmap_pci_bar; 772 1297 } 773 1298 774 - if (scsi_add_host(host, &pcidev->dev)) { 775 - printk(KERN_ERR "scsi%d: scsi_add_host failed\n", 776 - hba->host->host_no); 777 - goto unmap_pci_bar; 778 - } 779 - 780 1299 pci_set_drvdata(pcidev, host); 781 1300 782 1301 if (request_irq(pcidev->irq, hptiop_intr, IRQF_SHARED, 783 1302 driver_name, hba)) { 784 1303 printk(KERN_ERR "scsi%d: request irq %d failed\n", 785 1304 hba->host->host_no, pcidev->irq); 786 - goto remove_scsi_host; 1305 + goto unmap_pci_bar; 787 1306 } 788 1307 789 1308 /* Allocate request mem */ ··· 824 1355 if (hptiop_initialize_iop(hba)) 825 1356 goto free_request_mem; 826 1357 827 - spin_lock(&hptiop_hba_list_lock); 828 - list_add_tail(&hba->link, &hptiop_hba_list); 829 - spin_unlock(&hptiop_hba_list_lock); 1358 + if (scsi_add_host(host, &pcidev->dev)) { 1359 + printk(KERN_ERR "scsi%d: scsi_add_host failed\n", 1360 + hba->host->host_no); 1361 + goto free_request_mem; 1362 + } 1363 + 830 1364 831 1365 scsi_scan_host(host); 832 1366 ··· 843 1371 844 1372 free_request_irq: 845 1373 free_irq(hba->pcidev->irq, hba); 846 - 847 - remove_scsi_host: 848 - scsi_remove_host(host); 849 1374 850 1375 unmap_pci_bar: 851 1376 iounmap(hba->iop); ··· 891 1422 892 1423 scsi_remove_host(host); 893 1424 894 - spin_lock(&hptiop_hba_list_lock); 895 - list_del_init(&hba->link); 896 - spin_unlock(&hptiop_hba_list_lock); 897 - 898 1425 hptiop_shutdown(pcidev); 899 1426 900 1427 free_irq(hba->pcidev->irq, hba); ··· 927 1462 928 
1463 static int __init hptiop_module_init(void) 929 1464 { 930 - int error; 931 - 932 1465 printk(KERN_INFO "%s %s\n", driver_name_long, driver_ver); 933 - 934 - error = pci_register_driver(&hptiop_pci_driver); 935 - if (error < 0) 936 - return error; 937 - 938 - hptiop_cdev_major = register_chrdev(0, "hptiop", &hptiop_cdev_fops); 939 - if (hptiop_cdev_major < 0) { 940 - printk(KERN_WARNING "unable to register hptiop device.\n"); 941 - return hptiop_cdev_major; 942 - } 943 - 944 - return 0; 1466 + return pci_register_driver(&hptiop_pci_driver); 945 1467 } 946 1468 947 1469 static void __exit hptiop_module_exit(void) 948 1470 { 949 - dprintk("hptiop_module_exit\n"); 950 - unregister_chrdev(hptiop_cdev_major, "hptiop"); 951 1471 pci_unregister_driver(&hptiop_pci_driver); 952 1472 } 953 1473
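The hptiop probe path above now registers the Scsi_Host only after the IOP is fully initialized, and each failure label unwinds exactly the steps completed so far, in reverse order. A minimal userspace sketch of that goto-unwind pattern follows; the init/teardown helpers are hypothetical stand-ins, not hptiop functions.

#include <stdlib.h>

/* Hypothetical stand-ins for the real setup steps. */
static int map_bar(void)      { return 0; }  /* ioremap the BAR        */
static int request_irq_(void) { return 0; }  /* hook the interrupt     */
static int init_iop(void)     { return 0; }  /* bring up the firmware  */
static int add_host(void)     { return 0; }  /* register with midlayer */

static void unmap_bar(void)    { }
static void free_irq_(void)    { }
static void shutdown_iop(void) { }

static int probe(void)
{
	if (map_bar())
		goto fail;
	if (request_irq_())
		goto err_unmap;
	if (init_iop())
		goto err_irq;
	/* Register with the midlayer last: from here on, I/O may arrive. */
	if (add_host())
		goto err_iop;
	return 0;

err_iop:
	shutdown_iop();
err_irq:
	free_irq_();
err_unmap:
	unmap_bar();
fail:
	return -1;
}

int main(void)
{
	return probe() ? EXIT_FAILURE : EXIT_SUCCESS;
}

Registering last also removes the need for the old remove_scsi_host label, which is why the failure gotos in the hunk were retargeted to unmap_pci_bar and free_request_mem.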
+79 -130
drivers/scsi/iscsi_tcp.c
··· 43 43 44 44 #include "iscsi_tcp.h" 45 45 46 - #define ISCSI_TCP_VERSION "1.0-595" 47 - 48 46 MODULE_AUTHOR("Dmitry Yusupov <dmitry_yus@yahoo.com>, " 49 47 "Alex Aizman <itn780@yahoo.com>"); 50 48 MODULE_DESCRIPTION("iSCSI/TCP data-path"); 51 49 MODULE_LICENSE("GPL"); 52 - MODULE_VERSION(ISCSI_TCP_VERSION); 53 50 /* #define DEBUG_TCP */ 54 51 #define DEBUG_ASSERT 55 52 ··· 182 185 * must be called with session lock 183 186 */ 184 187 static void 185 - __iscsi_ctask_cleanup(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask) 188 + iscsi_tcp_cleanup_ctask(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask) 186 189 { 187 190 struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data; 191 + struct iscsi_r2t_info *r2t; 188 192 struct scsi_cmnd *sc; 193 + 194 + /* flush ctask's r2t queues */ 195 + while (__kfifo_get(tcp_ctask->r2tqueue, (void*)&r2t, sizeof(void*))) { 196 + __kfifo_put(tcp_ctask->r2tpool.queue, (void*)&r2t, 197 + sizeof(void*)); 198 + debug_scsi("iscsi_tcp_cleanup_ctask pending r2t dropped\n"); 199 + } 189 200 190 201 sc = ctask->sc; 191 202 if (unlikely(!sc)) ··· 379 374 spin_unlock(&session->lock); 380 375 return 0; 381 376 } 377 + 382 378 rc = __kfifo_get(tcp_ctask->r2tpool.queue, (void*)&r2t, sizeof(void*)); 383 379 BUG_ON(!rc); 384 380 ··· 405 399 tcp_ctask->exp_r2tsn = r2tsn + 1; 406 400 tcp_ctask->xmstate |= XMSTATE_SOL_HDR; 407 401 __kfifo_put(tcp_ctask->r2tqueue, (void*)&r2t, sizeof(void*)); 408 - __kfifo_put(conn->xmitqueue, (void*)&ctask, sizeof(void*)); 402 + list_move_tail(&ctask->running, &conn->xmitqueue); 409 403 410 404 scsi_queue_work(session->host, &conn->xmitwork); 411 405 conn->r2t_pdus_cnt++; ··· 483 477 case ISCSI_OP_SCSI_DATA_IN: 484 478 tcp_conn->in.ctask = session->cmds[itt]; 485 479 rc = iscsi_data_rsp(conn, tcp_conn->in.ctask); 480 + if (rc) 481 + return rc; 486 482 /* fall through */ 487 483 case ISCSI_OP_SCSI_CMD_RSP: 488 484 tcp_conn->in.ctask = session->cmds[itt]; ··· 492 484 goto copy_hdr; 493 485 494 486 spin_lock(&session->lock); 495 - __iscsi_ctask_cleanup(conn, tcp_conn->in.ctask); 487 + iscsi_tcp_cleanup_ctask(conn, tcp_conn->in.ctask); 496 488 rc = __iscsi_complete_pdu(conn, hdr, NULL, 0); 497 489 spin_unlock(&session->lock); 498 490 break; ··· 508 500 break; 509 501 case ISCSI_OP_LOGIN_RSP: 510 502 case ISCSI_OP_TEXT_RSP: 511 - case ISCSI_OP_LOGOUT_RSP: 512 - case ISCSI_OP_NOOP_IN: 513 503 case ISCSI_OP_REJECT: 514 504 case ISCSI_OP_ASYNC_EVENT: 505 + /* 506 + * It is possible that we could get a PDU with a buffer larger 507 + * than 8K, but there are no targets that currently do this. 508 + * For now we fail until we find a vendor that needs it 509 + */ 510 + if (DEFAULT_MAX_RECV_DATA_SEGMENT_LENGTH < 511 + tcp_conn->in.datalen) { 512 + printk(KERN_ERR "iscsi_tcp: received buffer of len %u " 513 + "but conn buffer is only %u (opcode %0x)\n", 514 + tcp_conn->in.datalen, 515 + DEFAULT_MAX_RECV_DATA_SEGMENT_LENGTH, opcode); 516 + rc = ISCSI_ERR_PROTO; 517 + break; 518 + } 519 + 515 520 if (tcp_conn->in.datalen) 516 521 goto copy_hdr; 517 522 /* fall through */ 523 + case ISCSI_OP_LOGOUT_RSP: 524 + case ISCSI_OP_NOOP_IN: 518 525 case ISCSI_OP_SCSI_TMFUNC_RSP: 519 526 rc = iscsi_complete_pdu(conn, hdr, NULL, 0); 520 527 break; ··· 546 523 * skbs to complete the command then we have to copy the header 547 524 * for later use 548 525 */ 549 - if (tcp_conn->in.zero_copy_hdr && tcp_conn->in.copy < 526 + if (tcp_conn->in.zero_copy_hdr && tcp_conn->in.copy <= 550 527 (tcp_conn->in.datalen + tcp_conn->in.padding + 551 528 (conn->datadgst_en ? 
4 : 0))) { 552 529 debug_tcp("Copying header for later use. in.copy %d in.datalen" ··· 637 614 * byte counters. 638 615 **/ 639 616 static inline int 640 - iscsi_tcp_copy(struct iscsi_tcp_conn *tcp_conn) 617 + iscsi_tcp_copy(struct iscsi_conn *conn) 641 618 { 642 - void *buf = tcp_conn->data; 619 + struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 643 620 int buf_size = tcp_conn->in.datalen; 644 621 int buf_left = buf_size - tcp_conn->data_copied; 645 622 int size = min(tcp_conn->in.copy, buf_left); ··· 650 627 BUG_ON(size <= 0); 651 628 652 629 rc = skb_copy_bits(tcp_conn->in.skb, tcp_conn->in.offset, 653 - (char*)buf + tcp_conn->data_copied, size); 630 + (char*)conn->data + tcp_conn->data_copied, size); 654 631 BUG_ON(rc); 655 632 656 633 tcp_conn->in.offset += size; ··· 768 745 done: 769 746 /* check for non-exceptional status */ 770 747 if (tcp_conn->in.hdr->flags & ISCSI_FLAG_DATA_STATUS) { 771 - debug_scsi("done [sc %lx res %d itt 0x%x]\n", 772 - (long)sc, sc->result, ctask->itt); 748 + debug_scsi("done [sc %lx res %d itt 0x%x flags 0x%x]\n", 749 + (long)sc, sc->result, ctask->itt, 750 + tcp_conn->in.hdr->flags); 773 751 spin_lock(&conn->session->lock); 774 - __iscsi_ctask_cleanup(conn, ctask); 752 + iscsi_tcp_cleanup_ctask(conn, ctask); 775 753 __iscsi_complete_pdu(conn, tcp_conn->in.hdr, NULL, 0); 776 754 spin_unlock(&conn->session->lock); 777 755 } ··· 793 769 break; 794 770 case ISCSI_OP_SCSI_CMD_RSP: 795 771 spin_lock(&conn->session->lock); 796 - __iscsi_ctask_cleanup(conn, tcp_conn->in.ctask); 772 + iscsi_tcp_cleanup_ctask(conn, tcp_conn->in.ctask); 797 773 spin_unlock(&conn->session->lock); 798 774 case ISCSI_OP_TEXT_RSP: 799 775 case ISCSI_OP_LOGIN_RSP: 800 - case ISCSI_OP_NOOP_IN: 801 776 case ISCSI_OP_ASYNC_EVENT: 802 777 case ISCSI_OP_REJECT: 803 778 /* 804 779 * Collect data segment to the connection's data 805 780 * placeholder 806 781 */ 807 - if (iscsi_tcp_copy(tcp_conn)) { 782 + if (iscsi_tcp_copy(conn)) { 808 783 rc = -EAGAIN; 809 784 goto exit; 810 785 } 811 786 812 - rc = iscsi_complete_pdu(conn, tcp_conn->in.hdr, tcp_conn->data, 787 + rc = iscsi_complete_pdu(conn, tcp_conn->in.hdr, conn->data, 813 788 tcp_conn->in.datalen); 814 789 if (!rc && conn->datadgst_en && opcode != ISCSI_OP_LOGIN_RSP) 815 - iscsi_recv_digest_update(tcp_conn, tcp_conn->data, 790 + iscsi_recv_digest_update(tcp_conn, conn->data, 816 791 tcp_conn->in.datalen); 817 792 break; 818 793 default: ··· 866 843 if (rc == -EAGAIN) 867 844 goto nomore; 868 845 else { 869 - iscsi_conn_failure(conn, rc); 846 + iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED); 870 847 return 0; 871 848 } 872 849 } ··· 920 897 if (rc) { 921 898 if (rc == -EAGAIN) 922 899 goto again; 923 - iscsi_conn_failure(conn, rc); 900 + iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED); 924 901 return 0; 925 902 } 926 903 tcp_conn->in.copy -= tcp_conn->in.padding; ··· 1051 1028 } 1052 1029 1053 1030 static void 1054 - iscsi_conn_restore_callbacks(struct iscsi_conn *conn) 1031 + iscsi_conn_restore_callbacks(struct iscsi_tcp_conn *tcp_conn) 1055 1032 { 1056 - struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 1057 1033 struct sock *sk = tcp_conn->sock->sk; 1058 1034 1059 1035 /* restore socket callbacks, see also: iscsi_conn_set_callbacks() */ ··· 1330 1308 ctask->imm_count - 1331 1309 ctask->unsol_count; 1332 1310 1333 - debug_scsi("cmd [itt %x total %d imm %d imm_data %d " 1311 + debug_scsi("cmd [itt 0x%x total %d imm %d imm_data %d " 1334 1312 "r2t_data %d]\n", 1335 1313 ctask->itt, ctask->total_length, ctask->imm_count, 1336 1314 
ctask->unsol_count, tcp_ctask->r2t_data_count); ··· 1658 1636 } 1659 1637 solicit_again: 1660 1638 /* 1661 - * send Data-Out whitnin this R2T sequence. 1639 + * send Data-Out within this R2T sequence. 1662 1640 */ 1663 1641 if (!r2t->data_count) 1664 1642 goto data_out_done; ··· 1753 1731 struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data; 1754 1732 struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 1755 1733 struct iscsi_data_task *dtask = tcp_ctask->dtask; 1756 - int sent, rc; 1734 + int sent = 0, rc; 1757 1735 1758 1736 tcp_ctask->xmstate &= ~XMSTATE_W_PAD; 1759 1737 iscsi_buf_init_iov(&tcp_ctask->sendbuf, (char*)&tcp_ctask->pad, ··· 1922 1900 tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER; 1923 1901 /* initial operational parameters */ 1924 1902 tcp_conn->hdr_size = sizeof(struct iscsi_hdr); 1925 - tcp_conn->data_size = DEFAULT_MAX_RECV_DATA_SEGMENT_LENGTH; 1926 - 1927 - /* allocate initial PDU receive place holder */ 1928 - if (tcp_conn->data_size <= PAGE_SIZE) 1929 - tcp_conn->data = kmalloc(tcp_conn->data_size, GFP_KERNEL); 1930 - else 1931 - tcp_conn->data = (void*)__get_free_pages(GFP_KERNEL, 1932 - get_order(tcp_conn->data_size)); 1933 - if (!tcp_conn->data) 1934 - goto max_recv_dlenght_alloc_fail; 1935 1903 1936 1904 return cls_conn; 1937 1905 1938 - max_recv_dlenght_alloc_fail: 1939 - kfree(tcp_conn); 1940 1906 tcp_conn_alloc_fail: 1941 1907 iscsi_conn_teardown(cls_conn); 1942 1908 return NULL; 1909 + } 1910 + 1911 + static void 1912 + iscsi_tcp_release_conn(struct iscsi_conn *conn) 1913 + { 1914 + struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 1915 + 1916 + if (!tcp_conn->sock) 1917 + return; 1918 + 1919 + sock_hold(tcp_conn->sock->sk); 1920 + iscsi_conn_restore_callbacks(tcp_conn); 1921 + sock_put(tcp_conn->sock->sk); 1922 + 1923 + sock_release(tcp_conn->sock); 1924 + tcp_conn->sock = NULL; 1925 + conn->recv_lock = NULL; 1943 1926 } 1944 1927 1945 1928 static void ··· 1957 1930 if (conn->hdrdgst_en || conn->datadgst_en) 1958 1931 digest = 1; 1959 1932 1933 + iscsi_tcp_release_conn(conn); 1960 1934 iscsi_conn_teardown(cls_conn); 1961 1935 1962 1936 /* now free tcp_conn */ ··· 1972 1944 crypto_free_tfm(tcp_conn->data_rx_tfm); 1973 1945 } 1974 1946 1975 - /* free conn->data, size = MaxRecvDataSegmentLength */ 1976 - if (tcp_conn->data_size <= PAGE_SIZE) 1977 - kfree(tcp_conn->data); 1978 - else 1979 - free_pages((unsigned long)tcp_conn->data, 1980 - get_order(tcp_conn->data_size)); 1981 1947 kfree(tcp_conn); 1948 + } 1949 + 1950 + static void 1951 + iscsi_tcp_conn_stop(struct iscsi_cls_conn *cls_conn, int flag) 1952 + { 1953 + struct iscsi_conn *conn = cls_conn->dd_data; 1954 + 1955 + iscsi_conn_stop(cls_conn, flag); 1956 + iscsi_tcp_release_conn(conn); 1982 1957 } 1983 1958 1984 1959 static int ··· 2032 2001 return 0; 2033 2002 } 2034 2003 2035 - static void 2036 - iscsi_tcp_cleanup_ctask(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask) 2037 - { 2038 - struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data; 2039 - struct iscsi_r2t_info *r2t; 2040 - 2041 - /* flush ctask's r2t queues */ 2042 - while (__kfifo_get(tcp_ctask->r2tqueue, (void*)&r2t, sizeof(void*))) 2043 - __kfifo_put(tcp_ctask->r2tpool.queue, (void*)&r2t, 2044 - sizeof(void*)); 2045 - 2046 - __iscsi_ctask_cleanup(conn, ctask); 2047 - } 2048 - 2049 - static void 2050 - iscsi_tcp_suspend_conn_rx(struct iscsi_conn *conn) 2051 - { 2052 - struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 2053 - struct sock *sk; 2054 - 2055 - if (!tcp_conn->sock) 2056 - return; 2057 - 2058 - sk = tcp_conn->sock->sk; 2059 - 
write_lock_bh(&sk->sk_callback_lock); 2060 - set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_rx); 2061 - write_unlock_bh(&sk->sk_callback_lock); 2062 - } 2063 - 2064 - static void 2065 - iscsi_tcp_terminate_conn(struct iscsi_conn *conn) 2066 - { 2067 - struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 2068 - 2069 - if (!tcp_conn->sock) 2070 - return; 2071 - 2072 - sock_hold(tcp_conn->sock->sk); 2073 - iscsi_conn_restore_callbacks(conn); 2074 - sock_put(tcp_conn->sock->sk); 2075 - 2076 - sock_release(tcp_conn->sock); 2077 - tcp_conn->sock = NULL; 2078 - conn->recv_lock = NULL; 2079 - } 2080 - 2081 2004 /* called with host lock */ 2082 2005 static void 2083 2006 iscsi_tcp_mgmt_init(struct iscsi_conn *conn, struct iscsi_mgmt_task *mtask, ··· 2042 2057 iscsi_buf_init_iov(&tcp_mtask->headbuf, (char*)mtask->hdr, 2043 2058 sizeof(struct iscsi_hdr)); 2044 2059 tcp_mtask->xmstate = XMSTATE_IMM_HDR; 2060 + tcp_mtask->sent = 0; 2045 2061 2046 2062 if (mtask->data_count) 2047 2063 iscsi_buf_init_iov(&tcp_mtask->sendbuf, (char*)mtask->data, ··· 2124 2138 int value; 2125 2139 2126 2140 switch(param) { 2127 - case ISCSI_PARAM_MAX_RECV_DLENGTH: { 2128 - char *saveptr = tcp_conn->data; 2129 - gfp_t flags = GFP_KERNEL; 2130 - 2131 - sscanf(buf, "%d", &value); 2132 - if (tcp_conn->data_size >= value) { 2133 - iscsi_set_param(cls_conn, param, buf, buflen); 2134 - break; 2135 - } 2136 - 2137 - spin_lock_bh(&session->lock); 2138 - if (conn->stop_stage == STOP_CONN_RECOVER) 2139 - flags = GFP_ATOMIC; 2140 - spin_unlock_bh(&session->lock); 2141 - 2142 - if (value <= PAGE_SIZE) 2143 - tcp_conn->data = kmalloc(value, flags); 2144 - else 2145 - tcp_conn->data = (void*)__get_free_pages(flags, 2146 - get_order(value)); 2147 - if (tcp_conn->data == NULL) { 2148 - tcp_conn->data = saveptr; 2149 - return -ENOMEM; 2150 - } 2151 - if (tcp_conn->data_size <= PAGE_SIZE) 2152 - kfree(saveptr); 2153 - else 2154 - free_pages((unsigned long)saveptr, 2155 - get_order(tcp_conn->data_size)); 2156 - iscsi_set_param(cls_conn, param, buf, buflen); 2157 - tcp_conn->data_size = value; 2158 - break; 2159 - } 2160 2141 case ISCSI_PARAM_HDRDGST_EN: 2161 2142 iscsi_set_param(cls_conn, param, buf, buflen); 2162 2143 tcp_conn->hdr_size = sizeof(struct iscsi_hdr); ··· 2314 2361 } 2315 2362 2316 2363 static struct scsi_host_template iscsi_sht = { 2317 - .name = "iSCSI Initiator over TCP/IP, v" 2318 - ISCSI_TCP_VERSION, 2364 + .name = "iSCSI Initiator over TCP/IP", 2319 2365 .queuecommand = iscsi_queuecommand, 2320 2366 .change_queue_depth = iscsi_change_queue_depth, 2321 2367 .can_queue = ISCSI_XMIT_CMDS_MAX - 1, ··· 2366 2414 .get_conn_param = iscsi_tcp_conn_get_param, 2367 2415 .get_session_param = iscsi_session_get_param, 2368 2416 .start_conn = iscsi_conn_start, 2369 - .stop_conn = iscsi_conn_stop, 2370 - /* these are called as part of conn recovery */ 2371 - .suspend_conn_recv = iscsi_tcp_suspend_conn_rx, 2372 - .terminate_conn = iscsi_tcp_terminate_conn, 2417 + .stop_conn = iscsi_tcp_conn_stop, 2373 2418 /* IO */ 2374 2419 .send_pdu = iscsi_conn_send_pdu, 2375 2420 .get_stats = iscsi_conn_get_stats,
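The new length check in the iscsi_tcp hunk rejects control PDUs whose data segment would overflow the preallocated receive buffer rather than copying past it. A self-contained model of the check, assuming the same 8 KB default segment length named in the patch (the helper name and demo values are illustrative):

#include <stdio.h>

/* Mirrors the driver's fixed receive buffer size; the constant name
 * matches the patch, the standalone check itself is a model. */
#define DEFAULT_MAX_RECV_DATA_SEGMENT_LENGTH 8192

/* Return 0 if a control PDU's data segment fits the preallocated
 * buffer, nonzero (treated as a protocol error) otherwise. */
static int check_pdu_datalen(unsigned int datalen, unsigned int opcode)
{
	if (datalen > DEFAULT_MAX_RECV_DATA_SEGMENT_LENGTH) {
		fprintf(stderr,
			"received buffer of len %u but conn buffer is only %u "
			"(opcode 0x%x)\n",
			datalen, DEFAULT_MAX_RECV_DATA_SEGMENT_LENGTH, opcode);
		return -1;	/* ISCSI_ERR_PROTO in the driver */
	}
	return 0;
}

int main(void)
{
	printf("%d\n", check_pdu_datalen(512, 0x23));	/* fits     */
	printf("%d\n", check_pdu_datalen(65536, 0x23));	/* rejected */
	return 0;
}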
-2
drivers/scsi/iscsi_tcp.h
··· 78 78 char hdrext[4*sizeof(__u16) + 79 79 sizeof(__u32)]; 80 80 int data_copied; 81 - char *data; /* data placeholder */ 82 - int data_size; /* actual recv_dlength */ 83 81 int stop_stage; /* conn_stop() flag: * 84 82 * stop to recover, * 85 83 * stop to terminate */
+116 -98
drivers/scsi/libiscsi.c
··· 189 189 { 190 190 struct scsi_cmnd *sc = ctask->sc; 191 191 192 + ctask->state = ISCSI_TASK_COMPLETED; 192 193 ctask->sc = NULL; 193 194 list_del_init(&ctask->running); 194 195 __kfifo_put(session->cmdpool.queue, (void*)&ctask, sizeof(void*)); ··· 276 275 return rc; 277 276 } 278 277 278 + static void iscsi_tmf_rsp(struct iscsi_conn *conn, struct iscsi_hdr *hdr) 279 + { 280 + struct iscsi_tm_rsp *tmf = (struct iscsi_tm_rsp *)hdr; 281 + 282 + conn->exp_statsn = be32_to_cpu(hdr->statsn) + 1; 283 + conn->tmfrsp_pdus_cnt++; 284 + 285 + if (conn->tmabort_state != TMABORT_INITIAL) 286 + return; 287 + 288 + if (tmf->response == ISCSI_TMF_RSP_COMPLETE) 289 + conn->tmabort_state = TMABORT_SUCCESS; 290 + else if (tmf->response == ISCSI_TMF_RSP_NO_TASK) 291 + conn->tmabort_state = TMABORT_NOT_FOUND; 292 + else 293 + conn->tmabort_state = TMABORT_FAILED; 294 + wake_up(&conn->ehwait); 295 + } 296 + 279 297 /** 280 298 * __iscsi_complete_pdu - complete pdu 281 299 * @conn: iscsi conn ··· 360 340 361 341 switch(opcode) { 362 342 case ISCSI_OP_LOGOUT_RSP: 343 + if (datalen) { 344 + rc = ISCSI_ERR_PROTO; 345 + break; 346 + } 363 347 conn->exp_statsn = be32_to_cpu(hdr->statsn) + 1; 364 348 /* fall through */ 365 349 case ISCSI_OP_LOGIN_RSP: ··· 372 348 * login related PDU's exp_statsn is handled in 373 349 * userspace 374 350 */ 375 - rc = iscsi_recv_pdu(conn->cls_conn, hdr, data, datalen); 351 + if (iscsi_recv_pdu(conn->cls_conn, hdr, data, datalen)) 352 + rc = ISCSI_ERR_CONN_FAILED; 376 353 list_del(&mtask->running); 377 354 if (conn->login_mtask != mtask) 378 355 __kfifo_put(session->mgmtpool.queue, ··· 385 360 break; 386 361 } 387 362 388 - conn->exp_statsn = be32_to_cpu(hdr->statsn) + 1; 389 - conn->tmfrsp_pdus_cnt++; 390 - if (conn->tmabort_state == TMABORT_INITIAL) { 391 - conn->tmabort_state = 392 - ((struct iscsi_tm_rsp *)hdr)-> 393 - response == ISCSI_TMF_RSP_COMPLETE ? 
394 - TMABORT_SUCCESS:TMABORT_FAILED; 395 - /* unblock eh_abort() */ 396 - wake_up(&conn->ehwait); 397 - } 363 + iscsi_tmf_rsp(conn, hdr); 398 364 break; 399 365 case ISCSI_OP_NOOP_IN: 400 - if (hdr->ttt != ISCSI_RESERVED_TAG) { 366 + if (hdr->ttt != ISCSI_RESERVED_TAG || datalen) { 401 367 rc = ISCSI_ERR_PROTO; 402 368 break; 403 369 } 404 370 conn->exp_statsn = be32_to_cpu(hdr->statsn) + 1; 405 371 406 - rc = iscsi_recv_pdu(conn->cls_conn, hdr, data, datalen); 372 + if (iscsi_recv_pdu(conn->cls_conn, hdr, data, datalen)) 373 + rc = ISCSI_ERR_CONN_FAILED; 407 374 list_del(&mtask->running); 408 375 if (conn->login_mtask != mtask) 409 376 __kfifo_put(session->mgmtpool.queue, ··· 408 391 } else if (itt == ISCSI_RESERVED_TAG) { 409 392 switch(opcode) { 410 393 case ISCSI_OP_NOOP_IN: 411 - if (!datalen) { 412 - rc = iscsi_check_assign_cmdsn(session, 413 - (struct iscsi_nopin*)hdr); 414 - if (!rc && hdr->ttt != ISCSI_RESERVED_TAG) 415 - rc = iscsi_recv_pdu(conn->cls_conn, 416 - hdr, NULL, 0); 417 - } else 394 + if (datalen) { 418 395 rc = ISCSI_ERR_PROTO; 396 + break; 397 + } 398 + 399 + rc = iscsi_check_assign_cmdsn(session, 400 + (struct iscsi_nopin*)hdr); 401 + if (rc) 402 + break; 403 + 404 + if (hdr->ttt == ISCSI_RESERVED_TAG) 405 + break; 406 + 407 + if (iscsi_recv_pdu(conn->cls_conn, hdr, NULL, 0)) 408 + rc = ISCSI_ERR_CONN_FAILED; 419 409 break; 420 410 case ISCSI_OP_REJECT: 421 411 /* we need sth like iscsi_reject_rsp()*/ ··· 592 568 } 593 569 594 570 /* process command queue */ 595 - while (__kfifo_get(conn->xmitqueue, (void*)&conn->ctask, 596 - sizeof(void*))) { 571 + spin_lock_bh(&conn->session->lock); 572 + while (!list_empty(&conn->xmitqueue)) { 597 573 /* 598 574 * iscsi tcp may readd the task to the xmitqueue to send 599 575 * write data 600 576 */ 601 - spin_lock_bh(&conn->session->lock); 602 - if (list_empty(&conn->ctask->running)) 603 - list_add_tail(&conn->ctask->running, &conn->run_list); 577 + conn->ctask = list_entry(conn->xmitqueue.next, 578 + struct iscsi_cmd_task, running); 579 + conn->ctask->state = ISCSI_TASK_RUNNING; 580 + list_move_tail(conn->xmitqueue.next, &conn->run_list); 604 581 spin_unlock_bh(&conn->session->lock); 582 + 605 583 rc = tt->xmit_cmd_task(conn, conn->ctask); 606 584 if (rc) 607 585 goto again; 586 + spin_lock_bh(&conn->session->lock); 608 587 } 588 + spin_unlock_bh(&conn->session->lock); 609 589 /* done with this ctask */ 610 590 conn->ctask = NULL; 611 591 ··· 719 691 sc->SCp.phase = session->age; 720 692 sc->SCp.ptr = (char *)ctask; 721 693 694 + ctask->state = ISCSI_TASK_PENDING; 722 695 ctask->mtask = NULL; 723 696 ctask->conn = conn; 724 697 ctask->sc = sc; ··· 729 700 730 701 session->tt->init_cmd_task(ctask); 731 702 732 - __kfifo_put(conn->xmitqueue, (void*)&ctask, sizeof(void*)); 703 + list_add_tail(&ctask->running, &conn->xmitqueue); 733 704 debug_scsi( 734 705 "ctask enq [%s cid %d sc %lx itt 0x%x len %d cmdsn %d win %d]\n", 735 706 sc->sc_data_direction == DMA_TO_DEVICE ? 
"write" : "read", ··· 1006 977 /* 1007 978 * xmit mutex and session lock must be held 1008 979 */ 1009 - #define iscsi_remove_task(tasktype) \ 1010 - static struct iscsi_##tasktype * \ 1011 - iscsi_remove_##tasktype(struct kfifo *fifo, uint32_t itt) \ 1012 - { \ 1013 - int i, nr_tasks = __kfifo_len(fifo) / sizeof(void*); \ 1014 - struct iscsi_##tasktype *task; \ 1015 - \ 1016 - debug_scsi("searching %d tasks\n", nr_tasks); \ 1017 - \ 1018 - for (i = 0; i < nr_tasks; i++) { \ 1019 - __kfifo_get(fifo, (void*)&task, sizeof(void*)); \ 1020 - debug_scsi("check task %u\n", task->itt); \ 1021 - \ 1022 - if (task->itt == itt) { \ 1023 - debug_scsi("matched task\n"); \ 1024 - return task; \ 1025 - } \ 1026 - \ 1027 - __kfifo_put(fifo, (void*)&task, sizeof(void*)); \ 1028 - } \ 1029 - return NULL; \ 1030 - } 980 + static struct iscsi_mgmt_task * 981 + iscsi_remove_mgmt_task(struct kfifo *fifo, uint32_t itt) 982 + { 983 + int i, nr_tasks = __kfifo_len(fifo) / sizeof(void*); 984 + struct iscsi_mgmt_task *task; 1031 985 1032 - iscsi_remove_task(mgmt_task); 1033 - iscsi_remove_task(cmd_task); 986 + debug_scsi("searching %d tasks\n", nr_tasks); 987 + 988 + for (i = 0; i < nr_tasks; i++) { 989 + __kfifo_get(fifo, (void*)&task, sizeof(void*)); 990 + debug_scsi("check task %u\n", task->itt); 991 + 992 + if (task->itt == itt) { 993 + debug_scsi("matched task\n"); 994 + return task; 995 + } 996 + 997 + __kfifo_put(fifo, (void*)&task, sizeof(void*)); 998 + } 999 + return NULL; 1000 + } 1034 1001 1035 1002 static int iscsi_ctask_mtask_cleanup(struct iscsi_cmd_task *ctask) 1036 1003 { ··· 1052 1027 { 1053 1028 struct scsi_cmnd *sc; 1054 1029 1055 - conn->session->tt->cleanup_cmd_task(conn, ctask); 1056 - iscsi_ctask_mtask_cleanup(ctask); 1057 - 1058 1030 sc = ctask->sc; 1059 1031 if (!sc) 1060 1032 return; 1033 + 1034 + conn->session->tt->cleanup_cmd_task(conn, ctask); 1035 + iscsi_ctask_mtask_cleanup(ctask); 1036 + 1061 1037 sc->result = err; 1062 1038 sc->resid = sc->request_bufflen; 1063 1039 iscsi_complete_command(conn->session, ctask); ··· 1069 1043 struct iscsi_cmd_task *ctask = (struct iscsi_cmd_task *)sc->SCp.ptr; 1070 1044 struct iscsi_conn *conn = ctask->conn; 1071 1045 struct iscsi_session *session = conn->session; 1072 - struct iscsi_cmd_task *pending_ctask; 1073 1046 int rc; 1074 1047 1075 1048 conn->eh_abort_cnt++; ··· 1086 1061 goto failed; 1087 1062 1088 1063 /* ctask completed before time out */ 1089 - if (!ctask->sc) 1090 - goto success; 1064 + if (!ctask->sc) { 1065 + spin_unlock_bh(&session->lock); 1066 + debug_scsi("sc completed while abort in progress\n"); 1067 + goto success_rel_mutex; 1068 + } 1091 1069 1092 1070 /* what should we do here ? 
*/ 1093 1071 if (conn->ctask == ctask) { ··· 1099 1071 goto failed; 1100 1072 } 1101 1073 1102 - /* check for the easy pending cmd abort */ 1103 - pending_ctask = iscsi_remove_cmd_task(conn->xmitqueue, ctask->itt); 1104 - if (pending_ctask) { 1105 - /* iscsi_tcp queues write transfers on the xmitqueue */ 1106 - if (list_empty(&pending_ctask->running)) { 1107 - debug_scsi("found pending task\n"); 1108 - goto success; 1109 - } else 1110 - __kfifo_put(conn->xmitqueue, (void*)&pending_ctask, 1111 - sizeof(void*)); 1112 - } 1074 + if (ctask->state == ISCSI_TASK_PENDING) 1075 + goto success_cleanup; 1113 1076 1114 1077 conn->tmabort_state = TMABORT_INITIAL; 1115 1078 ··· 1108 1089 rc = iscsi_exec_abort_task(sc, ctask); 1109 1090 spin_lock_bh(&session->lock); 1110 1091 1111 - iscsi_ctask_mtask_cleanup(ctask); 1112 1092 if (rc || sc->SCp.phase != session->age || 1113 1093 session->state != ISCSI_STATE_LOGGED_IN) 1114 1094 goto failed; 1095 + iscsi_ctask_mtask_cleanup(ctask); 1115 1096 1116 - /* ctask completed before tmf abort response */ 1117 - if (!ctask->sc) { 1118 - debug_scsi("sc completed while abort in progress\n"); 1119 - goto success; 1120 - } 1121 - 1122 - if (conn->tmabort_state != TMABORT_SUCCESS) { 1097 + switch (conn->tmabort_state) { 1098 + case TMABORT_SUCCESS: 1099 + goto success_cleanup; 1100 + case TMABORT_NOT_FOUND: 1101 + if (!ctask->sc) { 1102 + /* ctask completed before tmf abort response */ 1103 + spin_unlock_bh(&session->lock); 1104 + debug_scsi("sc completed while abort in progress\n"); 1105 + goto success_rel_mutex; 1106 + } 1107 + /* fall through */ 1108 + default: 1109 + /* timedout or failed */ 1123 1110 spin_unlock_bh(&session->lock); 1124 1111 iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED); 1125 1112 spin_lock_bh(&session->lock); 1126 1113 goto failed; 1127 1114 } 1128 1115 1129 - success: 1116 + success_cleanup: 1130 1117 debug_scsi("abort success [sc %lx itt 0x%x]\n", (long)sc, ctask->itt); 1131 1118 spin_unlock_bh(&session->lock); 1132 1119 ··· 1146 1121 spin_unlock(&session->lock); 1147 1122 write_unlock_bh(conn->recv_lock); 1148 1123 1124 + success_rel_mutex: 1149 1125 mutex_unlock(&conn->xmitmutex); 1150 1126 return SUCCESS; 1151 1127 ··· 1289 1263 if (cmd_task_size) 1290 1264 ctask->dd_data = &ctask[1]; 1291 1265 ctask->itt = cmd_i; 1266 + INIT_LIST_HEAD(&ctask->running); 1292 1267 } 1293 1268 1294 1269 spin_lock_init(&session->lock); ··· 1309 1282 if (mgmt_task_size) 1310 1283 mtask->dd_data = &mtask[1]; 1311 1284 mtask->itt = ISCSI_MGMT_ITT_OFFSET + cmd_i; 1285 + INIT_LIST_HEAD(&mtask->running); 1312 1286 } 1313 1287 1314 1288 if (scsi_add_host(shost, NULL)) ··· 1350 1322 { 1351 1323 struct Scsi_Host *shost = iscsi_session_to_shost(cls_session); 1352 1324 struct iscsi_session *session = iscsi_hostdata(shost->hostdata); 1325 + struct module *owner = cls_session->transport->owner; 1353 1326 1354 1327 scsi_remove_host(shost); 1355 1328 1356 1329 iscsi_pool_free(&session->mgmtpool, (void**)session->mgmt_cmds); 1357 1330 iscsi_pool_free(&session->cmdpool, (void**)session->cmds); 1358 1331 1332 + kfree(session->targetname); 1333 + 1359 1334 iscsi_destroy_session(cls_session); 1360 1335 scsi_host_put(shost); 1361 - module_put(cls_session->transport->owner); 1336 + module_put(owner); 1362 1337 } 1363 1338 EXPORT_SYMBOL_GPL(iscsi_session_teardown); 1364 1339 ··· 1392 1361 conn->tmabort_state = TMABORT_INITIAL; 1393 1362 INIT_LIST_HEAD(&conn->run_list); 1394 1363 INIT_LIST_HEAD(&conn->mgmt_run_list); 1395 - 1396 - /* initialize general xmit PDU commands queue */ 
1397 - conn->xmitqueue = kfifo_alloc(session->cmds_max * sizeof(void*), 1398 - GFP_KERNEL, NULL); 1399 - if (conn->xmitqueue == ERR_PTR(-ENOMEM)) 1400 - goto xmitqueue_alloc_fail; 1364 + INIT_LIST_HEAD(&conn->xmitqueue); 1401 1365 1402 1366 /* initialize general immediate & non-immediate PDU commands queue */ 1403 1367 conn->immqueue = kfifo_alloc(session->mgmtpool_max * sizeof(void*), ··· 1420 1394 data = kmalloc(DEFAULT_MAX_RECV_DATA_SEGMENT_LENGTH, GFP_KERNEL); 1421 1395 if (!data) 1422 1396 goto login_mtask_data_alloc_fail; 1423 - conn->login_mtask->data = data; 1397 + conn->login_mtask->data = conn->data = data; 1424 1398 1425 1399 init_timer(&conn->tmabort_timer); 1426 1400 mutex_init(&conn->xmitmutex); ··· 1436 1410 mgmtqueue_alloc_fail: 1437 1411 kfifo_free(conn->immqueue); 1438 1412 immqueue_alloc_fail: 1439 - kfifo_free(conn->xmitqueue); 1440 - xmitqueue_alloc_fail: 1441 1413 iscsi_destroy_conn(cls_conn); 1442 1414 return NULL; 1443 1415 } ··· 1456 1432 1457 1433 set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx); 1458 1434 mutex_lock(&conn->xmitmutex); 1459 - if (conn->c_stage == ISCSI_CONN_INITIAL_STAGE) { 1460 - if (session->tt->suspend_conn_recv) 1461 - session->tt->suspend_conn_recv(conn); 1462 - 1463 - session->tt->terminate_conn(conn); 1464 - } 1465 1435 1466 1436 spin_lock_bh(&session->lock); 1467 1437 conn->c_stage = ISCSI_CONN_CLEANUP_WAIT; ··· 1492 1474 } 1493 1475 1494 1476 spin_lock_bh(&session->lock); 1495 - kfree(conn->login_mtask->data); 1477 + kfree(conn->data); 1478 + kfree(conn->persistent_address); 1496 1479 __kfifo_put(session->mgmtpool.queue, (void*)&conn->login_mtask, 1497 1480 sizeof(void*)); 1498 1481 list_del(&conn->item); ··· 1508 1489 session->cmdsn = session->max_cmdsn = session->exp_cmdsn = 1; 1509 1490 spin_unlock_bh(&session->lock); 1510 1491 1511 - kfifo_free(conn->xmitqueue); 1512 1492 kfifo_free(conn->immqueue); 1513 1493 kfifo_free(conn->mgmtqueue); 1514 1494 ··· 1590 1572 struct iscsi_cmd_task *ctask, *tmp; 1591 1573 1592 1574 /* flush pending */ 1593 - while (__kfifo_get(conn->xmitqueue, (void*)&ctask, sizeof(void*))) { 1575 + list_for_each_entry_safe(ctask, tmp, &conn->xmitqueue, running) { 1594 1576 debug_scsi("failing pending sc %p itt 0x%x\n", ctask->sc, 1595 1577 ctask->itt); 1596 1578 fail_command(conn, ctask, DID_BUS_BUSY << 16); ··· 1633 1615 set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx); 1634 1616 spin_unlock_bh(&session->lock); 1635 1617 1636 - if (session->tt->suspend_conn_recv) 1637 - session->tt->suspend_conn_recv(conn); 1618 + write_lock_bh(conn->recv_lock); 1619 + set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_rx); 1620 + write_unlock_bh(conn->recv_lock); 1638 1621 1639 1622 mutex_lock(&conn->xmitmutex); 1640 1623 /* ··· 1654 1635 } 1655 1636 } 1656 1637 1657 - session->tt->terminate_conn(conn); 1658 1638 /* 1659 1639 * flush queues. 1660 1640 */
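Two things are worth calling out in the libiscsi rework above: commands now carry an explicit state (ISCSI_TASK_PENDING, RUNNING, COMPLETED) on a list instead of being fished back out of a kfifo, and the new iscsi_tmf_rsp() maps the target's task-management response onto the connection's abort state, keeping ISCSI_TMF_RSP_NO_TASK distinct so eh_abort can treat "task already gone" as success once it confirms the command completed. A userspace sketch of that three-way mapping; the enum and response values mirror the patch but are reproduced here only for illustration:

#include <stdio.h>

enum tmabort_state { TMABORT_INITIAL, TMABORT_SUCCESS,
		     TMABORT_NOT_FOUND, TMABORT_FAILED };

#define ISCSI_TMF_RSP_COMPLETE 0x00
#define ISCSI_TMF_RSP_NO_TASK  0x01

/* Same three-way mapping as iscsi_tmf_rsp(): complete -> success,
 * "no such task" -> not found (the command may already have finished),
 * anything else -> failed, which forces connection recovery. */
static enum tmabort_state map_tmf_response(unsigned char response)
{
	if (response == ISCSI_TMF_RSP_COMPLETE)
		return TMABORT_SUCCESS;
	if (response == ISCSI_TMF_RSP_NO_TASK)
		return TMABORT_NOT_FOUND;
	return TMABORT_FAILED;
}

int main(void)
{
	printf("%d %d %d\n", map_tmf_response(0x00),
	       map_tmf_response(0x01), map_tmf_response(0x05));
	return 0;
}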
+92 -9
drivers/scsi/lpfc/lpfc_attr.c
··· 222 222 pmboxq->mb.mbxCommand = MBX_DOWN_LINK; 223 223 pmboxq->mb.mbxOwner = OWN_HOST; 224 224 225 - mbxstatus = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2); 225 + mbxstatus = lpfc_sli_issue_mbox_wait(phba, pmboxq, LPFC_MBOX_TMO * 2); 226 226 227 227 if ((mbxstatus == MBX_SUCCESS) && (pmboxq->mb.mbxStatus == 0)) { 228 228 memset((void *)pmboxq, 0, sizeof (LPFC_MBOXQ_t)); ··· 884 884 phba->sysfs_mbox.mbox == NULL ) { 885 885 sysfs_mbox_idle(phba); 886 886 spin_unlock_irq(host->host_lock); 887 - return -EINVAL; 887 + return -EAGAIN; 888 888 } 889 889 } 890 890 ··· 1000 1000 spin_unlock_irq(phba->host->host_lock); 1001 1001 rc = lpfc_sli_issue_mbox_wait (phba, 1002 1002 phba->sysfs_mbox.mbox, 1003 - phba->fc_ratov * 2); 1003 + lpfc_mbox_tmo_val(phba, 1004 + phba->sysfs_mbox.mbox->mb.mbxCommand) * HZ); 1004 1005 spin_lock_irq(phba->host->host_lock); 1005 1006 } 1006 1007 1007 1008 if (rc != MBX_SUCCESS) { 1008 1009 sysfs_mbox_idle(phba); 1009 1010 spin_unlock_irq(host->host_lock); 1010 - return -ENODEV; 1011 + return (rc == MBX_TIMEOUT) ? -ETIME : -ENODEV; 1011 1012 } 1012 1013 phba->sysfs_mbox.state = SMBOX_READING; 1013 1014 } ··· 1017 1016 printk(KERN_WARNING "mbox_read: Bad State\n"); 1018 1017 sysfs_mbox_idle(phba); 1019 1018 spin_unlock_irq(host->host_lock); 1020 - return -EINVAL; 1019 + return -EAGAIN; 1021 1020 } 1022 1021 1023 1022 memcpy(buf, (uint8_t *) & phba->sysfs_mbox.mbox->mb + off, count); ··· 1211 1210 struct lpfc_hba *phba = (struct lpfc_hba *)shost->hostdata; 1212 1211 struct lpfc_sli *psli = &phba->sli; 1213 1212 struct fc_host_statistics *hs = &phba->link_stats; 1213 + struct lpfc_lnk_stat * lso = &psli->lnk_stat_offsets; 1214 1214 LPFC_MBOXQ_t *pmboxq; 1215 1215 MAILBOX_t *pmb; 1216 + unsigned long seconds; 1216 1217 int rc = 0; 1217 1218 1218 1219 pmboxq = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); ··· 1275 1272 hs->invalid_crc_count = pmb->un.varRdLnk.crcCnt; 1276 1273 hs->error_frames = pmb->un.varRdLnk.crcCnt; 1277 1274 1275 + hs->link_failure_count -= lso->link_failure_count; 1276 + hs->loss_of_sync_count -= lso->loss_of_sync_count; 1277 + hs->loss_of_signal_count -= lso->loss_of_signal_count; 1278 + hs->prim_seq_protocol_err_count -= lso->prim_seq_protocol_err_count; 1279 + hs->invalid_tx_word_count -= lso->invalid_tx_word_count; 1280 + hs->invalid_crc_count -= lso->invalid_crc_count; 1281 + hs->error_frames -= lso->error_frames; 1282 + 1278 1283 if (phba->fc_topology == TOPOLOGY_LOOP) { 1279 1284 hs->lip_count = (phba->fc_eventTag >> 1); 1285 + hs->lip_count -= lso->link_events; 1280 1286 hs->nos_count = -1; 1281 1287 } else { 1282 1288 hs->lip_count = -1; 1283 1289 hs->nos_count = (phba->fc_eventTag >> 1); 1290 + hs->nos_count -= lso->link_events; 1284 1291 } 1285 1292 1286 1293 hs->dumped_frames = -1; 1287 1294 1288 - /* FIX ME */ 1289 - /*hs->SecondsSinceLastReset = (jiffies - lpfc_loadtime) / HZ;*/ 1295 + seconds = get_seconds(); 1296 + if (seconds < psli->stats_start) 1297 + hs->seconds_since_last_reset = seconds + 1298 + ((unsigned long)-1 - psli->stats_start); 1299 + else 1300 + hs->seconds_since_last_reset = seconds - psli->stats_start; 1290 1301 1291 1302 return hs; 1292 1303 } 1293 1304 1305 + static void 1306 + lpfc_reset_stats(struct Scsi_Host *shost) 1307 + { 1308 + struct lpfc_hba *phba = (struct lpfc_hba *)shost->hostdata; 1309 + struct lpfc_sli *psli = &phba->sli; 1310 + struct lpfc_lnk_stat * lso = &psli->lnk_stat_offsets; 1311 + LPFC_MBOXQ_t *pmboxq; 1312 + MAILBOX_t *pmb; 1313 + int rc = 0; 1314 + 1315 + pmboxq = 
mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); 1316 + if (!pmboxq) 1317 + return; 1318 + memset(pmboxq, 0, sizeof(LPFC_MBOXQ_t)); 1319 + 1320 + pmb = &pmboxq->mb; 1321 + pmb->mbxCommand = MBX_READ_STATUS; 1322 + pmb->mbxOwner = OWN_HOST; 1323 + pmb->un.varWords[0] = 0x1; /* reset request */ 1324 + pmboxq->context1 = NULL; 1325 + 1326 + if ((phba->fc_flag & FC_OFFLINE_MODE) || 1327 + (!(psli->sli_flag & LPFC_SLI2_ACTIVE))) 1328 + rc = lpfc_sli_issue_mbox(phba, pmboxq, MBX_POLL); 1329 + else 1330 + rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2); 1331 + 1332 + if (rc != MBX_SUCCESS) { 1333 + if (rc == MBX_TIMEOUT) 1334 + pmboxq->mbox_cmpl = lpfc_sli_def_mbox_cmpl; 1335 + else 1336 + mempool_free(pmboxq, phba->mbox_mem_pool); 1337 + return; 1338 + } 1339 + 1340 + memset(pmboxq, 0, sizeof(LPFC_MBOXQ_t)); 1341 + pmb->mbxCommand = MBX_READ_LNK_STAT; 1342 + pmb->mbxOwner = OWN_HOST; 1343 + pmboxq->context1 = NULL; 1344 + 1345 + if ((phba->fc_flag & FC_OFFLINE_MODE) || 1346 + (!(psli->sli_flag & LPFC_SLI2_ACTIVE))) 1347 + rc = lpfc_sli_issue_mbox(phba, pmboxq, MBX_POLL); 1348 + else 1349 + rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2); 1350 + 1351 + if (rc != MBX_SUCCESS) { 1352 + if (rc == MBX_TIMEOUT) 1353 + pmboxq->mbox_cmpl = lpfc_sli_def_mbox_cmpl; 1354 + else 1355 + mempool_free( pmboxq, phba->mbox_mem_pool); 1356 + return; 1357 + } 1358 + 1359 + lso->link_failure_count = pmb->un.varRdLnk.linkFailureCnt; 1360 + lso->loss_of_sync_count = pmb->un.varRdLnk.lossSyncCnt; 1361 + lso->loss_of_signal_count = pmb->un.varRdLnk.lossSignalCnt; 1362 + lso->prim_seq_protocol_err_count = pmb->un.varRdLnk.primSeqErrCnt; 1363 + lso->invalid_tx_word_count = pmb->un.varRdLnk.invalidXmitWord; 1364 + lso->invalid_crc_count = pmb->un.varRdLnk.crcCnt; 1365 + lso->error_frames = pmb->un.varRdLnk.crcCnt; 1366 + lso->link_events = (phba->fc_eventTag >> 1); 1367 + 1368 + psli->stats_start = get_seconds(); 1369 + 1370 + return; 1371 + } 1294 1372 1295 1373 /* 1296 1374 * The LPFC driver treats linkdown handling as target loss events so there ··· 1515 1431 */ 1516 1432 1517 1433 .get_fc_host_stats = lpfc_get_stats, 1518 - 1519 - /* the LPFC driver doesn't support resetting stats yet */ 1434 + .reset_fc_host_stats = lpfc_reset_stats, 1520 1435 1521 1436 .dd_fcrport_size = sizeof(struct lpfc_rport_data), 1522 1437 .show_rport_maxframe_size = 1,
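lpfc_reset_stats() cannot cheaply clear the firmware's link counters, so it snapshots them into lpfc_lnk_stat and lpfc_get_stats() subtracts the snapshot; seconds_since_last_reset is likewise computed wrap-safely from get_seconds(). A userspace model with a single counter, using illustrative struct names:

#include <stdio.h>
#include <time.h>

/* Model of the lpfc "reset statistics" scheme: reset only records the
 * current hardware values, and get_stats reports counter - snapshot. */
struct hw_stats { unsigned long link_failures; };
struct snapshot { unsigned long link_failures; unsigned long start; };

static unsigned long now_seconds(void) { return (unsigned long)time(NULL); }

static void reset_stats(const struct hw_stats *hw, struct snapshot *lso)
{
	lso->link_failures = hw->link_failures;
	lso->start = now_seconds();
}

/* Wrap-safe "seconds since last reset", as in lpfc_get_stats(). */
static unsigned long since_reset(const struct snapshot *lso)
{
	unsigned long seconds = now_seconds();

	if (seconds < lso->start)	/* the seconds counter wrapped */
		return seconds + ((unsigned long)-1 - lso->start);
	return seconds - lso->start;
}

int main(void)
{
	struct hw_stats hw = { .link_failures = 42 };
	struct snapshot lso;

	reset_stats(&hw, &lso);
	hw.link_failures = 45;	/* three failures after the reset */
	printf("failures since reset: %lu (after %lu s)\n",
	       hw.link_failures - lso.link_failures, since_reset(&lso));
	return 0;
}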
+1
drivers/scsi/lpfc/lpfc_crtn.h
··· 127 127 void lpfc_kill_board(struct lpfc_hba *, LPFC_MBOXQ_t *); 128 128 void lpfc_mbox_put(struct lpfc_hba *, LPFC_MBOXQ_t *); 129 129 LPFC_MBOXQ_t *lpfc_mbox_get(struct lpfc_hba *); 130 + int lpfc_mbox_tmo_val(struct lpfc_hba *, int); 130 131 131 132 int lpfc_mem_alloc(struct lpfc_hba *); 132 133 void lpfc_mem_free(struct lpfc_hba *);
+5 -8
drivers/scsi/lpfc/lpfc_ct.c
··· 131 131 } 132 132 133 133 ct_unsol_event_exit_piocbq: 134 + list_del(&head); 134 135 if (pmbuf) { 135 136 list_for_each_entry_safe(matp, next_matp, &pmbuf->list, list) { 136 137 lpfc_mbuf_free(phba, matp->virt, matp->phys); ··· 482 481 if (CTrsp->CommandResponse.bits.CmdRsp == 483 482 be16_to_cpu(SLI_CT_RESPONSE_FS_ACC)) { 484 483 lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY, 485 - "%d:0239 NameServer Rsp " 484 + "%d:0208 NameServer Rsp " 486 485 "Data: x%x\n", 487 486 phba->brd_no, 488 487 phba->fc_flag); ··· 589 588 590 589 lpfc_decode_firmware_rev(phba, fwrev, 0); 591 590 592 - if (phba->Port[0]) { 593 - sprintf(symbp, "Emulex %s Port %s FV%s DV%s", phba->ModelName, 594 - phba->Port, fwrev, lpfc_release_version); 595 - } else { 596 - sprintf(symbp, "Emulex %s FV%s DV%s", phba->ModelName, 597 - fwrev, lpfc_release_version); 598 - } 591 + sprintf(symbp, "Emulex %s FV%s DV%s", phba->ModelName, 592 + fwrev, lpfc_release_version); 593 + return; 599 594 } 600 595 601 596 /*
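The single list_del(&head) added in the lpfc_ct.c hunk is the important fix: head is an on-stack list head that the unsolicited-event iocbs were gathered onto, and leaving it linked would leave list entries pointing into a dead stack frame. The pattern, modeled with a minimal reimplementation of the kernel's circular list so the sketch builds in userspace:

#include <stdio.h>

/* Minimal kernel-style circular doubly linked list. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev; n->next = h;
	h->prev->next = n; h->prev = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	INIT_LIST_HEAD(n);
}

struct iocb { int tag; struct list_head list; };

int main(void)
{
	struct iocb a = { 1 }, b = { 2 };
	struct list_head head;		/* stack-local, as in the fix */

	INIT_LIST_HEAD(&head);
	list_add_tail(&a.list, &head);
	list_add_tail(&b.list, &head);

	/* ... process the gathered entries ... */

	/* Without this, a.list and b.list would keep pointing at the
	 * dead stack frame once this function returns. */
	list_del(&head);
	printf("head detached: %d\n", head.next == &head);
	return 0;
}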
+15 -6
drivers/scsi/lpfc/lpfc_els.c
··· 1848 1848 lpfc_cmpl_els_acc(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb, 1849 1849 struct lpfc_iocbq * rspiocb) 1850 1850 { 1851 + IOCB_t *irsp; 1851 1852 struct lpfc_nodelist *ndlp; 1852 1853 LPFC_MBOXQ_t *mbox = NULL; 1854 + 1855 + irsp = &rspiocb->iocb; 1853 1856 1854 1857 ndlp = (struct lpfc_nodelist *) cmdiocb->context1; 1855 1858 if (cmdiocb->context_un.mbox) ··· 1896 1893 mempool_free( mbox, phba->mbox_mem_pool); 1897 1894 } else { 1898 1895 mempool_free( mbox, phba->mbox_mem_pool); 1899 - if (ndlp->nlp_flag & NLP_ACC_REGLOGIN) { 1900 - lpfc_nlp_list(phba, ndlp, NLP_NO_LIST); 1901 - ndlp = NULL; 1896 + /* Do not call NO_LIST for lpfc_els_abort'ed ELS cmds */ 1897 + if (!((irsp->ulpStatus == IOSTAT_LOCAL_REJECT) && 1898 + ((irsp->un.ulpWord[4] == IOERR_SLI_ABORTED) || 1899 + (irsp->un.ulpWord[4] == IOERR_LINK_DOWN) || 1900 + (irsp->un.ulpWord[4] == IOERR_SLI_DOWN)))) { 1901 + if (ndlp->nlp_flag & NLP_ACC_REGLOGIN) { 1902 + lpfc_nlp_list(phba, ndlp, NLP_NO_LIST); 1903 + ndlp = NULL; 1904 + } 1902 1905 } 1903 1906 } 1904 1907 } ··· 2848 2839 2849 2840 /* Xmit ELS RPS ACC response tag <ulpIoTag> */ 2850 2841 lpfc_printf_log(phba, KERN_INFO, LOG_ELS, 2851 - "%d:0128 Xmit ELS RPS ACC response tag x%x " 2842 + "%d:0118 Xmit ELS RPS ACC response tag x%x " 2852 2843 "Data: x%x x%x x%x x%x x%x\n", 2853 2844 phba->brd_no, 2854 2845 elsiocb->iocb.ulpIoTag, ··· 2957 2948 2958 2949 /* Xmit ELS RPL ACC response tag <ulpIoTag> */ 2959 2950 lpfc_printf_log(phba, KERN_INFO, LOG_ELS, 2960 - "%d:0128 Xmit ELS RPL ACC response tag x%x " 2951 + "%d:0120 Xmit ELS RPL ACC response tag x%x " 2961 2952 "Data: x%x x%x x%x x%x x%x\n", 2962 2953 phba->brd_no, 2963 2954 elsiocb->iocb.ulpIoTag, ··· 3118 3109 struct lpfc_nodelist *ndlp, *next_ndlp; 3119 3110 3120 3111 /* FAN received */ 3121 - lpfc_printf_log(phba, KERN_INFO, LOG_ELS, "%d:265 FAN received\n", 3112 + lpfc_printf_log(phba, KERN_INFO, LOG_ELS, "%d:0265 FAN received\n", 3122 3113 phba->brd_no); 3123 3114 3124 3115 icmd = &cmdiocb->iocb;
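The new guard in lpfc_cmpl_els_acc() distinguishes completions that came from the wire from those manufactured by a local abort or by link/SLI teardown; only the former may drop the node to NLP_NO_LIST. A sketch of that predicate, with placeholder numeric values (the real definitions live in lpfc_hw.h):

#include <stdio.h>

/* Placeholder values, for illustration only. */
#define IOSTAT_LOCAL_REJECT 3
#define IOERR_SLI_ABORTED   1
#define IOERR_LINK_DOWN     2
#define IOERR_SLI_DOWN      4

/* Mirrors the condition added to lpfc_cmpl_els_acc(): a completion
 * produced by our own abort must not trigger node cleanup, because
 * the node is still being handled elsewhere. */
static int els_locally_terminated(unsigned int status, unsigned int reason)
{
	return status == IOSTAT_LOCAL_REJECT &&
	       (reason == IOERR_SLI_ABORTED ||
		reason == IOERR_LINK_DOWN ||
		reason == IOERR_SLI_DOWN);
}

int main(void)
{
	printf("%d %d\n",
	       els_locally_terminated(IOSTAT_LOCAL_REJECT, IOERR_SLI_ABORTED),
	       els_locally_terminated(0 /* success */, 0));
	return 0;
}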
+9 -6
drivers/scsi/lpfc/lpfc_hbadisc.c
··· 1557 1557 mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl; 1558 1558 } 1559 1559 } 1560 + 1561 + spin_lock_irq(phba->host->host_lock); 1560 1562 list_for_each_entry_safe(mb, nextmb, &phba->sli.mboxq, list) { 1561 1563 if ((mb->mb.mbxCommand == MBX_REG_LOGIN64) && 1562 1564 (ndlp == (struct lpfc_nodelist *) mb->context2)) { ··· 1571 1569 mempool_free(mb, phba->mbox_mem_pool); 1572 1570 } 1573 1571 } 1572 + spin_unlock_irq(phba->host->host_lock); 1574 1573 1575 1574 lpfc_els_abort(phba,ndlp,0); 1576 1575 spin_lock_irq(phba->host->host_lock); ··· 1785 1782 /* LOG change to REGLOGIN */ 1786 1783 /* FIND node DID reglogin */ 1787 1784 lpfc_printf_log(phba, KERN_INFO, LOG_NODE, 1788 - "%d:0931 FIND node DID reglogin" 1785 + "%d:0901 FIND node DID reglogin" 1789 1786 " Data: x%p x%x x%x x%x\n", 1790 1787 phba->brd_no, 1791 1788 ndlp, ndlp->nlp_DID, ··· 1808 1805 /* LOG change to PRLI */ 1809 1806 /* FIND node DID prli */ 1810 1807 lpfc_printf_log(phba, KERN_INFO, LOG_NODE, 1811 - "%d:0931 FIND node DID prli " 1808 + "%d:0902 FIND node DID prli " 1812 1809 "Data: x%p x%x x%x x%x\n", 1813 1810 phba->brd_no, 1814 1811 ndlp, ndlp->nlp_DID, ··· 1831 1828 /* LOG change to NPR */ 1832 1829 /* FIND node DID npr */ 1833 1830 lpfc_printf_log(phba, KERN_INFO, LOG_NODE, 1834 - "%d:0931 FIND node DID npr " 1831 + "%d:0903 FIND node DID npr " 1835 1832 "Data: x%p x%x x%x x%x\n", 1836 1833 phba->brd_no, 1837 1834 ndlp, ndlp->nlp_DID, ··· 1854 1851 /* LOG change to UNUSED */ 1855 1852 /* FIND node DID unused */ 1856 1853 lpfc_printf_log(phba, KERN_INFO, LOG_NODE, 1857 - "%d:0931 FIND node DID unused " 1854 + "%d:0905 FIND node DID unused " 1858 1855 "Data: x%p x%x x%x x%x\n", 1859 1856 phba->brd_no, 1860 1857 ndlp, ndlp->nlp_DID, ··· 2338 2335 initlinkmbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); 2339 2336 if (!initlinkmbox) { 2340 2337 lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY, 2341 - "%d:0226 Device Discovery " 2338 + "%d:0206 Device Discovery " 2342 2339 "completion error\n", 2343 2340 phba->brd_no); 2344 2341 phba->hba_state = LPFC_HBA_ERROR; ··· 2368 2365 if (!clearlambox) { 2369 2366 clrlaerr = 1; 2370 2367 lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY, 2371 - "%d:0226 Device Discovery " 2368 + "%d:0207 Device Discovery " 2372 2369 "completion error\n", 2373 2370 phba->brd_no); 2374 2371 phba->hba_state = LPFC_HBA_ERROR;
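The hbadisc change is purely about locking: the mailbox queue is also manipulated from completion context, so the walk that cancels a node's pending REG_LOGIN64 must run under host_lock for its whole duration. A userspace model of "hold the producer's lock for the entire list walk", with a pthread mutex standing in for the spinlock:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

struct mbox { int cmd; struct mbox *next; };
static struct mbox *mboxq;	/* shared queue, touched from two contexts */

static void walk_and_drop(int cmd)
{
	struct mbox **pp;

	pthread_mutex_lock(&lock);	/* like spin_lock_irq(host_lock) */
	for (pp = &mboxq; *pp; ) {
		if ((*pp)->cmd == cmd)
			*pp = (*pp)->next;	/* unlink (real code also frees) */
		else
			pp = &(*pp)->next;
	}
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	struct mbox a = { 1, NULL }, b = { 2, &a };

	mboxq = &b;
	walk_and_drop(1);
	printf("remaining head cmd: %d\n", mboxq ? mboxq->cmd : -1);
	return 0;
}

Walking the queue without the lock, as the old code did, can observe a half-updated list while the interrupt side requeues or completes a mailbox command.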
+12 -1
drivers/scsi/lpfc/lpfc_init.c
··· 1379 1379 /* stop all timers associated with this hba */ 1380 1380 lpfc_stop_timer(phba); 1381 1381 phba->work_hba_events = 0; 1382 + phba->work_ha = 0; 1382 1383 1383 1384 lpfc_printf_log(phba, 1384 1385 KERN_WARNING, ··· 1617 1616 goto out_free_iocbq; 1618 1617 } 1619 1618 1620 - /* We can rely on a queue depth attribute only after SLI HBA setup */ 1619 + /* 1620 + * Set initial can_queue value since 0 is no longer supported and 1621 + * scsi_add_host will fail. This will be adjusted later based on the 1622 + * max xri value determined in hba setup. 1623 + */ 1621 1624 host->can_queue = phba->cfg_hba_queue_depth - 10; 1622 1625 1623 1626 /* Tell the midlayer we support 16 byte commands */ ··· 1660 1655 error = -ENODEV; 1661 1656 goto out_free_irq; 1662 1657 } 1658 + 1659 + /* 1660 + * hba setup may have changed the hba_queue_depth so we need to adjust 1661 + * the value of can_queue. 1662 + */ 1663 + host->can_queue = phba->cfg_hba_queue_depth - 10; 1663 1664 1664 1665 lpfc_discovery_wait(phba); 1665 1666
+16
drivers/scsi/lpfc/lpfc_mbox.c
··· 651 651 652 652 return mbq; 653 653 } 654 + 655 + int 656 + lpfc_mbox_tmo_val(struct lpfc_hba *phba, int cmd) 657 + { 658 + switch (cmd) { 659 + case MBX_WRITE_NV: /* 0x03 */ 660 + case MBX_UPDATE_CFG: /* 0x1B */ 661 + case MBX_DOWN_LOAD: /* 0x1C */ 662 + case MBX_DEL_LD_ENTRY: /* 0x1D */ 663 + case MBX_LOAD_AREA: /* 0x81 */ 664 + case MBX_FLASH_WR_ULA: /* 0x98 */ 665 + case MBX_LOAD_EXP_ROM: /* 0x9C */ 666 + return LPFC_MBOX_TMO_FLASH_CMD; 667 + } 668 + return LPFC_MBOX_TMO; 669 + }
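lpfc_mbox_tmo_val() exists because flash-touching mailbox commands can legitimately run for minutes, while everything else should still time out in 30 seconds. A standalone model of the lookup and how a caller would use it; the flash opcode values are copied from the patch's own comments, and the non-flash opcode in the demo is arbitrary:

#include <stdio.h>

#define MBOX_TMO           30	/* seconds, ordinary commands      */
#define MBOX_TMO_FLASH_CMD 300	/* seconds, flash write/erase cmds */

static int mbox_tmo_val(int cmd)
{
	switch (cmd) {
	case 0x03:	/* WRITE_NV     */
	case 0x1B:	/* UPDATE_CFG   */
	case 0x1C:	/* DOWN_LOAD    */
	case 0x1D:	/* DEL_LD_ENTRY */
	case 0x81:	/* LOAD_AREA    */
	case 0x98:	/* FLASH_WR_ULA */
	case 0x9C:	/* LOAD_EXP_ROM */
		return MBOX_TMO_FLASH_CMD;
	}
	return MBOX_TMO;
}

int main(void)
{
	/* A caller arms its timer from the per-command value, as the
	 * lpfc_sli.c hunk below does with mod_timer(). */
	printf("flash cmd: %ds, other cmd: %ds\n",
	       mbox_tmo_val(0x03), mbox_tmo_val(0x01));
	return 0;
}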
+22 -2
drivers/scsi/lpfc/lpfc_nportdisc.c
··· 179 179 180 180 /* Abort outstanding I/O on NPort <nlp_DID> */ 181 181 lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY, 182 - "%d:0201 Abort outstanding I/O on NPort x%x " 182 + "%d:0205 Abort outstanding I/O on NPort x%x " 183 183 "Data: x%x x%x x%x\n", 184 184 phba->brd_no, ndlp->nlp_DID, ndlp->nlp_flag, 185 185 ndlp->nlp_state, ndlp->nlp_rpi); ··· 392 392 mbox->mbox_cmpl = lpfc_mbx_cmpl_reg_login; 393 393 mbox->context2 = ndlp; 394 394 ndlp->nlp_flag |= (NLP_ACC_REGLOGIN | NLP_RCV_PLOGI); 395 + 396 + /* 397 + * If there is an outstanding PLOGI issued, abort it before 398 + * sending ACC rsp for received PLOGI. If pending plogi 399 + * is not canceled here, the plogi will be rejected by 400 + * remote port and will be retried. On a configuration with 401 + * single discovery thread, this will cause a huge delay in 402 + * discovery. Also this will cause multiple state machines 403 + * running in parallel for this node. 404 + */ 405 + if (ndlp->nlp_state == NLP_STE_PLOGI_ISSUE) { 406 + /* software abort outstanding PLOGI */ 407 + lpfc_els_abort(phba, ndlp, 1); 408 + } 395 409 396 410 lpfc_els_rsp_acc(phba, ELS_CMD_PLOGI, cmdiocb, ndlp, mbox, 0); 397 411 return 1; ··· 1615 1601 1616 1602 lpfc_rcv_padisc(phba, ndlp, cmdiocb); 1617 1603 1618 - if (!(ndlp->nlp_flag & NLP_DELAY_TMO)) { 1604 + /* 1605 + * Do not start discovery if discovery is about to start 1606 + * or discovery in progress for this node. Starting discovery 1607 + * here will affect the counting of discovery threads. 1608 + */ 1609 + if ((!(ndlp->nlp_flag & NLP_DELAY_TMO)) && 1610 + (ndlp->nlp_flag & NLP_NPR_2B_DISC)){ 1619 1611 if (ndlp->nlp_flag & NLP_NPR_ADISC) { 1620 1612 ndlp->nlp_prev_state = NLP_STE_NPR_NODE; 1621 1613 ndlp->nlp_state = NLP_STE_ADISC_ISSUE;
+20 -1
drivers/scsi/lpfc/lpfc_scsi.c
··· 21 21 22 22 #include <linux/pci.h> 23 23 #include <linux/interrupt.h> 24 + #include <linux/delay.h> 24 25 25 26 #include <scsi/scsi.h> 26 27 #include <scsi/scsi_device.h> ··· 842 841 return 0; 843 842 } 844 843 844 + static void 845 + lpfc_block_error_handler(struct scsi_cmnd *cmnd) 846 + { 847 + struct Scsi_Host *shost = cmnd->device->host; 848 + struct fc_rport *rport = starget_to_rport(scsi_target(cmnd->device)); 849 + 850 + spin_lock_irq(shost->host_lock); 851 + while (rport->port_state == FC_PORTSTATE_BLOCKED) { 852 + spin_unlock_irq(shost->host_lock); 853 + msleep(1000); 854 + spin_lock_irq(shost->host_lock); 855 + } 856 + spin_unlock_irq(shost->host_lock); 857 + return; 858 + } 845 859 846 860 static int 847 861 lpfc_abort_handler(struct scsi_cmnd *cmnd) ··· 871 855 unsigned int loop_count = 0; 872 856 int ret = SUCCESS; 873 857 858 + lpfc_block_error_handler(cmnd); 874 859 spin_lock_irq(shost->host_lock); 875 860 876 861 lpfc_cmd = (struct lpfc_scsi_buf *)cmnd->host_scribble; ··· 974 957 int ret = FAILED; 975 958 int cnt, loopcnt; 976 959 960 + lpfc_block_error_handler(cmnd); 977 961 spin_lock_irq(shost->host_lock); 978 962 /* 979 963 * If target is not in a MAPPED state, delay the reset until ··· 1091 1073 int cnt, loopcnt; 1092 1074 struct lpfc_scsi_buf * lpfc_cmd; 1093 1075 1076 + lpfc_block_error_handler(cmnd); 1094 1077 spin_lock_irq(shost->host_lock); 1095 1078 1096 1079 lpfc_cmd = lpfc_get_scsi_buf(phba); ··· 1123 1104 ndlp->rport->dd_data); 1124 1105 if (ret != SUCCESS) { 1125 1106 lpfc_printf_log(phba, KERN_ERR, LOG_FCP, 1126 - "%d:0713 Bus Reset on target %d failed\n", 1107 + "%d:0700 Bus Reset on target %d failed\n", 1127 1108 phba->brd_no, i); 1128 1109 err_count++; 1129 1110 }
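lpfc_block_error_handler() makes every error-handler entry point wait out the FC_PORTSTATE_BLOCKED window, dropping the host lock around each sleep so the transport can decide the port's fate in the meantime. A pthread model of that drop-lock, sleep, recheck loop; the delays are shortened here, the driver sleeps a full second per iteration:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int port_blocked = 1;

static void block_error_handler(void)
{
	pthread_mutex_lock(&lock);
	while (port_blocked) {
		pthread_mutex_unlock(&lock);	/* let others progress */
		usleep(1000);			/* msleep(1000) in the driver */
		pthread_mutex_lock(&lock);
	}
	pthread_mutex_unlock(&lock);
}

static void *unblocker(void *arg)
{
	(void)arg;
	usleep(5000);
	pthread_mutex_lock(&lock);
	port_blocked = 0;	/* transport resolved the blocked state */
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, unblocker, NULL);
	block_error_handler();
	pthread_join(t, NULL);
	printf("error handler may proceed\n");
	return 0;
}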
+36 -21
drivers/scsi/lpfc/lpfc_sli.c
···
320 320 kfree(old_arr);
321 321 return iotag;
322 322 }
323 - }
323 + } else
324 + spin_unlock_irq(phba->host->host_lock);
324 325 
325 326 lpfc_printf_log(phba, KERN_ERR,LOG_SLI,
326 327 "%d:0318 Failed to allocate IOTAG.last IOTAG is %d\n",
···
970 969 * resources need to be recovered.
971 970 */
972 971 if (unlikely(irsp->ulpCommand == CMD_XRI_ABORTED_CX)) {
973 - printk(KERN_INFO "%s: IOCB cmd 0x%x processed."
974 - " Skipping completion\n", __FUNCTION__,
975 - irsp->ulpCommand);
972 + lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
973 + "%d:0314 IOCB cmd 0x%x"
974 + " processed. Skipping"
975 + " completion", phba->brd_no,
976 + irsp->ulpCommand);
976 977 break;
977 978 }
978 979 
···
1107 1104 if (unlikely(irsp->ulpStatus)) {
1108 1105 /* Rsp ring <ringno> error: IOCB */
1109 1106 lpfc_printf_log(phba, KERN_WARNING, LOG_SLI,
1110 - "%d:0326 Rsp Ring %d error: IOCB Data: "
1107 + "%d:0336 Rsp Ring %d error: IOCB Data: "
1111 1108 "x%x x%x x%x x%x x%x x%x x%x x%x\n",
1112 1109 phba->brd_no, pring->ringno,
1113 1110 irsp->un.ulpWord[0], irsp->un.ulpWord[1],
···
1125 1122 * resources need to be recovered.
1126 1123 */
1127 1124 if (unlikely(irsp->ulpCommand == CMD_XRI_ABORTED_CX)) {
1128 - printk(KERN_INFO "%s: IOCB cmd 0x%x processed. "
1129 - "Skipping completion\n", __FUNCTION__,
1130 - irsp->ulpCommand);
1125 + lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
1126 + "%d:0333 IOCB cmd 0x%x"
1127 + " processed. Skipping"
1128 + " completion\n", phba->brd_no,
1129 + irsp->ulpCommand);
1131 1130 break;
1132 1131 }
1133 1132 
···
1160 1155 } else {
1161 1156 /* Unknown IOCB command */
1162 1157 lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
1163 - "%d:0321 Unknown IOCB command "
1158 + "%d:0334 Unknown IOCB command "
1164 1159 "Data: x%x, x%x x%x x%x x%x\n",
1165 1160 phba->brd_no, type, irsp->ulpCommand,
1166 1161 irsp->ulpStatus, irsp->ulpIoTag,
···
1243 1238 lpfc_printf_log(phba,
1244 1239 KERN_ERR,
1245 1240 LOG_SLI,
1246 - "%d:0312 Ring %d handler: portRspPut %d "
1241 + "%d:0303 Ring %d handler: portRspPut %d "
1247 1242 "is bigger then rsp ring %d\n",
1248 1243 phba->brd_no,
1249 1244 pring->ringno, portRspPut, portRspMax);
···
1388 1383 lpfc_printf_log(phba,
1389 1384 KERN_ERR,
1390 1385 LOG_SLI,
1391 - "%d:0321 Unknown IOCB command "
1386 + "%d:0335 Unknown IOCB command "
1392 1387 "Data: x%x x%x x%x x%x\n",
1393 1388 phba->brd_no,
1394 1389 irsp->ulpCommand,
···
1404 1399 next_iocb,
1405 1400 &saveq->list,
1406 1401 list) {
1402 + list_del(&rspiocbp->list);
1407 1403 lpfc_sli_release_iocbq(phba,
1408 1404 rspiocbp);
1409 1405 }
1410 1406 }
1411 - 
1412 1407 lpfc_sli_release_iocbq(phba, saveq);
1413 1408 }
1414 1409 }
···
1716 1711 phba->fc_myDID = 0;
1717 1712 phba->fc_prevDID = 0;
1718 1713 
1719 - psli->sli_flag = 0;
1720 - 
1721 1714 /* Turn off parity checking and serr during the physical reset */
1722 1715 pci_read_config_word(phba->pcidev, PCI_COMMAND, &cfg_value);
1723 1716 pci_write_config_word(phba->pcidev, PCI_COMMAND,
1724 1717 (cfg_value &
1725 1718 ~(PCI_COMMAND_PARITY | PCI_COMMAND_SERR)));
1726 1719 
1727 - psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
1720 + psli->sli_flag &= ~(LPFC_SLI2_ACTIVE | LPFC_PROCESS_LA);
1728 1721 /* Now toggle INITFF bit in the Host Control Register */
1729 1722 writel(HC_INITFF, phba->HCregaddr);
1730 1723 mdelay(1);
···
1763 1760 
1764 1761 /* Restart HBA */
1765 1762 lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
1766 - "%d:0328 Restart HBA Data: x%x x%x\n", phba->brd_no,
1763 + "%d:0337 Restart HBA Data: x%x x%x\n", phba->brd_no,
1767 1764 phba->hba_state, psli->sli_flag);
1768 1765 
1769 1766 word0 = 0;
···
1794 1791 phba->hba_state = LPFC_INIT_START;
1795 1792 
1796 1793 spin_unlock_irq(phba->host->host_lock);
1794 + 
1795 + memset(&psli->lnk_stat_offsets, 0, sizeof(psli->lnk_stat_offsets));
1796 + psli->stats_start = get_seconds();
1797 1797 
1798 1798 if (skip_post)
1799 1799 mdelay(100);
···
1908 1902 }
1909 1903 
1910 1904 while (resetcount < 2 && !done) {
1905 + spin_lock_irq(phba->host->host_lock);
1906 + phba->sli.sli_flag |= LPFC_SLI_MBOX_ACTIVE;
1907 + spin_unlock_irq(phba->host->host_lock);
1911 1908 phba->hba_state = LPFC_STATE_UNKNOWN;
1912 1909 lpfc_sli_brdrestart(phba);
1913 1910 msleep(2500);
···
1918 1909 if (rc)
1919 1910 break;
1920 1911 
1912 + spin_lock_irq(phba->host->host_lock);
1913 + phba->sli.sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
1914 + spin_unlock_irq(phba->host->host_lock);
1921 1915 resetcount++;
1922 1916 
1923 1917 /* Call pre CONFIG_PORT mailbox command initialization. A value of 0
···
2206 2194 return (MBX_NOT_FINISHED);
2207 2195 }
2208 2196 /* timeout active mbox command */
2209 - mod_timer(&psli->mbox_tmo, jiffies + HZ * LPFC_MBOX_TMO);
2197 + mod_timer(&psli->mbox_tmo, (jiffies +
2198 + (HZ * lpfc_mbox_tmo_val(phba, mb->mbxCommand))));
2210 2199 
2211 2200 /* Mailbox cmd <cmd> issue */
···
2267 2254 break;
2268 2255 
2269 2256 case MBX_POLL:
2270 - i = 0;
2271 2257 psli->mbox_active = NULL;
2272 2258 if (psli->sli_flag & LPFC_SLI2_ACTIVE) {
2273 2259 /* First read mbox status word */
···
2280 2268 /* Read the HBA Host Attention Register */
2281 2269 ha_copy = readl(phba->HAregaddr);
2282 2270 
2271 + i = lpfc_mbox_tmo_val(phba, mb->mbxCommand);
2272 + i *= 1000; /* Convert to ms */
2273 + 
2283 2274 /* Wait for command to complete */
2284 2275 while (((word0 & OWN_CHIP) == OWN_CHIP) ||
2285 2276 (!(ha_copy & HA_MBATT) &&
2286 2277 (phba->hba_state > LPFC_WARM_START))) {
2287 - if (i++ >= 100) {
2278 + if (i-- <= 0) {
2288 2279 psli->sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
2289 2280 spin_unlock_irqrestore(phba->host->host_lock,
2290 2281 drvr_flag);
···
2305 2290 
2306 2291 /* Can be in interrupt context, do not sleep */
2307 2292 /* (or might be called with interrupts disabled) */
2308 - mdelay(i);
2293 + mdelay(1);
2309 2294 
2310 2295 spin_lock_irqsave(phba->host->host_lock, drvr_flag);
···
3020 3005 
3021 3006 if (timeleft == 0) {
3022 3007 lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
3023 - "%d:0329 IOCB wait timeout error - no "
3008 + "%d:0338 IOCB wait timeout error - no "
3024 3009 "wake response Data x%x\n",
3025 3010 phba->brd_no, timeout);
3026 3011 retval = IOCB_TIMEDOUT;
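
The two mailbox hunks above work together: lpfc_mbox_tmo_val() (called but not shown in this diff) picks a per-command timeout in seconds, and the MBX_POLL path converts it to a millisecond countdown so each loop iteration can mdelay(1) instead of scaling the delay with the loop counter. A minimal sketch of the pattern; is_flash_cmd() and mailbox_done() are hypothetical stand-ins, since the real opcode list and register reads are not part of this diff:

/* Sketch only -- is_flash_cmd() and mailbox_done() are invented
 * placeholders; the real driver inspects mb->mbxCommand and the
 * HA/mailbox registers. */
static int mbox_tmo_val_sketch(uint8_t cmd)
{
	return is_flash_cmd(cmd) ? LPFC_MBOX_TMO_FLASH_CMD	/* 300 s */
				 : LPFC_MBOX_TMO;		/* 30 s */
}

static int poll_mailbox_sketch(uint8_t cmd)
{
	int ms = mbox_tmo_val_sketch(cmd) * 1000;	/* seconds -> ms */

	while (!mailbox_done()) {
		if (ms-- <= 0)
			return MBX_NOT_FINISHED;	/* timed out */
		mdelay(1);	/* busy-wait; safe in atomic context */
	}
	return MBX_SUCCESS;
}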
+20
drivers/scsi/lpfc/lpfc_sli.h
···
172 172 uint32_t mbox_busy; /* Mailbox cmd busy */
173 173 };
174 174 
175 + /* Structure to store link status values when port stats are reset */
176 + struct lpfc_lnk_stat {
177 + uint32_t link_failure_count;
178 + uint32_t loss_of_sync_count;
179 + uint32_t loss_of_signal_count;
180 + uint32_t prim_seq_protocol_err_count;
181 + uint32_t invalid_tx_word_count;
182 + uint32_t invalid_crc_count;
183 + uint32_t error_frames;
184 + uint32_t link_events;
185 + };
186 + 
175 187 /* Structure used to hold SLI information */
176 188 struct lpfc_sli {
177 189 uint32_t num_rings;
···
213 201 struct lpfc_iocbq ** iocbq_lookup; /* array to lookup IOCB by IOTAG */
214 202 size_t iocbq_lookup_len; /* current lengs of the array */
215 203 uint16_t last_iotag; /* last allocated IOTAG */
204 + unsigned long stats_start; /* in seconds */
205 + struct lpfc_lnk_stat lnk_stat_offsets;
216 206 };
217 207 
218 208 /* Given a pointer to the start of the ring, and the slot number of
···
225 211 
226 212 #define LPFC_MBOX_TMO 30 /* Sec tmo for outstanding mbox
227 213 command */
214 + #define LPFC_MBOX_TMO_FLASH_CMD 300 /* Sec tmo for outstanding FLASH write
215 + * or erase cmds. This is especially
216 + * long because of the potential of
217 + * multiple flash erases that can be
218 + * spawned.
219 + */
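
The new lnk_stat_offsets/stats_start pair exists so port statistics can be "reset" without touching the hardware counters: the reset path snapshots the live counters (lpfc_sli_brdrestart() above zeroes the snapshot), and reads report the delta. A rough sketch of the idea, with read_hw_link_failure_count() standing in for the mailbox read the driver actually performs:

/* Sketch: resettable statistics via stored offsets.
 * read_hw_link_failure_count() is a hypothetical placeholder. */
static void sketch_reset_stats(struct lpfc_sli *psli)
{
	psli->lnk_stat_offsets.link_failure_count =
		read_hw_link_failure_count();
	psli->stats_start = get_seconds();	/* start of stats window */
}

static uint32_t sketch_link_failures(struct lpfc_sli *psli)
{
	return read_hw_link_failure_count() -
		psli->lnk_stat_offsets.link_failure_count;
}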
+1 -1
drivers/scsi/lpfc/lpfc_version.h
···
18 18 * included with this package. *
19 19 *******************************************************************/
20 20 
21 - #define LPFC_DRIVER_VERSION "8.1.7"
21 + #define LPFC_DRIVER_VERSION "8.1.9"
22 22 
23 23 #define LPFC_DRIVER_NAME "lpfc"
24 24 
+6
drivers/scsi/megaraid/mega_common.h
···
37 37 #define LSI_MAX_CHANNELS 16
38 38 #define LSI_MAX_LOGICAL_DRIVES_64LD (64+1)
39 39 
40 + #define HBA_SIGNATURE_64_BIT 0x299
41 + #define PCI_CONF_AMISIG64 0xa4
42 + 
43 + #define MEGA_SCSI_INQ_EVPD 1
44 + #define MEGA_INVALID_FIELD_IN_CDB 0x24
45 + 
40 46 
41 47 /**
42 48 * scb_t - scsi command control block
+4
drivers/scsi/megaraid/megaraid_ioctl.h
···
132 132 /* Driver Data: */
133 133 void __user * user_data;
134 134 uint32_t user_data_len;
135 + 
136 + /* 64bit alignment */
137 + uint32_t pad_for_64bit_align;
138 + 
135 139 mraid_passthru_t __user *user_pthru;
136 140 
137 141 mraid_passthru_t *pthru32;
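
The pad matters because this structure is declared packed: without it, the pointer member after the uint32_t sits on a 4-byte boundary, and 64-bit loads through it trap as unaligned accesses on IA64. A small self-contained demonstration (field names abbreviated, not the full uioc_t layout):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct no_pad {		/* abbreviated stand-in for the packed ioctl struct */
	void	*user_data;
	uint32_t user_data_len;
	void	*user_pthru;	/* offset 12 on 64-bit: misaligned */
} __attribute__((packed));

struct with_pad {
	void	*user_data;
	uint32_t user_data_len;
	uint32_t pad_for_64bit_align;	/* restores 8-byte alignment */
	void	*user_pthru;		/* offset 16: aligned */
} __attribute__((packed));

int main(void)
{
	printf("no pad:   user_pthru @ %zu\n", offsetof(struct no_pad, user_pthru));
	printf("with pad: user_pthru @ %zu\n", offsetof(struct with_pad, user_pthru));
	return 0;
}

On a 64-bit build this prints offsets 12 and 16; only the padded layout keeps the pointer on an 8-byte boundary.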
+35 -5
drivers/scsi/megaraid/megaraid_mbox.c
···
10 10 * 2 of the License, or (at your option) any later version.
11 11 *
12 12 * FILE : megaraid_mbox.c
13 - * Version : v2.20.4.8 (Apr 11 2006)
13 + * Version : v2.20.4.9 (Jul 16 2006)
14 14 *
15 15 * Authors:
16 16 * Atul Mukker <Atul.Mukker@lsil.com>
···
720 720 struct pci_dev *pdev;
721 721 mraid_device_t *raid_dev;
722 722 int i;
723 + uint32_t magic64;
723 724 
724 725 
725 726 adapter->ito = MBOX_TIMEOUT;
···
864 863 
865 864 // Set the DMA mask to 64-bit. All supported controllers as capable of
866 865 // DMA in this range
867 - if (pci_set_dma_mask(adapter->pdev, DMA_64BIT_MASK) != 0) {
866 + pci_read_config_dword(adapter->pdev, PCI_CONF_AMISIG64, &magic64);
868 867 
869 - con_log(CL_ANN, (KERN_WARNING
870 - "megaraid: could not set DMA mask for 64-bit.\n"));
868 + if (((magic64 == HBA_SIGNATURE_64_BIT) &&
869 + ((adapter->pdev->subsystem_device !=
870 + PCI_SUBSYS_ID_MEGARAID_SATA_150_6) ||
871 + (adapter->pdev->subsystem_device !=
872 + PCI_SUBSYS_ID_MEGARAID_SATA_150_4))) ||
873 + (adapter->pdev->vendor == PCI_VENDOR_ID_LSI_LOGIC &&
874 + adapter->pdev->device == PCI_DEVICE_ID_VERDE) ||
875 + (adapter->pdev->vendor == PCI_VENDOR_ID_LSI_LOGIC &&
876 + adapter->pdev->device == PCI_DEVICE_ID_DOBSON) ||
877 + (adapter->pdev->vendor == PCI_VENDOR_ID_LSI_LOGIC &&
878 + adapter->pdev->device == PCI_DEVICE_ID_LINDSAY) ||
879 + (adapter->pdev->vendor == PCI_VENDOR_ID_DELL &&
880 + adapter->pdev->device == PCI_DEVICE_ID_PERC4_DI_EVERGLADES) ||
881 + (adapter->pdev->vendor == PCI_VENDOR_ID_DELL &&
882 + adapter->pdev->device == PCI_DEVICE_ID_PERC4E_DI_KOBUK)) {
883 + if (pci_set_dma_mask(adapter->pdev, DMA_64BIT_MASK)) {
884 + con_log(CL_ANN, (KERN_WARNING
885 + "megaraid: DMA mask for 64-bit failed\n"));
871 886 
872 - goto out_free_sysfs_res;
887 + if (pci_set_dma_mask (adapter->pdev, DMA_32BIT_MASK)) {
888 + con_log(CL_ANN, (KERN_WARNING
889 + "megaraid: 32-bit DMA mask failed\n"));
890 + goto out_free_sysfs_res;
891 + }
892 + }
873 893 }
874 894 
875 895 // setup tasklet for DPC
···
1642 1620 " [virtual] for logical drives\n"));
1643 1621 
1644 1622 rdev->last_disp |= (1L << SCP2CHANNEL(scp));
1623 + }
1624 + 
1625 + if (scp->cmnd[1] & MEGA_SCSI_INQ_EVPD) {
1626 + scp->sense_buffer[0] = 0x70;
1627 + scp->sense_buffer[2] = ILLEGAL_REQUEST;
1628 + scp->sense_buffer[12] = MEGA_INVALID_FIELD_IN_CDB;
1629 + scp->result = CHECK_CONDITION << 1;
1630 + return NULL;
1645 1631 }
1646 1632 
1647 1633 /* Fall through */
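
For reference, the magic offsets in the EVPD hunk above follow the SPC fixed-format sense layout, and the CHECK_CONDITION << 1 converts the kernel's right-shifted status define back into the SAM status byte (0x02). The same sense data, written out explicitly:

/* Sketch: fixed-format sense for ILLEGAL REQUEST / INVALID FIELD IN CDB,
 * as fabricated by the EVPD filter above. */
static void fill_invalid_cdb_sense(unsigned char *sense)
{
	sense[0]  = 0x70;	/* response code: fixed format, current error */
	sense[2]  = 0x05;	/* sense key: ILLEGAL REQUEST */
	sense[12] = 0x24;	/* ASC: INVALID FIELD IN CDB */
	/* byte 13 (ASCQ) stays 0 */
}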
+2 -2
drivers/scsi/megaraid/megaraid_mbox.h
···
21 21 #include "megaraid_ioctl.h"
22 22 
23 23 
24 - #define MEGARAID_VERSION "2.20.4.8"
25 - #define MEGARAID_EXT_VERSION "(Release Date: Mon Apr 11 12:27:22 EST 2006)"
24 + #define MEGARAID_VERSION "2.20.4.9"
25 + #define MEGARAID_EXT_VERSION "(Release Date: Sun Jul 16 12:27:22 EST 2006)"
26 26 
27 27 
28 28 /*
+1 -1
drivers/scsi/megaraid/megaraid_mm.c
···
10 10 * 2 of the License, or (at your option) any later version.
11 11 *
12 12 * FILE : megaraid_mm.c
13 - * Version : v2.20.2.6 (Mar 7 2005)
13 + * Version : v2.20.2.7 (Jul 16 2006)
14 14 *
15 15 * Common management module
16 16 */
+2 -2
drivers/scsi/megaraid/megaraid_mm.h
···
27 27 #include "megaraid_ioctl.h"
28 28 
29 29 
30 - #define LSI_COMMON_MOD_VERSION "2.20.2.6"
30 + #define LSI_COMMON_MOD_VERSION "2.20.2.7"
31 31 #define LSI_COMMON_MOD_EXT_VERSION \
32 - "(Release Date: Mon Mar 7 00:01:03 EST 2005)"
32 + "(Release Date: Sun Jul 16 00:01:03 EST 2006)"
33 33 
34 34 
35 35 #define LSI_DBGLVL dbglevel
+1
drivers/scsi/qla2xxx/qla_def.h
···
487 487 #define MBA_IP_RCV_BUFFER_EMPTY 0x8026 /* IP receive buffer queue empty. */
488 488 #define MBA_IP_HDR_DATA_SPLIT 0x8027 /* IP header/data splitting feature */
489 489 /* used. */
490 + #define MBA_TRACE_NOTIFICATION 0x8028 /* Trace/Diagnostic notification. */
490 491 #define MBA_POINT_TO_POINT 0x8030 /* Point to point mode. */
491 492 #define MBA_CMPLT_1_16BIT 0x8031 /* Completion 1 16bit IOSB. */
492 493 #define MBA_CMPLT_2_16BIT 0x8032 /* Completion 2 16bit IOSB. */
+11
drivers/scsi/qla2xxx/qla_init.c
···
3063 3063 int
3064 3064 qla2x00_abort_isp(scsi_qla_host_t *ha)
3065 3065 {
3066 + int rval;
3066 3067 unsigned long flags = 0;
3067 3068 uint16_t cnt;
3068 3069 srb_t *sp;
···
3120 3119 
3121 3120 ha->isp_abort_cnt = 0;
3122 3121 clear_bit(ISP_ABORT_RETRY, &ha->dpc_flags);
3122 + 
3123 + if (ha->eft) {
3124 + rval = qla2x00_trace_control(ha, TC_ENABLE,
3125 + ha->eft_dma, EFT_NUM_BUFFERS);
3126 + if (rval) {
3127 + qla_printk(KERN_WARNING, ha,
3128 + "Unable to reinitialize EFT "
3129 + "(%d).\n", rval);
3130 + }
3131 + }
3123 3132 } else { /* failed the ISP abort */
3124 3133 ha->flags.online = 1;
3125 3134 if (test_bit(ISP_ABORT_RETRY, &ha->dpc_flags)) {
+1
drivers/scsi/qla2xxx/qla_iocb.c
···
471 471 mrk24->nport_handle = cpu_to_le16(loop_id);
472 472 mrk24->lun[1] = LSB(lun);
473 473 mrk24->lun[2] = MSB(lun);
474 + host_to_fcp_swap(mrk24->lun, sizeof(mrk24->lun));
474 475 } else {
475 476 SET_TARGET_ID(ha, mrk->target, loop_id);
476 477 mrk->lun = cpu_to_le16(lun);
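
host_to_fcp_swap() is defined elsewhere in qla2xxx; the point of the added call is that FCP LUN fields travel big-endian on the wire, while the byte stores above build the value in host order. A sketch of the kind of helper involved, assuming it byte-swaps each 32-bit word of the buffer (swab32() in kernel code):

/* Sketch, under the assumption stated above -- not the driver's
 * actual implementation. */
static void fcp_swap_sketch(uint8_t *buf, uint32_t bsize)
{
	uint32_t *p = (uint32_t *)buf;
	uint32_t n = bsize >> 2;

	while (n--) {
		*p = __builtin_bswap32(*p);
		p++;
	}
}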
+5
drivers/scsi/qla2xxx/qla_isr.c
···
587 587 DEBUG2(printk("scsi(%ld): Discard RND Frame -- %04x %04x "
588 588 "%04x.\n", ha->host_no, mb[1], mb[2], mb[3]));
589 589 break;
590 + 
591 + case MBA_TRACE_NOTIFICATION:
592 + DEBUG2(printk("scsi(%ld): Trace Notification -- %04x %04x.\n",
593 + ha->host_no, mb[1], mb[2]));
594 + break;
590 595 }
591 596 }
592 597 
+3 -12
drivers/scsi/qla2xxx/qla_os.c
···
744 744 {
745 745 scsi_qla_host_t *ha = to_qla_host(cmd->device->host);
746 746 fc_port_t *fcport = (struct fc_port *) cmd->device->hostdata;
747 - srb_t *sp;
748 747 int ret;
749 748 unsigned int id, lun;
750 749 unsigned long serial;
···
754 755 lun = cmd->device->lun;
755 756 serial = cmd->serial_number;
756 757 
757 - sp = (srb_t *) CMD_SP(cmd);
758 - if (!sp || !fcport)
758 + if (!fcport)
759 759 return ret;
760 760 
761 761 qla_printk(KERN_INFO, ha,
···
873 875 {
874 876 scsi_qla_host_t *ha = to_qla_host(cmd->device->host);
875 877 fc_port_t *fcport = (struct fc_port *) cmd->device->hostdata;
876 - srb_t *sp;
877 878 int ret;
878 879 unsigned int id, lun;
879 880 unsigned long serial;
···
883 886 lun = cmd->device->lun;
884 887 serial = cmd->serial_number;
885 888 
886 - sp = (srb_t *) CMD_SP(cmd);
887 - if (!sp || !fcport)
889 + if (!fcport)
888 890 return ret;
889 891 
890 892 qla_printk(KERN_INFO, ha,
···
932 936 {
933 937 scsi_qla_host_t *ha = to_qla_host(cmd->device->host);
934 938 fc_port_t *fcport = (struct fc_port *) cmd->device->hostdata;
935 - srb_t *sp;
936 939 int ret;
937 940 unsigned int id, lun;
938 941 unsigned long serial;
···
942 947 lun = cmd->device->lun;
943 948 serial = cmd->serial_number;
944 949 
945 - sp = (srb_t *) CMD_SP(cmd);
946 - if (!sp || !fcport)
950 + if (!fcport)
947 951 return ret;
948 952 
949 953 qla_printk(KERN_INFO, ha,
···
2238 2244 
2239 2245 next_loopid = 0;
2240 2246 list_for_each_entry(fcport, &ha->fcports, list) {
2241 - if (fcport->port_type != FCT_TARGET)
2242 - continue;
2243 - 
2244 2247 /*
2245 2248 * If the port is not ONLINE then try to login
2246 2249 * to it if we haven't run out of retries.
+2 -2
drivers/scsi/qla2xxx/qla_version.h
···
7 7 /*
8 8 * Driver version
9 9 */
10 - #define QLA2XXX_VERSION "8.01.05-k3"
10 + #define QLA2XXX_VERSION "8.01.07-k1"
11 11 
12 12 #define QLA_DRIVER_MAJOR_VER 8
13 13 #define QLA_DRIVER_MINOR_VER 1
14 - #define QLA_DRIVER_PATCH_VER 5
14 + #define QLA_DRIVER_PATCH_VER 7
15 15 #define QLA_DRIVER_BETA_VER 0
+9 -9
drivers/scsi/scsi_error.c
···
460 460 * Return value:
461 461 * SUCCESS or FAILED or NEEDS_RETRY
462 462 **/
463 - static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, int timeout, int copy_sense)
463 + static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, unsigned char *cmnd,
464 + int cmnd_size, int timeout, int copy_sense)
464 465 {
465 466 struct scsi_device *sdev = scmd->device;
466 467 struct Scsi_Host *shost = sdev->host;
···
490 489 old_data_direction = scmd->sc_data_direction;
491 490 old_cmd_len = scmd->cmd_len;
492 491 old_use_sg = scmd->use_sg;
492 + 
493 + memset(scmd->cmnd, 0, sizeof(scmd->cmnd));
494 + memcpy(scmd->cmnd, cmnd, cmnd_size);
493 495 
494 496 if (copy_sense) {
495 497 int gfp_mask = GFP_ATOMIC;
···
614 610 static unsigned char generic_sense[6] =
615 611 {REQUEST_SENSE, 0, 0, 0, 252, 0};
616 612 
617 - memcpy(scmd->cmnd, generic_sense, sizeof(generic_sense));
618 - return scsi_send_eh_cmnd(scmd, SENSE_TIMEOUT, 1);
613 + return scsi_send_eh_cmnd(scmd, generic_sense, 6, SENSE_TIMEOUT, 1);
619 614 }
620 615 
621 616 /**
···
739 736 int retry_cnt = 1, rtn;
740 737 
741 738 retry_tur:
742 - memcpy(scmd->cmnd, tur_command, sizeof(tur_command));
743 - 
744 - 
745 - rtn = scsi_send_eh_cmnd(scmd, SENSE_TIMEOUT, 0);
739 + rtn = scsi_send_eh_cmnd(scmd, tur_command, 6, SENSE_TIMEOUT, 0);
746 740 
747 741 SCSI_LOG_ERROR_RECOVERY(3, printk("%s: scmd %p rtn %x\n",
748 742 __FUNCTION__, scmd, rtn));
···
839 839 if (scmd->device->allow_restart) {
840 840 int rtn;
841 841 
842 - memcpy(scmd->cmnd, stu_command, sizeof(stu_command));
843 - rtn = scsi_send_eh_cmnd(scmd, START_UNIT_TIMEOUT, 0);
842 + rtn = scsi_send_eh_cmnd(scmd, stu_command, 6,
843 + START_UNIT_TIMEOUT, 0);
844 844 if (rtn == SUCCESS)
845 845 return 0;
846 846 }
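
After this change a caller no longer stages the CDB in scmd->cmnd itself: it hands scsi_send_eh_cmnd() the bytes and their length, and the function zeroes scmd->cmnd before copying, so a 6-byte recovery CDB cannot inherit stale trailing bytes from a longer previous command. Sketch of the new calling convention, mirroring the TEST UNIT READY caller above:

/* Sketch: issue TEST UNIT READY through the new interface. */
static unsigned char tur_cmd[6] = {TEST_UNIT_READY, 0, 0, 0, 0, 0};

static int send_tur_sketch(struct scsi_cmnd *scmd)
{
	return scsi_send_eh_cmnd(scmd, tur_cmd, sizeof(tur_cmd),
				 SENSE_TIMEOUT, /* copy_sense */ 0);
}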
+10 -5
drivers/scsi/scsi_transport_iscsi.c
···
34 34 #define ISCSI_SESSION_ATTRS 11
35 35 #define ISCSI_CONN_ATTRS 11
36 36 #define ISCSI_HOST_ATTRS 0
37 + #define ISCSI_TRANSPORT_VERSION "1.1-646"
37 38 
38 39 struct iscsi_internal {
39 40 int daemon_pid;
···
635 634 }
636 635 
637 636 static int
638 - iscsi_broadcast_skb(struct mempool_zone *zone, struct sk_buff *skb)
637 + iscsi_broadcast_skb(struct mempool_zone *zone, struct sk_buff *skb, gfp_t gfp)
639 638 {
640 639 unsigned long flags;
641 640 int rc;
642 641 
643 642 skb_get(skb);
644 - rc = netlink_broadcast(nls, skb, 0, 1, GFP_KERNEL);
643 + rc = netlink_broadcast(nls, skb, 0, 1, gfp);
645 644 if (rc < 0) {
646 645 mempool_free(skb, zone->pool);
647 646 printk(KERN_ERR "iscsi: can not broadcast skb (%d)\n", rc);
···
750 749 ev->r.connerror.cid = conn->cid;
751 750 ev->r.connerror.sid = iscsi_conn_get_sid(conn);
752 751 
753 - iscsi_broadcast_skb(conn->z_error, skb);
752 + iscsi_broadcast_skb(conn->z_error, skb, GFP_ATOMIC);
754 753 
755 754 dev_printk(KERN_INFO, &conn->dev, "iscsi: detected conn error (%d)\n",
756 755 error);
···
896 895 * this will occur if the daemon is not up, so we just warn
897 896 * the user and when the daemon is restarted it will handle it
898 897 */
899 - rc = iscsi_broadcast_skb(conn->z_pdu, skb);
898 + rc = iscsi_broadcast_skb(conn->z_pdu, skb, GFP_KERNEL);
900 899 if (rc < 0)
901 900 dev_printk(KERN_ERR, &conn->dev, "Cannot notify userspace of "
902 901 "session destruction event. Check iscsi daemon\n");
···
959 958 * this will occur if the daemon is not up, so we just warn
960 959 * the user and when the daemon is restarted it will handle it
961 960 */
962 - rc = iscsi_broadcast_skb(conn->z_pdu, skb);
961 + rc = iscsi_broadcast_skb(conn->z_pdu, skb, GFP_KERNEL);
963 962 if (rc < 0)
964 963 dev_printk(KERN_ERR, &conn->dev, "Cannot notify userspace of "
965 964 "session creation event. Check iscsi daemon\n");
···
1614 1613 {
1615 1614 int err;
1616 1615 
1616 + printk(KERN_INFO "Loading iSCSI transport class v%s.",
1617 + ISCSI_TRANSPORT_VERSION);
1618 + 
1617 1619 err = class_register(&iscsi_transport_class);
1618 1620 if (err)
1619 1621 return err;
···
1682 1678 "Alex Aizman <itn780@yahoo.com>");
1683 1679 MODULE_DESCRIPTION("iSCSI Transport Interface");
1684 1680 MODULE_LICENSE("GPL");
1681 + MODULE_VERSION(ISCSI_TRANSPORT_VERSION);
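
The gfp_t plumbing exists because the connection-error path can run in a context that must not sleep, which is why it now passes GFP_ATOMIC while the process-context session events keep GFP_KERNEL. The general pattern, sketched with the function body reduced to its essentials:

/* Sketch: the caller, not the helper, knows whether it may sleep. */
static int notify_sketch(struct sk_buff *skb, gfp_t gfp)
{
	skb_get(skb);
	return netlink_broadcast(nls, skb, 0, 1, gfp);	/* nls as above */
}

/* process context:          notify_sketch(skb, GFP_KERNEL);
 * interrupt/atomic context: notify_sketch(skb, GFP_ATOMIC); */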
+4 -4
drivers/scsi/sg.c
···
18 18 *
19 19 */
20 20 
21 - static int sg_version_num = 30533; /* 2 digits for each component */
22 - #define SG_VERSION_STR "3.5.33"
21 + static int sg_version_num = 30534; /* 2 digits for each component */
22 + #define SG_VERSION_STR "3.5.34"
23 23 
24 24 /*
25 25 * D. P. Gilbert (dgilbert@interlog.com, dougg@triode.net.au), notes:
···
60 60 
61 61 #ifdef CONFIG_SCSI_PROC_FS
62 62 #include <linux/proc_fs.h>
63 - static char *sg_version_date = "20050908";
63 + static char *sg_version_date = "20060818";
64 64 
65 65 static int sg_proc_init(void);
66 66 static void sg_proc_cleanup(void);
···
1164 1164 len = vma->vm_end - sa;
1165 1165 len = (len < sg->length) ? len : sg->length;
1166 1166 if (offset < len) {
1167 - page = sg->page;
1167 + page = virt_to_page(page_address(sg->page) + offset);
1168 1168 get_page(page); /* increment page count */
1169 1169 break;
1170 1170 }
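
The sg.c hunk fixes which page the mmap fault path returns: sg->page is only the first page of a segment that may span several contiguous pages, so for offsets past the first PAGE_SIZE bytes the old code handed back the wrong page. The arithmetic, isolated (valid because sg's reserved buffers are kernel-mapped lowmem, so page_address() is usable):

/* Sketch: find the page containing 'offset' inside a physically
 * contiguous, kernel-mapped segment.  virt_to_page() rounds the
 * byte address down to its containing struct page. */
static struct page *page_at_offset(struct scatterlist *sg,
				   unsigned long offset)
{
	return virt_to_page(page_address(sg->page) + offset);
}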
+1 -1
drivers/scsi/sym53c8xx_2/sym_glue.c
···
2084 2084 { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_NCR_53C860,
2085 2085 PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
2086 2086 { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_53C1510,
2087 - PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
2087 + PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_STORAGE_SCSI<<8, 0xffff00, 0UL },
2088 2088 { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_NCR_53C896,
2089 2089 PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
2090 2090 { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_NCR_53C895,
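
The 53C1510 change uses the class/class_mask fields of struct pci_device_id: with a mask of 0xffff00 only the base class and subclass are compared, so the driver binds the chip only when it reports itself as a plain SCSI controller (presumably leaving its other personality to a different driver). The same entry written with designated initializers, as a sketch:

/* Sketch: match on device class as well as vendor/device ID. */
static const struct pci_device_id c1510_scsi_only = {
	.vendor		= PCI_VENDOR_ID_LSI_LOGIC,
	.device		= PCI_DEVICE_ID_LSI_53C1510,
	.subvendor	= PCI_ANY_ID,
	.subdevice	= PCI_ANY_ID,
	.class		= PCI_CLASS_STORAGE_SCSI << 8,	/* 0x010000 */
	.class_mask	= 0xffff00,	/* compare class+subclass, ignore prog-if */
};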
+18 -1
include/scsi/libiscsi.h
···
60 60 #define TMABORT_SUCCESS 0x1
61 61 #define TMABORT_FAILED 0x2
62 62 #define TMABORT_TIMEDOUT 0x3
63 + #define TMABORT_NOT_FOUND 0x4
63 64 
64 65 /* Connection suspend "bit" */
65 66 #define ISCSI_SUSPEND_BIT 1
···
84 83 struct list_head running;
85 84 };
86 85 
86 + enum {
87 + ISCSI_TASK_COMPLETED,
88 + ISCSI_TASK_PENDING,
89 + ISCSI_TASK_RUNNING,
90 + };
91 + 
87 92 struct iscsi_cmd_task {
88 93 /*
89 94 * Becuae LLDs allocate their hdr differently, this is a pointer to
···
108 101 struct iscsi_conn *conn; /* used connection */
109 102 struct iscsi_mgmt_task *mtask; /* tmf mtask in progr */
110 103 
104 + /* state set/tested under session->lock */
105 + int state;
111 106 struct list_head running; /* running cmd list */
112 107 void *dd_data; /* driver/transport data */
113 108 };
···
135 126 int id; /* CID */
136 127 struct list_head item; /* maintains list of conns */
137 128 int c_stage; /* connection state */
129 + /*
130 + * Preallocated buffer for pdus that have data but do not
131 + * originate from scsi-ml. We never have two pdus using the
132 + * buffer at the same time. It is only allocated to
133 + * the default max recv size because the pdus we support
134 + * should always fit in this buffer
135 + */
136 + char *data;
138 137 struct iscsi_mgmt_task *login_mtask; /* mtask used for login/text */
139 138 struct iscsi_mgmt_task *mtask; /* xmit mtask in progress */
140 139 struct iscsi_cmd_task *ctask; /* xmit ctask in progress */
···
151 134 struct kfifo *immqueue; /* immediate xmit queue */
152 135 struct kfifo *mgmtqueue; /* mgmt (control) xmit queue */
153 136 struct list_head mgmt_run_list; /* list of control tasks */
154 - struct kfifo *xmitqueue; /* data-path cmd queue */
137 + struct list_head xmitqueue; /* data-path cmd queue */
155 138 struct list_head run_list; /* list of cmds in progress */
156 139 struct work_struct xmitwork; /* per-conn. xmit workqueue */
157 140 /*
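
Changing xmitqueue from a kfifo to a list_head pairs with the new per-task state field: a list allows a specific iscsi_cmd_task to be unlinked from the middle of the queue (for instance when error handling completes it early), which a fifo cannot express. A minimal sketch of both sides, taking session->lock as the header comment above prescribes:

/* Sketch: queue a task on the data-path xmit queue ... */
static void queue_ctask_sketch(struct iscsi_session *session,
			       struct iscsi_conn *conn,
			       struct iscsi_cmd_task *ctask)
{
	spin_lock_bh(&session->lock);
	ctask->state = ISCSI_TASK_PENDING;
	list_add_tail(&ctask->running, &conn->xmitqueue);
	spin_unlock_bh(&session->lock);
}

/* ... and later pull it back out of order, the operation the old
 * kfifo could not support. */
static void unlink_ctask_sketch(struct iscsi_session *session,
				struct iscsi_cmd_task *ctask)
{
	spin_lock_bh(&session->lock);
	if (ctask->state == ISCSI_TASK_PENDING)
		list_del_init(&ctask->running);
	spin_unlock_bh(&session->lock);
}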
-4
include/scsi/scsi_transport_iscsi.h
···
57 57 * @stop_conn: suspend/recover/terminate connection
58 58 * @send_pdu: send iSCSI PDU, Login, Logout, NOP-Out, Reject, Text.
59 59 * @session_recovery_timedout: notify LLD a block during recovery timed out
60 - * @suspend_conn_recv: susepend the recv side of the connection
61 - * @termincate_conn: destroy socket connection. Called with mutex lock.
62 60 * @init_cmd_task: Initialize a iscsi_cmd_task and any internal structs.
63 61 * Called from queuecommand with session lock held.
64 62 * @init_mgmt_task: Initialize a iscsi_mgmt_task and any internal structs.
···
110 112 char *data, uint32_t data_size);
111 113 void (*get_stats) (struct iscsi_cls_conn *conn,
112 114 struct iscsi_stats *stats);
113 - void (*suspend_conn_recv) (struct iscsi_conn *conn);
114 - void (*terminate_conn) (struct iscsi_conn *conn);
115 115 void (*init_cmd_task) (struct iscsi_cmd_task *ctask);
116 116 void (*init_mgmt_task) (struct iscsi_conn *conn,
117 117 struct iscsi_mgmt_task *mtask,