Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

libsas: remove task_collector mode

The task_collector mode (or "latency_injector", (C) Dan Williams) is an
optional I/O path in libsas that queues up scsi commands instead of
sending them directly to the hardware. It generally increases latency
to, in the optimal case, slightly reduce mmio traffic to the hardware.

Only the obsolete aic94xx driver and the mvsas driver allowed it to be
used without recompiling the kernel, and most drivers didn't support it
at all.

Remove the giant blob of code to allow better optimizations for scsi-mq
in the future.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Acked-by: Dan Williams <dan.j.williams@intel.com>

+94 -553
+4 -78
Documentation/scsi/libsas.txt
··· 226 226 my_ha->sas_ha.lldd_dev_found = my_dev_found; 227 227 my_ha->sas_ha.lldd_dev_gone = my_dev_gone; 228 228 229 - my_ha->sas_ha.lldd_max_execute_num = lldd_max_execute_num; (1) 230 - 231 - my_ha->sas_ha.lldd_queue_size = ha_can_queue; 232 229 my_ha->sas_ha.lldd_execute_task = my_execute_task; 233 230 234 231 my_ha->sas_ha.lldd_abort_task = my_abort_task; ··· 243 246 244 247 return sas_register_ha(&my_ha->sas_ha); 245 248 } 246 - 247 - (1) This is normally a LLDD parameter, something of the 248 - lines of a task collector. What it tells the SAS Layer is 249 - whether the SAS layer should run in Direct Mode (default: 250 - value 0 or 1) or Task Collector Mode (value greater than 1). 251 - 252 - In Direct Mode, the SAS Layer calls Execute Task as soon as 253 - it has a command to send to the SDS, _and_ this is a single 254 - command, i.e. not linked. 255 - 256 - Some hardware (e.g. aic94xx) has the capability to DMA more 257 - than one task at a time (interrupt) from host memory. Task 258 - Collector Mode is an optional feature for HAs which support 259 - this in their hardware. (Again, it is completely optional 260 - even if your hardware supports it.) 261 - 262 - In Task Collector Mode, the SAS Layer would do _natural_ 263 - coalescing of tasks and at the appropriate moment it would 264 - call your driver to DMA more than one task in a single HA 265 - interrupt. DMBS may want to use this by insmod/modprobe 266 - setting the lldd_max_execute_num to something greater than 267 - 1. 268 249 269 250 (2) SAS 1.1 does not define I_T Nexus Reset TMF. 270 251 ··· 300 325 301 326 The Execute Command SCSI RPC: 302 327 303 - int (*lldd_execute_task)(struct sas_task *, int num, 304 - unsigned long gfp_flags); 328 + int (*lldd_execute_task)(struct sas_task *, gfp_t gfp_flags); 305 329 306 - Used to queue a task to the SAS LLDD. @task is the tasks to 307 - be executed. 
@num should be the number of tasks being 308 - queued at this function call (they are linked listed via 309 - task::list), @gfp_mask should be the gfp_mask defining the 310 - context of the caller. 330 + Used to queue a task to the SAS LLDD. @task is the task to be executed. 331 + @gfp_mask is the gfp_mask defining the context of the caller. 311 332 312 333 This function should implement the Execute Command SCSI RPC, 313 - or if you're sending a SCSI Task as linked commands, you 314 - should also use this function. 315 334 316 - That is, when lldd_execute_task() is called, the command(s) 335 + That is, when lldd_execute_task() is called, the command 317 336 go out on the transport *immediately*. There is *no* 318 337 queuing of any sort and at any level in a SAS LLDD. 319 - 320 - The use of task::list is two-fold, one for linked commands, 321 - the other discussed below. 322 - 323 - It is possible to queue up more than one task at a time, by 324 - initializing the list element of struct sas_task, and 325 - passing the number of tasks enlisted in this manner in num. 326 338 327 339 Returns: -SAS_QUEUE_FULL, -ENOMEM, nothing was queued; 328 340 0, the task(s) were queued. 329 341 330 - If you want to pass num > 1, then either 331 - A) you're the only caller of this function and keep track 332 - of what you've queued to the LLDD, or 333 - B) you know what you're doing and have a strategy of 334 - retrying. 335 - 336 - As opposed to queuing one task at a time (function call), 337 - batch queuing of tasks, by having num > 1, greatly 338 - simplifies LLDD code, sequencer code, and _hardware design_, 339 - and has some performance advantages in certain situations 340 - (DBMS). 341 - 342 - The LLDD advertises if it can take more than one command at 343 - a time at lldd_execute_task(), by setting the 344 - lldd_max_execute_num parameter (controlled by "collector" 345 - module parameter in aic94xx SAS LLDD). 
346 - 347 - You should leave this to the default 1, unless you know what 348 - you're doing. 349 - 350 - This is a function of the LLDD, to which the SAS layer can 351 - cater to. 352 - 353 - int lldd_queue_size 354 - The host adapter's queue size. This is the maximum 355 - number of commands the lldd can have pending to domain 356 - devices on behalf of all upper layers submitting through 357 - lldd_execute_task(). 358 - 359 - You really want to set this to something (much) larger than 360 - 1. 361 - 362 - This _really_ has absolutely nothing to do with queuing. 363 - There is no queuing in SAS LLDDs. 364 - 365 342 struct sas_task { 366 343 dev -- the device this task is destined to 367 - list -- must be initialized (INIT_LIST_HEAD) 368 344 task_proto -- _one_ of enum sas_proto 369 345 scatter -- pointer to scatter gather list array 370 346 num_scatter -- number of elements in scatter
+1 -1
drivers/scsi/aic94xx/aic94xx.h
··· 78 78 79 79 void asd_invalidate_edb(struct asd_ascb *ascb, int edb_id); 80 80 81 - int asd_execute_task(struct sas_task *, int num, gfp_t gfp_flags); 81 + int asd_execute_task(struct sas_task *task, gfp_t gfp_flags); 82 82 83 83 void asd_set_dmamode(struct domain_device *dev); 84 84
+1 -2
drivers/scsi/aic94xx/aic94xx_hwi.c
··· 1200 1200 * Case A: we can send the whole batch at once. Increment "pending" 1201 1201 * in the beginning of this function, when it is checked, in order to 1202 1202 * eliminate races when this function is called by multiple processes. 1203 - * Case B: should never happen if the managing layer considers 1204 - * lldd_queue_size. 1203 + * Case B: should never happen. 1205 1204 */ 1206 1205 int asd_post_ascb_list(struct asd_ha_struct *asd_ha, struct asd_ascb *ascb, 1207 1206 int num)
-11
drivers/scsi/aic94xx/aic94xx_init.c
··· 49 49 "\tEnable(1) or disable(0) using PCI MSI.\n" 50 50 "\tDefault: 0"); 51 51 52 - static int lldd_max_execute_num = 0; 53 - module_param_named(collector, lldd_max_execute_num, int, S_IRUGO); 54 - MODULE_PARM_DESC(collector, "\n" 55 - "\tIf greater than one, tells the SAS Layer to run in Task Collector\n" 56 - "\tMode. If 1 or 0, tells the SAS Layer to run in Direct Mode.\n" 57 - "\tThe aic94xx SAS LLDD supports both modes.\n" 58 - "\tDefault: 0 (Direct Mode).\n"); 59 - 60 52 static struct scsi_transport_template *aic94xx_transport_template; 61 53 static int asd_scan_finished(struct Scsi_Host *, unsigned long); 62 54 static void asd_scan_start(struct Scsi_Host *); ··· 702 710 asd_ha->sas_ha.sas_phy = sas_phys; 703 711 asd_ha->sas_ha.sas_port= sas_ports; 704 712 asd_ha->sas_ha.num_phys= ASD_MAX_PHYS; 705 - 706 - asd_ha->sas_ha.lldd_queue_size = asd_ha->seq.can_queue; 707 - asd_ha->sas_ha.lldd_max_execute_num = lldd_max_execute_num; 708 713 709 714 return sas_register_ha(&asd_ha->sas_ha); 710 715 }
+6 -7
drivers/scsi/aic94xx/aic94xx_task.c
··· 543 543 return res; 544 544 } 545 545 546 - int asd_execute_task(struct sas_task *task, const int num, 547 - gfp_t gfp_flags) 546 + int asd_execute_task(struct sas_task *task, gfp_t gfp_flags) 548 547 { 549 548 int res = 0; 550 549 LIST_HEAD(alist); ··· 552 553 struct asd_ha_struct *asd_ha = task->dev->port->ha->lldd_ha; 553 554 unsigned long flags; 554 555 555 - res = asd_can_queue(asd_ha, num); 556 + res = asd_can_queue(asd_ha, 1); 556 557 if (res) 557 558 return res; 558 559 559 - res = num; 560 + res = 1; 560 561 ascb = asd_ascb_alloc_list(asd_ha, &res, gfp_flags); 561 562 if (res) { 562 563 res = -ENOMEM; ··· 567 568 list_for_each_entry(a, &alist, list) { 568 569 a->uldd_task = t; 569 570 t->lldd_task = a; 570 - t = list_entry(t->list.next, struct sas_task, list); 571 + break; 571 572 } 572 573 list_for_each_entry(a, &alist, list) { 573 574 t = a->uldd_task; ··· 600 601 } 601 602 list_del_init(&alist); 602 603 603 - res = asd_post_ascb_list(asd_ha, ascb, num); 604 + res = asd_post_ascb_list(asd_ha, ascb, 1); 604 605 if (unlikely(res)) { 605 606 a = NULL; 606 607 __list_add(&alist, ascb->list.prev, &ascb->list); ··· 638 639 out_err: 639 640 if (ascb) 640 641 asd_ascb_free_list(ascb); 641 - asd_can_dequeue(asd_ha, num); 642 + asd_can_dequeue(asd_ha, 1); 642 643 return res; 643 644 }
-2
drivers/scsi/isci/init.c
··· 260 260 sas_ha->sas_port = sas_ports; 261 261 sas_ha->num_phys = SCI_MAX_PHYS; 262 262 263 - sas_ha->lldd_queue_size = ISCI_CAN_QUEUE_VAL; 264 - sas_ha->lldd_max_execute_num = 1; 265 263 sas_ha->strict_wide_ports = 1; 266 264 267 265 sas_register_ha(sas_ha);
+67 -74
drivers/scsi/isci/task.c
··· 117 117 * functions. This function is called by libsas to send a task down to 118 118 * hardware. 119 119 * @task: This parameter specifies the SAS task to send. 120 - * @num: This parameter specifies the number of tasks to queue. 121 120 * @gfp_flags: This parameter specifies the context of this call. 122 121 * 123 122 * status, zero indicates success. 124 123 */ 125 - int isci_task_execute_task(struct sas_task *task, int num, gfp_t gfp_flags) 124 + int isci_task_execute_task(struct sas_task *task, gfp_t gfp_flags) 126 125 { 127 126 struct isci_host *ihost = dev_to_ihost(task->dev); 128 127 struct isci_remote_device *idev; 129 128 unsigned long flags; 129 + enum sci_status status = SCI_FAILURE; 130 130 bool io_ready; 131 131 u16 tag; 132 132 133 - dev_dbg(&ihost->pdev->dev, "%s: num=%d\n", __func__, num); 133 + spin_lock_irqsave(&ihost->scic_lock, flags); 134 + idev = isci_lookup_device(task->dev); 135 + io_ready = isci_device_io_ready(idev, task); 136 + tag = isci_alloc_tag(ihost); 137 + spin_unlock_irqrestore(&ihost->scic_lock, flags); 134 138 135 - for_each_sas_task(num, task) { 136 - enum sci_status status = SCI_FAILURE; 139 + dev_dbg(&ihost->pdev->dev, 140 + "task: %p, dev: %p idev: %p:%#lx cmd = %p\n", 141 + task, task->dev, idev, idev ? idev->flags : 0, 142 + task->uldd_task); 137 143 138 - spin_lock_irqsave(&ihost->scic_lock, flags); 139 - idev = isci_lookup_device(task->dev); 140 - io_ready = isci_device_io_ready(idev, task); 141 - tag = isci_alloc_tag(ihost); 142 - spin_unlock_irqrestore(&ihost->scic_lock, flags); 144 + if (!idev) { 145 + isci_task_refuse(ihost, task, SAS_TASK_UNDELIVERED, 146 + SAS_DEVICE_UNKNOWN); 147 + } else if (!io_ready || tag == SCI_CONTROLLER_INVALID_IO_TAG) { 148 + /* Indicate QUEUE_FULL so that the scsi midlayer 149 + * retries. 150 + */ 151 + isci_task_refuse(ihost, task, SAS_TASK_COMPLETE, 152 + SAS_QUEUE_FULL); 153 + } else { 154 + /* There is a device and it's ready for I/O. 
*/ 155 + spin_lock_irqsave(&task->task_state_lock, flags); 143 156 144 - dev_dbg(&ihost->pdev->dev, 145 - "task: %p, num: %d dev: %p idev: %p:%#lx cmd = %p\n", 146 - task, num, task->dev, idev, idev ? idev->flags : 0, 147 - task->uldd_task); 157 + if (task->task_state_flags & SAS_TASK_STATE_ABORTED) { 158 + /* The I/O was aborted. */ 159 + spin_unlock_irqrestore(&task->task_state_lock, flags); 148 160 149 - if (!idev) { 150 - isci_task_refuse(ihost, task, SAS_TASK_UNDELIVERED, 151 - SAS_DEVICE_UNKNOWN); 152 - } else if (!io_ready || tag == SCI_CONTROLLER_INVALID_IO_TAG) { 153 - /* Indicate QUEUE_FULL so that the scsi midlayer 154 - * retries. 155 - */ 156 - isci_task_refuse(ihost, task, SAS_TASK_COMPLETE, 157 - SAS_QUEUE_FULL); 161 + isci_task_refuse(ihost, task, 162 + SAS_TASK_UNDELIVERED, 163 + SAM_STAT_TASK_ABORTED); 158 164 } else { 159 - /* There is a device and it's ready for I/O. */ 160 - spin_lock_irqsave(&task->task_state_lock, flags); 165 + task->task_state_flags |= SAS_TASK_AT_INITIATOR; 166 + spin_unlock_irqrestore(&task->task_state_lock, flags); 161 167 162 - if (task->task_state_flags & SAS_TASK_STATE_ABORTED) { 163 - /* The I/O was aborted. */ 164 - spin_unlock_irqrestore(&task->task_state_lock, 165 - flags); 168 + /* build and send the request. */ 169 + status = isci_request_execute(ihost, idev, task, tag); 166 170 167 - isci_task_refuse(ihost, task, 168 - SAS_TASK_UNDELIVERED, 169 - SAM_STAT_TASK_ABORTED); 170 - } else { 171 - task->task_state_flags |= SAS_TASK_AT_INITIATOR; 171 + if (status != SCI_SUCCESS) { 172 + spin_lock_irqsave(&task->task_state_lock, flags); 173 + /* Did not really start this command. */ 174 + task->task_state_flags &= ~SAS_TASK_AT_INITIATOR; 172 175 spin_unlock_irqrestore(&task->task_state_lock, flags); 173 176 174 - /* build and send the request. 
*/ 175 - status = isci_request_execute(ihost, idev, task, tag); 176 - 177 - if (status != SCI_SUCCESS) { 178 - 179 - spin_lock_irqsave(&task->task_state_lock, flags); 180 - /* Did not really start this command. */ 181 - task->task_state_flags &= ~SAS_TASK_AT_INITIATOR; 182 - spin_unlock_irqrestore(&task->task_state_lock, flags); 183 - 184 - if (test_bit(IDEV_GONE, &idev->flags)) { 185 - 186 - /* Indicate that the device 187 - * is gone. 188 - */ 189 - isci_task_refuse(ihost, task, 190 - SAS_TASK_UNDELIVERED, 191 - SAS_DEVICE_UNKNOWN); 192 - } else { 193 - /* Indicate QUEUE_FULL so that 194 - * the scsi midlayer retries. 195 - * If the request failed for 196 - * remote device reasons, it 197 - * gets returned as 198 - * SAS_TASK_UNDELIVERED next 199 - * time through. 200 - */ 201 - isci_task_refuse(ihost, task, 202 - SAS_TASK_COMPLETE, 203 - SAS_QUEUE_FULL); 204 - } 177 + if (test_bit(IDEV_GONE, &idev->flags)) { 178 + /* Indicate that the device 179 + * is gone. 180 + */ 181 + isci_task_refuse(ihost, task, 182 + SAS_TASK_UNDELIVERED, 183 + SAS_DEVICE_UNKNOWN); 184 + } else { 185 + /* Indicate QUEUE_FULL so that 186 + * the scsi midlayer retries. 187 + * If the request failed for 188 + * remote device reasons, it 189 + * gets returned as 190 + * SAS_TASK_UNDELIVERED next 191 + * time through. 
192 + */ 193 + isci_task_refuse(ihost, task, 194 + SAS_TASK_COMPLETE, 195 + SAS_QUEUE_FULL); 205 196 } 206 197 } 207 198 } 208 - if (status != SCI_SUCCESS && tag != SCI_CONTROLLER_INVALID_IO_TAG) { 209 - spin_lock_irqsave(&ihost->scic_lock, flags); 210 - /* command never hit the device, so just free 211 - * the tci and skip the sequence increment 212 - */ 213 - isci_tci_free(ihost, ISCI_TAG_TCI(tag)); 214 - spin_unlock_irqrestore(&ihost->scic_lock, flags); 215 - } 216 - isci_put_device(idev); 217 199 } 200 + 201 + if (status != SCI_SUCCESS && tag != SCI_CONTROLLER_INVALID_IO_TAG) { 202 + spin_lock_irqsave(&ihost->scic_lock, flags); 203 + /* command never hit the device, so just free 204 + * the tci and skip the sequence increment 205 + */ 206 + isci_tci_free(ihost, ISCI_TAG_TCI(tag)); 207 + spin_unlock_irqrestore(&ihost->scic_lock, flags); 208 + } 209 + 210 + isci_put_device(idev); 218 211 return 0; 219 212 } 220 213
-1
drivers/scsi/isci/task.h
··· 131 131 132 132 int isci_task_execute_task( 133 133 struct sas_task *task, 134 - int num, 135 134 gfp_t gfp_flags); 136 135 137 136 int isci_task_abort_task(
+1 -8
drivers/scsi/libsas/sas_ata.c
··· 171 171 spin_unlock_irqrestore(ap->lock, flags); 172 172 173 173 qc_already_gone: 174 - list_del_init(&task->list); 175 174 sas_free_task(task); 176 175 } 177 176 ··· 243 244 if (qc->scsicmd) 244 245 ASSIGN_SAS_TASK(qc->scsicmd, task); 245 246 246 - if (sas_ha->lldd_max_execute_num < 2) 247 - ret = i->dft->lldd_execute_task(task, 1, GFP_ATOMIC); 248 - else 249 - ret = sas_queue_up(task); 250 - 251 - /* Examine */ 247 + ret = i->dft->lldd_execute_task(task, GFP_ATOMIC); 252 248 if (ret) { 253 249 SAS_DPRINTK("lldd_execute_task returned: %d\n", ret); 254 250 ··· 479 485 480 486 return; 481 487 out: 482 - list_del_init(&task->list); 483 488 sas_free_task(task); 484 489 } 485 490
+1 -1
drivers/scsi/libsas/sas_expander.c
··· 96 96 task->slow_task->timer.expires = jiffies + SMP_TIMEOUT*HZ; 97 97 add_timer(&task->slow_task->timer); 98 98 99 - res = i->dft->lldd_execute_task(task, 1, GFP_KERNEL); 99 + res = i->dft->lldd_execute_task(task, GFP_KERNEL); 100 100 101 101 if (res) { 102 102 del_timer(&task->slow_task->timer);
-21
drivers/scsi/libsas/sas_init.c
··· 45 45 struct sas_task *task = kmem_cache_zalloc(sas_task_cache, flags); 46 46 47 47 if (task) { 48 - INIT_LIST_HEAD(&task->list); 49 48 spin_lock_init(&task->task_state_lock); 50 49 task->task_state_flags = SAS_TASK_STATE_PENDING; 51 50 } ··· 76 77 void sas_free_task(struct sas_task *task) 77 78 { 78 79 if (task) { 79 - BUG_ON(!list_empty(&task->list)); 80 80 kfree(task->slow_task); 81 81 kmem_cache_free(sas_task_cache, task); 82 82 } ··· 125 127 spin_lock_init(&sas_ha->phy_port_lock); 126 128 sas_hash_addr(sas_ha->hashed_sas_addr, sas_ha->sas_addr); 127 129 128 - if (sas_ha->lldd_queue_size == 0) 129 - sas_ha->lldd_queue_size = 1; 130 - else if (sas_ha->lldd_queue_size == -1) 131 - sas_ha->lldd_queue_size = 128; /* Sanity */ 132 - 133 130 set_bit(SAS_HA_REGISTERED, &sas_ha->state); 134 131 spin_lock_init(&sas_ha->lock); 135 132 mutex_init(&sas_ha->drain_mutex); ··· 148 155 if (error) { 149 156 printk(KERN_NOTICE "couldn't start event thread:%d\n", error); 150 157 goto Undo_ports; 151 - } 152 - 153 - if (sas_ha->lldd_max_execute_num > 1) { 154 - error = sas_init_queue(sas_ha); 155 - if (error) { 156 - printk(KERN_NOTICE "couldn't start queue thread:%d, " 157 - "running in direct mode\n", error); 158 - sas_ha->lldd_max_execute_num = 1; 159 - } 160 158 } 161 159 162 160 INIT_LIST_HEAD(&sas_ha->eh_done_q); ··· 184 200 mutex_lock(&sas_ha->drain_mutex); 185 201 __sas_drain_work(sas_ha); 186 202 mutex_unlock(&sas_ha->drain_mutex); 187 - 188 - if (sas_ha->lldd_max_execute_num > 1) { 189 - sas_shutdown_queue(sas_ha); 190 - sas_ha->lldd_max_execute_num = 1; 191 - } 192 203 193 204 return 0; 194 205 }
-2
drivers/scsi/libsas/sas_internal.h
··· 66 66 67 67 enum blk_eh_timer_return sas_scsi_timed_out(struct scsi_cmnd *); 68 68 69 - int sas_init_queue(struct sas_ha_struct *sas_ha); 70 69 int sas_init_events(struct sas_ha_struct *sas_ha); 71 - void sas_shutdown_queue(struct sas_ha_struct *sas_ha); 72 70 void sas_disable_revalidation(struct sas_ha_struct *ha); 73 71 void sas_enable_revalidation(struct sas_ha_struct *ha); 74 72 void __sas_drain_work(struct sas_ha_struct *ha);
+1 -175
drivers/scsi/libsas/sas_scsi_host.c
··· 112 112 113 113 sc->result = (hs << 16) | stat; 114 114 ASSIGN_SAS_TASK(sc, NULL); 115 - list_del_init(&task->list); 116 115 sas_free_task(task); 117 116 } 118 117 ··· 137 138 138 139 if (unlikely(!sc)) { 139 140 SAS_DPRINTK("task_done called with non existing SCSI cmnd!\n"); 140 - list_del_init(&task->list); 141 141 sas_free_task(task); 142 142 return; 143 143 } ··· 177 179 return task; 178 180 } 179 181 180 - int sas_queue_up(struct sas_task *task) 181 - { 182 - struct sas_ha_struct *sas_ha = task->dev->port->ha; 183 - struct scsi_core *core = &sas_ha->core; 184 - unsigned long flags; 185 - LIST_HEAD(list); 186 - 187 - spin_lock_irqsave(&core->task_queue_lock, flags); 188 - if (sas_ha->lldd_queue_size < core->task_queue_size + 1) { 189 - spin_unlock_irqrestore(&core->task_queue_lock, flags); 190 - return -SAS_QUEUE_FULL; 191 - } 192 - list_add_tail(&task->list, &core->task_queue); 193 - core->task_queue_size += 1; 194 - spin_unlock_irqrestore(&core->task_queue_lock, flags); 195 - wake_up_process(core->queue_thread); 196 - 197 - return 0; 198 - } 199 - 200 182 int sas_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) 201 183 { 202 184 struct sas_internal *i = to_sas_internal(host->transportt); 203 185 struct domain_device *dev = cmd_to_domain_dev(cmd); 204 - struct sas_ha_struct *sas_ha = dev->port->ha; 205 186 struct sas_task *task; 206 187 int res = 0; 207 188 ··· 201 224 if (!task) 202 225 return SCSI_MLQUEUE_HOST_BUSY; 203 226 204 - /* Queue up, Direct Mode or Task Collector Mode. 
*/ 205 - if (sas_ha->lldd_max_execute_num < 2) 206 - res = i->dft->lldd_execute_task(task, 1, GFP_ATOMIC); 207 - else 208 - res = sas_queue_up(task); 209 - 227 + res = i->dft->lldd_execute_task(task, GFP_ATOMIC); 210 228 if (res) 211 229 goto out_free_task; 212 230 return 0; ··· 295 323 TASK_IS_DONE, 296 324 TASK_IS_ABORTED, 297 325 TASK_IS_AT_LU, 298 - TASK_IS_NOT_AT_HA, 299 326 TASK_IS_NOT_AT_LU, 300 327 TASK_ABORT_FAILED, 301 328 }; 302 329 303 330 static enum task_disposition sas_scsi_find_task(struct sas_task *task) 304 331 { 305 - struct sas_ha_struct *ha = task->dev->port->ha; 306 332 unsigned long flags; 307 333 int i, res; 308 334 struct sas_internal *si = 309 335 to_sas_internal(task->dev->port->ha->core.shost->transportt); 310 - 311 - if (ha->lldd_max_execute_num > 1) { 312 - struct scsi_core *core = &ha->core; 313 - struct sas_task *t, *n; 314 - 315 - mutex_lock(&core->task_queue_flush); 316 - spin_lock_irqsave(&core->task_queue_lock, flags); 317 - list_for_each_entry_safe(t, n, &core->task_queue, list) 318 - if (task == t) { 319 - list_del_init(&t->list); 320 - break; 321 - } 322 - spin_unlock_irqrestore(&core->task_queue_lock, flags); 323 - mutex_unlock(&core->task_queue_flush); 324 - 325 - if (task == t) 326 - return TASK_IS_NOT_AT_HA; 327 - } 328 336 329 337 for (i = 0; i < 5; i++) { 330 338 SAS_DPRINTK("%s: aborting task 0x%p\n", __func__, task); ··· 619 667 cmd->eh_eflags = 0; 620 668 621 669 switch (res) { 622 - case TASK_IS_NOT_AT_HA: 623 - SAS_DPRINTK("%s: task 0x%p is not at ha: %s\n", 624 - __func__, task, 625 - cmd->retries ? 
"retry" : "aborted"); 626 - if (cmd->retries) 627 - cmd->retries--; 628 - sas_eh_finish_cmd(cmd); 629 - continue; 630 670 case TASK_IS_DONE: 631 671 SAS_DPRINTK("%s: task 0x%p is done\n", __func__, 632 672 task); ··· 780 836 scsi_eh_ready_devs(shost, &eh_work_q, &ha->eh_done_q); 781 837 782 838 out: 783 - if (ha->lldd_max_execute_num > 1) 784 - wake_up_process(ha->core.queue_thread); 785 - 786 839 sas_eh_handle_resets(shost); 787 840 788 841 /* now link into libata eh --- if we have any ata devices */ ··· 923 982 hsc[2] = capacity; 924 983 925 984 return 0; 926 - } 927 - 928 - /* ---------- Task Collector Thread implementation ---------- */ 929 - 930 - static void sas_queue(struct sas_ha_struct *sas_ha) 931 - { 932 - struct scsi_core *core = &sas_ha->core; 933 - unsigned long flags; 934 - LIST_HEAD(q); 935 - int can_queue; 936 - int res; 937 - struct sas_internal *i = to_sas_internal(core->shost->transportt); 938 - 939 - mutex_lock(&core->task_queue_flush); 940 - spin_lock_irqsave(&core->task_queue_lock, flags); 941 - while (!kthread_should_stop() && 942 - !list_empty(&core->task_queue) && 943 - !test_bit(SAS_HA_FROZEN, &sas_ha->state)) { 944 - 945 - can_queue = sas_ha->lldd_queue_size - core->task_queue_size; 946 - if (can_queue >= 0) { 947 - can_queue = core->task_queue_size; 948 - list_splice_init(&core->task_queue, &q); 949 - } else { 950 - struct list_head *a, *n; 951 - 952 - can_queue = sas_ha->lldd_queue_size; 953 - list_for_each_safe(a, n, &core->task_queue) { 954 - list_move_tail(a, &q); 955 - if (--can_queue == 0) 956 - break; 957 - } 958 - can_queue = sas_ha->lldd_queue_size; 959 - } 960 - core->task_queue_size -= can_queue; 961 - spin_unlock_irqrestore(&core->task_queue_lock, flags); 962 - { 963 - struct sas_task *task = list_entry(q.next, 964 - struct sas_task, 965 - list); 966 - list_del_init(&q); 967 - res = i->dft->lldd_execute_task(task, can_queue, 968 - GFP_KERNEL); 969 - if (unlikely(res)) 970 - __list_add(&q, task->list.prev, &task->list); 971 - 
} 972 - spin_lock_irqsave(&core->task_queue_lock, flags); 973 - if (res) { 974 - list_splice_init(&q, &core->task_queue); /*at head*/ 975 - core->task_queue_size += can_queue; 976 - } 977 - } 978 - spin_unlock_irqrestore(&core->task_queue_lock, flags); 979 - mutex_unlock(&core->task_queue_flush); 980 - } 981 - 982 - /** 983 - * sas_queue_thread -- The Task Collector thread 984 - * @_sas_ha: pointer to struct sas_ha 985 - */ 986 - static int sas_queue_thread(void *_sas_ha) 987 - { 988 - struct sas_ha_struct *sas_ha = _sas_ha; 989 - 990 - while (1) { 991 - set_current_state(TASK_INTERRUPTIBLE); 992 - schedule(); 993 - sas_queue(sas_ha); 994 - if (kthread_should_stop()) 995 - break; 996 - } 997 - 998 - return 0; 999 - } 1000 - 1001 - int sas_init_queue(struct sas_ha_struct *sas_ha) 1002 - { 1003 - struct scsi_core *core = &sas_ha->core; 1004 - 1005 - spin_lock_init(&core->task_queue_lock); 1006 - mutex_init(&core->task_queue_flush); 1007 - core->task_queue_size = 0; 1008 - INIT_LIST_HEAD(&core->task_queue); 1009 - 1010 - core->queue_thread = kthread_run(sas_queue_thread, sas_ha, 1011 - "sas_queue_%d", core->shost->host_no); 1012 - if (IS_ERR(core->queue_thread)) 1013 - return PTR_ERR(core->queue_thread); 1014 - return 0; 1015 - } 1016 - 1017 - void sas_shutdown_queue(struct sas_ha_struct *sas_ha) 1018 - { 1019 - unsigned long flags; 1020 - struct scsi_core *core = &sas_ha->core; 1021 - struct sas_task *task, *n; 1022 - 1023 - kthread_stop(core->queue_thread); 1024 - 1025 - if (!list_empty(&core->task_queue)) 1026 - SAS_DPRINTK("HA: %llx: scsi core task queue is NOT empty!?\n", 1027 - SAS_ADDR(sas_ha->sas_addr)); 1028 - 1029 - spin_lock_irqsave(&core->task_queue_lock, flags); 1030 - list_for_each_entry_safe(task, n, &core->task_queue, list) { 1031 - struct scsi_cmnd *cmd = task->uldd_task; 1032 - 1033 - list_del_init(&task->list); 1034 - 1035 - ASSIGN_SAS_TASK(cmd, NULL); 1036 - sas_free_task(task); 1037 - cmd->result = DID_ABORT << 16; 1038 - cmd->scsi_done(cmd); 1039 
- } 1040 - spin_unlock_irqrestore(&core->task_queue_lock, flags); 1041 985 } 1042 986 1043 987 /*
-22
drivers/scsi/mvsas/mv_init.c
··· 26 26 27 27 #include "mv_sas.h" 28 28 29 - static int lldd_max_execute_num = 1; 30 - module_param_named(collector, lldd_max_execute_num, int, S_IRUGO); 31 - MODULE_PARM_DESC(collector, "\n" 32 - "\tIf greater than one, tells the SAS Layer to run in Task Collector\n" 33 - "\tMode. If 1 or 0, tells the SAS Layer to run in Direct Mode.\n" 34 - "\tThe mvsas SAS LLDD supports both modes.\n" 35 - "\tDefault: 1 (Direct Mode).\n"); 36 - 37 29 int interrupt_coalescing = 0x80; 38 30 39 31 static struct scsi_transport_template *mvs_stt; 40 - struct kmem_cache *mvs_task_list_cache; 41 32 static const struct mvs_chip_info mvs_chips[] = { 42 33 [chip_6320] = { 1, 2, 0x400, 17, 16, 6, 9, &mvs_64xx_dispatch, }, 43 34 [chip_6440] = { 1, 4, 0x400, 17, 16, 6, 9, &mvs_64xx_dispatch, }, ··· 504 513 505 514 sha->num_phys = nr_core * chip_info->n_phy; 506 515 507 - sha->lldd_max_execute_num = lldd_max_execute_num; 508 - 509 516 if (mvi->flags & MVF_FLAG_SOC) 510 517 can_queue = MVS_SOC_CAN_QUEUE; 511 518 else 512 519 can_queue = MVS_CHIP_SLOT_SZ; 513 520 514 - sha->lldd_queue_size = can_queue; 515 521 shost->sg_tablesize = min_t(u16, SG_ALL, MVS_MAX_SG); 516 522 shost->can_queue = can_queue; 517 523 mvi->shost->cmd_per_lun = MVS_QUEUE_SIZE; ··· 821 833 if (!mvs_stt) 822 834 return -ENOMEM; 823 835 824 - mvs_task_list_cache = kmem_cache_create("mvs_task_list", sizeof(struct mvs_task_list), 825 - 0, SLAB_HWCACHE_ALIGN, NULL); 826 - if (!mvs_task_list_cache) { 827 - rc = -ENOMEM; 828 - mv_printk("%s: mvs_task_list_cache alloc failed! \n", __func__); 829 - goto err_out; 830 - } 831 - 832 836 rc = pci_register_driver(&mvs_pci_driver); 833 - 834 837 if (rc) 835 838 goto err_out; 836 839 ··· 836 857 { 837 858 pci_unregister_driver(&mvs_pci_driver); 838 859 sas_release_transport(mvs_stt); 839 - kmem_cache_destroy(mvs_task_list_cache); 840 860 } 841 861 842 862 struct device_attribute *mvst_host_attrs[] = {
+4 -105
drivers/scsi/mvsas/mv_sas.c
··· 852 852 return rc; 853 853 } 854 854 855 - static struct mvs_task_list *mvs_task_alloc_list(int *num, gfp_t gfp_flags) 856 - { 857 - struct mvs_task_list *first = NULL; 858 - 859 - for (; *num > 0; --*num) { 860 - struct mvs_task_list *mvs_list = kmem_cache_zalloc(mvs_task_list_cache, gfp_flags); 861 - 862 - if (!mvs_list) 863 - break; 864 - 865 - INIT_LIST_HEAD(&mvs_list->list); 866 - if (!first) 867 - first = mvs_list; 868 - else 869 - list_add_tail(&mvs_list->list, &first->list); 870 - 871 - } 872 - 873 - return first; 874 - } 875 - 876 - static inline void mvs_task_free_list(struct mvs_task_list *mvs_list) 877 - { 878 - LIST_HEAD(list); 879 - struct list_head *pos, *a; 880 - struct mvs_task_list *mlist = NULL; 881 - 882 - __list_add(&list, mvs_list->list.prev, &mvs_list->list); 883 - 884 - list_for_each_safe(pos, a, &list) { 885 - list_del_init(pos); 886 - mlist = list_entry(pos, struct mvs_task_list, list); 887 - kmem_cache_free(mvs_task_list_cache, mlist); 888 - } 889 - } 890 - 891 - static int mvs_task_exec(struct sas_task *task, const int num, gfp_t gfp_flags, 855 + static int mvs_task_exec(struct sas_task *task, gfp_t gfp_flags, 892 856 struct completion *completion, int is_tmf, 893 857 struct mvs_tmf_task *tmf) 894 858 { ··· 876 912 return rc; 877 913 } 878 914 879 - static int mvs_collector_task_exec(struct sas_task *task, const int num, gfp_t gfp_flags, 880 - struct completion *completion, int is_tmf, 881 - struct mvs_tmf_task *tmf) 915 + int mvs_queue_command(struct sas_task *task, gfp_t gfp_flags) 882 916 { 883 - struct domain_device *dev = task->dev; 884 - struct mvs_prv_info *mpi = dev->port->ha->lldd_ha; 885 - struct mvs_info *mvi = NULL; 886 - struct sas_task *t = task; 887 - struct mvs_task_list *mvs_list = NULL, *a; 888 - LIST_HEAD(q); 889 - int pass[2] = {0}; 890 - u32 rc = 0; 891 - u32 n = num; 892 - unsigned long flags = 0; 893 - 894 - mvs_list = mvs_task_alloc_list(&n, gfp_flags); 895 - if (n) { 896 - printk(KERN_ERR "%s: mvs alloc list 
failed.\n", __func__); 897 - rc = -ENOMEM; 898 - goto free_list; 899 - } 900 - 901 - __list_add(&q, mvs_list->list.prev, &mvs_list->list); 902 - 903 - list_for_each_entry(a, &q, list) { 904 - a->task = t; 905 - t = list_entry(t->list.next, struct sas_task, list); 906 - } 907 - 908 - list_for_each_entry(a, &q , list) { 909 - 910 - t = a->task; 911 - mvi = ((struct mvs_device *)t->dev->lldd_dev)->mvi_info; 912 - 913 - spin_lock_irqsave(&mvi->lock, flags); 914 - rc = mvs_task_prep(t, mvi, is_tmf, tmf, &pass[mvi->id]); 915 - if (rc) 916 - dev_printk(KERN_ERR, mvi->dev, "mvsas exec failed[%d]!\n", rc); 917 - spin_unlock_irqrestore(&mvi->lock, flags); 918 - } 919 - 920 - if (likely(pass[0])) 921 - MVS_CHIP_DISP->start_delivery(mpi->mvi[0], 922 - (mpi->mvi[0]->tx_prod - 1) & (MVS_CHIP_SLOT_SZ - 1)); 923 - 924 - if (likely(pass[1])) 925 - MVS_CHIP_DISP->start_delivery(mpi->mvi[1], 926 - (mpi->mvi[1]->tx_prod - 1) & (MVS_CHIP_SLOT_SZ - 1)); 927 - 928 - list_del_init(&q); 929 - 930 - free_list: 931 - if (mvs_list) 932 - mvs_task_free_list(mvs_list); 933 - 934 - return rc; 935 - } 936 - 937 - int mvs_queue_command(struct sas_task *task, const int num, 938 - gfp_t gfp_flags) 939 - { 940 - struct mvs_device *mvi_dev = task->dev->lldd_dev; 941 - struct sas_ha_struct *sas = mvi_dev->mvi_info->sas; 942 - 943 - if (sas->lldd_max_execute_num < 2) 944 - return mvs_task_exec(task, num, gfp_flags, NULL, 0, NULL); 945 - else 946 - return mvs_collector_task_exec(task, num, gfp_flags, NULL, 0, NULL); 917 + return mvs_task_exec(task, gfp_flags, NULL, 0, NULL); 947 918 } 948 919 949 920 static void mvs_slot_free(struct mvs_info *mvi, u32 rx_desc) ··· 1310 1411 task->slow_task->timer.expires = jiffies + MVS_TASK_TIMEOUT*HZ; 1311 1412 add_timer(&task->slow_task->timer); 1312 1413 1313 - res = mvs_task_exec(task, 1, GFP_KERNEL, NULL, 1, tmf); 1414 + res = mvs_task_exec(task, GFP_KERNEL, NULL, 1, tmf); 1314 1415 1315 1416 if (res) { 1316 1417 del_timer(&task->slow_task->timer);
+1 -9
drivers/scsi/mvsas/mv_sas.h
··· 65 65 extern struct mvs_info *tgt_mvi; 66 66 extern const struct mvs_dispatch mvs_64xx_dispatch; 67 67 extern const struct mvs_dispatch mvs_94xx_dispatch; 68 - extern struct kmem_cache *mvs_task_list_cache; 69 68 70 69 #define DEV_IS_EXPANDER(type) \ 71 70 ((type == SAS_EDGE_EXPANDER_DEVICE) || (type == SAS_FANOUT_EXPANDER_DEVICE)) ··· 439 440 int n_elem; 440 441 }; 441 442 442 - struct mvs_task_list { 443 - struct sas_task *task; 444 - struct list_head list; 445 - }; 446 - 447 - 448 443 /******************** function prototype *********************/ 449 444 void mvs_get_sas_addr(void *buf, u32 buflen); 450 445 void mvs_tag_clear(struct mvs_info *mvi, u32 tag); ··· 455 462 u32 off_hi, u64 sas_addr); 456 463 void mvs_scan_start(struct Scsi_Host *shost); 457 464 int mvs_scan_finished(struct Scsi_Host *shost, unsigned long time); 458 - int mvs_queue_command(struct sas_task *task, const int num, 459 - gfp_t gfp_flags); 465 + int mvs_queue_command(struct sas_task *task, gfp_t gfp_flags); 460 466 int mvs_abort_task(struct sas_task *task); 461 467 int mvs_abort_task_set(struct domain_device *dev, u8 *lun); 462 468 int mvs_clear_aca(struct domain_device *dev, u8 *lun);
-2
drivers/scsi/pm8001/pm8001_init.c
···
601 601 sha->lldd_module = THIS_MODULE;
602 602 sha->sas_addr = &pm8001_ha->sas_addr[0];
603 603 sha->num_phys = chip_info->n_phy;
604 - sha->lldd_max_execute_num = 1;
605 - sha->lldd_queue_size = PM8001_CAN_QUEUE;
606 604 sha->core.shost = shost;
607 605 }
608 606
+5 -17
drivers/scsi/pm8001/pm8001_sas.c
···
350 350 */
351 351 #define DEV_IS_GONE(pm8001_dev) \
352 352 ((!pm8001_dev || (pm8001_dev->dev_type == SAS_PHY_UNUSED)))
353 - static int pm8001_task_exec(struct sas_task *task, const int num,
353 + static int pm8001_task_exec(struct sas_task *task,
354 354 gfp_t gfp_flags, int is_tmf, struct pm8001_tmf_task *tmf)
355 355 {
356 356 struct domain_device *dev = task->dev;
···
360 360 struct sas_task *t = task;
361 361 struct pm8001_ccb_info *ccb;
362 362 u32 tag = 0xdeadbeef, rc, n_elem = 0;
363 - u32 n = num;
364 363 unsigned long flags = 0;
365 364
366 365 if (!dev->port) {
···
386 387 spin_unlock_irqrestore(&pm8001_ha->lock, flags);
387 388 t->task_done(t);
388 389 spin_lock_irqsave(&pm8001_ha->lock, flags);
389 - if (n > 1)
390 - t = list_entry(t->list.next,
391 - struct sas_task, list);
392 390 continue;
393 391 } else {
394 392 struct task_status_struct *ts = &t->task_status;
395 393 ts->resp = SAS_TASK_UNDELIVERED;
396 394 ts->stat = SAS_PHY_DOWN;
397 395 t->task_done(t);
398 - if (n > 1)
399 - t = list_entry(t->list.next,
400 - struct sas_task, list);
401 396 continue;
402 397 }
403 398 }
···
453 460 t->task_state_flags |= SAS_TASK_AT_INITIATOR;
454 461 spin_unlock(&t->task_state_lock);
455 462 pm8001_dev->running_req++;
456 - if (n > 1)
457 - t = list_entry(t->list.next, struct sas_task, list);
458 - } while (--n);
463 + } while (0);
459 464 rc = 0;
460 465 goto out_done;
461 466
···
474 483 * pm8001_queue_command - register for upper layer used, all IO commands sent
475 484 * to HBA are from this interface.
476 485 * @task: the task to be execute.
477 - * @num: if can_queue great than 1, the task can be queued up. for SMP task,
478 - * we always execute one one time
479 486 * @gfp_flags: gfp_flags
480 487 */
481 - int pm8001_queue_command(struct sas_task *task, const int num,
482 - gfp_t gfp_flags)
488 + int pm8001_queue_command(struct sas_task *task, gfp_t gfp_flags)
483 489 {
484 - return pm8001_task_exec(task, num, gfp_flags, 0, NULL);
490 + return pm8001_task_exec(task, gfp_flags, 0, NULL);
485 491 }
486 492
487 493 /**
···
696 708 task->slow_task->timer.expires = jiffies + PM8001_TASK_TIMEOUT*HZ;
697 709 add_timer(&task->slow_task->timer);
698 710
699 - res = pm8001_task_exec(task, 1, GFP_KERNEL, 1, tmf);
711 + res = pm8001_task_exec(task, GFP_KERNEL, 1, tmf);
700 712
701 713 if (res) {
702 714 del_timer(&task->slow_task->timer);
+1 -2
drivers/scsi/pm8001/pm8001_sas.h
···
623 623 void *funcdata);
624 624 void pm8001_scan_start(struct Scsi_Host *shost);
625 625 int pm8001_scan_finished(struct Scsi_Host *shost, unsigned long time);
626 - int pm8001_queue_command(struct sas_task *task, const int num,
627 - gfp_t gfp_flags);
626 + int pm8001_queue_command(struct sas_task *task, gfp_t gfp_flags);
628 627 int pm8001_abort_task(struct sas_task *task);
629 628 int pm8001_abort_task_set(struct domain_device *dev, u8 *lun);
630 629 int pm8001_clear_aca(struct domain_device *dev, u8 *lun);
+1 -13
include/scsi/libsas.h
···
365 365 struct scsi_core {
366 366 struct Scsi_Host *shost;
367 367
368 - struct mutex task_queue_flush;
369 - spinlock_t task_queue_lock;
370 - struct list_head task_queue;
371 - int task_queue_size;
372 -
373 - struct task_struct *queue_thread;
374 368 };
375 369
376 370 struct sas_ha_event {
···
416 422 struct asd_sas_port **sas_port; /* array of valid pointers, must be set */
417 423 int num_phys; /* must be set, gt 0, static */
418 424
419 - /* The class calls this to send a task for execution. */
420 - int lldd_max_execute_num;
421 - int lldd_queue_size;
422 425 int strict_wide_ports; /* both sas_addr and attached_sas_addr must match
423 426 * their siblings when forming wide ports */
424 427
···
603 612
604 613 struct sas_task {
605 614 struct domain_device *dev;
606 - struct list_head list;
607 615
608 616 spinlock_t task_state_lock;
609 617 unsigned task_state_flags;
···
655 665 int (*lldd_dev_found)(struct domain_device *);
656 666 void (*lldd_dev_gone)(struct domain_device *);
657 667
658 - int (*lldd_execute_task)(struct sas_task *, int num,
659 - gfp_t gfp_flags);
668 + int (*lldd_execute_task)(struct sas_task *, gfp_t gfp_flags);
660 669
661 670 /* Task Management Functions. Must be called from process context. */
662 671 int (*lldd_abort_task)(struct sas_task *);
···
689 700 int sas_set_phy_speed(struct sas_phy *phy,
690 701 struct sas_phy_linkrates *rates);
691 702 int sas_phy_reset(struct sas_phy *phy, int hard_reset);
692 - int sas_queue_up(struct sas_task *task);
693 703 extern int sas_queuecommand(struct Scsi_Host * ,struct scsi_cmnd *);
694 704 extern int sas_target_alloc(struct scsi_target *);
695 705 extern int sas_slave_configure(struct scsi_device *);