Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

scsi: drop reason argument from ->change_queue_depth

Drop the now unused reason argument from the ->change_queue_depth method.
Also add a return value to scsi_adjust_queue_depth and rename it to
scsi_change_queue_depth, now that it can serve as the default
->change_queue_depth implementation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Mike Christie <michaelc@cs.wisc.edu>
Reviewed-by: Hannes Reinecke <hare@suse.de>
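The shape of the conversion repeated in the hunks below can be sketched with simplified stand-ins — struct scsi_device, scsi_change_queue_depth and MY_MAX_QDEPTH here are illustrative models, not the kernel definitions:

```c
/* Simplified stand-in; the real struct scsi_device lives in
 * include/scsi/scsi_device.h. */
struct scsi_device {
	int queue_depth;
};

/* Models the renamed helper: it now returns the resulting queue
 * depth, which is what lets it double as the default
 * ->change_queue_depth implementation. */
static int scsi_change_queue_depth(struct scsi_device *sdev, int depth)
{
	sdev->queue_depth = depth;
	return sdev->queue_depth;
}

#define MY_MAX_QDEPTH 32	/* hypothetical per-driver limit */

/* New-style callback: the "reason" argument is gone, so a driver
 * with a depth limit just clamps the request and delegates. */
static int my_change_queue_depth(struct scsi_device *sdev, int qdepth)
{
	if (qdepth > MY_MAX_QDEPTH)
		qdepth = MY_MAX_QDEPTH;
	return scsi_change_queue_depth(sdev, qdepth);
}
```

Before this change the same callback also had to take an `int reason` and reject anything other than SCSI_QDEPTH_DEFAULT with -EOPNOTSUPP; that boilerplate is what the hunks below delete.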

+155 -412
+10 -10
Documentation/scsi/scsi_mid_low_api.txt
··· 149 149 scsi_scan_host() -------+ 150 150 | 151 151 slave_alloc() 152 - slave_configure() --> scsi_adjust_queue_depth() 152 + slave_configure() --> scsi_change_queue_depth() 153 153 | 154 154 slave_alloc() 155 155 slave_configure() ··· 159 159 ------------------------------------------------------------ 160 160 161 161 If the LLD wants to adjust the default queue settings, it can invoke 162 - scsi_adjust_queue_depth() in its slave_configure() routine. 162 + scsi_change_queue_depth() in its slave_configure() routine. 163 163 164 164 *** For scsi devices that the mid level tries to scan but do not 165 165 respond, a slave_alloc(), slave_destroy() pair is called. ··· 203 203 scsi_add_device() ------+ 204 204 | 205 205 slave_alloc() 206 - slave_configure() [--> scsi_adjust_queue_depth()] 206 + slave_configure() [--> scsi_change_queue_depth()] 207 207 ------------------------------------------------------------ 208 208 209 209 In a similar fashion, an LLD may become aware that a SCSI device has been ··· 261 261 | scsi_register() 262 262 | 263 263 slave_alloc() 264 - slave_configure() --> scsi_adjust_queue_depth() 264 + slave_configure() --> scsi_change_queue_depth() 265 265 slave_alloc() *** 266 266 slave_destroy() *** 267 267 | ··· 271 271 slave_destroy() *** 272 272 ------------------------------------------------------------ 273 273 274 - The mid level invokes scsi_adjust_queue_depth() with "cmd_per_lun" for that 274 + The mid level invokes scsi_change_queue_depth() with "cmd_per_lun" for that 275 275 host as the queue length. These settings can be overridden by a 276 276 slave_configure() supplied by the LLD. 
277 277 ··· 368 368 Summary: 369 369 scsi_add_device - creates new scsi device (lu) instance 370 370 scsi_add_host - perform sysfs registration and set up transport class 371 - scsi_adjust_queue_depth - change the queue depth on a SCSI device 371 + scsi_change_queue_depth - change the queue depth on a SCSI device 372 372 scsi_bios_ptable - return copy of block device's partition table 373 373 scsi_block_requests - prevent further commands being queued to given host 374 374 scsi_host_alloc - return a new scsi_host instance whose refcount==1 ··· 436 436 437 437 438 438 /** 439 - * scsi_adjust_queue_depth - allow LLD to change queue depth on a SCSI device 439 + * scsi_change_queue_depth - allow LLD to change queue depth on a SCSI device 440 440 * @sdev: pointer to SCSI device to change queue depth on 441 441 * @tags Number of tags allowed if tagged queuing enabled, 442 442 * or number of commands the LLD can queue up ··· 453 453 * Defined in: drivers/scsi/scsi.c [see source code for more notes] 454 454 * 455 455 **/ 456 - void scsi_adjust_queue_depth(struct scsi_device *sdev, int tags) 456 + int scsi_change_queue_depth(struct scsi_device *sdev, int tags) 457 457 458 458 459 459 /** ··· 1214 1214 for disk firmware uploads. 1215 1215 cmd_per_lun - maximum number of commands that can be queued on devices 1216 1216 controlled by the host. Overridden by LLD calls to 1217 - scsi_adjust_queue_depth(). 1217 + scsi_change_queue_depth(). 1218 1218 unchecked_isa_dma - 1=>only use bottom 16 MB of ram (ISA DMA addressing 1219 1219 restriction), 0=>can use full 32 bit (or better) DMA 1220 1220 address space ··· 1254 1254 Instances of this structure convey SCSI commands to the LLD and responses 1255 1255 back to the mid level. The SCSI mid level will ensure that no more SCSI 1256 1256 commands become queued against the LLD than are indicated by 1257 - scsi_adjust_queue_depth() (or struct Scsi_Host::cmd_per_lun). 
There will 1257 + scsi_change_queue_depth() (or struct Scsi_Host::cmd_per_lun). There will 1258 1258 be at least one instance of struct scsi_cmnd available for each SCSI device. 1259 1259 Members of interest: 1260 1260 cmnd - array containing SCSI command
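The slave_configure() flow described in the documentation above can be sketched as follows; every name here is a simplified stand-in for the kernel's version, and MY_DEFAULT_DEPTH is a hypothetical driver default:

```c
#include <stdbool.h>

/* Simplified stand-ins for the midlayer; the real definitions are in
 * include/scsi/scsi_device.h and drivers/scsi/scsi.c. */
struct scsi_device {
	bool tagged_supported;
	int queue_depth;
};

static int scsi_change_queue_depth(struct scsi_device *sdev, int depth)
{
	sdev->queue_depth = depth;	/* the real helper also validates */
	return sdev->queue_depth;
}

#define MY_DEFAULT_DEPTH 16	/* hypothetical per-driver default */

/* Mirrors the documented flow: override the cmd_per_lun default from
 * slave_configure() when the device supports tagged queuing. */
static int my_slave_configure(struct scsi_device *sdev)
{
	if (sdev->tagged_supported)
		scsi_change_queue_depth(sdev, MY_DEFAULT_DEPTH);
	return 0;
}
```

Devices without tagged queuing simply keep the depth the mid level set up from cmd_per_lun.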
+5 -12
drivers/ata/libata-scsi.c
··· 1164 1164 1165 1165 depth = min(sdev->host->can_queue, ata_id_queue_depth(dev->id)); 1166 1166 depth = min(ATA_MAX_QUEUE - 1, depth); 1167 - scsi_adjust_queue_depth(sdev, depth); 1167 + scsi_change_queue_depth(sdev, depth); 1168 1168 } 1169 1169 1170 1170 blk_queue_flush_queueable(q, false); ··· 1243 1243 * @ap: ATA port to which the device change the queue depth 1244 1244 * @sdev: SCSI device to configure queue depth for 1245 1245 * @queue_depth: new queue depth 1246 - * @reason: calling context 1247 1246 * 1248 1247 * libsas and libata have different approaches for associating a sdev to 1249 1248 * its ata_port. 1250 1249 * 1251 1250 */ 1252 1251 int __ata_change_queue_depth(struct ata_port *ap, struct scsi_device *sdev, 1253 - int queue_depth, int reason) 1252 + int queue_depth) 1254 1253 { 1255 1254 struct ata_device *dev; 1256 1255 unsigned long flags; 1257 - 1258 - if (reason != SCSI_QDEPTH_DEFAULT) 1259 - return -EOPNOTSUPP; 1260 1256 1261 1257 if (queue_depth < 1 || queue_depth == sdev->queue_depth) 1262 1258 return sdev->queue_depth; ··· 1278 1282 if (sdev->queue_depth == queue_depth) 1279 1283 return -EINVAL; 1280 1284 1281 - scsi_adjust_queue_depth(sdev, queue_depth); 1282 - return queue_depth; 1285 + return scsi_change_queue_depth(sdev, queue_depth); 1283 1286 } 1284 1287 1285 1288 /** 1286 1289 * ata_scsi_change_queue_depth - SCSI callback for queue depth config 1287 1290 * @sdev: SCSI device to configure queue depth for 1288 1291 * @queue_depth: new queue depth 1289 - * @reason: calling context 1290 1292 * 1291 1293 * This is libata standard hostt->change_queue_depth callback. 1292 1294 * SCSI will call into this callback when user tries to set queue ··· 1296 1302 * RETURNS: 1297 1303 * Newly configured queue depth. 
1298 1304 */ 1299 - int ata_scsi_change_queue_depth(struct scsi_device *sdev, int queue_depth, 1300 - int reason) 1305 + int ata_scsi_change_queue_depth(struct scsi_device *sdev, int queue_depth) 1301 1306 { 1302 1307 struct ata_port *ap = ata_shost_to_port(sdev->host); 1303 1308 1304 - return __ata_change_queue_depth(ap, sdev, queue_depth, reason); 1309 + return __ata_change_queue_depth(ap, sdev, queue_depth); 1305 1310 } 1306 1311 1307 1312 /**
+1 -1
drivers/ata/sata_nv.c
··· 1951 1951 ata_id_c_string(dev->id, model_num, ATA_ID_PROD, sizeof(model_num)); 1952 1952 1953 1953 if (strncmp(model_num, "Maxtor", 6) == 0) { 1954 - ata_scsi_change_queue_depth(sdev, 1, SCSI_QDEPTH_DEFAULT); 1954 + ata_scsi_change_queue_depth(sdev, 1); 1955 1955 ata_dev_notice(dev, "Disabling SWNCQ mode (depth %x)\n", 1956 1956 sdev->queue_depth); 1957 1957 }
+1 -1
drivers/infiniband/ulp/iser/iscsi_iser.c
··· 911 911 .module = THIS_MODULE, 912 912 .name = "iSCSI Initiator over iSER", 913 913 .queuecommand = iscsi_queuecommand, 914 - .change_queue_depth = iscsi_change_queue_depth, 914 + .change_queue_depth = scsi_change_queue_depth, 915 915 .sg_tablesize = ISCSI_ISER_SG_TABLESIZE, 916 916 .max_sectors = 1024, 917 917 .cmd_per_lun = ISER_DEF_CMD_PER_LUN,
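This hunk shows the other half of the cleanup: a driver whose callback did nothing beyond calling the helper can delete its wrapper and point the method straight at scsi_change_queue_depth. A sketch of why the types now line up, using stand-in definitions rather than the kernel's:

```c
/* Stand-ins for the pieces involved; the real definitions are in
 * include/scsi/scsi_device.h and include/scsi/scsi_host.h. */
struct scsi_device {
	int queue_depth;
};

static int scsi_change_queue_depth(struct scsi_device *sdev, int depth)
{
	sdev->queue_depth = depth;
	return sdev->queue_depth;
}

/* Only the relevant member of struct scsi_host_template. */
struct scsi_host_template {
	int (*change_queue_depth)(struct scsi_device *, int);
};

/* With the new int return type the helper's signature matches the
 * method, so a driver with no extra policy can plug it in directly. */
static struct scsi_host_template my_template = {
	.change_queue_depth = scsi_change_queue_depth,
};
```

With the old void-returning scsi_adjust_queue_depth (and the extra reason argument in the method) this assignment would not have type-checked, which is why each driver needed its own trivial wrapper.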
+2 -5
drivers/infiniband/ulp/srp/ib_srp.c
··· 2402 2402 * srp_change_queue_depth - setting device queue depth 2403 2403 * @sdev: scsi device struct 2404 2404 * @qdepth: requested queue depth 2405 - * @reason: SCSI_QDEPTH_DEFAULT 2406 - * (see include/scsi/scsi_host.h for definition) 2407 2405 * 2408 2406 * Returns queue depth. 2409 2407 */ 2410 2408 static int 2411 - srp_change_queue_depth(struct scsi_device *sdev, int qdepth, int reason) 2409 + srp_change_queue_depth(struct scsi_device *sdev, int qdepth) 2412 2410 { 2413 2411 if (!sdev->tagged_supported) 2414 2412 qdepth = 1; 2415 - scsi_adjust_queue_depth(sdev, qdepth); 2416 - return sdev->queue_depth; 2413 + return scsi_change_queue_depth(sdev, qdepth); 2417 2414 } 2418 2415 2419 2416 static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag,
+3 -9
drivers/message/fusion/mptscsih.c
··· 2311 2311 * mptscsih_change_queue_depth - This function will set a devices queue depth 2312 2312 * @sdev: per scsi_device pointer 2313 2313 * @qdepth: requested queue depth 2314 - * @reason: calling context 2315 2314 * 2316 2315 * Adding support for new 'change_queue_depth' api. 2317 2316 */ 2318 2317 int 2319 - mptscsih_change_queue_depth(struct scsi_device *sdev, int qdepth, int reason) 2318 + mptscsih_change_queue_depth(struct scsi_device *sdev, int qdepth) 2320 2319 { 2321 2320 MPT_SCSI_HOST *hd = shost_priv(sdev->host); 2322 2321 VirtTarget *vtarget; ··· 2325 2326 2326 2327 starget = scsi_target(sdev); 2327 2328 vtarget = starget->hostdata; 2328 - 2329 - if (reason != SCSI_QDEPTH_DEFAULT) 2330 - return -EOPNOTSUPP; 2331 2329 2332 2330 if (ioc->bus_type == SPI) { 2333 2331 if (!(vtarget->tflags & MPT_TARGET_FLAGS_Q_YES)) ··· 2343 2347 if (qdepth > max_depth) 2344 2348 qdepth = max_depth; 2345 2349 2346 - scsi_adjust_queue_depth(sdev, qdepth); 2347 - return sdev->queue_depth; 2350 + return scsi_change_queue_depth(sdev, qdepth); 2348 2351 } 2349 2352 2350 2353 /* ··· 2387 2392 ioc->name, vtarget->negoFlags, vtarget->maxOffset, 2388 2393 vtarget->minSyncFactor)); 2389 2394 2390 - mptscsih_change_queue_depth(sdev, MPT_SCSI_CMD_PER_DEV_HIGH, 2391 - SCSI_QDEPTH_DEFAULT); 2395 + mptscsih_change_queue_depth(sdev, MPT_SCSI_CMD_PER_DEV_HIGH); 2392 2396 dsprintk(ioc, printk(MYIOC_s_DEBUG_FMT 2393 2397 "tagged %d, simple %d\n", 2394 2398 ioc->name,sdev->tagged_supported, sdev->simple_tags));
+1 -2
drivers/message/fusion/mptscsih.h
··· 128 128 extern int mptscsih_scandv_complete(MPT_ADAPTER *ioc, MPT_FRAME_HDR *mf, MPT_FRAME_HDR *r); 129 129 extern int mptscsih_event_process(MPT_ADAPTER *ioc, EventNotificationReply_t *pEvReply); 130 130 extern int mptscsih_ioc_reset(MPT_ADAPTER *ioc, int post_reset); 131 - extern int mptscsih_change_queue_depth(struct scsi_device *sdev, int qdepth, 132 - int reason); 131 + extern int mptscsih_change_queue_depth(struct scsi_device *sdev, int qdepth); 133 132 extern u8 mptscsih_raid_id_to_num(MPT_ADAPTER *ioc, u8 channel, u8 id); 134 133 extern int mptscsih_is_phys_disk(MPT_ADAPTER *ioc, u8 channel, u8 id); 135 134 extern struct device_attribute *mptscsih_host_attrs[];
+2 -9
drivers/s390/scsi/zfcp_scsi.c
··· 32 32 module_param(allow_lun_scan, bool, 0600); 33 33 MODULE_PARM_DESC(allow_lun_scan, "For NPIV, scan and attach all storage LUNs"); 34 34 35 - static int zfcp_scsi_change_queue_depth(struct scsi_device *sdev, int depth, 36 - int reason) 37 - { 38 - scsi_adjust_queue_depth(sdev, depth); 39 - return sdev->queue_depth; 40 - } 41 - 42 35 static void zfcp_scsi_slave_destroy(struct scsi_device *sdev) 43 36 { 44 37 struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev); ··· 47 54 static int zfcp_scsi_slave_configure(struct scsi_device *sdp) 48 55 { 49 56 if (sdp->tagged_supported) 50 - scsi_adjust_queue_depth(sdp, default_depth); 57 + scsi_change_queue_depth(sdp, default_depth); 51 58 return 0; 52 59 } 53 60 ··· 286 293 .slave_alloc = zfcp_scsi_slave_alloc, 287 294 .slave_configure = zfcp_scsi_slave_configure, 288 295 .slave_destroy = zfcp_scsi_slave_destroy, 289 - .change_queue_depth = zfcp_scsi_change_queue_depth, 296 + .change_queue_depth = scsi_change_queue_depth, 290 297 .proc_name = "zfcp", 291 298 .can_queue = 4096, 292 299 .this_id = -1,
+1 -12
drivers/scsi/3w-9xxx.c
··· 189 189 return len; 190 190 } /* End twa_show_stats() */ 191 191 192 - /* This function will set a devices queue depth */ 193 - static int twa_change_queue_depth(struct scsi_device *sdev, int queue_depth, 194 - int reason) 195 - { 196 - if (reason != SCSI_QDEPTH_DEFAULT) 197 - return -EOPNOTSUPP; 198 - 199 - scsi_adjust_queue_depth(sdev, queue_depth); 200 - return queue_depth; 201 - } /* End twa_change_queue_depth() */ 202 - 203 192 /* Create sysfs 'stats' entry */ 204 193 static struct device_attribute twa_host_stats_attr = { 205 194 .attr = { ··· 2003 2014 .queuecommand = twa_scsi_queue, 2004 2015 .eh_host_reset_handler = twa_scsi_eh_reset, 2005 2016 .bios_param = twa_scsi_biosparam, 2006 - .change_queue_depth = twa_change_queue_depth, 2017 + .change_queue_depth = scsi_change_queue_depth, 2007 2018 .can_queue = TW_Q_LENGTH-2, 2008 2019 .slave_configure = twa_slave_configure, 2009 2020 .this_id = -1,
+1 -12
drivers/scsi/3w-sas.c
··· 191 191 return len; 192 192 } /* End twl_show_stats() */ 193 193 194 - /* This function will set a devices queue depth */ 195 - static int twl_change_queue_depth(struct scsi_device *sdev, int queue_depth, 196 - int reason) 197 - { 198 - if (reason != SCSI_QDEPTH_DEFAULT) 199 - return -EOPNOTSUPP; 200 - 201 - scsi_adjust_queue_depth(sdev, queue_depth); 202 - return queue_depth; 203 - } /* End twl_change_queue_depth() */ 204 - 205 194 /* stats sysfs attribute initializer */ 206 195 static struct device_attribute twl_host_stats_attr = { 207 196 .attr = { ··· 1577 1588 .queuecommand = twl_scsi_queue, 1578 1589 .eh_host_reset_handler = twl_scsi_eh_reset, 1579 1590 .bios_param = twl_scsi_biosparam, 1580 - .change_queue_depth = twl_change_queue_depth, 1591 + .change_queue_depth = scsi_change_queue_depth, 1581 1592 .can_queue = TW_Q_LENGTH-2, 1582 1593 .slave_configure = twl_slave_configure, 1583 1594 .this_id = -1,
+1 -12
drivers/scsi/3w-xxxx.c
··· 523 523 return len; 524 524 } /* End tw_show_stats() */ 525 525 526 - /* This function will set a devices queue depth */ 527 - static int tw_change_queue_depth(struct scsi_device *sdev, int queue_depth, 528 - int reason) 529 - { 530 - if (reason != SCSI_QDEPTH_DEFAULT) 531 - return -EOPNOTSUPP; 532 - 533 - scsi_adjust_queue_depth(sdev, queue_depth); 534 - return queue_depth; 535 - } /* End tw_change_queue_depth() */ 536 - 537 526 /* Create sysfs 'stats' entry */ 538 527 static struct device_attribute tw_host_stats_attr = { 539 528 .attr = { ··· 2257 2268 .queuecommand = tw_scsi_queue, 2258 2269 .eh_host_reset_handler = tw_scsi_eh_reset, 2259 2270 .bios_param = tw_scsi_biosparam, 2260 - .change_queue_depth = tw_change_queue_depth, 2271 + .change_queue_depth = scsi_change_queue_depth, 2261 2272 .can_queue = TW_Q_LENGTH-2, 2262 2273 .slave_configure = tw_slave_configure, 2263 2274 .this_id = -1,
+7 -12
drivers/scsi/53c700.c
··· 175 175 STATIC int NCR_700_slave_alloc(struct scsi_device *SDpnt); 176 176 STATIC int NCR_700_slave_configure(struct scsi_device *SDpnt); 177 177 STATIC void NCR_700_slave_destroy(struct scsi_device *SDpnt); 178 - static int NCR_700_change_queue_depth(struct scsi_device *SDpnt, int depth, int reason); 178 + static int NCR_700_change_queue_depth(struct scsi_device *SDpnt, int depth); 179 179 static int NCR_700_change_queue_type(struct scsi_device *SDpnt, int depth); 180 180 181 181 STATIC struct device_attribute *NCR_700_dev_attrs[]; ··· 904 904 hostdata->tag_negotiated &= ~(1<<scmd_id(SCp)); 905 905 906 906 SCp->device->tagged_supported = 0; 907 - scsi_adjust_queue_depth(SCp->device, host->cmd_per_lun); 907 + scsi_change_queue_depth(SCp->device, host->cmd_per_lun); 908 908 scsi_set_tag_type(SCp->device, 0); 909 909 } else { 910 910 shost_printk(KERN_WARNING, host, ··· 2052 2052 2053 2053 /* to do here: allocate memory; build a queue_full list */ 2054 2054 if(SDp->tagged_supported) { 2055 - scsi_adjust_queue_depth(SDp, NCR_700_DEFAULT_TAGS); 2055 + scsi_change_queue_depth(SDp, NCR_700_DEFAULT_TAGS); 2056 2056 NCR_700_set_tag_neg_state(SDp, NCR_700_START_TAG_NEGOTIATION); 2057 2057 } 2058 2058 ··· 2075 2075 } 2076 2076 2077 2077 static int 2078 - NCR_700_change_queue_depth(struct scsi_device *SDp, int depth, int reason) 2078 + NCR_700_change_queue_depth(struct scsi_device *SDp, int depth) 2079 2079 { 2080 - if (reason != SCSI_QDEPTH_DEFAULT) 2081 - return -EOPNOTSUPP; 2082 - 2083 2080 if (depth > NCR_700_MAX_TAGS) 2084 2081 depth = NCR_700_MAX_TAGS; 2085 - 2086 - scsi_adjust_queue_depth(SDp, depth); 2087 - return depth; 2082 + return scsi_change_queue_depth(SDp, depth); 2088 2083 } 2089 2084 2090 2085 static int NCR_700_change_queue_type(struct scsi_device *SDp, int tag_type) ··· 2100 2105 if (!tag_type) { 2101 2106 /* shift back to the default unqueued number of commands 2102 2107 * (the user can still raise this) */ 2103 - scsi_adjust_queue_depth(SDp, 
SDp->host->cmd_per_lun); 2108 + scsi_change_queue_depth(SDp, SDp->host->cmd_per_lun); 2104 2109 hostdata->tag_negotiated &= ~(1 << sdev_id(SDp)); 2105 2110 } else { 2106 2111 /* Here, we cleared the negotiation flag above, so this 2107 2112 * will force the driver to renegotiate */ 2108 - scsi_adjust_queue_depth(SDp, SDp->queue_depth); 2113 + scsi_change_queue_depth(SDp, SDp->queue_depth); 2109 2114 if (change_tag) 2110 2115 NCR_700_set_tag_neg_state(SDp, NCR_700_START_TAG_NEGOTIATION); 2111 2116 }
+2 -2
drivers/scsi/BusLogic.c
··· 2327 2327 if (qdepth == 0) 2328 2328 qdepth = BLOGIC_MAX_AUTO_TAG_DEPTH; 2329 2329 adapter->qdepth[tgt_id] = qdepth; 2330 - scsi_adjust_queue_depth(dev, qdepth); 2330 + scsi_change_queue_depth(dev, qdepth); 2331 2331 } else { 2332 2332 adapter->tagq_ok &= ~(1 << tgt_id); 2333 2333 qdepth = adapter->untag_qdepth; 2334 2334 adapter->qdepth[tgt_id] = qdepth; 2335 - scsi_adjust_queue_depth(dev, qdepth); 2335 + scsi_change_queue_depth(dev, qdepth); 2336 2336 } 2337 2337 qdepth = 0; 2338 2338 for (tgt_id = 0; tgt_id < adapter->maxdev; tgt_id++)
+7 -11
drivers/scsi/aacraid/linit.c
··· 462 462 depth = 256; 463 463 else if (depth < 2) 464 464 depth = 2; 465 - scsi_adjust_queue_depth(sdev, depth); 465 + scsi_change_queue_depth(sdev, depth); 466 466 } else 467 - scsi_adjust_queue_depth(sdev, 1); 467 + scsi_change_queue_depth(sdev, 1); 468 468 469 469 return 0; 470 470 } ··· 478 478 * total capacity and the queue depth supported by the target device. 479 479 */ 480 480 481 - static int aac_change_queue_depth(struct scsi_device *sdev, int depth, 482 - int reason) 481 + static int aac_change_queue_depth(struct scsi_device *sdev, int depth) 483 482 { 484 - if (reason != SCSI_QDEPTH_DEFAULT) 485 - return -EOPNOTSUPP; 486 - 487 483 if (sdev->tagged_supported && (sdev->type == TYPE_DISK) && 488 484 (sdev_channel(sdev) == CONTAINER_CHANNEL)) { 489 485 struct scsi_device * dev; ··· 500 504 depth = 256; 501 505 else if (depth < 2) 502 506 depth = 2; 503 - scsi_adjust_queue_depth(sdev, depth); 504 - } else 505 - scsi_adjust_queue_depth(sdev, 1); 506 - return sdev->queue_depth; 507 + return scsi_change_queue_depth(sdev, depth); 508 + } 509 + 510 + return scsi_change_queue_depth(sdev, 1); 507 511 } 508 512 509 513 static ssize_t aac_show_raid_level(struct device *dev, struct device_attribute *attr, char *buf)
+3 -5
drivers/scsi/advansys.c
··· 7706 7706 asc_dvc->cfg->can_tagged_qng |= tid_bit; 7707 7707 asc_dvc->use_tagged_qng |= tid_bit; 7708 7708 } 7709 - scsi_adjust_queue_depth(sdev, 7709 + scsi_change_queue_depth(sdev, 7710 7710 asc_dvc->max_dvc_qng[sdev->id]); 7711 7711 } 7712 7712 } else { ··· 7847 7847 } 7848 7848 } 7849 7849 7850 - if ((adv_dvc->tagqng_able & tidmask) && sdev->tagged_supported) { 7851 - scsi_adjust_queue_depth(sdev, 7852 - adv_dvc->max_dvc_qng); 7853 - } 7850 + if ((adv_dvc->tagqng_able & tidmask) && sdev->tagged_supported) 7851 + scsi_change_queue_depth(sdev, adv_dvc->max_dvc_qng); 7854 7852 } 7855 7853 7856 7854 /*
+2 -2
drivers/scsi/aic7xxx/aic79xx_osm.c
··· 1470 1470 switch ((dev->flags & (AHD_DEV_Q_BASIC|AHD_DEV_Q_TAGGED))) { 1471 1471 case AHD_DEV_Q_BASIC: 1472 1472 case AHD_DEV_Q_TAGGED: 1473 - scsi_adjust_queue_depth(sdev, 1473 + scsi_change_queue_depth(sdev, 1474 1474 dev->openings + dev->active); 1475 1475 break; 1476 1476 default: ··· 1480 1480 * serially on the controller/device. This should 1481 1481 * remove some latency. 1482 1482 */ 1483 - scsi_adjust_queue_depth(sdev, 1); 1483 + scsi_change_queue_depth(sdev, 1); 1484 1484 break; 1485 1485 } 1486 1486 }
+2 -2
drivers/scsi/aic7xxx/aic7xxx_osm.c
··· 1336 1336 switch ((dev->flags & (AHC_DEV_Q_BASIC|AHC_DEV_Q_TAGGED))) { 1337 1337 case AHC_DEV_Q_BASIC: 1338 1338 case AHC_DEV_Q_TAGGED: 1339 - scsi_adjust_queue_depth(sdev, 1339 + scsi_change_queue_depth(sdev, 1340 1340 dev->openings + dev->active); 1341 1341 default: 1342 1342 /* ··· 1345 1345 * serially on the controller/device. This should 1346 1346 * remove some latency. 1347 1347 */ 1348 - scsi_adjust_queue_depth(sdev, 2); 1348 + scsi_change_queue_depth(sdev, 2); 1349 1349 break; 1350 1350 } 1351 1351 }
+2 -7
drivers/scsi/arcmsr/arcmsr_hba.c
··· 114 114 static const char *arcmsr_info(struct Scsi_Host *); 115 115 static irqreturn_t arcmsr_interrupt(struct AdapterControlBlock *acb); 116 116 static void arcmsr_free_irq(struct pci_dev *, struct AdapterControlBlock *); 117 - static int arcmsr_adjust_disk_queue_depth(struct scsi_device *sdev, 118 - int queue_depth, int reason) 117 + static int arcmsr_adjust_disk_queue_depth(struct scsi_device *sdev, int queue_depth) 119 118 { 120 - if (reason != SCSI_QDEPTH_DEFAULT) 121 - return -EOPNOTSUPP; 122 - 123 119 if (queue_depth > ARCMSR_MAX_CMD_PERLUN) 124 120 queue_depth = ARCMSR_MAX_CMD_PERLUN; 125 - scsi_adjust_queue_depth(sdev, queue_depth); 126 - return queue_depth; 121 + return scsi_change_queue_depth(sdev, queue_depth); 127 122 } 128 123 129 124 static struct scsi_host_template arcmsr_scsi_host_template = {
+1 -1
drivers/scsi/be2iscsi/be_main.c
··· 556 556 .name = "Emulex 10Gbe open-iscsi Initiator Driver", 557 557 .proc_name = DRV_NAME, 558 558 .queuecommand = iscsi_queuecommand, 559 - .change_queue_depth = iscsi_change_queue_depth, 559 + .change_queue_depth = scsi_change_queue_depth, 560 560 .slave_configure = beiscsi_slave_configure, 561 561 .target_alloc = iscsi_target_alloc, 562 562 .eh_abort_handler = beiscsi_eh_abort,
+2 -2
drivers/scsi/bfa/bfad_im.c
··· 776 776 static int 777 777 bfad_im_slave_configure(struct scsi_device *sdev) 778 778 { 779 - scsi_adjust_queue_depth(sdev, bfa_lun_queue_depth); 779 + scsi_change_queue_depth(sdev, bfa_lun_queue_depth); 780 780 return 0; 781 781 } 782 782 ··· 866 866 if (bfa_lun_queue_depth > tmp_sdev->queue_depth) { 867 867 if (tmp_sdev->id != sdev->id) 868 868 continue; 869 - scsi_adjust_queue_depth(tmp_sdev, 869 + scsi_change_queue_depth(tmp_sdev, 870 870 tmp_sdev->queue_depth + 1); 871 871 872 872 itnim->last_ramp_up_time = jiffies;
+1 -1
drivers/scsi/bnx2fc/bnx2fc_fcoe.c
··· 2784 2784 .eh_target_reset_handler = bnx2fc_eh_target_reset, /* tgt reset */ 2785 2785 .eh_host_reset_handler = fc_eh_host_reset, 2786 2786 .slave_alloc = fc_slave_alloc, 2787 - .change_queue_depth = fc_change_queue_depth, 2787 + .change_queue_depth = scsi_change_queue_depth, 2788 2788 .change_queue_type = scsi_change_queue_type, 2789 2789 .this_id = -1, 2790 2790 .cmd_per_lun = 3,
+1 -1
drivers/scsi/bnx2i/bnx2i_iscsi.c
··· 2259 2259 .eh_abort_handler = iscsi_eh_abort, 2260 2260 .eh_device_reset_handler = iscsi_eh_device_reset, 2261 2261 .eh_target_reset_handler = iscsi_eh_recover_target, 2262 - .change_queue_depth = iscsi_change_queue_depth, 2262 + .change_queue_depth = scsi_change_queue_depth, 2263 2263 .target_alloc = iscsi_target_alloc, 2264 2264 .can_queue = 2048, 2265 2265 .max_sectors = 127,
+1 -1
drivers/scsi/csiostor/csio_scsi.c
··· 2241 2241 static int 2242 2242 csio_slave_configure(struct scsi_device *sdev) 2243 2243 { 2244 - scsi_adjust_queue_depth(sdev, csio_lun_qdepth); 2244 + scsi_change_queue_depth(sdev, csio_lun_qdepth); 2245 2245 return 0; 2246 2246 } 2247 2247
+1 -1
drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
··· 86 86 .proc_name = DRV_MODULE_NAME, 87 87 .can_queue = CXGB3I_SCSI_HOST_QDEPTH, 88 88 .queuecommand = iscsi_queuecommand, 89 - .change_queue_depth = iscsi_change_queue_depth, 89 + .change_queue_depth = scsi_change_queue_depth, 90 90 .sg_tablesize = SG_ALL, 91 91 .max_sectors = 0xFFFF, 92 92 .cmd_per_lun = ISCSI_DEF_CMD_PER_LUN,
+1 -1
drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
··· 89 89 .proc_name = DRV_MODULE_NAME, 90 90 .can_queue = CXGB4I_SCSI_HOST_QDEPTH, 91 91 .queuecommand = iscsi_queuecommand, 92 - .change_queue_depth = iscsi_change_queue_depth, 92 + .change_queue_depth = scsi_change_queue_depth, 93 93 .sg_tablesize = SG_ALL, 94 94 .max_sectors = 0xFFFF, 95 95 .cmd_per_lun = ISCSI_DEF_CMD_PER_LUN,
+1 -1
drivers/scsi/dpt_i2o.c
··· 415 415 pHba = (adpt_hba *) host->hostdata[0]; 416 416 417 417 if (host->can_queue && device->tagged_supported) { 418 - scsi_adjust_queue_depth(device, 418 + scsi_change_queue_depth(device, 419 419 host->can_queue - 1); 420 420 } 421 421 return 0;
+3 -3
drivers/scsi/eata.c
··· 952 952 } else { 953 953 tag_suffix = ", no tags"; 954 954 } 955 - scsi_adjust_queue_depth(dev, tqd); 955 + scsi_change_queue_depth(dev, tqd); 956 956 } else if (TLDEV(dev->type) && linked_comm) { 957 - scsi_adjust_queue_depth(dev, tqd); 957 + scsi_change_queue_depth(dev, tqd); 958 958 tag_suffix = ", untagged"; 959 959 } else { 960 - scsi_adjust_queue_depth(dev, utqd); 960 + scsi_change_queue_depth(dev, utqd); 961 961 tag_suffix = ""; 962 962 } 963 963
-1
drivers/scsi/esas2r/esas2r.h
··· 972 972 struct atto_ioctl *ioctl_hba); 973 973 int esas2r_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd); 974 974 int esas2r_show_info(struct seq_file *m, struct Scsi_Host *sh); 975 - int esas2r_change_queue_depth(struct scsi_device *dev, int depth, int reason); 976 975 long esas2r_proc_ioctl(struct file *fp, unsigned int cmd, unsigned long arg); 977 976 978 977 /* SCSI error handler (eh) functions */
+1 -10
drivers/scsi/esas2r/esas2r_main.c
··· 254 254 .use_clustering = ENABLE_CLUSTERING, 255 255 .emulated = 0, 256 256 .proc_name = ESAS2R_DRVR_NAME, 257 - .change_queue_depth = esas2r_change_queue_depth, 257 + .change_queue_depth = scsi_change_queue_depth, 258 258 .change_queue_type = scsi_change_queue_type, 259 259 .max_sectors = 0xFFFF, 260 260 .use_blk_tags = 1, ··· 1255 1255 esas2r_log(ESAS2R_LOG_INFO, "target_reset (%p)", cmd); 1256 1256 1257 1257 return esas2r_dev_targ_reset(cmd, true); 1258 - } 1259 - 1260 - int esas2r_change_queue_depth(struct scsi_device *dev, int depth, int reason) 1261 - { 1262 - esas2r_log(ESAS2R_LOG_INFO, "change_queue_depth %p, %d", dev, depth); 1263 - 1264 - scsi_adjust_queue_depth(dev, depth); 1265 - 1266 - return dev->queue_depth; 1267 1258 } 1268 1259 1269 1260 void esas2r_log_request_failure(struct esas2r_adapter *a,
+1 -1
drivers/scsi/esp_scsi.c
··· 2407 2407 /* XXX make this configurable somehow XXX */ 2408 2408 int goal_tags = min(ESP_DEFAULT_TAGS, ESP_MAX_TAG); 2409 2409 2410 - scsi_adjust_queue_depth(dev, goal_tags); 2410 + scsi_change_queue_depth(dev, goal_tags); 2411 2411 } 2412 2412 2413 2413 tp->flags |= ESP_TGT_DISCONNECT;
+1 -1
drivers/scsi/fcoe/fcoe.c
··· 280 280 .eh_device_reset_handler = fc_eh_device_reset, 281 281 .eh_host_reset_handler = fc_eh_host_reset, 282 282 .slave_alloc = fc_slave_alloc, 283 - .change_queue_depth = fc_change_queue_depth, 283 + .change_queue_depth = scsi_change_queue_depth, 284 284 .change_queue_type = scsi_change_queue_type, 285 285 .this_id = -1, 286 286 .cmd_per_lun = 3,
+2 -2
drivers/scsi/fnic/fnic_main.c
··· 98 98 if (!rport || fc_remote_port_chkready(rport)) 99 99 return -ENXIO; 100 100 101 - scsi_adjust_queue_depth(sdev, fnic_max_qdepth); 101 + scsi_change_queue_depth(sdev, fnic_max_qdepth); 102 102 return 0; 103 103 } 104 104 ··· 110 110 .eh_device_reset_handler = fnic_device_reset, 111 111 .eh_host_reset_handler = fnic_host_reset, 112 112 .slave_alloc = fnic_slave_alloc, 113 - .change_queue_depth = fc_change_queue_depth, 113 + .change_queue_depth = scsi_change_queue_depth, 114 114 .change_queue_type = scsi_change_queue_type, 115 115 .this_id = -1, 116 116 .cmd_per_lun = 3,
+1 -15
drivers/scsi/hpsa.c
··· 216 216 static void hpsa_scan_start(struct Scsi_Host *); 217 217 static int hpsa_scan_finished(struct Scsi_Host *sh, 218 218 unsigned long elapsed_time); 219 - static int hpsa_change_queue_depth(struct scsi_device *sdev, 220 - int qdepth, int reason); 221 219 222 220 static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd); 223 221 static int hpsa_eh_abort_handler(struct scsi_cmnd *scsicmd); ··· 671 673 .queuecommand = hpsa_scsi_queue_command, 672 674 .scan_start = hpsa_scan_start, 673 675 .scan_finished = hpsa_scan_finished, 674 - .change_queue_depth = hpsa_change_queue_depth, 676 + .change_queue_depth = scsi_change_queue_depth, 675 677 .this_id = -1, 676 678 .use_clustering = ENABLE_CLUSTERING, 677 679 .eh_abort_handler = hpsa_eh_abort_handler, ··· 4070 4072 finished = h->scan_finished; 4071 4073 spin_unlock_irqrestore(&h->scan_lock, flags); 4072 4074 return finished; 4073 - } 4074 - 4075 - static int hpsa_change_queue_depth(struct scsi_device *sdev, 4076 - int qdepth, int reason) 4077 - { 4078 - struct ctlr_info *h = sdev_to_hba(sdev); 4079 - 4080 - if (reason != SCSI_QDEPTH_DEFAULT) 4081 - return -ENOTSUPP; 4082 - 4083 - scsi_adjust_queue_depth(sdev, qdepth); 4084 - return sdev->queue_depth; 4085 4075 } 4086 4076 4087 4077 static void hpsa_unregister_scsi(struct ctlr_info *h)
+2 -6
drivers/scsi/hptiop.c
··· 1118 1118 } 1119 1119 1120 1120 static int hptiop_adjust_disk_queue_depth(struct scsi_device *sdev, 1121 - int queue_depth, int reason) 1121 + int queue_depth) 1122 1122 { 1123 1123 struct hptiop_hba *hba = (struct hptiop_hba *)sdev->host->hostdata; 1124 1124 1125 - if (reason != SCSI_QDEPTH_DEFAULT) 1126 - return -EOPNOTSUPP; 1127 - 1128 1125 if (queue_depth > hba->max_requests) 1129 1126 queue_depth = hba->max_requests; 1130 - scsi_adjust_queue_depth(sdev, queue_depth); 1131 - return queue_depth; 1127 + return scsi_change_queue_depth(sdev, queue_depth); 1132 1128 } 1133 1129 1134 1130 static ssize_t hptiop_show_version(struct device *dev,
+2 -7
drivers/scsi/ibmvscsi/ibmvfc.c
··· 2900 2900 * Return value: 2901 2901 * actual depth set 2902 2902 **/ 2903 - static int ibmvfc_change_queue_depth(struct scsi_device *sdev, int qdepth, 2904 - int reason) 2903 + static int ibmvfc_change_queue_depth(struct scsi_device *sdev, int qdepth) 2905 2904 { 2906 - if (reason != SCSI_QDEPTH_DEFAULT) 2907 - return -EOPNOTSUPP; 2908 - 2909 2905 if (qdepth > IBMVFC_MAX_CMDS_PER_LUN) 2910 2906 qdepth = IBMVFC_MAX_CMDS_PER_LUN; 2911 2907 2912 - scsi_adjust_queue_depth(sdev, qdepth); 2913 - return sdev->queue_depth; 2908 + return scsi_change_queue_depth(sdev, qdepth); 2914 2909 } 2915 2910 2916 2911 static ssize_t ibmvfc_show_host_partition_name(struct device *dev,
+2 -8
drivers/scsi/ibmvscsi/ibmvscsi.c
··· 1941 1941 * Return value: 1942 1942 * actual depth set 1943 1943 **/ 1944 - static int ibmvscsi_change_queue_depth(struct scsi_device *sdev, int qdepth, 1945 - int reason) 1944 + static int ibmvscsi_change_queue_depth(struct scsi_device *sdev, int qdepth) 1946 1945 { 1947 - if (reason != SCSI_QDEPTH_DEFAULT) 1948 - return -EOPNOTSUPP; 1949 - 1950 1946 if (qdepth > IBMVSCSI_MAX_CMDS_PER_LUN) 1951 1947 qdepth = IBMVSCSI_MAX_CMDS_PER_LUN; 1952 - 1953 - scsi_adjust_queue_depth(sdev, qdepth); 1954 - return sdev->queue_depth; 1948 + return scsi_change_queue_depth(sdev, qdepth); 1955 1949 } 1956 1950 1957 1951 /* ------------------------------------------------------------
+3 -7
drivers/scsi/ipr.c
··· 4328 4328 * Return value: 4329 4329 * actual depth set 4330 4330 **/ 4331 - static int ipr_change_queue_depth(struct scsi_device *sdev, int qdepth, 4332 - int reason) 4331 + static int ipr_change_queue_depth(struct scsi_device *sdev, int qdepth) 4333 4332 { 4334 4333 struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)sdev->host->hostdata; 4335 4334 struct ipr_resource_entry *res; 4336 4335 unsigned long lock_flags = 0; 4337 - 4338 - if (reason != SCSI_QDEPTH_DEFAULT) 4339 - return -EOPNOTSUPP; 4340 4336 4341 4337 spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); 4342 4338 res = (struct ipr_resource_entry *)sdev->hostdata; ··· 4341 4345 qdepth = IPR_MAX_CMD_PER_ATA_LUN; 4342 4346 spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 4343 4347 4344 - scsi_adjust_queue_depth(sdev, qdepth); 4348 + scsi_change_queue_depth(sdev, qdepth); 4345 4349 return sdev->queue_depth; 4346 4350 } 4347 4351 ··· 4748 4752 spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 4749 4753 4750 4754 if (ap) { 4751 - scsi_adjust_queue_depth(sdev, IPR_MAX_CMD_PER_ATA_LUN); 4755 + scsi_change_queue_depth(sdev, IPR_MAX_CMD_PER_ATA_LUN); 4752 4756 ata_sas_slave_configure(sdev, ap); 4753 4757 } 4754 4758
+1 -1
drivers/scsi/ips.c
··· 1210 1210 min = ha->max_cmds / 2; 1211 1211 if (ha->enq->ucLogDriveCount <= 2) 1212 1212 min = ha->max_cmds - 1; 1213 - scsi_adjust_queue_depth(SDptr, min); 1213 + scsi_change_queue_depth(SDptr, min); 1214 1214 } 1215 1215 1216 1216 SDptr->skip_ms_page_8 = 1;
+1 -1
drivers/scsi/iscsi_tcp.c
··· 952 952 .module = THIS_MODULE, 953 953 .name = "iSCSI Initiator over TCP/IP", 954 954 .queuecommand = iscsi_queuecommand, 955 - .change_queue_depth = iscsi_change_queue_depth, 955 + .change_queue_depth = scsi_change_queue_depth, 956 956 .can_queue = ISCSI_DEF_XMIT_CMDS_MAX - 1, 957 957 .sg_tablesize = 4096, 958 958 .max_sectors = 0xFFFF,
+1 -14
drivers/scsi/libfc/fc_fcp.c
··· 2160 2160 if (!rport || fc_remote_port_chkready(rport)) 2161 2161 return -ENXIO; 2162 2162 2163 - scsi_adjust_queue_depth(sdev, FC_FCP_DFLT_QUEUE_DEPTH); 2163 + scsi_change_queue_depth(sdev, FC_FCP_DFLT_QUEUE_DEPTH); 2164 2164 return 0; 2165 2165 } 2166 2166 EXPORT_SYMBOL(fc_slave_alloc); 2167 - 2168 - /** 2169 - * fc_change_queue_depth() - Change a device's queue depth 2170 - * @sdev: The SCSI device whose queue depth is to change 2171 - * @qdepth: The new queue depth 2172 - * @reason: The resason for the change 2173 - */ 2174 - int fc_change_queue_depth(struct scsi_device *sdev, int qdepth, int reason) 2175 - { 2176 - scsi_adjust_queue_depth(sdev, qdepth); 2177 - return sdev->queue_depth; 2178 - } 2179 - EXPORT_SYMBOL(fc_change_queue_depth); 2180 2167 2181 2168 /** 2182 2169 * fc_fcp_destory() - Tear down the FCP layer for a given local port
-7
drivers/scsi/libiscsi.c
··· 1771 1771 } 1772 1772 EXPORT_SYMBOL_GPL(iscsi_queuecommand); 1773 1773 1774 - int iscsi_change_queue_depth(struct scsi_device *sdev, int depth, int reason) 1775 - { 1776 - scsi_adjust_queue_depth(sdev, depth); 1777 - return sdev->queue_depth; 1778 - } 1779 - EXPORT_SYMBOL_GPL(iscsi_change_queue_depth); 1780 - 1781 1774 int iscsi_target_alloc(struct scsi_target *starget) 1782 1775 { 1783 1776 struct iscsi_cls_session *cls_session = starget_to_session(starget);
+5 -7
drivers/scsi/libsas/sas_scsi_host.c
··· 940 940 sas_read_port_mode_page(scsi_dev); 941 941 942 942 if (scsi_dev->tagged_supported) { 943 - scsi_adjust_queue_depth(scsi_dev, SAS_DEF_QD); 943 + scsi_change_queue_depth(scsi_dev, SAS_DEF_QD); 944 944 } else { 945 945 SAS_DPRINTK("device %llx, LUN %llx doesn't support " 946 946 "TCQ\n", SAS_ADDR(dev->sas_addr), 947 947 scsi_dev->lun); 948 - scsi_adjust_queue_depth(scsi_dev, 1); 948 + scsi_change_queue_depth(scsi_dev, 1); 949 949 } 950 950 951 951 scsi_dev->allow_restart = 1; ··· 953 953 return 0; 954 954 } 955 955 956 - int sas_change_queue_depth(struct scsi_device *sdev, int depth, int reason) 956 + int sas_change_queue_depth(struct scsi_device *sdev, int depth) 957 957 { 958 958 struct domain_device *dev = sdev_to_domain_dev(sdev); 959 959 960 960 if (dev_is_sata(dev)) 961 - return __ata_change_queue_depth(dev->sata_dev.ap, sdev, depth, 962 - reason); 961 + return __ata_change_queue_depth(dev->sata_dev.ap, sdev, depth); 963 962 964 963 if (!sdev->tagged_supported) 965 964 depth = 1; 966 - scsi_adjust_queue_depth(sdev, depth); 967 - return depth; 965 + return scsi_change_queue_depth(sdev, depth); 968 966 } 969 967 970 968 int sas_change_queue_type(struct scsi_device *scsi_dev, int type)
+4 -22
drivers/scsi/lpfc/lpfc_scsi.c
··· 243 243 } 244 244 245 245 /** 246 - * lpfc_change_queue_depth - Alter scsi device queue depth 247 - * @sdev: Pointer the scsi device on which to change the queue depth. 248 - * @qdepth: New queue depth to set the sdev to. 249 - * @reason: The reason for the queue depth change. 250 - * 251 - * This function is called by the midlayer and the LLD to alter the queue 252 - * depth for a scsi device. This function sets the queue depth to the new 253 - * value and sends an event out to log the queue depth change. 254 - **/ 255 - static int 256 - lpfc_change_queue_depth(struct scsi_device *sdev, int qdepth, int reason) 257 - { 258 - scsi_adjust_queue_depth(sdev, qdepth); 259 - return sdev->queue_depth; 260 - } 261 - 262 - /** 263 246 * lpfc_rampdown_queue_depth - Post RAMP_DOWN_QUEUE event to worker thread 264 247 * @phba: The Hba for which this call is being executed. 265 248 * ··· 327 344 else 328 345 new_queue_depth = sdev->queue_depth - 329 346 new_queue_depth; 330 - lpfc_change_queue_depth(sdev, new_queue_depth, 331 - SCSI_QDEPTH_DEFAULT); 347 + scsi_change_queue_depth(sdev, new_queue_depth); 332 348 } 333 349 } 334 350 lpfc_destroy_vport_work_array(phba, vports); ··· 5495 5513 struct lpfc_vport *vport = (struct lpfc_vport *) sdev->host->hostdata; 5496 5514 struct lpfc_hba *phba = vport->phba; 5497 5515 5498 - scsi_adjust_queue_depth(sdev, vport->cfg_lun_queue_depth); 5516 + scsi_change_queue_depth(sdev, vport->cfg_lun_queue_depth); 5499 5517 5500 5518 if (phba->cfg_poll & ENABLE_FCP_RING_POLLING) { 5501 5519 lpfc_sli_handle_fast_ring_event(phba, ··· 5878 5896 .shost_attrs = lpfc_hba_attrs, 5879 5897 .max_sectors = 0xFFFF, 5880 5898 .vendor_id = LPFC_NL_VENDOR_ID, 5881 - .change_queue_depth = lpfc_change_queue_depth, 5899 + .change_queue_depth = scsi_change_queue_depth, 5882 5900 .change_queue_type = scsi_change_queue_type, 5883 5901 .use_blk_tags = 1, 5884 5902 .track_queue_depth = 1, ··· 5903 5921 .use_clustering = ENABLE_CLUSTERING, 5904 5922 .shost_attrs = 
lpfc_vport_attrs, 5905 5923 .max_sectors = 0xFFFF, 5906 - .change_queue_depth = lpfc_change_queue_depth, 5924 + .change_queue_depth = scsi_change_queue_depth, 5907 5925 .change_queue_type = scsi_change_queue_type, 5908 5926 .use_blk_tags = 1, 5909 5927 .track_queue_depth = 1,
+1 -20
drivers/scsi/megaraid/megaraid_mbox.c
··· 332 332 NULL, 333 333 }; 334 334 335 - /** 336 - * megaraid_change_queue_depth - Change the device's queue depth 337 - * @sdev: scsi device struct 338 - * @qdepth: depth to set 339 - * @reason: calling context 340 - * 341 - * Return value: 342 - * actual depth set 343 - */ 344 - static int megaraid_change_queue_depth(struct scsi_device *sdev, int qdepth, 345 - int reason) 346 - { 347 - if (reason != SCSI_QDEPTH_DEFAULT) 348 - return -EOPNOTSUPP; 349 - 350 - scsi_adjust_queue_depth(sdev, qdepth); 351 - return sdev->queue_depth; 352 - } 353 - 354 335 /* 355 336 * Scsi host template for megaraid unified driver 356 337 */ ··· 344 363 .eh_device_reset_handler = megaraid_reset_handler, 345 364 .eh_bus_reset_handler = megaraid_reset_handler, 346 365 .eh_host_reset_handler = megaraid_reset_handler, 347 - .change_queue_depth = megaraid_change_queue_depth, 366 + .change_queue_depth = scsi_change_queue_depth, 348 367 .use_clustering = ENABLE_CLUSTERING, 349 368 .no_write_same = 1, 350 369 .sdev_attrs = megaraid_sdev_attrs,
+1 -12
drivers/scsi/megaraid/megaraid_sas_base.c
··· 2591 2591 } 2592 2592 } 2593 2593 2594 - static int megasas_change_queue_depth(struct scsi_device *sdev, 2595 - int queue_depth, int reason) 2596 - { 2597 - if (reason != SCSI_QDEPTH_DEFAULT) 2598 - return -EOPNOTSUPP; 2599 - 2600 - scsi_adjust_queue_depth(sdev, queue_depth); 2601 - 2602 - return queue_depth; 2603 - } 2604 - 2605 2594 static ssize_t 2606 2595 megasas_fw_crash_buffer_store(struct device *cdev, 2607 2596 struct device_attribute *attr, const char *buf, size_t count) ··· 2755 2766 .shost_attrs = megaraid_host_attrs, 2756 2767 .bios_param = megasas_bios_param, 2757 2768 .use_clustering = ENABLE_CLUSTERING, 2758 - .change_queue_depth = megasas_change_queue_depth, 2769 + .change_queue_depth = scsi_change_queue_depth, 2759 2770 .no_write_same = 1, 2760 2771 }; 2761 2772
+4 -6
drivers/scsi/mpt2sas/mpt2sas_scsih.c
··· 1222 1222 max_depth = 1; 1223 1223 if (qdepth > max_depth) 1224 1224 qdepth = max_depth; 1225 - scsi_adjust_queue_depth(sdev, qdepth); 1225 + scsi_change_queue_depth(sdev, qdepth); 1226 1226 } 1227 1227 1228 1228 /** 1229 1229 * _scsih_change_queue_depth - setting device queue depth 1230 1230 * @sdev: scsi device struct 1231 1231 * @qdepth: requested queue depth 1232 - * @reason: SCSI_QDEPTH_DEFAULT 1233 - * (see include/scsi/scsi_host.h for definition) 1234 1232 * 1235 1233 * Returns queue depth. 1236 1234 */ 1237 1235 static int 1238 - _scsih_change_queue_depth(struct scsi_device *sdev, int qdepth, int reason) 1236 + _scsih_change_queue_depth(struct scsi_device *sdev, int qdepth) 1239 1237 { 1240 1238 _scsih_adjust_queue_depth(sdev, qdepth); 1241 1239 ··· 2075 2077 r_level, raid_device->handle, 2076 2078 (unsigned long long)raid_device->wwid, 2077 2079 raid_device->num_pds, ds); 2078 - _scsih_change_queue_depth(sdev, qdepth, SCSI_QDEPTH_DEFAULT); 2080 + _scsih_change_queue_depth(sdev, qdepth); 2079 2081 /* raid transport support */ 2080 2082 if (!ioc->is_warpdrive) 2081 2083 _scsih_set_level(sdev, raid_device->volume_type); ··· 2140 2142 _scsih_display_sata_capabilities(ioc, handle, sdev); 2141 2143 2142 2144 2143 - _scsih_change_queue_depth(sdev, qdepth, SCSI_QDEPTH_DEFAULT); 2145 + _scsih_change_queue_depth(sdev, qdepth); 2144 2146 2145 2147 if (ssp_target) { 2146 2148 sas_read_port_mode_page(sdev);
+4 -6
drivers/scsi/mpt3sas/mpt3sas_scsih.c
··· 1090 1090 max_depth = 1; 1091 1091 if (qdepth > max_depth) 1092 1092 qdepth = max_depth; 1093 - scsi_adjust_queue_depth(sdev, qdepth); 1093 + scsi_change_queue_depth(sdev, qdepth); 1094 1094 } 1095 1095 1096 1096 /** 1097 1097 * _scsih_change_queue_depth - setting device queue depth 1098 1098 * @sdev: scsi device struct 1099 1099 * @qdepth: requested queue depth 1100 - * @reason: SCSI_QDEPTH_DEFAULT 1101 - * (see include/scsi/scsi_host.h for definition) 1102 1100 * 1103 1101 * Returns queue depth. 1104 1102 */ 1105 1103 static int 1106 - _scsih_change_queue_depth(struct scsi_device *sdev, int qdepth, int reason) 1104 + _scsih_change_queue_depth(struct scsi_device *sdev, int qdepth) 1107 1105 { 1108 1106 _scsih_adjust_queue_depth(sdev, qdepth); 1109 1107 ··· 1732 1734 raid_device->num_pds, ds); 1733 1735 1734 1736 1735 - _scsih_change_queue_depth(sdev, qdepth, SCSI_QDEPTH_DEFAULT); 1737 + _scsih_change_queue_depth(sdev, qdepth); 1736 1738 1737 1739 /* raid transport support */ 1738 1740 _scsih_set_level(sdev, raid_device->volume_type); ··· 1798 1800 _scsih_display_sata_capabilities(ioc, handle, sdev); 1799 1801 1800 1802 1801 - _scsih_change_queue_depth(sdev, qdepth, SCSI_QDEPTH_DEFAULT); 1803 + _scsih_change_queue_depth(sdev, qdepth); 1802 1804 1803 1805 if (ssp_target) { 1804 1806 sas_read_port_mode_page(sdev);
+1 -1
drivers/scsi/ncr53c8xx.c
··· 7997 7997 if (depth_to_use > MAX_TAGS) 7998 7998 depth_to_use = MAX_TAGS; 7999 7999 8000 - scsi_adjust_queue_depth(device, depth_to_use); 8000 + scsi_change_queue_depth(device, depth_to_use); 8001 8001 8002 8002 /* 8003 8003 ** Since the queue depth is not tunable under Linux,
+2 -10
drivers/scsi/pmcraid.c
··· 285 285 * pmcraid_change_queue_depth - Change the device's queue depth 286 286 * @scsi_dev: scsi device struct 287 287 * @depth: depth to set 288 - * @reason: calling context 289 288 * 290 289 * Return value 291 290 * actual depth set 292 291 */ 293 - static int pmcraid_change_queue_depth(struct scsi_device *scsi_dev, int depth, 294 - int reason) 292 + static int pmcraid_change_queue_depth(struct scsi_device *scsi_dev, int depth) 295 293 { 296 - if (reason != SCSI_QDEPTH_DEFAULT) 297 - return -EOPNOTSUPP; 298 - 299 294 if (depth > PMCRAID_MAX_CMD_PER_LUN) 300 295 depth = PMCRAID_MAX_CMD_PER_LUN; 301 - 302 - scsi_adjust_queue_depth(scsi_dev, depth); 303 - 304 - return scsi_dev->queue_depth; 296 + return scsi_change_queue_depth(scsi_dev, depth); 305 297 } 306 298 307 299 /**
+2 -2
drivers/scsi/qla1280.c
··· 1224 1224 1225 1225 if (device->tagged_supported && 1226 1226 (ha->bus_settings[bus].qtag_enables & (BIT_0 << target))) { 1227 - scsi_adjust_queue_depth(device, ha->bus_settings[bus].hiwat); 1227 + scsi_change_queue_depth(device, ha->bus_settings[bus].hiwat); 1228 1228 } else { 1229 - scsi_adjust_queue_depth(device, default_depth); 1229 + scsi_change_queue_depth(device, default_depth); 1230 1230 } 1231 1231 1232 1232 nv->bus[bus].target[target].parameter.enable_sync = device->sdtr;
+2 -10
drivers/scsi/qla2xxx/qla_os.c
··· 236 236 static int qla2xxx_eh_bus_reset(struct scsi_cmnd *); 237 237 static int qla2xxx_eh_host_reset(struct scsi_cmnd *); 238 238 239 - static int qla2x00_change_queue_depth(struct scsi_device *, int, int); 240 239 static void qla2x00_clear_drv_active(struct qla_hw_data *); 241 240 static void qla2x00_free_device(scsi_qla_host_t *); 242 241 static void qla83xx_disable_laser(scsi_qla_host_t *vha); ··· 257 258 .slave_destroy = qla2xxx_slave_destroy, 258 259 .scan_finished = qla2xxx_scan_finished, 259 260 .scan_start = qla2xxx_scan_start, 260 - .change_queue_depth = qla2x00_change_queue_depth, 261 + .change_queue_depth = scsi_change_queue_depth, 261 262 .change_queue_type = scsi_change_queue_type, 262 263 .this_id = -1, 263 264 .cmd_per_lun = 3, ··· 1405 1406 if (IS_T10_PI_CAPABLE(vha->hw)) 1406 1407 blk_queue_update_dma_alignment(sdev->request_queue, 0x7); 1407 1408 1408 - scsi_adjust_queue_depth(sdev, req->max_q_depth); 1409 + scsi_change_queue_depth(sdev, req->max_q_depth); 1409 1410 return 0; 1410 1411 } 1411 1412 ··· 1413 1414 qla2xxx_slave_destroy(struct scsi_device *sdev) 1414 1415 { 1415 1416 sdev->hostdata = NULL; 1416 - } 1417 - 1418 - static int 1419 - qla2x00_change_queue_depth(struct scsi_device *sdev, int qdepth, int reason) 1420 - { 1421 - scsi_adjust_queue_depth(sdev, qdepth); 1422 - return sdev->queue_depth; 1423 1417 } 1424 1418 1425 1419 /**
+2 -2
drivers/scsi/qla4xxx/ql4_os.c
··· 201 201 .eh_timed_out = qla4xxx_eh_cmd_timed_out, 202 202 203 203 .slave_alloc = qla4xxx_slave_alloc, 204 - .change_queue_depth = iscsi_change_queue_depth, 204 + .change_queue_depth = scsi_change_queue_depth, 205 205 206 206 .this_id = -1, 207 207 .cmd_per_lun = 3, ··· 9059 9059 if (ql4xmaxqdepth != 0 && ql4xmaxqdepth <= 0xffffU) 9060 9060 queue_depth = ql4xmaxqdepth; 9061 9061 9062 - scsi_adjust_queue_depth(sdev, queue_depth); 9062 + scsi_change_queue_depth(sdev, queue_depth); 9063 9063 return 0; 9064 9064 } 9065 9065
+15 -26
drivers/scsi/scsi.c
··· 742 742 } 743 743 744 744 /** 745 - * scsi_adjust_queue_depth - Let low level drivers change a device's queue depth 745 + * scsi_change_queue_depth - change a device's queue depth 746 746 * @sdev: SCSI Device in question 747 - * @tags: Number of tags allowed if tagged queueing enabled, 748 - * or number of commands the low level driver can 749 - * queue up in non-tagged mode (as per cmd_per_lun). 747 + * @depth: number of commands allowed to be queued to the driver 750 748 * 751 - * Returns: Nothing 752 - * 753 - * Lock Status: None held on entry 754 - * 755 - * Notes: Low level drivers may call this at any time and we will do 756 - * the right thing depending on whether or not the device is 757 - * currently active and whether or not it even has the 758 - * command blocks built yet. 749 + * Sets the device queue depth and returns the new value. 759 750 */ 760 - void scsi_adjust_queue_depth(struct scsi_device *sdev, int tags) 751 + int scsi_change_queue_depth(struct scsi_device *sdev, int depth) 761 752 { 762 753 unsigned long flags; 763 754 764 - /* 765 - * refuse to set tagged depth to an unworkable size 766 - */ 767 - if (tags <= 0) 768 - return; 755 + if (depth <= 0) 756 + goto out; 769 757 770 758 spin_lock_irqsave(sdev->request_queue->queue_lock, flags); 771 759 ··· 768 780 */ 769 781 if (!shost_use_blk_mq(sdev->host) && !sdev->host->bqt) { 770 782 if (blk_queue_tagged(sdev->request_queue) && 771 - blk_queue_resize_tags(sdev->request_queue, tags) != 0) 772 - goto out; 783 + blk_queue_resize_tags(sdev->request_queue, depth) != 0) 784 + goto out_unlock; 773 785 } 774 786 775 - sdev->queue_depth = tags; 776 - out: 787 + sdev->queue_depth = depth; 788 + out_unlock: 777 789 spin_unlock_irqrestore(sdev->request_queue->queue_lock, flags); 790 + out: 791 + return sdev->queue_depth; 778 792 } 779 - EXPORT_SYMBOL(scsi_adjust_queue_depth); 793 + EXPORT_SYMBOL(scsi_change_queue_depth); 780 794 781 795 /** 782 796 * scsi_track_queue_full - track QUEUE_FULL events to 
adjust queue depth ··· 823 833 if (sdev->last_queue_full_depth < 8) { 824 834 /* Drop back to untagged */ 825 835 scsi_set_tag_type(sdev, 0); 826 - scsi_adjust_queue_depth(sdev, sdev->host->cmd_per_lun); 836 + scsi_change_queue_depth(sdev, sdev->host->cmd_per_lun); 827 837 return -1; 828 838 } 829 839 830 - scsi_adjust_queue_depth(sdev, depth); 831 - return depth; 840 + return scsi_change_queue_depth(sdev, depth); 832 841 } 833 842 EXPORT_SYMBOL(scsi_track_queue_full); 834 843
+2 -2
drivers/scsi/scsi_debug.c
··· 4469 4469 } 4470 4470 4471 4471 static int 4472 - sdebug_change_qdepth(struct scsi_device *sdev, int qdepth, int reason) 4472 + sdebug_change_qdepth(struct scsi_device *sdev, int qdepth) 4473 4473 { 4474 4474 int num_in_q = 0; 4475 4475 unsigned long iflags; ··· 4489 4489 /* allow to exceed max host queued_arr elements for testing */ 4490 4490 if (qdepth > SCSI_DEBUG_CANQUEUE + 10) 4491 4491 qdepth = SCSI_DEBUG_CANQUEUE + 10; 4492 - scsi_adjust_queue_depth(sdev, qdepth); 4492 + scsi_change_queue_depth(sdev, qdepth); 4493 4493 4494 4494 if (SCSI_DEBUG_OPT_Q_NOISE & scsi_debug_opts) { 4495 4495 sdev_printk(KERN_INFO, sdev,
+1 -1
drivers/scsi/scsi_error.c
··· 632 632 tmp_sdev->queue_depth == sdev->max_queue_depth) 633 633 continue; 634 634 635 - scsi_adjust_queue_depth(tmp_sdev, tmp_sdev->queue_depth + 1); 635 + scsi_change_queue_depth(tmp_sdev, tmp_sdev->queue_depth + 1); 636 636 sdev->last_queue_ramp_up = jiffies; 637 637 } 638 638 }
+1 -1
drivers/scsi/scsi_scan.c
··· 292 292 blk_queue_init_tags(sdev->request_queue, 293 293 sdev->host->cmd_per_lun, shost->bqt); 294 294 } 295 - scsi_adjust_queue_depth(sdev, sdev->host->cmd_per_lun); 295 + scsi_change_queue_depth(sdev, sdev->host->cmd_per_lun); 296 296 297 297 scsi_sysfs_device_initialize(sdev); 298 298
+1 -2
drivers/scsi/scsi_sysfs.c
··· 880 880 if (depth < 1 || depth > sht->can_queue) 881 881 return -EINVAL; 882 882 883 - retval = sht->change_queue_depth(sdev, depth, 884 - SCSI_QDEPTH_DEFAULT); 883 + retval = sht->change_queue_depth(sdev, depth); 885 884 if (retval < 0) 886 885 return retval; 887 886
+1 -1
drivers/scsi/storvsc_drv.c
··· 1429 1429 1430 1430 static int storvsc_device_configure(struct scsi_device *sdevice) 1431 1431 { 1432 - scsi_adjust_queue_depth(sdevice, STORVSC_MAX_IO_REQUESTS); 1432 + scsi_change_queue_depth(sdevice, STORVSC_MAX_IO_REQUESTS); 1433 1433 1434 1434 blk_queue_max_segment_size(sdevice->request_queue, PAGE_SIZE); 1435 1435
+1 -1
drivers/scsi/sym53c8xx_2/sym_glue.c
··· 820 820 if (reqtags > SYM_CONF_MAX_TAG) 821 821 reqtags = SYM_CONF_MAX_TAG; 822 822 depth_to_use = reqtags ? reqtags : 1; 823 - scsi_adjust_queue_depth(sdev, depth_to_use); 823 + scsi_change_queue_depth(sdev, depth_to_use); 824 824 lp->s.scdev_depth = depth_to_use; 825 825 sym_tune_dev_queuing(tp, sdev->lun, reqtags); 826 826
+1 -1
drivers/scsi/tmscsim.c
··· 2194 2194 2195 2195 if (sdev->tagged_supported && (dcb->DevMode & TAG_QUEUEING_)) { 2196 2196 dcb->SyncMode |= EN_TAG_QUEUEING; 2197 - scsi_adjust_queue_depth(sdev, acb->TagMaxNum); 2197 + scsi_change_queue_depth(sdev, acb->TagMaxNum); 2198 2198 } 2199 2199 2200 2200 return 0;
+5 -5
drivers/scsi/u14-34f.c
··· 696 696 if (TLDEV(dev->type) && dev->tagged_supported) 697 697 698 698 if (tag_mode == TAG_SIMPLE) { 699 - scsi_adjust_queue_depth(dev, tqd); 699 + scsi_change_queue_depth(dev, tqd); 700 700 tag_suffix = ", simple tags"; 701 701 } 702 702 else if (tag_mode == TAG_ORDERED) { 703 - scsi_adjust_queue_depth(dev, tqd); 703 + scsi_change_queue_depth(dev, tqd); 704 704 tag_suffix = ", ordered tags"; 705 705 } 706 706 else { 707 - scsi_adjust_queue_depth(dev, tqd); 707 + scsi_change_queue_depth(dev, tqd); 708 708 tag_suffix = ", no tags"; 709 709 } 710 710 711 711 else if (TLDEV(dev->type) && linked_comm) { 712 - scsi_adjust_queue_depth(dev, tqd); 712 + scsi_change_queue_depth(dev, tqd); 713 713 tag_suffix = ", untagged"; 714 714 } 715 715 716 716 else { 717 - scsi_adjust_queue_depth(dev, utqd); 717 + scsi_change_queue_depth(dev, utqd); 718 718 tag_suffix = ""; 719 719 } 720 720
+4 -9
drivers/scsi/ufs/ufshcd.c
··· 2695 2695 2696 2696 dev_dbg(hba->dev, "%s: activate tcq with queue depth %d\n", 2697 2697 __func__, lun_qdepth); 2698 - scsi_adjust_queue_depth(sdev, lun_qdepth); 2698 + scsi_change_queue_depth(sdev, lun_qdepth); 2699 2699 } 2700 2700 2701 2701 /* ··· 2787 2787 * ufshcd_change_queue_depth - change queue depth 2788 2788 * @sdev: pointer to SCSI device 2789 2789 * @depth: required depth to set 2790 - * @reason: reason for changing the depth 2791 2790 * 2792 - * Change queue depth according to the reason and make sure 2793 - * the max. limits are not crossed. 2791 + * Change queue depth and make sure the max. limits are not crossed. 2794 2792 */ 2795 - static int ufshcd_change_queue_depth(struct scsi_device *sdev, 2796 - int depth, int reason) 2793 + static int ufshcd_change_queue_depth(struct scsi_device *sdev, int depth) 2797 2794 { 2798 2795 struct ufs_hba *hba = shost_priv(sdev->host); 2799 2796 2800 2797 if (depth > hba->nutrs) 2801 2798 depth = hba->nutrs; 2802 - 2803 - scsi_adjust_queue_depth(sdev, depth); 2804 - return depth; 2799 + return scsi_change_queue_depth(sdev, depth); 2805 2800 } 2806 2801 2807 2802 /**
+2 -6
drivers/scsi/virtio_scsi.c
··· 682 682 * virtscsi_change_queue_depth() - Change a virtscsi target's queue depth 683 683 * @sdev: Virtscsi target whose queue depth to change 684 684 * @qdepth: New queue depth 685 - * @reason: Reason for the queue depth change. 686 685 */ 687 - static int virtscsi_change_queue_depth(struct scsi_device *sdev, 688 - int qdepth, 689 - int reason) 686 + static int virtscsi_change_queue_depth(struct scsi_device *sdev, int qdepth) 690 687 { 691 688 struct Scsi_Host *shost = sdev->host; 692 689 int max_depth = shost->cmd_per_lun; 693 690 694 - scsi_adjust_queue_depth(sdev, min(max_depth, qdepth)); 695 - return sdev->queue_depth; 691 + return scsi_change_queue_depth(sdev, min(max_depth, qdepth)); 696 692 } 697 693 698 694 static int virtscsi_abort(struct scsi_cmnd *sc)
+2 -10
drivers/scsi/vmw_pvscsi.c
··· 504 504 } 505 505 } 506 506 507 - static int pvscsi_change_queue_depth(struct scsi_device *sdev, 508 - int qdepth, 509 - int reason) 507 + static int pvscsi_change_queue_depth(struct scsi_device *sdev, int qdepth) 510 508 { 511 - if (reason != SCSI_QDEPTH_DEFAULT) 512 - /* 513 - * We support only changing default. 514 - */ 515 - return -EOPNOTSUPP; 516 - 517 509 if (!sdev->tagged_supported) 518 510 qdepth = 1; 519 - scsi_adjust_queue_depth(sdev, qdepth); 511 + scsi_change_queue_depth(sdev, qdepth); 520 512 521 513 if (sdev->inquiry_len > 7) 522 514 sdev_printk(KERN_INFO, sdev,
-1
drivers/scsi/wd7000.c
··· 1653 1653 .can_queue = WD7000_Q, 1654 1654 .this_id = 7, 1655 1655 .sg_tablesize = WD7000_SG, 1656 - .cmd_per_lun = 1, 1657 1656 .unchecked_isa_dma = 1, 1658 1657 .use_clustering = ENABLE_CLUSTERING, 1659 1658 };
+1 -14
drivers/target/loopback/tcm_loop.c
··· 110 110 */ 111 111 struct device *tcm_loop_primary; 112 112 113 - /* 114 - * Copied from drivers/scsi/libfc/fc_fcp.c:fc_change_queue_depth() and 115 - * drivers/scsi/libiscsi.c:iscsi_change_queue_depth() 116 - */ 117 - static int tcm_loop_change_queue_depth( 118 - struct scsi_device *sdev, 119 - int depth, 120 - int reason) 121 - { 122 - scsi_adjust_queue_depth(sdev, depth); 123 - return sdev->queue_depth; 124 - } 125 - 126 113 static void tcm_loop_submission_work(struct work_struct *work) 127 114 { 128 115 struct tcm_loop_cmd *tl_cmd = ··· 384 397 .proc_name = "tcm_loopback", 385 398 .name = "TCM_Loopback", 386 399 .queuecommand = tcm_loop_queuecommand, 387 - .change_queue_depth = tcm_loop_change_queue_depth, 400 + .change_queue_depth = scsi_change_queue_depth, 388 401 .change_queue_type = scsi_change_queue_type, 389 402 .eh_abort_handler = tcm_loop_abort_task, 390 403 .eh_device_reset_handler = tcm_loop_device_reset,
+1 -1
drivers/usb/storage/uas.c
··· 799 799 if (devinfo->flags & US_FL_NO_REPORT_OPCODES) 800 800 sdev->no_report_opcodes = 1; 801 801 802 - scsi_adjust_queue_depth(sdev, devinfo->qdepth - 2); 802 + scsi_change_queue_depth(sdev, devinfo->qdepth - 2); 803 803 return 0; 804 804 } 805 805
+2 -2
include/linux/libata.h
··· 1191 1191 extern int ata_scsi_slave_config(struct scsi_device *sdev); 1192 1192 extern void ata_scsi_slave_destroy(struct scsi_device *sdev); 1193 1193 extern int ata_scsi_change_queue_depth(struct scsi_device *sdev, 1194 - int queue_depth, int reason); 1194 + int queue_depth); 1195 1195 extern int __ata_change_queue_depth(struct ata_port *ap, struct scsi_device *sdev, 1196 - int queue_depth, int reason); 1196 + int queue_depth); 1197 1197 extern struct ata_device *ata_dev_pair(struct ata_device *adev); 1198 1198 extern int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev); 1199 1199 extern void ata_scsi_port_error_handler(struct Scsi_Host *host, struct ata_port *ap);
-1
include/scsi/libfc.h
··· 1105 1105 int fc_eh_device_reset(struct scsi_cmnd *); 1106 1106 int fc_eh_host_reset(struct scsi_cmnd *); 1107 1107 int fc_slave_alloc(struct scsi_device *); 1108 - int fc_change_queue_depth(struct scsi_device *, int qdepth, int reason); 1109 1108 1110 1109 /* 1111 1110 * ELS/CT interface
-2
include/scsi/libiscsi.h
··· 378 378 /* 379 379 * scsi host template 380 380 */ 381 - extern int iscsi_change_queue_depth(struct scsi_device *sdev, int depth, 382 - int reason); 383 381 extern int iscsi_eh_abort(struct scsi_cmnd *sc); 384 382 extern int iscsi_eh_recover_target(struct scsi_cmnd *sc); 385 383 extern int iscsi_eh_session_reset(struct scsi_cmnd *sc);
+1 -2
include/scsi/libsas.h
··· 704 704 extern int sas_queuecommand(struct Scsi_Host * ,struct scsi_cmnd *); 705 705 extern int sas_target_alloc(struct scsi_target *); 706 706 extern int sas_slave_configure(struct scsi_device *); 707 - extern int sas_change_queue_depth(struct scsi_device *, int new_depth, 708 - int reason); 707 + extern int sas_change_queue_depth(struct scsi_device *, int new_depth); 709 708 extern int sas_change_queue_type(struct scsi_device *, int qt); 710 709 extern int sas_bios_param(struct scsi_device *, 711 710 struct block_device *,
+1 -1
include/scsi/scsi_device.h
··· 380 380 #define __shost_for_each_device(sdev, shost) \ 381 381 list_for_each_entry((sdev), &((shost)->__devices), siblings) 382 382 383 - extern void scsi_adjust_queue_depth(struct scsi_device *, int); 383 + extern int scsi_change_queue_depth(struct scsi_device *, int); 384 384 extern int scsi_track_queue_full(struct scsi_device *, int); 385 385 386 386 extern int scsi_set_medium_removal(struct scsi_device *, char);
+2 -6
include/scsi/scsi_host.h
··· 46 46 #define DISABLE_CLUSTERING 0 47 47 #define ENABLE_CLUSTERING 1 48 48 49 - enum { 50 - SCSI_QDEPTH_DEFAULT, /* default requested change, e.g. from sysfs */ 51 - }; 52 - 53 49 struct scsi_host_template { 54 50 struct module *module; 55 51 const char *name; ··· 189 193 * Things currently recommended to be handled at this time include: 190 194 * 191 195 * 1. Setting the device queue depth. Proper setting of this is 192 - * described in the comments for scsi_adjust_queue_depth. 196 + * described in the comments for scsi_change_queue_depth. 193 197 * 2. Determining if the device supports the various synchronous 194 198 * negotiation protocols. The device struct will already have 195 199 * responded to INQUIRY and the results of the standard items ··· 275 279 * 276 280 * Status: OPTIONAL 277 281 */ 278 - int (* change_queue_depth)(struct scsi_device *, int, int); 282 + int (* change_queue_depth)(struct scsi_device *, int); 279 283 280 284 /* 281 285 * Fill in this function to allow the changing of tag types