Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://git.kernel.dk/linux-block

Pull more block layer updates from Jens Axboe:
"A followup pull request, with parts that either needed a bit more
testing before going in, were waiting on a merge sync, or were simply
later-arriving fixes.
This contains:

- Timer-related updates from Kees. These were purposefully delayed
since I didn't want to pull a later v4.14-rc tag into my block
tree.

- ide-cd prep sense buffer fix from Bart. Also delayed, so as not to
clash with the late fix we put into 4.14-rc.

- Small BFQ updates series from Luca and Paolo.

- Single nvmet fix from James, fixing a non-functional case there.

- Bio fast clone fix from Michael, which made bcache return the wrong
data for some cases.

- Legacy IO path regression hang fix from Ming"

* 'for-linus' of git://git.kernel.dk/linux-block:
bio: ensure __bio_clone_fast copies bi_partno
nvmet_fc: fix better length checking
block: wake up all tasks blocked in get_request()
block, bfq: move debug blkio stats behind CONFIG_DEBUG_BLK_CGROUP
block, bfq: update blkio stats outside the scheduler lock
block, bfq: add missing invocations of bfqg_stats_update_io_add/remove
doc, block, bfq: update max IOPS sustainable with BFQ
ide: Make ide_cdrom_prep_fs() initialize the sense buffer pointer
md: Convert timers to use timer_setup()
block: swim3: Convert timers to use timer_setup()
block/aoe: Convert timers to use timer_setup()
amifloppy: Convert timers to use timer_setup()
block/floppy: Convert callback to pass timer_list

+311 -166
+37 -6
Documentation/block/bfq-iosched.txt
···
 details on how to configure BFQ for the desired tradeoff between
 latency and throughput, or on how to maximize throughput.
 
-On average CPUs, the current version of BFQ can handle devices
-performing at most ~30K IOPS; at most ~50 KIOPS on faster CPUs. As a
-reference, 30-50 KIOPS correspond to very high bandwidths with
-sequential I/O (e.g., 8-12 GB/s if I/O requests are 256 KB large), and
-to 120-200 MB/s with 4KB random I/O. BFQ is currently being tested on
-multi-queue devices too.
+BFQ has a non-null overhead, which limits the maximum IOPS that a CPU
+can process for a device scheduled with BFQ. To give an idea of the
+limits on slow or average CPUs, here are, first, the limits of BFQ for
+three different CPUs, on, respectively, an average laptop, an old
+desktop, and a cheap embedded system, in case full hierarchical
+support is enabled (i.e., CONFIG_BFQ_GROUP_IOSCHED is set), but
+CONFIG_DEBUG_BLK_CGROUP is not set (Section 4-2):
+- Intel i7-4850HQ: 400 KIOPS
+- AMD A8-3850: 250 KIOPS
+- ARM CortexTM-A53 Octa-core: 80 KIOPS
+
+If CONFIG_DEBUG_BLK_CGROUP is set (and of course full hierarchical
+support is enabled), then the sustainable throughput with BFQ
+decreases, because all blkio.bfq* statistics are created and updated
+(Section 4-2). For BFQ, this leads to the following maximum
+sustainable throughputs, on the same systems as above:
+- Intel i7-4850HQ: 310 KIOPS
+- AMD A8-3850: 200 KIOPS
+- ARM CortexTM-A53 Octa-core: 56 KIOPS
+
+BFQ works for multi-queue devices too.
 
 The table of contents follow. Impatients can just jump to Section 3.
···
 BFQ-specific files is "blkio.bfq." or "io.bfq." For example, the group
 parameter to set the weight of a group with BFQ is blkio.bfq.weight
 or io.bfq.weight.
+
+As for cgroups-v1 (blkio controller), the exact set of stat files
+created, and kept up-to-date by bfq, depends on whether
+CONFIG_DEBUG_BLK_CGROUP is set. If it is set, then bfq creates all
+the stat files documented in
+Documentation/cgroup-v1/blkio-controller.txt. If, instead,
+CONFIG_DEBUG_BLK_CGROUP is not set, then bfq creates only the files
+blkio.bfq.io_service_bytes
+blkio.bfq.io_service_bytes_recursive
+blkio.bfq.io_serviced
+blkio.bfq.io_serviced_recursive
+
+The value of CONFIG_DEBUG_BLK_CGROUP greatly influences the maximum
+throughput sustainable with bfq, because updating the blkio.bfq.*
+stats is rather costly, especially for some of the stats enabled by
+CONFIG_DEBUG_BLK_CGROUP.
 
 Parameters to set
 -----------------
+84 -64
block/bfq-cgroup.c
···
 
 #include "bfq-iosched.h"
 
-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
 
 /* bfqg stats flags */
 enum bfqg_stats_flags {
···
 	bfqg_stats_update_group_wait_time(stats);
 }
 
+void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
+			      unsigned int op)
+{
+	blkg_rwstat_add(&bfqg->stats.queued, op, 1);
+	bfqg_stats_end_empty_time(&bfqg->stats);
+	if (!(bfqq == ((struct bfq_data *)bfqg->bfqd)->in_service_queue))
+		bfqg_stats_set_start_group_wait_time(bfqg, bfqq_group(bfqq));
+}
+
+void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op)
+{
+	blkg_rwstat_add(&bfqg->stats.queued, op, -1);
+}
+
+void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op)
+{
+	blkg_rwstat_add(&bfqg->stats.merged, op, 1);
+}
+
+void bfqg_stats_update_completion(struct bfq_group *bfqg, uint64_t start_time,
+				  uint64_t io_start_time, unsigned int op)
+{
+	struct bfqg_stats *stats = &bfqg->stats;
+	unsigned long long now = sched_clock();
+
+	if (time_after64(now, io_start_time))
+		blkg_rwstat_add(&stats->service_time, op,
+				now - io_start_time);
+	if (time_after64(io_start_time, start_time))
+		blkg_rwstat_add(&stats->wait_time, op,
+				io_start_time - start_time);
+}
+
+#else /* CONFIG_BFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */
+
+void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
+			      unsigned int op) { }
+void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op) { }
+void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op) { }
+void bfqg_stats_update_completion(struct bfq_group *bfqg, uint64_t start_time,
+				  uint64_t io_start_time, unsigned int op) { }
+void bfqg_stats_update_dequeue(struct bfq_group *bfqg) { }
+void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg) { }
+void bfqg_stats_update_idle_time(struct bfq_group *bfqg) { }
+void bfqg_stats_set_start_idle_time(struct bfq_group *bfqg) { }
+void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg) { }
+
+#endif /* CONFIG_BFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */
+
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+
 /*
  * blk-cgroup policy-related handlers
  * The following functions help in converting between blk-cgroup
···
 	blkg_put(bfqg_to_blkg(bfqg));
 }
 
-void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
-			      unsigned int op)
-{
-	blkg_rwstat_add(&bfqg->stats.queued, op, 1);
-	bfqg_stats_end_empty_time(&bfqg->stats);
-	if (!(bfqq == ((struct bfq_data *)bfqg->bfqd)->in_service_queue))
-		bfqg_stats_set_start_group_wait_time(bfqg, bfqq_group(bfqq));
-}
-
-void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op)
-{
-	blkg_rwstat_add(&bfqg->stats.queued, op, -1);
-}
-
-void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op)
-{
-	blkg_rwstat_add(&bfqg->stats.merged, op, 1);
-}
-
-void bfqg_stats_update_completion(struct bfq_group *bfqg, uint64_t start_time,
-				  uint64_t io_start_time, unsigned int op)
-{
-	struct bfqg_stats *stats = &bfqg->stats;
-	unsigned long long now = sched_clock();
-
-	if (time_after64(now, io_start_time))
-		blkg_rwstat_add(&stats->service_time, op,
-				now - io_start_time);
-	if (time_after64(io_start_time, start_time))
-		blkg_rwstat_add(&stats->wait_time, op,
-				io_start_time - start_time);
-}
-
 /* @stats = 0 */
 static void bfqg_stats_reset(struct bfqg_stats *stats)
 {
+#ifdef CONFIG_DEBUG_BLK_CGROUP
 	/* queued stats shouldn't be cleared */
 	blkg_rwstat_reset(&stats->merged);
 	blkg_rwstat_reset(&stats->service_time);
···
 	blkg_stat_reset(&stats->group_wait_time);
 	blkg_stat_reset(&stats->idle_time);
 	blkg_stat_reset(&stats->empty_time);
+#endif
 }
 
 /* @to += @from */
···
 	if (!to || !from)
 		return;
 
+#ifdef CONFIG_DEBUG_BLK_CGROUP
 	/* queued stats shouldn't be cleared */
 	blkg_rwstat_add_aux(&to->merged, &from->merged);
 	blkg_rwstat_add_aux(&to->service_time, &from->service_time);
···
 	blkg_stat_add_aux(&to->group_wait_time, &from->group_wait_time);
 	blkg_stat_add_aux(&to->idle_time, &from->idle_time);
 	blkg_stat_add_aux(&to->empty_time, &from->empty_time);
+#endif
 }
 
 /*
···
 
 static void bfqg_stats_exit(struct bfqg_stats *stats)
 {
+#ifdef CONFIG_DEBUG_BLK_CGROUP
 	blkg_rwstat_exit(&stats->merged);
 	blkg_rwstat_exit(&stats->service_time);
 	blkg_rwstat_exit(&stats->wait_time);
···
 	blkg_stat_exit(&stats->group_wait_time);
 	blkg_stat_exit(&stats->idle_time);
 	blkg_stat_exit(&stats->empty_time);
+#endif
 }
 
 static int bfqg_stats_init(struct bfqg_stats *stats, gfp_t gfp)
 {
+#ifdef CONFIG_DEBUG_BLK_CGROUP
 	if (blkg_rwstat_init(&stats->merged, gfp) ||
 	    blkg_rwstat_init(&stats->service_time, gfp) ||
 	    blkg_rwstat_init(&stats->wait_time, gfp) ||
···
 		bfqg_stats_exit(stats);
 		return -ENOMEM;
 	}
+#endif
 
 	return 0;
 }
···
 	return bfq_io_set_weight_legacy(of_css(of), NULL, weight);
 }
 
+#ifdef CONFIG_DEBUG_BLK_CGROUP
 static int bfqg_print_stat(struct seq_file *sf, void *v)
 {
 	blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), blkg_prfill_stat,
···
 			  0, false);
 	return 0;
 }
+#endif /* CONFIG_DEBUG_BLK_CGROUP */
 
 struct bfq_group *bfq_create_group_hierarchy(struct bfq_data *bfqd, int node)
 {
···
 
 	/* statistics, covers only the tasks in the bfqg */
 	{
-		.name = "bfq.time",
-		.private = offsetof(struct bfq_group, stats.time),
-		.seq_show = bfqg_print_stat,
-	},
-	{
-		.name = "bfq.sectors",
-		.seq_show = bfqg_print_stat_sectors,
-	},
-	{
 		.name = "bfq.io_service_bytes",
 		.private = (unsigned long)&blkcg_policy_bfq,
 		.seq_show = blkg_print_stat_bytes,
···
 		.name = "bfq.io_serviced",
 		.private = (unsigned long)&blkcg_policy_bfq,
 		.seq_show = blkg_print_stat_ios,
+	},
+#ifdef CONFIG_DEBUG_BLK_CGROUP
+	{
+		.name = "bfq.time",
+		.private = offsetof(struct bfq_group, stats.time),
+		.seq_show = bfqg_print_stat,
+	},
+	{
+		.name = "bfq.sectors",
+		.seq_show = bfqg_print_stat_sectors,
 	},
 	{
 		.name = "bfq.io_service_time",
···
 		.private = offsetof(struct bfq_group, stats.queued),
 		.seq_show = bfqg_print_rwstat,
 	},
+#endif /* CONFIG_DEBUG_BLK_CGROUP */
 
 	/* the same statictics which cover the bfqg and its descendants */
-	{
-		.name = "bfq.time_recursive",
-		.private = offsetof(struct bfq_group, stats.time),
-		.seq_show = bfqg_print_stat_recursive,
-	},
-	{
-		.name = "bfq.sectors_recursive",
-		.seq_show = bfqg_print_stat_sectors_recursive,
-	},
 	{
 		.name = "bfq.io_service_bytes_recursive",
 		.private = (unsigned long)&blkcg_policy_bfq,
···
 		.name = "bfq.io_serviced_recursive",
 		.private = (unsigned long)&blkcg_policy_bfq,
 		.seq_show = blkg_print_stat_ios_recursive,
+	},
+#ifdef CONFIG_DEBUG_BLK_CGROUP
+	{
+		.name = "bfq.time_recursive",
+		.private = offsetof(struct bfq_group, stats.time),
+		.seq_show = bfqg_print_stat_recursive,
+	},
+	{
+		.name = "bfq.sectors_recursive",
+		.seq_show = bfqg_print_stat_sectors_recursive,
 	},
 	{
 		.name = "bfq.io_service_time_recursive",
···
 		.private = offsetof(struct bfq_group, stats.dequeue),
 		.seq_show = bfqg_print_stat,
 	},
+#endif /* CONFIG_DEBUG_BLK_CGROUP */
 	{ }	/* terminate */
 };
 
···
 };
 
 #else	/* CONFIG_BFQ_GROUP_IOSCHED */
-
-void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
-			      unsigned int op) { }
-void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op) { }
-void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op) { }
-void bfqg_stats_update_completion(struct bfq_group *bfqg, uint64_t start_time,
-				  uint64_t io_start_time, unsigned int op) { }
-void bfqg_stats_update_dequeue(struct bfq_group *bfqg) { }
-void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg) { }
-void bfqg_stats_update_idle_time(struct bfq_group *bfqg) { }
-void bfqg_stats_set_start_idle_time(struct bfq_group *bfqg) { }
-void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg) { }
 
 void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 		   struct bfq_group *bfqg) {}
+110 -7
block/bfq-iosched.c
···
 		bfqq->ttime.last_end_request +
 		bfqd->bfq_slice_idle * 3;
 
-	bfqg_stats_update_io_add(bfqq_group(RQ_BFQQ(rq)), bfqq, rq->cmd_flags);
 
 	/*
 	 * bfqq deserves to be weight-raised if:
···
 	if (rq->cmd_flags & REQ_META)
 		bfqq->meta_pending--;
 
-	bfqg_stats_update_io_remove(bfqq_group(bfqq), rq->cmd_flags);
 }
 
 static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
···
 		bfqq->next_rq = rq;
 
 	bfq_remove_request(q, next);
+	bfqg_stats_update_io_remove(bfqq_group(bfqq), next->cmd_flags);
 
 	spin_unlock_irq(&bfqq->bfqd->lock);
 end:
···
 			       struct bfq_queue *bfqq)
 {
 	if (bfqq) {
-		bfqg_stats_update_avg_queue_size(bfqq_group(bfqq));
 		bfq_clear_bfqq_fifo_expire(bfqq);
 
 		bfqd->budgets_assigned = (bfqd->budgets_assigned * 7 + 256) / 8;
···
 			 */
 			bfq_clear_bfqq_wait_request(bfqq);
 			hrtimer_try_to_cancel(&bfqd->idle_slice_timer);
-			bfqg_stats_update_idle_time(bfqq_group(bfqq));
 		}
 		goto keep_queue;
 	}
···
 {
 	struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
 	struct request *rq;
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+	struct bfq_queue *in_serv_queue, *bfqq;
+	bool waiting_rq, idle_timer_disabled;
+#endif
 
 	spin_lock_irq(&bfqd->lock);
 
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+	in_serv_queue = bfqd->in_service_queue;
+	waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue);
+
 	rq = __bfq_dispatch_request(hctx);
+
+	idle_timer_disabled =
+		waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
+
+#else
+	rq = __bfq_dispatch_request(hctx);
+#endif
 	spin_unlock_irq(&bfqd->lock);
+
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+	bfqq = rq ? RQ_BFQQ(rq) : NULL;
+	if (!idle_timer_disabled && !bfqq)
+		return rq;
+
+	/*
+	 * rq and bfqq are guaranteed to exist until this function
+	 * ends, for the following reasons. First, rq can be
+	 * dispatched to the device, and then can be completed and
+	 * freed, only after this function ends. Second, rq cannot be
+	 * merged (and thus freed because of a merge) any longer,
+	 * because it has already started. Thus rq cannot be freed
+	 * before this function ends, and, since rq has a reference to
+	 * bfqq, the same guarantee holds for bfqq too.
+	 *
+	 * In addition, the following queue lock guarantees that
+	 * bfqq_group(bfqq) exists as well.
+	 */
+	spin_lock_irq(hctx->queue->queue_lock);
+	if (idle_timer_disabled)
+		/*
+		 * Since the idle timer has been disabled,
+		 * in_serv_queue contained some request when
+		 * __bfq_dispatch_request was invoked above, which
+		 * implies that rq was picked exactly from
+		 * in_serv_queue. Thus in_serv_queue == bfqq, and is
+		 * therefore guaranteed to exist because of the above
+		 * arguments.
+		 */
+		bfqg_stats_update_idle_time(bfqq_group(in_serv_queue));
+	if (bfqq) {
+		struct bfq_group *bfqg = bfqq_group(bfqq);
+
+		bfqg_stats_update_avg_queue_size(bfqg);
+		bfqg_stats_set_start_empty_time(bfqg);
+		bfqg_stats_update_io_remove(bfqg, rq->cmd_flags);
+	}
+	spin_unlock_irq(hctx->queue->queue_lock);
+#endif
 
 	return rq;
 }
···
 		 */
 		bfq_clear_bfqq_wait_request(bfqq);
 		hrtimer_try_to_cancel(&bfqd->idle_slice_timer);
-		bfqg_stats_update_idle_time(bfqq_group(bfqq));
 
 		/*
 		 * The queue is not empty, because a new request just
···
 	}
 }
 
-static void __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
+/* returns true if it causes the idle timer to be disabled */
+static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
 {
 	struct bfq_queue *bfqq = RQ_BFQQ(rq),
 		*new_bfqq = bfq_setup_cooperator(bfqd, bfqq, rq, true);
+	bool waiting, idle_timer_disabled = false;
 
 	if (new_bfqq) {
 		if (bic_to_bfqq(RQ_BIC(rq), 1) != bfqq)
···
 		bfqq = new_bfqq;
 	}
 
+	waiting = bfqq && bfq_bfqq_wait_request(bfqq);
 	bfq_add_request(rq);
+	idle_timer_disabled = waiting && !bfq_bfqq_wait_request(bfqq);
 
 	rq->fifo_time = ktime_get_ns() + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
 	list_add_tail(&rq->queuelist, &bfqq->fifo);
 
 	bfq_rq_enqueued(bfqd, bfqq, rq);
+
+	return idle_timer_disabled;
 }
 
 static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
···
 {
 	struct request_queue *q = hctx->queue;
 	struct bfq_data *bfqd = q->elevator->elevator_data;
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+	struct bfq_queue *bfqq = RQ_BFQQ(rq);
+	bool idle_timer_disabled = false;
+	unsigned int cmd_flags;
+#endif
 
 	spin_lock_irq(&bfqd->lock);
 	if (blk_mq_sched_try_insert_merge(q, rq)) {
···
 		else
 			list_add_tail(&rq->queuelist, &bfqd->dispatch);
 	} else {
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+		idle_timer_disabled = __bfq_insert_request(bfqd, rq);
+		/*
+		 * Update bfqq, because, if a queue merge has occurred
+		 * in __bfq_insert_request, then rq has been
+		 * redirected into a new queue.
+		 */
+		bfqq = RQ_BFQQ(rq);
+#else
 		__bfq_insert_request(bfqd, rq);
+#endif
 
 		if (rq_mergeable(rq)) {
 			elv_rqhash_add(q, rq);
···
 		}
 	}
 
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+	/*
+	 * Cache cmd_flags before releasing scheduler lock, because rq
+	 * may disappear afterwards (for example, because of a request
+	 * merge).
+	 */
+	cmd_flags = rq->cmd_flags;
+#endif
 	spin_unlock_irq(&bfqd->lock);
+
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+	if (!bfqq)
+		return;
+	/*
+	 * bfqq still exists, because it can disappear only after
+	 * either it is merged with another queue, or the process it
+	 * is associated with exits. But both actions must be taken by
+	 * the same process currently executing this flow of
+	 * instruction.
+	 *
+	 * In addition, the following queue lock guarantees that
+	 * bfqq_group(bfqq) exists as well.
+	 */
+	spin_lock_irq(q->queue_lock);
+	bfqg_stats_update_io_add(bfqq_group(bfqq), bfqq, cmd_flags);
+	if (idle_timer_disabled)
+		bfqg_stats_update_idle_time(bfqq_group(bfqq));
+	spin_unlock_irq(q->queue_lock);
+#endif
 }
 
 static void bfq_insert_requests(struct blk_mq_hw_ctx *hctx,
···
 		 * lock is held.
 		 */
 
-		if (!RB_EMPTY_NODE(&rq->rb_node))
+		if (!RB_EMPTY_NODE(&rq->rb_node)) {
 			bfq_remove_request(rq->q, rq);
+			bfqg_stats_update_io_remove(bfqq_group(bfqq),
+						    rq->cmd_flags);
+		}
 		bfq_put_rq_priv_body(bfqq);
 	}
 
+2 -2
block/bfq-iosched.h
···
 };
 
 struct bfqg_stats {
-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
 	/* number of ios merged */
 	struct blkg_rwstat merged;
 	/* total time spent on device in ns, may not be accurate w/ queueing */
···
 	uint64_t start_idle_time;
 	uint64_t start_empty_time;
 	uint16_t flags;
-#endif	/* CONFIG_BFQ_GROUP_IOSCHED */
+#endif	/* CONFIG_BFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */
 };
 
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-1
block/bfq-wf2q.c
···
 		st->vtime += bfq_delta(served, st->wsum);
 		bfq_forget_idle(st);
 	}
-	bfqg_stats_set_start_empty_time(bfqq_group(bfqq));
 	bfq_log_bfqq(bfqq->bfqd, bfqq, "bfqq_served %d secs", served);
 }
 
+1
block/bio.c
···
 	 * so we don't set nor calculate new physical/hw segment counts here
 	 */
 	bio->bi_disk = bio_src->bi_disk;
+	bio->bi_partno = bio_src->bi_partno;
 	bio_set_flag(bio, BIO_CLONED);
 	bio->bi_opf = bio_src->bi_opf;
 	bio->bi_write_hint = bio_src->bi_write_hint;
+2 -2
block/blk-core.c
···
 	spin_lock_irq(q->queue_lock);
 	blk_queue_for_each_rl(rl, q) {
 		if (rl->rq_pool) {
-			wake_up(&rl->wait[BLK_RW_SYNC]);
-			wake_up(&rl->wait[BLK_RW_ASYNC]);
+			wake_up_all(&rl->wait[BLK_RW_SYNC]);
+			wake_up_all(&rl->wait[BLK_RW_ASYNC]);
 		}
 	}
 	spin_unlock_irq(q->queue_lock);
+26 -31
drivers/block/amiflop.c
···
 
 static struct timer_list flush_track_timer[FD_MAX_UNITS];
 static struct timer_list post_write_timer;
+static unsigned long post_write_timer_drive;
 static struct timer_list motor_on_timer;
 static struct timer_list motor_off_timer[FD_MAX_UNITS];
 static int on_attempts;
···
 
 }
 
-static void motor_on_callback(unsigned long ignored)
+static void motor_on_callback(struct timer_list *unused)
 {
 	if (!(ciaa.pra & DSKRDY) || --on_attempts == 0) {
 		complete_all(&motor_on_completion);
···
 		on_attempts = -1;
 #if 0
 		printk (KERN_ERR "motor_on failed, turning motor off\n");
-		fd_motor_off (nr);
+		fd_motor_off (motor_off_timer + nr);
 		return 0;
 #else
 		printk (KERN_WARNING "DSKRDY not set after 1.5 seconds - assuming drive is spinning notwithstanding\n");
···
 	return 1;
 }
 
-static void fd_motor_off(unsigned long drive)
+static void fd_motor_off(struct timer_list *timer)
 {
-	long calledfromint;
-#ifdef MODULE
-	long decusecount;
+	unsigned long drive = ((unsigned long)timer -
+			       (unsigned long)&motor_off_timer[0]) /
+					sizeof(motor_off_timer[0]);
 
-	decusecount = drive & 0x40000000;
-#endif
-	calledfromint = drive & 0x80000000;
 	drive&=3;
-	if (calledfromint && !try_fdc(drive)) {
+	if (!try_fdc(drive)) {
 		/* We would be blocked in an interrupt, so try again later */
-		motor_off_timer[drive].expires = jiffies + 1;
-		add_timer(motor_off_timer + drive);
+		timer->expires = jiffies + 1;
+		add_timer(timer);
 		return;
 	}
 	unit[drive].motor = 0;
···
 	int drive;
 
 	drive = nr & 3;
-	/* called this way it is always from interrupt */
-	motor_off_timer[drive].data = nr | 0x80000000;
 	mod_timer(motor_off_timer + drive, jiffies + 3*HZ);
 }
 
···
 			break;
 		if (--n == 0) {
 			printk (KERN_ERR "fd%d: calibrate failed, turning motor off\n", drive);
-			fd_motor_off (drive);
+			fd_motor_off (motor_off_timer + drive);
 			unit[drive].track = -1;
 			rel_fdc();
 			return 0;
···
 	if (block_flag == 2) {		/* writing */
 		writepending = 2;
 		post_write_timer.expires = jiffies + 1; /* at least 2 ms */
-		post_write_timer.data = selected;
+		post_write_timer_drive = selected;
 		add_timer(&post_write_timer);
 	}
 	else {				/* reading */
···
 	rel_fdc();	/* corresponds to get_fdc() in raw_write */
 }
 
+static void post_write_callback(struct timer_list *timer)
+{
+	post_write(post_write_timer_drive);
+}
 
 /*
  * The following functions are to convert the block contents into raw data
···
 /* FIXME: this assumes the drive is still spinning -
  * which is only true if we complete writing a track within three seconds
  */
-static void flush_track_callback(unsigned long nr)
+static void flush_track_callback(struct timer_list *timer)
 {
+	unsigned long nr = ((unsigned long)timer -
+			    (unsigned long)&flush_track_timer[0]) /
+					sizeof(flush_track_timer[0]);
+
 	nr&=3;
 	writefromint = 1;
 	if (!try_fdc(nr)) {
···
 		fd_ref[drive] = 0;
 	}
 #ifdef MODULE
-/* the mod_use counter is handled this way */
-	floppy_off (drive | 0x40000000);
+	floppy_off (drive);
 #endif
 	mutex_unlock(&amiflop_mutex);
 }
···
 				floppy_find, NULL, NULL);
 
 	/* initialize variables */
-	init_timer(&motor_on_timer);
+	timer_setup(&motor_on_timer, motor_on_callback, 0);
 	motor_on_timer.expires = 0;
-	motor_on_timer.data = 0;
-	motor_on_timer.function = motor_on_callback;
 	for (i = 0; i < FD_MAX_UNITS; i++) {
-		init_timer(&motor_off_timer[i]);
+		timer_setup(&motor_off_timer[i], fd_motor_off, 0);
 		motor_off_timer[i].expires = 0;
-		motor_off_timer[i].data = i|0x80000000;
-		motor_off_timer[i].function = fd_motor_off;
-		init_timer(&flush_track_timer[i]);
+		timer_setup(&flush_track_timer[i], flush_track_callback, 0);
 		flush_track_timer[i].expires = 0;
-		flush_track_timer[i].data = i;
-		flush_track_timer[i].function = flush_track_callback;
 
 		unit[i].track = -1;
 	}
 
-	init_timer(&post_write_timer);
+	timer_setup(&post_write_timer, post_write_callback, 0);
 	post_write_timer.expires = 0;
-	post_write_timer.data = 0;
-	post_write_timer.function = post_write;
 
 	for (i = 0; i < 128; i++)
 		mfmdecode[i]=255;
+3 -3
drivers/block/aoe/aoecmd.c
···
 }
 
 static void
-rexmit_timer(ulong vp)
+rexmit_timer(struct timer_list *timer)
 {
 	struct aoedev *d;
 	struct aoetgt *t;
···
 	int utgts;	/* number of aoetgt descriptors (not slots) */
 	int since;
 
-	d = (struct aoedev *) vp;
+	d = from_timer(d, timer, timer);
 
 	spin_lock_irqsave(&d->lock, flags);
 
···
 
 	d->rttavg = RTTAVG_INIT;
 	d->rttdev = RTTDEV_INIT;
-	d->timer.function = rexmit_timer;
+	d->timer.function = (TIMER_FUNC_TYPE)rexmit_timer;
 
 	skb = skb_clone(skb, GFP_ATOMIC);
 	if (skb) {
+3 -6
drivers/block/aoe/aoedev.c
···
 #include <linux/string.h>
 #include "aoe.h"
 
-static void dummy_timer(ulong);
 static void freetgt(struct aoedev *d, struct aoetgt *t);
 static void skbpoolfree(struct aoedev *d);
 
···
 }
 
 static void
-dummy_timer(ulong vp)
+dummy_timer(struct timer_list *t)
 {
 	struct aoedev *d;
 
-	d = (struct aoedev *)vp;
+	d = from_timer(d, t, timer);
 	if (d->flags & DEVFL_TKILL)
 		return;
 	d->timer.expires = jiffies + HZ;
···
 	INIT_WORK(&d->work, aoecmd_sleepwork);
 	spin_lock_init(&d->lock);
 	skb_queue_head_init(&d->skbpool);
-	init_timer(&d->timer);
-	d->timer.data = (ulong) d;
-	d->timer.function = dummy_timer;
+	timer_setup(&d->timer, dummy_timer, 0);
 	d->timer.expires = jiffies + HZ;
 	add_timer(&d->timer);
 	d->bufpool = NULL;	/* defer to aoeblk_gdalloc */
+7 -3
drivers/block/floppy.c
···
 }
 
 /* switches the motor off after a given timeout */
-static void motor_off_callback(unsigned long nr)
+static void motor_off_callback(struct timer_list *t)
 {
+	unsigned long nr = t - motor_off_timer;
 	unsigned char mask = ~(0x10 << UNIT(nr));
+
+	if (WARN_ON_ONCE(nr >= N_DRIVE))
+		return;
 
 	set_dor(FDC(nr), mask, 0);
 }
···
 	else
 		raw_cmd->flags &= ~FD_RAW_DISK_CHANGE;
 	if (raw_cmd->flags & FD_RAW_NO_MOTOR_AFTER)
-		motor_off_callback(current_drive);
+		motor_off_callback(&motor_off_timer[current_drive]);
 
 	if (raw_cmd->next &&
 	    (!(raw_cmd->flags & FD_RAW_FAILURE) ||
···
 		disks[drive]->fops = &floppy_fops;
 		sprintf(disks[drive]->disk_name, "fd%d", drive);
 
-		setup_timer(&motor_off_timer[drive], motor_off_callback, drive);
+		timer_setup(&motor_off_timer[drive], motor_off_callback, 0);
 	}
 
 	err = register_blkdev(FLOPPY_MAJOR, "fd");
+15 -16
drivers/block/swim3.c
···
 static void seek_track(struct floppy_state *fs, int n);
 static void init_dma(struct dbdma_cmd *cp, int cmd, void *buf, int count);
 static void act(struct floppy_state *fs);
-static void scan_timeout(unsigned long data);
-static void seek_timeout(unsigned long data);
-static void settle_timeout(unsigned long data);
-static void xfer_timeout(unsigned long data);
+static void scan_timeout(struct timer_list *t);
+static void seek_timeout(struct timer_list *t);
+static void settle_timeout(struct timer_list *t);
+static void xfer_timeout(struct timer_list *t);
 static irqreturn_t swim3_interrupt(int irq, void *dev_id);
 /*static void fd_dma_interrupt(int irq, void *dev_id);*/
 static int grab_drive(struct floppy_state *fs, enum swim_state state,
···
 }
 
 static void set_timeout(struct floppy_state *fs, int nticks,
-			void (*proc)(unsigned long))
+			void (*proc)(struct timer_list *t))
 {
 	if (fs->timeout_pending)
 		del_timer(&fs->timeout);
 	fs->timeout.expires = jiffies + nticks;
-	fs->timeout.function = proc;
-	fs->timeout.data = (unsigned long) fs;
+	fs->timeout.function = (TIMER_FUNC_TYPE)proc;
 	add_timer(&fs->timeout);
 	fs->timeout_pending = 1;
 }
···
 	}
 }
 
-static void scan_timeout(unsigned long data)
+static void scan_timeout(struct timer_list *t)
 {
-	struct floppy_state *fs = (struct floppy_state *) data;
+	struct floppy_state *fs = from_timer(fs, t, timeout);
 	struct swim3 __iomem *sw = fs->swim3;
 	unsigned long flags;
 
···
 	spin_unlock_irqrestore(&swim3_lock, flags);
 }
 
-static void seek_timeout(unsigned long data)
+static void seek_timeout(struct timer_list *t)
 {
-	struct floppy_state *fs = (struct floppy_state *) data;
+	struct floppy_state *fs = from_timer(fs, t, timeout);
 	struct swim3 __iomem *sw = fs->swim3;
 	unsigned long flags;
 
···
 	spin_unlock_irqrestore(&swim3_lock, flags);
 }
 
-static void settle_timeout(unsigned long data)
+static void settle_timeout(struct timer_list *t)
 {
-	struct floppy_state *fs = (struct floppy_state *) data;
+	struct floppy_state *fs = from_timer(fs, t, timeout);
 	struct swim3 __iomem *sw = fs->swim3;
 	unsigned long flags;
 
···
 	spin_unlock_irqrestore(&swim3_lock, flags);
 }
 
-static void xfer_timeout(unsigned long data)
+static void xfer_timeout(struct timer_list *t)
 {
-	struct floppy_state *fs = (struct floppy_state *) data;
+	struct floppy_state *fs = from_timer(fs, t, timeout);
 	struct swim3 __iomem *sw = fs->swim3;
 	struct dbdma_regs __iomem *dr = fs->dma;
 	unsigned long flags;
···
 		return -EBUSY;
 	}
 
-	init_timer(&fs->timeout);
+	timer_setup(&fs->timeout, NULL, 0);
 
 	swim3_info("SWIM3 floppy controller %s\n",
 		   mdev->media_bay ? "in media bay" : "");
+1 -2
drivers/ide/ide-cd.c
··· 1333 1333 unsigned long blocks = blk_rq_sectors(rq) / (hard_sect >> 9); 1334 1334 struct scsi_request *req = scsi_req(rq); 1335 1335 1336 - scsi_req_init(req); 1337 - memset(req->cmd, 0, BLK_MAX_CDB); 1336 + q->initialize_rq_fn(rq); 1338 1337 1339 1338 if (rq_data_dir(rq) == READ) 1340 1339 req->cmd[0] = GPCMD_READ_10;
+3 -5
drivers/md/bcache/stats.c
··· 147 147 } 148 148 } 149 149 150 - static void scale_accounting(unsigned long data) 150 + static void scale_accounting(struct timer_list *t) 151 151 { 152 - struct cache_accounting *acc = (struct cache_accounting *) data; 152 + struct cache_accounting *acc = from_timer(acc, t, timer); 153 153 154 154 #define move_stat(name) do { \ 155 155 unsigned t = atomic_xchg(&acc->collector.name, 0); \ ··· 234 234 kobject_init(&acc->day.kobj, &bch_stats_ktype); 235 235 236 236 closure_init(&acc->cl, parent); 237 - init_timer(&acc->timer); 237 + timer_setup(&acc->timer, scale_accounting, 0); 238 238 acc->timer.expires = jiffies + accounting_delay; 239 - acc->timer.data = (unsigned long) acc; 240 - acc->timer.function = scale_accounting; 241 239 add_timer(&acc->timer); 242 240 }
+3 -3
drivers/md/dm-delay.c
··· 44 44 45 45 static DEFINE_MUTEX(delayed_bios_lock); 46 46 47 - static void handle_delayed_timer(unsigned long data) 47 + static void handle_delayed_timer(struct timer_list *t) 48 48 { 49 - struct delay_c *dc = (struct delay_c *)data; 49 + struct delay_c *dc = from_timer(dc, t, delay_timer); 50 50 51 51 queue_work(dc->kdelayd_wq, &dc->flush_expired_bios); 52 52 } ··· 195 195 goto bad_queue; 196 196 } 197 197 198 - setup_timer(&dc->delay_timer, handle_delayed_timer, (unsigned long)dc); 198 + timer_setup(&dc->delay_timer, handle_delayed_timer, 0); 199 199 200 200 INIT_WORK(&dc->flush_expired_bios, flush_expired_bios); 201 201 INIT_LIST_HEAD(&dc->delayed_bios);
+3 -3
drivers/md/dm-integrity.c
··· 1094 1094 __remove_wait_queue(&ic->endio_wait, &wait); 1095 1095 } 1096 1096 1097 - static void autocommit_fn(unsigned long data) 1097 + static void autocommit_fn(struct timer_list *t) 1098 1098 { 1099 - struct dm_integrity_c *ic = (struct dm_integrity_c *)data; 1099 + struct dm_integrity_c *ic = from_timer(ic, t, autocommit_timer); 1100 1100 1101 1101 if (likely(!dm_integrity_failed(ic))) 1102 1102 queue_work(ic->commit_wq, &ic->commit_work); ··· 2942 2942 2943 2943 ic->autocommit_jiffies = msecs_to_jiffies(sync_msec); 2944 2944 ic->autocommit_msec = sync_msec; 2945 - setup_timer(&ic->autocommit_timer, autocommit_fn, (unsigned long)ic); 2945 + timer_setup(&ic->autocommit_timer, autocommit_fn, 0); 2946 2946 2947 2947 ic->io = dm_io_client_create(); 2948 2948 if (IS_ERR(ic->io)) {
+3 -5
drivers/md/dm-raid1.c
··· 94 94 queue_work(ms->kmirrord_wq, &ms->kmirrord_work); 95 95 } 96 96 97 - static void delayed_wake_fn(unsigned long data) 97 + static void delayed_wake_fn(struct timer_list *t) 98 98 { 99 - struct mirror_set *ms = (struct mirror_set *) data; 99 + struct mirror_set *ms = from_timer(ms, t, timer); 100 100 101 101 clear_bit(0, &ms->timer_pending); 102 102 wakeup_mirrord(ms); ··· 108 108 return; 109 109 110 110 ms->timer.expires = jiffies + HZ / 5; 111 - ms->timer.data = (unsigned long) ms; 112 - ms->timer.function = delayed_wake_fn; 113 111 add_timer(&ms->timer); 114 112 } 115 113 ··· 1131 1133 goto err_free_context; 1132 1134 } 1133 1135 INIT_WORK(&ms->kmirrord_work, do_mirror); 1134 - init_timer(&ms->timer); 1136 + timer_setup(&ms->timer, delayed_wake_fn, 0); 1135 1137 ms->timer_pending = 0; 1136 1138 INIT_WORK(&ms->trigger_event, trigger_event); 1137 1139
+4 -5
drivers/md/md.c
··· 541 541 bioset_free(sync_bs); 542 542 } 543 543 544 - static void md_safemode_timeout(unsigned long data); 544 + static void md_safemode_timeout(struct timer_list *t); 545 545 546 546 void mddev_init(struct mddev *mddev) 547 547 { ··· 550 550 mutex_init(&mddev->bitmap_info.mutex); 551 551 INIT_LIST_HEAD(&mddev->disks); 552 552 INIT_LIST_HEAD(&mddev->all_mddevs); 553 - setup_timer(&mddev->safemode_timer, md_safemode_timeout, 554 - (unsigned long) mddev); 553 + timer_setup(&mddev->safemode_timer, md_safemode_timeout, 0); 555 554 atomic_set(&mddev->active, 1); 556 555 atomic_set(&mddev->openers, 0); 557 556 atomic_set(&mddev->active_io, 0); ··· 5403 5404 return -EINVAL; 5404 5405 } 5405 5406 5406 - static void md_safemode_timeout(unsigned long data) 5407 + static void md_safemode_timeout(struct timer_list *t) 5407 5408 { 5408 - struct mddev *mddev = (struct mddev *) data; 5409 + struct mddev *mddev = from_timer(mddev, t, safemode_timer); 5409 5410 5410 5411 mddev->safemode = 1; 5411 5412 if (mddev->external)
+4 -2
drivers/nvme/target/fc.c
··· 2144 2144 struct nvmet_fc_fcp_iod *fod) 2145 2145 { 2146 2146 struct nvme_fc_cmd_iu *cmdiu = &fod->cmdiubuf; 2147 + u32 xfrlen = be32_to_cpu(cmdiu->data_len); 2147 2148 int ret; 2148 2149 2149 2150 /* ··· 2158 2157 2159 2158 fod->fcpreq->done = nvmet_fc_xmt_fcp_op_done; 2160 2159 2161 - fod->req.transfer_len = be32_to_cpu(cmdiu->data_len); 2162 2160 if (cmdiu->flags & FCNVME_CMD_FLAGS_WRITE) { 2163 2161 fod->io_dir = NVMET_FCP_WRITE; 2164 2162 if (!nvme_is_write(&cmdiu->sqe)) ··· 2168 2168 goto transport_error; 2169 2169 } else { 2170 2170 fod->io_dir = NVMET_FCP_NODATA; 2171 - if (fod->req.transfer_len) 2171 + if (xfrlen) 2172 2172 goto transport_error; 2173 2173 } 2174 2174 ··· 2191 2191 /* nvmet layer has already called op done to send rsp. */ 2192 2192 return; 2193 2193 } 2194 + 2195 + fod->req.transfer_len = xfrlen; 2194 2196 2195 2197 /* keep a running counter of tail position */ 2196 2198 atomic_inc(&fod->queue->sqtail);