Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-3.14/core' of git://git.kernel.dk/linux-block

Pull core block IO changes from Jens Axboe:
"The major piece in here is the immutable bio_ve series from Kent, the
rest is fairly minor. It was supposed to go in last round, but
various issues pushed it to this release instead. The pull request
contains:

- Various smaller blk-mq fixes from different folks. Nothing major
here, just minor fixes and cleanups.

- Fix for a memory leak in the error path in the block ioctl code
from Christian Engelmayer.

- Header export fix from CaiZhiyong.

- Finally the immutable biovec changes from Kent Overstreet. This
enables some nice future work on making arbitrarily sized bios
possible, and splitting more efficient. Related fixes to immutable
bio_vecs:

- dm-cache immutable fixup from Mike Snitzer.
- btrfs immutable fixup from Muthu Kumar.

- bio-integrity fix from Nic Bellinger, which is also going to stable"

* 'for-3.14/core' of git://git.kernel.dk/linux-block: (44 commits)
xtensa: fixup simdisk driver to work with immutable bio_vecs
block/blk-mq-cpu.c: use hotcpu_notifier()
blk-mq: for_each_* macro correctness
block: Fix memory leak in rw_copy_check_uvector() handling
bio-integrity: Fix bio_integrity_verify segment start bug
block: remove unrelated header files and export symbol
blk-mq: uses page->list incorrectly
blk-mq: use __smp_call_function_single directly
btrfs: fix missing increment of bi_remaining
Revert "block: Warn and free bio if bi_end_io is not set"
block: Warn and free bio if bi_end_io is not set
blk-mq: fix initializing request's start time
block: blk-mq: don't export blk_mq_free_queue()
block: blk-mq: make blk_sync_queue support mq
block: blk-mq: support draining mq queue
dm cache: increment bi_remaining when bi_end_io is restored
block: fixup for generic bio chaining
block: Really silence spurious compiler warnings
block: Silence spurious compiler warnings
block: Kill bio_pair_split()
...

+2137 -2676
+3 -4
Documentation/block/biodoc.txt
··· 447 447 * main unit of I/O for the block layer and lower layers (ie drivers) 448 448 */ 449 449 struct bio { 450 - sector_t bi_sector; 451 450 struct bio *bi_next; /* request queue link */ 452 451 struct block_device *bi_bdev; /* target device */ 453 452 unsigned long bi_flags; /* status, command, etc */ 454 453 unsigned long bi_rw; /* low bits: r/w, high: priority */ 455 454 456 455 unsigned int bi_vcnt; /* how may bio_vec's */ 457 - unsigned int bi_idx; /* current index into bio_vec array */ 456 + struct bvec_iter bi_iter; /* current index into bio_vec array */ 458 457 459 458 unsigned int bi_size; /* total size in bytes */ 460 459 unsigned short bi_phys_segments; /* segments after physaddr coalesce*/ ··· 479 480 - Code that traverses the req list can find all the segments of a bio 480 481 by using rq_for_each_segment. This handles the fact that a request 481 482 has multiple bios, each of which can have multiple segments. 482 - - Drivers which can't process a large bio in one shot can use the bi_idx 483 + - Drivers which can't process a large bio in one shot can use the bi_iter 483 484 field to keep track of the next bio_vec entry to process. 484 485 (e.g a 1MB bio_vec needs to be handled in max 128kB chunks for IDE) 485 486 [TBD: Should preferably also have a bi_voffset and bi_vlen to avoid modifying ··· 588 589 nr_sectors and current_nr_sectors fields (based on the corresponding 589 590 hard_xxx values and the number of bytes transferred) and updates it on 590 591 every transfer that invokes end_that_request_first. It does the same for the 591 - buffer, bio, bio->bi_idx fields too. 592 + buffer, bio, bio->bi_iter fields too. 592 593 593 594 The buffer field is just a virtual address mapping of the current segment 594 595 of the i/o buffer in cases where the buffer resides in low-memory. For high
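For orientation, here is a minimal sketch of the iterator that replaces the removed bi_sector and bi_idx fields; the field names follow the series (they all appear in the documentation and diffs below), but the comments and the two example reads are paraphrased rather than verbatim kernel source.

struct bvec_iter {
    sector_t     bi_sector;    /* device address, in 512-byte sectors */
    unsigned int bi_size;      /* residual I/O count, in bytes */
    unsigned int bi_idx;       /* current index into bi_io_vec */
    unsigned int bi_bvec_done; /* bytes completed in the current bvec */
};

/* Drivers now read position and size through bio->bi_iter, e.g.: */
sector_t     first_sector = bio->bi_iter.bi_sector;
unsigned int bytes_left   = bio->bi_iter.bi_size;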
+111
Documentation/block/biovecs.txt
··· 1 + 2 + Immutable biovecs and biovec iterators: 3 + ======================================= 4 + 5 + Kent Overstreet <kmo@daterainc.com> 6 + 7 + As of 3.13, biovecs should never be modified after a bio has been submitted. 8 + Instead, we have a new struct bvec_iter which represents a range of a biovec - 9 + the iterator will be modified as the bio is completed, not the biovec. 10 + 11 + More specifically, old code that needed to partially complete a bio would 12 + update bi_sector and bi_size, and advance bi_idx to the next biovec. If it 13 + ended up partway through a biovec, it would increment bv_offset and decrement 14 + bv_len by the number of bytes completed in that biovec. 15 + 16 + In the new scheme of things, everything that must be mutated in order to 17 + partially complete a bio is segregated into struct bvec_iter: bi_sector, 18 + bi_size and bi_idx have been moved there; and instead of modifying bv_offset 19 + and bv_len, struct bvec_iter has bi_bvec_done, which represents the number of 20 + bytes completed in the current bvec. 21 + 22 + There are a bunch of new helper macros for hiding the gory details - in 23 + particular, presenting the illusion of partially completed biovecs so that 24 + normal code doesn't have to deal with bi_bvec_done. 25 + 26 + * Driver code should no longer refer to biovecs directly; we now have 27 + bio_iovec() and bio_iovec_iter() macros that return literal struct biovecs, 28 + constructed from the raw biovecs but taking into account bi_bvec_done and 29 + bi_size. 30 + 31 + bio_for_each_segment() has been updated to take a bvec_iter argument 32 + instead of an integer (that corresponded to bi_idx); for a lot of code the 33 + conversion just required changing the types of the arguments to 34 + bio_for_each_segment(). 35 + 36 + * Advancing a bvec_iter is done with bio_advance_iter(); bio_advance() is a 37 + wrapper around bio_advance_iter() that operates on bio->bi_iter, and also 38 + advances the bio integrity's iter if present. 39 + 40 + There is a lower level advance function - bvec_iter_advance() - which takes 41 + a pointer to a biovec, not a bio; this is used by the bio integrity code. 42 + 43 + What's all this get us? 44 + ======================= 45 + 46 + Having a real iterator, and making biovecs immutable, has a number of 47 + advantages: 48 + 49 + * Before, iterating over bios was very awkward when you weren't processing 50 + exactly one bvec at a time - for example, bio_copy_data() in fs/bio.c, 51 + which copies the contents of one bio into another. Because the biovecs 52 + wouldn't necessarily be the same size, the old code was tricky convoluted - 53 + it had to walk two different bios at the same time, keeping both bi_idx and 54 + and offset into the current biovec for each. 55 + 56 + The new code is much more straightforward - have a look. This sort of 57 + pattern comes up in a lot of places; a lot of drivers were essentially open 58 + coding bvec iterators before, and having common implementation considerably 59 + simplifies a lot of code. 60 + 61 + * Before, any code that might need to use the biovec after the bio had been 62 + completed (perhaps to copy the data somewhere else, or perhaps to resubmit 63 + it somewhere else if there was an error) had to save the entire bvec array 64 + - again, this was being done in a fair number of places. 65 + 66 + * Biovecs can be shared between multiple bios - a bvec iter can represent an 67 + arbitrary range of an existing biovec, both starting and ending midway 68 + through biovecs. 
This is what enables efficient splitting of arbitrary 69 + bios. Note that this means we _only_ use bi_size to determine when we've 70 + reached the end of a bio, not bi_vcnt - and the bio_iovec() macro takes 71 + bi_size into account when constructing biovecs. 72 + 73 + * Splitting bios is now much simpler. The old bio_split() didn't even work on 74 + bios with more than a single bvec! Now, we can efficiently split arbitrary 75 + size bios - because the new bio can share the old bio's biovec. 76 + 77 + Care must be taken to ensure the biovec isn't freed while the split bio is 78 + still using it, in case the original bio completes first, though. Using 79 + bio_chain() when splitting bios helps with this. 80 + 81 + * Submitting partially completed bios is now perfectly fine - this comes up 82 + occasionally in stacking block drivers and various code (e.g. md and 83 + bcache) had some ugly workarounds for this. 84 + 85 + It used to be the case that submitting a partially completed bio would work 86 + fine to _most_ devices, but since accessing the raw bvec array was the 87 + norm, not all drivers would respect bi_idx and those would break. Now, 88 + since all drivers _must_ go through the bvec iterator - and have been 89 + audited to make sure they are - submitting partially completed bios is 90 + perfectly fine. 91 + 92 + Other implications: 93 + =================== 94 + 95 + * Almost all usage of bi_idx is now incorrect and has been removed; instead, 96 + where previously you would have used bi_idx you'd now use a bvec_iter, 97 + probably passing it to one of the helper macros. 98 + 99 + I.e. instead of using bio_iovec_idx() (or bio->bi_iovec[bio->bi_idx]), you 100 + now use bio_iter_iovec(), which takes a bvec_iter and returns a 101 + literal struct bio_vec - constructed on the fly from the raw biovec but 102 + taking into account bi_bvec_done (and bi_size). 103 + 104 + * bi_vcnt can't be trusted or relied upon by driver code - i.e. anything that 105 + doesn't actually own the bio. The reason is twofold: firstly, it's not 106 + actually needed for iterating over the bio anymore - we only use bi_size. 107 + Secondly, when cloning a bio and reusing (a portion of) the original bio's 108 + biovec, in order to calculate bi_vcnt for the new bio we'd have to iterate 109 + over all the biovecs in the new bio - which is silly as it's not needed. 110 + 111 + So, don't use bi_vcnt anymore.
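The per-driver conversions that follow are mechanical instances of one pattern: switch from a bio_vec pointer plus an integer index to a bio_vec by value plus a struct bvec_iter, and read the starting sector through bio->bi_iter. A condensed sketch is below; my_dev, my_transfer and my_make_request are illustrative names, not taken from any converted driver.

/* New-style driver loop (cf. the nfblock and simdisk conversions below) */
static void my_make_request(struct request_queue *q, struct bio *bio)
{
    struct my_dev *dev = q->queuedata;
    struct bio_vec bvec;        /* by value; was "struct bio_vec *bvec" */
    struct bvec_iter iter;      /* replaces the old "int i" index */
    sector_t sector = bio->bi_iter.bi_sector;   /* was bio->bi_sector */

    bio_for_each_segment(bvec, bio, iter) {
        /* bvec is a literal bio_vec constructed from the iterator state */
        my_transfer(dev, sector, bvec.bv_page, bvec.bv_offset,
                    bvec.bv_len, bio_data_dir(bio));
        sector += bvec.bv_len >> 9;
    }
    bio_endio(bio, 0);
}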
+7 -6
arch/m68k/emu/nfblock.c
··· 62 62 static void nfhd_make_request(struct request_queue *queue, struct bio *bio) 63 63 { 64 64 struct nfhd_device *dev = queue->queuedata; 65 - struct bio_vec *bvec; 66 - int i, dir, len, shift; 67 - sector_t sec = bio->bi_sector; 65 + struct bio_vec bvec; 66 + struct bvec_iter iter; 67 + int dir, len, shift; 68 + sector_t sec = bio->bi_iter.bi_sector; 68 69 69 70 dir = bio_data_dir(bio); 70 71 shift = dev->bshift; 71 - bio_for_each_segment(bvec, bio, i) { 72 - len = bvec->bv_len; 72 + bio_for_each_segment(bvec, bio, iter) { 73 + len = bvec.bv_len; 73 74 len >>= 9; 74 75 nfhd_read_write(dev->id, 0, dir, sec >> shift, len >> shift, 75 - bvec_to_phys(bvec)); 76 + bvec_to_phys(&bvec)); 76 77 sec += len; 77 78 } 78 79 bio_endio(bio, 0);
+11 -10
arch/powerpc/sysdev/axonram.c
··· 109 109 struct axon_ram_bank *bank = bio->bi_bdev->bd_disk->private_data; 110 110 unsigned long phys_mem, phys_end; 111 111 void *user_mem; 112 - struct bio_vec *vec; 112 + struct bio_vec vec; 113 113 unsigned int transfered; 114 - unsigned short idx; 114 + struct bvec_iter iter; 115 115 116 - phys_mem = bank->io_addr + (bio->bi_sector << AXON_RAM_SECTOR_SHIFT); 116 + phys_mem = bank->io_addr + (bio->bi_iter.bi_sector << 117 + AXON_RAM_SECTOR_SHIFT); 117 118 phys_end = bank->io_addr + bank->size; 118 119 transfered = 0; 119 - bio_for_each_segment(vec, bio, idx) { 120 - if (unlikely(phys_mem + vec->bv_len > phys_end)) { 120 + bio_for_each_segment(vec, bio, iter) { 121 + if (unlikely(phys_mem + vec.bv_len > phys_end)) { 121 122 bio_io_error(bio); 122 123 return; 123 124 } 124 125 125 - user_mem = page_address(vec->bv_page) + vec->bv_offset; 126 + user_mem = page_address(vec.bv_page) + vec.bv_offset; 126 127 if (bio_data_dir(bio) == READ) 127 - memcpy(user_mem, (void *) phys_mem, vec->bv_len); 128 + memcpy(user_mem, (void *) phys_mem, vec.bv_len); 128 129 else 129 - memcpy((void *) phys_mem, user_mem, vec->bv_len); 130 + memcpy((void *) phys_mem, user_mem, vec.bv_len); 130 131 131 - phys_mem += vec->bv_len; 132 - transfered += vec->bv_len; 132 + phys_mem += vec.bv_len; 133 + transfered += vec.bv_len; 133 134 } 134 135 bio_endio(bio, 0); 135 136 }
+7 -7
arch/xtensa/platforms/iss/simdisk.c
··· 103 103 104 104 static int simdisk_xfer_bio(struct simdisk *dev, struct bio *bio) 105 105 { 106 - int i; 107 - struct bio_vec *bvec; 108 - sector_t sector = bio->bi_sector; 106 + struct bio_vec bvec; 107 + struct bvec_iter iter; 108 + sector_t sector = bio->bi_iter.bi_sector; 109 109 110 - bio_for_each_segment(bvec, bio, i) { 111 - char *buffer = __bio_kmap_atomic(bio, i); 112 - unsigned len = bvec->bv_len >> SECTOR_SHIFT; 110 + bio_for_each_segment(bvec, bio, iter) { 111 + char *buffer = __bio_kmap_atomic(bio, iter); 112 + unsigned len = bvec.bv_len >> SECTOR_SHIFT; 113 113 114 114 simdisk_transfer(dev, sector, len, buffer, 115 115 bio_data_dir(bio) == WRITE); 116 116 sector += len; 117 - __bio_kunmap_atomic(bio); 117 + __bio_kunmap_atomic(buffer); 118 118 } 119 119 return 0; 120 120 }
+38 -23
block/blk-core.c
··· 38 38 39 39 #include "blk.h" 40 40 #include "blk-cgroup.h" 41 + #include "blk-mq.h" 41 42 42 43 EXPORT_TRACEPOINT_SYMBOL_GPL(block_bio_remap); 43 44 EXPORT_TRACEPOINT_SYMBOL_GPL(block_rq_remap); ··· 131 130 bio_advance(bio, nbytes); 132 131 133 132 /* don't actually finish bio if it's part of flush sequence */ 134 - if (bio->bi_size == 0 && !(rq->cmd_flags & REQ_FLUSH_SEQ)) 133 + if (bio->bi_iter.bi_size == 0 && !(rq->cmd_flags & REQ_FLUSH_SEQ)) 135 134 bio_endio(bio, error); 136 135 } 137 136 ··· 246 245 void blk_sync_queue(struct request_queue *q) 247 246 { 248 247 del_timer_sync(&q->timeout); 249 - cancel_delayed_work_sync(&q->delay_work); 248 + 249 + if (q->mq_ops) { 250 + struct blk_mq_hw_ctx *hctx; 251 + int i; 252 + 253 + queue_for_each_hw_ctx(q, hctx, i) 254 + cancel_delayed_work_sync(&hctx->delayed_work); 255 + } else { 256 + cancel_delayed_work_sync(&q->delay_work); 257 + } 250 258 } 251 259 EXPORT_SYMBOL(blk_sync_queue); 252 260 ··· 507 497 * Drain all requests queued before DYING marking. Set DEAD flag to 508 498 * prevent that q->request_fn() gets invoked after draining finished. 509 499 */ 510 - spin_lock_irq(lock); 511 - __blk_drain_queue(q, true); 500 + if (q->mq_ops) { 501 + blk_mq_drain_queue(q); 502 + spin_lock_irq(lock); 503 + } else { 504 + spin_lock_irq(lock); 505 + __blk_drain_queue(q, true); 506 + } 512 507 queue_flag_set(QUEUE_FLAG_DEAD, q); 513 508 spin_unlock_irq(lock); 514 509 ··· 1341 1326 bio->bi_io_vec->bv_offset = 0; 1342 1327 bio->bi_io_vec->bv_len = len; 1343 1328 1344 - bio->bi_size = len; 1329 + bio->bi_iter.bi_size = len; 1345 1330 bio->bi_vcnt = 1; 1346 1331 bio->bi_phys_segments = 1; 1347 1332 ··· 1366 1351 1367 1352 req->biotail->bi_next = bio; 1368 1353 req->biotail = bio; 1369 - req->__data_len += bio->bi_size; 1354 + req->__data_len += bio->bi_iter.bi_size; 1370 1355 req->ioprio = ioprio_best(req->ioprio, bio_prio(bio)); 1371 1356 1372 1357 blk_account_io_start(req, false); ··· 1395 1380 * not touch req->buffer either... 1396 1381 */ 1397 1382 req->buffer = bio_data(bio); 1398 - req->__sector = bio->bi_sector; 1399 - req->__data_len += bio->bi_size; 1383 + req->__sector = bio->bi_iter.bi_sector; 1384 + req->__data_len += bio->bi_iter.bi_size; 1400 1385 req->ioprio = ioprio_best(req->ioprio, bio_prio(bio)); 1401 1386 1402 1387 blk_account_io_start(req, false); ··· 1474 1459 req->cmd_flags |= REQ_FAILFAST_MASK; 1475 1460 1476 1461 req->errors = 0; 1477 - req->__sector = bio->bi_sector; 1462 + req->__sector = bio->bi_iter.bi_sector; 1478 1463 req->ioprio = bio_prio(bio); 1479 1464 blk_rq_bio_prep(req->q, req, bio); 1480 1465 } ··· 1598 1583 if (bio_sectors(bio) && bdev != bdev->bd_contains) { 1599 1584 struct hd_struct *p = bdev->bd_part; 1600 1585 1601 - bio->bi_sector += p->start_sect; 1586 + bio->bi_iter.bi_sector += p->start_sect; 1602 1587 bio->bi_bdev = bdev->bd_contains; 1603 1588 1604 1589 trace_block_bio_remap(bdev_get_queue(bio->bi_bdev), bio, 1605 1590 bdev->bd_dev, 1606 - bio->bi_sector - p->start_sect); 1591 + bio->bi_iter.bi_sector - p->start_sect); 1607 1592 } 1608 1593 } 1609 1594 ··· 1669 1654 /* Test device or partition size, when known. 
*/ 1670 1655 maxsector = i_size_read(bio->bi_bdev->bd_inode) >> 9; 1671 1656 if (maxsector) { 1672 - sector_t sector = bio->bi_sector; 1657 + sector_t sector = bio->bi_iter.bi_sector; 1673 1658 1674 1659 if (maxsector < nr_sectors || maxsector - nr_sectors < sector) { 1675 1660 /* ··· 1705 1690 "generic_make_request: Trying to access " 1706 1691 "nonexistent block-device %s (%Lu)\n", 1707 1692 bdevname(bio->bi_bdev, b), 1708 - (long long) bio->bi_sector); 1693 + (long long) bio->bi_iter.bi_sector); 1709 1694 goto end_io; 1710 1695 } 1711 1696 ··· 1719 1704 } 1720 1705 1721 1706 part = bio->bi_bdev->bd_part; 1722 - if (should_fail_request(part, bio->bi_size) || 1707 + if (should_fail_request(part, bio->bi_iter.bi_size) || 1723 1708 should_fail_request(&part_to_disk(part)->part0, 1724 - bio->bi_size)) 1709 + bio->bi_iter.bi_size)) 1725 1710 goto end_io; 1726 1711 1727 1712 /* ··· 1880 1865 if (rw & WRITE) { 1881 1866 count_vm_events(PGPGOUT, count); 1882 1867 } else { 1883 - task_io_account_read(bio->bi_size); 1868 + task_io_account_read(bio->bi_iter.bi_size); 1884 1869 count_vm_events(PGPGIN, count); 1885 1870 } 1886 1871 ··· 1889 1874 printk(KERN_DEBUG "%s(%d): %s block %Lu on %s (%u sectors)\n", 1890 1875 current->comm, task_pid_nr(current), 1891 1876 (rw & WRITE) ? "WRITE" : "READ", 1892 - (unsigned long long)bio->bi_sector, 1877 + (unsigned long long)bio->bi_iter.bi_sector, 1893 1878 bdevname(bio->bi_bdev, b), 1894 1879 count); 1895 1880 } ··· 2022 2007 for (bio = rq->bio; bio; bio = bio->bi_next) { 2023 2008 if ((bio->bi_rw & ff) != ff) 2024 2009 break; 2025 - bytes += bio->bi_size; 2010 + bytes += bio->bi_iter.bi_size; 2026 2011 } 2027 2012 2028 2013 /* this could lead to infinite loop */ ··· 2393 2378 total_bytes = 0; 2394 2379 while (req->bio) { 2395 2380 struct bio *bio = req->bio; 2396 - unsigned bio_bytes = min(bio->bi_size, nr_bytes); 2381 + unsigned bio_bytes = min(bio->bi_iter.bi_size, nr_bytes); 2397 2382 2398 - if (bio_bytes == bio->bi_size) 2383 + if (bio_bytes == bio->bi_iter.bi_size) 2399 2384 req->bio = bio->bi_next; 2400 2385 2401 2386 req_bio_endio(req, bio, bio_bytes, error); ··· 2743 2728 rq->nr_phys_segments = bio_phys_segments(q, bio); 2744 2729 rq->buffer = bio_data(bio); 2745 2730 } 2746 - rq->__data_len = bio->bi_size; 2731 + rq->__data_len = bio->bi_iter.bi_size; 2747 2732 rq->bio = rq->biotail = bio; 2748 2733 2749 2734 if (bio->bi_bdev) ··· 2761 2746 void rq_flush_dcache_pages(struct request *rq) 2762 2747 { 2763 2748 struct req_iterator iter; 2764 - struct bio_vec *bvec; 2749 + struct bio_vec bvec; 2765 2750 2766 2751 rq_for_each_segment(bvec, rq, iter) 2767 - flush_dcache_page(bvec->bv_page); 2752 + flush_dcache_page(bvec.bv_page); 2768 2753 } 2769 2754 EXPORT_SYMBOL_GPL(rq_flush_dcache_pages); 2770 2755 #endif
+4
block/blk-exec.c
··· 60 60 rq->rq_disk = bd_disk; 61 61 rq->end_io = done; 62 62 63 + /* 64 + * don't check dying flag for MQ because the request won't 65 + * be resued after dying flag is set 66 + */ 63 67 if (q->mq_ops) { 64 68 blk_mq_insert_request(q, rq, true); 65 69 return;
+1 -1
block/blk-flush.c
··· 548 548 * copied from blk_rq_pos(rq). 549 549 */ 550 550 if (error_sector) 551 - *error_sector = bio->bi_sector; 551 + *error_sector = bio->bi_iter.bi_sector; 552 552 553 553 bio_put(bio); 554 554 return ret;
+22 -18
block/blk-integrity.c
··· 43 43 */ 44 44 int blk_rq_count_integrity_sg(struct request_queue *q, struct bio *bio) 45 45 { 46 - struct bio_vec *iv, *ivprv = NULL; 46 + struct bio_vec iv, ivprv = { NULL }; 47 47 unsigned int segments = 0; 48 48 unsigned int seg_size = 0; 49 - unsigned int i = 0; 49 + struct bvec_iter iter; 50 + int prev = 0; 50 51 51 - bio_for_each_integrity_vec(iv, bio, i) { 52 + bio_for_each_integrity_vec(iv, bio, iter) { 52 53 53 - if (ivprv) { 54 - if (!BIOVEC_PHYS_MERGEABLE(ivprv, iv)) 54 + if (prev) { 55 + if (!BIOVEC_PHYS_MERGEABLE(&ivprv, &iv)) 55 56 goto new_segment; 56 57 57 - if (!BIOVEC_SEG_BOUNDARY(q, ivprv, iv)) 58 + if (!BIOVEC_SEG_BOUNDARY(q, &ivprv, &iv)) 58 59 goto new_segment; 59 60 60 - if (seg_size + iv->bv_len > queue_max_segment_size(q)) 61 + if (seg_size + iv.bv_len > queue_max_segment_size(q)) 61 62 goto new_segment; 62 63 63 - seg_size += iv->bv_len; 64 + seg_size += iv.bv_len; 64 65 } else { 65 66 new_segment: 66 67 segments++; 67 - seg_size = iv->bv_len; 68 + seg_size = iv.bv_len; 68 69 } 69 70 71 + prev = 1; 70 72 ivprv = iv; 71 73 } 72 74 ··· 89 87 int blk_rq_map_integrity_sg(struct request_queue *q, struct bio *bio, 90 88 struct scatterlist *sglist) 91 89 { 92 - struct bio_vec *iv, *ivprv = NULL; 90 + struct bio_vec iv, ivprv = { NULL }; 93 91 struct scatterlist *sg = NULL; 94 92 unsigned int segments = 0; 95 - unsigned int i = 0; 93 + struct bvec_iter iter; 94 + int prev = 0; 96 95 97 - bio_for_each_integrity_vec(iv, bio, i) { 96 + bio_for_each_integrity_vec(iv, bio, iter) { 98 97 99 - if (ivprv) { 100 - if (!BIOVEC_PHYS_MERGEABLE(ivprv, iv)) 98 + if (prev) { 99 + if (!BIOVEC_PHYS_MERGEABLE(&ivprv, &iv)) 101 100 goto new_segment; 102 101 103 - if (!BIOVEC_SEG_BOUNDARY(q, ivprv, iv)) 102 + if (!BIOVEC_SEG_BOUNDARY(q, &ivprv, &iv)) 104 103 goto new_segment; 105 104 106 - if (sg->length + iv->bv_len > queue_max_segment_size(q)) 105 + if (sg->length + iv.bv_len > queue_max_segment_size(q)) 107 106 goto new_segment; 108 107 109 - sg->length += iv->bv_len; 108 + sg->length += iv.bv_len; 110 109 } else { 111 110 new_segment: 112 111 if (!sg) ··· 117 114 sg = sg_next(sg); 118 115 } 119 116 120 - sg_set_page(sg, iv->bv_page, iv->bv_len, iv->bv_offset); 117 + sg_set_page(sg, iv.bv_page, iv.bv_len, iv.bv_offset); 121 118 segments++; 122 119 } 123 120 121 + prev = 1; 124 122 ivprv = iv; 125 123 } 126 124
+6 -6
block/blk-lib.c
··· 108 108 req_sects = end_sect - sector; 109 109 } 110 110 111 - bio->bi_sector = sector; 111 + bio->bi_iter.bi_sector = sector; 112 112 bio->bi_end_io = bio_batch_end_io; 113 113 bio->bi_bdev = bdev; 114 114 bio->bi_private = &bb; 115 115 116 - bio->bi_size = req_sects << 9; 116 + bio->bi_iter.bi_size = req_sects << 9; 117 117 nr_sects -= req_sects; 118 118 sector = end_sect; 119 119 ··· 174 174 break; 175 175 } 176 176 177 - bio->bi_sector = sector; 177 + bio->bi_iter.bi_sector = sector; 178 178 bio->bi_end_io = bio_batch_end_io; 179 179 bio->bi_bdev = bdev; 180 180 bio->bi_private = &bb; ··· 184 184 bio->bi_io_vec->bv_len = bdev_logical_block_size(bdev); 185 185 186 186 if (nr_sects > max_write_same_sectors) { 187 - bio->bi_size = max_write_same_sectors << 9; 187 + bio->bi_iter.bi_size = max_write_same_sectors << 9; 188 188 nr_sects -= max_write_same_sectors; 189 189 sector += max_write_same_sectors; 190 190 } else { 191 - bio->bi_size = nr_sects << 9; 191 + bio->bi_iter.bi_size = nr_sects << 9; 192 192 nr_sects = 0; 193 193 } 194 194 ··· 240 240 break; 241 241 } 242 242 243 - bio->bi_sector = sector; 243 + bio->bi_iter.bi_sector = sector; 244 244 bio->bi_bdev = bdev; 245 245 bio->bi_end_io = bio_batch_end_io; 246 246 bio->bi_private = &bb;
+3 -3
block/blk-map.c
··· 20 20 rq->biotail->bi_next = bio; 21 21 rq->biotail = bio; 22 22 23 - rq->__data_len += bio->bi_size; 23 + rq->__data_len += bio->bi_iter.bi_size; 24 24 } 25 25 return 0; 26 26 } ··· 76 76 77 77 ret = blk_rq_append_bio(q, rq, bio); 78 78 if (!ret) 79 - return bio->bi_size; 79 + return bio->bi_iter.bi_size; 80 80 81 81 /* if it was boucned we must call the end io function */ 82 82 bio_endio(bio, 0); ··· 220 220 if (IS_ERR(bio)) 221 221 return PTR_ERR(bio); 222 222 223 - if (bio->bi_size != len) { 223 + if (bio->bi_iter.bi_size != len) { 224 224 /* 225 225 * Grab an extra reference to this bio, as bio_unmap_user() 226 226 * expects to be able to drop it twice as it happens on the
+36 -30
block/blk-merge.c
··· 12 12 static unsigned int __blk_recalc_rq_segments(struct request_queue *q, 13 13 struct bio *bio) 14 14 { 15 - struct bio_vec *bv, *bvprv = NULL; 16 - int cluster, i, high, highprv = 1; 15 + struct bio_vec bv, bvprv = { NULL }; 16 + int cluster, high, highprv = 1; 17 17 unsigned int seg_size, nr_phys_segs; 18 18 struct bio *fbio, *bbio; 19 + struct bvec_iter iter; 19 20 20 21 if (!bio) 21 22 return 0; ··· 26 25 seg_size = 0; 27 26 nr_phys_segs = 0; 28 27 for_each_bio(bio) { 29 - bio_for_each_segment(bv, bio, i) { 28 + bio_for_each_segment(bv, bio, iter) { 30 29 /* 31 30 * the trick here is making sure that a high page is 32 31 * never considered part of another segment, since that 33 32 * might change with the bounce page. 34 33 */ 35 - high = page_to_pfn(bv->bv_page) > queue_bounce_pfn(q); 36 - if (high || highprv) 37 - goto new_segment; 38 - if (cluster) { 39 - if (seg_size + bv->bv_len 34 + high = page_to_pfn(bv.bv_page) > queue_bounce_pfn(q); 35 + if (!high && !highprv && cluster) { 36 + if (seg_size + bv.bv_len 40 37 > queue_max_segment_size(q)) 41 38 goto new_segment; 42 - if (!BIOVEC_PHYS_MERGEABLE(bvprv, bv)) 39 + if (!BIOVEC_PHYS_MERGEABLE(&bvprv, &bv)) 43 40 goto new_segment; 44 - if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bv)) 41 + if (!BIOVEC_SEG_BOUNDARY(q, &bvprv, &bv)) 45 42 goto new_segment; 46 43 47 - seg_size += bv->bv_len; 44 + seg_size += bv.bv_len; 48 45 bvprv = bv; 49 46 continue; 50 47 } ··· 53 54 54 55 nr_phys_segs++; 55 56 bvprv = bv; 56 - seg_size = bv->bv_len; 57 + seg_size = bv.bv_len; 57 58 highprv = high; 58 59 } 59 60 bbio = bio; ··· 86 87 static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio, 87 88 struct bio *nxt) 88 89 { 90 + struct bio_vec end_bv = { NULL }, nxt_bv; 91 + struct bvec_iter iter; 92 + 89 93 if (!blk_queue_cluster(q)) 90 94 return 0; 91 95 ··· 99 97 if (!bio_has_data(bio)) 100 98 return 1; 101 99 102 - if (!BIOVEC_PHYS_MERGEABLE(__BVEC_END(bio), __BVEC_START(nxt))) 100 + bio_for_each_segment(end_bv, bio, iter) 101 + if (end_bv.bv_len == iter.bi_size) 102 + break; 103 + 104 + nxt_bv = bio_iovec(nxt); 105 + 106 + if (!BIOVEC_PHYS_MERGEABLE(&end_bv, &nxt_bv)) 103 107 return 0; 104 108 105 109 /* 106 110 * bio and nxt are contiguous in memory; check if the queue allows 107 111 * these two to be merged into one 108 112 */ 109 - if (BIO_SEG_BOUNDARY(q, bio, nxt)) 113 + if (BIOVEC_SEG_BOUNDARY(q, &end_bv, &nxt_bv)) 110 114 return 1; 111 115 112 116 return 0; 113 117 } 114 118 115 - static void 119 + static inline void 116 120 __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec, 117 - struct scatterlist *sglist, struct bio_vec **bvprv, 121 + struct scatterlist *sglist, struct bio_vec *bvprv, 118 122 struct scatterlist **sg, int *nsegs, int *cluster) 119 123 { 120 124 121 125 int nbytes = bvec->bv_len; 122 126 123 - if (*bvprv && *cluster) { 127 + if (*sg && *cluster) { 124 128 if ((*sg)->length + nbytes > queue_max_segment_size(q)) 125 129 goto new_segment; 126 130 127 - if (!BIOVEC_PHYS_MERGEABLE(*bvprv, bvec)) 131 + if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec)) 128 132 goto new_segment; 129 - if (!BIOVEC_SEG_BOUNDARY(q, *bvprv, bvec)) 133 + if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec)) 130 134 goto new_segment; 131 135 132 136 (*sg)->length += nbytes; ··· 158 150 sg_set_page(*sg, bvec->bv_page, nbytes, bvec->bv_offset); 159 151 (*nsegs)++; 160 152 } 161 - *bvprv = bvec; 153 + *bvprv = *bvec; 162 154 } 163 155 164 156 /* ··· 168 160 int blk_rq_map_sg(struct request_queue *q, struct request *rq, 169 161 struct scatterlist 
*sglist) 170 162 { 171 - struct bio_vec *bvec, *bvprv; 163 + struct bio_vec bvec, bvprv = { NULL }; 172 164 struct req_iterator iter; 173 165 struct scatterlist *sg; 174 166 int nsegs, cluster; ··· 179 171 /* 180 172 * for each bio in rq 181 173 */ 182 - bvprv = NULL; 183 174 sg = NULL; 184 175 rq_for_each_segment(bvec, rq, iter) { 185 - __blk_segment_map_sg(q, bvec, sglist, &bvprv, &sg, 176 + __blk_segment_map_sg(q, &bvec, sglist, &bvprv, &sg, 186 177 &nsegs, &cluster); 187 178 } /* segments in rq */ 188 179 ··· 230 223 int blk_bio_map_sg(struct request_queue *q, struct bio *bio, 231 224 struct scatterlist *sglist) 232 225 { 233 - struct bio_vec *bvec, *bvprv; 226 + struct bio_vec bvec, bvprv = { NULL }; 234 227 struct scatterlist *sg; 235 228 int nsegs, cluster; 236 - unsigned long i; 229 + struct bvec_iter iter; 237 230 238 231 nsegs = 0; 239 232 cluster = blk_queue_cluster(q); 240 233 241 - bvprv = NULL; 242 234 sg = NULL; 243 - bio_for_each_segment(bvec, bio, i) { 244 - __blk_segment_map_sg(q, bvec, sglist, &bvprv, &sg, 235 + bio_for_each_segment(bvec, bio, iter) { 236 + __blk_segment_map_sg(q, &bvec, sglist, &bvprv, &sg, 245 237 &nsegs, &cluster); 246 238 } /* segments in bio */ 247 239 ··· 549 543 550 544 int blk_try_merge(struct request *rq, struct bio *bio) 551 545 { 552 - if (blk_rq_pos(rq) + blk_rq_sectors(rq) == bio->bi_sector) 546 + if (blk_rq_pos(rq) + blk_rq_sectors(rq) == bio->bi_iter.bi_sector) 553 547 return ELEVATOR_BACK_MERGE; 554 - else if (blk_rq_pos(rq) - bio_sectors(bio) == bio->bi_sector) 548 + else if (blk_rq_pos(rq) - bio_sectors(bio) == bio->bi_iter.bi_sector) 555 549 return ELEVATOR_FRONT_MERGE; 556 550 return ELEVATOR_NO_MERGE; 557 551 }
+1 -36
block/blk-mq-cpu.c
··· 28 28 return NOTIFY_OK; 29 29 } 30 30 31 - static void blk_mq_cpu_notify(void *data, unsigned long action, 32 - unsigned int cpu) 33 - { 34 - if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) { 35 - /* 36 - * If the CPU goes away, ensure that we run any pending 37 - * completions. 38 - */ 39 - struct llist_node *node; 40 - struct request *rq; 41 - 42 - local_irq_disable(); 43 - 44 - node = llist_del_all(&per_cpu(ipi_lists, cpu)); 45 - while (node) { 46 - struct llist_node *next = node->next; 47 - 48 - rq = llist_entry(node, struct request, ll_list); 49 - __blk_mq_end_io(rq, rq->errors); 50 - node = next; 51 - } 52 - 53 - local_irq_enable(); 54 - } 55 - } 56 - 57 - static struct notifier_block __cpuinitdata blk_mq_main_cpu_notifier = { 58 - .notifier_call = blk_mq_main_cpu_notify, 59 - }; 60 - 61 31 void blk_mq_register_cpu_notifier(struct blk_mq_cpu_notifier *notifier) 62 32 { 63 33 BUG_ON(!notifier->notify); ··· 52 82 notifier->data = data; 53 83 } 54 84 55 - static struct blk_mq_cpu_notifier __cpuinitdata cpu_notifier = { 56 - .notify = blk_mq_cpu_notify, 57 - }; 58 - 59 85 void __init blk_mq_cpu_init(void) 60 86 { 61 - register_hotcpu_notifier(&blk_mq_main_cpu_notifier); 62 - blk_mq_register_cpu_notifier(&cpu_notifier); 87 + hotcpu_notifier(blk_mq_main_cpu_notify, 0); 63 88 }
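For context, hotcpu_notifier() is a convenience wrapper: with CONFIG_HOTPLUG_CPU enabled it amounts to declaring a static notifier_block and registering it, roughly as sketched here (an approximation, not the exact macro expansion).

/* approximately what hotcpu_notifier(blk_mq_main_cpu_notify, 0) sets up */
static struct notifier_block blk_mq_main_cpu_nb = {
    .notifier_call = blk_mq_main_cpu_notify,
    .priority      = 0,
};

register_cpu_notifier(&blk_mq_main_cpu_nb);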
+44 -79
block/blk-mq.c
··· 27 27 28 28 static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx); 29 29 30 - DEFINE_PER_CPU(struct llist_head, ipi_lists); 31 - 32 30 static struct blk_mq_ctx *__blk_mq_get_ctx(struct request_queue *q, 33 31 unsigned int cpu) 34 32 { ··· 104 106 105 107 spin_lock_irq(q->queue_lock); 106 108 ret = wait_event_interruptible_lock_irq(q->mq_freeze_wq, 107 - !blk_queue_bypass(q), *q->queue_lock); 109 + !blk_queue_bypass(q) || blk_queue_dying(q), 110 + *q->queue_lock); 108 111 /* inc usage with lock hold to avoid freeze_queue runs here */ 109 - if (!ret) 112 + if (!ret && !blk_queue_dying(q)) 110 113 __percpu_counter_add(&q->mq_usage_counter, 1, 1000000); 114 + else if (blk_queue_dying(q)) 115 + ret = -ENODEV; 111 116 spin_unlock_irq(q->queue_lock); 112 117 113 118 return ret; ··· 119 118 static void blk_mq_queue_exit(struct request_queue *q) 120 119 { 121 120 __percpu_counter_add(&q->mq_usage_counter, -1, 1000000); 121 + } 122 + 123 + static void __blk_mq_drain_queue(struct request_queue *q) 124 + { 125 + while (true) { 126 + s64 count; 127 + 128 + spin_lock_irq(q->queue_lock); 129 + count = percpu_counter_sum(&q->mq_usage_counter); 130 + spin_unlock_irq(q->queue_lock); 131 + 132 + if (count == 0) 133 + break; 134 + blk_mq_run_queues(q, false); 135 + msleep(10); 136 + } 122 137 } 123 138 124 139 /* ··· 150 133 queue_flag_set(QUEUE_FLAG_BYPASS, q); 151 134 spin_unlock_irq(q->queue_lock); 152 135 153 - if (!drain) 154 - return; 136 + if (drain) 137 + __blk_mq_drain_queue(q); 138 + } 155 139 156 - while (true) { 157 - s64 count; 158 - 159 - spin_lock_irq(q->queue_lock); 160 - count = percpu_counter_sum(&q->mq_usage_counter); 161 - spin_unlock_irq(q->queue_lock); 162 - 163 - if (count == 0) 164 - break; 165 - blk_mq_run_queues(q, false); 166 - msleep(10); 167 - } 140 + void blk_mq_drain_queue(struct request_queue *q) 141 + { 142 + __blk_mq_drain_queue(q); 168 143 } 169 144 170 145 static void blk_mq_unfreeze_queue(struct request_queue *q) ··· 188 179 189 180 rq->mq_ctx = ctx; 190 181 rq->cmd_flags = rw_flags; 182 + rq->start_time = jiffies; 183 + set_start_time_ns(rq); 191 184 ctx->rq_dispatched[rw_is_sync(rw_flags)]++; 192 185 } 193 186 ··· 316 305 struct bio *next = bio->bi_next; 317 306 318 307 bio->bi_next = NULL; 319 - bytes += bio->bi_size; 308 + bytes += bio->bi_iter.bi_size; 320 309 blk_mq_bio_endio(rq, bio, error); 321 310 bio = next; 322 311 } ··· 337 326 blk_mq_complete_request(rq, error); 338 327 } 339 328 340 - #if defined(CONFIG_SMP) 341 - 342 - /* 343 - * Called with interrupts disabled. 344 - */ 345 - static void ipi_end_io(void *data) 329 + static void blk_mq_end_io_remote(void *data) 346 330 { 347 - struct llist_head *list = &per_cpu(ipi_lists, smp_processor_id()); 348 - struct llist_node *entry, *next; 349 - struct request *rq; 331 + struct request *rq = data; 350 332 351 - entry = llist_del_all(list); 352 - 353 - while (entry) { 354 - next = entry->next; 355 - rq = llist_entry(entry, struct request, ll_list); 356 - __blk_mq_end_io(rq, rq->errors); 357 - entry = next; 358 - } 333 + __blk_mq_end_io(rq, rq->errors); 359 334 } 360 - 361 - static int ipi_remote_cpu(struct blk_mq_ctx *ctx, const int cpu, 362 - struct request *rq, const int error) 363 - { 364 - struct call_single_data *data = &rq->csd; 365 - 366 - rq->errors = error; 367 - rq->ll_list.next = NULL; 368 - 369 - /* 370 - * If the list is non-empty, an existing IPI must already 371 - * be "in flight". If that is the case, we need not schedule 372 - * a new one. 
373 - */ 374 - if (llist_add(&rq->ll_list, &per_cpu(ipi_lists, ctx->cpu))) { 375 - data->func = ipi_end_io; 376 - data->flags = 0; 377 - __smp_call_function_single(ctx->cpu, data, 0); 378 - } 379 - 380 - return true; 381 - } 382 - #else /* CONFIG_SMP */ 383 - static int ipi_remote_cpu(struct blk_mq_ctx *ctx, const int cpu, 384 - struct request *rq, const int error) 385 - { 386 - return false; 387 - } 388 - #endif 389 335 390 336 /* 391 337 * End IO on this request on a multiqueue enabled driver. We'll either do ··· 358 390 return __blk_mq_end_io(rq, error); 359 391 360 392 cpu = get_cpu(); 361 - 362 - if (cpu == ctx->cpu || !cpu_online(ctx->cpu) || 363 - !ipi_remote_cpu(ctx, cpu, rq, error)) 393 + if (cpu != ctx->cpu && cpu_online(ctx->cpu)) { 394 + rq->errors = error; 395 + rq->csd.func = blk_mq_end_io_remote; 396 + rq->csd.info = rq; 397 + rq->csd.flags = 0; 398 + __smp_call_function_single(ctx->cpu, &rq->csd, 0); 399 + } else { 364 400 __blk_mq_end_io(rq, error); 365 - 401 + } 366 402 put_cpu(); 367 403 } 368 404 EXPORT_SYMBOL(blk_mq_end_io); ··· 1063 1091 struct page *page; 1064 1092 1065 1093 while (!list_empty(&hctx->page_list)) { 1066 - page = list_first_entry(&hctx->page_list, struct page, list); 1067 - list_del_init(&page->list); 1094 + page = list_first_entry(&hctx->page_list, struct page, lru); 1095 + list_del_init(&page->lru); 1068 1096 __free_pages(page, page->private); 1069 1097 } 1070 1098 ··· 1128 1156 break; 1129 1157 1130 1158 page->private = this_order; 1131 - list_add_tail(&page->list, &hctx->page_list); 1159 + list_add_tail(&page->lru, &hctx->page_list); 1132 1160 1133 1161 p = page_address(page); 1134 1162 entries_per_page = order_to_size(this_order) / rq_size; ··· 1401 1429 int i; 1402 1430 1403 1431 queue_for_each_hw_ctx(q, hctx, i) { 1404 - cancel_delayed_work_sync(&hctx->delayed_work); 1405 1432 kfree(hctx->ctx_map); 1406 1433 kfree(hctx->ctxs); 1407 1434 blk_mq_free_rq_map(hctx); ··· 1422 1451 list_del_init(&q->all_q_node); 1423 1452 mutex_unlock(&all_q_mutex); 1424 1453 } 1425 - EXPORT_SYMBOL(blk_mq_free_queue); 1426 1454 1427 1455 /* Basically redo blk_mq_init_queue with queue frozen */ 1428 1456 static void blk_mq_queue_reinit(struct request_queue *q) ··· 1465 1495 1466 1496 static int __init blk_mq_init(void) 1467 1497 { 1468 - unsigned int i; 1469 - 1470 - for_each_possible_cpu(i) 1471 - init_llist_head(&per_cpu(ipi_lists, i)); 1472 - 1473 1498 blk_mq_cpu_init(); 1474 1499 1475 1500 /* Must be called after percpu_counter_hotcpu_callback() */
+2 -1
block/blk-mq.h
··· 27 27 void blk_mq_run_request(struct request *rq, bool run_queue, bool async); 28 28 void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async); 29 29 void blk_mq_init_flush(struct request_queue *q); 30 + void blk_mq_drain_queue(struct request_queue *q); 31 + void blk_mq_free_queue(struct request_queue *q); 30 32 31 33 /* 32 34 * CPU hotplug helpers ··· 40 38 void blk_mq_register_cpu_notifier(struct blk_mq_cpu_notifier *notifier); 41 39 void blk_mq_unregister_cpu_notifier(struct blk_mq_cpu_notifier *notifier); 42 40 void blk_mq_cpu_init(void); 43 - DECLARE_PER_CPU(struct llist_head, ipi_lists); 44 41 45 42 /* 46 43 * CPU -> queue mappings
+1
block/blk-sysfs.c
··· 11 11 12 12 #include "blk.h" 13 13 #include "blk-cgroup.h" 14 + #include "blk-mq.h" 14 15 15 16 struct queue_sysfs_entry { 16 17 struct attribute attr;
+7 -7
block/blk-throttle.c
··· 877 877 do_div(tmp, HZ); 878 878 bytes_allowed = tmp; 879 879 880 - if (tg->bytes_disp[rw] + bio->bi_size <= bytes_allowed) { 880 + if (tg->bytes_disp[rw] + bio->bi_iter.bi_size <= bytes_allowed) { 881 881 if (wait) 882 882 *wait = 0; 883 883 return 1; 884 884 } 885 885 886 886 /* Calc approx time to dispatch */ 887 - extra_bytes = tg->bytes_disp[rw] + bio->bi_size - bytes_allowed; 887 + extra_bytes = tg->bytes_disp[rw] + bio->bi_iter.bi_size - bytes_allowed; 888 888 jiffy_wait = div64_u64(extra_bytes * HZ, tg->bps[rw]); 889 889 890 890 if (!jiffy_wait) ··· 987 987 bool rw = bio_data_dir(bio); 988 988 989 989 /* Charge the bio to the group */ 990 - tg->bytes_disp[rw] += bio->bi_size; 990 + tg->bytes_disp[rw] += bio->bi_iter.bi_size; 991 991 tg->io_disp[rw]++; 992 992 993 993 /* ··· 1003 1003 */ 1004 1004 if (!(bio->bi_rw & REQ_THROTTLED)) { 1005 1005 bio->bi_rw |= REQ_THROTTLED; 1006 - throtl_update_dispatch_stats(tg_to_blkg(tg), bio->bi_size, 1007 - bio->bi_rw); 1006 + throtl_update_dispatch_stats(tg_to_blkg(tg), 1007 + bio->bi_iter.bi_size, bio->bi_rw); 1008 1008 } 1009 1009 } 1010 1010 ··· 1503 1503 if (tg) { 1504 1504 if (!tg->has_rules[rw]) { 1505 1505 throtl_update_dispatch_stats(tg_to_blkg(tg), 1506 - bio->bi_size, bio->bi_rw); 1506 + bio->bi_iter.bi_size, bio->bi_rw); 1507 1507 goto out_unlock_rcu; 1508 1508 } 1509 1509 } ··· 1559 1559 /* out-of-limit, queue to @tg */ 1560 1560 throtl_log(sq, "[%c] bio. bdisp=%llu sz=%u bps=%llu iodisp=%u iops=%u queued=%d/%d", 1561 1561 rw == READ ? 'R' : 'W', 1562 - tg->bytes_disp[rw], bio->bi_size, tg->bps[rw], 1562 + tg->bytes_disp[rw], bio->bi_iter.bi_size, tg->bps[rw], 1563 1563 tg->io_disp[rw], tg->iops[rw], 1564 1564 sq->nr_queued[READ], sq->nr_queued[WRITE]); 1565 1565
+11 -7
block/cmdline-parser.c
··· 4 4 * Written by Cai Zhiyong <caizhiyong@huawei.com> 5 5 * 6 6 */ 7 - #include <linux/buffer_head.h> 8 - #include <linux/module.h> 7 + #include <linux/export.h> 9 8 #include <linux/cmdline-parser.h> 10 9 11 10 static int parse_subpart(struct cmdline_subpart **subpart, char *partdef) ··· 158 159 *parts = next_parts; 159 160 } 160 161 } 162 + EXPORT_SYMBOL(cmdline_parts_free); 161 163 162 164 int cmdline_parts_parse(struct cmdline_parts **parts, const char *cmdline) 163 165 { ··· 206 206 cmdline_parts_free(parts); 207 207 goto done; 208 208 } 209 + EXPORT_SYMBOL(cmdline_parts_parse); 209 210 210 211 struct cmdline_parts *cmdline_parts_find(struct cmdline_parts *parts, 211 212 const char *bdev) ··· 215 214 parts = parts->next_parts; 216 215 return parts; 217 216 } 217 + EXPORT_SYMBOL(cmdline_parts_find); 218 218 219 219 /* 220 220 * add_part() 221 221 * 0 success. 222 222 * 1 can not add so many partitions. 223 223 */ 224 - void cmdline_parts_set(struct cmdline_parts *parts, sector_t disk_size, 225 - int slot, 226 - int (*add_part)(int, struct cmdline_subpart *, void *), 227 - void *param) 228 - 224 + int cmdline_parts_set(struct cmdline_parts *parts, sector_t disk_size, 225 + int slot, 226 + int (*add_part)(int, struct cmdline_subpart *, void *), 227 + void *param) 229 228 { 230 229 sector_t from = 0; 231 230 struct cmdline_subpart *subpart; ··· 248 247 if (add_part(slot, subpart, param)) 249 248 break; 250 249 } 250 + 251 + return slot; 251 252 } 253 + EXPORT_SYMBOL(cmdline_parts_set);
+1 -1
block/elevator.c
··· 440 440 /* 441 441 * See if our hash lookup can find a potential backmerge. 442 442 */ 443 - __rq = elv_rqhash_find(q, bio->bi_sector); 443 + __rq = elv_rqhash_find(q, bio->bi_iter.bi_sector); 444 444 if (__rq && elv_rq_merge_ok(__rq, bio)) { 445 445 *req = __rq; 446 446 return ELEVATOR_BACK_MERGE;
+4 -2
block/scsi_ioctl.c
··· 323 323 324 324 if (hdr->iovec_count) { 325 325 size_t iov_data_len; 326 - struct iovec *iov; 326 + struct iovec *iov = NULL; 327 327 328 328 ret = rw_copy_check_uvector(-1, hdr->dxferp, hdr->iovec_count, 329 329 0, NULL, &iov); 330 - if (ret < 0) 330 + if (ret < 0) { 331 + kfree(iov); 331 332 goto out; 333 + } 332 334 333 335 iov_data_len = ret; 334 336 ret = 0;
+2 -8
drivers/block/aoe/aoe.h
··· 100 100 101 101 struct buf { 102 102 ulong nframesout; 103 - ulong resid; 104 - ulong bv_resid; 105 - sector_t sector; 106 103 struct bio *bio; 107 - struct bio_vec *bv; 104 + struct bvec_iter iter; 108 105 struct request *rq; 109 106 }; 110 107 ··· 117 120 ulong waited; 118 121 ulong waited_total; 119 122 struct aoetgt *t; /* parent target I belong to */ 120 - sector_t lba; 121 123 struct sk_buff *skb; /* command skb freed on module exit */ 122 124 struct sk_buff *r_skb; /* response skb for async processing */ 123 125 struct buf *buf; 124 - struct bio_vec *bv; 125 - ulong bcnt; 126 - ulong bv_off; 126 + struct bvec_iter iter; 127 127 char flags; 128 128 }; 129 129
+60 -93
drivers/block/aoe/aoecmd.c
··· 196 196 197 197 t = f->t; 198 198 f->buf = NULL; 199 - f->lba = 0; 200 - f->bv = NULL; 199 + memset(&f->iter, 0, sizeof(f->iter)); 201 200 f->r_skb = NULL; 202 201 f->flags = 0; 203 202 list_add(&f->head, &t->ffree); ··· 294 295 } 295 296 296 297 static void 297 - skb_fillup(struct sk_buff *skb, struct bio_vec *bv, ulong off, ulong cnt) 298 + skb_fillup(struct sk_buff *skb, struct bio *bio, struct bvec_iter iter) 298 299 { 299 300 int frag = 0; 300 - ulong fcnt; 301 - loop: 302 - fcnt = bv->bv_len - (off - bv->bv_offset); 303 - if (fcnt > cnt) 304 - fcnt = cnt; 305 - skb_fill_page_desc(skb, frag++, bv->bv_page, off, fcnt); 306 - cnt -= fcnt; 307 - if (cnt <= 0) 308 - return; 309 - bv++; 310 - off = bv->bv_offset; 311 - goto loop; 301 + struct bio_vec bv; 302 + 303 + __bio_for_each_segment(bv, bio, iter, iter) 304 + skb_fill_page_desc(skb, frag++, bv.bv_page, 305 + bv.bv_offset, bv.bv_len); 312 306 } 313 307 314 308 static void ··· 338 346 t->nout++; 339 347 f->waited = 0; 340 348 f->waited_total = 0; 341 - if (f->buf) 342 - f->lba = f->buf->sector; 343 349 344 350 /* set up ata header */ 345 - ah->scnt = f->bcnt >> 9; 346 - put_lba(ah, f->lba); 351 + ah->scnt = f->iter.bi_size >> 9; 352 + put_lba(ah, f->iter.bi_sector); 347 353 if (t->d->flags & DEVFL_EXT) { 348 354 ah->aflags |= AOEAFL_EXT; 349 355 } else { ··· 350 360 ah->lba3 |= 0xe0; /* LBA bit + obsolete 0xa0 */ 351 361 } 352 362 if (f->buf && bio_data_dir(f->buf->bio) == WRITE) { 353 - skb_fillup(skb, f->bv, f->bv_off, f->bcnt); 363 + skb_fillup(skb, f->buf->bio, f->iter); 354 364 ah->aflags |= AOEAFL_WRITE; 355 - skb->len += f->bcnt; 356 - skb->data_len = f->bcnt; 357 - skb->truesize += f->bcnt; 365 + skb->len += f->iter.bi_size; 366 + skb->data_len = f->iter.bi_size; 367 + skb->truesize += f->iter.bi_size; 358 368 t->wpkts++; 359 369 } else { 360 370 t->rpkts++; ··· 372 382 struct buf *buf; 373 383 struct sk_buff *skb; 374 384 struct sk_buff_head queue; 375 - ulong bcnt, fbcnt; 376 385 377 386 buf = nextbuf(d); 378 387 if (buf == NULL) ··· 379 390 f = newframe(d); 380 391 if (f == NULL) 381 392 return 0; 382 - bcnt = d->maxbcnt; 383 - if (bcnt == 0) 384 - bcnt = DEFAULTBCNT; 385 - if (bcnt > buf->resid) 386 - bcnt = buf->resid; 387 - fbcnt = bcnt; 388 - f->bv = buf->bv; 389 - f->bv_off = f->bv->bv_offset + (f->bv->bv_len - buf->bv_resid); 390 - do { 391 - if (fbcnt < buf->bv_resid) { 392 - buf->bv_resid -= fbcnt; 393 - buf->resid -= fbcnt; 394 - break; 395 - } 396 - fbcnt -= buf->bv_resid; 397 - buf->resid -= buf->bv_resid; 398 - if (buf->resid == 0) { 399 - d->ip.buf = NULL; 400 - break; 401 - } 402 - buf->bv++; 403 - buf->bv_resid = buf->bv->bv_len; 404 - WARN_ON(buf->bv_resid == 0); 405 - } while (fbcnt); 406 393 407 394 /* initialize the headers & frame */ 408 395 f->buf = buf; 409 - f->bcnt = bcnt; 410 - ata_rw_frameinit(f); 396 + f->iter = buf->iter; 397 + f->iter.bi_size = min_t(unsigned long, 398 + d->maxbcnt ?: DEFAULTBCNT, 399 + f->iter.bi_size); 400 + bio_advance_iter(buf->bio, &buf->iter, f->iter.bi_size); 401 + 402 + if (!buf->iter.bi_size) 403 + d->ip.buf = NULL; 411 404 412 405 /* mark all tracking fields and load out */ 413 406 buf->nframesout += 1; 414 - buf->sector += bcnt >> 9; 407 + 408 + ata_rw_frameinit(f); 415 409 416 410 skb = skb_clone(f->skb, GFP_ATOMIC); 417 411 if (skb) { ··· 585 613 skb = nf->skb; 586 614 nf->skb = f->skb; 587 615 nf->buf = f->buf; 588 - nf->bcnt = f->bcnt; 589 - nf->lba = f->lba; 590 - nf->bv = f->bv; 591 - nf->bv_off = f->bv_off; 616 + nf->iter = f->iter; 592 617 nf->waited = 0; 
593 618 nf->waited_total = f->waited_total; 594 619 nf->sent = f->sent; ··· 617 648 } 618 649 f->flags |= FFL_PROBE; 619 650 ifrotate(t); 620 - f->bcnt = t->d->maxbcnt ? t->d->maxbcnt : DEFAULTBCNT; 651 + f->iter.bi_size = t->d->maxbcnt ? t->d->maxbcnt : DEFAULTBCNT; 621 652 ata_rw_frameinit(f); 622 653 skb = f->skb; 623 - for (frag = 0, n = f->bcnt; n > 0; ++frag, n -= m) { 654 + for (frag = 0, n = f->iter.bi_size; n > 0; ++frag, n -= m) { 624 655 if (n < PAGE_SIZE) 625 656 m = n; 626 657 else 627 658 m = PAGE_SIZE; 628 659 skb_fill_page_desc(skb, frag, empty_page, 0, m); 629 660 } 630 - skb->len += f->bcnt; 631 - skb->data_len = f->bcnt; 632 - skb->truesize += f->bcnt; 661 + skb->len += f->iter.bi_size; 662 + skb->data_len = f->iter.bi_size; 663 + skb->truesize += f->iter.bi_size; 633 664 634 665 skb = skb_clone(f->skb, GFP_ATOMIC); 635 666 if (skb) { ··· 866 897 static void 867 898 bio_pageinc(struct bio *bio) 868 899 { 869 - struct bio_vec *bv; 900 + struct bio_vec bv; 870 901 struct page *page; 871 - int i; 902 + struct bvec_iter iter; 872 903 873 - bio_for_each_segment(bv, bio, i) { 904 + bio_for_each_segment(bv, bio, iter) { 874 905 /* Non-zero page count for non-head members of 875 906 * compound pages is no longer allowed by the kernel. 876 907 */ 877 - page = compound_trans_head(bv->bv_page); 908 + page = compound_trans_head(bv.bv_page); 878 909 atomic_inc(&page->_count); 879 910 } 880 911 } ··· 882 913 static void 883 914 bio_pagedec(struct bio *bio) 884 915 { 885 - struct bio_vec *bv; 886 916 struct page *page; 887 - int i; 917 + struct bio_vec bv; 918 + struct bvec_iter iter; 888 919 889 - bio_for_each_segment(bv, bio, i) { 890 - page = compound_trans_head(bv->bv_page); 920 + bio_for_each_segment(bv, bio, iter) { 921 + page = compound_trans_head(bv.bv_page); 891 922 atomic_dec(&page->_count); 892 923 } 893 924 } ··· 898 929 memset(buf, 0, sizeof(*buf)); 899 930 buf->rq = rq; 900 931 buf->bio = bio; 901 - buf->resid = bio->bi_size; 902 - buf->sector = bio->bi_sector; 932 + buf->iter = bio->bi_iter; 903 933 bio_pageinc(bio); 904 - buf->bv = bio_iovec(bio); 905 - buf->bv_resid = buf->bv->bv_len; 906 - WARN_ON(buf->bv_resid == 0); 907 934 } 908 935 909 936 static struct buf * ··· 1084 1119 } 1085 1120 1086 1121 static void 1087 - bvcpy(struct bio_vec *bv, ulong off, struct sk_buff *skb, long cnt) 1122 + bvcpy(struct sk_buff *skb, struct bio *bio, struct bvec_iter iter, long cnt) 1088 1123 { 1089 - ulong fcnt; 1090 - char *p; 1091 1124 int soff = 0; 1092 - loop: 1093 - fcnt = bv->bv_len - (off - bv->bv_offset); 1094 - if (fcnt > cnt) 1095 - fcnt = cnt; 1096 - p = page_address(bv->bv_page) + off; 1097 - skb_copy_bits(skb, soff, p, fcnt); 1098 - soff += fcnt; 1099 - cnt -= fcnt; 1100 - if (cnt <= 0) 1101 - return; 1102 - bv++; 1103 - off = bv->bv_offset; 1104 - goto loop; 1125 + struct bio_vec bv; 1126 + 1127 + iter.bi_size = cnt; 1128 + 1129 + __bio_for_each_segment(bv, bio, iter, iter) { 1130 + char *p = page_address(bv.bv_page) + bv.bv_offset; 1131 + skb_copy_bits(skb, soff, p, bv.bv_len); 1132 + soff += bv.bv_len; 1133 + } 1105 1134 } 1106 1135 1107 1136 void ··· 1111 1152 do { 1112 1153 bio = rq->bio; 1113 1154 bok = !fastfail && test_bit(BIO_UPTODATE, &bio->bi_flags); 1114 - } while (__blk_end_request(rq, bok ? 0 : -EIO, bio->bi_size)); 1155 + } while (__blk_end_request(rq, bok ? 0 : -EIO, bio->bi_iter.bi_size)); 1115 1156 1116 1157 /* cf. 
http://lkml.org/lkml/2006/10/31/28 */ 1117 1158 if (!fastfail) ··· 1188 1229 clear_bit(BIO_UPTODATE, &buf->bio->bi_flags); 1189 1230 break; 1190 1231 } 1191 - bvcpy(f->bv, f->bv_off, skb, n); 1232 + if (n > f->iter.bi_size) { 1233 + pr_err_ratelimited("%s e%ld.%d. bytes=%ld need=%u\n", 1234 + "aoe: too-large data size in read from", 1235 + (long) d->aoemajor, d->aoeminor, 1236 + n, f->iter.bi_size); 1237 + clear_bit(BIO_UPTODATE, &buf->bio->bi_flags); 1238 + break; 1239 + } 1240 + bvcpy(skb, f->buf->bio, f->iter, n); 1192 1241 case ATA_CMD_PIO_WRITE: 1193 1242 case ATA_CMD_PIO_WRITE_EXT: 1194 1243 spin_lock_irq(&d->lock); ··· 1239 1272 1240 1273 aoe_freetframe(f); 1241 1274 1242 - if (buf && --buf->nframesout == 0 && buf->resid == 0) 1275 + if (buf && --buf->nframesout == 0 && buf->iter.bi_size == 0) 1243 1276 aoe_end_buf(d, buf); 1244 1277 1245 1278 spin_unlock_irq(&d->lock); ··· 1694 1727 { 1695 1728 if (buf == NULL) 1696 1729 return; 1697 - buf->resid = 0; 1730 + buf->iter.bi_size = 0; 1698 1731 clear_bit(BIO_UPTODATE, &buf->bio->bi_flags); 1699 1732 if (buf->nframesout == 0) 1700 1733 aoe_end_buf(d, buf);
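The aoe conversion above repeatedly uses one idiom worth spelling out: carve a per-frame range out of the request bio by copying its iterator, clamping bi_size, and advancing the parent iterator past that chunk. A sketch follows, reusing the driver's f/buf fields but with hypothetical MAX_FRAME_BYTES and fill_frame() names.

/* give this frame at most MAX_FRAME_BYTES of the remaining bio */
f->iter = buf->iter;
f->iter.bi_size = min_t(unsigned int, MAX_FRAME_BYTES, f->iter.bi_size);
bio_advance_iter(buf->bio, &buf->iter, f->iter.bi_size);

/* later, walk only the carved-out range */
{
    struct bio_vec bv;
    struct bvec_iter it;

    __bio_for_each_segment(bv, buf->bio, it, f->iter)
        fill_frame(f, bv.bv_page, bv.bv_offset, bv.bv_len);
}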
+8 -8
drivers/block/brd.c
··· 328 328 struct block_device *bdev = bio->bi_bdev; 329 329 struct brd_device *brd = bdev->bd_disk->private_data; 330 330 int rw; 331 - struct bio_vec *bvec; 331 + struct bio_vec bvec; 332 332 sector_t sector; 333 - int i; 333 + struct bvec_iter iter; 334 334 int err = -EIO; 335 335 336 - sector = bio->bi_sector; 336 + sector = bio->bi_iter.bi_sector; 337 337 if (bio_end_sector(bio) > get_capacity(bdev->bd_disk)) 338 338 goto out; 339 339 340 340 if (unlikely(bio->bi_rw & REQ_DISCARD)) { 341 341 err = 0; 342 - discard_from_brd(brd, sector, bio->bi_size); 342 + discard_from_brd(brd, sector, bio->bi_iter.bi_size); 343 343 goto out; 344 344 } 345 345 ··· 347 347 if (rw == READA) 348 348 rw = READ; 349 349 350 - bio_for_each_segment(bvec, bio, i) { 351 - unsigned int len = bvec->bv_len; 352 - err = brd_do_bvec(brd, bvec->bv_page, len, 353 - bvec->bv_offset, rw, sector); 350 + bio_for_each_segment(bvec, bio, iter) { 351 + unsigned int len = bvec.bv_len; 352 + err = brd_do_bvec(brd, bvec.bv_page, len, 353 + bvec.bv_offset, rw, sector); 354 354 if (err) 355 355 break; 356 356 sector += len >> SECTOR_SHIFT;
+1 -1
drivers/block/drbd/drbd_actlog.c
··· 159 159 160 160 bio = bio_alloc_drbd(GFP_NOIO); 161 161 bio->bi_bdev = bdev->md_bdev; 162 - bio->bi_sector = sector; 162 + bio->bi_iter.bi_sector = sector; 163 163 err = -EIO; 164 164 if (bio_add_page(bio, page, size, 0) != size) 165 165 goto out;
+1 -1
drivers/block/drbd/drbd_bitmap.c
··· 1028 1028 } else 1029 1029 page = b->bm_pages[page_nr]; 1030 1030 bio->bi_bdev = mdev->ldev->md_bdev; 1031 - bio->bi_sector = on_disk_sector; 1031 + bio->bi_iter.bi_sector = on_disk_sector; 1032 1032 /* bio_add_page of a single page to an empty bio will always succeed, 1033 1033 * according to api. Do we want to assert that? */ 1034 1034 bio_add_page(bio, page, len, 0);
+15 -12
drivers/block/drbd/drbd_main.c
··· 1537 1537 1538 1538 static int _drbd_send_bio(struct drbd_conf *mdev, struct bio *bio) 1539 1539 { 1540 - struct bio_vec *bvec; 1541 - int i; 1540 + struct bio_vec bvec; 1541 + struct bvec_iter iter; 1542 + 1542 1543 /* hint all but last page with MSG_MORE */ 1543 - bio_for_each_segment(bvec, bio, i) { 1544 + bio_for_each_segment(bvec, bio, iter) { 1544 1545 int err; 1545 1546 1546 - err = _drbd_no_send_page(mdev, bvec->bv_page, 1547 - bvec->bv_offset, bvec->bv_len, 1548 - i == bio->bi_vcnt - 1 ? 0 : MSG_MORE); 1547 + err = _drbd_no_send_page(mdev, bvec.bv_page, 1548 + bvec.bv_offset, bvec.bv_len, 1549 + bio_iter_last(bvec, iter) 1550 + ? 0 : MSG_MORE); 1549 1551 if (err) 1550 1552 return err; 1551 1553 } ··· 1556 1554 1557 1555 static int _drbd_send_zc_bio(struct drbd_conf *mdev, struct bio *bio) 1558 1556 { 1559 - struct bio_vec *bvec; 1560 - int i; 1557 + struct bio_vec bvec; 1558 + struct bvec_iter iter; 1559 + 1561 1560 /* hint all but last page with MSG_MORE */ 1562 - bio_for_each_segment(bvec, bio, i) { 1561 + bio_for_each_segment(bvec, bio, iter) { 1563 1562 int err; 1564 1563 1565 - err = _drbd_send_page(mdev, bvec->bv_page, 1566 - bvec->bv_offset, bvec->bv_len, 1567 - i == bio->bi_vcnt - 1 ? 0 : MSG_MORE); 1564 + err = _drbd_send_page(mdev, bvec.bv_page, 1565 + bvec.bv_offset, bvec.bv_len, 1566 + bio_iter_last(bvec, iter) ? 0 : MSG_MORE); 1568 1567 if (err) 1569 1568 return err; 1570 1569 }
+10 -9
drivers/block/drbd/drbd_receiver.c
··· 1333 1333 goto fail; 1334 1334 } 1335 1335 /* > peer_req->i.sector, unless this is the first bio */ 1336 - bio->bi_sector = sector; 1336 + bio->bi_iter.bi_sector = sector; 1337 1337 bio->bi_bdev = mdev->ldev->backing_bdev; 1338 1338 bio->bi_rw = rw; 1339 1339 bio->bi_private = peer_req; ··· 1353 1353 dev_err(DEV, 1354 1354 "bio_add_page failed for len=%u, " 1355 1355 "bi_vcnt=0 (bi_sector=%llu)\n", 1356 - len, (unsigned long long)bio->bi_sector); 1356 + len, (uint64_t)bio->bi_iter.bi_sector); 1357 1357 err = -ENOSPC; 1358 1358 goto fail; 1359 1359 } ··· 1595 1595 static int recv_dless_read(struct drbd_conf *mdev, struct drbd_request *req, 1596 1596 sector_t sector, int data_size) 1597 1597 { 1598 - struct bio_vec *bvec; 1598 + struct bio_vec bvec; 1599 + struct bvec_iter iter; 1599 1600 struct bio *bio; 1600 - int dgs, err, i, expect; 1601 + int dgs, err, expect; 1601 1602 void *dig_in = mdev->tconn->int_dig_in; 1602 1603 void *dig_vv = mdev->tconn->int_dig_vv; 1603 1604 ··· 1616 1615 mdev->recv_cnt += data_size>>9; 1617 1616 1618 1617 bio = req->master_bio; 1619 - D_ASSERT(sector == bio->bi_sector); 1618 + D_ASSERT(sector == bio->bi_iter.bi_sector); 1620 1619 1621 - bio_for_each_segment(bvec, bio, i) { 1622 - void *mapped = kmap(bvec->bv_page) + bvec->bv_offset; 1623 - expect = min_t(int, data_size, bvec->bv_len); 1620 + bio_for_each_segment(bvec, bio, iter) { 1621 + void *mapped = kmap(bvec.bv_page) + bvec.bv_offset; 1622 + expect = min_t(int, data_size, bvec.bv_len); 1624 1623 err = drbd_recv_all_warn(mdev->tconn, mapped, expect); 1625 - kunmap(bvec->bv_page); 1624 + kunmap(bvec.bv_page); 1626 1625 if (err) 1627 1626 return err; 1628 1627 data_size -= expect;
+3 -3
drivers/block/drbd/drbd_req.c
··· 77 77 req->epoch = 0; 78 78 79 79 drbd_clear_interval(&req->i); 80 - req->i.sector = bio_src->bi_sector; 81 - req->i.size = bio_src->bi_size; 80 + req->i.sector = bio_src->bi_iter.bi_sector; 81 + req->i.size = bio_src->bi_iter.bi_size; 82 82 req->i.local = true; 83 83 req->i.waiting = false; 84 84 ··· 1280 1280 /* 1281 1281 * what we "blindly" assume: 1282 1282 */ 1283 - D_ASSERT(IS_ALIGNED(bio->bi_size, 512)); 1283 + D_ASSERT(IS_ALIGNED(bio->bi_iter.bi_size, 512)); 1284 1284 1285 1285 inc_ap_bio(mdev); 1286 1286 __drbd_make_request(mdev, bio, start_time);
+1 -1
drivers/block/drbd/drbd_req.h
··· 269 269 270 270 /* Short lived temporary struct on the stack. 271 271 * We could squirrel the error to be returned into 272 - * bio->bi_size, or similar. But that would be too ugly. */ 272 + * bio->bi_iter.bi_size, or similar. But that would be too ugly. */ 273 273 struct bio_and_error { 274 274 struct bio *bio; 275 275 int error;
+4 -4
drivers/block/drbd/drbd_worker.c
··· 313 313 { 314 314 struct hash_desc desc; 315 315 struct scatterlist sg; 316 - struct bio_vec *bvec; 317 - int i; 316 + struct bio_vec bvec; 317 + struct bvec_iter iter; 318 318 319 319 desc.tfm = tfm; 320 320 desc.flags = 0; ··· 322 322 sg_init_table(&sg, 1); 323 323 crypto_hash_init(&desc); 324 324 325 - bio_for_each_segment(bvec, bio, i) { 326 - sg_set_page(&sg, bvec->bv_page, bvec->bv_len, bvec->bv_offset); 325 + bio_for_each_segment(bvec, bio, iter) { 326 + sg_set_page(&sg, bvec.bv_page, bvec.bv_len, bvec.bv_offset); 327 327 crypto_hash_update(&desc, &sg, sg.length); 328 328 } 329 329 crypto_hash_final(&desc, digest);
+8 -8
drivers/block/floppy.c
··· 2351 2351 /* Compute maximal contiguous buffer size. */ 2352 2352 static int buffer_chain_size(void) 2353 2353 { 2354 - struct bio_vec *bv; 2354 + struct bio_vec bv; 2355 2355 int size; 2356 2356 struct req_iterator iter; 2357 2357 char *base; ··· 2360 2360 size = 0; 2361 2361 2362 2362 rq_for_each_segment(bv, current_req, iter) { 2363 - if (page_address(bv->bv_page) + bv->bv_offset != base + size) 2363 + if (page_address(bv.bv_page) + bv.bv_offset != base + size) 2364 2364 break; 2365 2365 2366 - size += bv->bv_len; 2366 + size += bv.bv_len; 2367 2367 } 2368 2368 2369 2369 return size >> 9; ··· 2389 2389 static void copy_buffer(int ssize, int max_sector, int max_sector_2) 2390 2390 { 2391 2391 int remaining; /* number of transferred 512-byte sectors */ 2392 - struct bio_vec *bv; 2392 + struct bio_vec bv; 2393 2393 char *buffer; 2394 2394 char *dma_buffer; 2395 2395 int size; ··· 2427 2427 if (!remaining) 2428 2428 break; 2429 2429 2430 - size = bv->bv_len; 2430 + size = bv.bv_len; 2431 2431 SUPBOUND(size, remaining); 2432 2432 2433 - buffer = page_address(bv->bv_page) + bv->bv_offset; 2433 + buffer = page_address(bv.bv_page) + bv.bv_offset; 2434 2434 if (dma_buffer + size > 2435 2435 floppy_track_buffer + (max_buffer_sectors << 10) || 2436 2436 dma_buffer < floppy_track_buffer) { ··· 3775 3775 bio_vec.bv_len = size; 3776 3776 bio_vec.bv_offset = 0; 3777 3777 bio.bi_vcnt = 1; 3778 - bio.bi_size = size; 3778 + bio.bi_iter.bi_size = size; 3779 3779 bio.bi_bdev = bdev; 3780 - bio.bi_sector = 0; 3780 + bio.bi_iter.bi_sector = 0; 3781 3781 bio.bi_flags = (1 << BIO_QUIET); 3782 3782 init_completion(&complete); 3783 3783 bio.bi_private = &complete;
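floppy.c shows the same conversion at the request level: rq_for_each_segment() also yields struct bio_vec by value now, and the on-stack bio in the format path sets bi_iter.bi_size and bi_iter.bi_sector instead of the old fields. A hedged sketch of the common bounce-buffer copy done with the by-value iterator (assumes lowmem pages, as floppy does):

    /* Copy a request's payload into a driver bounce buffer. */
    static void copy_req_to_bounce(struct request *req, char *bounce)
    {
            struct req_iterator iter;
            struct bio_vec bv;              /* by value since 3.14 */
            size_t off = 0;

            rq_for_each_segment(bv, req, iter) {
                    void *p = page_address(bv.bv_page) + bv.bv_offset;

                    memcpy(bounce + off, p, bv.bv_len);
                    off += bv.bv_len;
            }
    }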
+14 -13
drivers/block/loop.c
··· 288 288 { 289 289 int (*do_lo_send)(struct loop_device *, struct bio_vec *, loff_t, 290 290 struct page *page); 291 - struct bio_vec *bvec; 291 + struct bio_vec bvec; 292 + struct bvec_iter iter; 292 293 struct page *page = NULL; 293 - int i, ret = 0; 294 + int ret = 0; 294 295 295 296 if (lo->transfer != transfer_none) { 296 297 page = alloc_page(GFP_NOIO | __GFP_HIGHMEM); ··· 303 302 do_lo_send = do_lo_send_direct_write; 304 303 } 305 304 306 - bio_for_each_segment(bvec, bio, i) { 307 - ret = do_lo_send(lo, bvec, pos, page); 305 + bio_for_each_segment(bvec, bio, iter) { 306 + ret = do_lo_send(lo, &bvec, pos, page); 308 307 if (ret < 0) 309 308 break; 310 - pos += bvec->bv_len; 309 + pos += bvec.bv_len; 311 310 } 312 311 if (page) { 313 312 kunmap(page); ··· 393 392 static int 394 393 lo_receive(struct loop_device *lo, struct bio *bio, int bsize, loff_t pos) 395 394 { 396 - struct bio_vec *bvec; 395 + struct bio_vec bvec; 396 + struct bvec_iter iter; 397 397 ssize_t s; 398 - int i; 399 398 400 - bio_for_each_segment(bvec, bio, i) { 401 - s = do_lo_receive(lo, bvec, bsize, pos); 399 + bio_for_each_segment(bvec, bio, iter) { 400 + s = do_lo_receive(lo, &bvec, bsize, pos); 402 401 if (s < 0) 403 402 return s; 404 403 405 - if (s != bvec->bv_len) { 404 + if (s != bvec.bv_len) { 406 405 zero_fill_bio(bio); 407 406 break; 408 407 } 409 - pos += bvec->bv_len; 408 + pos += bvec.bv_len; 410 409 } 411 410 return 0; 412 411 } ··· 416 415 loff_t pos; 417 416 int ret; 418 417 419 - pos = ((loff_t) bio->bi_sector << 9) + lo->lo_offset; 418 + pos = ((loff_t) bio->bi_iter.bi_sector << 9) + lo->lo_offset; 420 419 421 420 if (bio_rw(bio) == WRITE) { 422 421 struct file *file = lo->lo_backing_file; ··· 445 444 goto out; 446 445 } 447 446 ret = file->f_op->fallocate(file, mode, pos, 448 - bio->bi_size); 447 + bio->bi_iter.bi_size); 449 448 if (unlikely(ret && ret != -EINVAL && 450 449 ret != -EOPNOTSUPP)) 451 450 ret = -EIO;
+11 -9
drivers/block/mtip32xx/mtip32xx.c
··· 3962 3962 { 3963 3963 struct driver_data *dd = queue->queuedata; 3964 3964 struct scatterlist *sg; 3965 - struct bio_vec *bvec; 3966 - int i, nents = 0; 3965 + struct bio_vec bvec; 3966 + struct bvec_iter iter; 3967 + int nents = 0; 3967 3968 int tag = 0, unaligned = 0; 3968 3969 3969 3970 if (unlikely(dd->dd_flag & MTIP_DDF_STOP_IO)) { ··· 3994 3993 } 3995 3994 3996 3995 if (unlikely(bio->bi_rw & REQ_DISCARD)) { 3997 - bio_endio(bio, mtip_send_trim(dd, bio->bi_sector, 3996 + bio_endio(bio, mtip_send_trim(dd, bio->bi_iter.bi_sector, 3998 3997 bio_sectors(bio))); 3999 3998 return; 4000 3999 } ··· 4007 4006 4008 4007 if (bio_data_dir(bio) == WRITE && bio_sectors(bio) <= 64 && 4009 4008 dd->unal_qdepth) { 4010 - if (bio->bi_sector % 8 != 0) /* Unaligned on 4k boundaries */ 4009 + if (bio->bi_iter.bi_sector % 8 != 0) 4010 + /* Unaligned on 4k boundaries */ 4011 4011 unaligned = 1; 4012 4012 else if (bio_sectors(bio) % 8 != 0) /* Aligned but not 4k/8k */ 4013 4013 unaligned = 1; ··· 4027 4025 } 4028 4026 4029 4027 /* Create the scatter list for this bio. */ 4030 - bio_for_each_segment(bvec, bio, i) { 4028 + bio_for_each_segment(bvec, bio, iter) { 4031 4029 sg_set_page(&sg[nents], 4032 - bvec->bv_page, 4033 - bvec->bv_len, 4034 - bvec->bv_offset); 4030 + bvec.bv_page, 4031 + bvec.bv_len, 4032 + bvec.bv_offset); 4035 4033 nents++; 4036 4034 } 4037 4035 4038 4036 /* Issue the read/write. */ 4039 4037 mtip_hw_submit_io(dd, 4040 - bio->bi_sector, 4038 + bio->bi_iter.bi_sector, 4041 4039 bio_sectors(bio), 4042 4040 nents, 4043 4041 tag,
+7 -7
drivers/block/nbd.c
··· 271 271 272 272 if (nbd_cmd(req) == NBD_CMD_WRITE) { 273 273 struct req_iterator iter; 274 - struct bio_vec *bvec; 274 + struct bio_vec bvec; 275 275 /* 276 276 * we are really probing at internals to determine 277 277 * whether to set MSG_MORE or not... 278 278 */ 279 279 rq_for_each_segment(bvec, req, iter) { 280 280 flags = 0; 281 - if (!rq_iter_last(req, iter)) 281 + if (!rq_iter_last(bvec, iter)) 282 282 flags = MSG_MORE; 283 283 dprintk(DBG_TX, "%s: request %p: sending %d bytes data\n", 284 - nbd->disk->disk_name, req, bvec->bv_len); 285 - result = sock_send_bvec(nbd, bvec, flags); 284 + nbd->disk->disk_name, req, bvec.bv_len); 285 + result = sock_send_bvec(nbd, &bvec, flags); 286 286 if (result <= 0) { 287 287 dev_err(disk_to_dev(nbd->disk), 288 288 "Send data failed (result %d)\n", ··· 378 378 nbd->disk->disk_name, req); 379 379 if (nbd_cmd(req) == NBD_CMD_READ) { 380 380 struct req_iterator iter; 381 - struct bio_vec *bvec; 381 + struct bio_vec bvec; 382 382 383 383 rq_for_each_segment(bvec, req, iter) { 384 - result = sock_recv_bvec(nbd, bvec); 384 + result = sock_recv_bvec(nbd, &bvec); 385 385 if (result <= 0) { 386 386 dev_err(disk_to_dev(nbd->disk), "Receive data failed (result %d)\n", 387 387 result); ··· 389 389 return req; 390 390 } 391 391 dprintk(DBG_RX, "%s: request %p: got %d bytes data\n", 392 - nbd->disk->disk_name, req, bvec->bv_len); 392 + nbd->disk->disk_name, req, bvec.bv_len); 393 393 } 394 394 } 395 395 return req;
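nbd needs to know when it is on the very last segment of a request so it can drop MSG_MORE; that test survives, but rq_iter_last() changed arguments from (rq, iter) to (bvec, iter). A short sketch (send_segment() stands in for the driver's socket helper; nbd itself still passes &bvec to its sock_send_bvec(), which takes a pointer):

    static int send_request_payload(struct request *req)
    {
            struct req_iterator iter;
            struct bio_vec bvec;
            int err = 0;

            rq_for_each_segment(bvec, req, iter) {
                    int flags = rq_iter_last(bvec, iter) ? 0 : MSG_MORE;

                    err = send_segment(bvec.bv_page, bvec.bv_offset,
                                       bvec.bv_len, flags);
                    if (err)
                            break;
            }
            return err;
    }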
+30 -112
drivers/block/nvme-core.c
··· 441 441 return total_len; 442 442 } 443 443 444 - struct nvme_bio_pair { 445 - struct bio b1, b2, *parent; 446 - struct bio_vec *bv1, *bv2; 447 - int err; 448 - atomic_t cnt; 449 - }; 450 - 451 - static void nvme_bio_pair_endio(struct bio *bio, int err) 452 - { 453 - struct nvme_bio_pair *bp = bio->bi_private; 454 - 455 - if (err) 456 - bp->err = err; 457 - 458 - if (atomic_dec_and_test(&bp->cnt)) { 459 - bio_endio(bp->parent, bp->err); 460 - kfree(bp->bv1); 461 - kfree(bp->bv2); 462 - kfree(bp); 463 - } 464 - } 465 - 466 - static struct nvme_bio_pair *nvme_bio_split(struct bio *bio, int idx, 467 - int len, int offset) 468 - { 469 - struct nvme_bio_pair *bp; 470 - 471 - BUG_ON(len > bio->bi_size); 472 - BUG_ON(idx > bio->bi_vcnt); 473 - 474 - bp = kmalloc(sizeof(*bp), GFP_ATOMIC); 475 - if (!bp) 476 - return NULL; 477 - bp->err = 0; 478 - 479 - bp->b1 = *bio; 480 - bp->b2 = *bio; 481 - 482 - bp->b1.bi_size = len; 483 - bp->b2.bi_size -= len; 484 - bp->b1.bi_vcnt = idx; 485 - bp->b2.bi_idx = idx; 486 - bp->b2.bi_sector += len >> 9; 487 - 488 - if (offset) { 489 - bp->bv1 = kmalloc(bio->bi_max_vecs * sizeof(struct bio_vec), 490 - GFP_ATOMIC); 491 - if (!bp->bv1) 492 - goto split_fail_1; 493 - 494 - bp->bv2 = kmalloc(bio->bi_max_vecs * sizeof(struct bio_vec), 495 - GFP_ATOMIC); 496 - if (!bp->bv2) 497 - goto split_fail_2; 498 - 499 - memcpy(bp->bv1, bio->bi_io_vec, 500 - bio->bi_max_vecs * sizeof(struct bio_vec)); 501 - memcpy(bp->bv2, bio->bi_io_vec, 502 - bio->bi_max_vecs * sizeof(struct bio_vec)); 503 - 504 - bp->b1.bi_io_vec = bp->bv1; 505 - bp->b2.bi_io_vec = bp->bv2; 506 - bp->b2.bi_io_vec[idx].bv_offset += offset; 507 - bp->b2.bi_io_vec[idx].bv_len -= offset; 508 - bp->b1.bi_io_vec[idx].bv_len = offset; 509 - bp->b1.bi_vcnt++; 510 - } else 511 - bp->bv1 = bp->bv2 = NULL; 512 - 513 - bp->b1.bi_private = bp; 514 - bp->b2.bi_private = bp; 515 - 516 - bp->b1.bi_end_io = nvme_bio_pair_endio; 517 - bp->b2.bi_end_io = nvme_bio_pair_endio; 518 - 519 - bp->parent = bio; 520 - atomic_set(&bp->cnt, 2); 521 - 522 - return bp; 523 - 524 - split_fail_2: 525 - kfree(bp->bv1); 526 - split_fail_1: 527 - kfree(bp); 528 - return NULL; 529 - } 530 - 531 444 static int nvme_split_and_submit(struct bio *bio, struct nvme_queue *nvmeq, 532 - int idx, int len, int offset) 445 + int len) 533 446 { 534 - struct nvme_bio_pair *bp = nvme_bio_split(bio, idx, len, offset); 535 - if (!bp) 447 + struct bio *split = bio_split(bio, len >> 9, GFP_ATOMIC, NULL); 448 + if (!split) 536 449 return -ENOMEM; 450 + 451 + bio_chain(split, bio); 537 452 538 453 if (bio_list_empty(&nvmeq->sq_cong)) 539 454 add_wait_queue(&nvmeq->sq_full, &nvmeq->sq_cong_wait); 540 - bio_list_add(&nvmeq->sq_cong, &bp->b1); 541 - bio_list_add(&nvmeq->sq_cong, &bp->b2); 455 + bio_list_add(&nvmeq->sq_cong, split); 456 + bio_list_add(&nvmeq->sq_cong, bio); 542 457 543 458 return 0; 544 459 } ··· 465 550 static int nvme_map_bio(struct nvme_queue *nvmeq, struct nvme_iod *iod, 466 551 struct bio *bio, enum dma_data_direction dma_dir, int psegs) 467 552 { 468 - struct bio_vec *bvec, *bvprv = NULL; 553 + struct bio_vec bvec, bvprv; 554 + struct bvec_iter iter; 469 555 struct scatterlist *sg = NULL; 470 - int i, length = 0, nsegs = 0, split_len = bio->bi_size; 556 + int length = 0, nsegs = 0, split_len = bio->bi_iter.bi_size; 557 + int first = 1; 471 558 472 559 if (nvmeq->dev->stripe_size) 473 560 split_len = nvmeq->dev->stripe_size - 474 - ((bio->bi_sector << 9) & (nvmeq->dev->stripe_size - 1)); 561 + ((bio->bi_iter.bi_sector << 9) & 562 + 
(nvmeq->dev->stripe_size - 1)); 475 563 476 564 sg_init_table(iod->sg, psegs); 477 - bio_for_each_segment(bvec, bio, i) { 478 - if (bvprv && BIOVEC_PHYS_MERGEABLE(bvprv, bvec)) { 479 - sg->length += bvec->bv_len; 565 + bio_for_each_segment(bvec, bio, iter) { 566 + if (!first && BIOVEC_PHYS_MERGEABLE(&bvprv, &bvec)) { 567 + sg->length += bvec.bv_len; 480 568 } else { 481 - if (bvprv && BIOVEC_NOT_VIRT_MERGEABLE(bvprv, bvec)) 482 - return nvme_split_and_submit(bio, nvmeq, i, 483 - length, 0); 569 + if (!first && BIOVEC_NOT_VIRT_MERGEABLE(&bvprv, &bvec)) 570 + return nvme_split_and_submit(bio, nvmeq, 571 + length); 484 572 485 573 sg = sg ? sg + 1 : iod->sg; 486 - sg_set_page(sg, bvec->bv_page, bvec->bv_len, 487 - bvec->bv_offset); 574 + sg_set_page(sg, bvec.bv_page, 575 + bvec.bv_len, bvec.bv_offset); 488 576 nsegs++; 489 577 } 490 578 491 - if (split_len - length < bvec->bv_len) 492 - return nvme_split_and_submit(bio, nvmeq, i, split_len, 493 - split_len - length); 494 - length += bvec->bv_len; 579 + if (split_len - length < bvec.bv_len) 580 + return nvme_split_and_submit(bio, nvmeq, split_len); 581 + length += bvec.bv_len; 495 582 bvprv = bvec; 583 + first = 0; 496 584 } 497 585 iod->nents = nsegs; 498 586 sg_mark_end(sg); 499 587 if (dma_map_sg(nvmeq->q_dmadev, iod->sg, iod->nents, dma_dir) == 0) 500 588 return -ENOMEM; 501 589 502 - BUG_ON(length != bio->bi_size); 590 + BUG_ON(length != bio->bi_iter.bi_size); 503 591 return length; 504 592 } 505 593 ··· 526 608 iod->npages = 0; 527 609 528 610 range->cattr = cpu_to_le32(0); 529 - range->nlb = cpu_to_le32(bio->bi_size >> ns->lba_shift); 530 - range->slba = cpu_to_le64(nvme_block_nr(ns, bio->bi_sector)); 611 + range->nlb = cpu_to_le32(bio->bi_iter.bi_size >> ns->lba_shift); 612 + range->slba = cpu_to_le64(nvme_block_nr(ns, bio->bi_iter.bi_sector)); 531 613 532 614 memset(cmnd, 0, sizeof(*cmnd)); 533 615 cmnd->dsm.opcode = nvme_cmd_dsm; ··· 592 674 } 593 675 594 676 result = -ENOMEM; 595 - iod = nvme_alloc_iod(psegs, bio->bi_size, GFP_ATOMIC); 677 + iod = nvme_alloc_iod(psegs, bio->bi_iter.bi_size, GFP_ATOMIC); 596 678 if (!iod) 597 679 goto nomem; 598 680 iod->private = bio; ··· 641 723 cmnd->rw.nsid = cpu_to_le32(ns->ns_id); 642 724 length = nvme_setup_prps(nvmeq->dev, &cmnd->common, iod, length, 643 725 GFP_ATOMIC); 644 - cmnd->rw.slba = cpu_to_le64(nvme_block_nr(ns, bio->bi_sector)); 726 + cmnd->rw.slba = cpu_to_le64(nvme_block_nr(ns, bio->bi_iter.bi_sector)); 645 727 cmnd->rw.length = cpu_to_le16((length >> ns->lba_shift) - 1); 646 728 cmnd->rw.control = cpu_to_le16(control); 647 729 cmnd->rw.dsmgmt = cpu_to_le32(dsmgmt);
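The nvme hunk deletes roughly ninety lines of hand-rolled bio-pair splitting: with immutable biovecs a driver can split at any sector with the generic bio_split() and tie the pieces together with bio_chain(), so the parent bio only completes once the split-off child has. A hedged sketch of the pattern (submit_one() is a stand-in for however the driver actually queues a bio):

    static int split_and_submit(struct bio *bio, unsigned sectors,
                                struct bio_set *bs)
    {
            struct bio *split;

            if (sectors >= bio_sectors(bio))
                    return submit_one(bio);         /* nothing to split */

            split = bio_split(bio, sectors, GFP_NOIO, bs);
            if (!split)
                    return -ENOMEM;

            bio_chain(split, bio);  /* parent waits for the child */

            submit_one(split);      /* first 'sectors' sectors     */
            submit_one(bio);        /* bio is now the remainder    */
            return 0;
    }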
+99 -83
drivers/block/pktcdvd.c
··· 651 651 652 652 for (;;) { 653 653 tmp = rb_entry(n, struct pkt_rb_node, rb_node); 654 - if (s <= tmp->bio->bi_sector) 654 + if (s <= tmp->bio->bi_iter.bi_sector) 655 655 next = n->rb_left; 656 656 else 657 657 next = n->rb_right; ··· 660 660 n = next; 661 661 } 662 662 663 - if (s > tmp->bio->bi_sector) { 663 + if (s > tmp->bio->bi_iter.bi_sector) { 664 664 tmp = pkt_rbtree_next(tmp); 665 665 if (!tmp) 666 666 return NULL; 667 667 } 668 - BUG_ON(s > tmp->bio->bi_sector); 668 + BUG_ON(s > tmp->bio->bi_iter.bi_sector); 669 669 return tmp; 670 670 } 671 671 ··· 676 676 { 677 677 struct rb_node **p = &pd->bio_queue.rb_node; 678 678 struct rb_node *parent = NULL; 679 - sector_t s = node->bio->bi_sector; 679 + sector_t s = node->bio->bi_iter.bi_sector; 680 680 struct pkt_rb_node *tmp; 681 681 682 682 while (*p) { 683 683 parent = *p; 684 684 tmp = rb_entry(parent, struct pkt_rb_node, rb_node); 685 - if (s < tmp->bio->bi_sector) 685 + if (s < tmp->bio->bi_iter.bi_sector) 686 686 p = &(*p)->rb_left; 687 687 else 688 688 p = &(*p)->rb_right; ··· 857 857 spin_lock(&pd->iosched.lock); 858 858 bio = bio_list_peek(&pd->iosched.write_queue); 859 859 spin_unlock(&pd->iosched.lock); 860 - if (bio && (bio->bi_sector == pd->iosched.last_write)) 860 + if (bio && (bio->bi_iter.bi_sector == 861 + pd->iosched.last_write)) 861 862 need_write_seek = 0; 862 863 if (need_write_seek && reads_queued) { 863 864 if (atomic_read(&pd->cdrw.pending_bios) > 0) { ··· 889 888 continue; 890 889 891 890 if (bio_data_dir(bio) == READ) 892 - pd->iosched.successive_reads += bio->bi_size >> 10; 891 + pd->iosched.successive_reads += 892 + bio->bi_iter.bi_size >> 10; 893 893 else { 894 894 pd->iosched.successive_reads = 0; 895 895 pd->iosched.last_write = bio_end_sector(bio); ··· 980 978 981 979 pkt_dbg(2, pd, "bio=%p sec0=%llx sec=%llx err=%d\n", 982 980 bio, (unsigned long long)pkt->sector, 983 - (unsigned long long)bio->bi_sector, err); 981 + (unsigned long long)bio->bi_iter.bi_sector, err); 984 982 985 983 if (err) 986 984 atomic_inc(&pkt->io_errors); ··· 1028 1026 memset(written, 0, sizeof(written)); 1029 1027 spin_lock(&pkt->lock); 1030 1028 bio_list_for_each(bio, &pkt->orig_bios) { 1031 - int first_frame = (bio->bi_sector - pkt->sector) / (CD_FRAMESIZE >> 9); 1032 - int num_frames = bio->bi_size / CD_FRAMESIZE; 1029 + int first_frame = (bio->bi_iter.bi_sector - pkt->sector) / 1030 + (CD_FRAMESIZE >> 9); 1031 + int num_frames = bio->bi_iter.bi_size / CD_FRAMESIZE; 1033 1032 pd->stats.secs_w += num_frames * (CD_FRAMESIZE >> 9); 1034 1033 BUG_ON(first_frame < 0); 1035 1034 BUG_ON(first_frame + num_frames > pkt->frames); ··· 1056 1053 1057 1054 bio = pkt->r_bios[f]; 1058 1055 bio_reset(bio); 1059 - bio->bi_sector = pkt->sector + f * (CD_FRAMESIZE >> 9); 1056 + bio->bi_iter.bi_sector = pkt->sector + f * (CD_FRAMESIZE >> 9); 1060 1057 bio->bi_bdev = pd->bdev; 1061 1058 bio->bi_end_io = pkt_end_io_read; 1062 1059 bio->bi_private = pkt; ··· 1153 1150 bio_reset(pkt->bio); 1154 1151 pkt->bio->bi_bdev = pd->bdev; 1155 1152 pkt->bio->bi_rw = REQ_WRITE; 1156 - pkt->bio->bi_sector = new_sector; 1157 - pkt->bio->bi_size = pkt->frames * CD_FRAMESIZE; 1153 + pkt->bio->bi_iter.bi_sector = new_sector; 1154 + pkt->bio->bi_iter.bi_size = pkt->frames * CD_FRAMESIZE; 1158 1155 pkt->bio->bi_vcnt = pkt->frames; 1159 1156 1160 1157 pkt->bio->bi_end_io = pkt_end_io_packet_write; ··· 1216 1213 node = first_node; 1217 1214 while (node) { 1218 1215 bio = node->bio; 1219 - zone = get_zone(bio->bi_sector, pd); 1216 + zone = 
get_zone(bio->bi_iter.bi_sector, pd); 1220 1217 list_for_each_entry(p, &pd->cdrw.pkt_active_list, list) { 1221 1218 if (p->sector == zone) { 1222 1219 bio = NULL; ··· 1255 1252 pkt_dbg(2, pd, "looking for zone %llx\n", (unsigned long long)zone); 1256 1253 while ((node = pkt_rbtree_find(pd, zone)) != NULL) { 1257 1254 bio = node->bio; 1258 - pkt_dbg(2, pd, "found zone=%llx\n", 1259 - (unsigned long long)get_zone(bio->bi_sector, pd)); 1260 - if (get_zone(bio->bi_sector, pd) != zone) 1255 + pkt_dbg(2, pd, "found zone=%llx\n", (unsigned long long) 1256 + get_zone(bio->bi_iter.bi_sector, pd)); 1257 + if (get_zone(bio->bi_iter.bi_sector, pd) != zone) 1261 1258 break; 1262 1259 pkt_rbtree_erase(pd, node); 1263 1260 spin_lock(&pkt->lock); 1264 1261 bio_list_add(&pkt->orig_bios, bio); 1265 - pkt->write_size += bio->bi_size / CD_FRAMESIZE; 1262 + pkt->write_size += bio->bi_iter.bi_size / CD_FRAMESIZE; 1266 1263 spin_unlock(&pkt->lock); 1267 1264 } 1268 1265 /* check write congestion marks, and if bio_queue_size is ··· 1296 1293 struct bio_vec *bvec = pkt->w_bio->bi_io_vec; 1297 1294 1298 1295 bio_reset(pkt->w_bio); 1299 - pkt->w_bio->bi_sector = pkt->sector; 1296 + pkt->w_bio->bi_iter.bi_sector = pkt->sector; 1300 1297 pkt->w_bio->bi_bdev = pd->bdev; 1301 1298 pkt->w_bio->bi_end_io = pkt_end_io_packet_write; 1302 1299 pkt->w_bio->bi_private = pkt; ··· 2338 2335 pkt_bio_finished(pd); 2339 2336 } 2340 2337 2341 - static void pkt_make_request(struct request_queue *q, struct bio *bio) 2338 + static void pkt_make_request_read(struct pktcdvd_device *pd, struct bio *bio) 2342 2339 { 2343 - struct pktcdvd_device *pd; 2344 - char b[BDEVNAME_SIZE]; 2340 + struct bio *cloned_bio = bio_clone(bio, GFP_NOIO); 2341 + struct packet_stacked_data *psd = mempool_alloc(psd_pool, GFP_NOIO); 2342 + 2343 + psd->pd = pd; 2344 + psd->bio = bio; 2345 + cloned_bio->bi_bdev = pd->bdev; 2346 + cloned_bio->bi_private = psd; 2347 + cloned_bio->bi_end_io = pkt_end_io_read_cloned; 2348 + pd->stats.secs_r += bio_sectors(bio); 2349 + pkt_queue_bio(pd, cloned_bio); 2350 + } 2351 + 2352 + static void pkt_make_request_write(struct request_queue *q, struct bio *bio) 2353 + { 2354 + struct pktcdvd_device *pd = q->queuedata; 2345 2355 sector_t zone; 2346 2356 struct packet_data *pkt; 2347 2357 int was_empty, blocked_bio; 2348 2358 struct pkt_rb_node *node; 2349 2359 2350 - pd = q->queuedata; 2351 - if (!pd) { 2352 - pr_err("%s incorrect request queue\n", 2353 - bdevname(bio->bi_bdev, b)); 2354 - goto end_io; 2355 - } 2356 - 2357 - /* 2358 - * Clone READ bios so we can have our own bi_end_io callback. 
2359 - */ 2360 - if (bio_data_dir(bio) == READ) { 2361 - struct bio *cloned_bio = bio_clone(bio, GFP_NOIO); 2362 - struct packet_stacked_data *psd = mempool_alloc(psd_pool, GFP_NOIO); 2363 - 2364 - psd->pd = pd; 2365 - psd->bio = bio; 2366 - cloned_bio->bi_bdev = pd->bdev; 2367 - cloned_bio->bi_private = psd; 2368 - cloned_bio->bi_end_io = pkt_end_io_read_cloned; 2369 - pd->stats.secs_r += bio_sectors(bio); 2370 - pkt_queue_bio(pd, cloned_bio); 2371 - return; 2372 - } 2373 - 2374 - if (!test_bit(PACKET_WRITABLE, &pd->flags)) { 2375 - pkt_notice(pd, "WRITE for ro device (%llu)\n", 2376 - (unsigned long long)bio->bi_sector); 2377 - goto end_io; 2378 - } 2379 - 2380 - if (!bio->bi_size || (bio->bi_size % CD_FRAMESIZE)) { 2381 - pkt_err(pd, "wrong bio size\n"); 2382 - goto end_io; 2383 - } 2384 - 2385 - blk_queue_bounce(q, &bio); 2386 - 2387 - zone = get_zone(bio->bi_sector, pd); 2388 - pkt_dbg(2, pd, "start = %6llx stop = %6llx\n", 2389 - (unsigned long long)bio->bi_sector, 2390 - (unsigned long long)bio_end_sector(bio)); 2391 - 2392 - /* Check if we have to split the bio */ 2393 - { 2394 - struct bio_pair *bp; 2395 - sector_t last_zone; 2396 - int first_sectors; 2397 - 2398 - last_zone = get_zone(bio_end_sector(bio) - 1, pd); 2399 - if (last_zone != zone) { 2400 - BUG_ON(last_zone != zone + pd->settings.size); 2401 - first_sectors = last_zone - bio->bi_sector; 2402 - bp = bio_split(bio, first_sectors); 2403 - BUG_ON(!bp); 2404 - pkt_make_request(q, &bp->bio1); 2405 - pkt_make_request(q, &bp->bio2); 2406 - bio_pair_release(bp); 2407 - return; 2408 - } 2409 - } 2360 + zone = get_zone(bio->bi_iter.bi_sector, pd); 2410 2361 2411 2362 /* 2412 2363 * If we find a matching packet in state WAITING or READ_WAIT, we can ··· 2374 2417 if ((pkt->state == PACKET_WAITING_STATE) || 2375 2418 (pkt->state == PACKET_READ_WAIT_STATE)) { 2376 2419 bio_list_add(&pkt->orig_bios, bio); 2377 - pkt->write_size += bio->bi_size / CD_FRAMESIZE; 2420 + pkt->write_size += 2421 + bio->bi_iter.bi_size / CD_FRAMESIZE; 2378 2422 if ((pkt->write_size >= pkt->frames) && 2379 2423 (pkt->state == PACKET_WAITING_STATE)) { 2380 2424 atomic_inc(&pkt->run_sm); ··· 2434 2476 */ 2435 2477 wake_up(&pd->wqueue); 2436 2478 } 2479 + } 2480 + 2481 + static void pkt_make_request(struct request_queue *q, struct bio *bio) 2482 + { 2483 + struct pktcdvd_device *pd; 2484 + char b[BDEVNAME_SIZE]; 2485 + struct bio *split; 2486 + 2487 + pd = q->queuedata; 2488 + if (!pd) { 2489 + pr_err("%s incorrect request queue\n", 2490 + bdevname(bio->bi_bdev, b)); 2491 + goto end_io; 2492 + } 2493 + 2494 + pkt_dbg(2, pd, "start = %6llx stop = %6llx\n", 2495 + (unsigned long long)bio->bi_iter.bi_sector, 2496 + (unsigned long long)bio_end_sector(bio)); 2497 + 2498 + /* 2499 + * Clone READ bios so we can have our own bi_end_io callback. 
2500 + */ 2501 + if (bio_data_dir(bio) == READ) { 2502 + pkt_make_request_read(pd, bio); 2503 + return; 2504 + } 2505 + 2506 + if (!test_bit(PACKET_WRITABLE, &pd->flags)) { 2507 + pkt_notice(pd, "WRITE for ro device (%llu)\n", 2508 + (unsigned long long)bio->bi_iter.bi_sector); 2509 + goto end_io; 2510 + } 2511 + 2512 + if (!bio->bi_iter.bi_size || (bio->bi_iter.bi_size % CD_FRAMESIZE)) { 2513 + pkt_err(pd, "wrong bio size\n"); 2514 + goto end_io; 2515 + } 2516 + 2517 + blk_queue_bounce(q, &bio); 2518 + 2519 + do { 2520 + sector_t zone = get_zone(bio->bi_iter.bi_sector, pd); 2521 + sector_t last_zone = get_zone(bio_end_sector(bio) - 1, pd); 2522 + 2523 + if (last_zone != zone) { 2524 + BUG_ON(last_zone != zone + pd->settings.size); 2525 + 2526 + split = bio_split(bio, last_zone - 2527 + bio->bi_iter.bi_sector, 2528 + GFP_NOIO, fs_bio_set); 2529 + bio_chain(split, bio); 2530 + } else { 2531 + split = bio; 2532 + } 2533 + 2534 + pkt_make_request_write(q, split); 2535 + } while (split != bio); 2536 + 2437 2537 return; 2438 2538 end_io: 2439 2539 bio_io_error(bio);
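pkt_make_request() is now a thin splitter: it carves off zone-aligned pieces with bio_split(), chains each to the original, and hands them to pkt_make_request_write() until the remainder fits in one zone; the do/while over 'split != bio' is the idiom several converted drivers use. A generic sketch of the same shape (zone_of() is a hypothetical stand-in for pktcdvd's get_zone()):

    static void submit_split_by_zone(struct bio *bio)
    {
            struct bio *split;

            do {
                    sector_t zone      = zone_of(bio->bi_iter.bi_sector);
                    sector_t last_zone = zone_of(bio_end_sector(bio) - 1);

                    if (last_zone != zone) {
                            /* carve off everything up to the zone boundary */
                            split = bio_split(bio,
                                              last_zone - bio->bi_iter.bi_sector,
                                              GFP_NOIO, fs_bio_set);
                            bio_chain(split, bio);
                    } else {
                            split = bio;    /* last (or only) piece */
                    }

                    generic_make_request(split);
            } while (split != bio);
    }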
+8 -9
drivers/block/ps3disk.c
··· 94 94 { 95 95 unsigned int offset = 0; 96 96 struct req_iterator iter; 97 - struct bio_vec *bvec; 97 + struct bio_vec bvec; 98 98 unsigned int i = 0; 99 99 size_t size; 100 100 void *buf; 101 101 102 102 rq_for_each_segment(bvec, req, iter) { 103 103 unsigned long flags; 104 - dev_dbg(&dev->sbd.core, 105 - "%s:%u: bio %u: %u segs %u sectors from %lu\n", 106 - __func__, __LINE__, i, bio_segments(iter.bio), 107 - bio_sectors(iter.bio), iter.bio->bi_sector); 104 + dev_dbg(&dev->sbd.core, "%s:%u: bio %u: %u sectors from %lu\n", 105 + __func__, __LINE__, i, bio_sectors(iter.bio), 106 + iter.bio->bi_iter.bi_sector); 108 107 109 - size = bvec->bv_len; 110 - buf = bvec_kmap_irq(bvec, &flags); 108 + size = bvec.bv_len; 109 + buf = bvec_kmap_irq(&bvec, &flags); 111 110 if (gather) 112 111 memcpy(dev->bounce_buf+offset, buf, size); 113 112 else 114 113 memcpy(buf, dev->bounce_buf+offset, size); 115 114 offset += size; 116 - flush_kernel_dcache_page(bvec->bv_page); 115 + flush_kernel_dcache_page(bvec.bv_page); 117 116 bvec_kunmap_irq(buf, &flags); 118 117 i++; 119 118 } ··· 129 130 130 131 #ifdef DEBUG 131 132 unsigned int n = 0; 132 - struct bio_vec *bv; 133 + struct bio_vec bv; 133 134 struct req_iterator iter; 134 135 135 136 rq_for_each_segment(bv, req, iter)
+6 -6
drivers/block/ps3vram.c
··· 553 553 struct ps3vram_priv *priv = ps3_system_bus_get_drvdata(dev); 554 554 int write = bio_data_dir(bio) == WRITE; 555 555 const char *op = write ? "write" : "read"; 556 - loff_t offset = bio->bi_sector << 9; 556 + loff_t offset = bio->bi_iter.bi_sector << 9; 557 557 int error = 0; 558 - struct bio_vec *bvec; 559 - unsigned int i; 558 + struct bio_vec bvec; 559 + struct bvec_iter iter; 560 560 struct bio *next; 561 561 562 - bio_for_each_segment(bvec, bio, i) { 562 + bio_for_each_segment(bvec, bio, iter) { 563 563 /* PS3 is ppc64, so we don't handle highmem */ 564 - char *ptr = page_address(bvec->bv_page) + bvec->bv_offset; 565 - size_t len = bvec->bv_len, retlen; 564 + char *ptr = page_address(bvec.bv_page) + bvec.bv_offset; 565 + size_t len = bvec.bv_len, retlen; 566 566 567 567 dev_dbg(&dev->core, " %s %zu bytes at offset %llu\n", op, 568 568 len, offset);
+16 -75
drivers/block/rbd.c
··· 1156 1156 */ 1157 1157 static void zero_bio_chain(struct bio *chain, int start_ofs) 1158 1158 { 1159 - struct bio_vec *bv; 1159 + struct bio_vec bv; 1160 + struct bvec_iter iter; 1160 1161 unsigned long flags; 1161 1162 void *buf; 1162 - int i; 1163 1163 int pos = 0; 1164 1164 1165 1165 while (chain) { 1166 - bio_for_each_segment(bv, chain, i) { 1167 - if (pos + bv->bv_len > start_ofs) { 1166 + bio_for_each_segment(bv, chain, iter) { 1167 + if (pos + bv.bv_len > start_ofs) { 1168 1168 int remainder = max(start_ofs - pos, 0); 1169 - buf = bvec_kmap_irq(bv, &flags); 1169 + buf = bvec_kmap_irq(&bv, &flags); 1170 1170 memset(buf + remainder, 0, 1171 - bv->bv_len - remainder); 1172 - flush_dcache_page(bv->bv_page); 1171 + bv.bv_len - remainder); 1172 + flush_dcache_page(bv.bv_page); 1173 1173 bvec_kunmap_irq(buf, &flags); 1174 1174 } 1175 - pos += bv->bv_len; 1175 + pos += bv.bv_len; 1176 1176 } 1177 1177 1178 1178 chain = chain->bi_next; ··· 1220 1220 unsigned int len, 1221 1221 gfp_t gfpmask) 1222 1222 { 1223 - struct bio_vec *bv; 1224 - unsigned int resid; 1225 - unsigned short idx; 1226 - unsigned int voff; 1227 - unsigned short end_idx; 1228 - unsigned short vcnt; 1229 1223 struct bio *bio; 1230 1224 1231 - /* Handle the easy case for the caller */ 1232 - 1233 - if (!offset && len == bio_src->bi_size) 1234 - return bio_clone(bio_src, gfpmask); 1235 - 1236 - if (WARN_ON_ONCE(!len)) 1237 - return NULL; 1238 - if (WARN_ON_ONCE(len > bio_src->bi_size)) 1239 - return NULL; 1240 - if (WARN_ON_ONCE(offset > bio_src->bi_size - len)) 1241 - return NULL; 1242 - 1243 - /* Find first affected segment... */ 1244 - 1245 - resid = offset; 1246 - bio_for_each_segment(bv, bio_src, idx) { 1247 - if (resid < bv->bv_len) 1248 - break; 1249 - resid -= bv->bv_len; 1250 - } 1251 - voff = resid; 1252 - 1253 - /* ...and the last affected segment */ 1254 - 1255 - resid += len; 1256 - __bio_for_each_segment(bv, bio_src, end_idx, idx) { 1257 - if (resid <= bv->bv_len) 1258 - break; 1259 - resid -= bv->bv_len; 1260 - } 1261 - vcnt = end_idx - idx + 1; 1262 - 1263 - /* Build the clone */ 1264 - 1265 - bio = bio_alloc(gfpmask, (unsigned int) vcnt); 1225 + bio = bio_clone(bio_src, gfpmask); 1266 1226 if (!bio) 1267 1227 return NULL; /* ENOMEM */ 1268 1228 1269 - bio->bi_bdev = bio_src->bi_bdev; 1270 - bio->bi_sector = bio_src->bi_sector + (offset >> SECTOR_SHIFT); 1271 - bio->bi_rw = bio_src->bi_rw; 1272 - bio->bi_flags |= 1 << BIO_CLONED; 1273 - 1274 - /* 1275 - * Copy over our part of the bio_vec, then update the first 1276 - * and last (or only) entries. 
1277 - */ 1278 - memcpy(&bio->bi_io_vec[0], &bio_src->bi_io_vec[idx], 1279 - vcnt * sizeof (struct bio_vec)); 1280 - bio->bi_io_vec[0].bv_offset += voff; 1281 - if (vcnt > 1) { 1282 - bio->bi_io_vec[0].bv_len -= voff; 1283 - bio->bi_io_vec[vcnt - 1].bv_len = resid; 1284 - } else { 1285 - bio->bi_io_vec[0].bv_len = len; 1286 - } 1287 - 1288 - bio->bi_vcnt = vcnt; 1289 - bio->bi_size = len; 1290 - bio->bi_idx = 0; 1229 + bio_advance(bio, offset); 1230 + bio->bi_iter.bi_size = len; 1291 1231 1292 1232 return bio; 1293 1233 } ··· 1258 1318 1259 1319 /* Build up a chain of clone bios up to the limit */ 1260 1320 1261 - if (!bi || off >= bi->bi_size || !len) 1321 + if (!bi || off >= bi->bi_iter.bi_size || !len) 1262 1322 return NULL; /* Nothing to clone */ 1263 1323 1264 1324 end = &chain; ··· 1270 1330 rbd_warn(NULL, "bio_chain exhausted with %u left", len); 1271 1331 goto out_err; /* EINVAL; ran out of bio's */ 1272 1332 } 1273 - bi_size = min_t(unsigned int, bi->bi_size - off, len); 1333 + bi_size = min_t(unsigned int, bi->bi_iter.bi_size - off, len); 1274 1334 bio = bio_clone_range(bi, off, bi_size, gfpmask); 1275 1335 if (!bio) 1276 1336 goto out_err; /* ENOMEM */ ··· 1279 1339 end = &bio->bi_next; 1280 1340 1281 1341 off += bi_size; 1282 - if (off == bi->bi_size) { 1342 + if (off == bi->bi_iter.bi_size) { 1283 1343 bi = bi->bi_next; 1284 1344 off = 0; 1285 1345 } ··· 2167 2227 2168 2228 if (type == OBJ_REQUEST_BIO) { 2169 2229 bio_list = data_desc; 2170 - rbd_assert(img_offset == bio_list->bi_sector << SECTOR_SHIFT); 2230 + rbd_assert(img_offset == 2231 + bio_list->bi_iter.bi_sector << SECTOR_SHIFT); 2171 2232 } else { 2172 2233 rbd_assert(type == OBJ_REQUEST_PAGES); 2173 2234 pages = data_desc;
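With the position carried in bi_iter, rbd no longer hand-copies bio_vecs to clone a byte range out of a bio: clone the whole bio, advance the clone past 'offset' bytes, and shrink bi_size to 'len'. Roughly what the simplified bio_clone_range() now amounts to:

    /* Clone the byte range [offset, offset + len) of bio_src. */
    static struct bio *clone_range(struct bio *bio_src, unsigned offset,
                                   unsigned len, gfp_t gfp)
    {
            struct bio *bio = bio_clone(bio_src, gfp);

            if (!bio)
                    return NULL;

            bio_advance(bio, offset);       /* steps bi_sector/bi_idx/bi_bvec_done */
            bio->bi_iter.bi_size = len;     /* truncate the tail */

            return bio;
    }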
+3 -3
drivers/block/rsxx/dev.c
··· 174 174 if (!card) 175 175 goto req_err; 176 176 177 - if (bio->bi_sector + (bio->bi_size >> 9) > get_capacity(card->gendisk)) 177 + if (bio_end_sector(bio) > get_capacity(card->gendisk)) 178 178 goto req_err; 179 179 180 180 if (unlikely(card->halt)) { ··· 187 187 goto req_err; 188 188 } 189 189 190 - if (bio->bi_size == 0) { 190 + if (bio->bi_iter.bi_size == 0) { 191 191 dev_err(CARD_TO_DEV(card), "size zero BIO!\n"); 192 192 goto req_err; 193 193 } ··· 208 208 209 209 dev_dbg(CARD_TO_DEV(card), "BIO[%c]: meta: %p addr8: x%llx size: %d\n", 210 210 bio_data_dir(bio) ? 'W' : 'R', bio_meta, 211 - (u64)bio->bi_sector << 9, bio->bi_size); 211 + (u64)bio->bi_iter.bi_sector << 9, bio->bi_iter.bi_size); 212 212 213 213 st = rsxx_dma_queue_bio(card, bio, &bio_meta->pending_dmas, 214 214 bio_dma_done_cb, bio_meta);
+8 -7
drivers/block/rsxx/dma.c
··· 684 684 void *cb_data) 685 685 { 686 686 struct list_head dma_list[RSXX_MAX_TARGETS]; 687 - struct bio_vec *bvec; 687 + struct bio_vec bvec; 688 + struct bvec_iter iter; 688 689 unsigned long long addr8; 689 690 unsigned int laddr; 690 691 unsigned int bv_len; ··· 697 696 int st; 698 697 int i; 699 698 700 - addr8 = bio->bi_sector << 9; /* sectors are 512 bytes */ 699 + addr8 = bio->bi_iter.bi_sector << 9; /* sectors are 512 bytes */ 701 700 atomic_set(n_dmas, 0); 702 701 703 702 for (i = 0; i < card->n_targets; i++) { ··· 706 705 } 707 706 708 707 if (bio->bi_rw & REQ_DISCARD) { 709 - bv_len = bio->bi_size; 708 + bv_len = bio->bi_iter.bi_size; 710 709 711 710 while (bv_len > 0) { 712 711 tgt = rsxx_get_dma_tgt(card, addr8); ··· 723 722 bv_len -= RSXX_HW_BLK_SIZE; 724 723 } 725 724 } else { 726 - bio_for_each_segment(bvec, bio, i) { 727 - bv_len = bvec->bv_len; 728 - bv_off = bvec->bv_offset; 725 + bio_for_each_segment(bvec, bio, iter) { 726 + bv_len = bvec.bv_len; 727 + bv_off = bvec.bv_offset; 729 728 730 729 while (bv_len > 0) { 731 730 tgt = rsxx_get_dma_tgt(card, addr8); ··· 737 736 st = rsxx_queue_dma(card, &dma_list[tgt], 738 737 bio_data_dir(bio), 739 738 dma_off, dma_len, 740 - laddr, bvec->bv_page, 739 + laddr, bvec.bv_page, 741 740 bv_off, cb, cb_data); 742 741 if (st) 743 742 goto bvec_err;
+25 -28
drivers/block/umem.c
··· 108 108 * have been written 109 109 */ 110 110 struct bio *bio, *currentbio, **biotail; 111 - int current_idx; 112 - sector_t current_sector; 111 + struct bvec_iter current_iter; 113 112 114 113 struct request_queue *queue; 115 114 ··· 117 118 struct mm_dma_desc *desc; 118 119 int cnt, headcnt; 119 120 struct bio *bio, **biotail; 120 - int idx; 121 + struct bvec_iter iter; 121 122 } mm_pages[2]; 122 123 #define DESC_PER_PAGE ((PAGE_SIZE*2)/sizeof(struct mm_dma_desc)) 123 124 ··· 343 344 dma_addr_t dma_handle; 344 345 int offset; 345 346 struct bio *bio; 346 - struct bio_vec *vec; 347 - int idx; 347 + struct bio_vec vec; 348 348 int rw; 349 - int len; 350 349 351 350 bio = card->currentbio; 352 351 if (!bio && card->bio) { 353 352 card->currentbio = card->bio; 354 - card->current_idx = card->bio->bi_idx; 355 - card->current_sector = card->bio->bi_sector; 353 + card->current_iter = card->bio->bi_iter; 356 354 card->bio = card->bio->bi_next; 357 355 if (card->bio == NULL) 358 356 card->biotail = &card->bio; ··· 358 362 } 359 363 if (!bio) 360 364 return 0; 361 - idx = card->current_idx; 362 365 363 366 rw = bio_rw(bio); 364 367 if (card->mm_pages[card->Ready].cnt >= DESC_PER_PAGE) 365 368 return 0; 366 369 367 - vec = bio_iovec_idx(bio, idx); 368 - len = vec->bv_len; 370 + vec = bio_iter_iovec(bio, card->current_iter); 371 + 369 372 dma_handle = pci_map_page(card->dev, 370 - vec->bv_page, 371 - vec->bv_offset, 372 - len, 373 + vec.bv_page, 374 + vec.bv_offset, 375 + vec.bv_len, 373 376 (rw == READ) ? 374 377 PCI_DMA_FROMDEVICE : PCI_DMA_TODEVICE); 375 378 ··· 376 381 desc = &p->desc[p->cnt]; 377 382 p->cnt++; 378 383 if (p->bio == NULL) 379 - p->idx = idx; 384 + p->iter = card->current_iter; 380 385 if ((p->biotail) != &bio->bi_next) { 381 386 *(p->biotail) = bio; 382 387 p->biotail = &(bio->bi_next); ··· 386 391 desc->data_dma_handle = dma_handle; 387 392 388 393 desc->pci_addr = cpu_to_le64((u64)desc->data_dma_handle); 389 - desc->local_addr = cpu_to_le64(card->current_sector << 9); 390 - desc->transfer_size = cpu_to_le32(len); 394 + desc->local_addr = cpu_to_le64(card->current_iter.bi_sector << 9); 395 + desc->transfer_size = cpu_to_le32(vec.bv_len); 391 396 offset = (((char *)&desc->sem_control_bits) - ((char *)p->desc)); 392 397 desc->sem_addr = cpu_to_le64((u64)(p->page_dma+offset)); 393 398 desc->zero1 = desc->zero2 = 0; ··· 402 407 desc->control_bits |= cpu_to_le32(DMASCR_TRANSFER_READ); 403 408 desc->sem_control_bits = desc->control_bits; 404 409 405 - card->current_sector += (len >> 9); 406 - idx++; 407 - card->current_idx = idx; 408 - if (idx >= bio->bi_vcnt) 410 + 411 + bio_advance_iter(bio, &card->current_iter, vec.bv_len); 412 + if (!card->current_iter.bi_size) 409 413 card->currentbio = NULL; 410 414 411 415 return 1; ··· 433 439 struct mm_dma_desc *desc = &page->desc[page->headcnt]; 434 440 int control = le32_to_cpu(desc->sem_control_bits); 435 441 int last = 0; 436 - int idx; 442 + struct bio_vec vec; 437 443 438 444 if (!(control & DMASCR_DMA_COMPLETE)) { 439 445 control = dma_status; 440 446 last = 1; 441 447 } 448 + 442 449 page->headcnt++; 443 - idx = page->idx; 444 - page->idx++; 445 - if (page->idx >= bio->bi_vcnt) { 450 + vec = bio_iter_iovec(bio, page->iter); 451 + bio_advance_iter(bio, &page->iter, vec.bv_len); 452 + 453 + if (!page->iter.bi_size) { 446 454 page->bio = bio->bi_next; 447 455 if (page->bio) 448 - page->idx = page->bio->bi_idx; 456 + page->iter = page->bio->bi_iter; 449 457 } 450 458 451 459 pci_unmap_page(card->dev, desc->data_dma_handle, 452 - 
bio_iovec_idx(bio, idx)->bv_len, 460 + vec.bv_len, 453 461 (control & DMASCR_TRANSFER_READ) ? 454 462 PCI_DMA_TODEVICE : PCI_DMA_FROMDEVICE); 455 463 if (control & DMASCR_HARD_ERROR) { ··· 528 532 { 529 533 struct cardinfo *card = q->queuedata; 530 534 pr_debug("mm_make_request %llu %u\n", 531 - (unsigned long long)bio->bi_sector, bio->bi_size); 535 + (unsigned long long)bio->bi_iter.bi_sector, 536 + bio->bi_iter.bi_size); 532 537 533 538 spin_lock_irq(&card->lock); 534 539 *card->biotail = bio;
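umem keeps its position across calls, so rather than bio_for_each_segment() it snapshots bio->bi_iter into a driver-private cursor and steps it by hand: bio_iter_iovec() reads the current segment, bio_advance_iter() moves the cursor by that many bytes, and iter.bi_size hitting zero means the bio is done. The skeleton of that bookkeeping (a sketch, not the driver's exact code):

    /* Process one segment per call, remembering position in *pos. */
    static int do_one_segment(struct bio *bio, struct bvec_iter *pos)
    {
            struct bio_vec vec;

            if (!pos->bi_size)
                    return 0;                       /* bio exhausted */

            vec = bio_iter_iovec(bio, *pos);        /* current segment */

            /* ... map vec.bv_page and transfer vec.bv_len bytes at
             * device sector pos->bi_sector ... */

            bio_advance_iter(bio, pos, vec.bv_len); /* step the cursor */
            return 1;                               /* more may remain */
    }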
+1 -1
drivers/block/xen-blkback/blkback.c
··· 1257 1257 bio->bi_bdev = preq.bdev; 1258 1258 bio->bi_private = pending_req; 1259 1259 bio->bi_end_io = end_block_io_op; 1260 - bio->bi_sector = preq.sector_number; 1260 + bio->bi_iter.bi_sector = preq.sector_number; 1261 1261 } 1262 1262 1263 1263 preq.sector_number += seg[i].nsec;
+1 -1
drivers/block/xen-blkfront.c
··· 1547 1547 for (i = 0; i < pending; i++) { 1548 1548 offset = (i * segs * PAGE_SIZE) >> 9; 1549 1549 size = min((unsigned int)(segs * PAGE_SIZE) >> 9, 1550 - (unsigned int)(bio->bi_size >> 9) - offset); 1550 + (unsigned int)bio_sectors(bio) - offset); 1551 1551 cloned_bio = bio_clone(bio, GFP_NOIO); 1552 1552 BUG_ON(cloned_bio == NULL); 1553 1553 bio_trim(cloned_bio, offset, size);
-2
+0 -2
drivers/md/bcache/bcache.h
··· 280 280 unsigned long sectors_dirty_last; 281 281 long sectors_dirty_derivative; 282 282 283 - mempool_t *unaligned_bvec; 284 283 struct bio_set *bio_split; 285 284 286 285 unsigned data_csum:1; ··· 901 902 void bch_bbio_free(struct bio *, struct cache_set *); 902 903 struct bio *bch_bbio_alloc(struct cache_set *); 903 904 904 - struct bio *bch_bio_split(struct bio *, int, gfp_t, struct bio_set *); 905 905 void bch_generic_make_request(struct bio *, struct bio_split_pool *); 906 906 void __bch_submit_bbio(struct bio *, struct cache_set *); 907 907 void bch_submit_bbio(struct bio *, struct cache_set *, struct bkey *, unsigned);
+4 -4
drivers/md/bcache/btree.c
··· 299 299 300 300 bio = bch_bbio_alloc(b->c); 301 301 bio->bi_rw = REQ_META|READ_SYNC; 302 - bio->bi_size = KEY_SIZE(&b->key) << 9; 302 + bio->bi_iter.bi_size = KEY_SIZE(&b->key) << 9; 303 303 bio->bi_end_io = btree_node_read_endio; 304 304 bio->bi_private = &cl; 305 305 ··· 362 362 struct bio_vec *bv; 363 363 int n; 364 364 365 - __bio_for_each_segment(bv, b->bio, n, 0) 365 + bio_for_each_segment_all(bv, b->bio, n) 366 366 __free_page(bv->bv_page); 367 367 368 368 __btree_node_write_done(cl); ··· 395 395 b->bio->bi_end_io = btree_node_write_endio; 396 396 b->bio->bi_private = cl; 397 397 b->bio->bi_rw = REQ_META|WRITE_SYNC|REQ_FUA; 398 - b->bio->bi_size = set_blocks(i, b->c) * block_bytes(b->c); 398 + b->bio->bi_iter.bi_size = set_blocks(i, b->c) * block_bytes(b->c); 399 399 bch_bio_map(b->bio, i); 400 400 401 401 /* ··· 421 421 struct bio_vec *bv; 422 422 void *base = (void *) ((unsigned long) i & ~(PAGE_SIZE - 1)); 423 423 424 - bio_for_each_segment(bv, b->bio, j) 424 + bio_for_each_segment_all(bv, b->bio, j) 425 425 memcpy(page_address(bv->bv_page), 426 426 base + j * PAGE_SIZE, PAGE_SIZE); 427 427
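Note the distinction the bcache btree code now has to respect: bio_for_each_segment() walks only the unprocessed range described by bi_iter and may see partial bvecs once bios can be split anywhere, whereas bio_for_each_segment_all() walks the raw bi_io_vec array and is only valid for a bio the caller built itself, which is why the page-freeing and memcpy loops above switched to the _all variant. For example, freeing pages you allocated for your own bio:

    /* Only valid for a bio this code owns and populated itself. */
    static void free_own_bio_pages(struct bio *bio)
    {
            struct bio_vec *bv;     /* _all yields pointers into bi_io_vec */
            int i;

            bio_for_each_segment_all(bv, bio, i)
                    __free_page(bv->bv_page);
    }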
+11 -10
drivers/md/bcache/debug.c
··· 173 173 { 174 174 char name[BDEVNAME_SIZE]; 175 175 struct bio *check; 176 - struct bio_vec *bv; 176 + struct bio_vec bv, *bv2; 177 + struct bvec_iter iter; 177 178 int i; 178 179 179 180 check = bio_clone(bio, GFP_NOIO); ··· 186 185 187 186 submit_bio_wait(READ_SYNC, check); 188 187 189 - bio_for_each_segment(bv, bio, i) { 190 - void *p1 = kmap_atomic(bv->bv_page); 191 - void *p2 = page_address(check->bi_io_vec[i].bv_page); 188 + bio_for_each_segment(bv, bio, iter) { 189 + void *p1 = kmap_atomic(bv.bv_page); 190 + void *p2 = page_address(check->bi_io_vec[iter.bi_idx].bv_page); 192 191 193 - cache_set_err_on(memcmp(p1 + bv->bv_offset, 194 - p2 + bv->bv_offset, 195 - bv->bv_len), 192 + cache_set_err_on(memcmp(p1 + bv.bv_offset, 193 + p2 + bv.bv_offset, 194 + bv.bv_len), 196 195 dc->disk.c, 197 196 "verify failed at dev %s sector %llu", 198 197 bdevname(dc->bdev, name), 199 - (uint64_t) bio->bi_sector); 198 + (uint64_t) bio->bi_iter.bi_sector); 200 199 201 200 kunmap_atomic(p1); 202 201 } 203 202 204 - bio_for_each_segment_all(bv, check, i) 205 - __free_page(bv->bv_page); 203 + bio_for_each_segment_all(bv2, check, i) 204 + __free_page(bv2->bv_page); 206 205 out_put: 207 206 bio_put(check); 208 207 }
+27 -165
drivers/md/bcache/io.c
··· 11 11 12 12 #include <linux/blkdev.h> 13 13 14 - static void bch_bi_idx_hack_endio(struct bio *bio, int error) 15 - { 16 - struct bio *p = bio->bi_private; 17 - 18 - bio_endio(p, error); 19 - bio_put(bio); 20 - } 21 - 22 - static void bch_generic_make_request_hack(struct bio *bio) 23 - { 24 - if (bio->bi_idx) { 25 - struct bio *clone = bio_alloc(GFP_NOIO, bio_segments(bio)); 26 - 27 - memcpy(clone->bi_io_vec, 28 - bio_iovec(bio), 29 - bio_segments(bio) * sizeof(struct bio_vec)); 30 - 31 - clone->bi_sector = bio->bi_sector; 32 - clone->bi_bdev = bio->bi_bdev; 33 - clone->bi_rw = bio->bi_rw; 34 - clone->bi_vcnt = bio_segments(bio); 35 - clone->bi_size = bio->bi_size; 36 - 37 - clone->bi_private = bio; 38 - clone->bi_end_io = bch_bi_idx_hack_endio; 39 - 40 - bio = clone; 41 - } 42 - 43 - /* 44 - * Hack, since drivers that clone bios clone up to bi_max_vecs, but our 45 - * bios might have had more than that (before we split them per device 46 - * limitations). 47 - * 48 - * To be taken out once immutable bvec stuff is in. 49 - */ 50 - bio->bi_max_vecs = bio->bi_vcnt; 51 - 52 - generic_make_request(bio); 53 - } 54 - 55 - /** 56 - * bch_bio_split - split a bio 57 - * @bio: bio to split 58 - * @sectors: number of sectors to split from the front of @bio 59 - * @gfp: gfp mask 60 - * @bs: bio set to allocate from 61 - * 62 - * Allocates and returns a new bio which represents @sectors from the start of 63 - * @bio, and updates @bio to represent the remaining sectors. 64 - * 65 - * If bio_sectors(@bio) was less than or equal to @sectors, returns @bio 66 - * unchanged. 67 - * 68 - * The newly allocated bio will point to @bio's bi_io_vec, if the split was on a 69 - * bvec boundry; it is the caller's responsibility to ensure that @bio is not 70 - * freed before the split. 
71 - */ 72 - struct bio *bch_bio_split(struct bio *bio, int sectors, 73 - gfp_t gfp, struct bio_set *bs) 74 - { 75 - unsigned idx = bio->bi_idx, vcnt = 0, nbytes = sectors << 9; 76 - struct bio_vec *bv; 77 - struct bio *ret = NULL; 78 - 79 - BUG_ON(sectors <= 0); 80 - 81 - if (sectors >= bio_sectors(bio)) 82 - return bio; 83 - 84 - if (bio->bi_rw & REQ_DISCARD) { 85 - ret = bio_alloc_bioset(gfp, 1, bs); 86 - if (!ret) 87 - return NULL; 88 - idx = 0; 89 - goto out; 90 - } 91 - 92 - bio_for_each_segment(bv, bio, idx) { 93 - vcnt = idx - bio->bi_idx; 94 - 95 - if (!nbytes) { 96 - ret = bio_alloc_bioset(gfp, vcnt, bs); 97 - if (!ret) 98 - return NULL; 99 - 100 - memcpy(ret->bi_io_vec, bio_iovec(bio), 101 - sizeof(struct bio_vec) * vcnt); 102 - 103 - break; 104 - } else if (nbytes < bv->bv_len) { 105 - ret = bio_alloc_bioset(gfp, ++vcnt, bs); 106 - if (!ret) 107 - return NULL; 108 - 109 - memcpy(ret->bi_io_vec, bio_iovec(bio), 110 - sizeof(struct bio_vec) * vcnt); 111 - 112 - ret->bi_io_vec[vcnt - 1].bv_len = nbytes; 113 - bv->bv_offset += nbytes; 114 - bv->bv_len -= nbytes; 115 - break; 116 - } 117 - 118 - nbytes -= bv->bv_len; 119 - } 120 - out: 121 - ret->bi_bdev = bio->bi_bdev; 122 - ret->bi_sector = bio->bi_sector; 123 - ret->bi_size = sectors << 9; 124 - ret->bi_rw = bio->bi_rw; 125 - ret->bi_vcnt = vcnt; 126 - ret->bi_max_vecs = vcnt; 127 - 128 - bio->bi_sector += sectors; 129 - bio->bi_size -= sectors << 9; 130 - bio->bi_idx = idx; 131 - 132 - if (bio_integrity(bio)) { 133 - if (bio_integrity_clone(ret, bio, gfp)) { 134 - bio_put(ret); 135 - return NULL; 136 - } 137 - 138 - bio_integrity_trim(ret, 0, bio_sectors(ret)); 139 - bio_integrity_trim(bio, bio_sectors(ret), bio_sectors(bio)); 140 - } 141 - 142 - return ret; 143 - } 144 - 145 14 static unsigned bch_bio_max_sectors(struct bio *bio) 146 15 { 147 - unsigned ret = bio_sectors(bio); 148 16 struct request_queue *q = bdev_get_queue(bio->bi_bdev); 149 - unsigned max_segments = min_t(unsigned, BIO_MAX_PAGES, 150 - queue_max_segments(q)); 17 + struct bio_vec bv; 18 + struct bvec_iter iter; 19 + unsigned ret = 0, seg = 0; 151 20 152 21 if (bio->bi_rw & REQ_DISCARD) 153 - return min(ret, q->limits.max_discard_sectors); 22 + return min(bio_sectors(bio), q->limits.max_discard_sectors); 154 23 155 - if (bio_segments(bio) > max_segments || 156 - q->merge_bvec_fn) { 157 - struct bio_vec *bv; 158 - int i, seg = 0; 24 + bio_for_each_segment(bv, bio, iter) { 25 + struct bvec_merge_data bvm = { 26 + .bi_bdev = bio->bi_bdev, 27 + .bi_sector = bio->bi_iter.bi_sector, 28 + .bi_size = ret << 9, 29 + .bi_rw = bio->bi_rw, 30 + }; 159 31 160 - ret = 0; 32 + if (seg == min_t(unsigned, BIO_MAX_PAGES, 33 + queue_max_segments(q))) 34 + break; 161 35 162 - bio_for_each_segment(bv, bio, i) { 163 - struct bvec_merge_data bvm = { 164 - .bi_bdev = bio->bi_bdev, 165 - .bi_sector = bio->bi_sector, 166 - .bi_size = ret << 9, 167 - .bi_rw = bio->bi_rw, 168 - }; 36 + if (q->merge_bvec_fn && 37 + q->merge_bvec_fn(q, &bvm, &bv) < (int) bv.bv_len) 38 + break; 169 39 170 - if (seg == max_segments) 171 - break; 172 - 173 - if (q->merge_bvec_fn && 174 - q->merge_bvec_fn(q, &bvm, bv) < (int) bv->bv_len) 175 - break; 176 - 177 - seg++; 178 - ret += bv->bv_len >> 9; 179 - } 40 + seg++; 41 + ret += bv.bv_len >> 9; 180 42 } 181 43 182 44 ret = min(ret, queue_max_sectors(q)); 183 45 184 46 WARN_ON(!ret); 185 - ret = max_t(int, ret, bio_iovec(bio)->bv_len >> 9); 47 + ret = max_t(int, ret, bio_iovec(bio).bv_len >> 9); 186 48 187 49 return ret; 188 50 } ··· 55 193 56 194 
s->bio->bi_end_io = s->bi_end_io; 57 195 s->bio->bi_private = s->bi_private; 58 - bio_endio(s->bio, 0); 196 + bio_endio_nodec(s->bio, 0); 59 197 60 198 closure_debug_destroy(&s->cl); 61 199 mempool_free(s, s->p->bio_split_hook); ··· 94 232 bio_get(bio); 95 233 96 234 do { 97 - n = bch_bio_split(bio, bch_bio_max_sectors(bio), 98 - GFP_NOIO, s->p->bio_split); 235 + n = bio_next_split(bio, bch_bio_max_sectors(bio), 236 + GFP_NOIO, s->p->bio_split); 99 237 100 238 n->bi_end_io = bch_bio_submit_split_endio; 101 239 n->bi_private = &s->cl; 102 240 103 241 closure_get(&s->cl); 104 - bch_generic_make_request_hack(n); 242 + generic_make_request(n); 105 243 } while (n != bio); 106 244 107 245 continue_at(&s->cl, bch_bio_submit_split_done, NULL); 108 246 submit: 109 - bch_generic_make_request_hack(bio); 247 + generic_make_request(bio); 110 248 } 111 249 112 250 /* Bios with headers */ ··· 134 272 { 135 273 struct bbio *b = container_of(bio, struct bbio, bio); 136 274 137 - bio->bi_sector = PTR_OFFSET(&b->key, 0); 138 - bio->bi_bdev = PTR_CACHE(c, &b->key, 0)->bdev; 275 + bio->bi_iter.bi_sector = PTR_OFFSET(&b->key, 0); 276 + bio->bi_bdev = PTR_CACHE(c, &b->key, 0)->bdev; 139 277 140 278 b->submit_time_us = local_clock_us(); 141 279 closure_bio_submit(bio, bio->bi_private, PTR_CACHE(c, &b->key, 0));
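bcache's private bch_bio_split() and its generic_make_request hack are gone; the submit loop now relies on bio_next_split(), which hands back the original bio untouched when the requested size already covers it and otherwise falls through to bio_split(). That is what makes the 'n != bio' loop termination work. Its semantics are approximately:

    static inline struct bio *next_split(struct bio *bio, int sectors,
                                         gfp_t gfp, struct bio_set *bs)
    {
            if (sectors >= bio_sectors(bio))
                    return bio;             /* nothing to split off */

            return bio_split(bio, sectors, gfp, bs);
    }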
+6 -6
drivers/md/bcache/journal.c
··· 51 51 len = min_t(unsigned, left, PAGE_SECTORS * 8); 52 52 53 53 bio_reset(bio); 54 - bio->bi_sector = bucket + offset; 54 + bio->bi_iter.bi_sector = bucket + offset; 55 55 bio->bi_bdev = ca->bdev; 56 56 bio->bi_rw = READ; 57 - bio->bi_size = len << 9; 57 + bio->bi_iter.bi_size = len << 9; 58 58 59 59 bio->bi_end_io = journal_read_endio; 60 60 bio->bi_private = &cl; ··· 437 437 atomic_set(&ja->discard_in_flight, DISCARD_IN_FLIGHT); 438 438 439 439 bio_init(bio); 440 - bio->bi_sector = bucket_to_sector(ca->set, 440 + bio->bi_iter.bi_sector = bucket_to_sector(ca->set, 441 441 ca->sb.d[ja->discard_idx]); 442 442 bio->bi_bdev = ca->bdev; 443 443 bio->bi_rw = REQ_WRITE|REQ_DISCARD; 444 444 bio->bi_max_vecs = 1; 445 445 bio->bi_io_vec = bio->bi_inline_vecs; 446 - bio->bi_size = bucket_bytes(ca); 446 + bio->bi_iter.bi_size = bucket_bytes(ca); 447 447 bio->bi_end_io = journal_discard_endio; 448 448 449 449 closure_get(&ca->set->cl); ··· 608 608 atomic_long_add(sectors, &ca->meta_sectors_written); 609 609 610 610 bio_reset(bio); 611 - bio->bi_sector = PTR_OFFSET(k, i); 611 + bio->bi_iter.bi_sector = PTR_OFFSET(k, i); 612 612 bio->bi_bdev = ca->bdev; 613 613 bio->bi_rw = REQ_WRITE|REQ_SYNC|REQ_META|REQ_FLUSH|REQ_FUA; 614 - bio->bi_size = sectors << 9; 614 + bio->bi_iter.bi_size = sectors << 9; 615 615 616 616 bio->bi_end_io = journal_write_endio; 617 617 bio->bi_private = w;
+2 -2
drivers/md/bcache/movinggc.c
··· 86 86 bio_get(bio); 87 87 bio_set_prio(bio, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0)); 88 88 89 - bio->bi_size = KEY_SIZE(&io->w->key) << 9; 89 + bio->bi_iter.bi_size = KEY_SIZE(&io->w->key) << 9; 90 90 bio->bi_max_vecs = DIV_ROUND_UP(KEY_SIZE(&io->w->key), 91 91 PAGE_SECTORS); 92 92 bio->bi_private = &io->cl; ··· 102 102 if (!op->error) { 103 103 moving_init(io); 104 104 105 - io->bio.bio.bi_sector = KEY_START(&io->w->key); 105 + io->bio.bio.bi_iter.bi_sector = KEY_START(&io->w->key); 106 106 op->write_prio = 1; 107 107 op->bio = &io->bio.bio; 108 108
+52 -79
drivers/md/bcache/request.c
··· 197 197 198 198 static void bio_csum(struct bio *bio, struct bkey *k) 199 199 { 200 - struct bio_vec *bv; 200 + struct bio_vec bv; 201 + struct bvec_iter iter; 201 202 uint64_t csum = 0; 202 - int i; 203 203 204 - bio_for_each_segment(bv, bio, i) { 205 - void *d = kmap(bv->bv_page) + bv->bv_offset; 206 - csum = bch_crc64_update(csum, d, bv->bv_len); 207 - kunmap(bv->bv_page); 204 + bio_for_each_segment(bv, bio, iter) { 205 + void *d = kmap(bv.bv_page) + bv.bv_offset; 206 + csum = bch_crc64_update(csum, d, bv.bv_len); 207 + kunmap(bv.bv_page); 208 208 } 209 209 210 210 k->ptr[KEY_PTRS(k)] = csum & (~0ULL >> 1); ··· 260 260 struct bio *bio = op->bio; 261 261 262 262 pr_debug("invalidating %i sectors from %llu", 263 - bio_sectors(bio), (uint64_t) bio->bi_sector); 263 + bio_sectors(bio), (uint64_t) bio->bi_iter.bi_sector); 264 264 265 265 while (bio_sectors(bio)) { 266 266 unsigned sectors = min(bio_sectors(bio), ··· 269 269 if (bch_keylist_realloc(&op->insert_keys, 0, op->c)) 270 270 goto out; 271 271 272 - bio->bi_sector += sectors; 273 - bio->bi_size -= sectors << 9; 272 + bio->bi_iter.bi_sector += sectors; 273 + bio->bi_iter.bi_size -= sectors << 9; 274 274 275 275 bch_keylist_add(&op->insert_keys, 276 - &KEY(op->inode, bio->bi_sector, sectors)); 276 + &KEY(op->inode, bio->bi_iter.bi_sector, sectors)); 277 277 } 278 278 279 279 op->insert_data_done = true; ··· 363 363 k = op->insert_keys.top; 364 364 bkey_init(k); 365 365 SET_KEY_INODE(k, op->inode); 366 - SET_KEY_OFFSET(k, bio->bi_sector); 366 + SET_KEY_OFFSET(k, bio->bi_iter.bi_sector); 367 367 368 368 if (!bch_alloc_sectors(op->c, k, bio_sectors(bio), 369 369 op->write_point, op->write_prio, 370 370 op->writeback)) 371 371 goto err; 372 372 373 - n = bch_bio_split(bio, KEY_SIZE(k), GFP_NOIO, split); 373 + n = bio_next_split(bio, KEY_SIZE(k), GFP_NOIO, split); 374 374 375 375 n->bi_end_io = bch_data_insert_endio; 376 376 n->bi_private = cl; ··· 521 521 (bio->bi_rw & REQ_WRITE))) 522 522 goto skip; 523 523 524 - if (bio->bi_sector & (c->sb.block_size - 1) || 524 + if (bio->bi_iter.bi_sector & (c->sb.block_size - 1) || 525 525 bio_sectors(bio) & (c->sb.block_size - 1)) { 526 526 pr_debug("skipping unaligned io"); 527 527 goto skip; ··· 545 545 546 546 spin_lock(&dc->io_lock); 547 547 548 - hlist_for_each_entry(i, iohash(dc, bio->bi_sector), hash) 549 - if (i->last == bio->bi_sector && 548 + hlist_for_each_entry(i, iohash(dc, bio->bi_iter.bi_sector), hash) 549 + if (i->last == bio->bi_iter.bi_sector && 550 550 time_before(jiffies, i->jiffies)) 551 551 goto found; 552 552 ··· 555 555 add_sequential(task); 556 556 i->sequential = 0; 557 557 found: 558 - if (i->sequential + bio->bi_size > i->sequential) 559 - i->sequential += bio->bi_size; 558 + if (i->sequential + bio->bi_iter.bi_size > i->sequential) 559 + i->sequential += bio->bi_iter.bi_size; 560 560 561 561 i->last = bio_end_sector(bio); 562 562 i->jiffies = jiffies + msecs_to_jiffies(5000); ··· 605 605 unsigned insert_bio_sectors; 606 606 607 607 unsigned recoverable:1; 608 - unsigned unaligned_bvec:1; 609 608 unsigned write:1; 610 609 unsigned read_dirty_data:1; 611 610 ··· 648 649 struct bkey *bio_key; 649 650 unsigned ptr; 650 651 651 - if (bkey_cmp(k, &KEY(s->iop.inode, bio->bi_sector, 0)) <= 0) 652 + if (bkey_cmp(k, &KEY(s->iop.inode, bio->bi_iter.bi_sector, 0)) <= 0) 652 653 return MAP_CONTINUE; 653 654 654 655 if (KEY_INODE(k) != s->iop.inode || 655 - KEY_START(k) > bio->bi_sector) { 656 + KEY_START(k) > bio->bi_iter.bi_sector) { 656 657 unsigned bio_sectors = 
bio_sectors(bio); 657 658 unsigned sectors = KEY_INODE(k) == s->iop.inode 658 659 ? min_t(uint64_t, INT_MAX, 659 - KEY_START(k) - bio->bi_sector) 660 + KEY_START(k) - bio->bi_iter.bi_sector) 660 661 : INT_MAX; 661 662 662 663 int ret = s->d->cache_miss(b, s, bio, sectors); ··· 678 679 if (KEY_DIRTY(k)) 679 680 s->read_dirty_data = true; 680 681 681 - n = bch_bio_split(bio, min_t(uint64_t, INT_MAX, 682 - KEY_OFFSET(k) - bio->bi_sector), 683 - GFP_NOIO, s->d->bio_split); 682 + n = bio_next_split(bio, min_t(uint64_t, INT_MAX, 683 + KEY_OFFSET(k) - bio->bi_iter.bi_sector), 684 + GFP_NOIO, s->d->bio_split); 684 685 685 686 bio_key = &container_of(n, struct bbio, bio)->key; 686 687 bch_bkey_copy_single_ptr(bio_key, k, ptr); 687 688 688 - bch_cut_front(&KEY(s->iop.inode, n->bi_sector, 0), bio_key); 689 + bch_cut_front(&KEY(s->iop.inode, n->bi_iter.bi_sector, 0), bio_key); 689 690 bch_cut_back(&KEY(s->iop.inode, bio_end_sector(n), 0), bio_key); 690 691 691 692 n->bi_end_io = bch_cache_read_endio; ··· 712 713 struct bio *bio = &s->bio.bio; 713 714 714 715 int ret = bch_btree_map_keys(&s->op, s->iop.c, 715 - &KEY(s->iop.inode, bio->bi_sector, 0), 716 + &KEY(s->iop.inode, bio->bi_iter.bi_sector, 0), 716 717 cache_lookup_fn, MAP_END_KEY); 717 718 if (ret == -EAGAIN) 718 719 continue_at(cl, cache_lookup, bcache_wq); ··· 757 758 static void do_bio_hook(struct search *s) 758 759 { 759 760 struct bio *bio = &s->bio.bio; 760 - memcpy(bio, s->orig_bio, sizeof(struct bio)); 761 761 762 + bio_init(bio); 763 + __bio_clone_fast(bio, s->orig_bio); 762 764 bio->bi_end_io = request_endio; 763 765 bio->bi_private = &s->cl; 766 + 764 767 atomic_set(&bio->bi_cnt, 3); 765 768 } 766 769 ··· 774 773 if (s->iop.bio) 775 774 bio_put(s->iop.bio); 776 775 777 - if (s->unaligned_bvec) 778 - mempool_free(s->bio.bio.bi_io_vec, s->d->unaligned_bvec); 779 - 780 776 closure_debug_destroy(cl); 781 777 mempool_free(s, s->d->c->search); 782 778 } ··· 781 783 static struct search *search_alloc(struct bio *bio, struct bcache_device *d) 782 784 { 783 785 struct search *s; 784 - struct bio_vec *bv; 785 786 786 787 s = mempool_alloc(d->c->search, GFP_NOIO); 787 788 memset(s, 0, offsetof(struct search, iop.insert_keys)); ··· 798 801 s->recoverable = 1; 799 802 s->start_time = jiffies; 800 803 do_bio_hook(s); 801 - 802 - if (bio->bi_size != bio_segments(bio) * PAGE_SIZE) { 803 - bv = mempool_alloc(d->unaligned_bvec, GFP_NOIO); 804 - memcpy(bv, bio_iovec(bio), 805 - sizeof(struct bio_vec) * bio_segments(bio)); 806 - 807 - s->bio.bio.bi_io_vec = bv; 808 - s->unaligned_bvec = 1; 809 - } 810 804 811 805 return s; 812 806 } ··· 837 849 { 838 850 struct search *s = container_of(cl, struct search, cl); 839 851 struct bio *bio = &s->bio.bio; 840 - struct bio_vec *bv; 841 - int i; 842 852 843 853 if (s->recoverable) { 844 854 /* Retry from the backing device: */ 845 855 trace_bcache_read_retry(s->orig_bio); 846 856 847 857 s->iop.error = 0; 848 - bv = s->bio.bio.bi_io_vec; 849 858 do_bio_hook(s); 850 - s->bio.bio.bi_io_vec = bv; 851 - 852 - if (!s->unaligned_bvec) 853 - bio_for_each_segment(bv, s->orig_bio, i) 854 - bv->bv_offset = 0, bv->bv_len = PAGE_SIZE; 855 - else 856 - memcpy(s->bio.bio.bi_io_vec, 857 - bio_iovec(s->orig_bio), 858 - sizeof(struct bio_vec) * 859 - bio_segments(s->orig_bio)); 860 859 861 860 /* XXX: invalidate cache */ 862 861 ··· 868 893 869 894 if (s->iop.bio) { 870 895 bio_reset(s->iop.bio); 871 - s->iop.bio->bi_sector = s->cache_miss->bi_sector; 896 + s->iop.bio->bi_iter.bi_sector = s->cache_miss->bi_iter.bi_sector; 872 897 
s->iop.bio->bi_bdev = s->cache_miss->bi_bdev; 873 - s->iop.bio->bi_size = s->insert_bio_sectors << 9; 898 + s->iop.bio->bi_iter.bi_size = s->insert_bio_sectors << 9; 874 899 bch_bio_map(s->iop.bio, NULL); 875 900 876 901 bio_copy_data(s->cache_miss, s->iop.bio); ··· 879 904 s->cache_miss = NULL; 880 905 } 881 906 882 - if (verify(dc, &s->bio.bio) && s->recoverable && 883 - !s->unaligned_bvec && !s->read_dirty_data) 907 + if (verify(dc, &s->bio.bio) && s->recoverable && !s->read_dirty_data) 884 908 bch_data_verify(dc, s->orig_bio); 885 909 886 910 bio_complete(s); ··· 919 945 struct bio *miss, *cache_bio; 920 946 921 947 if (s->cache_miss || s->iop.bypass) { 922 - miss = bch_bio_split(bio, sectors, GFP_NOIO, s->d->bio_split); 948 + miss = bio_next_split(bio, sectors, GFP_NOIO, s->d->bio_split); 923 949 ret = miss == bio ? MAP_DONE : MAP_CONTINUE; 924 950 goto out_submit; 925 951 } ··· 933 959 s->insert_bio_sectors = min(sectors, bio_sectors(bio) + reada); 934 960 935 961 s->iop.replace_key = KEY(s->iop.inode, 936 - bio->bi_sector + s->insert_bio_sectors, 962 + bio->bi_iter.bi_sector + s->insert_bio_sectors, 937 963 s->insert_bio_sectors); 938 964 939 965 ret = bch_btree_insert_check_key(b, &s->op, &s->iop.replace_key); ··· 942 968 943 969 s->iop.replace = true; 944 970 945 - miss = bch_bio_split(bio, sectors, GFP_NOIO, s->d->bio_split); 971 + miss = bio_next_split(bio, sectors, GFP_NOIO, s->d->bio_split); 946 972 947 973 /* btree_search_recurse()'s btree iterator is no good anymore */ 948 974 ret = miss == bio ? MAP_DONE : -EINTR; ··· 953 979 if (!cache_bio) 954 980 goto out_submit; 955 981 956 - cache_bio->bi_sector = miss->bi_sector; 957 - cache_bio->bi_bdev = miss->bi_bdev; 958 - cache_bio->bi_size = s->insert_bio_sectors << 9; 982 + cache_bio->bi_iter.bi_sector = miss->bi_iter.bi_sector; 983 + cache_bio->bi_bdev = miss->bi_bdev; 984 + cache_bio->bi_iter.bi_size = s->insert_bio_sectors << 9; 959 985 960 986 cache_bio->bi_end_io = request_endio; 961 987 cache_bio->bi_private = &s->cl; ··· 1005 1031 { 1006 1032 struct closure *cl = &s->cl; 1007 1033 struct bio *bio = &s->bio.bio; 1008 - struct bkey start = KEY(dc->disk.id, bio->bi_sector, 0); 1034 + struct bkey start = KEY(dc->disk.id, bio->bi_iter.bi_sector, 0); 1009 1035 struct bkey end = KEY(dc->disk.id, bio_end_sector(bio), 0); 1010 1036 1011 1037 bch_keybuf_check_overlapping(&s->iop.c->moving_gc_keys, &start, &end); ··· 1061 1087 closure_bio_submit(flush, cl, s->d); 1062 1088 } 1063 1089 } else { 1064 - s->iop.bio = bio_clone_bioset(bio, GFP_NOIO, 1065 - dc->disk.bio_split); 1090 + s->iop.bio = bio_clone_fast(bio, GFP_NOIO, dc->disk.bio_split); 1066 1091 1067 1092 closure_bio_submit(bio, cl, s->d); 1068 1093 } ··· 1099 1126 part_stat_unlock(); 1100 1127 1101 1128 bio->bi_bdev = dc->bdev; 1102 - bio->bi_sector += dc->sb.data_offset; 1129 + bio->bi_iter.bi_sector += dc->sb.data_offset; 1103 1130 1104 1131 if (cached_dev_get(dc)) { 1105 1132 s = search_alloc(bio, d); 1106 1133 trace_bcache_request_start(s->d, bio); 1107 1134 1108 - if (!bio->bi_size) { 1135 + if (!bio->bi_iter.bi_size) { 1109 1136 /* 1110 1137 * can't call bch_journal_meta from under 1111 1138 * generic_make_request ··· 1177 1204 static int flash_dev_cache_miss(struct btree *b, struct search *s, 1178 1205 struct bio *bio, unsigned sectors) 1179 1206 { 1180 - struct bio_vec *bv; 1181 - int i; 1207 + struct bio_vec bv; 1208 + struct bvec_iter iter; 1182 1209 1183 1210 /* Zero fill bio */ 1184 1211 1185 - bio_for_each_segment(bv, bio, i) { 1186 - unsigned j = min(bv->bv_len 
>> 9, sectors); 1212 + bio_for_each_segment(bv, bio, iter) { 1213 + unsigned j = min(bv.bv_len >> 9, sectors); 1187 1214 1188 - void *p = kmap(bv->bv_page); 1189 - memset(p + bv->bv_offset, 0, j << 9); 1190 - kunmap(bv->bv_page); 1215 + void *p = kmap(bv.bv_page); 1216 + memset(p + bv.bv_offset, 0, j << 9); 1217 + kunmap(bv.bv_page); 1191 1218 1192 1219 sectors -= j; 1193 1220 } 1194 1221 1195 - bio_advance(bio, min(sectors << 9, bio->bi_size)); 1222 + bio_advance(bio, min(sectors << 9, bio->bi_iter.bi_size)); 1196 1223 1197 - if (!bio->bi_size) 1224 + if (!bio->bi_iter.bi_size) 1198 1225 return MAP_DONE; 1199 1226 1200 1227 return MAP_CONTINUE; ··· 1228 1255 1229 1256 trace_bcache_request_start(s->d, bio); 1230 1257 1231 - if (!bio->bi_size) { 1258 + if (!bio->bi_iter.bi_size) { 1232 1259 /* 1233 1260 * can't call bch_journal_meta from under 1234 1261 * generic_make_request ··· 1238 1265 bcache_wq); 1239 1266 } else if (rw) { 1240 1267 bch_keybuf_check_overlapping(&s->iop.c->moving_gc_keys, 1241 - &KEY(d->id, bio->bi_sector, 0), 1268 + &KEY(d->id, bio->bi_iter.bi_sector, 0), 1242 1269 &KEY(d->id, bio_end_sector(bio), 0)); 1243 1270 1244 1271 s->iop.bypass = (bio->bi_rw & REQ_DISCARD) != 0;
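The flash_dev_cache_miss() hunk above shows the shape of the new iterator: bio_for_each_segment() now takes a struct bvec_iter and hands back each struct bio_vec by value, so drivers no longer index bi_io_vec[] through bi_idx. A minimal sketch of that idiom (the helper name is hypothetical; the iterator macros are the ones introduced by this series):

#include <linux/bio.h>
#include <linux/highmem.h>

/*
 * Hypothetical helper: byte-sum a bio's payload.  The iterator is a
 * local copy of the bio's position, so walking it never modifies the
 * bio itself - which is the point of immutable biovecs.
 */
static u32 example_sum_bio(struct bio *bio)
{
	struct bio_vec bv;
	struct bvec_iter iter;
	u32 sum = 0;
	unsigned i;

	bio_for_each_segment(bv, bio, iter) {
		u8 *p = kmap_atomic(bv.bv_page);

		for (i = 0; i < bv.bv_len; i++)
			sum += p[bv.bv_offset + i];
		kunmap_atomic(p);
	}
	return sum;
}

Because bv is produced by value, a bio that has already been partially advanced presents each segment with bv_offset and bv_len adjusted for the bytes consumed so far.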
+8 -12
drivers/md/bcache/super.c
··· 233 233 struct cache_sb *out = page_address(bio->bi_io_vec[0].bv_page); 234 234 unsigned i; 235 235 236 - bio->bi_sector = SB_SECTOR; 237 - bio->bi_rw = REQ_SYNC|REQ_META; 238 - bio->bi_size = SB_SIZE; 236 + bio->bi_iter.bi_sector = SB_SECTOR; 237 + bio->bi_rw = REQ_SYNC|REQ_META; 238 + bio->bi_iter.bi_size = SB_SIZE; 239 239 bch_bio_map(bio, NULL); 240 240 241 241 out->offset = cpu_to_le64(sb->offset); ··· 347 347 struct bio *bio = bch_bbio_alloc(c); 348 348 349 349 bio->bi_rw = REQ_SYNC|REQ_META|rw; 350 - bio->bi_size = KEY_SIZE(k) << 9; 350 + bio->bi_iter.bi_size = KEY_SIZE(k) << 9; 351 351 352 352 bio->bi_end_io = uuid_endio; 353 353 bio->bi_private = cl; ··· 503 503 504 504 closure_init_stack(cl); 505 505 506 - bio->bi_sector = bucket * ca->sb.bucket_size; 507 - bio->bi_bdev = ca->bdev; 508 - bio->bi_rw = REQ_SYNC|REQ_META|rw; 509 - bio->bi_size = bucket_bytes(ca); 506 + bio->bi_iter.bi_sector = bucket * ca->sb.bucket_size; 507 + bio->bi_bdev = ca->bdev; 508 + bio->bi_rw = REQ_SYNC|REQ_META|rw; 509 + bio->bi_iter.bi_size = bucket_bytes(ca); 510 510 511 511 bio->bi_end_io = prio_endio; 512 512 bio->bi_private = ca; ··· 739 739 } 740 740 741 741 bio_split_pool_free(&d->bio_split_hook); 742 - if (d->unaligned_bvec) 743 - mempool_destroy(d->unaligned_bvec); 744 742 if (d->bio_split) 745 743 bioset_free(d->bio_split); 746 744 if (is_vmalloc_addr(d->full_dirty_stripes)) ··· 791 793 return minor; 792 794 793 795 if (!(d->bio_split = bioset_create(4, offsetof(struct bbio, bio))) || 794 - !(d->unaligned_bvec = mempool_create_kmalloc_pool(1, 795 - sizeof(struct bio_vec) * BIO_MAX_PAGES)) || 796 796 bio_split_pool_init(&d->bio_split_hook) || 797 797 !(d->disk = alloc_disk(1))) { 798 798 ida_simple_remove(&bcache_minor, minor);
+2 -2
drivers/md/bcache/util.c
··· 224 224 225 225 void bch_bio_map(struct bio *bio, void *base) 226 226 { 227 - size_t size = bio->bi_size; 227 + size_t size = bio->bi_iter.bi_size; 228 228 struct bio_vec *bv = bio->bi_io_vec; 229 229 230 - BUG_ON(!bio->bi_size); 230 + BUG_ON(!bio->bi_iter.bi_size); 231 231 BUG_ON(bio->bi_vcnt); 232 232 233 233 bv->bv_offset = base ? ((unsigned long) base) % PAGE_SIZE : 0;
+3 -3
drivers/md/bcache/writeback.c
··· 111 111 if (!io->dc->writeback_percent) 112 112 bio_set_prio(bio, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0)); 113 113 114 - bio->bi_size = KEY_SIZE(&w->key) << 9; 114 + bio->bi_iter.bi_size = KEY_SIZE(&w->key) << 9; 115 115 bio->bi_max_vecs = DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS); 116 116 bio->bi_private = w; 117 117 bio->bi_io_vec = bio->bi_inline_vecs; ··· 184 184 185 185 dirty_init(w); 186 186 io->bio.bi_rw = WRITE; 187 - io->bio.bi_sector = KEY_START(&w->key); 187 + io->bio.bi_iter.bi_sector = KEY_START(&w->key); 188 188 io->bio.bi_bdev = io->dc->bdev; 189 189 io->bio.bi_end_io = dirty_endio; 190 190 ··· 253 253 io->dc = dc; 254 254 255 255 dirty_init(w); 256 - io->bio.bi_sector = PTR_OFFSET(&w->key, 0); 256 + io->bio.bi_iter.bi_sector = PTR_OFFSET(&w->key, 0); 257 257 io->bio.bi_bdev = PTR_CACHE(dc->disk.c, 258 258 &w->key, 0)->bdev; 259 259 io->bio.bi_rw = READ;
+1 -1
drivers/md/bcache/writeback.h
··· 50 50 return false; 51 51 52 52 if (dc->partial_stripes_expensive && 53 - bcache_dev_stripe_dirty(dc, bio->bi_sector, 53 + bcache_dev_stripe_dirty(dc, bio->bi_iter.bi_sector, 54 54 bio_sectors(bio))) 55 55 return true; 56 56
+3 -34
drivers/md/dm-bio-record.h
··· 17 17 * original bio state. 18 18 */ 19 19 20 - struct dm_bio_vec_details { 21 - #if PAGE_SIZE < 65536 22 - __u16 bv_len; 23 - __u16 bv_offset; 24 - #else 25 - unsigned bv_len; 26 - unsigned bv_offset; 27 - #endif 28 - }; 29 - 30 20 struct dm_bio_details { 31 - sector_t bi_sector; 32 21 struct block_device *bi_bdev; 33 - unsigned int bi_size; 34 - unsigned short bi_idx; 35 22 unsigned long bi_flags; 36 - struct dm_bio_vec_details bi_io_vec[BIO_MAX_PAGES]; 23 + struct bvec_iter bi_iter; 37 24 }; 38 25 39 26 static inline void dm_bio_record(struct dm_bio_details *bd, struct bio *bio) 40 27 { 41 - unsigned i; 42 - 43 - bd->bi_sector = bio->bi_sector; 44 28 bd->bi_bdev = bio->bi_bdev; 45 - bd->bi_size = bio->bi_size; 46 - bd->bi_idx = bio->bi_idx; 47 29 bd->bi_flags = bio->bi_flags; 48 - 49 - for (i = 0; i < bio->bi_vcnt; i++) { 50 - bd->bi_io_vec[i].bv_len = bio->bi_io_vec[i].bv_len; 51 - bd->bi_io_vec[i].bv_offset = bio->bi_io_vec[i].bv_offset; 52 - } 30 + bd->bi_iter = bio->bi_iter; 53 31 } 54 32 55 33 static inline void dm_bio_restore(struct dm_bio_details *bd, struct bio *bio) 56 34 { 57 - unsigned i; 58 - 59 - bio->bi_sector = bd->bi_sector; 60 35 bio->bi_bdev = bd->bi_bdev; 61 - bio->bi_size = bd->bi_size; 62 - bio->bi_idx = bd->bi_idx; 63 36 bio->bi_flags = bd->bi_flags; 64 - 65 - for (i = 0; i < bio->bi_vcnt; i++) { 66 - bio->bi_io_vec[i].bv_len = bd->bi_io_vec[i].bv_len; 67 - bio->bi_io_vec[i].bv_offset = bd->bi_io_vec[i].bv_offset; 68 - } 37 + bio->bi_iter = bd->bi_iter; 69 38 } 70 39 71 40 #endif
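With the position folded into struct bvec_iter, recording and restoring a bio for retry collapses to a couple of struct copies; the removed code had to save up to BIO_MAX_PAGES length/offset pairs because drivers used to trim the vector in place. A rough usage sketch (the target, its per-bio data and the helper names are hypothetical; dm_bio_record()/dm_bio_restore() are the helpers from the header above):

#include "dm-bio-record.h"

struct example_per_bio_data {
	struct dm_bio_details details;	/* saved routing state */
};

/* Remember where the bio was pointed before we remap it. */
static void example_map(struct bio *bio, struct example_per_bio_data *pb,
			struct block_device *dest, sector_t dest_sector)
{
	dm_bio_record(&pb->details, bio);	/* one bvec_iter copy */

	bio->bi_bdev = dest;
	bio->bi_iter.bi_sector = dest_sector;
}

/* On failure, put the original device/sector/iterator back and retry. */
static void example_requeue(struct bio *bio, struct example_per_bio_data *pb)
{
	dm_bio_restore(&pb->details, bio);
}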
+1 -1
drivers/md/dm-bufio.c
··· 540 540 bio_init(&b->bio); 541 541 b->bio.bi_io_vec = b->bio_vec; 542 542 b->bio.bi_max_vecs = DM_BUFIO_INLINE_VECS; 543 - b->bio.bi_sector = block << b->c->sectors_per_block_bits; 543 + b->bio.bi_iter.bi_sector = block << b->c->sectors_per_block_bits; 544 544 b->bio.bi_bdev = b->c->bdev; 545 545 b->bio.bi_end_io = end_io; 546 546
+2 -2
drivers/md/dm-cache-policy-mq.c
··· 72 72 73 73 static void iot_update_stats(struct io_tracker *t, struct bio *bio) 74 74 { 75 - if (bio->bi_sector == from_oblock(t->last_end_oblock) + 1) 75 + if (bio->bi_iter.bi_sector == from_oblock(t->last_end_oblock) + 1) 76 76 t->nr_seq_samples++; 77 77 else { 78 78 /* ··· 87 87 t->nr_rand_samples++; 88 88 } 89 89 90 - t->last_end_oblock = to_oblock(bio->bi_sector + bio_sectors(bio) - 1); 90 + t->last_end_oblock = to_oblock(bio_end_sector(bio) - 1); 91 91 } 92 92 93 93 static void iot_check_for_pattern_switch(struct io_tracker *t)
+18 -10
drivers/md/dm-cache-target.c
··· 85 85 { 86 86 bio->bi_end_io = h->bi_end_io; 87 87 bio->bi_private = h->bi_private; 88 + 89 + /* 90 + * Must bump bi_remaining to allow bio to complete with 91 + * restored bi_end_io. 92 + */ 93 + atomic_inc(&bio->bi_remaining); 88 94 } 89 95 90 96 /*----------------------------------------------------------------*/ ··· 670 664 static void remap_to_cache(struct cache *cache, struct bio *bio, 671 665 dm_cblock_t cblock) 672 666 { 673 - sector_t bi_sector = bio->bi_sector; 667 + sector_t bi_sector = bio->bi_iter.bi_sector; 674 668 675 669 bio->bi_bdev = cache->cache_dev->bdev; 676 670 if (!block_size_is_power_of_two(cache)) 677 - bio->bi_sector = (from_cblock(cblock) * cache->sectors_per_block) + 678 - sector_div(bi_sector, cache->sectors_per_block); 671 + bio->bi_iter.bi_sector = 672 + (from_cblock(cblock) * cache->sectors_per_block) + 673 + sector_div(bi_sector, cache->sectors_per_block); 679 674 else 680 - bio->bi_sector = (from_cblock(cblock) << cache->sectors_per_block_shift) | 681 - (bi_sector & (cache->sectors_per_block - 1)); 675 + bio->bi_iter.bi_sector = 676 + (from_cblock(cblock) << cache->sectors_per_block_shift) | 677 + (bi_sector & (cache->sectors_per_block - 1)); 682 678 } 683 679 684 680 static void check_if_tick_bio_needed(struct cache *cache, struct bio *bio) ··· 720 712 721 713 static dm_oblock_t get_bio_block(struct cache *cache, struct bio *bio) 722 714 { 723 - sector_t block_nr = bio->bi_sector; 715 + sector_t block_nr = bio->bi_iter.bi_sector; 724 716 725 717 if (!block_size_is_power_of_two(cache)) 726 718 (void) sector_div(block_nr, cache->sectors_per_block); ··· 1035 1027 static bool bio_writes_complete_block(struct cache *cache, struct bio *bio) 1036 1028 { 1037 1029 return (bio_data_dir(bio) == WRITE) && 1038 - (bio->bi_size == (cache->sectors_per_block << SECTOR_SHIFT)); 1030 + (bio->bi_iter.bi_size == (cache->sectors_per_block << SECTOR_SHIFT)); 1039 1031 } 1040 1032 1041 1033 static void avoid_copy(struct dm_cache_migration *mg) ··· 1260 1252 size_t pb_data_size = get_per_bio_data_size(cache); 1261 1253 struct per_bio_data *pb = get_per_bio_data(bio, pb_data_size); 1262 1254 1263 - BUG_ON(bio->bi_size); 1255 + BUG_ON(bio->bi_iter.bi_size); 1264 1256 if (!pb->req_nr) 1265 1257 remap_to_origin(cache, bio); 1266 1258 else ··· 1283 1275 */ 1284 1276 static void process_discard_bio(struct cache *cache, struct bio *bio) 1285 1277 { 1286 - dm_block_t start_block = dm_sector_div_up(bio->bi_sector, 1278 + dm_block_t start_block = dm_sector_div_up(bio->bi_iter.bi_sector, 1287 1279 cache->discard_block_size); 1288 - dm_block_t end_block = bio->bi_sector + bio_sectors(bio); 1280 + dm_block_t end_block = bio_end_sector(bio); 1289 1281 dm_block_t b; 1290 1282 1291 1283 end_block = block_div(end_block, cache->discard_block_size);
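The atomic_inc(&bio->bi_remaining) added above is a consequence of the generic bio chaining introduced by this series: bio_endio() now decrements bi_remaining, so a target that temporarily hijacks bi_end_io must add that reference back when it restores the original handler, or the eventual completion is silently swallowed. A sketch of the hook/unhook pair (names are hypothetical; the fields are the 3.14 ones):

#include <linux/bio.h>

struct example_hook {
	bio_end_io_t *saved_end_io;
	void *saved_private;
};

/* Divert completion of @bio to @end_io, remembering the old handler. */
static void example_hook_endio(struct example_hook *h, struct bio *bio,
			       bio_end_io_t *end_io, void *private)
{
	h->saved_end_io = bio->bi_end_io;
	h->saved_private = bio->bi_private;
	bio->bi_end_io = end_io;
	bio->bi_private = private;
}

/*
 * Put the original handler back.  Our interposed handler already went
 * through bio_endio() once, which dropped a bi_remaining reference, so
 * restore it before the bio is completed (or resubmitted) again.
 */
static void example_unhook_endio(struct example_hook *h, struct bio *bio)
{
	bio->bi_end_io = h->saved_end_io;
	bio->bi_private = h->saved_private;
	atomic_inc(&bio->bi_remaining);
}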
+26 -38
drivers/md/dm-crypt.c
··· 39 39 struct completion restart; 40 40 struct bio *bio_in; 41 41 struct bio *bio_out; 42 - unsigned int offset_in; 43 - unsigned int offset_out; 44 - unsigned int idx_in; 45 - unsigned int idx_out; 42 + struct bvec_iter iter_in; 43 + struct bvec_iter iter_out; 46 44 sector_t cc_sector; 47 45 atomic_t cc_pending; 48 46 }; ··· 824 826 { 825 827 ctx->bio_in = bio_in; 826 828 ctx->bio_out = bio_out; 827 - ctx->offset_in = 0; 828 - ctx->offset_out = 0; 829 - ctx->idx_in = bio_in ? bio_in->bi_idx : 0; 830 - ctx->idx_out = bio_out ? bio_out->bi_idx : 0; 829 + if (bio_in) 830 + ctx->iter_in = bio_in->bi_iter; 831 + if (bio_out) 832 + ctx->iter_out = bio_out->bi_iter; 831 833 ctx->cc_sector = sector + cc->iv_offset; 832 834 init_completion(&ctx->restart); 833 835 } ··· 855 857 struct convert_context *ctx, 856 858 struct ablkcipher_request *req) 857 859 { 858 - struct bio_vec *bv_in = bio_iovec_idx(ctx->bio_in, ctx->idx_in); 859 - struct bio_vec *bv_out = bio_iovec_idx(ctx->bio_out, ctx->idx_out); 860 + struct bio_vec bv_in = bio_iter_iovec(ctx->bio_in, ctx->iter_in); 861 + struct bio_vec bv_out = bio_iter_iovec(ctx->bio_out, ctx->iter_out); 860 862 struct dm_crypt_request *dmreq; 861 863 u8 *iv; 862 864 int r; ··· 867 869 dmreq->iv_sector = ctx->cc_sector; 868 870 dmreq->ctx = ctx; 869 871 sg_init_table(&dmreq->sg_in, 1); 870 - sg_set_page(&dmreq->sg_in, bv_in->bv_page, 1 << SECTOR_SHIFT, 871 - bv_in->bv_offset + ctx->offset_in); 872 + sg_set_page(&dmreq->sg_in, bv_in.bv_page, 1 << SECTOR_SHIFT, 873 + bv_in.bv_offset); 872 874 873 875 sg_init_table(&dmreq->sg_out, 1); 874 - sg_set_page(&dmreq->sg_out, bv_out->bv_page, 1 << SECTOR_SHIFT, 875 - bv_out->bv_offset + ctx->offset_out); 876 + sg_set_page(&dmreq->sg_out, bv_out.bv_page, 1 << SECTOR_SHIFT, 877 + bv_out.bv_offset); 876 878 877 - ctx->offset_in += 1 << SECTOR_SHIFT; 878 - if (ctx->offset_in >= bv_in->bv_len) { 879 - ctx->offset_in = 0; 880 - ctx->idx_in++; 881 - } 882 - 883 - ctx->offset_out += 1 << SECTOR_SHIFT; 884 - if (ctx->offset_out >= bv_out->bv_len) { 885 - ctx->offset_out = 0; 886 - ctx->idx_out++; 887 - } 879 + bio_advance_iter(ctx->bio_in, &ctx->iter_in, 1 << SECTOR_SHIFT); 880 + bio_advance_iter(ctx->bio_out, &ctx->iter_out, 1 << SECTOR_SHIFT); 888 881 889 882 if (cc->iv_gen_ops) { 890 883 r = cc->iv_gen_ops->generator(cc, iv, dmreq); ··· 926 937 927 938 atomic_set(&ctx->cc_pending, 1); 928 939 929 - while(ctx->idx_in < ctx->bio_in->bi_vcnt && 930 - ctx->idx_out < ctx->bio_out->bi_vcnt) { 940 + while (ctx->iter_in.bi_size && ctx->iter_out.bi_size) { 931 941 932 942 crypt_alloc_req(cc, ctx); 933 943 ··· 1009 1021 size -= len; 1010 1022 } 1011 1023 1012 - if (!clone->bi_size) { 1024 + if (!clone->bi_iter.bi_size) { 1013 1025 bio_put(clone); 1014 1026 return NULL; 1015 1027 } ··· 1149 1161 crypt_inc_pending(io); 1150 1162 1151 1163 clone_init(io, clone); 1152 - clone->bi_sector = cc->start + io->sector; 1164 + clone->bi_iter.bi_sector = cc->start + io->sector; 1153 1165 1154 1166 generic_make_request(clone); 1155 1167 return 0; ··· 1195 1207 } 1196 1208 1197 1209 /* crypt_convert should have filled the clone bio */ 1198 - BUG_ON(io->ctx.idx_out < clone->bi_vcnt); 1210 + BUG_ON(io->ctx.iter_out.bi_size); 1199 1211 1200 - clone->bi_sector = cc->start + io->sector; 1212 + clone->bi_iter.bi_sector = cc->start + io->sector; 1201 1213 1202 1214 if (async) 1203 1215 kcryptd_queue_io(io); ··· 1212 1224 struct dm_crypt_io *new_io; 1213 1225 int crypt_finished; 1214 1226 unsigned out_of_pages = 0; 1215 - unsigned remaining = 
io->base_bio->bi_size; 1227 + unsigned remaining = io->base_bio->bi_iter.bi_size; 1216 1228 sector_t sector = io->sector; 1217 1229 int r; 1218 1230 ··· 1234 1246 } 1235 1247 1236 1248 io->ctx.bio_out = clone; 1237 - io->ctx.idx_out = 0; 1249 + io->ctx.iter_out = clone->bi_iter; 1238 1250 1239 - remaining -= clone->bi_size; 1251 + remaining -= clone->bi_iter.bi_size; 1240 1252 sector += bio_sectors(clone); 1241 1253 1242 1254 crypt_inc_pending(io); ··· 1278 1290 crypt_inc_pending(new_io); 1279 1291 crypt_convert_init(cc, &new_io->ctx, NULL, 1280 1292 io->base_bio, sector); 1281 - new_io->ctx.idx_in = io->ctx.idx_in; 1282 - new_io->ctx.offset_in = io->ctx.offset_in; 1293 + new_io->ctx.iter_in = io->ctx.iter_in; 1283 1294 1284 1295 /* 1285 1296 * Fragments after the first use the base_io ··· 1856 1869 if (unlikely(bio->bi_rw & (REQ_FLUSH | REQ_DISCARD))) { 1857 1870 bio->bi_bdev = cc->dev->bdev; 1858 1871 if (bio_sectors(bio)) 1859 - bio->bi_sector = cc->start + dm_target_offset(ti, bio->bi_sector); 1872 + bio->bi_iter.bi_sector = cc->start + 1873 + dm_target_offset(ti, bio->bi_iter.bi_sector); 1860 1874 return DM_MAPIO_REMAPPED; 1861 1875 } 1862 1876 1863 - io = crypt_io_alloc(cc, bio, dm_target_offset(ti, bio->bi_sector)); 1877 + io = crypt_io_alloc(cc, bio, dm_target_offset(ti, bio->bi_iter.bi_sector)); 1864 1878 1865 1879 if (bio_data_dir(io->base_bio) == READ) { 1866 1880 if (kcryptd_io_read(io, GFP_NOWAIT))
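crypt_convert_block() above illustrates another common conversion pattern: instead of hand-maintained idx/offset pairs, the code keeps private struct bvec_iter copies and moves them with bio_advance_iter(). A stripped-down sketch of that lockstep walk (hypothetical function; 512-byte steps as in dm-crypt):

#include <linux/bio.h>

/*
 * Walk an input and an output bio in lockstep, one 512-byte sector at
 * a time.  Only the local iterators move; the bios' own bi_iter stays
 * untouched, so the caller can still complete or resubmit them intact.
 */
static void example_walk_pair(struct bio *in, struct bio *out)
{
	struct bvec_iter iter_in = in->bi_iter;
	struct bvec_iter iter_out = out->bi_iter;

	while (iter_in.bi_size && iter_out.bi_size) {
		struct bio_vec bv_in = bio_iter_iovec(in, iter_in);
		struct bio_vec bv_out = bio_iter_iovec(out, iter_out);

		/*
		 * Process one sector here:
		 *   source: bv_in.bv_page  + bv_in.bv_offset
		 *   dest:   bv_out.bv_page + bv_out.bv_offset
		 */

		bio_advance_iter(in, &iter_in, 512);
		bio_advance_iter(out, &iter_out, 512);
	}
}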
+4 -3
drivers/md/dm-delay.c
··· 277 277 if ((bio_data_dir(bio) == WRITE) && (dc->dev_write)) { 278 278 bio->bi_bdev = dc->dev_write->bdev; 279 279 if (bio_sectors(bio)) 280 - bio->bi_sector = dc->start_write + 281 - dm_target_offset(ti, bio->bi_sector); 280 + bio->bi_iter.bi_sector = dc->start_write + 281 + dm_target_offset(ti, bio->bi_iter.bi_sector); 282 282 283 283 return delay_bio(dc, dc->write_delay, bio); 284 284 } 285 285 286 286 bio->bi_bdev = dc->dev_read->bdev; 287 - bio->bi_sector = dc->start_read + dm_target_offset(ti, bio->bi_sector); 287 + bio->bi_iter.bi_sector = dc->start_read + 288 + dm_target_offset(ti, bio->bi_iter.bi_sector); 288 289 289 290 return delay_bio(dc, dc->read_delay, bio); 290 291 }
+4 -3
drivers/md/dm-flakey.c
··· 248 248 249 249 bio->bi_bdev = fc->dev->bdev; 250 250 if (bio_sectors(bio)) 251 - bio->bi_sector = flakey_map_sector(ti, bio->bi_sector); 251 + bio->bi_iter.bi_sector = 252 + flakey_map_sector(ti, bio->bi_iter.bi_sector); 252 253 } 253 254 254 255 static void corrupt_bio_data(struct bio *bio, struct flakey_c *fc) ··· 266 265 DMDEBUG("Corrupting data bio=%p by writing %u to byte %u " 267 266 "(rw=%c bi_rw=%lu bi_sector=%llu cur_bytes=%u)\n", 268 267 bio, fc->corrupt_bio_value, fc->corrupt_bio_byte, 269 - (bio_data_dir(bio) == WRITE) ? 'w' : 'r', 270 - bio->bi_rw, (unsigned long long)bio->bi_sector, bio_bytes); 268 + (bio_data_dir(bio) == WRITE) ? 'w' : 'r', bio->bi_rw, 269 + (unsigned long long)bio->bi_iter.bi_sector, bio_bytes); 271 270 } 272 271 } 273 272
+20 -17
drivers/md/dm-io.c
··· 201 201 /* 202 202 * Functions for getting the pages from a bvec. 203 203 */ 204 - static void bvec_get_page(struct dpages *dp, 204 + static void bio_get_page(struct dpages *dp, 205 205 struct page **p, unsigned long *len, unsigned *offset) 206 206 { 207 - struct bio_vec *bvec = (struct bio_vec *) dp->context_ptr; 208 - *p = bvec->bv_page; 209 - *len = bvec->bv_len; 210 - *offset = bvec->bv_offset; 207 + struct bio *bio = dp->context_ptr; 208 + struct bio_vec bvec = bio_iovec(bio); 209 + *p = bvec.bv_page; 210 + *len = bvec.bv_len; 211 + *offset = bvec.bv_offset; 211 212 } 212 213 213 - static void bvec_next_page(struct dpages *dp) 214 + static void bio_next_page(struct dpages *dp) 214 215 { 215 - struct bio_vec *bvec = (struct bio_vec *) dp->context_ptr; 216 - dp->context_ptr = bvec + 1; 216 + struct bio *bio = dp->context_ptr; 217 + struct bio_vec bvec = bio_iovec(bio); 218 + 219 + bio_advance(bio, bvec.bv_len); 217 220 } 218 221 219 - static void bvec_dp_init(struct dpages *dp, struct bio_vec *bvec) 222 + static void bio_dp_init(struct dpages *dp, struct bio *bio) 220 223 { 221 - dp->get_page = bvec_get_page; 222 - dp->next_page = bvec_next_page; 223 - dp->context_ptr = bvec; 224 + dp->get_page = bio_get_page; 225 + dp->next_page = bio_next_page; 226 + dp->context_ptr = bio; 224 227 } 225 228 226 229 /* ··· 307 304 dm_sector_div_up(remaining, (PAGE_SIZE >> SECTOR_SHIFT))); 308 305 309 306 bio = bio_alloc_bioset(GFP_NOIO, num_bvecs, io->client->bios); 310 - bio->bi_sector = where->sector + (where->count - remaining); 307 + bio->bi_iter.bi_sector = where->sector + (where->count - remaining); 311 308 bio->bi_bdev = where->bdev; 312 309 bio->bi_end_io = endio; 313 310 store_io_and_region_in_bio(bio, io, region); 314 311 315 312 if (rw & REQ_DISCARD) { 316 313 num_sectors = min_t(sector_t, q->limits.max_discard_sectors, remaining); 317 - bio->bi_size = num_sectors << SECTOR_SHIFT; 314 + bio->bi_iter.bi_size = num_sectors << SECTOR_SHIFT; 318 315 remaining -= num_sectors; 319 316 } else if (rw & REQ_WRITE_SAME) { 320 317 /* ··· 323 320 dp->get_page(dp, &page, &len, &offset); 324 321 bio_add_page(bio, page, logical_block_size, offset); 325 322 num_sectors = min_t(sector_t, q->limits.max_write_same_sectors, remaining); 326 - bio->bi_size = num_sectors << SECTOR_SHIFT; 323 + bio->bi_iter.bi_size = num_sectors << SECTOR_SHIFT; 327 324 328 325 offset = 0; 329 326 remaining -= num_sectors; ··· 460 457 list_dp_init(dp, io_req->mem.ptr.pl, io_req->mem.offset); 461 458 break; 462 459 463 - case DM_IO_BVEC: 464 - bvec_dp_init(dp, io_req->mem.ptr.bvec); 460 + case DM_IO_BIO: 461 + bio_dp_init(dp, io_req->mem.ptr.bio); 465 462 break; 466 463 467 464 case DM_IO_VMA:
+2 -1
drivers/md/dm-linear.c
··· 85 85 86 86 bio->bi_bdev = lc->dev->bdev; 87 87 if (bio_sectors(bio)) 88 - bio->bi_sector = linear_map_sector(ti, bio->bi_sector); 88 + bio->bi_iter.bi_sector = 89 + linear_map_sector(ti, bio->bi_iter.bi_sector); 89 90 } 90 91 91 92 static int linear_map(struct dm_target *ti, struct bio *bio)
+10 -10
drivers/md/dm-raid1.c
··· 432 432 region_t region = dm_rh_bio_to_region(ms->rh, bio); 433 433 434 434 if (log->type->in_sync(log, region, 0)) 435 - return choose_mirror(ms, bio->bi_sector) ? 1 : 0; 435 + return choose_mirror(ms, bio->bi_iter.bi_sector) ? 1 : 0; 436 436 437 437 return 0; 438 438 } ··· 442 442 */ 443 443 static sector_t map_sector(struct mirror *m, struct bio *bio) 444 444 { 445 - if (unlikely(!bio->bi_size)) 445 + if (unlikely(!bio->bi_iter.bi_size)) 446 446 return 0; 447 - return m->offset + dm_target_offset(m->ms->ti, bio->bi_sector); 447 + return m->offset + dm_target_offset(m->ms->ti, bio->bi_iter.bi_sector); 448 448 } 449 449 450 450 static void map_bio(struct mirror *m, struct bio *bio) 451 451 { 452 452 bio->bi_bdev = m->dev->bdev; 453 - bio->bi_sector = map_sector(m, bio); 453 + bio->bi_iter.bi_sector = map_sector(m, bio); 454 454 } 455 455 456 456 static void map_region(struct dm_io_region *io, struct mirror *m, ··· 526 526 struct dm_io_region io; 527 527 struct dm_io_request io_req = { 528 528 .bi_rw = READ, 529 - .mem.type = DM_IO_BVEC, 530 - .mem.ptr.bvec = bio->bi_io_vec + bio->bi_idx, 529 + .mem.type = DM_IO_BIO, 530 + .mem.ptr.bio = bio, 531 531 .notify.fn = read_callback, 532 532 .notify.context = bio, 533 533 .client = m->ms->io_client, ··· 559 559 * We can only read balance if the region is in sync. 560 560 */ 561 561 if (likely(region_in_sync(ms, region, 1))) 562 - m = choose_mirror(ms, bio->bi_sector); 562 + m = choose_mirror(ms, bio->bi_iter.bi_sector); 563 563 else if (m && atomic_read(&m->error_count)) 564 564 m = NULL; 565 565 ··· 629 629 struct mirror *m; 630 630 struct dm_io_request io_req = { 631 631 .bi_rw = WRITE | (bio->bi_rw & WRITE_FLUSH_FUA), 632 - .mem.type = DM_IO_BVEC, 633 - .mem.ptr.bvec = bio->bi_io_vec + bio->bi_idx, 632 + .mem.type = DM_IO_BIO, 633 + .mem.ptr.bio = bio, 634 634 .notify.fn = write_callback, 635 635 .notify.context = bio, 636 636 .client = ms->io_client, ··· 1181 1181 * The region is in-sync and we can perform reads directly. 1182 1182 * Store enough information so we can retry if it fails. 1183 1183 */ 1184 - m = choose_mirror(ms, bio->bi_sector); 1184 + m = choose_mirror(ms, bio->bi_iter.bi_sector); 1185 1185 if (unlikely(!m)) 1186 1186 return -EIO; 1187 1187
+2 -1
drivers/md/dm-region-hash.c
··· 126 126 127 127 region_t dm_rh_bio_to_region(struct dm_region_hash *rh, struct bio *bio) 128 128 { 129 - return dm_rh_sector_to_region(rh, bio->bi_sector - rh->target_begin); 129 + return dm_rh_sector_to_region(rh, bio->bi_iter.bi_sector - 130 + rh->target_begin); 130 131 } 131 132 EXPORT_SYMBOL_GPL(dm_rh_bio_to_region); 132 133
+10 -9
drivers/md/dm-snap.c
··· 1438 1438 if (full_bio) { 1439 1439 full_bio->bi_end_io = pe->full_bio_end_io; 1440 1440 full_bio->bi_private = pe->full_bio_private; 1441 + atomic_inc(&full_bio->bi_remaining); 1441 1442 } 1442 1443 free_pending_exception(pe); 1443 1444 ··· 1620 1619 struct bio *bio, chunk_t chunk) 1621 1620 { 1622 1621 bio->bi_bdev = s->cow->bdev; 1623 - bio->bi_sector = chunk_to_sector(s->store, 1624 - dm_chunk_number(e->new_chunk) + 1625 - (chunk - e->old_chunk)) + 1626 - (bio->bi_sector & 1627 - s->store->chunk_mask); 1622 + bio->bi_iter.bi_sector = 1623 + chunk_to_sector(s->store, dm_chunk_number(e->new_chunk) + 1624 + (chunk - e->old_chunk)) + 1625 + (bio->bi_iter.bi_sector & s->store->chunk_mask); 1628 1626 } 1629 1627 1630 1628 static int snapshot_map(struct dm_target *ti, struct bio *bio) ··· 1641 1641 return DM_MAPIO_REMAPPED; 1642 1642 } 1643 1643 1644 - chunk = sector_to_chunk(s->store, bio->bi_sector); 1644 + chunk = sector_to_chunk(s->store, bio->bi_iter.bi_sector); 1645 1645 1646 1646 /* Full snapshots are not usable */ 1647 1647 /* To get here the table must be live so s->active is always set. */ ··· 1702 1702 r = DM_MAPIO_SUBMITTED; 1703 1703 1704 1704 if (!pe->started && 1705 - bio->bi_size == (s->store->chunk_size << SECTOR_SHIFT)) { 1705 + bio->bi_iter.bi_size == 1706 + (s->store->chunk_size << SECTOR_SHIFT)) { 1706 1707 pe->started = 1; 1707 1708 up_write(&s->lock); 1708 1709 start_full_bio(pe, bio); ··· 1759 1758 return DM_MAPIO_REMAPPED; 1760 1759 } 1761 1760 1762 - chunk = sector_to_chunk(s->store, bio->bi_sector); 1761 + chunk = sector_to_chunk(s->store, bio->bi_iter.bi_sector); 1763 1762 1764 1763 down_write(&s->lock); 1765 1764 ··· 2096 2095 down_read(&_origins_lock); 2097 2096 o = __lookup_origin(origin->bdev); 2098 2097 if (o) 2099 - r = __origin_write(&o->snapshots, bio->bi_sector, bio); 2098 + r = __origin_write(&o->snapshots, bio->bi_iter.bi_sector, bio); 2100 2099 up_read(&_origins_lock); 2101 2100 2102 2101 return r;
+8 -5
drivers/md/dm-stripe.c
··· 259 259 { 260 260 sector_t begin, end; 261 261 262 - stripe_map_range_sector(sc, bio->bi_sector, target_stripe, &begin); 262 + stripe_map_range_sector(sc, bio->bi_iter.bi_sector, 263 + target_stripe, &begin); 263 264 stripe_map_range_sector(sc, bio_end_sector(bio), 264 265 target_stripe, &end); 265 266 if (begin < end) { 266 267 bio->bi_bdev = sc->stripe[target_stripe].dev->bdev; 267 - bio->bi_sector = begin + sc->stripe[target_stripe].physical_start; 268 - bio->bi_size = to_bytes(end - begin); 268 + bio->bi_iter.bi_sector = begin + 269 + sc->stripe[target_stripe].physical_start; 270 + bio->bi_iter.bi_size = to_bytes(end - begin); 269 271 return DM_MAPIO_REMAPPED; 270 272 } else { 271 273 /* The range doesn't map to the target stripe */ ··· 295 293 return stripe_map_range(sc, bio, target_bio_nr); 296 294 } 297 295 298 - stripe_map_sector(sc, bio->bi_sector, &stripe, &bio->bi_sector); 296 + stripe_map_sector(sc, bio->bi_iter.bi_sector, 297 + &stripe, &bio->bi_iter.bi_sector); 299 298 300 - bio->bi_sector += sc->stripe[stripe].physical_start; 299 + bio->bi_iter.bi_sector += sc->stripe[stripe].physical_start; 301 300 bio->bi_bdev = sc->stripe[stripe].dev->bdev; 302 301 303 302 return DM_MAPIO_REMAPPED;
+2 -2
drivers/md/dm-switch.c
··· 311 311 static int switch_map(struct dm_target *ti, struct bio *bio) 312 312 { 313 313 struct switch_ctx *sctx = ti->private; 314 - sector_t offset = dm_target_offset(ti, bio->bi_sector); 314 + sector_t offset = dm_target_offset(ti, bio->bi_iter.bi_sector); 315 315 unsigned path_nr = switch_get_path_nr(sctx, offset); 316 316 317 317 bio->bi_bdev = sctx->path_list[path_nr].dmdev->bdev; 318 - bio->bi_sector = sctx->path_list[path_nr].start + offset; 318 + bio->bi_iter.bi_sector = sctx->path_list[path_nr].start + offset; 319 319 320 320 return DM_MAPIO_REMAPPED; 321 321 }
+18 -12
drivers/md/dm-thin.c
··· 414 414 static dm_block_t get_bio_block(struct thin_c *tc, struct bio *bio) 415 415 { 416 416 struct pool *pool = tc->pool; 417 - sector_t block_nr = bio->bi_sector; 417 + sector_t block_nr = bio->bi_iter.bi_sector; 418 418 419 419 if (block_size_is_power_of_two(pool)) 420 420 block_nr >>= pool->sectors_per_block_shift; ··· 427 427 static void remap(struct thin_c *tc, struct bio *bio, dm_block_t block) 428 428 { 429 429 struct pool *pool = tc->pool; 430 - sector_t bi_sector = bio->bi_sector; 430 + sector_t bi_sector = bio->bi_iter.bi_sector; 431 431 432 432 bio->bi_bdev = tc->pool_dev->bdev; 433 433 if (block_size_is_power_of_two(pool)) 434 - bio->bi_sector = (block << pool->sectors_per_block_shift) | 435 - (bi_sector & (pool->sectors_per_block - 1)); 434 + bio->bi_iter.bi_sector = 435 + (block << pool->sectors_per_block_shift) | 436 + (bi_sector & (pool->sectors_per_block - 1)); 436 437 else 437 - bio->bi_sector = (block * pool->sectors_per_block) + 438 + bio->bi_iter.bi_sector = (block * pool->sectors_per_block) + 438 439 sector_div(bi_sector, pool->sectors_per_block); 439 440 } 440 441 ··· 613 612 614 613 static void process_prepared_mapping_fail(struct dm_thin_new_mapping *m) 615 614 { 616 - if (m->bio) 615 + if (m->bio) { 617 616 m->bio->bi_end_io = m->saved_bi_end_io; 617 + atomic_inc(&m->bio->bi_remaining); 618 + } 618 619 cell_error(m->tc->pool, m->cell); 619 620 list_del(&m->list); 620 621 mempool_free(m, m->tc->pool->mapping_pool); ··· 630 627 int r; 631 628 632 629 bio = m->bio; 633 - if (bio) 630 + if (bio) { 634 631 bio->bi_end_io = m->saved_bi_end_io; 632 + atomic_inc(&bio->bi_remaining); 633 + } 635 634 636 635 if (m->err) { 637 636 cell_error(pool, m->cell); ··· 736 731 */ 737 732 static int io_overlaps_block(struct pool *pool, struct bio *bio) 738 733 { 739 - return bio->bi_size == (pool->sectors_per_block << SECTOR_SHIFT); 734 + return bio->bi_iter.bi_size == 735 + (pool->sectors_per_block << SECTOR_SHIFT); 740 736 } 741 737 742 738 static int io_overwrites_block(struct pool *pool, struct bio *bio) ··· 1142 1136 if (bio_detain(pool, &key, bio, &cell)) 1143 1137 return; 1144 1138 1145 - if (bio_data_dir(bio) == WRITE && bio->bi_size) 1139 + if (bio_data_dir(bio) == WRITE && bio->bi_iter.bi_size) 1146 1140 break_sharing(tc, bio, block, &key, lookup_result, cell); 1147 1141 else { 1148 1142 struct dm_thin_endio_hook *h = dm_per_bio_data(bio, sizeof(struct dm_thin_endio_hook)); ··· 1165 1159 /* 1166 1160 * Remap empty bios (flushes) immediately, without provisioning. 1167 1161 */ 1168 - if (!bio->bi_size) { 1162 + if (!bio->bi_iter.bi_size) { 1169 1163 inc_all_io_entry(pool, bio); 1170 1164 cell_defer_no_holder(tc, cell); 1171 1165 ··· 1264 1258 r = dm_thin_find_block(tc->td, block, 1, &lookup_result); 1265 1259 switch (r) { 1266 1260 case 0: 1267 - if (lookup_result.shared && (rw == WRITE) && bio->bi_size) 1261 + if (lookup_result.shared && (rw == WRITE) && bio->bi_iter.bi_size) 1268 1262 handle_unserviceable_bio(tc->pool, bio); 1269 1263 else { 1270 1264 inc_all_io_entry(tc->pool, bio); ··· 2945 2939 2946 2940 static int thin_map(struct dm_target *ti, struct bio *bio) 2947 2941 { 2948 - bio->bi_sector = dm_target_offset(ti, bio->bi_sector); 2942 + bio->bi_iter.bi_sector = dm_target_offset(ti, bio->bi_iter.bi_sector); 2949 2943 2950 2944 return thin_bio_map(ti, bio); 2951 2945 }
+18 -42
drivers/md/dm-verity.c
··· 73 73 sector_t block; 74 74 unsigned n_blocks; 75 75 76 - /* saved bio vector */ 77 - struct bio_vec *io_vec; 78 - unsigned io_vec_size; 76 + struct bvec_iter iter; 79 77 80 78 struct work_struct work; 81 - 82 - /* A space for short vectors; longer vectors are allocated separately. */ 83 - struct bio_vec io_vec_inline[DM_VERITY_IO_VEC_INLINE]; 84 79 85 80 /* 86 81 * Three variably-size fields follow this struct: ··· 279 284 static int verity_verify_io(struct dm_verity_io *io) 280 285 { 281 286 struct dm_verity *v = io->v; 287 + struct bio *bio = dm_bio_from_per_bio_data(io, 288 + v->ti->per_bio_data_size); 282 289 unsigned b; 283 290 int i; 284 - unsigned vector = 0, offset = 0; 285 291 286 292 for (b = 0; b < io->n_blocks; b++) { 287 293 struct shash_desc *desc; ··· 332 336 } 333 337 334 338 todo = 1 << v->data_dev_block_bits; 335 - do { 336 - struct bio_vec *bv; 339 + while (io->iter.bi_size) { 337 340 u8 *page; 338 - unsigned len; 341 + struct bio_vec bv = bio_iter_iovec(bio, io->iter); 339 342 340 - BUG_ON(vector >= io->io_vec_size); 341 - bv = &io->io_vec[vector]; 342 - page = kmap_atomic(bv->bv_page); 343 - len = bv->bv_len - offset; 344 - if (likely(len >= todo)) 345 - len = todo; 346 - r = crypto_shash_update(desc, 347 - page + bv->bv_offset + offset, len); 343 + page = kmap_atomic(bv.bv_page); 344 + r = crypto_shash_update(desc, page + bv.bv_offset, 345 + bv.bv_len); 348 346 kunmap_atomic(page); 347 + 349 348 if (r < 0) { 350 349 DMERR("crypto_shash_update failed: %d", r); 351 350 return r; 352 351 } 353 - offset += len; 354 - if (likely(offset == bv->bv_len)) { 355 - offset = 0; 356 - vector++; 357 - } 358 - todo -= len; 359 - } while (todo); 352 + 353 + bio_advance_iter(bio, &io->iter, bv.bv_len); 354 + } 360 355 361 356 if (!v->version) { 362 357 r = crypto_shash_update(desc, v->salt, v->salt_size); ··· 370 383 return -EIO; 371 384 } 372 385 } 373 - BUG_ON(vector != io->io_vec_size); 374 - BUG_ON(offset); 375 386 376 387 return 0; 377 388 } ··· 385 400 bio->bi_end_io = io->orig_bi_end_io; 386 401 bio->bi_private = io->orig_bi_private; 387 402 388 - if (io->io_vec != io->io_vec_inline) 389 - mempool_free(io->io_vec, v->vec_mempool); 390 - 391 - bio_endio(bio, error); 403 + bio_endio_nodec(bio, error); 392 404 } 393 405 394 406 static void verity_work(struct work_struct *w) ··· 475 493 struct dm_verity_io *io; 476 494 477 495 bio->bi_bdev = v->data_dev->bdev; 478 - bio->bi_sector = verity_map_sector(v, bio->bi_sector); 496 + bio->bi_iter.bi_sector = verity_map_sector(v, bio->bi_iter.bi_sector); 479 497 480 - if (((unsigned)bio->bi_sector | bio_sectors(bio)) & 498 + if (((unsigned)bio->bi_iter.bi_sector | bio_sectors(bio)) & 481 499 ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) { 482 500 DMERR_LIMIT("unaligned io"); 483 501 return -EIO; ··· 496 514 io->v = v; 497 515 io->orig_bi_end_io = bio->bi_end_io; 498 516 io->orig_bi_private = bio->bi_private; 499 - io->block = bio->bi_sector >> (v->data_dev_block_bits - SECTOR_SHIFT); 500 - io->n_blocks = bio->bi_size >> v->data_dev_block_bits; 517 + io->block = bio->bi_iter.bi_sector >> (v->data_dev_block_bits - SECTOR_SHIFT); 518 + io->n_blocks = bio->bi_iter.bi_size >> v->data_dev_block_bits; 501 519 502 520 bio->bi_end_io = verity_end_io; 503 521 bio->bi_private = io; 504 - io->io_vec_size = bio_segments(bio); 505 - if (io->io_vec_size < DM_VERITY_IO_VEC_INLINE) 506 - io->io_vec = io->io_vec_inline; 507 - else 508 - io->io_vec = mempool_alloc(v->vec_mempool, GFP_NOIO); 509 - memcpy(io->io_vec, bio_iovec(bio), 510 - 
io->io_vec_size * sizeof(struct bio_vec)); 522 + io->iter = bio->bi_iter; 511 523 512 524 verity_submit_prefetch(v, io); 513 525
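dm-verity no longer snapshots the whole bio_vec array; it saves the bvec_iter at map time and replays it in the deferred verification pass, since lower layers may advance the bio's own bi_iter while servicing the read. A condensed sketch of that capture-and-replay pattern (struct example_io and both helpers are hypothetical):

#include <linux/bio.h>
#include <linux/highmem.h>

/* Hypothetical per-io state: the range to verify, captured at map time. */
struct example_io {
	struct bvec_iter iter;
};

static void example_map_save(struct example_io *io, struct bio *bio)
{
	io->iter = bio->bi_iter;	/* capture before the read is issued */
}

/* Deferred pass (e.g. from a workqueue) over the originally mapped range. */
static void example_verify(struct example_io *io, struct bio *bio)
{
	while (io->iter.bi_size) {
		struct bio_vec bv = bio_iter_iovec(bio, io->iter);
		u8 *data = kmap_atomic(bv.bv_page);

		/* hash/verify bv.bv_len bytes at data + bv.bv_offset */

		kunmap_atomic(data);
		bio_advance_iter(bio, &io->iter, bv.bv_len);
	}
}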
+28 -161
drivers/md/dm.c
··· 575 575 atomic_inc_return(&md->pending[rw])); 576 576 577 577 if (unlikely(dm_stats_used(&md->stats))) 578 - dm_stats_account_io(&md->stats, bio->bi_rw, bio->bi_sector, 578 + dm_stats_account_io(&md->stats, bio->bi_rw, bio->bi_iter.bi_sector, 579 579 bio_sectors(bio), false, 0, &io->stats_aux); 580 580 } 581 581 ··· 593 593 part_stat_unlock(); 594 594 595 595 if (unlikely(dm_stats_used(&md->stats))) 596 - dm_stats_account_io(&md->stats, bio->bi_rw, bio->bi_sector, 596 + dm_stats_account_io(&md->stats, bio->bi_rw, bio->bi_iter.bi_sector, 597 597 bio_sectors(bio), true, duration, &io->stats_aux); 598 598 599 599 /* ··· 742 742 if (io_error == DM_ENDIO_REQUEUE) 743 743 return; 744 744 745 - if ((bio->bi_rw & REQ_FLUSH) && bio->bi_size) { 745 + if ((bio->bi_rw & REQ_FLUSH) && bio->bi_iter.bi_size) { 746 746 /* 747 747 * Preflush done for flush with data, reissue 748 748 * without REQ_FLUSH. ··· 797 797 struct dm_rq_clone_bio_info *info = clone->bi_private; 798 798 struct dm_rq_target_io *tio = info->tio; 799 799 struct bio *bio = info->orig; 800 - unsigned int nr_bytes = info->orig->bi_size; 800 + unsigned int nr_bytes = info->orig->bi_iter.bi_size; 801 801 802 802 bio_put(clone); 803 803 ··· 1128 1128 * this io. 1129 1129 */ 1130 1130 atomic_inc(&tio->io->io_count); 1131 - sector = clone->bi_sector; 1131 + sector = clone->bi_iter.bi_sector; 1132 1132 r = ti->type->map(ti, clone); 1133 1133 if (r == DM_MAPIO_REMAPPED) { 1134 1134 /* the bio has been remapped so dispatch it */ ··· 1155 1155 struct dm_io *io; 1156 1156 sector_t sector; 1157 1157 sector_t sector_count; 1158 - unsigned short idx; 1159 1158 }; 1160 1159 1161 1160 static void bio_setup_sector(struct bio *bio, sector_t sector, sector_t len) 1162 1161 { 1163 - bio->bi_sector = sector; 1164 - bio->bi_size = to_bytes(len); 1165 - } 1166 - 1167 - static void bio_setup_bv(struct bio *bio, unsigned short idx, unsigned short bv_count) 1168 - { 1169 - bio->bi_idx = idx; 1170 - bio->bi_vcnt = idx + bv_count; 1171 - bio->bi_flags &= ~(1 << BIO_SEG_VALID); 1172 - } 1173 - 1174 - static void clone_bio_integrity(struct bio *bio, struct bio *clone, 1175 - unsigned short idx, unsigned len, unsigned offset, 1176 - unsigned trim) 1177 - { 1178 - if (!bio_integrity(bio)) 1179 - return; 1180 - 1181 - bio_integrity_clone(clone, bio, GFP_NOIO); 1182 - 1183 - if (trim) 1184 - bio_integrity_trim(clone, bio_sector_offset(bio, idx, offset), len); 1185 - } 1186 - 1187 - /* 1188 - * Creates a little bio that just does part of a bvec. 1189 - */ 1190 - static void clone_split_bio(struct dm_target_io *tio, struct bio *bio, 1191 - sector_t sector, unsigned short idx, 1192 - unsigned offset, unsigned len) 1193 - { 1194 - struct bio *clone = &tio->clone; 1195 - struct bio_vec *bv = bio->bi_io_vec + idx; 1196 - 1197 - *clone->bi_io_vec = *bv; 1198 - 1199 - bio_setup_sector(clone, sector, len); 1200 - 1201 - clone->bi_bdev = bio->bi_bdev; 1202 - clone->bi_rw = bio->bi_rw; 1203 - clone->bi_vcnt = 1; 1204 - clone->bi_io_vec->bv_offset = offset; 1205 - clone->bi_io_vec->bv_len = clone->bi_size; 1206 - clone->bi_flags |= 1 << BIO_CLONED; 1207 - 1208 - clone_bio_integrity(bio, clone, idx, len, offset, 1); 1162 + bio->bi_iter.bi_sector = sector; 1163 + bio->bi_iter.bi_size = to_bytes(len); 1209 1164 } 1210 1165 1211 1166 /* 1212 1167 * Creates a bio that consists of range of complete bvecs. 
1213 1168 */ 1214 1169 static void clone_bio(struct dm_target_io *tio, struct bio *bio, 1215 - sector_t sector, unsigned short idx, 1216 - unsigned short bv_count, unsigned len) 1170 + sector_t sector, unsigned len) 1217 1171 { 1218 1172 struct bio *clone = &tio->clone; 1219 - unsigned trim = 0; 1220 1173 1221 - __bio_clone(clone, bio); 1222 - bio_setup_sector(clone, sector, len); 1223 - bio_setup_bv(clone, idx, bv_count); 1174 + __bio_clone_fast(clone, bio); 1224 1175 1225 - if (idx != bio->bi_idx || clone->bi_size < bio->bi_size) 1226 - trim = 1; 1227 - clone_bio_integrity(bio, clone, idx, len, 0, trim); 1176 + if (bio_integrity(bio)) 1177 + bio_integrity_clone(clone, bio, GFP_NOIO); 1178 + 1179 + bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector)); 1180 + clone->bi_iter.bi_size = to_bytes(len); 1181 + 1182 + if (bio_integrity(bio)) 1183 + bio_integrity_trim(clone, 0, len); 1228 1184 } 1229 1185 1230 1186 static struct dm_target_io *alloc_tio(struct clone_info *ci, ··· 1213 1257 * ci->bio->bi_max_vecs is BIO_INLINE_VECS anyway, for both flush 1214 1258 * and discard, so no need for concern about wasted bvec allocations. 1215 1259 */ 1216 - __bio_clone(clone, ci->bio); 1260 + __bio_clone_fast(clone, ci->bio); 1217 1261 if (len) 1218 1262 bio_setup_sector(clone, ci->sector, len); 1219 1263 ··· 1242 1286 } 1243 1287 1244 1288 static void __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti, 1245 - sector_t sector, int nr_iovecs, 1246 - unsigned short idx, unsigned short bv_count, 1247 - unsigned offset, unsigned len, 1248 - unsigned split_bvec) 1289 + sector_t sector, unsigned len) 1249 1290 { 1250 1291 struct bio *bio = ci->bio; 1251 1292 struct dm_target_io *tio; ··· 1256 1303 num_target_bios = ti->num_write_bios(ti, bio); 1257 1304 1258 1305 for (target_bio_nr = 0; target_bio_nr < num_target_bios; target_bio_nr++) { 1259 - tio = alloc_tio(ci, ti, nr_iovecs, target_bio_nr); 1260 - if (split_bvec) 1261 - clone_split_bio(tio, bio, sector, idx, offset, len); 1262 - else 1263 - clone_bio(tio, bio, sector, idx, bv_count, len); 1306 + tio = alloc_tio(ci, ti, 0, target_bio_nr); 1307 + clone_bio(tio, bio, sector, len); 1264 1308 __map_bio(tio); 1265 1309 } 1266 1310 } ··· 1329 1379 } 1330 1380 1331 1381 /* 1332 - * Find maximum number of sectors / bvecs we can process with a single bio. 
1333 - */ 1334 - static sector_t __len_within_target(struct clone_info *ci, sector_t max, int *idx) 1335 - { 1336 - struct bio *bio = ci->bio; 1337 - sector_t bv_len, total_len = 0; 1338 - 1339 - for (*idx = ci->idx; max && (*idx < bio->bi_vcnt); (*idx)++) { 1340 - bv_len = to_sector(bio->bi_io_vec[*idx].bv_len); 1341 - 1342 - if (bv_len > max) 1343 - break; 1344 - 1345 - max -= bv_len; 1346 - total_len += bv_len; 1347 - } 1348 - 1349 - return total_len; 1350 - } 1351 - 1352 - static int __split_bvec_across_targets(struct clone_info *ci, 1353 - struct dm_target *ti, sector_t max) 1354 - { 1355 - struct bio *bio = ci->bio; 1356 - struct bio_vec *bv = bio->bi_io_vec + ci->idx; 1357 - sector_t remaining = to_sector(bv->bv_len); 1358 - unsigned offset = 0; 1359 - sector_t len; 1360 - 1361 - do { 1362 - if (offset) { 1363 - ti = dm_table_find_target(ci->map, ci->sector); 1364 - if (!dm_target_is_valid(ti)) 1365 - return -EIO; 1366 - 1367 - max = max_io_len(ci->sector, ti); 1368 - } 1369 - 1370 - len = min(remaining, max); 1371 - 1372 - __clone_and_map_data_bio(ci, ti, ci->sector, 1, ci->idx, 0, 1373 - bv->bv_offset + offset, len, 1); 1374 - 1375 - ci->sector += len; 1376 - ci->sector_count -= len; 1377 - offset += to_bytes(len); 1378 - } while (remaining -= len); 1379 - 1380 - ci->idx++; 1381 - 1382 - return 0; 1383 - } 1384 - 1385 - /* 1386 1382 * Select the correct strategy for processing a non-flush bio. 1387 1383 */ 1388 1384 static int __split_and_process_non_flush(struct clone_info *ci) 1389 1385 { 1390 1386 struct bio *bio = ci->bio; 1391 1387 struct dm_target *ti; 1392 - sector_t len, max; 1393 - int idx; 1388 + unsigned len; 1394 1389 1395 1390 if (unlikely(bio->bi_rw & REQ_DISCARD)) 1396 1391 return __send_discard(ci); ··· 1346 1451 if (!dm_target_is_valid(ti)) 1347 1452 return -EIO; 1348 1453 1349 - max = max_io_len(ci->sector, ti); 1454 + len = min_t(sector_t, max_io_len(ci->sector, ti), ci->sector_count); 1350 1455 1351 - /* 1352 - * Optimise for the simple case where we can do all of 1353 - * the remaining io with a single clone. 1354 - */ 1355 - if (ci->sector_count <= max) { 1356 - __clone_and_map_data_bio(ci, ti, ci->sector, bio->bi_max_vecs, 1357 - ci->idx, bio->bi_vcnt - ci->idx, 0, 1358 - ci->sector_count, 0); 1359 - ci->sector_count = 0; 1360 - return 0; 1361 - } 1456 + __clone_and_map_data_bio(ci, ti, ci->sector, len); 1362 1457 1363 - /* 1364 - * There are some bvecs that don't span targets. 1365 - * Do as many of these as possible. 1366 - */ 1367 - if (to_sector(bio->bi_io_vec[ci->idx].bv_len) <= max) { 1368 - len = __len_within_target(ci, max, &idx); 1458 + ci->sector += len; 1459 + ci->sector_count -= len; 1369 1460 1370 - __clone_and_map_data_bio(ci, ti, ci->sector, bio->bi_max_vecs, 1371 - ci->idx, idx - ci->idx, 0, len, 0); 1372 - 1373 - ci->sector += len; 1374 - ci->sector_count -= len; 1375 - ci->idx = idx; 1376 - 1377 - return 0; 1378 - } 1379 - 1380 - /* 1381 - * Handle a bvec that must be split between two or more targets. 1382 - */ 1383 - return __split_bvec_across_targets(ci, ti, max); 1461 + return 0; 1384 1462 } 1385 1463 1386 1464 /* ··· 1378 1510 ci.io->bio = bio; 1379 1511 ci.io->md = md; 1380 1512 spin_lock_init(&ci.io->endio_lock); 1381 - ci.sector = bio->bi_sector; 1382 - ci.idx = bio->bi_idx; 1513 + ci.sector = bio->bi_iter.bi_sector; 1383 1514 1384 1515 start_io_acct(ci.io); 1385 1516
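clone_bio() above is where most of the dm.c simplification comes from: a clone made with __bio_clone_fast() shares the parent's biovec array, so carving out a sub-range is just advancing the clone's iterator and shrinking bi_size, with no per-bvec copying and no special case for splitting inside a bvec. A free-standing sketch of the same operation (hypothetical helper; integrity cloning omitted for brevity):

#include <linux/bio.h>
#include <linux/device-mapper.h>	/* to_bytes() */

/*
 * Clone the part of @bio starting at absolute @sector and spanning
 * @len sectors.  @sector must lie inside @bio.
 */
static struct bio *example_clone_range(struct bio *bio, struct bio_set *bs,
				       sector_t sector, unsigned len)
{
	struct bio *clone = bio_clone_fast(bio, GFP_NOIO, bs);

	if (!clone)
		return NULL;

	/* Skip ahead to @sector... */
	bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector));
	/* ...and stop after @len sectors. */
	clone->bi_iter.bi_size = to_bytes(len);

	return clone;
}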
+12 -7
drivers/md/faulty.c
··· 74 74 { 75 75 struct bio *b = bio->bi_private; 76 76 77 - b->bi_size = bio->bi_size; 78 - b->bi_sector = bio->bi_sector; 77 + b->bi_iter.bi_size = bio->bi_iter.bi_size; 78 + b->bi_iter.bi_sector = bio->bi_iter.bi_sector; 79 79 80 80 bio_put(bio); 81 81 ··· 185 185 return; 186 186 } 187 187 188 - if (check_sector(conf, bio->bi_sector, bio_end_sector(bio), WRITE)) 188 + if (check_sector(conf, bio->bi_iter.bi_sector, 189 + bio_end_sector(bio), WRITE)) 189 190 failit = 1; 190 191 if (check_mode(conf, WritePersistent)) { 191 - add_sector(conf, bio->bi_sector, WritePersistent); 192 + add_sector(conf, bio->bi_iter.bi_sector, 193 + WritePersistent); 192 194 failit = 1; 193 195 } 194 196 if (check_mode(conf, WriteTransient)) 195 197 failit = 1; 196 198 } else { 197 199 /* read request */ 198 - if (check_sector(conf, bio->bi_sector, bio_end_sector(bio), READ)) 200 + if (check_sector(conf, bio->bi_iter.bi_sector, 201 + bio_end_sector(bio), READ)) 199 202 failit = 1; 200 203 if (check_mode(conf, ReadTransient)) 201 204 failit = 1; 202 205 if (check_mode(conf, ReadPersistent)) { 203 - add_sector(conf, bio->bi_sector, ReadPersistent); 206 + add_sector(conf, bio->bi_iter.bi_sector, 207 + ReadPersistent); 204 208 failit = 1; 205 209 } 206 210 if (check_mode(conf, ReadFixable)) { 207 - add_sector(conf, bio->bi_sector, ReadFixable); 211 + add_sector(conf, bio->bi_iter.bi_sector, 212 + ReadFixable); 208 213 failit = 1; 209 214 } 210 215 }
+45 -45
drivers/md/linear.c
··· 288 288 289 289 static void linear_make_request(struct mddev *mddev, struct bio *bio) 290 290 { 291 + char b[BDEVNAME_SIZE]; 291 292 struct dev_info *tmp_dev; 292 - sector_t start_sector; 293 + struct bio *split; 294 + sector_t start_sector, end_sector, data_offset; 293 295 294 296 if (unlikely(bio->bi_rw & REQ_FLUSH)) { 295 297 md_flush_request(mddev, bio); 296 298 return; 297 299 } 298 300 299 - rcu_read_lock(); 300 - tmp_dev = which_dev(mddev, bio->bi_sector); 301 - start_sector = tmp_dev->end_sector - tmp_dev->rdev->sectors; 301 + do { 302 + rcu_read_lock(); 302 303 303 - 304 - if (unlikely(bio->bi_sector >= (tmp_dev->end_sector) 305 - || (bio->bi_sector < start_sector))) { 306 - char b[BDEVNAME_SIZE]; 307 - 308 - printk(KERN_ERR 309 - "md/linear:%s: make_request: Sector %llu out of bounds on " 310 - "dev %s: %llu sectors, offset %llu\n", 311 - mdname(mddev), 312 - (unsigned long long)bio->bi_sector, 313 - bdevname(tmp_dev->rdev->bdev, b), 314 - (unsigned long long)tmp_dev->rdev->sectors, 315 - (unsigned long long)start_sector); 316 - rcu_read_unlock(); 317 - bio_io_error(bio); 318 - return; 319 - } 320 - if (unlikely(bio_end_sector(bio) > tmp_dev->end_sector)) { 321 - /* This bio crosses a device boundary, so we have to 322 - * split it. 323 - */ 324 - struct bio_pair *bp; 325 - sector_t end_sector = tmp_dev->end_sector; 304 + tmp_dev = which_dev(mddev, bio->bi_iter.bi_sector); 305 + start_sector = tmp_dev->end_sector - tmp_dev->rdev->sectors; 306 + end_sector = tmp_dev->end_sector; 307 + data_offset = tmp_dev->rdev->data_offset; 308 + bio->bi_bdev = tmp_dev->rdev->bdev; 326 309 327 310 rcu_read_unlock(); 328 311 329 - bp = bio_split(bio, end_sector - bio->bi_sector); 312 + if (unlikely(bio->bi_iter.bi_sector >= end_sector || 313 + bio->bi_iter.bi_sector < start_sector)) 314 + goto out_of_bounds; 330 315 331 - linear_make_request(mddev, &bp->bio1); 332 - linear_make_request(mddev, &bp->bio2); 333 - bio_pair_release(bp); 334 - return; 335 - } 336 - 337 - bio->bi_bdev = tmp_dev->rdev->bdev; 338 - bio->bi_sector = bio->bi_sector - start_sector 339 - + tmp_dev->rdev->data_offset; 340 - rcu_read_unlock(); 316 + if (unlikely(bio_end_sector(bio) > end_sector)) { 317 + /* This bio crosses a device boundary, so we have to 318 + * split it. 319 + */ 320 + split = bio_split(bio, end_sector - 321 + bio->bi_iter.bi_sector, 322 + GFP_NOIO, fs_bio_set); 323 + bio_chain(split, bio); 324 + } else { 325 + split = bio; 326 + } 341 327 342 - if (unlikely((bio->bi_rw & REQ_DISCARD) && 343 - !blk_queue_discard(bdev_get_queue(bio->bi_bdev)))) { 344 - /* Just ignore it */ 345 - bio_endio(bio, 0); 346 - return; 347 - } 328 + split->bi_iter.bi_sector = split->bi_iter.bi_sector - 329 + start_sector + data_offset; 348 330 349 - generic_make_request(bio); 331 + if (unlikely((split->bi_rw & REQ_DISCARD) && 332 + !blk_queue_discard(bdev_get_queue(split->bi_bdev)))) { 333 + /* Just ignore it */ 334 + bio_endio(split, 0); 335 + } else 336 + generic_make_request(split); 337 + } while (split != bio); 338 + return; 339 + 340 + out_of_bounds: 341 + printk(KERN_ERR 342 + "md/linear:%s: make_request: Sector %llu out of bounds on " 343 + "dev %s: %llu sectors, offset %llu\n", 344 + mdname(mddev), 345 + (unsigned long long)bio->bi_iter.bi_sector, 346 + bdevname(tmp_dev->rdev->bdev, b), 347 + (unsigned long long)tmp_dev->rdev->sectors, 348 + (unsigned long long)start_sector); 349 + bio_io_error(bio); 350 350 } 351 351 352 352 static void linear_status (struct seq_file *seq, struct mddev *mddev)
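linear_make_request() (and raid0 below it) trades the old bio_pair based split, which cut a bio into exactly two pieces and, as the removed raid0 comment notes, only coped with single-page bios, for a loop of bio_split() plus bio_chain(): each front piece is submitted as it is carved off, and the chained parent completes only after every piece has. The generic shape of that loop as a hedged sketch (hypothetical function, submitting straight to generic_make_request()):

#include <linux/bio.h>
#include <linux/blkdev.h>

/* Carve @bio into chunks of at most @max_sectors and submit each one. */
static void example_split_and_submit(struct bio *bio, unsigned max_sectors)
{
	struct bio *split;

	do {
		if (bio_sectors(bio) > max_sectors) {
			/* Front max_sectors go to @split; the rest stays in @bio. */
			split = bio_split(bio, max_sectors, GFP_NOIO,
					  fs_bio_set);
			bio_chain(split, bio);	/* parent completes last */
		} else {
			split = bio;		/* final (or only) piece */
		}

		generic_make_request(split);
	} while (split != bio);
}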
+5 -7
drivers/md/md.c
··· 393 393 struct mddev *mddev = container_of(ws, struct mddev, flush_work); 394 394 struct bio *bio = mddev->flush_bio; 395 395 396 - if (bio->bi_size == 0) 396 + if (bio->bi_iter.bi_size == 0) 397 397 /* an empty barrier - all done */ 398 398 bio_endio(bio, 0); 399 399 else { ··· 754 754 struct bio *bio = bio_alloc_mddev(GFP_NOIO, 1, mddev); 755 755 756 756 bio->bi_bdev = rdev->meta_bdev ? rdev->meta_bdev : rdev->bdev; 757 - bio->bi_sector = sector; 757 + bio->bi_iter.bi_sector = sector; 758 758 bio_add_page(bio, page, size, 0); 759 759 bio->bi_private = rdev; 760 760 bio->bi_end_io = super_written; ··· 782 782 struct bio *bio = bio_alloc_mddev(GFP_NOIO, 1, rdev->mddev); 783 783 int ret; 784 784 785 - rw |= REQ_SYNC; 786 - 787 785 bio->bi_bdev = (metadata_op && rdev->meta_bdev) ? 788 786 rdev->meta_bdev : rdev->bdev; 789 787 if (metadata_op) 790 - bio->bi_sector = sector + rdev->sb_start; 788 + bio->bi_iter.bi_sector = sector + rdev->sb_start; 791 789 else if (rdev->mddev->reshape_position != MaxSector && 792 790 (rdev->mddev->reshape_backwards == 793 791 (sector >= rdev->mddev->reshape_position))) 794 - bio->bi_sector = sector + rdev->new_data_offset; 792 + bio->bi_iter.bi_sector = sector + rdev->new_data_offset; 795 793 else 796 - bio->bi_sector = sector + rdev->data_offset; 794 + bio->bi_iter.bi_sector = sector + rdev->data_offset; 797 795 bio_add_page(bio, page, size, 0); 798 796 submit_bio_wait(rw, bio); 799 797
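The sync_page_io() hunk above also drops the explicit rw |= REQ_SYNC, since submit_bio_wait() sets REQ_SYNC itself; beyond that, synchronous helpers only need the bi_sector/bi_size renames. For reference, a minimal sketch of a one-page synchronous read with the renamed fields (hypothetical helper, bare-bones error handling):

#include <linux/bio.h>
#include <linux/blkdev.h>

/* Read @size bytes at @sector of @bdev into @page, waiting for completion. */
static int example_read_page(struct block_device *bdev, sector_t sector,
			     struct page *page, unsigned size)
{
	/* bio_alloc() with a blocking mask is mempool backed, so no NULL check. */
	struct bio *bio = bio_alloc(GFP_NOIO, 1);
	int ret;

	bio->bi_bdev = bdev;			/* must be set before bio_add_page() */
	bio->bi_iter.bi_sector = sector;	/* was bio->bi_sector */
	bio_add_page(bio, page, size, 0);

	ret = submit_bio_wait(READ, bio);	/* 0 on success, negative errno on error */

	bio_put(bio);
	return ret;
}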
+7 -6
drivers/md/multipath.c
··· 100 100 md_error (mp_bh->mddev, rdev); 101 101 printk(KERN_ERR "multipath: %s: rescheduling sector %llu\n", 102 102 bdevname(rdev->bdev,b), 103 - (unsigned long long)bio->bi_sector); 103 + (unsigned long long)bio->bi_iter.bi_sector); 104 104 multipath_reschedule_retry(mp_bh); 105 105 } else 106 106 multipath_end_bh_io(mp_bh, error); ··· 132 132 multipath = conf->multipaths + mp_bh->path; 133 133 134 134 mp_bh->bio = *bio; 135 - mp_bh->bio.bi_sector += multipath->rdev->data_offset; 135 + mp_bh->bio.bi_iter.bi_sector += multipath->rdev->data_offset; 136 136 mp_bh->bio.bi_bdev = multipath->rdev->bdev; 137 137 mp_bh->bio.bi_rw |= REQ_FAILFAST_TRANSPORT; 138 138 mp_bh->bio.bi_end_io = multipath_end_request; ··· 355 355 spin_unlock_irqrestore(&conf->device_lock, flags); 356 356 357 357 bio = &mp_bh->bio; 358 - bio->bi_sector = mp_bh->master_bio->bi_sector; 358 + bio->bi_iter.bi_sector = mp_bh->master_bio->bi_iter.bi_sector; 359 359 360 360 if ((mp_bh->path = multipath_map (conf))<0) { 361 361 printk(KERN_ALERT "multipath: %s: unrecoverable IO read" 362 362 " error for block %llu\n", 363 363 bdevname(bio->bi_bdev,b), 364 - (unsigned long long)bio->bi_sector); 364 + (unsigned long long)bio->bi_iter.bi_sector); 365 365 multipath_end_bh_io(mp_bh, -EIO); 366 366 } else { 367 367 printk(KERN_ERR "multipath: %s: redirecting sector %llu" 368 368 " to another IO path\n", 369 369 bdevname(bio->bi_bdev,b), 370 - (unsigned long long)bio->bi_sector); 370 + (unsigned long long)bio->bi_iter.bi_sector); 371 371 *bio = *(mp_bh->master_bio); 372 - bio->bi_sector += conf->multipaths[mp_bh->path].rdev->data_offset; 372 + bio->bi_iter.bi_sector += 373 + conf->multipaths[mp_bh->path].rdev->data_offset; 373 374 bio->bi_bdev = conf->multipaths[mp_bh->path].rdev->bdev; 374 375 bio->bi_rw |= REQ_FAILFAST_TRANSPORT; 375 376 bio->bi_end_io = multipath_end_request;
+29 -48
drivers/md/raid0.c
··· 501 501 unsigned int chunk_sects, struct bio *bio) 502 502 { 503 503 if (likely(is_power_of_2(chunk_sects))) { 504 - return chunk_sects >= ((bio->bi_sector & (chunk_sects-1)) 504 + return chunk_sects >= 505 + ((bio->bi_iter.bi_sector & (chunk_sects-1)) 505 506 + bio_sectors(bio)); 506 507 } else{ 507 - sector_t sector = bio->bi_sector; 508 + sector_t sector = bio->bi_iter.bi_sector; 508 509 return chunk_sects >= (sector_div(sector, chunk_sects) 509 510 + bio_sectors(bio)); 510 511 } ··· 513 512 514 513 static void raid0_make_request(struct mddev *mddev, struct bio *bio) 515 514 { 516 - unsigned int chunk_sects; 517 - sector_t sector_offset; 518 515 struct strip_zone *zone; 519 516 struct md_rdev *tmp_dev; 517 + struct bio *split; 520 518 521 519 if (unlikely(bio->bi_rw & REQ_FLUSH)) { 522 520 md_flush_request(mddev, bio); 523 521 return; 524 522 } 525 523 526 - chunk_sects = mddev->chunk_sectors; 527 - if (unlikely(!is_io_in_chunk_boundary(mddev, chunk_sects, bio))) { 528 - sector_t sector = bio->bi_sector; 529 - struct bio_pair *bp; 530 - /* Sanity check -- queue functions should prevent this happening */ 531 - if (bio_segments(bio) > 1) 532 - goto bad_map; 533 - /* This is a one page bio that upper layers 534 - * refuse to split for us, so we need to split it. 535 - */ 536 - if (likely(is_power_of_2(chunk_sects))) 537 - bp = bio_split(bio, chunk_sects - (sector & 538 - (chunk_sects-1))); 539 - else 540 - bp = bio_split(bio, chunk_sects - 541 - sector_div(sector, chunk_sects)); 542 - raid0_make_request(mddev, &bp->bio1); 543 - raid0_make_request(mddev, &bp->bio2); 544 - bio_pair_release(bp); 545 - return; 546 - } 524 + do { 525 + sector_t sector = bio->bi_iter.bi_sector; 526 + unsigned chunk_sects = mddev->chunk_sectors; 547 527 548 - sector_offset = bio->bi_sector; 549 - zone = find_zone(mddev->private, &sector_offset); 550 - tmp_dev = map_sector(mddev, zone, bio->bi_sector, 551 - &sector_offset); 552 - bio->bi_bdev = tmp_dev->bdev; 553 - bio->bi_sector = sector_offset + zone->dev_start + 554 - tmp_dev->data_offset; 528 + unsigned sectors = chunk_sects - 529 + (likely(is_power_of_2(chunk_sects)) 530 + ? (sector & (chunk_sects-1)) 531 + : sector_div(sector, chunk_sects)); 555 532 556 - if (unlikely((bio->bi_rw & REQ_DISCARD) && 557 - !blk_queue_discard(bdev_get_queue(bio->bi_bdev)))) { 558 - /* Just ignore it */ 559 - bio_endio(bio, 0); 560 - return; 561 - } 533 + if (sectors < bio_sectors(bio)) { 534 + split = bio_split(bio, sectors, GFP_NOIO, fs_bio_set); 535 + bio_chain(split, bio); 536 + } else { 537 + split = bio; 538 + } 562 539 563 - generic_make_request(bio); 564 - return; 540 + zone = find_zone(mddev->private, &sector); 541 + tmp_dev = map_sector(mddev, zone, sector, &sector); 542 + split->bi_bdev = tmp_dev->bdev; 543 + split->bi_iter.bi_sector = sector + zone->dev_start + 544 + tmp_dev->data_offset; 565 545 566 - bad_map: 567 - printk("md/raid0:%s: make_request bug: can't convert block across chunks" 568 - " or bigger than %dk %llu %d\n", 569 - mdname(mddev), chunk_sects / 2, 570 - (unsigned long long)bio->bi_sector, bio_sectors(bio) / 2); 571 - 572 - bio_io_error(bio); 573 - return; 546 + if (unlikely((split->bi_rw & REQ_DISCARD) && 547 + !blk_queue_discard(bdev_get_queue(split->bi_bdev)))) { 548 + /* Just ignore it */ 549 + bio_endio(split, 0); 550 + } else 551 + generic_make_request(split); 552 + } while (split != bio); 574 553 } 575 554 576 555 static void raid0_status(struct seq_file *seq, struct mddev *mddev)
+38 -35
drivers/md/raid1.c
··· 229 229 int done; 230 230 struct r1conf *conf = r1_bio->mddev->private; 231 231 sector_t start_next_window = r1_bio->start_next_window; 232 - sector_t bi_sector = bio->bi_sector; 232 + sector_t bi_sector = bio->bi_iter.bi_sector; 233 233 234 234 if (bio->bi_phys_segments) { 235 235 unsigned long flags; ··· 265 265 if (!test_and_set_bit(R1BIO_Returned, &r1_bio->state)) { 266 266 pr_debug("raid1: sync end %s on sectors %llu-%llu\n", 267 267 (bio_data_dir(bio) == WRITE) ? "write" : "read", 268 - (unsigned long long) bio->bi_sector, 269 - (unsigned long long) bio->bi_sector + 270 - bio_sectors(bio) - 1); 268 + (unsigned long long) bio->bi_iter.bi_sector, 269 + (unsigned long long) bio_end_sector(bio) - 1); 271 270 272 271 call_bio_endio(r1_bio); 273 272 } ··· 465 466 struct bio *mbio = r1_bio->master_bio; 466 467 pr_debug("raid1: behind end write sectors" 467 468 " %llu-%llu\n", 468 - (unsigned long long) mbio->bi_sector, 469 - (unsigned long long) mbio->bi_sector + 470 - bio_sectors(mbio) - 1); 469 + (unsigned long long) mbio->bi_iter.bi_sector, 470 + (unsigned long long) bio_end_sector(mbio) - 1); 471 471 call_bio_endio(r1_bio); 472 472 } 473 473 } ··· 873 875 else if ((conf->next_resync - RESYNC_WINDOW_SECTORS 874 876 >= bio_end_sector(bio)) || 875 877 (conf->next_resync + NEXT_NORMALIO_DISTANCE 876 - <= bio->bi_sector)) 878 + <= bio->bi_iter.bi_sector)) 877 879 wait = false; 878 880 else 879 881 wait = true; ··· 911 913 912 914 if (bio && bio_data_dir(bio) == WRITE) { 913 915 if (conf->next_resync + NEXT_NORMALIO_DISTANCE 914 - <= bio->bi_sector) { 916 + <= bio->bi_iter.bi_sector) { 915 917 if (conf->start_next_window == MaxSector) 916 918 conf->start_next_window = 917 919 conf->next_resync + 918 920 NEXT_NORMALIO_DISTANCE; 919 921 920 922 if ((conf->start_next_window + NEXT_NORMALIO_DISTANCE) 921 - <= bio->bi_sector) 923 + <= bio->bi_iter.bi_sector) 922 924 conf->next_window_requests++; 923 925 else 924 926 conf->current_window_requests++; ··· 1025 1027 if (bvecs[i].bv_page) 1026 1028 put_page(bvecs[i].bv_page); 1027 1029 kfree(bvecs); 1028 - pr_debug("%dB behind alloc failed, doing sync I/O\n", bio->bi_size); 1030 + pr_debug("%dB behind alloc failed, doing sync I/O\n", 1031 + bio->bi_iter.bi_size); 1029 1032 } 1030 1033 1031 1034 struct raid1_plug_cb { ··· 1106 1107 1107 1108 if (bio_data_dir(bio) == WRITE && 1108 1109 bio_end_sector(bio) > mddev->suspend_lo && 1109 - bio->bi_sector < mddev->suspend_hi) { 1110 + bio->bi_iter.bi_sector < mddev->suspend_hi) { 1110 1111 /* As the suspend_* range is controlled by 1111 1112 * userspace, we want an interruptible 1112 1113 * wait. 
··· 1117 1118 prepare_to_wait(&conf->wait_barrier, 1118 1119 &w, TASK_INTERRUPTIBLE); 1119 1120 if (bio_end_sector(bio) <= mddev->suspend_lo || 1120 - bio->bi_sector >= mddev->suspend_hi) 1121 + bio->bi_iter.bi_sector >= mddev->suspend_hi) 1121 1122 break; 1122 1123 schedule(); 1123 1124 } ··· 1139 1140 r1_bio->sectors = bio_sectors(bio); 1140 1141 r1_bio->state = 0; 1141 1142 r1_bio->mddev = mddev; 1142 - r1_bio->sector = bio->bi_sector; 1143 + r1_bio->sector = bio->bi_iter.bi_sector; 1143 1144 1144 1145 /* We might need to issue multiple reads to different 1145 1146 * devices if there are bad blocks around, so we keep ··· 1179 1180 r1_bio->read_disk = rdisk; 1180 1181 1181 1182 read_bio = bio_clone_mddev(bio, GFP_NOIO, mddev); 1182 - bio_trim(read_bio, r1_bio->sector - bio->bi_sector, 1183 + bio_trim(read_bio, r1_bio->sector - bio->bi_iter.bi_sector, 1183 1184 max_sectors); 1184 1185 1185 1186 r1_bio->bios[rdisk] = read_bio; 1186 1187 1187 - read_bio->bi_sector = r1_bio->sector + mirror->rdev->data_offset; 1188 + read_bio->bi_iter.bi_sector = r1_bio->sector + 1189 + mirror->rdev->data_offset; 1188 1190 read_bio->bi_bdev = mirror->rdev->bdev; 1189 1191 read_bio->bi_end_io = raid1_end_read_request; 1190 1192 read_bio->bi_rw = READ | do_sync; ··· 1197 1197 */ 1198 1198 1199 1199 sectors_handled = (r1_bio->sector + max_sectors 1200 - - bio->bi_sector); 1200 + - bio->bi_iter.bi_sector); 1201 1201 r1_bio->sectors = max_sectors; 1202 1202 spin_lock_irq(&conf->device_lock); 1203 1203 if (bio->bi_phys_segments == 0) ··· 1218 1218 r1_bio->sectors = bio_sectors(bio) - sectors_handled; 1219 1219 r1_bio->state = 0; 1220 1220 r1_bio->mddev = mddev; 1221 - r1_bio->sector = bio->bi_sector + sectors_handled; 1221 + r1_bio->sector = bio->bi_iter.bi_sector + 1222 + sectors_handled; 1222 1223 goto read_again; 1223 1224 } else 1224 1225 generic_make_request(read_bio); ··· 1322 1321 if (r1_bio->bios[j]) 1323 1322 rdev_dec_pending(conf->mirrors[j].rdev, mddev); 1324 1323 r1_bio->state = 0; 1325 - allow_barrier(conf, start_next_window, bio->bi_sector); 1324 + allow_barrier(conf, start_next_window, bio->bi_iter.bi_sector); 1326 1325 md_wait_for_blocked_rdev(blocked_rdev, mddev); 1327 1326 start_next_window = wait_barrier(conf, bio); 1328 1327 /* ··· 1349 1348 bio->bi_phys_segments++; 1350 1349 spin_unlock_irq(&conf->device_lock); 1351 1350 } 1352 - sectors_handled = r1_bio->sector + max_sectors - bio->bi_sector; 1351 + sectors_handled = r1_bio->sector + max_sectors - bio->bi_iter.bi_sector; 1353 1352 1354 1353 atomic_set(&r1_bio->remaining, 1); 1355 1354 atomic_set(&r1_bio->behind_remaining, 0); ··· 1361 1360 continue; 1362 1361 1363 1362 mbio = bio_clone_mddev(bio, GFP_NOIO, mddev); 1364 - bio_trim(mbio, r1_bio->sector - bio->bi_sector, max_sectors); 1363 + bio_trim(mbio, r1_bio->sector - bio->bi_iter.bi_sector, max_sectors); 1365 1364 1366 1365 if (first_clone) { 1367 1366 /* do behind I/O ? 
··· 1395 1394 1396 1395 r1_bio->bios[i] = mbio; 1397 1396 1398 - mbio->bi_sector = (r1_bio->sector + 1397 + mbio->bi_iter.bi_sector = (r1_bio->sector + 1399 1398 conf->mirrors[i].rdev->data_offset); 1400 1399 mbio->bi_bdev = conf->mirrors[i].rdev->bdev; 1401 1400 mbio->bi_end_io = raid1_end_write_request; ··· 1435 1434 r1_bio->sectors = bio_sectors(bio) - sectors_handled; 1436 1435 r1_bio->state = 0; 1437 1436 r1_bio->mddev = mddev; 1438 - r1_bio->sector = bio->bi_sector + sectors_handled; 1437 + r1_bio->sector = bio->bi_iter.bi_sector + sectors_handled; 1439 1438 goto retry_write; 1440 1439 } 1441 1440 ··· 1959 1958 /* fixup the bio for reuse */ 1960 1959 bio_reset(b); 1961 1960 b->bi_vcnt = vcnt; 1962 - b->bi_size = r1_bio->sectors << 9; 1963 - b->bi_sector = r1_bio->sector + 1961 + b->bi_iter.bi_size = r1_bio->sectors << 9; 1962 + b->bi_iter.bi_sector = r1_bio->sector + 1964 1963 conf->mirrors[i].rdev->data_offset; 1965 1964 b->bi_bdev = conf->mirrors[i].rdev->bdev; 1966 1965 b->bi_end_io = end_sync_read; 1967 1966 b->bi_private = r1_bio; 1968 1967 1969 - size = b->bi_size; 1968 + size = b->bi_iter.bi_size; 1970 1969 for (j = 0; j < vcnt ; j++) { 1971 1970 struct bio_vec *bi; 1972 1971 bi = &b->bi_io_vec[j]; ··· 2221 2220 } 2222 2221 2223 2222 wbio->bi_rw = WRITE; 2224 - wbio->bi_sector = r1_bio->sector; 2225 - wbio->bi_size = r1_bio->sectors << 9; 2223 + wbio->bi_iter.bi_sector = r1_bio->sector; 2224 + wbio->bi_iter.bi_size = r1_bio->sectors << 9; 2226 2225 2227 2226 bio_trim(wbio, sector - r1_bio->sector, sectors); 2228 - wbio->bi_sector += rdev->data_offset; 2227 + wbio->bi_iter.bi_sector += rdev->data_offset; 2229 2228 wbio->bi_bdev = rdev->bdev; 2230 2229 if (submit_bio_wait(WRITE, wbio) == 0) 2231 2230 /* failure! */ ··· 2339 2338 } 2340 2339 r1_bio->read_disk = disk; 2341 2340 bio = bio_clone_mddev(r1_bio->master_bio, GFP_NOIO, mddev); 2342 - bio_trim(bio, r1_bio->sector - bio->bi_sector, max_sectors); 2341 + bio_trim(bio, r1_bio->sector - bio->bi_iter.bi_sector, 2342 + max_sectors); 2343 2343 r1_bio->bios[r1_bio->read_disk] = bio; 2344 2344 rdev = conf->mirrors[disk].rdev; 2345 2345 printk_ratelimited(KERN_ERR ··· 2349 2347 mdname(mddev), 2350 2348 (unsigned long long)r1_bio->sector, 2351 2349 bdevname(rdev->bdev, b)); 2352 - bio->bi_sector = r1_bio->sector + rdev->data_offset; 2350 + bio->bi_iter.bi_sector = r1_bio->sector + rdev->data_offset; 2353 2351 bio->bi_bdev = rdev->bdev; 2354 2352 bio->bi_end_io = raid1_end_read_request; 2355 2353 bio->bi_rw = READ | do_sync; ··· 2358 2356 /* Drat - have to split this up more */ 2359 2357 struct bio *mbio = r1_bio->master_bio; 2360 2358 int sectors_handled = (r1_bio->sector + max_sectors 2361 - - mbio->bi_sector); 2359 + - mbio->bi_iter.bi_sector); 2362 2360 r1_bio->sectors = max_sectors; 2363 2361 spin_lock_irq(&conf->device_lock); 2364 2362 if (mbio->bi_phys_segments == 0) ··· 2376 2374 r1_bio->state = 0; 2377 2375 set_bit(R1BIO_ReadError, &r1_bio->state); 2378 2376 r1_bio->mddev = mddev; 2379 - r1_bio->sector = mbio->bi_sector + sectors_handled; 2377 + r1_bio->sector = mbio->bi_iter.bi_sector + 2378 + sectors_handled; 2380 2379 2381 2380 goto read_more; 2382 2381 } else ··· 2601 2598 } 2602 2599 if (bio->bi_end_io) { 2603 2600 atomic_inc(&rdev->nr_pending); 2604 - bio->bi_sector = sector_nr + rdev->data_offset; 2601 + bio->bi_iter.bi_sector = sector_nr + rdev->data_offset; 2605 2602 bio->bi_bdev = rdev->bdev; 2606 2603 bio->bi_private = r1_bio; 2607 2604 } ··· 2701 2698 continue; 2702 2699 /* remove last page from this bio */ 2703 
2700 bio->bi_vcnt--; 2704 - bio->bi_size -= len; 2701 + bio->bi_iter.bi_size -= len; 2705 2702 bio->bi_flags &= ~(1<< BIO_SEG_VALID); 2706 2703 } 2707 2704 goto bio_full;
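
The pattern that repeats throughout raid1.c above (bi_sector becomes bi_iter.bi_sector, bi_size becomes bi_iter.bi_size) is the core of the immutable biovec change: the mutable position fields move out of struct bio into an embedded iterator, so the bio_vec array itself is never modified while the I/O advances. As a rough orientation sketch only (the authoritative definition lives in include/linux/blk_types.h), the new iterator looks roughly like this:

struct bvec_iter {
        sector_t        bi_sector;      /* device address, in 512-byte sectors */
        unsigned int    bi_size;        /* residual I/O count, in bytes */
        unsigned int    bi_idx;         /* current index into bi_io_vec[] */
        unsigned int    bi_bvec_done;   /* bytes completed in the current bvec */
};

/* field accesses are a mechanical rename: */
sector_t start      = bio->bi_iter.bi_sector;   /* was bio->bi_sector */
unsigned int remain = bio->bi_iter.bi_size;     /* was bio->bi_size   */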
+91 -103
drivers/md/raid10.c
··· 1152 1152 kfree(plug); 1153 1153 } 1154 1154 1155 - static void make_request(struct mddev *mddev, struct bio * bio) 1155 + static void __make_request(struct mddev *mddev, struct bio *bio) 1156 1156 { 1157 1157 struct r10conf *conf = mddev->private; 1158 1158 struct r10bio *r10_bio; 1159 1159 struct bio *read_bio; 1160 1160 int i; 1161 - sector_t chunk_mask = (conf->geo.chunk_mask & conf->prev.chunk_mask); 1162 - int chunk_sects = chunk_mask + 1; 1163 1161 const int rw = bio_data_dir(bio); 1164 1162 const unsigned long do_sync = (bio->bi_rw & REQ_SYNC); 1165 1163 const unsigned long do_fua = (bio->bi_rw & REQ_FUA); ··· 1172 1174 int max_sectors; 1173 1175 int sectors; 1174 1176 1175 - if (unlikely(bio->bi_rw & REQ_FLUSH)) { 1176 - md_flush_request(mddev, bio); 1177 - return; 1178 - } 1179 - 1180 - /* If this request crosses a chunk boundary, we need to 1181 - * split it. This will only happen for 1 PAGE (or less) requests. 1182 - */ 1183 - if (unlikely((bio->bi_sector & chunk_mask) + bio_sectors(bio) 1184 - > chunk_sects 1185 - && (conf->geo.near_copies < conf->geo.raid_disks 1186 - || conf->prev.near_copies < conf->prev.raid_disks))) { 1187 - struct bio_pair *bp; 1188 - /* Sanity check -- queue functions should prevent this happening */ 1189 - if (bio_segments(bio) > 1) 1190 - goto bad_map; 1191 - /* This is a one page bio that upper layers 1192 - * refuse to split for us, so we need to split it. 1193 - */ 1194 - bp = bio_split(bio, 1195 - chunk_sects - (bio->bi_sector & (chunk_sects - 1)) ); 1196 - 1197 - /* Each of these 'make_request' calls will call 'wait_barrier'. 1198 - * If the first succeeds but the second blocks due to the resync 1199 - * thread raising the barrier, we will deadlock because the 1200 - * IO to the underlying device will be queued in generic_make_request 1201 - * and will never complete, so will never reduce nr_pending. 1202 - * So increment nr_waiting here so no new raise_barriers will 1203 - * succeed, and so the second wait_barrier cannot block. 1204 - */ 1205 - spin_lock_irq(&conf->resync_lock); 1206 - conf->nr_waiting++; 1207 - spin_unlock_irq(&conf->resync_lock); 1208 - 1209 - make_request(mddev, &bp->bio1); 1210 - make_request(mddev, &bp->bio2); 1211 - 1212 - spin_lock_irq(&conf->resync_lock); 1213 - conf->nr_waiting--; 1214 - wake_up(&conf->wait_barrier); 1215 - spin_unlock_irq(&conf->resync_lock); 1216 - 1217 - bio_pair_release(bp); 1218 - return; 1219 - bad_map: 1220 - printk("md/raid10:%s: make_request bug: can't convert block across chunks" 1221 - " or bigger than %dk %llu %d\n", mdname(mddev), chunk_sects/2, 1222 - (unsigned long long)bio->bi_sector, bio_sectors(bio) / 2); 1223 - 1224 - bio_io_error(bio); 1225 - return; 1226 - } 1227 - 1228 - md_write_start(mddev, bio); 1229 - 1230 - /* 1231 - * Register the new request and wait if the reconstruction 1232 - * thread has put up a bar for new requests. 1233 - * Continue immediately if no resync is active currently. 1234 - */ 1235 - wait_barrier(conf); 1236 - 1237 1177 sectors = bio_sectors(bio); 1238 1178 while (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) && 1239 - bio->bi_sector < conf->reshape_progress && 1240 - bio->bi_sector + sectors > conf->reshape_progress) { 1179 + bio->bi_iter.bi_sector < conf->reshape_progress && 1180 + bio->bi_iter.bi_sector + sectors > conf->reshape_progress) { 1241 1181 /* IO spans the reshape position. 
Need to wait for 1242 1182 * reshape to pass 1243 1183 */ 1244 1184 allow_barrier(conf); 1245 1185 wait_event(conf->wait_barrier, 1246 - conf->reshape_progress <= bio->bi_sector || 1247 - conf->reshape_progress >= bio->bi_sector + sectors); 1186 + conf->reshape_progress <= bio->bi_iter.bi_sector || 1187 + conf->reshape_progress >= bio->bi_iter.bi_sector + 1188 + sectors); 1248 1189 wait_barrier(conf); 1249 1190 } 1250 1191 if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) && 1251 1192 bio_data_dir(bio) == WRITE && 1252 1193 (mddev->reshape_backwards 1253 - ? (bio->bi_sector < conf->reshape_safe && 1254 - bio->bi_sector + sectors > conf->reshape_progress) 1255 - : (bio->bi_sector + sectors > conf->reshape_safe && 1256 - bio->bi_sector < conf->reshape_progress))) { 1194 + ? (bio->bi_iter.bi_sector < conf->reshape_safe && 1195 + bio->bi_iter.bi_sector + sectors > conf->reshape_progress) 1196 + : (bio->bi_iter.bi_sector + sectors > conf->reshape_safe && 1197 + bio->bi_iter.bi_sector < conf->reshape_progress))) { 1257 1198 /* Need to update reshape_position in metadata */ 1258 1199 mddev->reshape_position = conf->reshape_progress; 1259 1200 set_bit(MD_CHANGE_DEVS, &mddev->flags); ··· 1210 1273 r10_bio->sectors = sectors; 1211 1274 1212 1275 r10_bio->mddev = mddev; 1213 - r10_bio->sector = bio->bi_sector; 1276 + r10_bio->sector = bio->bi_iter.bi_sector; 1214 1277 r10_bio->state = 0; 1215 1278 1216 1279 /* We might need to issue multiple reads to different ··· 1239 1302 slot = r10_bio->read_slot; 1240 1303 1241 1304 read_bio = bio_clone_mddev(bio, GFP_NOIO, mddev); 1242 - bio_trim(read_bio, r10_bio->sector - bio->bi_sector, 1305 + bio_trim(read_bio, r10_bio->sector - bio->bi_iter.bi_sector, 1243 1306 max_sectors); 1244 1307 1245 1308 r10_bio->devs[slot].bio = read_bio; 1246 1309 r10_bio->devs[slot].rdev = rdev; 1247 1310 1248 - read_bio->bi_sector = r10_bio->devs[slot].addr + 1311 + read_bio->bi_iter.bi_sector = r10_bio->devs[slot].addr + 1249 1312 choose_data_offset(r10_bio, rdev); 1250 1313 read_bio->bi_bdev = rdev->bdev; 1251 1314 read_bio->bi_end_io = raid10_end_read_request; ··· 1257 1320 * need another r10_bio. 
1258 1321 */ 1259 1322 sectors_handled = (r10_bio->sector + max_sectors 1260 - - bio->bi_sector); 1323 + - bio->bi_iter.bi_sector); 1261 1324 r10_bio->sectors = max_sectors; 1262 1325 spin_lock_irq(&conf->device_lock); 1263 1326 if (bio->bi_phys_segments == 0) ··· 1278 1341 r10_bio->sectors = bio_sectors(bio) - sectors_handled; 1279 1342 r10_bio->state = 0; 1280 1343 r10_bio->mddev = mddev; 1281 - r10_bio->sector = bio->bi_sector + sectors_handled; 1344 + r10_bio->sector = bio->bi_iter.bi_sector + 1345 + sectors_handled; 1282 1346 goto read_again; 1283 1347 } else 1284 1348 generic_make_request(read_bio); ··· 1437 1499 bio->bi_phys_segments++; 1438 1500 spin_unlock_irq(&conf->device_lock); 1439 1501 } 1440 - sectors_handled = r10_bio->sector + max_sectors - bio->bi_sector; 1502 + sectors_handled = r10_bio->sector + max_sectors - 1503 + bio->bi_iter.bi_sector; 1441 1504 1442 1505 atomic_set(&r10_bio->remaining, 1); 1443 1506 bitmap_startwrite(mddev->bitmap, r10_bio->sector, r10_bio->sectors, 0); ··· 1449 1510 if (r10_bio->devs[i].bio) { 1450 1511 struct md_rdev *rdev = conf->mirrors[d].rdev; 1451 1512 mbio = bio_clone_mddev(bio, GFP_NOIO, mddev); 1452 - bio_trim(mbio, r10_bio->sector - bio->bi_sector, 1513 + bio_trim(mbio, r10_bio->sector - bio->bi_iter.bi_sector, 1453 1514 max_sectors); 1454 1515 r10_bio->devs[i].bio = mbio; 1455 1516 1456 - mbio->bi_sector = (r10_bio->devs[i].addr+ 1517 + mbio->bi_iter.bi_sector = (r10_bio->devs[i].addr+ 1457 1518 choose_data_offset(r10_bio, 1458 1519 rdev)); 1459 1520 mbio->bi_bdev = rdev->bdev; ··· 1492 1553 rdev = conf->mirrors[d].rdev; 1493 1554 } 1494 1555 mbio = bio_clone_mddev(bio, GFP_NOIO, mddev); 1495 - bio_trim(mbio, r10_bio->sector - bio->bi_sector, 1556 + bio_trim(mbio, r10_bio->sector - bio->bi_iter.bi_sector, 1496 1557 max_sectors); 1497 1558 r10_bio->devs[i].repl_bio = mbio; 1498 1559 1499 - mbio->bi_sector = (r10_bio->devs[i].addr + 1560 + mbio->bi_iter.bi_sector = (r10_bio->devs[i].addr + 1500 1561 choose_data_offset( 1501 1562 r10_bio, rdev)); 1502 1563 mbio->bi_bdev = rdev->bdev; ··· 1530 1591 r10_bio->sectors = bio_sectors(bio) - sectors_handled; 1531 1592 1532 1593 r10_bio->mddev = mddev; 1533 - r10_bio->sector = bio->bi_sector + sectors_handled; 1594 + r10_bio->sector = bio->bi_iter.bi_sector + sectors_handled; 1534 1595 r10_bio->state = 0; 1535 1596 goto retry_write; 1536 1597 } 1537 1598 one_write_done(r10_bio); 1599 + } 1600 + 1601 + static void make_request(struct mddev *mddev, struct bio *bio) 1602 + { 1603 + struct r10conf *conf = mddev->private; 1604 + sector_t chunk_mask = (conf->geo.chunk_mask & conf->prev.chunk_mask); 1605 + int chunk_sects = chunk_mask + 1; 1606 + 1607 + struct bio *split; 1608 + 1609 + if (unlikely(bio->bi_rw & REQ_FLUSH)) { 1610 + md_flush_request(mddev, bio); 1611 + return; 1612 + } 1613 + 1614 + md_write_start(mddev, bio); 1615 + 1616 + /* 1617 + * Register the new request and wait if the reconstruction 1618 + * thread has put up a bar for new requests. 1619 + * Continue immediately if no resync is active currently. 1620 + */ 1621 + wait_barrier(conf); 1622 + 1623 + do { 1624 + 1625 + /* 1626 + * If this request crosses a chunk boundary, we need to split 1627 + * it. 
1628 + */ 1629 + if (unlikely((bio->bi_iter.bi_sector & chunk_mask) + 1630 + bio_sectors(bio) > chunk_sects 1631 + && (conf->geo.near_copies < conf->geo.raid_disks 1632 + || conf->prev.near_copies < 1633 + conf->prev.raid_disks))) { 1634 + split = bio_split(bio, chunk_sects - 1635 + (bio->bi_iter.bi_sector & 1636 + (chunk_sects - 1)), 1637 + GFP_NOIO, fs_bio_set); 1638 + bio_chain(split, bio); 1639 + } else { 1640 + split = bio; 1641 + } 1642 + 1643 + __make_request(mddev, split); 1644 + } while (split != bio); 1538 1645 1539 1646 /* In case raid10d snuck in to freeze_array */ 1540 1647 wake_up(&conf->wait_barrier); ··· 2109 2124 bio_reset(tbio); 2110 2125 2111 2126 tbio->bi_vcnt = vcnt; 2112 - tbio->bi_size = r10_bio->sectors << 9; 2127 + tbio->bi_iter.bi_size = r10_bio->sectors << 9; 2113 2128 tbio->bi_rw = WRITE; 2114 2129 tbio->bi_private = r10_bio; 2115 - tbio->bi_sector = r10_bio->devs[i].addr; 2130 + tbio->bi_iter.bi_sector = r10_bio->devs[i].addr; 2116 2131 2117 2132 for (j=0; j < vcnt ; j++) { 2118 2133 tbio->bi_io_vec[j].bv_offset = 0; ··· 2129 2144 atomic_inc(&r10_bio->remaining); 2130 2145 md_sync_acct(conf->mirrors[d].rdev->bdev, bio_sectors(tbio)); 2131 2146 2132 - tbio->bi_sector += conf->mirrors[d].rdev->data_offset; 2147 + tbio->bi_iter.bi_sector += conf->mirrors[d].rdev->data_offset; 2133 2148 tbio->bi_bdev = conf->mirrors[d].rdev->bdev; 2134 2149 generic_make_request(tbio); 2135 2150 } ··· 2599 2614 sectors = sect_to_write; 2600 2615 /* Write at 'sector' for 'sectors' */ 2601 2616 wbio = bio_clone_mddev(bio, GFP_NOIO, mddev); 2602 - bio_trim(wbio, sector - bio->bi_sector, sectors); 2603 - wbio->bi_sector = (r10_bio->devs[i].addr+ 2617 + bio_trim(wbio, sector - bio->bi_iter.bi_sector, sectors); 2618 + wbio->bi_iter.bi_sector = (r10_bio->devs[i].addr+ 2604 2619 choose_data_offset(r10_bio, rdev) + 2605 2620 (sector - r10_bio->sector)); 2606 2621 wbio->bi_bdev = rdev->bdev; ··· 2672 2687 (unsigned long long)r10_bio->sector); 2673 2688 bio = bio_clone_mddev(r10_bio->master_bio, 2674 2689 GFP_NOIO, mddev); 2675 - bio_trim(bio, r10_bio->sector - bio->bi_sector, max_sectors); 2690 + bio_trim(bio, r10_bio->sector - bio->bi_iter.bi_sector, max_sectors); 2676 2691 r10_bio->devs[slot].bio = bio; 2677 2692 r10_bio->devs[slot].rdev = rdev; 2678 - bio->bi_sector = r10_bio->devs[slot].addr 2693 + bio->bi_iter.bi_sector = r10_bio->devs[slot].addr 2679 2694 + choose_data_offset(r10_bio, rdev); 2680 2695 bio->bi_bdev = rdev->bdev; 2681 2696 bio->bi_rw = READ | do_sync; ··· 2686 2701 struct bio *mbio = r10_bio->master_bio; 2687 2702 int sectors_handled = 2688 2703 r10_bio->sector + max_sectors 2689 - - mbio->bi_sector; 2704 + - mbio->bi_iter.bi_sector; 2690 2705 r10_bio->sectors = max_sectors; 2691 2706 spin_lock_irq(&conf->device_lock); 2692 2707 if (mbio->bi_phys_segments == 0) ··· 2704 2719 set_bit(R10BIO_ReadError, 2705 2720 &r10_bio->state); 2706 2721 r10_bio->mddev = mddev; 2707 - r10_bio->sector = mbio->bi_sector 2722 + r10_bio->sector = mbio->bi_iter.bi_sector 2708 2723 + sectors_handled; 2709 2724 2710 2725 goto read_more; ··· 3142 3157 bio->bi_end_io = end_sync_read; 3143 3158 bio->bi_rw = READ; 3144 3159 from_addr = r10_bio->devs[j].addr; 3145 - bio->bi_sector = from_addr + rdev->data_offset; 3160 + bio->bi_iter.bi_sector = from_addr + 3161 + rdev->data_offset; 3146 3162 bio->bi_bdev = rdev->bdev; 3147 3163 atomic_inc(&rdev->nr_pending); 3148 3164 /* and we write to 'i' (if not in_sync) */ ··· 3167 3181 bio->bi_private = r10_bio; 3168 3182 bio->bi_end_io = end_sync_write; 3169 
3183 bio->bi_rw = WRITE; 3170 - bio->bi_sector = to_addr 3184 + bio->bi_iter.bi_sector = to_addr 3171 3185 + rdev->data_offset; 3172 3186 bio->bi_bdev = rdev->bdev; 3173 3187 atomic_inc(&r10_bio->remaining); ··· 3196 3210 bio->bi_private = r10_bio; 3197 3211 bio->bi_end_io = end_sync_write; 3198 3212 bio->bi_rw = WRITE; 3199 - bio->bi_sector = to_addr + rdev->data_offset; 3213 + bio->bi_iter.bi_sector = to_addr + 3214 + rdev->data_offset; 3200 3215 bio->bi_bdev = rdev->bdev; 3201 3216 atomic_inc(&r10_bio->remaining); 3202 3217 break; ··· 3315 3328 bio->bi_private = r10_bio; 3316 3329 bio->bi_end_io = end_sync_read; 3317 3330 bio->bi_rw = READ; 3318 - bio->bi_sector = sector + 3331 + bio->bi_iter.bi_sector = sector + 3319 3332 conf->mirrors[d].rdev->data_offset; 3320 3333 bio->bi_bdev = conf->mirrors[d].rdev->bdev; 3321 3334 count++; ··· 3337 3350 bio->bi_private = r10_bio; 3338 3351 bio->bi_end_io = end_sync_write; 3339 3352 bio->bi_rw = WRITE; 3340 - bio->bi_sector = sector + 3353 + bio->bi_iter.bi_sector = sector + 3341 3354 conf->mirrors[d].replacement->data_offset; 3342 3355 bio->bi_bdev = conf->mirrors[d].replacement->bdev; 3343 3356 count++; ··· 3384 3397 bio2 = bio2->bi_next) { 3385 3398 /* remove last page from this bio */ 3386 3399 bio2->bi_vcnt--; 3387 - bio2->bi_size -= len; 3400 + bio2->bi_iter.bi_size -= len; 3388 3401 bio2->bi_flags &= ~(1<< BIO_SEG_VALID); 3389 3402 } 3390 3403 goto bio_full; ··· 4405 4418 read_bio = bio_alloc_mddev(GFP_KERNEL, RESYNC_PAGES, mddev); 4406 4419 4407 4420 read_bio->bi_bdev = rdev->bdev; 4408 - read_bio->bi_sector = (r10_bio->devs[r10_bio->read_slot].addr 4421 + read_bio->bi_iter.bi_sector = (r10_bio->devs[r10_bio->read_slot].addr 4409 4422 + rdev->data_offset); 4410 4423 read_bio->bi_private = r10_bio; 4411 4424 read_bio->bi_end_io = end_sync_read; ··· 4413 4426 read_bio->bi_flags &= ~(BIO_POOL_MASK - 1); 4414 4427 read_bio->bi_flags |= 1 << BIO_UPTODATE; 4415 4428 read_bio->bi_vcnt = 0; 4416 - read_bio->bi_size = 0; 4429 + read_bio->bi_iter.bi_size = 0; 4417 4430 r10_bio->master_bio = read_bio; 4418 4431 r10_bio->read_slot = r10_bio->devs[r10_bio->read_slot].devnum; 4419 4432 ··· 4439 4452 4440 4453 bio_reset(b); 4441 4454 b->bi_bdev = rdev2->bdev; 4442 - b->bi_sector = r10_bio->devs[s/2].addr + rdev2->new_data_offset; 4455 + b->bi_iter.bi_sector = r10_bio->devs[s/2].addr + 4456 + rdev2->new_data_offset; 4443 4457 b->bi_private = r10_bio; 4444 4458 b->bi_end_io = end_reshape_write; 4445 4459 b->bi_rw = WRITE; ··· 4467 4479 bio2 = bio2->bi_next) { 4468 4480 /* Remove last page from this bio */ 4469 4481 bio2->bi_vcnt--; 4470 - bio2->bi_size -= len; 4482 + bio2->bi_iter.bi_size -= len; 4471 4483 bio2->bi_flags &= ~(1<<BIO_SEG_VALID); 4472 4484 } 4473 4485 goto bio_full;
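
The rewritten make_request() above replaces the old single-page bio_pair_split() path with the new bio_split()/bio_chain() helpers, which can carve chunk-aligned pieces off a bio of any size and complete the parent only after every child finishes. A hedged, generic sketch of that loop follows; submit_in_chunks() is a made-up name, chunk_sects is assumed to be a power of two, and error handling plus the md-specific barrier logic are omitted:

static void submit_in_chunks(struct bio *bio, unsigned int chunk_sects)
{
        struct bio *split;

        do {
                unsigned int sectors = chunk_sects -
                        (bio->bi_iter.bi_sector & (chunk_sects - 1));

                if (sectors < bio_sectors(bio)) {
                        /* carve off the front; bio_chain() keeps the parent
                         * pending until the child completes */
                        split = bio_split(bio, sectors, GFP_NOIO, fs_bio_set);
                        bio_chain(split, bio);
                } else {
                        split = bio;
                }

                generic_make_request(split);
        } while (split != bio);
}

The same structure removes the recursion and the nr_waiting juggling the old code needed to avoid deadlocking against the resync barrier.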
+43 -41
drivers/md/raid5.c
··· 133 133 static inline struct bio *r5_next_bio(struct bio *bio, sector_t sector) 134 134 { 135 135 int sectors = bio_sectors(bio); 136 - if (bio->bi_sector + sectors < sector + STRIPE_SECTORS) 136 + if (bio->bi_iter.bi_sector + sectors < sector + STRIPE_SECTORS) 137 137 return bio->bi_next; 138 138 else 139 139 return NULL; ··· 225 225 226 226 return_bi = bi->bi_next; 227 227 bi->bi_next = NULL; 228 - bi->bi_size = 0; 228 + bi->bi_iter.bi_size = 0; 229 229 trace_block_bio_complete(bdev_get_queue(bi->bi_bdev), 230 230 bi, 0); 231 231 bio_endio(bi, 0); ··· 852 852 bi->bi_rw, i); 853 853 atomic_inc(&sh->count); 854 854 if (use_new_offset(conf, sh)) 855 - bi->bi_sector = (sh->sector 855 + bi->bi_iter.bi_sector = (sh->sector 856 856 + rdev->new_data_offset); 857 857 else 858 - bi->bi_sector = (sh->sector 858 + bi->bi_iter.bi_sector = (sh->sector 859 859 + rdev->data_offset); 860 860 if (test_bit(R5_ReadNoMerge, &sh->dev[i].flags)) 861 861 bi->bi_rw |= REQ_NOMERGE; ··· 863 863 bi->bi_vcnt = 1; 864 864 bi->bi_io_vec[0].bv_len = STRIPE_SIZE; 865 865 bi->bi_io_vec[0].bv_offset = 0; 866 - bi->bi_size = STRIPE_SIZE; 866 + bi->bi_iter.bi_size = STRIPE_SIZE; 867 867 /* 868 868 * If this is discard request, set bi_vcnt 0. We don't 869 869 * want to confuse SCSI because SCSI will replace payload ··· 899 899 rbi->bi_rw, i); 900 900 atomic_inc(&sh->count); 901 901 if (use_new_offset(conf, sh)) 902 - rbi->bi_sector = (sh->sector 902 + rbi->bi_iter.bi_sector = (sh->sector 903 903 + rrdev->new_data_offset); 904 904 else 905 - rbi->bi_sector = (sh->sector 905 + rbi->bi_iter.bi_sector = (sh->sector 906 906 + rrdev->data_offset); 907 907 rbi->bi_vcnt = 1; 908 908 rbi->bi_io_vec[0].bv_len = STRIPE_SIZE; 909 909 rbi->bi_io_vec[0].bv_offset = 0; 910 - rbi->bi_size = STRIPE_SIZE; 910 + rbi->bi_iter.bi_size = STRIPE_SIZE; 911 911 /* 912 912 * If this is discard request, set bi_vcnt 0. 
We don't 913 913 * want to confuse SCSI because SCSI will replace payload ··· 935 935 async_copy_data(int frombio, struct bio *bio, struct page *page, 936 936 sector_t sector, struct dma_async_tx_descriptor *tx) 937 937 { 938 - struct bio_vec *bvl; 938 + struct bio_vec bvl; 939 + struct bvec_iter iter; 939 940 struct page *bio_page; 940 - int i; 941 941 int page_offset; 942 942 struct async_submit_ctl submit; 943 943 enum async_tx_flags flags = 0; 944 944 945 - if (bio->bi_sector >= sector) 946 - page_offset = (signed)(bio->bi_sector - sector) * 512; 945 + if (bio->bi_iter.bi_sector >= sector) 946 + page_offset = (signed)(bio->bi_iter.bi_sector - sector) * 512; 947 947 else 948 - page_offset = (signed)(sector - bio->bi_sector) * -512; 948 + page_offset = (signed)(sector - bio->bi_iter.bi_sector) * -512; 949 949 950 950 if (frombio) 951 951 flags |= ASYNC_TX_FENCE; 952 952 init_async_submit(&submit, flags, tx, NULL, NULL, NULL); 953 953 954 - bio_for_each_segment(bvl, bio, i) { 955 - int len = bvl->bv_len; 954 + bio_for_each_segment(bvl, bio, iter) { 955 + int len = bvl.bv_len; 956 956 int clen; 957 957 int b_offset = 0; 958 958 ··· 968 968 clen = len; 969 969 970 970 if (clen > 0) { 971 - b_offset += bvl->bv_offset; 972 - bio_page = bvl->bv_page; 971 + b_offset += bvl.bv_offset; 972 + bio_page = bvl.bv_page; 973 973 if (frombio) 974 974 tx = async_memcpy(page, bio_page, page_offset, 975 975 b_offset, clen, &submit); ··· 1012 1012 BUG_ON(!dev->read); 1013 1013 rbi = dev->read; 1014 1014 dev->read = NULL; 1015 - while (rbi && rbi->bi_sector < 1015 + while (rbi && rbi->bi_iter.bi_sector < 1016 1016 dev->sector + STRIPE_SECTORS) { 1017 1017 rbi2 = r5_next_bio(rbi, dev->sector); 1018 1018 if (!raid5_dec_bi_active_stripes(rbi)) { ··· 1048 1048 dev->read = rbi = dev->toread; 1049 1049 dev->toread = NULL; 1050 1050 spin_unlock_irq(&sh->stripe_lock); 1051 - while (rbi && rbi->bi_sector < 1051 + while (rbi && rbi->bi_iter.bi_sector < 1052 1052 dev->sector + STRIPE_SECTORS) { 1053 1053 tx = async_copy_data(0, rbi, dev->page, 1054 1054 dev->sector, tx); ··· 1390 1390 wbi = dev->written = chosen; 1391 1391 spin_unlock_irq(&sh->stripe_lock); 1392 1392 1393 - while (wbi && wbi->bi_sector < 1393 + while (wbi && wbi->bi_iter.bi_sector < 1394 1394 dev->sector + STRIPE_SECTORS) { 1395 1395 if (wbi->bi_rw & REQ_FUA) 1396 1396 set_bit(R5_WantFUA, &dev->flags); ··· 2615 2615 int firstwrite=0; 2616 2616 2617 2617 pr_debug("adding bi b#%llu to stripe s#%llu\n", 2618 - (unsigned long long)bi->bi_sector, 2618 + (unsigned long long)bi->bi_iter.bi_sector, 2619 2619 (unsigned long long)sh->sector); 2620 2620 2621 2621 /* ··· 2633 2633 firstwrite = 1; 2634 2634 } else 2635 2635 bip = &sh->dev[dd_idx].toread; 2636 - while (*bip && (*bip)->bi_sector < bi->bi_sector) { 2637 - if (bio_end_sector(*bip) > bi->bi_sector) 2636 + while (*bip && (*bip)->bi_iter.bi_sector < bi->bi_iter.bi_sector) { 2637 + if (bio_end_sector(*bip) > bi->bi_iter.bi_sector) 2638 2638 goto overlap; 2639 2639 bip = & (*bip)->bi_next; 2640 2640 } 2641 - if (*bip && (*bip)->bi_sector < bio_end_sector(bi)) 2641 + if (*bip && (*bip)->bi_iter.bi_sector < bio_end_sector(bi)) 2642 2642 goto overlap; 2643 2643 2644 2644 BUG_ON(*bip && bi->bi_next && (*bip) != bi->bi_next); ··· 2652 2652 sector_t sector = sh->dev[dd_idx].sector; 2653 2653 for (bi=sh->dev[dd_idx].towrite; 2654 2654 sector < sh->dev[dd_idx].sector + STRIPE_SECTORS && 2655 - bi && bi->bi_sector <= sector; 2655 + bi && bi->bi_iter.bi_sector <= sector; 2656 2656 bi = r5_next_bio(bi, 
sh->dev[dd_idx].sector)) { 2657 2657 if (bio_end_sector(bi) >= sector) 2658 2658 sector = bio_end_sector(bi); ··· 2662 2662 } 2663 2663 2664 2664 pr_debug("added bi b#%llu to stripe s#%llu, disk %d.\n", 2665 - (unsigned long long)(*bip)->bi_sector, 2665 + (unsigned long long)(*bip)->bi_iter.bi_sector, 2666 2666 (unsigned long long)sh->sector, dd_idx); 2667 2667 spin_unlock_irq(&sh->stripe_lock); 2668 2668 ··· 2737 2737 if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags)) 2738 2738 wake_up(&conf->wait_for_overlap); 2739 2739 2740 - while (bi && bi->bi_sector < 2740 + while (bi && bi->bi_iter.bi_sector < 2741 2741 sh->dev[i].sector + STRIPE_SECTORS) { 2742 2742 struct bio *nextbi = r5_next_bio(bi, sh->dev[i].sector); 2743 2743 clear_bit(BIO_UPTODATE, &bi->bi_flags); ··· 2756 2756 bi = sh->dev[i].written; 2757 2757 sh->dev[i].written = NULL; 2758 2758 if (bi) bitmap_end = 1; 2759 - while (bi && bi->bi_sector < 2759 + while (bi && bi->bi_iter.bi_sector < 2760 2760 sh->dev[i].sector + STRIPE_SECTORS) { 2761 2761 struct bio *bi2 = r5_next_bio(bi, sh->dev[i].sector); 2762 2762 clear_bit(BIO_UPTODATE, &bi->bi_flags); ··· 2780 2780 spin_unlock_irq(&sh->stripe_lock); 2781 2781 if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags)) 2782 2782 wake_up(&conf->wait_for_overlap); 2783 - while (bi && bi->bi_sector < 2783 + while (bi && bi->bi_iter.bi_sector < 2784 2784 sh->dev[i].sector + STRIPE_SECTORS) { 2785 2785 struct bio *nextbi = 2786 2786 r5_next_bio(bi, sh->dev[i].sector); ··· 3004 3004 clear_bit(R5_UPTODATE, &dev->flags); 3005 3005 wbi = dev->written; 3006 3006 dev->written = NULL; 3007 - while (wbi && wbi->bi_sector < 3007 + while (wbi && wbi->bi_iter.bi_sector < 3008 3008 dev->sector + STRIPE_SECTORS) { 3009 3009 wbi2 = r5_next_bio(wbi, dev->sector); 3010 3010 if (!raid5_dec_bi_active_stripes(wbi)) { ··· 4096 4096 4097 4097 static int in_chunk_boundary(struct mddev *mddev, struct bio *bio) 4098 4098 { 4099 - sector_t sector = bio->bi_sector + get_start_sect(bio->bi_bdev); 4099 + sector_t sector = bio->bi_iter.bi_sector + get_start_sect(bio->bi_bdev); 4100 4100 unsigned int chunk_sectors = mddev->chunk_sectors; 4101 4101 unsigned int bio_sectors = bio_sectors(bio); 4102 4102 ··· 4233 4233 /* 4234 4234 * compute position 4235 4235 */ 4236 - align_bi->bi_sector = raid5_compute_sector(conf, raid_bio->bi_sector, 4237 - 0, 4238 - &dd_idx, NULL); 4236 + align_bi->bi_iter.bi_sector = 4237 + raid5_compute_sector(conf, raid_bio->bi_iter.bi_sector, 4238 + 0, &dd_idx, NULL); 4239 4239 4240 4240 end_sector = bio_end_sector(align_bi); 4241 4241 rcu_read_lock(); ··· 4260 4260 align_bi->bi_flags &= ~(1 << BIO_SEG_VALID); 4261 4261 4262 4262 if (!bio_fits_rdev(align_bi) || 4263 - is_badblock(rdev, align_bi->bi_sector, bio_sectors(align_bi), 4263 + is_badblock(rdev, align_bi->bi_iter.bi_sector, 4264 + bio_sectors(align_bi), 4264 4265 &first_bad, &bad_sectors)) { 4265 4266 /* too big in some way, or has a known bad block */ 4266 4267 bio_put(align_bi); ··· 4270 4269 } 4271 4270 4272 4271 /* No reshape active, so we can trust rdev->data_offset */ 4273 - align_bi->bi_sector += rdev->data_offset; 4272 + align_bi->bi_iter.bi_sector += rdev->data_offset; 4274 4273 4275 4274 spin_lock_irq(&conf->device_lock); 4276 4275 wait_event_lock_irq(conf->wait_for_stripe, ··· 4282 4281 if (mddev->gendisk) 4283 4282 trace_block_bio_remap(bdev_get_queue(align_bi->bi_bdev), 4284 4283 align_bi, disk_devt(mddev->gendisk), 4285 - raid_bio->bi_sector); 4284 + raid_bio->bi_iter.bi_sector); 4286 4285 generic_make_request(align_bi); 4287 
4286 return 1; 4288 4287 } else { ··· 4465 4464 /* Skip discard while reshape is happening */ 4466 4465 return; 4467 4466 4468 - logical_sector = bi->bi_sector & ~((sector_t)STRIPE_SECTORS-1); 4469 - last_sector = bi->bi_sector + (bi->bi_size>>9); 4467 + logical_sector = bi->bi_iter.bi_sector & ~((sector_t)STRIPE_SECTORS-1); 4468 + last_sector = bi->bi_iter.bi_sector + (bi->bi_iter.bi_size>>9); 4470 4469 4471 4470 bi->bi_next = NULL; 4472 4471 bi->bi_phys_segments = 1; /* over-loaded to count active stripes */ ··· 4570 4569 return; 4571 4570 } 4572 4571 4573 - logical_sector = bi->bi_sector & ~((sector_t)STRIPE_SECTORS-1); 4572 + logical_sector = bi->bi_iter.bi_sector & ~((sector_t)STRIPE_SECTORS-1); 4574 4573 last_sector = bio_end_sector(bi); 4575 4574 bi->bi_next = NULL; 4576 4575 bi->bi_phys_segments = 1; /* over-loaded to count active stripes */ ··· 5054 5053 int remaining; 5055 5054 int handled = 0; 5056 5055 5057 - logical_sector = raid_bio->bi_sector & ~((sector_t)STRIPE_SECTORS-1); 5056 + logical_sector = raid_bio->bi_iter.bi_sector & 5057 + ~((sector_t)STRIPE_SECTORS-1); 5058 5058 sector = raid5_compute_sector(conf, logical_sector, 5059 5059 0, &dd_idx, NULL); 5060 5060 last_sector = bio_end_sector(raid_bio);
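
The async_copy_data() hunk above shows the new iteration contract that most of the remaining drivers in this series adopt: bio_for_each_segment() now walks a struct bvec_iter and hands each segment back as a struct bio_vec by value, so callers never index bio->bi_io_vec directly and the same loop works on cloned or split bios that begin mid-vector. A minimal before/after sketch (the total counter is illustrative):

/* before this series */
struct bio_vec *bv;
int i;
bio_for_each_segment(bv, bio, i)
        total += bv->bv_len;

/* with immutable biovecs */
struct bio_vec bv;
struct bvec_iter iter;
bio_for_each_segment(bv, bio, iter)
        total += bv.bv_len;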
+4 -4
drivers/message/fusion/mptsas.c
··· 2235 2235 } 2236 2236 2237 2237 /* do we need to support multiple segments? */ 2238 - if (bio_segments(req->bio) > 1 || bio_segments(rsp->bio) > 1) { 2239 - printk(MYIOC_s_ERR_FMT "%s: multiple segments req %u %u, rsp %u %u\n", 2240 - ioc->name, __func__, bio_segments(req->bio), blk_rq_bytes(req), 2241 - bio_segments(rsp->bio), blk_rq_bytes(rsp)); 2238 + if (bio_multiple_segments(req->bio) || 2239 + bio_multiple_segments(rsp->bio)) { 2240 + printk(MYIOC_s_ERR_FMT "%s: multiple segments req %u, rsp %u\n", 2241 + ioc->name, __func__, blk_rq_bytes(req), blk_rq_bytes(rsp)); 2242 2242 return -EINVAL; 2243 2243 } 2244 2244
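
The SMP passthrough path only cares whether the payload is contiguous, and with iterator-based bios counting segments would mean walking the whole bio, so the open-coded bio_segments() > 1 tests become the new bio_multiple_segments() helper. Roughly, assuming the definition added to include/linux/bio.h in this series (treat this as a sketch, not a quote):

#define bio_multiple_segments(bio) \
        ((bio)->bi_iter.bi_size != bio_iovec(bio).bv_len)

/* typical use in these transports */
if (bio_multiple_segments(req->bio) || bio_multiple_segments(rsp->bio))
        return -EINVAL; /* this driver handles single-segment payloads only */

The same substitution shows up again below in sas_expander.c and the mpt2sas/mpt3sas transports.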
+5 -5
drivers/s390/block/dasd_diag.c
··· 504 504 struct dasd_diag_req *dreq; 505 505 struct dasd_diag_bio *dbio; 506 506 struct req_iterator iter; 507 - struct bio_vec *bv; 507 + struct bio_vec bv; 508 508 char *dst; 509 509 unsigned int count, datasize; 510 510 sector_t recid, first_rec, last_rec; ··· 525 525 /* Check struct bio and count the number of blocks for the request. */ 526 526 count = 0; 527 527 rq_for_each_segment(bv, req, iter) { 528 - if (bv->bv_len & (blksize - 1)) 528 + if (bv.bv_len & (blksize - 1)) 529 529 /* Fba can only do full blocks. */ 530 530 return ERR_PTR(-EINVAL); 531 - count += bv->bv_len >> (block->s2b_shift + 9); 531 + count += bv.bv_len >> (block->s2b_shift + 9); 532 532 } 533 533 /* Paranoia. */ 534 534 if (count != last_rec - first_rec + 1) ··· 545 545 dbio = dreq->bio; 546 546 recid = first_rec; 547 547 rq_for_each_segment(bv, req, iter) { 548 - dst = page_address(bv->bv_page) + bv->bv_offset; 549 - for (off = 0; off < bv->bv_len; off += blksize) { 548 + dst = page_address(bv.bv_page) + bv.bv_offset; 549 + for (off = 0; off < bv.bv_len; off += blksize) { 550 550 memset(dbio, 0, sizeof (struct dasd_diag_bio)); 551 551 dbio->type = rw_cmd; 552 552 dbio->block_number = recid + 1;
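
Request-level iteration follows the same convention, which is why this and the other s390 block drivers below swap struct bio_vec *bv for struct bio_vec bv and -> for . : rq_for_each_segment() is built on bio_for_each_segment() and now copies each segment descriptor into the caller's variable instead of handing out a pointer. A minimal sketch of the new calling convention (the sector counting is illustrative):

struct req_iterator iter;
struct bio_vec bv;
unsigned int sectors = 0;

rq_for_each_segment(bv, req, iter)
        sectors += bv.bv_len >> 9;      /* 512-byte sectors in this segment */

Because bv is a local copy, drivers that need to pass a segment along (as zram does further down) take its address explicitly rather than forwarding the iterator's pointer.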
+24 -24
drivers/s390/block/dasd_eckd.c
··· 2551 2551 struct dasd_ccw_req *cqr; 2552 2552 struct ccw1 *ccw; 2553 2553 struct req_iterator iter; 2554 - struct bio_vec *bv; 2554 + struct bio_vec bv; 2555 2555 char *dst; 2556 2556 unsigned int off; 2557 2557 int count, cidaw, cplength, datasize; ··· 2573 2573 count = 0; 2574 2574 cidaw = 0; 2575 2575 rq_for_each_segment(bv, req, iter) { 2576 - if (bv->bv_len & (blksize - 1)) 2576 + if (bv.bv_len & (blksize - 1)) 2577 2577 /* Eckd can only do full blocks. */ 2578 2578 return ERR_PTR(-EINVAL); 2579 - count += bv->bv_len >> (block->s2b_shift + 9); 2579 + count += bv.bv_len >> (block->s2b_shift + 9); 2580 2580 #if defined(CONFIG_64BIT) 2581 - if (idal_is_needed (page_address(bv->bv_page), bv->bv_len)) 2582 - cidaw += bv->bv_len >> (block->s2b_shift + 9); 2581 + if (idal_is_needed (page_address(bv.bv_page), bv.bv_len)) 2582 + cidaw += bv.bv_len >> (block->s2b_shift + 9); 2583 2583 #endif 2584 2584 } 2585 2585 /* Paranoia. */ ··· 2650 2650 last_rec - recid + 1, cmd, basedev, blksize); 2651 2651 } 2652 2652 rq_for_each_segment(bv, req, iter) { 2653 - dst = page_address(bv->bv_page) + bv->bv_offset; 2653 + dst = page_address(bv.bv_page) + bv.bv_offset; 2654 2654 if (dasd_page_cache) { 2655 2655 char *copy = kmem_cache_alloc(dasd_page_cache, 2656 2656 GFP_DMA | __GFP_NOWARN); 2657 2657 if (copy && rq_data_dir(req) == WRITE) 2658 - memcpy(copy + bv->bv_offset, dst, bv->bv_len); 2658 + memcpy(copy + bv.bv_offset, dst, bv.bv_len); 2659 2659 if (copy) 2660 - dst = copy + bv->bv_offset; 2660 + dst = copy + bv.bv_offset; 2661 2661 } 2662 - for (off = 0; off < bv->bv_len; off += blksize) { 2662 + for (off = 0; off < bv.bv_len; off += blksize) { 2663 2663 sector_t trkid = recid; 2664 2664 unsigned int recoffs = sector_div(trkid, blk_per_trk); 2665 2665 rcmd = cmd; ··· 2735 2735 struct dasd_ccw_req *cqr; 2736 2736 struct ccw1 *ccw; 2737 2737 struct req_iterator iter; 2738 - struct bio_vec *bv; 2738 + struct bio_vec bv; 2739 2739 char *dst, *idaw_dst; 2740 2740 unsigned int cidaw, cplength, datasize; 2741 2741 unsigned int tlf; ··· 2813 2813 idaw_dst = NULL; 2814 2814 idaw_len = 0; 2815 2815 rq_for_each_segment(bv, req, iter) { 2816 - dst = page_address(bv->bv_page) + bv->bv_offset; 2817 - seg_len = bv->bv_len; 2816 + dst = page_address(bv.bv_page) + bv.bv_offset; 2817 + seg_len = bv.bv_len; 2818 2818 while (seg_len) { 2819 2819 if (new_track) { 2820 2820 trkid = recid; ··· 3039 3039 { 3040 3040 struct dasd_ccw_req *cqr; 3041 3041 struct req_iterator iter; 3042 - struct bio_vec *bv; 3042 + struct bio_vec bv; 3043 3043 char *dst; 3044 3044 unsigned int trkcount, ctidaw; 3045 3045 unsigned char cmd; ··· 3125 3125 new_track = 1; 3126 3126 recid = first_rec; 3127 3127 rq_for_each_segment(bv, req, iter) { 3128 - dst = page_address(bv->bv_page) + bv->bv_offset; 3129 - seg_len = bv->bv_len; 3128 + dst = page_address(bv.bv_page) + bv.bv_offset; 3129 + seg_len = bv.bv_len; 3130 3130 while (seg_len) { 3131 3131 if (new_track) { 3132 3132 trkid = recid; ··· 3158 3158 } 3159 3159 } else { 3160 3160 rq_for_each_segment(bv, req, iter) { 3161 - dst = page_address(bv->bv_page) + bv->bv_offset; 3161 + dst = page_address(bv.bv_page) + bv.bv_offset; 3162 3162 last_tidaw = itcw_add_tidaw(itcw, 0x00, 3163 - dst, bv->bv_len); 3163 + dst, bv.bv_len); 3164 3164 if (IS_ERR(last_tidaw)) { 3165 3165 ret = -EINVAL; 3166 3166 goto out_error; ··· 3278 3278 struct dasd_ccw_req *cqr; 3279 3279 struct ccw1 *ccw; 3280 3280 struct req_iterator iter; 3281 - struct bio_vec *bv; 3281 + struct bio_vec bv; 3282 3282 char *dst; 3283 3283 
unsigned char cmd; 3284 3284 unsigned int trkcount; ··· 3378 3378 idaws = idal_create_words(idaws, rawpadpage, PAGE_SIZE); 3379 3379 } 3380 3380 rq_for_each_segment(bv, req, iter) { 3381 - dst = page_address(bv->bv_page) + bv->bv_offset; 3382 - seg_len = bv->bv_len; 3381 + dst = page_address(bv.bv_page) + bv.bv_offset; 3382 + seg_len = bv.bv_len; 3383 3383 if (cmd == DASD_ECKD_CCW_READ_TRACK) 3384 3384 memset(dst, 0, seg_len); 3385 3385 if (!len_to_track_end) { ··· 3424 3424 struct dasd_eckd_private *private; 3425 3425 struct ccw1 *ccw; 3426 3426 struct req_iterator iter; 3427 - struct bio_vec *bv; 3427 + struct bio_vec bv; 3428 3428 char *dst, *cda; 3429 3429 unsigned int blksize, blk_per_trk, off; 3430 3430 sector_t recid; ··· 3442 3442 if (private->uses_cdl == 0 || recid > 2*blk_per_trk) 3443 3443 ccw++; 3444 3444 rq_for_each_segment(bv, req, iter) { 3445 - dst = page_address(bv->bv_page) + bv->bv_offset; 3446 - for (off = 0; off < bv->bv_len; off += blksize) { 3445 + dst = page_address(bv.bv_page) + bv.bv_offset; 3446 + for (off = 0; off < bv.bv_len; off += blksize) { 3447 3447 /* Skip locate record. */ 3448 3448 if (private->uses_cdl && recid <= 2*blk_per_trk) 3449 3449 ccw++; ··· 3454 3454 cda = (char *)((addr_t) ccw->cda); 3455 3455 if (dst != cda) { 3456 3456 if (rq_data_dir(req) == READ) 3457 - memcpy(dst, cda, bv->bv_len); 3457 + memcpy(dst, cda, bv.bv_len); 3458 3458 kmem_cache_free(dasd_page_cache, 3459 3459 (void *)((addr_t)cda & PAGE_MASK)); 3460 3460 }
+13 -13
drivers/s390/block/dasd_fba.c
··· 260 260 struct dasd_ccw_req *cqr; 261 261 struct ccw1 *ccw; 262 262 struct req_iterator iter; 263 - struct bio_vec *bv; 263 + struct bio_vec bv; 264 264 char *dst; 265 265 int count, cidaw, cplength, datasize; 266 266 sector_t recid, first_rec, last_rec; ··· 283 283 count = 0; 284 284 cidaw = 0; 285 285 rq_for_each_segment(bv, req, iter) { 286 - if (bv->bv_len & (blksize - 1)) 286 + if (bv.bv_len & (blksize - 1)) 287 287 /* Fba can only do full blocks. */ 288 288 return ERR_PTR(-EINVAL); 289 - count += bv->bv_len >> (block->s2b_shift + 9); 289 + count += bv.bv_len >> (block->s2b_shift + 9); 290 290 #if defined(CONFIG_64BIT) 291 - if (idal_is_needed (page_address(bv->bv_page), bv->bv_len)) 292 - cidaw += bv->bv_len / blksize; 291 + if (idal_is_needed (page_address(bv.bv_page), bv.bv_len)) 292 + cidaw += bv.bv_len / blksize; 293 293 #endif 294 294 } 295 295 /* Paranoia. */ ··· 326 326 } 327 327 recid = first_rec; 328 328 rq_for_each_segment(bv, req, iter) { 329 - dst = page_address(bv->bv_page) + bv->bv_offset; 329 + dst = page_address(bv.bv_page) + bv.bv_offset; 330 330 if (dasd_page_cache) { 331 331 char *copy = kmem_cache_alloc(dasd_page_cache, 332 332 GFP_DMA | __GFP_NOWARN); 333 333 if (copy && rq_data_dir(req) == WRITE) 334 - memcpy(copy + bv->bv_offset, dst, bv->bv_len); 334 + memcpy(copy + bv.bv_offset, dst, bv.bv_len); 335 335 if (copy) 336 - dst = copy + bv->bv_offset; 336 + dst = copy + bv.bv_offset; 337 337 } 338 - for (off = 0; off < bv->bv_len; off += blksize) { 338 + for (off = 0; off < bv.bv_len; off += blksize) { 339 339 /* Locate record for stupid devices. */ 340 340 if (private->rdc_data.mode.bits.data_chain == 0) { 341 341 ccw[-1].flags |= CCW_FLAG_CC; ··· 384 384 struct dasd_fba_private *private; 385 385 struct ccw1 *ccw; 386 386 struct req_iterator iter; 387 - struct bio_vec *bv; 387 + struct bio_vec bv; 388 388 char *dst, *cda; 389 389 unsigned int blksize, off; 390 390 int status; ··· 399 399 if (private->rdc_data.mode.bits.data_chain != 0) 400 400 ccw++; 401 401 rq_for_each_segment(bv, req, iter) { 402 - dst = page_address(bv->bv_page) + bv->bv_offset; 403 - for (off = 0; off < bv->bv_len; off += blksize) { 402 + dst = page_address(bv.bv_page) + bv.bv_offset; 403 + for (off = 0; off < bv.bv_len; off += blksize) { 404 404 /* Skip locate record. */ 405 405 if (private->rdc_data.mode.bits.data_chain == 0) 406 406 ccw++; ··· 411 411 cda = (char *)((addr_t) ccw->cda); 412 412 if (dst != cda) { 413 413 if (rq_data_dir(req) == READ) 414 - memcpy(dst, cda, bv->bv_len); 414 + memcpy(dst, cda, bv.bv_len); 415 415 kmem_cache_free(dasd_page_cache, 416 416 (void *)((addr_t)cda & PAGE_MASK)); 417 417 }
+11 -10
drivers/s390/block/dcssblk.c
··· 808 808 dcssblk_make_request(struct request_queue *q, struct bio *bio) 809 809 { 810 810 struct dcssblk_dev_info *dev_info; 811 - struct bio_vec *bvec; 811 + struct bio_vec bvec; 812 + struct bvec_iter iter; 812 813 unsigned long index; 813 814 unsigned long page_addr; 814 815 unsigned long source_addr; 815 816 unsigned long bytes_done; 816 - int i; 817 817 818 818 bytes_done = 0; 819 819 dev_info = bio->bi_bdev->bd_disk->private_data; 820 820 if (dev_info == NULL) 821 821 goto fail; 822 - if ((bio->bi_sector & 7) != 0 || (bio->bi_size & 4095) != 0) 822 + if ((bio->bi_iter.bi_sector & 7) != 0 || 823 + (bio->bi_iter.bi_size & 4095) != 0) 823 824 /* Request is not page-aligned. */ 824 825 goto fail; 825 826 if (bio_end_sector(bio) > get_capacity(bio->bi_bdev->bd_disk)) { ··· 843 842 } 844 843 } 845 844 846 - index = (bio->bi_sector >> 3); 847 - bio_for_each_segment(bvec, bio, i) { 845 + index = (bio->bi_iter.bi_sector >> 3); 846 + bio_for_each_segment(bvec, bio, iter) { 848 847 page_addr = (unsigned long) 849 - page_address(bvec->bv_page) + bvec->bv_offset; 848 + page_address(bvec.bv_page) + bvec.bv_offset; 850 849 source_addr = dev_info->start + (index<<12) + bytes_done; 851 - if (unlikely((page_addr & 4095) != 0) || (bvec->bv_len & 4095) != 0) 850 + if (unlikely((page_addr & 4095) != 0) || (bvec.bv_len & 4095) != 0) 852 851 // More paranoia. 853 852 goto fail; 854 853 if (bio_data_dir(bio) == READ) { 855 854 memcpy((void*)page_addr, (void*)source_addr, 856 - bvec->bv_len); 855 + bvec.bv_len); 857 856 } else { 858 857 memcpy((void*)source_addr, (void*)page_addr, 859 - bvec->bv_len); 858 + bvec.bv_len); 860 859 } 861 - bytes_done += bvec->bv_len; 860 + bytes_done += bvec.bv_len; 862 861 } 863 862 bio_endio(bio, 0); 864 863 return;
+4 -4
drivers/s390/block/scm_blk.c
··· 130 130 struct aidaw *aidaw = scmrq->aidaw; 131 131 struct msb *msb = &scmrq->aob->msb[0]; 132 132 struct req_iterator iter; 133 - struct bio_vec *bv; 133 + struct bio_vec bv; 134 134 135 135 msb->bs = MSB_BS_4K; 136 136 scmrq->aob->request.msb_count = 1; ··· 142 142 msb->data_addr = (u64) aidaw; 143 143 144 144 rq_for_each_segment(bv, scmrq->request, iter) { 145 - WARN_ON(bv->bv_offset); 146 - msb->blk_count += bv->bv_len >> 12; 147 - aidaw->data_addr = (u64) page_address(bv->bv_page); 145 + WARN_ON(bv.bv_offset); 146 + msb->blk_count += bv.bv_len >> 12; 147 + aidaw->data_addr = (u64) page_address(bv.bv_page); 148 148 aidaw++; 149 149 } 150 150 }
+2 -2
drivers/s390/block/scm_blk_cluster.c
··· 122 122 struct aidaw *aidaw = scmrq->aidaw; 123 123 struct msb *msb = &scmrq->aob->msb[0]; 124 124 struct req_iterator iter; 125 - struct bio_vec *bv; 125 + struct bio_vec bv; 126 126 int i = 0; 127 127 u64 addr; 128 128 ··· 163 163 i++; 164 164 } 165 165 rq_for_each_segment(bv, req, iter) { 166 - aidaw->data_addr = (u64) page_address(bv->bv_page); 166 + aidaw->data_addr = (u64) page_address(bv.bv_page); 167 167 aidaw++; 168 168 i++; 169 169 }
+10 -9
drivers/s390/block/xpram.c
··· 184 184 static void xpram_make_request(struct request_queue *q, struct bio *bio) 185 185 { 186 186 xpram_device_t *xdev = bio->bi_bdev->bd_disk->private_data; 187 - struct bio_vec *bvec; 187 + struct bio_vec bvec; 188 + struct bvec_iter iter; 188 189 unsigned int index; 189 190 unsigned long page_addr; 190 191 unsigned long bytes; 191 - int i; 192 192 193 - if ((bio->bi_sector & 7) != 0 || (bio->bi_size & 4095) != 0) 193 + if ((bio->bi_iter.bi_sector & 7) != 0 || 194 + (bio->bi_iter.bi_size & 4095) != 0) 194 195 /* Request is not page-aligned. */ 195 196 goto fail; 196 - if ((bio->bi_size >> 12) > xdev->size) 197 + if ((bio->bi_iter.bi_size >> 12) > xdev->size) 197 198 /* Request size is no page-aligned. */ 198 199 goto fail; 199 - if ((bio->bi_sector >> 3) > 0xffffffffU - xdev->offset) 200 + if ((bio->bi_iter.bi_sector >> 3) > 0xffffffffU - xdev->offset) 200 201 goto fail; 201 - index = (bio->bi_sector >> 3) + xdev->offset; 202 - bio_for_each_segment(bvec, bio, i) { 202 + index = (bio->bi_iter.bi_sector >> 3) + xdev->offset; 203 + bio_for_each_segment(bvec, bio, iter) { 203 204 page_addr = (unsigned long) 204 - kmap(bvec->bv_page) + bvec->bv_offset; 205 - bytes = bvec->bv_len; 205 + kmap(bvec.bv_page) + bvec.bv_offset; 206 + bytes = bvec.bv_len; 206 207 if ((page_addr & 4095) != 0 || (bytes & 4095) != 0) 207 208 /* More paranoia. */ 208 209 goto fail;
+4 -4
drivers/scsi/libsas/sas_expander.c
··· 2163 2163 } 2164 2164 2165 2165 /* do we need to support multiple segments? */ 2166 - if (bio_segments(req->bio) > 1 || bio_segments(rsp->bio) > 1) { 2167 - printk("%s: multiple segments req %u %u, rsp %u %u\n", 2168 - __func__, bio_segments(req->bio), blk_rq_bytes(req), 2169 - bio_segments(rsp->bio), blk_rq_bytes(rsp)); 2166 + if (bio_multiple_segments(req->bio) || 2167 + bio_multiple_segments(rsp->bio)) { 2168 + printk("%s: multiple segments req %u, rsp %u\n", 2169 + __func__, blk_rq_bytes(req), blk_rq_bytes(rsp)); 2170 2170 return -EINVAL; 2171 2171 } 2172 2172
+21 -20
drivers/scsi/mpt2sas/mpt2sas_transport.c
··· 1901 1901 struct MPT2SAS_ADAPTER *ioc = shost_priv(shost); 1902 1902 Mpi2SmpPassthroughRequest_t *mpi_request; 1903 1903 Mpi2SmpPassthroughReply_t *mpi_reply; 1904 - int rc, i; 1904 + int rc; 1905 1905 u16 smid; 1906 1906 u32 ioc_state; 1907 1907 unsigned long timeleft; ··· 1916 1916 void *pci_addr_out = NULL; 1917 1917 u16 wait_state_count; 1918 1918 struct request *rsp = req->next_rq; 1919 - struct bio_vec *bvec = NULL; 1919 + struct bio_vec bvec; 1920 + struct bvec_iter iter; 1920 1921 1921 1922 if (!rsp) { 1922 1923 printk(MPT2SAS_ERR_FMT "%s: the smp response space is " ··· 1943 1942 ioc->transport_cmds.status = MPT2_CMD_PENDING; 1944 1943 1945 1944 /* Check if the request is split across multiple segments */ 1946 - if (bio_segments(req->bio) > 1) { 1945 + if (bio_multiple_segments(req->bio)) { 1947 1946 u32 offset = 0; 1948 1947 1949 1948 /* Allocate memory and copy the request */ ··· 1956 1955 goto out; 1957 1956 } 1958 1957 1959 - bio_for_each_segment(bvec, req->bio, i) { 1958 + bio_for_each_segment(bvec, req->bio, iter) { 1960 1959 memcpy(pci_addr_out + offset, 1961 - page_address(bvec->bv_page) + bvec->bv_offset, 1962 - bvec->bv_len); 1963 - offset += bvec->bv_len; 1960 + page_address(bvec.bv_page) + bvec.bv_offset, 1961 + bvec.bv_len); 1962 + offset += bvec.bv_len; 1964 1963 } 1965 1964 } else { 1966 1965 dma_addr_out = pci_map_single(ioc->pdev, bio_data(req->bio), ··· 1975 1974 1976 1975 /* Check if the response needs to be populated across 1977 1976 * multiple segments */ 1978 - if (bio_segments(rsp->bio) > 1) { 1977 + if (bio_multiple_segments(rsp->bio)) { 1979 1978 pci_addr_in = pci_alloc_consistent(ioc->pdev, blk_rq_bytes(rsp), 1980 1979 &pci_dma_in); 1981 1980 if (!pci_addr_in) { ··· 2042 2041 sgl_flags = (MPI2_SGE_FLAGS_SIMPLE_ELEMENT | 2043 2042 MPI2_SGE_FLAGS_END_OF_BUFFER | MPI2_SGE_FLAGS_HOST_TO_IOC); 2044 2043 sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT; 2045 - if (bio_segments(req->bio) > 1) { 2044 + if (bio_multiple_segments(req->bio)) { 2046 2045 ioc->base_add_sg_single(psge, sgl_flags | 2047 2046 (blk_rq_bytes(req) - 4), pci_dma_out); 2048 2047 } else { ··· 2058 2057 MPI2_SGE_FLAGS_LAST_ELEMENT | MPI2_SGE_FLAGS_END_OF_BUFFER | 2059 2058 MPI2_SGE_FLAGS_END_OF_LIST); 2060 2059 sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT; 2061 - if (bio_segments(rsp->bio) > 1) { 2060 + if (bio_multiple_segments(rsp->bio)) { 2062 2061 ioc->base_add_sg_single(psge, sgl_flags | 2063 2062 (blk_rq_bytes(rsp) + 4), pci_dma_in); 2064 2063 } else { ··· 2103 2102 le16_to_cpu(mpi_reply->ResponseDataLength); 2104 2103 /* check if the resp needs to be copied from the allocated 2105 2104 * pci mem */ 2106 - if (bio_segments(rsp->bio) > 1) { 2105 + if (bio_multiple_segments(rsp->bio)) { 2107 2106 u32 offset = 0; 2108 2107 u32 bytes_to_copy = 2109 2108 le16_to_cpu(mpi_reply->ResponseDataLength); 2110 - bio_for_each_segment(bvec, rsp->bio, i) { 2111 - if (bytes_to_copy <= bvec->bv_len) { 2112 - memcpy(page_address(bvec->bv_page) + 2113 - bvec->bv_offset, pci_addr_in + 2109 + bio_for_each_segment(bvec, rsp->bio, iter) { 2110 + if (bytes_to_copy <= bvec.bv_len) { 2111 + memcpy(page_address(bvec.bv_page) + 2112 + bvec.bv_offset, pci_addr_in + 2114 2113 offset, bytes_to_copy); 2115 2114 break; 2116 2115 } else { 2117 - memcpy(page_address(bvec->bv_page) + 2118 - bvec->bv_offset, pci_addr_in + 2119 - offset, bvec->bv_len); 2120 - bytes_to_copy -= bvec->bv_len; 2116 + memcpy(page_address(bvec.bv_page) + 2117 + bvec.bv_offset, pci_addr_in + 2118 + offset, bvec.bv_len); 2119 + bytes_to_copy -= 
bvec.bv_len; 2121 2120 } 2122 - offset += bvec->bv_len; 2121 + offset += bvec.bv_len; 2123 2122 } 2124 2123 } 2125 2124 } else {
+20 -19
drivers/scsi/mpt3sas/mpt3sas_transport.c
··· 1884 1884 struct MPT3SAS_ADAPTER *ioc = shost_priv(shost); 1885 1885 Mpi2SmpPassthroughRequest_t *mpi_request; 1886 1886 Mpi2SmpPassthroughReply_t *mpi_reply; 1887 - int rc, i; 1887 + int rc; 1888 1888 u16 smid; 1889 1889 u32 ioc_state; 1890 1890 unsigned long timeleft; ··· 1898 1898 void *pci_addr_out = NULL; 1899 1899 u16 wait_state_count; 1900 1900 struct request *rsp = req->next_rq; 1901 - struct bio_vec *bvec = NULL; 1901 + struct bio_vec bvec; 1902 + struct bvec_iter iter; 1902 1903 1903 1904 if (!rsp) { 1904 1905 pr_err(MPT3SAS_FMT "%s: the smp response space is missing\n", ··· 1926 1925 ioc->transport_cmds.status = MPT3_CMD_PENDING; 1927 1926 1928 1927 /* Check if the request is split across multiple segments */ 1929 - if (req->bio->bi_vcnt > 1) { 1928 + if (bio_multiple_segments(req->bio)) { 1930 1929 u32 offset = 0; 1931 1930 1932 1931 /* Allocate memory and copy the request */ ··· 1939 1938 goto out; 1940 1939 } 1941 1940 1942 - bio_for_each_segment(bvec, req->bio, i) { 1941 + bio_for_each_segment(bvec, req->bio, iter) { 1943 1942 memcpy(pci_addr_out + offset, 1944 - page_address(bvec->bv_page) + bvec->bv_offset, 1945 - bvec->bv_len); 1946 - offset += bvec->bv_len; 1943 + page_address(bvec.bv_page) + bvec.bv_offset, 1944 + bvec.bv_len); 1945 + offset += bvec.bv_len; 1947 1946 } 1948 1947 } else { 1949 1948 dma_addr_out = pci_map_single(ioc->pdev, bio_data(req->bio), ··· 1958 1957 1959 1958 /* Check if the response needs to be populated across 1960 1959 * multiple segments */ 1961 - if (rsp->bio->bi_vcnt > 1) { 1960 + if (bio_multiple_segments(rsp->bio)) { 1962 1961 pci_addr_in = pci_alloc_consistent(ioc->pdev, blk_rq_bytes(rsp), 1963 1962 &pci_dma_in); 1964 1963 if (!pci_addr_in) { ··· 2019 2018 mpi_request->RequestDataLength = cpu_to_le16(blk_rq_bytes(req) - 4); 2020 2019 psge = &mpi_request->SGL; 2021 2020 2022 - if (req->bio->bi_vcnt > 1) 2021 + if (bio_multiple_segments(req->bio)) 2023 2022 ioc->build_sg(ioc, psge, pci_dma_out, (blk_rq_bytes(req) - 4), 2024 2023 pci_dma_in, (blk_rq_bytes(rsp) + 4)); 2025 2024 else ··· 2064 2063 2065 2064 /* check if the resp needs to be copied from the allocated 2066 2065 * pci mem */ 2067 - if (rsp->bio->bi_vcnt > 1) { 2066 + if (bio_multiple_segments(rsp->bio)) { 2068 2067 u32 offset = 0; 2069 2068 u32 bytes_to_copy = 2070 2069 le16_to_cpu(mpi_reply->ResponseDataLength); 2071 - bio_for_each_segment(bvec, rsp->bio, i) { 2072 - if (bytes_to_copy <= bvec->bv_len) { 2073 - memcpy(page_address(bvec->bv_page) + 2074 - bvec->bv_offset, pci_addr_in + 2070 + bio_for_each_segment(bvec, rsp->bio, iter) { 2071 + if (bytes_to_copy <= bvec.bv_len) { 2072 + memcpy(page_address(bvec.bv_page) + 2073 + bvec.bv_offset, pci_addr_in + 2075 2074 offset, bytes_to_copy); 2076 2075 break; 2077 2076 } else { 2078 - memcpy(page_address(bvec->bv_page) + 2079 - bvec->bv_offset, pci_addr_in + 2080 - offset, bvec->bv_len); 2081 - bytes_to_copy -= bvec->bv_len; 2077 + memcpy(page_address(bvec.bv_page) + 2078 + bvec.bv_offset, pci_addr_in + 2079 + offset, bvec.bv_len); 2080 + bytes_to_copy -= bvec.bv_len; 2082 2081 } 2083 - offset += bvec->bv_len; 2082 + offset += bvec.bv_len; 2084 2083 } 2085 2084 } 2086 2085 } else {
+1 -1
drivers/scsi/osd/osd_initiator.c
··· 731 731 732 732 bio->bi_rw &= ~REQ_WRITE; 733 733 or->in.bio = bio; 734 - or->in.total_bytes = bio->bi_size; 734 + or->in.total_bytes = bio->bi_iter.bi_size; 735 735 return 0; 736 736 } 737 737
+1 -1
drivers/scsi/sd.c
··· 801 801 if (sdkp->device->no_write_same) 802 802 return BLKPREP_KILL; 803 803 804 - BUG_ON(bio_offset(bio) || bio_iovec(bio)->bv_len != sdp->sector_size); 804 + BUG_ON(bio_offset(bio) || bio_iovec(bio).bv_len != sdp->sector_size); 805 805 806 806 sector >>= ilog2(sdp->sector_size) - 9; 807 807 nr_sectors >>= ilog2(sdp->sector_size) - 9;
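
The sd.c change looks cosmetic but reflects a real API shift: bio_iovec() now yields the first bio_vec as a value computed from the bio's iterator rather than a pointer into bi_io_vec, so the segment it reports always matches the bio's current position. Purely illustrative use, reusing sdp->sector_size from the hunk above:

struct bio_vec first = bio_iovec(bio);  /* by value, honours bio->bi_iter */

if (first.bv_len != sdp->sector_size)
        return BLKPREP_KILL;            /* mirrors the BUG_ON() above */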
+16 -14
drivers/scsi/sd_dif.c
··· 365 365 struct bio *bio; 366 366 struct scsi_disk *sdkp; 367 367 struct sd_dif_tuple *sdt; 368 - unsigned int i, j; 369 368 u32 phys, virt; 370 369 371 370 sdkp = rq->bio->bi_bdev->bd_disk->private_data; ··· 375 376 phys = hw_sector & 0xffffffff; 376 377 377 378 __rq_for_each_bio(bio, rq) { 378 - struct bio_vec *iv; 379 + struct bio_vec iv; 380 + struct bvec_iter iter; 381 + unsigned int j; 379 382 380 383 /* Already remapped? */ 381 384 if (bio_flagged(bio, BIO_MAPPED_INTEGRITY)) 382 385 break; 383 386 384 - virt = bio->bi_integrity->bip_sector & 0xffffffff; 387 + virt = bio->bi_integrity->bip_iter.bi_sector & 0xffffffff; 385 388 386 - bip_for_each_vec(iv, bio->bi_integrity, i) { 387 - sdt = kmap_atomic(iv->bv_page) 388 - + iv->bv_offset; 389 + bip_for_each_vec(iv, bio->bi_integrity, iter) { 390 + sdt = kmap_atomic(iv.bv_page) 391 + + iv.bv_offset; 389 392 390 - for (j = 0 ; j < iv->bv_len ; j += tuple_sz, sdt++) { 393 + for (j = 0; j < iv.bv_len; j += tuple_sz, sdt++) { 391 394 392 395 if (be32_to_cpu(sdt->ref_tag) == virt) 393 396 sdt->ref_tag = cpu_to_be32(phys); ··· 415 414 struct scsi_disk *sdkp; 416 415 struct bio *bio; 417 416 struct sd_dif_tuple *sdt; 418 - unsigned int i, j, sectors, sector_sz; 417 + unsigned int j, sectors, sector_sz; 419 418 u32 phys, virt; 420 419 421 420 sdkp = scsi_disk(scmd->request->rq_disk); ··· 431 430 phys >>= 3; 432 431 433 432 __rq_for_each_bio(bio, scmd->request) { 434 - struct bio_vec *iv; 433 + struct bio_vec iv; 434 + struct bvec_iter iter; 435 435 436 - virt = bio->bi_integrity->bip_sector & 0xffffffff; 436 + virt = bio->bi_integrity->bip_iter.bi_sector & 0xffffffff; 437 437 438 - bip_for_each_vec(iv, bio->bi_integrity, i) { 439 - sdt = kmap_atomic(iv->bv_page) 440 - + iv->bv_offset; 438 + bip_for_each_vec(iv, bio->bi_integrity, iter) { 439 + sdt = kmap_atomic(iv.bv_page) 440 + + iv.bv_offset; 441 441 442 - for (j = 0 ; j < iv->bv_len ; j += tuple_sz, sdt++) { 442 + for (j = 0; j < iv.bv_len; j += tuple_sz, sdt++) { 443 443 444 444 if (sectors == 0) { 445 445 kunmap_atomic(sdt);
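
sd_dif.c gets the same conversion on the integrity side: struct bio_integrity_payload now embeds its own bvec_iter (bip_iter), so bip_sector becomes bip_iter.bi_sector and bip_for_each_vec() hands back protection-buffer segments by value, mirroring bio_for_each_segment(). A hedged sketch of the access pattern used in the remap loops above:

struct bio_integrity_payload *bip = bio->bi_integrity;
struct bio_vec iv;
struct bvec_iter iter;
u32 virt = bip->bip_iter.bi_sector & 0xffffffff;

bip_for_each_vec(iv, bip, iter) {
        void *kaddr = kmap_atomic(iv.bv_page);
        /* walk the tuples in kaddr + iv.bv_offset .. + iv.bv_len,
         * remapping ref tags from virt to the physical LBA */
        kunmap_atomic(kaddr);
}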
+14 -12
drivers/staging/lustre/lustre/llite/lloop.c
··· 194 194 struct cl_object *obj = ll_i2info(inode)->lli_clob; 195 195 pgoff_t offset; 196 196 int ret; 197 - int i; 198 197 int rw; 199 198 obd_count page_count = 0; 200 - struct bio_vec *bvec; 199 + struct bio_vec bvec; 200 + struct bvec_iter iter; 201 201 struct bio *bio; 202 202 ssize_t bytes; 203 203 ··· 220 220 for (bio = head; bio != NULL; bio = bio->bi_next) { 221 221 LASSERT(rw == bio->bi_rw); 222 222 223 - offset = (pgoff_t)(bio->bi_sector << 9) + lo->lo_offset; 224 - bio_for_each_segment(bvec, bio, i) { 225 - BUG_ON(bvec->bv_offset != 0); 226 - BUG_ON(bvec->bv_len != PAGE_CACHE_SIZE); 223 + offset = (pgoff_t)(bio->bi_iter.bi_sector << 9) + lo->lo_offset; 224 + bio_for_each_segment(bvec, bio, iter) { 225 + BUG_ON(bvec.bv_offset != 0); 226 + BUG_ON(bvec.bv_len != PAGE_CACHE_SIZE); 227 227 228 - pages[page_count] = bvec->bv_page; 228 + pages[page_count] = bvec.bv_page; 229 229 offsets[page_count] = offset; 230 230 page_count++; 231 - offset += bvec->bv_len; 231 + offset += bvec.bv_len; 232 232 } 233 233 LASSERT(page_count <= LLOOP_MAX_SEGMENTS); 234 234 } ··· 313 313 bio = &lo->lo_bio; 314 314 while (*bio && (*bio)->bi_rw == rw) { 315 315 CDEBUG(D_INFO, "bio sector %llu size %u count %u vcnt%u \n", 316 - (unsigned long long)(*bio)->bi_sector, (*bio)->bi_size, 316 + (unsigned long long)(*bio)->bi_iter.bi_sector, 317 + (*bio)->bi_iter.bi_size, 317 318 page_count, (*bio)->bi_vcnt); 318 319 if (page_count + (*bio)->bi_vcnt > LLOOP_MAX_SEGMENTS) 319 320 break; ··· 348 347 goto err; 349 348 350 349 CDEBUG(D_INFO, "submit bio sector %llu size %u\n", 351 - (unsigned long long)old_bio->bi_sector, old_bio->bi_size); 350 + (unsigned long long)old_bio->bi_iter.bi_sector, 351 + old_bio->bi_iter.bi_size); 352 352 353 353 spin_lock_irq(&lo->lo_lock); 354 354 inactive = (lo->lo_state != LLOOP_BOUND); ··· 369 367 loop_add_bio(lo, old_bio); 370 368 return; 371 369 err: 372 - cfs_bio_io_error(old_bio, old_bio->bi_size); 370 + cfs_bio_io_error(old_bio, old_bio->bi_iter.bi_size); 373 371 } 374 372 375 373 ··· 380 378 while (bio) { 381 379 struct bio *tmp = bio->bi_next; 382 380 bio->bi_next = NULL; 383 - cfs_bio_endio(bio, bio->bi_size, ret); 381 + cfs_bio_endio(bio, bio->bi_iter.bi_size, ret); 384 382 bio = tmp; 385 383 } 386 384 }
+18 -15
drivers/staging/zram/zram_drv.c
··· 171 171 u64 start, end, bound; 172 172 173 173 /* unaligned request */ 174 - if (unlikely(bio->bi_sector & (ZRAM_SECTOR_PER_LOGICAL_BLOCK - 1))) 174 + if (unlikely(bio->bi_iter.bi_sector & 175 + (ZRAM_SECTOR_PER_LOGICAL_BLOCK - 1))) 175 176 return 0; 176 - if (unlikely(bio->bi_size & (ZRAM_LOGICAL_BLOCK_SIZE - 1))) 177 + if (unlikely(bio->bi_iter.bi_size & (ZRAM_LOGICAL_BLOCK_SIZE - 1))) 177 178 return 0; 178 179 179 - start = bio->bi_sector; 180 - end = start + (bio->bi_size >> SECTOR_SHIFT); 180 + start = bio->bi_iter.bi_sector; 181 + end = start + (bio->bi_iter.bi_size >> SECTOR_SHIFT); 181 182 bound = zram->disksize >> SECTOR_SHIFT; 182 183 /* out of range range */ 183 184 if (unlikely(start >= bound || end > bound || start > end)) ··· 681 680 682 681 static void __zram_make_request(struct zram *zram, struct bio *bio, int rw) 683 682 { 684 - int i, offset; 683 + int offset; 685 684 u32 index; 686 - struct bio_vec *bvec; 685 + struct bio_vec bvec; 686 + struct bvec_iter iter; 687 687 688 688 switch (rw) { 689 689 case READ: ··· 695 693 break; 696 694 } 697 695 698 - index = bio->bi_sector >> SECTORS_PER_PAGE_SHIFT; 699 - offset = (bio->bi_sector & (SECTORS_PER_PAGE - 1)) << SECTOR_SHIFT; 696 + index = bio->bi_iter.bi_sector >> SECTORS_PER_PAGE_SHIFT; 697 + offset = (bio->bi_iter.bi_sector & 698 + (SECTORS_PER_PAGE - 1)) << SECTOR_SHIFT; 700 699 701 - bio_for_each_segment(bvec, bio, i) { 700 + bio_for_each_segment(bvec, bio, iter) { 702 701 int max_transfer_size = PAGE_SIZE - offset; 703 702 704 - if (bvec->bv_len > max_transfer_size) { 703 + if (bvec.bv_len > max_transfer_size) { 705 704 /* 706 705 * zram_bvec_rw() can only make operation on a single 707 706 * zram page. Split the bio vector. 708 707 */ 709 708 struct bio_vec bv; 710 709 711 - bv.bv_page = bvec->bv_page; 710 + bv.bv_page = bvec.bv_page; 712 711 bv.bv_len = max_transfer_size; 713 - bv.bv_offset = bvec->bv_offset; 712 + bv.bv_offset = bvec.bv_offset; 714 713 715 714 if (zram_bvec_rw(zram, &bv, index, offset, bio, rw) < 0) 716 715 goto out; 717 716 718 - bv.bv_len = bvec->bv_len - max_transfer_size; 717 + bv.bv_len = bvec.bv_len - max_transfer_size; 719 718 bv.bv_offset += max_transfer_size; 720 719 if (zram_bvec_rw(zram, &bv, index+1, 0, bio, rw) < 0) 721 720 goto out; 722 721 } else 723 - if (zram_bvec_rw(zram, bvec, index, offset, bio, rw) 722 + if (zram_bvec_rw(zram, &bvec, index, offset, bio, rw) 724 723 < 0) 725 724 goto out; 726 725 727 - update_position(&index, &offset, bvec); 726 + update_position(&index, &offset, &bvec); 728 727 } 729 728 730 729 set_bit(BIO_UPTODATE, &bio->bi_flags);
+1 -1
drivers/target/target_core_iblock.c
··· 319 319 bio->bi_bdev = ib_dev->ibd_bd; 320 320 bio->bi_private = cmd; 321 321 bio->bi_end_io = &iblock_bio_done; 322 - bio->bi_sector = lba; 322 + bio->bi_iter.bi_sector = lba; 323 323 324 324 return bio; 325 325 }
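On the submission side the change is equally mechanical: the target sector is written to bio->bi_iter.bi_sector instead of the old top-level bi_sector, as the one-line iblock hunk shows. A minimal sketch of a synchronous single-page read under the new layout; the helper names, the GFP choice and the completion-based wait are illustrative, not from the patch:

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/completion.h>

static void example_end_io(struct bio *bio, int err)
{
        complete(bio->bi_private);
}

/* Read one page at @sector from @bdev; returns 0 on success. */
static int example_read_page(struct block_device *bdev, sector_t sector,
                             struct page *page)
{
        DECLARE_COMPLETION_ONSTACK(done);
        struct bio *bio = bio_alloc(GFP_NOIO, 1);
        int err;

        bio->bi_bdev = bdev;
        bio->bi_iter.bi_sector = sector;        /* was bio->bi_sector */
        bio->bi_end_io = example_end_io;
        bio->bi_private = &done;
        bio_add_page(bio, page, PAGE_SIZE, 0);

        submit_bio(READ, bio);
        wait_for_completion(&done);

        err = test_bit(BIO_UPTODATE, &bio->bi_flags) ? 0 : -EIO;
        bio_put(bio);
        return err;
}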
+36 -134
fs/bio-integrity.c
··· 134 134 return 0; 135 135 } 136 136 137 - iv = bip_vec_idx(bip, bip->bip_vcnt); 138 - BUG_ON(iv == NULL); 137 + iv = bip->bip_vec + bip->bip_vcnt; 139 138 140 139 iv->bv_page = page; 141 140 iv->bv_len = len; ··· 202 203 return sectors; 203 204 } 204 205 206 + static inline unsigned int bio_integrity_bytes(struct blk_integrity *bi, 207 + unsigned int sectors) 208 + { 209 + return bio_integrity_hw_sectors(bi, sectors) * bi->tuple_size; 210 + } 211 + 205 212 /** 206 213 * bio_integrity_tag_size - Retrieve integrity tag space 207 214 * @bio: bio to inspect ··· 220 215 { 221 216 struct blk_integrity *bi = bdev_get_integrity(bio->bi_bdev); 222 217 223 - BUG_ON(bio->bi_size == 0); 218 + BUG_ON(bio->bi_iter.bi_size == 0); 224 219 225 - return bi->tag_size * (bio->bi_size / bi->sector_size); 220 + return bi->tag_size * (bio->bi_iter.bi_size / bi->sector_size); 226 221 } 227 222 EXPORT_SYMBOL(bio_integrity_tag_size); 228 223 ··· 240 235 nr_sectors = bio_integrity_hw_sectors(bi, 241 236 DIV_ROUND_UP(len, bi->tag_size)); 242 237 243 - if (nr_sectors * bi->tuple_size > bip->bip_size) { 244 - printk(KERN_ERR "%s: tag too big for bio: %u > %u\n", 245 - __func__, nr_sectors * bi->tuple_size, bip->bip_size); 238 + if (nr_sectors * bi->tuple_size > bip->bip_iter.bi_size) { 239 + printk(KERN_ERR "%s: tag too big for bio: %u > %u\n", __func__, 240 + nr_sectors * bi->tuple_size, bip->bip_iter.bi_size); 246 241 return -1; 247 242 } 248 243 ··· 304 299 { 305 300 struct blk_integrity *bi = bdev_get_integrity(bio->bi_bdev); 306 301 struct blk_integrity_exchg bix; 307 - struct bio_vec *bv; 308 - sector_t sector = bio->bi_sector; 309 - unsigned int i, sectors, total; 302 + struct bio_vec bv; 303 + struct bvec_iter iter; 304 + sector_t sector = bio->bi_iter.bi_sector; 305 + unsigned int sectors, total; 310 306 void *prot_buf = bio->bi_integrity->bip_buf; 311 307 312 308 total = 0; 313 309 bix.disk_name = bio->bi_bdev->bd_disk->disk_name; 314 310 bix.sector_size = bi->sector_size; 315 311 316 - bio_for_each_segment(bv, bio, i) { 317 - void *kaddr = kmap_atomic(bv->bv_page); 318 - bix.data_buf = kaddr + bv->bv_offset; 319 - bix.data_size = bv->bv_len; 312 + bio_for_each_segment(bv, bio, iter) { 313 + void *kaddr = kmap_atomic(bv.bv_page); 314 + bix.data_buf = kaddr + bv.bv_offset; 315 + bix.data_size = bv.bv_len; 320 316 bix.prot_buf = prot_buf; 321 317 bix.sector = sector; 322 318 323 319 bi->generate_fn(&bix); 324 320 325 - sectors = bv->bv_len / bi->sector_size; 321 + sectors = bv.bv_len / bi->sector_size; 326 322 sector += sectors; 327 323 prot_buf += sectors * bi->tuple_size; 328 324 total += sectors * bi->tuple_size; 329 - BUG_ON(total > bio->bi_integrity->bip_size); 325 + BUG_ON(total > bio->bi_integrity->bip_iter.bi_size); 330 326 331 327 kunmap_atomic(kaddr); 332 328 } ··· 392 386 393 387 bip->bip_owns_buf = 1; 394 388 bip->bip_buf = buf; 395 - bip->bip_size = len; 396 - bip->bip_sector = bio->bi_sector; 389 + bip->bip_iter.bi_size = len; 390 + bip->bip_iter.bi_sector = bio->bi_iter.bi_sector; 397 391 398 392 /* Map it */ 399 393 offset = offset_in_page(buf); ··· 448 442 struct blk_integrity *bi = bdev_get_integrity(bio->bi_bdev); 449 443 struct blk_integrity_exchg bix; 450 444 struct bio_vec *bv; 451 - sector_t sector = bio->bi_integrity->bip_sector; 452 - unsigned int i, sectors, total, ret; 445 + sector_t sector = bio->bi_integrity->bip_iter.bi_sector; 446 + unsigned int sectors, total, ret; 453 447 void *prot_buf = bio->bi_integrity->bip_buf; 448 + int i; 454 449 455 450 ret = total = 0; 456 451 
bix.disk_name = bio->bi_bdev->bd_disk->disk_name; 457 452 bix.sector_size = bi->sector_size; 458 453 459 - bio_for_each_segment(bv, bio, i) { 454 + bio_for_each_segment_all(bv, bio, i) { 460 455 void *kaddr = kmap_atomic(bv->bv_page); 456 + 461 457 bix.data_buf = kaddr + bv->bv_offset; 462 458 bix.data_size = bv->bv_len; 463 459 bix.prot_buf = prot_buf; ··· 476 468 sector += sectors; 477 469 prot_buf += sectors * bi->tuple_size; 478 470 total += sectors * bi->tuple_size; 479 - BUG_ON(total > bio->bi_integrity->bip_size); 471 + BUG_ON(total > bio->bi_integrity->bip_iter.bi_size); 480 472 481 473 kunmap_atomic(kaddr); 482 474 } ··· 503 495 504 496 /* Restore original bio completion handler */ 505 497 bio->bi_end_io = bip->bip_end_io; 506 - bio_endio(bio, error); 498 + bio_endio_nodec(bio, error); 507 499 } 508 500 509 501 /** ··· 541 533 EXPORT_SYMBOL(bio_integrity_endio); 542 534 543 535 /** 544 - * bio_integrity_mark_head - Advance bip_vec skip bytes 545 - * @bip: Integrity vector to advance 546 - * @skip: Number of bytes to advance it 547 - */ 548 - void bio_integrity_mark_head(struct bio_integrity_payload *bip, 549 - unsigned int skip) 550 - { 551 - struct bio_vec *iv; 552 - unsigned int i; 553 - 554 - bip_for_each_vec(iv, bip, i) { 555 - if (skip == 0) { 556 - bip->bip_idx = i; 557 - return; 558 - } else if (skip >= iv->bv_len) { 559 - skip -= iv->bv_len; 560 - } else { /* skip < iv->bv_len) */ 561 - iv->bv_offset += skip; 562 - iv->bv_len -= skip; 563 - bip->bip_idx = i; 564 - return; 565 - } 566 - } 567 - } 568 - 569 - /** 570 - * bio_integrity_mark_tail - Truncate bip_vec to be len bytes long 571 - * @bip: Integrity vector to truncate 572 - * @len: New length of integrity vector 573 - */ 574 - void bio_integrity_mark_tail(struct bio_integrity_payload *bip, 575 - unsigned int len) 576 - { 577 - struct bio_vec *iv; 578 - unsigned int i; 579 - 580 - bip_for_each_vec(iv, bip, i) { 581 - if (len == 0) { 582 - bip->bip_vcnt = i; 583 - return; 584 - } else if (len >= iv->bv_len) { 585 - len -= iv->bv_len; 586 - } else { /* len < iv->bv_len) */ 587 - iv->bv_len = len; 588 - len = 0; 589 - } 590 - } 591 - } 592 - 593 - /** 594 536 * bio_integrity_advance - Advance integrity vector 595 537 * @bio: bio whose integrity vector to update 596 538 * @bytes_done: number of data bytes that have been completed ··· 553 595 { 554 596 struct bio_integrity_payload *bip = bio->bi_integrity; 555 597 struct blk_integrity *bi = bdev_get_integrity(bio->bi_bdev); 556 - unsigned int nr_sectors; 598 + unsigned bytes = bio_integrity_bytes(bi, bytes_done >> 9); 557 599 558 - BUG_ON(bip == NULL); 559 - BUG_ON(bi == NULL); 560 - 561 - nr_sectors = bio_integrity_hw_sectors(bi, bytes_done >> 9); 562 - bio_integrity_mark_head(bip, nr_sectors * bi->tuple_size); 600 + bvec_iter_advance(bip->bip_vec, &bip->bip_iter, bytes); 563 601 } 564 602 EXPORT_SYMBOL(bio_integrity_advance); 565 603 ··· 575 621 { 576 622 struct bio_integrity_payload *bip = bio->bi_integrity; 577 623 struct blk_integrity *bi = bdev_get_integrity(bio->bi_bdev); 578 - unsigned int nr_sectors; 579 624 580 - BUG_ON(bip == NULL); 581 - BUG_ON(bi == NULL); 582 - BUG_ON(!bio_flagged(bio, BIO_CLONED)); 583 - 584 - nr_sectors = bio_integrity_hw_sectors(bi, sectors); 585 - bip->bip_sector = bip->bip_sector + offset; 586 - bio_integrity_mark_head(bip, offset * bi->tuple_size); 587 - bio_integrity_mark_tail(bip, sectors * bi->tuple_size); 625 + bio_integrity_advance(bio, offset << 9); 626 + bip->bip_iter.bi_size = bio_integrity_bytes(bi, sectors); 588 627 } 589 628 
EXPORT_SYMBOL(bio_integrity_trim); 590 - 591 - /** 592 - * bio_integrity_split - Split integrity metadata 593 - * @bio: Protected bio 594 - * @bp: Resulting bio_pair 595 - * @sectors: Offset 596 - * 597 - * Description: Splits an integrity page into a bio_pair. 598 - */ 599 - void bio_integrity_split(struct bio *bio, struct bio_pair *bp, int sectors) 600 - { 601 - struct blk_integrity *bi; 602 - struct bio_integrity_payload *bip = bio->bi_integrity; 603 - unsigned int nr_sectors; 604 - 605 - if (bio_integrity(bio) == 0) 606 - return; 607 - 608 - bi = bdev_get_integrity(bio->bi_bdev); 609 - BUG_ON(bi == NULL); 610 - BUG_ON(bip->bip_vcnt != 1); 611 - 612 - nr_sectors = bio_integrity_hw_sectors(bi, sectors); 613 - 614 - bp->bio1.bi_integrity = &bp->bip1; 615 - bp->bio2.bi_integrity = &bp->bip2; 616 - 617 - bp->iv1 = bip->bip_vec[bip->bip_idx]; 618 - bp->iv2 = bip->bip_vec[bip->bip_idx]; 619 - 620 - bp->bip1.bip_vec = &bp->iv1; 621 - bp->bip2.bip_vec = &bp->iv2; 622 - 623 - bp->iv1.bv_len = sectors * bi->tuple_size; 624 - bp->iv2.bv_offset += sectors * bi->tuple_size; 625 - bp->iv2.bv_len -= sectors * bi->tuple_size; 626 - 627 - bp->bip1.bip_sector = bio->bi_integrity->bip_sector; 628 - bp->bip2.bip_sector = bio->bi_integrity->bip_sector + nr_sectors; 629 - 630 - bp->bip1.bip_vcnt = bp->bip2.bip_vcnt = 1; 631 - bp->bip1.bip_idx = bp->bip2.bip_idx = 0; 632 - } 633 - EXPORT_SYMBOL(bio_integrity_split); 634 629 635 630 /** 636 631 * bio_integrity_clone - Callback for cloning bios with integrity metadata ··· 605 702 memcpy(bip->bip_vec, bip_src->bip_vec, 606 703 bip_src->bip_vcnt * sizeof(struct bio_vec)); 607 704 608 - bip->bip_sector = bip_src->bip_sector; 609 705 bip->bip_vcnt = bip_src->bip_vcnt; 610 - bip->bip_idx = bip_src->bip_idx; 706 + bip->bip_iter = bip_src->bip_iter; 611 707 612 708 return 0; 613 709 }
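With bip_sector, bip_size and bip_idx folded into a single struct bvec_iter (bip_iter), the hand-rolled mark_head/mark_tail/split helpers above become unnecessary: advancing or trimming the integrity payload is now ordinary iterator arithmetic, and the data-side bio_advance() keeps the two cursors in step by calling bio_integrity_advance() internally. A small sketch of that coupling, assuming @done is sector-aligned and no larger than the remaining I/O (names are illustrative):

#include <linux/bio.h>

/* Skip the first @done bytes of @bio; the integrity iterator follows along. */
static void example_skip_done_bytes(struct bio *bio, unsigned int done)
{
        bio_advance(bio, done);

        pr_debug("data bytes left %u, integrity bytes left %u\n",
                 bio->bi_iter.bi_size,
                 bio_integrity(bio) ? bio->bi_integrity->bip_iter.bi_size : 0);
}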
+232 -274
fs/bio.c
··· 38 38 */ 39 39 #define BIO_INLINE_VECS 4 40 40 41 - static mempool_t *bio_split_pool __read_mostly; 42 - 43 41 /* 44 42 * if you change this list, also change bvec_alloc or things will 45 43 * break badly! cannot be bigger than what you can fit into an ··· 271 273 { 272 274 memset(bio, 0, sizeof(*bio)); 273 275 bio->bi_flags = 1 << BIO_UPTODATE; 276 + atomic_set(&bio->bi_remaining, 1); 274 277 atomic_set(&bio->bi_cnt, 1); 275 278 } 276 279 EXPORT_SYMBOL(bio_init); ··· 294 295 295 296 memset(bio, 0, BIO_RESET_BYTES); 296 297 bio->bi_flags = flags|(1 << BIO_UPTODATE); 298 + atomic_set(&bio->bi_remaining, 1); 297 299 } 298 300 EXPORT_SYMBOL(bio_reset); 301 + 302 + static void bio_chain_endio(struct bio *bio, int error) 303 + { 304 + bio_endio(bio->bi_private, error); 305 + bio_put(bio); 306 + } 307 + 308 + /** 309 + * bio_chain - chain bio completions 310 + * 311 + * The caller won't have a bi_end_io called when @bio completes - instead, 312 + * @parent's bi_end_io won't be called until both @parent and @bio have 313 + * completed; the chained bio will also be freed when it completes. 314 + * 315 + * The caller must not set bi_private or bi_end_io in @bio. 316 + */ 317 + void bio_chain(struct bio *bio, struct bio *parent) 318 + { 319 + BUG_ON(bio->bi_private || bio->bi_end_io); 320 + 321 + bio->bi_private = parent; 322 + bio->bi_end_io = bio_chain_endio; 323 + atomic_inc(&parent->bi_remaining); 324 + } 325 + EXPORT_SYMBOL(bio_chain); 299 326 300 327 static void bio_alloc_rescue(struct work_struct *work) 301 328 { ··· 498 473 void zero_fill_bio(struct bio *bio) 499 474 { 500 475 unsigned long flags; 501 - struct bio_vec *bv; 502 - int i; 476 + struct bio_vec bv; 477 + struct bvec_iter iter; 503 478 504 - bio_for_each_segment(bv, bio, i) { 505 - char *data = bvec_kmap_irq(bv, &flags); 506 - memset(data, 0, bv->bv_len); 507 - flush_dcache_page(bv->bv_page); 479 + bio_for_each_segment(bv, bio, iter) { 480 + char *data = bvec_kmap_irq(&bv, &flags); 481 + memset(data, 0, bv.bv_len); 482 + flush_dcache_page(bv.bv_page); 508 483 bvec_kunmap_irq(data, &flags); 509 484 } 510 485 } ··· 540 515 EXPORT_SYMBOL(bio_phys_segments); 541 516 542 517 /** 543 - * __bio_clone - clone a bio 518 + * __bio_clone_fast - clone a bio that shares the original bio's biovec 544 519 * @bio: destination bio 545 520 * @bio_src: bio to clone 546 521 * 547 522 * Clone a &bio. Caller will own the returned bio, but not 548 523 * the actual data it points to. Reference count of returned 549 524 * bio will be one. 525 + * 526 + * Caller must ensure that @bio_src is not freed before @bio. 
550 527 */ 551 - void __bio_clone(struct bio *bio, struct bio *bio_src) 528 + void __bio_clone_fast(struct bio *bio, struct bio *bio_src) 552 529 { 553 - memcpy(bio->bi_io_vec, bio_src->bi_io_vec, 554 - bio_src->bi_max_vecs * sizeof(struct bio_vec)); 530 + BUG_ON(bio->bi_pool && BIO_POOL_IDX(bio) != BIO_POOL_NONE); 555 531 556 532 /* 557 533 * most users will be overriding ->bi_bdev with a new target, 558 534 * so we don't set nor calculate new physical/hw segment counts here 559 535 */ 560 - bio->bi_sector = bio_src->bi_sector; 561 536 bio->bi_bdev = bio_src->bi_bdev; 562 537 bio->bi_flags |= 1 << BIO_CLONED; 563 538 bio->bi_rw = bio_src->bi_rw; 564 - bio->bi_vcnt = bio_src->bi_vcnt; 565 - bio->bi_size = bio_src->bi_size; 566 - bio->bi_idx = bio_src->bi_idx; 539 + bio->bi_iter = bio_src->bi_iter; 540 + bio->bi_io_vec = bio_src->bi_io_vec; 567 541 } 568 - EXPORT_SYMBOL(__bio_clone); 542 + EXPORT_SYMBOL(__bio_clone_fast); 569 543 570 544 /** 571 - * bio_clone_bioset - clone a bio 545 + * bio_clone_fast - clone a bio that shares the original bio's biovec 572 546 * @bio: bio to clone 573 547 * @gfp_mask: allocation priority 574 548 * @bs: bio_set to allocate from 575 549 * 576 - * Like __bio_clone, only also allocates the returned bio 550 + * Like __bio_clone_fast, only also allocates the returned bio 577 551 */ 578 - struct bio *bio_clone_bioset(struct bio *bio, gfp_t gfp_mask, 579 - struct bio_set *bs) 552 + struct bio *bio_clone_fast(struct bio *bio, gfp_t gfp_mask, struct bio_set *bs) 580 553 { 581 554 struct bio *b; 582 555 583 - b = bio_alloc_bioset(gfp_mask, bio->bi_max_vecs, bs); 556 + b = bio_alloc_bioset(gfp_mask, 0, bs); 584 557 if (!b) 585 558 return NULL; 586 559 587 - __bio_clone(b, bio); 560 + __bio_clone_fast(b, bio); 588 561 589 562 if (bio_integrity(bio)) { 590 563 int ret; ··· 596 573 } 597 574 598 575 return b; 576 + } 577 + EXPORT_SYMBOL(bio_clone_fast); 578 + 579 + /** 580 + * bio_clone_bioset - clone a bio 581 + * @bio_src: bio to clone 582 + * @gfp_mask: allocation priority 583 + * @bs: bio_set to allocate from 584 + * 585 + * Clone bio. Caller will own the returned bio, but not the actual data it 586 + * points to. Reference count of returned bio will be one. 587 + */ 588 + struct bio *bio_clone_bioset(struct bio *bio_src, gfp_t gfp_mask, 589 + struct bio_set *bs) 590 + { 591 + unsigned nr_iovecs = 0; 592 + struct bvec_iter iter; 593 + struct bio_vec bv; 594 + struct bio *bio; 595 + 596 + /* 597 + * Pre immutable biovecs, __bio_clone() used to just do a memcpy from 598 + * bio_src->bi_io_vec to bio->bi_io_vec. 599 + * 600 + * We can't do that anymore, because: 601 + * 602 + * - The point of cloning the biovec is to produce a bio with a biovec 603 + * the caller can modify: bi_idx and bi_bvec_done should be 0. 604 + * 605 + * - The original bio could've had more than BIO_MAX_PAGES biovecs; if 606 + * we tried to clone the whole thing bio_alloc_bioset() would fail. 607 + * But the clone should succeed as long as the number of biovecs we 608 + * actually need to allocate is fewer than BIO_MAX_PAGES. 609 + * 610 + * - Lastly, bi_vcnt should not be looked at or relied upon by code 611 + * that does not own the bio - reason being drivers don't use it for 612 + * iterating over the biovec anymore, so expecting it to be kept up 613 + * to date (i.e. for clones that share the parent biovec) is just 614 + * asking for trouble and would force extra work on 615 + * __bio_clone_fast() anyways. 
616 + */ 617 + 618 + bio_for_each_segment(bv, bio_src, iter) 619 + nr_iovecs++; 620 + 621 + bio = bio_alloc_bioset(gfp_mask, nr_iovecs, bs); 622 + if (!bio) 623 + return NULL; 624 + 625 + bio->bi_bdev = bio_src->bi_bdev; 626 + bio->bi_rw = bio_src->bi_rw; 627 + bio->bi_iter.bi_sector = bio_src->bi_iter.bi_sector; 628 + bio->bi_iter.bi_size = bio_src->bi_iter.bi_size; 629 + 630 + bio_for_each_segment(bv, bio_src, iter) 631 + bio->bi_io_vec[bio->bi_vcnt++] = bv; 632 + 633 + if (bio_integrity(bio_src)) { 634 + int ret; 635 + 636 + ret = bio_integrity_clone(bio, bio_src, gfp_mask); 637 + if (ret < 0) { 638 + bio_put(bio); 639 + return NULL; 640 + } 641 + } 642 + 643 + return bio; 599 644 } 600 645 EXPORT_SYMBOL(bio_clone_bioset); 601 646 ··· 703 612 if (unlikely(bio_flagged(bio, BIO_CLONED))) 704 613 return 0; 705 614 706 - if (((bio->bi_size + len) >> 9) > max_sectors) 615 + if (((bio->bi_iter.bi_size + len) >> 9) > max_sectors) 707 616 return 0; 708 617 709 618 /* ··· 726 635 simulate merging updated prev_bvec 727 636 as new bvec. */ 728 637 .bi_bdev = bio->bi_bdev, 729 - .bi_sector = bio->bi_sector, 730 - .bi_size = bio->bi_size - prev_bv_len, 638 + .bi_sector = bio->bi_iter.bi_sector, 639 + .bi_size = bio->bi_iter.bi_size - 640 + prev_bv_len, 731 641 .bi_rw = bio->bi_rw, 732 642 }; 733 643 ··· 776 684 if (q->merge_bvec_fn) { 777 685 struct bvec_merge_data bvm = { 778 686 .bi_bdev = bio->bi_bdev, 779 - .bi_sector = bio->bi_sector, 780 - .bi_size = bio->bi_size, 687 + .bi_sector = bio->bi_iter.bi_sector, 688 + .bi_size = bio->bi_iter.bi_size, 781 689 .bi_rw = bio->bi_rw, 782 690 }; 783 691 ··· 800 708 bio->bi_vcnt++; 801 709 bio->bi_phys_segments++; 802 710 done: 803 - bio->bi_size += len; 711 + bio->bi_iter.bi_size += len; 804 712 return len; 805 713 } 806 714 ··· 899 807 if (bio_integrity(bio)) 900 808 bio_integrity_advance(bio, bytes); 901 809 902 - bio->bi_sector += bytes >> 9; 903 - bio->bi_size -= bytes; 904 - 905 - if (bio->bi_rw & BIO_NO_ADVANCE_ITER_MASK) 906 - return; 907 - 908 - while (bytes) { 909 - if (unlikely(bio->bi_idx >= bio->bi_vcnt)) { 910 - WARN_ONCE(1, "bio idx %d >= vcnt %d\n", 911 - bio->bi_idx, bio->bi_vcnt); 912 - break; 913 - } 914 - 915 - if (bytes >= bio_iovec(bio)->bv_len) { 916 - bytes -= bio_iovec(bio)->bv_len; 917 - bio->bi_idx++; 918 - } else { 919 - bio_iovec(bio)->bv_len -= bytes; 920 - bio_iovec(bio)->bv_offset += bytes; 921 - bytes = 0; 922 - } 923 - } 810 + bio_advance_iter(bio, &bio->bi_iter, bytes); 924 811 } 925 812 EXPORT_SYMBOL(bio_advance); 926 813 ··· 945 874 */ 946 875 void bio_copy_data(struct bio *dst, struct bio *src) 947 876 { 948 - struct bio_vec *src_bv, *dst_bv; 949 - unsigned src_offset, dst_offset, bytes; 877 + struct bvec_iter src_iter, dst_iter; 878 + struct bio_vec src_bv, dst_bv; 950 879 void *src_p, *dst_p; 880 + unsigned bytes; 951 881 952 - src_bv = bio_iovec(src); 953 - dst_bv = bio_iovec(dst); 954 - 955 - src_offset = src_bv->bv_offset; 956 - dst_offset = dst_bv->bv_offset; 882 + src_iter = src->bi_iter; 883 + dst_iter = dst->bi_iter; 957 884 958 885 while (1) { 959 - if (src_offset == src_bv->bv_offset + src_bv->bv_len) { 960 - src_bv++; 961 - if (src_bv == bio_iovec_idx(src, src->bi_vcnt)) { 962 - src = src->bi_next; 963 - if (!src) 964 - break; 886 + if (!src_iter.bi_size) { 887 + src = src->bi_next; 888 + if (!src) 889 + break; 965 890 966 - src_bv = bio_iovec(src); 967 - } 968 - 969 - src_offset = src_bv->bv_offset; 891 + src_iter = src->bi_iter; 970 892 } 971 893 972 - if (dst_offset == dst_bv->bv_offset + dst_bv->bv_len) 
{ 973 - dst_bv++; 974 - if (dst_bv == bio_iovec_idx(dst, dst->bi_vcnt)) { 975 - dst = dst->bi_next; 976 - if (!dst) 977 - break; 894 + if (!dst_iter.bi_size) { 895 + dst = dst->bi_next; 896 + if (!dst) 897 + break; 978 898 979 - dst_bv = bio_iovec(dst); 980 - } 981 - 982 - dst_offset = dst_bv->bv_offset; 899 + dst_iter = dst->bi_iter; 983 900 } 984 901 985 - bytes = min(dst_bv->bv_offset + dst_bv->bv_len - dst_offset, 986 - src_bv->bv_offset + src_bv->bv_len - src_offset); 902 + src_bv = bio_iter_iovec(src, src_iter); 903 + dst_bv = bio_iter_iovec(dst, dst_iter); 987 904 988 - src_p = kmap_atomic(src_bv->bv_page); 989 - dst_p = kmap_atomic(dst_bv->bv_page); 905 + bytes = min(src_bv.bv_len, dst_bv.bv_len); 990 906 991 - memcpy(dst_p + dst_offset, 992 - src_p + src_offset, 907 + src_p = kmap_atomic(src_bv.bv_page); 908 + dst_p = kmap_atomic(dst_bv.bv_page); 909 + 910 + memcpy(dst_p + dst_bv.bv_offset, 911 + src_p + src_bv.bv_offset, 993 912 bytes); 994 913 995 914 kunmap_atomic(dst_p); 996 915 kunmap_atomic(src_p); 997 916 998 - src_offset += bytes; 999 - dst_offset += bytes; 917 + bio_advance_iter(src, &src_iter, bytes); 918 + bio_advance_iter(dst, &dst_iter, bytes); 1000 919 } 1001 920 } 1002 921 EXPORT_SYMBOL(bio_copy_data); 1003 922 1004 923 struct bio_map_data { 1005 - struct bio_vec *iovecs; 1006 - struct sg_iovec *sgvecs; 1007 924 int nr_sgvecs; 1008 925 int is_our_pages; 926 + struct sg_iovec sgvecs[]; 1009 927 }; 1010 928 1011 929 static void bio_set_map_data(struct bio_map_data *bmd, struct bio *bio, 1012 930 struct sg_iovec *iov, int iov_count, 1013 931 int is_our_pages) 1014 932 { 1015 - memcpy(bmd->iovecs, bio->bi_io_vec, sizeof(struct bio_vec) * bio->bi_vcnt); 1016 933 memcpy(bmd->sgvecs, iov, sizeof(struct sg_iovec) * iov_count); 1017 934 bmd->nr_sgvecs = iov_count; 1018 935 bmd->is_our_pages = is_our_pages; 1019 936 bio->bi_private = bmd; 1020 937 } 1021 938 1022 - static void bio_free_map_data(struct bio_map_data *bmd) 1023 - { 1024 - kfree(bmd->iovecs); 1025 - kfree(bmd->sgvecs); 1026 - kfree(bmd); 1027 - } 1028 - 1029 939 static struct bio_map_data *bio_alloc_map_data(int nr_segs, 1030 940 unsigned int iov_count, 1031 941 gfp_t gfp_mask) 1032 942 { 1033 - struct bio_map_data *bmd; 1034 - 1035 943 if (iov_count > UIO_MAXIOV) 1036 944 return NULL; 1037 945 1038 - bmd = kmalloc(sizeof(*bmd), gfp_mask); 1039 - if (!bmd) 1040 - return NULL; 1041 - 1042 - bmd->iovecs = kmalloc(sizeof(struct bio_vec) * nr_segs, gfp_mask); 1043 - if (!bmd->iovecs) { 1044 - kfree(bmd); 1045 - return NULL; 1046 - } 1047 - 1048 - bmd->sgvecs = kmalloc(sizeof(struct sg_iovec) * iov_count, gfp_mask); 1049 - if (bmd->sgvecs) 1050 - return bmd; 1051 - 1052 - kfree(bmd->iovecs); 1053 - kfree(bmd); 1054 - return NULL; 946 + return kmalloc(sizeof(struct bio_map_data) + 947 + sizeof(struct sg_iovec) * iov_count, gfp_mask); 1055 948 } 1056 949 1057 - static int __bio_copy_iov(struct bio *bio, struct bio_vec *iovecs, 1058 - struct sg_iovec *iov, int iov_count, 950 + static int __bio_copy_iov(struct bio *bio, struct sg_iovec *iov, int iov_count, 1059 951 int to_user, int from_user, int do_free_page) 1060 952 { 1061 953 int ret = 0, i; ··· 1028 994 1029 995 bio_for_each_segment_all(bvec, bio, i) { 1030 996 char *bv_addr = page_address(bvec->bv_page); 1031 - unsigned int bv_len = iovecs[i].bv_len; 997 + unsigned int bv_len = bvec->bv_len; 1032 998 1033 999 while (bv_len && iov_idx < iov_count) { 1034 1000 unsigned int bytes; ··· 1088 1054 * don't copy into a random user address space, just free. 
1089 1055 */ 1090 1056 if (current->mm) 1091 - ret = __bio_copy_iov(bio, bmd->iovecs, bmd->sgvecs, 1092 - bmd->nr_sgvecs, bio_data_dir(bio) == READ, 1057 + ret = __bio_copy_iov(bio, bmd->sgvecs, bmd->nr_sgvecs, 1058 + bio_data_dir(bio) == READ, 1093 1059 0, bmd->is_our_pages); 1094 1060 else if (bmd->is_our_pages) 1095 1061 bio_for_each_segment_all(bvec, bio, i) 1096 1062 __free_page(bvec->bv_page); 1097 1063 } 1098 - bio_free_map_data(bmd); 1064 + kfree(bmd); 1099 1065 bio_put(bio); 1100 1066 return ret; 1101 1067 } ··· 1209 1175 */ 1210 1176 if ((!write_to_vm && (!map_data || !map_data->null_mapped)) || 1211 1177 (map_data && map_data->from_user)) { 1212 - ret = __bio_copy_iov(bio, bio->bi_io_vec, iov, iov_count, 0, 1, 0); 1178 + ret = __bio_copy_iov(bio, iov, iov_count, 0, 1, 0); 1213 1179 if (ret) 1214 1180 goto cleanup; 1215 1181 } ··· 1223 1189 1224 1190 bio_put(bio); 1225 1191 out_bmd: 1226 - bio_free_map_data(bmd); 1192 + kfree(bmd); 1227 1193 return ERR_PTR(ret); 1228 1194 } 1229 1195 ··· 1519 1485 if (IS_ERR(bio)) 1520 1486 return bio; 1521 1487 1522 - if (bio->bi_size == len) 1488 + if (bio->bi_iter.bi_size == len) 1523 1489 return bio; 1524 1490 1525 1491 /* ··· 1540 1506 1541 1507 bio_for_each_segment_all(bvec, bio, i) { 1542 1508 char *addr = page_address(bvec->bv_page); 1543 - int len = bmd->iovecs[i].bv_len; 1544 1509 1545 1510 if (read) 1546 - memcpy(p, addr, len); 1511 + memcpy(p, addr, bvec->bv_len); 1547 1512 1548 1513 __free_page(bvec->bv_page); 1549 - p += len; 1514 + p += bvec->bv_len; 1550 1515 } 1551 1516 1552 - bio_free_map_data(bmd); 1517 + kfree(bmd); 1553 1518 bio_put(bio); 1554 1519 } 1555 1520 ··· 1719 1686 #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1720 1687 void bio_flush_dcache_pages(struct bio *bi) 1721 1688 { 1722 - int i; 1723 - struct bio_vec *bvec; 1689 + struct bio_vec bvec; 1690 + struct bvec_iter iter; 1724 1691 1725 - bio_for_each_segment(bvec, bi, i) 1726 - flush_dcache_page(bvec->bv_page); 1692 + bio_for_each_segment(bvec, bi, iter) 1693 + flush_dcache_page(bvec.bv_page); 1727 1694 } 1728 1695 EXPORT_SYMBOL(bio_flush_dcache_pages); 1729 1696 #endif ··· 1744 1711 **/ 1745 1712 void bio_endio(struct bio *bio, int error) 1746 1713 { 1747 - if (error) 1748 - clear_bit(BIO_UPTODATE, &bio->bi_flags); 1749 - else if (!test_bit(BIO_UPTODATE, &bio->bi_flags)) 1750 - error = -EIO; 1714 + while (bio) { 1715 + BUG_ON(atomic_read(&bio->bi_remaining) <= 0); 1751 1716 1752 - if (bio->bi_end_io) 1753 - bio->bi_end_io(bio, error); 1717 + if (error) 1718 + clear_bit(BIO_UPTODATE, &bio->bi_flags); 1719 + else if (!test_bit(BIO_UPTODATE, &bio->bi_flags)) 1720 + error = -EIO; 1721 + 1722 + if (!atomic_dec_and_test(&bio->bi_remaining)) 1723 + return; 1724 + 1725 + /* 1726 + * Need to have a real endio function for chained bios, 1727 + * otherwise various corner cases will break (like stacking 1728 + * block devices that save/restore bi_end_io) - however, we want 1729 + * to avoid unbounded recursion and blowing the stack. Tail call 1730 + * optimization would handle this, but compiling with frame 1731 + * pointers also disables gcc's sibling call optimization. 
1732 + */ 1733 + if (bio->bi_end_io == bio_chain_endio) { 1734 + struct bio *parent = bio->bi_private; 1735 + bio_put(bio); 1736 + bio = parent; 1737 + } else { 1738 + if (bio->bi_end_io) 1739 + bio->bi_end_io(bio, error); 1740 + bio = NULL; 1741 + } 1742 + } 1754 1743 } 1755 1744 EXPORT_SYMBOL(bio_endio); 1756 1745 1757 - void bio_pair_release(struct bio_pair *bp) 1746 + /** 1747 + * bio_endio_nodec - end I/O on a bio, without decrementing bi_remaining 1748 + * @bio: bio 1749 + * @error: error, if any 1750 + * 1751 + * For code that has saved and restored bi_end_io; thing hard before using this 1752 + * function, probably you should've cloned the entire bio. 1753 + **/ 1754 + void bio_endio_nodec(struct bio *bio, int error) 1758 1755 { 1759 - if (atomic_dec_and_test(&bp->cnt)) { 1760 - struct bio *master = bp->bio1.bi_private; 1761 - 1762 - bio_endio(master, bp->error); 1763 - mempool_free(bp, bp->bio2.bi_private); 1764 - } 1756 + atomic_inc(&bio->bi_remaining); 1757 + bio_endio(bio, error); 1765 1758 } 1766 - EXPORT_SYMBOL(bio_pair_release); 1759 + EXPORT_SYMBOL(bio_endio_nodec); 1767 1760 1768 - static void bio_pair_end_1(struct bio *bi, int err) 1769 - { 1770 - struct bio_pair *bp = container_of(bi, struct bio_pair, bio1); 1771 - 1772 - if (err) 1773 - bp->error = err; 1774 - 1775 - bio_pair_release(bp); 1776 - } 1777 - 1778 - static void bio_pair_end_2(struct bio *bi, int err) 1779 - { 1780 - struct bio_pair *bp = container_of(bi, struct bio_pair, bio2); 1781 - 1782 - if (err) 1783 - bp->error = err; 1784 - 1785 - bio_pair_release(bp); 1786 - } 1787 - 1788 - /* 1789 - * split a bio - only worry about a bio with a single page in its iovec 1761 + /** 1762 + * bio_split - split a bio 1763 + * @bio: bio to split 1764 + * @sectors: number of sectors to split from the front of @bio 1765 + * @gfp: gfp mask 1766 + * @bs: bio set to allocate from 1767 + * 1768 + * Allocates and returns a new bio which represents @sectors from the start of 1769 + * @bio, and updates @bio to represent the remaining sectors. 1770 + * 1771 + * The newly allocated bio will point to @bio's bi_io_vec; it is the caller's 1772 + * responsibility to ensure that @bio is not freed before the split. 
1790 1773 */ 1791 - struct bio_pair *bio_split(struct bio *bi, int first_sectors) 1774 + struct bio *bio_split(struct bio *bio, int sectors, 1775 + gfp_t gfp, struct bio_set *bs) 1792 1776 { 1793 - struct bio_pair *bp = mempool_alloc(bio_split_pool, GFP_NOIO); 1777 + struct bio *split = NULL; 1794 1778 1795 - if (!bp) 1796 - return bp; 1779 + BUG_ON(sectors <= 0); 1780 + BUG_ON(sectors >= bio_sectors(bio)); 1797 1781 1798 - trace_block_split(bdev_get_queue(bi->bi_bdev), bi, 1799 - bi->bi_sector + first_sectors); 1782 + split = bio_clone_fast(bio, gfp, bs); 1783 + if (!split) 1784 + return NULL; 1800 1785 1801 - BUG_ON(bio_segments(bi) > 1); 1802 - atomic_set(&bp->cnt, 3); 1803 - bp->error = 0; 1804 - bp->bio1 = *bi; 1805 - bp->bio2 = *bi; 1806 - bp->bio2.bi_sector += first_sectors; 1807 - bp->bio2.bi_size -= first_sectors << 9; 1808 - bp->bio1.bi_size = first_sectors << 9; 1786 + split->bi_iter.bi_size = sectors << 9; 1809 1787 1810 - if (bi->bi_vcnt != 0) { 1811 - bp->bv1 = *bio_iovec(bi); 1812 - bp->bv2 = *bio_iovec(bi); 1788 + if (bio_integrity(split)) 1789 + bio_integrity_trim(split, 0, sectors); 1813 1790 1814 - if (bio_is_rw(bi)) { 1815 - bp->bv2.bv_offset += first_sectors << 9; 1816 - bp->bv2.bv_len -= first_sectors << 9; 1817 - bp->bv1.bv_len = first_sectors << 9; 1818 - } 1791 + bio_advance(bio, split->bi_iter.bi_size); 1819 1792 1820 - bp->bio1.bi_io_vec = &bp->bv1; 1821 - bp->bio2.bi_io_vec = &bp->bv2; 1822 - 1823 - bp->bio1.bi_max_vecs = 1; 1824 - bp->bio2.bi_max_vecs = 1; 1825 - } 1826 - 1827 - bp->bio1.bi_end_io = bio_pair_end_1; 1828 - bp->bio2.bi_end_io = bio_pair_end_2; 1829 - 1830 - bp->bio1.bi_private = bi; 1831 - bp->bio2.bi_private = bio_split_pool; 1832 - 1833 - if (bio_integrity(bi)) 1834 - bio_integrity_split(bi, bp, first_sectors); 1835 - 1836 - return bp; 1793 + return split; 1837 1794 } 1838 1795 EXPORT_SYMBOL(bio_split); 1839 1796 ··· 1837 1814 { 1838 1815 /* 'bio' is a cloned bio which we need to trim to match 1839 1816 * the given offset and size. 1840 - * This requires adjusting bi_sector, bi_size, and bi_io_vec 1841 1817 */ 1842 - int i; 1843 - struct bio_vec *bvec; 1844 - int sofar = 0; 1845 1818 1846 1819 size <<= 9; 1847 - if (offset == 0 && size == bio->bi_size) 1820 + if (offset == 0 && size == bio->bi_iter.bi_size) 1848 1821 return; 1849 1822 1850 1823 clear_bit(BIO_SEG_VALID, &bio->bi_flags); 1851 1824 1852 1825 bio_advance(bio, offset << 9); 1853 1826 1854 - bio->bi_size = size; 1855 - 1856 - /* avoid any complications with bi_idx being non-zero*/ 1857 - if (bio->bi_idx) { 1858 - memmove(bio->bi_io_vec, bio->bi_io_vec+bio->bi_idx, 1859 - (bio->bi_vcnt - bio->bi_idx) * sizeof(struct bio_vec)); 1860 - bio->bi_vcnt -= bio->bi_idx; 1861 - bio->bi_idx = 0; 1862 - } 1863 - /* Make sure vcnt and last bv are not too big */ 1864 - bio_for_each_segment(bvec, bio, i) { 1865 - if (sofar + bvec->bv_len > size) 1866 - bvec->bv_len = size - sofar; 1867 - if (bvec->bv_len == 0) { 1868 - bio->bi_vcnt = i; 1869 - break; 1870 - } 1871 - sofar += bvec->bv_len; 1872 - } 1827 + bio->bi_iter.bi_size = size; 1873 1828 } 1874 1829 EXPORT_SYMBOL_GPL(bio_trim); 1875 - 1876 - /** 1877 - * bio_sector_offset - Find hardware sector offset in bio 1878 - * @bio: bio to inspect 1879 - * @index: bio_vec index 1880 - * @offset: offset in bv_page 1881 - * 1882 - * Return the number of hardware sectors between beginning of bio 1883 - * and an end point indicated by a bio_vec index and an offset 1884 - * within that vector's page. 
1885 - */ 1886 - sector_t bio_sector_offset(struct bio *bio, unsigned short index, 1887 - unsigned int offset) 1888 - { 1889 - unsigned int sector_sz; 1890 - struct bio_vec *bv; 1891 - sector_t sectors; 1892 - int i; 1893 - 1894 - sector_sz = queue_logical_block_size(bio->bi_bdev->bd_disk->queue); 1895 - sectors = 0; 1896 - 1897 - if (index >= bio->bi_idx) 1898 - index = bio->bi_vcnt - 1; 1899 - 1900 - bio_for_each_segment_all(bv, bio, i) { 1901 - if (i == index) { 1902 - if (offset > bv->bv_offset) 1903 - sectors += (offset - bv->bv_offset) / sector_sz; 1904 - break; 1905 - } 1906 - 1907 - sectors += bv->bv_len / sector_sz; 1908 - } 1909 - 1910 - return sectors; 1911 - } 1912 - EXPORT_SYMBOL(bio_sector_offset); 1913 1830 1914 1831 /* 1915 1832 * create memory pools for biovec's in a bio_set. ··· 2027 2064 2028 2065 if (bioset_integrity_create(fs_bio_set, BIO_POOL_SIZE)) 2029 2066 panic("bio: can't create integrity pool\n"); 2030 - 2031 - bio_split_pool = mempool_create_kmalloc_pool(BIO_SPLIT_ENTRIES, 2032 - sizeof(struct bio_pair)); 2033 - if (!bio_split_pool) 2034 - panic("bio: can't create split pool\n"); 2035 2067 2036 2068 return 0; 2037 2069 }
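The fs/bio.c rework above replaces the fixed-size bio_pair machinery with two composable primitives: bio_split(), which clones the front @sectors of a bio while sharing its biovec (via bio_clone_fast()) and advances the original past them, and bio_chain(), which ties the child's completion to the parent through the new bi_remaining count. A hedged sketch of how a driver with an internal boundary might combine them; the boundary computation, the bio_set and the function names are illustrative and assume boundary_sectors is a power of two:

#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * Ensure no child bio crosses a @boundary_sectors boundary, then resubmit
 * the pieces; @bs is a bio_set owned by the driver.
 */
static void example_submit_split(struct bio *bio, unsigned int boundary_sectors,
                                 struct bio_set *bs)
{
        unsigned int chunk = boundary_sectors -
                (bio->bi_iter.bi_sector & (boundary_sectors - 1));

        if (chunk < bio_sectors(bio)) {
                struct bio *split = bio_split(bio, chunk, GFP_NOIO, bs);

                /*
                 * @split shares @bio's biovec; chaining means @bio's own
                 * bi_end_io only runs once both halves have completed.
                 */
                bio_chain(split, bio);
                generic_make_request(split);
        }

        generic_make_request(bio);
}

Unlike the removed bio_pair_split(), nothing here is limited to single-page bios, and the split can be repeated as many times as the hardware requires.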
+4 -4
fs/btrfs/check-integrity.c
··· 1695 1695 return -1; 1696 1696 } 1697 1697 bio->bi_bdev = block_ctx->dev->bdev; 1698 - bio->bi_sector = dev_bytenr >> 9; 1698 + bio->bi_iter.bi_sector = dev_bytenr >> 9; 1699 1699 1700 1700 for (j = i; j < num_pages; j++) { 1701 1701 ret = bio_add_page(bio, block_ctx->pagev[j], ··· 3013 3013 int bio_is_patched; 3014 3014 char **mapped_datav; 3015 3015 3016 - dev_bytenr = 512 * bio->bi_sector; 3016 + dev_bytenr = 512 * bio->bi_iter.bi_sector; 3017 3017 bio_is_patched = 0; 3018 3018 if (dev_state->state->print_mask & 3019 3019 BTRFSIC_PRINT_MASK_SUBMIT_BIO_BH) ··· 3021 3021 "submit_bio(rw=0x%x, bi_vcnt=%u," 3022 3022 " bi_sector=%llu (bytenr %llu), bi_bdev=%p)\n", 3023 3023 rw, bio->bi_vcnt, 3024 - (unsigned long long)bio->bi_sector, dev_bytenr, 3025 - bio->bi_bdev); 3024 + (unsigned long long)bio->bi_iter.bi_sector, 3025 + dev_bytenr, bio->bi_bdev); 3026 3026 3027 3027 mapped_datav = kmalloc(sizeof(*mapped_datav) * bio->bi_vcnt, 3028 3028 GFP_NOFS);
+13 -14
fs/btrfs/compression.c
··· 172 172 goto out; 173 173 174 174 inode = cb->inode; 175 - ret = check_compressed_csum(inode, cb, (u64)bio->bi_sector << 9); 175 + ret = check_compressed_csum(inode, cb, 176 + (u64)bio->bi_iter.bi_sector << 9); 176 177 if (ret) 177 178 goto csum_failed; 178 179 ··· 202 201 if (cb->errors) { 203 202 bio_io_error(cb->orig_bio); 204 203 } else { 205 - int bio_index = 0; 206 - struct bio_vec *bvec = cb->orig_bio->bi_io_vec; 204 + int i; 205 + struct bio_vec *bvec; 207 206 208 207 /* 209 208 * we have verified the checksum already, set page 210 209 * checked so the end_io handlers know about it 211 210 */ 212 - while (bio_index < cb->orig_bio->bi_vcnt) { 211 + bio_for_each_segment_all(bvec, cb->orig_bio, i) 213 212 SetPageChecked(bvec->bv_page); 214 - bvec++; 215 - bio_index++; 216 - } 213 + 217 214 bio_endio(cb->orig_bio, 0); 218 215 } 219 216 ··· 371 372 for (pg_index = 0; pg_index < cb->nr_pages; pg_index++) { 372 373 page = compressed_pages[pg_index]; 373 374 page->mapping = inode->i_mapping; 374 - if (bio->bi_size) 375 + if (bio->bi_iter.bi_size) 375 376 ret = io_tree->ops->merge_bio_hook(WRITE, page, 0, 376 377 PAGE_CACHE_SIZE, 377 378 bio, 0); ··· 505 506 506 507 if (!em || last_offset < em->start || 507 508 (last_offset + PAGE_CACHE_SIZE > extent_map_end(em)) || 508 - (em->block_start >> 9) != cb->orig_bio->bi_sector) { 509 + (em->block_start >> 9) != cb->orig_bio->bi_iter.bi_sector) { 509 510 free_extent_map(em); 510 511 unlock_extent(tree, last_offset, end); 511 512 unlock_page(page); ··· 551 552 * in it. We don't actually do IO on those pages but allocate new ones 552 553 * to hold the compressed pages on disk. 553 554 * 554 - * bio->bi_sector points to the compressed extent on disk 555 + * bio->bi_iter.bi_sector points to the compressed extent on disk 555 556 * bio->bi_io_vec points to all of the inode pages 556 557 * bio->bi_vcnt is a count of pages 557 558 * ··· 572 573 struct page *page; 573 574 struct block_device *bdev; 574 575 struct bio *comp_bio; 575 - u64 cur_disk_byte = (u64)bio->bi_sector << 9; 576 + u64 cur_disk_byte = (u64)bio->bi_iter.bi_sector << 9; 576 577 u64 em_len; 577 578 u64 em_start; 578 579 struct extent_map *em; ··· 658 659 page->mapping = inode->i_mapping; 659 660 page->index = em_start >> PAGE_CACHE_SHIFT; 660 661 661 - if (comp_bio->bi_size) 662 + if (comp_bio->bi_iter.bi_size) 662 663 ret = tree->ops->merge_bio_hook(READ, page, 0, 663 664 PAGE_CACHE_SIZE, 664 665 comp_bio, 0); ··· 686 687 comp_bio, sums); 687 688 BUG_ON(ret); /* -ENOMEM */ 688 689 } 689 - sums += (comp_bio->bi_size + root->sectorsize - 1) / 690 - root->sectorsize; 690 + sums += (comp_bio->bi_iter.bi_size + 691 + root->sectorsize - 1) / root->sectorsize; 691 692 692 693 ret = btrfs_map_bio(root, READ, comp_bio, 693 694 mirror_num, 0);
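The compression.c change above also shows the second iterator this series leans on: bio_for_each_segment_all() walks the entire bi_io_vec with a plain pointer and an index, and is reserved for code that owns the bio (typically allocation and completion paths), whereas bio_for_each_segment() follows bi_iter and is what everyone else uses. A minimal completion-handler sketch in that style, assuming the submitter locked each page (the names are illustrative, not from the patch):

#include <linux/bio.h>
#include <linux/pagemap.h>

static void example_read_end_io(struct bio *bio, int err)
{
        struct bio_vec *bvec;   /* a pointer this time, not a copy */
        int i;

        bio_for_each_segment_all(bvec, bio, i) {
                struct page *page = bvec->bv_page;

                if (err)
                        SetPageError(page);
                else
                        SetPageUptodate(page);
                unlock_page(page);
        }

        bio_put(bio);
}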
+5 -8
fs/btrfs/disk-io.c
··· 842 842 843 843 static int btree_csum_one_bio(struct bio *bio) 844 844 { 845 - struct bio_vec *bvec = bio->bi_io_vec; 846 - int bio_index = 0; 845 + struct bio_vec *bvec; 847 846 struct btrfs_root *root; 848 - int ret = 0; 847 + int i, ret = 0; 849 848 850 - WARN_ON(bio->bi_vcnt <= 0); 851 - while (bio_index < bio->bi_vcnt) { 849 + bio_for_each_segment_all(bvec, bio, i) { 852 850 root = BTRFS_I(bvec->bv_page->mapping->host)->root; 853 851 ret = csum_dirty_buffer(root, bvec->bv_page); 854 852 if (ret) 855 853 break; 856 - bio_index++; 857 - bvec++; 858 854 } 855 + 859 856 return ret; 860 857 } 861 858 ··· 1692 1695 bio->bi_private = end_io_wq->private; 1693 1696 bio->bi_end_io = end_io_wq->end_io; 1694 1697 kfree(end_io_wq); 1695 - bio_endio(bio, error); 1698 + bio_endio_nodec(bio, error); 1696 1699 } 1697 1700 1698 1701 static int cleaner_kthread(void *arg)
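The switch to bio_endio_nodec() in end_workqueue_fn() above exists because of the new bi_remaining accounting: btrfs saved the submitter's bi_end_io, substituted its own, and the first pass through bio_endio() already consumed the bi_remaining reference when that substitute ran, so finishing the restored bio through plain bio_endio() would underflow the count. A hedged sketch of the save/override/restore pattern this helper is meant for (struct example_ctx and the function names are illustrative):

#include <linux/bio.h>

struct example_ctx {
        bio_end_io_t    *saved_end_io;
        void            *saved_private;
};

static void example_intercept_endio(struct bio *bio, int err)
{
        /* real code would stash @err and queue example_finish() to a workqueue */
}

static void example_submit(struct example_ctx *ctx, struct bio *bio)
{
        ctx->saved_end_io = bio->bi_end_io;
        ctx->saved_private = bio->bi_private;
        bio->bi_end_io = example_intercept_endio;
        bio->bi_private = ctx;
        submit_bio(bio->bi_rw, bio);
}

/* Runs later, in process context, once the interception has fired. */
static void example_finish(struct example_ctx *ctx, struct bio *bio, int err)
{
        bio->bi_end_io = ctx->saved_end_io;
        bio->bi_private = ctx->saved_private;
        bio_endio_nodec(bio, err);      /* re-take the bi_remaining reference */
}

The btrfs/volumes.c hunk further down does the same balancing open-coded, bumping bi_remaining with atomic_inc() before handing the original bio back to its completion handler.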
+20 -29
fs/btrfs/extent_io.c
··· 1984 1984 bio = btrfs_io_bio_alloc(GFP_NOFS, 1); 1985 1985 if (!bio) 1986 1986 return -EIO; 1987 - bio->bi_size = 0; 1987 + bio->bi_iter.bi_size = 0; 1988 1988 map_length = length; 1989 1989 1990 1990 ret = btrfs_map_block(fs_info, WRITE, logical, ··· 1995 1995 } 1996 1996 BUG_ON(mirror_num != bbio->mirror_num); 1997 1997 sector = bbio->stripes[mirror_num-1].physical >> 9; 1998 - bio->bi_sector = sector; 1998 + bio->bi_iter.bi_sector = sector; 1999 1999 dev = bbio->stripes[mirror_num-1].dev; 2000 2000 kfree(bbio); 2001 2001 if (!dev || !dev->bdev || !dev->writeable) { ··· 2268 2268 return -EIO; 2269 2269 } 2270 2270 bio->bi_end_io = failed_bio->bi_end_io; 2271 - bio->bi_sector = failrec->logical >> 9; 2271 + bio->bi_iter.bi_sector = failrec->logical >> 9; 2272 2272 bio->bi_bdev = BTRFS_I(inode)->root->fs_info->fs_devices->latest_bdev; 2273 - bio->bi_size = 0; 2273 + bio->bi_iter.bi_size = 0; 2274 2274 2275 2275 btrfs_failed_bio = btrfs_io_bio(failed_bio); 2276 2276 if (btrfs_failed_bio->csum) { ··· 2332 2332 */ 2333 2333 static void end_bio_extent_writepage(struct bio *bio, int err) 2334 2334 { 2335 - struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1; 2335 + struct bio_vec *bvec; 2336 2336 struct extent_io_tree *tree; 2337 2337 u64 start; 2338 2338 u64 end; 2339 + int i; 2339 2340 2340 - do { 2341 + bio_for_each_segment_all(bvec, bio, i) { 2341 2342 struct page *page = bvec->bv_page; 2342 2343 tree = &BTRFS_I(page->mapping->host)->io_tree; 2343 2344 ··· 2356 2355 start = page_offset(page); 2357 2356 end = start + bvec->bv_offset + bvec->bv_len - 1; 2358 2357 2359 - if (--bvec >= bio->bi_io_vec) 2360 - prefetchw(&bvec->bv_page->flags); 2361 - 2362 2358 if (end_extent_writepage(page, err, start, end)) 2363 2359 continue; 2364 2360 2365 2361 end_page_writeback(page); 2366 - } while (bvec >= bio->bi_io_vec); 2362 + } 2367 2363 2368 2364 bio_put(bio); 2369 2365 } ··· 2390 2392 */ 2391 2393 static void end_bio_extent_readpage(struct bio *bio, int err) 2392 2394 { 2395 + struct bio_vec *bvec; 2393 2396 int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags); 2394 - struct bio_vec *bvec_end = bio->bi_io_vec + bio->bi_vcnt - 1; 2395 - struct bio_vec *bvec = bio->bi_io_vec; 2396 2397 struct btrfs_io_bio *io_bio = btrfs_io_bio(bio); 2397 2398 struct extent_io_tree *tree; 2398 2399 u64 offset = 0; ··· 2402 2405 u64 extent_len = 0; 2403 2406 int mirror; 2404 2407 int ret; 2408 + int i; 2405 2409 2406 2410 if (err) 2407 2411 uptodate = 0; 2408 2412 2409 - do { 2413 + bio_for_each_segment_all(bvec, bio, i) { 2410 2414 struct page *page = bvec->bv_page; 2411 2415 struct inode *inode = page->mapping->host; 2412 2416 2413 2417 pr_debug("end_bio_extent_readpage: bi_sector=%llu, err=%d, " 2414 - "mirror=%lu\n", (u64)bio->bi_sector, err, 2418 + "mirror=%lu\n", (u64)bio->bi_iter.bi_sector, err, 2415 2419 io_bio->mirror_num); 2416 2420 tree = &BTRFS_I(inode)->io_tree; 2417 2421 ··· 2430 2432 start = page_offset(page); 2431 2433 end = start + bvec->bv_offset + bvec->bv_len - 1; 2432 2434 len = bvec->bv_len; 2433 - 2434 - if (++bvec <= bvec_end) 2435 - prefetchw(&bvec->bv_page->flags); 2436 2435 2437 2436 mirror = io_bio->mirror_num; 2438 2437 if (likely(uptodate && tree->ops && ··· 2511 2516 extent_start = start; 2512 2517 extent_len = end + 1 - start; 2513 2518 } 2514 - } while (bvec <= bvec_end); 2519 + } 2515 2520 2516 2521 if (extent_len) 2517 2522 endio_readpage_release_extent(tree, extent_start, extent_len, ··· 2542 2547 } 2543 2548 2544 2549 if (bio) { 2545 - bio->bi_size = 0; 2546 2550 
bio->bi_bdev = bdev; 2547 - bio->bi_sector = first_sector; 2551 + bio->bi_iter.bi_sector = first_sector; 2548 2552 btrfs_bio = btrfs_io_bio(bio); 2549 2553 btrfs_bio->csum = NULL; 2550 2554 btrfs_bio->csum_allocated = NULL; ··· 2637 2643 if (bio_ret && *bio_ret) { 2638 2644 bio = *bio_ret; 2639 2645 if (old_compressed) 2640 - contig = bio->bi_sector == sector; 2646 + contig = bio->bi_iter.bi_sector == sector; 2641 2647 else 2642 2648 contig = bio_end_sector(bio) == sector; 2643 2649 ··· 3404 3410 3405 3411 static void end_bio_extent_buffer_writepage(struct bio *bio, int err) 3406 3412 { 3407 - int uptodate = err == 0; 3408 - struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1; 3413 + struct bio_vec *bvec; 3409 3414 struct extent_buffer *eb; 3410 - int done; 3415 + int i, done; 3411 3416 3412 - do { 3417 + bio_for_each_segment_all(bvec, bio, i) { 3413 3418 struct page *page = bvec->bv_page; 3414 3419 3415 - bvec--; 3416 3420 eb = (struct extent_buffer *)page->private; 3417 3421 BUG_ON(!eb); 3418 3422 done = atomic_dec_and_test(&eb->io_pages); 3419 3423 3420 - if (!uptodate || test_bit(EXTENT_BUFFER_IOERR, &eb->bflags)) { 3424 + if (err || test_bit(EXTENT_BUFFER_IOERR, &eb->bflags)) { 3421 3425 set_bit(EXTENT_BUFFER_IOERR, &eb->bflags); 3422 3426 ClearPageUptodate(page); 3423 3427 SetPageError(page); ··· 3427 3435 continue; 3428 3436 3429 3437 end_extent_buffer_writeback(eb); 3430 - } while (bvec >= bio->bi_io_vec); 3438 + } 3431 3439 3432 3440 bio_put(bio); 3433 - 3434 3441 } 3435 3442 3436 3443 static int write_one_eb(struct extent_buffer *eb,
+10 -9
fs/btrfs/file-item.c
··· 182 182 if (!path) 183 183 return -ENOMEM; 184 184 185 - nblocks = bio->bi_size >> inode->i_sb->s_blocksize_bits; 185 + nblocks = bio->bi_iter.bi_size >> inode->i_sb->s_blocksize_bits; 186 186 if (!dst) { 187 187 if (nblocks * csum_size > BTRFS_BIO_INLINE_CSUM_SIZE) { 188 188 btrfs_bio->csum_allocated = kmalloc(nblocks * csum_size, ··· 201 201 csum = (u8 *)dst; 202 202 } 203 203 204 - if (bio->bi_size > PAGE_CACHE_SIZE * 8) 204 + if (bio->bi_iter.bi_size > PAGE_CACHE_SIZE * 8) 205 205 path->reada = 2; 206 206 207 207 WARN_ON(bio->bi_vcnt <= 0); ··· 217 217 path->skip_locking = 1; 218 218 } 219 219 220 - disk_bytenr = (u64)bio->bi_sector << 9; 220 + disk_bytenr = (u64)bio->bi_iter.bi_sector << 9; 221 221 if (dio) 222 222 offset = logical_offset; 223 223 while (bio_index < bio->bi_vcnt) { ··· 302 302 struct btrfs_dio_private *dip, struct bio *bio, 303 303 u64 offset) 304 304 { 305 - int len = (bio->bi_sector << 9) - dip->disk_bytenr; 305 + int len = (bio->bi_iter.bi_sector << 9) - dip->disk_bytenr; 306 306 u16 csum_size = btrfs_super_csum_size(root->fs_info->super_copy); 307 307 int ret; 308 308 ··· 447 447 u64 offset; 448 448 449 449 WARN_ON(bio->bi_vcnt <= 0); 450 - sums = kzalloc(btrfs_ordered_sum_size(root, bio->bi_size), GFP_NOFS); 450 + sums = kzalloc(btrfs_ordered_sum_size(root, bio->bi_iter.bi_size), 451 + GFP_NOFS); 451 452 if (!sums) 452 453 return -ENOMEM; 453 454 454 - sums->len = bio->bi_size; 455 + sums->len = bio->bi_iter.bi_size; 455 456 INIT_LIST_HEAD(&sums->list); 456 457 457 458 if (contig) ··· 462 461 463 462 ordered = btrfs_lookup_ordered_extent(inode, offset); 464 463 BUG_ON(!ordered); /* Logic error */ 465 - sums->bytenr = (u64)bio->bi_sector << 9; 464 + sums->bytenr = (u64)bio->bi_iter.bi_sector << 9; 466 465 index = 0; 467 466 468 467 while (bio_index < bio->bi_vcnt) { ··· 477 476 btrfs_add_ordered_sum(inode, ordered, sums); 478 477 btrfs_put_ordered_extent(ordered); 479 478 480 - bytes_left = bio->bi_size - total_bytes; 479 + bytes_left = bio->bi_iter.bi_size - total_bytes; 481 480 482 481 sums = kzalloc(btrfs_ordered_sum_size(root, bytes_left), 483 482 GFP_NOFS); ··· 485 484 sums->len = bytes_left; 486 485 ordered = btrfs_lookup_ordered_extent(inode, offset); 487 486 BUG_ON(!ordered); /* Logic error */ 488 - sums->bytenr = ((u64)bio->bi_sector << 9) + 487 + sums->bytenr = ((u64)bio->bi_iter.bi_sector << 9) + 489 488 total_bytes; 490 489 index = 0; 491 490 }
+18 -19
fs/btrfs/inode.c
··· 1577 1577 unsigned long bio_flags) 1578 1578 { 1579 1579 struct btrfs_root *root = BTRFS_I(page->mapping->host)->root; 1580 - u64 logical = (u64)bio->bi_sector << 9; 1580 + u64 logical = (u64)bio->bi_iter.bi_sector << 9; 1581 1581 u64 length = 0; 1582 1582 u64 map_length; 1583 1583 int ret; ··· 1585 1585 if (bio_flags & EXTENT_BIO_COMPRESSED) 1586 1586 return 0; 1587 1587 1588 - length = bio->bi_size; 1588 + length = bio->bi_iter.bi_size; 1589 1589 map_length = length; 1590 1590 ret = btrfs_map_block(root->fs_info, rw, logical, 1591 1591 &map_length, NULL, 0); ··· 6783 6783 static void btrfs_endio_direct_read(struct bio *bio, int err) 6784 6784 { 6785 6785 struct btrfs_dio_private *dip = bio->bi_private; 6786 - struct bio_vec *bvec_end = bio->bi_io_vec + bio->bi_vcnt - 1; 6787 - struct bio_vec *bvec = bio->bi_io_vec; 6786 + struct bio_vec *bvec; 6788 6787 struct inode *inode = dip->inode; 6789 6788 struct btrfs_root *root = BTRFS_I(inode)->root; 6790 6789 struct bio *dio_bio; 6791 6790 u32 *csums = (u32 *)dip->csum; 6792 - int index = 0; 6793 6791 u64 start; 6792 + int i; 6794 6793 6795 6794 start = dip->logical_offset; 6796 - do { 6795 + bio_for_each_segment_all(bvec, bio, i) { 6797 6796 if (!(BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)) { 6798 6797 struct page *page = bvec->bv_page; 6799 6798 char *kaddr; ··· 6808 6809 local_irq_restore(flags); 6809 6810 6810 6811 flush_dcache_page(bvec->bv_page); 6811 - if (csum != csums[index]) { 6812 + if (csum != csums[i]) { 6812 6813 btrfs_err(root->fs_info, "csum failed ino %llu off %llu csum %u expected csum %u", 6813 6814 btrfs_ino(inode), start, csum, 6814 - csums[index]); 6815 + csums[i]); 6815 6816 err = -EIO; 6816 6817 } 6817 6818 } 6818 6819 6819 6820 start += bvec->bv_len; 6820 - bvec++; 6821 - index++; 6822 - } while (bvec <= bvec_end); 6821 + } 6823 6822 6824 6823 unlock_extent(&BTRFS_I(inode)->io_tree, dip->logical_offset, 6825 6824 dip->logical_offset + dip->bytes - 1); ··· 6898 6901 printk(KERN_ERR "btrfs direct IO failed ino %llu rw %lu " 6899 6902 "sector %#Lx len %u err no %d\n", 6900 6903 btrfs_ino(dip->inode), bio->bi_rw, 6901 - (unsigned long long)bio->bi_sector, bio->bi_size, err); 6904 + (unsigned long long)bio->bi_iter.bi_sector, 6905 + bio->bi_iter.bi_size, err); 6902 6906 dip->errors = 1; 6903 6907 6904 6908 /* ··· 6990 6992 struct bio *bio; 6991 6993 struct bio *orig_bio = dip->orig_bio; 6992 6994 struct bio_vec *bvec = orig_bio->bi_io_vec; 6993 - u64 start_sector = orig_bio->bi_sector; 6995 + u64 start_sector = orig_bio->bi_iter.bi_sector; 6994 6996 u64 file_offset = dip->logical_offset; 6995 6997 u64 submit_len = 0; 6996 6998 u64 map_length; ··· 6998 7000 int ret = 0; 6999 7001 int async_submit = 0; 7000 7002 7001 - map_length = orig_bio->bi_size; 7003 + map_length = orig_bio->bi_iter.bi_size; 7002 7004 ret = btrfs_map_block(root->fs_info, rw, start_sector << 9, 7003 7005 &map_length, NULL, 0); 7004 7006 if (ret) { ··· 7006 7008 return -EIO; 7007 7009 } 7008 7010 7009 - if (map_length >= orig_bio->bi_size) { 7011 + if (map_length >= orig_bio->bi_iter.bi_size) { 7010 7012 bio = orig_bio; 7011 7013 goto submit; 7012 7014 } ··· 7058 7060 bio->bi_private = dip; 7059 7061 bio->bi_end_io = btrfs_end_dio_bio; 7060 7062 7061 - map_length = orig_bio->bi_size; 7063 + map_length = orig_bio->bi_iter.bi_size; 7062 7064 ret = btrfs_map_block(root->fs_info, rw, 7063 7065 start_sector << 9, 7064 7066 &map_length, NULL, 0); ··· 7116 7118 7117 7119 if (!skip_sum && !write) { 7118 7120 csum_size = 
btrfs_super_csum_size(root->fs_info->super_copy); 7119 - sum_len = dio_bio->bi_size >> inode->i_sb->s_blocksize_bits; 7121 + sum_len = dio_bio->bi_iter.bi_size >> 7122 + inode->i_sb->s_blocksize_bits; 7120 7123 sum_len *= csum_size; 7121 7124 } else { 7122 7125 sum_len = 0; ··· 7132 7133 dip->private = dio_bio->bi_private; 7133 7134 dip->inode = inode; 7134 7135 dip->logical_offset = file_offset; 7135 - dip->bytes = dio_bio->bi_size; 7136 - dip->disk_bytenr = (u64)dio_bio->bi_sector << 9; 7136 + dip->bytes = dio_bio->bi_iter.bi_size; 7137 + dip->disk_bytenr = (u64)dio_bio->bi_iter.bi_sector << 9; 7137 7138 io_bio->bi_private = dip; 7138 7139 dip->errors = 0; 7139 7140 dip->orig_bio = io_bio;
+11 -11
fs/btrfs/raid56.c
··· 1032 1032 1033 1033 /* see if we can add this page onto our existing bio */ 1034 1034 if (last) { 1035 - last_end = (u64)last->bi_sector << 9; 1036 - last_end += last->bi_size; 1035 + last_end = (u64)last->bi_iter.bi_sector << 9; 1036 + last_end += last->bi_iter.bi_size; 1037 1037 1038 1038 /* 1039 1039 * we can't merge these if they are from different ··· 1053 1053 if (!bio) 1054 1054 return -ENOMEM; 1055 1055 1056 - bio->bi_size = 0; 1056 + bio->bi_iter.bi_size = 0; 1057 1057 bio->bi_bdev = stripe->dev->bdev; 1058 - bio->bi_sector = disk_start >> 9; 1058 + bio->bi_iter.bi_sector = disk_start >> 9; 1059 1059 set_bit(BIO_UPTODATE, &bio->bi_flags); 1060 1060 1061 1061 bio_add_page(bio, page, PAGE_CACHE_SIZE, 0); ··· 1111 1111 1112 1112 spin_lock_irq(&rbio->bio_list_lock); 1113 1113 bio_list_for_each(bio, &rbio->bio_list) { 1114 - start = (u64)bio->bi_sector << 9; 1114 + start = (u64)bio->bi_iter.bi_sector << 9; 1115 1115 stripe_offset = start - rbio->raid_map[0]; 1116 1116 page_index = stripe_offset >> PAGE_CACHE_SHIFT; 1117 1117 ··· 1272 1272 static int find_bio_stripe(struct btrfs_raid_bio *rbio, 1273 1273 struct bio *bio) 1274 1274 { 1275 - u64 physical = bio->bi_sector; 1275 + u64 physical = bio->bi_iter.bi_sector; 1276 1276 u64 stripe_start; 1277 1277 int i; 1278 1278 struct btrfs_bio_stripe *stripe; ··· 1298 1298 static int find_logical_bio_stripe(struct btrfs_raid_bio *rbio, 1299 1299 struct bio *bio) 1300 1300 { 1301 - u64 logical = bio->bi_sector; 1301 + u64 logical = bio->bi_iter.bi_sector; 1302 1302 u64 stripe_start; 1303 1303 int i; 1304 1304 ··· 1602 1602 plug_list); 1603 1603 struct btrfs_raid_bio *rb = container_of(b, struct btrfs_raid_bio, 1604 1604 plug_list); 1605 - u64 a_sector = ra->bio_list.head->bi_sector; 1606 - u64 b_sector = rb->bio_list.head->bi_sector; 1605 + u64 a_sector = ra->bio_list.head->bi_iter.bi_sector; 1606 + u64 b_sector = rb->bio_list.head->bi_iter.bi_sector; 1607 1607 1608 1608 if (a_sector < b_sector) 1609 1609 return -1; ··· 1691 1691 if (IS_ERR(rbio)) 1692 1692 return PTR_ERR(rbio); 1693 1693 bio_list_add(&rbio->bio_list, bio); 1694 - rbio->bio_list_bytes = bio->bi_size; 1694 + rbio->bio_list_bytes = bio->bi_iter.bi_size; 1695 1695 1696 1696 /* 1697 1697 * don't plug on full rbios, just get them out the door ··· 2044 2044 2045 2045 rbio->read_rebuild = 1; 2046 2046 bio_list_add(&rbio->bio_list, bio); 2047 - rbio->bio_list_bytes = bio->bi_size; 2047 + rbio->bio_list_bytes = bio->bi_iter.bi_size; 2048 2048 2049 2049 rbio->faila = find_logical_bio_stripe(rbio, bio); 2050 2050 if (rbio->faila == -1) {
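Merging and plugging decisions, like the raid56 checks above, now read the iterator fields as well. The common contiguity test reduces to a one-liner; a small sketch with an illustrative name (bio_end_sector() is the existing helper for the first sector past a bio):

#include <linux/bio.h>

/* True if @next begins exactly where @prev ends on disk. */
static bool example_bios_contiguous(struct bio *prev, struct bio *next)
{
        return bio_end_sector(prev) == next->bi_iter.bi_sector;
}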
+6 -6
fs/btrfs/scrub.c
··· 1308 1308 continue; 1309 1309 } 1310 1310 bio->bi_bdev = page->dev->bdev; 1311 - bio->bi_sector = page->physical >> 9; 1311 + bio->bi_iter.bi_sector = page->physical >> 9; 1312 1312 1313 1313 bio_add_page(bio, page->page, PAGE_SIZE, 0); 1314 1314 if (btrfsic_submit_bio_wait(READ, bio)) ··· 1427 1427 if (!bio) 1428 1428 return -EIO; 1429 1429 bio->bi_bdev = page_bad->dev->bdev; 1430 - bio->bi_sector = page_bad->physical >> 9; 1430 + bio->bi_iter.bi_sector = page_bad->physical >> 9; 1431 1431 1432 1432 ret = bio_add_page(bio, page_good->page, PAGE_SIZE, 0); 1433 1433 if (PAGE_SIZE != ret) { ··· 1520 1520 bio->bi_private = sbio; 1521 1521 bio->bi_end_io = scrub_wr_bio_end_io; 1522 1522 bio->bi_bdev = sbio->dev->bdev; 1523 - bio->bi_sector = sbio->physical >> 9; 1523 + bio->bi_iter.bi_sector = sbio->physical >> 9; 1524 1524 sbio->err = 0; 1525 1525 } else if (sbio->physical + sbio->page_count * PAGE_SIZE != 1526 1526 spage->physical_for_dev_replace || ··· 1926 1926 bio->bi_private = sbio; 1927 1927 bio->bi_end_io = scrub_bio_end_io; 1928 1928 bio->bi_bdev = sbio->dev->bdev; 1929 - bio->bi_sector = sbio->physical >> 9; 1929 + bio->bi_iter.bi_sector = sbio->physical >> 9; 1930 1930 sbio->err = 0; 1931 1931 } else if (sbio->physical + sbio->page_count * PAGE_SIZE != 1932 1932 spage->physical || ··· 3371 3371 spin_unlock(&sctx->stat_lock); 3372 3372 return -ENOMEM; 3373 3373 } 3374 - bio->bi_size = 0; 3375 - bio->bi_sector = physical_for_dev_replace >> 9; 3374 + bio->bi_iter.bi_size = 0; 3375 + bio->bi_iter.bi_sector = physical_for_dev_replace >> 9; 3376 3376 bio->bi_bdev = dev->bdev; 3377 3377 ret = bio_add_page(bio, page, PAGE_CACHE_SIZE, 0); 3378 3378 if (ret != PAGE_CACHE_SIZE) {
+13 -6
fs/btrfs/volumes.c
··· 5298 5298 bio_put(bio); 5299 5299 bio = bbio->orig_bio; 5300 5300 } 5301 + 5302 + /* 5303 + * We have original bio now. So increment bi_remaining to 5304 + * account for it in endio 5305 + */ 5306 + atomic_inc(&bio->bi_remaining); 5307 + 5301 5308 bio->bi_private = bbio->private; 5302 5309 bio->bi_end_io = bbio->end_io; 5303 5310 btrfs_io_bio(bio)->mirror_num = bbio->mirror_num; ··· 5418 5411 if (!q->merge_bvec_fn) 5419 5412 return 1; 5420 5413 5421 - bvm.bi_size = bio->bi_size - prev->bv_len; 5414 + bvm.bi_size = bio->bi_iter.bi_size - prev->bv_len; 5422 5415 if (q->merge_bvec_fn(q, &bvm, prev) < prev->bv_len) 5423 5416 return 0; 5424 5417 return 1; ··· 5433 5426 bio->bi_private = bbio; 5434 5427 btrfs_io_bio(bio)->stripe_index = dev_nr; 5435 5428 bio->bi_end_io = btrfs_end_bio; 5436 - bio->bi_sector = physical >> 9; 5429 + bio->bi_iter.bi_sector = physical >> 9; 5437 5430 #ifdef DEBUG 5438 5431 { 5439 5432 struct rcu_string *name; ··· 5471 5464 while (bvec <= (first_bio->bi_io_vec + first_bio->bi_vcnt - 1)) { 5472 5465 if (bio_add_page(bio, bvec->bv_page, bvec->bv_len, 5473 5466 bvec->bv_offset) < bvec->bv_len) { 5474 - u64 len = bio->bi_size; 5467 + u64 len = bio->bi_iter.bi_size; 5475 5468 5476 5469 atomic_inc(&bbio->stripes_pending); 5477 5470 submit_stripe_bio(root, bbio, bio, physical, dev_nr, ··· 5493 5486 bio->bi_private = bbio->private; 5494 5487 bio->bi_end_io = bbio->end_io; 5495 5488 btrfs_io_bio(bio)->mirror_num = bbio->mirror_num; 5496 - bio->bi_sector = logical >> 9; 5489 + bio->bi_iter.bi_sector = logical >> 9; 5497 5490 kfree(bbio); 5498 5491 bio_endio(bio, -EIO); 5499 5492 } ··· 5504 5497 { 5505 5498 struct btrfs_device *dev; 5506 5499 struct bio *first_bio = bio; 5507 - u64 logical = (u64)bio->bi_sector << 9; 5500 + u64 logical = (u64)bio->bi_iter.bi_sector << 9; 5508 5501 u64 length = 0; 5509 5502 u64 map_length; 5510 5503 u64 *raid_map = NULL; ··· 5513 5506 int total_devs = 1; 5514 5507 struct btrfs_bio *bbio = NULL; 5515 5508 5516 - length = bio->bi_size; 5509 + length = bio->bi_iter.bi_size; 5517 5510 map_length = length; 5518 5511 5519 5512 ret = __btrfs_map_block(root->fs_info, rw, logical, &map_length, &bbio,
+6 -6
fs/buffer.c
··· 2982 2982 * let it through, and the IO layer will turn it into 2983 2983 * an EIO. 2984 2984 */ 2985 - if (unlikely(bio->bi_sector >= maxsector)) 2985 + if (unlikely(bio->bi_iter.bi_sector >= maxsector)) 2986 2986 return; 2987 2987 2988 - maxsector -= bio->bi_sector; 2989 - bytes = bio->bi_size; 2988 + maxsector -= bio->bi_iter.bi_sector; 2989 + bytes = bio->bi_iter.bi_size; 2990 2990 if (likely((bytes >> 9) <= maxsector)) 2991 2991 return; 2992 2992 ··· 2994 2994 bytes = maxsector << 9; 2995 2995 2996 2996 /* Truncate the bio.. */ 2997 - bio->bi_size = bytes; 2997 + bio->bi_iter.bi_size = bytes; 2998 2998 bio->bi_io_vec[0].bv_len = bytes; 2999 2999 3000 3000 /* ..and clear the end of the buffer for reads */ ··· 3029 3029 */ 3030 3030 bio = bio_alloc(GFP_NOIO, 1); 3031 3031 3032 - bio->bi_sector = bh->b_blocknr * (bh->b_size >> 9); 3032 + bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9); 3033 3033 bio->bi_bdev = bh->b_bdev; 3034 3034 bio->bi_io_vec[0].bv_page = bh->b_page; 3035 3035 bio->bi_io_vec[0].bv_len = bh->b_size; 3036 3036 bio->bi_io_vec[0].bv_offset = bh_offset(bh); 3037 3037 3038 3038 bio->bi_vcnt = 1; 3039 - bio->bi_size = bh->b_size; 3039 + bio->bi_iter.bi_size = bh->b_size; 3040 3040 3041 3041 bio->bi_end_io = end_bio_bh_io_sync; 3042 3042 bio->bi_private = bh;
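Since the residual byte count moved into the iterator, code that clamps an over-long request, like the end-of-device guard in fs/buffer.c above, now rewrites bio->bi_iter.bi_size. A compressed sketch of that guard, assuming a single-segment buffer-head bio and 512-byte sectors (illustrative name, not from the patch):

#include <linux/bio.h>

/* Clamp @bio so it does not run past @maxsector, the device's last sector + 1. */
static void example_guard_eod(struct bio *bio, sector_t maxsector)
{
        unsigned int bytes = bio->bi_iter.bi_size;

        if (bio->bi_iter.bi_sector >= maxsector)
                return;                 /* let the block layer reject it */

        maxsector -= bio->bi_iter.bi_sector;
        if ((bytes >> 9) <= maxsector)
                return;                 /* already fits on the device */

        bytes = maxsector << 9;
        bio->bi_iter.bi_size = bytes;           /* was bio->bi_size */
        bio->bi_io_vec[0].bv_len = bytes;       /* single-segment bio only */
}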
+2 -2
fs/direct-io.c
··· 375 375 bio = bio_alloc(GFP_KERNEL, nr_vecs); 376 376 377 377 bio->bi_bdev = bdev; 378 - bio->bi_sector = first_sector; 378 + bio->bi_iter.bi_sector = first_sector; 379 379 if (dio->is_async) 380 380 bio->bi_end_io = dio_bio_end_aio; 381 381 else ··· 719 719 if (sdio->bio) { 720 720 loff_t cur_offset = sdio->cur_page_fs_offset; 721 721 loff_t bio_next_offset = sdio->logical_offset_in_bio + 722 - sdio->bio->bi_size; 722 + sdio->bio->bi_iter.bi_size; 723 723 724 724 /* 725 725 * See whether this new request is contiguous with the old.
+4 -4
fs/ext4/page-io.c
··· 65 65 { 66 66 int i; 67 67 int error = !test_bit(BIO_UPTODATE, &bio->bi_flags); 68 + struct bio_vec *bvec; 68 69 69 - for (i = 0; i < bio->bi_vcnt; i++) { 70 - struct bio_vec *bvec = &bio->bi_io_vec[i]; 70 + bio_for_each_segment_all(bvec, bio, i) { 71 71 struct page *page = bvec->bv_page; 72 72 struct buffer_head *bh, *head; 73 73 unsigned bio_start = bvec->bv_offset; ··· 298 298 static void ext4_end_bio(struct bio *bio, int error) 299 299 { 300 300 ext4_io_end_t *io_end = bio->bi_private; 301 - sector_t bi_sector = bio->bi_sector; 301 + sector_t bi_sector = bio->bi_iter.bi_sector; 302 302 303 303 BUG_ON(!io_end); 304 304 bio->bi_end_io = NULL; ··· 366 366 bio = bio_alloc(GFP_NOIO, min(nvecs, BIO_MAX_PAGES)); 367 367 if (!bio) 368 368 return -ENOMEM; 369 - bio->bi_sector = bh->b_blocknr * (bh->b_size >> 9); 369 + bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9); 370 370 bio->bi_bdev = bh->b_bdev; 371 371 bio->bi_end_io = ext4_end_bio; 372 372 bio->bi_private = ext4_get_io_end(io->io_end);
+14 -21
fs/f2fs/data.c
··· 26 26 27 27 static void f2fs_read_end_io(struct bio *bio, int err) 28 28 { 29 - const int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags); 30 - struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1; 29 + struct bio_vec *bvec; 30 + int i; 31 31 32 - do { 32 + bio_for_each_segment_all(bvec, bio, i) { 33 33 struct page *page = bvec->bv_page; 34 34 35 - if (--bvec >= bio->bi_io_vec) 36 - prefetchw(&bvec->bv_page->flags); 37 - 38 - if (unlikely(!uptodate)) { 35 + if (!err) { 36 + SetPageUptodate(page); 37 + } else { 39 38 ClearPageUptodate(page); 40 39 SetPageError(page); 41 - } else { 42 - SetPageUptodate(page); 43 40 } 44 41 unlock_page(page); 45 - } while (bvec >= bio->bi_io_vec); 46 - 42 + } 47 43 bio_put(bio); 48 44 } 49 45 50 46 static void f2fs_write_end_io(struct bio *bio, int err) 51 47 { 52 - const int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags); 53 - struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1; 54 - struct f2fs_sb_info *sbi = F2FS_SB(bvec->bv_page->mapping->host->i_sb); 48 + struct f2fs_sb_info *sbi = F2FS_SB(bio->bi_io_vec->bv_page->mapping->host->i_sb); 49 + struct bio_vec *bvec; 50 + int i; 55 51 56 - do { 52 + bio_for_each_segment_all(bvec, bio, i) { 57 53 struct page *page = bvec->bv_page; 58 54 59 - if (--bvec >= bio->bi_io_vec) 60 - prefetchw(&bvec->bv_page->flags); 61 - 62 - if (unlikely(!uptodate)) { 55 + if (unlikely(err)) { 63 56 SetPageError(page); 64 57 set_bit(AS_EIO, &page->mapping->flags); 65 58 set_ckpt_flags(sbi->ckpt, CP_ERROR_FLAG); ··· 60 67 } 61 68 end_page_writeback(page); 62 69 dec_page_count(sbi, F2FS_WRITEBACK); 63 - } while (bvec >= bio->bi_io_vec); 70 + } 64 71 65 72 if (bio->bi_private) 66 73 complete(bio->bi_private); ··· 84 91 bio = bio_alloc(GFP_NOIO, npages); 85 92 86 93 bio->bi_bdev = sbi->sb->s_bdev; 87 - bio->bi_sector = SECTOR_FROM_BLOCK(sbi, blk_addr); 94 + bio->bi_iter.bi_sector = SECTOR_FROM_BLOCK(sbi, blk_addr); 88 95 bio->bi_end_io = is_read ? f2fs_read_end_io : f2fs_write_end_io; 89 96 90 97 return bio;
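The open-coded backwards walk over bi_io_vec is gone; completion handlers now use bio_for_each_segment_all(), which (per the comment added in include/linux/bio.h later in this pull) may only be used by the bio's owner, since a split bio need not own every vector. A minimal sketch of the resulting shape (my_read_end_io is a hypothetical handler):

static void my_read_end_io(struct bio *bio, int err)
{
	struct bio_vec *bvec;
	int i;

	bio_for_each_segment_all(bvec, bio, i) {
		struct page *page = bvec->bv_page;

		if (err) {
			ClearPageUptodate(page);
			SetPageError(page);
		} else {
			SetPageUptodate(page);
		}
		unlock_page(page);
	}
	bio_put(bio);
}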
+1 -1
fs/gfs2/lops.c
··· 273 273 nrvecs = max(nrvecs/2, 1U); 274 274 } 275 275 276 - bio->bi_sector = blkno * (sb->s_blocksize >> 9); 276 + bio->bi_iter.bi_sector = blkno * (sb->s_blocksize >> 9); 277 277 bio->bi_bdev = sb->s_bdev; 278 278 bio->bi_end_io = gfs2_end_log_write; 279 279 bio->bi_private = sdp;
+1 -1
fs/gfs2/ops_fstype.c
··· 238 238 lock_page(page); 239 239 240 240 bio = bio_alloc(GFP_NOFS, 1); 241 - bio->bi_sector = sector * (sb->s_blocksize >> 9); 241 + bio->bi_iter.bi_sector = sector * (sb->s_blocksize >> 9); 242 242 bio->bi_bdev = sb->s_bdev; 243 243 bio_add_page(bio, page, PAGE_SIZE, 0); 244 244
+1 -1
fs/hfsplus/wrapper.c
··· 63 63 sector &= ~((io_size >> HFSPLUS_SECTOR_SHIFT) - 1); 64 64 65 65 bio = bio_alloc(GFP_NOIO, 1); 66 - bio->bi_sector = sector; 66 + bio->bi_iter.bi_sector = sector; 67 67 bio->bi_bdev = sb->s_bdev; 68 68 69 69 if (!(rw & WRITE) && data)
+6 -6
fs/jfs/jfs_logmgr.c
··· 1998 1998 1999 1999 bio = bio_alloc(GFP_NOFS, 1); 2000 2000 2001 - bio->bi_sector = bp->l_blkno << (log->l2bsize - 9); 2001 + bio->bi_iter.bi_sector = bp->l_blkno << (log->l2bsize - 9); 2002 2002 bio->bi_bdev = log->bdev; 2003 2003 bio->bi_io_vec[0].bv_page = bp->l_page; 2004 2004 bio->bi_io_vec[0].bv_len = LOGPSIZE; 2005 2005 bio->bi_io_vec[0].bv_offset = bp->l_offset; 2006 2006 2007 2007 bio->bi_vcnt = 1; 2008 - bio->bi_size = LOGPSIZE; 2008 + bio->bi_iter.bi_size = LOGPSIZE; 2009 2009 2010 2010 bio->bi_end_io = lbmIODone; 2011 2011 bio->bi_private = bp; 2012 2012 /*check if journaling to disk has been disabled*/ 2013 2013 if (log->no_integrity) { 2014 - bio->bi_size = 0; 2014 + bio->bi_iter.bi_size = 0; 2015 2015 lbmIODone(bio, 0); 2016 2016 } else { 2017 2017 submit_bio(READ_SYNC, bio); ··· 2144 2144 jfs_info("lbmStartIO\n"); 2145 2145 2146 2146 bio = bio_alloc(GFP_NOFS, 1); 2147 - bio->bi_sector = bp->l_blkno << (log->l2bsize - 9); 2147 + bio->bi_iter.bi_sector = bp->l_blkno << (log->l2bsize - 9); 2148 2148 bio->bi_bdev = log->bdev; 2149 2149 bio->bi_io_vec[0].bv_page = bp->l_page; 2150 2150 bio->bi_io_vec[0].bv_len = LOGPSIZE; 2151 2151 bio->bi_io_vec[0].bv_offset = bp->l_offset; 2152 2152 2153 2153 bio->bi_vcnt = 1; 2154 - bio->bi_size = LOGPSIZE; 2154 + bio->bi_iter.bi_size = LOGPSIZE; 2155 2155 2156 2156 bio->bi_end_io = lbmIODone; 2157 2157 bio->bi_private = bp; 2158 2158 2159 2159 /* check if journaling to disk has been disabled */ 2160 2160 if (log->no_integrity) { 2161 - bio->bi_size = 0; 2161 + bio->bi_iter.bi_size = 0; 2162 2162 lbmIODone(bio, 0); 2163 2163 } else { 2164 2164 submit_bio(WRITE_SYNC, bio);
+5 -4
fs/jfs/jfs_metapage.c
··· 416 416 * count from hitting zero before we're through 417 417 */ 418 418 inc_io(page); 419 - if (!bio->bi_size) 419 + if (!bio->bi_iter.bi_size) 420 420 goto dump_bio; 421 421 submit_bio(WRITE, bio); 422 422 nr_underway++; ··· 438 438 439 439 bio = bio_alloc(GFP_NOFS, 1); 440 440 bio->bi_bdev = inode->i_sb->s_bdev; 441 - bio->bi_sector = pblock << (inode->i_blkbits - 9); 441 + bio->bi_iter.bi_sector = pblock << (inode->i_blkbits - 9); 442 442 bio->bi_end_io = metapage_write_end_io; 443 443 bio->bi_private = page; 444 444 ··· 452 452 if (bio) { 453 453 if (bio_add_page(bio, page, bio_bytes, bio_offset) < bio_bytes) 454 454 goto add_failed; 455 - if (!bio->bi_size) 455 + if (!bio->bi_iter.bi_size) 456 456 goto dump_bio; 457 457 458 458 submit_bio(WRITE, bio); ··· 517 517 518 518 bio = bio_alloc(GFP_NOFS, 1); 519 519 bio->bi_bdev = inode->i_sb->s_bdev; 520 - bio->bi_sector = pblock << (inode->i_blkbits - 9); 520 + bio->bi_iter.bi_sector = 521 + pblock << (inode->i_blkbits - 9); 521 522 bio->bi_end_io = metapage_read_end_io; 522 523 bio->bi_private = page; 523 524 len = xlen << inode->i_blkbits;
+16 -20
fs/logfs/dev_bdev.c
··· 26 26 bio_vec.bv_len = PAGE_SIZE; 27 27 bio_vec.bv_offset = 0; 28 28 bio.bi_vcnt = 1; 29 - bio.bi_size = PAGE_SIZE; 30 29 bio.bi_bdev = bdev; 31 - bio.bi_sector = page->index * (PAGE_SIZE >> 9); 30 + bio.bi_iter.bi_sector = page->index * (PAGE_SIZE >> 9); 31 + bio.bi_iter.bi_size = PAGE_SIZE; 32 32 33 33 return submit_bio_wait(rw, &bio); 34 34 } ··· 56 56 static void writeseg_end_io(struct bio *bio, int err) 57 57 { 58 58 const int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags); 59 - struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1; 59 + struct bio_vec *bvec; 60 + int i; 60 61 struct super_block *sb = bio->bi_private; 61 62 struct logfs_super *super = logfs_super(sb); 62 - struct page *page; 63 63 64 64 BUG_ON(!uptodate); /* FIXME: Retry io or write elsewhere */ 65 65 BUG_ON(err); 66 - BUG_ON(bio->bi_vcnt == 0); 67 - do { 68 - page = bvec->bv_page; 69 - if (--bvec >= bio->bi_io_vec) 70 - prefetchw(&bvec->bv_page->flags); 71 66 72 - end_page_writeback(page); 73 - page_cache_release(page); 74 - } while (bvec >= bio->bi_io_vec); 67 + bio_for_each_segment_all(bvec, bio, i) { 68 + end_page_writeback(bvec->bv_page); 69 + page_cache_release(bvec->bv_page); 70 + } 75 71 bio_put(bio); 76 72 if (atomic_dec_and_test(&super->s_pending_writes)) 77 73 wake_up(&wq); ··· 92 96 if (i >= max_pages) { 93 97 /* Block layer cannot split bios :( */ 94 98 bio->bi_vcnt = i; 95 - bio->bi_size = i * PAGE_SIZE; 99 + bio->bi_iter.bi_size = i * PAGE_SIZE; 96 100 bio->bi_bdev = super->s_bdev; 97 - bio->bi_sector = ofs >> 9; 101 + bio->bi_iter.bi_sector = ofs >> 9; 98 102 bio->bi_private = sb; 99 103 bio->bi_end_io = writeseg_end_io; 100 104 atomic_inc(&super->s_pending_writes); ··· 119 123 unlock_page(page); 120 124 } 121 125 bio->bi_vcnt = nr_pages; 122 - bio->bi_size = nr_pages * PAGE_SIZE; 126 + bio->bi_iter.bi_size = nr_pages * PAGE_SIZE; 123 127 bio->bi_bdev = super->s_bdev; 124 - bio->bi_sector = ofs >> 9; 128 + bio->bi_iter.bi_sector = ofs >> 9; 125 129 bio->bi_private = sb; 126 130 bio->bi_end_io = writeseg_end_io; 127 131 atomic_inc(&super->s_pending_writes); ··· 184 188 if (i >= max_pages) { 185 189 /* Block layer cannot split bios :( */ 186 190 bio->bi_vcnt = i; 187 - bio->bi_size = i * PAGE_SIZE; 191 + bio->bi_iter.bi_size = i * PAGE_SIZE; 188 192 bio->bi_bdev = super->s_bdev; 189 - bio->bi_sector = ofs >> 9; 193 + bio->bi_iter.bi_sector = ofs >> 9; 190 194 bio->bi_private = sb; 191 195 bio->bi_end_io = erase_end_io; 192 196 atomic_inc(&super->s_pending_writes); ··· 205 209 bio->bi_io_vec[i].bv_offset = 0; 206 210 } 207 211 bio->bi_vcnt = nr_pages; 208 - bio->bi_size = nr_pages * PAGE_SIZE; 212 + bio->bi_iter.bi_size = nr_pages * PAGE_SIZE; 209 213 bio->bi_bdev = super->s_bdev; 210 - bio->bi_sector = ofs >> 9; 214 + bio->bi_iter.bi_sector = ofs >> 9; 211 215 bio->bi_private = sb; 212 216 bio->bi_end_io = erase_end_io; 213 217 atomic_inc(&super->s_pending_writes);
+9 -10
fs/mpage.c
··· 43 43 */ 44 44 static void mpage_end_io(struct bio *bio, int err) 45 45 { 46 - const int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags); 47 - struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1; 46 + struct bio_vec *bv; 47 + int i; 48 48 49 - do { 50 - struct page *page = bvec->bv_page; 49 + bio_for_each_segment_all(bv, bio, i) { 50 + struct page *page = bv->bv_page; 51 51 52 - if (--bvec >= bio->bi_io_vec) 53 - prefetchw(&bvec->bv_page->flags); 54 52 if (bio_data_dir(bio) == READ) { 55 - if (uptodate) { 53 + if (!err) { 56 54 SetPageUptodate(page); 57 55 } else { 58 56 ClearPageUptodate(page); ··· 58 60 } 59 61 unlock_page(page); 60 62 } else { /* bio_data_dir(bio) == WRITE */ 61 - if (!uptodate) { 63 + if (err) { 62 64 SetPageError(page); 63 65 if (page->mapping) 64 66 set_bit(AS_EIO, &page->mapping->flags); 65 67 } 66 68 end_page_writeback(page); 67 69 } 68 - } while (bvec >= bio->bi_io_vec); 70 + } 71 + 69 72 bio_put(bio); 70 73 } 71 74 ··· 93 94 94 95 if (bio) { 95 96 bio->bi_bdev = bdev; 96 - bio->bi_sector = first_sector; 97 + bio->bi_iter.bi_sector = first_sector; 97 98 } 98 99 return bio; 99 100 }
+18 -25
fs/nfs/blocklayout/blocklayout.c
··· 134 134 if (bio) { 135 135 get_parallel(bio->bi_private); 136 136 dprintk("%s submitting %s bio %u@%llu\n", __func__, 137 - rw == READ ? "read" : "write", 138 - bio->bi_size, (unsigned long long)bio->bi_sector); 137 + rw == READ ? "read" : "write", bio->bi_iter.bi_size, 138 + (unsigned long long)bio->bi_iter.bi_sector); 139 139 submit_bio(rw, bio); 140 140 } 141 141 return NULL; ··· 156 156 } 157 157 158 158 if (bio) { 159 - bio->bi_sector = isect - be->be_f_offset + be->be_v_offset; 159 + bio->bi_iter.bi_sector = isect - be->be_f_offset + 160 + be->be_v_offset; 160 161 bio->bi_bdev = be->be_mdev; 161 162 bio->bi_end_io = end_io; 162 163 bio->bi_private = par; ··· 202 201 static void bl_end_io_read(struct bio *bio, int err) 203 202 { 204 203 struct parallel_io *par = bio->bi_private; 205 - const int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags); 206 - struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1; 204 + struct bio_vec *bvec; 205 + int i; 207 206 208 - do { 209 - struct page *page = bvec->bv_page; 207 + if (!err) 208 + bio_for_each_segment_all(bvec, bio, i) 209 + SetPageUptodate(bvec->bv_page); 210 210 211 - if (--bvec >= bio->bi_io_vec) 212 - prefetchw(&bvec->bv_page->flags); 213 - if (uptodate) 214 - SetPageUptodate(page); 215 - } while (bvec >= bio->bi_io_vec); 216 - if (!uptodate) { 211 + if (err) { 217 212 struct nfs_read_data *rdata = par->data; 218 213 struct nfs_pgio_header *header = rdata->header; 219 214 ··· 380 383 static void bl_end_io_write_zero(struct bio *bio, int err) 381 384 { 382 385 struct parallel_io *par = bio->bi_private; 383 - const int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags); 384 - struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1; 386 + struct bio_vec *bvec; 387 + int i; 385 388 386 - do { 387 - struct page *page = bvec->bv_page; 388 - 389 - if (--bvec >= bio->bi_io_vec) 390 - prefetchw(&bvec->bv_page->flags); 389 + bio_for_each_segment_all(bvec, bio, i) { 391 390 /* This is the zeroing page we added */ 392 - end_page_writeback(page); 393 - page_cache_release(page); 394 - } while (bvec >= bio->bi_io_vec); 391 + end_page_writeback(bvec->bv_page); 392 + page_cache_release(bvec->bv_page); 393 + } 395 394 396 - if (unlikely(!uptodate)) { 395 + if (unlikely(err)) { 397 396 struct nfs_write_data *data = par->data; 398 397 struct nfs_pgio_header *header = data->header; 399 398 ··· 512 519 isect = (page->index << PAGE_CACHE_SECTOR_SHIFT) + 513 520 (offset / SECTOR_SIZE); 514 521 515 - bio->bi_sector = isect - be->be_f_offset + be->be_v_offset; 522 + bio->bi_iter.bi_sector = isect - be->be_f_offset + be->be_v_offset; 516 523 bio->bi_bdev = be->be_mdev; 517 524 bio->bi_end_io = bl_read_single_end_io; 518 525
+2 -1
fs/nilfs2/segbuf.c
··· 416 416 } 417 417 if (likely(bio)) { 418 418 bio->bi_bdev = nilfs->ns_bdev; 419 - bio->bi_sector = start << (nilfs->ns_blocksize_bits - 9); 419 + bio->bi_iter.bi_sector = 420 + start << (nilfs->ns_blocksize_bits - 9); 420 421 } 421 422 return bio; 422 423 }
+1 -1
fs/ocfs2/cluster/heartbeat.c
··· 413 413 } 414 414 415 415 /* Must put everything in 512 byte sectors for the bio... */ 416 - bio->bi_sector = (reg->hr_start_block + cs) << (bits - 9); 416 + bio->bi_iter.bi_sector = (reg->hr_start_block + cs) << (bits - 9); 417 417 bio->bi_bdev = reg->hr_bdev; 418 418 bio->bi_private = wc; 419 419 bio->bi_end_io = o2hb_bio_end_io;
+1 -1
fs/xfs/xfs_aops.c
··· 407 407 struct bio *bio = bio_alloc(GFP_NOIO, nvecs); 408 408 409 409 ASSERT(bio->bi_private == NULL); 410 - bio->bi_sector = bh->b_blocknr * (bh->b_size >> 9); 410 + bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9); 411 411 bio->bi_bdev = bh->b_bdev; 412 412 return bio; 413 413 }
+2 -2
fs/xfs/xfs_buf.c
··· 1240 1240 1241 1241 bio = bio_alloc(GFP_NOIO, nr_pages); 1242 1242 bio->bi_bdev = bp->b_target->bt_bdev; 1243 - bio->bi_sector = sector; 1243 + bio->bi_iter.bi_sector = sector; 1244 1244 bio->bi_end_io = xfs_buf_bio_end_io; 1245 1245 bio->bi_private = bp; 1246 1246 ··· 1262 1262 total_nr_pages--; 1263 1263 } 1264 1264 1265 - if (likely(bio->bi_size)) { 1265 + if (likely(bio->bi_iter.bi_size)) { 1266 1266 if (xfs_buf_is_vmapped(bp)) { 1267 1267 flush_kernel_vmap_range(bp->b_addr, 1268 1268 xfs_buf_vmap_len(bp));
+173 -112
include/linux/bio.h
··· 61 61 * various member access, note that bio_data should of course not be used 62 62 * on highmem page vectors 63 63 */ 64 - #define bio_iovec_idx(bio, idx) (&((bio)->bi_io_vec[(idx)])) 65 - #define bio_iovec(bio) bio_iovec_idx((bio), (bio)->bi_idx) 66 - #define bio_page(bio) bio_iovec((bio))->bv_page 67 - #define bio_offset(bio) bio_iovec((bio))->bv_offset 68 - #define bio_segments(bio) ((bio)->bi_vcnt - (bio)->bi_idx) 69 - #define bio_sectors(bio) ((bio)->bi_size >> 9) 70 - #define bio_end_sector(bio) ((bio)->bi_sector + bio_sectors((bio))) 64 + #define __bvec_iter_bvec(bvec, iter) (&(bvec)[(iter).bi_idx]) 65 + 66 + #define bvec_iter_page(bvec, iter) \ 67 + (__bvec_iter_bvec((bvec), (iter))->bv_page) 68 + 69 + #define bvec_iter_len(bvec, iter) \ 70 + min((iter).bi_size, \ 71 + __bvec_iter_bvec((bvec), (iter))->bv_len - (iter).bi_bvec_done) 72 + 73 + #define bvec_iter_offset(bvec, iter) \ 74 + (__bvec_iter_bvec((bvec), (iter))->bv_offset + (iter).bi_bvec_done) 75 + 76 + #define bvec_iter_bvec(bvec, iter) \ 77 + ((struct bio_vec) { \ 78 + .bv_page = bvec_iter_page((bvec), (iter)), \ 79 + .bv_len = bvec_iter_len((bvec), (iter)), \ 80 + .bv_offset = bvec_iter_offset((bvec), (iter)), \ 81 + }) 82 + 83 + #define bio_iter_iovec(bio, iter) \ 84 + bvec_iter_bvec((bio)->bi_io_vec, (iter)) 85 + 86 + #define bio_iter_page(bio, iter) \ 87 + bvec_iter_page((bio)->bi_io_vec, (iter)) 88 + #define bio_iter_len(bio, iter) \ 89 + bvec_iter_len((bio)->bi_io_vec, (iter)) 90 + #define bio_iter_offset(bio, iter) \ 91 + bvec_iter_offset((bio)->bi_io_vec, (iter)) 92 + 93 + #define bio_page(bio) bio_iter_page((bio), (bio)->bi_iter) 94 + #define bio_offset(bio) bio_iter_offset((bio), (bio)->bi_iter) 95 + #define bio_iovec(bio) bio_iter_iovec((bio), (bio)->bi_iter) 96 + 97 + #define bio_multiple_segments(bio) \ 98 + ((bio)->bi_iter.bi_size != bio_iovec(bio).bv_len) 99 + #define bio_sectors(bio) ((bio)->bi_iter.bi_size >> 9) 100 + #define bio_end_sector(bio) ((bio)->bi_iter.bi_sector + bio_sectors((bio))) 101 + 102 + /* 103 + * Check whether this bio carries any data or not. A NULL bio is allowed. 
104 + */ 105 + static inline bool bio_has_data(struct bio *bio) 106 + { 107 + if (bio && 108 + bio->bi_iter.bi_size && 109 + !(bio->bi_rw & REQ_DISCARD)) 110 + return true; 111 + 112 + return false; 113 + } 114 + 115 + static inline bool bio_is_rw(struct bio *bio) 116 + { 117 + if (!bio_has_data(bio)) 118 + return false; 119 + 120 + if (bio->bi_rw & BIO_NO_ADVANCE_ITER_MASK) 121 + return false; 122 + 123 + return true; 124 + } 125 + 126 + static inline bool bio_mergeable(struct bio *bio) 127 + { 128 + if (bio->bi_rw & REQ_NOMERGE_FLAGS) 129 + return false; 130 + 131 + return true; 132 + } 71 133 72 134 static inline unsigned int bio_cur_bytes(struct bio *bio) 73 135 { 74 - if (bio->bi_vcnt) 75 - return bio_iovec(bio)->bv_len; 136 + if (bio_has_data(bio)) 137 + return bio_iovec(bio).bv_len; 76 138 else /* dataless requests such as discard */ 77 - return bio->bi_size; 139 + return bio->bi_iter.bi_size; 78 140 } 79 141 80 142 static inline void *bio_data(struct bio *bio) 81 143 { 82 - if (bio->bi_vcnt) 144 + if (bio_has_data(bio)) 83 145 return page_address(bio_page(bio)) + bio_offset(bio); 84 146 85 147 return NULL; ··· 159 97 * permanent PIO fall back, user is probably better off disabling highmem 160 98 * I/O completely on that queue (see ide-dma for example) 161 99 */ 162 - #define __bio_kmap_atomic(bio, idx) \ 163 - (kmap_atomic(bio_iovec_idx((bio), (idx))->bv_page) + \ 164 - bio_iovec_idx((bio), (idx))->bv_offset) 100 + #define __bio_kmap_atomic(bio, iter) \ 101 + (kmap_atomic(bio_iter_iovec((bio), (iter)).bv_page) + \ 102 + bio_iter_iovec((bio), (iter)).bv_offset) 165 103 166 - #define __bio_kunmap_atomic(addr) kunmap_atomic(addr) 104 + #define __bio_kunmap_atomic(addr) kunmap_atomic(addr) 167 105 168 106 /* 169 107 * merge helpers etc 170 108 */ 171 - 172 - #define __BVEC_END(bio) bio_iovec_idx((bio), (bio)->bi_vcnt - 1) 173 - #define __BVEC_START(bio) bio_iovec_idx((bio), (bio)->bi_idx) 174 109 175 110 /* Default implementation of BIOVEC_PHYS_MERGEABLE */ 176 111 #define __BIOVEC_PHYS_MERGEABLE(vec1, vec2) \ ··· 185 126 (((addr1) | (mask)) == (((addr2) - 1) | (mask))) 186 127 #define BIOVEC_SEG_BOUNDARY(q, b1, b2) \ 187 128 __BIO_SEG_BOUNDARY(bvec_to_phys((b1)), bvec_to_phys((b2)) + (b2)->bv_len, queue_segment_boundary((q))) 188 - #define BIO_SEG_BOUNDARY(q, b1, b2) \ 189 - BIOVEC_SEG_BOUNDARY((q), __BVEC_END((b1)), __BVEC_START((b2))) 190 129 191 130 #define bio_io_error(bio) bio_endio((bio), -EIO) 192 - 193 - /* 194 - * drivers should not use the __ version unless they _really_ know what 195 - * they're doing 196 - */ 197 - #define __bio_for_each_segment(bvl, bio, i, start_idx) \ 198 - for (bvl = bio_iovec_idx((bio), (start_idx)), i = (start_idx); \ 199 - i < (bio)->bi_vcnt; \ 200 - bvl++, i++) 201 131 202 132 /* 203 133 * drivers should _never_ use the all version - the bio may have been split 204 134 * before it got to the driver and the driver won't own all of it 205 135 */ 206 136 #define bio_for_each_segment_all(bvl, bio, i) \ 207 - for (i = 0; \ 208 - bvl = bio_iovec_idx((bio), (i)), i < (bio)->bi_vcnt; \ 209 - i++) 137 + for (i = 0, bvl = (bio)->bi_io_vec; i < (bio)->bi_vcnt; i++, bvl++) 210 138 211 - #define bio_for_each_segment(bvl, bio, i) \ 212 - for (i = (bio)->bi_idx; \ 213 - bvl = bio_iovec_idx((bio), (i)), i < (bio)->bi_vcnt; \ 214 - i++) 139 + static inline void bvec_iter_advance(struct bio_vec *bv, struct bvec_iter *iter, 140 + unsigned bytes) 141 + { 142 + WARN_ONCE(bytes > iter->bi_size, 143 + "Attempted to advance past end of bvec iter\n"); 144 + 145 + while 
(bytes) { 146 + unsigned len = min(bytes, bvec_iter_len(bv, *iter)); 147 + 148 + bytes -= len; 149 + iter->bi_size -= len; 150 + iter->bi_bvec_done += len; 151 + 152 + if (iter->bi_bvec_done == __bvec_iter_bvec(bv, *iter)->bv_len) { 153 + iter->bi_bvec_done = 0; 154 + iter->bi_idx++; 155 + } 156 + } 157 + } 158 + 159 + #define for_each_bvec(bvl, bio_vec, iter, start) \ 160 + for ((iter) = start; \ 161 + (bvl) = bvec_iter_bvec((bio_vec), (iter)), \ 162 + (iter).bi_size; \ 163 + bvec_iter_advance((bio_vec), &(iter), (bvl).bv_len)) 164 + 165 + 166 + static inline void bio_advance_iter(struct bio *bio, struct bvec_iter *iter, 167 + unsigned bytes) 168 + { 169 + iter->bi_sector += bytes >> 9; 170 + 171 + if (bio->bi_rw & BIO_NO_ADVANCE_ITER_MASK) 172 + iter->bi_size -= bytes; 173 + else 174 + bvec_iter_advance(bio->bi_io_vec, iter, bytes); 175 + } 176 + 177 + #define __bio_for_each_segment(bvl, bio, iter, start) \ 178 + for (iter = (start); \ 179 + (iter).bi_size && \ 180 + ((bvl = bio_iter_iovec((bio), (iter))), 1); \ 181 + bio_advance_iter((bio), &(iter), (bvl).bv_len)) 182 + 183 + #define bio_for_each_segment(bvl, bio, iter) \ 184 + __bio_for_each_segment(bvl, bio, iter, (bio)->bi_iter) 185 + 186 + #define bio_iter_last(bvec, iter) ((iter).bi_size == (bvec).bv_len) 187 + 188 + static inline unsigned bio_segments(struct bio *bio) 189 + { 190 + unsigned segs = 0; 191 + struct bio_vec bv; 192 + struct bvec_iter iter; 193 + 194 + bio_for_each_segment(bv, bio, iter) 195 + segs++; 196 + 197 + return segs; 198 + } 215 199 216 200 /* 217 201 * get a reference to a bio, so it won't disappear. the intended use is ··· 279 177 struct bio_integrity_payload { 280 178 struct bio *bip_bio; /* parent bio */ 281 179 282 - sector_t bip_sector; /* virtual start sector */ 180 + struct bvec_iter bip_iter; 283 181 182 + /* kill - should just use bip_vec */ 284 183 void *bip_buf; /* generated integrity data */ 285 - bio_end_io_t *bip_end_io; /* saved I/O completion fn */ 286 184 287 - unsigned int bip_size; 185 + bio_end_io_t *bip_end_io; /* saved I/O completion fn */ 288 186 289 187 unsigned short bip_slab; /* slab the bip came from */ 290 188 unsigned short bip_vcnt; /* # of integrity bio_vecs */ 291 - unsigned short bip_idx; /* current bip_vec index */ 292 189 unsigned bip_owns_buf:1; /* should free bip_buf */ 293 190 294 191 struct work_struct bip_work; /* I/O completion */ ··· 297 196 }; 298 197 #endif /* CONFIG_BLK_DEV_INTEGRITY */ 299 198 300 - /* 301 - * A bio_pair is used when we need to split a bio. 
302 - * This can only happen for a bio that refers to just one 303 - * page of data, and in the unusual situation when the 304 - * page crosses a chunk/device boundary 305 - * 306 - * The address of the master bio is stored in bio1.bi_private 307 - * The address of the pool the pair was allocated from is stored 308 - * in bio2.bi_private 309 - */ 310 - struct bio_pair { 311 - struct bio bio1, bio2; 312 - struct bio_vec bv1, bv2; 313 - #if defined(CONFIG_BLK_DEV_INTEGRITY) 314 - struct bio_integrity_payload bip1, bip2; 315 - struct bio_vec iv1, iv2; 316 - #endif 317 - atomic_t cnt; 318 - int error; 319 - }; 320 - extern struct bio_pair *bio_split(struct bio *bi, int first_sectors); 321 - extern void bio_pair_release(struct bio_pair *dbio); 322 199 extern void bio_trim(struct bio *bio, int offset, int size); 200 + extern struct bio *bio_split(struct bio *bio, int sectors, 201 + gfp_t gfp, struct bio_set *bs); 202 + 203 + /** 204 + * bio_next_split - get next @sectors from a bio, splitting if necessary 205 + * @bio: bio to split 206 + * @sectors: number of sectors to split from the front of @bio 207 + * @gfp: gfp mask 208 + * @bs: bio set to allocate from 209 + * 210 + * Returns a bio representing the next @sectors of @bio - if the bio is smaller 211 + * than @sectors, returns the original bio unchanged. 212 + */ 213 + static inline struct bio *bio_next_split(struct bio *bio, int sectors, 214 + gfp_t gfp, struct bio_set *bs) 215 + { 216 + if (sectors >= bio_sectors(bio)) 217 + return bio; 218 + 219 + return bio_split(bio, sectors, gfp, bs); 220 + } 323 221 324 222 extern struct bio_set *bioset_create(unsigned int, unsigned int); 325 223 extern void bioset_free(struct bio_set *); ··· 327 227 extern struct bio *bio_alloc_bioset(gfp_t, int, struct bio_set *); 328 228 extern void bio_put(struct bio *); 329 229 330 - extern void __bio_clone(struct bio *, struct bio *); 230 + extern void __bio_clone_fast(struct bio *, struct bio *); 231 + extern struct bio *bio_clone_fast(struct bio *, gfp_t, struct bio_set *); 331 232 extern struct bio *bio_clone_bioset(struct bio *, gfp_t, struct bio_set *bs); 332 233 333 234 extern struct bio_set *fs_bio_set; ··· 355 254 } 356 255 357 256 extern void bio_endio(struct bio *, int); 257 + extern void bio_endio_nodec(struct bio *, int); 358 258 struct request_queue; 359 259 extern int bio_phys_segments(struct request_queue *, struct bio *); 360 260 ··· 364 262 365 263 extern void bio_init(struct bio *); 366 264 extern void bio_reset(struct bio *); 265 + void bio_chain(struct bio *, struct bio *); 367 266 368 267 extern int bio_add_page(struct bio *, struct page *, unsigned int,unsigned int); 369 268 extern int bio_add_pc_page(struct request_queue *, struct bio *, struct page *, 370 269 unsigned int, unsigned int); 371 270 extern int bio_get_nr_vecs(struct block_device *); 372 - extern sector_t bio_sector_offset(struct bio *, unsigned short, unsigned int); 373 271 extern struct bio *bio_map_user(struct request_queue *, struct block_device *, 374 272 unsigned long, unsigned int, int, gfp_t); 375 273 struct sg_iovec; ··· 459 357 } 460 358 #endif 461 359 462 - static inline char *__bio_kmap_irq(struct bio *bio, unsigned short idx, 360 + static inline char *__bio_kmap_irq(struct bio *bio, struct bvec_iter iter, 463 361 unsigned long *flags) 464 362 { 465 - return bvec_kmap_irq(bio_iovec_idx(bio, idx), flags); 363 + return bvec_kmap_irq(&bio_iter_iovec(bio, iter), flags); 466 364 } 467 365 #define __bio_kunmap_irq(buf, flags) bvec_kunmap_irq(buf, flags) 468 366 469 367 
#define bio_kmap_irq(bio, flags) \ 470 - __bio_kmap_irq((bio), (bio)->bi_idx, (flags)) 368 + __bio_kmap_irq((bio), (bio)->bi_iter, (flags)) 471 369 #define bio_kunmap_irq(buf,flags) __bio_kunmap_irq(buf, flags) 472 - 473 - /* 474 - * Check whether this bio carries any data or not. A NULL bio is allowed. 475 - */ 476 - static inline bool bio_has_data(struct bio *bio) 477 - { 478 - if (bio && bio->bi_vcnt) 479 - return true; 480 - 481 - return false; 482 - } 483 - 484 - static inline bool bio_is_rw(struct bio *bio) 485 - { 486 - if (!bio_has_data(bio)) 487 - return false; 488 - 489 - if (bio->bi_rw & REQ_WRITE_SAME) 490 - return false; 491 - 492 - return true; 493 - } 494 - 495 - static inline bool bio_mergeable(struct bio *bio) 496 - { 497 - if (bio->bi_rw & REQ_NOMERGE_FLAGS) 498 - return false; 499 - 500 - return true; 501 - } 502 370 503 371 /* 504 372 * BIO list management for use by remapping drivers (e.g. DM or MD) and loop. ··· 631 559 632 560 #if defined(CONFIG_BLK_DEV_INTEGRITY) 633 561 562 + 563 + 634 564 #define bip_vec_idx(bip, idx) (&(bip->bip_vec[(idx)])) 635 - #define bip_vec(bip) bip_vec_idx(bip, 0) 636 565 637 - #define __bip_for_each_vec(bvl, bip, i, start_idx) \ 638 - for (bvl = bip_vec_idx((bip), (start_idx)), i = (start_idx); \ 639 - i < (bip)->bip_vcnt; \ 640 - bvl++, i++) 641 - 642 - #define bip_for_each_vec(bvl, bip, i) \ 643 - __bip_for_each_vec(bvl, bip, i, (bip)->bip_idx) 566 + #define bip_for_each_vec(bvl, bip, iter) \ 567 + for_each_bvec(bvl, (bip)->bip_vec, iter, (bip)->bip_iter) 644 568 645 569 #define bio_for_each_integrity_vec(_bvl, _bio, _iter) \ 646 570 for_each_bio(_bio) \ ··· 654 586 extern void bio_integrity_endio(struct bio *, int); 655 587 extern void bio_integrity_advance(struct bio *, unsigned int); 656 588 extern void bio_integrity_trim(struct bio *, unsigned int, unsigned int); 657 - extern void bio_integrity_split(struct bio *, struct bio_pair *, int); 658 589 extern int bio_integrity_clone(struct bio *, struct bio *, gfp_t); 659 590 extern int bioset_integrity_create(struct bio_set *, int); 660 591 extern void bioset_integrity_free(struct bio_set *); ··· 695 628 gfp_t gfp_mask) 696 629 { 697 630 return 0; 698 - } 699 - 700 - static inline void bio_integrity_split(struct bio *bio, struct bio_pair *bp, 701 - int sectors) 702 - { 703 - return; 704 631 } 705 632 706 633 static inline void bio_integrity_advance(struct bio *bio,
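For a driver-side consumer the conversion looks roughly like the sketch below (my_handle_segment is a hypothetical callback): bio_for_each_segment() now takes a struct bvec_iter and yields each bio_vec by value, leaving bi_io_vec untouched, and iter.bi_sector tracks the device sector as the iterator advances.

static void my_walk_bio(struct bio *bio)
{
	struct bio_vec bvec;
	struct bvec_iter iter;

	bio_for_each_segment(bvec, bio, iter) {
		/* iter.bi_sector is where this segment starts on the device */
		my_handle_segment(bvec.bv_page, bvec.bv_offset, bvec.bv_len,
				  iter.bi_sector);
	}
}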
+6 -7
include/linux/blk-mq.h
··· 113 113 }; 114 114 115 115 struct request_queue *blk_mq_init_queue(struct blk_mq_reg *, void *); 116 - void blk_mq_free_queue(struct request_queue *); 117 116 int blk_mq_register_disk(struct gendisk *); 118 117 void blk_mq_unregister_disk(struct gendisk *); 119 118 void blk_mq_init_commands(struct request_queue *, void (*init)(void *data, struct blk_mq_hw_ctx *, struct request *, unsigned int), void *data); ··· 158 159 } 159 160 160 161 #define queue_for_each_hw_ctx(q, hctx, i) \ 161 - for ((i) = 0, hctx = (q)->queue_hw_ctx[0]; \ 162 - (i) < (q)->nr_hw_queues; (i)++, hctx = (q)->queue_hw_ctx[i]) 162 + for ((i) = 0; (i) < (q)->nr_hw_queues && \ 163 + ({ hctx = (q)->queue_hw_ctx[i]; 1; }); (i)++) 163 164 164 165 #define queue_for_each_ctx(q, ctx, i) \ 165 - for ((i) = 0, ctx = per_cpu_ptr((q)->queue_ctx, 0); \ 166 - (i) < (q)->nr_queues; (i)++, ctx = per_cpu_ptr(q->queue_ctx, (i))) 166 + for ((i) = 0; (i) < (q)->nr_queues && \ 167 + ({ ctx = per_cpu_ptr((q)->queue_ctx, (i)); 1; }); (i)++) 167 168 168 169 #define hctx_for_each_ctx(hctx, ctx, i) \ 169 - for ((i) = 0, ctx = (hctx)->ctxs[0]; \ 170 - (i) < (hctx)->nr_ctx; (i)++, ctx = (hctx)->ctxs[(i)]) 170 + for ((i) = 0; (i) < (hctx)->nr_ctx && \ 171 + ({ ctx = (hctx)->ctxs[(i)]; 1; }); (i)++) 171 172 172 173 #define blk_ctx_sum(q, sum) \ 173 174 ({ \
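The rewritten iteration macros test the loop bound before loading the element, so the final increment no longer dereferences one slot past queue_hw_ctx[] or ctxs[]. Callers are unchanged; a hypothetical debug helper for illustration:

static void my_dump_hw_queues(struct request_queue *q)
{
	struct blk_mq_hw_ctx *hctx;
	unsigned int i;

	queue_for_each_hw_ctx(q, hctx, i)
		pr_info("hw queue %u: %u sw ctxs\n", i, hctx->nr_ctx);
}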
+17 -7
include/linux/blk_types.h
··· 28 28 unsigned int bv_offset; 29 29 }; 30 30 31 + struct bvec_iter { 32 + sector_t bi_sector; /* device address in 512 byte 33 + sectors */ 34 + unsigned int bi_size; /* residual I/O count */ 35 + 36 + unsigned int bi_idx; /* current index into bvl_vec */ 37 + 38 + unsigned int bi_bvec_done; /* number of bytes completed in 39 + current bvec */ 40 + }; 41 + 31 42 /* 32 43 * main unit of I/O for the block layer and lower layers (ie drivers and 33 44 * stacking drivers) 34 45 */ 35 46 struct bio { 36 - sector_t bi_sector; /* device address in 512 byte 37 - sectors */ 38 47 struct bio *bi_next; /* request queue link */ 39 48 struct block_device *bi_bdev; 40 49 unsigned long bi_flags; /* status, command, etc */ ··· 51 42 * top bits priority 52 43 */ 53 44 54 - unsigned short bi_vcnt; /* how many bio_vec's */ 55 - unsigned short bi_idx; /* current index into bvl_vec */ 45 + struct bvec_iter bi_iter; 56 46 57 47 /* Number of segments in this BIO after 58 48 * physical address coalescing is performed. 59 49 */ 60 50 unsigned int bi_phys_segments; 61 - 62 - unsigned int bi_size; /* residual I/O count */ 63 51 64 52 /* 65 53 * To keep track of the max segment size, we account for the ··· 64 58 */ 65 59 unsigned int bi_seg_front_size; 66 60 unsigned int bi_seg_back_size; 61 + 62 + atomic_t bi_remaining; 67 63 68 64 bio_end_io_t *bi_end_io; 69 65 ··· 82 74 struct bio_integrity_payload *bi_integrity; /* data integrity */ 83 75 #endif 84 76 77 + unsigned short bi_vcnt; /* how many bio_vec's */ 78 + 85 79 /* 86 80 * Everything starting with bi_max_vecs will be preserved by bio_reset() 87 81 */ 88 82 89 - unsigned int bi_max_vecs; /* max bvl_vecs we can hold */ 83 + unsigned short bi_max_vecs; /* max bvl_vecs we can hold */ 90 84 91 85 atomic_t bi_cnt; /* pin count */ 92 86
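Most of the filesystem hunks in this pull are the mechanical consequence of this struct change: bi_sector, bi_size and bi_idx now live inside bio->bi_iter. A minimal before/after sketch of a single-page submission (my_submit_page and my_read_end_io are hypothetical):

static void my_submit_page(struct block_device *bdev, sector_t sector,
			   struct page *page, int rw)
{
	struct bio *bio = bio_alloc(GFP_NOIO, 1);

	bio->bi_bdev = bdev;
	bio->bi_iter.bi_sector = sector;	/* was bio->bi_sector */
	bio_add_page(bio, page, PAGE_SIZE, 0);	/* grows bi_iter.bi_size */
	bio->bi_end_io = my_read_end_io;	/* hypothetical completion */
	submit_bio(rw, bio);
}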
+5 -4
include/linux/blkdev.h
··· 735 735 }; 736 736 737 737 struct req_iterator { 738 - int i; 738 + struct bvec_iter iter; 739 739 struct bio *bio; 740 740 }; 741 741 ··· 748 748 749 749 #define rq_for_each_segment(bvl, _rq, _iter) \ 750 750 __rq_for_each_bio(_iter.bio, _rq) \ 751 - bio_for_each_segment(bvl, _iter.bio, _iter.i) 751 + bio_for_each_segment(bvl, _iter.bio, _iter.iter) 752 752 753 - #define rq_iter_last(rq, _iter) \ 754 - (_iter.bio->bi_next == NULL && _iter.i == _iter.bio->bi_vcnt-1) 753 + #define rq_iter_last(bvec, _iter) \ 754 + (_iter.bio->bi_next == NULL && \ 755 + bio_iter_last(bvec, _iter.iter)) 755 756 756 757 #ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 757 758 # error "You should define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE for your platform"
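rq_for_each_segment() follows the same model: struct req_iterator now carries a bvec_iter, the loop variable is a bio_vec by value, and rq_iter_last() takes that bio_vec. A hypothetical byte counter as a usage sketch:

static unsigned int my_count_request_bytes(struct request *rq)
{
	struct req_iterator iter;
	struct bio_vec bvec;
	unsigned int bytes = 0;

	rq_for_each_segment(bvec, rq, iter)
		bytes += bvec.bv_len;

	return bytes;
}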
+2 -2
include/linux/ceph/messenger.h
··· 1 1 #ifndef __FS_CEPH_MESSENGER_H 2 2 #define __FS_CEPH_MESSENGER_H 3 3 4 + #include <linux/blk_types.h> 4 5 #include <linux/kref.h> 5 6 #include <linux/mutex.h> 6 7 #include <linux/net.h> ··· 120 119 #ifdef CONFIG_BLOCK 121 120 struct { /* bio */ 122 121 struct bio *bio; /* bio from list */ 123 - unsigned int vector_index; /* vector from bio */ 124 - unsigned int vector_offset; /* bytes from vector */ 122 + struct bvec_iter bvec_iter; 125 123 }; 126 124 #endif /* CONFIG_BLOCK */ 127 125 struct { /* pages */
+4 -4
include/linux/cmdline-parser.h
··· 37 37 struct cmdline_parts *cmdline_parts_find(struct cmdline_parts *parts, 38 38 const char *bdev); 39 39 40 - void cmdline_parts_set(struct cmdline_parts *parts, sector_t disk_size, 41 - int slot, 42 - int (*add_part)(int, struct cmdline_subpart *, void *), 43 - void *param); 40 + int cmdline_parts_set(struct cmdline_parts *parts, sector_t disk_size, 41 + int slot, 42 + int (*add_part)(int, struct cmdline_subpart *, void *), 43 + void *param); 44 44 45 45 #endif /* CMDLINEPARSEH */
+2 -2
include/linux/dm-io.h
··· 29 29 30 30 enum dm_io_mem_type { 31 31 DM_IO_PAGE_LIST,/* Page list */ 32 - DM_IO_BVEC, /* Bio vector */ 32 + DM_IO_BIO, /* Bio vector */ 33 33 DM_IO_VMA, /* Virtual memory area */ 34 34 DM_IO_KMEM, /* Kernel memory */ 35 35 }; ··· 41 41 42 42 union { 43 43 struct page_list *pl; 44 - struct bio_vec *bvec; 44 + struct bio *bio; 45 45 void *vma; 46 46 void *addr; 47 47 } ptr;
+13 -13
include/trace/events/bcache.h
··· 24 24 __entry->dev = bio->bi_bdev->bd_dev; 25 25 __entry->orig_major = d->disk->major; 26 26 __entry->orig_minor = d->disk->first_minor; 27 - __entry->sector = bio->bi_sector; 28 - __entry->orig_sector = bio->bi_sector - 16; 29 - __entry->nr_sector = bio->bi_size >> 9; 30 - blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_size); 27 + __entry->sector = bio->bi_iter.bi_sector; 28 + __entry->orig_sector = bio->bi_iter.bi_sector - 16; 29 + __entry->nr_sector = bio->bi_iter.bi_size >> 9; 30 + blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size); 31 31 ), 32 32 33 33 TP_printk("%d,%d %s %llu + %u (from %d,%d @ %llu)", ··· 99 99 100 100 TP_fast_assign( 101 101 __entry->dev = bio->bi_bdev->bd_dev; 102 - __entry->sector = bio->bi_sector; 103 - __entry->nr_sector = bio->bi_size >> 9; 104 - blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_size); 102 + __entry->sector = bio->bi_iter.bi_sector; 103 + __entry->nr_sector = bio->bi_iter.bi_size >> 9; 104 + blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size); 105 105 ), 106 106 107 107 TP_printk("%d,%d %s %llu + %u", ··· 134 134 135 135 TP_fast_assign( 136 136 __entry->dev = bio->bi_bdev->bd_dev; 137 - __entry->sector = bio->bi_sector; 138 - __entry->nr_sector = bio->bi_size >> 9; 139 - blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_size); 137 + __entry->sector = bio->bi_iter.bi_sector; 138 + __entry->nr_sector = bio->bi_iter.bi_size >> 9; 139 + blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size); 140 140 __entry->cache_hit = hit; 141 141 __entry->bypass = bypass; 142 142 ), ··· 162 162 163 163 TP_fast_assign( 164 164 __entry->dev = bio->bi_bdev->bd_dev; 165 - __entry->sector = bio->bi_sector; 166 - __entry->nr_sector = bio->bi_size >> 9; 167 - blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_size); 165 + __entry->sector = bio->bi_iter.bi_sector; 166 + __entry->nr_sector = bio->bi_iter.bi_size >> 9; 167 + blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size); 168 168 __entry->writeback = writeback; 169 169 __entry->bypass = bypass; 170 170 ),
+13 -13
include/trace/events/block.h
··· 243 243 TP_fast_assign( 244 244 __entry->dev = bio->bi_bdev ? 245 245 bio->bi_bdev->bd_dev : 0; 246 - __entry->sector = bio->bi_sector; 246 + __entry->sector = bio->bi_iter.bi_sector; 247 247 __entry->nr_sector = bio_sectors(bio); 248 - blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_size); 248 + blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size); 249 249 memcpy(__entry->comm, current->comm, TASK_COMM_LEN); 250 250 ), 251 251 ··· 280 280 281 281 TP_fast_assign( 282 282 __entry->dev = bio->bi_bdev->bd_dev; 283 - __entry->sector = bio->bi_sector; 283 + __entry->sector = bio->bi_iter.bi_sector; 284 284 __entry->nr_sector = bio_sectors(bio); 285 285 __entry->error = error; 286 - blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_size); 286 + blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size); 287 287 ), 288 288 289 289 TP_printk("%d,%d %s %llu + %u [%d]", ··· 308 308 309 309 TP_fast_assign( 310 310 __entry->dev = bio->bi_bdev->bd_dev; 311 - __entry->sector = bio->bi_sector; 311 + __entry->sector = bio->bi_iter.bi_sector; 312 312 __entry->nr_sector = bio_sectors(bio); 313 - blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_size); 313 + blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size); 314 314 memcpy(__entry->comm, current->comm, TASK_COMM_LEN); 315 315 ), 316 316 ··· 375 375 376 376 TP_fast_assign( 377 377 __entry->dev = bio->bi_bdev->bd_dev; 378 - __entry->sector = bio->bi_sector; 378 + __entry->sector = bio->bi_iter.bi_sector; 379 379 __entry->nr_sector = bio_sectors(bio); 380 - blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_size); 380 + blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size); 381 381 memcpy(__entry->comm, current->comm, TASK_COMM_LEN); 382 382 ), 383 383 ··· 403 403 404 404 TP_fast_assign( 405 405 __entry->dev = bio ? bio->bi_bdev->bd_dev : 0; 406 - __entry->sector = bio ? bio->bi_sector : 0; 406 + __entry->sector = bio ? bio->bi_iter.bi_sector : 0; 407 407 __entry->nr_sector = bio ? bio_sectors(bio) : 0; 408 408 blk_fill_rwbs(__entry->rwbs, 409 409 bio ? bio->bi_rw : 0, __entry->nr_sector); ··· 538 538 539 539 TP_fast_assign( 540 540 __entry->dev = bio->bi_bdev->bd_dev; 541 - __entry->sector = bio->bi_sector; 541 + __entry->sector = bio->bi_iter.bi_sector; 542 542 __entry->new_sector = new_sector; 543 - blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_size); 543 + blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size); 544 544 memcpy(__entry->comm, current->comm, TASK_COMM_LEN); 545 545 ), 546 546 ··· 579 579 580 580 TP_fast_assign( 581 581 __entry->dev = bio->bi_bdev->bd_dev; 582 - __entry->sector = bio->bi_sector; 582 + __entry->sector = bio->bi_iter.bi_sector; 583 583 __entry->nr_sector = bio_sectors(bio); 584 584 __entry->old_dev = dev; 585 585 __entry->old_sector = from; 586 - blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_size); 586 + blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size); 587 587 ), 588 588 589 589 TP_printk("%d,%d %s %llu + %u <- (%d,%d) %llu",
+2 -2
include/trace/events/f2fs.h
··· 629 629 __entry->dev = sb->s_dev; 630 630 __entry->rw = rw; 631 631 __entry->type = type; 632 - __entry->sector = bio->bi_sector; 633 - __entry->size = bio->bi_size; 632 + __entry->sector = bio->bi_iter.bi_sector; 633 + __entry->size = bio->bi_iter.bi_size; 634 634 ), 635 635 636 636 TP_printk("dev = (%d,%d), %s%s, %s, sector = %lld, size = %u",
+1 -1
kernel/power/block_io.c
··· 32 32 struct bio *bio; 33 33 34 34 bio = bio_alloc(__GFP_WAIT | __GFP_HIGH, 1); 35 - bio->bi_sector = sector; 35 + bio->bi_iter.bi_sector = sector; 36 36 bio->bi_bdev = bdev; 37 37 bio->bi_end_io = end_swap_bio_read; 38 38
+8 -7
kernel/trace/blktrace.c
··· 781 781 if (!error && !bio_flagged(bio, BIO_UPTODATE)) 782 782 error = EIO; 783 783 784 - __blk_add_trace(bt, bio->bi_sector, bio->bi_size, bio->bi_rw, what, 785 - error, 0, NULL); 784 + __blk_add_trace(bt, bio->bi_iter.bi_sector, bio->bi_iter.bi_size, 785 + bio->bi_rw, what, error, 0, NULL); 786 786 } 787 787 788 788 static void blk_add_trace_bio_bounce(void *ignore, ··· 885 885 if (bt) { 886 886 __be64 rpdu = cpu_to_be64(pdu); 887 887 888 - __blk_add_trace(bt, bio->bi_sector, bio->bi_size, bio->bi_rw, 889 - BLK_TA_SPLIT, !bio_flagged(bio, BIO_UPTODATE), 888 + __blk_add_trace(bt, bio->bi_iter.bi_sector, 889 + bio->bi_iter.bi_size, bio->bi_rw, BLK_TA_SPLIT, 890 + !bio_flagged(bio, BIO_UPTODATE), 890 891 sizeof(rpdu), &rpdu); 891 892 } 892 893 } ··· 919 918 r.device_to = cpu_to_be32(bio->bi_bdev->bd_dev); 920 919 r.sector_from = cpu_to_be64(from); 921 920 922 - __blk_add_trace(bt, bio->bi_sector, bio->bi_size, bio->bi_rw, 923 - BLK_TA_REMAP, !bio_flagged(bio, BIO_UPTODATE), 924 - sizeof(r), &r); 921 + __blk_add_trace(bt, bio->bi_iter.bi_sector, bio->bi_iter.bi_size, 922 + bio->bi_rw, BLK_TA_REMAP, 923 + !bio_flagged(bio, BIO_UPTODATE), sizeof(r), &r); 925 924 } 926 925 927 926 /**
+19 -21
mm/bounce.c
··· 98 98 static void copy_to_high_bio_irq(struct bio *to, struct bio *from) 99 99 { 100 100 unsigned char *vfrom; 101 - struct bio_vec *tovec, *fromvec; 102 - int i; 101 + struct bio_vec tovec, *fromvec = from->bi_io_vec; 102 + struct bvec_iter iter; 103 103 104 - bio_for_each_segment(tovec, to, i) { 105 - fromvec = from->bi_io_vec + i; 104 + bio_for_each_segment(tovec, to, iter) { 105 + if (tovec.bv_page != fromvec->bv_page) { 106 + /* 107 + * fromvec->bv_offset and fromvec->bv_len might have 108 + * been modified by the block layer, so use the original 109 + * copy, bounce_copy_vec already uses tovec->bv_len 110 + */ 111 + vfrom = page_address(fromvec->bv_page) + 112 + tovec.bv_offset; 106 113 107 - /* 108 - * not bounced 109 - */ 110 - if (tovec->bv_page == fromvec->bv_page) 111 - continue; 114 + bounce_copy_vec(&tovec, vfrom); 115 + flush_dcache_page(tovec.bv_page); 116 + } 112 117 113 - /* 114 - * fromvec->bv_offset and fromvec->bv_len might have been 115 - * modified by the block layer, so use the original copy, 116 - * bounce_copy_vec already uses tovec->bv_len 117 - */ 118 - vfrom = page_address(fromvec->bv_page) + tovec->bv_offset; 119 - 120 - bounce_copy_vec(tovec, vfrom); 121 - flush_dcache_page(tovec->bv_page); 118 + fromvec++; 122 119 } 123 120 } 124 121 ··· 198 201 { 199 202 struct bio *bio; 200 203 int rw = bio_data_dir(*bio_orig); 201 - struct bio_vec *to, *from; 204 + struct bio_vec *to, from; 205 + struct bvec_iter iter; 202 206 unsigned i; 203 207 204 208 if (force) 205 209 goto bounce; 206 - bio_for_each_segment(from, *bio_orig, i) 207 - if (page_to_pfn(from->bv_page) > queue_bounce_pfn(q)) 210 + bio_for_each_segment(from, *bio_orig, iter) 211 + if (page_to_pfn(from.bv_page) > queue_bounce_pfn(q)) 208 212 goto bounce; 209 213 210 214 return;
+5 -5
mm/page_io.c
··· 31 31 32 32 bio = bio_alloc(gfp_flags, 1); 33 33 if (bio) { 34 - bio->bi_sector = map_swap_page(page, &bio->bi_bdev); 35 - bio->bi_sector <<= PAGE_SHIFT - 9; 34 + bio->bi_iter.bi_sector = map_swap_page(page, &bio->bi_bdev); 35 + bio->bi_iter.bi_sector <<= PAGE_SHIFT - 9; 36 36 bio->bi_io_vec[0].bv_page = page; 37 37 bio->bi_io_vec[0].bv_len = PAGE_SIZE; 38 38 bio->bi_io_vec[0].bv_offset = 0; 39 39 bio->bi_vcnt = 1; 40 - bio->bi_size = PAGE_SIZE; 40 + bio->bi_iter.bi_size = PAGE_SIZE; 41 41 bio->bi_end_io = end_io; 42 42 } 43 43 return bio; ··· 62 62 printk(KERN_ALERT "Write-error on swap-device (%u:%u:%Lu)\n", 63 63 imajor(bio->bi_bdev->bd_inode), 64 64 iminor(bio->bi_bdev->bd_inode), 65 - (unsigned long long)bio->bi_sector); 65 + (unsigned long long)bio->bi_iter.bi_sector); 66 66 ClearPageReclaim(page); 67 67 } 68 68 end_page_writeback(page); ··· 80 80 printk(KERN_ALERT "Read-error on swap-device (%u:%u:%Lu)\n", 81 81 imajor(bio->bi_bdev->bd_inode), 82 82 iminor(bio->bi_bdev->bd_inode), 83 - (unsigned long long)bio->bi_sector); 83 + (unsigned long long)bio->bi_iter.bi_sector); 84 84 goto out; 85 85 } 86 86
+17 -26
net/ceph/messenger.c
··· 778 778 779 779 bio = data->bio; 780 780 BUG_ON(!bio); 781 - BUG_ON(!bio->bi_vcnt); 782 781 783 782 cursor->resid = min(length, data->bio_length); 784 783 cursor->bio = bio; 785 - cursor->vector_index = 0; 786 - cursor->vector_offset = 0; 787 - cursor->last_piece = length <= bio->bi_io_vec[0].bv_len; 784 + cursor->bvec_iter = bio->bi_iter; 785 + cursor->last_piece = 786 + cursor->resid <= bio_iter_len(bio, cursor->bvec_iter); 788 787 } 789 788 790 789 static struct page *ceph_msg_data_bio_next(struct ceph_msg_data_cursor *cursor, ··· 792 793 { 793 794 struct ceph_msg_data *data = cursor->data; 794 795 struct bio *bio; 795 - struct bio_vec *bio_vec; 796 - unsigned int index; 796 + struct bio_vec bio_vec; 797 797 798 798 BUG_ON(data->type != CEPH_MSG_DATA_BIO); 799 799 800 800 bio = cursor->bio; 801 801 BUG_ON(!bio); 802 802 803 - index = cursor->vector_index; 804 - BUG_ON(index >= (unsigned int) bio->bi_vcnt); 803 + bio_vec = bio_iter_iovec(bio, cursor->bvec_iter); 805 804 806 - bio_vec = &bio->bi_io_vec[index]; 807 - BUG_ON(cursor->vector_offset >= bio_vec->bv_len); 808 - *page_offset = (size_t) (bio_vec->bv_offset + cursor->vector_offset); 805 + *page_offset = (size_t) bio_vec.bv_offset; 809 806 BUG_ON(*page_offset >= PAGE_SIZE); 810 807 if (cursor->last_piece) /* pagelist offset is always 0 */ 811 808 *length = cursor->resid; 812 809 else 813 - *length = (size_t) (bio_vec->bv_len - cursor->vector_offset); 810 + *length = (size_t) bio_vec.bv_len; 814 811 BUG_ON(*length > cursor->resid); 815 812 BUG_ON(*page_offset + *length > PAGE_SIZE); 816 813 817 - return bio_vec->bv_page; 814 + return bio_vec.bv_page; 818 815 } 819 816 820 817 static bool ceph_msg_data_bio_advance(struct ceph_msg_data_cursor *cursor, 821 818 size_t bytes) 822 819 { 823 820 struct bio *bio; 824 - struct bio_vec *bio_vec; 825 - unsigned int index; 821 + struct bio_vec bio_vec; 826 822 827 823 BUG_ON(cursor->data->type != CEPH_MSG_DATA_BIO); 828 824 829 825 bio = cursor->bio; 830 826 BUG_ON(!bio); 831 827 832 - index = cursor->vector_index; 833 - BUG_ON(index >= (unsigned int) bio->bi_vcnt); 834 - bio_vec = &bio->bi_io_vec[index]; 828 + bio_vec = bio_iter_iovec(bio, cursor->bvec_iter); 835 829 836 830 /* Advance the cursor offset */ 837 831 838 832 BUG_ON(cursor->resid < bytes); 839 833 cursor->resid -= bytes; 840 - cursor->vector_offset += bytes; 841 - if (cursor->vector_offset < bio_vec->bv_len) 834 + 835 + bio_advance_iter(bio, &cursor->bvec_iter, bytes); 836 + 837 + if (bytes < bio_vec.bv_len) 842 838 return false; /* more bytes to process in this segment */ 843 - BUG_ON(cursor->vector_offset != bio_vec->bv_len); 844 839 845 840 /* Move on to the next segment, and possibly the next bio */ 846 841 847 - if (++index == (unsigned int) bio->bi_vcnt) { 842 + if (!cursor->bvec_iter.bi_size) { 848 843 bio = bio->bi_next; 849 - index = 0; 844 + cursor->bvec_iter = bio->bi_iter; 850 845 } 851 846 cursor->bio = bio; 852 - cursor->vector_index = index; 853 - cursor->vector_offset = 0; 854 847 855 848 if (!cursor->last_piece) { 856 849 BUG_ON(!cursor->resid); 857 850 BUG_ON(!bio); 858 851 /* A short read is OK, so use <= rather than == */ 859 - if (cursor->resid <= bio->bi_io_vec[index].bv_len) 852 + if (cursor->resid <= bio_iter_len(bio, cursor->bvec_iter)) 860 853 cursor->last_piece = true; 861 854 } 862 855
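The ceph cursor above shows the general pattern for consumers that walk a bio at their own pace: copy bi_iter once and advance the private copy with bio_advance_iter(), never touching the bio itself. A minimal sketch (my_consume and my_copy_from_page are hypothetical):

static void my_consume(struct bio *bio, unsigned int bytes)
{
	struct bvec_iter iter = bio->bi_iter;	/* private cursor, bio untouched */

	while (bytes) {
		struct bio_vec bv = bio_iter_iovec(bio, iter);
		unsigned int len = min(bytes, bv.bv_len);

		my_copy_from_page(bv.bv_page, bv.bv_offset, len);
		bio_advance_iter(bio, &iter, len);
		bytes -= len;
	}
}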