Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'for-6.19/block-20251201' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux

Pull block updates from Jens Axboe:

- Fix head insertion for mq-deadline, a regression from when priority
support was added

- Series simplifying and improving the ublk user copy code

- Various ublk related cleanups

- Fixup REQ_NOWAIT handling in loop/zloop, clearing NOWAIT when the
request is punted to a thread for handling

- Merge and then later revert loop dio nowait support, as it ended up
causing excessive stack usage when the inline issue code needs to
dip back into the full file system code

- Improve auto integrity code, making it less deadlock prone

- Speed up polled IO handling by manually managing the hctx lookups

- Fixes for blk-throttle for SSD devices

- Small series with fixes for the S390 dasd driver

- Add support for caching zone conditions, avoiding unnecessary zone
report queries

- MD pull requests via Yu:
- fix null-ptr-dereference regression for dm-raid0
- fix IO hang for raid5 when array is broken with IO inflight
- remove legacy 1s delay to speed up system shutdown
- change maintainer's email address
- data can be lost if an array is created from devices with different
logical block sizes; fix this and record the array's logical block size
in metadata
- fix rcu protection for md_thread
- fix mddev kobject lifetime regression
- enable atomic writes for md-linear
- some cleanups

- bcache updates via Coly
- remove useless discard and cache device code
- improve usage of per-cpu workqueues

- Reorganize the IO scheduler switching code, fixing some lockdep
reports as well

- Improve the block layer P2P DMA support

- Add support to the block tracing code for zoned devices

- Segment calculation improvements, and memory alignment flexibility
improvements

- Set of prep and cleanup patches for ublk batching support. The
actual batching hasn't been added yet, but this helps shrink the
workload of getting that patchset ready for 6.20

- Fix for how the ps3 block driver handles segment offsets

- Improve how block plugging handles batch tag allocations

- nbd fixes for a use-after-free of the configuration on device clear/put

- Set of improvements and fixes for zloop

- Add Damien as maintainer of the block layer zoned device handling code

- Various other fixes and cleanups

* tag 'for-6.19/block-20251201' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux: (162 commits)
block/rnbd: correct all kernel-doc complaints
blk-mq: use queue_hctx in blk_mq_map_queue_type
md: remove legacy 1s delay in md_notify_reboot
md/raid5: fix IO hang when array is broken with IO inflight
md: warn about updating super block failure
md/raid0: fix NULL pointer dereference in create_strip_zones() for dm-raid
sbitmap: fix all kernel-doc warnings
ublk: add helper of __ublk_fetch()
ublk: pass const pointer to ublk_queue_is_zoned()
ublk: refactor auto buffer register in ublk_dispatch_req()
ublk: add `union ublk_io_buf` with improved naming
ublk: add parameter `struct io_uring_cmd *` to ublk_prep_auto_buf_reg()
kfifo: add kfifo_alloc_node() helper for NUMA awareness
blk-mq: fix potential uaf for 'queue_hw_ctx'
blk-mq: use array manage hctx map instead of xarray
ublk: prevent invalid access with DEBUG
s390/dasd: Use scnprintf() instead of sprintf()
s390/dasd: Move device name formatting into separate function
s390/dasd: Remove unnecessary debugfs_create() return checks
s390/dasd: Fix gendisk parent after copy pair swap
...

+3050 -1588
-7
Documentation/ABI/testing/sysfs-block-bcache
··· 106 106 will be discarded from the cache. Should not be turned off with 107 107 writeback caching enabled. 108 108 109 - What: /sys/block/<disk>/bcache/discard 110 - Date: November 2010 111 - Contact: Kent Overstreet <kent.overstreet@gmail.com> 112 - Description: 113 - For a cache, a boolean allowing discard/TRIM to be turned off 114 - or back on if the device supports it. 115 - 116 109 What: /sys/block/<disk>/bcache/bucket_size 117 110 Date: November 2010 118 111 Contact: Kent Overstreet <kent.overstreet@gmail.com>
+2 -11
Documentation/admin-guide/bcache.rst
··· 17 17 It's designed around the performance characteristics of SSDs - it only allocates 18 18 in erase block sized buckets, and it uses a hybrid btree/log to track cached 19 19 extents (which can be anywhere from a single sector to the bucket size). It's 20 - designed to avoid random writes at all costs; it fills up an erase block 21 - sequentially, then issues a discard before reusing it. 20 + designed to avoid random writes at all costs. 22 21 23 22 Both writethrough and writeback caching are supported. Writeback defaults to 24 23 off, but can be switched on and off arbitrarily at runtime. Bcache goes to ··· 617 618 cache_replacement_policy 618 619 One of either lru, fifo or random. 619 620 620 - discard 621 - Boolean; if on a discard/TRIM will be issued to each bucket before it is 622 - reused. Defaults to off, since SATA TRIM is an unqueued command (and thus 623 - slow). 624 - 625 621 freelist_percent 626 622 Size of the freelist as a percentage of nbuckets. Can be written to to 627 623 increase the number of buckets kept on the freelist, which lets you 628 624 artificially reduce the size of the cache at runtime. Mostly for testing 629 - purposes (i.e. testing how different size caches affect your hit rate), but 630 - since buckets are discarded when they move on to the freelist will also make 631 - the SSD's garbage collection easier by effectively giving it more reserved 632 - space. 625 + purposes (i.e. testing how different size caches affect your hit rate). 633 626 634 627 io_errors 635 628 Number of errors that have occurred, decayed by io_error_halflife.
+37 -24
Documentation/admin-guide/blockdev/zoned_loop.rst
··· 68 68 In more details, the options that can be used with the "add" command are as 69 69 follows. 70 70 71 - ================ =========================================================== 72 - id Device number (the X in /dev/zloopX). 73 - Default: automatically assigned. 74 - capacity_mb Device total capacity in MiB. This is always rounded up to 75 - the nearest higher multiple of the zone size. 76 - Default: 16384 MiB (16 GiB). 77 - zone_size_mb Device zone size in MiB. Default: 256 MiB. 78 - zone_capacity_mb Device zone capacity (must always be equal to or lower than 79 - the zone size. Default: zone size. 80 - conv_zones Total number of conventioanl zones starting from sector 0. 81 - Default: 8. 82 - base_dir Path to the base directory where to create the directory 83 - containing the zone files of the device. 84 - Default=/var/local/zloop. 85 - The device directory containing the zone files is always 86 - named with the device ID. E.g. the default zone file 87 - directory for /dev/zloop0 is /var/local/zloop/0. 88 - nr_queues Number of I/O queues of the zoned block device. This value is 89 - always capped by the number of online CPUs 90 - Default: 1 91 - queue_depth Maximum I/O queue depth per I/O queue. 92 - Default: 64 93 - buffered_io Do buffered IOs instead of direct IOs (default: false) 94 - ================ =========================================================== 71 + =================== ========================================================= 72 + id Device number (the X in /dev/zloopX). 73 + Default: automatically assigned. 74 + capacity_mb Device total capacity in MiB. This is always rounded up 75 + to the nearest higher multiple of the zone size. 76 + Default: 16384 MiB (16 GiB). 77 + zone_size_mb Device zone size in MiB. Default: 256 MiB. 78 + zone_capacity_mb Device zone capacity (must always be equal to or lower 79 + than the zone size. Default: zone size. 80 + conv_zones Total number of conventioanl zones starting from 81 + sector 0 82 + Default: 8 83 + base_dir Path to the base directory where to create the directory 84 + containing the zone files of the device. 85 + Default=/var/local/zloop. 86 + The device directory containing the zone files is always 87 + named with the device ID. E.g. the default zone file 88 + directory for /dev/zloop0 is /var/local/zloop/0. 89 + nr_queues Number of I/O queues of the zoned block device. This 90 + value is always capped by the number of online CPUs 91 + Default: 1 92 + queue_depth Maximum I/O queue depth per I/O queue. 93 + Default: 64 94 + buffered_io Do buffered IOs instead of direct IOs (default: false) 95 + zone_append Enable or disable a zloop device native zone append 96 + support. 97 + Default: 1 (enabled). 98 + If native zone append support is disabled, the block layer 99 + will emulate this operation using regular write 100 + operations. 101 + ordered_zone_append Enable zloop mitigation of zone append reordering. 102 + Default: disabled. 103 + This is useful for testing file systems file data mapping 104 + (extents), as when enabled, this can significantly reduce 105 + the number of data extents needed to for a file data 106 + mapping. 107 + =================== ========================================================= 95 108 96 109 3) Deleting a Zoned Device 97 110 --------------------------
+10
Documentation/admin-guide/md.rst
··· 238 238 the number of devices in a raid4/5/6, or to support external 239 239 metadata formats which mandate such clipping. 240 240 241 + logical_block_size 242 + Configure the array's logical block size in bytes. This attribute 243 + is only supported for 1.x meta. Write the value before starting 244 + array. The final array LBS uses the maximum between this 245 + configuration and LBS of all combined devices. Note that 246 + LBS cannot exceed PAGE_SIZE before RAID supports folio. 247 + WARNING: Arrays created on new kernel cannot be assembled at old 248 + kernel due to padding check, Set module parameter 'check_new_feature' 249 + to false to bypass, but data loss may occur. 250 + 241 251 reshape_position 242 252 This is either ``none`` or a sector number within the devices of 243 253 the array where ``reshape`` is up to. If this is set, the three
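The logical_block_size attribute documented in the md.rst hunk above is written before the array is started. A minimal user-space sketch of that step follows; the device name (md0), the exact attribute path and the value 4096 are illustrative assumptions, not taken from the patch.

/*
 * Minimal sketch: configure the md array logical block size via the new
 * sysfs attribute before starting the array. Path, device name and value
 * are assumptions for illustration only.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *attr = "/sys/block/md0/md/logical_block_size";
	const char *val = "4096";
	int fd = open(attr, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, val, strlen(val)) != (ssize_t)strlen(val)) {
		perror("write");
		close(fd);
		return 1;
	}
	close(fd);
	return 0;
}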
+11 -2
MAINTAINERS
··· 4307 4307 F: fs/befs/ 4308 4308 4309 4309 BFQ I/O SCHEDULER 4310 - M: Yu Kuai <yukuai3@huawei.com> 4310 + M: Yu Kuai <yukuai@fnnas.com> 4311 4311 L: linux-block@vger.kernel.org 4312 4312 S: Odd Fixes 4313 4313 F: Documentation/block/bfq-iosched.rst ··· 4407 4407 F: drivers/block/ 4408 4408 F: include/linux/bio.h 4409 4409 F: include/linux/blk* 4410 + F: include/uapi/linux/blk* 4411 + F: include/uapi/linux/ioprio.h 4410 4412 F: kernel/trace/blktrace.c 4411 4413 F: lib/sbitmap.c 4412 4414 ··· 23910 23908 23911 23909 SOFTWARE RAID (Multiple Disks) SUPPORT 23912 23910 M: Song Liu <song@kernel.org> 23913 - M: Yu Kuai <yukuai3@huawei.com> 23911 + M: Yu Kuai <yukuai@fnnas.com> 23914 23912 L: linux-raid@vger.kernel.org 23915 23913 S: Supported 23916 23914 Q: https://patchwork.kernel.org/project/linux-raid/list/ ··· 28372 28370 L: linux-kernel@vger.kernel.org 28373 28371 S: Maintained 28374 28372 F: arch/x86/kernel/cpu/zhaoxin.c 28373 + 28374 + ZONED BLOCK DEVICE (BLOCK LAYER) 28375 + M: Damien Le Moal <dlemoal@kernel.org> 28376 + L: linux-block@vger.kernel.org 28377 + S: Maintained 28378 + F: block/blk-zoned.c 28379 + F: include/uapi/linux/blkzoned.h 28375 28380 28376 28381 ZONED LOOP DEVICE 28377 28382 M: Damien Le Moal <dlemoal@kernel.org>
+3 -23
block/bio-integrity-auto.c
··· 29 29 { 30 30 bid->bio->bi_integrity = NULL; 31 31 bid->bio->bi_opf &= ~REQ_INTEGRITY; 32 - kfree(bvec_virt(bid->bip.bip_vec)); 32 + bio_integrity_free_buf(&bid->bip); 33 33 mempool_free(bid, &bid_pool); 34 34 } 35 35 ··· 110 110 struct bio_integrity_data *bid; 111 111 bool set_flags = true; 112 112 gfp_t gfp = GFP_NOIO; 113 - unsigned int len; 114 - void *buf; 115 113 116 114 if (!bi) 117 115 return true; ··· 150 152 if (WARN_ON_ONCE(bio_has_crypt_ctx(bio))) 151 153 return true; 152 154 153 - /* Allocate kernel buffer for protection data */ 154 - len = bio_integrity_bytes(bi, bio_sectors(bio)); 155 - buf = kmalloc(len, gfp); 156 - if (!buf) 157 - goto err_end_io; 158 155 bid = mempool_alloc(&bid_pool, GFP_NOIO); 159 - if (!bid) 160 - goto err_free_buf; 161 156 bio_integrity_init(bio, &bid->bip, &bid->bvec, 1); 162 - 163 157 bid->bio = bio; 164 - 165 158 bid->bip.bip_flags |= BIP_BLOCK_INTEGRITY; 159 + bio_integrity_alloc_buf(bio, gfp & __GFP_ZERO); 160 + 166 161 bip_set_seed(&bid->bip, bio->bi_iter.bi_sector); 167 162 168 163 if (set_flags) { ··· 167 176 bid->bip.bip_flags |= BIP_CHECK_REFTAG; 168 177 } 169 178 170 - if (bio_integrity_add_page(bio, virt_to_page(buf), len, 171 - offset_in_page(buf)) < len) 172 - goto err_end_io; 173 - 174 179 /* Auto-generate integrity metadata if this is a write */ 175 180 if (bio_data_dir(bio) == WRITE && bip_should_check(&bid->bip)) 176 181 blk_integrity_generate(bio); 177 182 else 178 183 bid->saved_bio_iter = bio->bi_iter; 179 184 return true; 180 - 181 - err_free_buf: 182 - kfree(buf); 183 - err_end_io: 184 - bio->bi_status = BLK_STS_RESOURCE; 185 - bio_endio(bio); 186 - return false; 187 185 } 188 186 EXPORT_SYMBOL(bio_integrity_prep); 189 187
+48
block/bio-integrity.c
··· 14 14 struct bio_vec bvecs[]; 15 15 }; 16 16 17 + static mempool_t integrity_buf_pool; 18 + 19 + void bio_integrity_alloc_buf(struct bio *bio, bool zero_buffer) 20 + { 21 + struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk); 22 + struct bio_integrity_payload *bip = bio_integrity(bio); 23 + unsigned int len = bio_integrity_bytes(bi, bio_sectors(bio)); 24 + gfp_t gfp = GFP_NOIO | (zero_buffer ? __GFP_ZERO : 0); 25 + void *buf; 26 + 27 + buf = kmalloc(len, (gfp & ~__GFP_DIRECT_RECLAIM) | 28 + __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN); 29 + if (unlikely(!buf)) { 30 + struct page *page; 31 + 32 + page = mempool_alloc(&integrity_buf_pool, GFP_NOFS); 33 + if (zero_buffer) 34 + memset(page_address(page), 0, len); 35 + bvec_set_page(&bip->bip_vec[0], page, len, 0); 36 + bip->bip_flags |= BIP_MEMPOOL; 37 + } else { 38 + bvec_set_page(&bip->bip_vec[0], virt_to_page(buf), len, 39 + offset_in_page(buf)); 40 + } 41 + 42 + bip->bip_vcnt = 1; 43 + bip->bip_iter.bi_size = len; 44 + } 45 + 46 + void bio_integrity_free_buf(struct bio_integrity_payload *bip) 47 + { 48 + struct bio_vec *bv = &bip->bip_vec[0]; 49 + 50 + if (bip->bip_flags & BIP_MEMPOOL) 51 + mempool_free(bv->bv_page, &integrity_buf_pool); 52 + else 53 + kfree(bvec_virt(bv)); 54 + } 55 + 17 56 /** 18 57 * bio_integrity_free - Free bio integrity payload 19 58 * @bio: bio containing bip to be freed ··· 477 438 478 439 return 0; 479 440 } 441 + 442 + static int __init bio_integrity_initfn(void) 443 + { 444 + if (mempool_init_page_pool(&integrity_buf_pool, BIO_POOL_SIZE, 445 + get_order(BLK_INTEGRITY_MAX_SIZE))) 446 + panic("bio: can't create integrity buf pool\n"); 447 + return 0; 448 + } 449 + subsys_initcall(bio_integrity_initfn);
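The bio-integrity hunk above replaces a plain kmalloc() of the protection buffer with a two-step allocation: a non-blocking kmalloc() (no direct reclaim, no memory reserves, no retry, no warning) and, if that fails, a page taken from a pre-sized mempool, so the allocation no longer has to dip into direct reclaim on the fast path and always has a guaranteed fallback, which is what makes it less deadlock-prone. A stand-alone user-space sketch of that "fast allocation with a reserved fallback pool" pattern is below; all names in it are hypothetical.

/*
 * Illustrative sketch (not kernel code): try the normal allocator first,
 * fall back to a small pre-reserved pool so the caller can always make
 * forward progress under memory pressure.
 */
#include <stdio.h>
#include <stdlib.h>

#define POOL_SLOTS	4
#define SLOT_SIZE	4096

static char reserve_pool[POOL_SLOTS][SLOT_SIZE];
static int reserve_used[POOL_SLOTS];

static void *buf_alloc(size_t len)
{
	void *buf = malloc(len);
	int i;

	if (buf || len > SLOT_SIZE)
		return buf;			/* fast path, or too big for the pool */

	for (i = 0; i < POOL_SLOTS; i++) {	/* fallback: reserved slot */
		if (!reserve_used[i]) {
			reserve_used[i] = 1;
			return reserve_pool[i];
		}
	}
	return NULL;				/* pool exhausted, caller must retry */
}

static void buf_free(void *buf)
{
	int i;

	for (i = 0; i < POOL_SLOTS; i++) {
		if (buf == reserve_pool[i]) {
			reserve_used[i] = 0;
			return;
		}
	}
	free(buf);
}

int main(void)
{
	void *p = buf_alloc(512);

	printf("buffer at %p\n", p);
	buf_free(p);
	return 0;
}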
+1
block/bio.c
··· 253 253 bio->bi_write_hint = 0; 254 254 bio->bi_write_stream = 0; 255 255 bio->bi_status = 0; 256 + bio->bi_bvec_gap_bit = 0; 256 257 bio->bi_iter.bi_sector = 0; 257 258 bio->bi_iter.bi_size = 0; 258 259 bio->bi_iter.bi_idx = 0;
+6 -6
block/blk-core.c
··· 662 662 * bio_list of new bios to be added. ->submit_bio() may indeed add some more 663 663 * bios through a recursive call to submit_bio_noacct. If it did, we find a 664 664 * non-NULL value in bio_list and re-enter the loop from the top. 665 - * - In this case we really did just take the bio of the top of the list (no 665 + * - In this case we really did just take the bio off the top of the list (no 666 666 * pretending) and so remove it from bio_list, and call into ->submit_bio() 667 667 * again. 668 668 * 669 669 * bio_list_on_stack[0] contains bios submitted by the current ->submit_bio. 670 670 * bio_list_on_stack[1] contains bios that were submitted before the current 671 - * ->submit_bio, but that haven't been processed yet. 671 + * ->submit_bio(), but that haven't been processed yet. 672 672 */ 673 673 static void __submit_bio_noacct(struct bio *bio) 674 674 { ··· 743 743 /* 744 744 * We only want one ->submit_bio to be active at a time, else stack 745 745 * usage with stacked devices could be a problem. Use current->bio_list 746 - * to collect a list of requests submited by a ->submit_bio method while 747 - * it is active, and then process them after it returned. 746 + * to collect a list of requests submitted by a ->submit_bio method 747 + * while it is active, and then process them after it returned. 748 748 */ 749 749 if (current->bio_list) { 750 750 if (split) ··· 901 901 * 902 902 * submit_bio() is used to submit I/O requests to block devices. It is passed a 903 903 * fully set up &struct bio that describes the I/O that needs to be done. The 904 - * bio will be send to the device described by the bi_bdev field. 904 + * bio will be sent to the device described by the bi_bdev field. 905 905 * 906 906 * The success/failure status of the request, along with notification of 907 907 * completion, is delivered asynchronously through the ->bi_end_io() callback ··· 991 991 * point to a freshly allocated bio at this point. If that happens 992 992 * we have a few cases to consider: 993 993 * 994 - * 1) the bio is beeing initialized and bi_bdev is NULL. We can just 994 + * 1) the bio is being initialized and bi_bdev is NULL. We can just 995 995 * simply nothing in this case 996 996 * 2) the bio points to a not poll enabled device. bio_poll will catch 997 997 * this and return 0
+2 -4
block/blk-iocost.c
··· 2334 2334 else 2335 2335 usage_dur = max_t(u64, now.now - ioc->period_at, 1); 2336 2336 2337 - usage = clamp_t(u32, 2338 - DIV64_U64_ROUND_UP(usage_us * WEIGHT_ONE, 2339 - usage_dur), 2340 - 1, WEIGHT_ONE); 2337 + usage = clamp(DIV64_U64_ROUND_UP(usage_us * WEIGHT_ONE, usage_dur), 2338 + 1, WEIGHT_ONE); 2341 2339 2342 2340 /* 2343 2341 * Already donating or accumulated enough to start.
+3 -3
block/blk-lib.c
··· 87 87 { 88 88 struct bio *bio = NULL; 89 89 struct blk_plug plug; 90 - int ret; 90 + int ret = 0; 91 91 92 92 blk_start_plug(&plug); 93 - ret = __blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask, &bio); 94 - if (!ret && bio) { 93 + __blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask, &bio); 94 + if (bio) { 95 95 ret = submit_bio_wait(bio); 96 96 if (ret == -EOPNOTSUPP) 97 97 ret = 0;
+3
block/blk-map.c
··· 459 459 if (rq->bio) { 460 460 if (!ll_back_merge_fn(rq, bio, nr_segs)) 461 461 return -EINVAL; 462 + rq->phys_gap_bit = bio_seg_gap(rq->q, rq->biotail, bio, 463 + rq->phys_gap_bit); 462 464 rq->biotail->bi_next = bio; 463 465 rq->biotail = bio; 464 466 rq->__data_len += bio->bi_iter.bi_size; ··· 471 469 rq->nr_phys_segments = nr_segs; 472 470 rq->bio = rq->biotail = bio; 473 471 rq->__data_len = bio->bi_iter.bi_size; 472 + rq->phys_gap_bit = bio->bi_bvec_gap_bit; 474 473 return 0; 475 474 } 476 475 EXPORT_SYMBOL(blk_rq_append_bio);
+40 -4
block/blk-merge.c
··· 302 302 return lim->logical_block_size; 303 303 } 304 304 305 + static inline unsigned int bvec_seg_gap(struct bio_vec *bvprv, 306 + struct bio_vec *bv) 307 + { 308 + return bv->bv_offset | (bvprv->bv_offset + bvprv->bv_len); 309 + } 310 + 305 311 /** 306 312 * bio_split_io_at - check if and where to split a bio 307 313 * @bio: [in] bio to be split ··· 325 319 unsigned *segs, unsigned max_bytes, unsigned len_align_mask) 326 320 { 327 321 struct bio_vec bv, bvprv, *bvprvp = NULL; 322 + unsigned nsegs = 0, bytes = 0, gaps = 0; 328 323 struct bvec_iter iter; 329 - unsigned nsegs = 0, bytes = 0; 330 324 331 325 bio_for_each_bvec(bv, bio, iter) { 332 326 if (bv.bv_offset & lim->dma_alignment || ··· 337 331 * If the queue doesn't support SG gaps and adding this 338 332 * offset would create a gap, disallow it. 339 333 */ 340 - if (bvprvp && bvec_gap_to_prev(lim, bvprvp, bv.bv_offset)) 341 - goto split; 334 + if (bvprvp) { 335 + if (bvec_gap_to_prev(lim, bvprvp, bv.bv_offset)) 336 + goto split; 337 + gaps |= bvec_seg_gap(bvprvp, &bv); 338 + } 342 339 343 340 if (nsegs < lim->max_segments && 344 341 bytes + bv.bv_len <= max_bytes && 345 - bv.bv_offset + bv.bv_len <= lim->min_segment_size) { 342 + bv.bv_offset + bv.bv_len <= lim->max_fast_segment_size) { 346 343 nsegs++; 347 344 bytes += bv.bv_len; 348 345 } else { ··· 359 350 } 360 351 361 352 *segs = nsegs; 353 + bio->bi_bvec_gap_bit = ffs(gaps); 362 354 return 0; 363 355 split: 364 356 if (bio->bi_opf & REQ_ATOMIC) ··· 395 385 * big IO can be trival, disable iopoll when split needed. 396 386 */ 397 387 bio_clear_polled(bio); 388 + bio->bi_bvec_gap_bit = ffs(gaps); 398 389 return bytes >> SECTOR_SHIFT; 399 390 } 400 391 EXPORT_SYMBOL_GPL(bio_split_io_at); ··· 732 721 return (rq->cmd_flags & REQ_ATOMIC) == (next->cmd_flags & REQ_ATOMIC); 733 722 } 734 723 724 + u8 bio_seg_gap(struct request_queue *q, struct bio *prev, struct bio *next, 725 + u8 gaps_bit) 726 + { 727 + struct bio_vec pb, nb; 728 + 729 + if (!bio_has_data(prev)) 730 + return 0; 731 + 732 + gaps_bit = min_not_zero(gaps_bit, prev->bi_bvec_gap_bit); 733 + gaps_bit = min_not_zero(gaps_bit, next->bi_bvec_gap_bit); 734 + 735 + bio_get_last_bvec(prev, &pb); 736 + bio_get_first_bvec(next, &nb); 737 + if (!biovec_phys_mergeable(q, &pb, &nb)) 738 + gaps_bit = min_not_zero(gaps_bit, ffs(bvec_seg_gap(&pb, &nb))); 739 + return gaps_bit; 740 + } 741 + 735 742 /* 736 743 * For non-mq, this has to be called with the request spinlock acquired. 737 744 * For mq with scheduling, the appropriate queue wide lock should be held. ··· 814 785 if (next->start_time_ns < req->start_time_ns) 815 786 req->start_time_ns = next->start_time_ns; 816 787 788 + req->phys_gap_bit = bio_seg_gap(req->q, req->biotail, next->bio, 789 + min_not_zero(next->phys_gap_bit, 790 + req->phys_gap_bit)); 817 791 req->biotail->bi_next = next->bio; 818 792 req->biotail = next->biotail; 819 793 ··· 940 908 if (req->rq_flags & RQF_ZONE_WRITE_PLUGGING) 941 909 blk_zone_write_plug_bio_merged(bio); 942 910 911 + req->phys_gap_bit = bio_seg_gap(req->q, req->biotail, bio, 912 + req->phys_gap_bit); 943 913 req->biotail->bi_next = bio; 944 914 req->biotail = bio; 945 915 req->__data_len += bio->bi_iter.bi_size; ··· 976 942 977 943 blk_update_mixed_merge(req, bio, true); 978 944 945 + req->phys_gap_bit = bio_seg_gap(req->q, bio, req->bio, 946 + req->phys_gap_bit); 979 947 bio->bi_next = req->bio; 980 948 req->bio = bio; 981 949
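As the blk-merge.c hunk above reads, bvec_seg_gap() ORs the start offset of a vector with the end offset of the previous one, and ffs() of the accumulated mask is stored as the bio/request gap bit: the lowest set bit identifies the smallest power-of-two boundary at which consecutive segments stop being contiguous, which the DMA mapping code can later compare against an IOMMU merge boundary. A small worked example of that arithmetic (offsets made up, interpretation mine) follows.

/*
 * Worked example of the gap-bit arithmetic from the blk-merge.c hunk
 * above. The offsets and lengths are made up for illustration.
 */
#include <stdio.h>
#include <strings.h>	/* ffs() */

struct vec { unsigned int offset, len; };

int main(void)
{
	/* previous vector ends at offset 0x600, next one starts at 0x1000 */
	struct vec prev = { .offset = 0x200, .len = 0x400 };
	struct vec next = { .offset = 0x1000, .len = 0x200 };
	unsigned int gaps = next.offset | (prev.offset + prev.len);

	/* gaps = 0x1000 | 0x600 = 0x1600; lowest set bit is 0x200 */
	printf("gap mask 0x%x, ffs() = %d, i.e. a gap at the %u-byte boundary\n",
	       gaps, ffs(gaps), 1u << (ffs(gaps) - 1));
	return 0;
}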
+17 -12
block/blk-mq-dma.c
··· 79 79 static inline bool blk_can_dma_map_iova(struct request *req, 80 80 struct device *dma_dev) 81 81 { 82 - return !((queue_virt_boundary(req->q) + 1) & 83 - dma_get_merge_boundary(dma_dev)); 82 + return !(req_phys_gap_mask(req) & dma_get_merge_boundary(dma_dev)); 84 83 } 85 84 86 85 static bool blk_dma_map_bus(struct blk_dma_iter *iter, struct phys_vec *vec) ··· 92 93 static bool blk_dma_map_direct(struct request *req, struct device *dma_dev, 93 94 struct blk_dma_iter *iter, struct phys_vec *vec) 94 95 { 95 - iter->addr = dma_map_page(dma_dev, phys_to_page(vec->paddr), 96 - offset_in_page(vec->paddr), vec->len, rq_dma_dir(req)); 96 + unsigned int attrs = 0; 97 + 98 + if (iter->p2pdma.map == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE) 99 + attrs |= DMA_ATTR_MMIO; 100 + 101 + iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len, 102 + rq_dma_dir(req), attrs); 97 103 if (dma_mapping_error(dma_dev, iter->addr)) { 98 104 iter->status = BLK_STS_RESOURCE; 99 105 return false; ··· 113 109 { 114 110 enum dma_data_direction dir = rq_dma_dir(req); 115 111 unsigned int mapped = 0; 112 + unsigned int attrs = 0; 116 113 int error; 117 114 118 115 iter->addr = state->addr; 119 116 iter->len = dma_iova_size(state); 120 117 118 + if (iter->p2pdma.map == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE) 119 + attrs |= DMA_ATTR_MMIO; 120 + 121 121 do { 122 122 error = dma_iova_link(dma_dev, state, vec->paddr, mapped, 123 - vec->len, dir, 0); 123 + vec->len, dir, attrs); 124 124 if (error) 125 125 break; 126 126 mapped += vec->len; ··· 151 143 .bi_size = rq->special_vec.bv_len, 152 144 } 153 145 }; 154 - } else if (bio) { 146 + } else if (bio) { 155 147 *iter = (struct blk_map_iter) { 156 148 .bio = bio, 157 149 .bvecs = bio->bi_io_vec, ··· 159 151 }; 160 152 } else { 161 153 /* the internal flush request may not have bio attached */ 162 - *iter = (struct blk_map_iter) {}; 154 + *iter = (struct blk_map_iter) {}; 163 155 } 164 156 } 165 157 ··· 171 163 172 164 memset(&iter->p2pdma, 0, sizeof(iter->p2pdma)); 173 165 iter->status = BLK_STS_OK; 166 + iter->p2pdma.map = PCI_P2PDMA_MAP_NONE; 174 167 175 168 /* 176 169 * Grab the first segment ASAP because we'll need it to check for P2P ··· 183 174 switch (pci_p2pdma_state(&iter->p2pdma, dma_dev, 184 175 phys_to_page(vec.paddr))) { 185 176 case PCI_P2PDMA_MAP_BUS_ADDR: 186 - if (iter->iter.is_integrity) 187 - bio_integrity(req->bio)->bip_flags |= BIP_P2P_DMA; 188 - else 189 - req->cmd_flags |= REQ_P2PDMA; 190 177 return blk_dma_map_bus(iter, &vec); 191 178 case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: 192 179 /* ··· 357 352 EXPORT_SYMBOL_GPL(blk_rq_integrity_dma_map_iter_start); 358 353 359 354 /** 360 - * blk_rq_integrity_dma_map_iter_start - map the next integrity DMA segment for 355 + * blk_rq_integrity_dma_map_iter_next - map the next integrity DMA segment for 361 356 * a request 362 357 * @req: request to map 363 358 * @dma_dev: device to map to
+96 -24
block/blk-mq-sched.c
··· 427 427 kfree(et); 428 428 } 429 429 430 - void blk_mq_free_sched_tags_batch(struct xarray *et_table, 430 + void blk_mq_free_sched_res(struct elevator_resources *res, 431 + struct elevator_type *type, 432 + struct blk_mq_tag_set *set) 433 + { 434 + if (res->et) { 435 + blk_mq_free_sched_tags(res->et, set); 436 + res->et = NULL; 437 + } 438 + if (res->data) { 439 + blk_mq_free_sched_data(type, res->data); 440 + res->data = NULL; 441 + } 442 + } 443 + 444 + void blk_mq_free_sched_res_batch(struct xarray *elv_tbl, 431 445 struct blk_mq_tag_set *set) 432 446 { 433 447 struct request_queue *q; 434 - struct elevator_tags *et; 448 + struct elv_change_ctx *ctx; 435 449 436 450 lockdep_assert_held_write(&set->update_nr_hwq_lock); 437 451 ··· 458 444 * concurrently. 459 445 */ 460 446 if (q->elevator) { 461 - et = xa_load(et_table, q->id); 462 - if (unlikely(!et)) 447 + ctx = xa_load(elv_tbl, q->id); 448 + if (!ctx) { 463 449 WARN_ON_ONCE(1); 464 - else 465 - blk_mq_free_sched_tags(et, set); 450 + continue; 451 + } 452 + blk_mq_free_sched_res(&ctx->res, ctx->type, set); 466 453 } 467 454 } 455 + } 456 + 457 + void blk_mq_free_sched_ctx_batch(struct xarray *elv_tbl) 458 + { 459 + unsigned long i; 460 + struct elv_change_ctx *ctx; 461 + 462 + xa_for_each(elv_tbl, i, ctx) { 463 + xa_erase(elv_tbl, i); 464 + kfree(ctx); 465 + } 466 + } 467 + 468 + int blk_mq_alloc_sched_ctx_batch(struct xarray *elv_tbl, 469 + struct blk_mq_tag_set *set) 470 + { 471 + struct request_queue *q; 472 + struct elv_change_ctx *ctx; 473 + 474 + lockdep_assert_held_write(&set->update_nr_hwq_lock); 475 + 476 + list_for_each_entry(q, &set->tag_list, tag_set_list) { 477 + ctx = kzalloc(sizeof(struct elv_change_ctx), GFP_KERNEL); 478 + if (!ctx) 479 + return -ENOMEM; 480 + 481 + if (xa_insert(elv_tbl, q->id, ctx, GFP_KERNEL)) { 482 + kfree(ctx); 483 + return -ENOMEM; 484 + } 485 + } 486 + return 0; 468 487 } 469 488 470 489 struct elevator_tags *blk_mq_alloc_sched_tags(struct blk_mq_tag_set *set, ··· 513 466 else 514 467 nr_tags = nr_hw_queues; 515 468 516 - et = kmalloc(sizeof(struct elevator_tags) + 517 - nr_tags * sizeof(struct blk_mq_tags *), gfp); 469 + et = kmalloc(struct_size(et, tags, nr_tags), gfp); 518 470 if (!et) 519 471 return NULL; 520 472 ··· 544 498 return NULL; 545 499 } 546 500 547 - int blk_mq_alloc_sched_tags_batch(struct xarray *et_table, 501 + int blk_mq_alloc_sched_res(struct request_queue *q, 502 + struct elevator_type *type, 503 + struct elevator_resources *res, 504 + unsigned int nr_hw_queues) 505 + { 506 + struct blk_mq_tag_set *set = q->tag_set; 507 + 508 + res->et = blk_mq_alloc_sched_tags(set, nr_hw_queues, 509 + blk_mq_default_nr_requests(set)); 510 + if (!res->et) 511 + return -ENOMEM; 512 + 513 + res->data = blk_mq_alloc_sched_data(q, type); 514 + if (IS_ERR(res->data)) { 515 + blk_mq_free_sched_tags(res->et, set); 516 + return -ENOMEM; 517 + } 518 + 519 + return 0; 520 + } 521 + 522 + int blk_mq_alloc_sched_res_batch(struct xarray *elv_tbl, 548 523 struct blk_mq_tag_set *set, unsigned int nr_hw_queues) 549 524 { 525 + struct elv_change_ctx *ctx; 550 526 struct request_queue *q; 551 - struct elevator_tags *et; 552 - gfp_t gfp = GFP_NOIO | __GFP_ZERO | __GFP_NOWARN | __GFP_NORETRY; 527 + int ret = -ENOMEM; 553 528 554 529 lockdep_assert_held_write(&set->update_nr_hwq_lock); 555 530 ··· 583 516 * concurrently. 
584 517 */ 585 518 if (q->elevator) { 586 - et = blk_mq_alloc_sched_tags(set, nr_hw_queues, 587 - blk_mq_default_nr_requests(set)); 588 - if (!et) 519 + ctx = xa_load(elv_tbl, q->id); 520 + if (WARN_ON_ONCE(!ctx)) { 521 + ret = -ENOENT; 589 522 goto out_unwind; 590 - if (xa_insert(et_table, q->id, et, gfp)) 591 - goto out_free_tags; 523 + } 524 + 525 + ret = blk_mq_alloc_sched_res(q, q->elevator->type, 526 + &ctx->res, nr_hw_queues); 527 + if (ret) 528 + goto out_unwind; 592 529 } 593 530 } 594 531 return 0; 595 - out_free_tags: 596 - blk_mq_free_sched_tags(et, set); 532 + 597 533 out_unwind: 598 534 list_for_each_entry_continue_reverse(q, &set->tag_list, tag_set_list) { 599 535 if (q->elevator) { 600 - et = xa_load(et_table, q->id); 601 - if (et) 602 - blk_mq_free_sched_tags(et, set); 536 + ctx = xa_load(elv_tbl, q->id); 537 + if (ctx) 538 + blk_mq_free_sched_res(&ctx->res, 539 + ctx->type, set); 603 540 } 604 541 } 605 - return -ENOMEM; 542 + return ret; 606 543 } 607 544 608 545 /* caller must have a reference to @e, will grab another one if successful */ 609 546 int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e, 610 - struct elevator_tags *et) 547 + struct elevator_resources *res) 611 548 { 612 549 unsigned int flags = q->tag_set->flags; 550 + struct elevator_tags *et = res->et; 613 551 struct blk_mq_hw_ctx *hctx; 614 552 struct elevator_queue *eq; 615 553 unsigned long i; 616 554 int ret; 617 555 618 - eq = elevator_alloc(q, e, et); 556 + eq = elevator_alloc(q, e, res); 619 557 if (!eq) 620 558 return -ENOMEM; 621 559
+37 -3
block/blk-mq-sched.h
··· 19 19 void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx); 20 20 21 21 int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e, 22 - struct elevator_tags *et); 22 + struct elevator_resources *res); 23 23 void blk_mq_exit_sched(struct request_queue *q, struct elevator_queue *e); 24 24 void blk_mq_sched_free_rqs(struct request_queue *q); 25 25 26 26 struct elevator_tags *blk_mq_alloc_sched_tags(struct blk_mq_tag_set *set, 27 27 unsigned int nr_hw_queues, unsigned int nr_requests); 28 - int blk_mq_alloc_sched_tags_batch(struct xarray *et_table, 28 + int blk_mq_alloc_sched_res(struct request_queue *q, 29 + struct elevator_type *type, 30 + struct elevator_resources *res, 31 + unsigned int nr_hw_queues); 32 + int blk_mq_alloc_sched_res_batch(struct xarray *elv_tbl, 29 33 struct blk_mq_tag_set *set, unsigned int nr_hw_queues); 34 + int blk_mq_alloc_sched_ctx_batch(struct xarray *elv_tbl, 35 + struct blk_mq_tag_set *set); 36 + void blk_mq_free_sched_ctx_batch(struct xarray *elv_tbl); 30 37 void blk_mq_free_sched_tags(struct elevator_tags *et, 31 38 struct blk_mq_tag_set *set); 32 - void blk_mq_free_sched_tags_batch(struct xarray *et_table, 39 + void blk_mq_free_sched_res(struct elevator_resources *res, 40 + struct elevator_type *type, 33 41 struct blk_mq_tag_set *set); 42 + void blk_mq_free_sched_res_batch(struct xarray *et_table, 43 + struct blk_mq_tag_set *set); 44 + /* 45 + * blk_mq_alloc_sched_data() - Allocates scheduler specific data 46 + * Returns: 47 + * - Pointer to allocated data on success 48 + * - NULL if no allocation needed 49 + * - ERR_PTR(-ENOMEM) in case of failure 50 + */ 51 + static inline void *blk_mq_alloc_sched_data(struct request_queue *q, 52 + struct elevator_type *e) 53 + { 54 + void *sched_data; 55 + 56 + if (!e || !e->ops.alloc_sched_data) 57 + return NULL; 58 + 59 + sched_data = e->ops.alloc_sched_data(q); 60 + return (sched_data) ?: ERR_PTR(-ENOMEM); 61 + } 62 + 63 + static inline void blk_mq_free_sched_data(struct elevator_type *e, void *data) 64 + { 65 + if (e && e->ops.free_sched_data) 66 + e->ops.free_sched_data(data); 67 + } 34 68 35 69 static inline void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx) 36 70 {
+1 -1
block/blk-mq-tag.c
··· 499 499 int srcu_idx; 500 500 501 501 /* 502 - * __blk_mq_update_nr_hw_queues() updates nr_hw_queues and hctx_table 502 + * __blk_mq_update_nr_hw_queues() updates nr_hw_queues and queue_hw_ctx 503 503 * while the queue is frozen. So we can use q_usage_counter to avoid 504 504 * racing with it. 505 505 */
+96 -56
block/blk-mq.c
··· 376 376 INIT_LIST_HEAD(&rq->queuelist); 377 377 rq->q = q; 378 378 rq->__sector = (sector_t) -1; 379 + rq->phys_gap_bit = 0; 379 380 INIT_HLIST_NODE(&rq->hash); 380 381 RB_CLEAR_NODE(&rq->rb_node); 381 382 rq->tag = BLK_MQ_NO_TAG; ··· 468 467 unsigned long tag_mask; 469 468 int i, nr = 0; 470 469 471 - tag_mask = blk_mq_get_tags(data, data->nr_tags, &tag_offset); 472 - if (unlikely(!tag_mask)) 473 - return NULL; 470 + do { 471 + tag_mask = blk_mq_get_tags(data, data->nr_tags - nr, &tag_offset); 472 + if (unlikely(!tag_mask)) { 473 + if (nr == 0) 474 + return NULL; 475 + break; 476 + } 477 + tags = blk_mq_tags_from_data(data); 478 + for (i = 0; tag_mask; i++) { 479 + if (!(tag_mask & (1UL << i))) 480 + continue; 481 + tag = tag_offset + i; 482 + prefetch(tags->static_rqs[tag]); 483 + tag_mask &= ~(1UL << i); 484 + rq = blk_mq_rq_ctx_init(data, tags, tag); 485 + rq_list_add_head(data->cached_rqs, rq); 486 + nr++; 487 + } 488 + } while (data->nr_tags > nr); 474 489 475 - tags = blk_mq_tags_from_data(data); 476 - for (i = 0; tag_mask; i++) { 477 - if (!(tag_mask & (1UL << i))) 478 - continue; 479 - tag = tag_offset + i; 480 - prefetch(tags->static_rqs[tag]); 481 - tag_mask &= ~(1UL << i); 482 - rq = blk_mq_rq_ctx_init(data, tags, tag); 483 - rq_list_add_head(data->cached_rqs, rq); 484 - nr++; 485 - } 486 490 if (!(data->rq_flags & RQF_SCHED_TAGS)) 487 491 blk_mq_add_active_requests(data->hctx, nr); 488 492 /* caller already holds a reference, add for remainder */ ··· 674 668 goto out_queue_exit; 675 669 } 676 670 rq->__data_len = 0; 671 + rq->phys_gap_bit = 0; 677 672 rq->__sector = (sector_t) -1; 678 673 rq->bio = rq->biotail = NULL; 679 674 return rq; ··· 730 723 * If not tell the caller that it should skip this queue. 731 724 */ 732 725 ret = -EXDEV; 733 - data.hctx = xa_load(&q->hctx_table, hctx_idx); 726 + data.hctx = q->queue_hw_ctx[hctx_idx]; 734 727 if (!blk_mq_hw_queue_mapped(data.hctx)) 735 728 goto out_queue_exit; 736 729 cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask); ··· 755 748 rq = blk_mq_rq_ctx_init(&data, blk_mq_tags_from_data(&data), tag); 756 749 blk_mq_rq_time_init(rq, alloc_time_ns); 757 750 rq->__data_len = 0; 751 + rq->phys_gap_bit = 0; 758 752 rq->__sector = (sector_t) -1; 759 753 rq->bio = rq->biotail = NULL; 760 754 return rq; ··· 2682 2674 rq->bio = rq->biotail = bio; 2683 2675 rq->__sector = bio->bi_iter.bi_sector; 2684 2676 rq->__data_len = bio->bi_iter.bi_size; 2677 + rq->phys_gap_bit = bio->bi_bvec_gap_bit; 2678 + 2685 2679 rq->nr_phys_segments = nr_segs; 2686 2680 if (bio_integrity(bio)) 2687 2681 rq->nr_integrity_segments = blk_rq_count_integrity_sg(rq->q, ··· 3390 3380 } 3391 3381 rq->nr_phys_segments = rq_src->nr_phys_segments; 3392 3382 rq->nr_integrity_segments = rq_src->nr_integrity_segments; 3383 + rq->phys_gap_bit = rq_src->phys_gap_bit; 3393 3384 3394 3385 if (rq->bio && blk_crypto_rq_bio_prep(rq, rq->bio, gfp_mask) < 0) 3395 3386 goto free_and_out; ··· 3946 3935 blk_free_flush_queue_callback); 3947 3936 hctx->fq = NULL; 3948 3937 3949 - xa_erase(&q->hctx_table, hctx_idx); 3950 - 3951 3938 spin_lock(&q->unused_hctx_lock); 3952 3939 list_add(&hctx->hctx_list, &q->unused_hctx_list); 3953 3940 spin_unlock(&q->unused_hctx_lock); ··· 3987 3978 hctx->numa_node)) 3988 3979 goto exit_hctx; 3989 3980 3990 - if (xa_insert(&q->hctx_table, hctx_idx, hctx, GFP_KERNEL)) 3991 - goto exit_flush_rq; 3992 - 3993 3981 return 0; 3994 3982 3995 - exit_flush_rq: 3996 - if (set->ops->exit_request) 3997 - set->ops->exit_request(set, hctx->fq->flush_rq, 
hctx_idx); 3998 3983 exit_hctx: 3999 3984 if (set->ops->exit_hctx) 4000 3985 set->ops->exit_hctx(hctx, hctx_idx); ··· 4377 4374 kobject_put(&hctx->kobj); 4378 4375 } 4379 4376 4380 - xa_destroy(&q->hctx_table); 4377 + kfree(q->queue_hw_ctx); 4381 4378 4382 4379 /* 4383 4380 * release .mq_kobj and sw queue's kobject now because ··· 4521 4518 static void __blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set, 4522 4519 struct request_queue *q) 4523 4520 { 4524 - struct blk_mq_hw_ctx *hctx; 4525 - unsigned long i, j; 4521 + int i, j, end; 4522 + struct blk_mq_hw_ctx **hctxs = q->queue_hw_ctx; 4523 + 4524 + if (q->nr_hw_queues < set->nr_hw_queues) { 4525 + struct blk_mq_hw_ctx **new_hctxs; 4526 + 4527 + new_hctxs = kcalloc_node(set->nr_hw_queues, 4528 + sizeof(*new_hctxs), GFP_KERNEL, 4529 + set->numa_node); 4530 + if (!new_hctxs) 4531 + return; 4532 + if (hctxs) 4533 + memcpy(new_hctxs, hctxs, q->nr_hw_queues * 4534 + sizeof(*hctxs)); 4535 + rcu_assign_pointer(q->queue_hw_ctx, new_hctxs); 4536 + /* 4537 + * Make sure reading the old queue_hw_ctx from other 4538 + * context concurrently won't trigger uaf. 4539 + */ 4540 + synchronize_rcu_expedited(); 4541 + kfree(hctxs); 4542 + hctxs = new_hctxs; 4543 + } 4526 4544 4527 4545 for (i = 0; i < set->nr_hw_queues; i++) { 4528 4546 int old_node; 4529 4547 int node = blk_mq_get_hctx_node(set, i); 4530 - struct blk_mq_hw_ctx *old_hctx = xa_load(&q->hctx_table, i); 4548 + struct blk_mq_hw_ctx *old_hctx = hctxs[i]; 4531 4549 4532 4550 if (old_hctx) { 4533 4551 old_node = old_hctx->numa_node; 4534 4552 blk_mq_exit_hctx(q, set, old_hctx, i); 4535 4553 } 4536 4554 4537 - if (!blk_mq_alloc_and_init_hctx(set, q, i, node)) { 4555 + hctxs[i] = blk_mq_alloc_and_init_hctx(set, q, i, node); 4556 + if (!hctxs[i]) { 4538 4557 if (!old_hctx) 4539 4558 break; 4540 4559 pr_warn("Allocate new hctx on node %d fails, fallback to previous one on node %d\n", 4541 4560 node, old_node); 4542 - hctx = blk_mq_alloc_and_init_hctx(set, q, i, old_node); 4543 - WARN_ON_ONCE(!hctx); 4561 + hctxs[i] = blk_mq_alloc_and_init_hctx(set, q, i, 4562 + old_node); 4563 + WARN_ON_ONCE(!hctxs[i]); 4544 4564 } 4545 4565 } 4546 4566 /* ··· 4572 4546 */ 4573 4547 if (i != set->nr_hw_queues) { 4574 4548 j = q->nr_hw_queues; 4549 + end = i; 4575 4550 } else { 4576 4551 j = i; 4552 + end = q->nr_hw_queues; 4577 4553 q->nr_hw_queues = set->nr_hw_queues; 4578 4554 } 4579 4555 4580 - xa_for_each_start(&q->hctx_table, j, hctx, j) 4581 - blk_mq_exit_hctx(q, set, hctx, j); 4556 + for (; j < end; j++) { 4557 + struct blk_mq_hw_ctx *hctx = hctxs[j]; 4558 + 4559 + if (hctx) { 4560 + blk_mq_exit_hctx(q, set, hctx, j); 4561 + hctxs[j] = NULL; 4562 + } 4563 + } 4582 4564 } 4583 4565 4584 4566 static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set, ··· 4621 4587 4622 4588 INIT_LIST_HEAD(&q->unused_hctx_list); 4623 4589 spin_lock_init(&q->unused_hctx_lock); 4624 - 4625 - xa_init(&q->hctx_table); 4626 4590 4627 4591 blk_mq_realloc_hw_ctxs(set, q); 4628 4592 if (!q->nr_hw_queues) ··· 5015 4983 * Switch back to the elevator type stored in the xarray. 
5016 4984 */ 5017 4985 static void blk_mq_elv_switch_back(struct request_queue *q, 5018 - struct xarray *elv_tbl, struct xarray *et_tbl) 4986 + struct xarray *elv_tbl) 5019 4987 { 5020 - struct elevator_type *e = xa_load(elv_tbl, q->id); 5021 - struct elevator_tags *t = xa_load(et_tbl, q->id); 4988 + struct elv_change_ctx *ctx = xa_load(elv_tbl, q->id); 4989 + 4990 + if (WARN_ON_ONCE(!ctx)) 4991 + return; 5022 4992 5023 4993 /* The elv_update_nr_hw_queues unfreezes the queue. */ 5024 - elv_update_nr_hw_queues(q, e, t); 4994 + elv_update_nr_hw_queues(q, ctx); 5025 4995 5026 4996 /* Drop the reference acquired in blk_mq_elv_switch_none. */ 5027 - if (e) 5028 - elevator_put(e); 4997 + if (ctx->type) 4998 + elevator_put(ctx->type); 5029 4999 } 5030 5000 5031 5001 /* 5032 - * Stores elevator type in xarray and set current elevator to none. It uses 5033 - * q->id as an index to store the elevator type into the xarray. 5002 + * Stores elevator name and type in ctx and set current elevator to none. 5034 5003 */ 5035 5004 static int blk_mq_elv_switch_none(struct request_queue *q, 5036 5005 struct xarray *elv_tbl) 5037 5006 { 5038 - int ret = 0; 5007 + struct elv_change_ctx *ctx; 5039 5008 5040 5009 lockdep_assert_held_write(&q->tag_set->update_nr_hwq_lock); 5041 5010 ··· 5048 5015 * can't run concurrently. 5049 5016 */ 5050 5017 if (q->elevator) { 5018 + ctx = xa_load(elv_tbl, q->id); 5019 + if (WARN_ON_ONCE(!ctx)) 5020 + return -ENOENT; 5051 5021 5052 - ret = xa_insert(elv_tbl, q->id, q->elevator->type, GFP_KERNEL); 5053 - if (WARN_ON_ONCE(ret)) 5054 - return ret; 5022 + ctx->name = q->elevator->type->elevator_name; 5055 5023 5056 5024 /* 5057 5025 * Before we switch elevator to 'none', take a reference to ··· 5063 5029 */ 5064 5030 __elevator_get(q->elevator->type); 5065 5031 5032 + /* 5033 + * Store elevator type so that we can release the reference 5034 + * taken above later. 
5035 + */ 5036 + ctx->type = q->elevator->type; 5066 5037 elevator_set_none(q); 5067 5038 } 5068 - return ret; 5039 + return 0; 5069 5040 } 5070 5041 5071 5042 static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, ··· 5080 5041 int prev_nr_hw_queues = set->nr_hw_queues; 5081 5042 unsigned int memflags; 5082 5043 int i; 5083 - struct xarray elv_tbl, et_tbl; 5044 + struct xarray elv_tbl; 5084 5045 bool queues_frozen = false; 5085 5046 5086 5047 lockdep_assert_held(&set->tag_list_lock); ··· 5094 5055 5095 5056 memflags = memalloc_noio_save(); 5096 5057 5097 - xa_init(&et_tbl); 5098 - if (blk_mq_alloc_sched_tags_batch(&et_tbl, set, nr_hw_queues) < 0) 5099 - goto out_memalloc_restore; 5100 - 5101 5058 xa_init(&elv_tbl); 5059 + if (blk_mq_alloc_sched_ctx_batch(&elv_tbl, set) < 0) 5060 + goto out_free_ctx; 5061 + 5062 + if (blk_mq_alloc_sched_res_batch(&elv_tbl, set, nr_hw_queues) < 0) 5063 + goto out_free_ctx; 5102 5064 5103 5065 list_for_each_entry(q, &set->tag_list, tag_set_list) { 5104 5066 blk_mq_debugfs_unregister_hctxs(q); ··· 5145 5105 /* switch_back expects queue to be frozen */ 5146 5106 if (!queues_frozen) 5147 5107 blk_mq_freeze_queue_nomemsave(q); 5148 - blk_mq_elv_switch_back(q, &elv_tbl, &et_tbl); 5108 + blk_mq_elv_switch_back(q, &elv_tbl); 5149 5109 } 5150 5110 5151 5111 list_for_each_entry(q, &set->tag_list, tag_set_list) { ··· 5156 5116 blk_mq_add_hw_queues_cpuhp(q); 5157 5117 } 5158 5118 5119 + out_free_ctx: 5120 + blk_mq_free_sched_ctx_batch(&elv_tbl); 5159 5121 xa_destroy(&elv_tbl); 5160 - xa_destroy(&et_tbl); 5161 - out_memalloc_restore: 5162 5122 memalloc_noio_restore(memflags); 5163 5123 5164 5124 /* Free the excess tags when nr_hw_queues shrink. */ ··· 5208 5168 { 5209 5169 if (!blk_mq_can_poll(q)) 5210 5170 return 0; 5211 - return blk_hctx_poll(q, xa_load(&q->hctx_table, cookie), iob, flags); 5171 + return blk_hctx_poll(q, q->queue_hw_ctx[cookie], iob, flags); 5212 5172 } 5213 5173 5214 5174 int blk_rq_poll(struct request *rq, struct io_comp_batch *iob,
+1 -1
block/blk-mq.h
··· 84 84 enum hctx_type type, 85 85 unsigned int cpu) 86 86 { 87 - return xa_load(&q->hctx_table, q->tag_set->map[type].mq_map[cpu]); 87 + return queue_hctx((q), (q->tag_set->map[type].mq_map[cpu])); 88 88 } 89 89 90 90 static inline enum hctx_type blk_mq_get_hctx_type(blk_opf_t opf)
+25 -2
block/blk-settings.c
··· 123 123 return 0; 124 124 } 125 125 126 + /* 127 + * Maximum size of I/O that needs a block layer integrity buffer. Limited 128 + * by the number of intervals for which we can fit the integrity buffer into 129 + * the buffer size. Because the buffer is a single segment it is also limited 130 + * by the maximum segment size. 131 + */ 132 + static inline unsigned int max_integrity_io_size(struct queue_limits *lim) 133 + { 134 + return min_t(unsigned int, lim->max_segment_size, 135 + (BLK_INTEGRITY_MAX_SIZE / lim->integrity.metadata_size) << 136 + lim->integrity.interval_exp); 137 + } 138 + 126 139 static int blk_validate_integrity_limits(struct queue_limits *lim) 127 140 { 128 141 struct blk_integrity *bi = &lim->integrity; ··· 206 193 lim->dma_alignment = max(lim->dma_alignment, 207 194 (1U << bi->interval_exp) - 1); 208 195 } 196 + 197 + /* 198 + * The block layer automatically adds integrity data for bios that don't 199 + * already have it. Limit the I/O size so that a single maximum size 200 + * metadata segment can cover the integrity data for the entire I/O. 201 + */ 202 + lim->max_sectors = min(lim->max_sectors, 203 + max_integrity_io_size(lim) >> SECTOR_SHIFT); 209 204 210 205 return 0; 211 206 } ··· 488 467 return -EINVAL; 489 468 } 490 469 491 - /* setup min segment size for building new segment in fast path */ 470 + /* setup max segment size for building new segment in fast path */ 492 471 if (lim->seg_boundary_mask > lim->max_segment_size - 1) 493 472 seg_size = lim->max_segment_size; 494 473 else 495 474 seg_size = lim->seg_boundary_mask + 1; 496 - lim->min_segment_size = min_t(unsigned int, seg_size, PAGE_SIZE); 475 + lim->max_fast_segment_size = min_t(unsigned int, seg_size, PAGE_SIZE); 497 476 498 477 /* 499 478 * We require drivers to at least do logical block aligned I/O, but ··· 555 534 struct queue_limits *lim) 556 535 { 557 536 int error; 537 + 538 + lockdep_assert_held(&q->limits_lock); 558 539 559 540 error = blk_validate_limits(lim); 560 541 if (error)
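The blk-settings.c hunk above caps max_sectors so that the integrity data for an entire I/O fits in one buffer of at most BLK_INTEGRITY_MAX_SIZE bytes (and a single segment). A worked example of the max_integrity_io_size() arithmetic with assumed numbers is below; BLK_INTEGRITY_MAX_SIZE is a constant introduced by this series whose value is assumed to be 64 KiB here, and the 8 bytes of metadata per 4096-byte interval and the segment size limit are likewise assumptions.

/*
 * Worked example of the max_integrity_io_size() calculation from the
 * blk-settings.c hunk above. BLK_INTEGRITY_MAX_SIZE is assumed to be
 * 64 KiB purely for illustration, as are the other values.
 */
#include <stdio.h>

#define ASSUMED_BLK_INTEGRITY_MAX_SIZE	(64u * 1024)

int main(void)
{
	unsigned int metadata_size = 8;			/* PI bytes per interval */
	unsigned int interval_exp = 12;			/* 4096-byte protection intervals */
	unsigned int max_segment_size = 4u * 1024 * 1024;	/* assumed queue limit */

	/* data bytes whose metadata fits in one integrity buffer */
	unsigned int by_buffer =
		(ASSUMED_BLK_INTEGRITY_MAX_SIZE / metadata_size) << interval_exp;
	unsigned int max_io = by_buffer < max_segment_size ?
			      by_buffer : max_segment_size;

	printf("buffer-limited I/O size: %u bytes\n", by_buffer);
	printf("max_sectors capped at:   %u sectors\n", max_io >> 9);
	return 0;
}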
+8 -18
block/blk-sysfs.c
··· 143 143 { 144 144 unsigned long ra_kb; 145 145 ssize_t ret; 146 - unsigned int memflags; 147 146 struct request_queue *q = disk->queue; 148 147 149 148 ret = queue_var_store(&ra_kb, page, count); 150 149 if (ret < 0) 151 150 return ret; 152 151 /* 153 - * ->ra_pages is protected by ->limits_lock because it is usually 154 - * calculated from the queue limits by queue_limits_commit_update. 152 + * The ->ra_pages change below is protected by ->limits_lock because it 153 + * is usually calculated from the queue limits by 154 + * queue_limits_commit_update(). 155 + * 156 + * bdi->ra_pages reads are not serialized against bdi->ra_pages writes. 157 + * Use WRITE_ONCE() to write bdi->ra_pages once. 155 158 */ 156 159 mutex_lock(&q->limits_lock); 157 - memflags = blk_mq_freeze_queue(q); 158 - disk->bdi->ra_pages = ra_kb >> (PAGE_SHIFT - 10); 160 + WRITE_ONCE(disk->bdi->ra_pages, ra_kb >> (PAGE_SHIFT - 10)); 159 161 mutex_unlock(&q->limits_lock); 160 - blk_mq_unfreeze_queue(q, memflags); 161 162 162 163 return ret; 163 164 } ··· 376 375 size_t count) 377 376 { 378 377 unsigned long nm; 379 - unsigned int memflags; 380 378 struct request_queue *q = disk->queue; 381 379 ssize_t ret = queue_var_store(&nm, page, count); 382 380 383 381 if (ret < 0) 384 382 return ret; 385 383 386 - memflags = blk_mq_freeze_queue(q); 387 384 blk_queue_flag_clear(QUEUE_FLAG_NOMERGES, q); 388 385 blk_queue_flag_clear(QUEUE_FLAG_NOXMERGES, q); 389 386 if (nm == 2) 390 387 blk_queue_flag_set(QUEUE_FLAG_NOMERGES, q); 391 388 else if (nm) 392 389 blk_queue_flag_set(QUEUE_FLAG_NOXMERGES, q); 393 - blk_mq_unfreeze_queue(q, memflags); 394 390 395 391 return ret; 396 392 } ··· 407 409 #ifdef CONFIG_SMP 408 410 struct request_queue *q = disk->queue; 409 411 unsigned long val; 410 - unsigned int memflags; 411 412 412 413 ret = queue_var_store(&val, page, count); 413 414 if (ret < 0) ··· 418 421 * are accessed individually using atomic test_bit operation. So we 419 422 * don't grab any lock while updating these flags. 420 423 */ 421 - memflags = blk_mq_freeze_queue(q); 422 424 if (val == 2) { 423 425 blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, q); 424 426 blk_queue_flag_set(QUEUE_FLAG_SAME_FORCE, q); ··· 428 432 blk_queue_flag_clear(QUEUE_FLAG_SAME_COMP, q); 429 433 blk_queue_flag_clear(QUEUE_FLAG_SAME_FORCE, q); 430 434 } 431 - blk_mq_unfreeze_queue(q, memflags); 432 435 #endif 433 436 return ret; 434 437 } ··· 441 446 static ssize_t queue_poll_store(struct gendisk *disk, const char *page, 442 447 size_t count) 443 448 { 444 - unsigned int memflags; 445 449 ssize_t ret = count; 446 450 struct request_queue *q = disk->queue; 447 451 448 - memflags = blk_mq_freeze_queue(q); 449 452 if (!(q->limits.features & BLK_FEAT_POLL)) { 450 453 ret = -EINVAL; 451 454 goto out; ··· 452 459 pr_info_ratelimited("writes to the poll attribute are ignored.\n"); 453 460 pr_info_ratelimited("please use driver specific parameters instead.\n"); 454 461 out: 455 - blk_mq_unfreeze_queue(q, memflags); 456 462 return ret; 457 463 } 458 464 ··· 464 472 static ssize_t queue_io_timeout_store(struct gendisk *disk, const char *page, 465 473 size_t count) 466 474 { 467 - unsigned int val, memflags; 475 + unsigned int val; 468 476 int err; 469 477 struct request_queue *q = disk->queue; 470 478 ··· 472 480 if (err || val == 0) 473 481 return -EINVAL; 474 482 475 - memflags = blk_mq_freeze_queue(q); 476 483 blk_queue_rq_timeout(q, msecs_to_jiffies(val)); 477 - blk_mq_unfreeze_queue(q, memflags); 478 484 479 485 return count; 480 486 }
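The blk-sysfs.c hunk above stops freezing the queue around several attribute writes; for read_ahead_kb it instead annotates the single-word bdi->ra_pages update with WRITE_ONCE(), since readers are not serialized against it. A tiny user-space analogue of the READ_ONCE()/WRITE_ONCE() pairing is below; the macro definitions only mimic the kernel's intent for a word-sized scalar, and the variable is hypothetical.

/*
 * User-space analogue of the READ_ONCE()/WRITE_ONCE() pairing used for
 * the ra_pages update above. These macros mimic the kernel's behaviour
 * for word-sized scalars (a single access the compiler may not split,
 * cache, or re-read); the variable name is hypothetical.
 */
#include <stdio.h>

#define WRITE_ONCE(x, val)	(*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)		(*(volatile __typeof__(x) *)&(x))

static unsigned long ra_pages;

int main(void)
{
	/* writer: one plain store */
	WRITE_ONCE(ra_pages, 128 >> (12 - 10));	/* 128 KiB -> 32 pages (4K pages) */

	/* reader: one plain load */
	printf("ra_pages = %lu\n", READ_ONCE(ra_pages));
	return 0;
}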
+14 -31
block/blk-throttle.c
··· 12 12 #include <linux/blktrace_api.h> 13 13 #include "blk.h" 14 14 #include "blk-cgroup-rwstat.h" 15 - #include "blk-stat.h" 16 15 #include "blk-throttle.h" 17 16 18 17 /* Max dispatch from a group in 1 round */ ··· 21 22 #define THROTL_QUANTUM 32 22 23 23 24 /* Throttling is performed over a slice and after that slice is renewed */ 24 - #define DFL_THROTL_SLICE_HD (HZ / 10) 25 - #define DFL_THROTL_SLICE_SSD (HZ / 50) 26 - #define MAX_THROTL_SLICE (HZ) 25 + #define DFL_THROTL_SLICE (HZ / 10) 27 26 28 27 /* A workqueue to queue throttle related work */ 29 28 static struct workqueue_struct *kthrotld_workqueue; ··· 38 41 /* Total Number of queued bios on READ and WRITE lists */ 39 42 unsigned int nr_queued[2]; 40 43 41 - unsigned int throtl_slice; 42 - 43 44 /* Work for dispatching throttled bios */ 44 45 struct work_struct dispatch_work; 45 - 46 - bool track_bio_latency; 47 46 }; 48 47 49 48 static void throtl_pending_timer_fn(struct timer_list *t); ··· 444 451 static void throtl_schedule_pending_timer(struct throtl_service_queue *sq, 445 452 unsigned long expires) 446 453 { 447 - unsigned long max_expire = jiffies + 8 * sq_to_td(sq)->throtl_slice; 454 + unsigned long max_expire = jiffies + 8 * DFL_THROTL_SLICE; 448 455 449 456 /* 450 457 * Since we are adjusting the throttle limit dynamically, the sleep ··· 512 519 if (time_after(start, tg->slice_start[rw])) 513 520 tg->slice_start[rw] = start; 514 521 515 - tg->slice_end[rw] = jiffies + tg->td->throtl_slice; 522 + tg->slice_end[rw] = jiffies + DFL_THROTL_SLICE; 516 523 throtl_log(&tg->service_queue, 517 524 "[%c] new slice with credit start=%lu end=%lu jiffies=%lu", 518 525 rw == READ ? 'R' : 'W', tg->slice_start[rw], ··· 527 534 tg->io_disp[rw] = 0; 528 535 } 529 536 tg->slice_start[rw] = jiffies; 530 - tg->slice_end[rw] = jiffies + tg->td->throtl_slice; 537 + tg->slice_end[rw] = jiffies + DFL_THROTL_SLICE; 531 538 532 539 throtl_log(&tg->service_queue, 533 540 "[%c] new slice start=%lu end=%lu jiffies=%lu", ··· 538 545 static inline void throtl_set_slice_end(struct throtl_grp *tg, bool rw, 539 546 unsigned long jiffy_end) 540 547 { 541 - tg->slice_end[rw] = roundup(jiffy_end, tg->td->throtl_slice); 548 + tg->slice_end[rw] = roundup(jiffy_end, DFL_THROTL_SLICE); 542 549 } 543 550 544 551 static inline void throtl_extend_slice(struct throtl_grp *tg, bool rw, ··· 669 676 * sooner, then we need to reduce slice_end. A high bogus slice_end 670 677 * is bad because it does not allow new slice to start. 671 678 */ 672 - throtl_set_slice_end(tg, rw, jiffies + tg->td->throtl_slice); 679 + throtl_set_slice_end(tg, rw, jiffies + DFL_THROTL_SLICE); 673 680 674 681 time_elapsed = rounddown(jiffies - tg->slice_start[rw], 675 - tg->td->throtl_slice); 682 + DFL_THROTL_SLICE); 676 683 /* Don't trim slice until at least 2 slices are used */ 677 - if (time_elapsed < tg->td->throtl_slice * 2) 684 + if (time_elapsed < DFL_THROTL_SLICE * 2) 678 685 return; 679 686 680 687 /* ··· 685 692 * lower rate than expected. Therefore, other than the above rounddown, 686 693 * one extra slice is preserved for deviation. 687 694 */ 688 - time_elapsed -= tg->td->throtl_slice; 695 + time_elapsed -= DFL_THROTL_SLICE; 689 696 bytes_trim = throtl_trim_bps(tg, rw, time_elapsed); 690 697 io_trim = throtl_trim_iops(tg, rw, time_elapsed); 691 698 if (!bytes_trim && !io_trim) ··· 695 702 696 703 throtl_log(&tg->service_queue, 697 704 "[%c] trim slice nr=%lu bytes=%lld io=%d start=%lu end=%lu jiffies=%lu", 698 - rw == READ ? 
'R' : 'W', time_elapsed / tg->td->throtl_slice, 705 + rw == READ ? 'R' : 'W', time_elapsed / DFL_THROTL_SLICE, 699 706 bytes_trim, io_trim, tg->slice_start[rw], tg->slice_end[rw], 700 707 jiffies); 701 708 } ··· 766 773 jiffy_elapsed = jiffies - tg->slice_start[rw]; 767 774 768 775 /* Round up to the next throttle slice, wait time must be nonzero */ 769 - jiffy_elapsed_rnd = roundup(jiffy_elapsed + 1, tg->td->throtl_slice); 776 + jiffy_elapsed_rnd = roundup(jiffy_elapsed + 1, DFL_THROTL_SLICE); 770 777 io_allowed = calculate_io_allowed(iops_limit, jiffy_elapsed_rnd); 771 778 if (io_allowed > 0 && tg->io_disp[rw] + 1 <= io_allowed) 772 779 return 0; ··· 792 799 793 800 /* Slice has just started. Consider one slice interval */ 794 801 if (!jiffy_elapsed) 795 - jiffy_elapsed_rnd = tg->td->throtl_slice; 802 + jiffy_elapsed_rnd = DFL_THROTL_SLICE; 796 803 797 - jiffy_elapsed_rnd = roundup(jiffy_elapsed_rnd, tg->td->throtl_slice); 804 + jiffy_elapsed_rnd = roundup(jiffy_elapsed_rnd, DFL_THROTL_SLICE); 798 805 bytes_allowed = calculate_bytes_allowed(bps_limit, jiffy_elapsed_rnd); 799 806 /* Need to consider the case of bytes_allowed overflow. */ 800 807 if ((bytes_allowed > 0 && tg->bytes_disp[rw] + bio_size <= bytes_allowed) ··· 846 853 sq_queued(&tg->service_queue, rw) == 0) 847 854 throtl_start_new_slice(tg, rw, true); 848 855 else 849 - throtl_extend_slice(tg, rw, jiffies + tg->td->throtl_slice); 856 + throtl_extend_slice(tg, rw, jiffies + DFL_THROTL_SLICE); 850 857 } 851 858 852 859 static unsigned long tg_dispatch_bps_time(struct throtl_grp *tg, struct bio *bio) ··· 1331 1338 if (ret) { 1332 1339 q->td = NULL; 1333 1340 kfree(td); 1334 - goto out; 1335 1341 } 1336 1342 1337 - if (blk_queue_nonrot(q)) 1338 - td->throtl_slice = DFL_THROTL_SLICE_SSD; 1339 - else 1340 - td->throtl_slice = DFL_THROTL_SLICE_HD; 1341 - td->track_bio_latency = !queue_is_mq(q); 1342 - if (!td->track_bio_latency) 1343 - blk_stat_enable_accounting(q); 1344 - 1345 - out: 1346 1343 blk_mq_unquiesce_queue(disk->queue); 1347 1344 blk_mq_unfreeze_queue(disk->queue, memflags); 1348 1345
+729 -259
block/blk-zoned.c
··· 33 33 ZONE_COND_NAME(READONLY), 34 34 ZONE_COND_NAME(FULL), 35 35 ZONE_COND_NAME(OFFLINE), 36 + ZONE_COND_NAME(ACTIVE), 36 37 }; 37 38 #undef ZONE_COND_NAME 38 39 39 40 /* 40 41 * Per-zone write plug. 41 42 * @node: hlist_node structure for managing the plug using a hash table. 43 + * @bio_list: The list of BIOs that are currently plugged. 44 + * @bio_work: Work struct to handle issuing of plugged BIOs 45 + * @rcu_head: RCU head to free zone write plugs with an RCU grace period. 46 + * @disk: The gendisk the plug belongs to. 47 + * @lock: Spinlock to atomically manipulate the plug. 42 48 * @ref: Zone write plug reference counter. A zone write plug reference is 43 49 * always at least 1 when the plug is hashed in the disk plug hash table. 44 50 * The reference is incremented whenever a new BIO needing plugging is ··· 54 48 * reference is dropped whenever the zone of the zone write plug is reset, 55 49 * finished and when the zone becomes full (last write BIO to the zone 56 50 * completes). 57 - * @lock: Spinlock to atomically manipulate the plug. 58 51 * @flags: Flags indicating the plug state. 59 52 * @zone_no: The number of the zone the plug is managing. 60 53 * @wp_offset: The zone write pointer location relative to the start of the zone 61 54 * as a number of 512B sectors. 62 - * @bio_list: The list of BIOs that are currently plugged. 63 - * @bio_work: Work struct to handle issuing of plugged BIOs 64 - * @rcu_head: RCU head to free zone write plugs with an RCU grace period. 65 - * @disk: The gendisk the plug belongs to. 55 + * @cond: Condition of the zone 66 56 */ 67 57 struct blk_zone_wplug { 68 58 struct hlist_node node; 69 - refcount_t ref; 70 - spinlock_t lock; 71 - unsigned int flags; 72 - unsigned int zone_no; 73 - unsigned int wp_offset; 74 59 struct bio_list bio_list; 75 60 struct work_struct bio_work; 76 61 struct rcu_head rcu_head; 77 62 struct gendisk *disk; 63 + spinlock_t lock; 64 + refcount_t ref; 65 + unsigned int flags; 66 + unsigned int zone_no; 67 + unsigned int wp_offset; 68 + enum blk_zone_cond cond; 78 69 }; 70 + 71 + static inline bool disk_need_zone_resources(struct gendisk *disk) 72 + { 73 + /* 74 + * All request-based zoned devices need zone resources so that the 75 + * block layer can automatically handle write BIO plugging. BIO-based 76 + * device drivers (e.g. DM devices) are normally responsible for 77 + * handling zone write ordering and do not need zone resources, unless 78 + * the driver requires zone append emulation. 
79 + */ 80 + return queue_is_mq(disk->queue) || 81 + queue_emulates_zone_append(disk->queue); 82 + } 83 + 84 + static inline unsigned int disk_zone_wplugs_hash_size(struct gendisk *disk) 85 + { 86 + return 1U << disk->zone_wplugs_hash_bits; 87 + } 79 88 80 89 /* 81 90 * Zone write plug flags bits: ··· 130 109 } 131 110 EXPORT_SYMBOL_GPL(blk_zone_cond_str); 132 111 133 - struct disk_report_zones_cb_args { 134 - struct gendisk *disk; 135 - report_zones_cb user_cb; 136 - void *user_data; 112 + static void blk_zone_set_cond(u8 *zones_cond, unsigned int zno, 113 + enum blk_zone_cond cond) 114 + { 115 + if (!zones_cond) 116 + return; 117 + 118 + switch (cond) { 119 + case BLK_ZONE_COND_IMP_OPEN: 120 + case BLK_ZONE_COND_EXP_OPEN: 121 + case BLK_ZONE_COND_CLOSED: 122 + zones_cond[zno] = BLK_ZONE_COND_ACTIVE; 123 + return; 124 + case BLK_ZONE_COND_NOT_WP: 125 + case BLK_ZONE_COND_EMPTY: 126 + case BLK_ZONE_COND_FULL: 127 + case BLK_ZONE_COND_OFFLINE: 128 + case BLK_ZONE_COND_READONLY: 129 + default: 130 + zones_cond[zno] = cond; 131 + return; 132 + } 133 + } 134 + 135 + static void disk_zone_set_cond(struct gendisk *disk, sector_t sector, 136 + enum blk_zone_cond cond) 137 + { 138 + u8 *zones_cond; 139 + 140 + rcu_read_lock(); 141 + zones_cond = rcu_dereference(disk->zones_cond); 142 + if (zones_cond) { 143 + unsigned int zno = disk_zone_no(disk, sector); 144 + 145 + /* 146 + * The condition of a conventional, readonly and offline zones 147 + * never changes, so do nothing if the target zone is in one of 148 + * these conditions. 149 + */ 150 + switch (zones_cond[zno]) { 151 + case BLK_ZONE_COND_NOT_WP: 152 + case BLK_ZONE_COND_READONLY: 153 + case BLK_ZONE_COND_OFFLINE: 154 + break; 155 + default: 156 + blk_zone_set_cond(zones_cond, zno, cond); 157 + break; 158 + } 159 + } 160 + rcu_read_unlock(); 161 + } 162 + 163 + /** 164 + * bdev_zone_is_seq - check if a sector belongs to a sequential write zone 165 + * @bdev: block device to check 166 + * @sector: sector number 167 + * 168 + * Check if @sector on @bdev is contained in a sequential write required zone. 169 + */ 170 + bool bdev_zone_is_seq(struct block_device *bdev, sector_t sector) 171 + { 172 + struct gendisk *disk = bdev->bd_disk; 173 + unsigned int zno = disk_zone_no(disk, sector); 174 + bool is_seq = false; 175 + u8 *zones_cond; 176 + 177 + if (!bdev_is_zoned(bdev)) 178 + return false; 179 + 180 + rcu_read_lock(); 181 + zones_cond = rcu_dereference(disk->zones_cond); 182 + if (zones_cond && zno < disk->nr_zones) 183 + is_seq = zones_cond[zno] != BLK_ZONE_COND_NOT_WP; 184 + rcu_read_unlock(); 185 + 186 + return is_seq; 187 + } 188 + EXPORT_SYMBOL_GPL(bdev_zone_is_seq); 189 + 190 + /* 191 + * Zone report arguments for block device drivers report_zones operation. 192 + * @cb: report_zones_cb callback for each reported zone. 193 + * @data: Private data passed to report_zones_cb. 
194 + */ 195 + struct blk_report_zones_args { 196 + report_zones_cb cb; 197 + void *data; 198 + bool report_active; 137 199 }; 138 200 139 - static void disk_zone_wplug_sync_wp_offset(struct gendisk *disk, 140 - struct blk_zone *zone); 141 - 142 - static int disk_report_zones_cb(struct blk_zone *zone, unsigned int idx, 143 - void *data) 201 + static int blkdev_do_report_zones(struct block_device *bdev, sector_t sector, 202 + unsigned int nr_zones, 203 + struct blk_report_zones_args *args) 144 204 { 145 - struct disk_report_zones_cb_args *args = data; 146 - struct gendisk *disk = args->disk; 205 + struct gendisk *disk = bdev->bd_disk; 147 206 148 - if (disk->zone_wplugs_hash) 149 - disk_zone_wplug_sync_wp_offset(disk, zone); 207 + if (!bdev_is_zoned(bdev) || WARN_ON_ONCE(!disk->fops->report_zones)) 208 + return -EOPNOTSUPP; 150 209 151 - if (!args->user_cb) 210 + if (!nr_zones || sector >= get_capacity(disk)) 152 211 return 0; 153 212 154 - return args->user_cb(zone, idx, args->user_data); 213 + return disk->fops->report_zones(disk, sector, nr_zones, args); 155 214 } 156 215 157 216 /** ··· 256 155 int blkdev_report_zones(struct block_device *bdev, sector_t sector, 257 156 unsigned int nr_zones, report_zones_cb cb, void *data) 258 157 { 259 - struct gendisk *disk = bdev->bd_disk; 260 - sector_t capacity = get_capacity(disk); 261 - struct disk_report_zones_cb_args args = { 262 - .disk = disk, 263 - .user_cb = cb, 264 - .user_data = data, 158 + struct blk_report_zones_args args = { 159 + .cb = cb, 160 + .data = data, 265 161 }; 266 162 267 - if (!bdev_is_zoned(bdev) || WARN_ON_ONCE(!disk->fops->report_zones)) 268 - return -EOPNOTSUPP; 269 - 270 - if (!nr_zones || sector >= capacity) 271 - return 0; 272 - 273 - return disk->fops->report_zones(disk, sector, nr_zones, 274 - disk_report_zones_cb, &args); 163 + return blkdev_do_report_zones(bdev, sector, nr_zones, &args); 275 164 } 276 165 EXPORT_SYMBOL_GPL(blkdev_report_zones); 277 166 ··· 357 266 } 358 267 359 268 /* 360 - * BLKREPORTZONE ioctl processing. 269 + * Mask of valid input flags for BLKREPORTZONEV2 ioctl. 270 + */ 271 + #define BLK_ZONE_REPV2_INPUT_FLAGS BLK_ZONE_REP_CACHED 272 + 273 + /* 274 + * BLKREPORTZONE and BLKREPORTZONEV2 ioctl processing. 361 275 * Called from blkdev_ioctl. 
362 276 */ 363 277 int blkdev_report_zones_ioctl(struct block_device *bdev, unsigned int cmd, ··· 386 290 return -EINVAL; 387 291 388 292 args.zones = argp + sizeof(struct blk_zone_report); 389 - ret = blkdev_report_zones(bdev, rep.sector, rep.nr_zones, 390 - blkdev_copy_zone_to_user, &args); 293 + 294 + switch (cmd) { 295 + case BLKREPORTZONE: 296 + ret = blkdev_report_zones(bdev, rep.sector, rep.nr_zones, 297 + blkdev_copy_zone_to_user, &args); 298 + break; 299 + case BLKREPORTZONEV2: 300 + if (rep.flags & ~BLK_ZONE_REPV2_INPUT_FLAGS) 301 + return -EINVAL; 302 + ret = blkdev_report_zones_cached(bdev, rep.sector, rep.nr_zones, 303 + blkdev_copy_zone_to_user, &args); 304 + break; 305 + default: 306 + return -EINVAL; 307 + } 308 + 391 309 if (ret < 0) 392 310 return ret; 393 311 ··· 511 401 { 512 402 struct blk_zone_wplug *zwplg; 513 403 unsigned long flags; 404 + u8 *zones_cond; 514 405 unsigned int idx = 515 406 hash_32(zwplug->zone_no, disk->zone_wplugs_hash_bits); 516 407 ··· 527 416 return false; 528 417 } 529 418 } 419 + 420 + /* 421 + * Set the zone condition: if we do not yet have a zones_cond array 422 + * attached to the disk, then this is a zone write plug insert from the 423 + * first call to blk_revalidate_disk_zones(), in which case the zone is 424 + * necessarilly in the active condition. 425 + */ 426 + zones_cond = rcu_dereference_check(disk->zones_cond, 427 + lockdep_is_held(&disk->zone_wplugs_lock)); 428 + if (zones_cond) 429 + zwplug->cond = zones_cond[zwplug->zone_no]; 430 + else 431 + zwplug->cond = BLK_ZONE_COND_ACTIVE; 432 + 530 433 hlist_add_head_rcu(&zwplug->node, &disk->zone_wplugs_hash[idx]); 531 434 atomic_inc(&disk->nr_zone_wplugs); 532 435 spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags); ··· 640 515 641 516 /* 642 517 * Mark the zone write plug as unhashed and drop the extra reference we 643 - * took when the plug was inserted in the hash table. 518 + * took when the plug was inserted in the hash table. Also update the 519 + * disk zone condition array with the current condition of the zone 520 + * write plug. 644 521 */ 645 522 zwplug->flags |= BLK_ZONE_WPLUG_UNHASHED; 646 523 spin_lock_irqsave(&disk->zone_wplugs_lock, flags); 524 + blk_zone_set_cond(rcu_dereference_check(disk->zones_cond, 525 + lockdep_is_held(&disk->zone_wplugs_lock)), 526 + zwplug->zone_no, zwplug->cond); 647 527 hlist_del_init_rcu(&zwplug->node); 648 528 atomic_dec(&disk->nr_zone_wplugs); 649 529 spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags); ··· 730 600 bio_clear_flag(bio, BIO_ZONE_WRITE_PLUGGING); 731 601 bio_io_error(bio); 732 602 disk_put_zone_wplug(zwplug); 733 - /* Drop the reference taken by disk_zone_wplug_add_bio(() */ 603 + /* Drop the reference taken by disk_zone_wplug_add_bio(). */ 734 604 blk_queue_exit(q); 735 605 } 736 606 ··· 751 621 } 752 622 753 623 /* 624 + * Update a zone write plug condition based on the write pointer offset. 625 + */ 626 + static void disk_zone_wplug_update_cond(struct gendisk *disk, 627 + struct blk_zone_wplug *zwplug) 628 + { 629 + lockdep_assert_held(&zwplug->lock); 630 + 631 + if (disk_zone_wplug_is_full(disk, zwplug)) 632 + zwplug->cond = BLK_ZONE_COND_FULL; 633 + else if (!zwplug->wp_offset) 634 + zwplug->cond = BLK_ZONE_COND_EMPTY; 635 + else 636 + zwplug->cond = BLK_ZONE_COND_ACTIVE; 637 + } 638 + 639 + /* 754 640 * Set a zone write plug write pointer offset to the specified value. 
755 641 * This aborts all plugged BIOs, which is fine as this function is called for 756 642 * a zone reset operation, a zone finish operation or if the zone needs a wp ··· 781 635 /* Update the zone write pointer and abort all plugged BIOs. */ 782 636 zwplug->flags &= ~BLK_ZONE_WPLUG_NEED_WP_UPDATE; 783 637 zwplug->wp_offset = wp_offset; 638 + disk_zone_wplug_update_cond(disk, zwplug); 639 + 784 640 disk_zone_wplug_abort(zwplug); 785 641 786 642 /* ··· 800 652 case BLK_ZONE_COND_IMP_OPEN: 801 653 case BLK_ZONE_COND_EXP_OPEN: 802 654 case BLK_ZONE_COND_CLOSED: 655 + case BLK_ZONE_COND_ACTIVE: 803 656 return zone->wp - zone->start; 804 - case BLK_ZONE_COND_FULL: 805 - return zone->len; 806 657 case BLK_ZONE_COND_EMPTY: 807 658 return 0; 659 + case BLK_ZONE_COND_FULL: 808 660 case BLK_ZONE_COND_NOT_WP: 809 661 case BLK_ZONE_COND_OFFLINE: 810 662 case BLK_ZONE_COND_READONLY: 811 663 default: 812 664 /* 813 - * Conventional, offline and read-only zones do not have a valid 814 - * write pointer. 665 + * Conventional, full, offline and read-only zones do not have 666 + * a valid write pointer. 815 667 */ 816 668 return UINT_MAX; 817 669 } 818 670 } 819 671 820 - static void disk_zone_wplug_sync_wp_offset(struct gendisk *disk, 821 - struct blk_zone *zone) 672 + static unsigned int disk_zone_wplug_sync_wp_offset(struct gendisk *disk, 673 + struct blk_zone *zone) 822 674 { 823 675 struct blk_zone_wplug *zwplug; 824 - unsigned long flags; 676 + unsigned int wp_offset = blk_zone_wp_offset(zone); 825 677 826 678 zwplug = disk_get_zone_wplug(disk, zone->start); 827 - if (!zwplug) 828 - return; 829 - 830 - spin_lock_irqsave(&zwplug->lock, flags); 831 - if (zwplug->flags & BLK_ZONE_WPLUG_NEED_WP_UPDATE) 832 - disk_zone_wplug_set_wp_offset(disk, zwplug, 833 - blk_zone_wp_offset(zone)); 834 - spin_unlock_irqrestore(&zwplug->lock, flags); 835 - 836 - disk_put_zone_wplug(zwplug); 837 - } 838 - 839 - static int disk_zone_sync_wp_offset(struct gendisk *disk, sector_t sector) 840 - { 841 - struct disk_report_zones_cb_args args = { 842 - .disk = disk, 843 - }; 844 - 845 - return disk->fops->report_zones(disk, sector, 1, 846 - disk_report_zones_cb, &args); 847 - } 848 - 849 - static bool blk_zone_wplug_handle_reset_or_finish(struct bio *bio, 850 - unsigned int wp_offset) 851 - { 852 - struct gendisk *disk = bio->bi_bdev->bd_disk; 853 - sector_t sector = bio->bi_iter.bi_sector; 854 - struct blk_zone_wplug *zwplug; 855 - unsigned long flags; 856 - 857 - /* Conventional zones cannot be reset nor finished. */ 858 - if (!bdev_zone_is_seq(bio->bi_bdev, sector)) { 859 - bio_io_error(bio); 860 - return true; 861 - } 862 - 863 - /* 864 - * No-wait reset or finish BIOs do not make much sense as the callers 865 - * issue these as blocking operations in most cases. To avoid issues 866 - * the BIO execution potentially failing with BLK_STS_AGAIN, warn about 867 - * REQ_NOWAIT being set and ignore that flag. 868 - */ 869 - if (WARN_ON_ONCE(bio->bi_opf & REQ_NOWAIT)) 870 - bio->bi_opf &= ~REQ_NOWAIT; 871 - 872 - /* 873 - * If we have a zone write plug, set its write pointer offset to 0 874 - * (reset case) or to the zone size (finish case). This will abort all 875 - * BIOs plugged for the target zone. It is fine as resetting or 876 - * finishing zones while writes are still in-flight will result in the 877 - * writes failing anyway. 
878 - */ 879 - zwplug = disk_get_zone_wplug(disk, sector); 880 679 if (zwplug) { 680 + unsigned long flags; 681 + 881 682 spin_lock_irqsave(&zwplug->lock, flags); 882 - disk_zone_wplug_set_wp_offset(disk, zwplug, wp_offset); 683 + if (zwplug->flags & BLK_ZONE_WPLUG_NEED_WP_UPDATE) 684 + disk_zone_wplug_set_wp_offset(disk, zwplug, wp_offset); 883 685 spin_unlock_irqrestore(&zwplug->lock, flags); 884 686 disk_put_zone_wplug(zwplug); 885 687 } 886 688 887 - return false; 689 + return wp_offset; 888 690 } 889 691 890 - static bool blk_zone_wplug_handle_reset_all(struct bio *bio) 692 + /** 693 + * disk_report_zone - Report one zone 694 + * @disk: Target disk 695 + * @zone: The zone to report 696 + * @idx: The index of the zone in the overall zone report 697 + * @args: report zones callback and data 698 + * 699 + * Description: 700 + * Helper function for block device drivers to report one zone of a zone 701 + * report initiated with blkdev_report_zones(). The zone being reported is 702 + * specified by @zone and used to update, if necessary, the zone write plug 703 + * information for the zone. If @args specifies a user callback function, 704 + * this callback is executed. 705 + */ 706 + int disk_report_zone(struct gendisk *disk, struct blk_zone *zone, 707 + unsigned int idx, struct blk_report_zones_args *args) 891 708 { 892 - struct gendisk *disk = bio->bi_bdev->bd_disk; 893 - struct blk_zone_wplug *zwplug; 894 - unsigned long flags; 895 - sector_t sector; 896 - 897 - /* 898 - * Set the write pointer offset of all zone write plugs to 0. This will 899 - * abort all plugged BIOs. It is fine as resetting zones while writes 900 - * are still in-flight will result in the writes failing anyway. 901 - */ 902 - for (sector = 0; sector < get_capacity(disk); 903 - sector += disk->queue->limits.chunk_sectors) { 904 - zwplug = disk_get_zone_wplug(disk, sector); 905 - if (zwplug) { 906 - spin_lock_irqsave(&zwplug->lock, flags); 907 - disk_zone_wplug_set_wp_offset(disk, zwplug, 0); 908 - spin_unlock_irqrestore(&zwplug->lock, flags); 909 - disk_put_zone_wplug(zwplug); 709 + if (args && args->report_active) { 710 + /* 711 + * If we come here, then this is a report zones as a fallback 712 + * for a cached report. So collapse the implicit open, explicit 713 + * open and closed conditions into the active zone condition. 
714 + */ 715 + switch (zone->cond) { 716 + case BLK_ZONE_COND_IMP_OPEN: 717 + case BLK_ZONE_COND_EXP_OPEN: 718 + case BLK_ZONE_COND_CLOSED: 719 + zone->cond = BLK_ZONE_COND_ACTIVE; 720 + break; 721 + default: 722 + break; 910 723 } 911 724 } 912 725 913 - return false; 726 + if (disk->zone_wplugs_hash) 727 + disk_zone_wplug_sync_wp_offset(disk, zone); 728 + 729 + if (args && args->cb) 730 + return args->cb(zone, idx, args->data); 731 + 732 + return 0; 733 + } 734 + EXPORT_SYMBOL_GPL(disk_report_zone); 735 + 736 + static int blkdev_report_zone_cb(struct blk_zone *zone, unsigned int idx, 737 + void *data) 738 + { 739 + memcpy(data, zone, sizeof(struct blk_zone)); 740 + return 0; 741 + } 742 + 743 + static int blkdev_report_zone_fallback(struct block_device *bdev, 744 + sector_t sector, struct blk_zone *zone) 745 + { 746 + struct blk_report_zones_args args = { 747 + .cb = blkdev_report_zone_cb, 748 + .data = zone, 749 + .report_active = true, 750 + }; 751 + int error; 752 + 753 + error = blkdev_do_report_zones(bdev, sector, 1, &args); 754 + if (error < 0) 755 + return error; 756 + if (error == 0) 757 + return -EIO; 758 + return 0; 759 + } 760 + 761 + /* 762 + * For devices that natively support zone append operations, we do not use zone 763 + * write plugging for zone append writes, which makes the zone condition 764 + * tracking invalid once zone append was used. In that case fall back to a 765 + * regular report zones to get correct information. 766 + */ 767 + static inline bool blkdev_has_cached_report_zones(struct block_device *bdev) 768 + { 769 + return disk_need_zone_resources(bdev->bd_disk) && 770 + (bdev_emulates_zone_append(bdev) || 771 + !test_bit(GD_ZONE_APPEND_USED, &bdev->bd_disk->state)); 772 + } 773 + 774 + /** 775 + * blkdev_get_zone_info - Get a single zone information from cached data 776 + * @bdev: Target block device 777 + * @sector: Sector contained by the target zone 778 + * @zone: zone structure to return the zone information 779 + * 780 + * Description: 781 + * Get the zone information for the zone containing @sector using the zone 782 + * write plug of the target zone, if one exist, or the disk zone condition 783 + * array otherwise. The zone condition may be reported as being 784 + * the BLK_ZONE_COND_ACTIVE condition for a zone that is in the implicit 785 + * open, explicit open or closed condition. 786 + * 787 + * Returns 0 on success and a negative error code on failure. 
788 + */ 789 + int blkdev_get_zone_info(struct block_device *bdev, sector_t sector, 790 + struct blk_zone *zone) 791 + { 792 + struct gendisk *disk = bdev->bd_disk; 793 + sector_t zone_sectors = bdev_zone_sectors(bdev); 794 + struct blk_zone_wplug *zwplug; 795 + unsigned long flags; 796 + u8 *zones_cond; 797 + 798 + if (!bdev_is_zoned(bdev)) 799 + return -EOPNOTSUPP; 800 + 801 + if (sector >= get_capacity(disk)) 802 + return -EINVAL; 803 + 804 + memset(zone, 0, sizeof(*zone)); 805 + sector = bdev_zone_start(bdev, sector); 806 + 807 + if (!blkdev_has_cached_report_zones(bdev)) 808 + return blkdev_report_zone_fallback(bdev, sector, zone); 809 + 810 + rcu_read_lock(); 811 + zones_cond = rcu_dereference(disk->zones_cond); 812 + if (!disk->zone_wplugs_hash || !zones_cond) { 813 + rcu_read_unlock(); 814 + return blkdev_report_zone_fallback(bdev, sector, zone); 815 + } 816 + zone->cond = zones_cond[disk_zone_no(disk, sector)]; 817 + rcu_read_unlock(); 818 + 819 + zone->start = sector; 820 + zone->len = zone_sectors; 821 + 822 + /* 823 + * If this is a conventional zone, we do not have a zone write plug and 824 + * can report the zone immediately. 825 + */ 826 + if (zone->cond == BLK_ZONE_COND_NOT_WP) { 827 + zone->type = BLK_ZONE_TYPE_CONVENTIONAL; 828 + zone->capacity = zone_sectors; 829 + zone->wp = ULLONG_MAX; 830 + return 0; 831 + } 832 + 833 + /* 834 + * This is a sequential write required zone. If the zone is read-only or 835 + * offline, only set the zone write pointer to an invalid value and 836 + * report the zone. 837 + */ 838 + zone->type = BLK_ZONE_TYPE_SEQWRITE_REQ; 839 + if (disk_zone_is_last(disk, zone)) 840 + zone->capacity = disk->last_zone_capacity; 841 + else 842 + zone->capacity = disk->zone_capacity; 843 + 844 + if (zone->cond == BLK_ZONE_COND_READONLY || 845 + zone->cond == BLK_ZONE_COND_OFFLINE) { 846 + zone->wp = ULLONG_MAX; 847 + return 0; 848 + } 849 + 850 + /* 851 + * If the zone does not have a zone write plug, it is either full or 852 + * empty, as we otherwise would have a zone write plug for it. In this 853 + * case, set the write pointer accordingly and report the zone. 854 + * Otherwise, if we have a zone write plug, use it. 855 + */ 856 + zwplug = disk_get_zone_wplug(disk, sector); 857 + if (!zwplug) { 858 + if (zone->cond == BLK_ZONE_COND_FULL) 859 + zone->wp = ULLONG_MAX; 860 + else 861 + zone->wp = sector; 862 + return 0; 863 + } 864 + 865 + spin_lock_irqsave(&zwplug->lock, flags); 866 + if (zwplug->flags & BLK_ZONE_WPLUG_NEED_WP_UPDATE) { 867 + spin_unlock_irqrestore(&zwplug->lock, flags); 868 + disk_put_zone_wplug(zwplug); 869 + return blkdev_report_zone_fallback(bdev, sector, zone); 870 + } 871 + zone->cond = zwplug->cond; 872 + zone->wp = sector + zwplug->wp_offset; 873 + spin_unlock_irqrestore(&zwplug->lock, flags); 874 + 875 + disk_put_zone_wplug(zwplug); 876 + 877 + return 0; 878 + } 879 + EXPORT_SYMBOL_GPL(blkdev_get_zone_info); 880 + 881 + /** 882 + * blkdev_report_zones_cached - Get cached zones information 883 + * @bdev: Target block device 884 + * @sector: Sector from which to report zones 885 + * @nr_zones: Maximum number of zones to report 886 + * @cb: Callback function called for each reported zone 887 + * @data: Private data for the callback function 888 + * 889 + * Description: 890 + * Similar to blkdev_report_zones() but instead of calling into the low level 891 + * device driver to get the zone report from the device, use 892 + * blkdev_get_zone_info() to generate the report from the disk zone write 893 + * plugs and zones condition array. 
Since calling this function without a 894 + * callback does not make sense, @cb must be specified. 895 + */ 896 + int blkdev_report_zones_cached(struct block_device *bdev, sector_t sector, 897 + unsigned int nr_zones, report_zones_cb cb, void *data) 898 + { 899 + struct gendisk *disk = bdev->bd_disk; 900 + sector_t capacity = get_capacity(disk); 901 + sector_t zone_sectors = bdev_zone_sectors(bdev); 902 + unsigned int idx = 0; 903 + struct blk_zone zone; 904 + int ret; 905 + 906 + if (!cb || !bdev_is_zoned(bdev) || 907 + WARN_ON_ONCE(!disk->fops->report_zones)) 908 + return -EOPNOTSUPP; 909 + 910 + if (!nr_zones || sector >= capacity) 911 + return 0; 912 + 913 + if (!blkdev_has_cached_report_zones(bdev)) { 914 + struct blk_report_zones_args args = { 915 + .cb = cb, 916 + .data = data, 917 + .report_active = true, 918 + }; 919 + 920 + return blkdev_do_report_zones(bdev, sector, nr_zones, &args); 921 + } 922 + 923 + for (sector = bdev_zone_start(bdev, sector); 924 + sector < capacity && idx < nr_zones; 925 + sector += zone_sectors, idx++) { 926 + ret = blkdev_get_zone_info(bdev, sector, &zone); 927 + if (ret) 928 + return ret; 929 + 930 + ret = cb(&zone, idx, data); 931 + if (ret) 932 + return ret; 933 + } 934 + 935 + return idx; 936 + } 937 + EXPORT_SYMBOL_GPL(blkdev_report_zones_cached); 938 + 939 + static void blk_zone_reset_bio_endio(struct bio *bio) 940 + { 941 + struct gendisk *disk = bio->bi_bdev->bd_disk; 942 + sector_t sector = bio->bi_iter.bi_sector; 943 + struct blk_zone_wplug *zwplug; 944 + 945 + /* 946 + * If we have a zone write plug, set its write pointer offset to 0. 947 + * This will abort all BIOs plugged for the target zone. It is fine as 948 + * resetting zones while writes are still in-flight will result in the 949 + * writes failing anyway. 950 + */ 951 + zwplug = disk_get_zone_wplug(disk, sector); 952 + if (zwplug) { 953 + unsigned long flags; 954 + 955 + spin_lock_irqsave(&zwplug->lock, flags); 956 + disk_zone_wplug_set_wp_offset(disk, zwplug, 0); 957 + spin_unlock_irqrestore(&zwplug->lock, flags); 958 + disk_put_zone_wplug(zwplug); 959 + } else { 960 + disk_zone_set_cond(disk, sector, BLK_ZONE_COND_EMPTY); 961 + } 962 + } 963 + 964 + static void blk_zone_reset_all_bio_endio(struct bio *bio) 965 + { 966 + struct gendisk *disk = bio->bi_bdev->bd_disk; 967 + sector_t capacity = get_capacity(disk); 968 + struct blk_zone_wplug *zwplug; 969 + unsigned long flags; 970 + sector_t sector; 971 + unsigned int i; 972 + 973 + if (atomic_read(&disk->nr_zone_wplugs)) { 974 + /* Update the condition of all zone write plugs. */ 975 + rcu_read_lock(); 976 + for (i = 0; i < disk_zone_wplugs_hash_size(disk); i++) { 977 + hlist_for_each_entry_rcu(zwplug, 978 + &disk->zone_wplugs_hash[i], 979 + node) { 980 + spin_lock_irqsave(&zwplug->lock, flags); 981 + disk_zone_wplug_set_wp_offset(disk, zwplug, 0); 982 + spin_unlock_irqrestore(&zwplug->lock, flags); 983 + } 984 + } 985 + rcu_read_unlock(); 986 + } 987 + 988 + /* Update the cached zone conditions. 
*/ 989 + for (sector = 0; sector < capacity; 990 + sector += bdev_zone_sectors(bio->bi_bdev)) 991 + disk_zone_set_cond(disk, sector, BLK_ZONE_COND_EMPTY); 992 + clear_bit(GD_ZONE_APPEND_USED, &disk->state); 993 + } 994 + 995 + static void blk_zone_finish_bio_endio(struct bio *bio) 996 + { 997 + struct block_device *bdev = bio->bi_bdev; 998 + struct gendisk *disk = bdev->bd_disk; 999 + sector_t sector = bio->bi_iter.bi_sector; 1000 + struct blk_zone_wplug *zwplug; 1001 + 1002 + /* 1003 + * If we have a zone write plug, set its write pointer offset to the 1004 + * zone size. This will abort all BIOs plugged for the target zone. It 1005 + * is fine as resetting zones while writes are still in-flight will 1006 + * result in the writes failing anyway. 1007 + */ 1008 + zwplug = disk_get_zone_wplug(disk, sector); 1009 + if (zwplug) { 1010 + unsigned long flags; 1011 + 1012 + spin_lock_irqsave(&zwplug->lock, flags); 1013 + disk_zone_wplug_set_wp_offset(disk, zwplug, 1014 + bdev_zone_sectors(bdev)); 1015 + spin_unlock_irqrestore(&zwplug->lock, flags); 1016 + disk_put_zone_wplug(zwplug); 1017 + } else { 1018 + disk_zone_set_cond(disk, sector, BLK_ZONE_COND_FULL); 1019 + } 1020 + } 1021 + 1022 + void blk_zone_mgmt_bio_endio(struct bio *bio) 1023 + { 1024 + /* If the BIO failed, we have nothing to do. */ 1025 + if (bio->bi_status != BLK_STS_OK) 1026 + return; 1027 + 1028 + switch (bio_op(bio)) { 1029 + case REQ_OP_ZONE_RESET: 1030 + blk_zone_reset_bio_endio(bio); 1031 + return; 1032 + case REQ_OP_ZONE_RESET_ALL: 1033 + blk_zone_reset_all_bio_endio(bio); 1034 + return; 1035 + case REQ_OP_ZONE_FINISH: 1036 + blk_zone_finish_bio_endio(bio); 1037 + return; 1038 + default: 1039 + return; 1040 + } 914 1041 } 915 1042 916 1043 static void disk_zone_wplug_schedule_bio_work(struct gendisk *disk, 917 1044 struct blk_zone_wplug *zwplug) 918 1045 { 1046 + lockdep_assert_held(&zwplug->lock); 1047 + 919 1048 /* 920 1049 * Take a reference on the zone write plug and schedule the submission 921 1050 * of the next plugged BIO. blk_zone_wplug_bio_work() will release the ··· 1207 782 struct blk_zone_wplug *zwplug, 1208 783 struct bio *bio, unsigned int nr_segs) 1209 784 { 1210 - bool schedule_bio_work = false; 1211 - 1212 785 /* 1213 786 * Grab an extra reference on the BIO request queue usage counter. 1214 787 * This reference will be reused to submit a request for the BIO for ··· 1223 800 bio_clear_polled(bio); 1224 801 1225 802 /* 1226 - * REQ_NOWAIT BIOs are always handled using the zone write plug BIO 1227 - * work, which can block. So clear the REQ_NOWAIT flag and schedule the 1228 - * work if this is the first BIO we are plugging. 1229 - */ 1230 - if (bio->bi_opf & REQ_NOWAIT) { 1231 - schedule_bio_work = !(zwplug->flags & BLK_ZONE_WPLUG_PLUGGED); 1232 - bio->bi_opf &= ~REQ_NOWAIT; 1233 - } 1234 - 1235 - /* 1236 803 * Reuse the poll cookie field to store the number of segments when 1237 804 * split to the hardware limits. 
1238 805 */ ··· 1237 824 bio_list_add(&zwplug->bio_list, bio); 1238 825 trace_disk_zone_wplug_add_bio(zwplug->disk->queue, zwplug->zone_no, 1239 826 bio->bi_iter.bi_sector, bio_sectors(bio)); 1240 - 1241 - zwplug->flags |= BLK_ZONE_WPLUG_PLUGGED; 1242 - 1243 - if (schedule_bio_work) 1244 - disk_zone_wplug_schedule_bio_work(disk, zwplug); 1245 827 } 1246 828 1247 829 /* ··· 1244 836 */ 1245 837 void blk_zone_write_plug_bio_merged(struct bio *bio) 1246 838 { 839 + struct gendisk *disk = bio->bi_bdev->bd_disk; 1247 840 struct blk_zone_wplug *zwplug; 1248 841 unsigned long flags; 1249 842 ··· 1266 857 * have at least one request and one BIO referencing the zone write 1267 858 * plug. So this should not fail. 1268 859 */ 1269 - zwplug = disk_get_zone_wplug(bio->bi_bdev->bd_disk, 1270 - bio->bi_iter.bi_sector); 860 + zwplug = disk_get_zone_wplug(disk, bio->bi_iter.bi_sector); 1271 861 if (WARN_ON_ONCE(!zwplug)) 1272 862 return; 1273 863 1274 864 spin_lock_irqsave(&zwplug->lock, flags); 1275 865 zwplug->wp_offset += bio_sectors(bio); 866 + disk_zone_wplug_update_cond(disk, zwplug); 1276 867 spin_unlock_irqrestore(&zwplug->lock, flags); 1277 868 } 1278 869 ··· 1331 922 /* Drop the reference taken by disk_zone_wplug_add_bio(). */ 1332 923 blk_queue_exit(q); 1333 924 zwplug->wp_offset += bio_sectors(bio); 925 + disk_zone_wplug_update_cond(disk, zwplug); 1334 926 1335 927 req_back_sector += bio_sectors(bio); 1336 928 } ··· 1395 985 1396 986 /* Advance the zone write pointer offset. */ 1397 987 zwplug->wp_offset += bio_sectors(bio); 988 + disk_zone_wplug_update_cond(disk, zwplug); 1398 989 1399 990 return true; 1400 991 } ··· 1447 1036 bio_set_flag(bio, BIO_ZONE_WRITE_PLUGGING); 1448 1037 1449 1038 /* 1450 - * If the zone is already plugged, add the BIO to the plug BIO list. 1451 - * Do the same for REQ_NOWAIT BIOs to ensure that we will not see a 1452 - * BLK_STS_AGAIN failure if we let the BIO execute. 1453 - * Otherwise, plug and let the BIO execute. 1039 + * Add REQ_NOWAIT BIOs to the plug list to ensure that we will not see a 1040 + * BLK_STS_AGAIN failure if we let the caller submit the BIO. 1454 1041 */ 1455 - if ((zwplug->flags & BLK_ZONE_WPLUG_PLUGGED) || 1456 - (bio->bi_opf & REQ_NOWAIT)) 1457 - goto plug; 1042 + if (bio->bi_opf & REQ_NOWAIT) { 1043 + bio->bi_opf &= ~REQ_NOWAIT; 1044 + goto queue_bio; 1045 + } 1046 + 1047 + /* If the zone is already plugged, add the BIO to the BIO plug list. */ 1048 + if (zwplug->flags & BLK_ZONE_WPLUG_PLUGGED) 1049 + goto queue_bio; 1458 1050 1459 1051 if (!blk_zone_wplug_prepare_bio(zwplug, bio)) { 1460 1052 spin_unlock_irqrestore(&zwplug->lock, flags); ··· 1465 1051 return true; 1466 1052 } 1467 1053 1054 + /* Otherwise, plug and let the caller submit the BIO. 
*/ 1468 1055 zwplug->flags |= BLK_ZONE_WPLUG_PLUGGED; 1469 1056 1470 1057 spin_unlock_irqrestore(&zwplug->lock, flags); 1471 1058 1472 1059 return false; 1473 1060 1474 - plug: 1061 + queue_bio: 1475 1062 disk_zone_wplug_add_bio(disk, zwplug, bio, nr_segs); 1063 + 1064 + if (!(zwplug->flags & BLK_ZONE_WPLUG_PLUGGED)) { 1065 + zwplug->flags |= BLK_ZONE_WPLUG_PLUGGED; 1066 + disk_zone_wplug_schedule_bio_work(disk, zwplug); 1067 + } 1476 1068 1477 1069 spin_unlock_irqrestore(&zwplug->lock, flags); 1478 1070 ··· 1490 1070 struct gendisk *disk = bio->bi_bdev->bd_disk; 1491 1071 struct blk_zone_wplug *zwplug; 1492 1072 unsigned long flags; 1073 + 1074 + if (!test_bit(GD_ZONE_APPEND_USED, &disk->state)) 1075 + set_bit(GD_ZONE_APPEND_USED, &disk->state); 1493 1076 1494 1077 /* 1495 1078 * We have native support for zone append operations, so we are not ··· 1527 1104 spin_unlock_irqrestore(&zwplug->lock, flags); 1528 1105 1529 1106 disk_put_zone_wplug(zwplug); 1107 + } 1108 + 1109 + static bool blk_zone_wplug_handle_zone_mgmt(struct bio *bio) 1110 + { 1111 + if (bio_op(bio) != REQ_OP_ZONE_RESET_ALL && 1112 + !bdev_zone_is_seq(bio->bi_bdev, bio->bi_iter.bi_sector)) { 1113 + /* 1114 + * Zone reset and zone finish operations do not apply to 1115 + * conventional zones. 1116 + */ 1117 + bio_io_error(bio); 1118 + return true; 1119 + } 1120 + 1121 + /* 1122 + * No-wait zone management BIOs do not make much sense as the callers 1123 + * issue these as blocking operations in most cases. To avoid issues 1124 + * with the BIO execution potentially failing with BLK_STS_AGAIN, warn 1125 + * about REQ_NOWAIT being set and ignore that flag. 1126 + */ 1127 + if (WARN_ON_ONCE(bio->bi_opf & REQ_NOWAIT)) 1128 + bio->bi_opf &= ~REQ_NOWAIT; 1129 + 1130 + return false; 1530 1131 } 1531 1132 1532 1133 /** ··· 1600 1153 case REQ_OP_WRITE_ZEROES: 1601 1154 return blk_zone_wplug_handle_write(bio, nr_segs); 1602 1155 case REQ_OP_ZONE_RESET: 1603 - return blk_zone_wplug_handle_reset_or_finish(bio, 0); 1604 1156 case REQ_OP_ZONE_FINISH: 1605 - return blk_zone_wplug_handle_reset_or_finish(bio, 1606 - bdev_zone_sectors(bdev)); 1607 1157 case REQ_OP_ZONE_RESET_ALL: 1608 - return blk_zone_wplug_handle_reset_all(bio); 1158 + return blk_zone_wplug_handle_zone_mgmt(bio); 1609 1159 default: 1610 1160 return false; 1611 1161 } ··· 1776 1332 disk_put_zone_wplug(zwplug); 1777 1333 } 1778 1334 1779 - static inline unsigned int disk_zone_wplugs_hash_size(struct gendisk *disk) 1780 - { 1781 - return 1U << disk->zone_wplugs_hash_bits; 1782 - } 1783 - 1784 1335 void disk_init_zone_resources(struct gendisk *disk) 1785 1336 { 1786 1337 spin_lock_init(&disk->zone_wplugs_lock); ··· 1854 1415 kfree(disk->zone_wplugs_hash); 1855 1416 disk->zone_wplugs_hash = NULL; 1856 1417 disk->zone_wplugs_hash_bits = 0; 1418 + 1419 + /* 1420 + * Wait for the zone write plugs to be RCU-freed before destroying the 1421 + * mempool. 
1422 + */ 1423 + rcu_barrier(); 1424 + mempool_destroy(disk->zone_wplugs_pool); 1425 + disk->zone_wplugs_pool = NULL; 1857 1426 } 1858 1427 1859 - static unsigned int disk_set_conv_zones_bitmap(struct gendisk *disk, 1860 - unsigned long *bitmap) 1428 + static void disk_set_zones_cond_array(struct gendisk *disk, u8 *zones_cond) 1861 1429 { 1862 - unsigned int nr_conv_zones = 0; 1863 1430 unsigned long flags; 1864 1431 1865 1432 spin_lock_irqsave(&disk->zone_wplugs_lock, flags); 1866 - if (bitmap) 1867 - nr_conv_zones = bitmap_weight(bitmap, disk->nr_zones); 1868 - bitmap = rcu_replace_pointer(disk->conv_zones_bitmap, bitmap, 1869 - lockdep_is_held(&disk->zone_wplugs_lock)); 1433 + zones_cond = rcu_replace_pointer(disk->zones_cond, zones_cond, 1434 + lockdep_is_held(&disk->zone_wplugs_lock)); 1870 1435 spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags); 1871 1436 1872 - kfree_rcu_mightsleep(bitmap); 1873 - 1874 - return nr_conv_zones; 1437 + kfree_rcu_mightsleep(zones_cond); 1875 1438 } 1876 1439 1877 1440 void disk_free_zone_resources(struct gendisk *disk) 1878 1441 { 1879 - if (!disk->zone_wplugs_pool) 1880 - return; 1881 - 1882 1442 if (disk->zone_wplugs_wq) { 1883 1443 destroy_workqueue(disk->zone_wplugs_wq); 1884 1444 disk->zone_wplugs_wq = NULL; ··· 1885 1447 1886 1448 disk_destroy_zone_wplugs_hash_table(disk); 1887 1449 1888 - /* 1889 - * Wait for the zone write plugs to be RCU-freed before 1890 - * destorying the mempool. 1891 - */ 1892 - rcu_barrier(); 1893 - 1894 - mempool_destroy(disk->zone_wplugs_pool); 1895 - disk->zone_wplugs_pool = NULL; 1896 - 1897 - disk_set_conv_zones_bitmap(disk, NULL); 1450 + disk_set_zones_cond_array(disk, NULL); 1898 1451 disk->zone_capacity = 0; 1899 1452 disk->last_zone_capacity = 0; 1900 1453 disk->nr_zones = 0; 1901 1454 } 1902 1455 1903 - static inline bool disk_need_zone_resources(struct gendisk *disk) 1904 - { 1905 - /* 1906 - * All mq zoned devices need zone resources so that the block layer 1907 - * can automatically handle write BIO plugging. BIO-based device drivers 1908 - * (e.g. DM devices) are normally responsible for handling zone write 1909 - * ordering and do not need zone resources, unless the driver requires 1910 - * zone append emulation. 
1911 - */ 1912 - return queue_is_mq(disk->queue) || 1913 - queue_emulates_zone_append(disk->queue); 1914 - } 1456 + struct blk_revalidate_zone_args { 1457 + struct gendisk *disk; 1458 + u8 *zones_cond; 1459 + unsigned int nr_zones; 1460 + unsigned int nr_conv_zones; 1461 + unsigned int zone_capacity; 1462 + unsigned int last_zone_capacity; 1463 + sector_t sector; 1464 + }; 1915 1465 1916 1466 static int disk_revalidate_zone_resources(struct gendisk *disk, 1917 - unsigned int nr_zones) 1467 + struct blk_revalidate_zone_args *args) 1918 1468 { 1919 1469 struct queue_limits *lim = &disk->queue->limits; 1920 1470 unsigned int pool_size; 1471 + 1472 + args->disk = disk; 1473 + args->nr_zones = 1474 + DIV_ROUND_UP_ULL(get_capacity(disk), lim->chunk_sectors); 1475 + 1476 + /* Cached zone conditions: 1 byte per zone */ 1477 + args->zones_cond = kzalloc(args->nr_zones, GFP_NOIO); 1478 + if (!args->zones_cond) 1479 + return -ENOMEM; 1921 1480 1922 1481 if (!disk_need_zone_resources(disk)) 1923 1482 return 0; ··· 1925 1490 */ 1926 1491 pool_size = max(lim->max_open_zones, lim->max_active_zones); 1927 1492 if (!pool_size) 1928 - pool_size = min(BLK_ZONE_WPLUG_DEFAULT_POOL_SIZE, nr_zones); 1493 + pool_size = 1494 + min(BLK_ZONE_WPLUG_DEFAULT_POOL_SIZE, args->nr_zones); 1929 1495 1930 1496 if (!disk->zone_wplugs_hash) 1931 1497 return disk_alloc_zone_resources(disk, pool_size); 1932 1498 1933 1499 return 0; 1934 1500 } 1935 - 1936 - struct blk_revalidate_zone_args { 1937 - struct gendisk *disk; 1938 - unsigned long *conv_zones_bitmap; 1939 - unsigned int nr_zones; 1940 - unsigned int zone_capacity; 1941 - unsigned int last_zone_capacity; 1942 - sector_t sector; 1943 - }; 1944 1501 1945 1502 /* 1946 1503 * Update the disk zone resources information and device queue limits. ··· 1942 1515 struct blk_revalidate_zone_args *args) 1943 1516 { 1944 1517 struct request_queue *q = disk->queue; 1945 - unsigned int nr_seq_zones, nr_conv_zones; 1946 - unsigned int pool_size; 1518 + unsigned int nr_seq_zones; 1519 + unsigned int pool_size, memflags; 1947 1520 struct queue_limits lim; 1948 - 1949 - disk->nr_zones = args->nr_zones; 1950 - disk->zone_capacity = args->zone_capacity; 1951 - disk->last_zone_capacity = args->last_zone_capacity; 1952 - nr_conv_zones = 1953 - disk_set_conv_zones_bitmap(disk, args->conv_zones_bitmap); 1954 - if (nr_conv_zones >= disk->nr_zones) { 1955 - pr_warn("%s: Invalid number of conventional zones %u / %u\n", 1956 - disk->disk_name, nr_conv_zones, disk->nr_zones); 1957 - return -ENODEV; 1958 - } 1521 + int ret = 0; 1959 1522 1960 1523 lim = queue_limits_start_update(q); 1961 1524 1525 + memflags = blk_mq_freeze_queue(q); 1526 + 1527 + disk->nr_zones = args->nr_zones; 1528 + if (args->nr_conv_zones >= disk->nr_zones) { 1529 + pr_warn("%s: Invalid number of conventional zones %u / %u\n", 1530 + disk->disk_name, args->nr_conv_zones, disk->nr_zones); 1531 + ret = -ENODEV; 1532 + goto unfreeze; 1533 + } 1534 + 1535 + disk->zone_capacity = args->zone_capacity; 1536 + disk->last_zone_capacity = args->last_zone_capacity; 1537 + disk_set_zones_cond_array(disk, args->zones_cond); 1538 + 1962 1539 /* 1963 - * Some devices can advertize zone resource limits that are larger than 1540 + * Some devices can advertise zone resource limits that are larger than 1964 1541 * the number of sequential zones of the zoned block device, e.g. a 1965 1542 * small ZNS namespace. For such case, assume that the zoned device has 1966 1543 * no zone resource limits. 
1967 1544 */ 1968 - nr_seq_zones = disk->nr_zones - nr_conv_zones; 1545 + nr_seq_zones = disk->nr_zones - args->nr_conv_zones; 1969 1546 if (lim.max_open_zones >= nr_seq_zones) 1970 1547 lim.max_open_zones = 0; 1971 1548 if (lim.max_active_zones >= nr_seq_zones) ··· 1999 1568 } 2000 1569 2001 1570 commit: 2002 - return queue_limits_commit_update_frozen(q, &lim); 1571 + ret = queue_limits_commit_update(q, &lim); 1572 + 1573 + unfreeze: 1574 + if (ret) 1575 + disk_free_zone_resources(disk); 1576 + 1577 + blk_mq_unfreeze_queue(q, memflags); 1578 + 1579 + return ret; 1580 + } 1581 + 1582 + static int blk_revalidate_zone_cond(struct blk_zone *zone, unsigned int idx, 1583 + struct blk_revalidate_zone_args *args) 1584 + { 1585 + enum blk_zone_cond cond = zone->cond; 1586 + 1587 + /* Check that the zone condition is consistent with the zone type. */ 1588 + switch (cond) { 1589 + case BLK_ZONE_COND_NOT_WP: 1590 + if (zone->type != BLK_ZONE_TYPE_CONVENTIONAL) 1591 + goto invalid_condition; 1592 + break; 1593 + case BLK_ZONE_COND_IMP_OPEN: 1594 + case BLK_ZONE_COND_EXP_OPEN: 1595 + case BLK_ZONE_COND_CLOSED: 1596 + case BLK_ZONE_COND_EMPTY: 1597 + case BLK_ZONE_COND_FULL: 1598 + case BLK_ZONE_COND_OFFLINE: 1599 + case BLK_ZONE_COND_READONLY: 1600 + if (zone->type != BLK_ZONE_TYPE_SEQWRITE_REQ) 1601 + goto invalid_condition; 1602 + break; 1603 + default: 1604 + pr_warn("%s: Invalid zone condition 0x%X\n", 1605 + args->disk->disk_name, cond); 1606 + return -ENODEV; 1607 + } 1608 + 1609 + blk_zone_set_cond(args->zones_cond, idx, cond); 1610 + 1611 + return 0; 1612 + 1613 + invalid_condition: 1614 + pr_warn("%s: Invalid zone condition 0x%x for type 0x%x\n", 1615 + args->disk->disk_name, cond, zone->type); 1616 + 1617 + return -ENODEV; 2003 1618 } 2004 1619 2005 1620 static int blk_revalidate_conv_zone(struct blk_zone *zone, unsigned int idx, ··· 2062 1585 if (disk_zone_is_last(disk, zone)) 2063 1586 args->last_zone_capacity = zone->capacity; 2064 1587 2065 - if (!disk_need_zone_resources(disk)) 2066 - return 0; 2067 - 2068 - if (!args->conv_zones_bitmap) { 2069 - args->conv_zones_bitmap = 2070 - bitmap_zalloc(args->nr_zones, GFP_NOIO); 2071 - if (!args->conv_zones_bitmap) 2072 - return -ENOMEM; 2073 - } 2074 - 2075 - set_bit(idx, args->conv_zones_bitmap); 1588 + args->nr_conv_zones++; 2076 1589 2077 1590 return 0; 2078 1591 } ··· 2099 1632 if (!queue_emulates_zone_append(disk->queue) || !disk->zone_wplugs_hash) 2100 1633 return 0; 2101 1634 2102 - disk_zone_wplug_sync_wp_offset(disk, zone); 2103 - 2104 - wp_offset = blk_zone_wp_offset(zone); 1635 + wp_offset = disk_zone_wplug_sync_wp_offset(disk, zone); 2105 1636 if (!wp_offset || wp_offset >= zone->capacity) 2106 1637 return 0; 2107 1638 ··· 2158 1693 return -ENODEV; 2159 1694 } 2160 1695 1696 + /* Check zone condition */ 1697 + ret = blk_revalidate_zone_cond(zone, idx, args); 1698 + if (ret) 1699 + return ret; 1700 + 2161 1701 /* Check zone type */ 2162 1702 switch (zone->type) { 2163 1703 case BLK_ZONE_TYPE_CONVENTIONAL: ··· 2203 1733 sector_t zone_sectors = q->limits.chunk_sectors; 2204 1734 sector_t capacity = get_capacity(disk); 2205 1735 struct blk_revalidate_zone_args args = { }; 2206 - unsigned int noio_flag; 1736 + unsigned int memflags, noio_flag; 1737 + struct blk_report_zones_args rep_args = { 1738 + .cb = blk_revalidate_zone_cb, 1739 + .data = &args, 1740 + }; 2207 1741 int ret = -ENOMEM; 2208 1742 2209 1743 if (WARN_ON_ONCE(!blk_queue_is_zoned(q))) ··· 2230 1756 * Ensure that all memory allocations in this context are done as if 2231 1757 
* GFP_NOIO was specified. 2232 1758 */ 2233 - args.disk = disk; 2234 - args.nr_zones = (capacity + zone_sectors - 1) >> ilog2(zone_sectors); 2235 1759 noio_flag = memalloc_noio_save(); 2236 - ret = disk_revalidate_zone_resources(disk, args.nr_zones); 1760 + ret = disk_revalidate_zone_resources(disk, &args); 2237 1761 if (ret) { 2238 1762 memalloc_noio_restore(noio_flag); 2239 1763 return ret; 2240 1764 } 2241 1765 2242 - ret = disk->fops->report_zones(disk, 0, UINT_MAX, 2243 - blk_revalidate_zone_cb, &args); 1766 + ret = disk->fops->report_zones(disk, 0, UINT_MAX, &rep_args); 2244 1767 if (!ret) { 2245 1768 pr_warn("%s: No zones reported\n", disk->disk_name); 2246 1769 ret = -ENODEV; ··· 2254 1783 ret = -ENODEV; 2255 1784 } 2256 1785 2257 - /* 2258 - * Set the new disk zone parameters only once the queue is frozen and 2259 - * all I/Os are completed. 2260 - */ 2261 1786 if (ret > 0) 2262 - ret = disk_update_zone_resources(disk, &args); 2263 - else 2264 - pr_warn("%s: failed to revalidate zones\n", disk->disk_name); 2265 - if (ret) { 2266 - unsigned int memflags = blk_mq_freeze_queue(q); 1787 + return disk_update_zone_resources(disk, &args); 2267 1788 2268 - disk_free_zone_resources(disk); 2269 - blk_mq_unfreeze_queue(q, memflags); 2270 - } 1789 + pr_warn("%s: failed to revalidate zones\n", disk->disk_name); 1790 + 1791 + memflags = blk_mq_freeze_queue(q); 1792 + disk_free_zone_resources(disk); 1793 + blk_mq_unfreeze_queue(q, memflags); 2271 1794 2272 1795 return ret; 2273 1796 } ··· 2282 1817 int blk_zone_issue_zeroout(struct block_device *bdev, sector_t sector, 2283 1818 sector_t nr_sects, gfp_t gfp_mask) 2284 1819 { 1820 + struct gendisk *disk = bdev->bd_disk; 2285 1821 int ret; 2286 1822 2287 1823 if (WARN_ON_ONCE(!bdev_is_zoned(bdev))) ··· 2298 1832 * pointer. Undo this using a report zone to update the zone write 2299 1833 * pointer to the correct current value. 2300 1834 */ 2301 - ret = disk_zone_sync_wp_offset(bdev->bd_disk, sector); 1835 + ret = disk->fops->report_zones(disk, sector, 1, NULL); 2302 1836 if (ret != 1) 2303 1837 return ret < 0 ? ret : -EIO; 2304 1838 ··· 2317 1851 unsigned int zwp_wp_offset, zwp_flags; 2318 1852 unsigned int zwp_zone_no, zwp_ref; 2319 1853 unsigned int zwp_bio_list_size; 1854 + enum blk_zone_cond zwp_cond; 2320 1855 unsigned long flags; 2321 1856 2322 1857 spin_lock_irqsave(&zwplug->lock, flags); 2323 1858 zwp_zone_no = zwplug->zone_no; 2324 1859 zwp_flags = zwplug->flags; 2325 1860 zwp_ref = refcount_read(&zwplug->ref); 1861 + zwp_cond = zwplug->cond; 2326 1862 zwp_wp_offset = zwplug->wp_offset; 2327 1863 zwp_bio_list_size = bio_list_size(&zwplug->bio_list); 2328 1864 spin_unlock_irqrestore(&zwplug->lock, flags); 2329 1865 2330 - seq_printf(m, "%u 0x%x %u %u %u\n", zwp_zone_no, zwp_flags, zwp_ref, 2331 - zwp_wp_offset, zwp_bio_list_size); 1866 + seq_printf(m, 1867 + "Zone no: %u, flags: 0x%x, ref: %u, cond: %s, wp ofst: %u, pending BIO: %u\n", 1868 + zwp_zone_no, zwp_flags, zwp_ref, blk_zone_cond_str(zwp_cond), 1869 + zwp_wp_offset, zwp_bio_list_size); 2332 1870 } 2333 1871 2334 1872 int queue_zone_wplugs_show(void *data, struct seq_file *m)
+18 -5
block/blk.h
··· 11 11 #include <xen/xen.h> 12 12 #include "blk-crypto-internal.h" 13 13 14 - struct elevator_type; 15 - struct elevator_tags; 14 + struct elv_change_ctx; 16 15 17 16 /* 18 17 * Default upper limit for the software max_sectors limit used for regular I/Os. ··· 332 333 333 334 bool blk_insert_flush(struct request *rq); 334 335 335 - void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e, 336 - struct elevator_tags *t); 336 + void elv_update_nr_hw_queues(struct request_queue *q, 337 + struct elv_change_ctx *ctx); 337 338 void elevator_set_default(struct request_queue *q); 338 339 void elevator_set_none(struct request_queue *q); 339 340 ··· 376 377 if (bio->bi_vcnt != 1) 377 378 return true; 378 379 return bio->bi_io_vec->bv_len + bio->bi_io_vec->bv_offset > 379 - lim->min_segment_size; 380 + lim->max_fast_segment_size; 380 381 } 381 382 382 383 /** ··· 488 489 void blk_zone_write_plug_bio_merged(struct bio *bio); 489 490 void blk_zone_write_plug_init_request(struct request *rq); 490 491 void blk_zone_append_update_request_bio(struct request *rq, struct bio *bio); 492 + void blk_zone_mgmt_bio_endio(struct bio *bio); 491 493 void blk_zone_write_plug_bio_endio(struct bio *bio); 492 494 static inline void blk_zone_bio_endio(struct bio *bio) 493 495 { 496 + /* 497 + * Zone management BIOs may impact zone write plugs (e.g. a zone reset 498 + * changes a zone write plug zone write pointer offset), but these 499 + * operations do not go through zone write plugging as they may operate 500 + * on zones that do not have a zone write 501 + * plug. blk_zone_mgmt_bio_endio() handles the potential changes to zone 502 + * write plugs that are present. 503 + */ 504 + if (op_is_zone_mgmt(bio_op(bio))) { 505 + blk_zone_mgmt_bio_endio(bio); 506 + return; 507 + } 508 + 494 509 /* 495 510 * For write BIOs to zoned devices, signal the completion of the BIO so 496 511 * that the next write BIO can be submitted by zone write plugging.
+37 -43
block/elevator.c
··· 45 45 #include "blk-wbt.h" 46 46 #include "blk-cgroup.h" 47 47 48 - /* Holding context data for changing elevator */ 49 - struct elv_change_ctx { 50 - const char *name; 51 - bool no_uevent; 52 - 53 - /* for unregistering old elevator */ 54 - struct elevator_queue *old; 55 - /* for registering new elevator */ 56 - struct elevator_queue *new; 57 - /* holds sched tags data */ 58 - struct elevator_tags *et; 59 - }; 60 - 61 48 static DEFINE_SPINLOCK(elv_list_lock); 62 49 static LIST_HEAD(elv_list); 63 50 ··· 121 134 static const struct kobj_type elv_ktype; 122 135 123 136 struct elevator_queue *elevator_alloc(struct request_queue *q, 124 - struct elevator_type *e, struct elevator_tags *et) 137 + struct elevator_type *e, struct elevator_resources *res) 125 138 { 126 139 struct elevator_queue *eq; 127 140 ··· 134 147 kobject_init(&eq->kobj, &elv_ktype); 135 148 mutex_init(&eq->sysfs_lock); 136 149 hash_init(eq->hash); 137 - eq->et = et; 150 + eq->et = res->et; 151 + eq->elevator_data = res->data; 138 152 139 153 return eq; 140 154 } ··· 581 593 } 582 594 583 595 if (new_e) { 584 - ret = blk_mq_init_sched(q, new_e, ctx->et); 596 + ret = blk_mq_init_sched(q, new_e, &ctx->res); 585 597 if (ret) 586 598 goto out_unfreeze; 587 599 ctx->new = q->elevator; ··· 605 617 return ret; 606 618 } 607 619 608 - static void elv_exit_and_release(struct request_queue *q) 620 + static void elv_exit_and_release(struct elv_change_ctx *ctx, 621 + struct request_queue *q) 609 622 { 610 623 struct elevator_queue *e; 611 624 unsigned memflags; ··· 618 629 mutex_unlock(&q->elevator_lock); 619 630 blk_mq_unfreeze_queue(q, memflags); 620 631 if (e) { 621 - blk_mq_free_sched_tags(e->et, q->tag_set); 632 + blk_mq_free_sched_res(&ctx->res, ctx->type, q->tag_set); 622 633 kobject_put(&e->kobj); 623 634 } 624 635 } ··· 629 640 int ret = 0; 630 641 631 642 if (ctx->old) { 643 + struct elevator_resources res = { 644 + .et = ctx->old->et, 645 + .data = ctx->old->elevator_data 646 + }; 632 647 bool enable_wbt = test_bit(ELEVATOR_FLAG_ENABLE_WBT_ON_EXIT, 633 648 &ctx->old->flags); 634 649 635 650 elv_unregister_queue(q, ctx->old); 636 - blk_mq_free_sched_tags(ctx->old->et, q->tag_set); 651 + blk_mq_free_sched_res(&res, ctx->old->type, q->tag_set); 637 652 kobject_put(&ctx->old->kobj); 638 653 if (enable_wbt) 639 654 wbt_enable_default(q->disk); ··· 645 652 if (ctx->new) { 646 653 ret = elv_register_queue(q, ctx->new, !ctx->no_uevent); 647 654 if (ret) 648 - elv_exit_and_release(q); 655 + elv_exit_and_release(ctx, q); 649 656 } 650 657 return ret; 651 658 } ··· 662 669 lockdep_assert_held(&set->update_nr_hwq_lock); 663 670 664 671 if (strncmp(ctx->name, "none", 4)) { 665 - ctx->et = blk_mq_alloc_sched_tags(set, set->nr_hw_queues, 666 - blk_mq_default_nr_requests(set)); 667 - if (!ctx->et) 668 - return -ENOMEM; 672 + ret = blk_mq_alloc_sched_res(q, ctx->type, &ctx->res, 673 + set->nr_hw_queues); 674 + if (ret) 675 + return ret; 669 676 } 670 677 671 678 memflags = blk_mq_freeze_queue(q); ··· 686 693 blk_mq_unfreeze_queue(q, memflags); 687 694 if (!ret) 688 695 ret = elevator_change_done(q, ctx); 696 + 689 697 /* 690 - * Free sched tags if it's allocated but we couldn't switch elevator. 698 + * Free sched resource if it's allocated but we couldn't switch elevator. 
691 699 */ 692 - if (ctx->et && !ctx->new) 693 - blk_mq_free_sched_tags(ctx->et, set); 700 + if (!ctx->new) 701 + blk_mq_free_sched_res(&ctx->res, ctx->type, set); 694 702 695 703 return ret; 696 704 } ··· 700 706 * The I/O scheduler depends on the number of hardware queues, this forces a 701 707 * reattachment when nr_hw_queues changes. 702 708 */ 703 - void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e, 704 - struct elevator_tags *t) 709 + void elv_update_nr_hw_queues(struct request_queue *q, 710 + struct elv_change_ctx *ctx) 705 711 { 706 712 struct blk_mq_tag_set *set = q->tag_set; 707 - struct elv_change_ctx ctx = {}; 708 713 int ret = -ENODEV; 709 714 710 715 WARN_ON_ONCE(q->mq_freeze_depth == 0); 711 716 712 - if (e && !blk_queue_dying(q) && blk_queue_registered(q)) { 713 - ctx.name = e->elevator_name; 714 - ctx.et = t; 715 - 717 + if (ctx->type && !blk_queue_dying(q) && blk_queue_registered(q)) { 716 718 mutex_lock(&q->elevator_lock); 717 719 /* force to reattach elevator after nr_hw_queue is updated */ 718 - ret = elevator_switch(q, &ctx); 720 + ret = elevator_switch(q, ctx); 719 721 mutex_unlock(&q->elevator_lock); 720 722 } 721 723 blk_mq_unfreeze_queue_nomemrestore(q); 722 724 if (!ret) 723 - WARN_ON_ONCE(elevator_change_done(q, &ctx)); 725 + WARN_ON_ONCE(elevator_change_done(q, ctx)); 726 + 724 727 /* 725 - * Free sched tags if it's allocated but we couldn't switch elevator. 728 + * Free sched resource if it's allocated but we couldn't switch elevator. 726 729 */ 727 - if (t && !ctx.new) 728 - blk_mq_free_sched_tags(t, set); 730 + if (!ctx->new) 731 + blk_mq_free_sched_res(&ctx->res, ctx->type, set); 729 732 } 730 733 731 734 /* ··· 736 745 .no_uevent = true, 737 746 }; 738 747 int err; 739 - struct elevator_type *e; 740 748 741 749 /* now we allow to switch elevator */ 742 750 blk_queue_flag_clear(QUEUE_FLAG_NO_ELV_SWITCH, q); ··· 748 758 * have multiple queues or mq-deadline is not available, default 749 759 * to "none". 750 760 */ 751 - e = elevator_find_get(ctx.name); 752 - if (!e) 761 + ctx.type = elevator_find_get(ctx.name); 762 + if (!ctx.type) 753 763 return; 754 764 755 765 if ((q->nr_hw_queues == 1 || ··· 759 769 pr_warn("\"%s\" elevator initialization, failed %d, falling back to \"none\"\n", 760 770 ctx.name, err); 761 771 } 762 - elevator_put(e); 772 + elevator_put(ctx.type); 763 773 } 764 774 765 775 void elevator_set_none(struct request_queue *q) ··· 808 818 ctx.name = strstrip(elevator_name); 809 819 810 820 elv_iosched_load_module(ctx.name); 821 + ctx.type = elevator_find_get(ctx.name); 811 822 812 823 down_read(&set->update_nr_hwq_lock); 813 824 if (!blk_queue_no_elv_switch(q)) { ··· 819 828 ret = -ENOENT; 820 829 } 821 830 up_read(&set->update_nr_hwq_lock); 831 + 832 + if (ctx.type) 833 + elevator_put(ctx.type); 822 834 return ret; 823 835 } 824 836
+25 -2
block/elevator.h
··· 32 32 struct blk_mq_tags *tags[]; 33 33 }; 34 34 35 + struct elevator_resources { 36 + /* holds elevator data */ 37 + void *data; 38 + /* holds elevator tags */ 39 + struct elevator_tags *et; 40 + }; 41 + 42 + /* Holding context data for changing elevator */ 43 + struct elv_change_ctx { 44 + const char *name; 45 + bool no_uevent; 46 + 47 + /* for unregistering old elevator */ 48 + struct elevator_queue *old; 49 + /* for registering new elevator */ 50 + struct elevator_queue *new; 51 + /* store elevator type */ 52 + struct elevator_type *type; 53 + /* store elevator resources */ 54 + struct elevator_resources res; 55 + }; 56 + 35 57 struct elevator_mq_ops { 36 58 int (*init_sched)(struct request_queue *, struct elevator_queue *); 37 59 void (*exit_sched)(struct elevator_queue *); 38 60 int (*init_hctx)(struct blk_mq_hw_ctx *, unsigned int); 39 61 void (*exit_hctx)(struct blk_mq_hw_ctx *, unsigned int); 40 62 void (*depth_updated)(struct request_queue *); 63 + void *(*alloc_sched_data)(struct request_queue *); 64 + void (*free_sched_data)(void *); 41 65 42 66 bool (*allow_merge)(struct request_queue *, struct request *, struct bio *); 43 67 bool (*bio_merge)(struct request_queue *, struct bio *, unsigned int); ··· 171 147 struct list_head *); 172 148 extern struct request *elv_former_request(struct request_queue *, struct request *); 173 149 extern struct request *elv_latter_request(struct request_queue *, struct request *); 174 - void elevator_init_mq(struct request_queue *q); 175 150 176 151 /* 177 152 * io scheduler registration ··· 186 163 187 164 extern bool elv_bio_merge_ok(struct request *, struct bio *); 188 165 struct elevator_queue *elevator_alloc(struct request_queue *, 189 - struct elevator_type *, struct elevator_tags *); 166 + struct elevator_type *, struct elevator_resources *); 190 167 191 168 /* 192 169 * Helper functions.
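The new alloc_sched_data/free_sched_data hooks in elevator_mq_ops let the core allocate a scheduler's per-queue data before the queue is frozen for an elevator switch (kyber is converted to them later in this diff) and free it again if the switch fails, with the result handed to the scheduler through eq->elevator_data. The sketch below shows how a hypothetical scheduler ("myelv") might wire up these hooks; it is illustrative only, since struct elevator_type and elevator.h are block-layer internal.

/* Hypothetical scheduler wiring for the new hooks (sketch, not a real driver). */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/blkdev.h>

#include "elevator.h"

struct myelv_data {
	spinlock_t lock;
	struct list_head queued;
};

static void *myelv_alloc_sched_data(struct request_queue *q)
{
	struct myelv_data *md;

	/* May sleep: called before the queue is frozen for the switch. */
	md = kzalloc_node(sizeof(*md), GFP_KERNEL, q->node);
	if (!md)
		return NULL;

	spin_lock_init(&md->lock);
	INIT_LIST_HEAD(&md->queued);
	return md;
}

static void myelv_free_sched_data(void *elv_data)
{
	/* Also called by the core when an elevator switch fails. */
	kfree(elv_data);
}

static int myelv_init_sched(struct request_queue *q, struct elevator_queue *eq)
{
	/* eq->elevator_data already points at the alloc_sched_data() result. */
	q->elevator = eq;
	return 0;
}

static void myelv_exit_sched(struct elevator_queue *eq)
{
	/* Per-queue data is released later through free_sched_data(). */
}

static struct elevator_type myelv_sched = {
	.ops = {
		.alloc_sched_data	= myelv_alloc_sched_data,
		.free_sched_data	= myelv_free_sched_data,
		.init_sched		= myelv_init_sched,
		.exit_sched		= myelv_exit_sched,
	},
	.elevator_name = "myelv",
	.elevator_owner = THIS_MODULE,
};

static int __init myelv_init(void)
{
	return elv_register(&myelv_sched);
}

static void __exit myelv_exit(void)
{
	elv_unregister(&myelv_sched);
}

module_init(myelv_init);
module_exit(myelv_exit);
MODULE_LICENSE("GPL");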
+4 -4
block/genhd.c
··· 90 90 (disk->flags & GENHD_FL_HIDDEN)) 91 91 return false; 92 92 93 - pr_info("%s: detected capacity change from %lld to %lld\n", 93 + pr_info_ratelimited("%s: detected capacity change from %lld to %lld\n", 94 94 disk->disk_name, capacity, size); 95 95 96 96 /* ··· 795 795 * partitions associated with the gendisk, and unregisters the associated 796 796 * request_queue. 797 797 * 798 - * This is the counter to the respective __device_add_disk() call. 798 + * This is the counter to the respective device_add_disk() call. 799 799 * 800 800 * The final removal of the struct gendisk happens when its refcount reaches 0 801 801 * with put_disk(), which should be called after del_gendisk(), if 802 - * __device_add_disk() was used. 802 + * device_add_disk() was used. 803 803 * 804 804 * Drivers exist which depend on the release of the gendisk to be synchronous, 805 805 * it should not be deferred. ··· 1265 1265 * 1266 1266 * This function releases all allocated resources of the gendisk. 1267 1267 * 1268 - * Drivers which used __device_add_disk() have a gendisk with a request_queue 1268 + * Drivers which used device_add_disk() have a gendisk with a request_queue 1269 1269 * assigned. Since the request_queue sits on top of the gendisk for these 1270 1270 * drivers we also call blk_put_queue() for them, and we expect the 1271 1271 * request_queue refcount to reach 0 at this point, and so the request_queue
+2
block/ioctl.c
··· 581 581 case BLKGETDISKSEQ: 582 582 return put_u64(argp, bdev->bd_disk->diskseq); 583 583 case BLKREPORTZONE: 584 + case BLKREPORTZONEV2: 584 585 return blkdev_report_zones_ioctl(bdev, cmd, arg); 585 586 case BLKRESETZONE: 586 587 case BLKOPENZONE: ··· 692 691 693 692 /* Incompatible alignment on i386 */ 694 693 case BLKTRACESETUP: 694 + case BLKTRACESETUP2: 695 695 return blk_trace_ioctl(bdev, cmd, argp); 696 696 default: 697 697 break;
+22 -8
block/kyber-iosched.c
··· 409 409 410 410 static int kyber_init_sched(struct request_queue *q, struct elevator_queue *eq) 411 411 { 412 - struct kyber_queue_data *kqd; 413 - 414 - kqd = kyber_queue_data_alloc(q); 415 - if (IS_ERR(kqd)) 416 - return PTR_ERR(kqd); 417 - 418 412 blk_stat_enable_accounting(q); 419 413 420 414 blk_queue_flag_clear(QUEUE_FLAG_SQ_SCHED, q); 421 415 422 - eq->elevator_data = kqd; 423 416 q->elevator = eq; 424 417 kyber_depth_updated(q); 425 418 426 419 return 0; 427 420 } 428 421 422 + static void *kyber_alloc_sched_data(struct request_queue *q) 423 + { 424 + struct kyber_queue_data *kqd; 425 + 426 + kqd = kyber_queue_data_alloc(q); 427 + if (IS_ERR(kqd)) 428 + return NULL; 429 + 430 + return kqd; 431 + } 432 + 429 433 static void kyber_exit_sched(struct elevator_queue *e) 430 434 { 431 435 struct kyber_queue_data *kqd = e->elevator_data; 432 - int i; 433 436 434 437 timer_shutdown_sync(&kqd->timer); 435 438 blk_stat_disable_accounting(kqd->q); 439 + } 440 + 441 + static void kyber_free_sched_data(void *elv_data) 442 + { 443 + struct kyber_queue_data *kqd = elv_data; 444 + int i; 445 + 446 + if (!kqd) 447 + return; 436 448 437 449 for (i = 0; i < KYBER_NUM_DOMAINS; i++) 438 450 sbitmap_queue_free(&kqd->domain_tokens[i]); ··· 1016 1004 .exit_sched = kyber_exit_sched, 1017 1005 .init_hctx = kyber_init_hctx, 1018 1006 .exit_hctx = kyber_exit_hctx, 1007 + .alloc_sched_data = kyber_alloc_sched_data, 1008 + .free_sched_data = kyber_free_sched_data, 1019 1009 .limit_depth = kyber_limit_depth, 1020 1010 .bio_merge = kyber_bio_merge, 1021 1011 .prepare_request = kyber_prepare_request,
+61 -68
block/mq-deadline.c
··· 71 71 * present on both sort_list[] and fifo_list[]. 72 72 */ 73 73 struct dd_per_prio { 74 - struct list_head dispatch; 75 74 struct rb_root sort_list[DD_DIR_COUNT]; 76 75 struct list_head fifo_list[DD_DIR_COUNT]; 77 76 /* Position of the most recently dispatched request. */ ··· 83 84 * run time data 84 85 */ 85 86 87 + struct list_head dispatch; 86 88 struct dd_per_prio per_prio[DD_PRIO_COUNT]; 87 89 88 90 /* Data direction of latest dispatched request. */ ··· 306 306 return time_after(start_time, latest_start); 307 307 } 308 308 309 + static struct request *dd_start_request(struct deadline_data *dd, 310 + enum dd_data_dir data_dir, 311 + struct request *rq) 312 + { 313 + u8 ioprio_class = dd_rq_ioclass(rq); 314 + enum dd_prio prio = ioprio_class_to_prio[ioprio_class]; 315 + 316 + dd->per_prio[prio].latest_pos[data_dir] = blk_rq_pos(rq); 317 + dd->per_prio[prio].stats.dispatched++; 318 + rq->rq_flags |= RQF_STARTED; 319 + return rq; 320 + } 321 + 309 322 /* 310 323 * deadline_dispatch_requests selects the best request according to 311 324 * read/write expire, fifo_batch, etc and with a start time <= @latest_start. ··· 329 316 { 330 317 struct request *rq, *next_rq; 331 318 enum dd_data_dir data_dir; 332 - enum dd_prio prio; 333 - u8 ioprio_class; 334 319 335 320 lockdep_assert_held(&dd->lock); 336 - 337 - if (!list_empty(&per_prio->dispatch)) { 338 - rq = list_first_entry(&per_prio->dispatch, struct request, 339 - queuelist); 340 - if (started_after(dd, rq, latest_start)) 341 - return NULL; 342 - list_del_init(&rq->queuelist); 343 - data_dir = rq_data_dir(rq); 344 - goto done; 345 - } 346 321 347 322 /* 348 323 * batches are currently reads XOR writes ··· 411 410 */ 412 411 dd->batching++; 413 412 deadline_move_request(dd, per_prio, rq); 414 - done: 415 - ioprio_class = dd_rq_ioclass(rq); 416 - prio = ioprio_class_to_prio[ioprio_class]; 417 - dd->per_prio[prio].latest_pos[data_dir] = blk_rq_pos(rq); 418 - dd->per_prio[prio].stats.dispatched++; 419 - rq->rq_flags |= RQF_STARTED; 420 - return rq; 413 + return dd_start_request(dd, data_dir, rq); 421 414 } 422 415 423 416 /* ··· 458 463 enum dd_prio prio; 459 464 460 465 spin_lock(&dd->lock); 466 + 467 + if (!list_empty(&dd->dispatch)) { 468 + rq = list_first_entry(&dd->dispatch, struct request, queuelist); 469 + list_del_init(&rq->queuelist); 470 + dd_start_request(dd, rq_data_dir(rq), rq); 471 + goto unlock; 472 + } 473 + 461 474 rq = dd_dispatch_prio_aged_requests(dd, now); 462 475 if (rq) 463 476 goto unlock; ··· 554 551 555 552 eq->elevator_data = dd; 556 553 554 + INIT_LIST_HEAD(&dd->dispatch); 557 555 for (prio = 0; prio <= DD_PRIO_MAX; prio++) { 558 556 struct dd_per_prio *per_prio = &dd->per_prio[prio]; 559 557 560 - INIT_LIST_HEAD(&per_prio->dispatch); 561 558 INIT_LIST_HEAD(&per_prio->fifo_list[DD_READ]); 562 559 INIT_LIST_HEAD(&per_prio->fifo_list[DD_WRITE]); 563 560 per_prio->sort_list[DD_READ] = RB_ROOT; ··· 661 658 trace_block_rq_insert(rq); 662 659 663 660 if (flags & BLK_MQ_INSERT_AT_HEAD) { 664 - list_add(&rq->queuelist, &per_prio->dispatch); 661 + list_add(&rq->queuelist, &dd->dispatch); 665 662 rq->fifo_time = jiffies; 666 663 } else { 667 664 deadline_add_rq_rb(per_prio, rq); ··· 728 725 729 726 static bool dd_has_work_for_prio(struct dd_per_prio *per_prio) 730 727 { 731 - return !list_empty_careful(&per_prio->dispatch) || 732 - !list_empty_careful(&per_prio->fifo_list[DD_READ]) || 728 + return !list_empty_careful(&per_prio->fifo_list[DD_READ]) || 733 729 !list_empty_careful(&per_prio->fifo_list[DD_WRITE]); 734 730 } 
735 731 ··· 736 734 { 737 735 struct deadline_data *dd = hctx->queue->elevator->elevator_data; 738 736 enum dd_prio prio; 737 + 738 + if (!list_empty_careful(&dd->dispatch)) 739 + return true; 739 740 740 741 for (prio = 0; prio <= DD_PRIO_MAX; prio++) 741 742 if (dd_has_work_for_prio(&dd->per_prio[prio])) ··· 948 943 return 0; 949 944 } 950 945 951 - #define DEADLINE_DISPATCH_ATTR(prio) \ 952 - static void *deadline_dispatch##prio##_start(struct seq_file *m, \ 953 - loff_t *pos) \ 954 - __acquires(&dd->lock) \ 955 - { \ 956 - struct request_queue *q = m->private; \ 957 - struct deadline_data *dd = q->elevator->elevator_data; \ 958 - struct dd_per_prio *per_prio = &dd->per_prio[prio]; \ 959 - \ 960 - spin_lock(&dd->lock); \ 961 - return seq_list_start(&per_prio->dispatch, *pos); \ 962 - } \ 963 - \ 964 - static void *deadline_dispatch##prio##_next(struct seq_file *m, \ 965 - void *v, loff_t *pos) \ 966 - { \ 967 - struct request_queue *q = m->private; \ 968 - struct deadline_data *dd = q->elevator->elevator_data; \ 969 - struct dd_per_prio *per_prio = &dd->per_prio[prio]; \ 970 - \ 971 - return seq_list_next(v, &per_prio->dispatch, pos); \ 972 - } \ 973 - \ 974 - static void deadline_dispatch##prio##_stop(struct seq_file *m, void *v) \ 975 - __releases(&dd->lock) \ 976 - { \ 977 - struct request_queue *q = m->private; \ 978 - struct deadline_data *dd = q->elevator->elevator_data; \ 979 - \ 980 - spin_unlock(&dd->lock); \ 981 - } \ 982 - \ 983 - static const struct seq_operations deadline_dispatch##prio##_seq_ops = { \ 984 - .start = deadline_dispatch##prio##_start, \ 985 - .next = deadline_dispatch##prio##_next, \ 986 - .stop = deadline_dispatch##prio##_stop, \ 987 - .show = blk_mq_debugfs_rq_show, \ 946 + static void *deadline_dispatch_start(struct seq_file *m, loff_t *pos) 947 + __acquires(&dd->lock) 948 + { 949 + struct request_queue *q = m->private; 950 + struct deadline_data *dd = q->elevator->elevator_data; 951 + 952 + spin_lock(&dd->lock); 953 + return seq_list_start(&dd->dispatch, *pos); 988 954 } 989 955 990 - DEADLINE_DISPATCH_ATTR(0); 991 - DEADLINE_DISPATCH_ATTR(1); 992 - DEADLINE_DISPATCH_ATTR(2); 993 - #undef DEADLINE_DISPATCH_ATTR 956 + static void *deadline_dispatch_next(struct seq_file *m, void *v, loff_t *pos) 957 + { 958 + struct request_queue *q = m->private; 959 + struct deadline_data *dd = q->elevator->elevator_data; 960 + 961 + return seq_list_next(v, &dd->dispatch, pos); 962 + } 963 + 964 + static void deadline_dispatch_stop(struct seq_file *m, void *v) 965 + __releases(&dd->lock) 966 + { 967 + struct request_queue *q = m->private; 968 + struct deadline_data *dd = q->elevator->elevator_data; 969 + 970 + spin_unlock(&dd->lock); 971 + } 972 + 973 + static const struct seq_operations deadline_dispatch_seq_ops = { 974 + .start = deadline_dispatch_start, 975 + .next = deadline_dispatch_next, 976 + .stop = deadline_dispatch_stop, 977 + .show = blk_mq_debugfs_rq_show, 978 + }; 994 979 995 980 #define DEADLINE_QUEUE_DDIR_ATTRS(name) \ 996 981 {#name "_fifo_list", 0400, \ ··· 1003 1008 {"batching", 0400, deadline_batching_show}, 1004 1009 {"starved", 0400, deadline_starved_show}, 1005 1010 {"async_depth", 0400, dd_async_depth_show}, 1006 - {"dispatch0", 0400, .seq_ops = &deadline_dispatch0_seq_ops}, 1007 - {"dispatch1", 0400, .seq_ops = &deadline_dispatch1_seq_ops}, 1008 - {"dispatch2", 0400, .seq_ops = &deadline_dispatch2_seq_ops}, 1011 + {"dispatch", 0400, .seq_ops = &deadline_dispatch_seq_ops}, 1009 1012 {"owned_by_driver", 0400, dd_owned_by_driver_show}, 1010 1013 
{"queued", 0400, dd_queued_show}, 1011 1014 {},
+1 -2
block/partitions/efi.c
··· 215 215 sz = le32_to_cpu(mbr->partition_record[part].size_in_lba); 216 216 if (sz != (uint32_t) total_sectors - 1 && sz != 0xFFFFFFFF) 217 217 pr_debug("GPT: mbr size in lba (%u) different than whole disk (%u).\n", 218 - sz, min_t(uint32_t, 219 - total_sectors - 1, 0xFFFFFFFF)); 218 + sz, (uint32_t)min(total_sectors - 1, 0xFFFFFFFF)); 220 219 } 221 220 done: 222 221 return ret;
+5 -5
drivers/block/drbd/drbd_bitmap.c
··· 1210 1210 return err; 1211 1211 } 1212 1212 1213 - /** 1213 + /* 1214 1214 * drbd_bm_read() - Read the whole bitmap from its on disk location. 1215 1215 * @device: DRBD device. 1216 1216 */ ··· 1221 1221 return bm_rw(device, BM_AIO_READ, 0); 1222 1222 } 1223 1223 1224 - /** 1224 + /* 1225 1225 * drbd_bm_write() - Write the whole bitmap to its on disk location. 1226 1226 * @device: DRBD device. 1227 1227 * ··· 1233 1233 return bm_rw(device, 0, 0); 1234 1234 } 1235 1235 1236 - /** 1236 + /* 1237 1237 * drbd_bm_write_all() - Write the whole bitmap to its on disk location. 1238 1238 * @device: DRBD device. 1239 1239 * ··· 1255 1255 return bm_rw(device, BM_AIO_COPY_PAGES, upper_idx); 1256 1256 } 1257 1257 1258 - /** 1258 + /* 1259 1259 * drbd_bm_write_copy_pages() - Write the whole bitmap to its on disk location. 1260 1260 * @device: DRBD device. 1261 1261 * ··· 1272 1272 return bm_rw(device, BM_AIO_COPY_PAGES, 0); 1273 1273 } 1274 1274 1275 - /** 1275 + /* 1276 1276 * drbd_bm_write_hinted() - Write bitmap pages with "hint" marks, if they have changed. 1277 1277 * @device: DRBD device. 1278 1278 */
+4 -4
drivers/block/drbd/drbd_receiver.c
··· 1736 1736 page = peer_req->pages; 1737 1737 page_chain_for_each(page) { 1738 1738 unsigned len = min_t(int, ds, PAGE_SIZE); 1739 - data = kmap(page); 1739 + data = kmap_local_page(page); 1740 1740 err = drbd_recv_all_warn(peer_device->connection, data, len); 1741 1741 if (drbd_insert_fault(device, DRBD_FAULT_RECEIVE)) { 1742 1742 drbd_err(device, "Fault injection: Corrupting data on receive\n"); 1743 1743 data[0] = data[0] ^ (unsigned long)-1; 1744 1744 } 1745 - kunmap(page); 1745 + kunmap_local(data); 1746 1746 if (err) { 1747 1747 drbd_free_peer_req(device, peer_req); 1748 1748 return NULL; ··· 1777 1777 1778 1778 page = drbd_alloc_pages(peer_device, 1, 1); 1779 1779 1780 - data = kmap(page); 1780 + data = kmap_local_page(page); 1781 1781 while (data_size) { 1782 1782 unsigned int len = min_t(int, data_size, PAGE_SIZE); 1783 1783 ··· 1786 1786 break; 1787 1787 data_size -= len; 1788 1788 } 1789 - kunmap(page); 1789 + kunmap_local(data); 1790 1790 drbd_free_pages(peer_device->device, page); 1791 1791 return err; 1792 1792 }
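The receive paths above replace the global kmap()/kunmap() with kmap_local_page()/kunmap_local(). As a hedged reminder of the pattern: the mapping is thread-local, stays valid across preemption (including the sleeping drbd_recv_all_warn() call), must not be handed to another task, and nested mappings are released in reverse (stack) order. The helper name below is illustrative:

static void fill_page(struct page *page, const void *src, size_t len)
{
	void *dst = kmap_local_page(page);

	memcpy(dst, src, len);
	kunmap_local(dst);	/* release in reverse order of mapping */
}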
+1 -1
drivers/block/floppy.c
··· 329 329 * This default is used whenever the current disk size is unknown. 330 330 * [Now it is rather a minimum] 331 331 */ 332 - #define MAX_DISK_SIZE 4 /* 3984 */ 332 + #define MAX_DISK_SIZE (PAGE_SIZE / 1024) 333 333 334 334 /* 335 335 * globals used by 'result()'
+4
drivers/block/loop.c
··· 1908 1908 goto failed; 1909 1909 } 1910 1910 1911 + /* We can block in this context, so ignore REQ_NOWAIT. */ 1912 + if (rq->cmd_flags & REQ_NOWAIT) 1913 + rq->cmd_flags &= ~REQ_NOWAIT; 1914 + 1911 1915 if (cmd_blkcg_css) 1912 1916 kthread_associate_blkcg(cmd_blkcg_css); 1913 1917 if (cmd_memcg_css)
+3 -2
drivers/block/nbd.c
··· 1021 1021 nbd_mark_nsock_dead(nbd, nsock, 1); 1022 1022 mutex_unlock(&nsock->tx_lock); 1023 1023 1024 - nbd_config_put(nbd); 1025 1024 atomic_dec(&config->recv_threads); 1026 1025 wake_up(&config->recv_wq); 1026 + nbd_config_put(nbd); 1027 1027 kfree(args); 1028 1028 } 1029 1029 ··· 2238 2238 2239 2239 ret = nbd_start_device(nbd); 2240 2240 out: 2241 - mutex_unlock(&nbd->config_lock); 2242 2241 if (!ret) { 2243 2242 set_bit(NBD_RT_HAS_CONFIG_REF, &config->runtime_flags); 2244 2243 refcount_inc(&nbd->config_refs); 2245 2244 nbd_connect_reply(info, nbd->index); 2246 2245 } 2246 + mutex_unlock(&nbd->config_lock); 2247 + 2247 2248 nbd_config_put(nbd); 2248 2249 if (put_dev) 2249 2250 nbd_put(nbd);
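Both nbd hunks are ordering fixes around the config reference: the receive completion path now wakes recv_wq before dropping the reference that can free the config, and the netlink connect path sends its reply while still holding config_lock. A hedged sketch of the general rule, with illustrative my_* names:

static void recv_done(struct my_nbd *nbd, struct my_config *config)
{
	atomic_dec(&config->recv_threads);
	wake_up(&config->recv_wq);	/* config still pinned by our reference */
	my_config_put(nbd);		/* last: may drop the final ref and free config */
}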
+41 -41
drivers/block/null_blk/main.c
··· 1129 1129 return 0; 1130 1130 } 1131 1131 1132 - static int copy_to_nullb(struct nullb *nullb, struct page *source, 1133 - unsigned int off, sector_t sector, size_t n, bool is_fua) 1132 + static blk_status_t copy_to_nullb(struct nullb *nullb, void *source, 1133 + loff_t pos, size_t n, bool is_fua) 1134 1134 { 1135 1135 size_t temp, count = 0; 1136 - unsigned int offset; 1137 1136 struct nullb_page *t_page; 1137 + sector_t sector; 1138 1138 1139 1139 while (count < n) { 1140 - temp = min_t(size_t, nullb->dev->blocksize, n - count); 1140 + temp = min3(nullb->dev->blocksize, n - count, 1141 + PAGE_SIZE - offset_in_page(pos)); 1142 + sector = pos >> SECTOR_SHIFT; 1141 1143 1142 1144 if (null_cache_active(nullb) && !is_fua) 1143 1145 null_make_cache_space(nullb, PAGE_SIZE); 1144 1146 1145 - offset = (sector & SECTOR_MASK) << SECTOR_SHIFT; 1146 1147 t_page = null_insert_page(nullb, sector, 1147 1148 !null_cache_active(nullb) || is_fua); 1148 1149 if (!t_page) 1149 - return -ENOSPC; 1150 + return BLK_STS_NOSPC; 1150 1151 1151 - memcpy_page(t_page->page, offset, source, off + count, temp); 1152 + memcpy_to_page(t_page->page, offset_in_page(pos), 1153 + source + count, temp); 1152 1154 1153 1155 __set_bit(sector & SECTOR_MASK, t_page->bitmap); 1154 1156 ··· 1158 1156 null_free_sector(nullb, sector, true); 1159 1157 1160 1158 count += temp; 1161 - sector += temp >> SECTOR_SHIFT; 1159 + pos += temp; 1162 1160 } 1163 - return 0; 1161 + return BLK_STS_OK; 1164 1162 } 1165 1163 1166 - static int copy_from_nullb(struct nullb *nullb, struct page *dest, 1167 - unsigned int off, sector_t sector, size_t n) 1164 + static void copy_from_nullb(struct nullb *nullb, void *dest, loff_t pos, 1165 + size_t n) 1168 1166 { 1169 1167 size_t temp, count = 0; 1170 - unsigned int offset; 1171 1168 struct nullb_page *t_page; 1169 + sector_t sector; 1172 1170 1173 1171 while (count < n) { 1174 - temp = min_t(size_t, nullb->dev->blocksize, n - count); 1172 + temp = min3(nullb->dev->blocksize, n - count, 1173 + PAGE_SIZE - offset_in_page(pos)); 1174 + sector = pos >> SECTOR_SHIFT; 1175 1175 1176 - offset = (sector & SECTOR_MASK) << SECTOR_SHIFT; 1177 1176 t_page = null_lookup_page(nullb, sector, false, 1178 1177 !null_cache_active(nullb)); 1179 - 1180 1178 if (t_page) 1181 - memcpy_page(dest, off + count, t_page->page, offset, 1182 - temp); 1179 + memcpy_from_page(dest + count, t_page->page, 1180 + offset_in_page(pos), temp); 1183 1181 else 1184 - memzero_page(dest, off + count, temp); 1182 + memset(dest + count, 0, temp); 1185 1183 1186 1184 count += temp; 1187 - sector += temp >> SECTOR_SHIFT; 1185 + pos += temp; 1188 1186 } 1189 - return 0; 1190 - } 1191 - 1192 - static void nullb_fill_pattern(struct nullb *nullb, struct page *page, 1193 - unsigned int len, unsigned int off) 1194 - { 1195 - memset_page(page, off, 0xff, len); 1196 1187 } 1197 1188 1198 1189 blk_status_t null_handle_discard(struct nullb_device *dev, ··· 1229 1234 return errno_to_blk_status(err); 1230 1235 } 1231 1236 1232 - static int null_transfer(struct nullb *nullb, struct page *page, 1233 - unsigned int len, unsigned int off, bool is_write, sector_t sector, 1237 + static blk_status_t null_transfer(struct nullb *nullb, struct page *page, 1238 + unsigned int len, unsigned int off, bool is_write, loff_t pos, 1234 1239 bool is_fua) 1235 1240 { 1236 1241 struct nullb_device *dev = nullb->dev; 1242 + blk_status_t err = BLK_STS_OK; 1237 1243 unsigned int valid_len = len; 1238 - int err = 0; 1244 + void *p; 1239 1245 1246 + p = kmap_local_page(page) + off; 
1240 1247 if (!is_write) { 1241 - if (dev->zoned) 1248 + if (dev->zoned) { 1242 1249 valid_len = null_zone_valid_read_len(nullb, 1243 - sector, len); 1250 + pos >> SECTOR_SHIFT, len); 1251 + if (valid_len && valid_len != len) 1252 + valid_len -= pos & (SECTOR_SIZE - 1); 1253 + } 1244 1254 1245 1255 if (valid_len) { 1246 - err = copy_from_nullb(nullb, page, off, 1247 - sector, valid_len); 1256 + copy_from_nullb(nullb, p, pos, valid_len); 1248 1257 off += valid_len; 1249 1258 len -= valid_len; 1250 1259 } 1251 1260 1252 1261 if (len) 1253 - nullb_fill_pattern(nullb, page, len, off); 1262 + memset(p + valid_len, 0xff, len); 1254 1263 flush_dcache_page(page); 1255 1264 } else { 1256 1265 flush_dcache_page(page); 1257 - err = copy_to_nullb(nullb, page, off, sector, len, is_fua); 1266 + err = copy_to_nullb(nullb, p, pos, len, is_fua); 1258 1267 } 1259 1268 1269 + kunmap_local(p); 1260 1270 return err; 1261 1271 } 1262 1272 ··· 1274 1274 { 1275 1275 struct request *rq = blk_mq_rq_from_pdu(cmd); 1276 1276 struct nullb *nullb = cmd->nq->dev->nullb; 1277 - int err = 0; 1277 + blk_status_t err = BLK_STS_OK; 1278 1278 unsigned int len; 1279 - sector_t sector = blk_rq_pos(rq); 1279 + loff_t pos = blk_rq_pos(rq) << SECTOR_SHIFT; 1280 1280 unsigned int max_bytes = nr_sectors << SECTOR_SHIFT; 1281 1281 unsigned int transferred_bytes = 0; 1282 1282 struct req_iterator iter; ··· 1288 1288 if (transferred_bytes + len > max_bytes) 1289 1289 len = max_bytes - transferred_bytes; 1290 1290 err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset, 1291 - op_is_write(req_op(rq)), sector, 1291 + op_is_write(req_op(rq)), pos, 1292 1292 rq->cmd_flags & REQ_FUA); 1293 1293 if (err) 1294 1294 break; 1295 - sector += len >> SECTOR_SHIFT; 1295 + pos += len; 1296 1296 transferred_bytes += len; 1297 1297 if (transferred_bytes >= max_bytes) 1298 1298 break; 1299 1299 } 1300 1300 spin_unlock_irq(&nullb->lock); 1301 1301 1302 - return errno_to_blk_status(err); 1302 + return err; 1303 1303 } 1304 1304 1305 1305 static inline blk_status_t null_handle_throttled(struct nullb_cmd *cmd) ··· 1949 1949 .logical_block_size = dev->blocksize, 1950 1950 .physical_block_size = dev->blocksize, 1951 1951 .max_hw_sectors = dev->max_sectors, 1952 - .dma_alignment = dev->blocksize - 1, 1952 + .dma_alignment = 1, 1953 1953 }; 1954 1954 1955 1955 struct nullb *nullb;
+2 -1
drivers/block/null_blk/null_blk.h
··· 143 143 int null_register_zoned_dev(struct nullb *nullb); 144 144 void null_free_zoned_dev(struct nullb_device *dev); 145 145 int null_report_zones(struct gendisk *disk, sector_t sector, 146 - unsigned int nr_zones, report_zones_cb cb, void *data); 146 + unsigned int nr_zones, 147 + struct blk_report_zones_args *args); 147 148 blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd, enum req_op op, 148 149 sector_t sector, sector_t nr_sectors); 149 150 size_t null_zone_valid_read_len(struct nullb *nullb,
+3 -3
drivers/block/null_blk/zoned.c
··· 191 191 } 192 192 193 193 int null_report_zones(struct gendisk *disk, sector_t sector, 194 - unsigned int nr_zones, report_zones_cb cb, void *data) 194 + unsigned int nr_zones, struct blk_report_zones_args *args) 195 195 { 196 196 struct nullb *nullb = disk->private_data; 197 197 struct nullb_device *dev = nullb->dev; ··· 225 225 blkz.capacity = zone->capacity; 226 226 null_unlock_zone(dev, zone); 227 227 228 - error = cb(&blkz, i, data); 228 + error = disk_report_zone(disk, &blkz, i, args); 229 229 if (error) 230 230 return error; 231 231 } ··· 242 242 { 243 243 struct nullb_device *dev = nullb->dev; 244 244 struct nullb_zone *zone = &dev->zones[null_zone_no(dev, sector)]; 245 - unsigned int nr_sectors = len >> SECTOR_SHIFT; 245 + unsigned int nr_sectors = DIV_ROUND_UP(len, SECTOR_SIZE); 246 246 247 247 /* Read must be below the write pointer position */ 248 248 if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL ||
+4
drivers/block/ps3disk.c
··· 85 85 struct bio_vec bvec; 86 86 87 87 rq_for_each_segment(bvec, req, iter) { 88 + dev_dbg(&dev->sbd.core, "%s:%u: %u sectors from %llu\n", 89 + __func__, __LINE__, bio_sectors(iter.bio), 90 + iter.bio->bi_iter.bi_sector); 88 91 if (gather) 89 92 memcpy_from_bvec(dev->bounce_buf + offset, &bvec); 90 93 else 91 94 memcpy_to_bvec(&bvec, dev->bounce_buf + offset); 95 + offset += bvec.bv_len; 92 96 } 93 97 } 94 98
+11 -4
drivers/block/rnbd/rnbd-proto.h
··· 24 24 #define RTRS_PORT 1234 25 25 26 26 /** 27 - * enum rnbd_msg_types - RNBD message types 27 + * enum rnbd_msg_type - RNBD message types 28 28 * @RNBD_MSG_SESS_INFO: initial session info from client to server 29 29 * @RNBD_MSG_SESS_INFO_RSP: initial session info from server to client 30 30 * @RNBD_MSG_OPEN: open (map) device request ··· 47 47 */ 48 48 struct rnbd_msg_hdr { 49 49 __le16 type; 50 + /* private: */ 50 51 __le16 __padding; 51 52 }; 52 53 53 - /** 54 + /* 54 55 * We allow to map RO many times and RW only once. We allow to map yet another 55 56 * time RW, if MIGRATION is provided (second RW export can be required for 56 57 * example for VM migration) ··· 79 78 struct rnbd_msg_sess_info { 80 79 struct rnbd_msg_hdr hdr; 81 80 u8 ver; 81 + /* private: */ 82 82 u8 reserved[31]; 83 83 }; 84 84 ··· 91 89 struct rnbd_msg_sess_info_rsp { 92 90 struct rnbd_msg_hdr hdr; 93 91 u8 ver; 92 + /* private: */ 94 93 u8 reserved[31]; 95 94 }; 96 95 ··· 100 97 * @hdr: message header 101 98 * @access_mode: the mode to open remote device, valid values see: 102 99 * enum rnbd_access_mode 103 - * @device_name: device path on remote side 100 + * @dev_name: device path on remote side 104 101 */ 105 102 struct rnbd_msg_open { 106 103 struct rnbd_msg_hdr hdr; 107 104 u8 access_mode; 105 + /* private: */ 108 106 u8 resv1; 107 + /* public: */ 109 108 s8 dev_name[NAME_MAX]; 109 + /* private: */ 110 110 u8 reserved[3]; 111 111 }; 112 112 ··· 161 155 __le16 secure_discard; 162 156 u8 obsolete_rotational; 163 157 u8 cache_policy; 158 + /* private: */ 164 159 u8 reserved[10]; 165 160 }; 166 161 ··· 194 187 * @RNBD_OP_DISCARD: discard sectors 195 188 * @RNBD_OP_SECURE_ERASE: securely erase sectors 196 189 * @RNBD_OP_WRITE_ZEROES: write zeroes sectors 197 - 190 + * 198 191 * @RNBD_F_SYNC: request is sync (sync write or read) 199 192 * @RNBD_F_FUA: forced unit access 200 193 */
+1 -2
drivers/block/rnull/rnull.rs
··· 17 17 error::Result, 18 18 pr_info, 19 19 prelude::*, 20 - sync::Arc, 21 - types::ARef, 20 + sync::{aref::ARef, Arc}, 22 21 }; 23 22 use pin_init::PinInit; 24 23
+184 -203
drivers/block/ublk_drv.c
··· 155 155 */ 156 156 #define UBLK_REFCOUNT_INIT (REFCOUNT_MAX / 2) 157 157 158 + union ublk_io_buf { 159 + __u64 addr; 160 + struct ublk_auto_buf_reg auto_reg; 161 + }; 162 + 158 163 struct ublk_io { 159 - /* userspace buffer address from io cmd */ 160 - union { 161 - __u64 addr; 162 - struct ublk_auto_buf_reg buf; 163 - }; 164 + union ublk_io_buf buf; 164 165 unsigned int flags; 165 166 int res; 166 167 ··· 204 203 bool fail_io; /* copy of dev->state == UBLK_S_DEV_FAIL_IO */ 205 204 spinlock_t cancel_lock; 206 205 struct ublk_device *dev; 207 - struct ublk_io ios[]; 206 + struct ublk_io ios[] __counted_by(q_depth); 208 207 }; 209 208 210 209 struct ublk_device { 211 210 struct gendisk *ub_disk; 212 211 213 - char *__queues; 214 - 215 - unsigned int queue_size; 216 212 struct ublksrv_ctrl_dev_info dev_info; 217 213 218 214 struct blk_mq_tag_set tag_set; ··· 237 239 bool canceling; 238 240 pid_t ublksrv_tgid; 239 241 struct delayed_work exit_work; 242 + 243 + struct ublk_queue *queues[]; 240 244 }; 241 245 242 246 /* header of ublk_params */ ··· 265 265 return ub->dev_info.flags & UBLK_F_ZONED; 266 266 } 267 267 268 - static inline bool ublk_queue_is_zoned(struct ublk_queue *ubq) 268 + static inline bool ublk_queue_is_zoned(const struct ublk_queue *ubq) 269 269 { 270 270 return ubq->flags & UBLK_F_ZONED; 271 271 } ··· 368 368 } 369 369 370 370 static int ublk_report_zones(struct gendisk *disk, sector_t sector, 371 - unsigned int nr_zones, report_zones_cb cb, void *data) 371 + unsigned int nr_zones, struct blk_report_zones_args *args) 372 372 { 373 373 struct ublk_device *ub = disk->private_data; 374 374 unsigned int zone_size_sectors = disk->queue->limits.chunk_sectors; ··· 431 431 if (!zone->len) 432 432 break; 433 433 434 - ret = cb(zone, i, data); 434 + ret = disk_report_zone(disk, zone, i, args); 435 435 if (ret) 436 436 goto out; 437 437 ··· 499 499 iod->op_flags = ublk_op | ublk_req_build_flags(req); 500 500 iod->nr_sectors = blk_rq_sectors(req); 501 501 iod->start_sector = blk_rq_pos(req); 502 - iod->addr = io->addr; 502 + iod->addr = io->buf.addr; 503 503 504 504 return BLK_STS_OK; 505 505 } ··· 781 781 static inline struct ublk_queue *ublk_get_queue(struct ublk_device *dev, 782 782 int qid) 783 783 { 784 - return (struct ublk_queue *)&(dev->__queues[qid * dev->queue_size]); 784 + return dev->queues[qid]; 785 785 } 786 786 787 787 static inline bool ublk_rq_has_data(const struct request *rq) ··· 914 914 .report_zones = ublk_report_zones, 915 915 }; 916 916 917 - #define UBLK_MAX_PIN_PAGES 32 918 - 919 - struct ublk_io_iter { 920 - struct page *pages[UBLK_MAX_PIN_PAGES]; 921 - struct bio *bio; 922 - struct bvec_iter iter; 923 - }; 924 - 925 - /* return how many pages are copied */ 926 - static void ublk_copy_io_pages(struct ublk_io_iter *data, 927 - size_t total, size_t pg_off, int dir) 928 - { 929 - unsigned done = 0; 930 - unsigned pg_idx = 0; 931 - 932 - while (done < total) { 933 - struct bio_vec bv = bio_iter_iovec(data->bio, data->iter); 934 - unsigned int bytes = min3(bv.bv_len, (unsigned)total - done, 935 - (unsigned)(PAGE_SIZE - pg_off)); 936 - void *bv_buf = bvec_kmap_local(&bv); 937 - void *pg_buf = kmap_local_page(data->pages[pg_idx]); 938 - 939 - if (dir == ITER_DEST) 940 - memcpy(pg_buf + pg_off, bv_buf, bytes); 941 - else 942 - memcpy(bv_buf, pg_buf + pg_off, bytes); 943 - 944 - kunmap_local(pg_buf); 945 - kunmap_local(bv_buf); 946 - 947 - /* advance page array */ 948 - pg_off += bytes; 949 - if (pg_off == PAGE_SIZE) { 950 - pg_idx += 1; 951 - pg_off = 0; 952 - } 953 
- 954 - done += bytes; 955 - 956 - /* advance bio */ 957 - bio_advance_iter_single(data->bio, &data->iter, bytes); 958 - if (!data->iter.bi_size) { 959 - data->bio = data->bio->bi_next; 960 - if (data->bio == NULL) 961 - break; 962 - data->iter = data->bio->bi_iter; 963 - } 964 - } 965 - } 966 - 967 - static bool ublk_advance_io_iter(const struct request *req, 968 - struct ublk_io_iter *iter, unsigned int offset) 969 - { 970 - struct bio *bio = req->bio; 971 - 972 - for_each_bio(bio) { 973 - if (bio->bi_iter.bi_size > offset) { 974 - iter->bio = bio; 975 - iter->iter = bio->bi_iter; 976 - bio_advance_iter(iter->bio, &iter->iter, offset); 977 - return true; 978 - } 979 - offset -= bio->bi_iter.bi_size; 980 - } 981 - return false; 982 - } 983 - 984 917 /* 985 918 * Copy data between request pages and io_iter, and 'offset' 986 919 * is the start point of linear offset of request. ··· 921 988 static size_t ublk_copy_user_pages(const struct request *req, 922 989 unsigned offset, struct iov_iter *uiter, int dir) 923 990 { 924 - struct ublk_io_iter iter; 991 + struct req_iterator iter; 992 + struct bio_vec bv; 925 993 size_t done = 0; 926 994 927 - if (!ublk_advance_io_iter(req, &iter, offset)) 928 - return 0; 995 + rq_for_each_segment(bv, req, iter) { 996 + void *bv_buf; 997 + size_t copied; 929 998 930 - while (iov_iter_count(uiter) && iter.bio) { 931 - unsigned nr_pages; 932 - ssize_t len; 933 - size_t off; 934 - int i; 935 - 936 - len = iov_iter_get_pages2(uiter, iter.pages, 937 - iov_iter_count(uiter), 938 - UBLK_MAX_PIN_PAGES, &off); 939 - if (len <= 0) 940 - return done; 941 - 942 - ublk_copy_io_pages(&iter, len, off, dir); 943 - nr_pages = DIV_ROUND_UP(len + off, PAGE_SIZE); 944 - for (i = 0; i < nr_pages; i++) { 945 - if (dir == ITER_DEST) 946 - set_page_dirty(iter.pages[i]); 947 - put_page(iter.pages[i]); 999 + if (offset >= bv.bv_len) { 1000 + offset -= bv.bv_len; 1001 + continue; 948 1002 } 949 - done += len; 950 - } 951 1003 1004 + bv.bv_offset += offset; 1005 + bv.bv_len -= offset; 1006 + bv_buf = bvec_kmap_local(&bv); 1007 + if (dir == ITER_DEST) 1008 + copied = copy_to_iter(bv_buf, bv.bv_len, uiter); 1009 + else 1010 + copied = copy_from_iter(bv_buf, bv.bv_len, uiter); 1011 + 1012 + kunmap_local(bv_buf); 1013 + 1014 + done += copied; 1015 + if (copied < bv.bv_len) 1016 + break; 1017 + 1018 + offset = 0; 1019 + } 952 1020 return done; 953 1021 } 954 1022 ··· 964 1030 (req_op(req) == REQ_OP_READ || req_op(req) == REQ_OP_DRV_IN); 965 1031 } 966 1032 967 - static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req, 968 - const struct ublk_io *io) 1033 + static unsigned int ublk_map_io(const struct ublk_queue *ubq, 1034 + const struct request *req, 1035 + const struct ublk_io *io) 969 1036 { 970 1037 const unsigned int rq_bytes = blk_rq_bytes(req); 971 1038 ··· 982 1047 struct iov_iter iter; 983 1048 const int dir = ITER_DEST; 984 1049 985 - import_ubuf(dir, u64_to_user_ptr(io->addr), rq_bytes, &iter); 1050 + import_ubuf(dir, u64_to_user_ptr(io->buf.addr), rq_bytes, &iter); 986 1051 return ublk_copy_user_pages(req, 0, &iter, dir); 987 1052 } 988 1053 return rq_bytes; 989 1054 } 990 1055 991 - static int ublk_unmap_io(bool need_map, 1056 + static unsigned int ublk_unmap_io(bool need_map, 992 1057 const struct request *req, 993 1058 const struct ublk_io *io) 994 1059 { ··· 1003 1068 1004 1069 WARN_ON_ONCE(io->res > rq_bytes); 1005 1070 1006 - import_ubuf(dir, u64_to_user_ptr(io->addr), io->res, &iter); 1071 + import_ubuf(dir, u64_to_user_ptr(io->buf.addr), io->res, 
&iter); 1007 1072 return ublk_copy_user_pages(req, 0, &iter, dir); 1008 1073 } 1009 1074 return rq_bytes; ··· 1069 1134 iod->op_flags = ublk_op | ublk_req_build_flags(req); 1070 1135 iod->nr_sectors = blk_rq_sectors(req); 1071 1136 iod->start_sector = blk_rq_pos(req); 1072 - iod->addr = io->addr; 1137 + iod->addr = io->buf.addr; 1073 1138 1074 1139 return BLK_STS_OK; 1075 1140 } ··· 1168 1233 } 1169 1234 1170 1235 static void 1171 - ublk_auto_buf_reg_fallback(const struct ublk_queue *ubq, struct ublk_io *io) 1236 + ublk_auto_buf_reg_fallback(const struct ublk_queue *ubq, unsigned tag) 1172 1237 { 1173 - unsigned tag = io - ubq->ios; 1174 1238 struct ublksrv_io_desc *iod = ublk_get_iod(ubq, tag); 1175 1239 1176 1240 iod->op_flags |= UBLK_IO_F_NEED_REG_BUF; 1177 1241 } 1178 1242 1179 - static bool ublk_auto_buf_reg(const struct ublk_queue *ubq, struct request *req, 1180 - struct ublk_io *io, unsigned int issue_flags) 1243 + enum auto_buf_reg_res { 1244 + AUTO_BUF_REG_FAIL, 1245 + AUTO_BUF_REG_FALLBACK, 1246 + AUTO_BUF_REG_OK, 1247 + }; 1248 + 1249 + static void ublk_prep_auto_buf_reg_io(const struct ublk_queue *ubq, 1250 + struct request *req, struct ublk_io *io, 1251 + struct io_uring_cmd *cmd, 1252 + enum auto_buf_reg_res res) 1253 + { 1254 + if (res == AUTO_BUF_REG_OK) { 1255 + io->task_registered_buffers = 1; 1256 + io->buf_ctx_handle = io_uring_cmd_ctx_handle(cmd); 1257 + io->flags |= UBLK_IO_FLAG_AUTO_BUF_REG; 1258 + } 1259 + ublk_init_req_ref(ubq, io); 1260 + __ublk_prep_compl_io_cmd(io, req); 1261 + } 1262 + 1263 + static enum auto_buf_reg_res 1264 + __ublk_do_auto_buf_reg(const struct ublk_queue *ubq, struct request *req, 1265 + struct ublk_io *io, struct io_uring_cmd *cmd, 1266 + unsigned int issue_flags) 1181 1267 { 1182 1268 int ret; 1183 1269 1184 - ret = io_buffer_register_bvec(io->cmd, req, ublk_io_release, 1185 - io->buf.index, issue_flags); 1270 + ret = io_buffer_register_bvec(cmd, req, ublk_io_release, 1271 + io->buf.auto_reg.index, issue_flags); 1186 1272 if (ret) { 1187 - if (io->buf.flags & UBLK_AUTO_BUF_REG_FALLBACK) { 1188 - ublk_auto_buf_reg_fallback(ubq, io); 1189 - return true; 1273 + if (io->buf.auto_reg.flags & UBLK_AUTO_BUF_REG_FALLBACK) { 1274 + ublk_auto_buf_reg_fallback(ubq, req->tag); 1275 + return AUTO_BUF_REG_FALLBACK; 1190 1276 } 1191 1277 blk_mq_end_request(req, BLK_STS_IOERR); 1192 - return false; 1278 + return AUTO_BUF_REG_FAIL; 1193 1279 } 1194 1280 1195 - io->task_registered_buffers = 1; 1196 - io->buf_ctx_handle = io_uring_cmd_ctx_handle(io->cmd); 1197 - io->flags |= UBLK_IO_FLAG_AUTO_BUF_REG; 1198 - return true; 1281 + return AUTO_BUF_REG_OK; 1199 1282 } 1200 1283 1201 - static bool ublk_prep_auto_buf_reg(struct ublk_queue *ubq, 1202 - struct request *req, struct ublk_io *io, 1203 - unsigned int issue_flags) 1284 + static void ublk_do_auto_buf_reg(const struct ublk_queue *ubq, struct request *req, 1285 + struct ublk_io *io, struct io_uring_cmd *cmd, 1286 + unsigned int issue_flags) 1204 1287 { 1205 - ublk_init_req_ref(ubq, io); 1206 - if (ublk_support_auto_buf_reg(ubq) && ublk_rq_has_data(req)) 1207 - return ublk_auto_buf_reg(ubq, req, io, issue_flags); 1288 + enum auto_buf_reg_res res = __ublk_do_auto_buf_reg(ubq, req, io, cmd, 1289 + issue_flags); 1208 1290 1209 - return true; 1291 + if (res != AUTO_BUF_REG_FAIL) { 1292 + ublk_prep_auto_buf_reg_io(ubq, req, io, cmd, res); 1293 + io_uring_cmd_done(cmd, UBLK_IO_RES_OK, issue_flags); 1294 + } 1210 1295 } 1211 1296 1212 1297 static bool ublk_start_io(const struct ublk_queue *ubq, struct request *req, 
··· 1298 1343 if (!ublk_start_io(ubq, req, io)) 1299 1344 return; 1300 1345 1301 - if (ublk_prep_auto_buf_reg(ubq, req, io, issue_flags)) 1346 + if (ublk_support_auto_buf_reg(ubq) && ublk_rq_has_data(req)) { 1347 + ublk_do_auto_buf_reg(ubq, req, io, io->cmd, issue_flags); 1348 + } else { 1349 + ublk_init_req_ref(ubq, io); 1302 1350 ublk_complete_io_cmd(io, req, UBLK_IO_RES_OK, issue_flags); 1351 + } 1303 1352 } 1304 1353 1305 1354 static void ublk_cmd_tw_cb(struct io_tw_req tw_req, io_tw_token_t tw) ··· 1495 1536 */ 1496 1537 io->flags &= UBLK_IO_FLAG_CANCELED; 1497 1538 io->cmd = NULL; 1498 - io->addr = 0; 1539 + io->buf.addr = 0; 1499 1540 1500 1541 /* 1501 1542 * old task is PF_EXITING, put it now ··· 2056 2097 2057 2098 static inline int ublk_set_auto_buf_reg(struct ublk_io *io, struct io_uring_cmd *cmd) 2058 2099 { 2059 - io->buf = ublk_sqe_addr_to_auto_buf_reg(READ_ONCE(cmd->sqe->addr)); 2100 + struct ublk_auto_buf_reg buf; 2060 2101 2061 - if (io->buf.reserved0 || io->buf.reserved1) 2102 + buf = ublk_sqe_addr_to_auto_buf_reg(READ_ONCE(cmd->sqe->addr)); 2103 + 2104 + if (buf.reserved0 || buf.reserved1) 2062 2105 return -EINVAL; 2063 2106 2064 - if (io->buf.flags & ~UBLK_AUTO_BUF_REG_F_MASK) 2107 + if (buf.flags & ~UBLK_AUTO_BUF_REG_F_MASK) 2065 2108 return -EINVAL; 2109 + io->buf.auto_reg = buf; 2066 2110 return 0; 2067 2111 } 2068 2112 ··· 2087 2125 * this ublk request gets stuck. 2088 2126 */ 2089 2127 if (io->buf_ctx_handle == io_uring_cmd_ctx_handle(cmd)) 2090 - *buf_idx = io->buf.index; 2128 + *buf_idx = io->buf.auto_reg.index; 2091 2129 } 2092 2130 2093 2131 return ublk_set_auto_buf_reg(io, cmd); ··· 2115 2153 if (ublk_dev_support_auto_buf_reg(ub)) 2116 2154 return ublk_handle_auto_buf_reg(io, cmd, buf_idx); 2117 2155 2118 - io->addr = buf_addr; 2156 + io->buf.addr = buf_addr; 2119 2157 return 0; 2120 2158 } 2121 2159 ··· 2233 2271 return 0; 2234 2272 } 2235 2273 2274 + static int __ublk_fetch(struct io_uring_cmd *cmd, struct ublk_device *ub, 2275 + struct ublk_io *io) 2276 + { 2277 + /* UBLK_IO_FETCH_REQ is only allowed before dev is setup */ 2278 + if (ublk_dev_ready(ub)) 2279 + return -EBUSY; 2280 + 2281 + /* allow each command to be FETCHed at most once */ 2282 + if (io->flags & UBLK_IO_FLAG_ACTIVE) 2283 + return -EINVAL; 2284 + 2285 + WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV); 2286 + 2287 + ublk_fill_io_cmd(io, cmd); 2288 + 2289 + WRITE_ONCE(io->task, get_task_struct(current)); 2290 + ublk_mark_io_ready(ub); 2291 + 2292 + return 0; 2293 + } 2294 + 2236 2295 static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_device *ub, 2237 2296 struct ublk_io *io, __u64 buf_addr) 2238 2297 { 2239 - int ret = 0; 2298 + int ret; 2240 2299 2241 2300 /* 2242 2301 * When handling FETCH command for setting up ublk uring queue, ··· 2265 2282 * FETCH, so it is fine even for IO_URING_F_NONBLOCK. 
2266 2283 */ 2267 2284 mutex_lock(&ub->mutex); 2268 - /* UBLK_IO_FETCH_REQ is only allowed before dev is setup */ 2269 - if (ublk_dev_ready(ub)) { 2270 - ret = -EBUSY; 2271 - goto out; 2272 - } 2273 - 2274 - /* allow each command to be FETCHed at most once */ 2275 - if (io->flags & UBLK_IO_FLAG_ACTIVE) { 2276 - ret = -EINVAL; 2277 - goto out; 2278 - } 2279 - 2280 - WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV); 2281 - 2282 - ublk_fill_io_cmd(io, cmd); 2283 - ret = ublk_config_io_buf(ub, io, cmd, buf_addr, NULL); 2284 - if (ret) 2285 - goto out; 2286 - 2287 - WRITE_ONCE(io->task, get_task_struct(current)); 2288 - ublk_mark_io_ready(ub); 2289 - out: 2285 + ret = __ublk_fetch(cmd, ub, io); 2286 + if (!ret) 2287 + ret = ublk_config_io_buf(ub, io, cmd, buf_addr, NULL); 2290 2288 mutex_unlock(&ub->mutex); 2291 2289 return ret; 2292 2290 } ··· 2314 2350 */ 2315 2351 io->flags &= ~UBLK_IO_FLAG_NEED_GET_DATA; 2316 2352 /* update iod->addr because ublksrv may have passed a new io buffer */ 2317 - ublk_get_iod(ubq, req->tag)->addr = io->addr; 2353 + ublk_get_iod(ubq, req->tag)->addr = io->buf.addr; 2318 2354 pr_devel("%s: update iod->addr: qid %d tag %d io_flags %x addr %llx\n", 2319 2355 __func__, ubq->q_id, req->tag, io->flags, 2320 2356 ublk_get_iod(ubq, req->tag)->addr); ··· 2330 2366 u16 buf_idx = UBLK_INVALID_BUF_IDX; 2331 2367 struct ublk_device *ub = cmd->file->private_data; 2332 2368 struct ublk_queue *ubq; 2333 - struct ublk_io *io; 2369 + struct ublk_io *io = NULL; 2334 2370 u32 cmd_op = cmd->cmd_op; 2335 2371 u16 q_id = READ_ONCE(ub_src->q_id); 2336 2372 u16 tag = READ_ONCE(ub_src->tag); ··· 2451 2487 2452 2488 out: 2453 2489 pr_devel("%s: complete: cmd op %d, tag %d ret %x io_flags %x\n", 2454 - __func__, cmd_op, tag, ret, io->flags); 2490 + __func__, cmd_op, tag, ret, io ? 
io->flags : 0); 2455 2491 return ret; 2456 2492 } 2457 2493 ··· 2539 2575 size_t buf_off; 2540 2576 u16 tag, q_id; 2541 2577 2542 - if (!ub) 2543 - return ERR_PTR(-EACCES); 2544 - 2545 2578 if (!user_backed_iter(iter)) 2546 2579 return ERR_PTR(-EACCES); 2547 2580 ··· 2563 2602 req = __ublk_check_and_get_req(ub, q_id, tag, *io, buf_off); 2564 2603 if (!req) 2565 2604 return ERR_PTR(-EINVAL); 2566 - 2567 - if (!req->mq_hctx || !req->mq_hctx->driver_data) 2568 - goto fail; 2569 2605 2570 2606 if (!ublk_check_ubuf_dir(req, dir)) 2571 2607 goto fail; ··· 2620 2662 2621 2663 static void ublk_deinit_queue(struct ublk_device *ub, int q_id) 2622 2664 { 2623 - int size = ublk_queue_cmd_buf_size(ub); 2624 - struct ublk_queue *ubq = ublk_get_queue(ub, q_id); 2625 - int i; 2665 + struct ublk_queue *ubq = ub->queues[q_id]; 2666 + int size, i; 2667 + 2668 + if (!ubq) 2669 + return; 2670 + 2671 + size = ublk_queue_cmd_buf_size(ub); 2626 2672 2627 2673 for (i = 0; i < ubq->q_depth; i++) { 2628 2674 struct ublk_io *io = &ubq->ios[i]; ··· 2638 2676 2639 2677 if (ubq->io_cmd_buf) 2640 2678 free_pages((unsigned long)ubq->io_cmd_buf, get_order(size)); 2679 + 2680 + kvfree(ubq); 2681 + ub->queues[q_id] = NULL; 2682 + } 2683 + 2684 + static int ublk_get_queue_numa_node(struct ublk_device *ub, int q_id) 2685 + { 2686 + unsigned int cpu; 2687 + 2688 + /* Find first CPU mapped to this queue */ 2689 + for_each_possible_cpu(cpu) { 2690 + if (ub->tag_set.map[HCTX_TYPE_DEFAULT].mq_map[cpu] == q_id) 2691 + return cpu_to_node(cpu); 2692 + } 2693 + 2694 + return NUMA_NO_NODE; 2641 2695 } 2642 2696 2643 2697 static int ublk_init_queue(struct ublk_device *ub, int q_id) 2644 2698 { 2645 - struct ublk_queue *ubq = ublk_get_queue(ub, q_id); 2699 + int depth = ub->dev_info.queue_depth; 2646 2700 gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO; 2647 - void *ptr; 2701 + struct ublk_queue *ubq; 2702 + struct page *page; 2703 + int numa_node; 2648 2704 int size; 2705 + 2706 + /* Determine NUMA node based on queue's CPU affinity */ 2707 + numa_node = ublk_get_queue_numa_node(ub, q_id); 2708 + 2709 + /* Allocate queue structure on local NUMA node */ 2710 + ubq = kvzalloc_node(struct_size(ubq, ios, depth), GFP_KERNEL, 2711 + numa_node); 2712 + if (!ubq) 2713 + return -ENOMEM; 2649 2714 2650 2715 spin_lock_init(&ubq->cancel_lock); 2651 2716 ubq->flags = ub->dev_info.flags; 2652 2717 ubq->q_id = q_id; 2653 - ubq->q_depth = ub->dev_info.queue_depth; 2718 + ubq->q_depth = depth; 2654 2719 size = ublk_queue_cmd_buf_size(ub); 2655 2720 2656 - ptr = (void *) __get_free_pages(gfp_flags, get_order(size)); 2657 - if (!ptr) 2721 + /* Allocate I/O command buffer on local NUMA node */ 2722 + page = alloc_pages_node(numa_node, gfp_flags, get_order(size)); 2723 + if (!page) { 2724 + kvfree(ubq); 2658 2725 return -ENOMEM; 2726 + } 2727 + ubq->io_cmd_buf = page_address(page); 2659 2728 2660 - ubq->io_cmd_buf = ptr; 2729 + ub->queues[q_id] = ubq; 2661 2730 ubq->dev = ub; 2662 2731 return 0; 2663 2732 } 2664 2733 2665 2734 static void ublk_deinit_queues(struct ublk_device *ub) 2666 2735 { 2667 - int nr_queues = ub->dev_info.nr_hw_queues; 2668 2736 int i; 2669 2737 2670 - if (!ub->__queues) 2671 - return; 2672 - 2673 - for (i = 0; i < nr_queues; i++) 2738 + for (i = 0; i < ub->dev_info.nr_hw_queues; i++) 2674 2739 ublk_deinit_queue(ub, i); 2675 - kvfree(ub->__queues); 2676 2740 } 2677 2741 2678 2742 static int ublk_init_queues(struct ublk_device *ub) 2679 2743 { 2680 - int nr_queues = ub->dev_info.nr_hw_queues; 2681 - int depth = ub->dev_info.queue_depth; 2682 
- int ubq_size = sizeof(struct ublk_queue) + depth * sizeof(struct ublk_io); 2683 - int i, ret = -ENOMEM; 2744 + int i, ret; 2684 2745 2685 - ub->queue_size = ubq_size; 2686 - ub->__queues = kvcalloc(nr_queues, ubq_size, GFP_KERNEL); 2687 - if (!ub->__queues) 2688 - return ret; 2689 - 2690 - for (i = 0; i < nr_queues; i++) { 2691 - if (ublk_init_queue(ub, i)) 2746 + for (i = 0; i < ub->dev_info.nr_hw_queues; i++) { 2747 + ret = ublk_init_queue(ub, i); 2748 + if (ret) 2692 2749 goto fail; 2693 2750 } 2694 2751 ··· 3109 3128 goto out_unlock; 3110 3129 3111 3130 ret = -ENOMEM; 3112 - ub = kzalloc(sizeof(*ub), GFP_KERNEL); 3131 + ub = kzalloc(struct_size(ub, queues, info.nr_hw_queues), GFP_KERNEL); 3113 3132 if (!ub) 3114 3133 goto out_unlock; 3115 3134 mutex_init(&ub->mutex); ··· 3159 3178 ub->dev_info.nr_hw_queues, nr_cpu_ids); 3160 3179 ublk_align_max_io_size(ub); 3161 3180 3162 - ret = ublk_init_queues(ub); 3181 + ret = ublk_add_tag_set(ub); 3163 3182 if (ret) 3164 3183 goto out_free_dev_number; 3165 3184 3166 - ret = ublk_add_tag_set(ub); 3185 + ret = ublk_init_queues(ub); 3167 3186 if (ret) 3168 - goto out_deinit_queues; 3187 + goto out_free_tag_set; 3169 3188 3170 3189 ret = -EFAULT; 3171 3190 if (copy_to_user(argp, &ub->dev_info, sizeof(info))) 3172 - goto out_free_tag_set; 3191 + goto out_deinit_queues; 3173 3192 3174 3193 /* 3175 3194 * Add the char dev so that ublksrv daemon can be setup. ··· 3178 3197 ret = ublk_add_chdev(ub); 3179 3198 goto out_unlock; 3180 3199 3181 - out_free_tag_set: 3182 - blk_mq_free_tag_set(&ub->tag_set); 3183 3200 out_deinit_queues: 3184 3201 ublk_deinit_queues(ub); 3202 + out_free_tag_set: 3203 + blk_mq_free_tag_set(&ub->tag_set); 3185 3204 out_free_dev_number: 3186 3205 ublk_free_dev_number(ub); 3187 3206 out_free_ub:
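The queue setup above switches ublk to one kvzalloc_node() per queue, with a __counted_by() flexible array of ios and NUMA placement derived from the tag set's CPU map. A minimal hedged sketch of that allocation pattern in isolation (the example_* names are illustrative):

struct example_io {
	u64 addr;
};

struct example_queue {
	unsigned int depth;
	struct example_io ios[] __counted_by(depth);
};

static struct example_queue *example_alloc_queue(unsigned int depth, int node)
{
	struct example_queue *q;

	/* struct_size() guards the header + array size computation against overflow */
	q = kvzalloc_node(struct_size(q, ios, depth), GFP_KERNEL, node);
	if (q)
		q->depth = depth;	/* keep the __counted_by field in sync */
	return q;
}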
+18 -6
drivers/block/virtio_blk.c
··· 584 584 585 585 static int virtblk_parse_zone(struct virtio_blk *vblk, 586 586 struct virtio_blk_zone_descriptor *entry, 587 - unsigned int idx, report_zones_cb cb, void *data) 587 + unsigned int idx, 588 + struct blk_report_zones_args *args) 588 589 { 589 590 struct blk_zone zone = { }; 590 591 ··· 651 650 * The callback below checks the validity of the reported 652 651 * entry data, no need to further validate it here. 653 652 */ 654 - return cb(&zone, idx, data); 653 + return disk_report_zone(vblk->disk, &zone, idx, args); 655 654 } 656 655 657 656 static int virtblk_report_zones(struct gendisk *disk, sector_t sector, 658 - unsigned int nr_zones, report_zones_cb cb, 659 - void *data) 657 + unsigned int nr_zones, 658 + struct blk_report_zones_args *args) 660 659 { 661 660 struct virtio_blk *vblk = disk->private_data; 662 661 struct virtio_blk_zone_report *report; ··· 694 693 695 694 for (i = 0; i < nz && zone_idx < nr_zones; i++) { 696 695 ret = virtblk_parse_zone(vblk, &report->zones[i], 697 - zone_idx, cb, data); 696 + zone_idx, args); 698 697 if (ret) 699 698 goto fail_report; 700 699 ··· 1027 1026 out: 1028 1027 kfree(vqs); 1029 1028 kfree(vqs_info); 1030 - if (err) 1029 + if (err) { 1031 1030 kfree(vblk->vqs); 1031 + /* 1032 + * Set to NULL to prevent freeing vqs again during freezing. 1033 + */ 1034 + vblk->vqs = NULL; 1035 + } 1032 1036 return err; 1033 1037 } 1034 1038 ··· 1604 1598 1605 1599 vdev->config->del_vqs(vdev); 1606 1600 kfree(vblk->vqs); 1601 + /* 1602 + * Set to NULL to prevent freeing vqs again after a failed vqs 1603 + * allocation during resume. Note that kfree() already handles NULL 1604 + * pointers safely. 1605 + */ 1606 + vblk->vqs = NULL; 1607 1607 1608 1608 return 0; 1609 1609 }
+141 -19
drivers/block/zloop.c
··· 32 32 ZLOOP_OPT_NR_QUEUES = (1 << 6), 33 33 ZLOOP_OPT_QUEUE_DEPTH = (1 << 7), 34 34 ZLOOP_OPT_BUFFERED_IO = (1 << 8), 35 + ZLOOP_OPT_ZONE_APPEND = (1 << 9), 36 + ZLOOP_OPT_ORDERED_ZONE_APPEND = (1 << 10), 35 37 }; 36 38 37 39 static const match_table_t zloop_opt_tokens = { ··· 46 44 { ZLOOP_OPT_NR_QUEUES, "nr_queues=%u" }, 47 45 { ZLOOP_OPT_QUEUE_DEPTH, "queue_depth=%u" }, 48 46 { ZLOOP_OPT_BUFFERED_IO, "buffered_io" }, 47 + { ZLOOP_OPT_ZONE_APPEND, "zone_append=%u" }, 48 + { ZLOOP_OPT_ORDERED_ZONE_APPEND, "ordered_zone_append" }, 49 49 { ZLOOP_OPT_ERR, NULL } 50 50 }; 51 51 ··· 60 56 #define ZLOOP_DEF_NR_QUEUES 1 61 57 #define ZLOOP_DEF_QUEUE_DEPTH 128 62 58 #define ZLOOP_DEF_BUFFERED_IO false 59 + #define ZLOOP_DEF_ZONE_APPEND true 60 + #define ZLOOP_DEF_ORDERED_ZONE_APPEND false 63 61 64 62 /* Arbitrary limit on the zone size (16GB). */ 65 63 #define ZLOOP_MAX_ZONE_SIZE_MB 16384 ··· 77 71 unsigned int nr_queues; 78 72 unsigned int queue_depth; 79 73 bool buffered_io; 74 + bool zone_append; 75 + bool ordered_zone_append; 80 76 }; 81 77 82 78 /* ··· 100 92 101 93 unsigned long flags; 102 94 struct mutex lock; 95 + spinlock_t wp_lock; 103 96 enum blk_zone_cond cond; 104 97 sector_t start; 105 98 sector_t wp; ··· 117 108 118 109 struct workqueue_struct *workqueue; 119 110 bool buffered_io; 111 + bool zone_append; 112 + bool ordered_zone_append; 120 113 121 114 const char *base_dir; 122 115 struct file *data_dir; ··· 158 147 struct zloop_zone *zone = &zlo->zones[zone_no]; 159 148 struct kstat stat; 160 149 sector_t file_sectors; 150 + unsigned long flags; 161 151 int ret; 162 152 163 153 lockdep_assert_held(&zone->lock); ··· 184 172 return -EINVAL; 185 173 } 186 174 175 + spin_lock_irqsave(&zone->wp_lock, flags); 187 176 if (!file_sectors) { 188 177 zone->cond = BLK_ZONE_COND_EMPTY; 189 178 zone->wp = zone->start; 190 179 } else if (file_sectors == zlo->zone_capacity) { 191 180 zone->cond = BLK_ZONE_COND_FULL; 192 - zone->wp = zone->start + zlo->zone_size; 181 + zone->wp = ULLONG_MAX; 193 182 } else { 194 183 zone->cond = BLK_ZONE_COND_CLOSED; 195 184 zone->wp = zone->start + file_sectors; 196 185 } 186 + spin_unlock_irqrestore(&zone->wp_lock, flags); 197 187 198 188 return 0; 199 189 } ··· 239 225 static int zloop_close_zone(struct zloop_device *zlo, unsigned int zone_no) 240 226 { 241 227 struct zloop_zone *zone = &zlo->zones[zone_no]; 228 + unsigned long flags; 242 229 int ret = 0; 243 230 244 231 if (test_bit(ZLOOP_ZONE_CONV, &zone->flags)) ··· 258 243 break; 259 244 case BLK_ZONE_COND_IMP_OPEN: 260 245 case BLK_ZONE_COND_EXP_OPEN: 246 + spin_lock_irqsave(&zone->wp_lock, flags); 261 247 if (zone->wp == zone->start) 262 248 zone->cond = BLK_ZONE_COND_EMPTY; 263 249 else 264 250 zone->cond = BLK_ZONE_COND_CLOSED; 251 + spin_unlock_irqrestore(&zone->wp_lock, flags); 265 252 break; 266 253 case BLK_ZONE_COND_EMPTY: 267 254 case BLK_ZONE_COND_FULL: ··· 281 264 static int zloop_reset_zone(struct zloop_device *zlo, unsigned int zone_no) 282 265 { 283 266 struct zloop_zone *zone = &zlo->zones[zone_no]; 267 + unsigned long flags; 284 268 int ret = 0; 285 269 286 270 if (test_bit(ZLOOP_ZONE_CONV, &zone->flags)) ··· 299 281 goto unlock; 300 282 } 301 283 284 + spin_lock_irqsave(&zone->wp_lock, flags); 302 285 zone->cond = BLK_ZONE_COND_EMPTY; 303 286 zone->wp = zone->start; 304 287 clear_bit(ZLOOP_ZONE_SEQ_ERROR, &zone->flags); 288 + spin_unlock_irqrestore(&zone->wp_lock, flags); 305 289 306 290 unlock: 307 291 mutex_unlock(&zone->lock); ··· 328 308 static int zloop_finish_zone(struct 
zloop_device *zlo, unsigned int zone_no) 329 309 { 330 310 struct zloop_zone *zone = &zlo->zones[zone_no]; 311 + unsigned long flags; 331 312 int ret = 0; 332 313 333 314 if (test_bit(ZLOOP_ZONE_CONV, &zone->flags)) ··· 346 325 goto unlock; 347 326 } 348 327 328 + spin_lock_irqsave(&zone->wp_lock, flags); 349 329 zone->cond = BLK_ZONE_COND_FULL; 350 - zone->wp = zone->start + zlo->zone_size; 330 + zone->wp = ULLONG_MAX; 351 331 clear_bit(ZLOOP_ZONE_SEQ_ERROR, &zone->flags); 332 + spin_unlock_irqrestore(&zone->wp_lock, flags); 352 333 353 334 unlock: 354 335 mutex_unlock(&zone->lock); ··· 392 369 struct zloop_zone *zone; 393 370 struct iov_iter iter; 394 371 struct bio_vec tmp; 372 + unsigned long flags; 395 373 sector_t zone_end; 396 374 int nr_bvec = 0; 397 375 int ret; ··· 401 377 cmd->sector = sector; 402 378 cmd->nr_sectors = nr_sectors; 403 379 cmd->ret = 0; 380 + 381 + if (WARN_ON_ONCE(is_append && !zlo->zone_append)) { 382 + ret = -EIO; 383 + goto out; 384 + } 404 385 405 386 /* We should never get an I/O beyond the device capacity. */ 406 387 if (WARN_ON_ONCE(zone_no >= zlo->nr_zones)) { ··· 435 406 if (!test_bit(ZLOOP_ZONE_CONV, &zone->flags) && is_write) { 436 407 mutex_lock(&zone->lock); 437 408 438 - if (is_append) { 439 - sector = zone->wp; 440 - cmd->sector = sector; 441 - } 409 + spin_lock_irqsave(&zone->wp_lock, flags); 442 410 443 411 /* 444 - * Write operations must be aligned to the write pointer and 445 - * fully contained within the zone capacity. 412 + * Zone append operations always go at the current write 413 + * pointer, but regular write operations must already be 414 + * aligned to the write pointer when submitted. 446 415 */ 447 - if (sector != zone->wp || zone->wp + nr_sectors > zone_end) { 416 + if (is_append) { 417 + /* 418 + * If ordered zone append is in use, we already checked 419 + * and set the target sector in zloop_queue_rq(). 420 + */ 421 + if (!zlo->ordered_zone_append) { 422 + if (zone->cond == BLK_ZONE_COND_FULL || 423 + zone->wp + nr_sectors > zone_end) { 424 + spin_unlock_irqrestore(&zone->wp_lock, 425 + flags); 426 + ret = -EIO; 427 + goto unlock; 428 + } 429 + sector = zone->wp; 430 + } 431 + cmd->sector = sector; 432 + } else if (sector != zone->wp) { 433 + spin_unlock_irqrestore(&zone->wp_lock, flags); 448 434 pr_err("Zone %u: unaligned write: sect %llu, wp %llu\n", 449 435 zone_no, sector, zone->wp); 450 436 ret = -EIO; ··· 472 428 zone->cond = BLK_ZONE_COND_IMP_OPEN; 473 429 474 430 /* 475 - * Advance the write pointer of sequential zones. If the write 476 - * fails, the wp position will be corrected when the next I/O 477 - * copmpletes. 431 + * Advance the write pointer, unless ordered zone append is in 432 + * use. If the write fails, the write pointer position will be 433 + * corrected when the next I/O starts execution. 478 434 */ 479 - zone->wp += nr_sectors; 480 - if (zone->wp == zone_end) 481 - zone->cond = BLK_ZONE_COND_FULL; 435 + if (!is_append || !zlo->ordered_zone_append) { 436 + zone->wp += nr_sectors; 437 + if (zone->wp == zone_end) { 438 + zone->cond = BLK_ZONE_COND_FULL; 439 + zone->wp = ULLONG_MAX; 440 + } 441 + } 442 + 443 + spin_unlock_irqrestore(&zone->wp_lock, flags); 482 444 } 483 445 484 446 rq_for_each_bvec(tmp, rq, rq_iter) ··· 547 497 { 548 498 struct request *rq = blk_mq_rq_from_pdu(cmd); 549 499 struct zloop_device *zlo = rq->q->queuedata; 500 + 501 + /* We can block in this context, so ignore REQ_NOWAIT. 
*/ 502 + if (rq->cmd_flags & REQ_NOWAIT) 503 + rq->cmd_flags &= ~REQ_NOWAIT; 550 504 551 505 switch (req_op(rq)) { 552 506 case REQ_OP_READ: ··· 662 608 blk_mq_end_request(rq, sts); 663 609 } 664 610 611 + static bool zloop_set_zone_append_sector(struct request *rq) 612 + { 613 + struct zloop_device *zlo = rq->q->queuedata; 614 + unsigned int zone_no = rq_zone_no(rq); 615 + struct zloop_zone *zone = &zlo->zones[zone_no]; 616 + sector_t zone_end = zone->start + zlo->zone_capacity; 617 + sector_t nr_sectors = blk_rq_sectors(rq); 618 + unsigned long flags; 619 + 620 + spin_lock_irqsave(&zone->wp_lock, flags); 621 + 622 + if (zone->cond == BLK_ZONE_COND_FULL || 623 + zone->wp + nr_sectors > zone_end) { 624 + spin_unlock_irqrestore(&zone->wp_lock, flags); 625 + return false; 626 + } 627 + 628 + rq->__sector = zone->wp; 629 + zone->wp += blk_rq_sectors(rq); 630 + if (zone->wp >= zone_end) { 631 + zone->cond = BLK_ZONE_COND_FULL; 632 + zone->wp = ULLONG_MAX; 633 + } 634 + 635 + spin_unlock_irqrestore(&zone->wp_lock, flags); 636 + 637 + return true; 638 + } 639 + 665 640 static blk_status_t zloop_queue_rq(struct blk_mq_hw_ctx *hctx, 666 641 const struct blk_mq_queue_data *bd) 667 642 { ··· 700 617 701 618 if (zlo->state == Zlo_deleting) 702 619 return BLK_STS_IOERR; 620 + 621 + /* 622 + * If we need to strongly order zone append operations, set the request 623 + * sector to the zone write pointer location now instead of when the 624 + * command work runs. 625 + */ 626 + if (zlo->ordered_zone_append && req_op(rq) == REQ_OP_ZONE_APPEND) { 627 + if (!zloop_set_zone_append_sector(rq)) 628 + return BLK_STS_IOERR; 629 + } 703 630 704 631 blk_mq_start_request(rq); 705 632 ··· 740 647 } 741 648 742 649 static int zloop_report_zones(struct gendisk *disk, sector_t sector, 743 - unsigned int nr_zones, report_zones_cb cb, void *data) 650 + unsigned int nr_zones, struct blk_report_zones_args *args) 744 651 { 745 652 struct zloop_device *zlo = disk->private_data; 746 653 struct blk_zone blkz = {}; 747 654 unsigned int first, i; 655 + unsigned long flags; 748 656 int ret; 749 657 750 658 first = disk_zone_no(disk, sector); ··· 769 675 770 676 blkz.start = zone->start; 771 677 blkz.len = zlo->zone_size; 678 + spin_lock_irqsave(&zone->wp_lock, flags); 772 679 blkz.wp = zone->wp; 680 + spin_unlock_irqrestore(&zone->wp_lock, flags); 773 681 blkz.cond = zone->cond; 774 682 if (test_bit(ZLOOP_ZONE_CONV, &zone->flags)) { 775 683 blkz.type = BLK_ZONE_TYPE_CONVENTIONAL; ··· 783 687 784 688 mutex_unlock(&zone->lock); 785 689 786 - ret = cb(&blkz, i, data); 690 + ret = disk_report_zone(disk, &blkz, i, args); 787 691 if (ret) 788 692 return ret; 789 693 } ··· 879 783 int ret; 880 784 881 785 mutex_init(&zone->lock); 786 + spin_lock_init(&zone->wp_lock); 882 787 zone->start = (sector_t)zone_no << zlo->zone_shift; 883 788 884 789 if (!restore) ··· 981 884 { 982 885 struct queue_limits lim = { 983 886 .max_hw_sectors = SZ_1M >> SECTOR_SHIFT, 984 - .max_hw_zone_append_sectors = SZ_1M >> SECTOR_SHIFT, 985 887 .chunk_sectors = opts->zone_size, 986 888 .features = BLK_FEAT_ZONED, 987 889 }; ··· 1032 936 zlo->nr_zones = nr_zones; 1033 937 zlo->nr_conv_zones = opts->nr_conv_zones; 1034 938 zlo->buffered_io = opts->buffered_io; 939 + zlo->zone_append = opts->zone_append; 940 + if (zlo->zone_append) 941 + zlo->ordered_zone_append = opts->ordered_zone_append; 1035 942 1036 943 zlo->workqueue = alloc_workqueue("zloop%d", WQ_UNBOUND | WQ_FREEZABLE, 1037 944 opts->nr_queues * opts->queue_depth, zlo->id); ··· 1075 976 1076 977 
lim.physical_block_size = zlo->block_size; 1077 978 lim.logical_block_size = zlo->block_size; 979 + if (zlo->zone_append) 980 + lim.max_hw_zone_append_sectors = lim.max_hw_sectors; 1078 981 1079 982 zlo->tag_set.ops = &zloop_mq_ops; 1080 983 zlo->tag_set.nr_hw_queues = opts->nr_queues; ··· 1117 1016 zlo->state = Zlo_live; 1118 1017 mutex_unlock(&zloop_ctl_mutex); 1119 1018 1120 - pr_info("Added device %d: %u zones of %llu MB, %u B block size\n", 1019 + pr_info("zloop: device %d, %u zones of %llu MiB, %u B block size\n", 1121 1020 zlo->id, zlo->nr_zones, 1122 1021 ((sector_t)zlo->zone_size << SECTOR_SHIFT) >> 20, 1123 1022 zlo->block_size); 1023 + pr_info("zloop%d: using %s%s zone append\n", 1024 + zlo->id, 1025 + zlo->ordered_zone_append ? "ordered " : "", 1026 + zlo->zone_append ? "native" : "emulated"); 1124 1027 1125 1028 return 0; 1126 1029 ··· 1211 1106 opts->nr_queues = ZLOOP_DEF_NR_QUEUES; 1212 1107 opts->queue_depth = ZLOOP_DEF_QUEUE_DEPTH; 1213 1108 opts->buffered_io = ZLOOP_DEF_BUFFERED_IO; 1109 + opts->zone_append = ZLOOP_DEF_ZONE_APPEND; 1110 + opts->ordered_zone_append = ZLOOP_DEF_ORDERED_ZONE_APPEND; 1214 1111 1215 1112 if (!buf) 1216 1113 return 0; ··· 1321 1214 break; 1322 1215 case ZLOOP_OPT_BUFFERED_IO: 1323 1216 opts->buffered_io = true; 1217 + break; 1218 + case ZLOOP_OPT_ZONE_APPEND: 1219 + if (match_uint(args, &token)) { 1220 + ret = -EINVAL; 1221 + goto out; 1222 + } 1223 + if (token != 0 && token != 1) { 1224 + pr_err("Invalid zone_append value\n"); 1225 + ret = -EINVAL; 1226 + goto out; 1227 + } 1228 + opts->zone_append = token; 1229 + break; 1230 + case ZLOOP_OPT_ORDERED_ZONE_APPEND: 1231 + opts->ordered_zone_append = true; 1324 1232 break; 1325 1233 case ZLOOP_OPT_ERR: 1326 1234 default:
+8 -17
drivers/md/bcache/alloc.c
··· 24 24 * Since the gens and priorities are all stored contiguously on disk, we can 25 25 * batch this up: We fill up the free_inc list with freshly invalidated buckets, 26 26 * call prio_write(), and when prio_write() finishes we pull buckets off the 27 - * free_inc list and optionally discard them. 27 + * free_inc list. 28 28 * 29 29 * free_inc isn't the only freelist - if it was, we'd often to sleep while 30 30 * priorities and gens were being written before we could allocate. c->free is a 31 31 * smaller freelist, and buckets on that list are always ready to be used. 32 - * 33 - * If we've got discards enabled, that happens when a bucket moves from the 34 - * free_inc list to the free list. 35 32 * 36 33 * There is another freelist, because sometimes we have buckets that we know 37 34 * have nothing pointing into them - these we can reuse without waiting for 38 35 * priorities to be rewritten. These come from freed btree nodes and buckets 39 36 * that garbage collection discovered no longer had valid keys pointing into 40 37 * them (because they were overwritten). That's the unused list - buckets on the 41 - * unused list move to the free list, optionally being discarded in the process. 38 + * unused list move to the free list. 42 39 * 43 40 * It's also important to ensure that gens don't wrap around - with respect to 44 41 * either the oldest gen in the btree or the gen on disk. This is quite ··· 115 118 /* 116 119 * Background allocation thread: scans for buckets to be invalidated, 117 120 * invalidates them, rewrites prios/gens (marking them as invalidated on disk), 118 - * then optionally issues discard commands to the newly free buckets, then puts 119 - * them on the various freelists. 121 + * then puts them on the various freelists. 120 122 */ 121 123 122 124 static inline bool can_inc_bucket_gen(struct bucket *b) ··· 317 321 while (1) { 318 322 /* 319 323 * First, we pull buckets off of the unused and free_inc lists, 320 - * possibly issue discards to them, then we add the bucket to 321 - * the free list: 324 + * then we add the bucket to the free list: 322 325 */ 323 326 while (1) { 324 327 long bucket; 325 328 326 329 if (!fifo_pop(&ca->free_inc, bucket)) 327 330 break; 328 - 329 - if (ca->discard) { 330 - mutex_unlock(&ca->set->bucket_lock); 331 - blkdev_issue_discard(ca->bdev, 332 - bucket_to_sector(ca->set, bucket), 333 - ca->sb.bucket_size, GFP_KERNEL); 334 - mutex_lock(&ca->set->bucket_lock); 335 - } 336 331 337 332 allocator_wait(ca, bch_allocator_push(ca, bucket)); 338 333 wake_up(&ca->set->btree_cache_wait); ··· 399 412 TASK_UNINTERRUPTIBLE); 400 413 401 414 mutex_unlock(&ca->set->bucket_lock); 415 + 416 + atomic_inc(&ca->set->bucket_wait_cnt); 402 417 schedule(); 418 + atomic_dec(&ca->set->bucket_wait_cnt); 419 + 403 420 mutex_lock(&ca->set->bucket_lock); 404 421 } while (!fifo_pop(&ca->free[RESERVE_NONE], r) && 405 422 !fifo_pop(&ca->free[reserve], r));
+2 -4
drivers/md/bcache/bcache.h
··· 447 447 * free_inc: Incoming buckets - these are buckets that currently have 448 448 * cached data in them, and we can't reuse them until after we write 449 449 * their new gen to disk. After prio_write() finishes writing the new 450 - * gens/prios, they'll be moved to the free list (and possibly discarded 451 - * in the process) 450 + * gens/prios, they'll be moved to the free list. 452 451 */ 453 452 DECLARE_FIFO(long, free)[RESERVE_NR]; 454 453 DECLARE_FIFO(long, free_inc); ··· 465 466 * cpu 466 467 */ 467 468 unsigned int invalidate_needs_gc; 468 - 469 - bool discard; /* Get rid of? */ 470 469 471 470 struct journal_device journal; 472 471 ··· 604 607 */ 605 608 atomic_t prio_blocked; 606 609 wait_queue_head_t bucket_wait; 610 + atomic_t bucket_wait_cnt; 607 611 608 612 /* 609 613 * For any bio we don't skip we subtract the number of sectors from
+6 -2
drivers/md/bcache/bset.h
··· 327 327 /* Fixed-size btree_iter that can be allocated on the stack */ 328 328 329 329 struct btree_iter_stack { 330 - struct btree_iter iter; 331 - struct btree_iter_set stack_data[MAX_BSETS]; 330 + /* Must be last as it ends in a flexible-array member. */ 331 + TRAILING_OVERLAP(struct btree_iter, iter, data, 332 + struct btree_iter_set stack_data[MAX_BSETS]; 333 + ); 332 334 }; 335 + static_assert(offsetof(struct btree_iter_stack, iter.data) == 336 + offsetof(struct btree_iter_stack, stack_data)); 333 337 334 338 typedef bool (*ptr_filter_fn)(struct btree_keys *b, const struct bkey *k); 335 339
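Editor's note: the bset.h hunk replaces a layout that relied on a fixed array placed directly after a struct ending in a flexible-array member with TRAILING_OVERLAP(), which overlays the backing storage explicitly, and adds a static_assert that the two really line up. A minimal userspace sketch of the same idea follows; the names are made up for the example, and it relies on the GNU extension that permits a struct with a flexible-array member inside a union, which the kernel macro also depends on:

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* A struct that ends in a flexible-array member cannot be embedded in
 * another struct directly, so the stack variant overlays fixed storage
 * on top of the flexible member.
 */
struct iter {
        size_t nr;
        int data[];                     /* flexible-array member */
};

struct iter_stack {
        union {
                struct iter iter;
                struct {
                        unsigned char pad[offsetof(struct iter, data)];
                        int stack_data[8];      /* fixed backing storage */
                };
        };
};

/* The overlay is only valid if the backing array starts exactly where
 * the flexible member does.
 */
static_assert(offsetof(struct iter_stack, iter.data) ==
              offsetof(struct iter_stack, stack_data),
              "stack_data must line up with iter.data");

int main(void)
{
        struct iter_stack s = { .iter.nr = 1 };

        s.iter.data[0] = 42;            /* lands in stack_data[0] */
        printf("%d\n", s.stack_data[0]);
        return 0;
}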
+27 -26
drivers/md/bcache/btree.c
··· 89 89 * Test module load/unload 90 90 */ 91 91 92 - #define MAX_GC_TIMES 100 93 - #define MIN_GC_NODES 100 92 + #define MAX_GC_TIMES_SHIFT 7 /* 128 loops */ 93 + #define GC_NODES_MIN 10 94 + #define GC_SLEEP_MS_MIN 10 94 95 #define GC_SLEEP_MS 100 95 96 96 97 #define PTR_DIRTY_BIT (((uint64_t) 1 << 36)) ··· 372 371 SET_PTR_OFFSET(&k.key, 0, PTR_OFFSET(&k.key, 0) + 373 372 bset_sector_offset(&b->keys, i)); 374 373 375 - if (!bch_bio_alloc_pages(b->bio, __GFP_NOWARN|GFP_NOWAIT)) { 374 + if (!bch_bio_alloc_pages(b->bio, GFP_NOWAIT)) { 376 375 struct bio_vec *bv; 377 376 void *addr = (void *) ((unsigned long) i & ~(PAGE_SIZE - 1)); 378 377 struct bvec_iter_all iter_all; ··· 1579 1578 1580 1579 static size_t btree_gc_min_nodes(struct cache_set *c) 1581 1580 { 1582 - size_t min_nodes; 1581 + size_t min_nodes = GC_NODES_MIN; 1583 1582 1584 - /* 1585 - * Since incremental GC would stop 100ms when front 1586 - * side I/O comes, so when there are many btree nodes, 1587 - * if GC only processes constant (100) nodes each time, 1588 - * GC would last a long time, and the front side I/Os 1589 - * would run out of the buckets (since no new bucket 1590 - * can be allocated during GC), and be blocked again. 1591 - * So GC should not process constant nodes, but varied 1592 - * nodes according to the number of btree nodes, which 1593 - * realized by dividing GC into constant(100) times, 1594 - * so when there are many btree nodes, GC can process 1595 - * more nodes each time, otherwise, GC will process less 1596 - * nodes each time (but no less than MIN_GC_NODES) 1597 - */ 1598 - min_nodes = c->gc_stats.nodes / MAX_GC_TIMES; 1599 - if (min_nodes < MIN_GC_NODES) 1600 - min_nodes = MIN_GC_NODES; 1583 + if (atomic_read(&c->search_inflight) == 0) { 1584 + size_t n = c->gc_stats.nodes >> MAX_GC_TIMES_SHIFT; 1585 + 1586 + if (min_nodes < n) 1587 + min_nodes = n; 1588 + } 1601 1589 1602 1590 return min_nodes; 1603 1591 } 1604 1592 1593 + static uint64_t btree_gc_sleep_ms(struct cache_set *c) 1594 + { 1595 + uint64_t sleep_ms; 1596 + 1597 + if (atomic_read(&c->bucket_wait_cnt) > 0) 1598 + sleep_ms = GC_SLEEP_MS_MIN; 1599 + else 1600 + sleep_ms = GC_SLEEP_MS; 1601 + 1602 + return sleep_ms; 1603 + } 1605 1604 1606 1605 static int btree_gc_recurse(struct btree *b, struct btree_op *op, 1607 1606 struct closure *writes, struct gc_stat *gc) ··· 1669 1668 memmove(r + 1, r, sizeof(r[0]) * (GC_MERGE_NODES - 1)); 1670 1669 r->b = NULL; 1671 1670 1672 - if (atomic_read(&b->c->search_inflight) && 1673 - gc->nodes >= gc->nodes_pre + btree_gc_min_nodes(b->c)) { 1671 + if (gc->nodes >= (gc->nodes_pre + btree_gc_min_nodes(b->c))) { 1674 1672 gc->nodes_pre = gc->nodes; 1675 1673 ret = -EAGAIN; 1676 1674 break; ··· 1846 1846 cond_resched(); 1847 1847 1848 1848 if (ret == -EAGAIN) 1849 - schedule_timeout_interruptible(msecs_to_jiffies 1850 - (GC_SLEEP_MS)); 1849 + schedule_timeout_interruptible( 1850 + msecs_to_jiffies(btree_gc_sleep_ms(c))); 1851 1851 else if (ret) 1852 1852 pr_warn("gc failed!\n"); 1853 1853 } while (ret && !test_bit(CACHE_SET_IO_DISABLE, &c->flags)); ··· 2822 2822 2823 2823 int __init bch_btree_init(void) 2824 2824 { 2825 - btree_io_wq = alloc_workqueue("bch_btree_io", WQ_MEM_RECLAIM, 0); 2825 + btree_io_wq = alloc_workqueue("bch_btree_io", 2826 + WQ_MEM_RECLAIM | WQ_PERCPU, 0); 2826 2827 if (!btree_io_wq) 2827 2828 return -ENOMEM; 2828 2829
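Editor's note: taken together, the alloc.c and btree.c hunks change how incremental GC paces itself. The allocator now counts waiters blocked on free buckets (bucket_wait_cnt), GC batches more nodes per round only while no front-end searches are in flight, and it sleeps for a shorter interval between rounds whenever allocators are starved. A small standalone sketch of the resulting heuristics, with example numbers that are not from the patch:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_GC_TIMES_SHIFT      7       /* aim for roughly 128 GC rounds */
#define GC_NODES_MIN            10
#define GC_SLEEP_MS_MIN         10
#define GC_SLEEP_MS             100

/* Nodes to process in one GC round: scale with the tree size, but only
 * batch aggressively while no front-end searches are in flight.
 */
static size_t gc_min_nodes(size_t total_nodes, int searches_inflight)
{
        size_t min_nodes = GC_NODES_MIN;

        if (searches_inflight == 0) {
                size_t n = total_nodes >> MAX_GC_TIMES_SHIFT;

                if (min_nodes < n)
                        min_nodes = n;
        }
        return min_nodes;
}

/* Sleep between rounds: back off less if allocators are waiting for buckets. */
static uint64_t gc_sleep_ms(int bucket_waiters)
{
        return bucket_waiters > 0 ? GC_SLEEP_MS_MIN : GC_SLEEP_MS;
}

int main(void)
{
        printf("idle system: %zu nodes/round, %llu ms sleep\n",
               gc_min_nodes(100000, 0), (unsigned long long)gc_sleep_ms(0));
        printf("busy system: %zu nodes/round, %llu ms sleep\n",
               gc_min_nodes(100000, 8), (unsigned long long)gc_sleep_ms(3));
        return 0;
}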
+8 -85
drivers/md/bcache/journal.c
··· 275 275 * ja->cur_idx 276 276 */ 277 277 ja->cur_idx = i; 278 - ja->last_idx = ja->discard_idx = (i + 1) % 279 - ca->sb.njournal_buckets; 278 + ja->last_idx = (i + 1) % ca->sb.njournal_buckets; 280 279 281 280 } 282 281 ··· 335 336 } 336 337 } 337 338 338 - static bool is_discard_enabled(struct cache_set *s) 339 - { 340 - struct cache *ca = s->cache; 341 - 342 - if (ca->discard) 343 - return true; 344 - 345 - return false; 346 - } 347 - 348 339 int bch_journal_replay(struct cache_set *s, struct list_head *list) 349 340 { 350 341 int ret = 0, keys = 0, entries = 0; ··· 349 360 BUG_ON(i->pin && atomic_read(i->pin) != 1); 350 361 351 362 if (n != i->j.seq) { 352 - if (n == start && is_discard_enabled(s)) 353 - pr_info("journal entries %llu-%llu may be discarded! (replaying %llu-%llu)\n", 354 - n, i->j.seq - 1, start, end); 355 - else { 356 - pr_err("journal entries %llu-%llu missing! (replaying %llu-%llu)\n", 357 - n, i->j.seq - 1, start, end); 358 - ret = -EIO; 359 - goto err; 360 - } 363 + pr_err("journal entries %llu-%llu missing! (replaying %llu-%llu)\n", 364 + n, i->j.seq - 1, start, end); 365 + ret = -EIO; 366 + goto err; 361 367 } 362 368 363 369 for (k = i->j.start; ··· 552 568 553 569 #define last_seq(j) ((j)->seq - fifo_used(&(j)->pin) + 1) 554 570 555 - static void journal_discard_endio(struct bio *bio) 556 - { 557 - struct journal_device *ja = 558 - container_of(bio, struct journal_device, discard_bio); 559 - struct cache *ca = container_of(ja, struct cache, journal); 560 - 561 - atomic_set(&ja->discard_in_flight, DISCARD_DONE); 562 - 563 - closure_wake_up(&ca->set->journal.wait); 564 - closure_put(&ca->set->cl); 565 - } 566 - 567 - static void journal_discard_work(struct work_struct *work) 568 - { 569 - struct journal_device *ja = 570 - container_of(work, struct journal_device, discard_work); 571 - 572 - submit_bio(&ja->discard_bio); 573 - } 574 - 575 - static void do_journal_discard(struct cache *ca) 576 - { 577 - struct journal_device *ja = &ca->journal; 578 - struct bio *bio = &ja->discard_bio; 579 - 580 - if (!ca->discard) { 581 - ja->discard_idx = ja->last_idx; 582 - return; 583 - } 584 - 585 - switch (atomic_read(&ja->discard_in_flight)) { 586 - case DISCARD_IN_FLIGHT: 587 - return; 588 - 589 - case DISCARD_DONE: 590 - ja->discard_idx = (ja->discard_idx + 1) % 591 - ca->sb.njournal_buckets; 592 - 593 - atomic_set(&ja->discard_in_flight, DISCARD_READY); 594 - fallthrough; 595 - 596 - case DISCARD_READY: 597 - if (ja->discard_idx == ja->last_idx) 598 - return; 599 - 600 - atomic_set(&ja->discard_in_flight, DISCARD_IN_FLIGHT); 601 - 602 - bio_init_inline(bio, ca->bdev, 1, REQ_OP_DISCARD); 603 - bio->bi_iter.bi_sector = bucket_to_sector(ca->set, 604 - ca->sb.d[ja->discard_idx]); 605 - bio->bi_iter.bi_size = bucket_bytes(ca); 606 - bio->bi_end_io = journal_discard_endio; 607 - 608 - closure_get(&ca->set->cl); 609 - INIT_WORK(&ja->discard_work, journal_discard_work); 610 - queue_work(bch_journal_wq, &ja->discard_work); 611 - } 612 - } 613 - 614 571 static unsigned int free_journal_buckets(struct cache_set *c) 615 572 { 616 573 struct journal *j = &c->journal; ··· 560 635 unsigned int n; 561 636 562 637 /* In case njournal_buckets is not power of 2 */ 563 - if (ja->cur_idx >= ja->discard_idx) 564 - n = ca->sb.njournal_buckets + ja->discard_idx - ja->cur_idx; 638 + if (ja->cur_idx >= ja->last_idx) 639 + n = ca->sb.njournal_buckets + ja->last_idx - ja->cur_idx; 565 640 else 566 - n = ja->discard_idx - ja->cur_idx; 641 + n = ja->last_idx - ja->cur_idx; 567 642 568 643 if (n > (1 
+ j->do_reserve)) 569 644 return n - (1 + j->do_reserve); ··· 592 667 ja->seq[ja->last_idx] < last_seq) 593 668 ja->last_idx = (ja->last_idx + 1) % 594 669 ca->sb.njournal_buckets; 595 - 596 - do_journal_discard(ca); 597 670 598 671 if (c->journal.blocks_free) 599 672 goto out;
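Editor's note: with the journal discard machinery removed, free_journal_buckets() above counts the buckets still available between the current write position (cur_idx) and the oldest bucket that still holds live entries (last_idx) on a ring of njournal_buckets slots, always holding back one slot plus an optional reserve. A small standalone sketch of that wrap-around calculation, with example values:

#include <stdio.h>

/* Free slots on a ring that is not necessarily a power of two in size,
 * minus one slot plus an optional reserve that is always kept back.
 */
static unsigned int free_buckets(unsigned int nbuckets, unsigned int cur_idx,
                                 unsigned int last_idx, unsigned int do_reserve)
{
        unsigned int n;

        if (cur_idx >= last_idx)
                n = nbuckets + last_idx - cur_idx;
        else
                n = last_idx - cur_idx;

        if (n > 1 + do_reserve)
                return n - (1 + do_reserve);
        return 0;
}

int main(void)
{
        printf("%u\n", free_buckets(8, 6, 2, 0));       /* 4 slots -> 3 usable */
        printf("%u\n", free_buckets(8, 2, 6, 1));       /* 4 slots -> 2 usable */
        return 0;
}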
-13
drivers/md/bcache/journal.h
··· 139 139 /* Last journal bucket that still contains an open journal entry */ 140 140 unsigned int last_idx; 141 141 142 - /* Next journal bucket to be discarded */ 143 - unsigned int discard_idx; 144 - 145 - #define DISCARD_READY 0 146 - #define DISCARD_IN_FLIGHT 1 147 - #define DISCARD_DONE 2 148 - /* 1 - discard in flight, -1 - discard completed */ 149 - atomic_t discard_in_flight; 150 - 151 - struct work_struct discard_work; 152 - struct bio discard_bio; 153 - struct bio_vec discard_bv; 154 - 155 142 /* Bio for journal reads/writes to this device */ 156 143 struct bio bio; 157 144 struct bio_vec bv[8];
+16 -17
drivers/md/bcache/super.c
··· 1388 1388 bch_cache_accounting_destroy(&dc->accounting); 1389 1389 kobject_del(&d->kobj); 1390 1390 1391 - continue_at(cl, cached_dev_free, system_wq); 1391 + continue_at(cl, cached_dev_free, system_percpu_wq); 1392 1392 } 1393 1393 1394 1394 static int cached_dev_init(struct cached_dev *dc, unsigned int block_size) ··· 1400 1400 __module_get(THIS_MODULE); 1401 1401 INIT_LIST_HEAD(&dc->list); 1402 1402 closure_init(&dc->disk.cl, NULL); 1403 - set_closure_fn(&dc->disk.cl, cached_dev_flush, system_wq); 1403 + set_closure_fn(&dc->disk.cl, cached_dev_flush, system_percpu_wq); 1404 1404 kobject_init(&dc->disk.kobj, &bch_cached_dev_ktype); 1405 1405 INIT_WORK(&dc->detach, cached_dev_detach_finish); 1406 1406 sema_init(&dc->sb_write_mutex, 1); ··· 1513 1513 bcache_device_unlink(d); 1514 1514 mutex_unlock(&bch_register_lock); 1515 1515 kobject_del(&d->kobj); 1516 - continue_at(cl, flash_dev_free, system_wq); 1516 + continue_at(cl, flash_dev_free, system_percpu_wq); 1517 1517 } 1518 1518 1519 1519 static int flash_dev_run(struct cache_set *c, struct uuid_entry *u) ··· 1525 1525 goto err_ret; 1526 1526 1527 1527 closure_init(&d->cl, NULL); 1528 - set_closure_fn(&d->cl, flash_dev_flush, system_wq); 1528 + set_closure_fn(&d->cl, flash_dev_flush, system_percpu_wq); 1529 1529 1530 1530 kobject_init(&d->kobj, &bch_flash_dev_ktype); 1531 1531 ··· 1833 1833 1834 1834 mutex_unlock(&bch_register_lock); 1835 1835 1836 - continue_at(cl, cache_set_flush, system_wq); 1836 + continue_at(cl, cache_set_flush, system_percpu_wq); 1837 1837 } 1838 1838 1839 1839 void bch_cache_set_stop(struct cache_set *c) ··· 1863 1863 1864 1864 __module_get(THIS_MODULE); 1865 1865 closure_init(&c->cl, NULL); 1866 - set_closure_fn(&c->cl, cache_set_free, system_wq); 1866 + set_closure_fn(&c->cl, cache_set_free, system_percpu_wq); 1867 1867 1868 1868 closure_init(&c->caching, &c->cl); 1869 - set_closure_fn(&c->caching, __cache_set_unregister, system_wq); 1869 + set_closure_fn(&c->caching, __cache_set_unregister, system_percpu_wq); 1870 1870 1871 1871 /* Maybe create continue_at_noreturn() and use it here? */ 1872 1872 closure_set_stopped(&c->cl); ··· 1939 1939 if (!c->uuids) 1940 1940 goto err; 1941 1941 1942 - c->moving_gc_wq = alloc_workqueue("bcache_gc", WQ_MEM_RECLAIM, 0); 1942 + c->moving_gc_wq = alloc_workqueue("bcache_gc", 1943 + WQ_MEM_RECLAIM | WQ_PERCPU, 0); 1943 1944 if (!c->moving_gc_wq) 1944 1945 goto err; 1945 1946 ··· 2383 2382 ca->bdev = file_bdev(bdev_file); 2384 2383 ca->sb_disk = sb_disk; 2385 2384 2386 - if (bdev_max_discard_sectors(file_bdev(bdev_file))) 2387 - ca->discard = CACHE_DISCARD(&ca->sb); 2388 - 2389 2385 ret = cache_alloc(ca); 2390 2386 if (ret != 0) { 2391 2387 if (ret == -ENOMEM) ··· 2529 2531 INIT_DELAYED_WORK(&args->reg_work, register_cache_worker); 2530 2532 2531 2533 /* 10 jiffies is enough for a delay */ 2532 - queue_delayed_work(system_wq, &args->reg_work, 10); 2534 + queue_delayed_work(system_percpu_wq, &args->reg_work, 10); 2533 2535 } 2534 2536 2535 2537 static void *alloc_holder_object(struct cache_sb *sb) ··· 2903 2905 if (bch_btree_init()) 2904 2906 goto err; 2905 2907 2906 - bcache_wq = alloc_workqueue("bcache", WQ_MEM_RECLAIM, 0); 2908 + bcache_wq = alloc_workqueue("bcache", WQ_MEM_RECLAIM | WQ_PERCPU, 0); 2907 2909 if (!bcache_wq) 2908 2910 goto err; 2909 2911 2910 2912 /* 2911 2913 * Let's not make this `WQ_MEM_RECLAIM` for the following reasons: 2912 2914 * 2913 - * 1. It used `system_wq` before which also does no memory reclaim. 2915 + * 1. 
It used `system_percpu_wq` before which also does no memory reclaim. 2914 2916 * 2. With `WQ_MEM_RECLAIM` desktop stalls, increased boot times, and 2915 2917 * reduced throughput can be observed. 2916 2918 * 2917 - * We still want to user our own queue to not congest the `system_wq`. 2919 + * We still want to user our own queue to not congest the `system_percpu_wq`. 2918 2920 */ 2919 - bch_flush_wq = alloc_workqueue("bch_flush", 0, 0); 2921 + bch_flush_wq = alloc_workqueue("bch_flush", WQ_PERCPU, 0); 2920 2922 if (!bch_flush_wq) 2921 2923 goto err; 2922 2924 2923 - bch_journal_wq = alloc_workqueue("bch_journal", WQ_MEM_RECLAIM, 0); 2925 + bch_journal_wq = alloc_workqueue("bch_journal", 2926 + WQ_MEM_RECLAIM | WQ_PERCPU, 0); 2924 2927 if (!bch_journal_wq) 2925 2928 goto err; 2926 2929
-15
drivers/md/bcache/sysfs.c
··· 134 134 rw_attribute(synchronous); 135 135 rw_attribute(journal_delay_ms); 136 136 rw_attribute(io_disable); 137 - rw_attribute(discard); 138 137 rw_attribute(running); 139 138 rw_attribute(label); 140 139 rw_attribute(errors); ··· 1035 1036 sysfs_hprint(bucket_size, bucket_bytes(ca)); 1036 1037 sysfs_hprint(block_size, block_bytes(ca)); 1037 1038 sysfs_print(nbuckets, ca->sb.nbuckets); 1038 - sysfs_print(discard, ca->discard); 1039 1039 sysfs_hprint(written, atomic_long_read(&ca->sectors_written) << 9); 1040 1040 sysfs_hprint(btree_written, 1041 1041 atomic_long_read(&ca->btree_sectors_written) << 9); ··· 1140 1142 if (bcache_is_reboot) 1141 1143 return -EBUSY; 1142 1144 1143 - if (attr == &sysfs_discard) { 1144 - bool v = strtoul_or_return(buf); 1145 - 1146 - if (bdev_max_discard_sectors(ca->bdev)) 1147 - ca->discard = v; 1148 - 1149 - if (v != CACHE_DISCARD(&ca->sb)) { 1150 - SET_CACHE_DISCARD(&ca->sb, v); 1151 - bcache_write_super(ca->set); 1152 - } 1153 - } 1154 - 1155 1145 if (attr == &sysfs_cache_replacement_policy) { 1156 1146 v = __sysfs_match_string(cache_replacement_policies, -1, buf); 1157 1147 if (v < 0) ··· 1171 1185 &sysfs_block_size, 1172 1186 &sysfs_nbuckets, 1173 1187 &sysfs_priority_stats, 1174 - &sysfs_discard, 1175 1188 &sysfs_written, 1176 1189 &sysfs_btree_written, 1177 1190 &sysfs_metadata_written,
+2 -3
drivers/md/bcache/writeback.c
··· 805 805 * may set BCH_ENABLE_AUTO_GC via sysfs, then when 806 806 * BCH_DO_AUTO_GC is set, garbage collection thread 807 807 * will be wake up here. After moving gc, the shrunk 808 - * btree and discarded free buckets SSD space may be 809 - * helpful for following write requests. 808 + * btree may be helpful for following write requests. 810 809 */ 811 810 if (c->gc_after_writeback == 812 811 (BCH_ENABLE_AUTO_GC|BCH_DO_AUTO_GC)) { ··· 1075 1076 int bch_cached_dev_writeback_start(struct cached_dev *dc) 1076 1077 { 1077 1078 dc->writeback_write_wq = alloc_workqueue("bcache_writeback_wq", 1078 - WQ_MEM_RECLAIM, 0); 1079 + WQ_MEM_RECLAIM | WQ_PERCPU, 0); 1079 1080 if (!dc->writeback_write_wq) 1080 1081 return -ENOMEM; 1081 1082
+39 -24
drivers/md/dm-zone.c
··· 17 17 * For internal zone reports bypassing the top BIO submission path. 18 18 */ 19 19 static int dm_blk_do_report_zones(struct mapped_device *md, struct dm_table *t, 20 - sector_t sector, unsigned int nr_zones, 21 - report_zones_cb cb, void *data) 20 + unsigned int nr_zones, 21 + struct dm_report_zones_args *args) 22 22 { 23 - struct gendisk *disk = md->disk; 24 - int ret; 25 - struct dm_report_zones_args args = { 26 - .next_sector = sector, 27 - .orig_data = data, 28 - .orig_cb = cb, 29 - }; 30 - 31 23 do { 32 24 struct dm_target *tgt; 25 + int ret; 33 26 34 - tgt = dm_table_find_target(t, args.next_sector); 27 + tgt = dm_table_find_target(t, args->next_sector); 35 28 if (WARN_ON_ONCE(!tgt->type->report_zones)) 36 29 return -EIO; 37 30 38 - args.tgt = tgt; 39 - ret = tgt->type->report_zones(tgt, &args, 40 - nr_zones - args.zone_idx); 31 + args->tgt = tgt; 32 + ret = tgt->type->report_zones(tgt, args, 33 + nr_zones - args->zone_idx); 41 34 if (ret < 0) 42 35 return ret; 43 - } while (args.zone_idx < nr_zones && 44 - args.next_sector < get_capacity(disk)); 36 + } while (args->zone_idx < nr_zones && 37 + args->next_sector < get_capacity(md->disk)); 45 38 46 - return args.zone_idx; 39 + return args->zone_idx; 47 40 } 48 41 49 42 /* ··· 45 52 * generally implemented by targets using dm_report_zones(). 46 53 */ 47 54 int dm_blk_report_zones(struct gendisk *disk, sector_t sector, 48 - unsigned int nr_zones, report_zones_cb cb, void *data) 55 + unsigned int nr_zones, 56 + struct blk_report_zones_args *args) 49 57 { 50 58 struct mapped_device *md = disk->private_data; 51 59 struct dm_table *map; ··· 70 76 map = zone_revalidate_map; 71 77 } 72 78 73 - if (map) 74 - ret = dm_blk_do_report_zones(md, map, sector, nr_zones, cb, 75 - data); 79 + if (map) { 80 + struct dm_report_zones_args dm_args = { 81 + .disk = md->disk, 82 + .next_sector = sector, 83 + .rep_args = args, 84 + }; 85 + ret = dm_blk_do_report_zones(md, map, nr_zones, &dm_args); 86 + } 76 87 77 88 if (put_table) 78 89 dm_put_live_table(md, srcu_idx); ··· 112 113 } 113 114 114 115 args->next_sector = zone->start + zone->len; 115 - return args->orig_cb(zone, args->zone_idx++, args->orig_data); 116 + 117 + /* If we have an internal callback, call it first. */ 118 + if (args->cb) { 119 + int ret; 120 + 121 + ret = args->cb(zone, args->zone_idx, args->data); 122 + if (ret) 123 + return ret; 124 + } 125 + 126 + return disk_report_zone(args->disk, zone, args->zone_idx++, 127 + args->rep_args); 116 128 } 117 129 118 130 /* ··· 502 492 sector_t sector, unsigned int nr_zones, 503 493 unsigned long *need_reset) 504 494 { 495 + struct dm_report_zones_args args = { 496 + .disk = md->disk, 497 + .next_sector = sector, 498 + .cb = dm_zone_need_reset_cb, 499 + .data = need_reset, 500 + }; 505 501 int ret; 506 502 507 - ret = dm_blk_do_report_zones(md, t, sector, nr_zones, 508 - dm_zone_need_reset_cb, need_reset); 503 + ret = dm_blk_do_report_zones(md, t, nr_zones, &args); 509 504 if (ret != nr_zones) { 510 505 DMERR("Get %s zone reset bitmap failed\n", 511 506 md->disk->disk_name);
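Editor's note: the dm-zone.c rework threads one dm_report_zones_args through the target callbacks: an optional internal callback (used here for the zone-reset bitmap) may inspect each zone first and can stop the report, and the zone is then forwarded to the block layer through disk_report_zone(). A simplified, self-contained sketch of that forwarding shape, using stand-in types rather than the kernel API:

#include <stdio.h>

struct zone {
        unsigned long start;
        unsigned long len;
        int empty;
};

typedef int (*zone_cb)(const struct zone *z, unsigned int idx, void *data);

struct report_args {
        zone_cb cb;             /* optional internal consumer */
        void *data;
        unsigned int zone_idx;
};

/* Called once per zone: run the internal callback first (non-zero return
 * aborts the report), then forward the zone to the outer reporter -- here
 * a printf standing in for disk_report_zone().
 */
static int report_zone(struct report_args *args, const struct zone *z)
{
        if (args->cb) {
                int ret = args->cb(z, args->zone_idx, args->data);

                if (ret)
                        return ret;
        }
        printf("zone %u: start=%lu len=%lu\n", args->zone_idx, z->start, z->len);
        args->zone_idx++;
        return 0;
}

static int count_empty(const struct zone *z, unsigned int idx, void *data)
{
        (void)idx;
        if (z->empty)
                ++*(unsigned int *)data;
        return 0;
}

int main(void)
{
        struct zone zones[] = { { 0, 256, 0 }, { 256, 256, 1 } };
        unsigned int empties = 0;
        struct report_args args = { .cb = count_empty, .data = &empties };

        for (unsigned int i = 0; i < 2; i++)
                if (report_zone(&args, &zones[i]))
                        break;
        printf("empty zones: %u\n", empties);
        return 0;
}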
+2 -1
drivers/md/dm.h
··· 109 109 void dm_zone_endio(struct dm_io *io, struct bio *clone); 110 110 #ifdef CONFIG_BLK_DEV_ZONED 111 111 int dm_blk_report_zones(struct gendisk *disk, sector_t sector, 112 - unsigned int nr_zones, report_zones_cb cb, void *data); 112 + unsigned int nr_zones, 113 + struct blk_report_zones_args *args); 113 114 bool dm_is_zone_write(struct mapped_device *md, struct bio *bio); 114 115 int dm_zone_get_reset_bitmap(struct mapped_device *md, struct dm_table *t, 115 116 sector_t sector, unsigned int nr_zones,
+2
drivers/md/md-linear.c
··· 72 72 73 73 md_init_stacking_limits(&lim); 74 74 lim.max_hw_sectors = mddev->chunk_sectors; 75 + lim.logical_block_size = mddev->logical_block_size; 75 76 lim.max_write_zeroes_sectors = mddev->chunk_sectors; 76 77 lim.max_hw_wzeroes_unmap_sectors = mddev->chunk_sectors; 77 78 lim.io_min = mddev->chunk_sectors << 9; 79 + lim.features |= BLK_FEAT_ATOMIC_WRITES; 78 80 err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY); 79 81 if (err) 80 82 return err;
+1 -1
drivers/md/md-llbitmap.c
··· 378 378 case BitClean: 379 379 pctl->state[pos] = BitDirty; 380 380 break; 381 - }; 381 + } 382 382 } 383 383 } 384 384
+188 -71
drivers/md/md.c
··· 99 99 struct md_rdev *this); 100 100 static void mddev_detach(struct mddev *mddev); 101 101 static void export_rdev(struct md_rdev *rdev, struct mddev *mddev); 102 - static void md_wakeup_thread_directly(struct md_thread __rcu *thread); 102 + static void md_wakeup_thread_directly(struct md_thread __rcu **thread); 103 103 104 104 /* 105 105 * Default number of read corrections we'll attempt on an rdev ··· 339 339 */ 340 340 static bool create_on_open = true; 341 341 static bool legacy_async_del_gendisk = true; 342 + static bool check_new_feature = true; 342 343 343 344 /* 344 345 * We have a system wide 'event count' that is incremented ··· 731 730 732 731 int mddev_init(struct mddev *mddev) 733 732 { 733 + int err = 0; 734 + 734 735 if (!IS_ENABLED(CONFIG_MD_BITMAP)) 735 736 mddev->bitmap_id = ID_BITMAP_NONE; 736 737 else ··· 744 741 745 742 if (percpu_ref_init(&mddev->writes_pending, no_op, 746 743 PERCPU_REF_ALLOW_REINIT, GFP_KERNEL)) { 747 - percpu_ref_exit(&mddev->active_io); 748 - return -ENOMEM; 744 + err = -ENOMEM; 745 + goto exit_acitve_io; 749 746 } 747 + 748 + err = bioset_init(&mddev->bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS); 749 + if (err) 750 + goto exit_writes_pending; 751 + 752 + err = bioset_init(&mddev->sync_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS); 753 + if (err) 754 + goto exit_bio_set; 755 + 756 + err = bioset_init(&mddev->io_clone_set, BIO_POOL_SIZE, 757 + offsetof(struct md_io_clone, bio_clone), 0); 758 + if (err) 759 + goto exit_sync_set; 750 760 751 761 /* We want to start with the refcount at zero */ 752 762 percpu_ref_put(&mddev->writes_pending); ··· 789 773 INIT_WORK(&mddev->del_work, mddev_delayed_delete); 790 774 791 775 return 0; 776 + 777 + exit_sync_set: 778 + bioset_exit(&mddev->sync_set); 779 + exit_bio_set: 780 + bioset_exit(&mddev->bio_set); 781 + exit_writes_pending: 782 + percpu_ref_exit(&mddev->writes_pending); 783 + exit_acitve_io: 784 + percpu_ref_exit(&mddev->active_io); 785 + return err; 792 786 } 793 787 EXPORT_SYMBOL_GPL(mddev_init); 794 788 795 789 void mddev_destroy(struct mddev *mddev) 796 790 { 791 + bioset_exit(&mddev->bio_set); 792 + bioset_exit(&mddev->sync_set); 793 + bioset_exit(&mddev->io_clone_set); 797 794 percpu_ref_exit(&mddev->active_io); 798 795 percpu_ref_exit(&mddev->writes_pending); 799 796 } ··· 970 941 * do_md_stop. dm raid only uses md_stop to stop. 
So dm raid 971 942 * doesn't need to check MD_DELETED when getting reconfig lock 972 943 */ 973 - if (test_bit(MD_DELETED, &mddev->flags)) 944 + if (test_bit(MD_DELETED, &mddev->flags) && 945 + !test_and_set_bit(MD_DO_DELETE, &mddev->flags)) { 946 + kobject_del(&mddev->kobj); 974 947 del_gendisk(mddev->gendisk); 948 + } 975 949 } 976 950 } 977 951 EXPORT_SYMBOL_GPL(mddev_unlock); ··· 1852 1820 } 1853 1821 if (sb->pad0 || 1854 1822 sb->pad3[0] || 1855 - memcmp(sb->pad3, sb->pad3+1, sizeof(sb->pad3) - sizeof(sb->pad3[1]))) 1856 - /* Some padding is non-zero, might be a new feature */ 1857 - return -EINVAL; 1823 + memcmp(sb->pad3, sb->pad3+1, sizeof(sb->pad3) - sizeof(sb->pad3[1]))) { 1824 + pr_warn("Some padding is non-zero on %pg, might be a new feature\n", 1825 + rdev->bdev); 1826 + if (check_new_feature) 1827 + return -EINVAL; 1828 + pr_warn("check_new_feature is disabled, data corruption possible\n"); 1829 + } 1858 1830 1859 1831 rdev->preferred_minor = 0xffff; 1860 1832 rdev->data_offset = le64_to_cpu(sb->data_offset); ··· 1999 1963 mddev->layout = le32_to_cpu(sb->layout); 2000 1964 mddev->raid_disks = le32_to_cpu(sb->raid_disks); 2001 1965 mddev->dev_sectors = le64_to_cpu(sb->size); 1966 + mddev->logical_block_size = le32_to_cpu(sb->logical_block_size); 2002 1967 mddev->events = ev1; 2003 1968 mddev->bitmap_info.offset = 0; 2004 1969 mddev->bitmap_info.space = 0; ··· 2209 2172 sb->chunksize = cpu_to_le32(mddev->chunk_sectors); 2210 2173 sb->level = cpu_to_le32(mddev->level); 2211 2174 sb->layout = cpu_to_le32(mddev->layout); 2175 + sb->logical_block_size = cpu_to_le32(mddev->logical_block_size); 2212 2176 if (test_bit(FailFast, &rdev->flags)) 2213 2177 sb->devflags |= FailFast1; 2214 2178 else ··· 2788 2750 if (!md_is_rdwr(mddev)) { 2789 2751 if (force_change) 2790 2752 set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags); 2753 + pr_err("%s: can't update sb for read-only array %s\n", __func__, mdname(mddev)); 2791 2754 return; 2792 2755 } 2793 2756 ··· 5173 5134 * Thread might be blocked waiting for metadata update which will now 5174 5135 * never happen 5175 5136 */ 5176 - md_wakeup_thread_directly(mddev->sync_thread); 5137 + md_wakeup_thread_directly(&mddev->sync_thread); 5177 5138 if (work_pending(&mddev->sync_work)) 5178 5139 flush_work(&mddev->sync_work); 5179 5140 ··· 5939 5900 __ATTR(serialize_policy, S_IRUGO | S_IWUSR, serialize_policy_show, 5940 5901 serialize_policy_store); 5941 5902 5903 + static int mddev_set_logical_block_size(struct mddev *mddev, 5904 + unsigned int lbs) 5905 + { 5906 + int err = 0; 5907 + struct queue_limits lim; 5908 + 5909 + if (queue_logical_block_size(mddev->gendisk->queue) >= lbs) { 5910 + pr_err("%s: Cannot set LBS smaller than mddev LBS %u\n", 5911 + mdname(mddev), lbs); 5912 + return -EINVAL; 5913 + } 5914 + 5915 + lim = queue_limits_start_update(mddev->gendisk->queue); 5916 + lim.logical_block_size = lbs; 5917 + pr_info("%s: logical_block_size is changed, data may be lost\n", 5918 + mdname(mddev)); 5919 + err = queue_limits_commit_update(mddev->gendisk->queue, &lim); 5920 + if (err) 5921 + return err; 5922 + 5923 + mddev->logical_block_size = lbs; 5924 + /* New lbs will be written to superblock after array is running */ 5925 + set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags); 5926 + return 0; 5927 + } 5928 + 5929 + static ssize_t 5930 + lbs_show(struct mddev *mddev, char *page) 5931 + { 5932 + return sprintf(page, "%u\n", mddev->logical_block_size); 5933 + } 5934 + 5935 + static ssize_t 5936 + lbs_store(struct mddev *mddev, const char *buf, size_t len) 5937 
+ { 5938 + unsigned int lbs; 5939 + int err = -EBUSY; 5940 + 5941 + /* Only 1.x meta supports configurable LBS */ 5942 + if (mddev->major_version == 0) 5943 + return -EINVAL; 5944 + 5945 + if (mddev->pers) 5946 + return -EBUSY; 5947 + 5948 + err = kstrtouint(buf, 10, &lbs); 5949 + if (err < 0) 5950 + return -EINVAL; 5951 + 5952 + err = mddev_lock(mddev); 5953 + if (err) 5954 + goto unlock; 5955 + 5956 + err = mddev_set_logical_block_size(mddev, lbs); 5957 + 5958 + unlock: 5959 + mddev_unlock(mddev); 5960 + return err ?: len; 5961 + } 5962 + 5963 + static struct md_sysfs_entry md_logical_block_size = 5964 + __ATTR(logical_block_size, 0644, lbs_show, lbs_store); 5942 5965 5943 5966 static struct attribute *md_default_attrs[] = { 5944 5967 &md_level.attr, ··· 6023 5922 &md_consistency_policy.attr, 6024 5923 &md_fail_last_dev.attr, 6025 5924 &md_serialize_policy.attr, 5925 + &md_logical_block_size.attr, 6026 5926 NULL, 6027 5927 }; 6028 5928 ··· 6154 6052 return -EINVAL; 6155 6053 } 6156 6054 6055 + /* 6056 + * Before RAID adding folio support, the logical_block_size 6057 + * should be smaller than the page size. 6058 + */ 6059 + if (lim->logical_block_size > PAGE_SIZE) { 6060 + pr_err("%s: logical_block_size must not larger than PAGE_SIZE\n", 6061 + mdname(mddev)); 6062 + return -EINVAL; 6063 + } 6064 + mddev->logical_block_size = lim->logical_block_size; 6065 + 6157 6066 return 0; 6158 6067 } 6159 6068 EXPORT_SYMBOL_GPL(mddev_stack_rdev_limits); ··· 6176 6063 6177 6064 if (mddev_is_dm(mddev)) 6178 6065 return 0; 6066 + 6067 + if (queue_logical_block_size(rdev->bdev->bd_disk->queue) > 6068 + queue_logical_block_size(mddev->gendisk->queue)) { 6069 + pr_err("%s: incompatible logical_block_size, can not add\n", 6070 + mdname(mddev)); 6071 + return -EINVAL; 6072 + } 6179 6073 6180 6074 lim = queue_limits_start_update(mddev->gendisk->queue); 6181 6075 queue_limits_stack_bdev(&lim, rdev->bdev, rdev->data_offset, ··· 6504 6384 nowait = nowait && bdev_nowait(rdev->bdev); 6505 6385 } 6506 6386 6507 - if (!bioset_initialized(&mddev->bio_set)) { 6508 - err = bioset_init(&mddev->bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS); 6509 - if (err) 6510 - return err; 6511 - } 6512 - if (!bioset_initialized(&mddev->sync_set)) { 6513 - err = bioset_init(&mddev->sync_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS); 6514 - if (err) 6515 - goto exit_bio_set; 6516 - } 6517 - 6518 - if (!bioset_initialized(&mddev->io_clone_set)) { 6519 - err = bioset_init(&mddev->io_clone_set, BIO_POOL_SIZE, 6520 - offsetof(struct md_io_clone, bio_clone), 0); 6521 - if (err) 6522 - goto exit_sync_set; 6523 - } 6524 - 6525 6387 pers = get_pers(mddev->level, mddev->clevel); 6526 - if (!pers) { 6527 - err = -EINVAL; 6528 - goto abort; 6529 - } 6388 + if (!pers) 6389 + return -EINVAL; 6530 6390 if (mddev->level != pers->head.id) { 6531 6391 mddev->level = pers->head.id; 6532 6392 mddev->new_level = pers->head.id; ··· 6517 6417 pers->start_reshape == NULL) { 6518 6418 /* This personality cannot handle reshaping... 
*/ 6519 6419 put_pers(pers); 6520 - err = -EINVAL; 6521 - goto abort; 6420 + return -EINVAL; 6522 6421 } 6523 6422 6524 6423 if (pers->sync_request) { ··· 6644 6545 mddev->private = NULL; 6645 6546 put_pers(pers); 6646 6547 md_bitmap_destroy(mddev); 6647 - abort: 6648 - bioset_exit(&mddev->io_clone_set); 6649 - exit_sync_set: 6650 - bioset_exit(&mddev->sync_set); 6651 - exit_bio_set: 6652 - bioset_exit(&mddev->bio_set); 6653 6548 return err; 6654 6549 } 6655 6550 EXPORT_SYMBOL_GPL(md_run); ··· 6776 6683 mddev->chunk_sectors = 0; 6777 6684 mddev->ctime = mddev->utime = 0; 6778 6685 mddev->layout = 0; 6686 + mddev->logical_block_size = 0; 6779 6687 mddev->max_disks = 0; 6780 6688 mddev->events = 0; 6781 6689 mddev->can_decrease_events = 0; ··· 6869 6775 mddev->private = NULL; 6870 6776 put_pers(pers); 6871 6777 clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery); 6872 - 6873 - bioset_exit(&mddev->bio_set); 6874 - bioset_exit(&mddev->sync_set); 6875 - bioset_exit(&mddev->io_clone_set); 6876 6778 } 6877 6779 6878 6780 void md_stop(struct mddev *mddev) ··· 6958 6868 if (mddev->pers) { 6959 6869 if (!md_is_rdwr(mddev)) 6960 6870 set_disk_ro(disk, 0); 6871 + 6872 + if (mode == 2 && mddev->pers->sync_request && 6873 + mddev->to_remove == NULL) 6874 + mddev->to_remove = &md_redundancy_group; 6961 6875 6962 6876 __md_stop_writes(mddev); 6963 6877 __md_stop(mddev); ··· 8467 8373 return 0; 8468 8374 } 8469 8375 8470 - static void md_wakeup_thread_directly(struct md_thread __rcu *thread) 8376 + static void md_wakeup_thread_directly(struct md_thread __rcu **thread) 8471 8377 { 8472 8378 struct md_thread *t; 8473 8379 8474 8380 rcu_read_lock(); 8475 - t = rcu_dereference(thread); 8381 + t = rcu_dereference(*thread); 8476 8382 if (t) 8477 8383 wake_up_process(t->tsk); 8478 8384 rcu_read_unlock(); 8479 8385 } 8480 8386 8481 - void md_wakeup_thread(struct md_thread __rcu *thread) 8387 + void __md_wakeup_thread(struct md_thread __rcu *thread) 8482 8388 { 8483 8389 struct md_thread *t; 8484 8390 8485 - rcu_read_lock(); 8486 8391 t = rcu_dereference(thread); 8487 8392 if (t) { 8488 8393 pr_debug("md: waking up MD thread %s.\n", t->tsk->comm); ··· 8489 8396 if (wq_has_sleeper(&t->wqueue)) 8490 8397 wake_up(&t->wqueue); 8491 8398 } 8492 - rcu_read_unlock(); 8493 8399 } 8494 - EXPORT_SYMBOL(md_wakeup_thread); 8400 + EXPORT_SYMBOL(__md_wakeup_thread); 8495 8401 8496 8402 struct md_thread *md_register_thread(void (*run) (struct md_thread *), 8497 8403 struct mddev *mddev, const char *name) ··· 10070 9978 md_reap_sync_thread(mddev); 10071 9979 } 10072 9980 9981 + static bool md_should_do_recovery(struct mddev *mddev) 9982 + { 9983 + /* 9984 + * As long as one of the following flags is set, 9985 + * recovery needs to do or cleanup. 9986 + */ 9987 + if (test_bit(MD_RECOVERY_NEEDED, &mddev->recovery) || 9988 + test_bit(MD_RECOVERY_DONE, &mddev->recovery)) 9989 + return true; 9990 + 9991 + /* 9992 + * If no flags are set and it is in read-only status, 9993 + * there is nothing to do. 9994 + */ 9995 + if (!md_is_rdwr(mddev)) 9996 + return false; 9997 + 9998 + /* 9999 + * MD_SB_CHANGE_PENDING indicates that the array is switching from clean to 10000 + * active, and no action is needed for now. 10001 + * All other MD_SB_* flags require to update the superblock. 
10002 + */ 10003 + if (mddev->sb_flags & ~ (1<<MD_SB_CHANGE_PENDING)) 10004 + return true; 10005 + 10006 + /* 10007 + * If the array is not using external metadata and there has been no data 10008 + * written for some time, then the array's status needs to be set to 10009 + * in_sync. 10010 + */ 10011 + if (mddev->external == 0 && mddev->safemode == 1) 10012 + return true; 10013 + 10014 + /* 10015 + * When the system is about to restart or the process receives an signal, 10016 + * the array needs to be synchronized as soon as possible. 10017 + * Once the data synchronization is completed, need to change the array 10018 + * status to in_sync. 10019 + */ 10020 + if (mddev->safemode == 2 && !mddev->in_sync && 10021 + mddev->resync_offset == MaxSector) 10022 + return true; 10023 + 10024 + return false; 10025 + } 10026 + 10073 10027 /* 10074 10028 * This routine is regularly called by all per-raid-array threads to 10075 10029 * deal with generic issues like resync and super-block update. ··· 10152 10014 flush_signals(current); 10153 10015 } 10154 10016 10155 - if (!md_is_rdwr(mddev) && 10156 - !test_bit(MD_RECOVERY_NEEDED, &mddev->recovery) && 10157 - !test_bit(MD_RECOVERY_DONE, &mddev->recovery)) 10158 - return; 10159 - if ( ! ( 10160 - (mddev->sb_flags & ~ (1<<MD_SB_CHANGE_PENDING)) || 10161 - test_bit(MD_RECOVERY_NEEDED, &mddev->recovery) || 10162 - test_bit(MD_RECOVERY_DONE, &mddev->recovery) || 10163 - (mddev->external == 0 && mddev->safemode == 1) || 10164 - (mddev->safemode == 2 10165 - && !mddev->in_sync && mddev->resync_offset == MaxSector) 10166 - )) 10017 + if (!md_should_do_recovery(mddev)) 10167 10018 return; 10168 10019 10169 10020 if (mddev_trylock(mddev)) { ··· 10408 10281 unsigned long code, void *x) 10409 10282 { 10410 10283 struct mddev *mddev; 10411 - int need_delay = 0; 10412 10284 10413 10285 spin_lock(&all_mddevs_lock); 10414 10286 list_for_each_entry(mddev, &all_mddevs, all_mddevs) { ··· 10421 10295 mddev->safemode = 2; 10422 10296 mddev_unlock(mddev); 10423 10297 } 10424 - need_delay = 1; 10425 10298 spin_lock(&all_mddevs_lock); 10426 10299 mddev_put_locked(mddev); 10427 10300 } 10428 10301 spin_unlock(&all_mddevs_lock); 10429 - 10430 - /* 10431 - * certain more exotic SCSI devices are known to be 10432 - * volatile wrt too early system reboots. While the 10433 - * right place to handle this issue is the given 10434 - * driver, we do want to have a safe RAID driver ... 10435 - */ 10436 - if (need_delay) 10437 - msleep(1000); 10438 10302 10439 10303 return NOTIFY_DONE; 10440 10304 } ··· 10813 10697 module_param_call(new_array, add_named_array, NULL, NULL, S_IWUSR); 10814 10698 module_param(create_on_open, bool, S_IRUSR|S_IWUSR); 10815 10699 module_param(legacy_async_del_gendisk, bool, 0600); 10700 + module_param(check_new_feature, bool, 0600); 10816 10701 10817 10702 MODULE_LICENSE("GPL"); 10818 10703 MODULE_DESCRIPTION("MD RAID framework");
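Editor's note: among the md.c changes above is a new logical_block_size sysfs attribute. The store handler rejects v0.90 metadata, running arrays, and values not larger than the current logical block size; the accepted value is recorded in the v1.x superblock and stacked into the queue limits by the raid0/1/5/10 personalities further down. A hedged userspace sketch of setting it (the array name "md0" and the value 4096 are example values, not from the patch):

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Write the md sysfs attribute for a stopped array. */
static int md_set_lbs(const char *mddev, unsigned int lbs)
{
        char path[256];
        FILE *f;
        int ret = 0;

        snprintf(path, sizeof(path), "/sys/block/%s/md/logical_block_size", mddev);
        f = fopen(path, "w");
        if (!f)
                return -errno;
        if (fprintf(f, "%u\n", lbs) < 0)
                ret = -EIO;
        if (fclose(f) != 0 && !ret)
                ret = -errno;
        return ret;
}

int main(void)
{
        int ret = md_set_lbs("md0", 4096);      /* hypothetical array and size */

        if (ret)
                fprintf(stderr, "setting logical_block_size failed: %s\n",
                        strerror(-ret));
        return ret ? 1 : 0;
}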
+9 -1
drivers/md/md.h
··· 354 354 MD_HAS_MULTIPLE_PPLS, 355 355 MD_NOT_READY, 356 356 MD_BROKEN, 357 + MD_DO_DELETE, 357 358 MD_DELETED, 358 359 }; 359 360 ··· 433 432 sector_t array_sectors; /* exported array size */ 434 433 int external_size; /* size managed 435 434 * externally */ 435 + unsigned int logical_block_size; 436 436 __u64 events; 437 437 /* If the last 'event' was simply a clean->dirty transition, and 438 438 * we didn't write it to the spares, then it is safe and simple ··· 884 882 885 883 #define THREAD_WAKEUP 0 886 884 885 + #define md_wakeup_thread(thread) do { \ 886 + rcu_read_lock(); \ 887 + __md_wakeup_thread(thread); \ 888 + rcu_read_unlock(); \ 889 + } while (0) 890 + 887 891 static inline void safe_put_page(struct page *p) 888 892 { 889 893 if (p) put_page(p); ··· 903 895 struct mddev *mddev, 904 896 const char *name); 905 897 extern void md_unregister_thread(struct mddev *mddev, struct md_thread __rcu **threadp); 906 - extern void md_wakeup_thread(struct md_thread __rcu *thread); 898 + extern void __md_wakeup_thread(struct md_thread __rcu *thread); 907 899 extern void md_check_recovery(struct mddev *mddev); 908 900 extern void md_reap_sync_thread(struct mddev *mddev); 909 901 extern enum sync_action md_sync_action(struct mddev *mddev);
+13 -7
drivers/md/raid0.c
··· 68 68 struct strip_zone *zone; 69 69 int cnt; 70 70 struct r0conf *conf = kzalloc(sizeof(*conf), GFP_KERNEL); 71 - unsigned blksize = 512; 71 + unsigned int blksize = 512; 72 + 73 + if (!mddev_is_dm(mddev)) 74 + blksize = queue_logical_block_size(mddev->gendisk->queue); 72 75 73 76 *private_conf = ERR_PTR(-ENOMEM); 74 77 if (!conf) ··· 87 84 sector_div(sectors, mddev->chunk_sectors); 88 85 rdev1->sectors = sectors * mddev->chunk_sectors; 89 86 90 - blksize = max(blksize, queue_logical_block_size( 87 + if (mddev_is_dm(mddev)) 88 + blksize = max(blksize, queue_logical_block_size( 91 89 rdev1->bdev->bd_disk->queue)); 92 90 93 91 rdev_for_each(rdev2, mddev) { ··· 387 383 lim.max_hw_sectors = mddev->chunk_sectors; 388 384 lim.max_write_zeroes_sectors = mddev->chunk_sectors; 389 385 lim.max_hw_wzeroes_unmap_sectors = mddev->chunk_sectors; 386 + lim.logical_block_size = mddev->logical_block_size; 390 387 lim.io_min = mddev->chunk_sectors << 9; 391 388 lim.io_opt = lim.io_min * mddev->raid_disks; 392 389 lim.chunk_sectors = mddev->chunk_sectors; ··· 410 405 if (md_check_no_bitmap(mddev)) 411 406 return -EINVAL; 412 407 408 + if (!mddev_is_dm(mddev)) { 409 + ret = raid0_set_limits(mddev); 410 + if (ret) 411 + return ret; 412 + } 413 + 413 414 /* if private is not null, we are here after takeover */ 414 415 if (mddev->private == NULL) { 415 416 ret = create_strip_zones(mddev, &conf); ··· 424 413 mddev->private = conf; 425 414 } 426 415 conf = mddev->private; 427 - if (!mddev_is_dm(mddev)) { 428 - ret = raid0_set_limits(mddev); 429 - if (ret) 430 - return ret; 431 - } 432 416 433 417 /* calculate array device size */ 434 418 md_set_array_sectors(mddev, raid0_size(mddev, 0, 0));
+1
drivers/md/raid1.c
··· 3213 3213 md_init_stacking_limits(&lim); 3214 3214 lim.max_write_zeroes_sectors = 0; 3215 3215 lim.max_hw_wzeroes_unmap_sectors = 0; 3216 + lim.logical_block_size = mddev->logical_block_size; 3216 3217 lim.features |= BLK_FEAT_ATOMIC_WRITES; 3217 3218 err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY); 3218 3219 if (err)
+1
drivers/md/raid10.c
··· 4000 4000 md_init_stacking_limits(&lim); 4001 4001 lim.max_write_zeroes_sectors = 0; 4002 4002 lim.max_hw_wzeroes_unmap_sectors = 0; 4003 + lim.logical_block_size = mddev->logical_block_size; 4003 4004 lim.io_min = mddev->chunk_sectors << 9; 4004 4005 lim.chunk_sectors = mddev->chunk_sectors; 4005 4006 lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
+1 -1
drivers/md/raid5-cache.c
··· 3104 3104 goto out_mempool; 3105 3105 3106 3106 spin_lock_init(&log->tree_lock); 3107 - INIT_RADIX_TREE(&log->big_stripe_tree, GFP_NOWAIT | __GFP_NOWARN); 3107 + INIT_RADIX_TREE(&log->big_stripe_tree, GFP_NOWAIT); 3108 3108 3109 3109 thread = md_register_thread(r5l_reclaim_thread, log->rdev->mddev, 3110 3110 "reclaim");
+5 -2
drivers/md/raid5.c
··· 4956 4956 goto finish; 4957 4957 4958 4958 if (s.handle_bad_blocks || 4959 - test_bit(MD_SB_CHANGE_PENDING, &conf->mddev->sb_flags)) { 4959 + (md_is_rdwr(conf->mddev) && 4960 + test_bit(MD_SB_CHANGE_PENDING, &conf->mddev->sb_flags))) { 4960 4961 set_bit(STRIPE_HANDLE, &sh->state); 4961 4962 goto finish; 4962 4963 } ··· 6769 6768 int batch_size, released; 6770 6769 unsigned int offset; 6771 6770 6772 - if (test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags)) 6771 + if (md_is_rdwr(mddev) && 6772 + test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags)) 6773 6773 break; 6774 6774 6775 6775 released = release_stripe_list(conf, conf->temp_inactive_list); ··· 7747 7745 stripe = roundup_pow_of_two(data_disks * (mddev->chunk_sectors << 9)); 7748 7746 7749 7747 md_init_stacking_limits(&lim); 7748 + lim.logical_block_size = mddev->logical_block_size; 7750 7749 lim.io_min = mddev->chunk_sectors << 9; 7751 7750 lim.io_opt = lim.io_min * (conf->raid_disks - conf->max_degraded); 7752 7751 lim.features |= BLK_FEAT_RAID_PARTIAL_STRIPES_EXPENSIVE;
+1
drivers/nvme/host/apple.c
··· 1283 1283 .reg_read64 = apple_nvme_reg_read64, 1284 1284 .free_ctrl = apple_nvme_free_ctrl, 1285 1285 .get_address = apple_nvme_get_address, 1286 + .get_virt_boundary = nvme_get_virt_boundary, 1286 1287 }; 1287 1288 1288 1289 static void apple_nvme_async_probe(void *data, async_cookie_t cookie)
+7 -8
drivers/nvme/host/core.c
··· 2069 2069 } 2070 2070 2071 2071 static void nvme_set_ctrl_limits(struct nvme_ctrl *ctrl, 2072 - struct queue_limits *lim) 2072 + struct queue_limits *lim, bool is_admin) 2073 2073 { 2074 2074 lim->max_hw_sectors = ctrl->max_hw_sectors; 2075 2075 lim->max_segments = min_t(u32, USHRT_MAX, 2076 2076 min_not_zero(nvme_max_drv_segments(ctrl), ctrl->max_segments)); 2077 2077 lim->max_integrity_segments = ctrl->max_integrity_segments; 2078 - lim->virt_boundary_mask = NVME_CTRL_PAGE_SIZE - 1; 2078 + lim->virt_boundary_mask = ctrl->ops->get_virt_boundary(ctrl, is_admin); 2079 2079 lim->max_segment_size = UINT_MAX; 2080 2080 lim->dma_alignment = 3; 2081 2081 } ··· 2177 2177 int ret; 2178 2178 2179 2179 lim = queue_limits_start_update(ns->disk->queue); 2180 - nvme_set_ctrl_limits(ns->ctrl, &lim); 2180 + nvme_set_ctrl_limits(ns->ctrl, &lim, false); 2181 2181 2182 2182 memflags = blk_mq_freeze_queue(ns->disk->queue); 2183 2183 ret = queue_limits_commit_update(ns->disk->queue, &lim); ··· 2381 2381 ns->head->lba_shift = id->lbaf[lbaf].ds; 2382 2382 ns->head->nuse = le64_to_cpu(id->nuse); 2383 2383 capacity = nvme_lba_to_sect(ns->head, le64_to_cpu(id->nsze)); 2384 - nvme_set_ctrl_limits(ns->ctrl, &lim); 2384 + nvme_set_ctrl_limits(ns->ctrl, &lim, false); 2385 2385 nvme_configure_metadata(ns->ctrl, ns->head, id, nvm, info); 2386 2386 nvme_set_chunk_sectors(ns, id, &lim); 2387 2387 if (!nvme_update_disk_info(ns, id, &lim)) ··· 2599 2599 2600 2600 #ifdef CONFIG_BLK_DEV_ZONED 2601 2601 static int nvme_report_zones(struct gendisk *disk, sector_t sector, 2602 - unsigned int nr_zones, report_zones_cb cb, void *data) 2602 + unsigned int nr_zones, struct blk_report_zones_args *args) 2603 2603 { 2604 - return nvme_ns_report_zones(disk->private_data, sector, nr_zones, cb, 2605 - data); 2604 + return nvme_ns_report_zones(disk->private_data, sector, nr_zones, args); 2606 2605 } 2607 2606 #else 2608 2607 #define nvme_report_zones NULL ··· 3588 3589 min_not_zero(ctrl->max_hw_sectors, max_hw_sectors); 3589 3590 3590 3591 lim = queue_limits_start_update(ctrl->admin_q); 3591 - nvme_set_ctrl_limits(ctrl, &lim); 3592 + nvme_set_ctrl_limits(ctrl, &lim, true); 3592 3593 ret = queue_limits_commit_update(ctrl->admin_q, &lim); 3593 3594 if (ret) 3594 3595 goto out_free;
+6
drivers/nvme/host/fabrics.h
··· 217 217 min(opts->nr_poll_queues, num_online_cpus()); 218 218 } 219 219 220 + static inline unsigned long nvmf_get_virt_boundary(struct nvme_ctrl *ctrl, 221 + bool is_admin) 222 + { 223 + return 0; 224 + } 225 + 220 226 int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val); 221 227 int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val); 222 228 int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val);
+1
drivers/nvme/host/fc.c
··· 3361 3361 .submit_async_event = nvme_fc_submit_async_event, 3362 3362 .delete_ctrl = nvme_fc_delete_ctrl, 3363 3363 .get_address = nvmf_get_address, 3364 + .get_virt_boundary = nvmf_get_virt_boundary, 3364 3365 }; 3365 3366 3366 3367 static void
+2 -2
drivers/nvme/host/multipath.c
··· 576 576 577 577 #ifdef CONFIG_BLK_DEV_ZONED 578 578 static int nvme_ns_head_report_zones(struct gendisk *disk, sector_t sector, 579 - unsigned int nr_zones, report_zones_cb cb, void *data) 579 + unsigned int nr_zones, struct blk_report_zones_args *args) 580 580 { 581 581 struct nvme_ns_head *head = disk->private_data; 582 582 struct nvme_ns *ns; ··· 585 585 srcu_idx = srcu_read_lock(&head->srcu); 586 586 ns = nvme_find_path(head); 587 587 if (ns) 588 - ret = nvme_ns_report_zones(ns, sector, nr_zones, cb, data); 588 + ret = nvme_ns_report_zones(ns, sector, nr_zones, args); 589 589 srcu_read_unlock(&head->srcu, srcu_idx); 590 590 return ret; 591 591 }
+8 -1
drivers/nvme/host/nvme.h
··· 558 558 return head->pi_type && head->ms == head->pi_size; 559 559 } 560 560 561 + static inline unsigned long nvme_get_virt_boundary(struct nvme_ctrl *ctrl, 562 + bool is_admin) 563 + { 564 + return NVME_CTRL_PAGE_SIZE - 1; 565 + } 566 + 561 567 struct nvme_ctrl_ops { 562 568 const char *name; 563 569 struct module *module; ··· 584 578 int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size); 585 579 void (*print_device_info)(struct nvme_ctrl *ctrl); 586 580 bool (*supports_pci_p2pdma)(struct nvme_ctrl *ctrl); 581 + unsigned long (*get_virt_boundary)(struct nvme_ctrl *ctrl, bool is_admin); 587 582 }; 588 583 589 584 /* ··· 1115 1108 }; 1116 1109 1117 1110 int nvme_ns_report_zones(struct nvme_ns *ns, sector_t sector, 1118 - unsigned int nr_zones, report_zones_cb cb, void *data); 1111 + unsigned int nr_zones, struct blk_report_zones_args *args); 1119 1112 int nvme_query_zone_info(struct nvme_ns *ns, unsigned lbaf, 1120 1113 struct nvme_zone_info *zi); 1121 1114 void nvme_update_zone_info(struct nvme_ns *ns, struct queue_limits *lim,
+99 -19
drivers/nvme/host/pci.c
··· 260 260 /* single segment dma mapping */ 261 261 IOD_SINGLE_SEGMENT = 1U << 2, 262 262 263 + /* Data payload contains p2p memory */ 264 + IOD_DATA_P2P = 1U << 3, 265 + 266 + /* Metadata contains p2p memory */ 267 + IOD_META_P2P = 1U << 4, 268 + 269 + /* Data payload contains MMIO memory */ 270 + IOD_DATA_MMIO = 1U << 5, 271 + 272 + /* Metadata contains MMIO memory */ 273 + IOD_META_MMIO = 1U << 6, 274 + 263 275 /* Metadata using non-coalesced MPTR */ 264 - IOD_SINGLE_META_SEGMENT = 1U << 5, 276 + IOD_SINGLE_META_SEGMENT = 1U << 7, 265 277 }; 266 278 267 279 struct nvme_dma_vec { ··· 625 613 struct nvme_queue *nvmeq = req->mq_hctx->driver_data; 626 614 627 615 if (nvmeq->qid && nvme_ctrl_sgl_supported(&dev->ctrl)) { 628 - if (nvme_req(req)->flags & NVME_REQ_USERCMD) 629 - return SGL_FORCED; 630 - if (req->nr_integrity_segments > 1) 616 + /* 617 + * When the controller is capable of using SGL, there are 618 + * several conditions that we force to use it: 619 + * 620 + * 1. A request containing page gaps within the controller's 621 + * mask can not use the PRP format. 622 + * 623 + * 2. User commands use SGL because that lets the device 624 + * validate the requested transfer lengths. 625 + * 626 + * 3. Multiple integrity segments must use SGL as that's the 627 + * only way to describe such a command in NVMe. 628 + */ 629 + if (req_phys_gap_mask(req) & (NVME_CTRL_PAGE_SIZE - 1) || 630 + nvme_req(req)->flags & NVME_REQ_USERCMD || 631 + req->nr_integrity_segments > 1) 631 632 return SGL_FORCED; 632 633 return SGL_SUPPORTED; 633 634 } ··· 710 685 } 711 686 } 712 687 713 - static void nvme_free_prps(struct request *req) 688 + static void nvme_free_prps(struct request *req, unsigned int attrs) 714 689 { 715 690 struct nvme_iod *iod = blk_mq_rq_to_pdu(req); 716 691 struct nvme_queue *nvmeq = req->mq_hctx->driver_data; 717 692 unsigned int i; 718 693 719 694 for (i = 0; i < iod->nr_dma_vecs; i++) 720 - dma_unmap_page(nvmeq->dev->dev, iod->dma_vecs[i].addr, 721 - iod->dma_vecs[i].len, rq_dma_dir(req)); 695 + dma_unmap_phys(nvmeq->dev->dev, iod->dma_vecs[i].addr, 696 + iod->dma_vecs[i].len, rq_dma_dir(req), attrs); 722 697 mempool_free(iod->dma_vecs, nvmeq->dev->dmavec_mempool); 723 698 } 724 699 725 700 static void nvme_free_sgls(struct request *req, struct nvme_sgl_desc *sge, 726 - struct nvme_sgl_desc *sg_list) 701 + struct nvme_sgl_desc *sg_list, unsigned int attrs) 727 702 { 728 703 struct nvme_queue *nvmeq = req->mq_hctx->driver_data; 729 704 enum dma_data_direction dir = rq_dma_dir(req); ··· 732 707 unsigned int i; 733 708 734 709 if (sge->type == (NVME_SGL_FMT_DATA_DESC << 4)) { 735 - dma_unmap_page(dma_dev, le64_to_cpu(sge->addr), len, dir); 710 + dma_unmap_phys(dma_dev, le64_to_cpu(sge->addr), len, dir, 711 + attrs); 736 712 return; 737 713 } 738 714 739 715 for (i = 0; i < len / sizeof(*sg_list); i++) 740 - dma_unmap_page(dma_dev, le64_to_cpu(sg_list[i].addr), 741 - le32_to_cpu(sg_list[i].length), dir); 716 + dma_unmap_phys(dma_dev, le64_to_cpu(sg_list[i].addr), 717 + le32_to_cpu(sg_list[i].length), dir, attrs); 742 718 } 743 719 744 720 static void nvme_unmap_metadata(struct request *req) 745 721 { 746 722 struct nvme_queue *nvmeq = req->mq_hctx->driver_data; 723 + enum pci_p2pdma_map_type map = PCI_P2PDMA_MAP_NONE; 747 724 enum dma_data_direction dir = rq_dma_dir(req); 748 725 struct nvme_iod *iod = blk_mq_rq_to_pdu(req); 749 726 struct device *dma_dev = nvmeq->dev->dev; 750 727 struct nvme_sgl_desc *sge = iod->meta_descriptor; 728 + unsigned int attrs = 0; 751 729 752 730 if 
(iod->flags & IOD_SINGLE_META_SEGMENT) { 753 731 dma_unmap_page(dma_dev, iod->meta_dma, ··· 759 731 return; 760 732 } 761 733 762 - if (!blk_rq_integrity_dma_unmap(req, dma_dev, &iod->meta_dma_state, 763 - iod->meta_total_len)) { 734 + if (iod->flags & IOD_META_P2P) 735 + map = PCI_P2PDMA_MAP_BUS_ADDR; 736 + else if (iod->flags & IOD_META_MMIO) { 737 + map = PCI_P2PDMA_MAP_THRU_HOST_BRIDGE; 738 + attrs |= DMA_ATTR_MMIO; 739 + } 740 + 741 + if (!blk_rq_dma_unmap(req, dma_dev, &iod->meta_dma_state, 742 + iod->meta_total_len, map)) { 764 743 if (nvme_pci_cmd_use_meta_sgl(&iod->cmd)) 765 - nvme_free_sgls(req, sge, &sge[1]); 744 + nvme_free_sgls(req, sge, &sge[1], attrs); 766 745 else 767 - dma_unmap_page(dma_dev, iod->meta_dma, 768 - iod->meta_total_len, dir); 746 + dma_unmap_phys(dma_dev, iod->meta_dma, 747 + iod->meta_total_len, dir, attrs); 769 748 } 770 749 771 750 if (iod->meta_descriptor) ··· 782 747 783 748 static void nvme_unmap_data(struct request *req) 784 749 { 750 + enum pci_p2pdma_map_type map = PCI_P2PDMA_MAP_NONE; 785 751 struct nvme_iod *iod = blk_mq_rq_to_pdu(req); 786 752 struct nvme_queue *nvmeq = req->mq_hctx->driver_data; 787 753 struct device *dma_dev = nvmeq->dev->dev; 754 + unsigned int attrs = 0; 788 755 789 756 if (iod->flags & IOD_SINGLE_SEGMENT) { 790 757 static_assert(offsetof(union nvme_data_ptr, prp1) == ··· 796 759 return; 797 760 } 798 761 799 - if (!blk_rq_dma_unmap(req, dma_dev, &iod->dma_state, iod->total_len)) { 762 + if (iod->flags & IOD_DATA_P2P) 763 + map = PCI_P2PDMA_MAP_BUS_ADDR; 764 + else if (iod->flags & IOD_DATA_MMIO) { 765 + map = PCI_P2PDMA_MAP_THRU_HOST_BRIDGE; 766 + attrs |= DMA_ATTR_MMIO; 767 + } 768 + 769 + if (!blk_rq_dma_unmap(req, dma_dev, &iod->dma_state, iod->total_len, 770 + map)) { 800 771 if (nvme_pci_cmd_use_sgl(&iod->cmd)) 801 772 nvme_free_sgls(req, iod->descriptors[0], 802 - &iod->cmd.common.dptr.sgl); 773 + &iod->cmd.common.dptr.sgl, attrs); 803 774 else 804 - nvme_free_prps(req); 775 + nvme_free_prps(req, attrs); 805 776 } 806 777 807 778 if (iod->nr_descriptors) ··· 1080 1035 if (!blk_rq_dma_map_iter_start(req, dev->dev, &iod->dma_state, &iter)) 1081 1036 return iter.status; 1082 1037 1038 + switch (iter.p2pdma.map) { 1039 + case PCI_P2PDMA_MAP_BUS_ADDR: 1040 + iod->flags |= IOD_DATA_P2P; 1041 + break; 1042 + case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: 1043 + iod->flags |= IOD_DATA_MMIO; 1044 + break; 1045 + case PCI_P2PDMA_MAP_NONE: 1046 + break; 1047 + default: 1048 + return BLK_STS_RESOURCE; 1049 + } 1050 + 1083 1051 if (use_sgl == SGL_FORCED || 1084 1052 (use_sgl == SGL_SUPPORTED && 1085 1053 (sgl_threshold && nvme_pci_avg_seg_size(req) >= sgl_threshold))) ··· 1114 1056 if (!blk_rq_integrity_dma_map_iter_start(req, dev->dev, 1115 1057 &iod->meta_dma_state, &iter)) 1116 1058 return iter.status; 1059 + 1060 + switch (iter.p2pdma.map) { 1061 + case PCI_P2PDMA_MAP_BUS_ADDR: 1062 + iod->flags |= IOD_META_P2P; 1063 + break; 1064 + case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: 1065 + iod->flags |= IOD_META_MMIO; 1066 + break; 1067 + case PCI_P2PDMA_MAP_NONE: 1068 + break; 1069 + default: 1070 + return BLK_STS_RESOURCE; 1071 + } 1117 1072 1118 1073 if (blk_rq_dma_map_coalesce(&iod->meta_dma_state)) 1119 1074 entries = 1; ··· 3321 3250 return dma_pci_p2pdma_supported(dev->dev); 3322 3251 } 3323 3252 3253 + static unsigned long nvme_pci_get_virt_boundary(struct nvme_ctrl *ctrl, 3254 + bool is_admin) 3255 + { 3256 + if (!nvme_ctrl_sgl_supported(ctrl) || is_admin) 3257 + return NVME_CTRL_PAGE_SIZE - 1; 3258 + return 0; 3259 + } 3260 + 3324 3261 
static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = { 3325 3262 .name = "pcie", 3326 3263 .module = THIS_MODULE, ··· 3343 3264 .get_address = nvme_pci_get_address, 3344 3265 .print_device_info = nvme_pci_print_device_info, 3345 3266 .supports_pci_p2pdma = nvme_pci_supports_pci_p2pdma, 3267 + .get_virt_boundary = nvme_pci_get_virt_boundary, 3346 3268 }; 3347 3269 3348 3270 static int nvme_dev_map(struct nvme_dev *dev)
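Editor's note: the new get_virt_boundary controller op lets the PCI driver drop the PRP page mask on SGL-capable I/O queues while keeping it for the admin queue and PRP-only controllers; the other transports below wire up either the page-mask default (nvme_get_virt_boundary) or the no-boundary fabrics helper (nvmf_get_virt_boundary). A standalone sketch of the PCI decision, illustrative only and not the driver code:

#include <stdbool.h>
#include <stdio.h>

#define NVME_CTRL_PAGE_SIZE     4096UL

/* PRP descriptors require page-aligned segment boundaries, SGLs do not,
 * and the admin queue sticks with PRPs.
 */
static unsigned long virt_boundary(bool sgl_supported, bool is_admin)
{
        if (!sgl_supported || is_admin)
                return NVME_CTRL_PAGE_SIZE - 1;
        return 0;
}

int main(void)
{
        printf("admin queue:         0x%lx\n", virt_boundary(true, true));
        printf("I/O queue, PRP only: 0x%lx\n", virt_boundary(false, false));
        printf("I/O queue, SGL:      0x%lx\n", virt_boundary(true, false));
        return 0;
}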
+1
drivers/nvme/host/rdma.c
··· 2202 2202 .delete_ctrl = nvme_rdma_delete_ctrl, 2203 2203 .get_address = nvmf_get_address, 2204 2204 .stop_ctrl = nvme_rdma_stop_ctrl, 2205 + .get_virt_boundary = nvme_get_virt_boundary, 2205 2206 }; 2206 2207 2207 2208 /*
+1
drivers/nvme/host/tcp.c
··· 2865 2865 .delete_ctrl = nvme_tcp_delete_ctrl, 2866 2866 .get_address = nvme_tcp_get_address, 2867 2867 .stop_ctrl = nvme_tcp_stop_ctrl, 2868 + .get_virt_boundary = nvmf_get_virt_boundary, 2868 2869 }; 2869 2870 2870 2871 static bool
+5 -5
drivers/nvme/host/zns.c
··· 148 148 149 149 static int nvme_zone_parse_entry(struct nvme_ns *ns, 150 150 struct nvme_zone_descriptor *entry, 151 - unsigned int idx, report_zones_cb cb, 152 - void *data) 151 + unsigned int idx, 152 + struct blk_report_zones_args *args) 153 153 { 154 154 struct nvme_ns_head *head = ns->head; 155 155 struct blk_zone zone = { }; ··· 169 169 else 170 170 zone.wp = nvme_lba_to_sect(head, le64_to_cpu(entry->wp)); 171 171 172 - return cb(&zone, idx, data); 172 + return disk_report_zone(ns->disk, &zone, idx, args); 173 173 } 174 174 175 175 int nvme_ns_report_zones(struct nvme_ns *ns, sector_t sector, 176 - unsigned int nr_zones, report_zones_cb cb, void *data) 176 + unsigned int nr_zones, struct blk_report_zones_args *args) 177 177 { 178 178 struct nvme_zone_report *report; 179 179 struct nvme_command c = { }; ··· 213 213 214 214 for (i = 0; i < nz && zone_idx < nr_zones; i++) { 215 215 ret = nvme_zone_parse_entry(ns, &report->entries[i], 216 - zone_idx, cb, data); 216 + zone_idx, args); 217 217 if (ret) 218 218 goto out_free; 219 219 zone_idx++;
+1
drivers/nvme/target/loop.c
··· 511 511 .submit_async_event = nvme_loop_submit_async_event, 512 512 .delete_ctrl = nvme_loop_delete_ctrl_host, 513 513 .get_address = nvmf_get_address, 514 + .get_virt_boundary = nvme_get_virt_boundary, 514 515 }; 515 516 516 517 static int nvme_loop_create_io_queues(struct nvme_loop_ctrl *ctrl)
+8 -56
drivers/s390/block/dasd.c
··· 207 207 return 0; 208 208 } 209 209 210 - static struct dentry *dasd_debugfs_setup(const char *name, 211 - struct dentry *base_dentry) 212 - { 213 - struct dentry *pde; 214 - 215 - if (!base_dentry) 216 - return NULL; 217 - pde = debugfs_create_dir(name, base_dentry); 218 - if (!pde || IS_ERR(pde)) 219 - return NULL; 220 - return pde; 221 - } 222 - 223 210 /* 224 211 * Request the irq line for the device. 225 212 */ ··· 221 234 if (rc) 222 235 return rc; 223 236 block->debugfs_dentry = 224 - dasd_debugfs_setup(block->gdp->disk_name, 237 + debugfs_create_dir(block->gdp->disk_name, 225 238 dasd_debugfs_root_entry); 226 239 dasd_profile_init(&block->profile, block->debugfs_dentry); 227 240 if (dasd_global_profile_level == DASD_PROFILE_ON) 228 241 dasd_profile_on(&device->block->profile); 229 242 } 230 243 device->debugfs_dentry = 231 - dasd_debugfs_setup(dev_name(&device->cdev->dev), 244 + debugfs_create_dir(dev_name(&device->cdev->dev), 232 245 dasd_debugfs_root_entry); 233 246 dasd_profile_init(&device->profile, device->debugfs_dentry); 234 247 dasd_hosts_init(device->debugfs_dentry, device); ··· 1044 1057 static void dasd_profile_init(struct dasd_profile *profile, 1045 1058 struct dentry *base_dentry) 1046 1059 { 1047 - umode_t mode; 1048 - struct dentry *pde; 1049 - 1050 - if (!base_dentry) 1051 - return; 1052 - profile->dentry = NULL; 1053 1060 profile->data = NULL; 1054 - mode = (S_IRUSR | S_IWUSR | S_IFREG); 1055 - pde = debugfs_create_file("statistics", mode, base_dentry, 1056 - profile, &dasd_stats_raw_fops); 1057 - if (pde && !IS_ERR(pde)) 1058 - profile->dentry = pde; 1059 - return; 1061 + profile->dentry = debugfs_create_file("statistics", 0600, base_dentry, 1062 + profile, &dasd_stats_raw_fops); 1060 1063 } 1061 1064 1062 1065 static void dasd_profile_exit(struct dasd_profile *profile) ··· 1066 1089 1067 1090 static void dasd_statistics_createroot(void) 1068 1091 { 1069 - struct dentry *pde; 1070 - 1071 - dasd_debugfs_root_entry = NULL; 1072 - pde = debugfs_create_dir("dasd", NULL); 1073 - if (!pde || IS_ERR(pde)) 1074 - goto error; 1075 - dasd_debugfs_root_entry = pde; 1076 - pde = debugfs_create_dir("global", dasd_debugfs_root_entry); 1077 - if (!pde || IS_ERR(pde)) 1078 - goto error; 1079 - dasd_debugfs_global_entry = pde; 1092 + dasd_debugfs_root_entry = debugfs_create_dir("dasd", NULL); 1093 + dasd_debugfs_global_entry = debugfs_create_dir("global", dasd_debugfs_root_entry); 1080 1094 dasd_profile_init(&dasd_global_profile, dasd_debugfs_global_entry); 1081 - return; 1082 - 1083 - error: 1084 - DBF_EVENT(DBF_ERR, "%s", 1085 - "Creation of the dasd debugfs interface failed"); 1086 - dasd_statistics_removeroot(); 1087 - return; 1088 1095 } 1089 1096 1090 1097 #else ··· 1129 1168 static void dasd_hosts_init(struct dentry *base_dentry, 1130 1169 struct dasd_device *device) 1131 1170 { 1132 - struct dentry *pde; 1133 - umode_t mode; 1134 - 1135 - if (!base_dentry) 1136 - return; 1137 - 1138 - mode = S_IRUSR | S_IFREG; 1139 - pde = debugfs_create_file("host_access_list", mode, base_dentry, 1140 - device, &dasd_hosts_fops); 1141 - if (pde && !IS_ERR(pde)) 1142 - device->hosts_dentry = pde; 1171 + device->hosts_dentry = debugfs_create_file("host_access_list", 0400, base_dentry, 1172 + device, &dasd_hosts_fops); 1143 1173 } 1144 1174 1145 1175 struct dasd_ccw_req *dasd_smalloc_request(int magic, int cplength, int datasize,
+2 -1
drivers/s390/block/dasd_devmap.c
··· 355 355 /* each device in dasd= parameter should be set initially online */ 356 356 features |= DASD_FEATURE_INITIAL_ONLINE; 357 357 while (from <= to) { 358 - sprintf(bus_id, "%01x.%01x.%04x", from_id0, from_id1, from++); 358 + scnprintf(bus_id, sizeof(bus_id), 359 + "%01x.%01x.%04x", from_id0, from_id1, from++); 359 360 devmap = dasd_add_busid(bus_id, features); 360 361 if (IS_ERR(devmap)) { 361 362 rc = PTR_ERR(devmap);
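For context, the switch matters because scnprintf() both bounds the write to the buffer size and returns the number of characters actually stored (excluding the terminating NUL), while sprintf() has no bound at all. A small illustrative fragment; the buffer size here is hypothetical, not the dasd bus_id size:

#include <linux/kernel.h>

static void format_bus_id_example(void)
{
        char bus_id[16];
        int len;

        /* writes at most sizeof(bus_id) bytes, including the NUL */
        len = scnprintf(bus_id, sizeof(bus_id), "%01x.%01x.%04x", 0, 0, 0x1234);
        /* len == 8, bus_id == "0.0.1234" */
}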
+8
drivers/s390/block/dasd_eckd.c
··· 6139 6139 struct dasd_copy_relation *copy; 6140 6140 struct dasd_block *block; 6141 6141 struct gendisk *gdp; 6142 + int rc; 6142 6143 6143 6144 copy = device->copy; 6144 6145 if (!copy) ··· 6174 6173 /* swap blocklayer device link */ 6175 6174 gdp = block->gdp; 6176 6175 dasd_add_link_to_gendisk(gdp, secondary); 6176 + rc = device_move(disk_to_dev(gdp), &secondary->cdev->dev, DPM_ORDER_NONE); 6177 + if (rc) { 6178 + dev_err(&primary->cdev->dev, 6179 + "copy_pair_swap: moving blockdevice parent %s->%s failed (%d)\n", 6180 + dev_name(&primary->cdev->dev), 6181 + dev_name(&secondary->cdev->dev), rc); 6182 + } 6177 6183 6178 6184 /* re-enable device */ 6179 6185 dasd_device_remove_stop_bits(primary, DASD_STOPPED_PPRC);
+54 -26
drivers/s390/block/dasd_genhd.c
··· 22 22 23 23 static unsigned int queue_depth = 32; 24 24 static unsigned int nr_hw_queues = 4; 25 + static void dasd_gd_free(struct gendisk *gdp); 25 26 26 27 module_param(queue_depth, uint, 0444); 27 28 MODULE_PARM_DESC(queue_depth, "Default queue depth for new DASD devices"); 28 29 29 30 module_param(nr_hw_queues, uint, 0444); 30 31 MODULE_PARM_DESC(nr_hw_queues, "Default number of hardware queues for new DASD devices"); 32 + 33 + /* 34 + * Set device name. 35 + * dasda - dasdz : 26 devices 36 + * dasdaa - dasdzz : 676 devices, added up = 702 37 + * dasdaaa - dasdzzz : 17576 devices, added up = 18278 38 + * dasdaaaa - dasdzzzz : 456976 devices, added up = 475252 39 + */ 40 + static int dasd_name_format(char *prefix, int index, char *buf, int buflen) 41 + { 42 + const int base = 'z' - 'a' + 1; 43 + char *begin = buf + strlen(prefix); 44 + char *end = buf + buflen; 45 + char *p; 46 + int unit; 47 + 48 + p = end - 1; 49 + *p = '\0'; 50 + unit = base; 51 + do { 52 + if (p == begin) 53 + return -EINVAL; 54 + *--p = 'a' + (index % unit); 55 + index = (index / unit) - 1; 56 + } while (index >= 0); 57 + 58 + memmove(begin, p, end - p); 59 + memcpy(buf, prefix, strlen(prefix)); 60 + 61 + return 0; 62 + } 31 63 32 64 /* 33 65 * Allocate and register gendisk structure for device. ··· 77 45 }; 78 46 struct gendisk *gdp; 79 47 struct dasd_device *base; 80 - int len, rc; 48 + unsigned int devindex; 49 + int rc; 81 50 82 51 /* Make sure the minor for this device exists. */ 83 52 base = block->base; 84 - if (base->devindex >= DASD_PER_MAJOR) 53 + devindex = base->devindex; 54 + if (devindex >= DASD_PER_MAJOR) 85 55 return -EBUSY; 86 56 87 57 block->tag_set.ops = &dasd_mq_ops; ··· 103 69 104 70 /* Initialize gendisk structure. */ 105 71 gdp->major = DASD_MAJOR; 106 - gdp->first_minor = base->devindex << DASD_PARTN_BITS; 72 + gdp->first_minor = devindex << DASD_PARTN_BITS; 107 73 gdp->minors = 1 << DASD_PARTN_BITS; 108 74 gdp->fops = &dasd_device_operations; 109 75 110 - /* 111 - * Set device name. 112 - * dasda - dasdz : 26 devices 113 - * dasdaa - dasdzz : 676 devices, added up = 702 114 - * dasdaaa - dasdzzz : 17576 devices, added up = 18278 115 - * dasdaaaa - dasdzzzz : 456976 devices, added up = 475252 116 - */ 117 - len = sprintf(gdp->disk_name, "dasd"); 118 - if (base->devindex > 25) { 119 - if (base->devindex > 701) { 120 - if (base->devindex > 18277) 121 - len += sprintf(gdp->disk_name + len, "%c", 122 - 'a'+(((base->devindex-18278) 123 - /17576)%26)); 124 - len += sprintf(gdp->disk_name + len, "%c", 125 - 'a'+(((base->devindex-702)/676)%26)); 126 - } 127 - len += sprintf(gdp->disk_name + len, "%c", 128 - 'a'+(((base->devindex-26)/26)%26)); 76 + rc = dasd_name_format("dasd", devindex, gdp->disk_name, sizeof(gdp->disk_name)); 77 + if (rc) { 78 + DBF_DEV_EVENT(DBF_ERR, block->base, 79 + "setting disk name failed, rc %d", rc); 80 + dasd_gd_free(gdp); 81 + return rc; 129 82 } 130 - len += sprintf(gdp->disk_name + len, "%c", 'a'+(base->devindex%26)); 131 83 132 84 if (base->features & DASD_FEATURE_READONLY || 133 85 test_bit(DASD_FLAG_DEVICE_RO, &base->flags)) ··· 132 112 } 133 113 134 114 /* 115 + * Free gendisk structure 116 + */ 117 + static void dasd_gd_free(struct gendisk *gd) 118 + { 119 + del_gendisk(gd); 120 + gd->private_data = NULL; 121 + put_disk(gd); 122 + } 123 + 124 + /* 135 125 * Unregister and free gendisk structure for device. 
136 126 */ 137 127 void dasd_gendisk_free(struct dasd_block *block) 138 128 { 139 129 if (block->gdp) { 140 - del_gendisk(block->gdp); 141 - block->gdp->private_data = NULL; 142 - put_disk(block->gdp); 130 + dasd_gd_free(block->gdp); 143 131 block->gdp = NULL; 144 132 blk_mq_free_tag_set(&block->tag_set); 145 133 }
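The new dasd_name_format() replaces the nested sprintf() arithmetic with a single base-26 loop. The mapping it produces can be checked with a hypothetical standalone adaptation of the same algorithm (userspace test code, not part of the series):

#include <stdio.h>
#include <string.h>

static int name_format(const char *prefix, int index, char *buf, int buflen)
{
        const int base = 'z' - 'a' + 1;
        char *begin = buf + strlen(prefix);
        char *end = buf + buflen;
        char *p = end - 1;

        *p = '\0';
        do {
                if (p == begin)
                        return -1;
                *--p = 'a' + (index % base);
                index = (index / base) - 1;
        } while (index >= 0);

        memmove(begin, p, end - p);
        memcpy(buf, prefix, strlen(prefix));
        return 0;
}

int main(void)
{
        int idx[] = { 0, 25, 26, 701, 702, 18277, 18278 };
        int n = sizeof(idx) / sizeof(idx[0]);
        char name[32];
        int i;

        for (i = 0; i < n; i++) {
                name_format("dasd", idx[i], name, sizeof(name));
                printf("%6d -> %s\n", idx[i], name);
        }
        return 0;
}

Index 0 maps to dasda, 26 to dasdaa, 702 to dasdaaa and 18278 to dasdaaaa, matching the table in the new comment.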
+1 -1
drivers/scsi/sd.h
··· 240 240 unsigned int sd_zbc_complete(struct scsi_cmnd *cmd, unsigned int good_bytes, 241 241 struct scsi_sense_hdr *sshdr); 242 242 int sd_zbc_report_zones(struct gendisk *disk, sector_t sector, 243 - unsigned int nr_zones, report_zones_cb cb, void *data); 243 + unsigned int nr_zones, struct blk_report_zones_args *args); 244 244 245 245 #else /* CONFIG_BLK_DEV_ZONED */ 246 246
+7 -13
drivers/scsi/sd_zbc.c
··· 35 35 * @buf: SCSI zone descriptor. 36 36 * @idx: Index of the zone relative to the first zone reported by the current 37 37 * sd_zbc_report_zones() call. 38 - * @cb: Callback function pointer. 39 - * @data: Second argument passed to @cb. 38 + * @args: report zones arguments (callback, etc) 40 39 * 41 40 * Return: Value returned by @cb. 42 41 * ··· 43 44 * call @cb(blk_zone, @data). 44 45 */ 45 46 static int sd_zbc_parse_report(struct scsi_disk *sdkp, const u8 buf[64], 46 - unsigned int idx, report_zones_cb cb, void *data) 47 + unsigned int idx, struct blk_report_zones_args *args) 47 48 { 48 49 struct scsi_device *sdp = sdkp->device; 49 50 struct blk_zone zone = { 0 }; 50 51 sector_t start_lba, gran; 51 - int ret; 52 52 53 53 if (WARN_ON_ONCE(sd_zbc_is_gap_zone(buf))) 54 54 return -EINVAL; ··· 85 87 else 86 88 zone.wp = logical_to_sectors(sdp, get_unaligned_be64(&buf[24])); 87 89 88 - ret = cb(&zone, idx, data); 89 - if (ret) 90 - return ret; 91 - 92 - return 0; 90 + return disk_report_zone(sdkp->disk, &zone, idx, args); 93 91 } 94 92 95 93 /** ··· 211 217 * @disk: Disk to report zones for. 212 218 * @sector: Start sector. 213 219 * @nr_zones: Maximum number of zones to report. 214 - * @cb: Callback function called to report zone information. 215 - * @data: Second argument passed to @cb. 220 + * @args: Callback arguments. 216 221 * 217 222 * Called by the block layer to iterate over zone information. See also the 218 223 * disk->fops->report_zones() calls in block/blk-zoned.c. 219 224 */ 220 225 int sd_zbc_report_zones(struct gendisk *disk, sector_t sector, 221 - unsigned int nr_zones, report_zones_cb cb, void *data) 226 + unsigned int nr_zones, 227 + struct blk_report_zones_args *args) 222 228 { 223 229 struct scsi_disk *sdkp = scsi_disk(disk); 224 230 sector_t lba = sectors_to_logical(sdkp->device, sector); ··· 277 283 } 278 284 279 285 ret = sd_zbc_parse_report(sdkp, buf + offset, zone_idx, 280 - cb, data); 286 + args); 281 287 if (ret) 282 288 goto out; 283 289
+6 -5
fs/btrfs/zoned.c
··· 264 264 } 265 265 } 266 266 267 - ret = blkdev_report_zones(device->bdev, pos >> SECTOR_SHIFT, *nr_zones, 268 - copy_zone_info_cb, zones); 267 + ret = blkdev_report_zones_cached(device->bdev, pos >> SECTOR_SHIFT, 268 + *nr_zones, copy_zone_info_cb, zones); 269 269 if (ret < 0) { 270 270 btrfs_err(device->fs_info, 271 271 "zoned: failed to read zone %llu on %s (devid %llu)", ··· 494 494 case BLK_ZONE_COND_IMP_OPEN: 495 495 case BLK_ZONE_COND_EXP_OPEN: 496 496 case BLK_ZONE_COND_CLOSED: 497 + case BLK_ZONE_COND_ACTIVE: 497 498 __set_bit(nreported, zone_info->active_zones); 498 499 nactive++; 499 500 break; ··· 897 896 if (sb_zone + 1 >= nr_zones) 898 897 return -ENOENT; 899 898 900 - ret = blkdev_report_zones(bdev, zone_start_sector(sb_zone, bdev), 901 - BTRFS_NR_SB_LOG_ZONES, copy_zone_info_cb, 902 - zones); 899 + ret = blkdev_report_zones_cached(bdev, zone_start_sector(sb_zone, bdev), 900 + BTRFS_NR_SB_LOG_ZONES, 901 + copy_zone_info_cb, zones); 903 902 if (ret < 0) 904 903 return ret; 905 904 if (unlikely(ret != BTRFS_NR_SB_LOG_ZONES))
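blkdev_report_zones_cached() keeps the calling convention of blkdev_report_zones(), but the report may be served from the disk's zone condition cache, in which case implicitly open, explicitly open and closed zones are all reported as BLK_ZONE_COND_ACTIVE. A hedged sketch of a hypothetical in-kernel caller (the count_writable_* names are illustrative only):

#include <linux/blkdev.h>
#include <linux/blkzoned.h>

static int count_writable_cb(struct blk_zone *zone, unsigned int idx, void *data)
{
        unsigned int *nr_writable = data;

        switch (zone->cond) {
        case BLK_ZONE_COND_EMPTY:
        case BLK_ZONE_COND_IMP_OPEN:
        case BLK_ZONE_COND_EXP_OPEN:
        case BLK_ZONE_COND_CLOSED:
        case BLK_ZONE_COND_ACTIVE:      /* only seen in cached reports */
                (*nr_writable)++;
                break;
        default:
                break;
        }
        return 0;
}

static int count_writable_zones(struct block_device *bdev, unsigned int *nr_writable)
{
        int ret;

        *nr_writable = 0;
        ret = blkdev_report_zones_cached(bdev, 0, BLK_ALL_ZONES,
                                         count_writable_cb, nr_writable);
        return ret < 0 ? ret : 0;
}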
+1
fs/xfs/libxfs/xfs_zones.c
··· 95 95 case BLK_ZONE_COND_IMP_OPEN: 96 96 case BLK_ZONE_COND_EXP_OPEN: 97 97 case BLK_ZONE_COND_CLOSED: 98 + case BLK_ZONE_COND_ACTIVE: 98 99 return xfs_zone_validate_wp(zone, rtg, write_pointer); 99 100 case BLK_ZONE_COND_FULL: 100 101 return xfs_zone_validate_full(zone, rtg, write_pointer);
+1 -1
fs/xfs/xfs_zone_alloc.c
··· 1263 1263 PAGE_SHIFT; 1264 1264 1265 1265 if (bdev_is_zoned(bt->bt_bdev)) { 1266 - error = blkdev_report_zones(bt->bt_bdev, 1266 + error = blkdev_report_zones_cached(bt->bt_bdev, 1267 1267 XFS_FSB_TO_BB(mp, mp->m_sb.sb_rtstart), 1268 1268 mp->m_sb.sb_rgcount, xfs_get_zone_info_cb, &iz); 1269 1269 if (error < 0)
+3 -1
include/linux/backing-dev-defs.h
··· 170 170 u64 id; 171 171 struct rb_node rb_node; /* keyed by ->id */ 172 172 struct list_head bdi_list; 173 - unsigned long ra_pages; /* max readahead in PAGE_SIZE units */ 173 + /* max readahead in PAGE_SIZE units */ 174 + unsigned long __data_racy ra_pages; 175 + 174 176 unsigned long io_pages; /* max allowed IO size */ 175 177 176 178 struct kref refcnt; /* Reference counter for the structure */
+6 -1
include/linux/bio-integrity.h
··· 13 13 BIP_CHECK_GUARD = 1 << 5, /* guard check */ 14 14 BIP_CHECK_REFTAG = 1 << 6, /* reftag check */ 15 15 BIP_CHECK_APPTAG = 1 << 7, /* apptag check */ 16 - BIP_P2P_DMA = 1 << 8, /* using P2P address */ 16 + 17 + BIP_MEMPOOL = 1 << 15, /* buffer backed by mempool */ 17 18 }; 18 19 19 20 struct bio_integrity_payload { ··· 141 140 return 0; 142 141 } 143 142 #endif /* CONFIG_BLK_DEV_INTEGRITY */ 143 + 144 + void bio_integrity_alloc_buf(struct bio *bio, bool zero_buffer); 145 + void bio_integrity_free_buf(struct bio_integrity_payload *bip); 146 + 144 147 #endif /* _LINUX_BIO_INTEGRITY_H */
+2
include/linux/bio.h
··· 324 324 gfp_t gfp, struct bio_set *bs); 325 325 int bio_split_io_at(struct bio *bio, const struct queue_limits *lim, 326 326 unsigned *segs, unsigned max_bytes, unsigned len_align); 327 + u8 bio_seg_gap(struct request_queue *q, struct bio *prev, struct bio *next, 328 + u8 gaps_bit); 327 329 328 330 /** 329 331 * bio_next_split - get next @sectors from a bio, splitting if necessary
+5 -14
include/linux/blk-integrity.h
··· 8 8 9 9 struct request; 10 10 11 + /* 12 + * Maximum contiguous integrity buffer allocation. 13 + */ 14 + #define BLK_INTEGRITY_MAX_SIZE SZ_2M 15 + 11 16 enum blk_integrity_flags { 12 17 BLK_INTEGRITY_NOVERIFY = 1 << 0, 13 18 BLK_INTEGRITY_NOGENERATE = 1 << 1, ··· 32 27 33 28 #ifdef CONFIG_BLK_DEV_INTEGRITY 34 29 int blk_rq_map_integrity_sg(struct request *, struct scatterlist *); 35 - 36 - static inline bool blk_rq_integrity_dma_unmap(struct request *req, 37 - struct device *dma_dev, struct dma_iova_state *state, 38 - size_t mapped_len) 39 - { 40 - return blk_dma_unmap(req, dma_dev, state, mapped_len, 41 - bio_integrity(req->bio)->bip_flags & BIP_P2P_DMA); 42 - } 43 30 44 31 int blk_rq_count_integrity_sg(struct request_queue *, struct bio *); 45 32 int blk_rq_integrity_map_user(struct request *rq, void __user *ubuf, ··· 120 123 struct scatterlist *s) 121 124 { 122 125 return 0; 123 - } 124 - static inline bool blk_rq_integrity_dma_unmap(struct request *req, 125 - struct device *dma_dev, struct dma_iova_state *state, 126 - size_t mapped_len) 127 - { 128 - return false; 129 126 } 130 127 static inline int blk_rq_integrity_map_user(struct request *rq, 131 128 void __user *ubuf,
+13 -15
include/linux/blk-mq-dma.h
··· 16 16 /* Output address range for this iteration */ 17 17 dma_addr_t addr; 18 18 u32 len; 19 + struct pci_p2pdma_map_state p2pdma; 19 20 20 21 /* Status code. Only valid when blk_rq_dma_map_iter_* returned false */ 21 22 blk_status_t status; 22 23 23 24 /* Internal to blk_rq_dma_map_iter_* */ 24 25 struct blk_map_iter iter; 25 - struct pci_p2pdma_map_state p2pdma; 26 26 }; 27 27 28 28 bool blk_rq_dma_map_iter_start(struct request *req, struct device *dma_dev, ··· 43 43 } 44 44 45 45 /** 46 - * blk_dma_unmap - try to DMA unmap a request 46 + * blk_rq_dma_unmap - try to DMA unmap a request 47 47 * @req: request to unmap 48 48 * @dma_dev: device to unmap from 49 49 * @state: DMA IOVA state 50 50 * @mapped_len: number of bytes to unmap 51 - * @is_p2p: true if mapped with PCI_P2PDMA_MAP_BUS_ADDR 51 + * @map: peer-to-peer mapping type 52 52 * 53 53 * Returns %false if the callers need to manually unmap every DMA segment 54 54 * mapped using @iter or %true if no work is left to be done. 55 55 */ 56 - static inline bool blk_dma_unmap(struct request *req, struct device *dma_dev, 57 - struct dma_iova_state *state, size_t mapped_len, bool is_p2p) 56 + static inline bool blk_rq_dma_unmap(struct request *req, struct device *dma_dev, 57 + struct dma_iova_state *state, size_t mapped_len, 58 + enum pci_p2pdma_map_type map) 58 59 { 59 - if (is_p2p) 60 + if (map == PCI_P2PDMA_MAP_BUS_ADDR) 60 61 return true; 61 62 62 63 if (dma_use_iova(state)) { 64 + unsigned int attrs = 0; 65 + 66 + if (map == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE) 67 + attrs |= DMA_ATTR_MMIO; 68 + 63 69 dma_iova_destroy(dma_dev, state, mapped_len, rq_dma_dir(req), 64 - 0); 70 + attrs); 65 71 return true; 66 72 } 67 73 68 74 return !dma_need_unmap(dma_dev); 69 75 } 70 - 71 - static inline bool blk_rq_dma_unmap(struct request *req, struct device *dma_dev, 72 - struct dma_iova_state *state, size_t mapped_len) 73 - { 74 - return blk_dma_unmap(req, dma_dev, state, mapped_len, 75 - req->cmd_flags & REQ_P2PDMA); 76 - } 77 - 78 76 #endif /* BLK_MQ_DMA_H */
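With REQ_P2PDMA and BIP_P2P_DMA gone, blk_rq_dma_unmap() takes the pci_p2pdma_map_type directly, so a driver is expected to remember the mapping type it observed while iterating with blk_rq_dma_map_iter_start()/next() (the iterator's p2pdma state now sits among the output fields). A rough sketch under that assumption; the mydrv_* names and iod layout are hypothetical:

#include <linux/blk-mq-dma.h>

/* hypothetical per-request driver state, filled in at map time */
struct mydrv_iod {
        struct dma_iova_state dma_state;
        size_t mapped_len;                      /* total bytes mapped */
        enum pci_p2pdma_map_type map_type;      /* saved from iter.p2pdma.map */
};

static void mydrv_unmap_data(struct request *req, struct device *dma_dev,
                             struct mydrv_iod *iod)
{
        if (blk_rq_dma_unmap(req, dma_dev, &iod->dma_state,
                             iod->mapped_len, iod->map_type))
                return;

        /* otherwise unmap each DMA segment individually, e.g. dma_unmap_page() */
}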
+29 -1
include/linux/blk-mq.h
··· 152 152 unsigned short nr_phys_segments; 153 153 unsigned short nr_integrity_segments; 154 154 155 + /* 156 + * The lowest set bit for address gaps between physical segments. This 157 + * provides information necessary for dma optimization opportunities, 158 + * like for testing if the segments can be coalesced against the 159 + * device's iommu granule. 160 + */ 161 + unsigned char phys_gap_bit; 162 + 155 163 #ifdef CONFIG_BLK_INLINE_ENCRYPTION 156 164 struct bio_crypt_ctx *crypt_ctx; 157 165 struct blk_crypto_keyslot *crypt_keyslot; ··· 215 207 rq_end_io_fn *end_io; 216 208 void *end_io_data; 217 209 }; 210 + 211 + /* 212 + * Returns a mask with all bits starting at req->phys_gap_bit set to 1. 213 + */ 214 + static inline unsigned long req_phys_gap_mask(const struct request *req) 215 + { 216 + return ~(((1UL << req->phys_gap_bit) >> 1) - 1); 217 + } 218 218 219 219 static inline enum req_op req_op(const struct request *req) 220 220 { ··· 1015 999 return rq + 1; 1016 1000 } 1017 1001 1002 + static inline struct blk_mq_hw_ctx *queue_hctx(struct request_queue *q, int id) 1003 + { 1004 + struct blk_mq_hw_ctx *hctx; 1005 + 1006 + rcu_read_lock(); 1007 + hctx = rcu_dereference(q->queue_hw_ctx)[id]; 1008 + rcu_read_unlock(); 1009 + 1010 + return hctx; 1011 + } 1012 + 1018 1013 #define queue_for_each_hw_ctx(q, hctx, i) \ 1019 - xa_for_each(&(q)->hctx_table, (i), (hctx)) 1014 + for ((i) = 0; (i) < (q)->nr_hw_queues && \ 1015 + ({ hctx = queue_hctx((q), i); 1; }); (i)++) 1020 1016 1021 1017 #define hctx_for_each_ctx(hctx, ctx, i) \ 1022 1018 for ((i) = 0; (i) < (hctx)->nr_ctx && \
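The comment on phys_gap_bit suggests the intended use is testing segment gaps against the IOMMU granule. Assuming req_phys_gap_mask() is meant to be compared with a boundary mask such as the one returned by dma_get_merge_boundary(), a hypothetical check could look like the sketch below; this illustrates the idea and is not code from the series:

#include <linux/blk-mq.h>
#include <linux/dma-mapping.h>

static bool mydrv_gaps_are_granule_aligned(struct request *req,
                                           struct device *dma_dev)
{
        unsigned long boundary = dma_get_merge_boundary(dma_dev);

        /* 0 means the DMA layer cannot merge segments at all */
        if (!boundary)
                return false;

        /* no gap may have a set bit below the IOMMU granule boundary */
        return !(req_phys_gap_mask(req) & boundary);
}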
+12 -2
include/linux/blk_types.h
··· 218 218 enum rw_hint bi_write_hint; 219 219 u8 bi_write_stream; 220 220 blk_status_t bi_status; 221 + 222 + /* 223 + * The bvec gap bit indicates the lowest set bit in any address offset 224 + * between all bi_io_vecs. This field is initialized only after the bio 225 + * is split to the hardware limits (see bio_split_io_at()). The value 226 + * may be used to consider DMA optimization when performing that 227 + * mapping. The value is compared to a power of two mask where the 228 + * result depends on any bit set within the mask, so saving the lowest 229 + * bit is sufficient to know if any segment gap collides with the mask. 230 + */ 231 + u8 bi_bvec_gap_bit; 232 + 221 233 atomic_t __bi_remaining; 222 234 223 235 struct bvec_iter bi_iter; ··· 393 381 __REQ_DRV, /* for driver use */ 394 382 __REQ_FS_PRIVATE, /* for file system (submitter) use */ 395 383 __REQ_ATOMIC, /* for atomic write operations */ 396 - __REQ_P2PDMA, /* contains P2P DMA pages */ 397 384 /* 398 385 * Command specific flags, keep last: 399 386 */ ··· 425 414 #define REQ_DRV (__force blk_opf_t)(1ULL << __REQ_DRV) 426 415 #define REQ_FS_PRIVATE (__force blk_opf_t)(1ULL << __REQ_FS_PRIVATE) 427 416 #define REQ_ATOMIC (__force blk_opf_t)(1ULL << __REQ_ATOMIC) 428 - #define REQ_P2PDMA (__force blk_opf_t)(1ULL << __REQ_P2PDMA) 429 417 430 418 #define REQ_NOUNMAP (__force blk_opf_t)(1ULL << __REQ_NOUNMAP) 431 419
+30 -32
include/linux/blkdev.h
··· 38 38 struct kiocb; 39 39 struct pr_ops; 40 40 struct rq_qos; 41 + struct blk_report_zones_args; 41 42 struct blk_queue_stats; 42 43 struct blk_stat_callback; 43 44 struct blk_crypto_profile; ··· 173 172 #define GD_ADDED 4 174 173 #define GD_SUPPRESS_PART_SCAN 5 175 174 #define GD_OWNS_QUEUE 6 175 + #define GD_ZONE_APPEND_USED 7 176 176 177 177 struct mutex open_mutex; /* open/close mutex */ 178 178 unsigned open_partitions; /* number of open partitions */ ··· 197 195 unsigned int nr_zones; 198 196 unsigned int zone_capacity; 199 197 unsigned int last_zone_capacity; 200 - unsigned long __rcu *conv_zones_bitmap; 198 + u8 __rcu *zones_cond; 201 199 unsigned int zone_wplugs_hash_bits; 202 200 atomic_t nr_zone_wplugs; 203 201 spinlock_t zone_wplugs_lock; ··· 380 378 unsigned int max_sectors; 381 379 unsigned int max_user_sectors; 382 380 unsigned int max_segment_size; 383 - unsigned int min_segment_size; 381 + unsigned int max_fast_segment_size; 384 382 unsigned int physical_block_size; 385 383 unsigned int logical_block_size; 386 384 unsigned int alignment_offset; ··· 434 432 typedef int (*report_zones_cb)(struct blk_zone *zone, unsigned int idx, 435 433 void *data); 436 434 435 + int disk_report_zone(struct gendisk *disk, struct blk_zone *zone, 436 + unsigned int idx, struct blk_report_zones_args *args); 437 + 438 + int blkdev_get_zone_info(struct block_device *bdev, sector_t sector, 439 + struct blk_zone *zone); 440 + 437 441 #define BLK_ALL_ZONES ((unsigned int)-1) 438 442 int blkdev_report_zones(struct block_device *bdev, sector_t sector, 443 + unsigned int nr_zones, report_zones_cb cb, void *data); 444 + int blkdev_report_zones_cached(struct block_device *bdev, sector_t sector, 439 445 unsigned int nr_zones, report_zones_cb cb, void *data); 440 446 int blkdev_zone_mgmt(struct block_device *bdev, enum req_op op, 441 447 sector_t sectors, sector_t nr_sectors); ··· 495 485 */ 496 486 unsigned long queue_flags; 497 487 498 - unsigned int rq_timeout; 488 + unsigned int __data_racy rq_timeout; 499 489 500 490 unsigned int queue_depth; 501 491 ··· 503 493 504 494 /* hw dispatch queues */ 505 495 unsigned int nr_hw_queues; 506 - struct xarray hctx_table; 496 + struct blk_mq_hw_ctx * __rcu *queue_hw_ctx; 507 497 508 498 struct percpu_ref q_usage_counter; 509 499 struct lock_class_key io_lock_cls_key; ··· 931 921 { 932 922 return disk_zone_capacity(bdev->bd_disk, pos); 933 923 } 924 + 925 + bool bdev_zone_is_seq(struct block_device *bdev, sector_t sector); 926 + 934 927 #else /* CONFIG_BLK_DEV_ZONED */ 935 928 static inline unsigned int disk_nr_zones(struct gendisk *disk) 936 929 { 937 930 return 0; 931 + } 932 + 933 + static inline bool bdev_zone_is_seq(struct block_device *bdev, sector_t sector) 934 + { 935 + return false; 938 936 } 939 937 940 938 static inline bool bio_needs_zone_write_plugging(struct bio *bio) ··· 1522 1504 return q->limits.chunk_sectors; 1523 1505 } 1524 1506 1507 + static inline sector_t bdev_zone_start(struct block_device *bdev, 1508 + sector_t sector) 1509 + { 1510 + return sector & ~(bdev_zone_sectors(bdev) - 1); 1511 + } 1512 + 1525 1513 static inline sector_t bdev_offset_from_zone_start(struct block_device *bdev, 1526 1514 sector_t sector) 1527 1515 { ··· 1551 1527 sector_t sector) 1552 1528 { 1553 1529 return bdev_is_zone_start(bdev, sector); 1554 - } 1555 - 1556 - /** 1557 - * bdev_zone_is_seq - check if a sector belongs to a sequential write zone 1558 - * @bdev: block device to check 1559 - * @sector: sector number 1560 - * 1561 - * Check if @sector on @bdev is 
contained in a sequential write required zone. 1562 - */ 1563 - static inline bool bdev_zone_is_seq(struct block_device *bdev, sector_t sector) 1564 - { 1565 - bool is_seq = false; 1566 - 1567 - #if IS_ENABLED(CONFIG_BLK_DEV_ZONED) 1568 - if (bdev_is_zoned(bdev)) { 1569 - struct gendisk *disk = bdev->bd_disk; 1570 - unsigned long *bitmap; 1571 - 1572 - rcu_read_lock(); 1573 - bitmap = rcu_dereference(disk->conv_zones_bitmap); 1574 - is_seq = !bitmap || 1575 - !test_bit(disk_zone_no(disk, sector), bitmap); 1576 - rcu_read_unlock(); 1577 - } 1578 - #endif 1579 - 1580 - return is_seq; 1581 1530 } 1582 1531 1583 1532 int blk_zone_issue_zeroout(struct block_device *bdev, sector_t sector, ··· 1659 1662 /* this callback is with swap_lock and sometimes page table lock held */ 1660 1663 void (*swap_slot_free_notify) (struct block_device *, unsigned long); 1661 1664 int (*report_zones)(struct gendisk *, sector_t sector, 1662 - unsigned int nr_zones, report_zones_cb cb, void *data); 1665 + unsigned int nr_zones, 1666 + struct blk_report_zones_args *args); 1663 1667 char *(*devnode)(struct gendisk *disk, umode_t *mode); 1664 1668 /* returns the length of the identifier or a negative errno: */ 1665 1669 int (*get_unique_id)(struct gendisk *disk, u8 id[16],
+2 -1
include/linux/blktrace_api.h
··· 14 14 #include <linux/sysfs.h> 15 15 16 16 struct blk_trace { 17 + int version; 17 18 int trace_state; 18 19 struct rchan *rchan; 19 20 unsigned long __percpu *sequence; 20 21 unsigned char __percpu *msg_data; 21 - u16 act_mask; 22 + u64 act_mask; 22 23 u64 start_lba; 23 24 u64 end_lba; 24 25 u32 pid;
+8 -2
include/linux/device-mapper.h
··· 538 538 #ifdef CONFIG_BLK_DEV_ZONED 539 539 struct dm_report_zones_args { 540 540 struct dm_target *tgt; 541 + struct gendisk *disk; 541 542 sector_t next_sector; 542 543 543 - void *orig_data; 544 - report_zones_cb orig_cb; 545 544 unsigned int zone_idx; 545 + 546 + /* for block layer ->report_zones */ 547 + struct blk_report_zones_args *rep_args; 548 + 549 + /* for internal users */ 550 + report_zones_cb cb; 551 + void *data; 546 552 547 553 /* must be filled by ->report_zones before calling dm_report_zones_cb */ 548 554 sector_t start;
+32 -2
include/linux/kfifo.h
··· 370 370 ) 371 371 372 372 /** 373 + * kfifo_alloc_node - dynamically allocates a new fifo buffer on a NUMA node 374 + * @fifo: pointer to the fifo 375 + * @size: the number of elements in the fifo, this must be a power of 2 376 + * @gfp_mask: get_free_pages mask, passed to kmalloc() 377 + * @node: NUMA node to allocate memory on 378 + * 379 + * This macro dynamically allocates a new fifo buffer with NUMA node awareness. 380 + * 381 + * The number of elements will be rounded-up to a power of 2. 382 + * The fifo will be released with kfifo_free(). 383 + * Return 0 if no error, otherwise an error code. 384 + */ 385 + #define kfifo_alloc_node(fifo, size, gfp_mask, node) \ 386 + __kfifo_int_must_check_helper( \ 387 + ({ \ 388 + typeof((fifo) + 1) __tmp = (fifo); \ 389 + struct __kfifo *__kfifo = &__tmp->kfifo; \ 390 + __is_kfifo_ptr(__tmp) ? \ 391 + __kfifo_alloc_node(__kfifo, size, sizeof(*__tmp->type), gfp_mask, node) : \ 392 + -EINVAL; \ 393 + }) \ 394 + ) 395 + 396 + /** 373 397 * kfifo_free - frees the fifo 374 398 * @fifo: the fifo to be freed 375 399 */ ··· 923 899 ) 924 900 925 901 926 - extern int __kfifo_alloc(struct __kfifo *fifo, unsigned int size, 927 - size_t esize, gfp_t gfp_mask); 902 + extern int __kfifo_alloc_node(struct __kfifo *fifo, unsigned int size, 903 + size_t esize, gfp_t gfp_mask, int node); 904 + 905 + static inline int __kfifo_alloc(struct __kfifo *fifo, unsigned int size, 906 + size_t esize, gfp_t gfp_mask) 907 + { 908 + return __kfifo_alloc_node(fifo, size, esize, gfp_mask, NUMA_NO_NODE); 909 + } 928 910 929 911 extern void __kfifo_free(struct __kfifo *fifo); 930 912
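kfifo_alloc_node() is used exactly like kfifo_alloc(), with the backing buffer allocated on the given NUMA node. A short sketch for a hypothetical per-node queue (the myqueue names are illustrative only):

#include <linux/kfifo.h>
#include <linux/slab.h>

struct myqueue {
        DECLARE_KFIFO_PTR(events, unsigned int);
};

static int myqueue_init(struct myqueue *q, int node)
{
        /* 256 elements, backing store allocated on @node */
        return kfifo_alloc_node(&q->events, 256, GFP_KERNEL, node);
}

static void myqueue_exit(struct myqueue *q)
{
        kfifo_free(&q->events);
}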
+4 -2
include/linux/sbitmap.h
··· 75 75 */ 76 76 struct sbitmap_word *map; 77 77 78 - /* 78 + /** 79 79 * @alloc_hint: Cache of last successfully allocated or freed bit. 80 80 * 81 81 * This is per-cpu, which allows multiple users to stick to different ··· 128 128 */ 129 129 struct sbq_wait_state *ws; 130 130 131 - /* 131 + /** 132 132 * @ws_active: count of currently active ws waitqueues 133 133 */ 134 134 atomic_t ws_active; ··· 547 547 * sbitmap_queue. 548 548 * @sbq: Bitmap queue to wait on. 549 549 * @wait_index: A counter per "user" of @sbq. 550 + * 551 + * Return: Next wait queue to be used 550 552 */ 551 553 static inline struct sbq_wait_state *sbq_wait_ptr(struct sbitmap_queue *sbq, 552 554 atomic_t *wait_index)
+53 -2
include/uapi/linux/blktrace_api.h
··· 26 26 BLK_TC_DRV_DATA = 1 << 14, /* binary per-driver data */ 27 27 BLK_TC_FUA = 1 << 15, /* fua requests */ 28 28 29 - BLK_TC_END = 1 << 15, /* we've run out of bits! */ 29 + BLK_TC_END_V1 = 1 << 15, /* we've run out of bits! */ 30 + 31 + BLK_TC_ZONE_APPEND = 1ull << 16, /* zone append */ 32 + BLK_TC_ZONE_RESET = 1ull << 17, /* zone reset */ 33 + BLK_TC_ZONE_RESET_ALL = 1ull << 18, /* zone reset all */ 34 + BLK_TC_ZONE_FINISH = 1ull << 19, /* zone finish */ 35 + BLK_TC_ZONE_OPEN = 1ull << 20, /* zone open */ 36 + BLK_TC_ZONE_CLOSE = 1ull << 21, /* zone close */ 37 + 38 + BLK_TC_WRITE_ZEROES = 1ull << 22, /* write-zeroes */ 39 + 40 + BLK_TC_END_V2 = 1ull << 22, 30 41 }; 31 42 32 43 #define BLK_TC_SHIFT (16) 33 - #define BLK_TC_ACT(act) ((act) << BLK_TC_SHIFT) 44 + #define BLK_TC_ACT(act) ((u64)(act) << BLK_TC_SHIFT) 34 45 35 46 /* 36 47 * Basic trace actions ··· 64 53 __BLK_TA_REMAP, /* bio was remapped */ 65 54 __BLK_TA_ABORT, /* request aborted */ 66 55 __BLK_TA_DRV_DATA, /* driver-specific binary data */ 56 + __BLK_TA_ZONE_PLUG, /* zone write plug was plugged */ 57 + __BLK_TA_ZONE_UNPLUG, /* zone write plug was unplugged */ 67 58 __BLK_TA_CGROUP = 1 << 8, /* from a cgroup*/ 68 59 }; 69 60 ··· 101 88 #define BLK_TA_ABORT (__BLK_TA_ABORT | BLK_TC_ACT(BLK_TC_QUEUE)) 102 89 #define BLK_TA_DRV_DATA (__BLK_TA_DRV_DATA | BLK_TC_ACT(BLK_TC_DRV_DATA)) 103 90 91 + #define BLK_TA_ZONE_APPEND (__BLK_TA_COMPLETE |\ 92 + BLK_TC_ACT(BLK_TC_ZONE_APPEND)) 93 + #define BLK_TA_ZONE_PLUG (__BLK_TA_ZONE_PLUG | BLK_TC_ACT(BLK_TC_QUEUE)) 94 + #define BLK_TA_ZONE_UNPLUG (__BLK_TA_ZONE_UNPLUG |\ 95 + BLK_TC_ACT(BLK_TC_QUEUE)) 96 + 104 97 #define BLK_TN_PROCESS (__BLK_TN_PROCESS | BLK_TC_ACT(BLK_TC_NOTIFY)) 105 98 #define BLK_TN_TIMESTAMP (__BLK_TN_TIMESTAMP | BLK_TC_ACT(BLK_TC_NOTIFY)) 106 99 #define BLK_TN_MESSAGE (__BLK_TN_MESSAGE | BLK_TC_ACT(BLK_TC_NOTIFY)) 107 100 108 101 #define BLK_IO_TRACE_MAGIC 0x65617400 109 102 #define BLK_IO_TRACE_VERSION 0x07 103 + #define BLK_IO_TRACE2_VERSION 0x08 110 104 111 105 /* 112 106 * The trace itself ··· 133 113 /* cgroup id will be stored here if exists */ 134 114 }; 135 115 116 + struct blk_io_trace2 { 117 + __u32 magic; /* MAGIC << 8 | BLK_IO_TRACE2_VERSION */ 118 + __u32 sequence; /* event number */ 119 + __u64 time; /* in nanoseconds */ 120 + __u64 sector; /* disk offset */ 121 + __u32 bytes; /* transfer length */ 122 + __u32 pid; /* who did it */ 123 + __u64 action; /* what happened */ 124 + __u32 device; /* device number */ 125 + __u32 cpu; /* on what cpu did it happen */ 126 + __u16 error; /* completion error */ 127 + __u16 pdu_len; /* length of data after this trace */ 128 + __u8 pad[12]; 129 + /* cgroup id will be stored here if it exists */ 130 + }; 136 131 /* 137 132 * The remap event 138 133 */ ··· 164 129 }; 165 130 166 131 #define BLKTRACE_BDEV_SIZE 32 132 + #define BLKTRACE_BDEV_SIZE2 64 167 133 168 134 /* 169 135 * User setup structure passed with BLKTRACESETUP ··· 177 141 __u64 start_lba; 178 142 __u64 end_lba; 179 143 __u32 pid; 144 + }; 145 + 146 + /* 147 + * User setup structure passed with BLKTRACESETUP2 148 + */ 149 + struct blk_user_trace_setup2 { 150 + char name[BLKTRACE_BDEV_SIZE2]; /* output */ 151 + __u64 act_mask; /* input */ 152 + __u32 buf_size; /* input */ 153 + __u32 buf_nr; /* input */ 154 + __u64 start_lba; 155 + __u64 end_lba; 156 + __u32 pid; 157 + __u32 flags; /* currently unused */ 158 + __u64 reserved[11]; 180 159 }; 181 160 182 161 #endif /* _UAPIBLKTRACE_H */
+41 -5
include/uapi/linux/blkzoned.h
··· 48 48 * FINISH ZONE command. 49 49 * @BLK_ZONE_COND_READONLY: The zone is read-only. 50 50 * @BLK_ZONE_COND_OFFLINE: The zone is offline (sectors cannot be read/written). 51 + * @BLK_ZONE_COND_ACTIVE: The zone is either implicitly open, explicitly open, 52 + * or closed. 51 53 * 52 54 * The Zone Condition state machine in the ZBC/ZAC standards maps the above 53 55 * deinitions as: ··· 63 61 * 64 62 * Conditions 0x5 to 0xC are reserved by the current ZBC/ZAC spec and should 65 63 * be considered invalid. 64 + * 65 + * The condition BLK_ZONE_COND_ACTIVE is used only with cached zone reports. 66 + * It is used to report any of the BLK_ZONE_COND_IMP_OPEN, 67 + * BLK_ZONE_COND_EXP_OPEN and BLK_ZONE_COND_CLOSED conditions. Conversely, a 68 + * regular zone report will never report a zone condition using 69 + * BLK_ZONE_COND_ACTIVE and instead use the conditions BLK_ZONE_COND_IMP_OPEN, 70 + * BLK_ZONE_COND_EXP_OPEN or BLK_ZONE_COND_CLOSED as reported by the device. 66 71 */ 67 72 enum blk_zone_cond { 68 73 BLK_ZONE_COND_NOT_WP = 0x0, ··· 80 71 BLK_ZONE_COND_READONLY = 0xD, 81 72 BLK_ZONE_COND_FULL = 0xE, 82 73 BLK_ZONE_COND_OFFLINE = 0xF, 74 + 75 + BLK_ZONE_COND_ACTIVE = 0xFF, 83 76 }; 84 77 85 78 /** 86 79 * enum blk_zone_report_flags - Feature flags of reported zone descriptors. 87 80 * 88 - * @BLK_ZONE_REP_CAPACITY: Zone descriptor has capacity field. 81 + * @BLK_ZONE_REP_CAPACITY: Output only. Indicates that zone descriptors in a 82 + * zone report have a valid capacity field. 83 + * @BLK_ZONE_REP_CACHED: Input only. Indicates that the zone report should be 84 + * generated using cached zone information. In this case, 85 + * the implicit open, explicit open and closed zone 86 + * conditions are all reported with the 87 + * BLK_ZONE_COND_ACTIVE condition. 89 88 */ 90 89 enum blk_zone_report_flags { 91 - BLK_ZONE_REP_CAPACITY = (1 << 0), 90 + /* Output flags */ 91 + BLK_ZONE_REP_CAPACITY = (1U << 0), 92 + 93 + /* Input flags */ 94 + BLK_ZONE_REP_CACHED = (1U << 31), 92 95 }; 93 96 94 97 /** ··· 143 122 * @sector: starting sector of report 144 123 * @nr_zones: IN maximum / OUT actual 145 124 * @flags: one or more flags as defined by enum blk_zone_report_flags. 125 + * @flags: one or more flags as defined by enum blk_zone_report_flags. 126 + * With BLKREPORTZONE, this field is ignored as an input and is valid 127 + * only as an output. Using BLKREPORTZONEV2, this field is used as both 128 + * input and output. 146 129 * @zones: Space to hold @nr_zones @zones entries on reply. 147 130 * 148 131 * The array of at most @nr_zones must follow this structure in memory. ··· 173 148 /** 174 149 * Zoned block device ioctl's: 175 150 * 176 - * @BLKREPORTZONE: Get zone information. Takes a zone report as argument. 177 - * The zone report will start from the zone containing the 178 - * sector specified in the report request structure. 151 + * @BLKREPORTZONE: Get zone information from a zoned device. Takes a zone report 152 + * as argument. The zone report will start from the zone 153 + * containing the sector specified in struct blk_zone_report. 154 + * The flags field of struct blk_zone_report is used as an 155 + * output only and ignored as an input. 156 + * DEPRECATED, use BLKREPORTZONEV2 instead. 157 + * @BLKREPORTZONEV2: Same as @BLKREPORTZONE but uses the flags field of 158 + * struct blk_zone_report as an input, allowing to get a zone 159 + * report using cached zone information if the flag 160 + * BLK_ZONE_REP_CACHED is set. 
In such case, the zone report 161 + * may include zones with the condition @BLK_ZONE_COND_ACTIVE 162 + * (c.f. the description of this condition above for more 163 + * details). 179 164 * @BLKRESETZONE: Reset the write pointer of the zones in the specified 180 165 * sector range. The sector range must be zone aligned. 181 166 * @BLKGETZONESZ: Get the device zone size in number of 512 B sectors. ··· 204 169 #define BLKOPENZONE _IOW(0x12, 134, struct blk_zone_range) 205 170 #define BLKCLOSEZONE _IOW(0x12, 135, struct blk_zone_range) 206 171 #define BLKFINISHZONE _IOW(0x12, 136, struct blk_zone_range) 172 + #define BLKREPORTZONEV2 _IOWR(0x12, 142, struct blk_zone_report) 207 173 208 174 #endif /* _UAPI_BLKZONED_H */
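A hypothetical userspace example of the new ioctl, assuming UAPI headers that already contain this series (BLKREPORTZONEV2, BLK_ZONE_REP_CACHED and BLK_ZONE_COND_ACTIVE); the zone count is arbitrary:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/blkzoned.h>

#define NR_ZONES 64

int main(int argc, char **argv)
{
        struct blk_zone_report *rep;
        int fd, i;

        if (argc != 2)
                return 1;
        fd = open(argv[1], O_RDONLY);
        if (fd < 0)
                return 1;

        rep = calloc(1, sizeof(*rep) + NR_ZONES * sizeof(struct blk_zone));
        if (!rep)
                return 1;
        rep->sector = 0;
        rep->nr_zones = NR_ZONES;
        rep->flags = BLK_ZONE_REP_CACHED;       /* serve from the zone cache */

        if (ioctl(fd, BLKREPORTZONEV2, rep) < 0) {
                perror("BLKREPORTZONEV2");
                return 1;
        }

        for (i = 0; i < (int)rep->nr_zones; i++) {
                struct blk_zone *z = &rep->zones[i];

                printf("zone %3d: start %llu cond 0x%x%s\n", i,
                       (unsigned long long)z->start, z->cond,
                       z->cond == BLK_ZONE_COND_ACTIVE ? " (active)" : "");
        }
        free(rep);
        close(fd);
        return 0;
}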
+2 -1
include/uapi/linux/fs.h
··· 298 298 #define BLKROTATIONAL _IO(0x12,126) 299 299 #define BLKZEROOUT _IO(0x12,127) 300 300 #define BLKGETDISKSEQ _IOR(0x12,128,__u64) 301 - /* 130-136 are used by zoned block device ioctls (uapi/linux/blkzoned.h) */ 301 + /* 130-136 and 142 are used by zoned block device ioctls (uapi/linux/blkzoned.h) */ 302 302 /* 137-141 are used by blk-crypto ioctls (uapi/linux/blk-crypto.h) */ 303 + #define BLKTRACESETUP2 _IOWR(0x12, 143, struct blk_user_trace_setup2) 303 304 304 305 #define BMAP_IOCTL 1 /* obsolete - kept for compatibility */ 305 306 #define FIBMAP _IO(0x00,1) /* bmap access */
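Userspace drives the new interface much like the existing BLKTRACESETUP, but passes the wider struct blk_user_trace_setup2 so the 64-bit act_mask can select the zone trace classes. A hedged sketch, again assuming UAPI headers containing this series; the chosen mask and buffer sizes are arbitrary:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>
#include <linux/blktrace_api.h>

int main(int argc, char **argv)
{
        struct blk_user_trace_setup2 buts;
        int fd;

        if (argc != 2)
                return 1;
        fd = open(argv[1], O_RDONLY | O_NONBLOCK);
        if (fd < 0)
                return 1;

        memset(&buts, 0, sizeof(buts));         /* flags must be zero */
        buts.buf_size = 512 * 1024;
        buts.buf_nr = 4;
        /* trace writes plus some of the new zone trace classes */
        buts.act_mask = BLK_TC_WRITE | BLK_TC_ZONE_APPEND |
                        BLK_TC_ZONE_RESET | BLK_TC_ZONE_FINISH;

        if (ioctl(fd, BLKTRACESETUP2, &buts) < 0) {
                perror("BLKTRACESETUP2");
                return 1;
        }
        printf("tracing to /sys/kernel/debug/block/%s/trace*\n", buts.name);

        if (ioctl(fd, BLKTRACESTART) < 0)
                perror("BLKTRACESTART");
        /* ... consume the relay files, then tear down: */
        ioctl(fd, BLKTRACESTOP);
        ioctl(fd, BLKTRACETEARDOWN);
        close(fd);
        return 0;
}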
+2 -1
include/uapi/linux/raid/md_p.h
··· 291 291 __le64 resync_offset; /* data before this offset (from data_offset) known to be in sync */ 292 292 __le32 sb_csum; /* checksum up to devs[max_dev] */ 293 293 __le32 max_dev; /* size of devs[] array to consider */ 294 - __u8 pad3[64-32]; /* set to 0 when writing */ 294 + __le32 logical_block_size; /* same as q->limits->logical_block_size */ 295 + __u8 pad3[64-36]; /* set to 0 when writing */ 295 296 296 297 /* device state information. Indexed by dev_number. 297 298 * 2 bytes per device
+414 -119
kernel/trace/blktrace.c
··· 63 63 static void blk_register_tracepoints(void); 64 64 static void blk_unregister_tracepoints(void); 65 65 66 + static void record_blktrace_event(struct blk_io_trace *t, pid_t pid, int cpu, 67 + sector_t sector, int bytes, u64 what, 68 + dev_t dev, int error, u64 cgid, 69 + ssize_t cgid_len, void *pdu_data, int pdu_len) 70 + 71 + { 72 + /* 73 + * These two are not needed in ftrace as they are in the 74 + * generic trace_entry, filled by tracing_generic_entry_update, 75 + * but for the trace_event->bin() synthesizer benefit we do it 76 + * here too. 77 + */ 78 + t->cpu = cpu; 79 + t->pid = pid; 80 + 81 + t->sector = sector; 82 + t->bytes = bytes; 83 + t->action = lower_32_bits(what); 84 + t->device = dev; 85 + t->error = error; 86 + t->pdu_len = pdu_len + cgid_len; 87 + 88 + if (cgid_len) 89 + memcpy((void *)t + sizeof(*t), &cgid, cgid_len); 90 + if (pdu_len) 91 + memcpy((void *)t + sizeof(*t) + cgid_len, pdu_data, pdu_len); 92 + } 93 + 94 + static void record_blktrace_event2(struct blk_io_trace2 *t2, pid_t pid, int cpu, 95 + sector_t sector, int bytes, u64 what, 96 + dev_t dev, int error, u64 cgid, 97 + ssize_t cgid_len, void *pdu_data, 98 + int pdu_len) 99 + { 100 + t2->pid = pid; 101 + t2->cpu = cpu; 102 + 103 + t2->sector = sector; 104 + t2->bytes = bytes; 105 + t2->action = what; 106 + t2->device = dev; 107 + t2->error = error; 108 + t2->pdu_len = pdu_len + cgid_len; 109 + 110 + if (cgid_len) 111 + memcpy((void *)t2 + sizeof(*t2), &cgid, cgid_len); 112 + if (pdu_len) 113 + memcpy((void *)t2 + sizeof(*t2) + cgid_len, pdu_data, pdu_len); 114 + } 115 + 116 + static void relay_blktrace_event1(struct blk_trace *bt, unsigned long sequence, 117 + pid_t pid, int cpu, sector_t sector, int bytes, 118 + u64 what, int error, u64 cgid, 119 + ssize_t cgid_len, void *pdu_data, int pdu_len) 120 + { 121 + struct blk_io_trace *t; 122 + size_t trace_len = sizeof(*t) + pdu_len + cgid_len; 123 + 124 + t = relay_reserve(bt->rchan, trace_len); 125 + if (!t) 126 + return; 127 + 128 + t->magic = BLK_IO_TRACE_MAGIC | BLK_IO_TRACE_VERSION; 129 + t->sequence = sequence; 130 + t->time = ktime_to_ns(ktime_get()); 131 + 132 + record_blktrace_event(t, pid, cpu, sector, bytes, what, bt->dev, error, 133 + cgid, cgid_len, pdu_data, pdu_len); 134 + } 135 + 136 + static void relay_blktrace_event2(struct blk_trace *bt, unsigned long sequence, 137 + pid_t pid, int cpu, sector_t sector, 138 + int bytes, u64 what, int error, u64 cgid, 139 + ssize_t cgid_len, void *pdu_data, int pdu_len) 140 + { 141 + struct blk_io_trace2 *t; 142 + size_t trace_len = sizeof(struct blk_io_trace2) + pdu_len + cgid_len; 143 + 144 + t = relay_reserve(bt->rchan, trace_len); 145 + if (!t) 146 + return; 147 + 148 + t->magic = BLK_IO_TRACE_MAGIC | BLK_IO_TRACE2_VERSION; 149 + t->sequence = sequence; 150 + t->time = ktime_to_ns(ktime_get()); 151 + 152 + record_blktrace_event2(t, pid, cpu, sector, bytes, what, bt->dev, error, 153 + cgid, cgid_len, pdu_data, pdu_len); 154 + } 155 + 156 + static void relay_blktrace_event(struct blk_trace *bt, unsigned long sequence, 157 + pid_t pid, int cpu, sector_t sector, int bytes, 158 + u64 what, int error, u64 cgid, 159 + ssize_t cgid_len, void *pdu_data, int pdu_len) 160 + { 161 + if (bt->version == 2) 162 + return relay_blktrace_event2(bt, sequence, pid, cpu, sector, 163 + bytes, what, error, cgid, cgid_len, 164 + pdu_data, pdu_len); 165 + return relay_blktrace_event1(bt, sequence, pid, cpu, sector, bytes, 166 + what, error, cgid, cgid_len, pdu_data, 167 + pdu_len); 168 + } 169 + 66 170 /* 67 171 * Send out a 
notify message. 68 172 */ 69 - static void trace_note(struct blk_trace *bt, pid_t pid, int action, 173 + static void trace_note(struct blk_trace *bt, pid_t pid, u64 action, 70 174 const void *data, size_t len, u64 cgid) 71 175 { 72 - struct blk_io_trace *t; 73 176 struct ring_buffer_event *event = NULL; 74 177 struct trace_buffer *buffer = NULL; 75 178 unsigned int trace_ctx = 0; ··· 180 77 bool blk_tracer = blk_tracer_enabled; 181 78 ssize_t cgid_len = cgid ? sizeof(cgid) : 0; 182 79 80 + action = lower_32_bits(action | (cgid ? __BLK_TN_CGROUP : 0)); 183 81 if (blk_tracer) { 82 + struct blk_io_trace2 *t; 83 + size_t trace_len = sizeof(*t) + cgid_len + len; 84 + 184 85 buffer = blk_tr->array_buffer.buffer; 185 86 trace_ctx = tracing_gen_ctx_flags(0); 186 87 event = trace_buffer_lock_reserve(buffer, TRACE_BLK, 187 - sizeof(*t) + len + cgid_len, 188 - trace_ctx); 88 + trace_len, trace_ctx); 189 89 if (!event) 190 90 return; 191 91 t = ring_buffer_event_data(event); 192 - goto record_it; 92 + record_blktrace_event2(t, pid, cpu, 0, 0, 93 + action, bt->dev, 0, cgid, cgid_len, 94 + (void *)data, len); 95 + trace_buffer_unlock_commit(blk_tr, buffer, event, trace_ctx); 96 + return; 193 97 } 194 98 195 99 if (!bt->rchan) 196 100 return; 197 101 198 - t = relay_reserve(bt->rchan, sizeof(*t) + len + cgid_len); 199 - if (t) { 200 - t->magic = BLK_IO_TRACE_MAGIC | BLK_IO_TRACE_VERSION; 201 - t->time = ktime_to_ns(ktime_get()); 202 - record_it: 203 - t->device = bt->dev; 204 - t->action = action | (cgid ? __BLK_TN_CGROUP : 0); 205 - t->pid = pid; 206 - t->cpu = cpu; 207 - t->pdu_len = len + cgid_len; 208 - if (cgid_len) 209 - memcpy((void *)t + sizeof(*t), &cgid, cgid_len); 210 - memcpy((void *) t + sizeof(*t) + cgid_len, data, len); 211 - 212 - if (blk_tracer) 213 - trace_buffer_unlock_commit(blk_tr, buffer, event, trace_ctx); 214 - } 102 + relay_blktrace_event(bt, 0, pid, cpu, 0, 0, action, 0, cgid, 103 + cgid_len, (void *)data, len); 215 104 } 216 105 217 106 /* ··· 277 182 } 278 183 EXPORT_SYMBOL_GPL(__blk_trace_note_message); 279 184 280 - static int act_log_check(struct blk_trace *bt, u32 what, sector_t sector, 185 + static int act_log_check(struct blk_trace *bt, u64 what, sector_t sector, 281 186 pid_t pid) 282 187 { 283 188 if (((bt->act_mask << BLK_TC_SHIFT) & what) == 0) ··· 308 213 * blk_io_trace structure and places it in a per-cpu subbuffer. 309 214 */ 310 215 static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes, 311 - const blk_opf_t opf, u32 what, int error, 216 + const blk_opf_t opf, u64 what, int error, 312 217 int pdu_len, void *pdu_data, u64 cgid) 313 218 { 314 219 struct task_struct *tsk = current; 315 220 struct ring_buffer_event *event = NULL; 316 221 struct trace_buffer *buffer = NULL; 317 - struct blk_io_trace *t; 318 222 unsigned long flags = 0; 319 223 unsigned long *sequence; 320 224 unsigned int trace_ctx = 0; ··· 322 228 bool blk_tracer = blk_tracer_enabled; 323 229 ssize_t cgid_len = cgid ? 
sizeof(cgid) : 0; 324 230 const enum req_op op = opf & REQ_OP_MASK; 231 + size_t trace_len; 325 232 326 233 if (unlikely(bt->trace_state != Blktrace_running && !blk_tracer)) 327 234 return; ··· 333 238 what |= MASK_TC_BIT(opf, META); 334 239 what |= MASK_TC_BIT(opf, PREFLUSH); 335 240 what |= MASK_TC_BIT(opf, FUA); 336 - if (op == REQ_OP_DISCARD || op == REQ_OP_SECURE_ERASE) 241 + 242 + switch (op) { 243 + case REQ_OP_DISCARD: 244 + case REQ_OP_SECURE_ERASE: 337 245 what |= BLK_TC_ACT(BLK_TC_DISCARD); 338 - if (op == REQ_OP_FLUSH) 246 + break; 247 + case REQ_OP_FLUSH: 339 248 what |= BLK_TC_ACT(BLK_TC_FLUSH); 249 + break; 250 + case REQ_OP_ZONE_APPEND: 251 + what |= BLK_TC_ACT(BLK_TC_ZONE_APPEND); 252 + break; 253 + case REQ_OP_ZONE_RESET: 254 + what |= BLK_TC_ACT(BLK_TC_ZONE_RESET); 255 + break; 256 + case REQ_OP_ZONE_RESET_ALL: 257 + what |= BLK_TC_ACT(BLK_TC_ZONE_RESET_ALL); 258 + break; 259 + case REQ_OP_ZONE_FINISH: 260 + what |= BLK_TC_ACT(BLK_TC_ZONE_FINISH); 261 + break; 262 + case REQ_OP_ZONE_OPEN: 263 + what |= BLK_TC_ACT(BLK_TC_ZONE_OPEN); 264 + break; 265 + case REQ_OP_ZONE_CLOSE: 266 + what |= BLK_TC_ACT(BLK_TC_ZONE_CLOSE); 267 + break; 268 + case REQ_OP_WRITE_ZEROES: 269 + what |= BLK_TC_ACT(BLK_TC_WRITE_ZEROES); 270 + break; 271 + default: 272 + break; 273 + } 274 + 275 + /* Drop trace events for zone operations with blktrace v1 */ 276 + if (bt->version == 1 && (what >> BLK_TC_SHIFT) > BLK_TC_END_V1) { 277 + pr_debug_ratelimited("blktrace v1 cannot trace zone operation 0x%llx\n", 278 + (unsigned long long)what); 279 + return; 280 + } 281 + 340 282 if (cgid) 341 283 what |= __BLK_TA_CGROUP; 342 284 ··· 387 255 388 256 buffer = blk_tr->array_buffer.buffer; 389 257 trace_ctx = tracing_gen_ctx_flags(0); 258 + switch (bt->version) { 259 + case 1: 260 + trace_len = sizeof(struct blk_io_trace); 261 + break; 262 + case 2: 263 + default: 264 + /* 265 + * ftrace always uses v2 (blk_io_trace2) format. 266 + * 267 + * For sysfs-enabled tracing path (enabled via 268 + * /sys/block/DEV/trace/enable), blk_trace_setup_queue() 269 + * never initializes bt->version, leaving it 0 from 270 + * kzalloc(). We must handle version==0 safely here. 271 + * 272 + * Fall through to default to ensure we never hit the 273 + * old bug where default set trace_len=0, causing 274 + * buffer underflow and memory corruption. 275 + * 276 + * Always use v2 format for ftrace and normalize 277 + * bt->version to 2 when uninitialized. 278 + */ 279 + trace_len = sizeof(struct blk_io_trace2); 280 + if (bt->version == 0) 281 + bt->version = 2; 282 + break; 283 + } 284 + trace_len += pdu_len + cgid_len; 390 285 event = trace_buffer_lock_reserve(buffer, TRACE_BLK, 391 - sizeof(*t) + pdu_len + cgid_len, 392 - trace_ctx); 286 + trace_len, trace_ctx); 393 287 if (!event) 394 288 return; 395 - t = ring_buffer_event_data(event); 396 - goto record_it; 289 + 290 + switch (bt->version) { 291 + case 1: 292 + record_blktrace_event(ring_buffer_event_data(event), 293 + pid, cpu, sector, bytes, 294 + what, bt->dev, error, cgid, cgid_len, 295 + pdu_data, pdu_len); 296 + break; 297 + case 2: 298 + default: 299 + /* 300 + * Use v2 recording function (record_blktrace_event2) 301 + * which writes blk_io_trace2 structure with correct 302 + * field layout: 303 + * - 32-bit pid at offset 28 304 + * - 64-bit action at offset 32 305 + * 306 + * Fall through to default handles version==0 case 307 + * (from sysfs path), ensuring we always use correct 308 + * v2 recording function to match the v2 buffer 309 + * allocated above. 
310 + */ 311 + record_blktrace_event2(ring_buffer_event_data(event), 312 + pid, cpu, sector, bytes, 313 + what, bt->dev, error, cgid, cgid_len, 314 + pdu_data, pdu_len); 315 + break; 316 + } 317 + 318 + trace_buffer_unlock_commit(blk_tr, buffer, event, trace_ctx); 319 + return; 397 320 } 398 321 399 322 if (unlikely(tsk->btrace_seq != blktrace_seq)) ··· 460 273 * from coming in and stepping on our toes. 461 274 */ 462 275 local_irq_save(flags); 463 - t = relay_reserve(bt->rchan, sizeof(*t) + pdu_len + cgid_len); 464 - if (t) { 465 - sequence = per_cpu_ptr(bt->sequence, cpu); 466 - 467 - t->magic = BLK_IO_TRACE_MAGIC | BLK_IO_TRACE_VERSION; 468 - t->sequence = ++(*sequence); 469 - t->time = ktime_to_ns(ktime_get()); 470 - record_it: 471 - /* 472 - * These two are not needed in ftrace as they are in the 473 - * generic trace_entry, filled by tracing_generic_entry_update, 474 - * but for the trace_event->bin() synthesizer benefit we do it 475 - * here too. 476 - */ 477 - t->cpu = cpu; 478 - t->pid = pid; 479 - 480 - t->sector = sector; 481 - t->bytes = bytes; 482 - t->action = what; 483 - t->device = bt->dev; 484 - t->error = error; 485 - t->pdu_len = pdu_len + cgid_len; 486 - 487 - if (cgid_len) 488 - memcpy((void *)t + sizeof(*t), &cgid, cgid_len); 489 - if (pdu_len) 490 - memcpy((void *)t + sizeof(*t) + cgid_len, pdu_data, pdu_len); 491 - 492 - if (blk_tracer) { 493 - trace_buffer_unlock_commit(blk_tr, buffer, event, trace_ctx); 494 - return; 495 - } 496 - } 497 - 276 + sequence = per_cpu_ptr(bt->sequence, cpu); 277 + (*sequence)++; 278 + relay_blktrace_event(bt, *sequence, pid, cpu, sector, bytes, 279 + what, error, cgid, cgid_len, pdu_data, pdu_len); 498 280 local_irq_restore(flags); 499 281 } 500 282 ··· 650 494 /* 651 495 * Setup everything required to start tracing 652 496 */ 653 - static int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev, 654 - struct block_device *bdev, 655 - struct blk_user_trace_setup *buts) 497 + static struct blk_trace *blk_trace_setup_prepare(struct request_queue *q, 498 + char *name, dev_t dev, 499 + u32 buf_size, u32 buf_nr, 500 + struct block_device *bdev) 656 501 { 657 502 struct blk_trace *bt = NULL; 658 503 struct dentry *dir = NULL; ··· 661 504 662 505 lockdep_assert_held(&q->debugfs_mutex); 663 506 664 - if (!buts->buf_size || !buts->buf_nr) 665 - return -EINVAL; 666 - 667 - strscpy_pad(buts->name, name, BLKTRACE_BDEV_SIZE); 668 - 669 - /* 670 - * some device names have larger paths - convert the slashes 671 - * to underscores for this to work as expected 672 - */ 673 - strreplace(buts->name, '/', '_'); 674 - 675 507 /* 676 508 * bdev can be NULL, as with scsi-generic, this is a helpful as 677 509 * we can be. 
678 510 */ 679 511 if (rcu_dereference_protected(q->blk_trace, 680 512 lockdep_is_held(&q->debugfs_mutex))) { 681 - pr_warn("Concurrent blktraces are not allowed on %s\n", 682 - buts->name); 683 - return -EBUSY; 513 + pr_warn("Concurrent blktraces are not allowed on %s\n", name); 514 + return ERR_PTR(-EBUSY); 684 515 } 685 516 686 517 bt = kzalloc(sizeof(*bt), GFP_KERNEL); 687 518 if (!bt) 688 - return -ENOMEM; 519 + return ERR_PTR(-ENOMEM); 689 520 690 521 ret = -ENOMEM; 691 522 bt->sequence = alloc_percpu(unsigned long); ··· 693 548 if (bdev && !bdev_is_partition(bdev)) 694 549 dir = q->debugfs_dir; 695 550 else 696 - bt->dir = dir = debugfs_create_dir(buts->name, blk_debugfs_root); 551 + bt->dir = dir = debugfs_create_dir(name, blk_debugfs_root); 697 552 698 553 /* 699 554 * As blktrace relies on debugfs for its interface the debugfs directory ··· 701 556 * files or directories. 702 557 */ 703 558 if (IS_ERR_OR_NULL(dir)) { 704 - pr_warn("debugfs_dir not present for %s so skipping\n", 705 - buts->name); 559 + pr_warn("debugfs_dir not present for %s so skipping\n", name); 706 560 ret = -ENOENT; 707 561 goto err; 708 562 } ··· 713 569 debugfs_create_file("dropped", 0444, dir, bt, &blk_dropped_fops); 714 570 debugfs_create_file("msg", 0222, dir, bt, &blk_msg_fops); 715 571 716 - bt->rchan = relay_open("trace", dir, buts->buf_size, 717 - buts->buf_nr, &blk_relay_callbacks, bt); 572 + bt->rchan = relay_open("trace", dir, buf_size, buf_nr, 573 + &blk_relay_callbacks, bt); 718 574 if (!bt->rchan) 719 575 goto err; 720 576 577 + blk_trace_setup_lba(bt, bdev); 578 + 579 + return bt; 580 + 581 + err: 582 + blk_trace_free(q, bt); 583 + 584 + return ERR_PTR(ret); 585 + } 586 + 587 + static void blk_trace_setup_finalize(struct request_queue *q, 588 + char *name, int version, 589 + struct blk_trace *bt, 590 + struct blk_user_trace_setup2 *buts) 591 + 592 + { 593 + strscpy_pad(buts->name, name, BLKTRACE_BDEV_SIZE2); 594 + 595 + /* 596 + * some device names have larger paths - convert the slashes 597 + * to underscores for this to work as expected 598 + */ 599 + strreplace(buts->name, '/', '_'); 600 + 601 + bt->version = version; 721 602 bt->act_mask = buts->act_mask; 722 603 if (!bt->act_mask) 723 604 bt->act_mask = (u16) -1; 724 - 725 - blk_trace_setup_lba(bt, bdev); 726 605 727 606 /* overwrite with user settings */ 728 607 if (buts->start_lba) ··· 758 591 759 592 rcu_assign_pointer(q->blk_trace, bt); 760 593 get_probe_ref(); 761 - 762 - ret = 0; 763 - err: 764 - if (ret) 765 - blk_trace_free(q, bt); 766 - return ret; 767 594 } 768 595 769 596 int blk_trace_setup(struct request_queue *q, char *name, dev_t dev, 770 597 struct block_device *bdev, 771 598 char __user *arg) 772 599 { 600 + struct blk_user_trace_setup2 buts2; 773 601 struct blk_user_trace_setup buts; 602 + struct blk_trace *bt; 774 603 int ret; 775 604 776 605 ret = copy_from_user(&buts, arg, sizeof(buts)); 777 606 if (ret) 778 607 return -EFAULT; 779 608 609 + if (!buts.buf_size || !buts.buf_nr) 610 + return -EINVAL; 611 + 612 + buts2 = (struct blk_user_trace_setup2) { 613 + .act_mask = buts.act_mask, 614 + .buf_size = buts.buf_size, 615 + .buf_nr = buts.buf_nr, 616 + .start_lba = buts.start_lba, 617 + .end_lba = buts.end_lba, 618 + .pid = buts.pid, 619 + }; 620 + 780 621 mutex_lock(&q->debugfs_mutex); 781 - ret = do_blk_trace_setup(q, name, dev, bdev, &buts); 622 + bt = blk_trace_setup_prepare(q, name, dev, buts.buf_size, buts.buf_nr, 623 + bdev); 624 + if (IS_ERR(bt)) { 625 + mutex_unlock(&q->debugfs_mutex); 626 + return PTR_ERR(bt); 
627 + } 628 + blk_trace_setup_finalize(q, name, 1, bt, &buts2); 629 + strcpy(buts.name, buts2.name); 782 630 mutex_unlock(&q->debugfs_mutex); 783 - if (ret) 784 - return ret; 785 631 786 632 if (copy_to_user(arg, &buts, sizeof(buts))) { 787 633 blk_trace_remove(q); ··· 804 624 } 805 625 EXPORT_SYMBOL_GPL(blk_trace_setup); 806 626 627 + static int blk_trace_setup2(struct request_queue *q, char *name, dev_t dev, 628 + struct block_device *bdev, char __user *arg) 629 + { 630 + struct blk_user_trace_setup2 buts2; 631 + struct blk_trace *bt; 632 + 633 + if (copy_from_user(&buts2, arg, sizeof(buts2))) 634 + return -EFAULT; 635 + 636 + if (!buts2.buf_size || !buts2.buf_nr) 637 + return -EINVAL; 638 + 639 + if (buts2.flags != 0) 640 + return -EINVAL; 641 + 642 + mutex_lock(&q->debugfs_mutex); 643 + bt = blk_trace_setup_prepare(q, name, dev, buts2.buf_size, buts2.buf_nr, 644 + bdev); 645 + if (IS_ERR(bt)) { 646 + mutex_unlock(&q->debugfs_mutex); 647 + return PTR_ERR(bt); 648 + } 649 + blk_trace_setup_finalize(q, name, 2, bt, &buts2); 650 + mutex_unlock(&q->debugfs_mutex); 651 + 652 + if (copy_to_user(arg, &buts2, sizeof(buts2))) { 653 + blk_trace_remove(q); 654 + return -EFAULT; 655 + } 656 + return 0; 657 + } 658 + 807 659 #if defined(CONFIG_COMPAT) && defined(CONFIG_X86_64) 808 660 static int compat_blk_trace_setup(struct request_queue *q, char *name, 809 661 dev_t dev, struct block_device *bdev, 810 662 char __user *arg) 811 663 { 812 - struct blk_user_trace_setup buts; 664 + struct blk_user_trace_setup2 buts2; 813 665 struct compat_blk_user_trace_setup cbuts; 814 - int ret; 666 + struct blk_trace *bt; 815 667 816 668 if (copy_from_user(&cbuts, arg, sizeof(cbuts))) 817 669 return -EFAULT; 818 670 819 - buts = (struct blk_user_trace_setup) { 671 + if (!cbuts.buf_size || !cbuts.buf_nr) 672 + return -EINVAL; 673 + 674 + buts2 = (struct blk_user_trace_setup2) { 820 675 .act_mask = cbuts.act_mask, 821 676 .buf_size = cbuts.buf_size, 822 677 .buf_nr = cbuts.buf_nr, ··· 861 646 }; 862 647 863 648 mutex_lock(&q->debugfs_mutex); 864 - ret = do_blk_trace_setup(q, name, dev, bdev, &buts); 649 + bt = blk_trace_setup_prepare(q, name, dev, buts2.buf_size, buts2.buf_nr, 650 + bdev); 651 + if (IS_ERR(bt)) { 652 + mutex_unlock(&q->debugfs_mutex); 653 + return PTR_ERR(bt); 654 + } 655 + blk_trace_setup_finalize(q, name, 1, bt, &buts2); 865 656 mutex_unlock(&q->debugfs_mutex); 866 - if (ret) 867 - return ret; 868 657 869 - if (copy_to_user(arg, &buts.name, ARRAY_SIZE(buts.name))) { 658 + if (copy_to_user(arg, &buts2.name, ARRAY_SIZE(buts2.name))) { 870 659 blk_trace_remove(q); 871 660 return -EFAULT; 872 661 } ··· 926 707 char b[BDEVNAME_SIZE]; 927 708 928 709 switch (cmd) { 710 + case BLKTRACESETUP2: 711 + snprintf(b, sizeof(b), "%pg", bdev); 712 + ret = blk_trace_setup2(q, b, bdev->bd_dev, bdev, arg); 713 + break; 929 714 case BLKTRACESETUP: 930 715 snprintf(b, sizeof(b), "%pg", bdev); 931 716 ret = blk_trace_setup(q, b, bdev->bd_dev, bdev, arg); ··· 1017 794 * 1018 795 **/ 1019 796 static void blk_add_trace_rq(struct request *rq, blk_status_t error, 1020 - unsigned int nr_bytes, u32 what, u64 cgid) 797 + unsigned int nr_bytes, u64 what, u64 cgid) 1021 798 { 1022 799 struct blk_trace *bt; 1023 800 ··· 1069 846 blk_trace_request_get_cgid(rq)); 1070 847 } 1071 848 849 + static void blk_add_trace_zone_update_request(void *ignore, struct request *rq) 850 + { 851 + struct blk_trace *bt; 852 + 853 + rcu_read_lock(); 854 + bt = rcu_dereference(rq->q->blk_trace); 855 + if (likely(!bt) || bt->version < 2) { 856 + 
+                rcu_read_unlock();
+                return;
+        }
+        rcu_read_unlock();
+
+        blk_add_trace_rq(rq, 0, blk_rq_bytes(rq), BLK_TA_ZONE_APPEND,
+                         blk_trace_request_get_cgid(rq));
+}
+
 /**
  * blk_add_trace_bio - Add a trace for a bio oriented action
  * @q: queue the io is for
···
  *
  **/
 static void blk_add_trace_bio(struct request_queue *q, struct bio *bio,
-                              u32 what, int error)
+                              u64 what, int error)
 {
         struct blk_trace *bt;
 
···
         bt = rcu_dereference(q->blk_trace);
         if (bt) {
                 __be64 rpdu = cpu_to_be64(depth);
-                u32 what;
+                u64 what;
 
                 if (explicit)
                         what = BLK_TA_UNPLUG_IO;
···
                 __blk_add_trace(bt, 0, 0, 0, what, 0, sizeof(rpdu), &rpdu, 0);
         }
         rcu_read_unlock();
+}
+
+static void blk_add_trace_zone_plug(void *ignore, struct request_queue *q,
+                                    unsigned int zno, sector_t sector,
+                                    unsigned int sectors)
+{
+        struct blk_trace *bt;
+
+        rcu_read_lock();
+        bt = rcu_dereference(q->blk_trace);
+        if (bt && bt->version >= 2)
+                __blk_add_trace(bt, sector, sectors << SECTOR_SHIFT, 0,
+                                BLK_TA_ZONE_PLUG, 0, 0, NULL, 0);
+        rcu_read_unlock();
+
+        return;
+}
+
+static void blk_add_trace_zone_unplug(void *ignore, struct request_queue *q,
+                                      unsigned int zno, sector_t sector,
+                                      unsigned int sectors)
+{
+        struct blk_trace *bt;
+
+        rcu_read_lock();
+        bt = rcu_dereference(q->blk_trace);
+        if (bt && bt->version >= 2)
+                __blk_add_trace(bt, sector, sectors << SECTOR_SHIFT, 0,
+                                BLK_TA_ZONE_UNPLUG, 0, 0, NULL, 0);
+        rcu_read_unlock();
+        return;
 }
 
 static void blk_add_trace_split(void *ignore, struct bio *bio, unsigned int pdu)
···
         WARN_ON(ret);
         ret = register_trace_block_getrq(blk_add_trace_getrq, NULL);
         WARN_ON(ret);
+        ret = register_trace_blk_zone_append_update_request_bio(
+                        blk_add_trace_zone_update_request, NULL);
+        WARN_ON(ret);
+        ret = register_trace_disk_zone_wplug_add_bio(blk_add_trace_zone_plug,
+                                                     NULL);
+        WARN_ON(ret);
+        ret = register_trace_blk_zone_wplug_bio(blk_add_trace_zone_unplug,
+                                                NULL);
+        WARN_ON(ret);
         ret = register_trace_block_plug(blk_add_trace_plug, NULL);
         WARN_ON(ret);
         ret = register_trace_block_unplug(blk_add_trace_unplug, NULL);
···
         unregister_trace_block_split(blk_add_trace_split, NULL);
         unregister_trace_block_unplug(blk_add_trace_unplug, NULL);
         unregister_trace_block_plug(blk_add_trace_plug, NULL);
+        unregister_trace_blk_zone_wplug_bio(blk_add_trace_zone_unplug, NULL);
+        unregister_trace_disk_zone_wplug_add_bio(blk_add_trace_zone_plug, NULL);
+        unregister_trace_blk_zone_append_update_request_bio(
+                        blk_add_trace_zone_update_request, NULL);
         unregister_trace_block_getrq(blk_add_trace_getrq, NULL);
         unregister_trace_block_bio_queue(blk_add_trace_bio_queue, NULL);
         unregister_trace_block_bio_frontmerge(blk_add_trace_bio_frontmerge, NULL);
···
  * struct blk_io_tracer formatting routines
  */
 
-static void fill_rwbs(char *rwbs, const struct blk_io_trace *t)
+static void fill_rwbs(char *rwbs, const struct blk_io_trace2 *t)
 {
         int i = 0;
         int tc = t->action >> BLK_TC_SHIFT;
···
 
         if (tc & BLK_TC_DISCARD)
                 rwbs[i++] = 'D';
-        else if (tc & BLK_TC_WRITE)
+        else if (tc & BLK_TC_WRITE_ZEROES) {
+                rwbs[i++] = 'W';
+                rwbs[i++] = 'Z';
+        } else if (tc & BLK_TC_WRITE)
                 rwbs[i++] = 'W';
         else if (t->bytes)
                 rwbs[i++] = 'R';
···
 }
 
 static inline
-const struct blk_io_trace *te_blk_io_trace(const struct trace_entry *ent)
+const struct blk_io_trace2 *te_blk_io_trace(const struct trace_entry *ent)
 {
-        return (const struct blk_io_trace *)ent;
+        return (const struct blk_io_trace2 *)ent;
 }
 
 static inline const void *pdu_start(const struct trace_entry *ent, bool has_cg)
···
         unsigned long long ts = iter->ts;
         unsigned long nsec_rem = do_div(ts, NSEC_PER_SEC);
         unsigned secs = (unsigned long)ts;
-        const struct blk_io_trace *t = te_blk_io_trace(iter->ent);
+        const struct blk_io_trace2 *t = te_blk_io_trace(iter->ent);
 
         fill_rwbs(rwbs, t);
 
···
                   bool has_cg)
 {
         char rwbs[RWBS_LEN];
-        const struct blk_io_trace *t = te_blk_io_trace(iter->ent);
+        const struct blk_io_trace2 *t = te_blk_io_trace(iter->ent);
 
         fill_rwbs(rwbs, t);
         if (has_cg) {
···
 {
         struct trace_array *tr = iter->tr;
         struct trace_seq *s = &iter->seq;
-        const struct blk_io_trace *t;
+        const struct blk_io_trace2 *t;
         u16 what;
         bool long_act;
         blk_log_action_t *log_action;
···
 static void blk_trace_synthesize_old_trace(struct trace_iterator *iter)
 {
         struct trace_seq *s = &iter->seq;
-        struct blk_io_trace *t = (struct blk_io_trace *)iter->ent;
-        const int offset = offsetof(struct blk_io_trace, sector);
+        struct blk_io_trace2 *t = (struct blk_io_trace2 *)iter->ent;
+        const int offset = offsetof(struct blk_io_trace2, sector);
         struct blk_io_trace old = {
                 .magic = BLK_IO_TRACE_MAGIC | BLK_IO_TRACE_VERSION,
                 .time = iter->ts,
···
                 unregister_trace_event(&trace_blk_event);
                 return 1;
         }
+
+        BUILD_BUG_ON(__alignof__(struct blk_user_trace_setup2) %
+                     __alignof__(long));
+        BUILD_BUG_ON(__alignof__(struct blk_io_trace2) % __alignof__(long));
 
         return 0;
 }
···
         { BLK_TC_DISCARD,       "discard" },
         { BLK_TC_DRV_DATA,      "drv_data" },
         { BLK_TC_FUA,           "fua" },
+        { BLK_TC_WRITE_ZEROES,  "write-zeroes" },
 };
 
 static int blk_trace_str2mask(const char *str)
···
         case REQ_OP_ZONE_CLOSE:
                 rwbs[i++] = 'Z';
                 rwbs[i++] = 'C';
+                break;
+        case REQ_OP_WRITE_ZEROES:
+                rwbs[i++] = 'W';
+                rwbs[i++] = 'Z';
                 break;
         default:
                 rwbs[i++] = 'N';
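The zone probes added above are all gated on the trace having been set up with the v2 format: they bail out unless bt->version >= 2, and the action word is widened to u64 with records handled as struct blk_io_trace2, so an old blktrace/blkparse consumer attached through the original setup path never receives actions it cannot decode. As a minimal sketch of that pattern, this is the shape another v2-only probe would take; the probe name and the BLK_TA_ZONE_FOO action are placeholders for illustration, not symbols added by this series:

/*
 * Illustrative only: shape of a v2-gated probe, mirroring
 * blk_add_trace_zone_plug()/blk_add_trace_zone_unplug() above.
 */
static void blk_add_trace_zone_foo(void *ignore, struct request_queue *q,
                                   sector_t sector, unsigned int sectors)
{
        struct blk_trace *bt;

        rcu_read_lock();
        bt = rcu_dereference(q->blk_trace);
        /* v2-only actions must never land in a v1 trace buffer */
        if (bt && bt->version >= 2)
                __blk_add_trace(bt, sector, sectors << SECTOR_SHIFT, 0,
                                BLK_TA_ZONE_FOO, 0, 0, NULL, 0);
        rcu_read_unlock();
}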
+4 -4
lib/kfifo.c
···
         return (fifo->mask + 1) - (fifo->in - fifo->out);
 }
 
-int __kfifo_alloc(struct __kfifo *fifo, unsigned int size,
-                size_t esize, gfp_t gfp_mask)
+int __kfifo_alloc_node(struct __kfifo *fifo, unsigned int size,
+                size_t esize, gfp_t gfp_mask, int node)
 {
         /*
          * round up to the next power of 2, since our 'let the indices
···
                 return -EINVAL;
         }
 
-        fifo->data = kmalloc_array(esize, size, gfp_mask);
+        fifo->data = kmalloc_array_node(esize, size, gfp_mask, node);
 
         if (!fifo->data) {
                 fifo->mask = 0;
···
 
         return 0;
 }
-EXPORT_SYMBOL(__kfifo_alloc);
+EXPORT_SYMBOL(__kfifo_alloc_node);
 
 void __kfifo_free(struct __kfifo *fifo)
 {
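Routing the backing-store allocation through kmalloc_array_node() lets a caller place a fifo on the NUMA node that will actually produce and consume its entries. A rough usage sketch, assuming the new kfifo_alloc_node() wrapper referenced in this branch keeps the kfifo_alloc() calling convention with a trailing node argument (the caller below is illustrative):

#include <linux/gfp.h>
#include <linux/kfifo.h>

/* Illustrative caller: allocate the fifo's buffer on the node that
 * will service it; assumes kfifo_alloc_node(fifo, size, gfp, node). */
static int example_alloc_fifo(struct kfifo *fifo, int node)
{
        /* 256 elements; __kfifo_alloc_node() rounds size up to a power of two */
        return kfifo_alloc_node(fifo, 256, GFP_KERNEL, node);
}

Passing NUMA_NO_NODE should keep the old behaviour, since kmalloc_array_node() then falls back to the allocator's default placement.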
+2 -3
rust/kernel/block/mq.rs
···
 //! The kernel will interface with the block device driver by calling the method
 //! implementations of the `Operations` trait.
 //!
-//! IO requests are passed to the driver as [`kernel::types::ARef<Request>`]
+//! IO requests are passed to the driver as [`kernel::sync::aref::ARef<Request>`]
 //! instances. The `Request` type is a wrapper around the C `struct request`.
 //! The driver must mark end of processing by calling one of the
 //! `Request::end`, methods. Failure to do so can lead to deadlock or timeout
···
 //!     block::mq::*,
 //!     new_mutex,
 //!     prelude::*,
-//!     sync::{Arc, Mutex},
-//!     types::{ARef, ForeignOwnable},
+//!     sync::{aref::ARef, Arc, Mutex},
 //! };
 //!
 //! struct MyBlkDevice;
+2 -2
rust/kernel/block/mq/operations.rs
···
     block::mq::{request::RequestDataWrapper, Request},
     error::{from_result, Result},
     prelude::*,
-    sync::Refcount,
-    types::{ARef, ForeignOwnable},
+    sync::{aref::ARef, Refcount},
+    types::ForeignOwnable,
 };
 use core::marker::PhantomData;
 
+6 -2
rust/kernel/block/mq/request.rs
···
     bindings,
     block::mq::Operations,
     error::Result,
-    sync::{atomic::Relaxed, Refcount},
-    types::{ARef, AlwaysRefCounted, Opaque},
+    sync::{
+        aref::{ARef, AlwaysRefCounted},
+        atomic::Relaxed,
+        Refcount,
+    },
+    types::Opaque,
 };
 use core::{marker::PhantomData, ptr::NonNull};
 
+42 -28
tools/testing/selftests/ublk/kublk.c
···
         return reapped;
 }
 
-static void ublk_thread_set_sched_affinity(const struct ublk_thread *t,
-                cpu_set_t *cpuset)
-{
-        if (sched_setaffinity(0, sizeof(*cpuset), cpuset) < 0)
-                ublk_err("ublk dev %u thread %u set affinity failed",
-                        t->dev->dev_info.dev_id, t->idx);
-}
-
 struct ublk_thread_info {
         struct ublk_dev *dev;
+        pthread_t thread;
         unsigned idx;
         sem_t *ready;
         cpu_set_t *affinity;
         unsigned long long extra_flags;
 };
 
-static void *ublk_io_handler_fn(void *data)
+static void ublk_thread_set_sched_affinity(const struct ublk_thread_info *info)
 {
-        struct ublk_thread_info *info = data;
-        struct ublk_thread *t = &info->dev->threads[info->idx];
+        if (pthread_setaffinity_np(pthread_self(), sizeof(*info->affinity), info->affinity) < 0)
+                ublk_err("ublk dev %u thread %u set affinity failed",
+                                info->dev->dev_info.dev_id, info->idx);
+}
+
+static __attribute__((noinline)) int __ublk_io_handler_fn(struct ublk_thread_info *info)
+{
+        struct ublk_thread t = {
+                .dev = info->dev,
+                .idx = info->idx,
+        };
         int dev_id = info->dev->dev_info.dev_id;
         int ret;
 
-        t->dev = info->dev;
-        t->idx = info->idx;
-
-        ret = ublk_thread_init(t, info->extra_flags);
+        ret = ublk_thread_init(&t, info->extra_flags);
         if (ret) {
                 ublk_err("ublk dev %d thread %u init failed\n",
-                                dev_id, t->idx);
-                return NULL;
+                                dev_id, t.idx);
+                return ret;
         }
-        /* IO perf is sensitive with queue pthread affinity on NUMA machine*/
-        if (info->affinity)
-                ublk_thread_set_sched_affinity(t, info->affinity);
         sem_post(info->ready);
 
         ublk_dbg(UBLK_DBG_THREAD, "tid %d: ublk dev %d thread %u started\n",
-                        gettid(), dev_id, t->idx);
+                        gettid(), dev_id, t.idx);
 
         /* submit all io commands to ublk driver */
-        ublk_submit_fetch_commands(t);
+        ublk_submit_fetch_commands(&t);
         do {
-                if (ublk_process_io(t) < 0)
+                if (ublk_process_io(&t) < 0)
                         break;
         } while (1);
 
         ublk_dbg(UBLK_DBG_THREAD, "tid %d: ublk dev %d thread %d exiting\n",
-                        gettid(), dev_id, t->idx);
-        ublk_thread_deinit(t);
+                        gettid(), dev_id, t.idx);
+        ublk_thread_deinit(&t);
+        return 0;
+}
+
+static void *ublk_io_handler_fn(void *data)
+{
+        struct ublk_thread_info *info = data;
+
+        /*
+         * IO perf is sensitive with queue pthread affinity on NUMA machine
+         *
+         * Set sched_affinity at beginning, so following allocated memory/pages
+         * could be CPU/NUMA aware.
+         */
+        if (info->affinity)
+                ublk_thread_set_sched_affinity(info);
+
+        __ublk_io_handler_fn(info);
+
         return NULL;
 }
 
···
          */
         if (dev->nthreads == dinfo->nr_hw_queues)
                 tinfo[i].affinity = &affinity_buf[i];
-        pthread_create(&dev->threads[i].thread, NULL,
+        pthread_create(&tinfo[i].thread, NULL,
                         ublk_io_handler_fn,
                         &tinfo[i]);
 }
 
 for (i = 0; i < dev->nthreads; i++)
         sem_wait(&ready);
-free(tinfo);
 free(affinity_buf);
 
 /* everything is fine now, start us */
···
 
 /* wait until we are terminated */
 for (i = 0; i < dev->nthreads; i++)
-        pthread_join(dev->threads[i].thread, &thread_ret);
+        pthread_join(tinfo[i].thread, &thread_ret);
+free(tinfo);
 fail:
 for (i = 0; i < dinfo->nr_hw_queues; i++)
         ublk_queue_deinit(&dev->q[i]);
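Two related changes land here: struct ublk_thread moves onto the worker's own stack (the old sched_setaffinity() helper goes away), and the CPU affinity is now applied in ublk_io_handler_fn() before __ublk_io_handler_fn() initialises the thread, so the io_uring ring and any per-thread allocations are first touched on the right node. A stripped-down sketch of that ordering, with purely illustrative names (worker_info/worker_fn are not taken from the selftest):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdlib.h>

struct worker_info {
        cpu_set_t *affinity;    /* may be NULL */
        size_t bufsize;
};

/* Illustrative worker: pin first, allocate afterwards, so first-touch
 * page placement keeps per-thread memory NUMA-local. */
static void *worker_fn(void *arg)
{
        struct worker_info *info = arg;
        void *buf;

        if (info->affinity)
                pthread_setaffinity_np(pthread_self(),
                                       sizeof(*info->affinity),
                                       info->affinity);

        buf = malloc(info->bufsize);    /* first touched after pinning */
        if (!buf)
                return NULL;
        /* ... per-thread work on buf ... */
        free(buf);
        return NULL;
}

Keeping tinfo[] alive until after pthread_join() also fixes the lifetime of the pthread_t handles, which now live in struct ublk_thread_info rather than in the removed per-device thread array.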
+3 -6
tools/testing/selftests/ublk/kublk.h
···
 
 struct ublk_thread {
         struct ublk_dev *dev;
-        struct io_uring ring;
-        unsigned int cmd_inflight;
-        unsigned int io_inflight;
-
-        pthread_t thread;
         unsigned idx;
 
 #define UBLKS_T_STOPPING        (1U << 0)
 #define UBLKS_T_IDLE            (1U << 1)
         unsigned state;
+        unsigned int cmd_inflight;
+        unsigned int io_inflight;
+        struct io_uring ring;
 };
 
 struct ublk_dev {
         struct ublk_tgt tgt;
         struct ublksrv_ctrl_dev_info dev_info;
         struct ublk_queue q[UBLK_MAX_QUEUES];
-        struct ublk_thread threads[UBLK_MAX_THREADS];
         unsigned nthreads;
         unsigned per_io_tasks;
 
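In the header, the pthread_t handle and the per-device threads[] array are gone, and struct ublk_thread is reordered so the small, frequently updated fields (idx, state, the inflight counters) sit ahead of the large embedded io_uring ring. Not part of the patch, but the layout intent could be documented with a compile-time check along these lines (the 64-byte bound is simply one cache line, picked for illustration):

#include <assert.h>
#include <stddef.h>
#include "kublk.h"

/* Illustrative only: keep the hot ublk_thread fields ahead of the
 * embedded io_uring ring, roughly within the first cache line. */
static_assert(offsetof(struct ublk_thread, ring) <= 64,
              "hot ublk_thread fields should precede the io_uring ring");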